50 Years of Misunderstanding Bell's Theorem
Precisely 50 years ago, Bell's paper "On the Einstein Podolsky Rosen Paradox", containing his famous theorem, was received by the journal Physics. Today is John Bell Day.
Bell's theorem is one of the most influential results in physics, despite the fact that it is a negative result. Contrary to what many people believe, Bell was actually searching for a hidden variable theory, and what he found instead were severe limitations of such theories. The limitation expressed by the theorem celebrated today is that hidden variable theories have to be nonlocal: the outcomes of measurements are correlated in a way which seems to ignore the separation in space. Some misunderstand this result as rejecting determinism, or as rejecting any kind of hidden variables, or at least as proving that any theory which describes the quantum world using hidden variables has to rely on instantaneous communication.
Maybe others searching for a hidden variables description of quantum phenomena hit the same wall Bell hit, but rather than having the same revelation as Bell, they ignored it and continued to search for a replacement or completion of quantum mechanics. For example, Einstein had all the data needed to find Bell's theorem almost 30 years before Bell. The paper coauthored by Einstein, Podolsky and Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?", showed that entanglement allows nonlocal correlations. But Einstein disliked nonlocality because it seemed to violate special relativity. So he concluded that quantum mechanics was incomplete, by interpreting those correlations as revealing that Heisenberg's uncertainty principle can be circumvented. Einstein and his coauthors thus hit the same wall as Bell, except that they considered that the problem could be solved by completing quantum mechanics. Bell's theorem clarifies their findings by showing that, no matter how you put it, the world is nonlocal (if Bell's inequality is violated, as experiments have confirmed).
Almost 30 years later, Bell understood nonlocality as the major consequence of the EPR "paradox", and expressed it in the form of his theorem. Today, 50 years after Bell clarified the problem, there are many who consider that Einstein was a crackpot in what concerns quantum mechanics, and that Bell defeated him. Today it is easy for any student who took a class in quantum mechanics or philosophy of physics to consider that he has a better understanding of quantum mechanics than Einstein, and to feel superior to him (true story: just search the physics blogs and forums and you will find many examples). Most often they believe (as they are taught) that quantum mechanics is so radically different because it is not deterministic, that what Einstein was searching for was a deterministic theory, that EPR suggested this, and that Bell rejected it. This is so unfair to EPR, but also to Bell.
The truth is that despite the 10 hot years of discoveries in quantum mechanics, when nearly every aspect was understood and the foundations were laid down, nobody before Einstein, Podolsky and Rosen found that "paradox", which is true and relevant. It is unfair to consider the EPR paper an attack against quantum mechanics, as many have seen it since the beginning. Rather, it is a most important discovery, which could only be made because three rebels were not satisfied with Bohr's prescriptions. Moreover, in the almost 30 years after the EPR paper, nobody solved their "paradox". Not even Bohr, who rushed to respond too quickly with an article bearing the same name as the EPR one. And the solution was found by Bell, who was a supporter of hidden variables, and maybe he wouldn't have found it either without the reformulation of the EPR argument due to the main exponent of hidden variable theories at that time, David Bohm.
Now, the reader may think that I am defending hidden variables, by praising hidden variables theorists like Einstein, Bohm, and Bell. I actually don't defend hidden variables, and I don't say this just because of the witch hunt against "Bohmians". I just want to emphasize that without these "crackpots", we would not have today the understanding of entanglement and nonlocality which allows scientists to put the "magic" of quantum mechanics to work in quantum computing, quantum information, quantum cryptography, and other recent hot areas.
Actually, to be honest, among Einstein, Bohm, and Bell, only the first two are considered to be lacking an understanding of quantum mechanics, and Bell is considered the one who defeated them, so he is celebrated, while the other two are not. But this is only because Bell is perceived, because of his theorem, as being against hidden variables, while in fact he too was searching for a hidden variable theory.
Moreover, for some reason, many consider that Bell's theorem is only about hidden variable theories, while in fact it is about any quantum theory or interpretation which describes quantum correlations as they are observed in nature, and which therefore violates Bell's inequality. This includes standard quantum mechanics. So quantum mechanics is nonlocal too, and no Copenhagen Interpretation, no Many Worlds Interpretation, no Decoherence Interpretation can make it otherwise. Similarly, quantum mechanics is contextual too, despite the fact that the Bell-Kochen-Specker theorem is considered to apply to hidden variable theories only.
But why do some tend to consider only hidden variable theories guilty of the sins of nonlocality and contextuality? Maybe because they just want to reject such theories? Or could it be because they believe that it makes no sense to think about what happens between measurements (as Bohr teaches us)? Or because nearly everyone, when first learning quantum mechanics, has the instinct of looking for a local realist explanation, fails, of course, and then denies ever having had this sin by throwing stones at those who seem to have it? I think this instinct is fine, since this is what we should do: we should question everything, and the persistence with which we question a claim has to increase with the degree to which that claim contradicts what we learned before, as is the case with quantum mechanics.
For lack of time, for the rush of getting published, for the fear of being rejected for holding unorthodox views, we tend to eat much more than we can digest, and eventually we cease digesting. This is why misunderstandings are propagated even at the top of the scientific community. Misunderstandings concerning quantum mechanics and Bell's theorem prevent us from seeing both the truth and the amazing beauty of quantum mechanics, which is reduced to a mere tool for calculating probabilities, while any attempt at understanding it is regarded with disdain.
I find it very fortunate that Tim Maudlin wrote, for the 50th anniversary of Bell's theorem, a paper named "What Bell Did", in which he explains that Bell's result is that our world, and hence quantum mechanics, is indeed nonlocal. He makes a thorough and, in my opinion, probably the most down-to-earth analysis of the meaning of the EPR paper and of Bell's theorem, and of how they are misunderstood. He identifies a cluster of misunderstandings that are propagated among physicists and philosophers of physics. This is one of the cases where a philosopher really can help physicists understand physics. I'll leave you the pleasure of reading it.
Posted by Cristi Stoica at 1:18 AM
Labels: History of science, Physics, Quantum Theory
Happy Birthday, Nature!
Exactly 145 years ago, on 4 November 1869, the first issue of Nature appeared.
According to the current Romanian prime minister Victor Ponta, Nature is (unofficially) controlled by the current president of Romania, Traian Băsescu, with the main purpose of accusing Ponta of plagiarism. A possible explanation is that some collaborators of Băsescu traveled back in time to create the journal. And then, 145 years ago, they also founded a secret society of scientists, who kept making great scientific discoveries. This also explains why most scientific discoveries were made in the last 1.5 centuries. The reason for making these scientific breakthroughs is not to advance the world, but to publish them in Nature, or in the other journals citing Nature, to make it the world's most cited journal, so that, when the time comes, Nature's accusations against Ponta will have greater impact. And to invent rules according to which it is dishonest to copy text from other books and articles without attributing it explicitly when you write your PhD thesis. The second reason would be that the research made by this secret society of geniuses, secretly led by the mastermind Băsescu, will eventually lead to the discovery of time travel [1,2,3,4], which would allow him to send his people 145 years back in time...
Now, presidential elections are taking place in Romania. Two days ago was the first round, and Ponta is now the favorite to become the new president after the second round, in 12 days. Ponta now has the chance to forbid time travel, to set history back on its track: a world without the journal Nature and all the scientific research in it, a world without physicists who can discover time travel. Or, quite the opposite, he may actually make use of time travel to set our history back 25 years, to when people overthrew Ceaușescu and the communists were forced to disguise themselves as social-democrats to continue to keep the power.
Labels: Fun, politics
Dots plus dots equal spheres
I took this photo in a bus in Pisa. We can see a pattern of spheres.
Here is how to obtain it. We overlap these dots
over a shrunk version of themselves
and we get the following pattern:
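If you want to reproduce the effect yourself, here is a minimal sketch in Python (the grid size and the 0.95 scale factor are my own choices; any fine grid of dots overlapped with a slightly shrunk copy of itself produces this kind of moiré pattern):

```python
import numpy as np
import matplotlib.pyplot as plt

# A regular grid of dots...
n = 60
x, y = np.meshgrid(np.arange(n), np.arange(n))

fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(x, y, s=1, c="black")
# ...overlapped with a shrunk copy of itself (scale factor 0.95):
ax.scatter(0.95 * x, 0.95 * y, s=1, c="black")
ax.set_aspect("equal")
ax.axis("off")
plt.show()   # the interference of the two grids shows the sphere-like pattern
```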
Posted by Cristi Stoica at 11:05 PM
Labels: Fun, Geometry of Illusion
Living in a vector
Vectors are present in all domains of fundamental physics, so if you want to understand physics, you will need them. You may think you know them, but the truth is that they appear in so many guises that nobody really knows everything about them. But vectors are a gate that allows you to enter the Cathedral of physics, and once you are inside, they can guide you to all its places. That is, special and general relativity, quantum mechanics, particle physics, gauge theory... all these places need vectors, and once you master vectors, they become much simpler (if you don't know them and are interested, read this post).
The Cathedral has many gates, and vectors are just one of them. You can enter through groups, sets and relations, functions, categories, through all sorts of objects or structures from algebra, geometry, even logic. I decided to show you the way of vectors because I think it is fast and deep at the same time, but remember, this is a matter of choice. And vectors will lead us, inevitably, to the other gates too.
I will explain some elementary and not so elementary things about vectors, but you have to read and practice, because here I give only some guidelines, a big picture. The reason I am doing this is that when you study, you may get lost in details and miss the essential.
Very basic things
A vector can be understood in many ways. One way is to see it as a way to specify how to move from one point to another. A vector is like an arrow: to find the new position of any point, just place the arrow at that point, and its tip will show you the new position. You can compose several such arrows, and what you get is another vector, their sum. You can also subtract them: place their origins at the same point, and the difference is the vector obtained by joining their tips with another arrow.
Once you fix a reference position, an origin, you can specify any position by the vector that tells you how to move from the origin to that position. You can see that vector as the difference between the destination and the starting position.
You can add and subtract vectors. You can multiply them by numbers. Those numbers come from a field $\mathbb{K}$, for example $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$, and are called scalars. A vector space is a set of vectors such that no matter how you add them and rescale them, the result belongs to the same set. The vector space is real (complex) if the scalars are real (complex) numbers. A sum of rescaled vectors is called a linear combination. You can always pick a basis, or a frame: a set of vectors such that any vector can be written as a linear combination of the basis vectors in a unique way.
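Here is how this looks in practice, as a minimal sketch in Python with NumPy (the components are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0])    # a vector, given by its components in some basis
v = np.array([3.0, -1.0])

s = u + v                   # sum: compose the two arrows
d = u - v                   # difference: join the tips
w = 2.5 * u - 0.5 * v       # a linear combination: rescale and add

# The standard basis e1 = (1, 0), e2 = (0, 1): every vector is a unique
# linear combination of the basis vectors.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(w, w[0] * e1 + w[1] * e2)
```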
Vectors and functions
Consider a vector $v$ in an $n$-dimensional space $V$, and suppose its components in a given basis are $(v^1,\ldots,v^n)$. You can represent any vector $v$ as a function $f:\{1,\ldots,n\}\to\mathbb{K}$ given by $f(i)=v^i$. Conversely, any such function defines a unique vector. In general, if $S$ is a set, then the functions $f:S\to\mathbb{K}$ form a vector space, which we will denote by $\mathbb{K}^S$. The cardinality of $S$ gives the dimension of the vector space, so $\mathbb{K}^{\{1,\ldots,n\}}\cong\mathbb{K}^n$. So, if $S$ is an infinite set, we get an infinite dimensional vector space. For example, the scalar fields on a three dimensional space, that is, the functions $f:\mathbb{R}^3\to \mathbb{R}$, form an infinite dimensional vector space. Not only are vector spaces not limited to $2$ or $3$ dimensions, but infinite dimensional spaces are very natural too.
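To make the function picture concrete, here is a small sketch (the set $S$ and the components are arbitrary): a vector in $\mathbb{K}^S$ is stored as a map from $S$ to scalars, and addition and rescaling act pointwise.

```python
# A vector in K^S is a function S -> K; over a finite set S we can
# store it as a dictionary. Addition and scaling are pointwise.
S = {"a", "b", "c"}

def add(f, g):
    return {s: f[s] + g[s] for s in S}

def scale(alpha, f):
    return {s: alpha * f[s] for s in S}

f = {"a": 1.0, "b": 2.0, "c": 0.0}
g = {"a": -1.0, "b": 0.5, "c": 3.0}

h = add(scale(2.0, f), g)   # the linear combination 2f + g, again in K^S
print(h)                    # e.g. {'a': 1.0, 'b': 4.5, 'c': 3.0} (order may vary)
```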
Dual vectors
If $V$ is a $\mathbb{K}$-vector space, a linear function $f:V\to\mathbb{K}$ is a function satisfying $f(u+v)=f(u)+f(v)$ and $f(\alpha u)=\alpha f(u)$, for any $u,v\in V,\alpha\in\mathbb{K}$. The linear functions $f:V\to\mathbb{K}$ form a vector space $V^*$ named the dual space of $V$.
Consider now two sets, $S$ and $S'$, and a field $\mathbb{K}$. The Cartesian product $S\times S'$ is defined as the set of pairs $(s,s')$, where $s\in S$ and $s'\in S'$. The functions defined on the Cartesian product, $f:S\times S'\to\mathbb{K}$, form a vector space $\mathbb{K}^{S\times S'}$, named the tensor product of $\mathbb{K}^{S}$ and $\mathbb{K}^{S'}$, $\mathbb{K}^{S\times S'}=\mathbb{K}^{S}\otimes\mathbb{K}^{S'}$. If $(e_i)$ and $(e'_j)$ are bases of $\mathbb{K}^{S}$ and $\mathbb{K}^{S'}$, then $(e_ie'_j)$, where $e_ie'_j(s,s')=e_i(s)e'_j(s')$, is a basis of $\mathbb{K}^{S\times S'}$. Any vector $v\in\mathbb{K}^{S\times S'}$ can be uniquely written as $v=\sum_i\sum_j \alpha_{ij} e_ie'_j$.
Also, the set of functions $f:S\to\mathbb{K}^{S'}$ is a vector space, which can be identified with the tensor product $\mathbb{K}^{S}\otimes(\mathbb{K}^{S'})^*$.
The vectors that belong to tensor products of vector spaces are named tensors. So, tensors are vectors with some extra structure.
The tensor product can be defined easily for any kind of vector spaces, because any vector space can be thought of as a space of functions. The tensor product is associative, so we can define it between multiple vector spaces. We denote the tensor product of $n>1$ copies of $V$ by $V^{\otimes n}$. We can check that for $m,n>1$, $V^{\otimes (m+n)}=V^{\otimes m}\otimes V^{\otimes n}$. This also works for $m,n\geq 0$, if we define $V^{\otimes 1}=V$ and $V^{\otimes 0}=\mathbb{K}$. So, vectors and scalars are just tensors.
Let $U$, $V$ be $\mathbb{K}$-vector spaces. A linear operator is a function $f:U\to V$ which satisfies $f(u+v)=f(u)+f(v)$ and $f(\alpha u)=\alpha f(u)$, for any $u,v\in U,\alpha\in\mathbb{K}$. The operator $f:U\to V$ is in fact a tensor from $U^*\otimes V$.
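In components (a sketch assuming finite dimensions), the tensor product is just the table of all products of components, and a linear operator is a matrix:

```python
import numpy as np

u = np.array([1.0, 2.0])           # a vector in K^S, with |S| = 2
v = np.array([3.0, 4.0, 5.0])      # a vector in K^S', with |S'| = 3

# The tensor u (x) v in K^(S x S'): components alpha_ij = u^i v^j.
t = np.outer(u, v)                 # a 2x3 table, i.e. a function on S x S'

# A linear operator f: K^2 -> K^3 is a tensor from (K^2)* (x) K^3,
# represented by a 3x2 matrix acting by matrix multiplication.
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [2.0, -1.0]])
print(A @ u)                       # f(u) = (1, 2, 0)
```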
Inner products
Given a basis, any vector can be expressed as a set of numbers, the components of the vector. But the vector is independent of this numerical representation. The basis can be chosen in many ways, and in fact any non-zero vector can have any components (provided not all of them are zero) in a well chosen basis. This shows that any two non-zero vectors play identical roles, which may be a surprise. This is a key point, since a common misconception when talking about vectors is that they have definite intrinsic sizes and orientations, or that two of them make a definite angle. In fact, sizes and orientations are relative to the frame, or to the other vectors. Moreover, you can say that one of two vectors is larger than the other only if they are collinear. Otherwise, no matter how small one of them looks, we can easily find a basis in which it becomes larger than the other. It makes no sense to speak about the size, or magnitude, or length of a vector as an intrinsic property.
But wait, one may say, there is a way to define the size of a vector! Consider a basis in a two-dimensional vector space, and a vector $v=(v^1,v^2)$. Then the size of the vector is given by Pythagoras's theorem, as $\sqrt{(v^1)^2+(v^2)^2}$. The problem with this definition is that if you change the basis, you will obtain different components, and a different size of the vector. To make sure that you obtain the same size, you have to allow only certain bases. To speak about the size of a vector, and about the angle between two vectors, you need an additional object, which is called the inner product, or scalar product. Sometimes, for example in geometry and in relativity, it is called a metric.
Choosing a basis gives a default inner product. But the best way is to define the inner product, and not to pick a special basis. Once you have the inner product, you can define angles between vectors too. But size and angles are not intrinsic properties of vectors, they depend on the scalar product too.
The inner product between two vectors $u$ and $v$, defined by a basis, is $u\cdot v = u^1 v^1 + u^2 v^2 + \ldots + u^n v^n$. But in a different basis, it will have the general form $u\cdot v=\sum_i\sum_j g_{ij} u^i v^j$, where $g_{ij}=g_{ji}$ can be seen as the components of a symmetric matrix. These components change when we change the basis; they form the components of a tensor from $V^*\otimes V^*$. Einstein had the brilliant idea to omit the sum signs, so the inner product looks like $u\cdot v=g_{ij} u^i v^j$, where, since $i$ and $j$ appear both in upper and in lower positions, we understand that they run from $1$ to $n$ and are summed over. This is a thing that many geometers hate, but physicists find it very useful and compact in calculations, because the same summation convention appears in many situations which to geometers appear to be different, but in fact are very similar.
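The summation convention translates directly into code; NumPy's einsum uses the same index notation (a sketch, with an arbitrary symmetric $g_{ij}$):

```python
import numpy as np

g = np.array([[1.0, 0.2],
              [0.2, 2.0]])   # a symmetric g_ij in some basis

u = np.array([1.0, 3.0])
v = np.array([2.0, -1.0])

# u . v = g_ij u^i v^j: the repeated indices i and j are summed over.
dot = np.einsum('ij,i,j->', g, u, v)
assert np.isclose(dot, u @ g @ v)
```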
Given a basis, we can define the inner product by choosing the coefficients $g_{ij}$. And we can always find another basis in which $g_{ij}$ is diagonal, that is, it vanishes unless $i=j$. And we can rescale the basis so that each $g_{ii}$ is equal to $-1$, $1$, or $0$. Only if the $g_{ii}$ are all $1$ in some basis is the size of the vector given by the usual Pythagoras's theorem; otherwise there will be some minus signs, and some terms will even be omitted (those corresponding to $g_{ii}=0$).
Quantum particles are described by Schrödinger's equation. Its solutions are, for a single elementary particle, complex functions $|\psi\rangle:\mathbb{R}^3\to\mathbb{C}$, or more generally $|\psi\rangle:\mathbb{R}^3\to\mathbb{C}^k$, named wavefunctions. They completely describe the states of the quantum particle. They form a vector space $H$ which also has a hermitian product (a complex scalar product such that $h_{ij}=\overline{h_{ji}}$), and which is named the Hilbert space (because in the infinite dimensional case it also satisfies an additional property which we don't need here), or the state space. Linear transformations of $H$ which preserve the complex scalar product are named unitary transformations, and they are the complex analogues of rotations.
The wavefunctions are represented in a basis as functions of position, $|\psi\rangle:\mathbb{R}^3\to\mathbb{C}^k$. The elements of the position basis represent point particles. But we can make a unitary transformation and obtain another basis, made of functions of the form $e^{i (k_x x + k_y y + k_z z)}$, which represent pure waves. Some observations use one of the bases, some the other, and this is why there is a duality between waves and point particles.
For several elementary particles, the state space is the tensor product of the state spaces of the individual particles. A tensor product of the form $|\psi\rangle\otimes|\psi'\rangle$ represents separable states, which can be observed independently. If the state can't be written like this, but only as a sum of such products, the particles are entangled. When we measure them, the outcomes are correlated.
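A small numerical check (a sketch for two qubits: a state is separable exactly when its matrix of coefficients $\alpha_{ij}$ has rank $1$):

```python
import numpy as np

# Two-qubit states as 2x2 coefficient matrices alpha_ij in the basis |i>|j>.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

separable = np.outer(up, down)                                 # |up> (x) |down>
bell = (np.outer(up, up) + np.outer(down, down)) / np.sqrt(2)  # entangled

print(np.linalg.matrix_rank(separable))   # 1: separable
print(np.linalg.matrix_rank(bell))        # 2: entangled
```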
The evolution of a quantum system is described by Schrödinger's equation. Basically, the state rotates by a unitary transformation. Only such transformations conserve the probabilities associated with the wavefunction.
When you measure a quantum system, you need an observable. One can see an observable as defining a decomposition of the state space into perpendicular subspaces. After the observation, the state is found to be in one of the subspaces. We can only know the subspace, but not the actual state vector. This is strange, because the system can, in principle, be in any possible state, but the measurement finds it to be only in one of these subspaces (we say it collapsed). This is the measurement problem. Things become even stranger if we realize that when we measure another property, the corresponding decomposition of the state space is different. In other words, if you look for a point particle, you find a point particle, and if you look for a wave, you find a wave. This looks as if the unitary evolution given by Schrödinger's equation is broken during observations. Perhaps the wavefunction remains intact, but for us only one of the components continues to exist, the one corresponding to the subspace we obtained after the measurement. In the many worlds interpretation the universe splits, and all outcomes continue to exist, in newly created universes. So not only does the state vector contain the universe, it actually contains many universes.
I have a proposed explanation for some strange quantum features, in [1, 2, 3], and in these videos:
An example where there is a minus sign in Pythagoras's theorem is given by the theory of relativity, where the squared size of a vector is $v\cdot v=-(v^t)^2+(v^x)^2+(v^y)^2+(v^z)^2$.
This inner product is named the Lorentz metric. Special relativity takes place in the Minkowski spacetime, which has four dimensions. A vector $v$ is named timelike if $v\cdot v < 0$, spacelike if $v\cdot v > 0$, and null or lightlike if $v\cdot v = 0$. A particle moving with the speed of light is described by a lightlike vector, and one moving with a lower speed by a timelike vector. Spacelike vectors would describe faster than light particles, if such particles exist. Points in spacetime are named events. Events can be simultaneous, but this depends on the frame. In any case, to be simultaneous in some frame, two events have to be separated by a spacelike interval. If they are separated by a lightlike or timelike interval, they can be connected causally, or joined by a particle with a speed equal to, respectively smaller than, the speed of light.
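For instance (a sketch, using the signature $(-,+,+,+)$ from the formula above):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # the Lorentz metric g_ij

def causal_type(v):
    s = v @ eta @ v                     # v . v = g_ij v^i v^j
    if s < 0:
        return "timelike"
    if s > 0:
        return "spacelike"
    return "lightlike"

print(causal_type(np.array([1.0, 0.0, 0.0, 0.0])))   # timelike
print(causal_type(np.array([1.0, 1.0, 0.0, 0.0])))   # lightlike
print(causal_type(np.array([0.0, 1.0, 0.0, 0.0])))   # spacelike
```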
In Newtonian mechanics, the laws remain unchanged under translations and rotations in space, translations in time, and inertial motions of the frame; together these form the Galilei transformations. However, electromagnetism disobeyed. In fact, this was the motivation for the research of Einstein, Poincaré, Lorentz, and FitzGerald. Their work led to the discovery of special relativity, according to which the correct transformations are not those of Galilei, but those of Poincaré, which preserve the distances given by the Lorentz metric.
Curvilinear coordinates
A basis or a frame of vectors in the Minkowski spacetime allows us to construct Cartesian coordinates. However, if the observer's motion is accelerated (hence the observer is non-inertial), her frame will rotate in time, so Cartesian coordinates will have to be replaced with curvilinear coordinates. In curvilinear coordinates, the coefficients $g_{ij}$ depend on the position. But in special relativity they have to satisfy a flatness condition, otherwise spacetime would be curved, and this didn't make much sense back in 1905, when special relativity was discovered.
Einstein remarked that to a non-inertial observer, inertia looks similar to gravity. So he imagined that a proper choice of the metric $g_{ij}$ may generate gravity. This indeed turned out to be true, but the required $g_{ij}$ corresponds to a curved spacetime, not a flat one.
One of the problems of general relativity is that it has singularities. Singularities are places where some of the components of $g_{ij}$ become infinite, or where $g_{ij}$ has, when diagonalized, some zero entries on the diagonal. For this reason, many physicists believe that this problem indicates that general relativity should be replaced with some other theory, yet to be discovered. Maybe it will be solved when we replace it with a theory of quantum gravity, like string theory or loop quantum gravity. But until we know what the right theory of quantum gravity is, general relativity can actually deal with its own singularities (while the candidates mentioned above have not solved this problem). I will not describe this here, but you can read my articles about it, and also this essay, and these posts about the black hole information paradox [1, 2, 3]. And watch this video
Vector bundles and forces
We call fields the functions defined on space or spacetime. We have seen that the fields valued in a vector space themselves form a vector space. On a flat space $M$ which looks like a vector space, the fields valued in vector spaces can be thought of as being valued in the same vector space, for example $f:M\to V$. But if the space is curved, or if it has nontrivial topology, we are forced to consider that at each point there is another copy of $V$. So such a field will be more like $f(x)\in V_x$, where $V_x$ is the copy of the vector space $V$ at the point $x$. Such fields still form a vector space. The union of all the $V_x$ is called a vector bundle. The fields are also called sections, and $V_x$ is called the fiber at $x$.
Now, since the $V_x$ are copies of $V$ at each point, there is no invariant way to identify each $V_x$ with $V$. In other words, $V_x$ and $V$ can be identified, for each $x$, only up to a linear transformation of $V$. We need a way to move from $V_x$ to a neighboring $V_{x+dx}$. This can be done with a connection. Moreover, moving a vector from $V_x$ along a closed curve reveals that, upon returning to $V_x$, the vector is rotated. This is explained by the presence of a curvature, which can be obtained easily from the connection.
Connections behave like the potentials of force fields, and a force field corresponds to the curvature of the connection. This makes it very natural to use vector bundles to describe forces, and this is what gauge theory does.
Forces in the standard model of particles are described as follows. We assume that there is a typical complex vector space $V$ of dimension $n$, endowed with a hermitian scalar product. The connection is required to preserve this hermitian product when moving between the copies $V_x$. The set of linear transformations that preserve the scalar product is named the unitary group, and is denoted by $U(n)$. The subset of transformations having determinant equal to $1$ is named the special unitary group, $SU(n)$. The electromagnetic force corresponds to $U(1)$, the weak force to $SU(2)$, and the strong force to $SU(3)$. Moreover, all particles turn out to correspond to vectors appearing in the representations of the gauge groups on vector spaces.
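To make the groups concrete, here is a small sketch that just verifies the defining properties numerically (the rotation angle is an arbitrary choice of mine):

```python
import numpy as np

theta = 0.3   # arbitrary angle; this rotation matrix happens to lie in SU(2)
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

is_unitary = np.allclose(U.conj().T @ U, np.eye(2))   # preserves the hermitian product
has_det_one = np.isclose(np.linalg.det(U), 1.0)       # determinant equal to 1
print(is_unitary and has_det_one)                     # True: U is in SU(2)
```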
Vectors are present everywhere in physics. We see that they help us understand quantum mechanics, special and general relativity, and the particles and forces. They seem to offer a unitary view of fundamental physics.
However, up to this point, we don't know how to unify:
- unitary evolution and the collapse of the wavefunction
- the quantum level with the mundane classical level
- quantum mechanics and general relativity
- the electroweak and strong forces (though we do know how to combine the electromagnetic and weak forces, in the unitary group $U(2)$)
- the standard model forces and gravity
Labels: Geometry of Physics, Physics, Quantum Theory, Singularities, Symmetry, Time
The unreasonable beauty of mathematics in the natural sciences*
Imagine a man and a woman, seeing and liking each other at a party or club or so. They start talking; the mutual attraction is obvious, but they want to be casual for two minutes. So they exchange informal formalities about whatever. Then he asks her: "so, what do you do?", and she replies "I'm a poet". What if the guy said something like "I hate poetry!", or even declared proudly "I never knew how to use letters to write words and stuff, and I don't care!"? Or imagine she's a musician, and he says "I hate music!". There are two things we can say about that kind of guy. First, he is very rude; he never ever deserves a second chance with that girl, or with any other human being for that matter. He should be isolated, kept outside society. Second, or maybe this should be first, how on earth can he be proud of being illiterate!
You probably guessed that this story is true. OK, in my case it was about math instead of poetry, and the genders were reversed. This has happened to me, and to anyone in the same situation, quite often. There is no political correctness when it comes to math, maybe because one tends to believe that if you like math, you have no feelings, and such a remark wouldn't hurt you. And I actually was never offended when a girl said such outrageous things as that she hates math. Because whenever a girl told me she hates math, I knew she calls math something that really is boring and ugly, and not what I actually call math. Because math as I know it is poetry, is music, and is a wonderful goddess.
The story continues, years later. You talk about physics, with people interested in physics, or even with physicists. And you say something about this being just a mathematical consequence of that, or about a certain phenomenon being better understood if we consider it as a certain mathematical object. It happens sometimes that your interlocutor becomes impatient and says that this is only math, that you were discussing physics, that math has no power there, and so on. Or that math is at best just a tool, and it actually obscures the real picture, or even that it limits our power of understanding.
People got the wrong picture that math is about numbers, or letters that stand for unknown numbers, or about being extremely precise and calculating a huge number of decimals, or being very rigid and limited. In fact, math is just the study of relations. You may be surprised, but this is actually the mathematical definition of math. Numbers come into math only incidentally, as they come into music when you indicate the duration or the tempo. Math is just a qualitative description of relations, and by relations we can understand a wide rainbow of things. I will detail this another time.
Imagine you wake up and you don't remember where you are, or who you are, as if you were just born. You are surrounded by noise, which hurts your ears and your brain: meaningless, random, violent noise. You run desperately, trying to avoid it, but it is everywhere. And you finally find a spot where everything suddenly becomes wonderful: the noise becomes music, a celestial, beautiful music, and everything starts making sense. You are in a wonderful Cathedral, and you are tempted to call what you are listening to the "music of the spheres". The same music was playing earlier, but you were in the wrong place, where the acoustics were bad, or the sounds reached your ear in the wrong order because of the relative positions of the instruments. Or maybe your ears were not yet tuned to the music. The point is that what seemed to be ugly noise suddenly became wonderful.
So, when someone says "I hate math!", all I hear is "I am in the Cathedral you call wonderful, but in the wrong place, where the celestial music becomes ugly violent noise!".
If you are interested in physics, you have entered the Cathedral. But if you hate math, you will not last here, and maybe it is better to get out immediately! And if you are still interested in physics, come inside slowly, carefully choosing your steps, to avoid being assaulted by the music of the spheres, to allow it to gently enter your mind, and to open your eyes. Choose carefully what you read and what lectures you watch, and ask questions. Don't be shy: any question you ask is the right question for your current position, and for your next step.
There are some places in the Cathedral where the music is really beautiful. If you meet people there, to share the music, to dance, you will feel wonderful. If not, you will feel lonely. So you will want to share that place, you will want to invite your friends to join you.
The reason I love physics is that I want to find these places. The reason I read blogs and papers is that I want them to help me find such places. The reason I write papers, and blog about this, is that I would like to share my places with others. I attend conferences (four so far this year) because they are like concerts, where you get the chance to listen to some wonderful music, and to play your own.
But these are just words. I would like to write more posts in which I show the unreasonable beauty of math in physics, with concrete examples. Judging by the statistics, I have a few readers; judging by the number of comments, I don't really reach many of them. I know sometimes I am too serious, or too brief when I should explain more, especially when mathematical subtleties are involved. I am not very good at explaining abstract things to non-specialists, but I want to learn. I would like to write better, to be more useful, so I would like to encourage comments and suggestions. Ask me to clarify, to explain, to detail, to simplify. Tell me what you would like to understand.
To start, I would like to write about vectors. They are so fundamental in all areas of physics and mathematics that I think it's a good idea to start with them. You may think they are too simple, and that you know all about them from high school, but you don't know the whole story. Later, when I say something about quantum mechanics and relativity, they will be necessary (after all, according to quantum mechanics, the state of the universe is a vector). On the other hand, if you understand them well, you will be around half of the way to understanding some modern physics.
* You surely guessed that the title is a reference to Wigner's brilliant and insightful lecture, The unreasonable effectiveness of mathematics in the natural sciences.
Update, October 14, 2014
I just watched an episode of the Colbert Report from April this year, in which the mathematician Edward Frenkel was invited. It was about Frenkel's new book and about his movie. At some point he discusses precisely the fact that it is so acceptable to hate math, as opposed to hating music or painting. Here is what he says for The Wall Street Journal:
It's like teaching an art class where they only tell you how to paint a fence but they never show you Picasso. People say 'I'm bad at math,' but what they're really saying is 'I was bad at painting the fence.'
Also see this video:
Labels: Art, Debates, Essence, Physics
Will science end after the last experiment is performed?
Science is supposed to work like this: you make a theory which explains the experimental data collected up to this point, but which also proposes new experiments and predicts their results. If the experiment doesn't reject your theory, you are allowed to keep it (for a while).
I agree with this. On the other hand, much of the progress in science is not made like this, as we can see by looking back in history.
Now, to be fair, making testable predictions is something really excellent, without which there would be no science. To paraphrase Churchill,
Scientific method is the worst form of conducting science, except for all the others.
I am completely for experiments, and I think we should never stop testing our theories. On the other hand, we should not be extremists about making predictions. Science advances in the absence of new experiments too.
For example, Newton had access to a lot of data already collected by his predecessors, and sorted by Kepler, Galileo, and others. Newton came up with the law of universal attraction, which applies to how planets move, in conformity with Kepler's laws, but also to how bodies fall on Earth. His equation allowed him to calculate the gravitational constant from one case, and then it applied to all the other data. Of course, later experiments were performed, and they confirmed Newton's law. But his theory was already science before these experiments were performed. Why? Because his single formula gave the quantitative and qualitative descriptions of a huge amount of data, like the movements of the planets and gravity on Earth.
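To see how naturally the inverse square law reproduces Kepler, here is the one-line check for the simplest case of a circular orbit (a sketch: equate gravity with the centripetal force and use $v=2\pi r/T$):

$$\frac{GMm}{r^2}=\frac{mv^2}{r},\qquad v=\frac{2\pi r}{T}\quad\Rightarrow\quad T^2=\frac{4\pi^2}{GM}\,r^3,$$

which is Kepler's third law: the square of the period is proportional to the cube of the orbit's size.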
Once Newton guessed the inverse square law, and checked its validity (on paper) against the data about the motion of a planet and the data about several projectiles, he was sure that it would work for other planets, comets, etc. And he was right (up to a point, of course, corrected by general relativity, but that's a different story). For him, checking his formula for a new planet was like a new experiment, only the data had already been collected by Tycho Brahe, and already analyzed by Kepler.
Assuming that this data was not available, and was only collected later, would this mean that Newton's theory would have been more justified? I don't really think so. From his viewpoint, just checking the new, already known cases was a corroboration of his law, because he could not have come up with his formula from all the available data at once. He started with one or two cases, then guessed it, then checked it against the others. The data for the other cases was already available, but it could very well have been obtained later, by new observations or experiments.
In this sense, the new experiments and observations performed after that were just redundant.
Now, think of special relativity. Through the work of Lorentz, Poincaré, Einstein and others, the incompatibility between the way electromagnetic fields and waves transform when one changes the reference frame, and the way they were expected to transform by the formulae known from classical mechanics, was resolved. The old transformations of Galileo were replaced by the new ones of Lorentz and Poincaré. As a bonus, mass, energy and momentum became unified, electric and magnetic fields became unified, and several known phenomena gained a better and simpler explanation. Of course, new predictions were also made, and they served as new reasons to prefer special relativity over classical mechanics. But assuming these predictions had not been made, or not verified, or were already known, how would this make special relativity less scientific? The theory already explained, in a unified way, various apparently disconnected phenomena which were already known.
One says that Maxwell unified the electric and magnetic fields with his equations. While I agree with this, the unification became even better understood in the context of special relativity. There, it became clear that the electric and magnetic fields are just parts of a four-dimensional tensor $F$. The magnetic field corresponds to the spatial components $F_{xy}$, $F_{yz}$, $F_{zx}$, and the electric field to the mixed, spatial and temporal, components $F_{tx}$, $F_{ty}$, $F_{tz}$ of that tensor. The scalar and vector potentials turned out to be unified in a four-dimensional vector potential. Moreover, the unification became clearer when the differential form of Maxwell's equations was found, and even clearer when the gauge theory formulation was discovered. These are simple conceptual jumps, but they are science. And if they are also accompanied by empirical predictions which are confirmed, even better.
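Collected in a matrix (in units with $c=1$ and with one common sign convention; conventions vary), the antisymmetric tensor looks like this:

$$F_{\mu\nu}=\begin{pmatrix}0 & E_x & E_y & E_z\\ -E_x & 0 & B_z & -B_y\\ -E_y & -B_z & 0 & B_x\\ -E_z & B_y & -B_x & 0\end{pmatrix},$$

with the rows and columns ordered as $(t,x,y,z)$.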
Suppose for a moment that we live in a Euclidean world. Say that we performed experiments and tested the axioms of Euclid. Then we keep performing experiments to test various propositions that result from these axioms. Would this make any sense? Yes, but not as much as is usually implied. The propositions are already bound to be true by logic, because they are deduced from the axioms, which are already tested. So why bother to make more and more experiments to test various theorems of Euclidean geometry? This would be silly, unless we want to check by this that the theorems were correctly proven.
On the other hand, in physics, a lot of experiments are performed to test various predictions of quantum mechanics, or special relativity, or the standard model of particle physics, which follow logically and necessarily from postulates that were already tested decades ago. This should be done; one should never say "no more tests". But on the other hand, it gives us the feeling that we are doing new science, because we are told that science without experiment is not science, while we are just checking the same principles over and over again.
Imagine a world where all possibly conceivable experiments have been done. Suppose we even know some formulae that tell us what experimental data we would obtain if we did any of these experiments again. Would this mean that science has reached its end, and there is nothing more to be done?
Obviously it doesn't mean this. We can systematize the data. Tycho Brahe's tables were not the final word in the astronomy of our solar system. They could be systematized by Kepler, and then Kepler's laws could be obtained as corollaries by Newton. Of course, Kepler's laws have more content than Brahe's tables, because they also apply to new planets and new planetary systems. Newton's theory of gravity does more than Kepler's laws, and Einstein's general relativity does more than Newton's gravity. But such predictions were out of our reach at that time. Even if Tycho Brahe had had the means to make tables for all the planets in the universe, this would not make Kepler's laws less scientific.
Assuming that we have all the data about the universe, science can continue to advance: to systematize, to compress this data into more general laws. To compress the data better, the laws have to be as universal as possible, as unified as possible. And this is still science. Understanding that Maxwell's four equations (two scalar and two vectorial) can be written as only two, $d F = 0$ and $\delta F = J$ (or even one, $(d + \delta)F=J$), is scientific progress, because it tells us more than we previously knew about this.
But there is also another reason not to consider that science without experiments is dead. The idea that any theory should offer the means to be tested is misguided. Of course, this is preferred, but why would Nature give us the means to check any truth about Her? Isn't this belief a bit anthropocentric?
Another reason not to be extremist about predictions is the following. Researchers try to find better explanations of known phenomena. But because they don't want their claims to appear unscientific, they try to come up with experiments, even when there are none to propose. For example, you may want to find a better interpretation of quantum mechanics, but how would you test it? Hidden variables stay hidden, alternative worlds remain alternative; if you believe measurement changes the past, you can't go back in time and see it changed without actually measuring it, etc. It is as if quantum mechanics were protected by a spell against various interpretations. But should we reject an alternative explanation of quantum phenomena because it doesn't make predictions that differ from the standard quantum formalism? No; instead of calling such explanations "alternative theories", we call them "interpretations". If there is no testable difference, they are just interpretations or reconstructions.
A couple of months ago, the physics blogosphere debated post-empirical science. This debate was ignited by a book by Richard Dawid, named String Theory and the Scientific Method, and an interview. His position seemed to be that, although there are no accessible means to test string theory, it still is science. Well, I did not write this post to defend string theory. I think it has, at this time, bigger problems than the absence of means to test what happens at the Planck scale. It predicts things that were not found, like supersymmetric particles, a non-positive cosmological constant, huge masses for particles, and it fails to reproduce the standard model of particle physics. Maybe these problems will be solved, but I am not interested in string theory here. I am just interested in post-empirical science. And while string theory may be a good example that post-empirical science is useful, I don't want to take advantage of the trouble in which this theory now finds itself.
The idea that science will continue to exist after we exhaust all experiments, which I am not sure fairly describes Richard Dawid's real position, was severely criticized, for example in Backreaction: Post-empirical science is an oxymoron. And the author of that article, Bee, is indeed serious about experiment. For example, she entertains a superdeterministic interpretation of quantum mechanics. I think this is fine, given that my own view can be seen as superdeterministic. In fact, if you want to reject faster-than-light communication, you have to accept superdeterminism, but this is another story. The point is that you can't make an experiment to distinguish between standard quantum mechanics and a superdeterministic interpretation, because that interpretation came from the same data as the standard one. Well, you can't in general, but for a particular type of superdeterministic theory, you can. So Bee has an experiment, which is relevant only if the superdeterministic theory is such that making a measurement A, then another one B, and then repeating A, will give the same result both times you measure A, even if A and B are incompatible. Now, any quantum mechanics book which discusses sequences of spin measurements claims the opposite. So this is a strong prediction indeed. But how could we test superdeterminism if it is not like this? Why would Nature choose the superdeterministic mechanism behind quantum mechanics in this very special way, only to be testable? As if Nature were trying to be nice to us, giving us only puzzles that we can solve.
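Here is a quick check of that standard claim, a sketch simulating spin measured along $z$ (the role of A), then along $x$ (the role of B), then along $z$ again; the second A outcome repeats the first only about half the time:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, basis):
    """Projective measurement in an orthonormal basis: returns the outcome
    index and the collapsed state."""
    probs = np.array([abs(b.conj() @ state)**2 for b in basis])
    k = rng.choice(len(basis), p=probs / probs.sum())
    return k, basis[k]

z_basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

trials, repeats = 10000, 0
for _ in range(trials):
    state = z_basis[0]                    # start with spin up along z
    a1, state = measure(state, z_basis)   # measure A
    _, state = measure(state, x_basis)    # measure B, incompatible with A
    a2, state = measure(state, z_basis)   # measure A again
    repeats += (a1 == a2)

print(repeats / trials)   # about 0.5, not 1: measuring B disturbed A
```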
Labels: Debates, Quantum Theory, Scientific method
Science and lottery
Ask anyone who buys lottery tickets systematically: most of them will confirm they have a system. Most systems seem to be based on birthdays, although the days of the month are a serious limitation of the possibilities. Some play random numbers, which they draw from a bag (this is the best "system"), but most have a system of some sort.
I don't believe there is a winning system. People have tried to convince me that numbers have a life of their own, and that they are not quite random. "Laymen" tend to believe that if you toss a coin and get heads, next time there are bigger chances to get tails. If you pay attention in US movies, you will see that almost every time a number appears, its digits are unique, for example 52490173, a permutation of a subset of 0123456789. Except of course for the phone numbers, which start with 555. This is because a number like 254377 seems too special. In fact, numbers with repeated digits are encountered more often in real life. So I don't buy the idea that lottery numbers are not random. Some try to convince me that because the balls are not perfect, they are biased, and some numbers are more likely to be extracted than others. Even if this is the case, I don't think you can actually use this to predict the numbers.
My opinion is that in the lottery only the house wins, at least on average. This doesn't mean that if you play you will not win.
Now, since almost anyone who plays systematically has a system, and since the winner will be among these people, most winners have a system. So, what happens when you win? You will believe that your system finally turned out to be correct. You may even write a book in which you explain the system, and get even richer by selling it. But you will definitely believe that you won because of your system, while I don't believe your system. You can tell me that your system turned out to be correct, even that it is science, because it made predictions and was confirmed by the most difficult test: actually playing and winning in real life! But I still don't believe in your system. Because anyone who wins has a system, and he won because sometimes people win, not because of the system.
Now, imagine a world in which:
- in order for a paper to be considered scientific, its basic hypotheses have to be falsifiable by experiments;
- scientists have to publish lots of original papers, otherwise they perish.
This is pretty much our world, and I think that these two conditions lead to an avalanche of predictions. Whenever an experiment is about to be performed, scientists will bet on various outcomes. And just as in betting, they will try to cover all possible outcomes.
So, after the experiment is performed, some will win the lottery, while some will lose it. Does this ensure that the winners really cracked the laws of Nature? Did they win because of their theory, because of their system? Or just because of pure luck, while they merely give credit to their system?
Doesn't this mean that something is wrong with the way we define science? Making predictions is easy. Suppose that there are 5 possible outcomes, and there are 5 theories predicting them, one for each outcome. Suppose that the experiment corroborates one of them, and falsifies the other four. Why were those 4 wrong in the first place? Just because after the experiment they turned out to be wrong? Why couldn't we see the reason why they are wrong before performing the experiment? What if the fifth, which was corroborated, is correct by coincidence, for the wrong reason? What if there are 10 other possible explanations of the same result?
Yes, it is possible for a theory to be right for the wrong reason. Consider for example the following calculation:
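A classic calculation of this kind, standing in here for the original example, is the "anomalous cancellation"

$$\frac{16}{64}=\frac{1}{4},$$

"proved" by simply crossing out the digit $6$ from the numerator and the denominator.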
The result is clearly correct, but the proof is wrong.
If a theory makes a correct prediction, this doesn't mean that it is correct. This is why we never consider a theory to be proved, or even confirmed. We just say that the experimental results corroborate it. Maybe later we will find a better theory, which will make the right predictions for the right reasons.
The problem is that if we find another theory which makes the same predictions, it will be considered inferior. The new theory will be asked to come up with new experimental proposals and its own predictions, which contradict the predictions of the previous theories. If it cannot make new predictions, rather than being considered equal to the currently accepted theory, it will be considered inferior. Because the current one made new predictions, while the new one made the same predictions.
This means that of two theories making the same predictions, the one that was proposed earlier will have some advantages over the one that was proposed later, even if the latter is conceptually superior, or simpler, or has other advantages.
Labels: Scientific method
Last week I was in Castiglioncello, a small but very beautiful town in Tuscany, Italy, not far from Pisa. Between the plane and the train, I had time for a detour to see the leaning tower.
(also check this optical illusion inspired by the leaning tower.)
The reason for my visit to Castiglioncello was a physics conference:
It is organized every other year, mainly by Thomas Elze, a brilliant physicist and a dear friend who invited me. It takes place in Castello Pasquini:
Castello Pasquini
I will not list now all of the roughly 150 participants, excellent physicists from various countries and areas of research. I will mention for the moment just the Nobel prize winner Gerard 't Hooft, Tom Kibble, who also deserves a Nobel prize for co-discovering the Higgs mechanism, and the Fields medalist Alain Connes. It was an excellent opportunity to finally meet in person people with whom I had only communicated via the Internet, or whom I knew only from their research papers, and also to meet again people I knew from other conferences.
Initially, I thought there was no beach, because people were sunbathing on the cliffs,
but when I asked a sandy girl, she pointed me to some stairs
leading to a real beach
It was so difficult to decide which of the talks to skip in order to visit Castiglioncello, or to swim in the sea. Luckily, I could swim at night.
I stayed at Hotel Leopoldo, where the breakfast was made by an excellent Romanian cook named Florin, and at the desk was a cute girl named Valentina.
At lunch, we ate at the restaurant Il Peschereccio.
Courtesy of Yaron Hadad.
A great place to have some drinks and eat a good pizza is Ghostbuster.
The conference dinner took place at Grand Hotel Villa Parisi,
a wonderful place owned by Francesca, a friendly beautiful girl.
Are sciences and arts perversions?
According to Wikipedia, perversion is
a type of human behavior that deviates from that which is understood to be orthodox or normal.
Now consider the human mind. We evolved so that we find food, make children, avoid predators, etc. All these are just means that serve our survival in a universe that is trying to kill us. Or, even better, they serve the selfish gene and its replication.
So the human mind shouldn't care about things that don't serve this purpose. What evolutionary purpose can there be in doing math and physics? Indulging in such activities doesn't serve the purpose of your survival and replication. One may say that, at least for some, science is their job: they earn money, and they survive. But researchers know very well that jobs in industry are safer and better paid, and come with a better success rate. And with better success with the ladies (although artists are doing even better). But anyway, sciences and arts are recent, so they can't be the product of mutation and selection. So sciences and arts are perversions of the original purpose of the brain.
While they are not the product of evolution, they may be a byproduct. In order to survive, our ancestors had to identify patterns around them and use these patterns to make predictions. To anticipate when a wild animal would attack them, to recognize edible fruits, to identify a sexual partner with good potential: all these require pattern recognition and the ability to make predictions. And this is why we became intelligent. So, even if we are using our intelligence for other purposes, like the sciences, this is a byproduct of evolution.
Nature has a way to reward you when you do something good for your genes. This is why we like to eat and to have sex. This is why we feel proud and happy when our children accomplish tasks or acquire new skills. But the blind gene doesn't know the future, so she can't reward us for actually doing something good for her. Instead, she rewards us for guessing patterns. We feel happy when we guess a pattern, and especially when a long anticipated prediction is confirmed. We identify patterns in sounds and drawings too, and this is why we like music and the other arts. Even literature builds on our predictions and anticipation. During anticipation, the brain produces the drugs that will make us happy. Building anticipation and suspense is the craft of accumulating this happiness in the consumer of one's art.
We are surprised when predictions we consider safe turn out to be wrong. Sometimes the anticipation accumulates the feel-good drug, and the surprise makes it explode; this is how we laugh. Jokes are just clever ways to manipulate us into making predictions that turn out to be wrong in an unexpected and usually harmless way.
OK, so evolution explains all these as perversions of our mind, as byproducts. However, we know that science helped us survive better. As a result of science and of its progeny, technology, we live longer, in better conditions, we find and produce food more easily, and we can take better care of our children, and of the children of our children.
So, science really helps the replication of the selfish gene.
I can't help but asking, did the selfish gene have a secret plan all this time?
Black holes can't keep secrets
At first, math seemed to show that anything that enters a black hole is lost forever. Later, it seemed that black holes evaporate, but the secrets remain lost. But maybe it is not so.
My new video at the FQXi contest is called Can a black hole keep a secret?, and can be seen and rated at http://fqxi.org/community/forum/topic/2205:
To rate my video or those of my competitors, click "rate this video". You will be required to enter an email address to avoid duplicate votes. Then press "go" and vote.
You can check and rate other videos at FQXi Video Contest - Spring, 2014. You can submit your own video until August 22.
In the previous post, named The puzzle of quantum reality, in theaters near you, and at FQXi, I mentioned my other video, and my son's.
On youtube, my videos can be watched with subtitles, in English or in Romanian:
The puzzle of quantum reality
Can a black hole keep a secret?
The puzzle of quantum reality, in theaters near you, and at FQXi
I made a 7-minute video introducing some puzzling aspects of quantum mechanics to a general audience. At the end it contains a proposed view which, at least to me, makes things clearer, so I hope it can help others too.
My video, named The puzzle of quantum reality, can be seen and rated at http://fqxi.org/community/forum/topic/2183:
To rate my video or those of my competitors, click "rate this video". You will be required to enter an email address to avoid duplicate votes.
I also compete against my son, whose video is at http://fqxi.org/community/forum/topic/2176:
You can check and rate other videos, ranging from fun to informative, at FQXi Video Contest - Spring, 2014. You can submit your own video until August 22.
A confused sleeping beauty 2
This post contains a small twist of the original experiment discussed in the previous post, A confused sleeping beauty. The new version doesn't require putting anyone to sleep and removing her memories, because we replace memory removal with lack of information.
Confusing Sleeping Beauty without erasing her memory
Sleeping Beauty is no longer required to sleep, but she may still need to sleep, to remain beautiful.
Consider the following settings:
- We toss a fair coin.
- If it lands heads, we will ask Sleeping Beauty once about her belief in the proposition that the coin landed heads.
- If the coin lands tails, we ask her twice.
This is similar to the original experiment, but instead of erasing her memory, we just do the following:
- Before asking her any question, we toss the coin a large number of times.
- Then we ask Beauty, but not in the same order in which we tossed. For example, when we toss a coin, if it lands heads, we write down a question and don't ask it yet. If it lands tails, we write down two questions, and don't ask them yet. Then we shuffle the questions and ask Beauty one at a time. We keep track of which toss each question is connected to.
To prevent the possibility that she adjusts her estimates by counting the number of heads and tails about which she was already asked, we don't tell her whether she guessed correctly until the end of the experiment.
We see that the most rational answer she can give is 1/3. On the other hand, of course she knows that the probability that when the coin was tossed it landed heads is 1/2.
A confused sleeping beauty
The Sleeping Beauty problem
A recent post by Sean Carroll reignited a debate about the "Sleeping beauty problem".
This is a simple problem of probabilities, involving tossing a coin. But for some reason, there seems to be no agreement about its solution.
Consider the following experiment:
- On Sunday, put Sleeping Beauty to sleep.
- Toss a fair coin.
- We are interested in asking Sleeping Beauty the question
Q. What is your belief now for the proposition that the coin landed heads?
- If the coin comes up heads, wake up Sleeping Beauty on Monday and ask her the question. Then drug her to forget that awakening.
- If the coin comes up tails, wake up Sleeping Beauty both on Monday and Tuesday and ask her the question. Each time drug her to forget that awakening.
- In both cases, don't forget to wake her on Wednesday and end the experiment.
If you have trouble convincing a Beauty to let you put her to sleep and drug her, you can try your luck with people who already have very short memory, like Lucy Whitmore from "50 First Dates", Leonard from "Memento", Allie from "The Notebook", or Dory from "Finding Nemo".
Those thinking they know the answer are mainly in one of two camps: halfers, who think she should answer 1/2, and thirders, who think she should answer 1/3. Thirders say that when Beauty is woken and interviewed, she thinks she can be in one of three situations. Since the coin turned up heads in only one of these cases, the answer must be 1/3. Halfers say that this answer is wrong, being probably caused by drug abuse, and since the coin is fair, the answer should be 1/2. There is nothing that can provide new information to Sleeping Beauty, so this answer should remain 1/2.
I will not detail here the debates still ongoing on the net, and the articles which are written about this. I just want to explain why I think that this debate is based on different understandings of the question.
Another experiment
Consider the following experiment.
- Prepare a large box, in which you can put apples and oranges, without seeing its content.
- Toss a fair coin. If it comes up heads, put one orange in the box.
- If the coin comes up tails, put two apples in the box.
- Repeat this many times.
- At the end, randomly extract a fruit from the box. Unless the experiment took too long, the fruits are not yet rotten, so you can extract a fruit.
- Then answer the following questions:
1. What is the probability that the fruit you will extract was introduced after the coin landed heads?
2. In what fraction of the total tosses did the coin land heads?
The answer to question 1 is of course 1/3, because 1/3 of the fruits are oranges, and oranges were placed in the box when the coin landed heads.
The answer to question 2 is of course 1/2, because the coin is supposed to be fair.
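If you don't trust the arithmetic, here is a quick Monte Carlo check. This is a small Python sketch I am adding for illustration; the function name and the number of tosses are of course arbitrary:

```python
import random

def fruit_box_experiment(n_tosses=100_000, seed=0):
    """Simulate the fruit-box experiment: heads -> one orange, tails -> two apples."""
    rng = random.Random(seed)
    fruits = []   # True for an orange (placed after heads), False for an apple
    heads = 0
    for _ in range(n_tosses):
        if rng.random() < 0.5:      # heads
            heads += 1
            fruits.append(True)
        else:                       # tails
            fruits.extend([False, False])
    # Question 1: fraction of fruits that follow a heads toss
    # Question 2: fraction of tosses that landed heads
    return sum(fruits) / len(fruits), heads / n_tosses

p1, p2 = fruit_box_experiment()
print(f"Question 1 (thirders): {p1:.3f}")   # ~0.333
print(f"Question 2 (halfers):  {p2:.3f}")   # ~0.500
```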
My claim is that thirders were actually answering question 1, and halfers were answering question 2.
The question Sleeping Beauty was asked can be seen as equivalent to both question 1 and question 2.
To see how it can be seen as equivalent to question 1, consider a combination of the two experiments. Say that Sleeping Beauty is not only asked the question, but is also given a fruit to put in the box. If the coin landed heads, she will receive an orange, and if it landed tails, she will receive an apple. She will put them in the box, then she will be put to sleep and forget about the awakening. Say the experiment is run a large number of times. At the end, she can just count the fruits, and she will find that 1/3 of them are indeed oranges, so she will know that the answer to the question is indeed 1/3. Asking her about her belief that the coin landed heads that time is the same as asking her about her belief that she will receive an orange.
It is true that for every time the coin landed tails, she gets two apples, while every time it landed heads, she gets only one orange. This is why some tend to understand the question as actually being question 2.
Removing the confusion
So the dispute between thirders and halfers is due to the fact that they interpreted the question differently, and consequently answered different questions.
Instead of asking Sleeping Beauty the question as originally stated, we could just ask her two questions:
Q1. What is your belief that this awakening occurred following an event in which the coin landed heads?
Q2. What is your belief that when the coin was tossed, it landed heads?
Impossibility theorems, a counterexample (the seven bridges problem)
In mathematics and physics there are some results called no-go theorems, or impossibility theorems. To name just a few: Euler's solution to the problem of the seven bridges of Königsberg, Gödel's incompleteness theorems, Bell's theorem, Kochen-Specker theorem, Penrose and Hawking's singularity theorems.
Research is an adventurous activity: you can spend years researching a dead end, or you can stumble by luck upon something worthy without even knowing it (for example, the discovery by Penzias and Wilson of the cosmic microwave background radiation). To avoid spending years looking in the wrong places, researchers use various guidelines. Impossibility results are among them, and they are by far the most reliable. Other guidelines are following the trends of the moment (also dictated by the need to publish and receive citations), following the opinions of authorities in the field, reading only what they read, etc. I personally consider misguided the idea of interpreting results and filtering what you read and research through the eyes of the authorities, no matter who they are. But it is understandable that they may seem the best we have, and that anyway the "mainstream" follows them, so if you want to fit in, you have to do the same.
What about the impossibility theorems, aren't they more objective than mere fashion trends dictated by authority figures? Of course they are. However, they apply to specific situations, specified in the hypotheses of the theorems. Moreover, they rely on a mathematical model of reality, and not on reality itself. While I think that the physical world is isomorphic to a mathematical model, this doesn't mean that it is isomorphic to the models we use.
I will give just a simple example. Remember the problem of the Seven Bridges of Königsberg. It was solved negatively by Euler in 1735, and led to graph theory and anticipated the idea of topology. The problem is to walk through the city by crossing each bridge once and only once. Here is a map, which is of course an idealization:
Euler reduced the problem to an even more idealized one. He denoted the shores and the islands by vertices, and the bridges by edges, and obtained probably the first graph in history:
Euler was then able to show immediately that there is no way to walk and cross each of the bridges once and only once (without jumping like Mario, swimming in the river, or being teleported!). The reason is that an even number of edges has to meet at each vertex which is not the start or the end of the trip. But there are no such vertices in the above graph, so all four have to be starting or ending vertices. Since at most two vertices can be used to start and end, the problem has a negative answer.
This illustrates the main point of this article. The problem has a negative answer, but this doesn't mean that in reality the answer is negative too. The mathematical model is an idealization, which forgets one thing: that the Pregel river has a spring, a source of origin. If we add the spring to the map, we obtain a different problem:
This problem has a simple solution, which is obtained by "going back to the origin":
This is "thinking outside the box", literally, because you have to go outside the original picture box. I came up with this solution years ago, when I was in school and read about Euler's solution. Of course, it doesn't contradict Euler's theorem, because the resulting graph is different than the one he considered, as we can see below:
Hence, Euler's theorem itself tells us how to solve the problem associated with this graph. The problem is solved by the very theorem which one considers to forbid the existence of a solution.
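Euler's counting argument is easy to check mechanically. Here is a small Python sketch added for illustration; the vertex labels N, S, I, E (the two banks, the Kneiphof island, and the eastern island) are my own convention:

```python
from collections import Counter

def odd_vertices(edges):
    """Vertices of odd degree in a multigraph given as a list of edges."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(v for v, d in deg.items() if d % 2 == 1)

def has_euler_path(edges):
    # A connected multigraph has an Euler path iff 0 or 2 vertices have odd degree.
    return len(odd_vertices(edges)) in (0, 2)

# Koenigsberg: banks N and S, island I (Kneiphof), eastern island E, seven bridges
bridges = [("N", "I"), ("N", "I"), ("S", "I"), ("S", "I"),
           ("N", "E"), ("S", "E"), ("I", "E")]
print(odd_vertices(bridges))                   # ['E', 'I', 'N', 'S']: all four are odd
print(has_euler_path(bridges))                 # False: Euler's negative answer
# Walking around the spring joins the two banks without crossing a bridge:
print(has_euler_path(bridges + [("N", "S")]))  # True: now only I and E are odd
```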
The main point of this simple example is that even in simple cases we don't actually know the true settings in which we apply the no-go theorems, or we ignore them to idealize the problem. We are applying the no-go theorems in the dark, so perhaps, rather than being guidelines, they are blocking our access to the real solutions of the real problems. While most researchers try to avoid being in contradiction with impossibility theorems, maybe it is good to reopen closed cases from time to time.
Optimal cut-off of obesity indices to predict cardiovascular disease risk factors and metabolic syndrome among adults in Northeast China
Jianxing Yu1,
Yuchun Tao1,
Yuhui Tao2,
Sen Yang1,
Yaqin Yu1,
Bo Li1 &
Lina Jin1
CVD risk factors (hypertension, dyslipidemia and diabetes) and MetS are closely related to obesity. The selection of an optimal cut-off for various obesity indices is particularly important to predict CVD risk factors and MetS.
Sixteen thousand seven hundred sixty-six participants aged 18–79 were recruited in Jilin Province in 2012. Five obesity indices, including BMI, WC, WHR, WHtR and BAI were investigated. ROC analyses were used to evaluate the predictive ability and determine the optimal cut-off values of the obesity indices for CVD risk factors and MetS.
BMI had the highest adjusted ORs, and the adjusted ORs for hypertension, dyslipidemia, diabetes and MetS were 1.19 (95 % CI, 1.17 to 1.20), 1.20 (95 % CI, 1.19 to 1.22), 1.12 (95 % CI, 1.10 to 1.13), and 1.40 (95 % CI, 1.38 to 1.41), respectively. However, BMI did not always have the largest adjusted AUROC. In general, the young age group (18 ~ 44) had higher ORs and AUROCs for CVD risk factors and MetS than those of the other age groups. In addition, the optimal cut-off values for WC and WHR in males were relatively higher than those in females, whereas the BAI in males was comparatively lower than that in females.
The appropriate obesity index, with the corresponding optimal cut-off values, should be selected in different research studies and populations. Generally, the obesity indices and their optimal cut-off values are: BMI (24 kg/m2), WC (male: 85 cm; female: 80 cm), WHR (male: 0.88; female: 0.85), WHtR (0.50), and BAI (male: 25 cm; female: 30 cm). Moreover, WC is superior to other obesity indices in predicting CVD risk factors and MetS in males, whereas, WHtR is superior to other obesity indices in predicting CVD risk factors and MetS in females.
With economic development and the improvement of living conditions, the prevalence of obesity is increasing dramatically in China [1, 2]. A number of studies have demonstrated that obesity is associated with hypertension, dyslipidemia, diabetes and MetS [3–5], and hypertension, dyslipidemia and diabetes are considered risk factors for CVD [6, 7].
To evaluate obesity, many indices have been proposed, including BMI, WC, WHR, WHtR and BAI. Generally, BMI is one of the most commonly used indices for obesity, which approximates body mass using a mathematical ratio of weight and height [8]. WC is the central diagnostic index of obesity and only considers abdominal obesity [9]. WHR and WHtR are indices for evaluating fat distribution using WC compared to HC or height [10, 11]. Finally, BAI is an index to measure the amount of body fat that uses HC compared to height [12]. Obviously, other indices may be used to measure obesity, but we do not consider all of them here.
Some studies indicated that WC or WHtR might be better predictors for CVD risk factors or MetS in Korean/Chinese populations [9, 13], whereas Mbanya et al. noted that WC was the best predictor in Cameroonians [14]. Moreover, Bergman et al. found that BAI was a better predictor for African-Americans and Mexican-Americans [12]. However, Lam et al. proposed that BAI is not likely to be better than BMI and does not apply to Asians [11]. Therefore, selecting the proper obesity index for a specific research question and study population is a challenge.
In our study, the predictive ability and the optimal cut-off values of five obesity indices (BMI, WC, WHR, WHtR and BAI) for CVD risk factors and MetS are comprehensively investigated. Data from 16,766 participants aged 18–79 in Jilin Province were used to evaluate the obesity indices. Jilin is in central northeast China and has an annual average temperature of 4.8 °C (latitude 40° ~ 46°, longitude 121° ~ 131°) [15]. Therefore, the results can be instructive and meaningful for studies related to obesity in northeast China. WC and WHtR are superior to the other obesity indices in predicting CVD risk factors and MetS in our study, with optimal cut-off values of 85 (male)/80 (female) for WC and 0.5 for WHtR.
A large-scale cross-sectional survey was implemented in Jilin Province in 2012. A total of 16,766 participants who had lived in Jilin Province for more than 6 months and were 18–79 years old were selected through multistage stratified random cluster sampling (see details in Part 1 of the Additional file 1).
Height, weight, WC and HC were measured according to a standardized protocol and techniques, with the participants wearing light clothing but no shoes. Blood pressure was measured by trained professionals using a mercury sphygmomanometer. After an overnight fast, FBG and serum lipids were measured before breakfast using a Bai Ankang fingertip blood glucose monitor (Bayer, Leverkusen, Germany) and a MODULE P800 biochemical analysis machine (Roche Co., Ltd., Shanghai, China), respectively (see details in Part 2 of the Additional file 1).
The various obesity indices were calculated as follows:
$$ \mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height}^{2}\ (\mathrm{m})}, \quad \mathrm{WHR} = \frac{\mathrm{WC}\ (\mathrm{cm})}{\mathrm{HC}\ (\mathrm{cm})}, \quad \mathrm{WHtR} = \frac{\mathrm{WC}\ (\mathrm{cm})}{\text{height}\ (\mathrm{cm})}, \quad \mathrm{BAI} = \frac{\mathrm{HC}\ (\mathrm{cm})}{\text{height}^{1.5}\ (\mathrm{m})} - 18 $$
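For illustration, the indices can be computed directly from the anthropometric measurements. The following minimal Python sketch (the function and variable names are ours, not part of the survey protocol) encodes the formulas above:

```python
def obesity_indices(weight_kg, height_m, wc_cm, hc_cm):
    """Compute the five obesity indices considered in this study."""
    return {
        "BMI":  weight_kg / height_m ** 2,      # kg/m^2
        "WC":   wc_cm,                          # cm
        "WHR":  wc_cm / hc_cm,
        "WHtR": wc_cm / (height_m * 100.0),
        "BAI":  hc_cm / height_m ** 1.5 - 18.0,
    }

# Illustrative measurements only (not study data):
print(obesity_indices(weight_kg=75.0, height_m=1.70, wc_cm=88.0, hc_cm=98.0))
```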
CVD risk factors refer to hypertension, dyslipidemia and diabetes in our study. Hypertension was defined as resting SBP ≥140 mmHg and/or DBP ≥ 90 mmHg and/or by the use of antihypertensive medication in the past two weeks [16]. Dyslipidemia was defined as use of lipid-lowering drugs or having one or more of the following: TG ≥ 1.7 mmol/L, TC ≥ 5.2 mmol/L, HDL-C < 1.0 mmol/L and LDL-C ≥ 3.4 mmol/L [17]. Diabetes was defined as the use of hypoglycemic agents or a self-reported history of diabetes or FBG of 7.0 mmol/L or more [18]. MetS [19, 20] was defined as three or more of the following conditions clustered in one subject: a) WC ≥ 85 cm for males or ≥ 80 cm for females; b) TG ≥ 1.7 mmol/L or ongoing hypertriglyceridemia treatment; c) HDL-C < 1.00 mmol/L for males or < 1.30 mmol/L for females, or ongoing treatment; d) SBP ≥ 130 mmHg and DBP ≥ 85 mmHg, or ongoing antihypertensive drug therapy; and e) FBG ≥ 5.6 mmol/L or ongoing anti-diabetic drug treatment.
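The MetS definition can likewise be encoded as a simple rule. The following Python sketch (the parameter names and the example values are illustrative only, not study data) checks the five criteria:

```python
def has_mets(sex, wc, tg, hdl, sbp, dbp, fbg,
             on_tg_rx=False, on_hdl_rx=False, on_bp_rx=False, on_glu_rx=False):
    """Return True if three or more of the five MetS criteria are met."""
    criteria = [
        wc >= (85 if sex == "male" else 80),                    # a) central obesity
        tg >= 1.7 or on_tg_rx,                                  # b) triglycerides
        hdl < (1.00 if sex == "male" else 1.30) or on_hdl_rx,   # c) HDL-C
        (sbp >= 130 and dbp >= 85) or on_bp_rx,                 # d) blood pressure
        fbg >= 5.6 or on_glu_rx,                                # e) fasting glucose
    ]
    return sum(criteria) >= 3

# Illustrative values only:
print(has_mets("female", wc=84, tg=1.9, hdl=1.1, sbp=132, dbp=86, fbg=5.2))  # True
```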
The continuous variables were expressed as means ± standard deviations (SD) and compared using the t test. The categorical variables were expressed as counts or percentages and compared using the Rao-Scott χ2 test. ROC analyses were used to compare the predictive ability and determine the optimal cut-off values of the various obesity indices for CVD risk factors and MetS [21]. The value that led to the maximum Youden index (SEN + SPE − 1) [22] was taken as the optimal cut-off value, and the AUROC was the index of the predictive ability. Logistic regression models were used to calculate the ORs and to evaluate the obesity indices. All statistical analyses were performed using IBM SPSS 20.0 (SPSS Inc., New York, NY, USA). Statistical significance was set at a P value < 0.05.
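As a sketch of the cut-off selection procedure, the following Python fragment (run on synthetic data, not the survey data; scikit-learn is assumed to be available) finds the operating point that maximizes the Youden index along a ROC curve:

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

def optimal_cutoff(y_true, index_values):
    """Optimal operating point of an index: maximize the Youden index (SEN + SPE - 1)."""
    fpr, tpr, thresholds = roc_curve(y_true, index_values)
    youden = tpr - fpr               # SEN + SPE - 1 = TPR - FPR
    best = int(np.argmax(youden))
    return thresholds[best], auc(fpr, tpr)

# Synthetic example: index values for unaffected and affected groups
rng = np.random.default_rng(0)
bmi = np.concatenate([rng.normal(23, 3, 500), rng.normal(27, 3, 300)])
labels = np.concatenate([np.zeros(500, int), np.ones(300, int)])
cutoff, auroc = optimal_cutoff(labels, bmi)
print(f"optimal cut-off ~ {cutoff:.1f}, AUROC = {auroc:.2f}")
```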
The characteristics of the participants are shown in Table 1. Females were older and had higher TC, LDL-C and HDL-C than males (P < 0.05), but the other anthropometric indices were significantly higher in males than in females (P < 0.01). The prevalence of hypertension, dyslipidemia, diabetes, and MetS differed significantly by gender and was higher in males than in females (P < 0.05).
Table 1 Descriptive characteristics of the participants by gender
For an overview of each obesity index, Table 2 presents the adjusted ORs and AUROCs (adjusted for gender and age). In general, BMI had the highest adjusted ORs for CVD risk factors and MetS, but it did not always have the largest adjusted AUROC. BMI, WC and WHtR had the optimal adjusted AUROC for hypertension, whereas WC, WHR and BMI had the largest adjusted AUROC for dyslipidemia, diabetes and MetS, respectively. Moreover, BAI did not have a better adjusted OR or AUROC for any CVD risk factor or MetS in our study.
Table 2 Adjusted ORs and adjusted AUROC for obesity indices in relation to CVD risk factors and MetS
Then, the detailed performance of the 5 obesity indices associated with CVD risk factors and MetS was investigated. For females (Table 3), the ORs and AUROCs of the obesity indices for CVD risk factors and MetS were the largest in the 18 ~ 44 age group, followed by the 45 ~ 64 group. Thus, obesity in the younger age groups was associated with a higher risk of CVD risk factors and MetS (higher ORs) and had better predictive ability for them as well (larger AUROC). Further, the AUROC for males had a similar tendency and characteristics to that of females (see Additional file 1: Table S3).
Table 3 ORs and AUROCs for the obesity indices in relation to CVD risk factors and MetS in females by age group
The detailed optimal operating points (OOPs) for BMI, WC, WHR, WHtR and BAI to predict CVD risk factors and MetS are given in Table 4, in which the OOP is the cut-off value that leads to the maximum Youden index (SEN + SPE − 1) [22]. Obviously, the OOPs for different risk factors were different, so we chose a single accessible value (close to the mean of the OOPs) as the optimal cut-off value for each index. For example, the OOPs of BMI for CVD risk factors and MetS ranged from 23.24 to 24.48, so we chose 24 as the optimal cut-off value for BMI, whereas the OOPs of WC ranged from 84.13 to 85.74 for males and 79.32 to 81.58 for females, so we chose 85 and 80 as the optimal WC cut-off values. Similarly, the optimal cut-off value for WHR was 0.88 and 0.85, for WHtR 0.50, and for BAI 25 and 30, respectively. In addition, the optimal cut-off values of BMI and WHtR were the same in both genders, whereas the optimal cut-off values of WC and WHR in males were relatively higher than those in females, but the opposite occurred for BAI. Generally, most of the optimal index cut-off values were the same as or similar to those in other studies in the literature [10, 11, 13, 23].
Table 4 Optimal operating points of the obesity indices for predicting CVD risk factors and MetS
Finally, we investigated the adjusted ORs and AUROC of each obesity index for CVD risk factors and MetS (Table 5) using the optimal cut-off values determined above. In general, WC and WHtR had higher adjusted ORs and AUROCs for CVD risk factors and MetS, regardless of the small difference between genders. WC was superior to the other obesity indices in predicting CVD risk factors and MetS in males, but WHtR was superior in females. Abnormal WC or WHtR was associated with a higher risk of CVD risk factors and MetS, and WC and WHtR were superior to the other indices in predicting them.
Table 5 Adjusted ORs and AUROCs of the obesity indices associated with CVD risk factors and MetS
The prevalence of hypertension, dyslipidemia, diabetes and MetS in our study was 37.27 %, 39.76 %, 10.07 % and 33.1 %, respectively, much higher than in other studies [17]. Obesity is believed to be associated with CVD risk factors and MetS [3], and various obesity indices have been used in the literature [24, 25] to describe obesity. Unfortunately, no obesity index is consistently superior in predicting CVD risk factors and MetS, and the selection of an obesity index depends on the study population and other factors [11]. Thus, in this study, we investigated the proper obesity index and optimal cut-off values to predict CVD risk factors and MetS for a population in northeast China.
In this study, obesity in younger age groups carried a higher risk and had better predictive ability for CVD risk factors and MetS than in older groups, implying that obesity might have more influence on young people. One possible reason is that young people take part in fewer outdoor activities and have worse eating habits than older people. Another possible reason is that other factors might have larger effects on CVD risk factors and MetS than obesity among older people. This suggests that the younger the participant, the more effective obesity control is.
We investigated the performance of five obesity indices (BMI, WC, WHR, WHtR and BAI) for CVD risk factors and MetS in northeast China. A series of optimal cut-off values for each obesity index was determined in our study, which could be instructive for similar studies and populations. In summary, BMI, WC and WHtR had the same optimal cut-offs as other studies in China [13, 23], while the optimal cut-off value of WHR was a little higher [13], and that of BAI a little lower, than in previous studies [12]. A probable reason might be the body characteristics of Asians (especially Asian women), who have smaller HC than Americans [26]. The higher tolerance of WHR for CVD risk factors and MetS might be due to the fat accumulation of people in northeast China under long-lasting cold weather.
Further, WC and WHtR were superior to other obesity indices in our study, which is consistent with other studies [27–32]. Moreover, the global cut-off value of WHtR was 0.5, which implies that this criterion might apply to people in northeast China [10]. Meanwhile, a number of meta-analyses of CVD risk factor outcomes suggested that 0.5 (WHtR) could be appropriate for different genders and age groups [24, 33]. Moreover, the WGOC (Working Group on Obesity in China) developed a cut-off value for central obesity (85.0 cm for males and 80.0 cm for females) using WC and for overweight status (24 kg/m2) using BMI for the general Chinese population [34], which coincide with those in our study. In addition, other studies in Asian countries reported cut-off values of WC for males and females of approximately 80–85 and 75–80, respectively [35, 36], similar to those in our study.
Here, we indicate the limitations of our study. First, the definition of MetS overlaps with that of WC, so the AUROC and adjusted ORs for MetS might be overestimated. Despite this, the optimal WC cut-off value was consistent with the definition of MetS, which can be viewed as evidence of the rationality of our study. Second, gender and age were adjusted for in our study; however, other confounders that might have an impact on CVD risk factors and MetS, such as physical activity, smoking, etc., were not considered here, which might slightly affect our results.
Finally, we investigated the adjusted ORs of each index, based on the proposed optimal cut-off values. Generally, WC and WHtR were superior to other indices (larger AUROC), and the people with abnormal WC or WHtR were at higher risk (higher ORs) for CVD risk factors and MetS. Obviously, both indices could measure central obesity to some extent. Thus, it might be implied that the distribution of fat was more important than the amount of fat in predicting the risk for CVD risk factors and MetS.
The proper obesity index should be selected in different research studies and populations, with the corresponding optimal cut-off values. Generally, the obesity indices considered in our study and their optimal cut-off values are: BMI (24 kg/m2), WC (male: 85 cm; female: 80 cm), WHR (male: 0.88; female: 0.85), WHtR (0.50), and BAI (male: 25 cm; female: 30 cm). Moreover, WC is superior to other obesity indices in predicting CVD risk factors and MetS in males, but WHtR is superior to other obesity indices in predicting CVD risk factors and MetS in females.
BAI:
Body adiposity index
WHR:
Waist-to-hip ratio
WHtR:
Waist-to-height ratio
DBP:
Diastolic blood pressure
FBG:
Fasting blood glucose
HDL-C:
High-density lipoprotein cholesterol
LDL-C:
Low-density lipoprotein cholesterol
SBP:
Systolic blood pressure
TC:
Total cholesterol
TG:
Triglyceride
AUROC:
Area under ROC
CVD:
Cardiovascular disease
HC:
Hip circumference
MetS:
Metabolic syndrome
ROC:
Receiver operating characteristic
SEN:
Sensitivity
SPE:
Specificity
Wang Z, Hao G, Wang X, Chen Z, Zhang L, Guo M, Tian Y, Shao L, Zhu M. Current prevalence rates of overweight, obesity, central obesity, and related cardiovascular risk factors that clustered among middle-aged population of China. Zhonghua Liu Xing Bing Xue Za Zhi. 2014;35(4):354–8.
Andegiorgish AK, Wang J, Zhang X, Liu X, Zhu H. Prevalence of overweight, obesity, and associated risk factors among school children and adolescents in Tianjin, China. Eur J Pediatr. 2012;171(4):697–703.
Dankel SJ, Loenneke JP, Loprinzi PD. The impact of overweight/obesity duration on the association between physical activity and cardiovascular disease risk: an application of the "fat but fit" paradigm. Int J Cardiol. 2015;201:88–9.
Roberts VHJ, Frias AE, Grove KL. Impact of Maternal Obesity on Fetal Programming of Cardiovascular Disease. Physiology. 2015;30(3):224–31.
Lee SY, Chang HJ, Sung J, Kim KJ, Shin S, Cho IJ, Shim CY, Hong GR, Chung N. The Impact of Obesity on Subclinical Coronary Atherosclerosis According to the Risk of Cardiovascular Disease. Obesity. 2014;22(7):1762–8.
Yu DH, Huang JF, Hu DS, Chen JC, Cao J, Li JX, Gu DF. Association Between Prehypertension and Clustering of Cardiovascular Disease Risk Factors Among Chinese Adults. J Cardiovasc Pharm. 2009;53(5):388–400.
Murakami Y, Okamura T, Nakamura K, Miura K, Ueshima H. The clustering of cardiovascular disease risk factors and their impacts on annual medical expenditure in Japan: community-based cost analysis using Gamma regression models. BMJ Open. 2013;3(3). doi:10.1136/bmjopen-2012-002234
Bennasar-Veny M, Lopez-Gonzalez AA, Tauler P, Cespedes ML, Vicente-Herrero T, Yanez A, Tomas-Salva M, Aguilo A. Body Adiposity Index and Cardiovascular Health Risk Factors in Caucasians: A Comparison with the Body Mass Index and Others. Plos One. 2013;8(5):e63999.
Park SH, Choi SJ, Lee KS, Park HY. Waist Circumference and Waist-to-Height Ratio as Predictors of Cardiovascular Disease Risk in Korean Adults. Circ J. 2009;73(9):1643–50.
Browning LM, Hsieh SD, Ashwell M. A systematic review of waist-to-height ratio as a screening tool for the prediction of cardiovascular disease and diabetes: 0.5 could be a suitable global boundary value. Nutr Res Rev. 2010;23(2):247–69.
Lam BCC, Koh GCH, Chen C, Wong MTK, Fallows SJ. Comparison of Body Mass Index (BMI), Body Adiposity Index (BAI), Waist Circumference (WC), Waist-To-Hip Ratio (WHR) and Waist-To-Height Ratio (WHtR) as Predictors of Cardiovascular Disease Risk Factors in an Adult Population in Singapore. Plos One. 2015;10(4):e0122985.
Bergman RN, Stefanovski D, Buchanan TA, Sumner AE, Reynolds JC, Sebring NG, Xiang AH, Watanabe RM. A Better Index of Body Adiposity. Obesity. 2011;19(5):1083–9.
Zeng Q, He Y, Dong SY, Zhao XL, Chen ZH, Song ZY, Chang G, Yang F, Wang YJ. Optimal cut-off values of BMI, waist circumference and waist: height ratio for defining obesity in Chinese adults. Brit J Nutr. 2014;112(10):1735–44.
Mbanya VN, Kengne AP, Mbanya JC, Akhtar H. Body mass index, waist circumference, hip circumference, waist-hip-ratio and waist-height-ratio: Which is the better discriminator of prevalent screen-detected diabetes in a Cameroonian population? Diabetes Res Clin Pr. 2015;108(1):23–30.
Gao B, Xu QT, Li YB. Dynamic Change and Analysis of Driving Factors of Carbon Emissions from Traffic and Transportation Energy Consumption in Jilin Province. Appl Mech Mater. 2014;472:851–5.
Yip GWK, Li AM, So HK, Choi KC, Leung LCK, Fong NC, Lee KW, Li SPS, Wong SN, Sung RYT. Oscillometric 24-h ambulatory blood pressure reference values in Hong Kong Chinese children and adolescents. J Hypertens. 2014;32(3):606–19.
Gu DF, Gupta A, Muntner P, Hu SS, Duan XF, Chen JC, Reynolds RF, Whelton PK, He J. Prevalence of cardiovascular disease risk factor clustering among the adult population of china - Results from the International Collaborative Study of Cardiovascular Disease in Asia (InterAsia). Circulation. 2005;112(5):658–65.
Gao BX, Zhang LX, Wang HY, et al. Clustering of Major Cardiovascular Risk Factors and the Association with Unhealthy Lifestyles in the Chinese Adult Population. Plos One. 2013;8(6):e66780.
Wu YH, Yu Q, Wang SB, Shi JP, Xu ZQ, Zhang QQ, Fu YL, Qi Y, Liu JW, Fu R, et al. Zinc Finger Protein 259 (ZNF259) Polymorphisms are Associated with the Risk of Metabolic Syndrome in a Han Chinese Population. Clin Lab. 2015;61(5–6):615–21.
Alberti KG, Eckel RH, Grundy SM, Zimmet PZ, Cleeman JI, Donato KA, Fruchart JC, James WP, Loria CM, Smith Jr SC. Harmonizing the metabolic syndrome: a joint interim statement of the International Diabetes Federation Task Force on Epidemiology and Prevention; National Heart, Lung, and Blood Institute; American Heart Association; World Heart Federation; International Atherosclerosis Society; and International Association for the Study of Obesity. Circulation. 2009;120(16):1640–5.
Dong XL, Liu Y, Yang J, Sun Y, Chen L. Efficiency of anthropometric indicators of obesity for identifying cardiovascular risk factors in a Chinese population. Postgrad Med J. 2011;87(1026):251–6.
Chen FY, Xue YQ, Tan MT, Chen PY. Efficient statistical tests to compare Youden index: accounting for contingency correlation. Stat Med. 2015;34(9):1560–76.
Cai L, Liu AP, Zhang YM, Wang PY. Waist-to-Height Ratio and Cardiovascular Risk Factors among Chinese Adults in Beijing. Plos One. 2013;8(7):e69298.
Ashwell M, Gunn P, Gibson S. Waist-to-height ratio is a better screening tool than waist circumference and BMI for adult cardiometabolic risk factors: systematic review and meta-analysis. Obes Rev. 2012;13(3):275–86.
Hsieh SD, Muto T. The superiority of waist-to-height ratio as an anthropometric index to evaluate clustering of coronary risk factors among non-obese men and women. Prev Med. 2005;40(2):216–20.
Li CY, Ford ES, Zhao GX, Kahn HS, Mokdad AH. Waist-to-thigh ratio and diabetes among US adults: The Third National Health and Nutrition Examination Survey. Diabetes Res Clin Pr. 2010;89(1):79–87.
Hsieh SD, Yoshinaga H, Muto T. Waist-to-height ratio, a simple and practical index for assessing central fat distribution and metabolic risk in Japanese men and women. Int J Obesity. 2003;27(5):610–6.
Ashwell M, Gibson S. Waist to Height Ratio Is a Simple and Effective Obesity Screening Tool for Cardiovascular Risk Factors: Analysis of Data from the British National Diet and Nutrition Survey of Adults Aged 19–64 Years. Obes Facts. 2009;2(2):97–103.
Tseng CH, Chong CK, Chan TT, Bai CH, You SL, Chiou HY, Su TC, Chen CJ. Optimal anthropometric factor cutoffs for hyperglycemia, hypertension and dyslipidemia for the Taiwanese population. Atherosclerosis. 2010;210(2):585–9.
Ho SY, Lam TH, Janus ED; Hong Kong Cardiovascular Risk Factor Prevalence Study Steering Committee. Waist to stature ratio is more strongly associated with cardiovascular risk factors than other simple anthropometric indices. Ann Epidemiol. 2003;13(10):683–91.
Haun DR, Pitanga FJG, Lessa I. Waist-Height Ratio Compared to Other Indicators of Obesity as Predictors of High Coronary Risk. Rev Assoc Med Bras. 2009;55(6):705–11.
Hadaegh F, Zabetian A, Harati H, Azizi F. Waist/height ratio as a better predictor of type 2 diabetes compared to body mass index in tehranian adult men - A 3.6-year prospective study. Exp Clin Endocr Diab. 2006;114(6):310–5.
Lee CMY, Huxley RR, Wildman RP, Woodward M. Indices of abdominal obesity are better discriminators of cardiovascular risk factors than BMI: a meta-analysis. J Clin Epidemiol. 2008;61(7):646–53.
Zhou BF. Predictive values of body mass index and waist circumference for risk factors of certain related diseases in Chinese adults--study on optimal cut-off points of body mass index and waist circumference in Chinese adults. Biomed Environ Sci. 2002;15(1):83–96.
Pua YH, Ong PH. Anthropometric indices as screening tools for cardiovascular risk factors in Singaporean women. Asia Pac J Clin Nutr. 2005;14(1):74–9.
Ito H, Nakasuga K, Ohshima A, Maruyama T, Kaji Y, Harada M, Fukunaga M, Jingu S, Sakamoto M. Detection of cardiovascular risk factors by indices of obesity obtained from anthropometry and dual-energy X-ray absorptiometry in Japanese individuals. Int J Obesity. 2003;27(2):232–7.
The study was funded by the National Natural Science Foundation of China (grant number: 11301213, 11571068) and the Scientific Research Foundation of the Health Bureau of Jilin Province, China (grant number: 2011Z116).
The survey was implemented by the School of Public Health, Jilin University and the Jilin Center for Disease Control and Prevention in Jilin Province in 2012. According to relevant regulations, the data cannot be shared.
JY and LJ made substantial contributions to the conception and design of this study. JY and YT drafted the manuscript. BL and YY revised the manuscript. YT and SY contributed to data acquisition and performed the statistical analysis. All authors read and approved the final manuscript.
The ethics committee of the School of Public Health, Jilin University approved the study, and written informed consent was obtained from all of the participants before data collection.
Epidemiology and Biostatistics, School of Public Health, Jilin University, NO. 1163 Xinmin Street, Changchun, 130021, Jilin, China
Jianxing Yu, Yuchun Tao, Sen Yang, Yaqin Yu, Bo Li & Lina Jin
Department of Immunization Program, Changchun Center for Disease Control and Prevention, Changchun, 130021, Jilin, China
Yuhui Tao
Correspondence to Lina Jin.
The supplementary material of the article. (DOCX 20 kb)
Yu, J., Tao, Y., Tao, Y. et al. Optimal cut-off of obesity indices to predict cardiovascular disease risk factors and metabolic syndrome among adults in Northeast China. BMC Public Health 16, 1079 (2016). https://doi.org/10.1186/s12889-016-3694-5
Optimal cut-off
Obesity indices
A reconstruction approach in wavelet domain for fluorescent molecular tomography via rotated sources illumination
Wei Zou1,2,3,
Jiajun Wang1,2,3,
Danfeng Hu1 &
Wenxia Wang1
Fluorescent molecular tomography (FMT) aims at reconstructing the spatial map of optical and fluorescence parameters from fluence measurements. Solving the associated large-scale matrix equations is computationally expensive for image reconstruction in FMT. Although the reconstruction quality can be improved with more sources, this may result in higher computational costs for reconstruction. This article presents a novel method in the wavelet domain with rotated sources illumination.
We use the finite element method for the computation of the forward model. The global inverse problem is solved based on wavelets in conjunction with principal component analysis. The iterative reconstruction is implemented with the sources rotated by a certain angle. The original excitation light sources are used to reconstruct the image in the first iteration. Then, after the sources are rotated by a certain angle, they are employed for the next iteration of reconstruction.
Simulation results demonstrate that our method can considerably reduce the time taken for the computation of the inverse problem in FMT. Furthermore, the proposed approach is also shown to largely outperform the traditional method in terms of the precision of the inverse solutions.
Our method has the capability to locate the inclusions. The proposed method can significantly speed up the reconstruction process with the high reconstruction quality.
Over the past decade, near-infrared (NIR) biomedical optical imaging has been a rapidly evolving field. It has potential in a wide range of medical applications. The ongoing development in this area is led by the cooperation of physicians, engineers, physicists, etc. [1, 2]. Among optical molecular imaging methods, fluorescent molecular tomography (FMT) is a promising tool, which is expected to have a substantial impact on the prevention and treatment of cancer and of other lethal diseases [3]. This emerging imaging modality can offer an opportunity for noninvasive visualization of biological processes at the molecular or genetic level, targeting the detection of abnormalities at the molecular stage [4, 5]. FMT depends on the perturbation of electron densities of molecules through the absorption of light at the fluorophore's excitation wavelength. Upon radiative relaxation, fluorescent light is emitted and the fluorophore returns to its ground state with some characteristic time constant. The fluorescent photons are measured by detectors widely spaced over the surface of the object. From these data, one can detect and map the accumulation of indocyanine green in tissue. Compared to other tomography methods, FMT offers several distinct advantages in terms of sensitivity to functional changes, safety, and cost [6]. For model-based iterative image reconstruction, the light propagation model is utilized as a predictor of measurements. Typically, the model is described by coupled partial differential equations [7]. Besides the forward model, an inversion technique is also needed for image reconstruction [8]. These techniques take into account the diffuse nature of photon propagation to recover the spatial distribution of fluorochromes in tissues.
Considering the fact that the fluorophore is excited by the excitation light from the source, the source may be an important factor in the reconstructed results. Intuitively, more sources can yield improved reconstruction results. On the other hand, however, they may lead to a larger-scale matrix system and hence higher computational costs for reconstruction [9]. A model-order reduction method was proposed in [10] to reduce the computational complexity of the system matrix calculation. However, the transformation matrix needs to be constructed from basis vectors, which imposes relatively high computational requirements. In [11], an efficient algorithm was proposed to locate and characterize the object, where a B-spline model and appropriate parameterization were utilized to reduce the number of unknowns. However, this method addresses the problem with only one object.
To accelerate the inverse problem of FMT, some compression approaches have been proposed [e.g., the wavelet transform, principal component analysis (PCA), etc.]. The most important feature of the wavelet transform lies in the fact that most of the information in a signal is contained in a small number of entries, with the other entries being very small and therefore negligible. PCA is one of the most widely used feature extraction methods, which aims to obtain the most compact representation of high-dimensional data. Some related research has been conducted on inverse reconstruction. Ducros et al. applied compression techniques to the measurements acquired with structured illuminations [12]. This method is based on the exploitation of the wavelet transform of the measurements acquired after wavelet-patterned illuminations. Correia et al. introduced a method with wavelet-based data and solution compression to improve the efficiency of image reconstruction for fluorescence diffuse optical tomography [13]. This approach preserves the resolution of the forward operator and compresses its representation. In [14], Zhang et al. proposed to use PCA to reduce the dimension of the sub weight matrix, and thus to accelerate the reconstruction process of dynamic FMT. Cao et al. solved the inverse problem based on reducing the dimension of the weight matrix with PCA [15]. Furthermore, some other fast reconstruction techniques have been investigated, including sparsity regularization based on the iterated shrinkage method [16], an acceleration strategy using the graphics processing unit [17, 18], and a sparsity adaptive subspace pursuit method [5]. In addition, a reconstruction method using a permissible region extraction strategy was proposed in [19].
Considering the compression characteristics of the wavelet transform and PCA, to further speed up the reconstruction process of FMT as well as improve the precision of the inverse solutions, a new method using wavelet-based PCA is proposed in this paper. In our method, the original excitation light sources and those rotated by a certain angle are used in turn for the iterations of image reconstruction. Simulation results demonstrate that the proposed method can significantly speed up the reconstruction process and achieve high accuracy of inverse solutions.
Diffusion model
As it has been stated earlier, the forward model is used to predict the observable states at the measurement locations from knowledge of the excitation light source and spatial distribution of optical and fluorescent properties. The propagation of photons through a highly scattering medium with low absorption can be well described by the diffusion equation [20]. We employ the widely-used diffusion equation as a forward model that is appropriate for a variety of optical tomography schemes of tissues. Herein, the excitation field \( \Phi_{x} \left( {{\mathbf{r}},\omega } \right) \) and the emission field \( \Phi_{m} \left( {{\mathbf{r}},\omega } \right) \) are modelled with a pair of coupled diffusion equations as follows
$$ - \nabla \cdot [D_{x} \left( {\mathbf{r}} \right)\nabla \Phi_{x} \left( {{\mathbf{r}},\omega } \right)] + k_{x} \left( {{\mathbf{r}},\omega } \right)\Phi_{x} \left( {{\mathbf{r}},\omega } \right) = S_{x} \left( {{\mathbf{r}},\omega } \right) $$
$$ - \nabla \cdot [D_{m} \left( {\mathbf{r}} \right)\nabla \Phi_{m} \left( {{\mathbf{r}},\omega } \right)] + k_{m} \left( {{\mathbf{r}},\omega } \right)\Phi_{m} \left( {{\mathbf{r}},\omega } \right) = \alpha \left( {{\mathbf{r}},\omega } \right)\Phi_{x} \left( {{\mathbf{r}},\omega } \right) $$
where the first equation depicts the transport of the excitation photons and the second one describes the excitation and transport of the fluorescent photons; \( \nabla \) is the grad operator, \( S_{x} \left( {{\mathbf{r}},\omega } \right) \) is the source term for the excitation light; \( D_{x,m} \left( {\mathbf{r}} \right) \) and \( k_{x,m} \left( {{\mathbf{r}},\omega } \right) \) denote the diffusion and decay coefficients at the excitation and emission wavelengths, respectively; \( \alpha \) is the emission source coefficient. They are defined by:
$$ D_{x,m} \left( {\mathbf{r}} \right) = \frac{1}{{3[\mu_{ax,mi} \left( {\mathbf{r}} \right) + \mu_{ax,mf} \left( {\mathbf{r}} \right) + \mu^{\prime}_{sx,m} \left( {\mathbf{r}} \right)]}} $$
$$ k_{x,m} \left( {{\mathbf{r}},\omega } \right) = \frac{i\omega }{c} + \mu_{ax,mi} \left( {\mathbf{r}} \right) + \mu_{ax,mf} \left( {\mathbf{r}} \right) $$
$$ \alpha \left( {{\mathbf{r}},\omega } \right) = \frac{{\eta \mu_{axf} \left( {\mathbf{r}} \right)}}{{1 - i\omega \tau \left( {\mathbf{r}} \right)}} $$
where \( \mu_{ax,mi} \left( {\mathbf{r}} \right) \) represent the absorption coefficients due to non-fluorescing chromophore; \( \mu_{ax,mf} \left( {\mathbf{r}} \right) \) represent the absorption coefficients due to fluorophore; \( \mu^{\prime}_{sx,m} \left( {\mathbf{r}} \right) \) denote the isotropic scattering coefficients; fluorescence parameters \( \eta \) and \( \tau \left( {\mathbf{r}} \right) \) denote the fluorescence quantum efficiency and fluorescence lifetime, respectively; \( c \) is the speed of light in the media; \( i \) is the imaginary unit; \( \omega \) stands for the angular modulation frequency of the input signal.
Here, we make use of the popular Robin boundary conditions for a bounded domain \( \Omega \), which take the form
$$ \Phi_{x} \left( {{\mathbf{r}},\omega } \right) + 2A_{x} \left( {\mathbf{r}} \right)D_{x} \left( {\mathbf{r}} \right){\mathbf{n}}\left( {\mathbf{r}} \right) \cdot \nabla \Phi_{x} \left( {{\mathbf{r}},\omega } \right) = 0 $$
$$ \Phi_{m} \left( {{\mathbf{r}},\omega } \right) + 2A_{m} \left( {\mathbf{r}} \right)D_{m} \left( {\mathbf{r}} \right){\mathbf{n}}\left( {\mathbf{r}} \right) \cdot \nabla \Phi_{m} \left( {{\mathbf{r}},\omega } \right) = 0 $$
where \( {\mathbf{n}} \) is the outer normal to the boundary, and \( A_{x,m} \left( {\mathbf{r}} \right) \) is a parameter modelling internal reflection at the boundary.
Finite element approximation of the forward model
Like most others working in FMT, we are currently using the finite element method (FEM) for the computation of the forward model. FEM is versatile, especially with regard to complex geometries and for modelling boundary effects [21]. In principle, FEM can be applied to any partial differential equation model of the transport process. In the FEM framework, the computational domain is discretized into a mesh with P elements and N vertex nodes [22]. The solution \( \Phi_{x,m} \) is approximated by the piecewise function \( \Phi_{x,m} = \sum\nolimits_{i}^{N} {{\varvec{\Phi}}_{xi,mi} \varphi_{i} } \), with locally supported basis functions \( \varphi_{i} \) (\( i = 1,2, \ldots ,N \)).
Suppose \( V_{0}^{h} = span\left\{ {\varphi_{j} } \right\}_{j = 1}^{N} \) and thus \( v_{h} = \sum\nolimits_{k = 1}^{N} {c_{k} \varphi_{k} } \). Let \( u_{h} = \sum\nolimits_{j = 1}^{N} {\Phi_{j} \varphi_{j} } \). To yield the weak solutions of the forward equations, we rewrite Eqs. (1) and (2) in the following formulation:
$$ a_{{\Omega_{h} }} (u_{h} ,v_{h} )_{x,m} = (f_{x,m} ,v_{h} )_{{\Omega_{h} }} $$
$$ a_{{\Omega_{h} }} (u_{h} ,v_{h} )_{x,m} = \iint\limits_{{\Omega_{h} }} {[D_{x,m} (\nabla u_{h} \cdot \nabla v_{h} ) + k_{x,m} u_{h} v_{h} ]d\Omega } + \int\limits_{{\Gamma_{h} }} {b_{x,m} u_{h} v_{h} ds} $$
$$ (f_{x,m} ,v_{h} )_{{\Omega_{h} }} = \iint\limits_{{\Omega_{h} }} {f_{x,m} v_{h} d\Omega } $$
$$ f_{x} = S_{x} ,\quad f_{m} = \alpha \Phi_{x} $$
with the bounded domain \( \Omega_{h} \) and its boundary \( \Gamma_{h} \).
Equation (8) can be written in the matrix formulation
$$ {\mathbf{A}}_{x,m} {\varvec{\Phi}}_{x,m} = {\mathbf{S}}_{x,m} $$
$$ {\mathbf{S}}_{x,m} = \left[ {\begin{array}{*{20}c} {(f_{x,m} ,\varphi_{1} )_{{\Omega_{h} }} } \\ \vdots \\ {(f_{x,m} ,\varphi_{N} )_{{\Omega_{h} }} } \\ \end{array} } \right] $$
$$ {\mathbf{A}}_{x,m} = \left[ {\begin{array}{*{20}c} {a_{{\Omega_{h} }} (\varphi_{1} ,\varphi_{1} )_{x,m} } & \cdots & {a_{{\Omega_{h} }} (\varphi_{N} ,\varphi_{1} )_{x,m} } \\ \vdots & {} & \vdots \\ {a_{{\Omega_{h} }} (\varphi_{1} ,\varphi_{N} )_{x,m} } & \cdots & {a_{{\Omega_{h} }} (\varphi_{N} ,\varphi_{N} )_{x,m} } \\ \end{array} } \right]. $$
The matrices \( {\mathbf{A}}_{x,m} \) have elements
$$ a_{{\Omega_{h} }} (\varphi_{i} ,\varphi_{j} )_{x,m} = \iint\limits_{{\Omega_{h} }} {D_{x,m} \nabla \varphi_{i} \cdot \nabla \varphi_{j} d\Omega + \iint\limits_{{\Omega_{h} }} {k_{x,m} \varphi_{i} \varphi_{j} d\Omega }} + \int\limits_{{\Gamma_{h} }} {b_{x,m} \varphi_{i} \varphi_{j} ds} . $$
Combining Eqs. (12) and (15), the forward equations within the FEM scheme become
$$ \left( {{\mathbf{D}}_{x} + {\mathbf{K}}_{x} + {\mathbf{B}}_{x} } \right){\varvec{\Phi}}_{x} = {\mathbf{S}}_{x} $$
$$ \left( {{\mathbf{D}}_{m} + {\mathbf{K}}_{m} + {\mathbf{B}}_{m} } \right){\varvec{\Phi}}_{m} = {\mathbf{S}}_{m} $$
$$ D_{ij} = \iint\limits_{{\Omega_{h} }} {D_{x,m} \nabla \varphi_{i} \cdot \nabla \varphi_{j} d\Omega } $$
$$ K_{ij} = \iint\limits_{{\Omega_{h} }} {k_{x,m} \varphi_{i} \varphi_{j} d\Omega } $$
$$ B_{ij} = \int\limits_{{\Gamma_{h} }} {b_{x,m} \varphi_{i} \varphi_{j} ds} . $$
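For illustration, once the matrices in Eqs. (16)–(17) have been assembled, the forward model can be evaluated as in the following Python sketch (the function and variable names are ours; a FEM mass matrix M is assumed to map the nodal field \( \alpha \Phi_{x} \) to the emission source vector):

```python
import scipy.sparse.linalg as spla

def solve_forward(A_x, A_m, S_x, M, alpha):
    """Solve the coupled FEM systems (16)-(17).

    A_x, A_m -- assembled sparse matrices D + K + B at the excitation
                and emission wavelengths (complex-valued in general)
    S_x      -- excitation source vector
    M        -- FEM mass matrix, assumed here to map the nodal field
                alpha * Phi_x to the emission source vector S_m
    """
    phi_x = spla.spsolve(A_x.tocsc(), S_x)   # excitation field, Eq. (16)
    S_m = M @ (alpha * phi_x)                # emission source term
    phi_m = spla.spsolve(A_m.tocsc(), S_m)   # emission field, Eq. (17)
    return phi_x, phi_m
```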
Inverse problem
The inverse problem of FMT consists in estimating the optical parameters and fluorescent properties of the tissue by using the measured data as described earlier [23]. To generally pose the inverse problem, we first define the forward mapping as \( F \). Therefore, the inverse problem reads
$$ x = F^{ - 1} \left( y \right) $$
where \( y \) denotes boundary measurement, and \( x \) denotes optical or fluorescent properties.
The above non-linear problem can be linearized. To proceed, we expand \( F \) about \( x_{0} \) in a Taylor series. Neglecting the higher order terms, we arrive at the linear problem
$$ y - y_{0} = {\mathbf{J}}\left( {x - x_{0} } \right) $$
where \( {\mathbf{J}} \) is the Jacobian of the forward mapping.
Due to the fact that the inverse reconstruction problem is ill-posed and underdetermined, we introduce the Moore–Penrose inversion in conjunction with Tikhonov regularization, leading to the following formula:
$$ x - x_{0} = \left( {{\mathbf{J}}^{T} {\mathbf{J}} +\upxi{\mathbf{I}}} \right)^{ - 1} {\mathbf{J}}^{T} \left( {y - y_{0} } \right) $$
where \( {\mathbf{I}} \) represents the identity matrix, and \( \upxi \) acts as a regularization parameter.
Equation (23) can be written in a succinct matrix form by
$$ {\mathbf{K}}\Delta {\mathbf{x}} = {\mathbf{b}} $$
where we define \( {\mathbf{K}} = \left( {{\mathbf{J}}^{T} {\mathbf{J}} + \xi {\mathbf{I}}} \right) \) and \( {\mathbf{b}} = {\mathbf{J}}^{T} \Delta {\mathbf{y}} \).
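A minimal numerical sketch of this update step is given below (Python with NumPy; the toy Jacobian is illustrative only):

```python
import numpy as np

def tikhonov_update(J, dy, xi):
    """One linearized inversion step, Eq. (23): dx = (J^T J + xi I)^{-1} J^T dy."""
    K = J.T @ J + xi * np.eye(J.shape[1])   # the matrix K of Eq. (24)
    b = J.T @ dy                            # the right-hand side b
    return np.linalg.solve(K, b)

# Toy example with a random Jacobian (illustrative only)
J = np.random.default_rng(1).normal(size=(40, 100))
dy = J @ (0.01 * np.ones(100))
dx = tikhonov_update(J, dy, xi=1e-3)
print(dx.shape)   # (100,)
```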
Image reconstruction with the wavelet-based principal component analysis
We solve the inverse problem in the wavelet domain. To this aim, we take the wavelet transform on both sides of Eq. (24)
$$ {\hat{\mathbf{K}}}\Delta {\hat{\mathbf{x}}} = {\hat{\mathbf{b}}} $$
where \( {\hat{\mathbf{K}}} = {\mathbf{W}}_{{\mathbf{b}}} {\mathbf{KW}}_{{\mathbf{x}}}^{T} \), \( \Delta {\hat{\mathbf{x}}} = {\mathbf{W}}_{{\mathbf{x}}} \Delta {\mathbf{x}} \), \( {\hat{\mathbf{b}}} = {\mathbf{W}}_{{\mathbf{b}}} {\mathbf{b}} \). However, the level-by-level implementation scheme in the conventional wavelet-based reconstruction method [24] is not only computationally expensive but also causes information loss in the system matrix of the reconstruction problem [25], which inevitably deteriorates the final reconstruction quality. In order to circumvent that problem, we propose to solve the global inverse problem of Eq. (24) based on wavelets in conjunction with PCA instead of the level-by-level wavelet transform scheme. To this aim, we briefly present the PCA principles. It is well known that PCA performs dimensionality reduction by searching for a projection matrix built from a small number of eigenvectors corresponding to the largest eigenvalues. Assume that \( {\mathbf{L}} \) is the covariance matrix of the given matrix \( {\mathbf{K}} \), that is,
$$ {\mathbf{L}} = E\left\{ {\left[ {{\mathbf{K}} - E\left( {\mathbf{K}} \right)} \right]\left[ {{\mathbf{K}} - E\left( {\mathbf{K}} \right)} \right]^{T} } \right\} $$
\( {\mathbf{L}} \) can be diagonalized via
$$ {\mathbf{L}} = {\varvec{\Psi}}\Lambda {\varvec{\Psi}}^{T} $$
where \( \Lambda \) is a diagonal matrix consisting of the eigenvalues of \( {\mathbf{L}} \), and \( {\varvec{\Psi}} \) is the matrix of eigenvectors of \( {\mathbf{L}} \).
Thus the principal components of the matrix \( {\mathbf{K}} \) can be obtained by
$$ {\tilde{\mathbf{K}}} = {\mathbf{\Psi K}}. $$
Multiplying (24) from the left with \( {\varvec{\Psi}} \), one has
$$ {\tilde{\mathbf{K}}}\Delta {\mathbf{x}} = {\tilde{\mathbf{b}}} $$
where \( {\tilde{\mathbf{b}}} = {\mathbf{\Psi b}} \).
Keeping the first \( q \) largest principal components, we can obtain a new matrix equation with reduced scale, namely
$$ {\tilde{\mathbf{K}}}_{q} \Delta {\mathbf{x}} = {\tilde{\mathbf{b}}}_{q} . $$
Therefore, the global matrix system of Eq. (24) can be approximately solved with the reduced-scale matrix system obtained via PCA.
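The following Python sketch illustrates the PCA-based reduction (the centering convention for \( E\left( {\mathbf{K}} \right) \) and the least-squares solve of the reduced system are our assumptions for illustration):

```python
import numpy as np

def pca_reduced_solve(K, b, q):
    """Approximately solve K dx = b via the q leading principal components of K.

    Follows Eqs. (26)-(30): eigendecompose the covariance of K, project both
    sides with the leading eigenvectors, and solve the reduced system.
    """
    Kc = K - K.mean(axis=1, keepdims=True)   # center (one convention for E(K))
    L = Kc @ Kc.T                            # covariance-type matrix, Eq. (26)
    w, Psi = np.linalg.eigh(L)               # eigenpairs in ascending order
    P = Psi[:, np.argsort(w)[::-1][:q]].T    # q leading eigenvectors, transposed
    K_q, b_q = P @ K, P @ b                  # reduced system, Eq. (30)
    return np.linalg.lstsq(K_q, b_q, rcond=None)[0]

# Toy example (illustrative only)
rng = np.random.default_rng(0)
K = rng.normal(size=(200, 200)); K = K.T @ K + 1e-3 * np.eye(200)
b = K @ rng.normal(size=200)
dx = pca_reduced_solve(K, b, q=50)
```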
The inverse reconstruction with the wavelet-based PCA is summarized in "Algorithm 1".
Algorithm 1
Take wavelet transform with respect to K and b in Eq. (24) to achieve the approximation components \( {\hat{\mathbf{K}}}_{1} \) and \( {\hat{\mathbf{b}}}_{1} \);
Solve \( {\hat{\mathbf{K}}}_{1} \Delta {\hat{\mathbf{x}}}_{1} = {\hat{\mathbf{b}}}_{1} \) with PCA;
Prolongate \( \Delta {\hat{\mathbf{x}}}_{1} \) by padding zeros to achieve an initial guess for \( \Delta {\hat{\mathbf{x}}} \) at the original resolution, i.e., \( \Delta {\hat{\mathbf{x}}}^{\left( 0 \right)} = \left[ {\Delta {\hat{\mathbf{x}}}_{1}^{T} ,{\mathbf{0}}^{T} } \right]^{T} \);
Solve \( {\mathbf{K}}\Delta {\mathbf{x}} = {\mathbf{b}} \) with the initial guess \( \Delta {\mathbf{x}}^{\left( 0 \right)} = {\mathbf{W}}_{{\mathbf{x}}}^{ \, T} \Delta {\hat{\mathbf{x}}}^{\left( 0 \right)} \).
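A simplified Python sketch of Algorithm 1 is given below (using PyWavelets; for brevity, the coarse system in step 2 is solved by least squares where the algorithm applies PCA, and the full-resolution refinement uses LSQR as an example iterative solver):

```python
import numpy as np
import pywt
from scipy.sparse.linalg import lsqr

def wavelet_warmstart_solve(K, b, wavelet="haar"):
    """Sketch of Algorithm 1: coarse solve in the wavelet domain, then refine."""
    K1 = pywt.dwt2(K, wavelet)[0]                  # approximation block of K
    b1 = pywt.dwt(b, wavelet)[0]                   # approximation part of b
    dx1 = np.linalg.lstsq(K1, b1, rcond=None)[0]   # coarse solution
    dx0 = pywt.idwt(dx1, None, wavelet)            # prolongate: zero detail coeffs
    dx0 = dx0[: K.shape[1]]                        # trim any wavelet padding
    # Refine at full resolution, warm-started from dx0:
    return dx0 + lsqr(K, b - K @ dx0)[0]

# Toy example (illustrative only)
rng = np.random.default_rng(0)
K = rng.normal(size=(128, 128)); K = K.T @ K + 1e-2 * np.eye(128)
b = K @ rng.normal(size=128)
print(np.linalg.norm(K @ wavelet_warmstart_solve(K, b) - b))
```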
Iteration based on the strategy of excitation light sources rotation
Tomographic imaging involves placing sources and detectors over the available surface of the tissue. Conventionally, the excitation light sources are kept at fixed positions during the process of image reconstruction. By increasing the number of sources, the image quality can be improved; however, such a strategy results in a matrix system of larger scale and hence higher computational complexity. Although we can reduce the number of sources to save computation time, the information available for image reconstruction then decreases, which may lead to poor reconstruction quality. As a result, there is a trade-off between reconstruction accuracy and computational requirements. To address this trade-off, we propose a new strategy for iterative calculation. In this strategy, the original excitation light sources are used to reconstruct the image in the first iteration. Then, once the sources have been rotated by a certain angle, they are employed for the second iteration of reconstruction. This means that the whole iterative reconstruction is performed using the sources with different rotation angles in turn. The process is repeated until a stopping criterion is satisfied. This strategy is motivated by the fact that excitation light sources at different angles provide more information during the iteration process than sources at a fixed angle, and thus the quality of the reconstructed results can be improved. In our method, the number of excitation light sources is not increased, so the computational cost does not rise. Moreover, the iterative result obtained from the sources at one angle provides a good initial guess for the next iteration with the sources at another angle. In this way, the precision of the solutions improves as the lights are rotated. However, if the rotation angle of the sources is too small, the rotation provides quite limited additional information for reconstruction. Conversely, too large a rotation angle may lead to superposition of the original and rotated sources, in which case no additional information is provided for the iterative reconstruction. Suppose the sources are distributed around the circumference of the tissue with equal angles between adjacent sources. To overcome these difficulties, in our work the rotation angle is set to half of the angle between adjacent sources. This strategy is schematically illustrated in Fig. 1.
Illustrative explanation of the strategy of sources rotation. a Sources before rotation, and b sources after rotation
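For concreteness, the rotation schedule of Fig. 1 can be written as a small helper. The accumulation rule \( \theta = i \cdot \beta \) is taken from Algorithm 2 below; the code itself is only an illustrative sketch.

```python
import numpy as np

def source_angles(n_sources, i):
    """Angles of n equally spaced sources at iteration i, with the rotation
    angle beta set to half of the inter-source spacing (theta = i * beta)."""
    spacing = 2.0 * np.pi / n_sources
    beta = spacing / 2.0
    return (np.arange(n_sources) * spacing + i * beta) % (2.0 * np.pi)
```

For equally spaced sources this alternates between two interleaved configurations, which is exactly what avoids the superposition of original and rotated sources discussed above.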
To derive the algorithm, we acquire the solution to the reconstruction problem by minimizing the residual error between the predicted and measured data:
$$ M\left( {\mathbf{x}} \right) = \left\| {{\mathbf{y}} - F\left( {\mathbf{x}} \right)} \right\| $$
where \( M\left( {\mathbf{x}} \right) \) is the objective function, \( {\mathbf{y}} \) is the measured data, and \( F\left( {\mathbf{x}} \right) \) is the predicted data with regard to a forward model. Let \( \beta \) be half of the angle between adjacent sources; the resulting reconstruction algorithm is summarized in "Algorithm 2".
Algorithm 2
Initialize \( {\mathbf{x}} = {\mathbf{x}}_{0} \), i = 0;
Repeat:
\( \theta = i \cdot \beta \);
Compute \( \Delta {\mathbf{y}} \) and \( {\mathbf{J}} \) at x based on the excitation light sources with the rotation angle \( \theta \);
i = i + 1;
Solve Eq. (24) with "Algorithm 1";
Update x with \( {\mathbf{x}} = {\mathbf{x}} + \Delta {\mathbf{x}} \);
Compute the objective function \( M\left( {\mathbf{x}} \right) \) with the current x by Eq. (31);
Until \( M\left( {\mathbf{x}} \right) < \delta \)
Output x.
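The loop above might be sketched as follows; `measured`, `forward`, and `jacobian` are placeholders for the FEM-based forward model evaluated with sources at the given angles, and `source_angles` and `wavelet_pca_solve` are the helpers sketched earlier. This is an illustration of the control flow only, not the paper's code.

```python
import numpy as np

def reconstruct(x0, measured, forward, jacobian, n_sources, xi, q, delta, max_iter=50):
    """Outer loop of Algorithm 2 (sketch) with rotated-source iterations."""
    x, i = x0.copy(), 0
    while i < max_iter:
        theta = source_angles(n_sources, i)              # rotated sources
        y = measured(theta)
        J = jacobian(x, theta)
        dy = y - forward(x, theta)
        K, b = J.T @ J + xi * np.eye(x.size), J.T @ dy   # Eq. (24)
        x = x + wavelet_pca_solve(K, b, q)               # Algorithm 1
        i += 1
        if np.linalg.norm(y - forward(x, theta)) < delta:  # M(x) < delta, Eq. (31)
            break
    return x
```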
Simulation results and discussion
In this section we perform simulation studies using different phantoms to test the performance of our algorithm and discuss the obtained results. The forward model of Eqs. (1) and (2) is used to simulate the measured data. To better approximate realistic conditions, we add Gaussian noise with a signal-to-noise ratio of 10 dB to the calculated data. A large regularization parameter may lead to low contrast and resolution in the image, while a small parameter can result in increased contrast and resolution; however, a small parameter also increases the high-frequency noise in the image [26]. After extensive trial simulations, the regularization parameter \( \xi \) is set to 0.001 for the best results. The termination criterion \( \delta \) is set to 0.02.
In the first example, the performance of the proposed method is verified using a test phantom containing one inclusion, as indicated in Fig. 2. Four excitation light sources are uniformly distributed around the simulated phantom. The measurements are sampled by thirty detectors uniformly placed on the boundary of the phantom.
Simulated phantom with one inclusion. The value of absorption coefficient \( \mu_{axf} \) of the inclusion is 0.4 mm−1, and the value of absorption coefficient \( \mu_{axf} \) of the background is 0.06 mm−1
To reduce the computational requirements without significant reduction of image resolution, we compute the reconstructions on a mesh that is adaptively refined with respect to the a priori image, as portrayed in Fig. 3. Figure 4 displays the mesh, which contains 122 nodes and 212 triangular elements.
Prior image for phantom with one inclusion. The prior image is utilized to guide the generation of the adaptively refined mesh for one-inclusion reconstruction
Adaptively refined mesh for reconstruction of one-inclusion phantom. The adaptively refined mesh contains 122 nodes and 212 triangular elements
The details of the optical and fluorescent parameters in the different areas of the test phantom are provided in Table 1. To compare the reconstructed object with the true one, we define an image quality metric, the mean square error (MSE), given as
$$ \text{MSE} = \frac{1}{N}\sum\limits_{i = 1}^{N} {[x^{rec} \left( i \right) - x^{act} \left( i \right)]^{2} } $$
where the superscript rec denotes the values obtained using reconstruction algorithms, and act denotes the actual distribution of the optical or fluorescent parameters which is used to generate the synthetic image data set.
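In code, the metric is a one-liner (NumPy assumed):

```python
import numpy as np

def mse(x_rec, x_act):
    """Mean square error between reconstructed and actual parameter maps."""
    x_rec, x_act = np.asarray(x_rec), np.asarray(x_act)
    return np.mean((x_rec - x_act) ** 2)
```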
Table 1 Optical parameters used for one-inclusion phantom
The reconstructed images of \( \mu_{axf} \) for the one-inclusion phantom with two sources and four sources are depicted in Fig. 5a, b, respectively. Both are obtained without using the wavelet-based PCA. The results presented in Fig. 5 show that reconstruction with an increased number of sources enhances the image quality, although the time required for reconstruction also increases.
Reconstructed images of absorption coefficient \( \mu_{axf} \) for one-inclusion phantom. a Reconstructed image with two sources, and b reconstructed image with four sources
In Fig. 6 we show the resulting reconstructions using the different algorithms. Figure 6a displays the reconstructed result using the proposed method with four sources. Figure 6b, c depict the traditional reconstructed results with four sources and with eight sources, respectively. We see that the proposed method yields a reconstructed target with improved contrast and contour compared to the traditional method.
Reconstructed images of absorption coefficient \( \mu_{axf} \) for phantom with one inclusion. a Reconstructed image based on the proposed method with four sources, b reconstructed image based on the traditional method with four sources, and c reconstructed image based on the traditional method with eight sources
We demonstrate the benefits of the proposed method by comparing its performance to that of the traditional method. For quantitative validation, the performance of the reconstructions in terms of computation time and MSE is tabulated in Table 2. We remark that the computation time of the proposed algorithm is much shorter than that of the traditional method, which demonstrates that our method is time efficient. Although an increased number of sources can improve the quality of reconstruction, it slows down the reconstruction. In addition, the MSE of the proposed method is smaller than that of the compared method. Therefore, the above results suggest that the proposed algorithm can substantially speed up the reconstruction process while achieving high accuracy.
Table 2 Method performance comparison for phantom with one inclusion
The phantom for the second test case is shown in Fig. 7. It consists of two inclusions of different shapes. As before, the phantom is illuminated by four equally spaced sources located on its boundary. The detector readings are obtained from 30 different points on the boundary of the circular domain. The distance between successive detector positions is the same along the boundary.
Simulated phantom with two inclusions. The value of low absorption coefficient \( \mu_{axf} \) of the inclusion is 0.3 mm−1, the value of high absorption coefficient \( \mu_{axf} \) of the inclusion is 0.4 mm−1, and the value of absorption coefficient \( \mu_{axf} \) of the background is 0.06 mm−1
Figure 8 displays the a priori image used as guidance for the generation of the adaptively refined mesh. The resulting mesh, with 148 nodes and 264 triangular elements, is depicted in Fig. 9. Table 3 lists the values of the optical and fluorescent parameters of the simulated phantom.
Prior image for phantom with two inclusions. The prior image is utilized to guide the generation of the adaptively refined mesh for two-inclusion reconstruction
Adaptively refined mesh for reconstruction of two-inclusion phantom. The adaptively refined mesh contains 148 nodes and 264 triangular elements
Table 3 Optical parameters used for two-inclusion phantom
In Fig. 10 we show the reconstructed images of \( \mu_{axf} \) for the two-inclusion phantom with two sources (see Fig. 10a) and with four sources (see Fig. 10b). We again notice that better reconstructed results can be obtained with more sources. Nevertheless, reconstruction with more sources may lead to a heavy computational burden.
Reconstructed images of absorption coefficient \( \mu_{axf} \) for two-inclusion phantom. a Reconstructed image with two sources, and b reconstructed image with four sources
The reconstruction from our method with four sources is shown in Fig. 11a, and those from the traditional method are depicted in Fig. 11b, c; in particular, the latter reconstructed images are obtained with four sources (Fig. 11b) and eight sources (Fig. 11c).
Reconstructed images of absorption coefficient \( \mu_{axf} \) for phantom with two inclusions. a Reconstructed image based on the proposed method with four sources, b reconstructed image based on the traditional method with four sources, and c reconstructed image based on the traditional method with eight sources
We find that the contrast of the image is enhanced with our method. Reconstruction with more sources can improve the reconstruction accuracy but results in a heavy computational burden. More importantly, we also note that the proposed method improves the quality of reconstruction, recovering the shape and position of both targets more accurately.
We provide quantitative comparisons of the different reconstructions in Table 4. As can be clearly seen, an improvement in reconstruction quality is achieved by the proposed algorithm. Additionally, it is evident from Table 4 that our method requires less reconstruction time than the traditional method. Therefore, the main conclusion we can draw from these simulation studies is that the proposed approach is more computationally efficient than the traditional method and highly capable of achieving accurate reconstructions.
Table 4 Method performance comparison for phantom with two inclusions
To illustrate the advantages of the proposed algorithm, we show the reconstructed results from the different algorithms (see Fig. 12). Figure 12a–c display the reconstructed images with the proposed approach, the wavelet method, and the PCA method, respectively. Table 5 summarizes the quantitative performance of the reconstructions. From Table 5, it can be clearly seen that the proposed algorithm performs better in both accuracy and speed of reconstruction than algorithms using only the wavelet method or the PCA method.
Reconstructed images of absorption coefficient \( \mu_{axf} \) using different methods. a Reconstructed image based on the proposed method, b reconstructed image based on the wavelet method, and c reconstructed image based on the PCA method
Table 5 Performance comparison of different methods
To validate the proposed approach in the 3D case, the methods previously defined for triangular elements are extended to tetrahedral elements. The integration of products of shape functions over the volume of the elements, and of surface integrals over a side of an element, is performed by numerical integration rules. Here, a cylindrical phantom as illustrated in Fig. 13 is used for the 3D simulations. A small cylindrical inclusion is suspended in this phantom. The dashed curves represent the planes of measurement. Six sources and sixteen measurements are employed for each plane, and the data are collected in all three measurement planes. The mesh for 3D reconstruction, containing 3208 tetrahedral elements and 858 nodes, is shown in Fig. 14. Figures 15 and 16 display the 3D reconstructed images based on the proposed approach and the traditional method, respectively; these are 2D cross sections through the reconstructed 3D images. The quantitative performance of the two methods is given in Table 6 to further evaluate the reconstruction quality. As can be seen from Table 6, our proposed algorithm also significantly speeds up the reconstruction process and improves the quality of reconstruction in the 3D case.
Simulated phantom for 3D reconstruction. The phantom of radius 10 mm and height 40 mm with a uniform background of \( \mu_{axf} = 0.005 \) mm−1 is located at \( x = 10 \) mm, \( y = 0 \) mm and \( z = 20 \) mm. The small cylindrical inclusion has a radius of 2 mm and height 6 mm with \( \mu_{axf} = 0.01 \) mm−1. The inclusion is located at \( x = 5 \) mm, \( y = 0 \) mm, and \( z = 20 \) mm. The dashed curves represent the measurement planes, at \( z = 15 \) mm, \( z = 20 \) mm, \( z = 25 \) mm
Mesh for 3D image reconstruction. Mesh for 3D image reconstruction contains 858 nodes and 3208 tetrahedral elements
Reconstructed images based on the proposed algorithm. The right-hand side corresponds to the top of the cylinder (\( z = 40 \) mm), and the left-hand side corresponds to the bottom of the cylinder (\( z = 0 \) mm), with each slice representing a 10 mm increment
Reconstructed images based on the traditional method. The right-hand side corresponds to the top of the cylinder (\( z = 40 \) mm), and the left-hand side corresponds to the bottom of the cylinder (\( z = 0 \) mm), with each slice representing a 10 mm increment
Table 6 Performance comparison of 3D reconstruction methods
Finally, we test the reconstruction algorithms with the Monte Carlo method. As the most commonly used stochastic technique, the Monte Carlo method is regarded as the gold standard for modelling light propagation and has a long pedigree in transport theory. We use the Monte Carlo method to generate the measurement data employed to reconstruct the FMT image. Figure 17 shows the model for reconstruction and Fig. 18 shows the corresponding reconstructed results with four sources, obtained from the proposed algorithm (Fig. 18a) and the conventional method (Fig. 18b). The quantitative performance is listed in Table 7, from which we can see that both the speed and the precision of the reconstruction are improved with the proposed algorithm.
Simulated phantom based on Monte Carlo method. The value of absorption coefficient \( \mu_{axf} \) of the inclusion is 0.06 mm−1, and the value of absorption coefficient \( \mu_{axf} \) of the background is 0.025 mm−1
Reconstructed results with different methods. a Reconstructed image based on the proposed method, and b reconstructed image based on the conventional method
Table 7 Method performance comparison based on Monte Carlo simulation
In this work, we have developed a highly efficient method for image reconstruction in FMT by means of wavelet-based PCA combined with a new strategy for iterative calculation. During the reconstruction process, the excitation light sources are rotated at each iteration. The proposed algorithm is tested by numerical experiments based on simulated data obtained both from the deterministic forward model and from stochastic Monte Carlo simulation. The results in the previous sections show that our method considerably reduces the time taken to compute the inverse problem in FMT. Furthermore, the proposed approach is also shown to largely outperform the traditional method in terms of the precision of the inverse solutions. We therefore expect that this study may be used both to improve current reconstruction methods and as guidance for clinical studies.
NIR:
near-infrared
FMT:
fluorescent molecular tomography
PCA:
principal component analysis
FEM:
finite element method
MSE:
mean square error
Ntziachristos V. Going deeper than microscopy: the optical imaging frontier in biology. Nat Methods. 2010;7:603–14.
Balas C. Review of biomedical optical imaging-a powerful, non-invasive, non-ionizing technology for improving in vivo diagnosis. Meas Sci Technol. 2009;20:1–12.
Darne C, Lu Y, Sevick-Muraca EM. Small animal fluorescence and bioluminescence tomography: a review of approaches, algorithms and technology update. Phys Med Biol. 2014;59:R1–64.
Zhang W, Wu L, Li J, Yi X, Wang X, Lu Y, Chen W, Zhou Z, Zhang L, Zhao H, Gao F. Combined hemoglobin and fluorescence diffuse optical tomography for breast tumor diagnosis: a pilot study on time-domain methodology. Biomed Opt Express. 2013;4:331–48.
Ye J, Chi C, Xue Z, Wu P, An Y, Xu H, Zhang S, Tian J. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method. Biomed Opt Express. 2014;5:387–406.
Ntziachristos V. Fluorescence molecular imaging. Annu Rev Biomed Eng. 2006;8:1–33.
Zhang X, Liu F, Zuo S, Shi J, Zhang G, Bai J, Luo J. Reconstruction of fluorophore concentration variation in dynamic fluorescence molecular tomography. IEEE Trans Biomed Eng. 2015;62:138–44.
Arridge SR, Schotland JC. Optical tomography: forward and inverse problems. Inverse Probl. 2009;25:1–59.
Gibson AP, Hebden JC, Arridge SR. Recent advances in diffuse optical imaging. Phys Med Biol. 2005;50:R1–43.
Zhai Y, Cummer SA. Fast tomographic reconstruction strategy for diffuse optical tomography. Opt Express. 2009;17:5285–97.
Kilmer ME, Miller EL, Boas DA, Brooks DH, DiMarzio CA, Gaudette RJ. Direct object localization and characterization from diffuse photon density wave data. Proc SPIE. 1999;3597:45–54.
Ducros N, Andrea CD, Valentini G, Rudge T, Arridge S, Bassi A. Full-wavelet approach for fluorescence diffuse optical tomography with structured illumination. Opt Lett. 2010;35:3676–8.
Correia T, Rudge T, Koch M, Ntziachristos V, Arridge S. Wavelet-based data and solution compression for efficient image reconstruction in fluorescence diffuse optical tomography. J Biomed Opt. 2013;18:086008.
Zhang G, He W, Pu H, Liu F, Chen M, Bai J, Luo J. Acceleration of dynamic fluorescence molecular tomography with principal component analysis. Biomed Opt Express. 2015;6:2036–55.
Cao X, Wang X, Zhang B, Liu F, Luo J, Bai J. Accelerated image reconstruction in fluorescence molecular tomography using dimension reduction. Biomed Opt Express. 2013;4:1–14.
Han D, Tian J, Zhu S, Feng J, Qin C, Zhang B, Yang X. A fast reconstruction algorithm for fluorescence molecular tomography with sparsity regularization. Opt Express. 2010;18:8630–46.
Wang D, Qiao H, Song X, Fan Y, Li D. Fluorescence molecular tomography using a two-step three-dimensional shape-based reconstruction with graphics processing unit acceleration. Appl Opt. 2012;51:8731–44.
Wang X, Zhang B, Cao X, Liu F, Luo J, Bai J. Acceleration of early-photon fluorescence molecular tomography with graphics processing units. Comput Math Methods Med. 2013;2013:1–9.
Zhang J, Shi J, Cao X, Liu F, Bai J, Luo J. Fast reconstruction of fluorescence molecular tomography via a permissible region extraction strategy. J Opt Soc Am A. 2014;31:1886–94.
Zou W, Pan X. Compressed-sensing-based fluorescence molecular tomographic image reconstruction with grouped sources. BioMed Eng Online. 2014;13:1–15.
Joshi A, Bangerth W, Sevick-Muraca EM. Adaptive finite element based tomography for fluorescence optical imaging in tissue. Opt Express. 2004;12:5402–17.
Arridge SR, Hebden JC. Optical imaging in medicine: II. Modelling and reconstruction. Phys Med Biol. 1997;42:841–53.
Davis SC, Dehghani H, Wang J, Jiang S, Pogue BW, Paulsen KD. Image-guided diffuse optical fluorescence tomography implemented with Laplacian-type regularization. Opt Express. 2007;15:4066–82.
Zhu W, Wang Y, Deng Y, Yao Y, Barbour RL. A wavelet-based multiresolution regularized least squares reconstruction approach for optical tomography. IEEE Trans Med Imaging. 1997;16:210–7.
Frassati AL, Dinten JM, Georges D, Silva AD. Model reduction using wavelet multiresolution technique applied to fluorescence diffuse optical tomography. Appl Opt. 2009;48:6878–92.
Pogue BW, McBride TO, Prewitt J, Osterberg UL, Paulsen KD. Spatially variant regularization improves diffuse optical tomography. Appl Opt. 1999;38:2950–61.
WZ conceived the study, implemented the algorithm, and drafted the manuscript. JJW made the design of the algorithm and analyzed the simulation results. DFH made the design and discussion of the 3D reconstruction. WXW participated in the design and discussion to test the performance of the different algorithms. All authors read and approved the final manuscript.
This work was supported by Natural Science Foundation of Jiangsu Province, China under Grant No. BK20130324, Specialized Research Fund for the Doctoral Program of Higher Education (SRFDP) under Grant No. 20123201120009, and Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant No. 12KJB510029.
Compliance with ethical guidelines
School of Electronic and Information Engineering, Soochow University, Suzhou, 215006, China
Wei Zou, Jiajun Wang, Danfeng Hu & Wenxia Wang
Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Kowloon, Hong Kong
Wei Zou & Jiajun Wang
School of Information Technologies, The University of Sydney, Sydney, NSW, 2006, Australia
Wei Zou
Jiajun Wang
Danfeng Hu
Wenxia Wang
Correspondence to Jiajun Wang.
Zou, W., Wang, J., Hu, D. et al. A reconstruction approach in wavelet domain for fluorescent molecular tomography via rotated sources illumination. BioMed Eng OnLine 14, 86 (2015). https://doi.org/10.1186/s12938-015-0080-y
Algorithms for Causal Discovery
This article considers the problem of estimating the structure of the causal DAG given some observational data.
How difficult is it to learn a DAG?
The number of DAGs on $n$ nodes increases super-exponentially in $n$.
The entire list up to $n = 40$ can be found here.
From what I have read so far, there are three major methods to learn causal structures. They are
Methods based on Independence Tests
Score based methods
Methods based on making assumptions about the distribution
The first two methods estimate the true causal DAG up to its Markov equivalence class. What that means is that the final DAG output by the algorithm may contain undirected edges (formally called a CPDAG - completed partially directed acyclic graph). Algorithms in the third method claim to estimate the true causal DAG, assuming that the required conditions are met.
The following is an overview of the algorithms in each method.
Consider three variables $A$, $B$ and $C$. Let us assume that there are no unobserved variables. These variables can be arranged in four ways: two chains ($A \rightarrow B \rightarrow C$ and $A \leftarrow B \leftarrow C$), a fork ($A \leftarrow B \rightarrow C$) and a collider ($A \rightarrow B \leftarrow C$).
Figures 1, 2 and 3 represent the same set of conditional independence relationships between the three variables, and Figure 4 represents a different set of conditional independence relationships.
Let us consider Figure 4, with $A$ and $C$ both pointing towards $B$ (this is sometimes called a collider). $A$ and $C$ are dependent conditioned on $B$. One way to understand this is to consider $A$ and $C$ as the outcomes of tosses of two separate coins, and $B$ as a random variable that takes the value $1$ if both coins give the same outcome, and $0$ otherwise. The two coin tosses are independent unconditionally. But, once we know the value of $B$, $A$ and $C$ are dependent on each other.
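This is easy to check numerically. In the simulation below (plain numpy), the two coins are uncorrelated overall but perfectly correlated once we condition on $B = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, 100_000)      # first coin
C = rng.integers(0, 2, 100_000)      # second coin
B = (A == C).astype(int)             # collider: B = 1 iff the outcomes match

print(np.corrcoef(A, C)[0, 1])                    # ~0: marginally independent
print(np.corrcoef(A[B == 1], C[B == 1])[0, 1])    # 1.0: given B = 1, A equals C
```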
Algorithms based on Independence Tests primarily use this property of colliders to identify $v$-structures from the data. However, they cannot orient all edges because conditional independence testing cannot distinguish between Figures (1), (2) and (3). Hence, these algorithms can estimate the causal DAG only up to the Markov equivalence class.
In score based methods, we assign a score to every DAG depending on its goodness of fit with the data, and search over the space of DAGs to find the one with the best score. Since the number of DAGs on $n$ variables grows super-exponentially, as seen earlier, exhaustive search is infeasible.
Greedy search algorithms are therefore used to find the best scoring DAG.
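To make this concrete, here is a linear-Gaussian BIC score (up to additive constants) for a candidate DAG. A greedy hill-climber would repeatedly apply the single-edge addition, deletion or reversal that most improves this score while keeping the graph acyclic; only the scoring function is shown here.

```python
import numpy as np

def bic_score(X, parents):
    """Linear-Gaussian BIC (up to constants) of a DAG given as
    {node index: list of parent indices} for data X of shape (n, p)."""
    n = X.shape[0]
    score = 0.0
    for j, pa in parents.items():
        if pa:
            A = np.column_stack([X[:, pa], np.ones(n)])
            beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ beta
        else:
            resid = X[:, j] - X[:, j].mean()
        # Gaussian log-likelihood term minus the BIC penalty per parameter
        score += -0.5 * n * np.log(resid.var()) - 0.5 * (len(pa) + 1) * np.log(n)
    return score

# e.g. bic_score(X, {0: [], 1: [0], 2: [1]}) scores the chain x0 -> x1 -> x2
```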
If we consider the framework of Structural Equation Models (also Structural Causal Models), making some assumptions about the model helps us recover the true DAG.
Let us formally define Structural Equation Models (SEM). For some set of random variables $X = \{X_1,..,X_p\}$, the general SEM is defined as a set of functions

$$X_i = f_i(PA_i, N_i), \quad i = 1, \dots, p$$

where $N_1,..,N_p$ are mutually independent. $PA_i$ denotes the parents of $X_i$ in the DAG, $N_i$ is the noise variable and $f_i$ is a function from $\mathbb{R}^{|PA_i|+1} \rightarrow \mathbb{R}$. Depending on the kind of assumption made on $f_i$, there are three different approaches that I have come across.
Linear Gaussian Acyclic Model (LinGAM)
Consider a linear SEM

$$X_j = \sum_{k \in PA_j} \beta_{jk} X_k + N_j, \quad j = 1, \dots, p$$

where all $N_j$ are mutually independent and non-Gaussian, and $\beta_{jk} \ne 0$ for all $k \in PA_j$. This SEM is also called LinGAM.
Basically, this model makes the assumption that every variable in the SEM depends linearly on its parents. In 2006, it was proved that the original DAG is identifiable from this SEM. A practical method for finite data uses Independent Component Analysis (ICA) to estimate the true DAG. The original paper can be found here.
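As a usage sketch: the third-party Python package `lingam` implements DirectLiNGAM (my choice of tool; the post itself only links to the original paper). On data simulated from a linear SEM with uniform, hence non-Gaussian, noise it should recover the causal order:

```python
import numpy as np
import lingam   # pip install lingam; a third-party implementation (assumption)

rng = np.random.default_rng(0)
n = 5000
x1 = rng.uniform(-1, 1, n)                # non-Gaussian noise is essential here
x2 = 0.8 * x1 + rng.uniform(-1, 1, n)     # true DAG: x1 -> x2
X = np.column_stack([x1, x2])

model = lingam.DirectLiNGAM()
model.fit(X)
print(model.causal_order_)        # expected: [0, 1]
print(model.adjacency_matrix_)    # estimated weights; entry (1, 0) should be ~0.8
```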
Causal Additive Models (CAM)
Consider an SEM of the form

$$X_j = \sum_{k \in PA_j} f_{j,k}(X_k) + N_j, \quad j = 1, \dots, p$$

with $N_1, \dots, N_p$ mutually independent and $N_j \sim \mathcal{N}(0, \sigma_j^2)$, $\sigma_j^2 > 0$ for all $j$. Also, all $f_{j,k}(\cdot)$ are smooth functions from $\mathbb{R} \rightarrow \mathbb{R}$.
In other words, we assume that each variable is determined by a sum of functions of its parents with a Gaussian noise component. It has been proved that if the functions $f_{j,k}$ are non-linear, the DAG is identifiable from the joint distribution. This paper provides a broad framework to identify the DAG from finite data, assuming the distribution meets the conditions mentioned earlier. This method is feasible for upto 200 variables.
Additive Noise Models (ANM)
Consider the Additive Noise Model (ANM), which is of the form

$$X_j = f_j(PA_j) + N_j, \quad j = 1, \dots, p$$

where $PA_j$ denotes the parents of $X_j$, the functions $f_j$ are non-constant and all noise variables $N_j$ are mutually independent. If we additionally assume that the noise variables have non-vanishing densities and that the functions $f_j$ are three times continuously differentiable, it has been proved that the original DAG is identifiable from the joint distribution. One practical method for finite data is Regression with Subsequent Independence Test (RESIT). More about the method can be found here. ANM is more general than CAM. This method is feasible for up to 20 variables.
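A crude pairwise version of RESIT can be sketched as follows. Two caveats: the Spearman correlation used here is only a weak stand-in for the HSIC independence test of the actual method (it misses non-monotone dependence), and the gradient-boosting regressor is an arbitrary choice of nonlinear fit.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor

def anm_direction(x, y):
    """Pairwise RESIT sketch: regress each variable on the other and prefer
    the direction whose residuals look more independent of the input."""
    def dependence(cause, effect):
        model = GradientBoostingRegressor().fit(cause.reshape(-1, 1), effect)
        resid = effect - model.predict(cause.reshape(-1, 1))
        return abs(spearmanr(cause, resid)[0])
    return "x -> y" if dependence(x, y) < dependence(y, x) else "y -> x"

# e.g. data from y = x**3 + uniform noise may be oriented as "x -> y":
# rng = np.random.default_rng(0)
# x = rng.uniform(-2, 2, 2000); y = x**3 + rng.uniform(-1, 1, 2000)
# print(anm_direction(x, y))
```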
Algorithms for Causal Discovery - February 7, 2017 - Tanmayee Narendra
Probability of causation for occupational cancer after exposure to ionizing radiation
Eun-A Kim1,
Eujin Lee1,
Seong-Kyu Kang2 &
Meeseon Jeong3
Probability of causation (PC) is a reasonable way to estimate causal relationships in radiation-related cancer. This study reviewed the international trend, usage, and critiques of the PC method. Because it has been used in Korea, it is important to check the present status and estimation of PC in radiation-related cancers in Korea.
Research articles and official reports regarding the PC of radiation-related cancer published from the 1980s onwards were reviewed, including studies used for the revision of the Korean PC program. PC has been calculated for compensation-related cases in Korea since 2005.
The United States National Institutes of Health first estimated the PC in 1985. Among the 106 occupational diseases listed in International Labor Organization Recommendation 194 (International Labor Office (ILO), ILO List of Occupational Diseases, 2010), PC is available only for occupational cancer after ionizing radiation exposure. The United States and the United Kingdom use PC as a specific criterion for decisions on the compensability of workers' radiation-related health effects. In Korea, PC was first developed as the Korean Radiation Risk and Assigned Share (KORRAS) program in 1999. In 2015, the Occupational Safety and Health Research Institute and the Radiation Health Research Institute jointly developed a further revised PC program, Occupational Safety and Health-PC (OSH-PC). Between 2005 and 2015, PC was applied in 16 claims of workers' compensation for radiation-related cancers. In most of the cases, compensation was given when the PC was more than 50%. However, in one case, a PC lower than 50% was accepted, considering the possibility of underestimation of the cumulative exposure dose.
PC is one of the most advanced tools for estimating the causation of occupational cancer. PC has been adjusted for baseline cancer incidence in Korean workers, and for uncertainties using a statistical method. Because the fundamental reason for under- or over-estimation is probably inaccurate dose reconstruction, a proper guideline is necessary.
In contrast to occupational injury, which has a definite cause, the causation of occupational disease is not clear in most cases. In particular, occupational cancers cannot be distinguished from cancers that occur spontaneously in the general population, owing to the complexity of their multi-causal pathogenesis. However, the attribution of a particular cancer risk can greatly influence workers' compensation. Therefore, quantitative risk estimation of radiation-related disease requires appropriate methods for determining probabilities under complex circumstances [1].
Using accumulated results regarding the probability of cancer following low-dose exposure to radiation and statistical modeling for assessment of the causation of ionizing radiation-induced cancer, the National Institutes of Health (NIH) in the United States first estimated the probability of causation (PC) in 1985 [2]. The International Atomic Energy Agency, International Labor Organization (ILO), and World Health Organization developed guidance on the formulation and application of PC schemes in 2010, with revisions accounting for the uncertainty in PC [3]. PC allows the attribution of cancer to occupational radiation exposure and assists decision-makers in establishing compensation schemes for occupational cancer related to ionizing radiation.
Most countries with a workers' compensation system have adopted an occupational disease list, based on the recommendations of the ILO and the European Commission, to specify the compensability of particular diseases. Among the 106 occupational diseases listed in ILO Recommendation 194 [4], PC is used only for occupational cancer after ionizing radiation exposure. The United States and the United Kingdom use PC as a specific criterion for decisions on the compensability of workers' radiation-related health effects [3]. The United States gave compensation to 10,479 cancer patients out of 37,155 claims (28.2%) with greater than 50% PC up to September 2015 [5]. The Compensation Scheme for Radiation-Linked Diseases in the United Kingdom has considered 1496 cases since the scheme began, and 156 of these cases have resulted in successful claims based on the PC [6].
In Korea, PC was first developed as the Korean Radiation Risk and Assigned Share (KORRAS) program in 1999 and used in the Ministry of Science and Technology's ordinance (MST; No. 2001–35: approval standard for occupational disease after exposure to radiation) in 2001 [7]. Since workers' compensation comes under the Ministry of Employment and Labor, the MST approval standard is not a legal requirement for the compensation process. However, almost all cancer cases after ionizing radiation exposure consider PC estimates from the Radiation Health Research Institute (RHRI) [8]. KORRAS used point estimation, with its attendant uncertainty problems, while the RHRI's revised program from 2004, the Radiation Health Research Institute-Program for Estimating the Probability of Causation (RHRI-PEPC), addressed the uncertainty issue [7]. In 2015, the Occupational Safety and Health Research Institute (OSHRI) and RHRI jointly developed another revised PC program incorporating the recent Korean cancer incidence rate and statistical revisions.
In this study, we reviewed the international trend, usage, and critique of PC estimation and the current status of the Korean PC program. In addition, we estimated PC for radiation-related cancer cases in Korea using the revised PC program.
Research articles and official reports regarding the PC of radiation-related cancer published from 1985 onwards were reviewed. For revising the Korean PC program, we determined the statistical model for each cancer type by reviewing the latest information about cancer risk following radiation exposure. The statistical uncertainty of the PC accounted for the model itself, correction for errors in dosimetry, the dependence of risk on dose and dose rate in terms of the dose-dose rate effectiveness factor (DDREF), risk transfer to the Korean population, the relationship between radiation dose and smoking history in lung cancer, and the latent period. The uncertainty factors were combined using the Monte Carlo method. The PC program used organ dose assumptions. Using the revised PC program, we re-calculated the PC of compensation-related cases that had been assessed with RHRI-PEPC since 2005.
Statistical modeling of PC for radiation cancer
In the National Cancer Institute-Centers for Disease Control and Prevention (NCI-CDC) model, PC is basically a calculation of the excess relative risk (ERR) as a function of radiation dose for each exposure, using the following formula [9]:
$$ PC = \frac{\text{risk due to radiation exposure}}{\text{baseline risk} + \text{risk due to radiation exposure}} \times 100\% = \frac{ERR}{1 + ERR} \times 100\% $$
where ERR (excess relative risk) = excess risk/baseline risk.
ERR is estimated after adjusting for cancer type, sex, age at exposure, attained age, and radiation dose. The radiation sensitivity of each cancer type is reflected in the dose coefficient of the PC model, which corresponds to the ERR per unit dose (Sv) at exposure age (e) of 30 or older and attained age (a) of 50 or older for most cancer types. When we calculate the ERR/Sv at e = 30 and a = 50, the ERR/Sv in males is highest for leukemia, followed by lung cancer, thyroid cancer, and cancer of the kidney and other urinary organs. In females, it is highest for leukemia, followed by cancer of the kidney and other urinary organs, bladder cancer, and breast cancer. Applying a different age at exposure or attained age may change this order of sensitivity. The younger the age at exposure, the higher the ERR; the same holds for the attained age. The dose-response relationship differs according to the cancer type and the characteristics of the exposure. Solid cancers, and leukemia after high-linear energy transfer (LET) exposure or chronic low-LET exposure, assume a linear dose-response relationship. On the other hand, leukemia after acute low-LET exposure assumes a linear-quadratic dose response. In the case of multiple exposures, each ERR is calculated separately and the results are added together.
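As a worked example of this arithmetic, with a linear dose response the calculation reduces to a few lines of Python. The dose coefficient used below is purely illustrative and is not a value from any of the cited models.

```python
def probability_of_causation(doses_sv, err_per_sv):
    """PC (%) under the linear model: ERR = sum_i beta_i * D_i,
    PC = ERR / (1 + ERR) * 100.  The coefficients beta_i depend on cancer
    type, sex and ages in the real models; here they are plain numbers."""
    err = sum(beta * d for beta, d in zip(err_per_sv, doses_sv))
    return 100.0 * err / (1.0 + err)

# Two exposures of 50 mSv and 100 mSv with an illustrative coefficient of
# 4 ERR/Sv give ERR = 0.6 and hence PC = 0.6 / 1.6 * 100 = 37.5%:
# probability_of_causation([0.05, 0.10], [4.0, 4.0])
```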
International trend of PC for radiation cancer
Since the first PC model for radiation cancer was proposed by the NIH, there have been several models, such as Biologic Effects of Ionizing Radiation V [10] and the United Nations Scientific Committee on the Effects of Atomic Radiation model [11]. Most are based on the mortality of Japanese atomic bomb survivors without adjusting for many of the uncertainties. Furthermore, other limitations remain, such as the extrapolation from one population to another and the dependence of the PC on the choice of model (additive or multiplicative). The revised PC model proposed by the NCI-CDC in 2003 [9], also based on Japanese atomic bomb survivors, focused on the evaluation of the distribution of uncertainty. It adopted a random mixed model considering the basic uncertainties and was rated the most reasonable and objective model. Using the NCI-CDC model, the NCI-Interactive Radio Epidemiological Program (IREP) was developed after adjusting for the population of the United States [9], and the National Institute for Occupational Safety and Health (NIOSH) developed NIOSH-IREP, a revision of NCI-IREP [12]. A risk model for radiation cancer based on the life span study of Japanese atomic bomb survivors is updated regularly by the Radiation Effects Research Foundation [13].
Development and recent revision of PC for radiation cancer in Korea
The first PC program for radiation cancer in Korea was KORRAS, in 1999. It used point estimation, which was revised in RHRI-PEPC through evaluation of the uncertainty using the NCI-IREP model and Korean baseline cancer incidence in 2003 [7]. In the RHRI-PEPC model, the ERR per unit radiation exposure (mSv) was estimated mainly based on the Japanese atomic bomb survivor study, for the 30 cancer types in the NCI-CDC report (Table 1) [9]. To adjust for the uncertainties, the systematic and random errors of the radiation exposure dose measurements were statistically corrected. Uncertainty due to population transfer was adjusted using a random linear combination model. In this transfer, the Japanese baseline cancer incidence was taken from Hiroshima and Nagasaki [12] and the Korean baseline incidence from 1993 to 1998 [14]. The DDREF for chronic low-LET radiation was allowed a discrete distribution; for acute low-LET radiation, the starting dose at which the DDREF applied was decided randomly by a log-uniform distribution. In the case of lung cancer, the interaction between radiation and cigarette smoking was added as a mixed additive and multiplicative model, with the relative risk of lung cancer by smoking level obtained from the NCI-CDC report [9]. Other interaction factors, such as the effect of race on skin cancer and of the first full-term delivery on breast cancer, were not considered. The minimum latent period of cancer was assumed to be 1 year for leukemia, 2 years for thyroid cancer, and 4 years for other solid cancers. The cancer risk was phased in by an S-shaped function to avoid a rapid increase from 0 immediately after exposure to the full value after a transition period; minimum and maximum values were therefore attained at 1 and 5 years for leukemia, 2 and 8 years for thyroid cancer, and 4 and 11 years for most solid cancers, respectively.
Table 1 Cancer types for Radiation Health Research Institute-Program for Estimating the Probability of Causation
In 2015, OSHRI and RHRI jointly revised RHRI-PEPC into Occupational Safety and Health (OSH)-PC, reassessing the uncertainties and the baseline cancer incidence in Korea. Among the NCI-CDC model [9], the radiation risk assessment tool for lifetime cancer risk projection (RadRAT) program [15], and the Japanese atomic bomb survivor model [16], the NCI-CDC model included a large number of cancer types and an uncertainty evaluation method for each cancer. Finally, the OSH-PC program included risk models for 29 cancer types (Table 1), namely those of RHRI-PEPC except digestive system cancers. Among the solid cancers in Table 1, bone cancer, connective tissue cancer, eye cancer, cancers of endocrine glands other than the thyroid, and other ill-defined cancers do not have individual cancer risk models; for these cancers, the residual solid cancer model from NCI-CDC was adopted. Male breast cancer used the female breast cancer model, and for malignant melanoma the non-melanoma skin cancer model was used. While NIOSH-IREP and RHRI-PEPC used the age-standardized baseline cancer incidence rate to assess population transfer uncertainty, OSH-PC adopted the age-specific cancer incidence rate at 5-year intervals, which gives more accurate PC values for each case. It is well known that lung cancer risk depends on the interaction of radiation and smoking: if the worker is a smoker, the contribution of radiation to his lung cancer is lower than in a non-smoker. In the new PC model, we therefore incorporated smoking-related adjustment factors for Korea for each lung cancer type (squamous cell, adenocarcinoma, small cell, and other types); the factors were derived using lung cancer relative risks by smoking category in Korea [17]. The differences between RHRI-PEPC and OSH-PC are summarized in Table 2.
Table 2 Summary of Differences between RHRI-PEPC and OSH-PC
PC of cancer cases after radiation exposure in Korea
In the 10 years from 2005, 16 claims of workers' compensation for cancers after radiation exposure used PC (Table 3). Half of these were lymphohematopoietic cancers (seven leukemias and one lymphoma). The remainder included three thyroid cancers and one case each of breast cancer, auditory canal cancer, rectal cancer, multiple cancer (gastric and pancreatic), and cancer of unknown origin. The leukemia cases consisted of three lymphatic leukemias and four myeloid leukemias (three myelocytic, one myelomonocytic, and one myeloblastic). Seven cases involved health care workers such as a radiologist, radiological technologists, and a dental nurse. The others worked in nuclear power plants (three cases, as operators or in radioactive waste treatment), non-destructive testing (three cases), the semiconductor industry (two cases, as maintenance workers), and sales of medical devices involving work such as the installation of X-ray instruments (Table 3).
Table 3 Cancer cases after radiation exposure in Korea (2005–2014)
Assessments were conducted using RHRI-PEPC when each claim was filed (PC1) and with OSH-PC after the revision of the PC program (PC2) (Table 3). Because both RHRI-PEPC and OSH-PC treat all types of leukemia except chronic lymphocytic leukemia as a single category, the PCs of the seven cases were assessed as leukemia (Table 3). The cumulative radiation exposure dose for the leukemia cases was 1.7–204 mSv, and the upper 99th confidence limits of PC1 and PC2 were 2.3–65% and 1–58.9%, respectively (Table 3). Workers' compensation was granted in two leukemia cases with a PC of more than 50% at the 99th confidence level. A radiologist with a PC of 9.1% at the 99th confidence level was also accepted for workers' compensation because his cumulative radiation exposure dose was not believed to reflect the actual exposure correctly; considerations included inappropriate protective gloves during procedures, the higher cumulative doses of co-workers, and the testimony of co-workers that they used to work without a film badge. The work-relatedness of one leukemia case was not decided because of a lack of objective exposure data (Table 3). A non-Hodgkin's lymphoma case with a PC of 15.7% at the 99th confidence level was accepted for compensation because the exposure dose was believed to be underestimated.
The PCs of the three thyroid cancer cases at the 99th confidence level were 1.2–33.3%, with cumulative doses of 3.8–245 mSv. Workers' compensation was not granted for any of these cases, while a case of carcinoma of unknown primary site was accepted, with a high PC (67.3%) and cumulative dose (1870 mSv). The PC of a squamous cell carcinoma of the external auditory canal was analyzed under the category 'the rest of unclassified cancers or unclear cancers'; it was 16.7% at the 99th confidence level and was not accepted for compensation. The PCs of the rectal and breast cancer cases were lower than 1.9% and 12%, with doses of 72.2 mSv and 16.4 mSv, respectively. The PC of the multiple cancers, assessed as a multiple effect (gastric cancer + pancreatic cancer), was 10% at the 99th confidence level (Table 3).
Looking at three of the cases (numbers 2, 9, and 11): although case 2 was exposed to a much higher radiation dose (1870 mSv) than case 9 (204 mSv), the two cases have similar PC values. This is presumably because leukemia has a higher radiation risk than 'the rest of unclassified cancers or unclear cancers' and because the age at first exposure of case 9 (24 years) was younger than that of case 2 (30 years). The leukemia of case 11 (51 mSv) involved a lower exposure than that of case 9, but its PC value is quite high, presumably because the worker was exposed to the radiation over a relatively short period (10 years and 1 month).
PC2, assessed by OSH-PC, was higher than PC1, assessed by RHRI-PEPC, in three cases (numbers 5, 6, and 12). The PC2 values of the other cases were lower than PC1, except in three cases with the same PC (numbers 3, 14, and 16). The highest variation was seen for the carcinoma of unknown primary site (number 2); for the 95th and 99th percentile PC, the highest variation was observed for an acute myelocytic leukemia (number 5, Table 3). In thyroid cancer, PC1 and PC2 showed the same results because the ERR for thyroid cancer is transferred independently of the baseline cancer incidence.
Because the exposure doses of some cases, especially numbers 3 and 13, were estimated based on statements by the patients and their co-workers, their PCs might be distorted from the actual levels.
In Korea, PC assessment has been used in the workers' compensation process through the epidemiologic investigations of OSHRI, which are referred to the Korea Workers' Compensation and Welfare Service [18]. OSHRI and RHRI jointly estimated the PC of cancer cases in the epidemiologic investigation of workers' cancer after exposure to ionizing radiation. PC provides important evidence regarding whether radiation had a substantial effect. However, exceptions were made in some cases with a low PC (lower than 50% at the 99th confidence interval) because of possible underestimation of the exposure dose. A possible explanation for the underestimation of the radiation dose is the low wearing rate of film badges [19, 20]. Another possibility is that the monitoring system assigned a value of 0 to readings below the detection limit of the assessment system [21]. PC estimation requires an accurate dose estimate for greater reliability. To reduce underestimation or overestimation of radiation exposure, a reasonable dose reconstruction guideline should be prepared for the nationwide working environment. In the United States, NIOSH applies a radiation dose reconstruction method under the Energy Employees Occupational Illness Compensation Program Act of 2000 [22].
Since the ILO added radium and other radioactive substances, as well as X-rays, to the occupational disease list of the Workmen's Compensation (Occupational Diseases) Convention No. 42 (C042) in 1934 [23], ionizing radiation has constantly been on national and international occupational disease lists. In 1964, the item's name was changed to 'ionizing radiation' in Schedule I of the occupational disease list of the Employment Injury Benefits Convention No. 121 [24]. The current ILO occupational disease list in ILO Recommendation 194, which has been adopted by the majority of ILO member states, contains 106 occupational diseases [4]. Thanks to the long history of cancer risk research, and especially the experience of the Japanese atomic bomb cohort, a PC could be developed for ionizing radiation.
For most cancer types, the difference between the PC1 and PC2 values was mainly due to the population transfer method: PC1 used the age-standardized rate (ASR) for 1993–1998 [14] as the Korean baseline cancer incidence rate, whereas PC2 used the age-specific cancer incidence rate in the year of diagnosis.
Because the PC model uses a cancer risk model based on the ERR from the Japanese atomic bomb survivor cohort, there are statistical uncertainties and measurement errors in the dose data. In addition, the extrapolation between different populations and the DDREF must be considered when the model is applied to groups with low-dose and low-dose-rate exposure. The interaction effect of radiation and cigarette smoking and interpersonal differences in the latent period add further uncertainty. The PC should be estimated with all these uncertainties taken into account and needs a distribution and confidence interval rather than a point estimate. The OSH-PC program gives results based on a distribution and confidence interval after adjusting for all these uncertainties.
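A Monte Carlo treatment of this kind can be sketched as follows; the three sampling distributions below are illustrative stand-ins, not the distributions actually used in OSH-PC.

```python
import numpy as np

def pc_distribution(dose_sv, n=100_000, seed=0):
    """Sketch of an uncertainty-aware PC: sample the dose coefficient, the
    dosimetry error and the DDREF, and report PC percentiles."""
    rng = np.random.default_rng(seed)
    beta = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)     # ERR/Sv (illustrative)
    dose = dose_sv * rng.lognormal(mean=0.0, sigma=0.3, size=n)   # dosimetry error
    ddref = rng.choice([1.0, 1.5, 2.0], size=n)                   # discrete DDREF
    err = beta * dose / ddref
    pc = 100.0 * err / (1.0 + err)
    return np.percentile(pc, [50, 95, 99])                        # median, 95th, 99th

# The upper 99th percentile is the quantity compared against the 50% criterion
# in the compensation decisions described above.
```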
With the extension of the follow-up period for Japanese atomic bomb survivors and more research on the risk of radiation-related cancer, better statistical models could be developed.
In OSH-PC, all types of leukemia except chronic lymphocytic leukemia (CLL) were considered as a group. The PC could not be estimated separately for acute myelogenous, acute lymphocytic, and chronic myelocytic leukemia because the individual baseline incidence rates were not available in Korea. In 2013, NIOSH-IREP incorporated a CLL risk model within the group of lymphoma and multiple myeloma, because CLL appears etiologically and clinically to be a lymphoma [25] and a risk model for CLL had been developed similar to that for lymphoma and multiple myeloma [26]. In OSH-PC, CLL was excluded from the PC calculation because its association with radiation exposure has not been clearly established, even in the Japanese atomic bomb cohort, and is still controversial.
Greenland and others [27,28,29] have argued that PC is a logically flawed concept and therefore unsuitable for the adjudication of compensation claims in possible cases of radiation-related cancer. They argued that a PC based on epidemiologic data, which does not consider the biologic mechanism, carries unavoidable uncertainties. The NCI-CDC working group concluded that the argument may have theoretical merit but, as a practical matter, is unpersuasive in the light of current information about radiation-related risk [9]. Even though PC has limitations, its use seems inevitable in estimating radiation-related cancer risk, especially in the workers' compensation process, keeping in mind that accurate dose reconstruction reflecting workplace exposure is more important than the PC itself.
PC is one of the most advanced tools for estimating the causation of occupational cancer. Despite the issues of uncertainty, the PC for Korean workers has been adjusted for the baseline incidence of cancer, and statistical methods have recently been used to adjust for the uncertainties. Because the fundamental cause of under- or over-estimation is probably inaccurate dose reconstruction, a proper guideline is necessary.
ASR:
Age standardized rate
CLL:
Chronic lymphocytic leukemia
DDREF:
Dose-dose rate effectiveness factor
ERR:
Excess relative risk
ILO:
International Labor Organization
KORRAS:
Korean Radiation Risk and Assigned Share
LET:
Linear energy transfer
MST:
Ministry of Science and Technology's ordinance
NCI-CDC:
National Cancer Institute-Centers for Disease Control and Prevention
NCI-IREP:
NCI-Interactive Radio Epidemiological Program
NIH:
National Institutes of Health
NIOSH:
National Institute for Occupational Safety and Health
OSH-PC:
Occupational Safety and Health-PC
OSHRI:
Occupational Safety and Health Research Institute
PC:
Probability of causation
RadRAT:
A radiation risk assessment tool for lifetime cancer risk projection
RHRI:
Radiation Health Research Institute
RHRI-PEPC:
RHRI-Program for Estimating the Probability of Causation
Bond VP. The cancer risk attributable to radiation exposure: some practical problems. Health Phys. 1981;40:108–11.
Breitenstein BD. The probability that a specific cancer and a specified radiation exposure are causally related. Health Phys. 1988;55:397–8.
Niu S, Deboodt P, Zeeb H. Approaches to attribution of detrimental health effects to ccupational ionizing radiation exposure and their application in compensation programmes for cancer. Geneva: ILO, IAEA, WHO; 2010. [Occupational Safety and Health Series, vol. 73].
International Labor Office (ILO). ILO List of Occupational Diseases (revised 2010). 2010.
CDC - NIOSH - Radiation Dose Reconstruction - Calculating Probability of Causation. http://www.cdc.gov/niosh/ocas/pccalc.html.
CSRLD: 2014 Compensation Scheme Annual Statement. http://www.csrld.org.uk/html/annual_statement.php.
Jeong M, Jin Y, Kim J. Program for estimating the probability of causation to Korean radiation workers with cancer. J Radiat Prot Res. 2004;29:221–30.
Jin Y-W, Jeong M, Moon K, Jo M-H, Kang S-K. Ionizing radiation-induced diseases in Korea. J Korean Med Sci. 2010;25(Suppl):S70–6.
Land C, Gilbert E, Smith J. Report of the NCI-CDC Working Group to Revise the 1985 NIH Radioepidemiological Tables. Washington: U.S. Department of Health and Human Services National Institutes of Health National Cancer Institute; 2003. p. 118.
National Research Council. Health Effects of Exposure to Low Levels of Ionizing Radiation: BEIR V. Washington, DC: National Academies Press; 1990.
UNSCEAR: Report of the United Nations Scientific Committee on the Effects of Atomic Radiation to the General Assembly. UNSCEAR; 2000. [Vol. I: Sources].
NIOSH-IREP. https://www.niosh-irep.com/irep_niosh/.
The Radiation Effects Research Foundation Website. http://www.rerf.jp/index_e.html.
Parkin D, Whelan S, Ferlay J, Teppo L, Thomas D. Cancer incidence in five continents Vol. VIII. Volume VIII. Lyon: International Agency for Resrach on Cancer; 2002. [IARC Scientific Publication, No. 155].
Berrington de Gonzalez A, Iulian Apostoaei A, Veiga LHS, Rajaraman P, Thomas BA, Owen Hoffman F, Gilbert E, Land C. RadRAT: a radiation risk assessment tool for lifetime cancer risk projection. J Radiol Prot Off J Soc Radiol Prot. 2012;32:205–22.
Preston DL, Ron E, Tokuoka S, Funamoto S, Nishi N, Soda M, Mabuchi K, Kodama K. Solid cancer incidence in atomic bomb survivors: 1958-1998. Radiat Res. 2007;168:1–64.
Yun YH, Lim MK, Jung KW, Bae J-M, Park SM, Shin SA, Lee JS, Park J-G. Relative and absolute risks of cigarette smoking on major histologic types of lung cancer in Korean men. Cancer Epidemiol Biomark Prev Publ Am Assoc Cancer Res Cosponsored Am Soc Prev Oncol. 2005;14:2125–30.
Kang S-K, Kim EA. Occupational diseases in Korea. J Korean Med Sci. 2010;25(Suppl):S4–S12.
Oh MS, Yoon JK, Kim HS, Kim H, Lee JK, Lee JH, Kim YH. Two case of Erythroleukemia and myelodysplastic syndrome in a non-destructive inspector. Korean J Occup Environ Med. 2011;23:471–9.
Kottou S, Neofotistou V, Tsapaki V, Lobotessi H, Manetou A, Molfetas MG. Personnel doses in Haemodynamic units in Greece. Radiat Prot Dosim. 2001;94:121–4.
Sont WN, Zielinski JM, Ashmore JP, Jiang H, Krewski D, Fair ME, Band PR, Létourneau EG. First analysis of cancer incidence and occupational radiation exposure based on the National Dose Registry of Canada. Am J Epidemiol. 2001;153:309–18.
Department of Health and Human Service (DHHS). DHHS 42 CFR Part 82 Methods for Radiation Dose Reconstruction under the Energy Employees Occupational Illness Compensation Program Act. 2002.
International Labor Office (ILO). Convention C042 - Workmen's Compensation (Occupational Diseases) Convention (Revised), 1934 (No. 42). 1934.
NCI (National Cancer Institute). Adult Non-Hodgkin Lymphoma: Treatment, health professional version. Bethesda: NCI, U.S. National Institutes of Health; 2009.
Trabalka JR, Apostoaei AI. Development of a risk model for chronic lymphocytic leukemia for NIOSH-IREP. A report for NIOSH-OCAS. Oak Ridge: SENES Oak Ridge, Inc.; 2010.
International Labor Office (ILO). Convention C121 - Employment Injury Benefits Convention, 1964 [Schedule I Amended in 1980] (No. 121). 1964.
Greenland S. Relation of probability of causation to relative risk and doubling dose: a methodologic error that has become a social problem. Am J Public Health. 1999;89:1166–9.
Greenland S, Robins JM. Conceptual problems in the definition and interpretation of attributable fractions. Am J Epidemiol. 1988;128:1185–97.
Greenland S, Robins JM. Epidemiology, justice, and the probability of causation. Jurimetrics. 2000;40:321–40.
BMC Pharmacology and Toxicology
New ibuprofen derivatives with thiazolidine-4-one scaffold with improved pharmaco-toxicological profile
Ioana-Mirela Vasincu1,
Maria Apotrosoaei1,
Sandra Constantin1,
Maria Butnaru2,
Liliana Vereștiuc2,
Cătălina-Elena Lupușoru3,4,
Frederic Buron4,
Sylvain Routier4,
Dan Lupașcu1,
Roxana-Georgiana Taușer1 &
Lenuța Profire1 ORCID: orcid.org/0000-0002-7953-3023
BMC Pharmacology and Toxicology volume 22, Article number: 10 (2021)
Aryl-propionic acid derivatives, with ibuprofen as the representative drug, are very important for therapy, being recommended especially for their anti-inflammatory and analgesic effects. On the other hand, the 1,3-thiazolidine-4-one scaffold is an important heterocycle associated with different biological effects such as anti-inflammatory and analgesic, antioxidant, antiviral, antiproliferative and antimicrobial effects. The present study aimed to evaluate the toxicity degree and the anti-inflammatory and analgesic effects of new 1,3-thiazolidine-4-one derivatives of ibuprofen.
To evaluate the degree of toxicity, a cell viability assay using the MTT method and an acute toxicity assay in mice were applied. The carrageenan-induced paw-edema model in rats was used to evaluate the anti-inflammatory effect, while the analgesic effect was assessed using the tail-flick test (thermal nociception in rats) and the writhing assay (visceral pain in mice).
The toxicological screening, in terms of cytotoxicity and degree of toxicity in mice, revealed that the ibuprofen derivatives (4a-n) are non-cytotoxic at 2 μg/ml. In addition, the ibuprofen derivatives reduced carrageenan-induced paw edema in rats; for most of them the maximum effect was recorded at 4 h after administration, which means they have a medium action latency, similar to that of ibuprofen. Moreover, for compound 4d the effect was higher than that of ibuprofen, even at 24 h after administration. The evaluation of the analgesic effect highlighted that compound 4h showed increased pain inhibition relative to ibuprofen in both thermal (tail-flick assay) and visceral (writhing assay) nociception models.
The study revealed, for the ibuprofen derivatives noted as 4m, 4k, 4e and 4d, a good anti-inflammatory and analgesic effect and also a safer profile compared with ibuprofen. These findings suggest their promising potential use in the treatment of inflammatory pain conditions.
Non-steroidal anti-inflammatory drugs (NSAIDs), among which aryl-propionic acid derivatives (ibuprofen, fenoprofen, ketoprofen, naproxen, etc.) have an important place, are among the most widely used drugs to treat pain and inflammation associated with rheumatic diseases, as well as other types of pain such as renal colic, biliary colic, headache, dysmenorrhea, etc. [1].
Moreover, recent research brings solid arguments regarding the beneficial effects of NSAIDs in neurodegenerative diseases and cancer, in which inflammation and over-production of pro-inflammatory cytokines, as well as oxidative stress, play an important role [2].
Epidemiological studies have shown that more than 35 million people take NSAIDs daily and that 40% of them are aged over 60 years. It should also be noted that annual sales of NSAIDs reach huge amounts exceeding $6 billion, and in Europe NSAID prescriptions account for more than 7.5% of all prescriptions issued in a year [1, 3].
On the other hand, the thiazolidine-4-one scaffold is one of the heterocycles most widespread in organic chemistry, being responsible for many biological effects [4] such as anti-inflammatory and analgesic, antifungal and antimicrobial, anti-mycobacterial, antioxidant, antiviral and anti-HIV, anticonvulsant, hypoglycemic and antitumor effects [5].
Current research aims at the design and synthesis of hybrid molecules containing two or more pharmacophore scaffolds, in order to improve the properties of classical drugs or to induce new biological effects while providing a safer profile [6].
The aim of this work was to investigate the biological effects of the new derivatives of ibuprofen with thiazolidine-4-one scaffold synthesized by our research group [7], targeting the toxicity degree and the anti-inflammatory and analgesic effects.
Dulbecco's Modified Eagle Medium (DMEM) with 4500 mg/l glucose, 110 mg/l sodium pyruvate and 0.584 g/l L-glutamine; Bovine Fetal Serum (BFS); penicillin/streptomycin/neomycin (P/S/N) solution with 5000 units penicillin, 5 mg streptomycin and 10 mg neomycin/ml; phosphate buffered saline; 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT); ibuprofen; dimethylsulfoxide (DMSO); acetic acid; tween 80; k-carrageenan. All materials and reagents were purchased from Sigma-Aldrich. The thiazolidine-4-one derivatives of ibuprofen (4a-n, Fig. 1) were previously synthesized and characterized by our research group [7].
The structure of ibuprofen derivatives with thiazolidine-4-one scaffold (4a-n)
Swiss albino mice and Wistar rats provided by the Biobase of the "Grigore T. Popa" University of Medicine and Pharmacy of Iasi were used. The animals were housed in polyethylene cages with access to water and food ad libitum for 7 days before starting the experiments. The environmental conditions during the study were maintained relatively constant: 23 ± 2 °C temperature, 40–60% relative humidity and a 12 h light/dark cycle. Food and water were withdrawn 18 h before starting the experiments. The animals (mice and rats, respectively) were randomly divided into several groups (n = 8), depending on the method applied. The inclusion criteria used in the design of the experiment referred to the weight of the animals (20–30 g for mice, 150–200 g for rats) and a healthy state, and the exclusion criterion was any pathological condition observed in the animals. All the experiments were designed to cause minimum harm to the animals. At the end of the experiments, the animals were anesthetized with ethyl ether and then euthanized by cervical dislocation. Prior to disposal, animal death was confirmed by observing movement, heartbeat, respiration and eye reflex. All procedures were strictly conducted by expert personnel and were in agreement with the guidelines for laboratory animal studies.
The study was conducted in agreement with actual deontology and ethics guidelines about laboratory animal studies (Law no. 206/27 May 2004, EU/2010/63 - CE86/609/EEC) and was approved by Research Ethics Committee of "Grigore T. Popa" University of Medicine and Pharmacy from Iasi (resolution no. 292).
The cell viability assay using the MTT method is based on the capacity of cells to reduce the slightly yellow tetrazolium salt to intensely purple formazan via the intracellular reduction system located mostly in the mitochondria. The amount of formazan, which is correlated with the number of viable cells, is measured spectrophotometrically at 570 nm [8]. Primary mesenchymal-type cells with stem cell potential, isolated by collagenization from adipose tissue, were used. The cell line was maintained in DMEM supplemented with BFS (10%) and P/S/N in a humidified incubator with 5% CO2 at 37 °C. The cells were seeded in 24-well plates (10^4 cells/ml) and treated after 24 h with different concentrations of the ibuprofen derivatives (4a-n) (50 μg/ml, 10 μg/ml, 2 μg/ml) for 24 h, 48 h and 72 h. DMSO was used as solvent, and a negative control (blank, DMSO 0.2%) and a positive control (DMSO 5%) were used under similar conditions. The culture medium was removed, MTT (500 μl) was added to each well and the cells were further incubated for 3 h at 37 °C. The medium was then removed and isopropanol was added. After 15 min, 100 μl from each well were transferred to a 96-well plate and the absorbance was recorded at 570 nm with a microplate reader [9, 10]. The cell viability rate was calculated using the following formula [9]:
$$ \mathrm{Cell\ viability}\ (\%) = \left( A_s / A_c \right) \times 100 $$
in which,
As = the absorbance of culture cells incubated with the sample (ibuprofen derivatives);
Ac = the absorbance of culture cells incubated with DMSO (0.2%).
The experiments were performed in triplicate and the results are presented as mean ± standard deviation (SD).
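As an illustration of this calculation, the following short Python sketch (not part of the original study; the absorbance values are hypothetical) computes the viability percentages and the mean ± SD over triplicates:

import numpy as np

# Hypothetical A570 readings from the microplate reader (triplicates)
a_sample = np.array([0.512, 0.498, 0.530])    # cells treated with an ibuprofen derivative
a_control = np.array([0.601, 0.588, 0.610])   # negative control (DMSO 0.2%)

# Cell viability (%) = (As / Ac) x 100, per replicate against the mean control
viability = a_sample / a_control.mean() * 100
print(f"Cell viability: {viability.mean():.2f} +/- {viability.std(ddof=1):.2f} %")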
The acute toxicity assay was performed in mice; the tested compounds (4a-n) were suspended in tween 80 and administered orally, in a volume of 0.1 ml/10 g animal. Different doses (1000–3000 mg/kg body weight) of the tested compounds were used [11] and the survival rate was noted at different time points: 24, 48 and 72 h and 7 and 14 days. The LD50 was calculated based on the Kärber arithmetic method [12], using the following formula:
$$ \mathrm{LD}_{50} = \mathrm{LD}_{100} - \frac{\sum (a \times b)}{n} $$
a = the difference between two successive doses of the tested compounds;
b = the average number of dead animals for two successive doses;
n = the number of animals in each group;
LD100 = the lethal dose causing the death of 100% of the test animals.
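A minimal sketch of the Kärber computation is given below (the dose–mortality numbers are hypothetical, not data from this study):

import numpy as np

def karber_ld50(doses, deaths, n_per_group):
    # a = difference between two successive doses
    # b = average number of dead animals in two successive dose groups
    doses = np.asarray(doses, dtype=float)
    deaths = np.asarray(deaths, dtype=float)
    a = np.diff(doses)
    b = (deaths[:-1] + deaths[1:]) / 2.0
    ld100 = doses[-1]  # assumes the highest dose killed all animals
    return ld100 - np.sum(a * b) / n_per_group

# Hypothetical example: n = 8 animals/group, doses in mg/kg b.w.
print(karber_ld50([1000, 1500, 2000, 2500, 3000], [0, 1, 4, 7, 8], 8))  # -> 2000.0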
The carrageenan-induced paw edema assay was used to evaluate the anti-inflammatory effect, according to protocols described in the literature [13, 14] with slight modifications. The edema was induced by intra-plantar administration of 0.2 ml of a 1% suspension of k-carrageenan in physiological saline into the left hind paw of the rat, and the paw volume was measured using a digital plethysmometer (LE). After induction of edema, the ibuprofen derivatives (4a-n) were administered orally in a daily dose representing 1/20 of the LD50, as a suspension in tween 80 (0.5 mL/100 g b.w.). Ibuprofen, as reference drug, and tween 80 (0.5 mL/100 g b.w.), as control, were used under similar conditions (Table 1).
Table 1 The doses (mg/kg b.w.) of ibuprofen derivatives (4a-n) used for anti-inflammatory and analgesic assays
The volume of the left hind paw was measured at different timelines (2, 4, 6 and 24 h). The edema inhibition (%) was calculated, using the following formula:
$$ \mathrm{Edema\ inhibition}\ (\%) = \frac{(\Delta V_c - \Delta V_t) \times 100}{\Delta V_c} $$
ΔVc = the rat paw volume recorded for the control group;
ΔVt = the rat paw volume recorded for the groups treated with ibuprofen derivatives.
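For instance, the edema inhibition at each time point can be computed as in the following sketch (the paw-volume values are hypothetical placeholders):

import numpy as np

timepoints_h = [2, 4, 6, 24]
# Hypothetical mean paw-volume increases (ml) over baseline, n = 8 rats per group
dV_control = np.array([0.55, 0.60, 0.52, 0.30])   # tween 80 vehicle
dV_treated = np.array([0.24, 0.19, 0.20, 0.15])   # one ibuprofen derivative

inhibition = (dV_control - dV_treated) * 100 / dV_control
for t, e in zip(timepoints_h, inhibition):
    print(f"{t:>2} h: edema inhibition = {e:.1f} %")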
Analgesic effect
The tail-flick assay, used as a model of thermal nociception, is based on measuring the sensitivity of rats when a thermal stimulus is applied to the tail. According to the experimental protocol, the animals were initially tested by applying a radiant beam to the distal part of the tail, measuring the latency at time 0 (T0) [15, 16]. The response to pain was quantified using a tail-flick algesimeter (Harvard Apparatus, United States of America). The ibuprofen derivatives (4a-n) were administered by oral gavage in a daily dose representing 1/20 of the LD50, as a suspension in tween 80 (0.5 mL/100 g b.w.) (Table 1). Ibuprofen, as reference drug, was used under similar conditions. To assess the analgesic effect, the response to pain was determined initially and at 4 h after administration of the ibuprofen derivatives (4a-n). The maximum allowed time (cut-off time) for not causing tissue lesions was set at 10 s (Tm).
The pain inhibition (%) was calculated for each tested compound, using the following formula:
$$ \mathrm{Pain\ inhibition}\ (\%) = \frac{T_t - T_0}{T_m - T_0} \times 100 $$
Tt = the nociceptive response measured at 4 h after administration of the ibuprofen derivatives;
T0 = the nociceptive response measured before any treatment (initially);
Tm = the maximum allowed time (cut-off time).
The presence of an analgesic effect (anti-nociceptive potential) is indicated by an increased latency response after administration of the ibuprofen derivatives relative to the initial value.
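Expressed in code, the pain inhibition is the fraction of the maximum possible effect, with latencies clamped at the 10 s cut-off (the latency values below are hypothetical):

def pain_inhibition(t0, tt, t_max=10.0):
    # t0: latency before treatment; tt: latency 4 h after dosing (seconds)
    tt = min(tt, t_max)  # clamp at the cut-off time to avoid tissue lesions
    return (tt - t0) / (t_max - t0) * 100

print(f"{pain_inhibition(3.2, 8.1):.1f} %")  # hypothetical latencies -> 72.1 %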
The writhing assay is a model of visceral pain, induced in mice by intraperitoneal administration of acetic acid. It is characterized by abdominal contractions, body movements (especially of the posterior limbs) and writhing of the dorsal-abdominal muscles with reduced locomotor activity. The experimental protocol applied was in agreement with the literature, with slight modifications [17, 18]. The ibuprofen derivatives (4a-n) were administered by oral gavage in a daily dose representing 1/20 of the LD50, as a suspension in tween 80 (0.1 mL/10 g b.w.) (Table 1). Ibuprofen, as reference drug, and tween 80 (0.1 ml/10 g b.w.), as control, were used under similar conditions. One hour after administration of the tested compounds, acetic acid (0.6% aqueous solution), as irritating agent, was injected intraperitoneally in a volume of 0.1 ml/10 g b.w. Five minutes later, the number of writhings for each mouse was noted every 5 min, over 30 min. The analgesic effect, expressed as inhibition (%) of writhings, was calculated for each tested compound using the following formula [14]:
$$ \mathrm{Inhibition}\ (\%) = \frac{(N_c - N_t) \times 100}{N_c} $$
Nc = the number of writhings recorded for mice in the control group;
Nt = the number of writhings recorded for mice in the groups treated with ibuprofen derivatives.
The analgesic activity is considered higher if the number of writhings is decreased in comparison with the control group.
Data are presented as mean ± standard deviation. One-way analysis of variance (ANOVA) followed by the Tukey post hoc test was used to determine whether there were any statistically significant differences between the tested compounds and the control. A p value < 0.05 was considered statistically significant.
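A minimal sketch of this statistical workflow in Python, using SciPy and statsmodels (the group values are randomly generated placeholders, not study data):

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical edema-inhibition values (%) for three groups of n = 8 animals
groups = {
    "vehicle":   rng.normal(5, 3, 8),
    "ibuprofen": rng.normal(56, 8, 8),
    "4d":        rng.normal(65, 9, 8),
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # reject = True -> p < 0.05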
Cell viability assay
The cell viability values recorded for the tested compounds (4a-n) at different concentrations (50 μg/ml, 10 μg/ml and 2 μg/ml) and at different time points (24, 48 and 72 h) are presented in Tables 2, 3 and 4.
At 50 μg/ml, a decrease in cell viability with exposure time was evidenced, the resulting percentages varying from 1.40 ± 0.26% to 64.25 ± 1.06% (24 h), from 1.14 ± 0.24% to 56.18 ± 0.88% (48 h) and from 0.34 ± 0.09% to 53.12 ± 1.65% (72 h). At this concentration, 4m (R = 4-NH2) and 4n (R = 4-NHCOCH3) were the least toxic (Table 2).
Table 2 Cell viability (%) of ibuprofen derivatives (4a-n) at 50 μg/ml
An improvement in cell viability was observed at 10 μg/ml, the recorded values ranging from 51.95 ± 1.42% to 97.51 ± 2.01% (24 h), from 41.04 ± 0.12% to 90.51 ± 0.66% (48 h) and from 41.03 ± 0.62% to 77.68 ± 1.59% (72 h). The least toxic derivatives were 4f (R = 3-NO2, 97.51 ± 2.01%), 4k (R = 4-CN, 92.29 ± 0.33%) and 4n (R = 4-NHCOCH3, 95.25 ± 1.74%), the cell viability values being slightly higher than (4f, 4n) or comparable to (4k) that of ibuprofen (92.83 ± 2.24%) (Table 3).
For all compounds (4a-n) the cell viability values were higher than 70% at 2 μg/ml, so they are considered non-cytotoxic [19] at this concentration (Table 4). The cell viability values ranged from 85.06 ± 2.01% to 99.64 ± 1.89% (24 h) and from 77.07 ± 2.06% to 99.61 ± 1.53% (48 h), while at 72 h the values fell within the 73.84–95.02% interval. After 72 h the least toxic were 4m (R = 4-NH2, 95.02 ± 1.46%), 4f (R = 3-NO2, 94.44 ± 0.10%), 4c (R = 4-Br, 94.39 ± 1.65%) and 4n (R = 4-NHCOCH3, 93.91 ± 1.96%).
Table 4 Cell viability (%) of ibuprofen derivatives (4a-n) at 2 μg/ml
Acute toxicity assay
Regarding the in vivo toxicity degree, the data revealed that all tested compounds were less toxic than ibuprofen (LD50 = 1375 mg/kg b.w.), with LD50 values ranging between 1565 mg/kg b.w. and 1840 mg/kg b.w. (Table 5). The least toxic derivatives were 4j (R = 4-CF3, LD50 = 1840 mg/kg b.w.), 4e (R = 2-NO2, LD50 = 1820 mg/kg b.w.) and 4g (R = 4-NO2, LD50 = 1820 mg/kg b.w.), which proved to be about 1.3 times less toxic than ibuprofen.
Table 5 The values of LD50 recorded for ibuprofen derivatives (4a-n)
Carrageenan-induced paw edema assay
The newly synthesized ibuprofen derivatives (4a-n) were tested at a dose of 1/20 of the LD50 and the results, expressed as edema inhibition (%), are shown in Fig. 2. It can be noticed that at 2 h after administration, the edema inhibition (%) varied between 42.72 ± 4.55% and 61.81 ± 9.87%, the effect of most tested compounds being comparable to that of ibuprofen (56.36 ± 7.87%). At this time the most active compounds were 4f (R = 3-NO2) and 4k (R = 4-CN), for which the edema inhibition (61.81 ± 9.87%) was slightly higher than that of ibuprofen. Noticeable activity, similar to that of ibuprofen, was also shown by 4e (R = 2-NO2), 4j (R = 4-CF3) and 4m (R = 4-NHCOCH3), for which the edema inhibition was 56.36 ± 7.87%. The effect remained in a similar range at 4 h after administration, the edema inhibition ranging between 39.01 ± 2.81% and 68.52 ± 9.57%. At this time point an appreciable effect was noted for 4d (R = 4-F) and 4f (R = 3-NO2), for which the edema inhibition was 68.52 ± 9.57%. A similar effect was noted for 4j (R = 4-CF3) and 4k (R = 4-CN), for which the edema inhibition was 66.55 ± 10.72%. Under similar conditions the edema inhibition recorded for ibuprofen was 66.55 ± 10.72%.
The edema inhibition (%) recorded for ibuprofen derivatives (4a-n) in comparison with ibuprofen at different timelines (the data represent the mean of 8 values ± standard deviation). One-way analysis of variance (ANOVA) followed by Tukey post hoc test was performed. & - p < 0.01 vs. vehicle, # - p < 0.001 vs. vehicle, * - p < 0.05 vs. ibuprofen, $ - p < 0.01 vs. ibuprofen, % - p < 0.001 vs. ibuprofen
At 6 h after administration the most active compounds were 4d (R = 4-F) and 4e (R = 2-NO2), for which the paw edema inhibition percentages were 65.71 ± 10.49% and 60.81 ± 8.49%, higher than the ibuprofen value (43.67 ± 5.20%). The analysis of the data recorded at 24 h after administration revealed a long-lasting anti-inflammatory effect for the tested derivatives, for some of them even slightly higher than that of ibuprofen. The most active compound proved to be 4d (R = 4-F), with an edema inhibition value of 53.04 ± 13.17%.
Tail-flick assay
Based on the reaction-time values of the animals from the control group and the value of the maximum set time (10 s), the maximum effect, expressed as pain inhibition (%), was calculated for each group treated with the ibuprofen derivatives (4a-n) (Fig. 3). The pain inhibition (%) recorded for the ibuprofen derivatives ranged between 13.86 ± 1.19% and 75.67 ± 5.94%, while for ibuprofen the recorded value was 67.15 ± 8.66%. The most active compounds proved to be 4m (R = 4-NH2, 75.67 ± 5.94%), 4k (R = 4-CN, 75.36 ± 3.08%) and 4h (R = 4-CH3, 74.45 ± 6.06%), for which the pain-inhibition effect was higher than that of ibuprofen.
The pain inhibition (%) recorded for ibuprofen derivatives (4a-n) in comparison with ibuprofen at different timelines (the data represent the mean of 8 values ± standard deviation). One-way analysis of variance (ANOVA) followed by Tukey post hoc test was performed. # - p < 0.001 vs. vehicle, % - p < 0.001 vs. ibuprofen
The compounds 4n (R = 4-NHCOCH3, 69.59 ± 5.56%), 4i (R = 3-CF3, 68.37 ± 4.70%) and 4e (R = 2-NO2, 63.50 ± 6.49%) also showed an appreciable pain-inhibitory effect, comparable with that of ibuprofen.
Writhing assay
The analysis of the obtained results (Fig. 4) shows a decrease in the number of writhings in the groups treated with ibuprofen derivatives (4a-n) in comparison to the control group, which means they can be considered to have good analgesic effects.
The inhibition of writhings (%) recorded for ibuprofen derivatives (4a-n) in comparison with ibuprofen (the data represent the mean of 8 values ± standard deviation). One-way analysis of variance (ANOVA) followed by Tukey post hoc test was performed. & - p < 0.01 vs. vehicle, # - p < 0.001 vs. vehicle, * - p < 0.05 vs. ibuprofen, $ - p < 0.01 vs. ibuprofen
The analgesic effect, expressed as inhibition of the writhing number, was 52.37 ± 10.33% for ibuprofen, while for the ibuprofen derivatives the recorded values varied between 23.38 ± 1.45% and 56.37 ± 10.30%. The most intense peripheral analgesic effect, higher than that of ibuprofen, was recorded for 4h (R = 4-CH3, 56.37 ± 10.30%) and 4e (R = 2-NO2, 53.06 ± 10.63%). Appreciable effects, comparable to that of ibuprofen, were also shown by 4a (R = H, 52.37 ± 11.39%), 4j (R = 4-CF3, 50.58 ± 12.69%), 4k (R = 4-CN, 50.30 ± 11.52%), 4f (R = 3-NO2, 49.47 ± 5.02%), 4g (R = 4-NO2, 49.47 ± 10.92%), 4m (R = 4-NH2, 49.19 ± 5.62%) and 4b (R = 4-Cl, 47.31 ± 7.54%).
Ibuprofen is a widely used non-steroidal anti-inflammatory drug belonging to the aryl-propionic acid derivatives. It acts mainly as a nonselective inhibitor of both cyclooxygenase (COX) enzymes (COX-1 and COX-2). Its free carboxyl group allows a variety of structural modifications and is also responsible for part of the side effects that can appear at the gastric, renal or liver level. It has been noted that modifying the carboxyl group can lead to compounds that become more selective for the COX-2 isoform. Some pharmacophores, such as two aromatic rings, also seem to be responsible for COX-2 selectivity by fitting into the enzyme structure [20]. Starting from ibuprofen, new thiazolidine-4-one derivatives have been synthesized by our research group [7], as candidates to alleviate the pain or inflammation associated with various pathological conditions. To prove the therapeutic potential of the synthesized derivatives, a pharmaco-toxicological screening including in vitro and in vivo assays was performed.
The MTT cell viability assay is an in vitro colorimetric assay which determines mitochondrial activity, hence providing information on cellular energy metabolism [9, 10]. It was observed that the cell viability values recorded for the tested compounds (4a-n) at 50 μg/ml decreased in time, good cell viability being recorded for 4m (R = 4-NH2) and 4n (R = 4-NHCOCH3) at all time points (24 h, 48 h, 72 h). An improvement in cell viability was observed at 10 μg/ml, for some derivatives (4f, R = 3-NO2; 4k, R = 4-CN; and 4n, R = 4-NHCOCH3) the recorded values being comparable with that of ibuprofen, especially at 24 h. At 2 μg/ml, all tested compounds (4a-n) are considered non-cytotoxic because the cell viability values are higher than 70% and comparable with ibuprofen at all time points. The lead compound in terms of cytotoxicity seems to be 4n, for which the cell viability values recorded at all tested concentrations (50 μg/ml, 10 μg/ml and 2 μg/ml) support its non-cytotoxicity. These results support the favorable influence of the NHCOCH3 substituent on the aromatic ring in decreasing the degree of cytotoxicity.
The acute toxicity, expressed by the median lethal dose (LD50), represents the basis for the toxicological classification of substances [21, 22]. It is known that toxicity is responsible for many side effects, which can appear either immediately or after a time, following administration of a single dose or multiple doses of a substance within 24 h. All tested ibuprofen derivatives (4a-n) are only slightly toxic, their LD50 values being higher than that of ibuprofen. It can therefore be appreciated that chemical modulation of ibuprofen using the thiazolidine-4-one scaffold leads to a decrease in toxicity. In addition, substitution of the aromatic ring of the thiazolidine-4-one scaffold seems to have a favorable influence in reducing the degree of toxicity.
Carrageenan-induced paw edema is a widely used experimental model of acute inflammation for assessing the anti-inflammatory action of compounds designed as potential anti-inflammatory agents [23,24,25,26]. This model involves many mediators, such as prostaglandins, cytokines, histamine and bradykinin, that stimulate the inflammatory process [27,28,29]. Our study demonstrated that all tested ibuprofen derivatives reduced paw edema, for most of them the effect being comparable to that of ibuprofen, used as reference drug. For most of them (4a-g, 4j, 4k, 4n) the maximum effect was recorded at 4 h after administration, similar to ibuprofen, which means they are considered compounds with medium action latency. For 4h (R = 4-CH3), 4i (R = 3-CF3), 4l (R = 2,6-diCl) and 4m (R = 4-NH2) the maximum effect was recorded at 2 h after administration, which means they have short action latency. The most promising compounds proved to be 4d (R = 4-F), 4a (R = H), 4e (R = 2-NO2), 4g (R = 4-NO2) and 4f (R = 3-NO2), for which a long-term anti-inflammatory effect was recorded. In addition, the effect of these compounds was higher than or comparable to that of ibuprofen, used as reference drug, at all studied time intervals (2, 4, 6 and 24 h).
The tail-flick assay is a nociceptive model which measures animal nociceptive response latencies to a thermal stimulus, based mainly on a spinal response [30]. It is known that the response to pain and the reaction time to a painful stimulus are mediated at the level of the central nervous system, and more specifically at the spinal level [31]. Substances which show an increased reaction time to the painful stimulus, compared to the control, are considered to have analgesic potential. In our study the influence of the chemical modulation of ibuprofen on the pain-inhibition effect was noticed. The best influence was shown by the 4-NH2, 4-CN and 4-CH3 radicals substituting the aromatic ring of the thiazolidine-4-one scaffold, the corresponding compounds (4m, 4k and 4h) being more active than ibuprofen.
The writhing assay is a model used to evaluate the inhibition of visceral pain, considering behavioral modifications in terms of abdominal contortions [32]. The most intense peripheral analgesic effect was recorded for 4h (R = 4-CH3) and 4e (R = 2-NO2), for which the analgesic effect was higher than that of ibuprofen. The findings of the analgesic assays, based on centrally and viscerally mediated pain, suggest that both mechanisms are involved in the anti-nociceptive activity of the tested compounds.
In this study the degree of toxicity as well as the anti-inflammatory and analgesic effects of new ibuprofen derivatives with thiazolidine-4-one scaffold were reported, using in vitro and in vivo assays. The results highlighted the therapeutic potential of four ibuprofen derivatives (4m, 4k, 4e, 4d) for different disorders in which inflammation and pain play an important role, such as inflammatory, neurodegenerative and cancer diseases.
The data supporting the conclusions of this article are included in the article. Supplementary data can be requested from the corresponding author.
NSAIDs:
Non-steroidal anti-inflammatory drugs
DMEM:
Dulbecco's Modified Eagle Medium
BFS:
Bovine Fetal Serum
P/S/N:
Penicillin/streptomycin/neomycin
MTT:
3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide
DMSO:
Dimethylsulfoxide
LD50:
Lethal dose 50
b.w.:
Body weight
Wongrakpanich S, Wongrakpanich A, Melhado K, Rangaswami J. A comprehensive review of non-steroidal anti-inflammatory drug use in the elderly. Aging Dis. 2018;9(1):143–50.
Zhang Z, Chen F, Shang L. Advances in antitumor effects of NSAIDs. Cancer Manag Res. 2018;10:4631–40.
Pilotto A, Sancarlo D, Addante F, Scarcelli C, Franceschi M. Non-steroidal anti-inflammatory drug use in the elderly. Surg Oncol. 2010;19(3):167–72.
Ghafoori H, Rezaei M, Mohammadi A. Anti-inflammatory effects of novel thiazolidinone derivatives as bioactive heterocycles on RAW264.7 cells. Iran J Allergy Asthma Immunol. 2017;16(1):28–38.
Kaur Manjal S, Kaur R, Bhatia R, Kumar K, Singh V, Shankar R, Kaur R, Rawal RK. Synthetic and medicinal perspective of thiazolidinones: a review. Bioorg Chem. 2017;75:406–23.
Pawełczyk A, Sowa-Kasprzak K, Olender D, Zaprutko L. Molecular consortia-various structural and synthetic concepts for more effective therapeutics synthesis. Int J Mol Sci. 2018;19(4):1104.
Vasincu IM, Apotrosoaei M, Panzariu AT, Buron F, Routier S, Profire L. Synthesis and biological evaluation of new 1,3-thiazolidine-4-one derivatives of 2-(4-isobutylphenyl) propionic acid. Molecules. 2014;19(9):15005–25.
Lim SW, Loh HS, Ting KN, Bradshaw TD, Allaudin ZN. Reduction of MTT to purple formazan by vitamin E isomers in the absence of cells. Trop Life Sci Res. 2015;26(1):111–20.
Racles C, Iacob M, Butnaru M, Sacarescu L, Cazacu M. Aqueous dispersion of metal oxide nanoparticles, using siloxane surfactants. Colloid Surf A - Physicochem Eng Asp. 2014;448:160–8.
Macocinschi D, Filip D, Vlad S, Butnaru M, Knieling L. Evaluation of polyurethane based on cellulose derivative-ketoprofen biosystem for implant biomedical devices. Int J Biol Macromol. 2013;52:32–7.
Ranganathan A, Hindupur R, Vallikannan B. Biocompatible lutein-polymer-lipid nanocapsules: acute and subacute toxicity and bioavailability in mice. Mater Sci Eng C. 2016;69:1318–27.
Ramakrishnan MA. Determination of 50% end point titer using a simple formula. World J Virol. 2016;5(2):85–6.
Oliveira PA, de Almeida TB, de Oliveira RG, Gonçalves GM, de Oliveira JM, Neves dos Santos BB, Laureano-Melo R, WDS C, TDN F, MLAA V, Marinho BG. Evaluation of the antinociceptive and anti-inflammatory activities of piperic acid: involvement of the cholinergic and vanilloid systems. Eur J Pharmacol. 2018;834:54–64.
Nayak A. In vitro and in vivo study of poly(ethylene glycol) conjugated ibuprofen to extend the duration of action. Sci Pharm. 2011;79(2):359–73.
Reis GM, Fais RS, Prado WA. The antinociceptive effect of stimulating the retrosplenial cortex in the rat tail-flick test but not in the formalin test involves the rostral anterior cingulate cortex. Pharmacol Biochem Behav. 2015;131:112–8.
Le Bars D, Gozariu M, Cadden SW. Animal models of nociception. Pharmacol Rev. 2001;53(4):597–652.
De Oliveira AM, Conserva LM, De Souza Ferro JN, Brito FDA, Lemos RPL, Barreto E. Antinociceptive and anti-inflammatory effects of octacosanol from the leaves of Sabicea Grisea Var. Grisea in mice. Int J Mol Sci. 2012;13(12):1598–611.
Gupta AK, Parasar D, Sagar A, Choudhary V, Chopra BS, Garg R, Ashish KN. Analgesic and anti-inflammatory properties of Gelsolin in acetic acid induced writhing, tail immersion and carrageenan induced paw edema in mice. PLoS One. 2015;10(8):e0135558.
Butruk BA, Ziętek PA, Ciach T. Simple method of fabrication of hydrophobic coatings for polyurethanes. Cent Eur J Chem. 2011;9(6):1039–45.
Ahmadi A, Khalili M, Ahmadian S, Shahghobadi N, Nahri-Niknafs B. Synthesis and pharmacological evaluation of new chemical entities based on paracetamol and their ibuprofen conjugates as novel and superior analgesic and anti-inflammatory candidates. Pharm Chem J. 2014;48:109–15.
Paramveer DS, Chanchal MK, Paresh M, Asha R, Shrivastava B, Rajesh KN. Effective alternative methods of LD50 help to save number of experimental animals. J Chem Pharm Res. 2010;2(6):450–3.
Vasincu A, Ababei DC, Arcan OD, Bulea D, Neamţu M, Chiriac SB, Bild V. Preliminary experimental research on acute toxicity of Vernonia Kotschyana extracts in mice. Vet Drug. 2018;12(1):57–62.
Martinez RM, Longhi-Balbinot DT, Zarpelon AC, Staurengo-Ferrari L, Baracat MM, Georgetti SR, Sassonia RC, Verri WA Jr, Casagrande R. Anti-inflammatory activity of betalain-rich dye of Beta vulgaris: effect on edema, leukocyte recruitment, superoxide anion and cytokine production. Arch Pharm Res. 2015;4:494–504.
Mahapatra DK, Dadure KM, Shivhare RS. Edema reducing potentials of some emerging Schiff's bases of murrayanine. MOJ Biorg Org Chem. 2018;2(4):171–4.
Srivastava AR, Bhatia R, Chawla P. Synthesis, biological evaluation and molecular docking studies of novel 3,5-disubstituted 2,4-thiazolidinediones derivatives. Bioorg Chem. 2019;89:102993.
Abdellatif KRA, Fadaly WAA, Kamel GM, Elshaier Y, El-Magd MA. Design, synthesis, modeling studies and biological evaluation of thiazolidine derivatives containing pyrazole core as potential anti-diabetic PPAR-γ agonists and anti-inflammatory COX-2 selective inhibitors. Bioorg Chem. 2019;82:86–99.
Karim N, Khan I, Khan W, Khan I, Khan A, Halim SA, Khan H, Hussain J, Al-Harrasi A. Anti-nociceptive and anti-inflammatory activities of asparacosin a involve selective cyclooxygenase 2 and inflammatory cytokines inhibition: an in-vitro, in-vivo, and in-silico approach. Front Immunol. 2019;10:581.
Singh G, Singh G, Bhatti R, Gupta M, Kumar A, Sharma A, Ishar MPS. Indolyl-isoxazolidines attenuate LPS-stimulated pro-inflammatory cytokines and increase survival in a mouse model of sepsis: identification of potent lead. Eur J Med Chem. 2018;153:56–64.
Galvão GM, Florentino IF, Sanz G, Vaz BG, Lião LM, Sabino JR, Cardoso CS, da Silva DPB, Costa EA, Silva ALP, da Silva ACG, Valadares MC, Leite JA, de S Gil E, Menegatti R. Anti-inflammatory and antinociceptive activity profile of a new lead compound - LQFM219. Int Immunopharmacol. 2020;88:106893.
Santenna C, Kumar S, Balakrishnan S, Jhaj R, Ahmed SN. A comparative experimental study of analgesic activity of a novel non-steroidal anti-inflammatory molecule - zaltoprofen, and a standard drug - piroxicam, using murine models. J Exp Pharmacol. 2019;11:85–91.
Mischkowski D, Palacios-Barrios EE, Banker L, Dildine TC, Atlas LY. Pain or nociception? Subjective experience mediates the effects of acute noxious heat on autonomic responses. Pain. 2018;159(4):699–711.
Lenardão EJ, Savegnago L, Jacob RG, Victoria FN, Martinez DM. Antinociceptive effect of essential oils and their constituents: an update review. J Braz Chem Soc. 2016;27(3):435–74.
This study was financially supported by University of Medicine and Pharmacy "Grigore T. Popa" Iasi, based on contracts no. 23401/07.11.2018 and 30341/28.12.2017 and by the grant of UEFISCDI, PN III Program, AUF-RO, AUF-IFA 2019–2020, contract no. 28/2019. The study funders had no further role in the study design, data collection, analyses, interpretation of results, writing of the article, or the decision to submit it for publication.
Pharmaceutical Chemistry Department, Faculty of Pharmacy, University of Medicine and Pharmacy "Grigore T. Popa" of Iasi, Iași, Romania
Ioana-Mirela Vasincu, Maria Apotrosoaei, Sandra Constantin, Dan Lupașcu, Roxana-Georgiana Taușer & Lenuța Profire
Biomedical Sciences Department, Faculty of Medical Bioengineering, University of Medicine and Pharmacy "Grigore T. Popa" of Iasi, Iași, Romania
Maria Butnaru & Liliana Vereștiuc
Pharmacology Department, Faculty of Medicine, University of Medicine and Pharmacy "Grigore T. Popa" of Iasi, Iași, Romania
Cătălina-Elena Lupușoru
Institute of Organic and Analytical Chemistry, Université d'Orléans - Pôle de chimie, Orléans, France
Cătălina-Elena Lupușoru, Frederic Buron & Sylvain Routier
Ioana-Mirela Vasincu
Maria Apotrosoaei
Sandra Constantin
Maria Butnaru
Liliana Vereștiuc
Frederic Buron
Sylvain Routier
Dan Lupașcu
Roxana-Georgiana Taușer
Lenuța Profire
LP, SR, FB, IMV designed the study. IMV, MA, SC, MB performed the research. IMV, LV, CEL, LP, DR analyzed the data. IMV, DL, RGT prepared the article. All authors read and approved the final manuscript.
Correspondence to Sylvain Routier or Lenuța Profire.
Animal experiments were approved by the Research Ethics Committee of "Grigore T. Popa" University of Medicine and Pharmacy from Iasi (resolution no. 292).
Vasincu, IM., Apotrosoaei, M., Constantin, S. et al. New ibuprofen derivatives with thiazolidine-4-one scaffold with improved pharmaco-toxicological profile. BMC Pharmacol Toxicol 22, 10 (2021). https://doi.org/10.1186/s40360-021-00475-0
Accepted: 20 January 2021
DOI: https://doi.org/10.1186/s40360-021-00475-0
Thiazolidine-4-one
Toxicity degree
Anti-inflammatory and analgesic effects
Adaptive information processing of network modules to dynamic and spatial stimuli
J. Krishnan1 ORCID: orcid.org/0000-0001-6196-2033 &
Ioannis Floros1,2
BMC Systems Biology volume 13, Article number: 32 (2019)
Adaptation and homeostasis are basic features of information processing in cells and are seen in a broad range of contexts. Much of the current understanding of adaptation in network modules/motifs is based on their response to simple stimuli. Recently, there have also been studies of adaptation in dynamic stimuli. However, a broader synthesis of how different circuits of adaptation function, and of which circuits enable a broader adaptive behaviour in classes of more complex and spatial stimuli, is largely missing.
We study the response of a variety of adaptive circuits to time-varying stimuli such as ramps, periodic stimuli and static and dynamic spatial stimuli. We find that a variety of responses can be seen in ramp stimuli, making this a basis for discriminating between even similar circuits. We also find that a number of circuits adapt exactly to ramp stimuli, and dissect these circuits to pinpoint what characteristics (architecture, feedback, biochemical aspects, information processing ingredients) allow for this. These circuits include incoherent feedforward motifs, inflow-outflow motifs and transcritical circuits. We find that changes in the location in such circuits where a signal acts can result in non-adaptive behaviour in ramps, even though the location was associated with exact adaptation in step stimuli. We also demonstrate that certain augmentations of basic inflow-outflow motifs can alter the behaviour of the circuit from exact adaptation to non-adaptive behaviour. When subject to periodic stimuli, some circuits (inflow-outflow motifs and transcritical circuits) are able to maintain an average output independent of the characteristics of the input. We build on this to examine the response of adaptive circuits to static and dynamic spatial stimuli. We demonstrate how certain circuits can exhibit a graded response in spatial static stimuli with an exact maintenance of the spatial mean value. Distinct features which emerge from the consideration of dynamic spatial stimuli are also discussed. Finally, we also build on these results to show how different circuits showing any combination of presence or absence of exact adaptation in ramps, exact maintenance of time-average output in periodic stimuli and exact maintenance of the spatial average of output in static spatial stimuli may be realized.
By studying a range of network circuits/motifs on one hand and a range of stimuli on the other, we isolate characteristics of these circuits (structural) which enable different degrees of exact adaptive and homeostatic behaviour in such stimuli, how they may be combined, and also identify cases associated with non-homeostatic behaviour. We also reveal constraints associated with locations where signals may act to enable homeostatic behaviour and constraints associated with augmentations of circuits. This consideration of multiple experimentally/naturally relevant stimuli along with circuits of adaptation of relevance in natural and engineered biology, provides a platform for deepening our understanding of adaptive and homeostatic behaviour in natural systems, bridging the gap between models of adaptation and experiments and in engineering homeostatic synthetic circuits.
Cellular systems employ a number of distinct and characteristic nonlinear information processing modules, such as monostable switches, bistable switches and oscillators. Each of these modules plays critical roles in cells, and consequently such modules have been a focal point in a number of cellular contexts [1–5]. A particular information processing characteristic repeatedly encountered in cellular networks is adaptation. Adaptation is the characteristic of a module wherein the output of the module is essentially independent of the input at steady state, even though the input is "connected" to the output. A confluence of different characteristics of the module allows for this special form of information processing.
Adaptation in cellular networks is seen in multiple contexts and with different consequences. One common context in which adaptation is seen is in sensory transduction. In the context of chemotaxis (directed cellular migration in response to gradients of chemical concentrations), adaptation is observed in a range of cell types including bacteria (E. coli, Rhodobacter sphaeroides) and eukaryotes (Dictyostelium) [6–10]. The fact that adaptation is present right at the sensory level allows bacteria to exhibit sensitivity to temporal gradients over a very broad range of ambient mean concentrations. In the case of Dictyostelium, adaptation to spatially uniform stimuli is seen alongside non-adaptive behaviour in spatial gradients. In both these cases, it appears that adaptation has been incorporated, through evolution, into signal transduction to realize specific capabilities for cells. Another context in which adaptation plays a similar role is visual signal transduction [11–13]. Adaptation is also seen in other signal transduction settings such as osmoregulation, studied for instance in yeast, and in the heat shock response [14–17]. Finally, homeostasis in cellular systems in response to different changes in the environment is associated with adaptive behaviour of this kind, an example being iron homeostasis in bacteria [18–24].
A fairly broad range of studies have focussed on different aspects of adaptation in biochemical and genetic networks. On one hand, there are a number of experimental studies of adaptation in specific contexts, including those listed above. These studies show how adaptation occurs in the relevant circuits/pathways and what the cellular implications are. On the theoretical side, in addition to modelling the adaptive modules in various contexts, studies have focussed on a number of related information processing aspects. Exact adaptation, and in particular robust exact adaptation, has been widely studied, discriminating it from non-robust adaptation and focussing on the integral control underpinnings (e.g. [23, 25–29]). Motifs which result in adaptation have been studied widely. For instance, [30] studied a range of motifs involving inflow and outflow resulting in exact adaptation. This was further expanded to study inflow and outflow controllers in adaptation [31]. An exhaustive computational study of 3-node motifs revealed incoherent feedforward and negative feedback as the two adaptive motifs which emerge [32]. Studies of information processing in adaptive motifs have been performed in [30, 33–40], focussing on the spatial and temporal behaviour of incoherent adaptive circuits, the response of motifs to oscillatory stimuli, and spatial and stochastic aspects of adaptive signal transduction. Fold-adaptation, which combines adaptation with a fixed fold-change behaviour, has also been the focus of numerous studies [9, 41–43]. Finally, adaptive behaviour has also been engineered in genetic circuits in synthetic biology (e.g. see [44]).
While there have been a large number of studies on adaptation, there are relatively few which study the response of adaptive modules/circuits to dynamic and complex stimuli [34, 35, 37–39]. Understanding the response of adaptive circuits and obtaining a synthesis of adaptive responses in dynamic and complex stimuli is important for multiple reasons. Firstly, this deepens our understanding of adaptive modules/circuits and shines a light on how they (and the cells using these circuits) process dynamic information. Secondly, a number of adaptive modules behave in a more or less similar way to simple stimuli such as step inputs, and it is not clear at the outset whether and under which conditions such similar behaviour extends to complex dynamic stimuli. Thirdly, certain dynamic and spatial stimuli have already been used in experiments in certain contexts [45–49]; however, there are a number of contexts where this aspect has not been studied, but whose deployment could provide valuable insights. Such a study has relevance in both cases. Fourthly, cellular systems are faced with dynamic and complex stimuli and dynamic environments as a norm, and it is important to assess how adaptation impacts behaviour and decision-making in these environments. Since dynamic environments are the norm rather than the exception, this can provide important clues as to what types of adaptive modules have emerged in evolution. Finally, a broader view of adaptive circuits and their response to dynamic and spatial stimuli suggests engineering design principles associated with circuits responding adaptively to one or more classes of complex stimuli. This serves as a basis for engineering biomolecular circuits in cells with specific adaptive and homeostatic capabilities.
In this paper, we examine the response of adaptive circuits to different transient signals such as ramp stimuli (of different types) and periodic stimuli, and subsequently spatial signals. In order to do this, we draw on a range of adaptive circuits/motifs in the literature. We investigate the response of these various modules and relate this to characteristics (structural, biochemical, auxiliary) of the circuit. Of particular interest here is the subset of adaptive circuits which exhibit degrees of exact adaptation to one or more classes of complex stimuli, and our aim is to distill the underlying characteristics responsible for this. This allows us to achieve a clear synthesis of how different circuit characteristics impact the dynamic response and enable broader adaptive behaviour.
In the next section we discuss the circuits and motifs which are employed in our study. In the subsequent section, computational results are presented. This is followed by a concise analytical discussion in the next section (with further details in the Appendix). This analytical section can be skipped, without any loss of continuity, by readers not interested in the details. The conclusions synthesize the various insights. The Additional file 1 contains further information.
At the outset we emphasize that our goal is to obtain a synthesis of the functioning of adaptive circuits in dynamic/spatial stimuli in a systematic manner, with a particular view to determining when broader exact adaptive behaviour is seen and tracing this to circuit characteristics. Adaptation can be realized through both gene regulatory and biochemical circuits, and there is a range of models which have been used to model adaptation. We use a suite of models drawn from the literature as a basis for probing the response of adaptive circuits to dynamic environments. These models are presented in the Additional file 1. The models encompass different biological types (genetic, biochemical), different model network structures and particular characteristics. Homeostatic circuits can also be engineered through DNA strand displacement reactions, modelled as reaction networks, but for the most part we do not study these circuits separately.
For purposes of organization, the models are placed in a table according to two characteristics of the reaction network/motif (see Table 1). On one axis, the classification is based on how the signal appears in the model: zeroth order reaction/source, first order irreversible reaction or first order reversible reaction. Transcriptional models are classified along with the zeroth order reaction. On the second axis, the dominant characteristic responsible for adaptation in static signals is the basis for classification. The categories employed here are incoherent feedforward, negative feedback, open systems and other special characteristics. Most adaptive circuits studied fall into one of these categories. For instance, incoherent feedforward and negative feedback motifs have been the focus of numerous studies in both signalling and gene regulation. Various studies including [30, 31] directly employ the fact that the network is an open system as the primary basis for adaptation. In this context, we point out that closed biochemical networks (i.e. without inflow/outflow) can result in models analogous to inflow/outflow motifs: this occurs if the only source of flux to a subnetwork from the rest of the network is a zero-order reaction, while flux from the subnetwork to the rest of the network can occur through first order reactions. In such cases the subnetwork has a model essentially analogous to an inflow-outflow motif of the kind we study, the only difference being that there is coupling to the ambient network through conservation of species (this still allows for adaptive behaviour). This is especially pertinent because some negative feedback circuits (in E. coli chemotaxis) achieve exact adaptation precisely because of the presence of zeroth order reactions, and the resulting model could belong to both the negative feedback and the inflow-outflow category. In this case, we briefly study it as a particular example of negative feedback (as it is the core part of a negative feedback adaptive mechanism), but draw parallels to inflow-outflow motifs. An example of a distinct behaviour responsible for adaptation is an autocatalytic circuit exhibiting a transcritical bifurcation [50, 51]. Here the system is a closed system and the feedback is a positive feedback from substrate to enzyme. This circuit has entities which reach a steady state independent of the signal. From the vantage point of the adapting variable, the governing mechanism is that of an autocatalytic negative feedback. This is treated separately as it involves a distinct ingredient responsible for adaptation; in fact the core biochemical circuit is an example of one which has been studied in the context of absolute concentration robustness [51].
Table 1 List of primary models analyzed
We have analyzed the entire suite of models in detail. For purposes of clarity, we will present results on a smaller selection of models, which encompasses the different underpinning characteristics and the range of behaviour observed. We also examine other variants of models possessing the same characteristics responsible for adaptation, in the supplementary information. The models employed are all ODE based models (except in cases where a spatial analogue of an ODE model is considered). Further discussion of all models, along with model equations, parameters and inputs, is presented in the Additional file 1.
Model parameters are chosen according to the original model descriptions in the literature. A number of models exhibit exact adaptation in response to a step input (i.e. exact recovery of the output to its prestimulus value) while a few exhibit inexact adaptation (partial recovery of the output to its prestimulus value). We keep parameters fixed in our study. We note here that a separation in time scale between the input variation and the circuit time scale can result in adaptive behaviour being essentially maintained (discussed later): we do not assume such a special case, and generally the time scales of input variation and adaptation are comparable.
In our simulations we start with a steady basal level of input and wait for the system to reach a steady state. Following this, we subject the models to different kinds of experimentally relevant inputs: (a) a linear ramp (and, by way of contrast, we also examine other increasing stimuli, such as quadratic ramps and exponential stimuli); since a stimulus is always bounded (due to a finite number of receptors or other factors), we also briefly examine ramps which saturate; (b) periodic sinusoidal oscillations; (c) spatially varying stimuli, including static spatial gradients, and dynamic spatiotemporal stimuli such as travelling waves and standing waves. For the presentation of responses to spatial stimuli, we focus on a smaller subset of circuits, where clear correspondences with temporal behaviour can be made (discussed later).
At the outset we note that the chosen circuits can exhibit different types of responses, depending on the nature of the stimulus. These include (i) exact adaptation, (ii) exact adaptation of certain features of the output (e.g. mean values in response to periodic stimuli), (iii) inexact adaptation of the response or of mean values, as appropriate, and (iv) non-adaptive responses.
Most circuits exhibit some degree of (inexact) adaptive behaviour to the stimuli considered. Our particular focus is on key qualitative features of the landscape of circuit responses, especially on circuits exhibiting different degrees of exactly adaptive behaviour in complex stimuli, where the behaviour can be traced to structural features of the model, independent of model parameters. This reveals core design features responsible for the adaptive behaviour. We also discuss non-adaptive responses, as they represent a fundamentally opposite qualitative response. We study the effect of numerically varying characteristics of inputs (or model parameters) if this is especially relevant. Our analysis and presentation of the results involves a combination of numerical simulations and mathematical analysis: numerical simulations reveals a range of different and noteworthy behaviour, while mathematical analysis reveals how certain motifs/circuits display different kinds of exact adaptive behaviour in complex stimuli. Simulations are performed in MATLAB using ode15s.
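To make the simulation protocol concrete, the sketch below implements a generic two-variable incoherent feedforward toy model (an illustrative assumption, not one of the specific models of Table 1) in Python, using SciPy's stiff BDF integrator as an analogue of MATLAB's ode15s, with a basal input switching to a linear ramp at t = 50 (all parameter values are placeholders):

import numpy as np
from scipy.integrate import solve_ivp

def signal(t):
    return 1.0 + 0.05 * max(t - 50.0, 0.0)   # basal level, then a linear ramp

def iff(t, y, k1=1.0, k2=1.0, k3=1.0, k4=1.0):
    x, r = y
    s = signal(t)
    dx = k1 * s - k2 * x        # slow inhibitory arm tracking the signal
    dr = k3 * s - k4 * x * r    # output: excited by s, degraded via x
    return [dx, dr]

# start from the adapted steady state for the basal input s = 1
sol = solve_ivp(iff, (0.0, 300.0), [1.0, 1.0], method="BDF", dense_output=True)
t = np.linspace(0.0, 300.0, 1000)
output = sol.sol(t)[1]          # response variable r(t)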
We have analyzed the set of models in Table 1. This reveals both similarities between different models in a given group and differences, which can arise from subtle differences in details; this is also useful for determining which combinations of characteristics can exhibit exact adaptive behaviour in complex stimuli. We now comment on how we present the results. We present the results for a selection of circuits (Fig. 1), which essentially covers both the different model types and the distinct qualitative behaviour which we wish to demonstrate. The circuits include incoherent feedforward and feedback motifs, open systems, as well as a circuit of autocatalytic feedback giving rise to a transcritical bifurcation. We employ a typical feedback motif (a 3-node motif with a buffer node), contrasting it with other feedback circuits (discussed later), and consider multiple variants of incoherent feedforward motifs. For instance, models KR09 and KR11 are models of an incoherent feedforward adaptive motif developed to explain adaptation in Dictyostelium, and an expansion of that model to incorporate saturation (here and below, we refer to models through the labels used to denote them in Additional file 1). Model CO09.M2 depicts an incoherent feedforward structure with thresholds and saturation. The model KI14 is another incoherent feedforward model exhibiting (parameter-dependent, approximate) fold adaptation. Similarly, multiple variants of open circuits are possible. Figure 1 shows both linear and cyclic motifs DR08.M1 and DR08.M33 (which both exhibit perfect adaptation in a step stimulus), while other models involving open systems involve regulation of either inflow or outflow: a representative candidate is the model DR12.M4 shown in Fig. 1. We present a selection of computational results to reveal the range of behaviour. This is followed by analytical results which focus on explaining when adaptation in dynamic stimuli may be observed and what the associated design principles are.
Schematic of circuits. A schematic representation of the primary motifs/circuits in the literature, employed in this paper: (a) Linear network structure, with inflow and outflow: model DR08.M1 (b, c) Two variants of circuits comprising two linear inflow/outflow motifs interacting with one another (DR08.M1**, DR12.M4). (d) A cyclic motif with inflow and outflow (DR08.M33). (e) A negative feedback motif (MA09.FB). (f, g, h) Three variants of incoherent feedforward motifs (KR09, CO09.M2, KI14) (i) A transcritical motif (TC). Additional variants of linear and cyclic motifs with inflow and outflow are presented in the Additional file 1: Figure S1
Ramp stimuli
A range of contrasting responses to ramp stimuli are elicited from adaptive circuits. We start by examining the response of these circuits to a linear ramp input. The model is subject to a low steady basal level of input, to which it adapts. Using this steady state as the initial condition, a ramp input of fixed slope is applied at a specific time t=50. The range of responses is seen in Fig. 2. Some circuits (such as DR12.M1) do not reach a steady state, as seen in the steadily increasing output of the circuit. Other circuits, such as DR08.M31, reach a steady state, but the response is non-adaptive (i.e. monotonic and saturating). In fact the new steady state depends on the gradient of the ramp. In contrast, other models such as KR11 exhibit partial adaptation: as seen in Fig. 2, depending on the parameters of the model, they can exhibit underadaptation or overadaptation. Finally, other circuits, such as DR08.M1, exhibit perfect adaptation in a ramp, and this feature does not depend on the gradient of the ramp. This demonstrates that while all these circuits exhibit perfect adaptation or partial adaptation (often close to perfect adaptation) in step stimuli, their responses to temporal gradients can be strikingly different, spanning a range of outcomes.
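To make this protocol concrete, the following is a minimal MATLAB sketch (using ode15s, as in all our simulations) of the ramp experiment for a two-node inflow-outflow motif of the kind analysed in the "Analysis of models" section; the rate constants and ramp parameters are illustrative placeholders, not the settings used for the figures.

% Ramp protocol sketch for a two-node inflow-outflow motif (cf. DR08.M1).
% All parameter values are illustrative.
k0 = 1; k1 = 1; k11 = 0.5; k2 = 1;         % rate constants (illustrative)
S0 = 0.1; S1 = 0.05; tRamp = 50;           % basal level, ramp slope, ramp onset
S   = @(t) S0 + S1*max(t - tRamp, 0);      % basal input followed by a linear ramp
rhs = @(t, x) [k0 - k1*S(t)*x(1) + k11*x(2); ...
               k1*S(t)*x(1) - (k11 + k2)*x(2)];   % x = [A; B]
A0  = (k0 + k11*k0/k2)/(k1*S0);            % pre-stimulus steady state of A
[t, x] = ode15s(rhs, [0 400], [A0; k0/k2]);
plot(t, x(:,2)); xlabel('time'); ylabel('output B');
% B responds transiently and returns to k0/k2, its pre-stimulus level.

The same template, with the input function S(t) swapped, applies to the other temporal stimuli considered below.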
Responses of circuits to a linear ramp. A linear ramp elicits a range of qualitatively different responses in adaptive circuits such as (a) An unsteady state, increasing response (circuit DR12.M1: an inflow controlling open system) (b) A non-adaptive steady state (circuit DR08.M31) (c) Partial adaptation (KR11) (d) Exact adaptation (DR08.M1). (e, f) Two apparently similar looking circuits such as DR12.M4 and DR08.M1** (shown in Fig. 1) give contrasting responses. The circuits are depicted as insets in these (and subsequent) plots
Figure 2e,f contrasts the behaviour of the apparently similar motifs DR08.M1** and DR12.M4, both involving an open systems structure: the former adapts perfectly, while the latter is very close to perfect, in a step stimulus (Additional file 1: Figure S2). Their responses to ramp stimuli demonstrate a clear qualitative difference: one adapts perfectly while the other does not even reach a steady state. This clearly demonstrates that ramp stimuli can elicit qualitatively different responses, and consequently can be used as a basis for discriminating between adaptive circuits, even ones which appear structurally similar.
Exact adaptation to ramp stimuli. Figure 3 shows the response of six different circuits, indicating that they all exhibit exact adaptation in a ramp stimulus. This indicates that exact adaptation can occur in response to dynamic stimuli, and that this is not an isolated occurrence, with multiple circuits exhibiting this behaviour. Furthermore, this behaviour is independent of parameters (unless otherwise noted).
Adaptive responses of circuits. A range of circuits exhibit exact adaptation in a ramp, such as (a) A linear motif with inflow and outflow DR08.M1 (b) A cyclic motif with inflow and outflow DR08.M33 (c) An incoherent feedforward motif (KR09) (d) An incoherent feedforward motif which can exhibit fold-change detection KI14 (e) An incoherent feedforward motif CO09.M2, which has (opposite) thresholds associated with each leg. (f) A transcritical circuit TC. The respective network motifs are depicted in the inset.
Open Systems. The first two circuits (Fig. 3) are those of open systems: the first having a linear topology and the second a cyclic topology. In these models the requirement that inflow matches outflow (which has to hold at steady state) results in the output adapting to a step input independent of the value of the signal. In the linear motif, when subject to a ramp, the increasing value of this signal effectively short-circuits the associated step in the circuit: thus the circuit behaves as if the extra step is not present (i.e. inflow applied directly to the outflow variable) and consequently exhibits exact adaptation. Analytical results consolidate this intuitive result. A similar situation is observed in the cyclic motif. It is then worth asking under which conditions adaptation occurs in a ramp in such inflow-outflow circuits.
Design principles associated with ramp adaptation in open systems. Our consideration of a range of linear and cyclic motifs with inflow and outflow reveals the following insights. (i) For linear motifs, exact adaptation in a ramp occurs as long as the ramp signal is not associated with the conversion/degradation of the output species. If, in the two species linear motif, the ramp signal is associated with the conversion of the output species, a steady state is still observed, but it is not adaptive. (ii) For three species cyclic motifs with only one outflow (the adapting variable), exact adaptation in a ramp occurs as long as the ramp signal is not associated with the conversion of the output species to another species (also see Fig. 4, which shows how the location of a signal in a network can determine the behaviour). Incidentally, this consolidates the insight from the previous point. (iii) When more than one outflow variable is present, additional restrictions occur. Firstly, if all species are associated with reversible reactions, adaptation does not occur in a ramp (or for that matter in a step). Adaptation is possible if some of the reactions between species are irreversible. If the additional (non-output) outflow variable is associated with irreversible reactions, then adaptation in a ramp occurs only if the signal is not associated with the interconversion from or between outflow variables. In general, the greater the number of outflow variables, the greater the constraints on where the signal can act to elicit exact adaptation in a ramp. These insights emerge from analytical results discussed in the next section.
Inexact adaptation in ramp stimuli. A range of circuits may exhibit inexact adaptation in a ramp, such as (a) A feedback motif (MA09.FB), where an increase of feedback strength brings the steady state output closer to the pre-stimulus value. (b) A feedforward motif (MA09.FB) with saturation (c) Another feedforward motif (CO09.M1). Here, due to thresholds in one feedforward leg, the steady state reaches 0. (d) The location at which a signal appears in a motif can be of great importance: shown are two different locations of the signal appearing in model DR08.M33, one resulting in exact adaptation and the other in a nonadaptive response. In contrast to the previous cases, the non-adaptive behaviour here is not due to saturation, and the steady state carries information about the gradient of the ramp (see text)
Incoherent feedforward motifs. Three of the motifs in Fig. 3 are incoherent feedforward motifs. The first is a motif used to explain adaptation in chemotaxis in Dictyostelium. Here we find that the output of the model adapts to a ramp even though some entities in the circuit do not even reach a steady state. The reason for adaptation in this circuit is the cancellation effect of two pathways, neither of which adapts, or even reaches a steady state. Since the two pathways constitute the opposing enzymes in a covalent modification cycle, the output does reach a steady state. This can be seen explicitly analytically (also see [52]) and is discussed in the next section. This feature is shared by the second feedforward motif, KI14. Another incoherent feedforward motif, CO09.M2, also exhibits exact adaptation in a ramp. Here the reason for adaptation is subtly different: the adaptive circuit involves (competing) pathways, each associated with a threshold, whose product regulates the output. Under basal conditions, one of the two pathways is at a zero steady state, while an increasing signal such as a ramp ends up making the other pathway fall below its threshold, again resulting in a zero steady state. This ensures that the product of the two pathways is still zero, leading to exact adaptation. This suggests how incorporating threshold effects in different "directions" in interacting/cooperating pathways can lead to adaptation in a ramp. Taken together, a subset of incoherent feedforward motifs maintain a "cancellation" effect of the two pathways (realized in different ways) in increasing stimuli.
Figure 3f demonstrates that a circuit of adaptation relying on a transcritical bifurcation (TC) also results in exact adaptation. The reason why this circuit exhibits exact adaptation is different from the ones above. Here the application of a ramp moves all the species from one part of the pathway to the covalent modification cycle involving the autocatalytic feedback (which is the core of the adaptive circuit). This subsystem reaches a steady state which does not depend on the total amount of species, as is seen analytically below, explaining the adaptive behaviour (a similar result would apply to other circuits exhibiting absolute concentration robustness). Note that this depends on the location of the signal relative to the core autocatalytic circuit. If the signal appeared "downstream" of the autocatalytic species, it would not result in exact adaptation in a ramp, since this would result in the movement of species away from the autocatalytic circuit. The underlying insight can be extended to other circuits which exhibit absolute concentration robustness (discussed later): circuits whose output does not depend on the total concentration of substrate species, when "connected" to an ambient network regulated by a signal, can give rise to adaptation in a ramp, though this places restrictions on the locations of the connection (and the action of the signal).
Figure 4a shows the response of a feedback motif to a ramp. Here, a low feedback strength may result in a non-adaptive response, but a higher feedback strength will result in a partially adaptive response. Other examples of inexact adaptive behaviour are also shown in Fig. 4. It is worth briefly contrasting this with the behaviour of the 3 node motif DR08.M4, which is a core part of a negative feedback mechanism used to describe aspects of chemotactic adaptation in E. coli (see Additional file 1). This motif contains a pair of reversible reactions (with which are associated chemoattractant and chemorepellent signals). We find that for a ramp signal associated with one of the reactions, exact adaptation ensues, but this is not the case for the other reaction (see Additional file 1). This motif, while a core aspect of a negative feedback mechanism for adaptation, actually shares many features with the inflow-outflow models studied above (though it is a closed system), including the underpinning reason for adaptation (it could thus be included in either category). Finally, looking back to Fig. 2e, we also find that nonadaptive unsteady state responses in a ramp may be seen when the signal is associated with an inflow, even with feedback: in this case, an adaptive (though not exact) response is seen in step stimuli. Taken together, this shows how even in feedback circuits, the presence of other characteristics (a zeroth order reaction, or a signal associated with inflow) can significantly impact qualitative behaviour.
The effect of capping a ramp. A ramp is an unbounded stimulus, while in cellular systems there are multiple factors which result in signals being bounded. Figure 5a shows the effect of "capping" a ramp for a circuit which exhibits exact adaptation in a ramp, revealing that the capping has no effect since the output of the circuit has already adapted. In fact, for all the circuits showing exact adaptation in a ramp (in Fig. 3), (i) exact adaptation continues to hold good (this behaviour arises from the intrinsic characteristics of the circuit, without requiring capping) and (ii) depending on the balance of the level of capping and the ramp slope (high enough capping/not too steep a ramp), the capping can have negligible effects on the temporal profiles as well. In other cases, capping a ramp can convert an inexactly adaptive or even a non-adaptive response to an exactly adaptive one.
Effect of variation of the input stimulus. (a) The effect of "capping" the ramp stimulus: DR08.M1 adapts exactly to the capped ramp signal. The transient response in this case (for this level of capping) is also practically identical to that where there was no capping of the ramp signal. (b, c) A quadratic ramp: DR08.M1 and KR09 respectively adapt exactly to a quadratic ramp stimulus. In both cases the transient responses differ from those for the linear ramp signal. (d) An exponential stimulus can elicit imperfect adaptation in a circuit, even when there is exact adaptation in linear and quadratic ramps. The sensitivity of motif KR09 to the exponent is depicted. (e) The gradient of the ramp has a clear effect on the transient response, without altering the feature of exact adaptation (shown for circuit DR08.M1*). (f) While both basal level and ramp gradient can affect the response, we find that a proportional change of both keeps the response unaltered in the circuit KI14, which exhibits fold adaptation
Other increasing stimuli. We also examined other ramps (quadratic, exponential). We expect the same insights arising from the analysis above to carry through to a quadratic ramp, or even an exponential stimulus. The one type of motif where it is not clear a priori what the response would be is the incoherent feedforward motif, which relies on adaptation through cancellation of contributions of two pathways. Here, the output of such a model (KR09) adapts even to a quadratic ramp (Fig. 5). The fact that the "cancellation of pathways" works even for this stimulus can be seen analytically, as discussed below. However, for an exponential stimulus we now find deviations from exact adaptation in this model (Fig. 5d) (both these features are shared by the feedforward motif KI14, see Additional file 1). Increasing the exponent of the stimulus leads to more pronounced deviations from exact adaptation, eventually resulting in non-adaptive responses. Another example of the subtle role of stimuli comes up in examining inflow-outflow circuits: we have already discussed how a stimulus applied to certain reactions can result in non-adaptive steady state responses. Interestingly, in such cases a quadratic stimulus results in a zero steady state. This is discussed subsequently.
Summary. Our study of ramp stimuli demonstrates the range of responses which may be elicited. In particular it reveals design features of circuits which enable exact adaptation in a ramp, and scenarios where non-adaptive behaviour may be observed. The implications of this, and the effect of network location and augmentation of circuits therein, are discussed in the "Discussion" section.
Temporal periodic stimuli
We now turn to periodic stimuli. At the outset we note a basic characteristic of the response of adaptive circuits. If the period of oscillations is large (relative to the time scales of the adaptive circuit), the output remains practically unchanged: this scenario corresponds to a quasistatic modulation of the input, and the output adapts to the slowly varying stimulus and is consequently practically unchanged. On the other hand, if the stimulus is of high frequency, the output is again close to steady: this follows from the fact that the circuit effectively samples the average of the stimulus. We focus on scenarios which do not correspond to either extreme case. We consider a stimulus of the form $S=a+b\sin(\omega t)$: a is the basal level and b is the amplitude.
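As a concrete illustration of this protocol (again with placeholder parameter values, reusing the two-node motif from the ramp sketch above), the basal level a can be swept at fixed amplitude b as follows; this is the protocol behind Fig. 6a-d.

% Periodic-stimulus protocol: fixed amplitude b, varying mean a (illustrative values).
k0 = 1; k1 = 1; k11 = 0.5; k2 = 1; b = 0.2; w = 1;
figure; hold on
for a = [0.5 1 2]                          % input mean values to sweep
    S   = @(t) a + b*sin(w*t);
    rhs = @(t, x) [k0 - k1*S(t)*x(1) + k11*x(2); ...
                   k1*S(t)*x(1) - (k11 + k2)*x(2)];
    [t, x] = ode15s(rhs, [0 300], [(k0 + k11*k0/k2)/(k1*a); k0/k2]);
    plot(t, x(:,2))                        % output B for each mean value
end
xlabel('time'); ylabel('output B'); legend('a = 0.5', 'a = 1', 'a = 2')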
Effect of stimulus mean value. We first consider the effect of varying the basal level for fixed amplitude (Fig. 6a-d). This reveals the following trends. For some of the circuits, especially those associated with no saturation, an increase in the basal level results in smaller amplitude oscillations, even though the average of the oscillations does not vary much. If we consider a model such as KR11, we can clearly see the effect of saturation: in this case increasing the stimulus mean value results in lower amplitude oscillations, but about a mean which either increases or decreases. The former behaviour is seen in circuits which exhibit underadaptation and the latter in circuits which exhibit overadaptation. The feedback motif MA09.FB shows behaviour similar to an under-adaptive feedforward circuit. A distinct pattern is seen in the feedforward model CO09.M2. Here changing the mean value of the input stimulus causes a transition from non-periodic behaviour to periodic behaviour, whose amplitude increases, then starts to decrease, following which oscillations are lost. This illustrates how core characteristics of the circuit are brought to the fore by dynamic stimuli and result in distinct responses.
Response to periodic stimuli. A range of responses to periodic stimuli are depicted. (a-d): Responses of models to a periodic signal with a fixed amplitude and varying mean value. The typical response is one where, upon increasing the basal (mean) value, the amplitude of oscillations of the response decreases. In (a), DR08.M1 shows maintenance of the mean of the output (explained analytically). In (b, c), KR09 and KR11 behave typically and the mean is not maintained. The effect of saturation is seen in the difference between (b) and (c). In (d), CO09.M2 behaves atypically, i.e. the amplitude of oscillations and their mean value reach a maximum and then decrease. (e-h): Responses of the models to a periodic signal with a fixed basal level and varying amplitude. The typical response is one where increasing the amplitude of the input increases the amplitude of the output. In (e-g) we see this typical behaviour, but with maintenance of the output mean value for DR08.M1 (e), while this is not the case for the other models (KR09, KR11). The saturation effect in (g) exhibits itself as a pronounced asymmetry of oscillations (relative to (f)). In (h), CO09.M2 behaves atypically: oscillations and their mean value reach a maximum and then decrease (the lowest and highest basal levels correspond to zero oscillations in (d) and (h))
Effect of variation of stimulus amplitude. When the amplitude is increased, keeping the input mean value fixed (a constraint on the amplitude is implicit here, since the input has to remain positive), higher amplitude oscillations are seen in multiple inflow-outflow circuits, irrespective of topology (Fig. 6), and in other circuits (Additional file 1). In some cases this can also result in a pronounced asymmetry of oscillations (even though the input is symmetric), especially when one or the other pathway saturates, as seen in the model KR11. Even in a feedforward motif without saturation, changing the amplitude also alters the mean value of the output. Finally, in the case of the circuit CO09.M2, there is a transition from no oscillation to oscillations of increasing amplitude (and mean) before this decreases, again showing how intrinsic circuit features are brought to the fore.
Variation of both mean value and amplitude. We also studied the effect of variation of both basal level and amplitude, keeping their ratio fixed (Additional file 1: Figure S7). A notable new feature is that the circuit KI14 shows no change in the response, which can be traced to the fact that this circuit exhibits fold adaptation. This can also be understood analytically.
The variation of the output mean value. Exact adaptation to constant stimuli means that the output steady state is independent of the stimulus level. When we consider time-varying stimuli such as periodic stimuli, we note that the output is also time-varying and periodic. It is then worth asking to what extent one can expect an effect like adaptation here. There are two kinds of adaptation one can think of: (i) the mean of the output is maintained irrespective of the input (mean as well as oscillation characteristics); (ii) the mean value of the input does not affect the output. In either case, we require an insulation of a mean or its effects, either from the input or the output end. If we require that the output characteristics are independent of the input mean value, we find that none of the circuits strictly meet this criterion, though some circuits exhibit a relatively modest change in output amplitude for a substantial change in input mean value.
Design features underlying maintenance of output mean value. We find that two classes of models show an output mean value that is independent of the characteristics of the input. One is the class of inflow-outflow models studied in [30]. The other circuit exhibiting this behaviour is the transcritical circuit. Robust exact adaptation in a constant stimulus is associated with the presence of an integral control action. In these circuits, an integral controller with fixed coefficients is present (even when the stimulus is time varying). A basic analysis reveals that in such a case the mean value of the output is maintained at a constant value independent of the input characteristics. This is discussed further in the next section. In the case of the inflow-outflow motifs, further insights can be obtained. If there is only one outflow variable, this behaviour is observed (as long as the system exhibits periodic solutions). If there is more than one outflow variable, there are restrictions on where the stimulus may act for this behaviour to occur. Interestingly, these restrictions only partially overlap with the restrictions on signal location for exact adaptation in a ramp.
Summary. Time-periodic stimuli typically elicit oscillatory responses from adaptive circuits, whose features depend on input characteristics, with the effect of adaptation reflected (though not exactly) in the mean of the output in many cases. The exact maintenance of the mean in some circuits (with the associated design features) and the abrogation of a periodic response in specific circuits are notable points.
Spatially varying stimuli
Thus far, we have focussed on responses of adaptive circuits to dynamic stimuli, studied in purely temporal terms. It is well known that spatial factors can have significant effects on cellular information processing. In some cases spatial aspects are of direct importance because cells have to respond to spatially graded cues (as in eukaryotic chemotaxis), while in others the spatial organization of information processing can affect the temporal response in a way which cannot be understood through purely temporal models. The effect of homeostatic mechanisms at the tissue level, in response to spatially graded signals, is also relevant here. We extend some of the essential insights developed above to the case of spatially varying stimuli. Here, we focus on three types of circuits from above: a sample incoherent feedforward motif KR09, a three node motif with inflow and outflow DR08.M34, and the transcritical circuit. All these circuits exhibit exact adaptation in temporal ramp stimuli, and the last two maintain the mean value of the output in periodic stimuli. We ask: what implications does this characteristic behaviour have for stimuli with spatial and temporal variation?
To consider spatial stimuli we study the spatially extended adaptive circuits in one-spatial dimension with periodic boundary conditions. This is sufficient for the insights which we draw, which are relevant in other settings (and other boundary conditions) as well. The analysis we perform is relevant both at the cellular level (adaptive response to spatially graded stimuli) and the tissue level (homeostatic mechanisms response to spatially varying stimuli, with cells stationary in the tissue).
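A minimal sketch of this setting is a method-of-lines discretisation on a ring: each grid point carries a copy of the circuit, coupled through a finite-difference Laplacian for the diffusible species. The three-node motif used below (cf. DR08.M34, with species A diffusible, and the signal on the A to B conversion) and all parameter and grid values are illustrative assumptions, not the authors' production code.

% Method-of-lines sketch: three-node motif (cf. DR08.M34) on a 1-D periodic ring,
% with species A diffusible. All values are illustrative.
n = 100; h = 2*pi/n; th = (0:n-1)'*h;                 % periodic spatial grid
k0 = 1; k1 = 1; k11 = 0.5; k2 = 1; k32 = 0.5; k31 = 0.5; k3 = 0.5; kd = 0.1;
S   = 1 + 0.5*cos(th);                                % static spatial gradient
lap = @(u) (circshift(u,1) - 2*u + circshift(u,-1))/h^2;  % periodic Laplacian
rhs = @(t, x) [k0 - k1*S.*x(1:n) + k11*x(n+1:2*n) + k31*x(2*n+1:3*n) + kd*lap(x(1:n)); ...
               k1*S.*x(1:n) - (k11 + k2 + k32)*x(n+1:2*n); ...
               k32*x(n+1:2*n) + k3 - k31*x(2*n+1:3*n)];   % x = [A; B; C] stacked
[t, x] = ode15s(rhs, [0 500], ones(3*n, 1));
B = x(end, n+1:2*n);                                  % steady-state profile of B
fprintf('spatial mean of B: %.3f (cf. (k0+k3)/k2 = %.3f)\n', mean(B), (k0+k3)/k2);

In this configuration the analysis presented later predicts a graded steady-state profile of B whose spatial mean is pinned at (k0+k3)/k2.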
We focus on experimentally relevant spatial analogues of the stimuli considered earlier. We examine four types of stimuli: (a) A static spatial signal. (b) A spatially homogeneous basal signal upon which is imposed a ramp stimulus whose gradient is spatially varying. (c) A travelling wave. (d) A standing wave. These stimuli combine dynamic characteristics of stimuli studied so far with non-trivial spatial aspects, and can be used to probe new aspects of the adaptive/homeostatic behaviour.
Case 1: No species diffusible. In this case information processing is purely local and all the behaviour studied earlier continues to hold good. We focus briefly on one case (Fig. 7a): a three node motif with inflow and outflow, DR08.M34. We found in the earlier analysis that if a ramp stimulus was applied to the conversion of B to A, the system would exhibit a non-adaptive response, reaching a steady state. Now if a ramp stimulus is imposed on the system with a spatially varying gradient, the adaptive circuit will give rise to a spatially graded steady state which is not adaptive (Fig. 7a). The significance of this is the following. It is usually assumed that adaptive circuits cannot exhibit non-adaptive behaviour (gradient sensing) in spatial gradients unless some species in the system is diffusible. We find here that a circuit can indeed adapt to a static temporal stimulus (e.g. a step), and be capable of perceiving spatial gradients when they are not static, even with no diffusible species. Note that in this circuit if the signal were a quadratic ramp, this behaviour would be lost, as the steady state would be zero (discussed above).
Inputs with spatial and temporal variation. (a) shows how an adaptive circuit DR08.M34 (where the signal converts B to A) can sense and respond in a persistent way to a spatial gradient which is not static, even without any diffusible entity. The responses of two circuits, DR08.M34 and TC, to ramps whose slopes vary spatially (b, c), travelling wave inputs (which elicit travelling wave responses, implying that local time averaged outputs do not vary spatially) (d, e) and standing wave inputs (f, g) are depicted. (f, g) depict the temporal average over a cycle of oscillations as a function of spatial location. In model DR08.M34 the temporal average varies with the spatial location, though the spatial average of the temporal average is fixed. TC, even with the autocatalytic species moderately diffusible, exhibits an essentially exact maintenance of the temporal average at every location. The insets in (f, g) show the oscillations at different spatial locations, indicative of a standing wave. In particular (d, f) show how a circuit can exactly maintain its local time averaged response to a travelling but not a standing wave. See text for discussion
Case 2: Diffusible species in circuit. In the incoherent feedforward motif KR09, it has been shown that having a diffusible species can give rise to adaptation with spatial sensing, and that differences in diffusivity can be used to achieve different combinations of temporal and spatial responses (see [8] where the model was formulated, and also [34–36, 53]). In the context of the three node motif with inflow and outflow, it can easily be seen that having species A diffuse can give rise to non-adaptive behaviour in static spatial gradients: the essential insight being that the diffusion term contributes an extra "sink" which, along with the outflow, has to match the inflow to the system. Since the diffusion term contains spatial information (see Appendix), this means that matching inflow and outflow for the full system will result in the adaptive variable B containing gradient information. In the case of the transcritical circuit, with a diffusible species A, the steady state of the system is one where the autocatalytic species $C=0$. This allows for a non-adaptive response of the adapting variable B (a non-zero steady state for the autocatalytic species is the basis for adaptation in this circuit: see the analysis in Additional file 1, which shows that this is prevented in this case). Overall, having a diffusible species in the circuit can allow the circuit to exhibit a clear gradient response (non-adaptive behaviour) in a static spatial gradient. We point out that only certain choices of diffusing variables will allow for this in general. We further note that in the case of inflow-outflow circuits such as DR08.M34 (if A is diffusible), the spatial average of the output can be maintained at the steady state adaptive level, irrespective of the input characteristics, even while a graded response is achieved. This is true if there is only one outflow variable, and in some restricted cases when there are two outflow variables (Additional file 1). This is not the case in the other circuits.
Temporally varying signals. We now focus on temporally varying signals. When subject to a ramp stimulus whose gradient varies with space, all the circuits exhibit non-adaptive behaviour (Fig. 7b,c). This is not surprising, noting that the same thing happens even in a static gradient. This shows how all the circuits can give non-adaptive behaviour in such spatiotemporal ramps, even though they adapt in purely temporal ramps.
When we consider periodic stimuli, we ask if the (temporal) mean of the adapting variable is maintained, as was seen in the inflow-outflow circuits and the transcritical circuit in the purely temporal case. If no species diffuses, information processing is purely local and the temporal mean is maintained at the same value everywhere. Note that this happens even in a standing wave, where different locations are associated with different signals.
When some species are diffusible, matters are more subtle. For the transcritical circuit with species A diffusing (which gives rise to graded response in a static gradient), the mean of the adapting variable is still maintained both in response to standing waves and travelling waves (Fig. 7). The essential insight is that the diffusion of A does not affect the analysis which led to the establishment of fixed mean of the output (in the temporal case). However, if the autocatalytic variable C is non-diffusible, the response of the circuit (even though time-periodic at every location) cannot always be guaranteed to be qualitatively similar to the stimulus (standing wave/travelling wave). If the autocatalytic variable weakly diffuses, a close to exact maintenance of the mean value (over a temporal cycle) can be achieved (see Fig. 7 where this is practically exact even for moderate diffusion of the autocatalytic variable). Here the output of the circuit mirrors the input.
For the inflow-outflow circuit (with species A diffusing), we find, interestingly, that the output maintains its mean value (in time) in response to a travelling wave but not a standing wave. Travelling and standing wave inputs lead respectively to travelling wave and standing wave outputs. Simulations in Fig. 7 show clearly that different locations have different mean values in response to a standing wave. The fact that this motif exhibits a fixed mean in response to a travelling wave is established analytically in the next section.
Summary. Our consideration of spatial systems reveals how some circuits demonstrate graded responses to static spatial stimuli with exact maintenance of the (spatial) mean value, and how some circuits exhibit exact maintenance of the (local) temporal mean value in response to spatiotemporal periodic stimuli, though this can depend on both the circuit and the specific nature of the stimulus.
Combining different types of adaptive responses
Design principles and features underpinning different kinds of adaptive responses. Our analysis shows how different degrees of exact adaptation (parameter independent) can occur in complex stimuli: exact adaptation in a ramp, maintenance of mean value in a periodic stimulus, maintenance of mean value in a spatial gradient. We now synthesize these various results by focussing on the enabling features which make each of these behaviours possible, and how different motifs can combine one or more of these features. Adaptation in a ramp can occur in incoherent feedforward motifs, the transcritical circuit and in inflow-outflow circuits (with some restrictions). In the transcritical model, the ramp acts on a step which drives the flux of species into the autocatalytic subnetwork responsible for adaptation. With regard to inflow-outflow circuits, we summarize the results by noting that if the adaptive variable is the only outflow variable, then a ramp will result in exact adaptation as long as it does not act on the conversion/degradation of this variable. In the 3-node network with another outflow variable, more restrictions emerge. Maintenance of the mean value of the output in periodic stimuli occurs in the transcritical circuit, and in inflow-outflow circuits with only one outflow. Additional outflows place restrictions on where the signal may act. The maintenance of mean value in a spatial gradient occurred only in the inflow-outflow circuits, with certain species being highly diffusible. A look at all these constraints (Fig. 8) reveals, interestingly, the diverse and non-overlapping constraints placed on the circuits to achieve such special behaviour. This prompts the question as to what extent the presence or absence of these characteristic responses may be combined.
Summary of circuits exhibiting different degrees of exact adaptive responses in different stimuli. Note that the restrictions on signal locations (indicated by nominal signal location and filled circles), where applicable, differ for different stimuli. The spatial gradient response is realized with some species being (highly) diffusible. See text for details
The essential results are summarized in Table 2. Here we discuss how all combinations of the presence or absence of these three classes of behaviour may be seen in different circuits, as delineated in Table 2. To start with, we note that inflow-outflow circuits (of linear topology, for instance) where the signal does not degrade the output variable can allow for all such behaviour to be realized. The diffusion of the species associated with the first node enables the desired gradient sensing behaviour. On the other hand, in multiple circuits (for example completely reversible 3-node motifs with two outflows, or even the feedback models studied earlier), none of the three behaviours is observed. An inflow-outflow circuit (e.g. in a linear topology) with the signal acting on the output variable would show adaptive behaviour only in periodic stimuli and in spatial gradients. We have already seen how the incoherent feedforward motifs considered result only in adaptation in a ramp, while a transcritical circuit would show adaptive behaviour in both ramps and periodic stimuli. A signal acting in the opposite reaction as depicted in a transcritical circuit would prevent adaptation in both ramps and spatial gradients. This already accounts for six of the eight possible cases, all of which directly emerge from our study of design principles. The remaining non-trivial possibilities are those which show adaptive behaviour in a spatial gradient but not in oscillations. This appears more tricky since, in the models seen, adaptive behaviour in a spatial gradient occurs in a subset of models (only inflow-outflow circuits) associated with adaptive behaviour in periodic stimuli (inflow-outflow and transcritical circuits).
Table 2 Combinations of exact adaptive behaviour in a range of dynamic and spatial stimuli (see text for details)
In order to obtain adaptive behaviour in a spatial gradient but not in periodic stimuli, we need to consider motifs which realize certain restrictions on one behaviour but not the other. This can be done by combining characteristics of motifs. For example, in a linear inflow-outflow motif with the signal regulating both inflow and outflow (through one intermediate species in each case, one of which is diffusible), it is possible to get adaptation in a ramp and a spatial gradient but not in periodic stimuli (see next section). The key insight is that having a signal regulate both the inflow and outflow (one through a diffusible pathway) leads to adaptation (of mean value) in a gradient, while having a non-trivial gradient response. The adaptation of mean value in periodic stimuli does not occur, though adaptation in a ramp does occur (similar to cancellation of feedforward pathways). In order to obtain non-adaptive behaviour in a ramp, this structure can be modified in two ways: one is to have the signal regulate the adaptive variable through conversion to the upstream species in a nonlinear way, in addition to the above regulation. Another way is to incorporate an autocatalytic effect in the conversion of the adapting species to its upstream species. In both cases, this does not affect the combination of responses to spatial gradient and periodic stimulus, but results in a non-adaptive response in a ramp (see next section). This accounts for the remaining two cases. This shows how our analysis of enabling design features in earlier cases can be used to construct new circuits with desired behaviour.
Analysis of models
We now present a selection of analytical results. Our goal in this section is not to perform exhaustive analysis of all models (further studies and details are presented in the Appendix). Instead we focus on succinct analysis of a range of specific cases to clearly illuminate points made previously. This section can be skipped without any loss of continuity.
Response to ramp stimuli
We first dissect 3 classes of basic models which exhibit adaptation in ramp stimuli: (i) open systems with inflow and outflow, (ii) the transcritical circuit and (iii) incoherent feedforward motifs. We focus our analysis on the circuits which exhibit exact adaptation.
Inflow-Outflow circuits: We examined a range of inflow-outflow circuits, studied by Ruoff and co-workers. This includes a range of 2 and 3 node motifs. The analysis of the response to ramps was performed in two ways: (i) studying the response (especially of two node motifs and simpler extensions to 3-node motifs) directly; (ii) studying the full range of 2 and 3 node motifs using a model reduction based on a quasi-steady state analysis. Both approaches give the same results. We summarize the main insights which emerge below, with further details in the Appendix.
Two-node motifs. The first circuit is a two node motif with inflow and outflow. This corresponds to the model DR08.M1 (for simplicity the inflow to B is set to 0, as this does not affect any of the conclusions below). The model equations are
$$\begin{array}{@{}rcl@{}} dA/dt&= &k_{0} -k_{1}SA +k_{11}B \\ dB/dt &=& k_{1}SA -k_{11}B -k_{2}B \end{array} $$
This model covers both the irreversible two node motif ($k_{11}=0$) and the reversible motif.
At the outset, we note by adding the two equations that
$$\begin{array}{@{}rcl@{}} d(A+B)/dt &=& k_{0}-k_{2}B \end{array} $$
Thus if the system (i.e. both A and B) reaches a steady state, then B adapts to the value $k_{0}/k_{2}$. The main insights which arise from the analysis may be summarized as follows:
Case 1 ($k_{11}=0$): Here, by applying a ramp stimulus $S=S_{0}+S_{1}t$, we find that A decays to 0 as time increases, but does so in a manner that the flux from A to B approaches a constant level $k_{0}$. As far as B is concerned, the circuit behaviour essentially reduces to a one node motif with (constant) inflow and outflow, and B reaches a steady state which is of course adaptive. Further analysis is presented in the Appendix to demonstrate this.
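A brief quasi-steady sketch of this short-circuiting (the full treatment is in the Appendix): for large t,

$$\begin{array}{@{}rcl@{}} dA/dt \approx 0 &\Rightarrow & A \approx \dfrac{k_{0}}{k_{1}(S_{0}+S_{1}t)} \to 0, \qquad k_{1}SA \to k_{0} \\ dB/dt &\approx & k_{0}-k_{2}B \;\Rightarrow\; B \to k_{0}/k_{2} \end{array} $$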
Case 2 ($k_{11}>0$): In this reversible circuit, the essential behaviour is again similar to the previous case. The concentration of A approaches 0, while the net flux along the pathway from A to B approaches a constant value. Since A and B reach steady states, B of necessity will adapt to its prestimulus level. Both these examples show that the primary effect of the ramp is eventually an effective "short-circuiting" of the node A in the motif (though this insight must be carefully applied when considering reverse reactions from B to A).
Another aspect is worth mentioning when $k_{11}>0$. We have associated the input stimulus with the conversion from A to B. A stimulus could also have been associated with the conversion from B to A. For step stimuli applied here, the system would readily adapt, based on the argument above. Interestingly however, when a ramp stimulus is applied, an important qualitative change occurs: the output reaches a steady state which is non-adaptive. The analysis of this case is presented in the Appendix. The essential insight can however be easily explained. While B reaches a nonzero steady state, the concentration of A keeps increasing. Thus A+B does not reach a steady state, and consequently B will not adapt exactly (if it did, it would imply that A+B reached a steady state). In fact the response is not adaptive. We make an associated point here: if the stimulus is a quadratic rather than a linear ramp, then applied in the forward direction it results in an adaptive response, while applied to the reaction converting B to A it results in a (non-adaptive) zero steady state response (explained in multiple ways in the Appendix).
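A heuristic dominant-balance sketch of this case, consistent with (but not a substitute for) the Appendix analysis: with the ramp $S=S_{0}+S_{1}t$ on the B to A reaction, taking A to grow linearly, $A\sim at$, the quasi-steady balance for B and the summed equation $d(A+B)/dt=k_{0}-k_{2}B$ give

$$\begin{array}{@{}rcl@{}} B &\approx & \dfrac{k_{1}A}{k_{11}S_{1}t} \to \dfrac{k_{1}a}{k_{11}S_{1}} = B^{*}, \qquad a = k_{0}-k_{2}B^{*} \\ B^{*} &=& \dfrac{k_{1}k_{0}}{k_{1}k_{2}+k_{11}S_{1}} \end{array} $$

The steady state of B thus depends explicitly on the ramp slope $S_{1}$, i.e. the response is non-adaptive; for a quadratic ramp the denominator grows faster than A, pushing B to zero, in line with the statement above.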
Three node motifs. We now examine two three-node motifs which include the two node (reversible) motif above, and an extra node C. B is converted to C and C is converted to A. In one case there is outflow from C (DR08.M32) and in the other B is the sole outflow variable (DR08.M34). We note that when we present equations for models in the main text below, for the purposes of analysis, these include constants associated with every transition, to facilitate model analysis: when a signal is involved in a transition, it appears multiplicatively. The models presented in the Additional file 1 in some cases associate certain transitions with signals explicitly and correspond exactly to how the model is simulated. The model for the first scenario (two outflow variables) is
$$\begin{array}{@{}rcl@{}} dA/dt&= &k_{0} -k_{1}SA +k_{11}B +k_{31}C \\ dB/dt &=& k_{1}SA -k_{11}B -k_{2}B -k_{32}B \\ dC/dt &=&k_{32}B -k_{33}C -k_{31}C \end{array} $$
A ramp associated with the A to B conversion leads to exact adaptation, but this is not the case when it is applied to the B to A reaction (for exactly the same reasons as in the two node motif above). A ramp applied at other locations does not lead to exact adaptation of B (this is no surprise, since a step input at these locations does not lead to exact adaptation either). These results are seen through detailed analysis in the Appendix.
Now we examine a 3 node motif where B is the sole outflow variable (DR08.M34), and where there is inflow to C (but no outflow of C: the only difference from the model above):
$$\begin{array}{@{}rcl@{}} dC/dt &=&k_{32}B +k_{3} -k_{31}C \end{array} $$
Here, a ramp input applied to the reactions not involving the degradation of B (i.e. A to B or C to A conversions) does lead to exact adaptation, while a ramp input applied at reactions involving the degradation of B leads to a non-zero but non-adaptive steady state. The reasons for this are identical to those for the 2-node motif discussed above (see Appendix).
The Appendix presents a detailed analysis of 3 node motifs with both one and two outflow variables, and different degrees of reversibility in reactions involving the node C (note that in both the cases above C is involved only in irreversible reactions). This analysis reveals exactly what constraints emerge to satisfy exact adaptation in a ramp when reversible reactions involving C as well as multiple outflow variables are present.
The Transcritical circuit. The core model of the transcritical circuit (TC) involves 3 species A, B and C. The conversion from B to C is mediated by an autocatalytic feedback involving C. The model for this circuit is given by
$$\begin{array}{@{}rcl@{}} dA/dt&= & -k_{1}SA +k_{2}B \\ dB/dt &=& k_{1}SA -k_{2}B -k_{3}BC +k_{4}C \\ dC/dt &=&k_{3}BC -k_{4}C \end{array} $$
The conservation condition leads to $A+B+C=X_{t}$, a constant. Analysis of this network shows two distinct steady states: $C=0$, $B=k_{1}SX_{t}/(k_{1}S+k_{2})$, $A=k_{2}X_{t}/(k_{1}S+k_{2})$; and $B=k_{4}/k_{3}$, $A=k_{2}k_{4}/(k_{3}k_{1}S)$, $C=X_{t}-A-B$. It is clear that the second steady state is physically feasible only when $X_{t}>(k_{4}/k_{3})(1+k_{2}/k_{1}S)$. In this regime (which we assume, and which places a lower bound on the signal), however, B exhibits exact adaptation independent of S, when S is a constant. Thus exact adaptation to step increases of stimuli naturally follows.
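The steady-state structure behind the circuit's name can be read off from the C equation: setting $dC/dt=0$,

$$\begin{array}{@{}rcl@{}} (k_{3}B-k_{4})C &=& 0 \;\Rightarrow\; C=0 \;\text{ or }\; B=k_{4}/k_{3} \end{array} $$

The two branches intersect (and exchange stability) precisely where the second branch reaches $C=0$, i.e. at $X_{t}=(k_{4}/k_{3})(1+k_{2}/k_{1}S)$; this transcritical bifurcation is what gives the circuit its label.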
A ramp stimulus (mediating the conversion of A to B) has the effect of converting all the A to B, so that at steady state $A=0$, whereas $B=k_{4}/k_{3}$ and $C=X_{t}-k_{4}/k_{3}$. Thus, we see that B exhibits exact adaptation to a ramp. Both quadratic ramps and exponential signals also result in exact adaptation for the same reason.
Incoherent feedforward motifs. We turn to incoherent feedforward motifs, which, as seen earlier, exhibit adaptation in a ramp stimulus. For specificity we focus on one incoherent feedforward motif (KR09) described by
$$\begin{array}{@{}rcl@{}} dA/dt =k_{a}S-k_{-a}A \\ dI/dt =k_{i}S-k_{-i}I \\ dR^{*}/dt =k_{f}A\left(R_{T}-R^{*}\right)-k_{r}IR^{*} \end{array} $$
In this model, when the signal is constant, both activator A and inhibitor I reach a steady state proportional to S. The output $R^{*}$ reaches a steady state $R_{T}(A/I)/(k_{r}/k_{f}+A/I)$. Since A/I is independent of S, the system adapts exactly to a step. When subject to a ramp, both A and I increase without bound (if saturation is introduced, as in model KR11, that will no longer be the case). Asymptotically, A and I exhibit a linearly increasing dependence on time with a proportionality factor which depends on the slope of the ramp: $A\sim k_{a}S_{1}t/k_{-a}$, $I\sim k_{i}S_{1}t/k_{-i}$ ($S_{1}$ is the ramp slope). The output reaches a quasi-steady state, which as seen above depends on A/I, and is exactly the basal adaptive steady state. A similar adaptive behaviour is seen in a quadratic ramp for the same reason. On the other hand, when the model is subject to a stimulus $\exp(at)$, both A and I show exponential variation and A/I reaches a steady value which is not the prestimulus value. Thus the system does not exhibit perfect adaptation, and the higher the exponent, the further the deviation from the prestimulus value. This is discussed in the Appendix. The incoherent feedforward module KI14 exhibits very similar trends to this model (see Additional file 1).
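The deviation under exponential stimuli can be sketched through the particular solutions of the A and I equations: for $S=\exp(at)$,

$$\begin{array}{@{}rcl@{}} A &\to & \dfrac{k_{a}e^{at}}{k_{-a}+a}, \qquad I \to \dfrac{k_{i}e^{at}}{k_{-i}+a} \\ A/I &\to & \dfrac{k_{a}}{k_{i}}\cdot \dfrac{k_{-i}+a}{k_{-a}+a} \end{array} $$

This ratio reduces to the adaptive value $k_{a}k_{-i}/(k_{i}k_{-a})$ only in the limit $a\to 0$; the larger the exponent a, the further A/I (and hence the quasi-steady $R^{*}$) sits from its prestimulus value.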
Response to periodic and spatial stimuli
While discussing the response of adaptive circuits to periodic stimuli, we highlighted two types of circuits whose response showed a mean value which was independent of the periodic stimulus. We present relevant analysis here to support those observations.
One class of circuits which demonstrate this property are inflow-outflow circuits. We first consider the two node motif with reversible interconversion studied above (this covers the case of irreversible conversion). Since $d(A+B)/dt=k_{0}-k_{2}B$, and the input stimulus is periodic with period T, integrating both sides over a time period gives
$$\begin{array}{@{}rcl@{}} \int_{t}^{t+T} \frac{d}{dt}(A+B)\,dt &=&k_{0}T -k_{2} \int_{t}^{t+T} B\,dt \end{array} $$
Noting that the left hand side is zero, since all variables oscillate periodically,
$$\begin{array}{@{}rcl@{}} (1/T)\int_{t}^{t+T} B\,dt &=& k_{0}/k_{2} \end{array} $$
Thus the average of B is maintained in a periodic stimulus, irrespective of the basal value and amplitude of the input. Incidentally, even if the periodic stimulus is associated with the conversion of B to A, the same result holds good for the same reason.
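This prediction is straightforward to check numerically; a minimal sketch with illustrative parameter values (the period-average of B should return $k_{0}/k_{2}$ irrespective of the input mean a and amplitude b):

% Numerical check that the period-average of B equals k0/k2 (illustrative values).
k0 = 1; k1 = 1; k11 = 0.5; k2 = 1; a = 2; b = 0.8; w = 1; T = 2*pi/w;
S   = @(t) a + b*sin(w*t);
rhs = @(t, x) [k0 - k1*S(t)*x(1) + k11*x(2); ...
               k1*S(t)*x(1) - (k11 + k2)*x(2)];
[t, x] = ode15s(rhs, [0 100*T], [1; 1]);          % run long past transients
mask  = t >= 99*T;                                % restrict to one late cycle
meanB = trapz(t(mask), x(mask, 2)) / (t(end) - min(t(mask)));
fprintf('period-average of B: %.4f (prediction k0/k2 = %.4f)\n', meanB, k0/k2);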
Now we turn to the two 3-node motifs discussed above, which differ only in whether C is involved in outflow or inflow. If the inflow in the circuit is at A and C and the outflow at B (DR08.M34), then we have
$$\begin{array}{@{}rcl@{}} d/dt(A+B+C)&=& k_{0}+k_{3} -k_{2}B \\ (1/T)\int_{t}^{t+T} B\,dt&=& \left(k_{0}+k_{3}\right)/k_{2} \end{array} $$
Just as before, the mean value of B is maintained at the steady state adaptive level. Finally, if there is outflow of C (model DR08.M32), then we have
$$\begin{array}{@{}rcl@{}} d/dt(A+B+C) &=&k_{0}-k_{2}B -k_{33}C \\ (1/T) \int_{t}^{t+T} \left(k_{2} B+k_{33}C\right)dt &=& k_{0} \end{array} $$
Integrating the equation for C indicates that \(\int _{t}^{t+T} k_{32}B\,dt= \int _{t}^{t+T}\left (k_{31}+k_{33}\right)C\, dt\). Since the averages of B and C are proportional, and noting the equation above, we find that the averages of B and C are fixed, independent of the stimulus. This assumes that the stimulus is associated with the conversion of A to B or the reverse conversion. This property will not in general be satisfied if the signal were associated with the conversion of B to C or C to A.
Finally, we consider the transcritical circuit above. Rewriting the equation for C (valid as long as C is non-zero) and then integrating across a period of oscillations allows us to see transparently why the mean of B is maintained in a periodic stimulus
$$\begin{array}{@{}rcl@{}} d(\ln C)/dt&=& k_{3}B -k_{4} \\ (1/T)\int_{t}^{t+T}B\,dt &=&k_{4}/k_{3} \end{array} $$
This clearly shows why the mean of B is maintained, as long as the stimulus is not associated with the reactions involving B and C.
Static Spatial Stimuli. We had asserted that in a 3-node motif with inflow at A, the output maintains its mean value under certain conditions. We assume that A is diffusible. We consider two cases: one where there is only one outflow (DR08.M34) and one where B and C have outflow (DR08.M32).
In the former case, at steady state, adding all the equations results in
$$\begin{array}{@{}rcl@{}} \frac{\partial(A+B+C)}{\partial t}&=& k_{0}+k_{3} -k_{2}B +k_{d}\frac{\partial^{2}A}{\partial \theta^{2}} \end{array} $$
Now the LHS is zero (steady state) and integrating across the spatial domain, we find that the diffusion term integrates to zero. Thus we are left with
$$\begin{array}{@{}rcl@{}} (1/L)\int_{0}^{L} Bd\theta &=& (k_{0}+k_{3})/k_{2} \end{array} $$
Thus, the spatial average of the adapting variable is maintained, even though a graded response is obtained.
If we consider the case of outflow at B and C, and repeat this we find that at steady state
$$\begin{array}{@{}rcl@{}} k_{2}<B>+k_{33}<C>&=& k_{0} \end{array} $$
where <> denotes spatial average. If the signal is not associated with the transitions involving C, then at steady state C is proportional to B everywhere (independent of signal). In this case the spatial average of B is maintained. Otherwise in general this will not be the case. These results mirror the analysis of response of periodic stimuli in these circuits, with the only difference being the averaging is done in space rather than in time.
Spatiotemporal stimuli. We previously asserted that the transcritical circuit with A diffusing would result in a fixed mean output in spatiotemporal stimuli such as standing waves and travelling waves. This follows immediately from the analysis above:
$$\begin{array}{@{}rcl@{}} d(\ln C)/dt&=& k_{3}B -k_{4} \\ (1/T)\int_{t}^{t+T}B\,dt& =&k_{4}/k_{3} \end{array} $$
which is unaffected by A diffusing. The conclusion is therefore the same, with the mean of B being maintained in both travelling wave and standing wave stimuli (also see the Appendix, which examines the effect of the autocatalytic species diffusing).
Now we turn to the three node motif with inflow and outflow (focussing on the variant with one outflow, DR08.M34). We have

$$\begin{array}{@{}rcl@{}} \frac{\partial(A+B+C)}{\partial t}&=& k_{0}+k_{3} -k_{2}B +k_{d}\frac{\partial^{2}A}{\partial \theta^{2}} \end{array} $$

Integrating across the spatial domain (and dividing by L), and integrating across a temporal period (and dividing by T), yields
$$\begin{array}{@{}rcl@{}} (1/T)\int_{t}^{t+T} <B>\,dt&=& \left(k_{0}+k_{3}\right)/k_{2} \end{array} $$
Here <B> is the spatial average of B across the domain. In the above equation, we can interchange the temporal averaging and the spatial averaging to give
$$\begin{array}{@{}rcl@{}} <(1/T)\int_{t}^{t+T} B\,dt>&=& \left(k_{0}+k_{3}\right)/k_{2} \end{array} $$
Now when the input is a travelling wave, the response is also a travelling wave (as seen by simulations), so that the temporal average at every location is the same as at every other location. Denoting \((1/T)\int _{t}^{t+T} B\,dt\) by $B_{0}$, we find that since $B_{0}$ is independent of space, the preceding equation simply implies that $B_{0}=(k_{0}+k_{3})/k_{2}$. Thus the temporal average of the adapting variable is maintained at the same value at every location, irrespective of the characteristics of the travelling wave stimulus. This is also seen in simulations. This analysis does not hold good for standing waves (since $B_{0}$ can vary with position) and in fact simulations clearly show that the mean is not maintained in a standing wave. We note that when the input is a travelling wave, not only is the temporal average maintained at every location, but the spatial average asymptotically approaches a constant.
Combinations of adaptive behaviour
In the previous section we discussed circuits which could give exact adaptive behaviour in a spatial gradient but not a periodic stimulus. This can be seen in the circuit
$$\begin{array}{@{}rcl@{}} dX/dt&=& k_{0}A -k_{2}IX \\ dA/dt&=&k_{a}S-k_{-a}A \\ dI/dt &=&k_{i}S-k_{-i}I +k_{d} \frac{ \partial^{2} I}{\partial \theta^{2}} \end{array} $$
Here at steady state $X=k_{0}A/(k_{2}I)$, and furthermore I is spatially homogeneous. Consequently $I=k_{i}<S>/k_{-i}$. It is easy to see that $<X>=(k_{0}/k_{2})(k_{a}k_{-i}/(k_{-a}k_{i}))$, which corresponds to maintenance of the mean value. As we have seen, in a ramp A and I asymptotically approach $k_{a}S/k_{-a}$ and $k_{i}S/k_{-i}$, and it is easy to see from a simple analysis that X adapts, since the dominant contribution to the long term dynamics is given by $dX/dt=k_{0}(k_{a}/k_{-a})S-k_{2}(k_{i}/k_{-i})SX$ where $S\sim\alpha t$. Just as in the other incoherent feedforward motif, we see adaptive behaviour in a ramp. In order to get non-adaptive behaviour in a ramp, this motif can be modified to an inflow-outflow system, similar to the two node inflow-outflow system considered above:
$$\begin{array}{@{}rcl@{}} {dX}_{1}/dt&=& k_{0}A - k_{1}X_{1}+k_{11}S^{2}X_{2} \\ {dX}_{2}/dt&=& k_{1}X_{1} -\left(k_{11}S^{2} +k_{2}\right)X_{2} \end{array} $$
The essential insight is that this regulation of the adaptive variable (but in a nonlinear way, different from the regulation of inflow and outflow, and "stronger") will prevent adaptation in a ramp. This can be established in detail analytically. An alternative way is to introduce an extra autocatalytic nonlinearity by having the reaction from X2 to X1 mediated by X1. The equations are
$$\begin{array}{@{}rcl@{}} {dX}_{1}/dt&=& k_{0}A - k_{1}X_{1}+k_{11}X_{1}X_{2} \\ {dX}_{2}/dt&=& k_{1}X_{1} -\left(k_{11}X_{1} +k_{2}\right)X_{2} \end{array} $$
Analysis of this model also demonstrates inexact adaptation in a ramp.
Discussion

Adaptation is a basic and widespread characteristic of information processing in cells. The primary interest in adaptation stems from the capabilities it provides a cell in its response to the environment (e.g. in chemotaxis and phototransduction), how it allows for homeostasis, and how it allows for a distinct mode of transmission of information. It is clear that dynamic stimuli/environments may be routinely encountered in cellular contexts (and may be regarded as more representative than static stimuli), and thus any in-depth understanding of the role of adaptation and homeostasis in cellular information processing has to properly account for this. Multiple models of adaptation, including generic models and context-specific ones, have been proposed and studied. In trying to obtain a systematic synthesis of the response of adaptive circuits to dynamic and complex environments, it is necessary to consider different characteristics of the dynamic environment as well as different characteristics underpinning adaptation, to isolate the interplay between the two. Similar broad circuit characteristics may be combined with subtle variations in model structure, which can prove important. Consequently our analysis focussed on a suite of models drawn from the literature. We summarize the essential insights which emerge and discuss their implications for natural and engineered biology.
Responses to ramp stimuli. We found that ramp stimuli could discriminate between even apparently similar circuits which exhibit essentially exact adaptation to step inputs. A whole spectrum of responses, from non-steady state, to steady state non-adaptive, to partially adaptive, to exactly adaptive, were seen. Interestingly, a range of circuits exhibited exact adaptation to linear (and even quadratic) ramp stimuli, indicative of a broader adaptive response. In cellular systems, other factors which limit the extent of a stimulus do exist. In the case of the above circuits, exact adaptation to ramps occurs purely as a consequence of the intrinsic information processing characteristics: the capping of a stimulus may contribute at most a dynamic distortion of the response. For other circuits which would not intrinsically adapt in a ramp, the capping of a stimulus could be a vital ingredient in converting the response to an exactly adaptive one. Our analysis delineates design features which allow for adaptation in a linear ramp: cancellation effects maintained in certain incoherent feedforward motifs, a confluence of incoherent feedforward and threshold effects in others, and short-circuiting of steps in inflow-outflow circuits. Another distinct design feature involves the ramp transferring species in closed circuits to the core adaptive subcircuits (e.g. transcritical circuits), which are reminiscent of circuits exhibiting absolute concentration robustness [54] to the total amount of species in the circuit. In inflow-outflow circuits, we found that the location of a stimulus in the circuit/motif could be critically important, with some locations associated with adaptive behaviour and others not, even though all these locations were associated with exact adaptation in step stimuli.
Responses to time-periodic and spatial stimuli. Basic as well as subtle aspects of the underlying circuit are reflected in the dynamic response to periodic stimuli. A small selection of circuits maintains the mean value of the output irrespective of changes in the mean value or amplitude of the input. Analysis in these cases reveals the presence of an integral controller (integrator) with constant (time-invariant) coefficients, which is responsible for this. These circuits include inflow-outflow circuits and transcritical circuits. In the former, increasing the number of outflow variables (from one) imposes increasing restrictions on the degree of reversibility in the network and on the locations where the signals act to elicit such behaviour. Finally we performed some focussed analysis to extend these insights to the spatially distributed case (both input and circuit being spatially distributed, representative of either the single-cell or the tissue level). Interestingly, we find that some circuits are capable of detecting spatial gradients in a persistent manner when there is a temporal gradient as well, even in the absence of any diffusible inhibitor, primarily because the response to a temporal ramp is non-adaptive. Thus a diffusible species is not required for sensing dynamic spatial gradients. Having diffusible species can allow a circuit to give a gradient (non-adaptive) response to static gradients, though this depends on which species is diffusible. Some circuits, notably inflow-outflow circuits, exhibit this non-adaptive gradient behaviour while maintaining their spatial mean, indicating again an adaptation "in the mean". For spatiotemporal stimuli such as travelling waves and standing waves, some circuits essentially maintain the (temporal) mean value of the response at every location for both stimuli, while others do so only for travelling waves. This clearly shows how echoes of the precise temporal structure of adaptation are seen even in spatially extended systems, but with the nature of the spatial signal, the nature of the circuit, and their interplay playing important roles.
Exact adaptation in combinations of complex stimuli. Our simultaneous consideration of ramps, periodic stimuli and static spatial gradients brings to the fore the different constraints and requirements for exact adaptation in ramps, and for adaptation of the mean in periodic stimuli and in static gradients (Table 3). In all cases considered, the factors which give rise to exact adaptation are structural and parameter-independent. We demonstrated that it is possible to construct circuits which exhibit any combination of the presence or absence of exact adaptive behaviour for each of these stimuli. In particular, certain inflow-outflow circuits are capable of exhibiting exact adaptive behaviour to all three stimuli, demonstrating a broad and versatile adaptive behaviour. Also worth contrasting is adaptation of the mean value in periodic and static spatial stimuli, which reveals an important but subtle difference between time and space. The circuits which exhibit adaptation of the mean in both periodic and static spatial stimuli (while exhibiting non-constant behaviour) show the presence of a constant-coefficient integral controller. The transcritical circuit allows for mean adaptation in periodic stimuli, but not spatial stimuli: this distinction can ultimately be traced to the fact that time appears as a first derivative while space appears as a second derivative in the model. On the other hand, the combination of feedforward structures with an inflow-outflow circuit can give rise to adaptation in the mean in static spatial gradients, but not periodic stimuli.
Table 3 Exact adaptive behaviour of classes of circuits considered in the text to different stimuli
It is clear that there are many variations of each class of circuit (and even an individual circuit) we have studied, along with augmentations. Our isolation of the underlying design features allows us to evaluate other such circuits (including new ones which have not yet been constructed/studied) and the consequences of augmentations/variations, though this will have to be done on a case-by-case basis.
Our analysis has been based on models of adaptation which are primarily ODE-based, with a focus on exact adaptation which can be understood in structural terms, independent of model parameters and their tuning (only in the transcritical model have we noted a broad parameter range for exact adaptation). This is valid as long as the original model description is valid, which we assume. When a ramp signal is associated with an inflow reaction, we assume this description remains valid. We recognize that inexact (but close to exact) adaptation could be just as relevant, and that biology may employ additional layers (e.g., thresholds) to transform inexact adaptation into exact adaptation. In such cases, the nature of the adaptive response to complex stimuli needs to be studied on a case-by-case basis, and can build on the foundation here, with additional parametric analysis.
We now discuss the relevance of our results to systems biology. Dynamic stimuli such as ramps have been used experimentally in specific contexts, such as osmoregulation and chemotaxis: the response of E. coli to exponential ramps has been studied experimentally, as has the response of the gradient sensing network in Dictyostelium, where adaptation to linear ramps has been demonstrated [14, 46, 47]. On the other hand there are many other contexts, notably in homeostasis, where the response to dynamic stimuli has not been examined in detail. This is especially true in the case of "complex" homeostatic mechanisms involving multiple layers of homeostasis (see, e.g., [55]). Our study provides a platform for probing such systems, by examining the response of a variety of circuits to different classes of stimuli, and also by isolating key structural characteristics for different kinds of behaviour. Our study of ramps and periodic stimuli together presents interesting parallels and contrasts. The response to ramps spans a broad range of behaviour, from exact adaptation to non-adaptive behaviour. Viewed from the perspective of homeostasis, exact adaptation and non-adaptive behaviour represent opposite ends of the spectrum, and our analysis allows us to transparently isolate the reasons for both these behaviours. Non-adaptive behaviour in a ramp, especially when exact adaptation is observed for a step input, represents a breakdown in homeostasis induced by the temporal nature of the input stimulus. In response to periodic stimuli, while exact adaptation in the mean was observed in some cases, a complete failure to maintain the mean of the output, for instance, was rarely observed.
Our study also revealed that the location in the network where the input acts can be critical in determining whether exact adaptation is observed: notably in multiple inflow-outflow circuits, a change of location could completely alter the response and even make the circuit non-adaptive. The fact that this is seen even in basic 2-node inflow-outflow circuits indicates that there is a fundamental constraint on the ability of such circuits to exhibit exact adaptation to a ramp when the signal acts at multiple network locations. This has implications for biological signalling, where multiple inputs may act at different points in a network, and suggests that there are preferred locations for a signal to act to enable homeostatic responses to such dynamic stimuli. It also suggests that nodes which exhibit homeostatic behaviour for simple stimuli may exhibit potential "fragility" (i.e. marked departure from homeostasis) for certain classes of dynamic stimuli. This could have significant consequences for when a cell may not be able to withstand certain stressful signals (depending on the nature of the stimulus and its location). It remains to be seen whether such locations have been avoided in evolution, or whether other factors (which have the effect of capping such stimuli) have been incorporated to limit this effect. Furthermore, there are biological processes where opposite steps of the same network may be targeted to achieve opposite responses, e.g., in chemoattraction and chemorepulsion. Our study suggests that there are fundamental constraints which create clear contrasts in the nature of the adaptive response to ramp-like stimuli, showing how adaptive behaviour in ramps of both types of stimuli (attractant and repellent) may not be accommodated in such cases. Our study of multiple inflow-outflow circuits also reveals the potential consequences of augmenting a circuit with other steps: this can cause a complete alteration in the nature of the response. Since evolution in biology is believed to act by "tinkering" with existing circuits, this may create important new constraints for circuits thus constructed. On the other hand, by creating an augmentation, in some cases it is possible for a different signal to act at other locations and also enable exact adaptation in a ramp: as an example, an augmentation of the two-node inflow-outflow circuits with a third node with irreversible steps (Fig. 8) allows an input (for instance with an opposite effect, such as chemorepulsion) to act on a new node, which removes the constraints of the two-node circuit and enables exact adaptation in a ramp.
Spatially varying signals present a distinct aspect. Here, in general, adaptive/homeostatic behaviour is inherited from the network. There are several ways in which adaptation could act: (i) it acts at every location where the input is present; (ii) a spatial averaging is performed and the adaptation occurs downstream; (iii) the adaptation and averaging operations are integrated, allowing for a non-adaptive response to spatially graded stimuli. In the last case we show how it is possible to exactly maintain the spatially averaged mean value of the response. This could be of relevance at the cellular level (spatial averaging of the output from the membrane being a trigger of downstream activity) or even at the tissue level (the spatial average of outputs from an array of cells being an input to some downstream communication, or involved in some additional developmental step). It is interesting to note that the gradient sensing network in Dictyostelium, which operates via a Local Excitation Global Inhibition module, does not in general allow for the exact maintenance of the mean value of the response (the lipid PI(3,4,5)P3), while a network mimicking a basic E. coli adaptive circuit, but with a diffusible entity, can do so, provided certain specific enzymatic reactions act in the unsaturated limit (this follows by analyzing such circuits, which are similar to the inflow-outflow circuit we have analyzed). The new considerations which emerge when considering the interweaving of temporal and spatial stimuli are reinforced by our observation that circuits which exhibit both these features do not maintain the mean value (in time) for spatio-temporal periodic stimuli.
Adaptation and homeostasis are an important ingredient for synthetic biology as well [56, 57]. Our insights into design features for adaptation to dynamic and spatial stimuli lay bare key ingredients for engineering sophisticated biosensor circuits with a variety of adaptive responses in dynamic environments. Given the importance of biosensors in synthetic biology, biomedicine and biotechnology, and the experimental work in this direction, adaptive and homeostatic regulation (especially in complex and dynamic environments) offers a vital capability to combine with bio-sensing [58]. Going beyond the biological arena, the advent of "soft robots" [59], which could conceivably be endowed with chemical sensing capabilities, means that incorporating adaptation at the sensory level could be relevant there as well. Our insights are also relevant to the engineering of information processing and homeostatic controllers through non-enzymatic mechanisms such as DNA strand displacement reactions [60–64]. We have shown how it is possible to construct compact adaptive circuits which combine features of exact adaptation in static stimuli, ramps, temporal periodic stimuli and static spatial stimuli (or any subset of these capabilities). Our consideration of structural features which robustly enable this, of the effect of modular augmentation of such circuits, and of the choice of nodes at which stimuli act is relevant to both bottom-up construction and the re-wiring of existing circuits. Finally, while design in synthetic biology typically focusses on a circuit, an alternative approach is to focus on the design of a stimulus or environment to elicit certain outcomes. Our analysis provides insights into when this may be possible (when adaptation is present), for instance by eliciting non-homeostatic outcomes through the application of temporal and spatial stimuli. Whether engineering through synthetic biology (either conventionally or through strand displacement reactions) or even other chemical means [65], adaptation and homeostasis serve as a crucial focal point, and many interesting applications arising from this can be expected to be realized in the fairly near future.
Our simultaneous consideration of a variety of dynamic and spatial stimuli on the one hand, and a variety of adaptive circuits on the other, provides insights into the types of adaptive responses which are possible for such stimuli, the features of circuits which robustly enable such responses, and the conditions under which adaptation/homeostasis may be compromised. This provides a platform for understanding adaptation/homeostasis in multiple cellular contexts, which may employ such circuits, or variations or combinations thereof. It allows for the evaluation of the adaptive responses of concrete cellular systems, including the robustness of the adaptive/homeostatic mechanisms employed. Ultimately it can also provide insights into whether the nature of the dynamic environment may have shaped the adaptive circuits which have emerged in evolution. On the other hand it provides a basis for engineering adaptive/homeostatic circuits for use in complex environments, either by rewiring existing circuits or by building circuits ab initio.
Analytical results on open systems.
In this section, we analyze in a little more detail the response of 2-node and 3-node motifs with inflow and outflow to ramp inputs, to further consolidate the points in the text. This is done in two different ways: first, we analyze a selection of the models directly to illustrate the main insights; we then analyze the behaviour using quasi-steady-state model reduction, applied to a broader range of inflow-outflow motifs.
To start with we consider the two node motif (neglecting any inflow to B):
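(The displayed model equations appear to have been elided here; the following is a reconstruction consistent with the change of variables used below, with the signal S acting on the A to B conversion and k11 denoting the reverse B to A rate constant:)
$$\begin{array}{@{}rcl@{}} dA/dt&=& k_{0} - k_{1}SA + k_{11}B \\ dB/dt&=& k_{1}SA - k_{11}B - k_{2}B \end{array} $$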
We first start with the purely irreversible two-node motif, i.e. k11=0, for the model DR08.M1 (again inflow to B is neglected). We subject this model to an increasing signal S(t), such as a ramp. To study the eventual behaviour of this system, it is useful to change variables to w=SA−k0/k1, B0=B−k0/k2:
$$\begin{array}{@{}rcl@{}} dw/dt &=&(w/S)dS/dt- k_{1}Sw +(k_{0}/(k_{1}S))dS/dt \\ {dB}_{0}/dt &=&k_{1}w-k_{2}B_{0} \end{array} $$
Looking at the terms on the RHS of the first equation: for linear and quadratic ramps, the first term, involving (1/S)dS/dt, becomes small relative to the second as time increases, and the last term, which is independent of w, also approaches zero. Therefore as time progresses the dynamics are dominated by the second term and the system evolves to w=0. This can be demonstrated formally. Even when the signal is an exponential, while the last term approaches a constant, the second term dominates (both this constant and the first term), and so w approaches zero. In other words, SA approaches k0/k1. This shows that overall A decreases in concentration so that the flux out of A (and into B) approaches k0. From this it easily follows that B0 approaches a steady state of zero, implying that B approaches a steady state of k0/k2. This is consistent with the intuition that the intermediate step is effectively short-circuited by the increasing signal, and explains why B adapts.
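As a check on this argument, the following is a minimal numerical sketch (not part of the original analysis; the parameter values, ramp slope and SciPy-based integration are illustrative assumptions):

```python
# Simulate the irreversible two-node motif under a linear ramp S(t) = a + b*t
# and check that B returns to its pre-stimulus value k0/k2 (exact adaptation).
import numpy as np
from scipy.integrate import solve_ivp

k0, k1, k2 = 1.0, 1.0, 1.0
a, b = 1.0, 0.5                      # ramp parameters (hypothetical)

def rhs(t, y):
    A, B = y
    S = a + b * t
    return [k0 - k1 * S * A,         # dA/dt
            k1 * S * A - k2 * B]     # dB/dt

# start at the pre-stimulus steady state (S = a): A = k0/(k1*a), B = k0/k2
sol = solve_ivp(rhs, (0.0, 200.0), [k0 / (k1 * a), k0 / k2],
                rtol=1e-9, atol=1e-12)
print(sol.y[1, -1], k0 / k2)         # B(t_end) is close to k0/k2
```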
We now consider the reversible case, where k11 is nonzero. The approach is similar. We employ a change of variables to w=SA−k0/k1−k11k0/(k2k1), B0=B−k0/k2. Here the equations in the new variables read
$$\begin{array}{@{}rcl@{}} dw/dt&=&- k_{1}Sw +k_{11}{SB}_{0}+\left[\left(w+(k_{0}/k_{1})(1+k_{11}/k_{2})\right)/S\right] dS/dt \\ {dB}_{0}/dt&=&k_{1}w-k_{2}B_{0} -k_{11}B_{0} \end{array} $$
As before, the asymptotic behaviour is governed by the terms linear in w and B0 (the first two terms dominate the last term in the first equation). An inspection of the corresponding matrix readily reveals that for any fixed S its eigenvalues are negative or have negative real part, and with increasing S, w and B0 approach zero, just as before. This demonstrates the assertion that B adapts in this situation as well.
Now we briefly analyze the situation where the signal regulates the conversion of B to A. This is described by modifying the above model to reflect this
$$\begin{array}{@{}rcl@{}} dA/dt&= &k_{0} -k_{1}A +k_{11}SB \\ dB/dt &=& k_{1}A -k_{11}SB -k_{2}B \end{array} $$
We now demonstrate two points (i) In a linear ramp, B will not exhibit exact adaptation (ii) In a quadratic ramp, B will reach a zero steady state. Both these observations have been seen in simulations.
To demonstrate the first point: suppose B reaches a steady state (a necessary pre-requisite for adaptation). Let us call this steady state B0. Examining the asymptotic evolution of A, we see that it is asymptotically governed by the equation:
$$\begin{array}{@{}rcl@{}} dA/dt&=&k_{0}+k_{11}(a+bt)B_{0} -k_{1}A \end{array} $$
where the ramp S=a+bt is incorporated. As a consequence of this increasing production term, A also evolves to a behaviour which is dominated by this term. This can be seen by ignoring the k0 and a terms above and examining the evolution of A. This results in a dominant behaviour of A given by A∼k11bB0t/k1 (the full solution can be derived, and it is clear from it that this is the dominant behaviour). Now when we consider the fact that d/dt(A+B)=k0−k2B, we see that with a linear increase of A in time, the left hand side contributes a constant, and hence B never reaches its prestimulus behaviour. Thus exact adaptation is not observed.
Finally we also demonstrate that in a quadratic ramp, a non-zero steady state cannot be obtained. We follow the same line of reasoning. Suppose B reaches a steady state B0. For a quadratic ramp, the asymptotic dynamics of A are described by
$$\begin{array}{@{}rcl@{}} dA/dt&=&k_{0}+k_{11}\left(a+bt^{2}\right)B_{0} -k_{1}A \end{array} $$
Now the dominant behaviour of the A dynamics arises from the quadratic term: in fact A∼k11bB0t2/k1 as long as B0>0. Now if we examine the overall equation d/dt(A+B)=k0−k2B, we see that such a behaviour of A is inconsistent with a non-zero steady state for B: the left hand side has a dominant contribution which arises from the quadratically increasing function of time encoded in A. Therefore, to leading order, the left hand side is a linearly increasing function of time, which is inconsistent with the right hand side, which approaches a constant. The assumption that B reaches a non-zero steady state leads to a direct contradiction, and the role of the quadratic ramp is transparently seen here.
Three node motifs. We concisely discuss the behaviour of two 3-node motifs presented in the text: DR08.M32 (node C associated with outflow) and DR08.M34 (node C associated with inflow). These are also examined with an alternative analysis in the section to follow. In the two-outflow case, the only location associated with exact adaptation in a ramp is the A to B reaction. That exact adaptation occurs in this case follows from a simple extension of the 2-node motif case, for the same reason. Exact adaptation does not occur for the other three locations: for a signal applied to the B to A reaction, exact adaptation of B implies a steady state and exact adaptation of C=k32B/(k31+k33), but linearly increasing A. This contradicts the mass balance d/dt(A+B+C)=k0−k2B−k33C, since the RHS = 0 for exact adaptation, while the LHS is non-zero. In this motif exact adaptation does not occur for a step applied to the B to C and C to A reactions, and this is also the case for a ramp.
In the one-outflow motif (DR08.M34), a ramp results in exact adaptation as long as it is not applied to the B to A or B to C reactions. We focus on the B to C and C to A reactions (the reactions introduced by the third node). In the former case, exact adaptation of B implies linearly increasing C and A with time, asymptotically, which contradicts d/dt(A+B+C)=k0−k2B: the LHS > 0 while the RHS = 0. For a ramp applied to the C to A reaction, C approaches a zero steady state, while the influx to C (from both B and the inflow) matches the flux from C to A. This scenario is analogous to having an additional source of inflow to A and an additional (constant-rate) reaction from B to A (in a 2-node motif).
Alternative analysis of inflow-outflow circuits. We now present an alternative analysis of inflow-outflow circuits for ramp inputs. This is based on quasi-steady-state approximations for some species. In particular we demonstrate through this approach that (i) for a two-node reversible motif, exact adaptation ensues if the ramp is applied to the conversion of A (the inflow variable) to B (the outflow, adapting variable); (ii) exact adaptation does not occur if the ramp is applied to the interconversion of B to A; (iii) for a three-node motif (involving an extra node C) where C is produced by B and converts to A, we consider two cases: outflow only through B, and outflow through both B and C. In the former case, adaptation occurs as long as the ramp is not applied to reactions involving the degradation/conversion from B. (iv) If there are two outflow variables, we primarily focus on the case of the B to C and C to A reactions being irreversible. In such a scenario, we show that only a ramp applied to the A to B reaction leads to exact adaptation. We also briefly consider reversible 3-node motifs.
Two node motif. Consider the two-node motif, which is assumed to be reversible, without loss of generality. Suppose the ramp is applied to the A to B conversion. A ramp is associated with an increasing, large signal. This allows us to make a quasi-steady-state approximation for A as A=(k0+k11B)/(k1S). Using this in the evolution equation for B yields
$$\begin{array}{@{}rcl@{}} dB/dt&=& k_{1}S(k_{0}+k_{11}B)/(k_{1}S) -k_{2}B -k_{11}B \end{array} $$
Thus as time becomes large, the dynamics of B reduce to dB/dt=k0−k2B, which indicates that B approaches the steady state k0/k2, corresponding to exact adaptation. This is consistent with what was derived earlier, indicating that A approaches 0 as 1/S.
Now we consider the signal associated with the B to A conversion. Here the quasi-steady approximation for B yields B=k1A/(k11S+k2). This now results in an asymptotic evolution equation for A of the form
$$\begin{array}{@{}rcl@{}} dA/dt&=&k_{0}+k_{11}S[k_{1}A/(k_{2}+k_{11}S)] -k_{1}A \\ dA/dt& =& k_{0} -k_{1}{Ak}_{2}/(k_{2}+k_{11}S) \\ dA/dt & \sim& k_{0} -(k_{1}k_{2}/k_{11})(A/S) \end{array} $$
In making the transition between the first and second lines, note that the difference between the last two terms is calculated without making any assumption in the denominator of the second term: these two terms are of comparable order, and neglecting the k2 in the denominator of the second term would lead to an incorrect result. Once the second and third terms are combined, the assumption of large S is made. After substituting a linear ramp stimulus, the last equation is of the form dA/dt=a−bA/t, where a and b are constants. This can be solved by the change of variables A=ut. In terms of u the equation is tdu/dt=a−(b+1)u, which can be solved by separation of variables. This results in the solution $[a-(b+1)u(0)]/[a-(b+1)u(t)]=t^{b+1}$. From this, it follows that u approaches a steady state a/(b+1) (which contains information about the ramp slope), and consequently A increases linearly with time. This linear rate of increase matches well with results seen in computer simulations. From the expression for B we see that in a linear ramp, B asymptotes to a steady state not corresponding to exact adaptation, as seen earlier. Now suppose a quadratic ramp is applied: from the asymptotic evolution equation for A, we see that A will still approach a linear profile. This is because the variable u=A/t satisfies an equation of the form tdu/dt=a−u−bu/t, where the last term can be neglected relative to the penultimate one as time becomes large. Thus u approaches a constant value and consequently A is linear. From this, looking at the quasi-steady approximation for B, we find that B approaches zero, exactly as seen in simulations.
3-node motifs with single outflow. We focus on the model where the reactions involving the third node C are irreversible (for simplicity, here and below we ignore inflow to C, as that does not affect the qualitative conclusions):
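(The displayed equations appear to have been elided; the following reconstruction follows from the fully reversible three-node system displayed later by setting k13=k23=0, shown here with the signal on the A to B conversion:)
$$\begin{array}{@{}rcl@{}} dA/dt&=& k_{0} -k_{1}SA +k_{11}B +k_{31}C \\ dB/dt &=& k_{1}SA -k_{11}B -k_{2}B -k_{32}B \\ dC/dt &=& k_{32}B -k_{31}C -k_{33}C \end{array} $$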
If k33=0 there is no outflow through C; this is the case we consider here. Now if we look at ramp stimuli applied to the conversion from A to B and from B to A, we find that the former adapts, but the latter does not. This emerges from a quasi-steady-state approximation identical to the one considered above, just including the augmentation of the C variable. The C "leg" of the pathway is just an additional pathway from B to A. In fact the insights here follow essentially from the consideration of the interaction of nodes A and B. For instance, if the signal is associated with the transition from B to A, we find, by an identical quasi-steady-state approximation, a steady state for B and a linearly increasing profile for A, whose slope does not depend on reactions involving C. The slope obtained from the quasi-steady-state approximation matches computational simulations.
We now consider the ramp applied to the other two transitions. First consider the ramp applied to the C to A reaction. A quasi-steady-state assumption for C results in C=k32B/(k31S). This reduces the model to
$$\begin{array}{@{}rcl@{}} dA/dt&=&k_{0} +[k_{32}B/(k_{31}S)]k_{31}S -k_{1}A +k_{11}B \\ dA/dt&=& k_{0} +[k_{32}B] -k_{1}A +k_{11}B \\ dB/dt&=& k_{1}A -(k_{11}+k_{2}+k_{32})B \end{array} $$
This is just like a two-node motif with an extra pathway from B to A. When analyzed, these equations indicate a steady state for A and B, with B adapting exactly.
The remaining case is when the signal acts on the B to C conversion. Here, applying the quasi-steady-state approximation for B, we have B=k1A/(k11+k2+k32S). Implementing this reduction in the equations for A and C results in equations of the form
$$\begin{array}{@{}rcl@{}} d(A+C)/dt&=& k_{0} -k_{1}{Ak}_{2}/(k_{32}S) \\ dC/dt &=& k_{1}A-k_{31}C \end{array} $$
Analysis of these equations shows that C is asymptotically proportional to A, with A approaching a linearly increasing function of time. Just as in the two-node motif, B approaches a non-adaptive steady state. This justifies the statements made earlier.
We now turn to the case of a fully reversible 3 node system (depicted below for signal mediating conversion of A to B) which is described by the following equations:
$$\begin{array}{@{}rcl@{}} dA/dt&= &k_{0} -k_{1}SA +k_{11}B +k_{31}C -k_{13}A \\ dB/dt &=& k_{1}SA -k_{11}B -k_{2}B -k_{32}B +k_{23}C \\ dC/dt &=&k_{32}B -k_{33}C -k_{31}C +k_{13}A -k_{23}C \end{array} $$
Again we focus on the case of no outflow through C, i.e. k33=0. From simulations we find that when a ramp is applied to any of the transitions, excluding those involving degradation/conversion of B, exact adaptation ensues. While we will not repeat all the calculations for these cases, we focus on the signal acting at two transitions, (i) A to C and (ii) C to B, neither of which was present in the previous case. We examine the first case. Here, from a quasi-steady state for A we have A=(k0+k11B+k31C)/(k1+k13S). Substituting, we have
$$\begin{array}{@{}rcl@{}} dB/dt &=& k_{1} (k_{0}+k_{11}B +k_{31}C)/(k_{1}+k_{13}S) -(k_{11}+k_{2}+k_{32})B \,+\, k_{23}C \\ dC/dt &=& k_{32}B -k_{23}C +k_{13}S(k_{0}+k_{11}B +k_{31}C)/(k_{1}+k_{13}S) -k_{31}C \end{array} $$
This simplifies to
$$\begin{array}{@{}rcl@{}} dB/dt&=&k_{23}C- (k_{11}+k_{2}+k_{32})B \\ dC/dt&=& k_{0}+k_{11}B +k_{32}B -k_{23}C \end{array} $$
It is easy to see (for instance by adding the two equations) that this system reaches a steady state, which corresponds to exact adaptation of B. In fact this system has the structure of a 2-node motif, with inflow through C and outflow through B, with an extraneous pathway converting B to C. Note incidentally that if there were an outflow through C (k33≠0), then exact adaptation would not ensue.
Now we examine the case where the C to B transition is mediated by the signal. Here, from a quasi-steady state for C, we have C=(k13A+k32B)/(k31+k23S). Substituting, we have a reduced model
$$\begin{array}{@{}rcl@{}} dA/dt &=&k_{0}-k_{1}A +k_{11}B +k_{31}(k_{13}A +k_{32}B)/(k_{31}+k_{23}S) -k_{13}A \\ dB/dt &=& k_{1}A - k_{11}B -k_{2}B -k_{32}B +k_{23}S(k_{13}A +k_{32}B)/(k_{31}+k_{23}S) \end{array} $$
which simplifies to
$$\begin{array}{@{}rcl@{}} dA/dt&=&k_{0}-k_{1}A +k_{11}B -k_{13}A \\ dB/dt &=&k_{1}A-k_{11}B -k_{2}B +k_{13}A \end{array} $$
From this we see easily that an adaptive steady state is attained. In fact, this is similar in structure to a basic two node motif, but with an extra pathway between A and B. The steady state of B is the adaptive steady state balancing inflow and outflow.
Taken together, we find that when we have only one outflow, for transitions independent of the degradation/conversion from B, exact adaptation ensues.
Two outflows. We briefly examine the case of two outflows from this perspective. If the system is fully reversible, then exact adaptation does not ensue for a step at any location. Thus, there is no reason to expect exact adaptation in a ramp, and indeed it is not observed. Now if we consider the case where the reactions involving C are irreversible, we find that exact adaptation occurs only when the ramp is applied to the A to B conversion. The fact that the other locations do not result in exact adaptation can be seen as an extension of the above analysis. We also note that a step signal acting at these locations does not lead to exact adaptation either. In essence, the steady state of the full system implies that a linear combination of B and C is constant. Exact adaptation for B ensues only when the B to C ratio at steady state is fixed independent of the signal. This can happen only when the signal does not involve any reactions involving C. Combining this with the earlier analysis showing that the signal cannot be associated with the conversion of B, we find that there is only one transition which is associated with exact adaptation in a ramp.
Incoherent feedforward motifs. We study the behaviour of an incoherent feedforward motif, KR09, whose equations were presented earlier (similar insights follow for model KI14). We consider its behaviour in response to linear ramps, quadratic ramps and exponentials. First suppose S=a+bt. The asymptotic leading-order behaviour of both A and I is determined by the increasing term bt. A full solution is easily obtained,
$$\begin{array}{@{}rcl@{}} A(t) &=& A(0) \exp(-k_{-a}t) +k_{a}bt/k_{-a} -\left[k_{a}b/(k_{-a})^{2}\right] (1-\exp(-k_{-a}t)) \\ I(t) &=& I(0) \exp(-k_{-i}t) +k_{i}bt/k_{-i} -\left[k_{i}b/(k_{-i})^{2}\right] (1-\exp(-k_{-i}t)) \end{array} $$
but the key point is that the dominant behaviour of A is given by A∼kabt/k−a while that of I is given by I∼kibt/k−i. Now examining the R∗ equation shows a linearly increasing forward and backward pathway, which results in a quasi-steady state for the response, R∗∼(A/I)/(kr/kf+A/I). The key point here is that even though A and I are increasing asymptotically linearly in time, their ratio A/I approaches a constant, kak−i/kik−a, which is exactly the level in a constant stimulus (and is independent of the stimulus). This implies that the response reaches a steady state corresponding to exact adaptation. This was noted in [52].
A very similar insight follows in the case of a quadratic ramp. There, by a very similar approach A∼kabt2/k−a and I∼kibt2/k−i. For exactly the same reasons the response reaches a steady state. Asymptotically A/I reaches a steady level even though A and I are increasing, and the level it attains is exactly the value pre-stimulus: kak−i/kik−a. Thus exact adaptation is obtained in a quadratic ramp.
Now we consider an exponential stimulus S=S0exp(λt). The variation of A and I are given by
$$\begin{array}{@{}rcl@{}} A(t)&=& A(0)\exp(-k_{-a}t) +(k_{a}S_{0}/(k_{-a}+ \lambda))[\exp(\lambda t) -\exp (-k_{-a}t)] \\ I(t)&=& I(0)\exp(-k_{-i}t) +(k_{i}S_{0}/(k_{-i}+ \lambda))[\exp(\lambda t) -\exp (-k_{-i}t)] \end{array} $$
Both A and I are dominated by the increasing exponential terms, A∼(kaS0/(k−a+λ))exp(λt) and I∼(kiS0/(k−i+λ))exp(λt). For exactly the same reason as before, A/I reaches a steady value and the response reaches a steady state. Here, however, A/I asymptotes to the value ka(k−i+λ)/(ki(k−a+λ)), which is not the prestimulus level. In fact, the higher the λ, the greater the deviation from exact adaptation.
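These asymptotic ratios are easy to confirm numerically; the sketch below (not from the paper; parameter values and rates are illustrative assumptions) integrates the A and I equations for a linear ramp and for an exponential, and compares A/I against the limits derived above:

```python
import numpy as np
from scipy.integrate import solve_ivp

ka, kma, ki, kmi = 1.0, 2.0, 1.0, 0.5    # k_a, k_-a, k_i, k_-i (illustrative)
lam = 0.3

def rhs(t, y, S):
    A, I = y
    return [ka * S(t) - kma * A, ki * S(t) - kmi * I]

ramp = lambda t: 1.0 + 0.5 * t           # S = a + b*t
expo = lambda t: np.exp(lam * t)         # S = S0*exp(lam*t), with S0 = 1

for S, limit in [(ramp, ka * kmi / (ki * kma)),
                 (expo, ka * (kmi + lam) / (ki * (kma + lam)))]:
    sol = solve_ivp(rhs, (0.0, 60.0), [ka / kma, ki / kmi],
                    args=(S,), rtol=1e-9, atol=1e-12)
    print(sol.y[0, -1] / sol.y[1, -1], limit)   # A/I approaches the derived limit
```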
Adaptation in the mean to temporally periodic and static spatial stimuli. In the text, we studied circuits which maintained their mean value in a periodic stimulus, and also those which maintained the mean value in a static gradient. Inflow-outflow circuits (for instance two-node motifs) could exhibit both, and a common structure allows for this:
$$\begin{array}{@{}rcl@{}} d/dt(A+B)&=& k_{0} -k_{2}B + k_{d} \frac{\partial^{2} A}{\partial \theta^{2}} \end{array} $$
In a purely temporally periodic signal, the spatial (diffusion) term is zero, and the temporal term is zero when averaged. Exactly the reverse happens in a static spatial gradient, with the net effect that the averages are maintained; this is true for other circuits with similar structure. In either case, the presence of a control structure with constant coefficients is responsible.
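For concreteness, averaging this balance over one period T of a time-periodic solution (with the diffusion term absent) gives
$$\frac{1}{T}\int_{0}^{T} \frac{d}{dt}(A+B)\, dt = \frac{(A+B)|_{t=T}-(A+B)|_{t=0}}{T} = 0 \quad \Rightarrow \quad k_{2}\langle B \rangle = k_{0},$$
so the temporal mean of B is pinned at k0/k2 irrespective of the mean or amplitude of the input.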
The transcritical model does result in maintenance of the mean value in a temporally periodic stimulus, because the control structure is present as dC/dt=C(k2B−k−2): this equation is separable, and so corresponds to a system with an integral control structure with constant coefficients. We have already seen that if A is diffusible, the spatial average of B will not be maintained: the main point is that in the absence of C diffusion, either k2B=k−2 (implying no gradient response) or C=0 (implying no maintenance of the mean value of B); the latter happens when A is diffusible. The only remaining aspect to consider is: what if C is diffusible? Rewriting the equation (assuming C is nonzero) and integrating over the domain using integration by parts gives
$$\begin{array}{@{}rcl@{}} k_{d} \frac{\partial^{2} C}{\partial \theta^{2}} [1/C] + k_{2}B-k_{-2}&=&0 \\ (1/L) \int_{0}^{L}(k_{2} B-k_{-2}) d \theta&=& (k_{d}/L) \int_{0}^{L} \left[-(1/C^{2})(dC/d\theta)^{2} \right] d\theta \end{array} $$
This shows that the spatial average of B cannot be maintained unless C is constant. On the other hand, if C is constant, B cannot exhibit gradient behaviour. This shows how fundamental constraints exist in this model, and brings to the fore the difference between time and space: space is associated with a second derivative, which is ultimately what produces the deviation term on the RHS.
Tyson JJ, Chen KC, Novak B. Sniffers, buzzers, toggles and blinkers: Dynamics of regulatory and signaling pathways in the cell. Curr Opin Cell Biol. 2003; 15(2):221–31.
Marks F, Klingmuller U, Muller-Decker K. Cellular Signal Processing: An Introduction to the Molecular Mechanisms of Signal Transduction. New York: Garland Science; 2017.
Xiong W, Ferrell JE. A positive-feedback-based bistable 'memory module' that governs a cell fate decision. Nature. 2003; 426(6965):460–5.
Bar-Or RL, Maya R, Segel LA, Alon U, Levine AJ, Oren M. Generation of oscillations by the p53-Mdm2 feedback loop: A theoretical and experimental study. Proc Natl Acad Sci U S A. 2000; 97(21):11250–5.
Goldbeter A, Koshland DE. An amplified sensitivity arising from covalent modification in biological systems. Proc Natl Acad Sci U S A. 1981; 78(11):6840–4.
Barkai N, Leibler S. Robustness in simple biochemical networks. Nature. 1997; 387(6636):913–7.
Swaney KF, Huang C, Devreotes PN. Eukaryotic chemotaxis: A network of signaling pathways controls motility, directional sensing, and polarity. Annu Rev Biophys. 2010; 39(1):265–89.
Levchenko A, Iglesias PA. Models of eukaryotic gradient sensing: Application to chemotaxis of amoebae and neutrophils. Biophys J. 2002; 82(1):50–63.
Hamadeh A, Ingalls B, Sontag E. Transient dynamic phenotypes as criteria for model discrimination: Fold-change detection in Rhodobacter sphaeroides chemotaxis. J R Soc Interface. 2013; 10(80).
Manahan CL, Iglesias PA, Long Y, Devreotes PN. Chemoattractant signaling in Dictyostelium discoideum. Annu Rev Cell Dev Biol. 2004; 20:223–53.
Clark DA, Benichou R, Meister M, da Silveira RA. Dynamical adaptation in photoreceptors. PLoS Comput Biol. 2013; 9(11):1003289.
Korenbrot JI. Speed, sensitivity, and stability of the light response in rod and cone photoreceptors: Facts and models. Prog Retin Eye Res. 2012; 31(5):442–66.
Tranchina D, Sneyd J, Cadenas ID. Light adaptation in turtle cones: Testing and analysis of a model for phototransduction. Biophys J. 1991; 60(1):217–37.
Muzzey D, Gomez-Uribe CA, Mettetal JT, van Oudenaarden A. A systems-level analysis of perfect adaptation in yeast osmoregulation. Cell. 2009; 138(1):160–71.
Patel AK, Bhartiya S, Venkatesh KV. Analysis of osmoadaptation system in budding yeast suggests that regulated degradation of glycerol synthesis enzyme is key to near-perfect adaptation. Syst Synth Biol. 2013; 8(2):141–54.
Klipp E, Nordlander B, Kruger R, Gennemark P, Hohmann S. Integrative model of the response of yeast to osmotic shock. Nat Biotechnol. 2005; 23(8):975–82.
You T, Ingram P, Jacobsen MD, Cook E, McDonagh A, Thorne T, Lenardon MD, de Moura AP, Romano MC, Thiel M, Stumpf M, Gow NAR, Haynes K, Grebogi C, Stark J, Brown AJP. A systems biology analysis of long and short-term memories of osmotic stress adaptation in fungi. BMC Res Notes. 2012; 5:258.
Jeong J, Guerinot ML. Homing in on iron homeostasis in plants. Trends Plant Sci. 2009; 14(5):280–5.
Amir A, Meshner S, Beatus T, Stavans J. Damped oscillations in the adaptive response of the iron homeostasis network of Escherichia coli. Mol Microbiol. 2010; 76(2):428–36.
Huang Y, Drengstig T, Ruoff P. Integrating fluctuating nitrate uptake and assimilation to robust homeostasis. Plant Cell Environ. 2012; 35(5):917–28.
Semsey S, Andersson AMC, Krishna S, Jensen MH, Massé E, Sneppen K. Genetic regulation of fluxes: Iron homeostasis of escherichia coli. Nucleic Acids Res. 2006; 34(17):4960–7.
Venkatesh KV, Bhartiya S, Ruhela A. Multiple feedback loops are key to a robust dynamic performance of tryptophan regulation in escherichia coli. FEBS Lett. 2004; 563(1-3):234–40.
Somvanshi PR, Patel AK, Bhartiya S, Venkatesh KV. Implementation of integral feedback control in biological systems. Wiley Interdiscip Rev Syst Biol Med. 2015; 7(5):301–16.
Davis GW. Homeostatic control of neural activity: From phenomenology to molecular design. Ann Rev Neurosci. 2006; 29:307–23.
Yi TM, Huang Y, Simon MI, Doyle J. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. Proc Natl Acad Sci U S A. 2000; 97(9):4649–53.
Sontag ED. Adaptation and regulation with signal detection implies internal model. Syst Control Lett. 2003; 50(2):119–26.
Briat C, Gupta A, Khammash M. Antithetic integral feedback ensures robust perfect adaptation in noisy biomolecular networks. Cell Syst. 2016; 2(1):15–26.
Saunders PT, Koeslag JH, Wessels JA. Integral rein control in physiology. J Theor Biol. 1998; 194(2):163–73.
Ang J, McMillen DR. Physical constraints on biological integral control design for homeostasis and sensory adaptation. Biophys J. 2013; 104(2):505–15.
Drengstig T, Ueda HR, Ruoff P. Predicting perfect adaptation motifs in reaction kinetic networks. J Phys Chem B. 2008; 112(51):16752–8.
Drengstig T, Jolma IW, Ni XY, Thorsen K, Xu XM, Ruoff P. A basic set of homeostatic controller motifs. Biophys J. 2012; 103(9):2000–10.
Ma W, Trusina A, El-Samad H, Lim WA, Tang C. Defining network topologies that can achieve biochemical adaptation. Cell. 2009; 138(4):760–73.
Krishnan J, Iglesias PA. Analysis of the signal transduction properties of a module of spatial sensing in eukaryotic chemotaxis. Bull Math Biol. 2003; 65(1):95–128.
Krishnan J, Iglesias PA. Systems analysis of regulatory processes underlying eukaryotic gradient perception. IEEE Trans Autom Control. 2008; 53(SPECIAL ISSUE):126–38.
Krishnan J. Signal processing through a generalized module of adaptation and spatial sensing. J Theor Biol. 2009; 259(1):31–43.
Krishnan J. Effects of saturation and enzyme limitation in feedforward adaptive signal transduction. IET Syst Biol. 2011; 5(3):208–19.
Cournac A, Sepulchre J-A. Simple molecular networks that respond optimally to time-periodic stimulation. BMC Syst Biol. 2009; 3:29.
Iglesias PA, Shi C. Comparison of adaptation motifs: Temporal, stochastic and spatial responses. IET Syst Biol. 2014; 8(6):268–81.
Marquez-Lago TT, Leier A. Stochastic adaptation and fold-change detection: From single-cell to population behaviour. BMC Syst Biol. 2011; 5:22.
Ferrell JE. Perfect and near-perfect adaptation in cell signaling. Cell Syst. 2016; 2(1):62–7.
Edgington MP, Tindall MJ. Fold-change detection in a whole-pathway model of Escherichia coli chemotaxis. Bull Math Biol. 2014; 76(6):1376–95.
Skataric M, Nikolaev EV, Sontag ED. Fundamental limitation of the instantaneous approximation in fold-change detection models. IET Syst Biol. 2015; 9(1):1–15.
Shoval O, Alon U, Sontag E. Symmetry invariance for adapting biological systems. SIAM J Appl Dyn Syst. 2011; 10(3):857–86.
Kim J, Khetarpal I, Sen S, Murray RM. Synthetic circuit for exact adaptation and fold-change detection. Nucleic Acids Res. 2014; 42(9):6078–89.
Tu Y. Quantitative modeling of bacterial chemotaxis: Signal amplification and accurate adaptation. Ann Rev Biophys. 2013; 42(1):337–59.
Wang CJ, Bergmann A, Lin B, Kim K, Levchenko A. Diverse sensitivity thresholds in dynamic signaling responses by social amoebae. Sci Signal. 2012; 5(213):17.
Shimizu TS, Tu Y, Berg HC. A modular gradient-sensing network for chemotaxis in Escherichia coli revealed by responses to time-varying stimuli. Mol Syst Biol. 2010; 6:382.
Iglesias PA. Chemoattractant signaling in Dictyostelium: Adaptation and amplification. Sci Signal. 2012; 5(213):8.
Rahi SJ, Larsch J, Pecani K, Katsov AY, Mansouri N, Tsaneva-Atanasova K, Sontag ED, Cross FR. Oscillatory stimuli differentiate adapting circuit topologies. Nat Methods. 2017; 14(10):1010–6.
Krishnan J, Mois K, Suwanmajo T. The behaviour of basic autocatalytic signalling modules in isolation and embedded in networks. J Chem Phys. 2014; 141(17):175102.
Shinar G, Feinberg M. Structural sources of robustness in biochemical reaction networks. Science. 2010; 327(5971):1389–91.
Seaton DD, Krishnan J. Modular systems approach to understanding the interaction of adaptive and monostable and bistable threshold processes. IET Syst Biol. 2011; 5(2):81–94.
Alam-Nazki A, Krishnan J. An investigation of spatial signal transduction in cellular networks. BMC Syst Biol. 2012; 6:83.
Shinar G, Milo R, Martinez MR, Alon U. Input-output robustness in simple bacterial signalling systems. Proc Natl Acad Sci U S A. 2007; 104(50):19931–5.
Agafonov O, Siesto CH, Thorsen K, Xu XM, Drengstig T, Ruoff P. The organization of controller motifs leading to robust plant iron homeostasis. PLoS ONE. 2016; 11(1):0147120.
Auslander D, Auslander S, Hamri GC-E, Sedlmayer F, Muller M, Frey O, Hierlemann A, Stelling J, Fussenegger M. A synthetic multifunctional mammalian pH sensor and CO2 transgene-control device. Mol Cell. 2014; 55(3):397–408.
Stapleton JA, Endo K, Fujita Y, Hayashi K, Takinoue M, Saito H, Inoue T. Feedback control of protein expression in mammalian cells by tunable synthetic translational inhibition. ACS Synth Biol. 2012; 1(3):83–8.
He F, Murabit E, Westerhoff HV. Synthetic biology and regulatory networks: Where metabolic systems biology meets control engineering. J R Soc Interface. 2016; 13(117):20151046.
Wehner M, Truby RL, Fitzgerald DJ, Mosadegh BM, Whitesides GM, Lewis JA, Wood RJ. An integrated design and fabrication strategy for entirely soft autonomous robots. Nature. 2016; 536(7617):451–5.
Zhang DY, Seelig G. Dynamic DNA nanotechnology using strand-displacement reactions. Nat Chem. 2011; 3(2):103–13.
Chen Y-J, Dalchau N, Srinivas N, Phillips A, Cardelli L, Soloveichik D, Seelig G. Programmable chemical controllers made from DNA. Nat Nanotechnol. 2013; 8(10):755–62.
Chen Y-J, Groves B, Muscat R, Seelig G. DNA nanotechnology from the test tube to the cell. Nat Nanotechnol. 2015; 10(9):748–60.
Briat C, Zechner C, Khammash M. Design of a synthetic integral feedback circuit: Dynamic analysis and DNA implementation. ACS Synth Biol. 2016; 5(10):1108–16.
Sawlekar R, Montefusco F, Kulkarni V, Bates DG. Implementing nonlinear feedback controllers through DNA strand displacement reactions. IEEE Trans Nanobioscience. 2016; 10(15):443–54.
Katz E. Biomolecular information processing: From logic systems to smart sensors and actuators. Wiley. 2012; 8(5-6):339–46.
We acknowledge computational assistance from Govind Menon.
All relevant information about the models used are contained in the main text as well as the Additional file 1. The Additional file 1 contains further details about models, parameters and supplementary results/discussion.
Author affiliations: J. Krishnan and Ioannis Floros, Department of Chemical Engineering, Centre for Process Systems Engineering, Imperial College London, South Kensington, London, SW7 2AZ, UK. Ioannis Floros, National Centre of Scientific Research "Demokritos", Athens, Greece.
JK planned the work, IF performed the computational analysis, JK conducted the analytical studies and wrote the manuscript. Both authors have read and approved the manuscript.
Correspondence to J. Krishnan.
Additional file 1: Supplementary Material. (PDF 176 kb)
Krishnan, J., Floros, I. Adaptive information processing of network modules to dynamic and spatial stimuli. BMC Syst Biol 13, 32 (2019). https://doi.org/10.1186/s12918-019-0703-1
Dynamic stimuli
Spatial stimuli
Networks and information flow
What are projection methods?
Quoting from Solenthaler et al., Predictive-Corrective Incompressible SPH (ACM Transactions on Graphics, Vol. 28, No. 3, Article 40, August 2009) (PDF link here):
These incompressible SPH (ISPH) methods first integrate the velocity field in time without enforcing incompressibility. Then, either the intermediate velocity field, the resulting variation in particle density, or both are projected onto a divergence-free space to satisfy incompressibility through a pressure Poisson equation.
What does the author mean by "project"? Is there a simpler way of understanding this operation? I tried reading other articles, but they go even deeper, talking about solenoidal vectors etc. I am looking for a simple explanation at first, to understand the concept.
linear-algebra particle
vkumar
Comment: Check out this code: math.mit.edu/cse/codes/mit18086_navierstokes.m and this paper: math.mit.edu/cse/codes/mit18086_navierstokes.pdf – Isopycnal Oscillation Dec 20 '13 at 17:56
Comment: Personally, I would interpret these methods as "operator splitting" rather than "projection". – Paul♦ Dec 21 '13 at 2:24
Projection methods typically split the solution of the transient Stokes or Navier-Stokes equations into the solution of two separate problems - one for the velocity, one for the pressure. For Stokes, the simplest version (by Chorin/Temam) goes as follows for a timestep $k$:
Compute an intermediate velocity $u^*$ by solving $$\frac{(u^*-u^k)}{dt} - \nu\Delta u^* = f(t^k)$$
Correct the velocity by solving $$\begin{cases} \frac{(u^{k+1}-u^*)}{dt} + \nabla p^k &= 0\\ \nabla \cdot u^{k+1} &= 0\\ u^{k+1}\cdot n &= 0 \quad \text{on the boundary} \end{cases}$$
The above step is the reason these splitting schemes are referred to as projection methods - the weak form of (2) can be interpreted as the scaled $L^2$ projection of $u^*$ onto a divergence-free $u^{k+1}$ (after multiplying by a test function, $p$ can then be viewed as a Lagrange multiplier enforcing the divergence-free constraint).
A note: since the pressure is undetermined in the second step, we need to determine it. Based on the requirement that $\nabla \cdot u^{k+1}$ must be $0$, we can take the divergence of the second equation to get $$-\nabla \cdot \frac{u^*}{dt} + \nabla\cdot \nabla p^k = 0,$$ which yields a Poisson equation for $p$ given $u^*$. The overall scheme is then
Solve for $u^*$ through $$\frac{(u^*-u^k)}{dt} - \nu\Delta u^* = f(t^k)$$
Solve for $p$ with $$\Delta p = \frac{1}{dt}\nabla \cdot u^*$$
Solve for $u^{k+1}$ through $$\frac{(u^{k+1}-u^*)}{dt} + \nabla p^k = 0.$$
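If it helps to see the projection as a concrete operation, here is a minimal sketch (my own illustration, not taken from the references above): on a 2D periodic grid, the $L^2$ projection onto divergence-free fields can be done in Fourier space by removing the component of $\hat u$ along each wavevector, which is equivalent to solving the pressure Poisson equation and subtracting $\nabla p$.

```python
import numpy as np

n = 64
k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                # avoid dividing by zero at the mean mode

def project(ux, uy):
    """L2-project the velocity (ux, uy) onto divergence-free fields."""
    ux_h, uy_h = np.fft.fft2(ux), np.fft.fft2(uy)
    phi_h = (KX * ux_h + KY * uy_h) / K2      # plays the role of the pressure
    return (np.fft.ifft2(ux_h - KX * phi_h).real,
            np.fft.ifft2(uy_h - KY * phi_h).real)

rng = np.random.default_rng(0)
ux, uy = rng.standard_normal((2, n, n))       # some intermediate velocity u*
px, py = project(ux, uy)
div_h = KX * np.fft.fft2(px) + KY * np.fft.fft2(py)
print(np.abs(div_h).max())                    # ~1e-12: the result is divergence-free
```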
See this paper for a great overview of available projection methods.
Jesse Chan
To give an easier example, consider the ODE for rotation
\begin{align} \dot x=-ay\\ \dot y=ax \end{align}
If one solves that with the Euler method, the next point is found in the direction of the tangent of the circle, increasing the radius. \begin{align} x(t+h)^2+y(t+h)^2&=(x(t)-ahy(t))^2+(y(t)+ahx(t))^2\\ &=(1+a^2h^2)(x(t)^2+y(t)^2) \end{align}
However, since one knows that the exact solution of this ODE preserves the initial radius
$$x(t)^2+y(t)^2=r^2=x(0)^2+y(0)^2$$
one can correct this drifting away by projecting the numerical solution back to that circle.
$$(\tilde x(t+h),\tilde y(t+h)) =\frac{r}{\sqrt{x(t+h)^2+y(t+h)^2}}(x(t+h),y(t+h))$$
The angular velocity will still be different, but the approximation will still be better.
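A minimal sketch of this idea in code (the step size and rate are illustrative; the projection is just a radius renormalization after each Euler step):

```python
import numpy as np

a, h, r = 1.0, 0.05, 1.0
x, y = r, 0.0
for _ in range(2000):
    x, y = x - a * h * y, y + a * h * x   # Euler step: radius grows by sqrt(1 + a^2 h^2)
    s = r / np.hypot(x, y)                # project back onto the circle of radius r
    x, y = s * x, s * y
print(np.hypot(x, y))                     # remains r (up to roundoff)
```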
In general, if you have a system of first integrals for the problem, you may want to return, after a certain number of steps, to the set satisfying the first integrals. Using Newton's method with the Moore-Penrose pseudo-inverse will usually find a rather short path to that set.
Finding the closest point on some set is usually called "projection". This conforms to the characterization of orthogonal projections in linear algebra.
Lutz Lehmann
If you're confused why this operation is called "projection", note that the equation $$ \text{div} \, \mathbf u = 0 $$ means that we only consider velocity fields that are divergence free. This is no different than asking to find a solution of some problem subject to a constraint of the form $Bx=0$ where $B$ is a matrix with fewer rows than columns. The set of all vectors $x$ that satisfy this forms a subspace (think of a plane in 3-space, or a hyperplane in $n$-space). What projection algorithms do is find some approximation for a simpler subproblem, and then "project" back onto the hyperplane of functions that are divergence free, where the projection is just the familiar operation from ordinary, finite-dimensional linear algebra or geometry.
Bill Barth
Types of Graphs
1 Symmetric Ties and Undirected Graphs
2 Asymmetric Ties and Directed Graphs
2.1 Node Neighborhoods in Directed Graphs
2.2 Node Degree in Directed Graphs
3 Anti-Symmetric Ties and Tree Graphs
4 Tie Strength and Weighted Graphs
5 Sentiment Relations and Signed Graphs
5.1 Sentiment Networks
5.2 Signed Graphs
Nodes and edges are indeed the building blocks of a graph. However, the types of relationships that the edges represent can change both how we understand the network conceptually and what mathematical techniques we can apply to the graph when we compute graph metrics (the subject of a future lesson). The basic idea is that when we do network analysis, we want to map our understanding of the nature of the social relationships we are studying onto the types of graphs we use to represent the network formed by the concatenation of those relationships.
Figure 1.1: An undirected graph
Let us assume that Figure 1.1 represents a network of people who spend time together. One way of building this network would be to ask people on your dorm room floor who are the people that they spend some amount of time (e.g., more than an hour a week) hanging out with. By definition the relation "spending time together" lacks any inherent directionality. Mutuality (or reciprocity) is built in by construction. It would be nonsensical for a person (say A) to claim that they spend time with another person (say B) and for B to say that they do not spend time with A. In social network analysis these types of ties are called symmetric ties.
Accordingly, two people being in the same place at the same time (co-location), even if they do not know one another, is an example of a symmetric tie. You also have the symmetric tie "being in the same class as" with every other student that is also taking your Social Networks seminar this term. Note that, in this sense, all co-memberships (e.g., being in the same club or organization or being part of the same family) create symmetric ties among all actors involved (we will revisit this topic when talking about two-mode networks in another lesson). If I am a member of your family, you are also my family member; if we are both members of the soccer club, we are considered teammates. Social networks composed of symmetric ties are represented using undirected graphs like the one shown in Figure 1.1.
Networks composed of symmetric ties have some interesting properties. If we know that the relationship (R) linking two nodes A and B is symmetric, then only a single edge exists that links them, and it does not matter whether we call this edge AB or BA. The order does not matter. In this way, we can formally define as symmetric tie as one that lacks directionality; if a tie is symmetric, then if we know that A is related to B (the AB edge is part of the edge set of the graph), then we know by necessity that B is related to A.
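A quick sketch of what this means in code (using the networkx library, with a hypothetical pair of nodes):

```python
import networkx as nx

G = nx.Graph()                # an undirected graph
G.add_edge("A", "B")          # "A spends time with B"
print(G.has_edge("B", "A"))   # True: the edge AB *is* the edge BA
```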
Can you think of other examples of symmetric ties? Is friendship, as culturally defined in the contemporary world, a symmetric tie?
In contrast to spending time together, being members of the same family, or being in the same place at the same time, some social ties allow for inherent directionality. Edges in these graphs are called asymmetric ties. That is, one member of the pair can claim to have a particular type of social relationship with the other, but it is possible (although not necessary) that the other person fails to have the same relationship with the first.
Helping or social support relations are like this. For instance, you can help someone with their homework, or give them personal advice, but this does not necessarily mean that that person will return the favor. They may, or they may not. The point is that, in contrast to a symmetric tie, mutuality or reciprocity is not built in by definition, but must happen as an empirical event in the world. We need to ask the other person to find out (or check their email logs). Can you think of other examples of asymmetric social ties?
Figure 2.1: A directed graph.
Reciprocity is an important concept in social network analysis. Some have said it is perhaps the most important concept for understanding human society (Gouldner, Alvin W. 1960. "The Norm of Reciprocity: A Preliminary Statement." American Sociological Review, 161–78), which may be a bit of an exaggeration. Only asymmetric ties may have the property of being non-reciprocal, or of having more or less reciprocity. If I think you are my friend, I very much hope that you also think you are my friend. That said, sociologists have found that in many natural social settings this is not the case. Sometimes people think they are friends with others, but those other people disagree (Carley, Kathleen M, and David Krackhardt. 1996. "Cognitive Inconsistencies and Non-Symmetric Friendship." Social Networks 18 (1): 1–27). For this reason, sociologists typically ask: if I do you a favor, would you do me a favor in the future? Additionally, sociologists often ask: if I treat you with respect, will you also treat me with respect? If I text you, will you text me back? If this is true, we have a level of reciprocity in our relationship.
For some ties, such as advice, support, or friendship relations, reciprocity is all or none; it either exists or it does not. For instance, the friendship offer you extend to someone may be reciprocated (or not). In the same way, you can like someone and they may like you back (or not), like the notes you passed around in middle school. For other ties, such as communication ties (e.g., those defined by the amount of texting or calling), reciprocity is a matter of degree; there may be more or less of it. For instance, you can text someone 10 times a day, but they may text you back only half of those times. In all cases, reciprocity is at a maximum when the content of the relationship is equally exchanged between actors.
Can you think of relationships in your life characterized by more or less reciprocity?
Just like symmetric ties are represented using a particular type of graph (namely, an undirected graph), social networks composed of asymmetric ties are best represented by a type of graph called a directed graph (directed graphs are also called digraphs). Figure 2.1 shows the point and line diagram picture of a directed graph. What were simple lines in an undirected graph (Figure 1.1) have been replaced with arrows indicating directionality. A node sends a relationship to the node that the arrow points to, and a pair of nodes may be linked by up to two directed arrows, one going each way.
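As a quick illustration, the sketch below builds a small directed graph in networkx (again an assumed tool, with a hypothetical edge list rather than the full Figure 2.1); note how a dyad can carry one arrow, two arrows, or none:

```python
import networkx as nx

# A DiGraph stores each arrow separately, so a dyad may carry one
# arrow, two arrows (one going each way), or none at all.
D = nx.DiGraph()
D.add_edge("A", "B")  # A sends a tie to B
D.add_edge("B", "A")  # B sends one back: a mutual dyad
D.add_edge("H", "D")  # H sends a tie to D that is not returned

print(D.has_edge("A", "B"), D.has_edge("B", "A"))  # True True
print(D.has_edge("H", "D"), D.has_edge("D", "H"))  # True False
```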
If Figure 2.1 were an advice network (Cross, Borgatti, and Parker 2001), we could say that H seeks advice from D, but D does not seek advice from H. This may be because D is higher in the office hierarchy or is more experienced than H, in which case the lack of reciprocity may be indicative of an authority relationship between the two nodes.
In a directed graph, for every edge, there is a source node and a destination node. So in the case of "A helps B" the source node is A and the destination node is B. In the case of "B helps A" the source node is B and the destination node is A. This means that in a directed graph, in contrast to an undirected one, the order in which you list the nodes when you name the edges matters. Thus, the edge AB is a different edge from the edge BA; the first may exist while the second does not.
One must always be careful when examining a directed network to make sure one properly understands the direction of the underlying social relationships!
Just like in undirected (simple) graphs, each node in a directed graph has a node neighborhood. However, because each node can now be the source or the destination of asymmetric edges, we have to differentiate the neighborhood of a node depending on whether the node is the sender or the recipient of a given link.
So, we say that a node j is an in-neighbor of a node i if there is a directed link with j as the source and i as the destination node. For instance, in Figure 2.1, E is an in-neighbor of C, because there is an asymmetric edge with E as the source and C as the destination.
In the same way, we say that a node j is an out-neighbor of a node i if there is a directed link with i as the source and j as the destination. For instance, in Figure 2.1, F is an out-neighbor of G, because there is an asymmetric edge with G as the source and F as the destination.
For each node, the full set of in-neighbors forms the in-neighborhood of that node. This is written \(N^{in}(v)\), where \(v\) is the label corresponding to the node. For instance, in Figure 2.1, the node set \(N^{in}(D) = \{B, E, G\}\) is the in-neighborhood of node D.
In the same way, the full set of out-neighbors defines the out-neighborhood of that node. This is written \(N^{out}(v)\), where \(v\) is the label corresponding to the node. For instance, in Figure 2.1, the node set \(N^{out}(B) = \{A, C, D\}\) is the out-neighborhood of node B.
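The sketch below recovers both neighborhoods in networkx, using only the edges explicitly quoted in the text (any edges of Figure 2.1 not mentioned there are left out); networkx calls in-neighbors "predecessors" and out-neighbors "successors":

```python
import networkx as nx

# Edges quoted in the text: B, E, and G send ties to D,
# and B sends ties to A, C, and D.
D = nx.DiGraph([("B", "D"), ("E", "D"), ("G", "D"), ("B", "A"), ("B", "C")])

# networkx calls in-neighbors "predecessors" and out-neighbors "successors".
print(set(D.predecessors("D")))  # {'B', 'E', 'G'}, i.e. N_in(D)
print(set(D.successors("B")))    # {'A', 'C', 'D'}, i.e. N_out(B)
```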
Note that typically, the set of in-neighbors and out-neighbors of a given node will not be exactly the same, and sometimes the two sets will be completely disjoint (they won't share any members).
Nodes will only show up in both the in and out-neighborhood set when there are reciprocal or mutual ties between the nodes. For instance, in Figure 2.1, the out-neighborhood of node F is \(\{A\}\) and the in-neighborhood is \(\{A, G\}\). Here node A shows up in both the in and out-neighborhood sets because A has a reciprocal tie with F.
Because in a directed graph each node has two distinct sets of neighbors, we can compute two versions of degree for the same node.
In a directed graph, for any node \(i\), we can count the number of edges that have \(i\) as their destination node. This is also the cardinality of the in-neighborhood set of that node. This is called the node's indegree and it is written \(k^{in}_i\), where \(i\) is the label corresponding to that node.
Additionally, in a directed graph, for any node \(i\), we can count the number of edges that have \(i\) as their source node. This is also the cardinality of the out-neighborhood set of that node. This is called the node's outdegree and it is written \(k^{out}_i\), where \(i\) is the label corresponding to that node.
For instance, in Figure 2.1, \(k^{out}_B = 3\) and \(k^{in}_B = 2\). Node B has three outgoing ties (to nodes A, C, and D) and two incoming ties (from nodes A and D).
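The following sketch checks these two numbers in networkx, on a hypothetical edge list chosen to match the stated degrees of node B:

```python
import networkx as nx

# A hypothetical edge list chosen to match the stated degrees of B:
# three outgoing ties (to A, C, D) and two incoming (from A and D).
D = nx.DiGraph([("B", "A"), ("B", "C"), ("B", "D"), ("A", "B"), ("D", "B")])

print(D.out_degree("B"))  # 3, the size of B's out-neighborhood
print(D.in_degree("B"))   # 2, the size of B's in-neighborhood
```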
Can you calculate what the indegree and outdegree of node D in Figure 2.1 are?
The graph theoretic ideas of indegree and outdegree have clear sociological interpretations. In a social network, for instance, a node having a large outdegree could indicate a sociable person (a person who likes to connect with others), while having a large indegree can indicate a popular person (e.g., a person lots of other people want to be friends with). In a later lesson we will see how to use a directed graph's asymmetric adjacency matrix to readily compute the outdegree and indegree in real social networks.
This means that in a directed graph, there will typically be three types of (non-isolate) nodes (Harary, Norman, and Cartwright 1965), as the sketch after this list illustrates:
First, there will be nodes that receive ties but don't send them. These are called receivers (like node C in Figure 2.1). For receiver nodes, \(k_{in} > 0\) and \(k_{out} = 0\).
Second, there will be nodes that receive ties and also send out ties. These are called carriers (like nodes A and B in Figure 2.1). For carrier nodes, \(k_{in} > 0\) and \(k_{out} > 0\).
Finally, there will be nodes that send ties but don't receive them. These are called transmitters (like nodes E and G in Figure 2.1). For transmitter nodes, \(k_{in} = 0\) and \(k_{out} > 0\).
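Here is that sketch: a short classifier (hypothetical graph, networkx again assumed) implementing the three rules above:

```python
import networkx as nx

def node_type(G, v):
    """Classify a node by the signs of its indegree and outdegree."""
    k_in, k_out = G.in_degree(v), G.out_degree(v)
    if k_in > 0 and k_out == 0:
        return "receiver"
    if k_in > 0 and k_out > 0:
        return "carrier"
    if k_in == 0 and k_out > 0:
        return "transmitter"
    return "isolate"

# A hypothetical graph containing all three types.
D = nx.DiGraph([("E", "C"), ("E", "B"), ("B", "C"), ("B", "A"), ("A", "B")])
for v in sorted(D.nodes):
    print(v, node_type(D, v))  # A, B carriers; C receiver; E transmitter
```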
There is a particular type of directed relationship that has the property of only going in one direction. These are called anti-symmetric ties. Like asymmetric ties, anti-symmetric ties have a directionality (and thus source and destination nodes), but reciprocity is forbidden by definition. That means that if A is anti-symmetrically connected to B, then B cannot send the same type of tie back to A (although B may be connected, and typically is, to A via some other type of tie in a different network).
A common example of anti-symmetric ties in political sociology are patron-client ties (Martin 2009). Patrons can have many clients, but it is impossible for a client of a patron to also be a patron to that same person. Other types of anti-symmetric ties are hierarchical relations at work and cross-generation links in families. Your boss is your boss, while you are not your boss's boss. In armies and other command and control structures, giving orders is an anti-symmetric relation. An officer who gives orders to another officer (and thus commands them) cannot by definition also receive orders from them. In the same way, your parents are your parents (you can only be a son or daughter to them), and your grandparents are their parents, and so forth. "Being the parent of" thus counts as an anti-symmetric relation as we define it here; it only goes one way (from parents to children) and cannot come back from children towards parents.
Figure 3.1: A tree graph.
One feature of a network composed of only anti-symmetric relations is that its corresponding graph can always be drawn from top to bottom, starting (at the top) with the node that only sends but does not receive any ties, and ending (at the bottom) with nodes that only receive, but do not send, ties. This is called a tree graph, and an example is shown in Figure 3.1. Your family tree is an example of a tree graph of anti-symmetric kin ties. For instance, A could be your grandmother, and B, C, and D could be her three daughters. If B was your mom, then you could be E (along with your siblings F and G), and your cousins would be H, I, J, K, L, and M.
Teacher-student, coach-athlete, and buyer-seller are all examples of anti-symmetric relationships that can be depicted as tree graphs. In a future lesson, we will see that it is possible to characterize the level of anti-symmetry we observe in a directed graph, to see how closely it approximates the pure tree structure.
In addition to kinship, authority relations are a common anti-symmetric tie between people. Thus, Figure 3.1 could be a network in which the anti-symmetric links are directed "gives orders to" relations (in an army or an office), where the source node directs commands toward the destination node. So A is the top boss and commands B, C, and D. Node B, in turn, gives orders to E, who is at the lowest level of the hierarchy, not commanding anybody themselves.
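Since anti-symmetry amounts to the absence of mutual dyads, it is easy to test for in code. The sketch below (a hypothetical chain of command, networkx assumed) flags any reciprocated tie:

```python
import networkx as nx

def is_antisymmetric(G):
    """True when no tie is reciprocated, i.e. there are no mutual dyads."""
    return not any(G.has_edge(v, u) for u, v in G.edges)

# A hypothetical chain of command: A commands B, C, and D; B commands E.
chain = nx.DiGraph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "E")])
print(is_antisymmetric(chain))  # True

chain.add_edge("E", "B")        # E now "commands" B back...
print(is_antisymmetric(chain))  # False: anti-symmetry is broken
```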
In the preceding sections, our understanding of relationships has centered around their existence or absence. In certain situations, it may be socially meaningful to consider relationships in terms of their intensity or frequency, or what is often referred to as the "strength" of the tie (Marsden and Campbell 1984).
For example, people might have many friends, and friendship ties can be turned into graphs as done in the above examples. However, people often have different types of friends, and some friends are more important than others. This is the idea behind the concept of having a best friend. Your best friend might be more important to you than all your other friends, and it might be sociologically meaningful to the topic you are studying to capture this difference in your network. While just marking your best friend as different from the rest might not be very informative, we can think of social situations where a series of gradations makes sense.
For example, let's say that we want to understand who the leader is in a group of friends. By definition, we have already bounded the case as an existing group of friends. If everyone had a tie to everyone else, because they are all friends, then we would not be able to detect any variation between these different friends. However, if we looked at the frequency of text messages sent from one of these friends to another, then we would likely begin to detect variation.
Figure 4.1: An undirected weighted graph.
The variation in the strength of these ties in social networks is captured by using weighted graphs to represent such networks. Figure 4.1 shows an example of an undirected weighted graph. In weighted graphs, the relative intensity of the relationships between actors in the network is quantified, thus facilitating the comparison of particular actors and relationships within the network. This is done by associating each edge in the graph with a number, called the weight of that edge. So instead of being just a set of vertices and edges, a weighted graph (\(G_w\)) is a set of three sets: a set of vertices (\(V\)), a set of edges (\(E\)), and a set of weights (\(w\)), one associated with each edge:
\[\begin{equation} G_w = (E, V, w) \end{equation}\]
Thinking of Figure 4.1 as depicting a type of undirected tie, the numbers can be thought of as the intensity of the link between two people. Perhaps these are the number of times two people have met for coffee or a drink during the last year. Inspecting the figure, we can see that actors C and E hang out together quite frequently (perhaps they are close, or are working on a project together). Actors B and G, on the other hand, hang out together less often.
Actors also seem to have preferences as to which people they hang out with most frequently among those they are connected to. For instance, actor C has four contacts in the network. However, they have met only a few times with A but meet quite a lot with their other contacts. This means that C has a weak tie with A and strong ties to the rest of their friends.
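The sketch below stores hypothetical meeting counts as edge weights in networkx and sorts C's ties from strongest to weakest; only the contrast between the weak C-A tie and C's stronger ties follows the text, and the specific numbers are invented:

```python
import networkx as nx

# Invented meeting counts; only the contrast between C's one weak tie
# (to A) and C's stronger ties follows the text.
W = nx.Graph()
W.add_weighted_edges_from([("C", "A", 2), ("C", "E", 25),
                           ("C", "B", 18), ("C", "G", 15)])

# List C's ties from strongest to weakest.
for u, v, w in sorted(W.edges("C", data="weight"), key=lambda e: -e[2]):
    print(f"{u}--{v}: weight {w}")
# The weight-2 edge to A stands out as C's one weak tie.
```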
Figure 4.2: A directed weighted graph.
Weighted graphs can also be directed, like the one shown in Figure 4.2. The number along each asymmetric tie could be the number of times one actor calls or texts the other, the number of times they retweet the other person, or the number of times they like a post from the other person on Instagram. Note that all of these things can lead to imbalance, such that in weighted graphs, relationships can be non-reciprocal even if the two actors are connected bi-directionally (as we will see in a future lesson). For instance, C directs an edge of weight \(w = 20\) towards I (perhaps a number of texts), but I only sends a directed edge of weight \(w = 5\) towards C.
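Taking the C and I counts from the example just given, here is a minimal sketch of this kind of weighted imbalance (the min/max ratio used here is our own illustrative measure, not one the lesson has defined):

```python
import networkx as nx

W = nx.DiGraph()
W.add_edge("C", "I", weight=20)  # C texts I twenty times
W.add_edge("I", "C", weight=5)   # I texts C back only five times

# A simple illustrative imbalance score: the ratio of the smaller
# weight to the larger one (1.0 would mean perfect reciprocity).
w_ci = W["C"]["I"]["weight"]
w_ic = W["I"]["C"]["weight"]
print(min(w_ci, w_ic) / max(w_ci, w_ic))  # 0.25: far from full reciprocity
```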
By assigning varying weights to the edges of a social network, representing the varying frequency or intensity of relationships in the real world, we can better understand important sociological phenomena.
So far we have talked about social relations as having different properties above and beyond being either "on" or "off." Social relations can be weak or strong or they can be multiplex or uniplex. Another property that social relations can have is valence. That is, you can be connected to other people via either positive or negative links.
For instance, you can love or hate someone. You can like or dislike a person. Somebody can consider you their enemy or their friend. A terrible person can bully you, or a kind person can help you. What all of these contrasts in connectivity have in common is that they distinguish relations by their valence, and that valence takes on one of two possible values (relations that can take one of two opposed values are called bipolar). Social networks that are composed of valenced relationships are called sentiment networks.
Can you think of other examples of sentiment networks you have experience with?
As you might have already suspected, there is a special type of graph that is useful for representing sentiment networks. This is called a signed graph. An example of a signed graph is shown in Figure 5.1. This signed graph is complete because all the possible relations between nodes exist. Note also that the signed graph is directed because each node is both a source and a destination node for directed asymmetric ties.
Figure 5.1: A directed signed graph.
Mathematically, one way to think about a signed graph is as a special kind of multigraph (\(G_S\)) featuring two disjoint sets of edges (two sets \(A\) and \(B\) are disjoint when they do not share any members; that is, their intersection is the empty set: \(A \cap B = \emptyset\)): positive links (\(E^+\)) and negative links (\(E^-\)). Thus, a signed graph is a set of three sets:
\[\begin{equation} G_S = (E^+, E^-, V) \end{equation}\]
Signed graphs have a number of unique properties. Some of them are the basis for entire network theories, such as balance theory, status theory, and karma theory, which we will discuss later. For instance, reciprocity, just as in weighted graphs, takes on a different meaning in complete signed graphs. In the usual graph theory sense, all the relations in Figure 5.1 are "reciprocal" because the graph is complete and thus all the dyads are mutual.
However, in complete signed graphs, reciprocity is better defined as obtaining in mutual dyads that have the same sentiment going from one node to the other. In a signed graph mutual dyad, the relationship is reciprocal if both people think they are friends or both people hate one another. A mutual dyad in a signed graph is non-reciprocal if one person likes the other, but that person hates the first. In the graph theoretic sense, reciprocal dyads in a signed graph are those connected by two asymmetric edges of the same type: either both positive or both negative. A dyad is non-reciprocal if the two nodes are connected by asymmetric edges of different types.
Thus, in Figure 5.1, nodes A and C have a mutually positive relationship; A likes C and C reciprocates by liking A back. In the same way, A and D have a mutually negative relationship; A hates D and D reciprocates by hating A back. While the notion of "reciprocity" in negative interactions like hating or bullying seems counter-intuitive (because we tend to think of reciprocity as an inherently positive thing), we will see later, when discussing theories of negative interactions, that negative reciprocity makes sense as a driver of human behavior, and may explain important phenomena like the escalation of violence among urban gangs (Papachristos, Hureau, and Braga 2013).
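Here is a small sketch of these dyads as a signed graph, storing the sign as an edge attribute (the A/C and A/D dyads follow the examples just given, the B/C dyad anticipates the next paragraph, and the attribute name "sign" and the networkx tooling are our own choices):

```python
import networkx as nx

# Sign stored as an edge attribute ("+" or "-").
S = nx.DiGraph()
S.add_edges_from([("A", "C", {"sign": "+"}), ("C", "A", {"sign": "+"}),
                  ("A", "D", {"sign": "-"}), ("D", "A", {"sign": "-"}),
                  ("B", "C", {"sign": "+"}), ("C", "B", {"sign": "-"})])

def dyad_reciprocity(G, u, v):
    """In a mutual dyad, reciprocal means both edges carry the same sign."""
    same = G[u][v]["sign"] == G[v][u]["sign"]
    return "reciprocal" if same else "non-reciprocal"

print(dyad_reciprocity(S, "A", "C"))  # reciprocal (mutual liking)
print(dyad_reciprocity(S, "A", "D"))  # reciprocal (mutual hating)
print(dyad_reciprocity(S, "B", "C"))  # non-reciprocal (B likes C, C dislikes B)
```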
Finally, note that nodes B and C have a non-reciprocal sentiment relation; B likes C but C does not reciprocate the sentiment. Instead, C dislikes B. This brings us to another property of signed graphs, which is that this type of "imbalance" makes us think that there is something wrong with this dyadic state, and that something will have to give. Either B starts to dislike C (because their feelings are hurt), or B ultimately convinces C to like them back.
The idea that some states of a signed graph make "more sense" than others (because the various sentiment relations are reciprocated) is behind the notion of balance. It makes sense to us that if someone likes somebody, that somebody should like them back, and if they hate somebody, that other person should hate them back. Those states seem "balanced"; it makes less sense when sentiment relations have opposite signs across a dyad. Imbalance makes you think of a process that will change the state of the links in the graph in the future, so that they go from imbalanced (e.g., non-reciprocal sentiment relations) to balanced (reciprocal sentiment relations). We will see later that this "balance" reasoning can be extended, in signed graphs, to triadic configurations (subsets of three nodes in the graph), and from there to the entire graph, so that we can speak of balanced and imbalanced triads, and balanced and imbalanced graphs. This is the basic idea behind balance theory.

# Multiplexity and Multigraphs

One simplifying assumption made in much previous and contemporary network research is that people in the network are linked by only one type of tie at a time (e.g., liking, friendship, texting). The reality is that connected dyads in social networks are usually connected by multiple types of ties at the same time. For instance, you text your friends, are in the same class as them, and sometimes work together. This means that a friend, who you text frequently, who is also a co-worker and takes the same class as you, is linked to you in at least four different ways! This phenomenon, first noticed in early qualitative fieldwork by social network anthropologists (Barnes 1954; Bott 1955) and early quantitative work by sociologists (Verbrugge 1979), is called multiplexity. A multiplex dyad is a dyad in which the two nodes are connected by multiple types of ties at the same time.
Figure 5.2: An undirected multigraph.
Multiplexity, as a common feature of social life, can be represented using a special type of graph called a multigraph. A multigraph (\(G_M\)) is just like a regular graph, except that instead of having a single edge set \(E\), it has multiple edge sets \((E_1, E_2, \dots, E_K)\), where \(K\) is the total number of different types of relations in the network:
\[\begin{equation} G_M = (E_1, E_2, \dots, E_K, V) \end{equation}\]
A network diagram of a multigraph is shown in Figure 5.2. This graph has eight nodes joined by three different types of ties (\(K = 3\)). The ties in a multigraph are labeled so that we can tell the different kinds apart. In the figure, the type-of-tie labels are represented by different edge colors. For instance, if the three relations we are studying are friendship (blue), co-working (red), and being a member of the same soccer club (green), then we can see that nodes D and C form a multiplex dyad because they are connected in two distinct ways (they are co-workers and members of the soccer club). Nodes C and A are also a multiplex dyad because they are friends who also happen to work together. Nodes B and H, by way of contrast, are a regular old uniplex dyad, being connected by a single type of tie (they are both in the soccer club, but do not work together, nor do they think of one another as friends). The same goes for the dyad formed by nodes D and F, who are just friends who neither work together nor belong to the same club. Finally, note that in Figure 5.2 nodes A and D are part of a regular old null dyad (they are not connected by any type of relation).
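To close, here is a sketch of a multigraph in networkx in which edge keys play the role of the colors in Figure 5.2 (only the dyads explicitly described in the text are included, so this is a partial, assumed reconstruction):

```python
import networkx as nx
from collections import Counter

# Edge keys name the type of tie, playing the role of the colors in
# Figure 5.2; only the dyads described in the text are included.
M = nx.MultiGraph()
M.add_edge("D", "C", key="cowork")
M.add_edge("D", "C", key="soccer")
M.add_edge("C", "A", key="friend")
M.add_edge("C", "A", key="cowork")
M.add_edge("B", "H", key="soccer")
M.add_edge("D", "F", key="friend")

# Count the parallel edges in each dyad: more than one means multiplex.
ties = Counter(frozenset((u, v)) for u, v, k in M.edges(keys=True))
for dyad, n in sorted(ties.items(), key=lambda t: sorted(t[0])):
    print(sorted(dyad), "multiplex" if n > 1 else "uniplex")
```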
Matematik på svenska
Wed 14 August - Tue 31 December
Steve Lester: Signs of Fourier coefficients of half-integral weight modular forms
Seminar, Number Theory
Wednesday 2019-08-21, 11:00
Lecturer: Steve Lester, Queen Mary University, London
Location: F11,
2019-08-21T11:00:00.000+02:00 2019-08-21T11:00:00.000+02:00 Steve Lester: Signs of Fourier coefficients of half-integral weight modular forms (Seminar, Number Theory) Steve Lester: Signs of Fourier coefficients of half-integral weight modular forms (Seminar, Number Theory)
Swedish Summer PDEs
Mon 2019-08-26, 09:30 - Wed 2019-08-28, 17:35
Location: F11
2019-08-26T09:30:00.000+02:00 2019-08-28T17:35:00.000+02:00 Swedish Summer PDEs (Conference) Swedish Summer PDEs (Conference)
Jørgen Rennemo: Fix-point loci in Hilbert schemes of points in the plane
Seminar, Algebra & Geometry
Wednesday 2019-08-28, 13:15 - 15:00
Lecturer: Jørgen Rennemo (Oslo)
Location: Room 3418, KTH
2019-08-28T13:15:00.000+02:00 2019-08-28T15:00:00.000+02:00 Jørgen Rennemo: Fix-point loci in Hilbert schemes of points in the plane (Seminar, Algebra & Geometry) Jørgen Rennemo: Fix-point loci in Hilbert schemes of points in the plane (Seminar, Algebra & Geometry)
Dynamic Optimization for Agent-Based Systems and Inverse Optimal Control
Licentiate seminars
Friday 2019-08-30, 10:00
Location: sal F11, Lindstedtsvägen 22, KTH, Stockholm
Doctoral student: Yibei Li , Mathematics
2019-08-30T10:00:00.000+02:00 2019-08-30T10:00:00.000+02:00 Dynamic Optimization for Agent-Based Systems and Inverse Optimal Control (Licentiate seminars) Dynamic Optimization for Agent-Based Systems and Inverse Optimal Control (Licentiate seminars)
PhD course: Quantum graphs
Tuesday 2019-09-03, 10:15 - 12:00
Lecturer: Pavel Kurasov
Location: sal 306, hus 6, Kräftriket
2019-09-03T10:15:00.000+02:00 2019-09-03T12:00:00.000+02:00 PhD course: Quantum graphs (Kursstart) PhD course: Quantum graphs (Kursstart)
Niko Naumann: The Balmer spectrum of a group
Topological Activities
Lecturer: Niko Naumann, Regensburg
Location: Room 306, SU
2019-09-03T15:15:00.000+02:00 2019-09-03T17:00:00.000+02:00 Niko Naumann: The Balmer spectrum of a group (Topological Activities) Niko Naumann: The Balmer spectrum of a group (Topological Activities)
Jonathan Leake: Log-concavity and entropic inequalities via stable polynomials
Seminar, Combinatorics
Lecturer: Jonathan Leake
Location: Room 3418, Lindstedtsvägen 25. Department of Mathematics, KTH
2019-09-04T10:15:00.000+02:00 2019-09-04T11:00:00.000+02:00 Jonathan Leake: Log-concavity and entropic inequalities via stable polynomials (Seminar, Combinatorics) Jonathan Leake: Log-concavity and entropic inequalities via stable polynomials (Seminar, Combinatorics)
Eric Ahlqvist: The étale cohomology ring of the ring of integers of a number field
Lecturer: Eric Ahlqvist, KTH
Location: SU, room 306
2019-09-04T11:00:00.000+02:00 2019-09-04T12:00:00.000+02:00 Eric Ahlqvist: The étale cohomology ring of the ring of integers of a number field (Seminar, Number Theory) Eric Ahlqvist: The étale cohomology ring of the ring of integers of a number field (Seminar, Number Theory)
Ulrik Enstad: Time-frequency analysis on the adeles
Seminar, Harmonic analysis, operator algebras and representation theory
Lecturer: Ulrik Enstad, Oslo
Location: Room 31, SU
2019-09-04T15:30:00.000+02:00 2019-09-04T16:30:00.000+02:00 Ulrik Enstad: Time-frequency analysis on the adeles (Seminar, Harmonic analysis, operator algebras and representation theory) Ulrik Enstad: Time-frequency analysis on the adeles (Seminar, Harmonic analysis, operator algebras and representation theory)
Robert Altmann: A semi-explicit discretization scheme for elliptic-parabolic problems
Seminar, Numerical analysis
Thursday 2019-09-05, 14:15 - 15:00
Lecturer: Robert Altmann, University of Augsburg
Location: Room F11, Lindstedtsvägen 22, våningsplan 2, F-huset, KTH Campus.
2019-09-05T14:15:00.000+02:00 2019-09-05T15:00:00.000+02:00 Robert Altmann: A semi-explicit discretization scheme for elliptic-parabolic problems (Seminar, Numerical analysis) Robert Altmann: A semi-explicit discretization scheme for elliptic-parabolic problems (Seminar, Numerical analysis)
Workshop to honour Timo Koski's 67th birthday
Friday 2019-09-06, 08:30 - 16:30
2019-09-06T08:30:00.000+02:00 2019-09-06T16:30:00.000+02:00 Workshop to honour Timo Koski's 67th birthday (Conference) Workshop to honour Timo Koski's 67th birthday (Conference)
Gernot Akemann: Spacing Distribution in the Ginibre ensembles: Universality and applications
Seminar, Random matrix theory
Tuesday 2019-09-10, 15:15
Lecturer: Gernot Akemann, Bielefeld University and KAW guest professor at KTH
Location: F11, KTH
2019-09-10T15:15:00.000+02:00 2019-09-10T15:15:00.000+02:00 Gernot Akemann: Spacing Distribution in the Ginibre ensembles: Universality and applications (Seminar, Random matrix theory) Gernot Akemann: Spacing Distribution in the Ginibre ensembles: Universality and applications (Seminar, Random matrix theory)
Douglas S. Bridges: Apartness on Lattices
Seminar, Logic
Lecturer: Douglas S. Bridges, University of Canterbury, Christchurch, New Zealand
Location: Kräftriket, Hus 5, Sal 16
2019-09-11T10:00:00.000+02:00 2019-09-11T11:45:00.000+02:00 Douglas S. Bridges: Apartness on Lattices (Seminar, Logic) Douglas S. Bridges: Apartness on Lattices (Seminar, Logic)
Ezgi Kantarcı: A Queer Crystal Structure on Shifted Tableaux
Lecturer: Ezgi Kantarcı
2019-09-11T10:15:00.000+02:00 2019-09-11T11:00:00.000+02:00 Ezgi Kantarcı: A Queer Crystal Structure on Shifted Tableaux (Seminar, Combinatorics) Ezgi Kantarcı: A Queer Crystal Structure on Shifted Tableaux (Seminar, Combinatorics)
Magnus Carlson: Arithmetic Field theories
Lecturer: Magnus Carlson, Hebrew University, Jerusalem
Location: SU, room 31 (NOTE special location)
2019-09-11T11:00:00.000+02:00 2019-09-11T12:00:00.000+02:00 Magnus Carlson: Arithmetic Field theories (Seminar, Number Theory) Magnus Carlson: Arithmetic Field theories (Seminar, Number Theory)
Lars Arvestad: A fast method for selecting phylogenetic replacement rate models
Seminar, Computational Mathematics
Lecturer: Lars Arvestad
Location: Room 306, SU, Kräftriket
2019-09-11T11:30:00.000+02:00 2019-09-11T11:30:00.000+02:00 Lars Arvestad: A fast method for selecting phylogenetic replacement rate models (Seminar, Computational Mathematics) Lars Arvestad: A fast method for selecting phylogenetic replacement rate models (Seminar, Computational Mathematics)
Mikael Passare Day
Lecturer: Nils Dencker, Håkan Hedenmalm, Andrzej Szulkin and Alan Sola
Location: sal 14, hus 5, Kräftriket
2019-09-11T13:00:00.000+02:00 2019-09-11T17:00:00.000+02:00 Mikael Passare Day (Conference) Mikael Passare Day (Conference)
Ian Hambleton: Manifolds and symmetry
Lecturer: Ian Hambleton (McMaster university)
2019-09-11T13:15:00.000+02:00 2019-09-11T14:15:00.000+02:00 Ian Hambleton: Manifolds and symmetry (Seminar, Algebra & Geometry) Ian Hambleton: Manifolds and symmetry (Seminar, Algebra & Geometry)
Volker Schlue: On the stability of the cosmological region of Schwarzschild de Sitter spacetimes
Seminar, Mittag-Leffler
Lecturer: Volker Schlue - University of Melbourne
Location: Seminar Hall Kuskvillan, Institut Mittag-Leffler
2019-09-12T10:00:00.000+02:00 2019-09-12T11:00:00.000+02:00 Volker Schlue: On the stability of the cosmological region of Schwarzschild de Sitter spacetimes (Seminar, Mittag-Leffler) Volker Schlue: On the stability of the cosmological region of Schwarzschild de Sitter spacetimes (Seminar, Mittag-Leffler)
Klaus Kröncke: Stability of ALE Ricci-flat manifolds under Ricci flow
Lecturer: Klaus Kröncke - Universität Hamburg
2019-09-12T11:00:00.000+02:00 2019-09-12T12:00:00.000+02:00 Klaus Kröncke: Stability of ALE Ricci-flat manifolds under Ricci flow (Seminar, Mittag-Leffler) Klaus Kröncke: Stability of ALE Ricci-flat manifolds under Ricci flow (Seminar, Mittag-Leffler)
PhD course: infinity-categories (organisational meeting)
Thursday 2019-09-12, 14:00
Lecturer: Peter LeFanu Lumsdaine
Location: Kräftriket Hus 6, Rum 306 (SU)
2019-09-12T14:00:00.000+02:00 2019-09-12T14:00:00.000+02:00 PhD course: infinity-categories (organisational meeting) (Kursstart) PhD course: infinity-categories (organisational meeting) (Kursstart)
Paul Jenkins: Asymptotic genealogies of interacting particle systems
Seminar, MathDataLab
Lecturer: Paul Jenkins, University of Warwick
Location: Room F11, Lindstedtsvägen 22
2019-09-12T15:15:00.000+02:00 2019-09-12T15:15:00.000+02:00 Paul Jenkins: Asymptotic genealogies of interacting particle systems (Seminar, MathDataLab) Paul Jenkins: Asymptotic genealogies of interacting particle systems (Seminar, MathDataLab)
Gianpiero Canessa: Static risk averse models and applications
Seminar, Optimization and systems theory
Lecturer: Gianpiero Canessa, Postdoctoral Researcher at KTH Royal Institute of Technology
2019-09-13T11:00:00.000+02:00 2019-09-13T12:00:00.000+02:00 Gianpiero Canessa: Static risk averse models and applications (Seminar, Optimization and systems theory) Gianpiero Canessa: Static risk averse models and applications (Seminar, Optimization and systems theory)
Scott Mason: Statistical mechanics and the Ising model
Seminar, Graduate student
Lecturer: Scott Mason
Location: Room 3418, Lindstedtsvägen 25, 4th floor, Department of Mathematics, KTH
2019-09-13T13:15:00.000+02:00 2019-09-13T14:00:00.000+02:00 Scott Mason: Statistical mechanics and the Ising model (Seminar, Graduate student) Scott Mason: Statistical mechanics and the Ising model (Seminar, Graduate student)
Sid Resnick: Exploring Dependence in Multivariate Heavy Tailed Data
Monday 2019-09-16, 15:15
Lecturer: Sid Resnick, Cornell University
2019-09-16T15:15:00.000+02:00 2019-09-16T15:15:00.000+02:00 Sid Resnick: Exploring Dependence in Multivariate Heavy Tailed Data (Seminar, MathDataLab) Sid Resnick: Exploring Dependence in Multivariate Heavy Tailed Data (Seminar, MathDataLab)
Lionel Mason: From null geodesic to gravitational scattering [An alternative route from BMS to soft theorems via ambitwistors and strings]
Lecturer: Lionel Mason, University of Oxford
2019-09-17T10:00:00.000+02:00 2019-09-17T11:00:00.000+02:00 Lionel Mason: From null geodesic to gravitational scattering [An alternative route from BMS to soft theorems via ambitwistors and strings] (Seminar, Mittag-Leffler) Lionel Mason: From null geodesic to gravitational scattering [An alternative route from BMS to soft theorems via ambitwistors and strings] (Seminar, Mittag-Leffler)
Paul Tod: Asymptotically $AdS_2\times S^2$ metrics satisfying the Null Energy Condition
Lecturer: Paul Tod, University of Oxford
2019-09-17T11:00:00.000+02:00 2019-09-17T12:00:00.000+02:00 Paul Tod: Asymptotically $AdS_2\times S^2$ metrics satisfying the Null Energy Condition (Seminar, Mittag-Leffler) Paul Tod: Asymptotically $AdS_2\times S^2$ metrics satisfying the Null Energy Condition (Seminar, Mittag-Leffler)
Barbara Jaworski: Developing the teaching of undergraduate mathematics through research
Lecturer: Barbara Jaworski
Location: Room 31, House 5, Kräftriket.
2019-09-17T11:30:00.000+02:00 2019-09-17T13:00:00.000+02:00 Barbara Jaworski: Developing the teaching of undergraduate mathematics through research (Seminar, Education) Barbara Jaworski: Developing the teaching of undergraduate mathematics through research (Seminar, Education)
Benjamin Fahs: Uniform asymptotics of Toeplitz determinants with Fisher-Hartwig singularities
Lecturer: Benjamin Fahs, Imperial College London
2019-09-17T15:15:00.000+02:00 2019-09-17T16:15:00.000+02:00 Benjamin Fahs: Uniform asymptotics of Toeplitz determinants with Fisher-Hartwig singularities (Seminar, Random matrix theory) Benjamin Fahs: Uniform asymptotics of Toeplitz determinants with Fisher-Hartwig singularities (Seminar, Random matrix theory)
Richard Schoen: New perspectives on scalar curvature
SMC Colloquium
Lecturer: Richard Schoen, UC Irvine
Location: FR4, Oskar Klein, AlbaNova
2019-09-18T15:15:00.000+02:00 2019-09-18T17:00:00.000+02:00 Richard Schoen: New perspectives on scalar curvature (SMC Colloquium) Richard Schoen: New perspectives on scalar curvature (SMC Colloquium)
Richard Schoen: Scalar curvature and minimal hypersurface singularities
Lecturer: Richard Schoen, University of California, Irvine
2019-09-19T10:00:00.000+02:00 2019-09-19T11:00:00.000+02:00 Richard Schoen: Scalar curvature and minimal hypersurface singularities (Seminar, Mittag-Leffler) Richard Schoen: Scalar curvature and minimal hypersurface singularities (Seminar, Mittag-Leffler)
Ian Hambleton: Finite Group Actions and Chain Complexes over the Orbit Category
Lecturer: Ian Hambleton, McMaster University
Location: Room 33, Building 5, Kräftriket, SU
2019-09-19T10:15:00.000+02:00 2019-09-19T12:00:00.000+02:00 Ian Hambleton: Finite Group Actions and Chain Complexes over the Orbit Category (Topological Activities) Ian Hambleton: Finite Group Actions and Chain Complexes over the Orbit Category (Topological Activities)
Alessandro Carlotto: Constrained deformations of positive scalar curvature metrics
Lecturer: Alessandro Carlotto, ETH Zürich
2019-09-19T11:00:00.000+02:00 2019-09-19T12:00:00.000+02:00 Alessandro Carlotto: Constrained deformations of positive scalar curvature metrics (Seminar, Mittag-Leffler) Alessandro Carlotto: Constrained deformations of positive scalar curvature metrics (Seminar, Mittag-Leffler)
Topics in Workforce management, in a contact center context.
Lecturer: Göran Svensson
2019-09-20T11:00:00.000+02:00 2019-09-20T12:00:00.000+02:00 Topics in Workforce management, in a contact center context. (Seminar, Optimization and systems theory) Topics in Workforce management, in a contact center context. (Seminar, Optimization and systems theory)
Eleftherios Theodosiadis: The Loewner equation
Lecturer: Eleftherios Theodosiadis
Location: rum 306, hus 6, Kräftriket
2019-09-20T13:00:00.000+02:00 2019-09-20T14:00:00.000+02:00 Eleftherios Theodosiadis: The Loewner equation (Seminar, Graduate student) Eleftherios Theodosiadis: The Loewner equation (Seminar, Graduate student)
Philippe G. LeFloch: On the global nonlinear stability of self-gravitating matter
Lecturer: Philippe G. LeFloch, Sorbonne University
2019-09-24T10:00:00.000+02:00 2019-09-24T11:00:00.000+02:00 Philippe G. LeFloch: On the global nonlinear stability of self-gravitating matter (Seminar, Mittag-Leffler) Philippe G. LeFloch: On the global nonlinear stability of self-gravitating matter (Seminar, Mittag-Leffler)
Igor Khavkine: Conformal Killing Initial Data
Lecturer: Igor Khavkine, Czech Academy of Sciences
2019-09-24T11:00:00.000+02:00 2019-09-24T12:00:00.000+02:00 Igor Khavkine: Conformal Killing Initial Data (Seminar, Mittag-Leffler) Igor Khavkine: Conformal Killing Initial Data (Seminar, Mittag-Leffler)
Oleksiy Klurman: Multiplicative functions in short arithmetic progressions and applications
Lecturer: Oleksiy Klurman, MPIM Bonn
Location: F11, KTH math department
2019-09-25T11:00:00.000+02:00 2019-09-25T11:00:00.000+02:00 Oleksiy Klurman: Multiplicative functions in short arithmetic progressions and applications (Seminar, Number Theory) Oleksiy Klurman: Multiplicative functions in short arithmetic progressions and applications (Seminar, Number Theory)
Siyuan Ma: Linear stability for the Kerr spacetime
Lecturer: Siyuan Ma, Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
2019-09-26T10:00:00.000+02:00 2019-09-26T11:00:00.000+02:00 Siyuan Ma: Linear stability for the Kerr spacetime (Seminar, Mittag-Leffler) Siyuan Ma: Linear stability for the Kerr spacetime (Seminar, Mittag-Leffler)
Stefano Borghini: Static vacuum spacetimes with positive cosmological constant
Lecturer: Stefano Borghini, Uppsala University
2019-09-26T11:00:00.000+02:00 2019-09-26T12:00:00.000+02:00 Stefano Borghini: Static vacuum spacetimes with positive cosmological constant (Seminar, Mittag-Leffler) Stefano Borghini: Static vacuum spacetimes with positive cosmological constant (Seminar, Mittag-Leffler)
Nils Hemmingsson: On the dynamics of a family of critical circle endomorphisms
Degree project
Location: Room 3418, Lindstedtsvägen 25
2019-09-26T11:00:00.000+02:00 2019-09-26T12:00:00.000+02:00 Nils Hemmingsson: On the dynamics of a family of critical circle endomorphisms (Degree project) Nils Hemmingsson: On the dynamics of a family of critical circle endomorphisms (Degree project)
Alvin Jin: Topological data analysis and the pursuit-evasion problem
Lecturer: Alvin Jin
2019-09-27T13:00:00.000+02:00 2019-09-27T14:00:00.000+02:00 Alvin Jin: Topological data analysis and the pursuit-evasion problem (Seminar, Graduate student) Alvin Jin: Topological data analysis and the pursuit-evasion problem (Seminar, Graduate student)
Topics in Work Force Management in a Contact Center Context
Applied and Computational Mathematics - Optimization and Systems Theory
Location: sal F3, Lindstedtsvägen 26, KTH, Stockholm
Doctoral student: Göran Svensson , Mathematics
2019-09-27T14:00:00.000+02:00 2019-09-27T14:00:00.000+02:00 Topics in Work Force Management in a Contact Center Context (Dissertations) Topics in Work Force Management in a Contact Center Context (Dissertations)
Topics in Workforce Management in a Contact Center Context
Location: F3, Lindstedtsvägen 26, Stockholm (English)
Doctoral student: Göran Svensson , Optimeringslära och systemteori
2019-09-27T14:00:00.000+02:00 2019-09-27T14:00:00.000+02:00 Topics in Workforce Management in a Contact Center Context (Dissertations) Topics in Workforce Management in a Contact Center Context (Dissertations)
Nam-Gyu Kang: Conformal field theory for multiple SLEs.
Seminar, KTH Analysis
Monday 2019-09-30, 13:15 - 14:15
Lecturer: Nam-Gyu Kang, KIAS, Korea
2019-09-30T13:15:00.000+02:00 2019-09-30T14:15:00.000+02:00 Nam-Gyu Kang: Conformal field theory for multiple SLEs. (Seminar, KTH Analysis) Nam-Gyu Kang: Conformal field theory for multiple SLEs. (Seminar, KTH Analysis)
Jories Bierkens: Piecewise deterministic Monte Carlo
Seminar, Mathematical statistics
Lecturer: Jories Bierkens, VU Amsterdam
2019-09-30T15:15:00.000+02:00 2019-09-30T16:15:00.000+02:00 Jories Bierkens: Piecewise deterministic Monte Carlo (Seminar, Mathematical statistics) Jories Bierkens: Piecewise deterministic Monte Carlo (Seminar, Mathematical statistics)
Benedikt Ahrens: Initial semantics for lambda calculi
Lecturer: Benedikt Ahrens, University of Birmingham
2019-10-01T10:00:00.000+02:00 2019-10-01T12:00:00.000+02:00 Benedikt Ahrens: Initial semantics for lambda calculi (Seminar, Logic) Benedikt Ahrens: Initial semantics for lambda calculi (Seminar, Logic)
Gerhard Rein: Can highly relativistic, self-gravitating matter distributions be stable?
Lecturer: Gerhard Rein, Universität Bayreuth
2019-10-01T10:00:00.000+02:00 2019-10-01T11:00:00.000+02:00 Gerhard Rein: Can highly relativistic, self-gravitating matter distributions be stable? (Seminar, Mittag-Leffler) Gerhard Rein: Can highly relativistic, self-gravitating matter distributions be stable? (Seminar, Mittag-Leffler)
PhD reading course: seminar on the Kervaire invariant one problem
Lecturer: Gregory Arone
Location: Sal 16, hus 6 , Kräftriket
2019-10-01T10:15:00.000+02:00 2019-10-01T12:00:00.000+02:00 PhD reading course: seminar on the Kervaire invariant one problem (Kursstart) PhD reading course: seminar on the Kervaire invariant one problem (Kursstart)
Bernardo Araneda: Twistor theory and the Teukolsky equations
Lecturer: Bernardo Araneda, Universidad Nacional de Cordoba
2019-10-01T11:00:00.000+02:00 2019-10-01T12:00:00.000+02:00 Bernardo Araneda: Twistor theory and the Teukolsky equations (Seminar, Mittag-Leffler) Bernardo Araneda: Twistor theory and the Teukolsky equations (Seminar, Mittag-Leffler)
Alexander Tovbis: Focusing Nonlinear Schroedenger Equation (fNLS): from small dispersion limit to soliton/breather gases
Lecturer: Alexander Tovbis
2019-10-01T15:15:00.000+02:00 2019-10-01T16:15:00.000+02:00 Alexander Tovbis: Focusing Nonlinear Schroedenger Equation (fNLS): from small dispersion limit to soliton/breather gases (Seminar, Random matrix theory) Alexander Tovbis: Focusing Nonlinear Schroedenger Equation (fNLS): from small dispersion limit to soliton/breather gases (Seminar, Random matrix theory)
Roy Skjelnes: The space of Twisted cubics
Lecturer: Roy Skjelnes (KTH)
Location: Room 306, House 6, Kräftriket, Department of Mathematics, Stockholm University
2019-10-02T13:15:00.000+02:00 2019-10-02T14:15:00.000+02:00 Roy Skjelnes: The space of Twisted cubics (Seminar, Algebra & Geometry) Roy Skjelnes: The space of Twisted cubics (Seminar, Algebra & Geometry)
Paige North: Two-sided weak factorization systems
Lecturer: Paige North, Ohio State University
2019-10-02T14:30:00.000+02:00 2019-10-02T16:30:00.000+02:00 Paige North: Two-sided weak factorization systems (Seminar, Logic) Paige North: Two-sided weak factorization systems (Seminar, Logic)
Martin Taylor: The nonlinear stability of the Schwarzschild family of black holes
Lecturer: Martin Taylor, Imperial College London
2019-10-03T10:00:00.000+02:00 2019-10-03T11:00:00.000+02:00 Martin Taylor: The nonlinear stability of the Schwarzschild family of black holes (Seminar, Mittag-Leffler) Martin Taylor: The nonlinear stability of the Schwarzschild family of black holes (Seminar, Mittag-Leffler)
Thomas Bäckdahl: Spinor techniques for black hole stability
Lecturer: Thomas Bäckdahl, Chalmers/University of Gothenburg
2019-10-03T11:00:00.000+02:00 2019-10-03T12:00:00.000+02:00 Thomas Bäckdahl: Spinor techniques for black hole stability (Seminar, Mittag-Leffler) Thomas Bäckdahl: Spinor techniques for black hole stability (Seminar, Mittag-Leffler)
Ivan Parra: Planar Orthogonal Polynomials on Ellipses in the Complex Plane
Lecturer: Ivan Parra, Bielefeld University
2019-10-03T15:15:00.000+02:00 2019-10-03T16:15:00.000+02:00 Ivan Parra: Planar Orthogonal Polynomials on Ellipses in the Complex Plane (Seminar, Random matrix theory) Ivan Parra: Planar Orthogonal Polynomials on Ellipses in the Complex Plane (Seminar, Random matrix theory)
Markus Ebke: Skew-Orthogonal Polynomials for Quaternion Non-Hermitian Random Matrices
Lecturer: Markus Ebke (Bielefeld University)
2019-10-03T16:15:00.000+02:00 2019-10-03T17:00:00.000+02:00 Markus Ebke: Skew-Orthogonal Polynomials for Quaternion Non-Hermitian Random Matrices (Seminar, Random matrix theory) Markus Ebke: Skew-Orthogonal Polynomials for Quaternion Non-Hermitian Random Matrices (Seminar, Random matrix theory)
Shen Peng: Chance constrained problem and some applications.
Lecturer: Shen Peng
2019-10-04T11:00:00.000+02:00 2019-10-04T12:00:00.000+02:00 Shen Peng: Chance constrained problem and some applications. (Seminar, Optimization and systems theory) Shen Peng: Chance constrained problem and some applications. (Seminar, Optimization and systems theory)
Johann Selewa: Stanley-Reisner rings of abstract simplicial complexes
Lecturer: Johann Selewa
2019-10-04T13:00:00.000+02:00 2019-10-04T14:00:00.000+02:00 Johann Selewa: Stanley-Reisner rings of abstract simplicial complexes (Seminar, Graduate student) Johann Selewa: Stanley-Reisner rings of abstract simplicial complexes (Seminar, Graduate student)
Patrick Rebeschini: Implicit Regularization for Optimal Sparse Recovery
Lecturer: Patrick Rebeschini, Oxford
2019-10-07T15:15:00.000+02:00 2019-10-07T16:15:00.000+02:00 Patrick Rebeschini: Implicit Regularization for Optimal Sparse Recovery (Seminar, Mathematical statistics) Patrick Rebeschini: Implicit Regularization for Optimal Sparse Recovery (Seminar, Mathematical statistics)
Marc Mars: Existence and uniqueness of rigidly rotating stars in second order perturbation theory
Lecturer: Marc Mars, University of Salamanca
2019-10-08T10:00:00.000+02:00 2019-10-08T11:00:00.000+02:00 Marc Mars: Existence and uniqueness of rigidly rotating stars in second order perturbation theory (Seminar, Mittag-Leffler) Marc Mars: Existence and uniqueness of rigidly rotating stars in second order perturbation theory (Seminar, Mittag-Leffler)
Po-Ning Chen: Quasi-local mass and Penrose inequality
Lecturer: Po-Ning Chen, University of California, Riverside
2019-10-08T11:00:00.000+02:00 2019-10-08T12:00:00.000+02:00 Po-Ning Chen: Quasi-local mass and Penrose inequality (Seminar, Mittag-Leffler) Po-Ning Chen: Quasi-local mass and Penrose inequality (Seminar, Mittag-Leffler)
Sigrid Källblad Nordin: Mathematical Finance and Measure-valued Martingales
Lecturer: Sigrid Källblad Nordin, KTH
2019-10-08T15:15:00.000+02:00 2019-10-08T16:15:00.000+02:00 Sigrid Källblad Nordin: Mathematical Finance and Measure-valued Martingales (Seminar, Random matrix theory) Sigrid Källblad Nordin: Mathematical Finance and Measure-valued Martingales (Seminar, Random matrix theory)
Katharina Jochemko: Generalized permutahedra: Minkowski linear functionals and Ehrhart positivity
Lecturer: Katharina Jochemko
2019-10-09T10:15:00.000+02:00 2019-10-09T11:00:00.000+02:00 Katharina Jochemko: Generalized permutahedra: Minkowski linear functionals and Ehrhart positivity (Seminar, Combinatorics) Katharina Jochemko: Generalized permutahedra: Minkowski linear functionals and Ehrhart positivity (Seminar, Combinatorics)
Josefin Ahlkrona: Finite Element Methods for Ice Sheet Modelling
Lecturer: Josefin Ahlkrona, Stockholms universitet
Location: 306, kräftriket
2019-10-09T11:45:00.000+02:00 2019-10-09T12:30:00.000+02:00 Josefin Ahlkrona: Finite Element Methods for Ice Sheet Modelling (Seminar, Computational Mathematics) Josefin Ahlkrona: Finite Element Methods for Ice Sheet Modelling (Seminar, Computational Mathematics)
Oliver Leigh: The Moduli Space of Stable Maps with Divisible Ramification
Lecturer: Oliver Leigh, Stockholm University
2019-10-09T13:15:00.000+02:00 2019-10-09T13:15:00.000+02:00 Oliver Leigh: The Moduli Space of Stable Maps with Divisible Ramification (Seminar, Algebra & Geometry) Oliver Leigh: The Moduli Space of Stable Maps with Divisible Ramification (Seminar, Algebra & Geometry)
Mathias Millberg Lindholm: How to ask sensitive multiple choice questions
Lecturer: Mathias Millberg Lindholm, Stockholm University
Location: Cramér room, room 306, house 6 at Kräftriket,
2019-10-09T15:15:00.000+02:00 2019-10-09T16:15:00.000+02:00 Mathias Millberg Lindholm: How to ask sensitive multiple choice questions (Seminar, Mathematical statistics) Mathias Millberg Lindholm: How to ask sensitive multiple choice questions (Seminar, Mathematical statistics)
Steffen Aksteiner: All local gauge invariants for black hole perturbation theory
Lecturer: Steffen Aksteiner, Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
2019-10-10T10:00:00.000+02:00 2019-10-10T11:00:00.000+02:00 Steffen Aksteiner: All local gauge invariants for black hole perturbation theory (Seminar, Mittag-Leffler) Steffen Aksteiner: All local gauge invariants for black hole perturbation theory (Seminar, Mittag-Leffler)
Models for Additive and Sufficient Cause Interaction
Location: F11, Lindstedtsvägen 22, KTH Stockholm, (English)
Doctoral student: Daniel Berglund , Matematisk statistik
2019-10-10T10:00:00.000+02:00 2019-10-10T10:00:00.000+02:00 Models for Additive and Sufficient Cause Interaction (Licentiate seminars) Models for Additive and Sufficient Cause Interaction (Licentiate seminars)
Jacques Smulevici: On the initial value problem for the Einstein equations in the maximal gauge
Lecturer: Jacques Smulevici, Sorbonne University
2019-10-10T11:00:00.000+02:00 2019-10-10T12:00:00.000+02:00 Jacques Smulevici: On the initial value problem for the Einstein equations in the maximal gauge (Seminar, Mittag-Leffler) Jacques Smulevici: On the initial value problem for the Einstein equations in the maximal gauge (Seminar, Mittag-Leffler)
Bin Zhu: An Empirical Bayes Approach to Frequency Estimation
Lecturer: Bin Zhu, postdoc from University of Padova
2019-10-11T11:00:00.000+02:00 2019-10-11T12:00:00.000+02:00 Bin Zhu: An Empirical Bayes Approach to Frequency Estimation (Seminar, Optimization and systems theory) Bin Zhu: An Empirical Bayes Approach to Frequency Estimation (Seminar, Optimization and systems theory)
Francesca Tombari: Vietoris-Rips complexes in TDA and their decompositions
Lecturer: Francesca Tombari
2019-10-11T13:00:00.000+02:00 2019-10-11T14:00:00.000+02:00 Francesca Tombari: Vietoris-Rips complexes in TDA and their decompositions (Seminar, Graduate student) Francesca Tombari: Vietoris-Rips complexes in TDA and their decompositions (Seminar, Graduate student)
Richard Davis: The Use of Shape Constraints for Modeling Time Series of Counts
Lecturer: Richard Davis, Columbia University
2019-10-14T15:15:00.000+02:00 2019-10-14T16:15:00.000+02:00 Richard Davis: The Use of Shape Constraints for Modeling Time Series of Counts (Seminar, MathDataLab) Richard Davis: The Use of Shape Constraints for Modeling Time Series of Counts (Seminar, MathDataLab)
Boris Shapiro: On algebraic dependencies among harmonic and anti-harmonic moments of plane polygons
Seminar, Commutative Algebra
Lecturer: Boris Shapiro, SU
Location: room 35, bldn 5, Kräftriket
2019-10-14T15:30:00.000+02:00 2019-10-14T16:30:00.000+02:00 Boris Shapiro: On algebraic dependencies among harmonic and anti-harmonic moments of plane polygons (Seminar, Commutative Algebra) Boris Shapiro: On algebraic dependencies among harmonic and anti-harmonic moments of plane polygons (Seminar, Commutative Algebra)
Jose Senovilla: Characterizing the existence of gravitational radiation at null infinity in asymptotically de Sitter (and flat) spacetimes
Lecturer: Jose Senovilla, University of the Basque Country
2019-10-15T10:00:00.000+02:00 2019-10-15T11:00:00.000+02:00 Jose Senovilla: Characterizing the existence of gravitational radiation at null infinity in asymptotically de Sitter (and flat) spacetimes (Seminar, Mittag-Leffler) Jose Senovilla: Characterizing the existence of gravitational radiation at null infinity in asymptotically de Sitter (and flat) spacetimes (Seminar, Mittag-Leffler)
Eric Ling: Spacetime Extensions of the Big Bang
Lecturer: Eric Ling, KTH Royal Institute of Technology
2019-10-15T11:00:00.000+02:00 2019-10-15T12:00:00.000+02:00 Eric Ling: Spacetime Extensions of the Big Bang (Seminar, Mittag-Leffler) Eric Ling: Spacetime Extensions of the Big Bang (Seminar, Mittag-Leffler)
Patrik Ferrari: Time-time covariance for last passage percolation with generic initial profile
Lecturer: Patrik Ferrari
2019-10-15T15:15:00.000+02:00 2019-10-15T16:15:00.000+02:00 Patrik Ferrari: Time-time covariance for last passage percolation with generic initial profile (Seminar, Random matrix theory) Patrik Ferrari: Time-time covariance for last passage percolation with generic initial profile (Seminar, Random matrix theory)
Michelle Wachs: On the homogenized Linial arrangement and Genocchi numbers
Lecturer: Michelle Wachs
2019-10-16T10:15:00.000+02:00 2019-10-16T11:00:00.000+02:00 Michelle Wachs: On the homogenized Linial arrangement and Genocchi numbers (Seminar, Combinatorics) Michelle Wachs: On the homogenized Linial arrangement and Genocchi numbers (Seminar, Combinatorics)
Søren Galatius: Periodicity and stability in mapping class groups and other E_2 algebras
Lecturer: Søren Galatius, København
Location: Sal 306, hus 6, Kräftriket, SU
2019-10-16T13:15:00.000+02:00 2019-10-16T15:00:00.000+02:00 Søren Galatius: Periodicity and stability in mapping class groups and other E_2 algebras (Seminar, Algebra & Geometry) Søren Galatius: Periodicity and stability in mapping class groups and other E_2 algebras (Seminar, Algebra & Geometry)
Alan Sola: One dimensional scaling limits in a Laplacian random growth model
Seminar, Analysis Stockholm
Lecturer: Alan Sola, Stockholms universitet
2019-10-16T13:15:00.000+02:00 2019-10-16T14:15:00.000+02:00 Alan Sola: One dimensional scaling limits in a Laplacian random growth model (Seminar, Analysis Stockholm) Alan Sola: One dimensional scaling limits in a Laplacian random growth model (Seminar, Analysis Stockholm)
Filip Lindskog: Estimation of conditional mean squared error of prediction
Lecturer: Filip Lindskog, SU
Location: Cramér room, room 306, house 6 at Kräftriket
2019-10-16T15:15:00.000+02:00 2019-10-16T15:15:00.000+02:00 Filip Lindskog: Estimation of conditional mean squared error of prediction (Seminar, Mathematical statistics) Filip Lindskog: Estimation of conditional mean squared error of prediction (Seminar, Mathematical statistics)
Marc Herzlich: "Universal'' positive mass theorems
Lecturer: Marc Herzlich, Université de Montpellier
2019-10-17T10:00:00.000+02:00 2019-10-17T11:00:00.000+02:00 Marc Herzlich: "Universal'' positive mass theorems (Seminar, Mittag-Leffler) Marc Herzlich: "Universal'' positive mass theorems (Seminar, Mittag-Leffler)
Greg Galloway: Existence of CMC Cauchy surfaces and spacetime splitting
Lecturer: Greg Galloway, University of Miami
2019-10-17T11:00:00.000+02:00 2019-10-17T12:00:00.000+02:00 Greg Galloway: Existence of CMC Cauchy surfaces and spacetime splitting (Seminar, Mittag-Leffler) Greg Galloway: Existence of CMC Cauchy surfaces and spacetime splitting (Seminar, Mittag-Leffler)
Thorsten Neuschel: Critical behavior of non-intersecting Brownian motions
Lecturer: Thorsten Neuschel
2019-10-17T15:15:00.000+02:00 2019-10-17T16:15:00.000+02:00 Thorsten Neuschel: Critical behavior of non-intersecting Brownian motions (Seminar, Random matrix theory) Thorsten Neuschel: Critical behavior of non-intersecting Brownian motions (Seminar, Random matrix theory)
Stefan Reppen: A brief introduction to the Hodge, Tate and Mumford-Tate conjectures
Lecturer: Stefan Reppen, SU
2019-10-18T13:00:00.000+02:00 2019-10-18T14:00:00.000+02:00 Stefan Reppen: A brief introduction to the Hodge, Tate and Mumford-Tate conjectures (Seminar, Graduate student) Stefan Reppen: A brief introduction to the Hodge, Tate and Mumford-Tate conjectures (Seminar, Graduate student)
Cecile Huneau: High frequency limit for Einstein equations with a U(1) symmetry
Lecturer: Cecile Huneau, École Polytechnique
2019-10-22T10:00:00.000+02:00 2019-10-22T11:00:00.000+02:00 Cecile Huneau: High frequency limit for Einstein equations with a U(1) symmetry (Seminar, Mittag-Leffler) Cecile Huneau: High frequency limit for Einstein equations with a U(1) symmetry (Seminar, Mittag-Leffler)
Tom Dutilleul: Chaotic dynamics of spatially homogeneous spacetimes
Lecturer: Tom Dutilleul, Université Paris 13
2019-10-22T11:00:00.000+02:00 2019-10-22T12:00:00.000+02:00 Tom Dutilleul: Chaotic dynamics of spatially homogeneous spacetimes (Seminar, Mittag-Leffler) Tom Dutilleul: Chaotic dynamics of spatially homogeneous spacetimes (Seminar, Mittag-Leffler)
Martijn den Besten: Coherence for bicategories and bigroupoids
Lecturer: Martijn den Besten, University of Amsterdam
Location: Kräftriket, Hus 5, Sal 33 (note non-standard location!)
2019-10-23T10:00:00.000+02:00 2019-10-23T11:45:00.000+02:00 Martijn den Besten: Coherence for bicategories and bigroupoids (Seminar, Logic) Martijn den Besten: Coherence for bicategories and bigroupoids (Seminar, Logic)
Zhaojun Bai: Rayleigh quotient optimizations and eigenvalue problems
Lecturer: Zhaojun Bai, University of California, Davis
2019-10-23, 11:45 - 12:30 (Seminar, Computational Mathematics)
Eric Opdam: Harmonic analysis and the local Langlands parameterization
Lecturer: Eric Opdam, Universiteit van Amsterdam
Location: FR4 Oskar Klein, AlbaNova
2019-10-23, 15:15 - 17:00 (SMC Colloquium)
Jeremie Szeftel: The nonlinear stability of Schwarzschild
Lecturer: Jeremie Szeftel, Sorbonne University
2019-10-24, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Boris Shapiro: On topology of the space of real univariate polynomials with constrained real divisors
Location: house 6, room 306, SU
2019-10-24, 10:15 - 12:00 (Topological Activities)
Jan Metzger: Variational Problems related to the Hawking mass
Lecturer: Jan Metzger, Universität Potsdam
2019-10-24, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Samuel Lundqvist: Some open problems related to the Lefschetz properties
Lecturer: Samuel Lundqvist
Location: house 5, room 35, Kräftriket
2019-10-28, 15:30 - 16:30 (Seminar, Commutative Algebra)
Roland Donninger: Strichartz estimates for the one-dimensional wave equation
Lecturer: Roland Donninger, University of Vienna
2019-10-29, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Anna Sakovich: On the spacetime intrinsic flat convergence
Lecturer: Anna Sakovich, Uppsala University
2019-10-29, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Vali Pour Arash: Beskrivningslogiken ALC (The description logic ALC)
Lecturer: Vali Pour Arash
2019-10-29, 13:30 - 14:30 (Degree project)
Yuanyuan Xu: Central limit theorem for mesoscopic eigenvalue statistics of deformed Wigner matrix
Lecturer: Yuanyuan Xu, KTH
2019-10-29, 15:15 - 16:15 (Seminar, Random matrix theory)
Peter LeFanu Lumsdaine: Essentially algebraic theories and Gabriel–Ulmer duality, part 2
Lecturer: Peter LeFanu Lumsdaine, SU
Location: Kräftriket, house 5, room 16 (back to the usual room)
2019-10-30, 10:00 - 11:45 (Seminar, Logic)
Christopher Frei: Quantitative results about norms in abelian extensions
Lecturer: Christopher Frei, University of Manchester
Location: 306, SU
2019-10-30, 10:00 - 11:00 (Seminar, Number Theory)
Dan Petersen: Factorization statistics and bug-eyed configuration spaces
Lecturer: Dan Petersen
2019-10-30, 10:15 (Seminar, Combinatorics)
Arno Kret: Construction of Galois representations for GSO_2n
Lecturer: Arno Kret, Amsterdam
Location: SU 306
2019-10-30, 11:00 - 12:00 (Seminar, Number Theory)
Andrea Di Lorenzo: The Chow ring of the stack of stable curves of genus 2
Lecturer: Andrea Di Lorenzo, Aarhus
2019-10-30, 13:15 (Seminar, Algebra & Geometry)
Peter Olofsson: Muller's Ratchet in Populations Doomed to Extinction
Lecturer: Peter Olofsson, Jönköping University
2019-10-30, 15:15 (Seminar, Mathematical statistics)
Carla Cederbaum: On special hypersurfaces of the Schwarzschild spacetime and related uniqueness theorems
Lecturer: Carla Cederbaum, Universität Tübingen
2019-10-31, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Wojciech Chachólski: What is persistence?
Lecturer: Wojciech Chachólski, KTH
2019-10-31, 10:20 - 12:00 (Topological Activities)
Todd Oliynyk: The Fuchsian approach to global existence for hyperbolic equations
Lecturer: Todd Oliynyk, Monash University
2019-10-31, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Data-driven Methods in Inverse Problems
Location: F3, Lindstedtsvägen 26, KTH, Stockholm (English)
Doctoral student: Jonas Adler , Matematik (Avd.)
2019-10-31, 14:00 (Dissertations)
Louis Hainaut: The Dold-Kan correspondence
Lecturer: Louis Hainaut
2019-11-01, 13:00 - 14:00 (Seminar, Graduate student)
Andras Vasy: Outgoing Fredholm theory and the limiting absorption principle for asymptotically conic spaces
Lecturer: Andras Vasy, Stanford University
2019-11-05, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Dejan Gajic: Quasinormal modes on asymptotically flat black holes
Lecturer: Dejan Gajic, King's College, Cambridge
2019-11-05, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Inna Zakharevich: The Dehn complex - scissors congruence, K-theory, and regulators
Lecturer: Inna Zakharevich, Cornell
2019-11-05, 13:15 - 15:00 (Topological Activities)
Tim Wuerfel: Expectation values of characteristic polynomials in polynomial ensembles
Lecturer: Tim Wuerfel, Universität Bielefeld
Location: KTH, Room F11
2019-11-05, 15:15 - 16:15 (Seminar, Random matrix theory)
Benno van den Berg: Uniform Kan fibrations in simplicial sets
Lecturer: Benno van den Berg, University of Amsterdam
Location: Kräftriket, house 5, room 16
2019-11-06, 10:00 - 12:00 (Seminar, Logic)
Kathlén Kohn: The adjoint polynomial of a polytope
2019-11-06, 10:15 - 11:00 (Seminar, Combinatorics)
Jeroen Sijsling: Endomorphisms and decompositions of Jacobians
Lecturer: Jeroen Sijsling, Ulm
2019-11-06, 11:00 - 12:00 (Seminar, Number Theory)
Woosok Moon: A balanced state consistent with planetary-scale motion for quasi-geostrophic dynamics
Lecturer: Woosok Moon
Location: Kräftriket, House 6, Room 306
2019-11-06, 11:45 - 12:30 (Seminar, Computational Mathematics)
Orlando Marigliano: Discrete Statistical Models with Rational Maximum Likelihood Estimator
Lecturer: Orlando Marigliano, Max Planck Institute for Mathematics in the Sciences (Leipzig)
Location: KTH 3418
2019-11-06, 13:15 (Seminar, Algebra & Geometry)
Olof Sisask: On the L^p-norms of convolutions
Lecturer: Olof Sisask, Stockholms universitet
2019-11-06, 13:15 - 14:14 (Seminar, Analysis Stockholm)
Malin Palö Forsström: Color representations of Ising models
Lecturer: Malin Palö Forsström, KTH
2019-11-06, 15:15 (Seminar, Mathematical statistics)
Colin Zwanziger: Towards CwF semantics for modal dependent type theory
Lecturer: Colin Zwanziger, Carnegie Mellon University
Location: Kräftriket, house 6, room 306 (Cramér room)
2019-11-07, 10:00 - 12:00 (Seminar, Logic)
David Fajman: Stability of the Milne model with matter
Lecturer: David Fajman, University of Vienna
2019-11-07, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Marcus Khuri: Geometric Inequalities for Quasi-Local Masses
Lecturer: Marcus Khuri, Stony Brook University
2019-11-07, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Samu Potka: Cyclic sieving
Lecturer: Samu Potka
2019-11-08, 13:15 - 14:00 (Seminar, Graduate student)
Celia García-Pareja: Exact simulation of coupled Wright-Fisher diffusions
Lecturer: Celia García-Pareja
2019-11-11, 15:15 - 16:15 (Seminar, Mathematical statistics)
Olivier Biquard: Renormalized volume for ALE Ricci-flat 4-manifolds
Lecturer: Olivier Biquard, École normale supérieure de Paris
2019-11-12, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Jacek Jezierski: Geometry of null hypersurfaces
Lecturer: Jacek Jezierski, University of Warsaw
2019-11-12, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Kurt Johansson: On the rough-smooth interface in the two-periodic Aztec diamond
Lecturer: Kurt Johansson, KTH
Location: KTH F11
2019-11-12, 15:15 - 16:15 (Seminar, Random matrix theory)
Anja Petković: Andromeda 2.0
Lecturer: Anja Petković, University of Ljubljana
2019-11-13, 10:00 - 11:45 (Seminar, Logic)
Asaf Horev: Geometric representation theory and factorization homology on surfaces
Lecturer: Asaf Horev, Stockholms universitet
2019-11-13, 11:00 - 12:00 (Seminar, Number Theory)
Marcel Rubió: Structure theorems for the cohomology jump loci of singularities
Lecturer: Marcel Rubió, Stockholms universitet
2019-11-13, 13:15 - 15:00 (Seminar, Algebra & Geometry)
Akseli Haaralas: On the electrostatic Born-Infeld equations and the Lorentz mean curvature operator
Lecturer: Akseli Haaralas, Helsingin yliopisto (University of Helsinki)
2019-11-13, 13:15 - 14:14 (Seminar, Analysis Stockholm)
Alan Sola: Scaling limits in conformal Laplacian random growth models
2019-11-13, 15:15 (Seminar, Mathematical statistics)
Yash Lodha: Finitely generated infinite simple groups of homeomorphisms of the real line
Lecturer: Yash Lodha, École polytechnique fédérale de Lausanne
2019-11-13, 15:30 - 16:30 (Seminar, Harmonic analysis, operator algebras and representation theory)
Håkan Andreasson: On the existence and structure of stationary solutions of the Einstein-Vlasov system
Lecturer: Håkan Andreasson, Chalmers/University of Gothenburg
2019-11-14, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Piotr Pstrągowski: Chromatic homotopy is algebraic when p > n^2+n+1
Lecturer: Piotr Pstrągowski, Stockholms universitet
2019-11-14, 10:15 - 12:00 (Topological Activities)
Annegret Burtscher: Spacetime convergence for warped products
Lecturer: Annegret Burtscher, Radboud University
2019-11-14, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Wietse Boon: Mixed-Dimensional PDEs: From Functional Analysis to Discretization Methods
Lecturer: Wietse Boon, KTH
2019-11-14, 14:15 - 15:00 (Seminar, Numerical analysis)
Nausica Aldeghi: Self-adjoint operators, quadratic forms and the Trotter product formula
Lecturer: Nausica Aldeghi, Stockholms universitet
2019-11-15, 13:00 - 14:00 (Seminar, Graduate student)
Oliver Lindblad Petersen: Cauchy horizons in vacuum spacetimes
Lecturer: Oliver Lindblad Petersen, Universität Hamburg
2019-11-19, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Thomas Johnson: The linear stability of the Schwarzschild solution in a generalised wave gauge
Lecturer: Thomas Johnson, Imperial College London
2019-11-19, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Fortino Garcia: WaveHoltz: Iterative solution of the Helmholtz equation via the wave equation
Lecturer: Fortino Garcia, University of Colorado Boulder
Location: KTH, 3418
2019-11-19, 11:00 - 11:45 (Seminar, Numerical analysis)
Fanny Augeri: Large deviations for traces of Wigner matrices
Lecturer: Fanny Augeri, Weizmann Institute of Science
Location: KTH, F11
2019-11-19, 15:15 - 16:15 (Seminar, Random matrix theory)
Francesca Balestrieri: Arithmetic of zero-cycles on products of Kummer varieties and K3 surfaces
Lecturer: Francesca Balestrieri, Institute of Science and Technology Austria
2019-11-20, 11:00 - 12:00 (Seminar, Number Theory)
Samuel Modée: Limiting Behavior of the Largest Eigenvalues of Random Toeplitz Matrices
Lecturer: Samuel Modée
2019-11-20, 13:00 - 14:00 (Degree project)
Ujue Etayo: Thomson Problem Revisited - Distributing Points on a Sphere
Lecturer: Ujue Etayo, Technische Universität Graz
Location: Kräftriket, house 6, room 306 (Cramér room)
2019-11-20, 13:15 - 14:14 (Seminar, Analysis Stockholm)
David Rydh: Birational Geometry and Derived Stacks
Seminar, other
Lecturer: David Rydh, KTH
Location: KTH, D2
2019-11-20, 14:00 - 14:30 (Seminar, other)
Kristian Bjerklöv: Resonance problems in Dynamical Systems
Lecturer: Kristian Bjerklöv, KTH
2019-11-20, 14:30 - 15:00 (Seminar, other)
Erik Ekström: Ghost games
Lecturer: Erik Ekström, Uppsala universitet
2019-11-20, 15:15 (Seminar, Mathematical statistics)
Markus Kunze: Higher regularity of the 'tangential' fields in the relativistic Vlasov-Maxwell system
Lecturer: Markus Kunze, University of Cologne
2019-11-21, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Mincong Zeng: Dual Steenrod algebra, real cobordism and Morava E-theories
Lecturer: Mincong Zeng, Utrecht University
2019-11-21, 10:15 - 12:00 (Topological Activities)
Dietrich Häfner: Linear stability of slowly rotating Kerr spacetimes
Lecturer: Dietrich Häfner, Université Grenoble Alpes
2019-11-21, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Natalia Andrea Londono Castrillon: A Topological analysis of an alternative to the PageRank algorithm in weighted directed graphs
Lecturer: Natalia Andrea Londono Castrillon
2019-11-21, 13:00 - 14:00 (Degree project)
Mini-Symposium on Computational Mathematics
2019-11-21, 13:30 - 17:00 (Conference)
Patrice Koehl: Coarse-grained dynamics of supramolecules
Lecturer: Patrice Koehl, University of California, Davis and Institut de Physique Théorique (IPhT)
2019-11-21, 13:30 - 14:15 (Miscellaneous)
Ozan Öktem: Bayesian inversion for tomography through machine learning
Lecturer: Ozan Öktem, KTH
2019-11-21, 14:15 - 15:00 (Miscellaneous)
Murtazo Nazarov: Stabilized finite element methods for fluid problems
Lecturer: Murtazo Nazarov, Uppsala University
2019-11-21, 15:30 - 16:15 (Miscellaneous)
Markus Kowalewski: Non-adiabatic molecular dynamics in light activated chemical reactions
Lecturer: Markus Kowalewski, Stockholms universitet
2019-11-21, 16:15 - 17:00 (Miscellaneous)
Erik Palmgren Memorial Ceremony
Location: Kräftriket, house 6, Matematiska biblioteket
2019-11-22, 11:00 (Miscellaneous)
Svenska matematikersamfundets höstmöte (Autumn meeting of the Swedish Mathematical Society)
2019-11-22, 13:00 - 17:00 (Conference)
Philippe Moreillon: Strong law of large numbers under the assumption of weak covariances
Lecturer: Philippe Moreillon
2019-11-22, 13:15 - 15:00 (Seminar, Graduate student)
Patrice Koehl: Optimal transport at finite temperature
2019-11-22, 14:00 - 15:00 (Seminar, Computational Mathematics)
Jonas Nordqvist: Residue fixed point index and wildly ramified power series
Lecturer: Jonas Nordqvist, Linnéuniversitetet
2019-11-22, 14:10 - 14:30 (Miscellaneous)
Jacob Muller: Spectra of higher order differential operators on graphs and almost periodic functions
Lecturer: Jacob Muller, Stockholms universitet
2019-11-22, 14:35 - 14:55 (Miscellaneous)
Tomas Berggren: Domino tilings of the Aztec diamond with doubly periodic weightings
Lecturer: Tomas Berggren, KTH
2019-11-22, 15:00 - 15:20 (Miscellaneous)
Julian Mauersberger: Large gap asymptotics for determinantal point processes
Lecturer: Julian Mauersberger, KTH
2019-11-22, 15:45 - 16:05 (Miscellaneous)
Samu Potka: The Cyclic Sieving Phenomenon on Circular Dyck Paths
Lecturer: Samu Potka, KTH
2019-11-22, 16:35 - 16:55 (Miscellaneous)
Christian Franzke: Statistical modeling of extreme precipitation
Lecturer: Christian Franzke
2019-11-25, 15:15 - 16:15 (Seminar, Mathematical statistics)
Stefanos Aretakis: Observational signatures for extremal black holes
Lecturer: Stefanos Aretakis, University of Toronto
2019-11-26, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Pieter Blue: On the stability of higher dimensions
Lecturer: Pieter Blue, University of Edinburgh
2019-11-26, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Khazhgali Kozhasov: On complete monotonicity of inverse powers of elementary symmetric polynomials
Lecturer: Khazhgali Kozhasov
2019-11-27, 10:15 - 11:00 (Seminar, Combinatorics)
Khazhgali Kozhasov: On the number of critical points of a real form on the sphere
Lecturer: Khazhgali Kozhasov, TU Braunschweig
2019-11-27, 13:15 - 14:15 (Seminar, Algebra & Geometry)
Meredith Sargent: Escaping non-tangentiality: a different approach to Julia-Caratheodory theory
Lecturer: Meredith Sargent, University of Arkansas
2019-11-27, 13:15 - 14:15 (Seminar, Analysis Stockholm)
Christian Bönicke: Regularity properties for ample groupoids
Lecturer: Christian Bönicke, University of Glasgow
2019-11-27, 15:30 - 16:30 (Seminar, Harmonic analysis, operator algebras and representation theory)
Oscar Reula: On necessary and sufficient conditions for strong hyperbolicity
Lecturer: Oscar Reula, Universidad Nacional de Cordoba
2019-11-28, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Kristian Moi: Grothendieck-Witt groups of stable infinity categories
Lecturer: Kristian Moi, KTH
2019-11-28, 10:15 - 12:00 (Topological Activities)
Stephen McCormick: Gluing collars to manifolds; how and why
Lecturer: Stephen McCormick, Uppsala University
2019-11-28, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Disa Hansson: Modelling Sexual Interactions - Sexual behaviour and the spread of sexually transmitted infections on dynamic networks
Doctoral student: Disa Hansson , Stockholms universitet
2019-11-28, 13:00 (Dissertation)
Giampaolo Mele: Krylov methods for nonlinear eigenvalue problems and matrix equations
Lecturer: Giampaolo Mele, KTH
2019-11-28, 14:15 - 15:00 (Seminar, Numerical analysis)
Robin Stoll: Operads and the recognition principle for loop spaces
Lecturer: Robin Stoll
2019-11-29, 13:00 - 14:00 (Seminar, Graduate student)
Michael Damron: Absence of backward infinite paths in first-passage percolation in arbitrary dimension
Lecturer: Michael Damron, Georgia Tech
2019-11-29, 14:30 (Seminar, Mathematical statistics)
Nicola Quercioli: Group equivariant non-expansive operators for data analysis and machine learning
Lecturer: Nicola Quercioli, University of Bologna
2019-12-02, 15:15 - 16:15 (Seminar, MathDataLab)
Chao Liu: Stability of FLRW metric for polytropic (Makino) fluids
Lecturer: Chao Liu, Peking University
Location: Institut Mittag-Leffler, Seminar Hall Kuskvillan
2019-12-03, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Lorenzo Mazzieri: Monotonicity formulas in potential theory and applications
Lecturer: Lorenzo Mazzieri, University of Trento
2019-12-03, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Alexey Bufetov: Color-position symmetry in interacting particle systems
Lecturer: Alexey Bufetov, Universität Bonn
2019-12-03, 15:15 - 16:15 (Seminar, Random matrix theory)
Katrien Antonio: Boosting insights in insurance tariff plans with tree-based machine learning methods
Lecturer: Katrien Antonio, KU Leuven
2019-12-03, 15:30 (Seminar, Mathematical statistics)
Anders Mörtberg: Programming and proving with higher inductive types in Cubical Agda
Lecturer: Anders Mörtberg
2019-12-04, 10:00 - 11:45 (Seminar, Logic)
Felix Wahl: Micro-level claims reserving in non-life insurance
Doctoral student: Felix Wahl , Stockholms universitet
2019-12-04, 10:00 (Dissertation)
Matthias Beck: Lonely Runner Polyhedra
Lecturer: Matthias Beck
2019-12-04, 10:15 - 11:00 (Seminar, Combinatorics)
Valentijn Karemaker: Comparing obstructions to local-global principles for rational points over semiglobal fields
Lecturer: Valentijn Karemaker, Universiteit Utrecht and Stockholms Universitet
2019-12-04, 11:00 - 12:00 (Seminar, Number Theory)
Ties Laarakker: Vertical Vafa-Witten Invariants
Lecturer: Ties Laarakker, Imperial College London
2019-12-04, 13:15 - 14:15 (Seminar, Algebra & Geometry)
Mitja Nedic: Analytic characterizations of the Lebesgue measure
Lecturer: Mitja Nedic, Stockholms universitet
2019-12-04, 13:15 - 14:14 (Seminar, Analysis Stockholm)
Odysseas Bakas: Remarks on a theorem of Pichorides
Lecturer: Odysseas Bakas, Lunds universitet
2019-12-04, 15:30 - 16:30 (Seminar, Harmonic analysis, operator algebras and representation theory)
Istvan Racz: On construction of Riemannian three-spaces with smooth generalized inverse mean curvature flows
Lecturer: Istvan Racz, Wigner Research Center for Physics
2019-12-05, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Lior Yanovsky: Higher semiadditivity (a.k.a. "ambidexterity") and chromatic homotopy theory
Lecturer: Lior Yanovsky, Max-Planck-Institut für Mathematik, Bonn
2019-12-05, 10:15 - 12:00 (Topological Activities)
Andrzej Rostworowski: A new perspective on metric gravitational perturbations of spherically symmetric spacetimes
Lecturer: Andrzej Rostworowski, Jagiellonian University
2019-12-05, 11:00 - 12:00 (Seminar, Mittag-Leffler)
On two-dimensional conformal geometry related to the Schramm-Loewner evolution
Doctoral student: Lukas Schoug , Matematik (Avd.)
2019-12-06, 09:00 (Dissertations)
Alexander Aurell: Topics in the mean-field type approach to pedestrian crowd modeling and conventions.
Lecturer: Alexander Aurell, KTH
2019-12-06, 11:00 - 12:00 (Seminar, Optimization and systems theory)
Eric Ahlqvist: Primes and knots
Lecturer: Eric Ahlqvist
2019-12-06, 13:00 - 14:00 (Seminar, Graduate student)
Jonathan Rohleder: An introduction to spectral theory of the Laplacian
Trial lecture
Lecturer: Jonathan Rohleder, Stockholms universitet
2019-12-06, 14:00 (Trial lecture)
Lucia Geometrica, a celebration of geometry
Mon 2019-12-09, 09:30 - Fri 2019-12-13, 14:30 (Conference)
Jerzy Lewandowski: Isolated horizons, near horizon geometries and the Petrov type D equation
Lecturer: Jerzy Lewandowski, University of Warsaw
2019-12-10, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Rita Teixeira da Costa: Mode stability for the Teukolsky equation on extremal Kerr black hole spacetimes
Lecturer: Rita Teixeira da Costa, University of Cambridge
2019-12-10, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Evelina Stringer: Origami och matematik (Origami and mathematics)
Lecturer: Evelina Stringer
2019-12-10, 11:00 - 12:00 (Degree project)
Johan Klint: Simulating carriage of an infectious disease using mathematical models
Lecturer: Johan Klint
2019-12-11, 09:00 - 10:00 (Degree project)
Anders Mörtberg: Programming and proving with higher inductive types in Cubical Agda, part 2
Lecturer: Anders Mörtberg, Stockholms universitet
2019-12-11, 10:00 - 11:45 (Seminar, Logic)
Erik Thorsén: Assessment of the uncertainty in small and large dimensional portfolio allocation
Licentiate seminar
Doctoral student: Erik Thorsén , Stockholms universitet
2019-12-11, 15:15 (Licentiate seminar)
Justin Corvino: Deformation and asymptotics for the constraint equations
Lecturer: Justin Corvino, Lafayette College
2019-12-12, 10:00 - 11:00 (Seminar, Mittag-Leffler)
Christian Bär: Index theory on Lorentzian manifolds
Lecturer: Christian Bär, University of Potsdam
2019-12-12, 11:00 - 12:00 (Seminar, Mittag-Leffler)
Mathias Anselmann: Application of higher order Galerkin-collocation time discretizations to waves and the Navier-Stokes equations with outlook towards Cut-FEM
Lecturer: Mathias Anselmann, Helmut Schmidt University, Hamburg
2019-12-12, 13:15 - 14:00 (Seminar, Numerical analysis)
Shun Wakatsuki: Eilenberg-Moore isomorphism
Lecturer: Shun Wakatsuki
2019-12-13, 13:00 - 14:00 (Seminar, Graduate student)
Markus Land: Chromatic localizations of algebraic K-theory
Lecturer: Markus Land, Universität Regensburg
2019-12-13, 15:15 - 17:00 (Topological Activities)
Oliver Krüger: On linear graph invariants related to Ramsey and edge numbers
Doctoral student: Oliver Krüger , Stockholms universitet
2019-12-16, 10:00 (Dissertation)
Topics in the mean-field type approach to pedestrian crowd modeling and conventions
Location: Kollegiesalen, Brinellvägen 8, Stockholm (English)
Doctoral student: Alexander Aurell , Matematisk statistik
2019-12-16, 10:00 (Dissertations)
Alessandro Oneto: The geometry of strength of polynomials
Lecturer: Alessandro Oneto, Universität Magdeburg
2019-12-16, 15:30 (Seminar, Commutative Algebra)
Innocent Ndikubwayo: Topics in polynomial sequences defined by linear recurrences
Doctoral student: Innocent Ndikubwayo , Stockholms universitet
2019-12-17, 10:00 (Licentiate seminar)
Boundary integral methods for fast and accurate simulation of droplets in two-dimensional Stokes flow
Applied and Computational Mathematics, Numerical Analysis
Doctoral student: Sara Pålsson , Numerisk analys, NA
2019-12-18, 10:00 (Dissertations)
Alison Etheridge: Modelling evolution in a spatial continuum
Lecturer: Alison Etheridge, University of Oxford
Location: AlbaNova, FR4 Oskar Klein
2019-12-18, 15:15 - 17:00 (SMC Colloquium)
Asian Pacific Journal of Cancer Prevention
Pages 2689-2698
Publisher: Asian Pacific Organization for Cancer Prevention
Epithelial-mesenchymal Transition and Its Role in the Pathogenesis of Colorectal Cancer
Zhu, Qing-Chao (Department of Surgery, The Sixth People's Hospital Affiliated to Shanghai Jiao Tong University) ;
Gao, Ren-Yuan (Department of Surgery, The Sixth People's Hospital Affiliated to Shanghai Jiao Tong University) ;
Wu, Wen (Department of Surgery, The Sixth People's Hospital Affiliated to Shanghai Jiao Tong University) ;
Qin, Huan-Long (Department of Surgery, The Sixth People's Hospital Affiliated to Shanghai Jiao Tong University)
Published: 2013.05.30
https://doi.org/10.7314/APJCP.2013.14.5.2689
Epithelial-to-mesenchymal transition (EMT) is a collection of events that allows the conversion of adherent epithelial cells, tightly bound to each other within an organized tissue, into independent fibroblastic cells possessing migratory properties and the ability to invade the extracellular matrix. EMT contributes to the complex architecture of the embryo by permitting the progression of embryogenesis from a simple single-cell-layer epithelium to a complex three-dimensional organism composed of both epithelial and mesenchymal cells. In most tissues, however, EMT is a developmentally restricted process, and fully differentiated epithelia typically maintain their epithelial phenotype. Recently, elements of EMT, especially the loss of epithelial markers and the gain of mesenchymal markers, have been observed in pathological states, including epithelial cancers. Increasing evidence has confirmed its presence in the human colon during colorectal carcinogenesis. Chronic inflammation is generally considered to be one of the causes of many human cancers, including colorectal cancer (CRC). Accordingly, epidemiologic and clinical studies indicate that patients affected by ulcerative colitis and Crohn's disease, the two major forms of inflammatory bowel disease, have an increased risk of developing CRC. A large body of evidence supports roles for the SMAD/STAT3 signaling pathway, the NF-kB pathway, the Ras-mitogen-activated protein kinase/Snail/Slug pathway and microRNAs in the development of colorectal cancers via epithelial-to-mesenchymal transition. Thus, EMT appears to be closely involved in the pathogenesis of colorectal cancer, and its analysis can yield novel targets for therapy.
Keywords: epithelial-mesenchymal transition; colorectal cancer; inflammation; signaling pathway
Calvert PM, Frucht H (2002). The genetics of colorectal cancer. Ann Intern Med, 137, 603-12. https://doi.org/10.7326/0003-4819-137-7-200210010-00012
Carla C, Yoshiharu M, Juan LI (2010). Epithelial-to-Mesenchymal Transition in pancreatic adenocarcinoma. Sci World J, 10, 1947-57. https://doi.org/10.1100/tsw.2010.183
Center MM, Jemal A, Ward E (2009). International trends in colorectal cancer incidence rates. Cancer Epidemiol Biomarkers Prev, 18, 1688-94. https://doi.org/10.1158/1055-9965.EPI-09-0090
Chua HL, Bhat-Nakshatri P, Clare SE, et al (2007). NF-kappaB represses E-cadherin expression and enhances epithelial to mesenchymal transition of mammary epithelial cells: potential involvement of ZEB-1 and ZEB-2. Oncogene, 26, 711-24. https://doi.org/10.1038/sj.onc.1209808
Chung CH, Parker JS, Ely K, et al (2006). Gene expression profiles identify epithelial-to-mesenchymal transition and activation of nuclear factor-kappaB signaling as characteristics of a high-risk head and neck squamous cell carcinoma. Cancer Res, 66, 8210-8. https://doi.org/10.1158/0008-5472.CAN-06-1213
Colotta F, Allavena P, Sica A, Garlanda C, Mantovani A (2009). Cancer-related inflammation, the seventh hallmark of cancer: links to genetic instability. Carcinogenesis, 30, 1073-81. https://doi.org/10.1093/carcin/bgp127
Cottonham CL, Kaneko S, Xu L (2010). miR-21 and miR-31 converge on TIAM1 to regulate migration and invasion of colon carcinoma cells. J Biol Chem, 285, 35293-302. https://doi.org/10.1074/jbc.M110.160069
Cuevas BD, Uhlik MT, Garrington TP, Johnson GL (2005). MEKK1 regulates the AP-1 dimer repertoire via control of JunB transcription and Fra-2 protein stability. Oncogene, 24, 801-9. https://doi.org/10.1038/sj.onc.1208239
Chen X, Halberg RB, Burch RP, Dove WF (2008). Intestinal adenomagenesis involves core molecular signatures of the epithelial-mesenchymal transition. J Mol Histol, 39, 283-94. https://doi.org/10.1007/s10735-008-9164-3
Coussens LM, Werb Z (2002). Inflammation and cancer. Nature, 420, 860-7. https://doi.org/10.1038/nature01322
Conidi A, van den Berghe V, Huylebroeck D (2013). Aptamers and their potential to selectively target aspects of EGF, Wnt/$\beta$-catenin and TGF-$\beta$-Smad family signaling. Int J Mol, 14, 6690-719. https://doi.org/10.3390/ijms14046690
Bates RC (2005). Colorectal cancer progression: integrin alphavbeta6 and the epithelial-mesenchymal transition (EMT). Cell Cycle, 4, 1350-2. https://doi.org/10.4161/cc.4.10.2053
Bates RC, Bellovin DI, Brown C, et al (2005). Transcriptional activation of integrin beta6 during the epithelial-mesenchymal transition defines a novel prognostic indicator of aggressive colon carcinoma. J Clin Invest, 115, 339-47. https://doi.org/10.1172/JCI200523183
Douglas S, Micalizzi S M, Farabaugh H L (2010). Epithelial-Mesenchymal Transition in cancer: Parallels between normal development and tumor progression. J Mammary Biol Neoplasia, 15, 117-34. https://doi.org/10.1007/s10911-010-9178-9
De Krijger I, Mekenkamp LJ, Punt CJ, Nagtegaal ID (2011). MicroRNAs in colorectal cancer metastasis. J Pathol, 224, 438-47. https://doi.org/10.1002/path.2922
Davalos V, Moutinho C, Villanueva A, et al (2012). Dynamic epigenetic regulation of the microRNA-200 family mediates epithelial and mesenchymal transitions in human tumorigenesis. Oncogene, 31, 2062-74. https://doi.org/10.1038/onc.2011.383
Dirisina R, Katzman RB, Goretsky T, et al (2011). p53 and PUMA independently regulate apoptosis of intestinal epithelial cells in patients and mice with colitis. Gastroenterology, 141, 1036-45. https://doi.org/10.1053/j.gastro.2011.05.032
Dissanayake SK, Wade M, Johnson CE, et al (2007). The Wnt5A/protein kinase C pathway mediates motility in melanoma cells via the inhibition of metastasis suppressors and initiation of an epithelial to mesenchymal transition. J Biol Chem, 282, 17259-71. https://doi.org/10.1074/jbc.M700075200
Galliher AJ, Neil JR, Schiemann WP (2006). Role of TGF-$\beta$in cancer progression. Future Oncol, 2, 743-63. https://doi.org/10.2217/14796694.2.6.743
Goss KH, Groden J. (2000). Biology of the adenomatous polyposis coli tumor suppressor. J Clin Oncol, 18, 1967-79.
Grivennikov S, Karin E, Terzic J, et al (2009). IL-6 and Stat3 are required for survival of intestinal epithelial cells and development of colitis-associated cancer. Cancer Cell, 15, 103-13. https://doi.org/10.1016/j.ccr.2009.01.001
Greten FR, Eckmann L, Greten TF, et al (2004). IKKbeta links inflammation and tumorigenesis in a mouse model of colitisassociated cancer. Cell, 118, 285-96. https://doi.org/10.1016/j.cell.2004.07.013
Guarino M (2007). Epithelial-mesenchymal transition and tumour invasion. Int J Biochem Cell Biol, 39, 2153-60. https://doi.org/10.1016/j.biocel.2007.07.011
Gulhati P, Bowen KA, Liu J, et al (2011). mTORC1 and mTORC2 regulate EMT, motility, and metastasis of colorectal cancer via RhoA and Rac1 signaling pathways. Cancer Res, 71, 3246-56. https://doi.org/10.1158/0008-5472.CAN-10-4058
Hoentjen F, Sartor RB, Ozaki M, Jobin C (2005). STAT3 regulates NF-kappaB recruitment to the IL-12p40 promoter in dendritic cells. Blood, 105, 689-96. https://doi.org/10.1182/blood-2004-04-1309
Herrinton LJ, Liu L, Levin TR, et al (2012). Incidence and mortality of colorectal adenocarcinoma in persons with inflammatory bowel disease from 1998-2010. Gastroenterology, 143, 382-9. https://doi.org/10.1053/j.gastro.2012.04.054
Javle MM, Gibbs JF, Iwata KK, et al (2007). Epithelialmesenchymal transition (EMT) and activated extracellular signal-regulated kinase (p-Erk) in surgically resected pancreatic cancer. Ann Surg Oncol, 14, 3527-33. https://doi.org/10.1245/s10434-007-9540-3
Jess T, Simonsen J, Jorgensen KT, et al (2012). Decreasing risk of colorectal cancer in patients with inflammatory bowel disease over 30 years. Gastroenterology, 143, 375-81. https://doi.org/10.1053/j.gastro.2012.04.016
Jing Y, Han Z, Zhang S, Liu Y, Wei L (2011). Epithelial-Mesenchymal Transition in tumor microenvironment. Cell Biosci, 1, 29. https://doi.org/10.1186/2045-3701-1-29
Saydam O, Shen Y, Wurdinger T, et al (2009). Downregulated microRNA-200a in meningiomas promotes tumor growth by reducing E-cadherin and activating the Wnt/beta-catenin signaling pathway. Mol Cell Biol, 29, 5923-40. https://doi.org/10.1128/MCB.00332-09
Sipos F, Galamb O (2012). Epithelial-to-mesenchymal and mesenchymal-to-epithelial transitions in the colon. World J Gastroenterol, 18, 601-8. https://doi.org/10.3748/wjg.v18.i7.601
Tang FY, Pai MH, Chiang EP (2012). Consumption of high-fat diet induces tumor progression and epithelial-mesenchymal transition of colorectal cancer in a mouse xenograft model. J Nutr Biochem, 23, 1302-13. https://doi.org/10.1016/j.jnutbio.2011.07.011
Techasen A, Loilome W, Namwat N, et al (2012). Cytokines released from activated human macrophages induce epithelial mesenchymal transition markers of cholangiocarcinoma cells. Asian Pac J Cancer Prev, 13, 115-8.
Thiery JP (2003). Epithelial-mesenchymal transitions in development and pathologies. Curr Opin Cell Biol, 15, 740-6. https://doi.org/10.1016/j.ceb.2003.10.006
Trimboli AJ, Fukino K, de Bruin A, et al (2008). Direct evidence for epithelial-mesenchymal transitions in breast cancer. Cancer Res, 68, 937-45. https://doi.org/10.1158/0008-5472.CAN-07-2148
Thuault S, Tan EJ, Peinado H, et al (2008). HMGA2 and Smads co-regulate Snail expression during induction of epithelialmesenchymal transition. J Biol Chem, 283, 33437-46. https://doi.org/10.1074/jbc.M802016200
Thompson EW, Torri J, Sabol M, et al (1994). Oncogene-induced basement membrane invasiveness in human mammary epithelial cells. Clin Exp Metastasis, 12, 181-94. https://doi.org/10.1007/BF01753886
Thiery JP, Acloque H, Huang RY, Nieto MA (2009). Epithelialmesenchymal transitions in development and disease. Cell, 139, 871-90. https://doi.org/10.1016/j.cell.2009.11.007
Varnat F, Duquet A, Malerba M, et al (2009). Human colon cancer epithelial cells harbour active HEDGEHOG-GLI signalling that is essential for tumour growth, recurrence, metastasis and stem cell survival and expansion. EMBO Mol Med, 1, 338-51. https://doi.org/10.1002/emmm.200900039
Vibeke A, Jonal H, Ulla V (2012). Colorectal cancer in patients with inflammatory bowel disease: can we predict risk? World J Gastroenterol, 18, 4091-4. https://doi.org/10.3748/wjg.v18.i31.4091
Vidic S, Markelc B, Sersa G, et al (2010). MicroRNAs targeting mutant K-ras by electrotransfer inhibit human colorectal adenocarcinoma cell growth in vitro and in vivo. Cancer Gene Ther, 17, 409-19. https://doi.org/10.1038/cgt.2009.87
Wang H, Wang HS, Zhou BH, et al (2013). Epithelialmesenchymal transition (EMT) induced by TNF-$\alpha$ requires AKT/GSK-3$\beta$-mediated stabilization of snail in colorectal cancer. PLoS One, 8, e56664. https://doi.org/10.1371/journal.pone.0056664
Wang X, Belguise K, Kersual N, et al (2007). Oestrogen signalling inhibits invasive phenotype by repressing RelB and its target BCL2. Nat Cell Biol, 9, 470-8. https://doi.org/10.1038/ncb1559
Westbrook AM, Szakmary A, Schiestl RH (2010). Mechanisms of intestinal inflammation and development of associated cancers: lessons learned from mouse models. Mutat Res, 705, 40-59. https://doi.org/10.1016/j.mrrev.2010.03.001
Wu L, Fan J, Belasco JG (2006). MicroRNAs direct rapid deadenylation of mRNA. Proc Natl Acad Sci U S A, 103, 4034-9. https://doi.org/10.1073/pnas.0510928103
Wienholds E, Koudijs MJ, van Eeden FJ, Cuppen E, Plasterk RH (2003). The microRNA-producing enzyme Dicer1 is essential for zebrafish development. Nat Genet, 35, 217-8. https://doi.org/10.1038/ng1251
Wu ST, Sun GH, Hsu CY, et al (2011). Tumor necrosis factor-$\alpha$ induces epithelial-mesenchymal transition of renal cell carcinoma cells via a nuclear factor kappa B-independent mechanism. Exp Biol Med, 236, 1022-9. https://doi.org/10.1258/ebm.2011.011058
Yang J, Mani SA, Donaher JL, et al (2004). Twist, a master regulator of morphogenesis, plays an essential role in tumor metastasis. Cell, 117, 927-39. https://doi.org/10.1016/j.cell.2004.06.006
Zhang F, Zhang X, Li M, et al (2010). mTOR complex component Rictor interacts with PKCzeta and regulates cancer cell metastasis. Cancer Res, 70, 9360-70. https://doi.org/10.1158/0008-5472.CAN-10-0207
Zhao S, Venkatasubbarao K, Lazor JW, et al (2008). Inhibition of STAT3 Try705 phosphorylation by Smad4 suppresses transforming growth factor beta-mediated invasion and metasis in pancreatic cancer cells. Cancer Res, 68, 4221-8. https://doi.org/10.1158/0008-5472.CAN-07-5123
Zou J, Luo H, Zeng Q, et al (2011). Protein kinase $CK2\alpha$ is overexpressed in colorectal cancer and modulates cell proliferation and invasion via regulating EMT-related genes. J Transl Med, 9, 97. https://doi.org/10.1186/1479-5876-9-97
피인용 문헌
High expression level of TMPRSS4 predicts adverse outcomes of colorectal cancer patients vol.30, pp.4, 2013, https://doi.org/10.1007/s12032-013-0712-7
Overexpression of peroxiredoxin 2 inhibits TGF-β1-induced epithelial-mesenchymal transition and cell migration in colorectal cancer vol.10, pp.2, 2014, https://doi.org/10.3892/mmr.2014.2316
Knockdown of Y-box-binding protein-1 inhibits the malignant progression of HT-29 colorectal adenocarcinoma cells by reversing epithelial-mesenchymal transition vol.10, pp.5, 2014, https://doi.org/10.3892/mmr.2014.2545
An Epigenetic Mechanism Underlying Doxorubicin Induced EMT in the Human BGC-823 Gastric Cancer Cell vol.15, pp.10, 2014, https://doi.org/10.7314/APJCP.2014.15.10.4271
Radiation Induces Phosphorylation of STAT3 in a Dose- and Time-dependent Manner vol.15, pp.15, 2014, https://doi.org/10.7314/APJCP.2014.15.15.6161
GRP78 Secreted by Colon Cancer Cells Facilitates Cell Proliferation via PI3K/Akt Signaling vol.15, pp.17, 2014, https://doi.org/10.7314/APJCP.2014.15.17.7245
Insights into the Diverse Roles of miR-205 in Human Cancers vol.15, pp.2, 2014, https://doi.org/10.7314/APJCP.2014.15.2.577
Cascade vol.15, pp.22, 2014, https://doi.org/10.7314/APJCP.2014.15.22.9967
Expression in Colorectal Cancer is Linked to Ethnic Origin vol.15, pp.5, 2014, https://doi.org/10.7314/APJCP.2014.15.5.2083
BMI1 and TWIST1 Downregulated mRNA Expression in Basal Cell Carcinoma vol.15, pp.8, 2014, https://doi.org/10.7314/APJCP.2014.15.8.3797
Crosstalk of Oncogenic Signaling Pathways during Epithelial–Mesenchymal Transition vol.4, pp.2234-943X, 2014, https://doi.org/10.3389/fonc.2014.00358
microRNA-20a enhances the epithelial-to-mesenchymal transition of colorectal cancer cells by modulating matrix metalloproteinases vol.10, pp.2, 2015, https://doi.org/10.3892/etm.2015.2538
Prognostic Evaluation of Tumor-Stroma Ratio in Patients with Early Stage Cervical Adenocarcinoma Treated by Surgery vol.16, pp.10, 2015, https://doi.org/10.7314/APJCP.2015.16.10.4363
Roles of Signaling Pathways in the Epithelial-Mesenchymal Transition in Cancer vol.16, pp.15, 2015, https://doi.org/10.7314/APJCP.2015.16.15.6201
Correlation of Overexpression of Nestin with Expression of Epithelial-Mesenchymal Transition-Related Proteins in Gastric Adenocarcinoma vol.16, pp.7, 2015, https://doi.org/10.7314/APJCP.2015.16.7.2777
Early Growth Response Protein-1 Involves in Transforming Growth factor-β1 Induced Epithelial-Mesenchymal Transition and Inhibits Migration of Non-Small-Cell Lung Cancer Cells vol.16, pp.9, 2015, https://doi.org/10.7314/APJCP.2015.16.9.4137
Aberrant Expression of Calretinin, D2–40 and Mesothelin in Mucinous and Non-Mucinous Colorectal Carcinomas and Relation to Clinicopathological Features and Prognosis vol.22, pp.4, 2016, https://doi.org/10.1007/s12253-016-0060-y
miR-375 inhibits the invasion and metastasis of colorectal cancer via targeting SP1 and regulating EMT-associated genes vol.36, pp.1, 2016, https://doi.org/10.3892/or.2016.4834
Effects of HMGA2 siRNA and doxorubicin dual delivery by chitosan nanoparticles on cytotoxicity and gene expression of HT-29 colorectal cancer cell line vol.68, pp.9, 2016, https://doi.org/10.1111/jphp.12593
Regulation of Natural Killer Cell Function by STAT3 vol.7, pp.1664-3224, 2016, https://doi.org/10.3389/fimmu.2016.00128
Cancer-type OATP1B3 mRNA has the potential to become a detection and prognostic biomarker for human colorectal cancer vol.11, pp.8, 2017, https://doi.org/10.2217/bmm-2017-0098
Modeling of Colorectal Cancer vol.23, pp.19-20, 2017, https://doi.org/10.1089/ten.tea.2017.0397
Snail homolog 1 is involved in epithelial-mesenchymal transition-like processes in human glioblastoma cells vol.13, pp.5, 2017, https://doi.org/10.3892/ol.2017.5875
inhibits epithelial-to-mesenchymal transition by targeting multiple pathways in triple-negative breast cancers pp.00219541, 2018, https://doi.org/10.1002/jcp.27222
inhibiting epithelial-mesenchymal transition vol.175, pp.15, 2018, https://doi.org/10.1111/bph.14352
Toll-Like Receptor 2-Mediated Suppression of Colorectal Cancer Pathogenesis by Polysaccharide A From Bacteroides fragilis vol.9, pp.1664-302X, 2018, https://doi.org/10.3389/fmicb.2018.01588
The Kraken Wakes: induced EMT as a driver of tumour aggression and poor outcome vol.35, pp.4, 2018, https://doi.org/10.1007/s10585-018-9906-x
Shenling Baizhu San supresses colitis associated colorectal cancer through inhibition of epithelial-mesenchymal transition and myeloid-derived suppressor infiltration vol.15, pp.1, 2015, https://doi.org/10.1186/s12906-015-0649-9
The Involvement of NF-κB/Klotho Signaling in Colorectal Cancer Cell Survival and Invasion pp.1532-2807, 2019, https://doi.org/10.1007/s12253-018-0493-6
한국과학기술정보연구원 NDSL 한국학술지 인용보고서 KPubs 한국과학기술인용색인서비스 한국전통지식포털
(34141) 대전광역시 유성구 대학로 245 한국과학기술정보연구원 TEL 042)869-1004
자세히 찾기
제목, 요약, 키워드
권
호
저자명
저자소속기관
학술회의자료
협회지 | CommonCrawl |
(Re)-Building A Better Metric – Part II
Posted on April 8, 2014 by Theck
In Part I, we talked about the criteria we wanted to satisfy to ensure that a metric was good, and briefly assessed the results of our beta test of the new version of TMI. The conclusion I came to after that testing was that, in short, it needed more work.
I don't know that it's entirely true to say that I went "back to the drawing board," so much as I went back to my slew of equations and mulled over what I could tweak in them to fix the problems. To recap, the formula I was using was:
$$\large {\rm Beta\_TMI} = c_1 \ln \left [ 1 + \frac{c_2}{N} \sum_{i=1}^N e^{F(MA_i-1)} \right ],$$
with $F=10$, $c_1=500$ and $c_2=e^{10}$.
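For concreteness, here's a minimal Python sketch of that formula. The real implementation lives in Simulationcraft, so treat this purely as an illustration; the ma list is assumed to already hold the health-normalized moving-average values.

    import math

    def beta_tmi(ma, F=10.0, c1=500.0):
        """Beta_TMI from a health-normalized moving-average array."""
        c2 = math.exp(F)  # c2 = e^F
        total = sum(math.exp(F * (x - 1.0)) for x in ma)
        return c1 * math.log(1.0 + (c2 / len(ma)) * total)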
One of the problems I was running into was one of conflicting constraints. If you look back at the last blog post, you'll see that constraint #6 was that the numbers had to stay reasonable. Mentally, I had converted this constraint to be "should have a fixed range of a few thousand," possibly up to 10 or 20 thousand at a maximum. So I was rigidly trying to keep the score down around a few thousand.
But the obvious solution to the stat weight problem was to increase $c_1$, which increases the slope of the graph. That makes a small change in spike size a more significant change in TMI, and gives you larger stat weights. Multiply $c_1$ by ten, and your stat weights all get multiplied by 10. Seems simple enough.
Except that in the beta test, I got data with TMIs ranging from a few hundred to over 12 thousand. So if I multiply by ten, I'm looking at TMIs ranging from around a thousand to over 120 thousand, which is a much larger range. And a factor of ten still wouldn't have fixed everything thanks to the "knee" in the graph, because if your TMI was on the really low end you could still get garbage stat weights.
It felt like the two constraints were at odds with one another. And both at odds with a third, somewhat self-imposed constraint, which is that I wanted to keep the zero-bounding effect that the "1+" in the brackets produced. Because without that, the score could go negative, which is odd. After all, what does it mean when your arbitrary FICO-like metric goes negative? Which just led back to more fussing over the fact that I was still pretty light on "meaning" in this metric to begin with.
It was a conversation with a colleague that led me to the solution. While discussing the stat weight issues, and how I could tweak the equation to fix them, he mentioned that he would rather have a metric with large numbers that had an obvious meaning than a nicely-constrained metric that didn't. We were talking in terms of percentages of health, and it was only at that point that the answer hit me. Within a day of that conversation, I made all of the changes I needed to give TMI a meaning.
Asking The Right Question
As is often the case, the answer had been staring me in the face the entire time. I've been looking at this graph (in various different incarnations, with various different constants) for the last few months:
Simulated TMI data using the Beta_TMI formula. Red is the uniform damage case, blue is the single-spike case, and green is pseudo-random combat data.
What that conversation led me to realize was that I was asking the wrong question. I was trying to figure out what combination of constants I needed to keep the numbers "reasonable." But my definition of "reasonable" was vague and arbitrary. So it's no surprise that what I was getting out was also… vague and arbitrary.
What I should have been doing was trying to come up with a score that does a better job of communicating to the user how big those spikes were. Because that, by definition, would be "reasonable" no matter what size the numbers were.
In other words, the question I should have been asking was "how can I tweak this equation so that the number it spits out has a simple and intuitive relationship to the spike size, expressed in a scale that the user can not only easily understand, but easily remember?"
And the answer, which was clear after that conversation, was to use percent health.
To illustrate, let's flip that graph around its diagonal, so that instead of plotting TMI vs. $MA_{\rm max}$, we're plotting $MA_{\rm max}$ vs. TMI.
The same data, just plotted in reverse.
At a given TMI value, the $MA_{\rm max}$ values we get from the random combat simulation always fall below the blue single-spike line. In other words, at a TMI of X, you can confidently say that the maximum spike you will take is of size Y. It could be smaller, of course – you could take a few spikes that are a little smaller than Y and get the same score. But you can be absolutely sure it isn't above Y.
So we just need to find a way to make the relationship between X and Y obvious, such that someone can look at a TMI of e.g. 20k and immediately know how large of a damage spike that is, as a percentage of their health.
We could use a one-to-one relationship, such that a TMI of 100 meant you were taking spikes that were 100% of your health. That would correspond to a slope of 100, or a $c_1$ of 10. But that would give us even smaller stat weights, which is a problem. We could literally end up with a plot in Simulationcraft where every single one of your stat weights was 0.00.
It would be nice to keep using factors of ten. Bumping it up to a slope of 1000 doesn't work. That's a $c_1$ of 100, which is still smaller than what we used in Beta_TMI. A slope of 10000, or a $c_1$ of 1000, is only a factor of two improvement over Beta_TMI, so our stat weights will still be sloppy.
But a slope of 100k… that might just work. A TMI of 100k would mean that your maximum spikes were around 100% of your health. If your TMI went up to 120k, you'd immediately know that the spikes are now about 120% of your health. Easy. Intuitive. Now we're getting somewhere. The stat weights would also be 20x as large as they were for Beta_TMI, ensuring that we would get good unnormalized weights even with two decimal places of precision.
So, assuming we're happy with that, it locks down our $c_1$ at $10^4$, so that every percentage of health corresponds to 1k TMI. Now we just have to look at the formula and figure out what else, if anything, needs to be changed.
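As a quick sanity check of that choice, plug a worst-case spike of 80% of your health into the single-spike relationship:

$$\large {\rm TMI} \approx c_1 F \, MA_{\rm max} = 10^4 \times 10 \times 0.8 = 80{,}000,$$

which reads as 80k TMI for an 80% spike – exactly the one-k-per-percent convention we wanted.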
Narrowing the Field
The very first thing I did after coming to this realization is toss out the "1+" in the formula. While I liked zero-bounding when we were treating this metric like a FICO score, it suddenly has no relevance if the metric has a distinct and clear meaning. Removing it allows for negative TMI values, but those negative values actually mean something now! If you end up with a TMI of -10k, it means that you were out-healing your damage intake by so much that the largest "spike" you ever took was smaller than your incoming healing in that time window. It also tells you exactly how much smaller: 10% of your health. While it's not a situation we'll run into that often, I suspect, it actually has meaning. There's no sense obscuring that information with zero-bounding.
Which just leaves the question of what to do with $c_2$. Let's look at the equation after removing the "1+":
$$\large {\rm TMI} = c_1 \ln \left [ \frac{c_2}{N} \sum_{i=1}^N e^{F(MA_i-1)} \right ] $$
If we make the single-spike approximation, i.e. that we can replace the sum with a single $e^{F(MA_{\rm max}-1)}$, we get:
$$\large \begin{align} {\rm TMI_{SS}} &= c_1 (\ln c_2 - \ln N) + c_1 F (MA_{\rm max} - 1) \\ &= c_1 F MA_{\rm max} + c_1 ( \ln c_2 - \ln N - F ) \end{align}$$
just as before. Now that we've removed the "1+" from the formula, the single-spike approximation isn't limited to large spikes anymore, so this is valid for any value of $MA_{\rm max}.$
Remember that in our single-spike approximation, $c_2$ controlled the y-intercept of the plot. And now that this y-intercept isn't being artificially modified by zero-bounding, it actually has some meaning. It's the value of $MA_{\rm max}$ at which our TMI is zero.
And given our convention that X*1000 TMI is a spike that's X% of our health, a TMI of zero should mean that we take spikes that are 0% of our health. In other words, this should happen at $MA_{\rm max}=0$. So we want our y-intercept to be zero, or
$$\large c_1 ( \ln c_2 - \ln N - F ) = 0 .$$
Since $c_1$ can't be zero, there's only one way to accomplish this: $c_2 = N e^F.$ I was already using $e^F$ for $c_2$ in Beta_TMI, so this wasn't totally unexpected. In fact, I figured out quite a while ago that the choice of $e^F$ for $c_2$ was equivalent to simplifying the term inside the sum:
$$\large \frac{e^F}{N}\sum_{i=1}^N e^{F(MA_i-1)} = \frac{1}{N}\sum_{i=1}^N e^{F\cdot MA_i}.$$
Defining $c_2=Ne^F$ would also eliminate the $1/N$ factor in front of the sum. However, there's a problem here: I don't want to eliminate it. That $1/N$ is serving an important purpose: normalizing the metric for fight length. For example, let's consider two simulations, one being three minutes long and the other six minutes long. We'll assume the boss is identical in both cases, so the magnitude and frequency of spikes are identical. In theory, the metric should give you nearly identical results for both, because the amount of danger is identical. A fight that's twice as long should have roughly twice as many large spikes, but they're spread over twice as much time.
But a longer fight will have more terms in the sum for a particular bin size, and a shorter fight will have fewer terms. So the sum will be approximately twice as large for the longer fight. The $1/N$ cancels that effect because $N$ would also be twice as large. If we get rid of that $1/N$, then the longer fight will seem significantly more dangerous than the shorter one. In other words, it would cause the metric to vary significantly with fight length, which isn't good.
So I decided to define $c_2$ slightly differently. Rather than $Ne^F$, I chose to use $N_0e^F$, where $N_0$ is a default fight length. This means that we're normalizing the fight length to $N_0$ rather than eliminating the dependence entirely, which should mean much smaller fluctuations in the metric across a large range of fight lengths. Since the default fight length in SimC is 450 seconds, that seemed like an obvious choice for $N_0$.
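Here's a quick toy check of that choice in Python. The damage model is a deliberately crude stand-in (one moving-average sample per second, a 1% chance of an 80%-health spike in any given window), so only the trend matters, not the absolute numbers:

    import math, random

    def tmi(ma, n0, F=10.0, c1=1e4):
        # c2 = N0 * e^F folds into the sum as (N0/N) * sum(e^{F*MA_i})
        total = sum(math.exp(F * x) for x in ma)
        return c1 * math.log((n0 / len(ma)) * total)

    random.seed(1)
    for length in (120, 300, 450, 600):
        ma = [0.8 if random.random() < 0.01 else 0.1 for _ in range(length)]
        unnormalized = tmi(ma, n0=length)  # c2 = N*e^F: drifts with length
        normalized = tmi(ma, n0=450)       # c2 = N0*e^F: roughly stable
        print(length, round(unnormalized / 1000), round(normalized / 1000))

The unnormalized version creeps upward as the fight gets longer, while the normalized version barely moves.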
To illustrate that graphically, I fired up Visual Studio and coded the new metric into Simulationcraft, with and without the normalization. I then ran a character through for fight lengths ranging from 100s to 600s. Here are the results:
Comparison of normalized ($N_0/N$) and unnormalized versions of the TMI metric. Vertical axis is in thousands.
The difference is pretty clear. The version where $c_2=Ne^F$ varies from a little under 65k TMI to around 86k TMI. The normalized version where $c_2 = N_0e^F=450e^F$ varies much less, from about 80k to a little over 83k, with most of that variation happening for fights that are shorter than four minutes long (i.e. not that common). This version is stable enough that it should work well for combat log analysis sites, where we'd expect a wide variety of encounter lengths.
There was one final change I felt I should make, and it's not to the formula per se, it's to the definition of $MA$. If you recall from the last post, we defined it as follows:
$$\large MA_i = \frac{T_0}{T}\sum_{j=1}^{T / dt} D_{i+j-1} / H.$$
This definition normalizes for two things: player health (by dividing by $H$), and window size (by multiplying by $T_0$). The latter is the part I wanted to change.
The reason we originally multiplied by $T_0/T$ was to allow the user to specify a shorter time window $T$ over which to calculate spikes, for example in cases where you were getting a large heal every 5 seconds, but were fighting a boss who could kill you in 3 or 4 seconds in-between those heals. This normalization meant that it calculated the moving average over $T$-second intervals, but always scaled the total damage up to what it would be if that damage intake rate were sustained for $T_0$ seconds. Doing this kept the metric from varying significantly with window size, as we discussed last year.
But that particular normalization doesn't make sense anymore now that the metric is representing a real quantity. If my TMI is a direct reflection of spike size, then I'd expect it to go up or down fairly significantly as I change the window size. If I take X damage in a 6-second time window, but only X/2 damage in a 3-second time window, then I want my TMI to drop by a factor of 2 when I drop the window size from 6 seconds to 3 seconds as well.
In other words, I want TMI to accurately reflect what percentage of my health I lose in the window I'm considering. If I want to analyze a 3-second window, then I want to know what percentage of my health the boss can take off in that 3 seconds, not how much he would take off if he had 6 seconds.
So we're entirely eliminating the time-window normalization in the definition of $MA_i$. That seems to match people's intuition for how the time-window control should work anyway (this topic has come up before, including in the comments of the Crowdsourcing TMI post), so it's a win on multiple fronts.
Bringing it all Together
Now, we have all the pieces we need to construct a formal definition for TMI v2.0. I'll update the TMI Standard Reference Document with the rigorous details, but since we've already discussed many of them, I'm only going to summarize it here. Assume we start with an array $D$ containing the damage we take in every time bin of size $dt$, and the player has health $H$.
The moving average array is now defined as
$$\large MA_i = \frac{1}{H}\sum_{j=1}^{T / dt} D_{i+j-1}.$$
In other words, it's the array in which each element is the $T$-second moving sum of damage taken, normalized to player health $H$.
We then take this array and use it to calculate TMI as follows:
$$\large {\rm TMI} = 10^4 \ln \left [ \frac{N_0}{N}\sum_{i=1}^N e^{10 MA_i} \right ] ,$$
where $N$ is the length of the $MA$ array, or equivalently the fight length divided by $dt$, and $N_0=450/dt$ is the "default" array size corresponding to a fight length of 450 seconds.
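Putting the whole definition into code form, here's a compact Python sketch of the calculation as defined above – an illustration of the math rather than the actual Simulationcraft source, with placeholder names like damage_bins of my own choosing:

    import math

    def tmi_v2(damage_bins, health, window=6.0, dt=1.0, t0=450.0):
        """TMI 2.0 from a list of damage-taken bins of width dt seconds."""
        w = int(window / dt)                 # bins per T-second window
        n = len(damage_bins) - w + 1         # length of the MA array
        # T-second moving sum of damage taken, normalized to player health
        ma = [sum(damage_bins[i:i + w]) / health for i in range(n)]
        n0 = t0 / dt                         # "default" 450-second array size
        total = sum(math.exp(10.0 * x) for x in ma)
        return 1e4 * math.log((n0 / n) * total)

As a sanity check, a 450-second fight whose worst 6-second window is about 80% of your health should land in the neighborhood of 80k.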
But Does It Work?
To illustrate how this works, let's look at some examples using Simulationcraft. I coded the new formula into my local copy and ran some tests. Here are two reports, both against the T16H25 boss, using my own character and the T16H Protection Warrior profile:
Theck
T16H Protection Warrior
The very first thing I looked at was the stat weights:
Stat weights generated with Theck using TMI 2.0
Much, much better. This was with 25k iterations, but even 10k iterations gave us reasonable (if noisy) stat weights. The error bars here are all pretty reasonable, and it wouldn't be hard to increase the precision by bumping it up to 50k iterations if we wanted to. The warrior profile's stat weights are similarly high-precision.
We could also look at the TMI distribution:
TMI distribution for Theck using TMI 2.0
Again, much nicer looking than before. We're still getting a bit of skew here, but that mostly has to do with being slightly overgeared for the boss definition. The warrior profile exhibits even stronger skew, but tests run with characters of lower gear levels (and thus higher average TMI values) show very little skew.
I also wanted to see exactly how well the TMI value reflected maximum spike size, and what (if any) difference there was. So you may have noticed that I've enhanced the tanking section of the SimC report a little bit by adding some new columns:
Updated tanking section of the SimC report, including information about spike size.
In short, SimC now also records the "Maximum Spike Damage," or MSD, for each iteration and calculates the maximum, minimum, and mean MSD value. It reports this information in units of "percentage of player health" right alongside the DTPS and TMI information that you're used to getting. Lest the multiple "max" modifiers be confusing: the MSD for one iteration is the biggest spike you take that iteration, and the "MSD Max" is the largest spike you take out of all iterations.
You may be wondering, at this point, if this isn't all superfluous. If I can code SimC to report the biggest spike, why wouldn't we want to use that directly? What does TMI add that we can't get from MSD?
The answer is continuity. MSD uses a max() function to isolate the absolute biggest spike in each iteration. Which is fine, but often misleading. For example, let's consider two different tanks, one of which takes a single spike that's 90% of their health, and another that takes one 90% spike and three or four 89% spikes. Assume nothing else in the encounter is remotely threatening them. Their MSD values will be identical, because it ignores all but the largest spike. But it's clear that the second tank is in more danger, because he's taking a large spike more frequently, and the TMI value will accurately reflect that.
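A quick toy calculation makes the difference concrete. Here I'm hand-feeding each tank's spikes as single MA entries in an otherwise-quiet 450-entry array (all the usual toy-model caveats apply):

    import math

    def tmi_from_spikes(spikes, n=450, n0=450):
        # every non-spike bin contributes e^(10*0) = 1 to the sum
        total = (n - len(spikes)) + sum(math.exp(10 * x) for x in spikes)
        return 1e4 * math.log((n0 / n) * total)

    tank_a = [0.90]                        # one 90% spike
    tank_b = [0.90, 0.89, 0.89, 0.89]      # one 90% plus three 89% spikes
    print(round(tmi_from_spikes(tank_a) / 1000, 1))  # ~90.5k
    print(round(tmi_from_spikes(tank_b) / 1000, 1))  # ~103.3k

Identical MSD, but TMI correctly reports tank B as more dangerous.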
That continuity also translates into generating better and more reliable stat weights. A stat that reduces the frequency of 90% spikes without eliminating them would be given a garbage stat weight if we tried to scale over MSD, because MSD doesn't retain any information about frequency. However, we know that stats like hit and expertise are strong partly because they reduce spike frequency. TMI reflects that accurately while MSD simply can't.
MSD is still useful though, in that having both TMI and MSD gives us additional information about our spike patterns. It also gives us a convenient way to compare the two to see how TMI works.
First, take a look at the TMI Max and MSD Max values. You'll notice they mimic each other pretty well: MSD Max is 150.3%, TMI Max is 151.7k. This makes sense for the extreme case because that's when all the planets align to create your worst-case scenario, which is rare. It won't happen multiple times per fight, so it's a situation where you have one giant spike that dominates the score, much like our single-spike approximation. And in that approximation, TMI is roughly equal to the largest spike size, just like it should be.
Comparing the mean TMI value (just "TMI" on the table) to the MSD mean shows a little bit of a gap: MSD Mean is 69.5%, TMI mean is 82.8k. The TMI is about 13k above where you'd expect it to be based on the single-spike model. That's because of spike frequency. You wouldn't normally expect to take one giant spike in an encounter and nothing else; the more common case is to take several spikes of similar magnitude over that 450 seconds. If we're taking 3-4 of those spikes, then that's going to raise the TMI value a little bit compared to the situation where we only take one. That's exactly what's happening here.
Mathematically, if we take $n$ spikes of similar size, we expect the TMI to exceed the single-spike value by $c_1\ln(n)$, or $10{\rm k}\times\ln(n)$ in our units. In this simulation, the TMI is about 13k above the single-spike estimate, meaning that $n\approx 3.8.$ In other words, on average we're taking roughly 3.8 spikes every 450 seconds, each of which is about 69.5% of our health. That's pretty useful information – in fact, I may add it to the table in the future if people would like SimC to calculate it for them.
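In equation form, using the single-spike estimate $10^5\times MSD$ as the base value (an approximation, since real spikes aren't all the same size):

$$\large n \approx \exp\left(\frac{{\rm TMI} - 10^5\times MSD}{10^4}\right) = \exp\left(\frac{82{,}838 - 69{,}500}{10^4}\right) \approx 3.8.$$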
You can see that the gap grows considerably for the minimum TMI and MSD values. The MSD Min is only about 31% while the minimum TMI is ~66k. Again, this comes down to frequency. Large spikes tend to be infrequent due to statistics, as they require a failure to avoid any one of multiple attacks. But as we eliminate those (either by gearing, or in this case, by lucky RNG on one iteration) we're left with smaller, more frequent spikes. In the extreme limit, you could imagine a scenario where you alternated between taking a full hit and avoiding every second attack, in which case you'd have loads of really tiny spikes. So what we're seeing at this end of the distribution is a large number of small spikes in the low-TMI iterations – the same arithmetic gives $n\approx e^{3.5}\approx 33$, though the background of sub-spike damage inflates that estimate somewhat.
This behavior also has a more subtle, but rather important meaning. TMI is really good at prioritizing large spikes and giving you stat weights that preferentially eliminate them. Once you eliminate those spikes, it automatically shifts to prioritizing the next-biggest spikes, and so on. If you smooth your damage intake sufficiently that you're taking a lot of moderately-sized spikes, it naturally tries to reduce the frequency of those spikes. In other words, if you've successfully eliminated the danger of isolated spikes, it automatically starts optimizing you for DTPS. So it seamlessly fuses spike mitigation and DTPS into a metric that shifts the goalposts based on your biggest concern, as determined by the combat data.
A lot of those ideas can be seen graphically, as well. Here's a plot showing data generated with my own character pitted against the T16H25 boss. We're plotting MSD (which I was originally calling "Max Moving Average") against the reported TMI score. To generate this plot, I used a variety of window sizes. At each window size, I recorded the minimum, mean, and maximum TMI and MSD values. The dotted line is the expected relationship, i.e. 100k TMI = 100% max health.
MSD vs. TMI for Theck against the T16H25 boss.
Generally speaking, as we increase or decrease the window size, the MSD and TMI should similarly increase or decrease. That's certainly happening for the maximum MSD and TMI values, which should be expected. And in that limit, we see that TMI and MSD mostly agree and lie close to the dotted line.
However, the mean values show a much smaller spread, and the minimum values show almost no spread. It turns out that this is the fault of EF's crazy scaling. A paladin in this level of gear is basically self-sufficient against the T16H25 boss, so changing the window size doesn't have a large effect unless we consider the most extreme cases. If we're out-healing the boss, then a longer window won't cause a noticeable increase in damage intake or spike size. At the very low end, where the minimum TMI & MSD values show up, we're basically plotting window-edge effects.
The results look a lot cleaner if we consider a player that's undergeared for the boss (and of a class that doesn't have a strong self-healing mechanic, like a warrior):
MSD vs. TMI for a sample warrior against the T16H25 boss.
This is one of the warriors who submitted multiple data sets for the beta test. He's got an average ilvl of 517, which is well below what would be needed to comfortably survive the 25H boss. As a result, his TMI values are fairly high, with even the smallest values being over 200k. As you can see, though, all of the values cluster nicely around the equivalence line, meaning that the TMI value is a very good representation of his expected spike size. Also note that the colors are more evenly distributed on this plot. That's because the window size adjustment is working properly here. The lowest values are from simulations with a window size of 2 seconds, while the largest ones are using a window size of 10 seconds. And the data is pretty linear: double the window size, and you double the MSD and TMI.
So this final version of the metric seems to be hitting all the right notes. Let's get our checklist out and grade it on each of the criteria we set out to satisfy.
Accurately representing danger: Pass. There's really no difference between this version and the beta version in this category. If anything, this may be a bit better since it no longer has the "knee" obfuscating danger for smaller spikes.
Work seamlessly: Pass. Apart from coding the metric into SimC, it took no additional tweaks to get it to work properly with the default plotting and analysis tools.
Generate useful stat weights: Pass. The stat weights are being generated properly and to sufficient precision to identify differences between the stats, without having to normalize. It will generate useful stat weights even in low-damage regimes thanks to the removal of the "knee," and it automatically adapts to generate DTPS-like results when you've done all you can for smoothing. Massive improvement in this category.
Useful statistics: Pass. Again, not much difference between this version and Beta_TMI, at least in this category.
Easily interpreted: Pass. This is the most important improvement. If I get a TMI score of 80k, I immediately know that I'm in danger of taking spikes that are up to 80% of my health. I don't need to do any mental math to figure it out, just replace a "k" with a "%" and I'm there. No need to look back to a blog post or remember a funny conversion factor. As long as I know what TMI is, I know what it means.
Numbers should be reasonable: Pass. While the numbers aren't technically small, I think it's fair to say that they're reasonable. After Mists, everyone is comfortable working in thousands ("I do 400k DPS and have 500k health"), so I don't think the nomenclature will be confusing. The biggest issue with the original TMI was that it varied wildly by orders of magnitude due to small changes, which can't happen in this new form. Going from 75k to 125k has a clear and obvious meaning, and won't throw anyone for a loop, unlike going from 75k to 18.3M (an equivalent change in Old_TMI).
I'll admit that I may be a little biased when it comes to grading my own metric, but I don't think you can argue that I'm being unfairly kind in any of these categories. I set up clear expectations for what I wanted in each category, and made sure the metric met them. If it hadn't, you probably wouldn't be reading about it, because I'd have tossed it like Beta_TMI and continued working on it until I found a version that did.
But keep in mind that this doesn't mean the metric is flawless. It just means that we haven't discovered what (if any) its flaws are yet. As the logging sites get on-board with the new metric and implement it, we'll be able to look for differences between real-world performance and Simulationcraft results and identify the causes. And if we do find problems, we'll adjust it as necessary to fix them.
It shouldn't be much of a surprise that I'm very happy with TMI 2.0. It finally has a solid meaning, and will be far simpler to explain to players discovering it for the first time. It's a vast improvement over the original version of the metric in so many ways that it's hard to even compare the two.
And by giving the metric a clear meaning, we've opened up a number of new possible applications. For example, let's say you sim your character and get a TMI of 85k. You and your healers now know they need to be prepared for you to take a spike that's around 85% of your health at any given moment. Which leads directly into the question, "how much healing do I need to ensure survival?"
If your healer is a druid, you might consider how many Rejuvenation ticks you can rely on in a 6-second window and how much healing that will be. If it's 20% of your health, then you (and your healer!) immediately have an estimate of how much on-demand healer throughput you'll need to keep you safe. Or if you have multiple HoTs, and they sum up to about 50% of your health in that time window, your healers know that as long as they keep you HoT-ted up, they can spend their GCDs elsewhere and just spot-heal you when you hit 50% health.
In other words, TMI may be a tanking metric, but it's got the potential to have a meaning for (and be useful to) your healers as well.
Extend this idea even further: TMI was originally defined as only including self-healing effects, not external heals. The new definition can be much looser, because it still has a meaning if you include external heals. Adding a healer to your simulation may reduce your TMI, but the end result is still meaningful because it tells you how large a spike you took with a healer focusing on you.
Likewise, a combat logging site might report your regular TMI and an "ETMI" or Effective TMI, which includes outside healing. And that ETMI would tell you something slightly different – what was the biggest spike you took and survived (or not!) on that pull. If your ETMI is less than 50k you're never really in much danger. If your ETMI is pushing 90k or 100k (and you didn't die), it means you're getting awfully close to dying at least a few times in that encounter, which may warrant some investigation. You could then analyze your own logs and your healers' logs to figure out why that's happening and determine ways to improve it.
I'm really excited to see where this goes over the next few months. For now, though, I'm going to focus on getting the foundations in place. I've already coded the new metric into Simulationcraft, so as of the next release (547-3) all TMI calculations will use the new formula.
I also plan on working with both WarcraftLogs and AskMrRobot, both of whom have expressed an interest in implementing TMI, to get it up and running on their logging sites. And I'll be updating the standard reference document shortly with a rigorous definition of the standard to facilitate that.
54 Responses to (Re)-Building A Better Metric – Part II
Paendamonium says:
Nice work Theck! I think this will definitely be more understandable for the slightly less statistically inclined of us out there! One thought: I understand why for the simulation it makes sense for TMI to be measured in thousands (k), but is there any reason that the output couldn't just be in terms of the %? I think that might be easier to comprehend for someone new to the metric. Also, if that is the mental translation viewers need to do (80k TMI = 80% health spike), is there a reason not to just have the simulation do it that way and stat weights be computed behind the scenes?
There's no reason that I can't just display "81k" TMI as "81%" TMI on the report. I'm hesitant to do so because I think it would actually make it more confusing (ex: "TMI says 81% but MSD says 70%, which is right?"). Keeping it slightly more abstract communicates the idea that TMI is subtly different. That subtlety being that it takes into account spike frequency as well as magnitude, while MSD is only magnitude.
So yes, basically I believe that reporting it as "81k" serves a distinct purpose here. I have faith that the average user will be able to mentally replace "k" with "%" – the whole point of choosing normalization factors was to make this process as easy as possible for the user.
Also note that, while it's not clear in the screenshots, I intend to only ever report TMI in units of 1000. In other words, despite the fact that the table showed TMI as "82838" in this blog post, when I'm done cleaning up the table it will be reported as "82k" or "82.8k." I haven't decided exactly how much precision to use here, but I'm leaning towards just "82k" since changes of 0.1% of your health are probably meaningless, but if people are comparing gear sets they may want to have that precision available.
(This creates an odd case if your TMI is literally less than 1k, but that's going to be such a rare situation that I'm not sure I care enough to code special cases for it; "0.5k" is probably sufficient.)
Çapncrunch says:
I think that 1 decimal place (83.8k) would in general be better than just whole k's. It'll probably be a little more transparent to see some sort of difference when making smaller changes to things like gear, or especially rotations (where the stat-weights won't matter, so it'll be worthwhile to see if that change from 82k to 83k was only an increase from 82.9k to 83.0k or if it was a jump from 82.2k to 83.9k).
Also, I think psychologically that one decimal place just makes it "look" better. Even in cases where you don't care about that precision, it just feels more official knowing that your TMI isn't just 77, but "77 point 3".
And it's not like that one decimal place is likely to confuse anyone.
Dalmasca says:
Wow, I see why you were so excited now! Massive improvement — I wish I had this metric years ago for my raiders, haha!
I think adding the "3.8 spikes every 450 seconds, each of which is about 69.5% of our health" type of readout to SimC would be a very good idea. It could be extremely useful in planning out how many tank/healer CDs you will need, and what magnitude of damage they need to cover.
Thanks again, Theck!
Yeah, it occurred to me while writing the blog post that it would be very convenient to have that, so I will almost certainly add it to the table. Wondering whether it would make sense to report it as "spikes per iteration" or "spikes per minute."
I'm always a fan of having more data, so I'd say both.
Spikes/min is more relevant to CD planning, while spikes/iteration is probably more relevant to gearing/rotation strategy.
I do agree with Paendamonium on his points. Though I do think that implementing a thousands separator in SimC would go a long way toward clarifying the numbers.
I think these improvements make for a vastly superior metric compared to the previous one, even though some of the changes are primarily visual. I do believe that this metric is actually suitable as a standalone metric for optimization, in contrast to the previous one, where some consideration had to be given to DTPS and EH to ensure the results were meaningful. However, I don't think the metric is perfect yet. I do see a potential issue in the fixed size of the MA window, especially when taking healing into account. In one case, the boss might deal a 130% damage spike over 7 seconds, but when using a 6-second window this could register only as a series of 65% spikes. In the opposite case, which I believe to be much worse, the boss may spike the tank with 130% damage over 3 seconds, yet external or self-heals (LoH) on either side of the spike can dwarf or nullify the spike even though it could have resulted in a tank death. I do think it's a flaw in the metric if it allows such cases to go unnoticed, and I think the source of these issues is the edges of the rectangular window used for the moving average.
The question then becomes: which type of window would adequately balance the risk of near-instant spike deaths against the risk of death from persistent unmitigated attacks, and does such a window even exist? I do think this issue merits further discussion. Expanding upon that, before a new window can be created, we have to be able to answer the following questions:
How much more dangerous is taking 80% damage in 6 seconds versus 7 seconds, if at all, and likewise for 3 seconds versus 4 seconds? And lastly, how do we weigh the risk from near-instantaneous damage against the damage taken over x seconds?
See my response to Paenda; reporting it as "100k" consistently everywhere should clear up that problem (I agree with you about Simc's lack of thousands separators, though).
Regarding the window, keep in mind that in Simcraft that window is user-definable. So if you run the simulation with the window set to 6 seconds and get a value of 65% (clearly not very dangerous), your first reaction should be to raise it to 7 seconds (or higher) and see when you finally hit something close to your max HP.
Keep in mind that TMI is not trying to tell you whether you would have died in a particular situation, like the one you're describing with LoH. It's giving you an amalgamated metric describing spike vulnerability. The "bookending" problem (where two heals act as "bookends" for a lethal string of damage) can certainly happen, but it turns out to be very rare if you're using a window size that is an integer multiple of boss swing timer (i.e. boss swing timer is 1.5 seconds by default, window is 4 swings = 6 seconds). Over many iterations, the handful of bookend situations won't significantly affect the actual TMI result.
This is less true when considering actual logs, but if you took a lethal amount of damage in an actual log… you died. So that kinda sorts itself out.
Apodization of the window is something that would be fairly simple from a technical perspective. I'm not entirely sure it's more useful than a fixed rectangular window though. It helps eliminate bookends, but since heals are discrete it doesn't actually have as large an effect as you'd think. For every case where you "discover" a new lethal damage spike because of adding in apodized damage from the wings, you eliminate some lethal spikes because of excess healing (EF/SoI procs) that occurred in those same regions. Ultimately I'm not sure using an apodized window actually improves the metric in any measurable way.
Regarding the last question: all of those answers are fairly arbitrary. The choice of constants (specifically $F$) in the metric attempt to quantify that thought, but in the end it may vary from user to user. Hence why the window size is user-definable in SimC.
re: "If we get rid of that 1/N, then the longer fight will seem significantly more dangerous than the shorter one. In other words, it would cause the metric to vary significantly with fight length, which isn't good."
Disclaimer: The math is beyond me, but… I'm not sure I entirely agree with this statement. It probably is true of a metric like TMI, because we'd rather TMI didn't vary wildly; however, a longer fight really is more dangerous than a shorter one. The longer the fight, the higher the odds of a "one hundred year flood" situation.
That said, MSD sounds like a more appropriate measure for trying to single out the more-time-for-a-cockup element.
And just to clarify in my mind – we could have a situation where TMI was lower than mean(MSD), yeah? And that would be describing a scenario where you had fewer + more spread out spikes to keep the TMI down, but tended to have one massive "hundred year spike" to drag up the mean(MSD)?
So that scenario would effectively be describing a tank focused excessively on DTPS?
RE: 1/N: Strictly speaking, you are correct that a longer fight has a larger chance of containing that biggest, worst-case-scenario spike. Where you are incorrect is in thinking that this effect causes a "significant" increase in danger with fight length; it doesn't.
You also seem to assume that, based on my wording, this normalization scheme removes that effect. It does not. In fact, you can see it in the plot in that section – it is the reason that the normalized curve rises from ~80k to ~83k TMI as we vary fight length. Each individual iteration has a higher likelihood of the rare big spike, which means more of those iterations have it, and thus have a larger TMI, bringing the average up slightly.
The more significant variation is based on the frequent spikes. If you expect to take ~3 spikes of around 80% of your health in a 2-minute encounter, then you expect ~6 of them in a 4-minute encounter and ~9 in a 6-minute encounter. That means the sum is increasing linearly, and the 1/N successfully suppresses that variation.
I think the confusion here is based on thinking about this "per iteration" rather than as a frequency. When you evaluate a DPS class, you generally report DPS, not damage done, because you know that the amount of damage you do will vary significantly with fight length – you will do roughly twice as much damage on a fight that is twice as long. In order to have a useful metric, we divide by fight length to determine DPS, which gives a representation of the player's output that is more consistent across the board.
TMI is no different. The exponential in the sum is essentially our measure of "spikes" in one second, just as it would be damage in one second if we were calculating DPS. We therefore need to divide by the encounter length if we want an accurate estimate of "spikes" per second, just as we do for DPS. Note that I'm putting "spikes" in quotes here since it's a little vague, but each exponential is essentially a weighted measure of spike size.
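To make that analogy concrete, here is a tiny sketch (my construction, using the same constants as the other sketches here): hold the spike *frequency* fixed while doubling the fight length, and the $1/N$ cancels the doubled sum.

```python
import math

c1, F, N0 = 1e4, 10.0, 450

def tmi_from_ma(ma):
    # TMI = c1 * ln( (N0 / N) * sum_i e^(F * MA_i) ), with N = len(ma)
    return c1 * math.log((N0 / len(ma)) * sum(math.exp(F * m) for m in ma))

# ~3 spikes of 80% health in 2 minutes, same spike *frequency* in 4 minutes.
two_min  = [0.8] * 3 + [0.2] * 117   # 120 one-second bins
four_min = [0.8] * 6 + [0.2] * 234   # 240 one-second bins
print(tmi_from_ma(two_min), tmi_from_ma(four_min))  # nearly identical values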
The key here is that a longer encounter is slightly more dangerous due to probability, but not significantly so. You may take roughly twice as many spikes in a 2-minute fight as in a 1-minute fight, but the spike *frequency* hasn't changed. Likewise, if you run for 25k iterations, you'll have more of those "one-hundred-year flood" spikes if your duration is set to 2 minutes than if it is set to 1, but the frequency is still the same – you just happen to be measuring the number for twice as many minutes of combat.
So, again, the metric preserves that feature you're concerned about (mild increase in danger due to longer combat). It's only a mild effect because we're already considering relatively long periods (minutes) with many melee events. It would become a more significant variation, even in the normalized version, if we started looking at very short fights. For example, I'd expect a much more significant variation going from a 15-second to a 30-second fight than from a 2-minute to a 4-minute fight.
RE: $TMI \lt mean(MSD)$: No, we should never have a situation where TMI is less than MSD, because on a fundamental level TMI>MSD for every iteration, so the reported TMI (which is really mean(TMI)) is necessarily larger than mean(MSD). The proof is pretty straightforward (if every $x_i \gt y_i$, then $\sum x_i\gt \sum y_i$, and thus $\sum x_i/N \gt \sum y_i/N$).
The situation where you're focused excessively on DTPS is when your $TMI \gg mean(MSD)$, because it means you're taking many small spikes of size mean(MSD), so your TMI is approximately log(n)*mean(MSD), where n is large.
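To spell out the algebra behind that remark (my notation, using the single-spike approximation that comes up later in this thread): with $n$ dominant spikes of height $M$ (as a fraction of health),

$$\sum_i e^{F \cdot MA_i} \approx n\,e^{FM} \quad\Rightarrow\quad TMI \approx c_1\left(\ln\frac{N_0}{N} + FM + \ln n\right),$$

so the spike-size term $c_1 F M$ contributes 100k per 100% of health, while each doubling of the spike count adds only $c_1\ln 2 \approx 6.9$k. Many small spikes therefore inflate TMI above mean(MSD) by a term that is only logarithmic in $n$.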
Kihra says:
For Warcraft Logs, there are really two issues with this calculation:
(1) It depends on player health, which is not known. If Advanced Combat Logging is turned on, you only get told about the current health and not the maximum health. Computing the maximum health is pretty difficult given the 10% shaman buff that stacks invisibly (you can't see the stacking in the combat log). You'd have to write special case buff tracking code for every possible hit point boosting ability (including trinkets, etc.).
(2) Correlating absorbs from specific damage taken events with the person responsible for the absorb effect is extremely difficult and would require me to write absorb tracking code (I would have to know specifically how Blizzard resolves multiple absorb effects on the player as well as deal with a large # of special case absorb effects that don't conform to Blizzard's rules).
Therefore it's likely this computation would only function with Advanced Combat Logging enabled, since you know nothing about the player's health without that turned on. Second, it's unlikely I would implement personal TMI. Instead I'll probably just implement a version of TMI that includes absorb contributions from healers.
My personal opinion is that the self-only TMI is not particularly relevant in a real fight. What matters more is your TMI factoring in healers. If they are helping you stay alive routinely that is relevant. Again, having to write special case buff tracking code to try to detect all the invisible ways healers can reduce a tank's damage taken (e.g., cooldowns that don't include absorbs) seems problematic.
I think I would have to agree that when it comes to logs, the "personal" TMI calculation is likely superfluous, especially when you factor in conflicting overhealing between the tank and external heals (which I imagine would not be included in the TMI calculation, since it doesn't actually change health). I.e., if the paladin's self-healing suddenly dropped (which would significantly hurt their personal TMI), it wouldn't necessarily make them any more likely to have died in the log, because the healers' overhealing would help compensate, and the other way around as well.
Unless the log calculates TMI by considering overhealing as actual healing (which would probably produce garbage TMI results anyways) the personal TMI calculation would be pretty meaningless.
The value of being able to calculate solo or external TMI scores in a simulation is distinctly different, as that healer is only going to exist when we actually want him included in the calculation; when we're only concerned with our personal survival, we're going to be alone in the sim. But these are both just experimental results, run to get an idea of what our survival might be and how to improve it. When using a log, the calculation will be to see what our survival actually was.
See my comments to Kihra below, but I disagree. I think that personal TMI serves a very different purpose than "raid TMI."
Also note that on a technical note, overhealing "counts" towards your personal TMI. In other words, if you heal yourself for X, Y of which was overheal, it still counts as a heal for X as far as TMI is concerned, because it's a measure of your self-sufficiency. It's essentially saying that you *could* have taken Y more damage there without danger, because you were that survivable. I've outlined a number of reasons why this is the more logical approach in last year's series of blog posts.
For a combat logging site, that means they would just treat all healing as effective healing for the purposes of TMI. That shouldn't add any complication since they can already show effective healing and overhealing for charts.
Calculating raid TMI might be a scenario where we change that rule; I'm not sure. Many of the arguments for counting overhealing still apply there, but I think it's a case where it's less clear-cut. Keep in mind that massive overhealing doesn't really affect your TMI score, because by definition that overhealing occurs when you're at full health, not mid-fatal-spike. So TMI essentially ignores the bulk of that overhealing anyway.
More thoughts on this, though this may be a discussion you and I should have via e-mail instead of comments.
(1) Yes, this is a problem. As a decent first approximation, we could use the player's initial health on the pull (i.e. not dynamically account for max health fluctuations). As you said, we could account for everything but the shaman buff by doing some complicated aura-checking, but I think that this is a situation where we'd be better off asking Blizzard to add max health to the list of things reported by the Advanced Combat Logging feature.
(2) There may be an easier workaround for this. Consider a combat log that contains the lines:
hh:mm:ss Theck takes X damage (Y absorbed).
hh:mm:ss Theck loses Sacred Shield (6-sec buff, amount was Z)
That's exactly how WCL works today. Damage events count absorbs as actual taken damage, and they don't credit any absorb healing until they see the remove buff event (which counts as the "heal").
This approach still isn't good enough, since there are buffs that provide absorbs without telling you how much they absorbed, e.g., Dampen Harm. In addition, the Stagger absorb damage only shows up in the damage events. There is no corresponding "heal" for Stagger, so you have to find a way to meaningfully separate the Stagger damage absorbed. Maybe this is as simple as assuming 20% of X + Y is Stagger, but I'm not sure how Stagger's reduction fits in timing-wise with other absorbs and CDs.
There are also absorbs that just get the math wrong in the events, e.g., Shroud of Purgatory, and that will throw everything off.
There are also external CDs from healers that reduce a tank's damage taken without using absorb effects at all, so in order to discount those, you'd have to scan for all of those CDs being used. Some of these effects may be non-obvious (e.g., any armor-increasing effects).
Anyway, this is sort of why I was leaning towards ETMI only, since you could ignore absorbs in damage events and then only count overheal from the absorb buff removal events, and get a very accurate picture.
Great work on all of this Theck! Much more intuitive.
One question: I love the idea of being able to speak to healers the way you demonstrated. If TMI is incorporated with logging sites that seems simple. However, let's say I'm studying for the next 3 bosses in a tier and wanted to be able to prep myself and healers prior to having done the fights. In order to do this accurately wouldn't each boss need to be coded in SimC? My understanding is that the T16N10 boss would've been built around normal mode Garrosh, which wouldn't necessarily mean much for the Malkorok fight. Am I mistaken here? If not, is it even viable to have each boss coded to SimC each tier?
Thels says:
From what I understand, the idea is for TMI to provide you with a general self-assessment, not a specific boss-to-boss assessment. That would be impossible to track, because it's also very dependent on your guild's strategy, the way you chain raid CDs, how good your healers are, and how long the fight lasts for your guild. Too many factors would skew those results.
Right now, it gives you an estimate of where you're standing at bosses of a certain difficulty. If you're using Garrosh 25 Normal, and your TMI is 50k, you can be pretty confident about having the gear to clear the entire normal mode. If it gives you a TMI of 120k, you know that for the fights that hit hard, such as Juggernaut and of course Garrosh itself, both you and your healers have to be on your toes to survive.
It also advises you about gearing strategies. While it's pretty clear-cut for protection paladins right now, there are classes where it's not as obvious, and going into WoD will have us questioning whether full-on Haste will remain the way to go (I seriously do hope that Haste remains our best stat, as I love the lower GCD). As long as we're not seriously overgearing a boss, the difficulty of a boss shouldn't matter too much for these weights, though Readiness could be an outlier.
In addition to what Thels pointed out: if you wanted to compare your TMI from an actual combat log to simulation results, you would have to code that fight into SimC. That isn't as hard as it sounds, since it's mostly just approximating the boss's abilities using auto_attack, spell_nuke, spell_dot, etc. Note that they won't be perfect approximations, but as long as they're close to what the boss does during the hardest-hitting period, they should give similar results.
In fact, I wouldn't be surprised if someone out there has already done this for many bosses. I've seen some *very* impressive boss approximations done in SimC by certain users, at least back in Throne of Thunder.
But I think that the strengths of TMI for logs is different from its strengths for simulations. As Thels pointed out, the simulations give you a general self-assessment, and details on how to optimize your character for a generic boss fight. The advantage of calculating TMI from an actual combat log is to get real information about how effectively you're playing your character.
If your TMI is abnormally large (as compared to other, similarly-geared tanks) then it tells you that you may be doing something differently (and/or wrong!). Likewise, we'll be able to scrutinize those logs and see whether e.g. talent X or talent Y did a better job on a given encounter, based on comparing different pulls of the boss or different logs.
Since you only get one "iteration" per logged encounter, the statistical analysis isn't going to be there unless you have a large database of logs to sift through (something I've discussed with AMR, in fact). So it's only going to be a rough estimate, but still contains interesting information. For example, if you find that your personal TMI is 65% on a fight during progression, then that may be enough information to determine that you can drop a healer. Stuff like that.
Reminder that, as I said on Twitter, I was traveling all of yesterday and have a busy day today. I'll try to find some time between classes to respond to some of the comments today, but I may not get to them all until later this evening or even tomorrow.
Ok, so I have something to ponder, which is perhaps a little beyond the scope of your work, but still a natural progression of TMI…
What do you think of the prospect of calculating TMI in real time? Such as in an addon, e.g. Recount or Skada, or even something built specifically for TMI?
Because being able to sim your TMI to see where you should be is fine. And being able to calculate TMI from a log to analyze and replan things from one night to the next is good too. But being able to actually measure your (or someone else's) performance during or between pulls seems like it'd be very useful as well.
Now I'm not asking you to write a TMI addon or anything, I'm just wondering what your (or anyone else's) thoughts are on the viability of being able to continuously calculate and update a TMI value in real time.
Nothing about the metric would be tough to calculate in real-time; in fact, it would probably be easier than doing it in logs because we can query all of the relevant information in-game via the API. You'd basically just need to register a bunch of combat log events to keep track of damage done in the last T seconds and use that to calculate each element of the moving average array. The TMI result could be updated in real-time as the array is growing.
I'm actually proficient enough with LUA that I could write such an addon, given enough time. But it's been a long time, and I would have significantly more trouble coding the interface for it than the logic behind it. If someone who's more familiar with addon writing offered to code the interface, I'd be happy to help with the actual TMI calculation logic.
Yeah, it didn't strike me as particularly "difficult", I just don't have any experience with addons or the WoW API so I wasn't sure how performance-heavy it might be to do it. I've got some programming experience, so I know that it can be hard to tell what is or isn't practical for real-time work when you're not familiar with the system that'd be running it.
I have no doubts that this will find its way into an addon, probably several actually, at least by the time WoD rolls around, if not sooner. I mean once the new TMI gets "out there" I can't imagine recount/skada not including it since they're already sifting through the combat log and looking at all of those damage/healing taken numbers anyways (plus they've both attempted to do tanking modules too, so it's like they've literally been waiting for TMI).
Yeah, tracking TMI would be no more computationally-intensive than what Recount does already. You'd literally need to add a UnitMaxHealth() call, a little array maintenance, and a few simple multiplies every second.
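For the record, a sketch of that bookkeeping (Python for consistency with the other sketches here; an actual addon would be Lua driven by combat-log events, and nothing below is a real WoW API call):

```python
import math
from collections import deque

class TMITracker:
    """Sketch of the per-second bookkeeping a real-time tracker would do."""
    def __init__(self, window=6, c1=1e4, F=10.0, N0=450):
        self.c1, self.F, self.N0 = c1, F, N0
        self.recent = deque([0.0] * window, maxlen=window)  # last `window` seconds
        self.exp_sum = 0.0   # running sum of e^(F * MA_i)
        self.n_bins = 0

    def on_second(self, net_damage, max_health):
        # Called once per second with the net damage taken that second.
        self.recent.append(net_damage)
        ma = sum(self.recent) / max_health   # newest moving-average element
        self.exp_sum += math.exp(self.F * ma)
        self.n_bins += 1

    def tmi(self):
        return self.c1 * math.log((self.N0 / self.n_bins) * self.exp_sum)
```

Because `exp_sum` only grows and is dominated by the biggest spike, this also shows the plateau behavior discussed just below: during safe periods the value decays only slowly, as `n_bins` climbs.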
Tengenstein says:
Oh somebody Please make this happen.
It will, there's no doubt about it. Although I do see one potential issue with using it in real time, which is the way it's dominated by the biggest spikes, so it won't necessarily reflect your actual "current" survivability at a given moment in the fight. I.e., once the most dangerous part of the fight is over, our TMI will likely plateau there; it won't really go down or even out the way a DPS meter evens out as you change from high- to low-DPS phases.
In fact, unless I'm missing something, watching a real-time TMI being calculated, I'm pretty sure it's outright impossible for it to ever go down; it'll constantly increase (faster or slower depending on spike sizes).
So we'll be able to see any portion of the fight that is more dangerous than anything before it, but we won't really be able to see anything that's not a new biggest spike. Now obviously seeing those spots is very useful, as we will want to be aware of them so we can focus on them. But it almost makes me think it'd be useful if we could see some sort of "instantaneous TMI" that would fluctuate down as well as up.
Maybe whoever/whenever this becomes an addon it'd be useful to also see the moving average (or perhaps pass the moving average through the TMI formula but without summing it) in addition to the overall TMI, similar to the way dps meters show total damage done in addition to dps. That way we can see both our overall TMI score as well as a more fluid display of our survival at each given moment.
It wouldn't be impossible for it to go down. It will slowly decay as you sail through "safe" periods, it just won't decay that much because of the filtering effect. If your max spike was 85% of your health, your TMI might decay from, say, 100k to 90k during an extended safe period. It would just never drop below 85k.
Yeah, I misinterpreted a part of its nature there, I looked at the way TMI is always going to be bigger than the biggest spike, as though that meant it couldn't get smaller, without considering that it could go down and still be bigger than the biggest spike.
But it would still provide little real-time information after that "largest spike". In your example there, if your max spike was 85% of your health, and in the next phase you're only taking spikes for 70% of your health, seeing your TMI drop from 100k to 90k doesn't really tell you much about those 70% spikes you're taking. And this is by design, really, but as a real-time tool it'd be useful to be able to clearly see all of those peaks and valleys of our survivability.
I'm thinking that a single-spike approximation using the current MA would be the best choice for that "instantaneous TMI" value, since if we were to break down a fight to calculate our survival for just a single moment of the fight, we'd essentially be calculating TMI over a window of just a few seconds, which would end up being very similar to how you defined the single-spike model.
I'm thinking of something like this sort of display (assuming a recount or similar addon):
name………………..TMI (SSTMI)
Similar to the way dps meters tend to look like
name……………….damage (dps)
I think you're over-thinking it. TMI would really only be useful as a complete-encounter measure. For example, if you had a TMI ranking in Recount, it would give you information about which tank suffered larger and/or more frequent bursts during the entire encounter.
If you pare it down to looking just at the current MA window (i.e. the last 6 seconds), then you're basically just measuring raw damage taken in the last 6 seconds, because you're ignoring all of the other information about what happened earlier in the fight. At that point, you may as well just plot "damage taken in the last 6 seconds," because the extra logarithm and exponentiation aren't accomplishing anything (there are no smaller spikes to filter).
We already have a pretty good indicator of that though: our health. It may still be interesting info to have (for example, are we in a period of increased damage but not noticing because our healers are compensating?), but most tanks probably have a good feel for that already just based on their knowledge of the fight and/or seeing their health dip.
It's possible I'm overthinking it. And I'm aware that the overall TMI is definitely still important, I'm not saying to not track that as well. Just that in terms of a real-time tool TMI would be a little lacking due to the filtering aspect of it.
I'm probably more in the realm of the psychology of it, but once you have a meter running in the game measuring your performance you sort of expect it to be able to tell you how you're performing at that given moment, as well as over the course of the entire fight. Sure our healthbar pretty much already serves to show our survivability at any given moment, but that doesn't change the fact that you also expect that behavior from your meter. As far as just displaying the moving average without performing the logarithm, my intent there was simply to make sure that the "real-time" number shared the same logical properties as the overall TMI number, since they would both represent the same quality just measured over different spans. So the SS model seemed like an appropriate approach. Though as I take a closer look at the formulas again, I guess since the MA value is already normalized to our healthpool, all that's really necessary is to scale it up by 10^4.
Yeah, I guess at that point the number I'm suggesting be shown next to TMI is so "raw" that it almost seems pointless to see, and I'm probably just asking for something that'll lead new tanks to staring at their meter instead of paying attention to what's happening around them. But I just can't shake that feeling that a performance meter should also tell you about the "now" as well as the overall.
Solaron says:
Regarding integration with tools, would it be possible to include a little blurb even just in a mouseover giving some indication of what TMI means and how it compares to MSD? I know, for example, some of the SimCraft options have mouseovers that give the user an idea of what the option does and how it affects the simulation. Would it make sense to include something similar for the SimCraft results page so a user can mouse over his tanking section and get a quick idea of how TMI translates to incoming spike damage and frequency?
Yep. In fact, in that build all of the tooltips provide a short description of the metric, but you obviously can't see it because I haven't shown the tooltips in that screenshot. Improving the clarity of the default tanking results table has been high on my list of priorities for SimC development for some time, and now seemed like the logical time to start tackling it.
Weebey says:
That seems like a lot of text to say "I took the log and rescaled"
Geodew says:
(1) Regarding the embedded picture http://www.sacredduty.net/wp-content/uploads/2014/04/theck_sw.png … Why is the attack power scalar "negative?" Doesn't it strengthen Eternal Flame, Seal of Insight, etc? Vengeance drowning out the difference, maybe?
(2) I was inspired by an idea to improve the metric while reading these. You may have already thought of it and may not like it, though. Here it is.
I was thinking that T, the chosen interval length, seems just as arbitrary as the problems with Old TMI that arose due to choosing the minimum spike size to consider etc. It seems you would generate a different kind of edge effect when you choose what interval length to use. Each metric of a specified T value is valid and useful on its own, as long as you know what it's measuring, but it seems that there should be a way to create a metric that is not a function of something so arbitrary as the window length. To put it another way, it should scale smoothly up for increased damage, as TMI does, but also SMOOTHLY up for damage that takes place closer together temporally, as an indication that healers have less time to heal the tank in between the damage events.
For example, in place of summing moving averages, you could do: For every two damage events, add (damage of first event)*(damage of second event)/(time between events) to an accumulating sum.
Now, this particular "solution" has some obvious problems, like (a) two damage events at the same time means infinite TMI and (b) calculation time is O(size(D)^2) instead of O(size(D)), but I just wanted to use that example to help clarify the kind of metric that I mean.
For these reasons, I don't like my example given, but I'd like to hit on that ideal that the metric would scale up smoothly as damage events get temporally closer to eliminate the need for a pre-determined, somewhat arbitrary parameter (window length).
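Purely to make the proposal concrete, here is a literal transcription of that pairwise accumulation (names mine); it also makes both of the stated problems visible:

```python
def pairwise_spike_sum(events):
    """Literal transcription of the proposed metric: for every pair of
    damage events, accumulate d1 * d2 / (time between them). Inherits
    both flaws noted above: simultaneous events (dt == 0) blow up, and
    the double loop is O(n^2) in the number of events."""
    total = 0.0
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            (t1, d1), (t2, d2) = events[i], events[j]
            total += d1 * d2 / abs(t2 - t1)  # ZeroDivisionError if t1 == t2
    return total

print(pairwise_spike_sum([(0.0, 100_000), (1.5, 100_000), (3.0, 120_000)]))
```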
I think Vengeance pretty much completely drowns out the difference, yeah. Consider that when you have 500k+ attack power from Vengeance *and* you're fully self-sufficient on average, adding 1k more AP (as the sim does to gauge its effectiveness) is almost irrelevant.
As far as the window length: I think there are two major downsides to your suggestion.
1) If we start weighting pairs of damage events like that, we quickly lose the intuitiveness of the metric and go back to an arbitrary, FICO-score-like number. I see that as a major step backwards, because it was one of the biggest (and most valid, IMO) criticisms of the original metric. I haven't thought about it exhaustively, but so far I haven't come up with a good way to do the type of weighting you describe without completely tossing the "size of your biggest spike" intuition out the window.
2) It's not even entirely clear that including the time between the attacks is necessary. As a healer, you care about the time between a pair of fatal attacks because you may have a chance to save the tank if there's >1 second between them, but not if there's tens of milliseconds between them. But for a tank that cannot die (i.e. in SimC), it's far less important, because the results can be fairly similar (ignoring window-edge effects).
More importantly, if the damage can be concentrated in such a small window, then the damage in the full 6 second window (or whatever size you're using) should be likewise higher. That's one of the reasons the TMI bosses use fairly simple melee/dot setups – to reduce the sort of "all the stars align" variations you could get with e.g. Fluffy_Pillow.
The healing & health changes for WoD also suggest that we won't care as much about the timing of individual damage events, since if they're successful in their implementations, we won't be worried about spikes over 1- or 2-second intervals like we can be now. The idea of a tank being whittled down in 5 or 6 seconds (or more) during a period of movement or healer incapacitation should become the most common death scenario. All of that points to a scenario where it's less important when the heals/damage landed than whether they did, and how much of each happened in aggregate.
If anything, I think the more straightforward solution would be to calculate TMI-1 through TMI-8 and give all of those values in a table. Each number would give you another small piece of the puzzle without obscuring any of the meaning. TMI-1 & TMI-2 would tell you whether you were ever getting globaled, which is basically what your D1*D2/T calculation is trying to emphasize. The rest of the values would give you the longer-term aggregate damage situations.
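That table is a one-line loop once a windowed helper like the earlier sketch exists; again a sketch, not SimC code:

```python
import math

def tmi(damage, health, window, dt=1.0, c1=1e4, F=10.0, N0=450):
    # Same windowed calculation as the earlier sketch.
    n = int(math.ceil(max(t for t, _ in damage) / dt))
    ma = [sum(a for t, a in damage if i * dt <= t < i * dt + window) / health
          for i in range(n)]
    return c1 * math.log((N0 / n) * sum(math.exp(F * m) for m in ma))

swings = [(1.5 * i, 100_000) for i in range(80)]  # two minutes of melee
for w in range(1, 9):                             # the TMI-1 .. TMI-8 table
    print(f"TMI-{w}: {tmi(swings, 500_000, window=w):,.0f}")
```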
I like the table idea, and lacking a better solution, I am of course forced to agree that TMI v2 is best for now. I think the D1*D2/T would indicate whether or not you can be globaled, but more importantly, I just think the "healers have exactly 6 seconds to react to damage" part is arbitrary, and think the metric could potentially be a better measure of survivability if it dynamically measured temporal distance between hits, even if it would lose its intuitiveness. I do realize, though, that with how the time window averaging works, that most damage patterns will be accounted for already, due to the fact that rearranging damage patterns inside of a window changes the values in surrounding windows.
As a somewhat related thought, you're working on applying this to logs, right? Note that due to lag and stuff, often the boss swings are not exactly 1.5 seconds apart in logs. For example, if the boss swings at t=0.00, 1.57, 3.21, 4.55, 6.20, then you may have windows which include only three attacks, even if all of those attacks hit. This will likely cause TMI calculated from logs to be much lower than in simulations of the same boss mechanics, since a 6-second window would include 3-4 attacks instead of exactly 4. Now that I think about it, this is actually one example of where the 6-second window edge effects will negatively impact the accuracy of the metric.
Actually, the 6-second window limits you to 4 attacks: at 0.00, 1.50, 3.00, and 4.50. The attack at 6.00 would never be in the window together with the 0.00 attack – it's always one or the other. So it's actually pretty insensitive to small increases in the swing timer due to latency in that direction.
The bigger problem is over-estimation. Let's say that 0.00 attack actually hits the log at 0.20 due to latency, but the 6.00 attack isn't delayed. Now you can have 5 swings in a 6-second period if you're recalculating using step sizes of $\leq$ 0.20 seconds.
That can mostly be avoided by using coarser binning – i.e. only recalculate using a 0.50- or 1.00-second time step. In SimC we actually use 1 second right now, though I plan on decreasing the time step to at least 0.5 seconds soon(tm).
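A small sketch of the over-counting being described (my numbers): one swing delayed by 0.20 seconds yields a five-swing window when recalculating every 0.20 seconds, but not with a 1-second step.

```python
def max_swings_in_window(times, window=6.0, step=1.0, length=12.0):
    """Count the most swings any `window`-second span can capture when
    window start times are recalculated every `step` seconds."""
    best, start = 0, 0.0
    while start <= length - window:
        best = max(best, sum(1 for t in times if start <= t < start + window))
        start += step
    return best

# First swing delayed to 0.20s by latency; the rest land on the 1.5s timer.
swings = [0.20, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.5]
print(max_swings_in_window(swings, step=0.20))  # 5 -- over-estimates
print(max_swings_in_window(swings, step=1.00))  # 4 -- coarser binning avoids it
```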
Either way, I don't expect any particular boss fight log to line up exactly with SimC results. There's just too much variation between the two to get exact agreement, and there are a number of hurdles involved in getting TMI calculated from a log at all. The hope is that they agree well enough to validate SimC and the stat weights it produces.
David Sloan says:
Optimizing for TMI suggests we should be staggering defensive cooldowns, not stacking them. The default simcraft prot paladin APL stacks them: https://code.google.com/p/simulationcraft/issues/detail?id=2069
I'll take a look, but I really didn't bother optimizing the profile for TMI that much. I think calculating TMI with cooldowns at all is a foolish thing to do, personally, because you're basically just throwing away simulation time.
I started poking at this in the first place because I wanted to compare the value of a second amplification trinket vs the cooldown reduction trinket. In my case, optimizing for TMI, it turns out amp > cdr, but the gap narrows significantly if the simulator staggers cooldowns, and perhaps closes entirely if I can fix the "fire all cooldowns at the start, then stagger for the rest of the fight" behavior I'm seeing now.
I responded in the issue ticket, but the reason it's firing everything at the pull is because it's all off cooldown and off-GCD. Since you're using the "react" conditional, it's checking some point in the past for the buffs (based on the player's reaction time). So it runs through the action list three times and schedules all three cooldowns because none of them were up a few hundred milliseconds ago.
I think we can get around this with two tricks. The first is using the "up" conditional instead of the "react" conditional, which I think is fair, since you're generally planning cooldowns in this scenario, not reacting to things with them. But we'll also have to use some conditionals to keep them from being used simultaneously later on in the fight, especially if we want to add Ardent Defender to the mix.
For example, use AD if (none of the other cooldowns are up) & (GAnK's Cooldown > 1s) etc.
Yuval says:
First, I want to thank you for all the hard work done in formalizing this metric (and other stuff, but we're on this subject here :)).
I recently rerolled a tank, and am more concerned with tank-related metrics than I used to be, so I decided to learn what TMI actually means.
After reading this page and the other one explaining TMI, I can't say I liked your decision to replace that N by N0 there (I'm not saying it was a bad decision given your considerations, but I think it can be avoided).
I also suspect that I might have found the reason for that, and I'd appreciate if you follow my logic on that.
In the other page, titled "Theck-Meloree Index Standard Reference Document", you defined N as following:
N=(L–T)/Δt
(or L/Δt, I don't think that matters)
And since N needs to be an integer (this assumption may be my entire query's downfall :p), it should be defined as the ceiling function of:
(L-T)/Δt
Shouldn't it?
*Ceiling instead of floor because at bare minimum, no matter how you divide your time frame, you'd get 1 "window", or more generally, if you have a time frame of L-T, and pick a Δt that leaves a remainder, L-T-X divides by Δt giving you so many spots in the array, then the remaining X/Δt<1 still needs a spot in the array, otherwise you miss any damage done in that time frame*
With that in mind, although not amazingly important on it's own, we move to this page, where you defined your condition for C2 in the following manner (extrapolated from the single spike case):
$\ln c_2 - \ln N - F = 0.$
But in the single spike case, N should equal 1, shouldn't it?
If that's the case, $\ln N = 0$, and $c_2 = e^F$, instead of $N e^F$.
*In the single spike case, L-T=epsilon, one of those tricky epsilons :). In the sense that it's greater than 0, but always lower than whatever Δt you may pick, so the ceiling function in this case is 1*
This "solves" the issue you had later, where c2/N is not dependent on N, even though you fully expect a 1/N to be there, so you kinda arbitrarily (but wisely) decided to change c2=N0*e^f.
Or basically, what I'm saying, is that I didn't like the fact you were forced to add that arbitrary N0 in there after such a rigorous piece of work, so I searched with all my might (OK, SOME of my might, I didn't punch the screen, yet) what might be the cause for that. I do hope that I found it.
Again, thanks for all the hard work,
Yuval.
P.S. If this was addressed in the past and I missed it, I apologize and would love a reference, there is a lot of text in here, and missing something is rather easy.
Also, I do apologize if the formulas are hard to read, I simply lack the knowledge of how to write them in a neater fashion.
"But in the single spike case, N should equal 1, shouldn't it?"
No, because $N$ is not the number of spikes; it's the number of time bins. In other words, it's the fight length (but in discrete units of $\Delta t$).
Let's assume we're using the $N=L/\Delta t$ version for simplicity, though in reality it hardly matters whether you use an apodized or shortened $MA$ array. Thus, the fight length is $N\Delta t$.
The "uniform damage" case means that every element of $D$ is identical – i.e. you take the exact same damage in every time bin of width $\Delta t$, and thus almost every element of $MA$ is identical as well. Thus the sum $\sum_{i=1}^N e^{F*MA_i} \approx Ne^{F*MA_{avg}}$. It's clear from this situation that the $N$'s cancel and we end up getting just $e^{F*MA_{avg}}$ as the argument for our log.
The "single spike" case refers to a case where you still tank for $N\Delta t$ seconds, but only one of the time bins of $D$ contains any damage at all. We approximate this in $MA$ as if $MA_i=0$ for $i\neq j$, and $MA_j$ is some nonzero value. The sum then is essentially just the contribution of $e^{F*MA_j}$. (This is obviously an abstraction – the real $MA$ for a single spike is going to be a triangular function, but it's not that important since this limit isn't realizable in real encounters/sims).
In either case, though, $N$ is the same, provided we're comparing equal fight lengths. Hence why I wanted to perform the normalization. For example, let's say that on average we get a single spike every 1 minute, so we model this using the single-spike case and our sum is just $e^{F*MA_j}$.
If we instead run a 2-minute sim, we should expect to get two of those spikes, and thus have two terms contributing to the sum: $e^{F*MA_{j1}} + e^{F*MA_{j2}} \approx 2e^{F*MA_j}$. But then when we take the log, we'll get a number that is $c1*\log{2}$ higher than our value for the 1-minute value. Likewise, a 3-minute fight would be the original value + $c1*\log{3}$, a 4-minute fight would be the original value + $c1*\log{4}$, and so on.
Which brings us to a more philosophical question: Is a 1-minute fight less dangerous than a 2-minute, 3-minute, or 4-minute fight, and so on? In some senses yes (obviously for a 1-minute fight you can chain cooldowns). In other senses no, because we're looking to model the danger of steady-state situation, and that steady-state hasn't really changed because the boss is still hitting for the same amount with the same frequency. The normalization accounts for this, and makes the metric less fight-length-dependent (as shown on the plot).
The downside is that it introduces nonlinearity in the TMI value due to the extra $c1*\log{N_0} \approx 61{\rm k}$, but this only occurs when we're taking less than around 70% of our health in damage over 6 seconds – in other words, a boss that really shouldn't *ever* kill us. While it would be nice for the metric to be completely linear down through zero (which is what we get if we let $c2=Ne^F$ and essentially eliminate $N$ from the equation entirely), it would mean that we're far more sensitive to fight length. I made the executive decision that it was worth having a more consistent metric in cases that mattered (i.e. TMI values above 75k-80k) even if it meant we got less useful (though not useless!) data in cases that should rarely show up in practice.
I'm still not 100% sure that was the right call either, but it's the call I made at the time. For SimC it shouldn't make much difference at all, and in some ways the unnormalized version would be more preferable for its linearity. However, if you wanted to compare different fight lengths, as is common in logging sites like WCL or AMR, then you may very well value the consistency over the linearity. I think once we have a tier worth of raiding where people can actually see their TMI in logs, we'll have a better feel for whether we should roll back the normalization entirely, or whether it should be kept.
I understand what you are saying now, but I still insist that the change of the factor of N to N0 in the c2 condition is not necessary, and in fact, there shouldn't be a factor there in the first place whatsoever.
Reading this a few times made it clear that it's far simpler than what I expected it to be.
You treated the exponent in the sum as a zero-contributing part for every $i \neq j$ (using your index notation); that is not the case.
The elements of the sum can be described as follows:
$$\begin{cases} e^{-F} & \text{if } i \neq j \\ e^{F \cdot MA_j - F} & \text{if } i = j \end{cases}$$
You get $N-1$ of the former and 1 of the latter, and thus the sum is $(N-1)e^{-F} + e^{F \cdot MA_j - F} = e^{-F}(N - 1 + e^{F \cdot MA_j})$
In the case of $MA_j$ ALSO equaling 0, this is actually $N e^{-F}$
So the equation should read, if we want to calibrate TMI to be 0 in this case, as:
$$0 = \ln\left[\frac{c_2 e^F}{N} \cdot N\right] = \ln\left[c_2 e^F\right]$$
$$1 = c_2 e^F$$
$$c_2 = e^{-F}$$
You can develop the function with the ugly sum stated earlier, $e^{-F}(N - 1 + e^{F \cdot MA_j})$, if you'd like, and only enter $MA_j = 0$ at the very end; the result is the same.
You can also try to look at the trivial case (no damage taken causes 0 TMI) in another manner, where you take equal hits for the same amount every interval, then set that amount to 0 and you'd get the same result.
Numerically, the addition of the $N_0$ as you put it just increases all TMI by a flat $10^4 \cdot \ln(N_0)$, and in the case of $N_0 = 450$, that's about 61.1k. This just inflates the TMI of everyone, as even in the trivial case (if you take 0 damage), you'd get that TMI.
This heavily skews the scaling of TMI considering that this flat addition is often enough LARGER than the varying component of it (they are definitely in the same order of magnitude in almost all cases), and I honestly believe it should be taken away from the formula.
P.S. Thanks for the quick reply to my first query.
I treated the exponent in the sum as a 0 contributing part for every $i \neq j$ because I explicitly said that's how I was normalizing it. Technically I misspoke in my reply to you earlier by saying $MA_i=0$ when $i\neq j$ – my actual normalization scheme was assuming that $MA_i$ was sufficiently negative in every bin such that $e^{F(MA_i-1)}$ was negligible compared to $e^{F(MA_j-1)}$.
However, this really comes down to exactly how large $F$ is, because we're comparing $(N-1)e^{-F}$ to $e^{F(MA_i-1)}$. Consider the case of $N=450$ (thus $\Delta t=1$) and $MA_j=1$. Since $F=10$, we have:
$$(N-1)e^{-F}=449e^{-10}=0.0204$$
$$e^{F(MA_j-1)}=e^0=1$$
In other words, in this single-spike case the other 449 bins of $MA$ contribute about 2% of the total value of the sum. There's absolutely no question that this is dominated by the spike, and that approximating that 2% as 0% is a reasonable simplification.
You're correct that *if* we were attempting to normalize such that we'd get a TMI of zero when we had an $MA$ array in which every element was zero, we'd be using $c_2=e^{F}$ (Note also your typo – you summed $e^{-F}$ and somehow got $Ne^F$ rather than $Ne^{-F}$). However, we're really talking about small shifts in the zero-TMI intersection point. Note that $\ln{Ne^F}\approx 16$, while $\ln{e^F}=10$. This is a change of 6, which after multiplying by $c_1$ gives the 61k point we're discussing.
That said, you're incorrect about this just being a flat 61k added to TMI. It isn't. That's true for your simple case of $MA_i=0$ (assuming, of course, that you *expected* zero in the first place), but it isn't for a realistic $MA$ array. Recall that our linear approximation is just that – an approximation. This sum is actually being fed to a logarithm, so if your $MA$ array contains many nonzero elements, those will quickly dominate the value. That's why if you look at the plots near the end of the post, you'll see that for experimental data, a max MA of ~1 gives you a TMI of around 100k, and higher MA values show a very linear relationship – max MA of 2 gives around 200k, max MA of 3 gives around 300k, and so on.
http://www.sacredduty.net/wp-content/uploads/2014/04/mma_vs_tmi_paladin.png
http://www.sacredduty.net/wp-content/uploads/2014/04/mma_vs_tmi_warr.png
In fact, what this contribution does is cause a curvature of the TMI curve once you get under around an MA of 1 due to the logarithm. If you look at the first linked plot there, you'll see that the data starts to go sub-linear, and we can interpolate that it would crash into the axis somewhere around 60k. The metric still works here, but our normalization factor has "cost" us linearity. Again, this is a trade-off of linearity in the regime we're less likely to care about (taking so little damage we're not in danger) for stability in the higher TMI ranges with respect to fight length.
Again, this is something I haven't completely settled on. I could definitely envision a normalization factor of $c_2 = Ne^F$, which is equivalent to setting the single-spike zero-TMI point to zero, or $c_2=e^F$ like the one you suggest, which is equivalent to setting the uniform-damage zero-TMI point to zero. Note that they can't *both* simultaneously be zero, because they're entirely different models with different linear approximations (see the first two plots, which show both models). The latter has the advantage of still being less sensitive to variation, but if I recall correctly will skew TMI values a little bit from the single-spike model, and as you can see in the first two plots in this post, the single-spike model does a better job of modeling the randomly-generated data.
First, I want to thank you for the time and trouble reading my post and answering, and thanks for the correction on the typo.
I'll need to read your post and think it through in a more convenient time to give any further insight I might have, but I would like you to look at the following:
Mathematically speaking, ignoring anything else for a while and only focusing on N0 in the formula, I want to look at the final formula you've set for TMI:
$$TMI = 10^4 \ln\left[\frac{N_0}{N} \sum_i e^{10 \cdot MA_i}\right]$$
If we were to define the term (that is, everything in the logarithm but $N_0$, in case I make a typo):
$$\frac{1}{N} \sum_i e^{10 \cdot MA_i} = 1/B$$
(I hope B was not taken; if it is, mentally switch it to something else and bear with me :)), we can now write TMI as:
$$TMI = 10^4 \ln[N_0 B] = 10^4 \ln(N_0) + 10^4 \ln(B)$$
This is true for any B (except maybe when B causes lnB to be meaningless, whatever).
The first term, as you mentioned, is 61K, and is not dependent on any variable we have (unless you decide to change N0, of course, but that's not normally touched). The second term, is the one without the N factor in c2.
We can test it if you'd like, but I'm nearly 100% certain (I'm not certain about anything anymore :p), that all that N0 contributes to the function is adding a flat (10^4)*ln(N0)~61K to it, and that's it.
God knows why I defined it as 1/B, it should be B. Shows what happens when I post at 4 in the morning, I really hope that's the only mistake there :p.
See my response below. It's only a flat 61k addition if you assume the uniform-damage model holds… which it doesn't. As soon as you depart from that $MA_i=0$ model and shift into the single-spike model, it's no longer a flat additive 61k.
Or to put it more accurately, the $10^4\ln{N_0}$ is obviously still an additive 61k, but the $10^4\ln{B}$ is not the actual TMI you want – it's about 61k short!
To clarify this some more, I ran some more MATLAB simulations to try and illustrate why your version doesn't really fix anything. I rewrote the code to be a little cleaner, but it's otherwise identical to the code I used way back when this batch of blog posts was first written. What I stumbled across is actually a little more interesting than I expected.
These sims plot four different models. The first is the single-spike model (blue line), the second is the uniform model (red line).
The third is a random damage model like what was used in the plots in this post. This model uses normal avoidance, block, and sotr mechanics (treated stochastically). The boss swing distribution is a mean damage value with a *fixed* damage variation to generate randomness. In this case, the mean damage ranges from 0 to 0.8 (in units of player health), and the variation is fixed at 0.2 (again, in units of player health). So when mean damage is 0, the boss hits for -0.1 to 0.1 damage per swing. While it's obviously silly for a boss to hit for *negative* amounts of health, this is equivalent to having some background healing going on (for example, Seal of Insight) that compensates for some of the boss' damage some of the time. There's also a background healing of 0.2 (again, in units of player health) per swing going on, which doesn't materially affect the results, it just allows the data to extend down below 0 MSD (max spike damage, which is just the max element of the $MA$ array). Without this offset, you get a crash at 0 MSD because an avoided attack registers a 0, so every TMI value below a certain threshold has an x-coordinate of 0 MSD.
The fourth model is arguably more realistic. It's exactly like the model above with one exception: instead of a fixed 0.2 damage variation, the boss's swing damage varies by 20% of its mean value. So for a mean damage of 0.5, it would vary from 0.4 to 0.6. This also means that when we approach zero, the variation goes to zero with it. I've also removed the background healing (because it *does* materially affect how this curve behaves), so our minimum TMI will be when we get an entire MA array of zeros.
The first plot below uses single-spike normalization, which is the N0=450 in the spec. The second uses your proposed uniform-damage normalization, i.e. N0=1.
https://www.dropbox.com/s/icsihtofmstfr1t/ss_normalized.png?dl=0
https://www.dropbox.com/s/9bn4i0kd9q4hc17/uf_normalized.png?dl=0
First, let's look at how the spec works now. The fourth damage model is acting just like our real SimC data did. It's experiencing nonlinearity once MSD goes below 1. Earlier, I claimed that this was due to the normalization, but looking at this data it's clear that is NOT the case. The third damage model doesn't experience this nonlinearity, even though it is also subject to the same normalization factor. In fact, it follows quite accurately, reaching 0 TMI very near the place it reaches 0 MSD. And both of the random damage models give pretty good TMI agreement for MSD>1, so it's clear this normalization is working, in that Xk TMI does in fact mean X% of your health in damage during the damage window.
Now let's look at the plot where we use your normalization. You'll notice it looks almost identical, but everything is shifted down such that the fourth data set is hitting 0 TMI at 0 MSD, just as you intended. Your normalization fixes the intercept, but at a pretty steep cost. None of the TMI values above 0 match anymore. At an MSD of 1, we have a TMI of ~50k. At an MSD of 2, it's only about 140k, and so on. If we want agreement for MSD>1, we'd have to artificially inflate the values, but since it would be (presumably) outside the logarithm, it'll never match the uniform-damage model line perfectly.
This tells us three things:
1) The nonlinearity we observe is not actually a result of the normalization, but a fundamental result of the way damage intake actually varies. The single-spike approximation is good for large spikes, but as the "spikes" become smaller and smaller portions of our health, they transition from the single-spike model to the uniform model. This is experimentally observed in SimC data as well.
2) The choice of normalization constant $N_0$ just shifts that entire curve up or down, changing the zero-TMI intercept. It does not materially change the behavior of the curve.
3) If we want TMI values to actually represent the % of health we took in damage for values of interest (i.e. MSD>0.75), we can't use a uniform-damage normalization scheme like you propose without additional modification.
One such modification might be to increase $c_1$ by a multiplicative factor. Since the UF normalization gives us an intercept of zero, we can do this without worrying about changing that. Unfortunately, we also know we'll never get perfect agreement with this system, because logs are not linear. Still, here's what it looks like using $c_1=13000$ rather than $c_1=10000$:
https://www.dropbox.com/s/ya10hlm7827746l/uf_normalized_modc1.png?dl=0
Not bad, actually. One downside is that TMI under-estimates the max spike size for MSD<1. It also over-estimates it for MSDs above 2 or 2.5, but the range we're probably more interested in is between 1 and 2.
For reference, here's what it looks like with $c_1=12000$, $c_1=14000$, and $c_1=15000$:
https://www.dropbox.com/s/00slce0m7faofgj/uf_normalized_modc1_12000.png?dl=0
https://www.dropbox.com/s/o7k9mdjddi6ju8l/uf_normalized_modc1_14000.png?dl=0
https://www.dropbox.com/s/g7kyk793qdnnrk6/uf_normalized_modc1_15000.png?dl=0
Can we keep this going via e-mail? I MIGHT have some more to add to this, but I definitely have a lot of questions beforehand, as I'm not 100% sure I understand the fine points of the model (largely because programming is beyond me :)). I also doubt my questions are relevant for everyone reading this, as they are mostly minor.
Sure, I'll contact you shortly.
Can the sum of two limits exist when one of them exists and the other doesn't?
I'm trying to evaluate this sum of limits:
$$ \lim_{x \to 4} \frac{x^4 - 64}{x-4} + \lim_{x \to 900} \frac{900-x}{30-\sqrt{x}} $$
And I noticed that this limit $ \lim_{x \to 4} \frac{x^4 - 64}{x-4}$ doesn't exist, since the numerator is positive and the denominator is positive for $x \to 4^+$ and negative for $x \to 4^-$. But the $\lim_{x \to 900} \frac{900-x}{30-\sqrt{x}}$ exists and is equal to $60$ (I used L'Hôpital's rule). So my intuition says this sum can't exist, because I can't sum something that doesn't exist to something that exists, but the lack of rigor in this is making me suspicious, especially because Wolfram says the limit is $\infty$.
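(Alternatively, factoring avoids L'Hôpital for the second limit, since $900 - x$ is a difference of squares: $$\frac{900-x}{30-\sqrt{x}} = \frac{(30-\sqrt{x})(30+\sqrt{x})}{30-\sqrt{x}} = 30+\sqrt{x} \to 60 \text{ as } x \to 900.)$$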
Any help would be appreciated. Thanks.
calculus limits proof-verification
creepyrodent
You are right. The limit does not exist. – Math Lover Jun 13 '18 at 17:25
An obvious comment, just for completeness' sake: the limit of the sums may however be well-defined even if each limit individually doesn't exist - e.g. $\lim_{x\rightarrow\infty}[(x)+(-x)]$. – Noah Schweber Jun 13 '18 at 17:37
You are almost right. The only problem is where you write that the first limit doesn't exist "since the numerator is positive and the denominator is positive for $x\to4^+$ and negative for $x\to4^−$". What you can deduce from this is that either the limit doesn't exist or that it is equal to $0$. But you are right: the limit doesn't exist. And since the other limit exists, the sum of the limits doesn't exist either.
José Carlos Santos
Oh, yes. When I said positive and negative I meant strictly, but you're right. Thanks. – creepyrodent Jun 13 '18 at 17:30
Strictly doesn't necessarily cut it. For example, $x^3$ is strictly positive for $x > 0$ and strictly negative for $x < 0$, but its limit is still $0$. – Theo Bendit Jun 13 '18 at 17:31
You can use algebra of limits. If $\lim_{x \to 4} \frac{x^4 - 64}{x - 4}$ existed, and was equal to $L$, then by the algebra of limits, $$192 = \lim_{x \to 4} (x^4 - 64) = \lim_{x \to 4} \frac{x^4 - 64}{x - 4} \cdot \lim_{x \to 4} (x - 4) = L \cdot 0 = 0,$$ which is a contradiction.
As for the sum, I'm not even comfortable with the expression. The sum of limits shouldn't even be written down unless the limits are known to exist (or are assumed to exist). I think the question shouldn't be, "Does this sum of limits exist?" as much as it should be "Is this expression well-defined?".
EDIT: In response to the question in the comments:
But isn't this correct? $\lim_{x \to 0} 1/x$ and $\lim_{x \to 0} -1/x$ doesn't exist, but $$ \lim_{x \to 0} \frac{1}{x} + \lim_{x \to 0} -\frac{1}{x} = \lim_{x \to 0} \frac{1}{x} - \frac{1}{x} = \lim_{x \to 0} 0 = 0?$$
No, $\lim_{x \to 0} 1/x + \lim_{x \to 0} -1/x$ does not make sense, although the other equalities are fine. In order to parse the sum, you first must take limits (both undefined), and then sum them (how!?).
Think about it: how else would you define such an expression? If you mean sum them, then take the limit, we already have an expression for this: $$\lim_{x \to 0} \frac{1}{x} - \frac{1}{x}.$$ Otherwise, we cannot sum these undefinable quantities. It becomes particularly problematic, as you've noticed, when summing two limits that approach different values. When you combined the expressions into one limit, you could at least make sense of it by tying the two variables in the two limits together. If they approach different values, what does this quantity even mean if not "find limits first, then sum them"?
Theo Bendit
But isn't this correct? $\lim_{x \to 0} 1/x$ and $\lim_{x \to 0} -1/x$ doesn't exist, but $\lim_{x \to 0} 1/x + \lim_{x \to 0} -1/x = \lim_{x \to 0} 1/x - 1/x = \lim_{x \to 0} 0 = 0$? – creepyrodent Jun 13 '18 at 17:45
@dude3221: Edited with a reply. – Theo Bendit Jun 13 '18 at 17:56
Suppose that the limit of a sum of two functions exists and the limit of one by itself also exists. Then you have a situation where for some real $L$ and $M$ $$\lim_{x \rightarrow a}[f(x) + g(x)] = L$$ and $$\lim_{x \rightarrow a}f(x)=M.$$ Whenever both limits exist, we can use the difference rule for limits, giving us $$L-M =\lim_{x \rightarrow a}[f(x) + g(x)] - \lim_{x \rightarrow a}f(x) = \lim_{x \rightarrow a}[f(x) + g(x)-f(x)]=\lim_{x \rightarrow a} g(x).$$ This proves the limit of the other function must exist as well.
Evolution of Eigenstates when two spin systems are coupled
I would like to describe the following situation:
We have two spin systems: Spin 1 ($S_1$) and Spin 1/2 ($S_2$).
Now imagine you somehow change their interaction so that you can fine-tune the coupling $J$ between them in the form:
$$H = \mathbf{S_1} \cdot \mathbf{J_{12}} \cdot \mathbf{S_2}$$
where $\mathbf{J}$ is a matrix describing this interaction.
Now my question is how do I write this in matrix form in order to calculate the different eigenstates of this coupled system for different coupling strengths $J$?
Should I assume a spin 3/2 system (4x4 Matrix) or an entangled Hilbert space with spin 1/2 and spin 1 (6x6 Matrix)?
Also, what if I still want to include effects on the spin 1 system such as Zeeman splitting in a magnetic field $B_z$, how could I include this?
So let's make the situation a bit more simple, just a magnetic field $B_z$ acting on the spin-1 and only an isotropic ferromagnetic coupling between the spin-1 and the spin-1/2:
$$H = g\mu_B * B_z * S_z + J * \mathbf{S_{Spin1}} \cdot \mathbf{S_{Spin1/2}}$$
So I know my spin matrices for the spin-1/2 (Pauli matrices) and for the spin-1. My approach now would be to take the tensor product of these operators to create the new operators for the above Hamiltonian, i.e.:
$$S_x^{both} = S_x^{spin1} \otimes S_x^{spin1/2}$$ as well as for $y$ and $z$.
With these I construct the new Hamiltonian. I think these operators are correct for the spin coupling term; for the magnetic field $B_z$ that should only act on the spin-1, I need to project it onto the subspace of the spin-1 system, I think?
quantum-mechanics homework-and-exercises quantum-spin quantum-entanglement spinors
Qmechanic♦
$\begingroup$ Dear Matthias. I plan to come back to my answer to tidy it a bit and make it more general some time. I have made some notes at the end as to how you might tackle the problem: I suspect the best way is to use Schur's lemma in some way. You could also find the subspace which is common to all three of the nullspaces of the three $36\times 36$ matrices $1_{36\times 36}\otimes \Sigma_j - \Sigma_j^T\otimes 1_{36\times 36}$ in Mathematica or Matlab, but I suspect there is a much more elegant method. $\endgroup$ – WetSavannaAnimal Apr 16 '15 at 1:03
$\begingroup$ Also, please add your own answer if you work it out: I'm actually quite interested in this myself now. My answer could probably reformulate the question so that it could be asked on Maths SE. $\endgroup$ – WetSavannaAnimal Apr 16 '15 at 1:05
$\begingroup$ Dear Rod, thanks for the detailed answer but I'm afraid that this is a bit too complicated for me. I thought it should be easier by making a few simplifications, e.g. we only care about the isotropic part of the coupling and assume ferromagnetic coupling. So let's say in this case I want to apply an external magnetic field $B_z$ to the spin-1 system and the two spins are connected by $H = J * \vec{S_1} \cdot \vec{S_2}$. So our total Hamiltonian would be: $H = g\mu B_z*S_z + J * \vec{S_1} \cdot \vec{S_2}$. Can't I just take the tensor products of the individual operators and that's it? $\endgroup$ – Mike May 16 '15 at 17:05
I haven't thought about this one before, so here is an approach that will work if you work hard enough at it.
Before I begin banging on, point number 1:
Unquestionably the latter. It is a bipartite system and its state space is the tensor product of the two particle spaces. It simply cannot be anything else.
The basic principle here is conservation of angular momentum, so your basic procedure to solve your problem is:
Work out the matrices for the observables for the three nett angular momentum components (the three nett angular momentum operators);
Find the most general Hamiltonian which commutes with all three of these, as commutation with the Hamiltonian is equivalent to invariance with time of all the moments of the probability distributions of the measurements.
Part 1: The Three Angular Momentum Operators
The $x$-AM component observable for the spin half particle is $\frac{1}{2}\sigma_x$ (taking $\hbar=1$), where
$$\sigma_x=\left(\begin{array}{cc}0&1\\1&0\end{array}\right);$$
this has AM eigenvectors:
$$\psi_+=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\1\end{array}\right);\quad\psi_-=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\-1\end{array}\right)$$
and AM eigenvalues $\lambda_+=+\frac{1}{2}$ and $\lambda_-=-\frac{1}{2}$, respectively.
The $x$-AM component observable for the spin 1 particle,
$$S_x = \left(\begin{array}{ccc}0&0&0\\0&0&i\\0&-i&0\end{array}\right),$$
has AM eigenvectors:
$$\Psi_+=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\i\\1\end{array}\right);\quad\Psi_-=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\1\\i\end{array}\right);\quad\Psi_0=\left(\begin{array}{c}1\\0\\0\end{array}\right)$$
and AM eigenvalues $\Lambda_+=+1$, $\Lambda_-=-1$ and $\Lambda_0=0$, respectively. So now, for the two particle system, the six $x$-AM eigenstates are:
$\psi_+\otimes\Psi_+$ with AM eigenvalue $\frac{1}{2}+1=\frac{3}{2}$
$\psi_+\otimes\Psi_0$ with AM eigenvalue $\frac{1}{2}+0=\frac{1}{2}$
$\psi_+\otimes\Psi_-$ with AM eigenvalue $\frac{1}{2}-1=-\frac{1}{2}$
$\psi_-\otimes\Psi_+$ with AM eigenvalue $-\frac{1}{2}+1=\frac{1}{2}$
$\psi_-\otimes\Psi_0$ with AM eigenvalue $-\frac{1}{2}+0=-\frac{1}{2}$
$\psi_-\otimes\Psi_-$ with AM eigenvalue $-\frac{1}{2}-1=-\frac{3}{2}$
and so, if we order the eigenstates as above, the eigenvectors as columns are $\mathrm{vec}(\psi_+\otimes\Psi_+),\,\mathrm{vec}(\psi_+\otimes\Psi_0)\cdots$ (see the Wikipedia Vectorization Page) and so at last we get as the total $x$-AM component observable $\Sigma_X = P_X \Lambda_X P_X^\dagger$ where
$$P_X=\left( \begin{array}{cccccc} 0 & \frac{1}{\sqrt{2}} & 0 & 0 & \frac{1}{\sqrt{2}} & 0 \\ 0 & \frac{1}{\sqrt{2}} & 0 & 0 & -\frac{1}{\sqrt{2}} & 0 \\ \frac{i}{2} & 0 & \frac{1}{2} & \frac{i}{2} & 0 & \frac{1}{2} \\ \frac{i}{2} & 0 & \frac{1}{2} & -\frac{i}{2} & 0 & -\frac{1}{2} \\ \frac{1}{2} & 0 & \frac{i}{2} & \frac{1}{2} & 0 & \frac{i}{2} \\ \frac{1}{2} & 0 & \frac{i}{2} & -\frac{1}{2} & 0 & -\frac{i}{2} \\ \end{array} \right)$$
and $\Lambda_X =\mathrm{diag}\left(\frac{3}{2},\,\frac{1}{2},\,-\frac{1}{2},\,\frac{1}{2},\,-\frac{1}{2},\,-\frac{3}{2}\right)$. The result is:
$$\Sigma_X=\left( \begin{array}{cccccc} 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\ \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{3}{4} & -\frac{1}{4} & \frac{i}{4} & \frac{3 i}{4} \\ 0 & 0 & -\frac{1}{4} & \frac{3}{4} & \frac{3 i}{4} & \frac{i}{4} \\ 0 & 0 & -\frac{i}{4} & -\frac{3 i}{4} & \frac{3}{4} & -\frac{1}{4} \\ 0 & 0 & -\frac{3 i}{4} & -\frac{i}{4} & -\frac{1}{4} & \frac{3}{4} \\ \end{array} \right)$$
From here on it should be conceptually clear how to go, although tedious. You do the same for the $y$-AM observables:
$$\sigma_y=\left(\begin{array}{cc}0&-i\\i&0\end{array}\right)$$ $$S_y = \left(\begin{array}{ccc}0&0&-i\\0&0&0\\i&0&0\end{array}\right)$$
to find the total system $y$-AM observable $\Sigma_Y$ and for the $z$-AM observables:
$$\sigma_z=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)$$ $$S_z = \left(\begin{array}{ccc}0&i&0\\-i&0&0\\0&0&0\end{array}\right)$$
to get the total system $z$-AM observable $\Sigma_Z$.
Part 2: Find most general Hamiltonian
Your most general Hamiltonian will be defined by the three commutator relationships expressing conservation of AM:
$$[\hat{H},\,\Sigma_j]=0;\;j=X,\,Y,\,Z$$
You'll need to work out the invariant spaces of the three $\Sigma$s to do this. You'll get a linear space of possible $\hat{H}$s: in the two coupled spin half particles case there is essentially only one possible Hamiltonian that falls out of this approach, namely one proportional to $\sigma_x\otimes\sigma_x+\sigma_y\otimes\sigma_y+\sigma_z\otimes\sigma_z$ (plus a term proportional to the $4\times4$ identity matrix expressing the shift in the ground state energy), but in this six dimensional case things will be a bit more complicated. Now as I said, I've never done this before, so I daresay there is a more systematic and less cumbersome way to work this all out. But any method is going to rest on the first principles expressed above.
What are the terms for the influence of the magnetic field? Well, that's an easy one: in the ordering we have studied above, the uncoupled Hamiltonian will be:
$$\hat{H} = \gamma_{\frac{1}{2}}\left(\sigma_x\,B_x + \sigma_y\,B_y+ \sigma_z\,B_z\right)\otimes 1_{3\times3} + \gamma_1\,1_{2\times2}\otimes\left(S_x\,B_x + S_y\,B_y+ S_z\,B_z\right)$$
where $\gamma_{\frac{1}{2}}$ and $\gamma_1$ are the respective gyromagnetic ratios.
Notes on completing the method. You can also represent a bipartite state $\Phi=\psi\otimes\Psi$ as the literal $2\times 3$ matrix that is the outer product $\Phi=\psi\,\Psi^T$ of the $2\times 1$ and $3\times 1$ column vectors. Then the operators on the first space act on the left and the operators on the second act on the right. So our $x$-component observable would be the linear, homogeneous transformation:
$$\Phi\mapsto \sigma_x\,\Phi\,S_x^T$$
and the vectorization operator (See Vectorization Wiki Page), which reorders our states into a $6\times 1$ column vectors as in my answer, writes this as
$$\mathrm{vec}(\Phi) \mapsto S_x\otimes\sigma_x\,\mathrm{vec}(\Phi)$$
Using the standard formula $\mathrm{vec}(A\,B\,C) = C^T\otimes A \,\mathrm{vec}(B)$. By dint of the formula $(A\otimes B)\, (C\otimes D) = (A\,C)\otimes(B\,D)$, and using the fact that inverse, complex conjugate, Hermitian conjugate and transpose operations distribute over the Kronecker product, we can diagonalize $S_x\otimes\sigma_x$ inside the Kronecker product and find that the coupled system's eigenvectors are the columns of $P_x\otimes p_x$, where $P_x,\,p_x$ are the matrices of eigenvectors of the individual multiplicands written as columns. So this will let you calculate $\Sigma_j,\,j=X,\,Y,\,Z$ systematically and fast.
Now to find the most general Hamiltonian, you need to find the invariant space of the group of matrices generated by the three matrices $\exp(i\,\Sigma_j)$ and find the irreducible representation of it: equivalently the smallest vector subspace of $\mathbb{C}^6$ left invariant by the group: by Schur's lemma, any matrix commuting with all three must be proportional to the identity operator when restricted to this subspace. The scaling factor is possibly nought – i.e. the operator could possibly be the zero endomorphism. This completely characterizes the most general Hamiltonian: it can be any operator which is proportional to the identity when restricted to this irreducible subspace.
You could also find the subspace which is common to all three of the nullspaces of the three $36\times 36$ matrices $1_{36\times 36}\otimes \Sigma_j - \Sigma_j^T\otimes 1_{36\times 36}$ in Mathematica or Matlab, but I suspect there is a much more elegant method grounded on Schur's lemma!
Biomaterials Research
Optimization of silver nanoparticle synthesis by chemical reduction and evaluation of its antimicrobial and toxic activity
Catalina Quintero-Quiroz ORCID: orcid.org/0000-0001-9682-55301,
Natalia Acevedo1,
Jenniffer Zapata-Giraldo2,
Luz E. Botero2,
Julián Quintero3,
Diana Zárate-Triviño4,
Jorge Saldarriaga5 &
Vera Z. Pérez1,6
Biomaterials Research volume 23, Article number: 27 (2019)
Chemical reduction has become an accessible and useful alternative to obtain silver nanoparticles (AgNPs). However, the toxicity of the resulting AgNPs depends on multiple variables that generate differences in their ability to inhibit the growth of microorganisms. Thus, optimizing the parameters for the synthesis of AgNPs can increase their antimicrobial capacity by improving their physico-chemical properties.
In this study a Face Centered Central Composite Design (FCCCD) was carried out with four parameters: AgNO3 concentration, sodium citrate (TSC) concentration, NaBH4 concentration and the pH of the reaction, with the objective of inhibiting the growth of microorganisms. The response variables were the average size of the AgNPs, the peak with the greatest intensity in the size distribution, the polydispersity of the nanoparticle size and the yield of the process. The AgNPs obtained from the optimization were characterized physically and chemically. The antimicrobial activity of the optimized AgNPs was evaluated against Staphylococcus aureus, Escherichia coli, Escherichia coli AmpC resistant, and Candida albicans and compared with that of the AgNPs before optimization. In addition, the cytotoxicity of the optimized AgNPs was evaluated by the colorimetric MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay.
It was found that the four factors studied were significant for the response variables, and a significant model (p < 0.05) was obtained for each variable. The optimal conditions were pH 8 and 0.01 M, 0.06 M and 0.01 M for the concentrations of TSC, AgNO3, and NaBH4, respectively. Spherical and hemispherical optimized AgNPs were obtained, and 67.66% of them had a diameter of less than 10.30 nm. A minimum bactericidal concentration (MBC) and minimum fungicidal concentration (MFC) of the optimized AgNPs were found against Staphylococcus aureus, Escherichia coli, Escherichia coli AmpC resistant, and Candida albicans at 19.89, 9.94, 9.94 and 2.08 μg/mL, respectively. Furthermore, the lethal concentration 50 (LC50) of the optimized AgNPs was found to be 19.11 μg/mL and 19.60 μg/mL for Vero and NiH3T3 cells, respectively.
It was found that the factors studied were significant for the response variables, and the optimization process used was effective in improving the antimicrobial activity of the AgNPs.
Silver ions have been known for their effectiveness against a wide range of microorganisms [1]. The antimicrobial activity of silver nanoparticles (AgNPs) has been confirmed in both Gram-positive and Gram-negative bacteria as well as in fungi [2, 3]. AgNPs have been used in several medical applications such as sunscreen lotions, burn treatment, wound dressings, textiles, dental materials, bone implants and medical device coatings, among others [4–6].
The antimicrobial effect of AgNPs relies on physico-chemical characteristics like size, shape, distribution and concentration [5]. The mechanism of action has been associated with several factors, including damage to the cell membrane of bacteria or the plasma membrane of fungi that causes the loss of cellular components [5, 7, 8]; disruption of the respiratory chain and of the synthesis of adenosine triphosphate (ATP), which affects the cellular energy source, causing death of the microorganism; damage to deoxyribonucleic acid (DNA); and disruption of cell replication [4, 7, 8]. However, it is expected that AgNPs do not cause cellular damage or affect beneficial microorganisms [2]. As a result, the cytotoxic activity is important to define their applications [1].
Methods of synthesis for AgNPs have gained a lot of attention recently due to the need to find more efficient ways to obtain nanoparticles. The bottom-up chemical technique is one of the most used methods for nanoparticle production. This method is low cost and has a large-scale production capacity. It is based on the reduction of a metal salt via a reducing agent in the presence of a protective material. AgNPs formation begins with the reduction of silver ions to neutral silver atoms, which combine with the Ag+ precursor to form Ag2+ clusters. Subsequently, more atoms are added and this forms a cluster, which allows controlling the shape and size of the nanoparticles [1, 5].
Due to the advances in the production of AgNPs with different characteristics, and because of the effects of physico-chemical properties on microorganisms, characterization techniques have been developed. Those techniques allow analyzing the structure, morphology, composition, and behavior of AgNPs using technologies such as ultraviolet-visible spectroscopy (UV-Vis), dynamic light scattering (DLS), and transmission electron microscopy (TEM) [5]. UV-Vis evaluates the localized surface plasmon resonance (LSPR) of metal nanoparticles, provides information on their size and has been used as a benchmark for the performance of the nanoparticle synthesis process [9, 10]. On the other hand, DLS uses a monochromatic light source to measure the size, structure, and distribution of nanomaterials [11, 12]. The electrostatic attraction or repulsion between particles can be assessed through the zeta potential, which is also measured by DLS [11, 12]. In addition, it is possible to obtain nanoparticle images with a resolution of up to 0.1 nm employing TEM [13].
There are several studies that optimize the synthesis parameters of AgNPs, mainly to reduce their size and improve their physico-chemical properties [14, 15]. However, even though there are well-established techniques for the preparation of metallic nanoparticles, it is necessary to investigate simple synthesis methods, requiring short reaction times and low cost, to obtain nanoparticles with greater antimicrobial activity [16].
In this study, a Face Centered Central Composite Design (FCCCD) was carried out to optimize the synthesis of AgNPs obtained by the chemical reduction method [17]. These nanoparticles were established as the reference AgNPs, and their antimicrobial activity was evaluated. The design was performed with four parameters: AgNO3 concentration, sodium citrate (TSC) concentration, NaBH4 concentration, and the pH of the reaction, in order to obtain a better antibacterial effect than that of the reference AgNPs. The optimized AgNPs were characterized by evaluating some of their physico-chemical properties and their antimicrobial activity. Additionally, the cytotoxic effect was assessed using the NiH3T3 and Vero cell lines. NiH3T3 is the standardized cell line recommended by the Organization for Economic Cooperation and Development (OECD) as the in vitro model to test the cytotoxicity of manufactured nanomaterials [18]. The Vero cell line was used to determine the cytotoxicity of the AgNPs on a cell line derived from a blood-filtering organ, the kidney, since the hemocompatibility of nanoparticles is a prerequisite for their use in medical products [19].
A schematic diagram of the optimization process is depicted in Fig. 1.
Optimization process for the synthesis of silver nanoparticles using experimental design
Synthesis of silver nanoparticles
Briefly, 5 mL of sodium citrate 0.05 M (TSC, Sigma-Aldrich CAS 6132-04-3) and 5 mL of silver nitrate 0.05 M (AgNO3, PANREAC CAS 7761-88-8) were added to 185 mL of water type 1 (Milli Q Ⓡ) in a cold bath between 6 ∘C and 10 ∘C. The solution was stirred for 3 min at 3000 RPM. Subsequently, 5 mL of sodium borohydride 0.05 M (NaBH4, Sigma-Aldrich CAS 16940-66-2) was dripped in slowly. The pH was adjusted to 10 with sodium hydroxide 1.25 M (NaOH, PANREAC CAS 1310-73-2). The nanoparticles obtained were stored in amber bottles at 4 ∘C. These nanoparticles were the reference AgNPs (Ref-AgNPs) for the study.
Experimental design and optimization
An optimization process of the Ref-AgNPs was carried out with the purpose of improving some of their physico-chemical properties. A FCCCD was executed using the Design Expert Version 7.0.0 software (Stat-Ease, USA) with four parameters: AgNO3 concentration (0.01 - 0.09 M), TSC concentration (0.01 - 0.09 M), NaBH4 concentration (0.01 - 0.09 M) and the pH of the reaction (8 – 10). The response variables were established according to the synthesis performance. The variables were: i) area under the curve of the UV-Vis absorbance spectrum; ii) average size of AgNPs; iii) the greatest intensity peak in the size distribution of AgNPs (PSGI); iv) the polydispersity of AgNPs. These dependent variables were quantified with the following techniques:
Ultraviolet- visible spectroscopy (UV-Vis)
This technique was used to determine the plasmonic surface resonance. A UV probe 1601pc Shimadzu spectrophotometer was used for reading the absorbance between 350 and 420 nm. The yield of the synthesis was estimated as the area under the curve of the ultraviolet absorbance of the nanoparticles evaluated [21].
Dynamic light scattering (DLS)
DLS was used to establish the average size of the AgNPs, the PSGI and the polydispersity. A Zetasizer Nano Series Malvern Instruments (USA) was used. The samples were diluted in water type 1 at controlled temperature (23 ∘C) to obtain a dilution factor that allows a reliable reading. Three measurements were made, each with 30 s of equilibration and 15 runs of 10 s duration [22].
The optimization process sought to maximize the yield, with an importance of 5. On the other hand, it worked towards minimizing the average size, the highest-intensity size peak, and the size polydispersity of the AgNPs, with importances of 3, 4 and 5, respectively. The least squares multiple regression method was used.
The experimental data were fitted using a second-order polynomial equation, comparing the coefficient of determination (R2) and the adjusted coefficient of determination (R2-adj). The analysis of variance (ANOVA) was used to evaluate the statistical significance of the independent variables in the models obtained (with a confidence level of 95%). The accuracy of the optimal conditions was evaluated by calculating the relative and absolute errors between the responses predicted by the model and those obtained experimentally under optimal conditions.
Characterization of AgNPs
Physico-chemical characterizations of optimized AgNPs
Optimized AgNPs (Opt-AgNPs) were characterized physico-chemically by the UV-Vis and DLS methods as described in the design of the experiment. In addition, AgNPs were characterized by atomic absorption (AAS), zeta potential, and transmission electron microscopy (TEM) as described follows:
Atomic absorption spectroscopy (AAS)
The silver concentration in each synthesis was determined by the flame method of the AAS technique, using a Thermo Scientific ICE 3000 (USA) [23]. A sample of the colloidal solution of the undiluted nanoparticles was nebulized and dispersed as an aerosol to measure the parts per million of Ag.
Zeta potential
The Zeta potential was determined by Laser Doppler Electrophoresis using a Zetasizer Nano ZS and the Zetasizer software. The nanoparticles were diluted in water type 1 at controlled temperature (23 ∘C) and three measurements were made, each of them with 30 s of equilibrium and 15 runs of 10 s in length [24].
The size and morphology of the samples were confirmed by transmission electron microscopy (TEM) using a Tecnai F20 Super Twin TMP, FEI. The samples were prepared using a drop of approximately 60 nm thickness of each suspension and deposited on a carbon membrane [21, 25].
Evaluation of the antimicrobial effect of AgNPs
The antibacterial and antifungal activity of the Ref-AgNPs and Opt-AgNPs was evaluated with the macrodilution and microdilution techniques [21, 26]. The microorganisms used were Staphylococcus aureus ATCC 25923, Escherichia coli ATCC 25922, Escherichia coli AmpC resistant, and Candida albicans ATCC 14053. The minimum bactericidal concentration (MBC) and the minimum fungicidal concentration (MFC) were evaluated to establish the bactericidal and antifungal capacity of the AgNPs, respectively.
Briefly, each bacterial species was seeded on Müller Hinton agar (BD, REF 211438) and incubated for 24 h at 37 ∘C. Subsequently, a sample of each microorganism was cultured for 12 to 24 h in Brain Heart Infusion liquid medium (BHI, BD REF 211065) at 37 ∘C in order to reach log phase. Each bacterial suspension was adjusted to 5 ×104 CFU/mL using a spectrophotometer (Genesys 20, Thermo Scientific USA). The microorganisms were exposed to different concentrations of AgNPs (2.48, 4.97, 9.94, 19.89, 29.83 and 39.78 μg/mL), each dilution containing 2.5 ×104 CFU/mL of bacteria. 150 μL of 0.02 M TSC was added to each nanoparticle solution. Each dilution was incubated for 24 h at 37 ∘C in a shaking incubator (Rosy 1000, Thermolyne USA), under constant stirring at 75 RPM. A volume of 10 μL of these dilutions was seeded on Müeller-Hinton agar and incubated at 37 ∘C for 24 h. The MBCs were determined visually as the lowest concentration of AgNPs that inhibits 99.9% of the growth of the microorganisms [26]. For each assay, there were a number of controls, such as microorganism growth, AgNPs, diluent and TSC sterility controls.
On the other hand, the MFC of AgNPs against Candida albicans ATCC 14053 was evaluated through the broth microdilution technique [26]. The fungus was seeded on Sabouraud agar (BD, REF 210950) for 48 h at 37 ∘C. The microorganism was then subcultured for 48 h in BHI liquid medium at 37 ∘C. Apart from that, AgNPs concentrations from 0.12 to 3.97 μg/mL were obtained by diluting with water type 1 (Milli Q Ⓡ). The fungal suspension was adjusted to 2.5 ×103 CFU/mL using a Genesys 20 spectrophotometer (Thermo Scientific USA). On a 96-well flat-bottom microplate (Costar REF 3599), 20 μL of each dilution of AgNPs, 220 μL of BHI culture medium, and 10 μL of the microorganism were added. Sterility and sensitivity controls of the microorganism were cultured using fluconazole 99% (Pfizer, lot 04821) at 10 and 5 μg/mL, together with a viability control. In each well, a final volume of 250 μL was obtained. The microplate was incubated at 37 ∘C and kept under agitation at 60 RPM for 24 h in an incubator-agitator (Rosy 1000, Thermolyne USA). After 24 h, the absorbance of each well was read at a wavelength of 530 nm and 10 μL of each well was seeded in Petri dishes with Sabouraud agar. The dishes were incubated for 48 h at 37 ∘C. The antibacterial and antifungal activity was evaluated in triplicate to obtain the median CFU/mL. The MFC of the AgNPs for the fungus was evaluated visually and using the Probit regression method; the IBM SPSS Statistics 24.0 software was used for the statistical analysis.
Evaluation of the cytotoxic effect of optimized AgNPs
The evaluation of the cytotoxic effect of the Opt-AgNPs was carried out on NiH3T3 and Vero cells through an MTT (3-[4,5-dimethylthiazol-2-yl]-diphenyl tetrazolium bromide) assay of cellular viability. The cells were cultured as monolayers in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 5% of fetal bovine serum (SFB) and 1% antibiotics (penicillin and streptomycin). The cultures were maintained in an incubator at 37 ∘C with a 5% CO2 atmosphere in a 25 cm2 culture flask. For the cytotoxicity assay, the cells were trypsinized (0.05% trypsin EDTA) and seeded in 96-well plates, with 5 ×103 cells in 200 μl of medium in each well. The plates were incubated at 37 ∘C with a 5% CO2 atmosphere for 24 h to allow cellular adherence. Then, the medium was removed and new medium with AgNPs at different concentrations (0, 5, 10, 15, 20, 40, 60, 70, 80, 90 μg/mL) was added in different wells. The cells were incubated for 24 h at 37 ∘C with a 5% CO2 atmosphere. After that, the medium with AgNPs was removed, 100 μl of medium with 10 μl of MTT was added to each well, and the plates were incubated for 2 h at 37 ∘C. Subsequently, 100 μl of dimethylsulfoxide (DMSO) was added to each well and its absorbance was measured using a microplate reader (Synergy HT Biotek Ⓡ) at 570 nm [27]. Cells untreated with AgNPs were used as the control. The MTT assay was performed with six replicas for the different AgNPs concentrations and for each cell line. The data were statistically analyzed with the IBM SPSS Statistics software to determine the cellular viability and to establish whether the AgNPs caused a cytotoxic effect. The lethal concentration 20 (LC20) and lethal concentration 50 (LC50) of the optimized AgNPs were obtained by Probit analysis, which assesses the mortality percentage for each AgNPs concentration evaluated. The percentage of cellular viability was calculated taking the cellular control as 100% viability [28].
A central composite design was obtained using the Response Surface Methodology (RSM) with the objective of identifying the interactions between the parameters such as AgNO3 concentration (0.01 - 0.09 M), TSC concentration (0.01 - 0.09 M), NaBH4 concentration (0.01 - 0.09 M) and the pH of the reaction (8 - 10). The objective was to increase the yield in the production of AgNPs, as well as to reduce the average size and the polydispersity in the size of the nanoparticles.
The Design of Experiments (DOE) comprised 29 experimental runs, which are presented in randomized order in Table 1. The highest response for the UV-Vis area was 15.00, which was obtained with TSC [0.09 M], AgNO3 [0.09 M], NaBH4 [0.09 M] and pH 8. On the other hand, the lowest average nanoparticle size, the smallest highest-intensity size peak and the lowest polydispersity were 12.39, 15.17 and 0.112, which were obtained with AgNO3 at a concentration of 0.01 M and pH 8. The seven runs with a UV-Vis spectrum area of 0.00 were synthesized at pH 12. They had the largest average AgNPs sizes as well as the largest highest-intensity size peaks. Besides, these runs included the 3 with the greatest polydispersity in size.
Table 1 Three-variable FCCCD design with four responses for the synthesis of AgNPs
Validation of the experimental model using residuals
The optimal conditions and the validation of the design of the experiment were determined using the Design Expert software. This was based on the analysis of the normal residual graphs for each of the response variables (Fig. 2), the residuals vs. predicted values graphs of the validation model (Fig. 3), and the residuals vs. observation order graphs (Fig. 4). It was observed that the experimental design presented a linear relationship in the distribution of errors. The assumption of normality was verified by the normal probability graph, the independence between residuals, and the normal and random distribution between positive and negative residuals.
Normal plot of residuals; a) for the performance in the production of AgNPs; b) for the average size of AgNPs; c) for the highest intensity peak of AgNPs; d) for polydispersity of the size of AgNPs
Graph of residuals vs. predicted model; a) for the performance in the production of AgNPs; b) for the average size of AgNPs; c) for the highest intensity peak of AgNPs; d) for polydispersity of the size of AgNPs
Residual graph vs. observation order; a) for the performance in the production of AgNPs; b) for the average size of AgNPs; c) for the highest intensity peak of AgNPs; d) for polydispersity of the size of AgNPs
On the other hand, the validation of the prediction of optimal values was assessed by calculating the relative and absolute errors between the responses predicted by the model and those obtained experimentally under optimal conditions (Table 3).
Statistical analysis of experiments
An ANOVA was performed for each experimental response, and the quadratic model was found to be significant (p < 0.05) in each case. Table 2 shows the ANOVA results and the statistical description of the model obtained for each of the responses.
Table 2 ANOVA and statistical description by FCCCD
The relationships between the dependent variables (yield, size, PSGI, polydispersity) and the independent variables (TSC, AgNO3, NaBH4, and pH) are expressed by the following regression equations:
The yield of AgNPs synthesis (yield) model is given below in Eq. 1.
$$ \begin{aligned} \log_{10}(yield + 0.15) ={}& -7.26071 \times 10^{-3} + 30.61911 \times TSC + 72.43530 \times {AgNO}_{3} \\ & - 0.10162 \times pH - 2.81328 \times {AgNO}_{3} \times pH \\ & - 287.25097 \times TSC^{2} - 406.12621 \times {{AgNO}_{3}}^{2} \end{aligned} $$
The average size of AgNPs (size) model is given below in Eq. 2.
$$ \begin{aligned} \log_{10}(size) ={}& 22.59344 + 13.17486 \times TSC - 33.27239 \times {NaBH}_{4} - 4.56699 \times pH \\ & - 273.76717 \times TSC \times {NaBH}_{4} + 5.05783 \times {NaBH}_{4} \times pH + 0.23722 \times pH^{2} \end{aligned} $$
The peak size with greater intensity of AgNPs (PSGI) model is given below in Eq. 3.
$$ \begin{aligned} \log_{10}(PSGI) ={}& 15.22917 + 9.55131 \times TSC + 6.56465 \times {AgNO}_{3} - 12.83922 \times {NaBH}_{4} \\ & - 3.17281 \times pH - 218.23765 \times TSC \times {NaBH}_{4} \\ & + 2.98745 \times {NaBH}_{4} \times pH + 0.17228 \times pH^{2} \end{aligned} $$
The AgNPs size polydispersity (Polydispersity) model is given below in Eq. 4.
$$ \begin{aligned} Polydispersity ={}& -1.24875 + 3.87813 \times TSC + 22.27832 \times {NaBH}_{4} \\ & + 0.16253 \times pH - 2.16396 \times {NaBH}_{4} \times pH \end{aligned} $$
All the models were significant (p < 0.05), and pH was statistically significant in each of them. The coefficient of determination (R2) indicates the correlation between experimental and predicted data. In addition, the adjusted R2 and R2 corroborate the significance of the models. According to the coefficients of each effect analyzed, the concentration of AgNO3 has the greatest effect on the yield in the production of AgNPs, while TSC has the greatest effect on the size and on the maximum size peak of the nanoparticles. In terms of polydispersity in the size of AgNPs, the concentration of NaBH4 is the factor with the greatest effect.
Additionally, Fig. 5 shows the most significant response surfaces for the yield, size and size polydispersity of the AgNPs, using the interactions of three variables. Figure 5a shows that the yield in the production of AgNPs increases when the pH increases and the concentration of AgNO3 is between 0.05 and 0.07 M; hence, this image suggests that there are optimal conditions of pH and AgNO3 concentration that increase the production of AgNPs. In addition, the effect of pH on the size of AgNPs is observed in Fig. 5b, in which the size of the particles increases when the pH increases, regardless of the variation in AgNO3 concentration. Finally, Fig. 5c shows the effect of the AgNO3 concentration and the pH on the polydispersity of the AgNPs size, showing that the polydispersity decreases as the pH is reduced.
3D interaction plot of AgNPs. a interaction of pH and AgNO3 on the production performance of AgNPs; b interaction of concentration pH and AgNO3 on the average size of AgNPs; c interaction of pH and concentration of AgNO3 on the polydispersity of the size of AgNPs
Based on the statistical analysis obtained, Table 3 shows the limits and importance of each parameter as well as the optimal values of the parameters with the predicted responses for each response variable.
Table 3 Restrictions and optimal conditions predicted from the model obtained
Physico-chemical characterization of AgNPs
The formation of the optimized AgNPs was confirmed through the UV-Vis absorption spectrum (Fig. 6): an absorption band was observed at 400 nm, and no additional peaks indicating the presence of residues of the synthesis process were observed in the wavelength range evaluated. Table 4 shows the results of the characterization of the AgNPs by AAS and DLS.
UV-Vis spectroscopy of optimized AgNPs [1%] (v/v)
Table 4 Physico-chemical characterization of AgNPs
In addition, Fig. 7 shows TEM micrographs of the Opt-AgNPs. The AgNPs showed a heterogeneous size distribution with a variable, predominantly spherical morphology.
Micrograph of AgNPs taken with TEM (50 nm scale bar)
Table 5 shows the bactericidal and fungicidal effect of the AgNPs. Better antimicrobial activity was found for the optimized nanoparticles than for the reference ones. Also, their antifungal effect against Candida albicans was greater than that of the fluconazole control, which was evaluated at a concentration of 10 μg/mL. Growth and sterility controls were appropriate for all assays. Also, Fig. 8 shows these results as a comparison between the two AgNPs using a bar chart.
Comparison between the MBC and MFC of the reference AgNPs and the optimized AgNPs
Table 5 Antimicrobial activity of AgNPs at 24h incubation
Figure 9 shows the viability percentage of Vero and NiH3T3 cells according to the different concentrations of AgNPs. It was found that the viability of both cells decreased as the concentration of AgNPs increased. Besides, no significant differences were found between the cell lines evaluated.
Cell viability in Vero and NiH3T3 cells exposed to AgNPs
Furthermore, the LC20 and LC50 of the AgNPs were found to be 1.74 μg/mL and 19.11 μg/mL for Vero cells, respectively, and 3.21 μg/mL and 19.60 μg/mL for NiH3T3 cells, respectively.
The novelty of this article is the use of a FCCCD, through the RSM, to optimize the physico-chemical properties and antimicrobial activity of the silver nanoparticles presented as Ref-AgNPs. This work shows that it is possible to improve the antimicrobial activity of AgNPs, relative to the initial AgNPs, using the same method but making changes in four parameters: AgNO3 concentration, TSC concentration, NaBH4 concentration and the pH of the solution.
For this purpose, an optimization process of AgNPs was carried out, starting from an initial chemical reduction synthesis process. Optimized AgNPs were obtained, and their physico-chemical properties, antimicrobial activity, and cytotoxic effect were evaluated.
The FCCCD results showed that the formulation that delivered the highest yield in the production of AgNPs was the synthesis that used the highest concentration of the three reagents and a final pH adjustment to 8. This may be related to the fact that the higher the concentration of the metal precursor and the reducing agent, the greater the possibility of obtaining AgNPs, due to the availability of the substrate and the ability of NaBH4 to release electrons to the oxidizing agent and reduce the silver into nanoparticles [17].
In addition, it was found that the syntheses with the lowest average nanoparticle size, the smallest highest-intensity size peak and the lowest polydispersity were prepared with AgNO3 [0.01 M] and pH 8. The explanation for this is that, in the syntheses performed with the lowest concentration of the precursor agent evaluated, the precursor had the opportunity to be more exposed to the reducing agent, forming electric layers around the nanoparticles that inhibit aggregation and reduce the size [17]. Furthermore, it has been found that the size of nanoparticles depends on the speed of the nucleation and growth processes, which can be controlled by parameters such as pH [29]. Ondari et al. found that a lower synthesis pH generated smaller nanoparticle sizes [29].
Likewise, some synthesis processes were carried out at pH 12, in which no area under the curve of the UV-Vis spectrum was found between 350 nm and 420 nm and in which larger AgNPs sizes and size polydispersities were recorded. It is possible that the parameters established for the concentration of the three reagents and the pH did not allow the formation of AgNPs with sizes between 5 and 50 nm [30], and that large silver clusters formed that could not be adequately measured by the sizing equipment. Similarly, it has been found that synthesis formulations with pH 12 can affect the size of AgNPs. Tagad et al. [31] synthesized AgNPs and evaluated the effect of pH on the size of the nanoparticles in reactions at different pH values. The authors found an agglomeration of AgNPs in the synthesis with the highest pH. This could explain the results obtained in our study, where an extremely alkaline pH can generate low stability.
Furthermore, when validating the models and observing the normal residual graphs for each model, some distance was found between the predicted values and the real values, due to the experimental conditions related to the synthesis processes. However, the residuals follow a normal distribution, endorsing the models [32].
In this study, the DOE developed showed that pH, the AgNO3 × pH interaction, TSC2 and AgNO32 are significant factors for the yield in the production of AgNPs by this method. Likewise, pH, AgNO3 and the protective agent influenced the size of the nanoparticles obtained, as has been reported in other studies [14, 29]. Considering that the nanoparticles are made up of silver from AgNO3, the concentration of this reagent is determinant for the production yield of AgNPs. Moreover, the concentration of TSC has the greatest effect on the size of AgNPs due to its role in preventing the aggregation of the nanoparticles. Additionally, it was found that NaBH4 has a great effect on the polydispersity of the AgNPs size, since NaBH4 is a strong reducer, which allows the reaction rate in the nucleation stage of the synthesis to be greater [33], whereby the silver ions have less time to generate clusters of variable sizes.
On the other hand, it was possible to verify that the AgNPs obtained with the optimized chemical synthesis corresponded to silver when evaluating the LSPR by UV-Vis. The observed bands showed a widening that suggests a distribution of different nanoparticle sizes. Also, an adequate yield in the production of AgNPs was found when evaluated by AAS, together with a slightly negative zeta potential, since NaBH4 was used as a reducing agent and the nanoparticles formed adsorbed the nitrate and borate ions, which are slightly negative. Additionally, with TEM it was possible to observe individual nanoparticles from the synthesis carried out. These results are in line with those obtained by other authors [1, 21, 22], who have shown that chemical methods allow obtaining small nanoparticles with a spherical tendency.
It has been found that the size, oxidation, and ion release capacity of AgNPs are factors associated with their antimicrobial activity [1, 34, 35]. The micro- and macrodilution methodology used to evaluate the antimicrobial capacity of the synthesized AgNPs allowed us to determine their MBC and MFC against all the microorganisms evaluated. The fact that changing the parameters of the formulation can decrease the lowest concentration of nanoparticles required to kill some bacteria and fungi reveals the importance of the physico-chemical properties in their antimicrobial capacity.
The results obtained indicate that the nanoparticle solution was not monodispersed, since the particles obtained were not of a uniform size. Several factors can generate these results, among which are the preparation and reaction conditions of the synthesis method used. It has been found that the rate of incorporation of the reagents, the agitation of the mixture, and the reaction rate determine the size distribution of the AgNPs obtained [36]. This paper presents a TEM image to demonstrate the formation of the Opt-AgNPs. Nanoparticles of different sizes are observed; however, this is an image from a portion of the sample. The polydispersity of the nanoparticle size was calculated by the DLS technique and is described in Table 4.
As in our study, other authors have found antimicrobial activity of AgNPs synthesized by chemical reduction against Staphylococcus aureus, Escherichia coli, and Candida albicans [1, 21, 22, 35]. Three toxicity mechanisms of AgNPs against microorganisms have been established [4, 5, 7, 8].
All these toxicity mechanisms of AgNPs begin with the adhesion to and permeation of the membrane of the microorganisms. However, Gram-positive bacteria have a greater cell wall thickness, through the peptidoglycan layer (30 to 100 nm thick), than Gram-negative bacteria [37]. This could explain the difference in the MBC between Staphylococcus aureus and Escherichia coli [37]. Another explanation for this phenomenon can be related to the presence of lipoteichoic acid in Gram-positive bacteria, which protects these microorganisms against external agents [37]. Nevertheless, a greater sensitivity of Candida albicans to AgNPs was found compared to that of the bacteria, which can be attributed to the large number of functional groups present on the surface of bacteria with respect to that of fungi [38].
In particular, it was found that the AgNPs obtained in this study achieved an MBC against Staphylococcus aureus, unlike the AgNPs before the optimization of the synthesis parameters. Also, this study found that the optimized nanoparticles possessed a higher toxicity against Escherichia coli, Escherichia coli AmpC resistant and Candida albicans compared to the reference AgNPs. These results can be attributed to differences in the concentrations of TSC, AgNO3 and NaBH4 between the two formulations, where the concentrations of the protective and reducing agents of the optimized synthesis were lower than in the initial one, while that of the metallic precursor increased.
In addition, other authors have used some of the same reagents employed in this work to synthesize AgNPs. However, differences have been found in the concentrations used and in the antimicrobial effects of the nanoparticles.
By comparison, Raji et al. [1] synthesized AgNPs using AgNO3 [0.1 M] and NaBH4 as a reducing agent. They found a MIC of 1.39 μg/mL for Escherichia coli and Staphylococcus aureus, and of 5.5 mg/mL against Candida albicans, after 24 and 48 h of incubation. Thus, the nanoparticles optimized in our study required 8.55 μg/mL and 18.5 μg/mL more than Raji's to achieve inhibition of Escherichia coli and Staphylococcus aureus. However, about 5.5 ×103 μg/mL less of our optimized AgNPs is required to inhibit the growth of Candida albicans compared to Raji's, which is a big difference. This could be related to the difference in reagent concentrations. It should be noted that some authors do not specify the type of strain used for each microorganism. Additionally, these authors did not use a design of experiments to improve the antimicrobial activity of their nanoparticles by optimizing some parameters of the synthesis process, as in this work.
Furthermore, previous studies have linked the antimicrobial activity of spherical AgNPs with their size and other characteristics, such as oxidation capacity and the release of silver ions [1, 34, 35]. Also, because the antimicrobial effect of AgNPs on bacteria and fungi is mediated by the interaction of these nanoparticles with the microorganisms, it has been claimed that smaller nanoparticles may have higher antimicrobial activity than larger ones [1]. This is because smaller nanoparticles have a larger surface area available to interact with microorganisms and release more ions [19]. However, larger AgNPs may have less toxic effects on human cells than those of small sizes [19]. Jeong et al. [19] prepared two different sizes of AgNPs (10 and 100 nm in average diameter) with similar chemical compositions, using an AgNO3 reduction method like the one in the present work. The authors found that the smaller particles showed a higher cytotoxic effect, at the same concentration, compared to the larger particles.
The evaluation of the cytotoxic effect of the optimized AgNPs was carried out through cellular viability by the MTT assay on NiH3T3 and Vero cells. This type of testing is necessary to determine the cytotoxicity of any product that is intended for use in humans [39]. It was found that cell viability, and therefore cytotoxicity, depend on the AgNPs concentration [40, 41]. This occurs because, at a higher concentration of AgNPs, cells are more prone to damage in the cell membrane, which produces permeability in the mitochondrial membrane and greater exposure to Ag ions [42]. For this reason, at concentrations of less than 13.88 μg/mL and 14.66 μg/mL of AgNPs, the viability was above 60% for NiH3T3 and Vero cells, respectively. However, a concentration of 20 μg/mL of AgNPs reduced the viability to 50%.
It was found that the LC20 and LC50 of the AgNPs were lower for Vero than for NiH3T3 cells. This is probably because some cell types or cell lines can be more sensitive than others to nanoparticles [41, 43]. In this case, Vero cells were more sensitive than fibroblast cells (NiH3T3). Other studies have also examined the cytotoxic effect of AgNPs [44, 45]. Accordingly, it is possible to employ our AgNPs at concentrations of less than 10 μg/mL to achieve bactericidal activity against Escherichia coli, Escherichia coli AmpC resistant, and Candida albicans and ensure a viability of 70% for Vero and NiH3T3 cells. However, it is not advisable to use these nanoparticles at concentrations greater than 20 μg/mL to eradicate Staphylococcus aureus, since the viability of Vero and NiH3T3 cells would be significantly affected.
To inhibit the growth of Staphylococcus aureus, a minimum inhibitory concentration could be used, which may be less than the MBC. Thomas et al. evaluated the antimicrobial activity of AgNPs against Staphylococcus aureus and found an MBC and MIC of 62.5 μg/mL and 1.95 μg/mL, respectively [46]. Similarly, Du et al. synthesized AgNPs and found an MBC of 100 μg/mL and a MIC of 50 μg/mL against the same microorganism [47]. These studies suggest that MBC may be higher than the MIC of AgNPs for the same bacteria.
Lastly, it is not clear what the ideal size distribution of AgNPs is that guarantees high antimicrobial activity while avoiding, as much as possible, the toxic effects of AgNPs on health. Nevertheless, the size distribution of the AgNPs optimized in this study could have generated a lower cytotoxic effect on Vero and NiH3T3 cells. Likewise, the smallest nanoparticles in the size distribution of the optimized AgNPs, together with the ability of the larger nanoparticles to release silver ions, could generate the antimicrobial effect against the microorganisms evaluated.
In this study, a face centered central composite design (FCCCD) through response surface methodology (RSM) was applied to optimize the chemical reduction synthesis of AgNPs. The objective was to increase the antimicrobial capacity of a reference AgNPs formulation through the optimization of some synthesis factors. The experimental results confirmed that all the factors studied were significant for the response variables. Optimized AgNPs with an average size of 9.94 nm and spherical and hemispherical shapes were obtained. Higher antimicrobial activity was found for the optimized AgNPs than for the reference AgNPs against Escherichia coli, Escherichia coli AmpC resistant, and Candida albicans: 9.94, 9.94 and 2.08 μg/mL of AgNPs, respectively, were necessary to eliminate them. Further, a bactericidal effect against Staphylococcus aureus was achieved at a concentration of 19.89 μg/mL of AgNPs. It was also found that the optimized AgNPs show no significant cytotoxicity against Vero and NiH3T3 cells and allowed a minimum viability of 70% at concentrations of less than 10 μg/mL of AgNPs.
This is the first study that optimizes the process of obtaining AgNPs through a design of experiments starting from the synthesis method presented here, and in which a better antimicrobial effect was achieved compared to the reference AgNPs. This work shows that it is possible to improve the antimicrobial activity of AgNPs obtained by a specific method by altering some parameters, without changing the methodology itself. This could be applied in those cases in which, for reasons of availability of other methods or lack of resources, it is not possible to change the chosen synthesis methodology.
Future work may also consider using different parameters (for example, stirring time, mixing RPM, and reaction temperature) for the optimization of AgNPs, in order to further reduce the minimum bactericidal concentration (MBC) and minimum fungicidal concentration (MFC) against microorganisms while avoiding reductions in cell viability.
All data generated or analysed during this study are included in this published article.
AAS:
Atomic absorption spectroscopy
AgNPs:
Silver nanoparticles
ANOVA:
Analysis of variance
ATCC:
American type culture collection
ATP:
Adenosine triphosphate
BHI:
Brain heart infusion
CFU:
Colony Forming Units
DLS:
Dynamic light scattering
DMEM:
Dulbecco's modified eagle Medium
DMSO:
Dimethylsulfoxide
DNA:
Deoxyribonucleic acid
DOE:
Design of experiments
LC20:
Lethal concentration 20
LC50:
Lethal concentration 50
MBC:
Minimum bactericidal concentration
MFC:
Minimum fungicidal concentration
MIC:
Minimum inhibitory concentration
MTT:
3-[4,5-dimethylthiazol-2-yl]-diphenyl tetrazolium bromide
OECD:
Organization for Economic Cooperation and Development
PSGI:
Peak with the greatest intensity in the size distribution
RSM:
Response Surface Methodology
SFB:
Fetal bovine serum
TEM:
Transmission electron microscopy
TSC:
Sodium citrate
UV-Vis:
Ultraviolet-visible spectroscopy
Raji V, Chakraborty M, Parikh PA. Synthesis of Starch-Stabilized Silver Nanoparticles and Their Antimicrobial Activity. Part Sci Technol. 2012; 30(6):565–77. https://doi.org/10.1080/02726351.2011.626510.
Karwowska E. Antibacterial potential of nanocomposite-based materials- a short review. Nanotechnol Rev. 2016; 6(2):243–54. https://doi.org/10.1515/ntrev-2016-0095.
Davoodbasha M, Kim S-C, Lee S-Y, Kim J-W. The facile synthesis of chitosan-based silver nano-biocomposites via a solution plasma process and their potential antimicrobial efficacy. Arch Biochem Biophys. 2016; 605:49–58. https://doi.org/10.1016/j.abb.2016.01.013.
Rai M, Yadav A, Gade A. Silver nanoparticles as a new generation of antimicrobials. Biotechnol Adv. 2009; 27(1):76–83. https://doi.org/10.1016/j.biotechadv.2008.09.002.
Dos Santos CA, Seckler MM, Ingle AP, Gupta I, Galdiero S, Galdiero M, Gade A, Rai M. Silver nanoparticles: Therapeutical uses, toxicity, and safety issues. J Pharm Sci. 2014; 103(7):1931–44. https://doi.org/10.1002/jps.24001.
Eremenko AM, Petrik IS, Smirnova NP, Rudenko AV, Marikvas YS. Antibacterial and Antimycotic Activity of Cotton Fabrics, Impregnated with Silver and Binary Silver/Copper Nanoparticles. Nanoscale Res Lett. 2016; 11(1):28–36. https://doi.org/10.1186/s11671-016-1240-0.
Wong KKY, Liu X. Silver nanoparticles - the real "silver bullet" in clinical medicine?,. Med Chem Commun. 2010; 1(2):125–31. https://doi.org/10.1039/c0md00069h.
Durán N, Durán M, de Jesus MB, Seabra AB, Fávaro WJ, Nakazato G. Silver nanoparticles: A new view on mechanistic aspects on antimicrobial activity. Nanomed Nanotechnol Biol Med. 2016; 12(3):789–99. https://doi.org/10.1016/j.nano.2015.11.016.
Liu B, Han G. Shell thickness-dependent Raman enhancement for rapid identification and detection of pesticide residues at fruit peels. Anal Chem. 2011; 84(1):255–61.
Alessio P, Aoki PHB, Furini LN, Aliaga AE, Constantino CJL. Spectroscopic Techniques for Characterization of Nanomaterials In: Da Róz AL, Ferreira M, Leite FdL, Oliveira ON Jr, editors. Nanocharacterization Techniques. 1st edn. Elsevier Inc.: 2017. p. 65–98. Chap. 3. https://doi.org/10.1016/B978-0-323-49778-7/00003-5.
Filipe V, Hawe A, Jiskoot W. Critical evaluation of nanoparticle tracking analysis (NTA) by nanosight for the measurement of nanoparticles and protein aggregates. Pharm Res. 2010; 27(5):796–810.
Brar SK, Verma M. Measurement of nanoparticles by light-scattering techniques. TrAC Trends Anal Chem. 2011; 30(1):4–17.
Leng Y. Materials Characterization. Introduction to Microscopic and Spectroscopic Methods, 2nd edn. Wingheim: Wiley-VCH Verlag GmbH & Co; 2013, pp. 1–383.
Hasnain MS, Javed MN, Alam MS, Rishishwar P, Rishishwar S, Ali S, Nayak AK, Beg S. Purple heart plant leaves extract-mediated silver nanoparticle synthesis: Optimization by Box-Behnken design. Mater Sci Eng C. 2019; 99:1105–14. https://doi.org/10.1016/j.msec.2019.02.061.
Núñez RN, Veglia AV, Pacioni NL. Improving reproducibility between batches of silver nanoparticles using an experimental design approach. Microchem J. 2018; 141:110–7. https://doi.org/10.1016/j.microc.2018.05.017.
Ajitha B, Reddy YA, Reddy P. Enhanced antimicrobial activity of silver nanoparticles with controlled particle size by pH variation. Powder Technol. 2015; 269(3):110–7. https://doi.org/10.1016/j.powtec.2014.08.049.
Brown AN, Smith K, Samuels TA, Lu J, Obare SO, Scott ME. Nanoparticles functionalized with ampicillin destroy multiple-antibiotic-resistant isolates of Pseudomonas aeruginosa and Enterobacter aerogenes and methicillin-resistant Staphylococcus aureus. Appl Environ Microbiol. 2012; 78(8):2768–74. https://doi.org/10.1128/AEM.06513-11.
Chueh PJ, Liang RY, Lee YH, Zeng ZM, Chuang SM. Differential cytotoxic effects of gold nanoparticles in different mammalian cell lines. J Hazard Mater. 2014; 264(2014):303–12. https://doi.org/10.1016/j.jhazmat.2013.11.031.
Yoon J, Dong Woo L, Choi J. Assessment of Size-Dependent Antimicrobial and Cytotoxic Properties of Silver Nanoparticles. Adv Mater Sci Eng. 2014; 2014:1–6. https://doi.org/10.1155/2014/763807.
Zapata-Giraldo J, Mena P, Cuesta D, Galeano B, Mejía M, Botero LE, Ortiz I, Escobar N, Hoyos L. Characterization of silver nanoparticles for potential use as antimicrobial agent. In: VII Congreso Latinoamericano de Ingeniería Biomédica. Bucaramanga: Asociación Colombiana de Ingeniería Biomédica ABIOIN: 2016. p. 245–7.
Monteiro DR, Gorup LF, Silva S, Negri M, de Camargo ER, Oliveira R, Barbosa DB, Henriques M. Silver colloidal nanoparticles: antifungal effect against adhered cells and biofilms of Candida albicans and Candida glabrata. Biofouling. 2011; 27(7):711–9. https://doi.org/10.1080/08927014.2011.599101.
Panáček A, Kolář M, Večeřová R, Prucek R, Soukupová J, Kryštof V, Hamal P, Zbořil R, Kvítek L. Antifungal activity of silver nanoparticles against Candida spp,. Biomaterials. 2009; 30(31):6333–40. https://doi.org/10.1016/j.biomaterials.2009.07.065.
Mahl D, Diendorf J, Meyer-Zaika W, Epple M. Possibilities and limitations of different analytical methods for the size determination of a bimodal dispersion of metallic nanoparticles. Colloids Surf A Physicochem Eng Asp. 2011; 377(1-3):386–92. https://doi.org/10.1016/j.colsurfa.2011.01.031.
Kruk T, Szczepanowicz K, Stefańska J, Socha RP, Warszyński P. Synthesis and antimicrobial activity of monodisperse copper nanoparticles. Colloids Surf B Biointerfaces. 2015; 128:17–22. https://doi.org/10.1016/j.colsurfb.2015.02.009.
Kumar SV, Bafana AP, Pawar P, Faltane M, Rahman A, Dahoumane SA, Kucknoor A, Jeffryes CS. Optimized production of antibacterial copper oxide nanoparticles in a microwave-assisted synthesis reaction using response surface methodology. Colloids Surf A Physicochem Eng Asp. 2019; 573:170–8. https://doi.org/10.1016/j.colsurfa.2019.04.063.
Krishnan R, Vijay A, Vasaviah SK. The MIC and MBC of Silver Nanoparticles against Enterococcus faecalis - A Facultative Anaerobe. J Nanomedicine Nanotechnol. 2015; 06(03). https://doi.org/10.4172/2157-7439.1000285.
Mosmann T. Rapid Colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assay. J Immunol Methods. 1983; 65(1-2):55–63. https://doi.org/10.1016/0022-1759(83)90303-4.
Fertig J. Probit Analysis: A Statistical Treatment of the Sigmoid Response Curve. D. J. Finney. Q Rev Biol. 1948; 23(1):102.
Ondari Nyakundi E, Padmanabhan MN. Green chemistry focus on optimization of silver nanoparticles using response surface methodology (RSM) and mosquitocidal activity: anopheles stephensi (diptera: culicidae). Spectrochim Acta Part A Mol Biomol Spectrosc. 2015; 149:978–84. https://doi.org/10.1016/j.saa.2015.04.057.
Chowdhury S, Yusof F, Faruck MO, Sulaiman N. Process Optimization of Silver Nanoparticle Synthesis Using Response Surface Methodology. In: Procedia Engineering. 4th International Conference on Process Engineering and Advanced Materials, vol. 148. Elsevier Ltd: 2016. p. 992–9. https://doi.org/10.1016/j.proeng.2016.06.552.
Park S, Aiyer R, Sabharwal S, Dugasani SR, Kulkarni A, Tagad CK. Green synthesis of silver nanoparticles and their application for the development of optical fiber based hydrogen peroxide sensor. Sensors Actuators B Chem. 2013; 183:144–9. https://doi.org/10.1016/j.snb.2013.03.106.
Draper NR, Smith H. Applied Regression Analysis, 3rd edn: Wiley-Interscience; 2014, p. 736. https://doi.org/10.1002/9781118625590.
Sharma VK, Yngard RA, Lin Y. Silver nanoparticles: Green synthesis and their antimicrobial activities. Adv Colloid Interface Sci. 2009; 145(1-2):83–96. https://doi.org/10.1016/j.cis.2008.09.002.
GF P, AS P, NP K, SA K, AI E, TG E, TV F, LM. S. Green synthesis of water-soluble nontoxic polymeric nanocomposites containing silver nanoparticles. Int J Nanomedicine. 2014; 9(16):1883–89. https://doi.org/10.2147/IJN.S57865.
Cakić M, Glišić S, Nikolić G, Nikolić GM, Cakić K, Cvetinov M. Synthesis, characterization and antimicrobial activity of dextran sulphate stabilized silver nanoparticles. J Mol Struct. 2016; 1110:156–61. https://doi.org/10.1016/j.molstruc.2016.01.040.
Viudez AJ. Síntesis, caracterización y ensamblaje de nanopartículas de oro protegidas por monocapas moleculares. Ph.d. thesis, Universidad de Córdoba. Córdoba: Servicio de Publicaciones de la Universidad de Córdoba; 2011, p. 321.
Mohanbaba S, Gurunathan S. Differential Biological Activities of Silver Nanoparticles Against Gram-negative and Gram-positive Bacteria: A Novel Approach for Antimicrobial Therapy: Elsevier Inc.; 2016, pp. 193–227. https://doi.org/10.1016/B978-0-323-42864-4.00006-3.
Biao L, Tan S, Wang Y, Guo X, Fu Y, Xu F, Zu Y, Liu Z. Synthesis, characterization and antibacterial study on the chitosan-functionalized Ag nanoparticles. Mater Sci Eng C. 2017; 76(1):73–80. https://doi.org/10.1016/j.msec.2017.02.154.
Packirisamy G, Gogoi S, Chattopadhyay A, Ghosh S. Implications of silver nanoparticle induced cell apoptosis for in vitro gene therapy. Nanotechnology. 2008; 19:075104. https://doi.org/10.1088/0957-4484/19/7/075104.
Brennan SA, Ní Fhoghlú C, Devitt BM, O'Mahony FJ, Brabazon D, Walsh A. Silver nanoparticles and their orthopaedic applications. Bone Joint J. 2015; 97-B:582–9. https://doi.org/10.1302/0301-620x.97b5.33336.
Ahamed M, AlSalhi MS, Siddiqui MKJ. Silver nanoparticle applications and human health. Clin Chimica Acta. 2010; 411(23-24):1841–8.
Nuñez-Anita RE, Acosta-Torres LS, Vilar-Pineda J, Martínez-Espinosa JC, De la fuente-Hernández J, Castaño VM. Toxicology of antimicrobial nanoparticles for prosthetic devices. Int J Nanomedicine. 2014; 9(1):3999–4006.
Kasithevar M, Saravanan M, Prakash P, Kumar H, Ovais M, Barabadi H, Shinwari ZK. Green synthesis of silver nanoparticles using Alysicarpus monilifer leaf extract and its antibacterial activity against MRSA and CoNS isolates in HIV patients. J Interdiscip Nanomedicine. 2017; 2(2):131–41. https://doi.org/10.1002/jin2.26.
Barbalinardo M, Caicci F, Cavallini M, Gentili D. Protein Corona Mediated Uptake and Cytotoxicity of Silver Nanoparticles in Mouse Embryonic Fibroblast. Small. 2018; 14(34):1–8. https://doi.org/10.1002/smll.201801219.
Thomas R, Mathew S, Nayana AR, Mathews J, Radhakrishnan EK. Microbially and phytofabricated AgNPs with different mode of bactericidal action were identified to have comparable potential for surface fabrication of central venous catheters to combat Staphylococcus aureus biofilm. J Photochem Photobiol B Biol. 2017; 171(February):96–103. https://doi.org/10.1016/j.jphotobiol.2017.04.036.
Du J, Hu Z, Yu Z, Li H, Pan J, Zhao D, Bai Y. Antibacterial activity of a novel Forsythia suspensa fruit mediated green silver nanoparticles against food-borne pathogens and mechanisms investigation. Mater Sci Eng C. 2019; 102(136):247–53. https://doi.org/10.1016/j.msec.2019.04.031.
We would like to acknowledge with gratitude the support of the Centro de Bioingeniería, Universidad Pontificia Bolivariana, and the Laboratorio de Nanobiotecnología, Facultad de Ciencias Biológicas, Universidad Autónoma de Nuevo León.
This work has been supported by COLCIENCIAS - República de Colombia and the Universidad Pontificia Bolivariana, research project No. FP44842-016-2017. The funding financed the work of the researchers and provided resources for the design of the study; the collection, analysis, and interpretation of data; and the writing of the manuscript.
Centro de Bioingeniería, Grupo de investigaciones en Bioingeniería, Universidad Pontificia Bolivariana, circular 1 No. 73-76, Medellín, 050031, Colombia
Catalina Quintero-Quiroz
, Natalia Acevedo
& Vera Z. Pérez
Grupo de Investigación de Biología de Sistemas,Universidad Pontificia Bolivariana, Cl 78B No. 72A-109, Medellín, 050031, Colombia
Jenniffer Zapata-Giraldo
& Luz E. Botero
Universidad de Antioquia, Cl.67 No. 53-108, Medellín, 050010, Colombia
Julián Quintero
Laboratorio de Inmunología y Virología, Universidad Autónoma de Nuevo León, Ave. Pedro de Alba S/N Ciudad Universitaria San Nicolás de los Garza, Monterrey, 64450, México
Diana Zárate-Triviño
Grupo de Investigación Sobre Nuevos Materiales, Universidad Pontificia Bolivariana, Cq.1 No. 70-01, Medellín, 050031, Colombia
Jorge Saldarriaga
Facultad de Ingeniería Eléctrica y Electrónica, Medellín, 050031, Colombia
Vera Z. Pérez
CQ carried out the experimental design, syntheses, physico-chemical characterization, evaluation of AgNPs antimicrobial effect, analyzed results, and drafted the manuscript. NA carried out the cytotoxic evaluation of AgNPs and drafted part of the manuscript. JZ and LB participated in the evaluation of the antimicrobial effect of AgNPs. JQ contributed to the design of the experiment and analysis of the results. DZ participated in the experimental design and cytotoxic evaluation of AgNPs. JS and VZ critically reviewed the article. All authors read and approved the final manuscript.
Correspondence to Catalina Quintero-Quiroz.
Not applicable.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Quintero-Quiroz, C., Acevedo, N., Zapata-Giraldo, J. et al. Optimization of silver nanoparticle synthesis by chemical reduction and evaluation of its antimicrobial and toxic activity. Biomater Res 23, 27 (2019) doi:10.1186/s40824-019-0173-y
DOI: https://doi.org/10.1186/s40824-019-0173-y
In the previous chapters, we discussed Bode plots, which use two separate plots for the magnitude and the phase as functions of frequency. Let us now discuss polar plots. A polar plot combines the magnitude and the phase in a single plot. Here, the magnitudes are represented by normal (linear) values only.
The polar form of $G(j\omega)H(j\omega)$ is
$$G(j\omega)H(j\omega)=|G(j\omega)H(j\omega)| \angle G(j\omega)H(j\omega)$$
The polar plot is a plot that is drawn between the magnitude and the phase angle of $G(j\omega)H(j\omega)$ by varying $\omega$ from zero to ∞. The polar graph sheet is shown in the following figure.
This graph sheet consists of concentric circles and radial lines. The concentric circles and the radial lines represent the magnitudes and the phase angles, respectively. Angles are represented by positive values in the anticlockwise direction and by negative values in the clockwise direction. For example, the angle 270° in the anticlockwise direction is equal to the angle −90° in the clockwise direction.
Rules for Drawing Polar Plots
Follow these rules for plotting the polar plots.
Substitute, $s = j\omega$ in the open loop transfer function.
Write the expressions for magnitude and the phase of $G(j\omega)H(j\omega)$.
Find the starting magnitude and the phase of $G(j\omega)H(j\omega)$ by substituting $\omega = 0$. So, the polar plot starts with this magnitude and the phase angle.
Find the ending magnitude and the phase of $G(j\omega)H(j\omega)$ by substituting $\omega = \infty$. So, the polar plot ends with this magnitude and the phase angle.
Check whether the polar plot intersects the real axis, by making the imaginary term of $G(j\omega)H(j\omega)$ equal to zero and find the value(s) of $\omega$.
Check whether the polar plot intersects the imaginary axis, by making real term of $G(j\omega)H(j\omega)$ equal to zero and find the value(s) of $\omega$.
For drawing polar plot more clearly, find the magnitude and phase of $G(j\omega)H(j\omega)$ by considering the other value(s) of $\omega$.
Example
Consider the open loop transfer function of a closed loop control system.
$$G(s)H(s)=\frac{5}{s(s+1)(s+2)}$$
Let us draw the polar plot for this control system using the above rules.
Step 1 − Substitute, $s = j\omega$ in the open loop transfer function.
$$G(j\omega)H(j\omega)=\frac{5}{j\omega(j\omega+1)(j\omega+2)}$$
The magnitude of the open loop transfer function is
$$M=\frac{5}{\omega(\sqrt{\omega^2+1})(\sqrt{\omega^2+4})}$$
The phase angle of the open loop transfer function is
$$\phi=-90^\circ-\tan^{-1}\omega-\tan^{-1}\frac{\omega}{2}$$
Step 2 − The following table shows the magnitude and the phase angle of the open loop transfer function at $\omega = 0$ rad/sec and $\omega = \infty$ rad/sec.

Frequency (rad/sec) | Magnitude | Phase angle (degrees)
0 | ∞ | −90 or 270
∞ | 0 | −270 or 90
So, the polar plot starts at (∞, −90°) and ends at (0, −270°). The first and the second terms within the brackets indicate the magnitude and phase angle, respectively.
Step 3 − Based on the starting and the ending polar co-ordinates, this polar plot will intersect the negative real axis. The phase angle corresponding to the negative real axis is −180° or 180°. So, by equating the phase angle of the open loop transfer function to either −180° or 180°, we will get the $\omega$ value as $\sqrt{2}$.
By substituting $\omega = \sqrt{2}$ in the magnitude of the open loop transfer function, we will get $M = 0.83$. Therefore, the polar plot intersects the negative real axis when $\omega = \sqrt{2}$ and the polar coordinate is (0.83, −180°).
So, we can draw the polar plot with the above information on the polar graph sheet.
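As a quick numerical cross-check of Steps 1 to 3 above, here is a minimal Python sketch (illustrative only, assuming NumPy is available; it is not part of the original tutorial):

```python
import numpy as np

# Open-loop transfer function G(s)H(s) = 5 / (s (s + 1) (s + 2)), evaluated at s = j*omega
def gh(omega):
    s = 1j * omega
    return 5.0 / (s * (s + 1.0) * (s + 2.0))

# Predicted real-axis crossing at omega = sqrt(2)
omega_c = np.sqrt(2.0)
val = gh(omega_c)
print(abs(val))                   # ~0.833 -> magnitude M at the crossing
print(np.degrees(np.angle(val)))  # ~ +/-180 degrees -> the negative real axis

# Sweep omega from (near) zero toward infinity to trace the polar plot
omegas = np.logspace(-2, 2, 500)
vals = gh(omegas)
# (vals.real, vals.imag) give the Cartesian coordinates of the polar-plot curve
```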
January 2011, 29(1): 141-167. doi: 10.3934/dcds.2011.29.141
Time-dependent attractor for the Oscillon equation
Francesco Di Plinio 1, Gregory S. Duane 2, and Roger Temam 3
Indiana University Mathematics Department, Bloomington, IN 47405, United States
Rosenstiel School of Marine and Atmospheric Sciences, University of Miami, Miami, FL 33149, United States
The Institute for Scientific Computing and Applied Mathematics, Indiana University, 831 E. 3rd St., Rawles Hall, Bloomington, IN 47405
Received January 2010 Revised May 2010 Published September 2010
We investigate the asymptotic behavior of the nonautonomous evolution problem generated by the Oscillon equation
$$\partial_{tt} u(x,t) + H\,\partial_t u(x,t) - e^{-2Ht}\,\partial_{xx} u(x,t) + V'(u(x,t)) = 0, \quad (x,t)\in (0,1) \times \R,$$
with periodic boundary conditions, where $H>0$ is the Hubble constant and $V$ is a nonlinear potential of arbitrary polynomial growth. After constructing a suitable dynamical framework to deal with the explicit time dependence of the energy of the solution, we establish the existence of a regular global attractor $\A=\A(t)$. The kernel sections $\A(t)$ have finite fractal dimension.
Keywords: nonautonomous attractors, Oscillon equation, fractal dimension.
Mathematics Subject Classification: Primary: 37L30, 35B41; Secondary: 83D0.
Citation: Francesco Di Plinio, Gregory S. Duane, Roger Temam. Time-dependent attractor for the Oscillon equation. Discrete & Continuous Dynamical Systems - A, 2011, 29 (1) : 141-167. doi: 10.3934/dcds.2011.29.141
Describe a force field and calculate the strength of an electric field due to a point charge.
Calculate the force exerted on a test charge by an electric field.
Explain the relationship between electrical force (F) on a test charge and electrical field strength (E).
Contact forces, such as between a baseball and a bat, are explained on the small scale by the interaction of the charges in atoms and molecules in close proximity. They interact through forces that include the Coulomb force. Action at a distance is a force between objects that are not close enough for their atoms to "touch." That is, they are separated by more than a few atomic diameters.
For example, a charged rubber comb attracts neutral bits of paper from a distance via the Coulomb force. It is very useful to think of an object being surrounded in space by a force field. The force field carries the force to another object (called a test object) some distance away.
A field is a way of conceptualizing and mapping the force that surrounds any object and acts on another object at a distance without apparent physical connection. For example, the gravitational field surrounding the earth (and all other masses) represents the gravitational force that would be experienced if another mass were placed at a given point within the field.
In the same way, the Coulomb force field surrounding any charge extends throughout space. Using Coulomb's law, [latex]{F = k|{q_1}{q_2}|/r^2}[/latex], its magnitude is given by the equation [latex]{F = k|qQ|/r^2}[/latex], for a point charge (a particle having a charge [latex]{Q}[/latex]) acting on a test charge [latex]{q}[/latex] at a distance [latex]{r}[/latex] (see Figure 1). Both the magnitude and direction of the Coulomb force field depend on [latex]{Q}[/latex] and the test charge [latex]{q}[/latex].
Figure 1. The Coulomb force field due to a positive charge Q is shown acting on two different charges. Both charges are the same distance from Q. (a) Since q1 is positive, the force F1 acting on it is repulsive. (b) The charge q2 is negative and greater in magnitude than q1, and so the force F2 acting on it is attractive and stronger than F1. The Coulomb force field is thus not unique at any point in space, because it depends on the test charges q1 and q2 as well as the charge Q.
To simplify things, we would prefer to have a field that depends only on [latex]{Q}[/latex] and not on the test charge [latex]{q}[/latex]. The electric field is defined in such a manner that it represents only the charge creating it and is unique at every point in space. Specifically, the electric field [latex]{E}[/latex] is defined to be the ratio of the Coulomb force to the test charge:
[latex]{E = \frac{F}{q}},[/latex]
where [latex]{F}[/latex] is the electrostatic force (or Coulomb force) exerted on a positive test charge
[latex]{q}[/latex]. It is understood that [latex]{E}[/latex] is in the same direction as
[latex]{F}[/latex]. It is also assumed that [latex]{q}[/latex] is so small that it does not alter the charge distribution creating the electric field. The units of electric field are newtons per coulomb (N/C). If the electric field is known, then the electrostatic force on any charge [latex]{q}[/latex] is simply obtained by multiplying charge times electric field, or [latex]{ \textbf{F} = q \textbf{E}}[/latex]. Consider the electric field due to a point charge [latex]{Q}[/latex]. According to Coulomb's law, the force it exerts on a test charge [latex]{q}[/latex] is [latex]{ F = k|qQ|/r^2 }[/latex] . Thus the magnitude of the electric field, [latex]{E}[/latex], for a point charge is
[latex]{E = \left| \frac{F}{q} \right| = k \left| \frac{qQ}{qr^2} \right| = k \frac{|Q|}{r^2}}.[/latex]
Since the test charge cancels, we see that
[latex]{E = k \frac{|Q|}{r^2}}.[/latex]
The electric field is thus seen to depend only on the charge [latex]{Q}[/latex] and the distance [latex]{r}[/latex]; it is completely independent of the test charge [latex]{q}[/latex].
Example 1: Calculating the Electric Field of a Point Charge
Calculate the strength and direction of the electric field [latex]{E}[/latex] due to a point charge of 2.00 nC (nano-Coulombs) at a distance of 5.00 mm from the charge.
We can find the electric field created by a point charge by using the equation [latex]{E = kQ/r^2}[/latex].
Here [latex]{ Q = 2.00 \times 10^{-9} \;\textbf{C}}[/latex] and [latex]{r = 5.00 \times 10^{-3} \;\text{m}}[/latex]. Entering those values into the above equation gives
[latex]$\begin{array}{r @{{}={}} l} {E} & {k \frac{Q}{r^2}} \\[1em] & {(8.99 \times 10^9 \; \textbf{N} \cdot \text{m}^2 / \textbf{C}^2) \times \frac{(2.00 \times 10^{-9} \;\textbf{C})}{(5.00 \times 10^{-3} \;\text{m})^2}} \\[1em] & {7.19 \times 10^5 \;\textbf{N} / \textbf{C}.} \end{array}$[/latex]
This electric field strength is the same at any point 5.00 mm away from the charge [latex]{Q}[/latex] that creates the field. It is positive, meaning that it has a direction pointing away from the charge [latex]{Q}[/latex].
Example 2: Calculating the Force Exerted on a Point Charge by an Electric Field
What force does the electric field found in the previous example exert on a point charge of [latex]{-0.250 \;\mu \textbf{C}}[/latex]?
Since we know the electric field strength and the charge in the field, the force on that charge can be calculated using the definition of electric field [latex]{\textbf{E} = \textbf{F}/q}[/latex] rearranged to [latex]{ \textbf{F} = q \textbf{E}}[/latex].
The magnitude of the force on a charge [latex]{q = -0.250 \;\mu\textbf{C}}[/latex] exerted by a field of strength [latex]{E = 7.20 \times 10^5}[/latex] N/C is thus,
[latex]$\begin{array}{r @{{}={}} l} {F} & {|q|E} \\[1em] & {(0.250 \times 10^{-6} \;\textbf{C})(7.20 \times 10^5 \;\textbf{N} / \textbf{C})} \\[1em] & {0.180 \;\textbf{N}.} \end{array}$[/latex]
Because [latex]{q}[/latex] is negative, the force is directed opposite to the direction of the field.
The force is attractive, as expected for unlike charges. (The field was created by a positive charge and here acts on a negative charge.) The charges in this example are typical of common static electricity, and the modest attractive force obtained is similar to forces experienced in static cling and similar situations.
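As a sanity check on both examples, the following short Python sketch reproduces the two numbers (the function names are our own illustrative choices, not from the text):

```python
K = 8.99e9  # Coulomb constant, N·m²/C²

def e_field_point_charge(Q, r):
    """Magnitude of the electric field a distance r (m) from a point charge Q (C)."""
    return K * abs(Q) / r**2

def force_on_charge(q, E):
    """Magnitude of the electrostatic force on a charge q (C) in a field E (N/C)."""
    return abs(q) * E

# Example 1: field of a 2.00 nC charge at 5.00 mm
E = e_field_point_charge(2.00e-9, 5.00e-3)
print(f"E = {E:.3e} N/C")  # ~7.19e5 N/C

# Example 2: force on a -0.250 uC charge in that field
F = force_on_charge(-0.250e-6, E)
print(f"F = {F:.3f} N")    # ~0.180 N, directed opposite the field since q < 0
```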
PhET Explorations: Electric Field of Dreams
Play ball! Add charges to the Field of Dreams and see how they react to the electric field. Turn on a background electric field and adjust the direction and magnitude.
Figure 2. Electric Field of Dreams
The electrostatic force field surrounding a charged object extends out into space in all directions.
The electrostatic force exerted by a point charge on a test charge at a distance [latex]{r}[/latex] depends on the charge of both charges, as well as the distance between the two.
The electric field [latex]\textbf{E}[/latex] is defined to be
[latex]{\textbf{E} = \frac{\textbf{F}}{q}},[/latex]
where [latex]\textbf{F}[/latex] is the Coulomb or electrostatic force exerted on a small positive test charge [latex]{q}[/latex]. [latex]\textbf{E}[/latex] has units of N/C.
The magnitude of the electric field [latex]\textbf{E}[/latex] created by a point charge [latex]{Q}[/latex] is
[latex]{\textbf{E} = k \frac{|Q|}{r^2}},[/latex]
where [latex]{r}[/latex] is the distance from [latex]{Q}[/latex]. The electric field [latex]{E}[/latex] is a vector and fields due to multiple charges add like vectors.
Conceptual Questions
1: Why must the test charge [latex]{q}[/latex] in the definition of the electric field be vanishingly small?
2: Are the direction and magnitude of the Coulomb force unique at a given point in space? What about the electric field?
Problem Exercises
1: What is the magnitude and direction of an electric field that exerts a [latex]{2.00 \times 10^{-5} \;\textbf{N}}[/latex] upward force on a [latex]{-1.75 \;\mu \textbf{C}}[/latex] charge?
2: What is the magnitude and direction of the force exerted on a [latex]{3.50 \;\mu \textbf{C}}[/latex] charge by a 250 N/C electric field that points due east?
3: Calculate the magnitude of the electric field 2.00 m from a point charge of 5.00 mC (such as found on the terminal of a Van de Graaff).
4: (a) What magnitude point charge creates a 10,000 N/C electric field at a distance of 0.250 m? (b) How large is the field at 10.0 m?
5: Calculate the initial (from rest) acceleration of a proton in a [latex]{5.00 \times 10^6 \;\textbf{N} / \textbf{C}}[/latex] electric field (such as created by a research Van de Graaff). Explicitly show how you follow the steps in the Problem-Solving Strategy for electrostatics.
6: (a) Find the direction and magnitude of an electric field that exerts a [latex]{4.80 \times 10^{-17} \;\textbf{N}}[/latex] westward force on an electron. (b) What magnitude and direction force does this field exert on a proton?
Glossary
field
a map of the amount and direction of a force acting on other objects, extending out into space
point charge
A charged particle, designated [latex]{Q}[/latex], generating an electric field
test charge
A particle (designated [latex]{q}[/latex]) with either a positive or negative charge set down within an electric field generated by a point charge
Solutions
2: [latex]{8.75 \times 10^{-4} \;\textbf{N}}[/latex]
4: (a) [latex]{6.94 \times 10^{-8} \;\textbf{C}}[/latex]
(b) [latex]{6.25 \;\textbf{N} / \textbf{C}}[/latex]
6: (a) 300 N/C (east)
(b) [latex]{4.80 \times 10^{-17} \;\textbf{N (east)}}[/latex]
A Maximal Ideal in the Ring of Continuous Functions and a Quotient Ring
Let $R$ be the ring of all continuous functions on the interval $[0, 2]$.
Let $I$ be the subset of $R$ defined by
\[I:=\{ f(x) \in R \mid f(1)=0\}.\]
Then prove that $I$ is an ideal of the ring $R$.
Moreover, show that $I$ is maximal and determine $R/I$.
Number Theoretical Problem Proved by Group Theory. $a^{2^n}+b^{2^n}\equiv 0 \pmod{p}$ Implies $2^{n+1}|p-1$.
Let $a, b$ be relatively prime integers and let $p$ be a prime number.
Suppose that we have
\[a^{2^n}+b^{2^n}\equiv 0 \pmod{p}\] for some positive integer $n$.
Then prove that $2^{n+1}$ divides $p-1$.
Abelian Normal subgroup, Quotient Group, and Automorphism Group
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$.
Surjective Group Homomorphism to $\Z$ and Direct Product of Abelian Groups
Let $G$ be an abelian group and let $f: G\to \Z$ be a surjective group homomorphism.
Prove that we have an isomorphism of groups:
\[G \cong \ker(f)\times \Z.\]
If Quotient $G/H$ is Abelian Group and $H < K \triangleleft G$, then $G/K$ is Abelian
Let $H$ and $K$ be normal subgroups of a group $G$.
Suppose that $H < K$ and the quotient group $G/H$ is abelian.
Then prove that $G/K$ is also an abelian group.
Quotient Group of Abelian Group is Abelian
Let $G$ be an abelian group and let $N$ be a normal subgroup of $G$.
Then prove that the quotient group $G/N$ is also an abelian group.
Give a Formula For a Linear Transformation From $\R^2$ to $\R^3$
Let $\{\mathbf{v}_1, \mathbf{v}_2\}$ be a basis of the vector space $\R^2$, where
\[\mathbf{v}_1=\begin{bmatrix}
1 \\
\end{bmatrix} \text{ and } \mathbf{v}_2=\begin{bmatrix}
\end{bmatrix}.\] The action of a linear transformation $T:\R^2\to \R^3$ on the basis $\{\mathbf{v}_1, \mathbf{v}_2\}$ is given by
T(\mathbf{v}_1)=\begin{bmatrix}
\end{bmatrix} \text{ and } T(\mathbf{v}_2)=\begin{bmatrix}
Find the formula of $T(\mathbf{x})$, where
\[\mathbf{x}=\begin{bmatrix}
x \\
y
\end{bmatrix}\in \R^2.\]
Each of the following sets is not a subspace of the specified vector space. For each set, give a reason why it is not a subspace.
(1) \[S_1=\left \{\, \begin{bmatrix}
x_1 \\
x_2 \\
x_3
\end{bmatrix} \in \R^3 \quad \middle | \quad x_1\geq 0 \,\right \}\] in the vector space $\R^3$.
(2) \[S_2=\left \{\, \begin{bmatrix}
x_1 \\
x_2 \\
x_3
\end{bmatrix} \in \R^3 \quad \middle | \quad x_1-4x_2+5x_3=2 \,\right \}\] in the vector space $\R^3$.
(3) \[S_3=\left \{\, \begin{bmatrix}
x \\
y
\end{bmatrix}\in \R^2 \quad \middle | \quad y=x^2 \,\right \}\] in the vector space $\R^2$.
(4) Let $P_4$ be the vector space of all polynomials of degree $4$ or less with real coefficients.
\[S_4=\{ f(x)\in P_4 \mid f(1) \text{ is an integer}\}\] in the vector space $P_4$.
(5) \[S_5=\{ f(x)\in P_4 \mid f(1) \text{ is a rational number}\}\] in the vector space $P_4$.
(6) Let $M_{2 \times 2}$ be the vector space of all $2\times 2$ real matrices.
\[S_6=\{ A\in M_{2\times 2} \mid \det(A) \neq 0\} \] in the vector space $M_{2\times 2}$.
(7) \[S_7=\{ A\in M_{2\times 2} \mid \det(A)=0\} \] in the vector space $M_{2\times 2}$. (A numerical counterexample is sketched after this list.)
(Linear Algebra Exam Problem, the Ohio State University)
(8) Let $C[a, b]$ be the vector space of all real continuous functions defined on the interval $[a, b]$.
\[S_8=\{ f(x)\in C[-2,2] \mid f(-1)f(1)=0\} \] in the vector space $C[-2, 2]$.
(9) \[S_9=\{ f(x) \in C[-1, 1] \mid f(x)\geq 0 \text{ for all } -1\leq x \leq 1\}\] in the vector space $C[-1, 1]$.
(10) Let $C^2[a, b]$ be the vector space of all real-valued functions $f(x)$ defined on $[a, b]$, where $f(x), f'(x)$, and $f^{\prime\prime}(x)$ are continuous on $[a, b]$. Here $f'(x), f^{\prime\prime}(x)$ are the first and second derivative of $f(x)$.
\[S_{10}=\{ f(x) \in C^2[-1, 1] \mid f^{\prime\prime}(x)+f(x)=\sin(x) \text{ for all } -1\leq x \leq 1\}\] in the vector space $C^2[-1, 1]$.
(11) Let $S_{11}$ be the set of real polynomials of degree exactly $k$, where $k \geq 1$ is an integer, in the vector space $P_k$.
(12) Let $V$ be a vector space and $W \subset V$ a vector subspace. Define the subset $S_{12}$ to be the complement of $W$,
\[ V \setminus W = \{ \mathbf{v} \in V \mid \mathbf{v} \not\in W \}.\]
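As announced in item (7), here is a small numerical check (a hypothetical Python counterexample, not part of the original problem set) showing why $S_7$ fails closure under addition:

```python
import numpy as np

# S_7 = { A in M_{2x2} : det(A) = 0 } is not closed under addition:
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # det(A) = 0, so A is in S_7
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])   # det(B) = 0, so B is in S_7
print(np.linalg.det(A + B))  # det(A + B) = 1.0 != 0, so A + B is NOT in S_7
```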
If 2 by 2 Matrices Satisfy $A=AB-BA$, then $A^2$ is Zero Matrix
Let $A, B$ be complex $2\times 2$ matrices satisfying the relation
\[A=AB-BA.\]
Prove that $A^2=O$, where $O$ is the $2\times 2$ zero matrix.
Normal Nilpotent Matrix is Zero Matrix
A complex square ($n\times n$) matrix $A$ is called normal if
\[A^* A=A A^*,\] where $A^*$ denotes the conjugate transpose of $A$, that is $A^*=\bar{A}^{\trans}$.
A matrix $A$ is said to be nilpotent if there exists a positive integer $k$ such that $A^k$ is the zero matrix.
(a) Prove that if $A$ is both normal and nilpotent, then $A$ is the zero matrix.
You may use the fact that every normal matrix is diagonalizable.
(b) Give a proof of (a) without referring to eigenvalues and diagonalization.
(c) Let $A, B$ be $n\times n$ complex matrices. Prove that if $A$ is normal and $B$ is nilpotent such that $A+B=I$, then $A=I$, where $I$ is the $n\times n$ identity matrix.
Application of Field Extension to Linear Combination
Consider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$.
Let $\alpha$ be any real root of $f(x)$.
Then prove that $\sqrt{2}$ can not be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$.
Irreducible Polynomial $x^3+9x+6$ and Inverse Element in Field Extension
Prove that the polynomial
\[f(x)=x^3+9x+6\] is irreducible over the field of rational numbers $\Q$.
Let $\theta$ be a root of $f(x)$.
Then find the inverse of $1+\theta$ in the field $\Q(\theta)$.
Irreducible Polynomial Over the Ring of Polynomials Over Integral Domain
Let $R$ be an integral domain and let $S=R[t]$ be the polynomial ring in $t$ over $R$. Let $n$ be a positive integer.
Then prove that the polynomial \[f(x)=x^n-t\] in the ring $S[x]$ is irreducible in $S[x]$.
Special Linear Group is a Normal Subgroup of General Linear Group
Let $G=\GL(n, \R)$ be the general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices.
Consider the subset of $G$ defined by
\[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$.
The subgroup $\SL(n,\R)$ is called the special linear group.
Beautiful Formulas for pi=3.14…
The number $\pi$ is defined as the ratio of a circle's circumference $C$ to its diameter $d$:
\[\pi=\frac{C}{d}.\]
The decimal expansion of $\pi$ starts with 3.14… and never ends.
I will show you several beautiful formulas for $\pi$.
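For instance, one classic formula of this kind is the Leibniz series (shown here as an illustration; it is not necessarily among the formulas of the original post):
\[\frac{\pi}{4}=\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots.\]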
Linear Transformation $T(X)=AX-XA$ and Determinant of Matrix Representation
Let $V$ be the vector space of all $n\times n$ real matrices.
Let us fix a matrix $A\in V$.
Define a map $T: V\to V$ by
\[ T(X)=AX-XA\] for each $X\in V$.
(a) Prove that $T:V\to V$ is a linear transformation.
(b) Let $B$ be a basis of $V$. Let $P$ be the matrix representation of $T$ with respect to $B$. Find the determinant of $P$.
Linear Transformation to 1-Dimensional Vector Space and Its Kernel
Let $n$ be a positive integer. Let $T:\R^n \to \R$ be a non-zero linear transformation.
Prove the followings.
(a) The nullity of $T$ is $n-1$. That is, the dimension of the nullspace of $T$ is $n-1$.
(b) Let $B=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}\}$ be a basis of the nullspace $\calN(T)$ of $T$.
Let $\mathbf{w}$ be an $n$-dimensional vector that is not in $\calN(T)$. Then
\[B'=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}, \mathbf{w}\}\] is a basis of $\R^n$.
(c) Each vector $\mathbf{u}\in \R^n$ can be expressed as
\[\mathbf{u}=\mathbf{v}+\frac{T(\mathbf{u})}{T(\mathbf{w})}\mathbf{w}\] for some vector $\mathbf{v}\in \calN(T)$.
Quiz 8. Determine Subsets are Subspaces: Functions Taking Integer Values / Set of Skew-Symmetric Matrices
(a) Let $C[-1,1]$ be the vector space over $\R$ of all real-valued continuous functions defined on the interval $[-1, 1]$.
Consider the subset $F$ of $C[-1, 1]$ defined by
\[F=\{ f(x)\in C[-1, 1] \mid f(0) \text{ is an integer}\}.\] Prove or disprove that $F$ is a subspace of $C[-1, 1]$.
(b) Let $n$ be a positive integer.
An $n\times n$ matrix $A$ is called skew-symmetric if $A^{\trans}=-A$.
Let $M_{n\times n}$ be the vector space over $\R$ of all $n\times n$ real matrices.
Consider the subset $W$ of $M_{n\times n}$ defined by
\[W=\{A\in M_{n\times n} \mid A \text{ is skew-symmetric}\}.\] Prove or disprove that $W$ is a subspace of $M_{n\times n}$.
Idempotent Linear Transformation and Direct Sum of Image and Kernel
Let $A$ be the matrix for a linear transformation $T:\R^n \to \R^n$ with respect to the standard basis of $\R^n$.
We assume that $A$ is idempotent, that is, $A^2=A$.
Then prove that
\[\R^n=\im(T) \oplus \ker(T).\]
If the Order of a Group is Even, then the Number of Elements of Order 2 is Odd
Prove that if $G$ is a finite group of even order, then the number of elements of $G$ of order $2$ is odd.
The Annals of Applied Probability
Ann. Appl. Probab.
Volume 14, Number 4 (2004), 2016-2037.
On sampling of stationary increment processes
J. M. P. Albin
Under a complex technical condition, similar to those used in extreme value theory, we find the rate q(ɛ)⁻¹ at which a stochastic process with stationary increments ξ should be sampled, for the sampled process ξ(⌊⋅/q(ɛ)⌋q(ɛ)) to deviate from ξ by at most ɛ, with a given probability, asymptotically as ɛ↓0. The canonical application is to discretization errors in computer simulation of stochastic processes.
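To make the sampling scheme concrete, here is a small, purely illustrative Python sketch (not from the paper): it simulates a Brownian path (a process with stationary increments) and measures the sup-deviation between the path and its sampled version ξ(⌊t/q⌋q) for a fixed grid width q:

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian path on [0, 1], simulated on a fine dyadic grid
T, n_fine = 1.0, 2**16
dt = T / n_fine
t = np.arange(n_fine + 1) * dt
xi = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_fine))])

# Piecewise-constant sampled version xi(floor(t/q) * q) for grid width q
q = 2.0**-8
idx = (np.floor(t / q) * q / dt).astype(int)  # fine-grid indices of floor(t/q)*q
deviation = np.max(np.abs(xi - xi[idx]))
print(f"sup deviation for q = {q}: {deviation:.4f}")
```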
Ann. Appl. Probab., Volume 14, Number 4 (2004), 2016-2037.
First available in Project Euclid: 5 November 2004
https://projecteuclid.org/euclid.aoap/1099674087
doi:10.1214/105051604000000468
Primary: 60G10: Stationary processes 60G70: Extreme value theory; extremal processes
Secondary: 60G15: Gaussian processes 68U20: Simulation [See also 65Cxx]
Fractional stable motion; Lévy process; sampling; self-similar process; stable process; stationary increment process
Albin, J. M. P. On sampling of stationary increment processes. Ann. Appl. Probab. 14 (2004), no. 4, 2016--2037. doi:10.1214/105051604000000468. https://projecteuclid.org/euclid.aoap/1099674087
BMC Ecology and Evolution
Narrow environmental niches predict land-use responses and vulnerability of land snail assemblages
Katja Wehner (ORCID: orcid.org/0000-0002-0792-0542) 1,
Carsten Renker 2,
Nadja K. Simons 1,
Wolfgang W. Weisser 3 &
Nico Blüthgen 1
BMC Ecology and Evolution volume 21, Article number: 15 (2021)
Abstract
How land use shapes the biodiversity and functional trait composition of animal communities is an important and frequently addressed question. Land-use intensification is associated with changes in abiotic and biotic conditions including environmental homogenization and may act as an environmental filter to shape the composition of species communities. Here, we investigated the responses of land snail assemblages to land-use intensity and abiotic soil conditions (pH, soil moisture), and analyzed their trait composition (shell size, number of offspring, light preference, humidity preference, inundation tolerance, and drought resistance). We characterized the species' responses to land use to identify 'winners' (species that were more common on sites with high land-use intensity than expected) or 'losers' of land-use intensity (more common on plots with low land-use intensity) and their niche breadth. As a proxy for the environmental 'niche breadth' of each snail species, based on the conditions of the sites in which it occurred, we defined a 5-dimensional niche hypervolume. We then tested whether land-use responses and niches contribute to the species' potential vulnerability suggested by the Red List status.
Our results confirmed that the trait composition of snail communities was significantly altered by land-use intensity and abiotic conditions in both forests and grasslands. While only 4% of the species that occurred in forests were significant losers of intensive forest management, the proportion of losers in grasslands was much higher (21%). However, the species' response to land-use intensity and soil conditions was largely independent of specific traits and the species' Red List status (vulnerability). Instead, vulnerability was only mirrored in the species' rarity and its niche hypervolume: threatened species were characterized by low occurrence in forests and low occurrence and abundance in grasslands and by a narrow niche quantified by land-use components and abiotic factors.
Land use and environmental responses of land snails were poorly predicted by specific traits or the species' vulnerability, suggesting that it is important to consider complementary risks and multiple niche dimensions.
Background
Land use disturbs natural environments, changes local geographical landscape structure and alters local biotic and abiotic conditions, e.g. microclimate [1,2,3,4,5,6]. Reduction of habitat and microhabitat heterogeneity may lead to a homogenization of plant and animal communities, trigger a reduction in functional diversity and thus lower the capacity of an ecosystem to buffer disturbances [7, 8]. Homogenization of animal communities by increasing land-use intensity has been shown for several taxa; e.g., in managed grasslands, 34% of plant- and leafhopper species were significant losers (i.e. species that were significantly less abundant under conditions of high land-use intensity) of land-use intensification, and particularly increases in mowing frequency had a negative effect [9].
Land snails are an important macroinvertebrate group that is directly and indirectly involved in ecosystem processes such as litter decomposition or nutrient cycling [10, 11]. There is a natural north–south and west–east gradient of snail species distributions and abundances within Europe; species richness increases from north to south and to a lesser extent from west to east which is linked to regional and ecological differences and the land-use history [12]. Snail species also differ in their tolerance to abiotic factors (pH, soil moisture), and vary greatly in life-history parameters (e.g., lifespan, development, number of offspring, food requirement, shell size) and general behavior [13] which also affect their distribution. Variation in body size and diet seems to be especially important for structuring snail communities [14] as well as the species-specific tolerance to a variety of environmental factors which can result in nested communities at a specific site [15, 16].
Studies on trait composition of snail communities in Sweden pointed to the importance of the species' niche-width and the importance of local environmental conditions over spatial variables [17]. While tolerance-related traits such as humidity preference or inundation tolerance were positively associated with abiotic soil moisture, a large amount of variation remained unexplained [17], which may be related to land use. The impact of land use and its intensity on land snail communities is less intensively investigated although most land snail species are characterized by a limited mobility and therefore are vulnerable to human introduced habitat changes [15, 18,19,20]. Changes in abiotic factors such as soil pH, soil moisture, soil calcium content, leaf litter depth, soil surface structure or the type of vegetation have been shown to alter snail communities [15, 21,22,23,24,25]. Also land-use factors such as the proportion of wood harvested in forests or the amount of grazing livestock in grasslands can influence snail communities directly and/or indirectly [20, 26, 27]. In addition, disturbances by different land-use types and intensities may alter the trait composition of snail communities on the regional level; i.e. the presence of coniferous timber may favor snail communities with differing traits than communities in natural deciduous stands.
In the present study, we investigated land snail communities at forest and grassland sites in different regions of Germany, which were characterized by different land-use types and intensities. We aimed to test whether the trait composition of the snail community is influenced by land-use intensity (and soil conditions). We then tested the responses of each snail species to land-use intensity; 'winners' significantly increase in abundance and occurrence with land-use intensity, whereas 'losers' significantly decrease compared to the null model [9, 39]. We then compared these responses with the snail species' habitat association; i.e. we asked whether species that only occasionally occur in forests are more affected by forest management than species that are specialized to forest habitats. Conversely, do grassland specialists suffer less from grassland management than species only occasionally occurring in grasslands? Finally, we compared our findings on the land-use effects and the 'winner/loser' status of a species with its putative vulnerability (Red List status), to test whether the losers of land-use intensification in forests and grasslands are those species that are classified as vulnerable.
Response to land use
The trait composition of land snail communities differed strongly between forests and grasslands within regions, indicated by a strong differentiation of community-weighted mean trait values (CWMs). Forest assemblages consisted of larger species and consistently showed lower light preference, higher humidity preference, lower drought resistance and mostly lower inundation tolerance than grassland assemblages; differences in the number of offspring were inconsistent between forest and grassland habitats (Fig. 1).
Fig. 1 Trait distribution (a shell size, b number of offspring, c light preference, d humidity preference, e drought resistance, f inundation tolerance) of snail communities among forest (grey) and grassland (white) habitats in the Swabian Alb, the Hainich-Dün and the Schorfheide-Chorin. Traits are given as community-weighted means (CWM). Differences among habitats per region were tested using an ANOVA (asterisks); differences between regions were tested by a post hoc Tukey test (letters). Significances: ns not significant, *p < 0.05, **p < 0.01, ***p < 0.001
In forests, land-use intensity and abiotic conditions significantly influenced the CWMs of all traits investigated, although often in a different way across regions (Table 1, Additional file 1: Appendix 1; see interaction terms with region). Similarly, in grasslands the trait composition of snail communities was significantly influenced by most land-use components and abiotic conditions (Table 2, Additional file 1: Appendix 1).
Table 1 Influence of land-use parameter and abiotic factors on the trait composition of snail communities in forest habitats
Table 2 Influence of land-use parameter and abiotic factors on the trait composition of snail communities in grassland habitats
In forest habitats, some 4% of all species were 'losers' of the combined forest management index (i.e. they were significantly less common in intensively used forests), whereas 12% were 'winners' and thus increased with forest management intensity (Table 3). The proportions of non-native trees (4% losers vs. 8% winners) and the proportion of dead wood with saw cuts (6% losers vs. 8% winners) revealed a similar pattern, but for the proportion of wood harvested the percentage of losers (12%) exceeded that of winners (8%).
Table 3 Red list status, occurrence and total abundance of snail species in the Swabian Alb (A), the Hainich-Dün (H) and the Schorfheide-Chorin (S) in forest habitats
In grasslands, many species were predominantly found at low land-use intensities (LUI); 21% of all species were significant losers and only Monacha cartusiana profited from high LUI (Table 4). However, single land-use components in grasslands had only weak effects. Grazing intensity positively affected Cecilioides acicula and Cepaea hortensis, but showed no negative impact. Similarly, mowing (2% losers and 2% winners) and fertilization (4% losers and 4% winners) had very little impact compared to the combined LUI.
Table 4 Red list status, occurrence and total abundance of snail species in the Swabian Alb (A), the Hainich-Dün (H) and the Schorfheide-Chorin (S) in grassland habitats
However, in both forests and grasslands, species' land-use responses (i.e. their 'winner/loser' status) were independent of their traits; i.e. losers in forests or grasslands were neither characterized by a smaller or larger shell size nor by lower or higher numbers of offspring nor by lower or higher light preference etc. (Additional files 2–15: Appendix 2–15).
Response to abiotic factors
Although niches of common land snail species for soil pH and soil moisture were generally broad, some differentiation was found in the communities of both habitats. In forests, Aegopinella pura, the genus Carychium, Cochlicopa lubrica, Ena montana and Vitrea contracta were significantly associated with higher pH values (Table 3) and Cepaea hortensis, Euconulus fulvus, Nesovitrea hammonis, Vallonia pulchella and Vitrinobrachium breve were found at sites with low pH (Table 3). Furthermore, A. pura and Carychium tridentatum were associated with high soil moisture in forests and Cecilioides acicula, E. fulvus, N. hammonis, Punctum pygmaeum, Trochulus striolatus and V. pulchella were found at low soil moisture values (Table 3).
Grassland sites had a higher mean pH (6.7) as compared to forest soils, and many snail species (e.g., Candidula unifasciata, the genus Carychium, Granaria frumentum, Pupilla muscorum, Vertigo antivertigo) were associated with higher pH values (Table 4). Only N. hammonis was significantly more common on sites with low pH. Soil moisture niches of grassland species were even broader than those of pH. The genus Carychium, Trochulus hispidus and Vallonia pulchella were found at high moisture values, while C. unifasciata, Discus rotundatus, Truncatellina cylindrica, V. excentrica were associated with low soil moisture (Table 4).
Habitat association
Snail species differed in their habitat association and their distribution among regions (Fig. 2). However, effects of land-use management components and abiotic factors in forests were independent of the species' habitat association, i.e. species that occurred in forests at low frequencies (e.g., 25% of the individuals in Cochlicopa lubrica; Fig. 2) were equally affected by land-use intensification as species that are exclusively found in forests (e.g., Cepaea hortensis) (F1,49 = 0.14, p = 0.71, Fig. 2, Additional file 14: Appendix 14). In contrast, species that predominantly prefer grassland habitats were less tolerant to fertilization than species that also occur in forests (F1,50 = 5.84, p = 0.019, Fig. 3a, Additional file 15: Appendix 15). Furthermore, grassland "specialists" were significantly associated with higher pH values (F1,49 = 9.21, p = 0.004, Fig. 3b).
Fig. 3 Relation between the responses (abundance-weighted mean) of each snail species to fertilization (a) and soil pH (b) and their proportional occurrence in forests. Indicated species above the line are significant "winners" for fertilization and soil pH, respectively; indicated species below the line (in italics) are significant "losers"
Fig. 2 Proportional distribution of land snail species in the Schorfheide-Chorin, the Hainich-Dün and the Swabian Alb. Grasslands are given in light grey, forests in dark grey. The three most abundant species are symbolized by large circles, less abundant species by small circles. Underlined species are specific to the respective region. Percentages in brackets indicate the proportional occurrence of species of the same genus
Species' vulnerability
Across forests and grasslands, 75% of the 61 snail species found are currently not threatened or endangered according to their Red List status (Tables 3, 4). Nevertheless, Nesovitrea petronella, Candidula unifasciata and Granaria frumentum are regarded as 'endangered' while Vallonia enniensis is 'highly endangered' and V. angustior is listed on the FFH directive.
There was no statistical support that a negative response to land-use intensity of a certain species ("loser") is associated with a high vulnerability of the species, neither in forests nor in grasslands (Table 5). A better predictor for the species' vulnerability in forests was a relatively low number of sites in which the species occurred, and in grasslands both a low occurrence and a low total abundance corresponded to a higher vulnerability (Table 5). Furthermore, the 5-dimensional niche hypervolume based on the species' tolerance to land-use components and abiotic conditions was significantly correlated with the species' vulnerability, hence species with a small niche hypervolume are more vulnerable in both forests (Spearman rank test: S = 20,091, p = 0.0004; Fig. 4a) and grasslands (Spearman rank test: S = 15,547, p = 0.003, Fig. 4b).
Table 5 Statistical p values of a generalized linear model with Poisson distribution testing the influence of land-use parameters and abiotic factors on species vulnerability
Fig. 4 Species vulnerability in relation to the five-dimensional niche hypervolume in forests (a) and grasslands (b). The hypervolume was the product of the abundance-weighted standard deviations (AWSDs) of all single land-use components as well as pH and soil moisture in forests or grasslands, respectively
Response to land use and abiotic factors
Land snail species are slow-dispersing organisms, and historical influences are of general importance for their distribution [28]. Their diversity and heterogeneity are modified by predation, parasitism, competition, abiotic environmental gradients, natural barriers and disturbances [16]. While abiotic and vegetation parameters can be used to predict snail communities, disturbances by human land use are less frequently discussed. Our previous study [27] focused on land snail density, diversity and species composition and emphasized that direct impacts of land use on snail communities were on average lower than the impacts of abiotic drivers and biotic substrates. However, unlike in several studies on insects, few direct effects of wood harvesting in forests and of mowing in grasslands on snail diversity have been shown [27]. How these direct land-use effects influence populations of single species, and whether these effects are related to species-specific traits, remains largely unclear.
Our study showed that snail assemblages varied consistently in their trait composition (shell size, number of offspring, light and humidity preference, drought resistance and inundation tolerance) across regions and between the two habitats, forests and grasslands. The variation between regions is consistent with a biogeographic gradient of increasing land snail diversity from north to south caused by historical and ecological factors (temperature, moisture) [12, 22], and snail species responded differently to variable physical environments [13]. Local environmental conditions have been shown to explain about 19% of the trait variability of a snail metacommunity in Sweden [17], where the authors suggested that the unexplained variation may mirror land use. Our results confirmed that land-use intensity significantly influenced the trait distribution of snail communities, a pattern that was more pronounced in forest habitats than in grasslands. Since snail species in forest communities seem to be more specialized than those of grassland communities [12, 28], they may suffer more from habitat changes. For example, as the activity level of snails is temperature-dependent, thinning the canopy by wood harvesting or a high proportion of non-native trees can enhance solar irradiance; the resulting increase in snail locomotion allows the exploitation of ambient heterogeneity [29] and may favor species with higher light preferences. This hypothesis is consistent with the snail assemblages in our study, since the community-weighted mean (CWM) of light preference increased with the amount of non-native, mainly coniferous trees that may not form a closed canopy. Furthermore, changes in community trait composition are not only caused directly by land-use parameters, but also indirectly via altered abiotic factors such as soil pH and soil moisture, although most snail species exhibit broad niches for these abiotic factors.
In our study, 4% of all forest and 21% of all grassland snail species were significant losers with respect to the compound indices of land-use intensity, which combine three land-use components in forests or grasslands, respectively. The proportion of losers among grassland snail species was lower than that found for grasshoppers (about 52%) [30] and plant- and leafhoppers (about 34%) [9], but similar to that for moths (28%) [31], confirming that snails are a suitable indicator for habitat quality and land-use intensity [17, 22, 32, 33]. The low proportion of loser species may be explained by their ground-living behavior (out of reach of mowing machinery), the presence of a shell (protection against exposure and predation) and a larger diet breadth compared to insect taxa (omnivory allows flexible switching of food resources). However, we may have underestimated the number of loser species since we did not distinguish between living individuals and empty shells. Empty shells decay at different rates under different ecological conditions [44]; therefore, in some cases we may have evaluated shells of species that can no longer be found alive in the respective places. Keeping this in mind, our methodological approach may have implications for the conclusions drawn.
While increasing land-use intensity in open habitats is known to trigger a decline of pollinator species, with losses associated with species-specific trait attributes such as a narrow diet breadth, climate specialization, a large body size and low fecundity [33,34,35,36,37,38,39], we did not find snail traits that corresponded with the species' land-use responses. This is surprising, given that particularly those traits associated with soil moisture (drought resistance, inundation tolerance), body size or reproductive output would be expected to respond to human-mediated disturbances. Furthermore, land-use effects in forests were independent of the species' habitat association (i.e. forest specialists were equally affected as non-forest specialists), but grassland specialists suffered more from land use (i.e. fertilization) and were more dependent on high soil pH.
Note that single land-use parameters and abiotic conditions are often confounded in real landscapes as in our study, and thus responses of some snail species may not always correspond to single environmental dimensions as known from their global distribution or other sources. For example, Cochlicopa lubricella is a xerophilic land snail [42] whereas our data showed a neutral response to soil moisture.
The range of resources and the ecological conditions generally define the niche breadth and determine the geographical area of a species at the small or large scale [40]. Specialists are expected to be more vulnerable to habitat loss and climate change due to synergistic effects of a narrow niche and a small range size.
Only a few snails in our study across managed forests and grasslands are considered threatened or endangered according to the national Red List. Consistent with the expectation based on their environmental niche breadth, the species' vulnerability status was significantly predicted by a particularly narrow niche hypervolume—an index that includes single land-use components as well as pH and soil moisture in each habitat. The smaller the hypervolume of a species, the higher its vulnerability according to the Red List. In addition, rarity was important: in forests, the most important predictor for their vulnerable status was a low number of sites in which they occurred. In grasslands, both their restricted occurrence and low total abundance predicted the species' vulnerability.
In summary, our results indicate that the trait composition of snail communities was significantly altered by land-use intensities and abiotic conditions, and several species especially in grasslands were losers of intensive land use. These land-use and environmental responses were largely independent of specific traits and the species' Red List status—this suggests that complementary risks may be important for predicting a species' vulnerability. Instead, species vulnerability was mirrored in the species' rarity and its overall niche hypervolume including single land-use components and abiotic factors.
Data origin
Data for this study were already part of a previous analysis of biodiversity and community composition, i.e. Wehner et al. [27] and are available at https://www.bexis.uni-jena.de/PublicData/PublicDataSet.aspx?DatasetId=24986. Wehner et al. [27] collected 15,607 snail individuals belonging to 71 taxa in three regions in Germany in the framework of the Biodiversity Exploratories Project (http://www.biodiversity-exploratories.de) [2]. The collaborative research unit addresses effects of land use on biodiversity and biodiversity-related ecosystem processes in three regions: the Swabian Alb (ALB), a low-mountain range in South-West Germany (460–860 m a.s.l., 09° 10′ 49″–09° 35′ 54″ E/48° 20′ 28″–48° 32′ 02″ N), the Hainich-Dün (HAI), a hilly region in Central Germany (285–550 m a.s.l., 10° 10′24″–10° 46′ 45″ E/50° 56′ 14″–51° 22′ 43″ N) and the Schorfheide-Chorin (SCH), a glacially formed landscape in North-East Germany (3–140 m a.s.l., 13° 23′ 27″–14° 08′ 53″ E/52° 47′ 25″–53° 13′ 26″ N). SCH is characterized by the lowest annual precipitation (520–580 mm), with a mean annual temperature of 6–7 °C. It is followed by HAI (630–800 mm, 6.5–8 °C) and ALB (800–930 mm, 8–8.5 °C).
In each region, 100 experimental plots (50 in forests and 50 in grasslands) were set up in 2008 along a land-use gradient covering different management types and intensities, including mowing, grazing and fertilization in grasslands and the proportion of non-native trees, the proportion of dead wood with saw cuts and the proportion of wood harvested in forests (Table 6). Forest plots have a size of 1 ha and grassland plots are 0.5 ha in size.
Table 6 Description and origin of land-use parameter and abiotic factors
In June 2017, Wehner et al. [27] took five replicated surface samples from all 50 forest and 50 grassland experimental plots (EPs) in the Swabian Alb and the Hainich, and from 49 forest and 34 grassland plots in the Schorfheide due to constrained accessibility (1415 samples in total). Shelled snails were subsequently determined to the species, genus or family level using [41,42,43]. Although suggested elsewhere [e.g., 44], Wehner et al. [27] did not distinguish between empty shells and living snail individuals.
As our current study focuses on species-level responses, only those individuals that could be assigned to the species level were used (ALB grasslands: 36, ALB forests: 37, HAI grassland: 31, HAI forest: 35, SCH grassland: 24, SCH forest: 21, 61 different land snail species in total). Grassland plots (although not permanently flooded) in one region (Schorfheide) harbored large numbers of aquatic and semi-aquatic snails. In contrast to our previous analysis that covered all snails recorded [27], we excluded aquatic snails from the analyses since their role and responses to terrestrial environmental variables such as land use in grasslands remain unclear.
All statistical analyses were performed in R 3.5.2 [45] using the main packages "car" [46], "dplyr" [47], "lme4" [48] and "SDMTools" [49].
Trait composition of snail communities
Morphological and life-history trait values for all snail species were obtained from an established trait database by Falkner et al. [50] and compared to findings of [51] whenever possible; see Astor et al. [17] for a similar approach based on [50]. Traits for the set of species in our study are summarized in Table 7. Note that these traits are either continuous variables (size), integers (offspring) or ranks (all others); ranks can be treated as integers or continuous variables for an analysis based on the community-weighted mean (CWM, see below), and the resulting distribution of the CWM in species-rich communities and across a large number of plots typically approaches a Gaussian distribution. Moreover, to explore the response to potential environmental filtering, traits with different meanings are treated independently in the following analyses (a common practice, although some traits, e.g. shell size and number of offspring, may be correlated, see [17]).
Table 7 Characterization of snail traits according to Falkner et al. 2001 [50]
For comparing snail communities among habitats and regions, the community weighted mean (CWM) of each trait was calculated as CWM per plot p
$${CWM}_{p}= \sum_{i=1}^{I}{T}_{i}\bullet \frac{{a}_{i,p}}{{A}_{p}}$$
where Ti is the trait value of species i, ai,p is the abundance of species i in plot p and Ap the total abundance of all snails in plot p (total I species).
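As a minimal numerical sketch of this calculation (Python used purely for illustration, since the paper's analyses were run in R; all array values below are hypothetical), the CWM of one trait across plots is:

import numpy as np

# Hypothetical inputs: abundance[p, i] = abundance a_{i,p} of species i in plot p;
# trait[i] = trait value T_i of species i (e.g., maximum shell size in mm).
abundance = np.array([[12.0, 0.0, 3.0],
                      [5.0, 7.0, 1.0]])
trait = np.array([2.5, 11.0, 4.2])

# CWM_p = sum_i T_i * a_{i,p} / A_p, where A_p is the total snail abundance in plot p.
plot_totals = abundance.sum(axis=1)                  # A_p
cwm = (abundance * trait).sum(axis=1) / plot_totals  # one CWM per plot
print(cwm)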
Environmental niches
We characterized the environmental conditions of each forest or grassland plot by its land-use intensity and two abiotic soil parameters (pH and soil moisture; Table 6) [52, 53]. Data were obtained from the BExIS database (Table 6).
We tested the response of the CWM of each trait to variation in environmental conditions using linear regressions. Values for grazing and fertilization were square root transformed before statistical analyses.
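One such regression can be sketched as follows; the per-plot values are hypothetical, and the square-root transformation mirrors the treatment of grazing and fertilization described above:

import numpy as np
from scipy import stats

cwm = np.array([3.1, 2.8, 3.6, 2.2, 3.9, 2.5])               # CWM of one trait per plot
fertilization = np.array([0.0, 20.0, 5.0, 45.0, 1.0, 30.0])  # hypothetical fertilization values

x = np.sqrt(fertilization)     # square-root transform before fitting
fit = stats.linregress(x, cwm)
print(fit.slope, fit.pvalue)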
In order to characterize the snail species' responses to environmental conditions (land-use gradient, soil conditions), we calculated each species' "environmental niche". The method has been established in the context of the Biodiversity Exploratories and was applied to several taxa such as grasshoppers [30], cicadas, moths [31], bumblebees [54] or plants [55]. The "niche optimum" was calculated as the abundance weighted mean (AWM) for species i as
$${AWM}_{i}= \sum_{p=1}^{{n}_{p}}{L}_{p}\bullet \frac{{a}_{i,p}}{{A}_{i}}$$
where np is the number of plots investigated, Lp is the land-use gradient value of plot p, ai,p the abundance of species i in plot p and Ai the total abundance of species i across all 149 forest or 134 grasslands sites, respectively. Hence, the CWM characterizes the plots by the trait distribution of snails, and the AWM characterizes snail species by the environmental conditions of the plot, and the snail abundance ai,p is used to weight either species or plot, respectively.
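The AWM mirrors the CWM computation with the roles of plots and species exchanged; a sketch with hypothetical values:

import numpy as np

# abundance[p, i] as before; landuse[p] = gradient value L_p of plot p (e.g., LUI).
abundance = np.array([[12.0, 0.0, 3.0],
                      [5.0, 7.0, 1.0]])
landuse = np.array([1.2, 2.7])

species_totals = abundance.sum(axis=0)                            # A_i
awm = (abundance * landuse[:, None]).sum(axis=0) / species_totals
print(awm)  # niche optimum of each species along this gradient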
In addition to the AWM as a niche optimum, we also characterized the "niche breadth" of each species to a single environmental variable using the abundance-weighted standard deviation (AWSD) [30]. To test whether AWMs and AWSDs statistically deviate from an expected random distribution, we compared the calculated values against the expected values obtained from a null model that distributes each species across Ni sites with the same probability, with Ni being the number of sites in which species i was found. The null model thus chooses values of the focal land-use parameter (LUI, Formi, single components, pH, soil moisture) of Ni sites and calculates a distribution of predicted AWMs and AWSDs values for each species based on 10,000 iterations. The null model was restricted to the one, two, or three regions in which the species was recorded to consider potential distribution boundaries of each species in Germany that may not be related to plot conditions [30].
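A condensed sketch of this randomization for one species, under the simplifying assumption that the species' observed abundances are reassigned to randomly chosen plots within its recorded regions (all values hypothetical):

import numpy as np

rng = np.random.default_rng(1)

obs_abund = np.array([4.0, 9.0, 2.0])                     # abundances on the N_i occupied plots
landuse = np.array([0.5, 1.1, 1.8, 2.4, 3.0, 0.9, 2.2])   # gradient values of all candidate plots

def awm_awsd(values, weights):
    # abundance-weighted mean and standard deviation
    m = np.average(values, weights=weights)
    sd = np.sqrt(np.average((values - m) ** 2, weights=weights))
    return m, sd

null_awm = np.empty(10_000)
null_awsd = np.empty(10_000)
for k in range(10_000):
    # place the species on N_i plots chosen with equal probability
    sites = rng.choice(landuse.size, size=obs_abund.size, replace=False)
    null_awm[k], null_awsd[k] = awm_awsd(landuse[sites], obs_abund)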
As in any randomization model, the proportion of the 10,000 null-model AWMs or AWSDs that are greater or smaller, respectively, than the observed value provides the p value for the significance of the deviation between observed and expected values. A 'winner' is defined as a species whose observed AWM is larger than the upper 5% of the distribution of AWMs obtained from the null models (i.e. adapted to higher-than-average land-use intensity); a 'loser' shows an observed AWM smaller than the lower 5% (low land-use intensity specialist). For species which could be classified neither as 'losers' nor as 'winners', we tested whether they are specialized on intermediate land-use or abiotic levels, that is, whether they have an intermediate AWM with a narrower niche than expected. We standardized the niche breadth as a weighted coefficient of variation (CV = AWSD/AWM) to account for the increase in SD with increasing mean, and compared the observed CV against the expected CV from the null models. This comparison allows us to distinguish 'opportunists' (observed CV ≥ expected CV) from species that are 'specialized' on intermediate land-use intensities (observed CV < expected CV and species not only occurring on one site, i.e., CV ≠ 0) [30]. The environmental niche (AWM, AWSD) and the assignment of low- and high-gradient specialists were also calculated for soil pH and soil moisture, although we did not adopt the 'loser'/'winner' terminology here, unlike for land-use intensity.
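The resulting classification rule can be sketched as below, with hypothetical observed statistics and synthetic null distributions standing in for the outputs of the randomization sketched above:

import numpy as np

rng = np.random.default_rng(2)
null_awm = rng.normal(1.5, 0.3, 10_000)   # stand-ins for the null-model outputs
null_cv = rng.normal(0.5, 0.1, 10_000)
obs_awm, obs_cv = 2.3, 0.28               # hypothetical observed values

if np.mean(null_awm < obs_awm) > 0.95:
    status = "winner"                     # adapted to higher-than-average intensity
elif np.mean(null_awm > obs_awm) > 0.95:
    status = "loser"                      # low land-use intensity specialist
elif obs_cv > 0 and np.mean(null_cv > obs_cv) > 0.95:
    status = "intermediate specialist"    # observed CV narrower than expected
else:
    status = "opportunist"
print(status)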
Vulnerability (classified as a rank variable comparable to IUCN categories: least concern, endangered to unknown extent, very rare, near threatened, critically endangered, endangered, vulnerable) of land snail species was obtained from the Red List 2011 (according to [56]; see Table 3). We tested the relation of vulnerability with the species' habitat association by calculating the proportional occurrence of each species in forest or grassland habitats; a species was defined as a 'specialist' if more than 90% of all individuals found were present in one habitat (forest or grassland). The relation between vulnerability and species' habitat association was tested by a linear model using the land-use management components and the abiotic conditions as fixed factors and the proportional occurrence as explanatory factor.
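A minimal sketch of the >90% specialist rule, with hypothetical counts:

# Hypothetical individual counts of one species per habitat.
n_forest, n_grassland = 480, 25
prop_forest = n_forest / (n_forest + n_grassland)

# A 'specialist' holds more than 90% of its individuals in a single habitat.
if prop_forest > 0.90:
    habitat = "forest specialist"
elif prop_forest < 0.10:
    habitat = "grassland specialist"
else:
    habitat = "habitat generalist"
print(round(prop_forest, 2), habitat)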
To further test if a species' vulnerability can be predicted by its land-use response ('winner' or 'loser' status) and its relation to abiotic soil conditions, we used a generalized linear model with Poisson distribution, including vulnerability as response variable and the respective land-use parameter or abiotic factor, the number of plots where the species occurred and its total abundance as explanatory variables. Values for grazing and fertilization were square-root transformed prior to statistical analyses, and data on abundances and occurrence were log transformed because of the data structure.
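Such a model can be sketched with statsmodels (the species-level table below is hypothetical, including its column names; the paper's actual fits were run in R):

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "vulnerability": [0, 0, 1, 2, 0, 3],              # Red List rank per species
    "awm_lui":       [1.8, 2.1, 1.2, 0.9, 2.4, 0.7],  # niche optimum for one land-use parameter
    "occurrence":    [40, 55, 12, 6, 61, 3],          # number of occupied plots
    "abundance":     [900, 1500, 80, 30, 2100, 12],
})

# Occurrence and abundance are log transformed because of the data structure.
X = sm.add_constant(np.column_stack([df["awm_lui"],
                                     np.log(df["occurrence"]),
                                     np.log(df["abundance"])]))
model = sm.GLM(df["vulnerability"], X, family=sm.families.Poisson()).fit()
print(model.pvalues)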
Finally, we calculated a five-dimensional niche hypervolume (consistent with Hutchinson's n‐dimensional niche concept) as a proxy for the total 'niche breadth' of each snail species by multiplying the abundance-weighted standard deviations (AWSD) of all three single land-use components as well as of pH and soil moisture, respectively. The hypervolume was defined for forests and grasslands separately.
Whether the total niche breadth can predict vulnerability was tested using a Spearman rank correlation between the vulnerability and the five-dimensional niche hypervolume.
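Both steps — multiplying the five AWSDs into a hypervolume and correlating it with vulnerability — can be sketched as follows (all values hypothetical):

import numpy as np
from scipy import stats

# awsd[i, :] = AWSDs of species i for the three land-use components, pH and moisture.
awsd = np.array([[0.8, 1.1, 0.9, 0.6, 0.7],
                 [0.3, 0.4, 0.2, 0.3, 0.2],
                 [1.2, 0.9, 1.0, 0.8, 1.1],
                 [0.2, 0.3, 0.3, 0.2, 0.5]])
vulnerability = np.array([0, 2, 0, 3])  # Red List ranks

hypervolume = awsd.prod(axis=1)         # five-dimensional niche hypervolume
rho, p = stats.spearmanr(vulnerability, hypervolume)
print(rho, p)  # a negative rho: smaller hypervolume, higher vulnerability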
Snail data obtained by [27] and used in this study are available online at https://www.bexis.uni-jena.de/PublicData/PublicDataSet.aspx?DatasetId=24986. Data on snail vulnerability were obtained from the Red List 2011 according to [43], and snail traits were extracted from [38]. Environmental data and those for land-use intensity in grasslands and forests were obtained from the BExIS database (see Table 6).
Poschlod P, Bakker JP, Kahmen S. Changing land use and its impact on biodiversity. Basic Appl Ecol. 2005;6:93–8.
Fischer M, Bossdorf O, Gockel S, Hänsel F, Hemp A, Hessenmöllerd D, Weisser WW, et al. Implementing large-scale and long-term functional biodiversity research: the biodiversity exploratories. Basic Appl Ecol. 2010;11:473–85.
Steinhäußer R, Siebert R, Steinführer A, Hellmich M. National and regional land-use conflicts in Germany from the perspective of stakeholders. Land Use Policy. 2015;49:183–94.
Axelsson R, Angelstam P, Svensson J. Natural forest and cultural woodland with continuous tree cover in Sweden: how much remains and how is it managed? Scand J Forest Res. 2007;22:545–58.
Socher AS, Prati D, Boch S, Müller J, Klaus VH, Hölzel N, Fischer M. Direct and productivity-mediated indirect effects of fertilization, mowing and grazing on grassland species richness. J Ecol. 2012;100:1391–9.
Simons NK, Gossner MM, Lewinsohn TM, Boch S, Lange M, Müller J, Weisser WW, et al. Resource-mediated indirect effects of grassland management on arthropod diversity. PLoS ONE. 2014;9:e107033.
Hooper DU, Chapin FS, Ewel JJ, Hector A. Effects of biodiversity on ecosystem functioning: a consensus of current knowledge. Ecol Monogr. 2005;75:3–35.
Dormann CF, Schweiger O, Augenstein I, Bailey D, Billeter R, De Blust G, Zobel M, et al. Effects of landscape structure and land-use intensity on similarity of plant and animal communities. Global Ecol Biogeogr. 2007;16:774–87.
Chisté MN, Mody K, Kunz G, Gunczy J, Blüthgen N. Intensive land use drives small-scale homogenization of plant- and leafhopper communities and promotes generalists. Oecologia. 2018;186:529–40.
Astor T, Lenoir L, Berg MP. Measuring feeding traits of a range of litter-consuming terrestrial snails: leaf litter consumption, faeces production and scaling with body size. Oecologia. 2015;178:833–45.
Cameron R. Slugs and Snails. Collins New Naturalist Library, Book 133; HarperCollins Publishers, ePub edition; 2006.
Limondin-Lozouet N, Preece RC. Quaternary perspectives on the diversity of land snail assemblages from northwestern Europe. J Mollus Stud. 2014;80:224–37.
Randolph PA. Influence of environmental variability on land snail population properties. Ecology. 1973;54:933–55.
Schamp B, Horsák M, Hájek M. Deterministic assembly of land snail communities according to species size and diet. J Anim Ecol. 2010;79:803–10.
Hylander K, Nilsson C, Jonsson BG, Göther T. Differences in habitat quality explain nestedness in a land snail meta-community. Oikos. 2005;108:351–61.
Hovermann JT, Davis CJ, Werner EE, Skelly DK, Relyea RA, Yurewicz KL. Environmental gradients and the structure of freshwater snail communities. Ecography. 2011;34:1049–58.
Astor T, von Proschwitz T, Strengbom J, Berg MP, Bengtsson J. Importance of environmental and spatial components for species and trait composition in terrestrial snail communities. J Biogeogr. 2017;44:1362–72.
Goodfried GA. Variation in land-snail shell form and size and its causes: a review. System Zool. 1986;35:204–23.
Baur A, Baur B. Individual movement patterns of the minute land snail Punctum pygmaeum (Draparnaud) (Pulmonata: Endodontidae). Veliger. 1988;30:372–6.
Kappes H, Jordaens K, Hendrickx F, Maelfait J-P, Lens L, Backeljau T. Response of snails and slugs to fragmentation of lowland forests in NW Germany. Landsc Ecol. 2009;24:685–97.
Wäreborn I. Changes in the land mollusc fauna and soil chemistry in an inland district in southern Sweden. Ecography. 1992;15:62–9.
Nekola JC. Large-scale terrestrial gastropod community composition patterns in the Great Lakes region of North America. Divers Distrib. 2003;9:55–71.
Martin K, Sommer M. Relationships between land snail assemblage patterns and soil properties in temperate humid ecosystems. J Biogeogr. 2004a;31:531–45.
Martin K, Sommer M. Effects of soil properties and land management on the structure of grassland snail assemblages in SW Germany. Pedobiologia. 2004b;48:193–203.
Horsák M. Mollusc community patterns and species response curves along a mineral richness gradient: a case study in fens. J Biogeogr. 2006;33:98–107.
Denmead LH, Barker GM, Standish RJ, Didham RK. Experimental evidence that even minor livestock trampling has severe effects on land snail communities in forest remnants. J Appl Ecol. 2013;52:161–70.
Wehner K, Renker C, Brückner A, Simons NK, Weisser WW, Blüthgen N. Land-use affects land snail assemblages directly and indirectly bymodulating abiotic and biotic drivers. Ecosphere. 2019;10(5):e02726.
Cameron RAD, Down K, Pannett DJ. Historical and environmental influences on hedgerow snail faunas. Biol J Linn Soc. 1980;13:75–87.
Chapperon C, Seuront L. Space-time variability in environmental thermal properties and snail thermoregulatory behavior. Funct Ecol. 2011;25:1040–50.
Chisté M, Mody K, Gossner MM, Simons NK, Köhler G, Weisser WW, Blüthgen N. Losers, winners, and opportunists: How grassland land-use intensity affects orthopteran communities. Ecosphere. 2016;7(11):e01545.
Mangels J, Fiedler K, Schneider FD, Blüthgen N. Diversity and trait composition of moths respond to land-use intensification in grasslands: generalists replace specialists. Biodivers Conserv. 2017;26:3385–405.
Čejka T, Hamerlík L. Land snails as indicator of soil humidity in Danubian woodland (SW Slovakia). Pol J Ecol. 2009;57:741–7.
Banaszak-Cibicka W, Żmihorski M. Wild bees along an urban gradient: winners and losers. J Insect Conserv. 2011;16:331–43.
Douglas DD, Brown DR, Pederson N. Land snail diversity can reflect degrees of anthropogenic disturbance. Ecosphere. 2013;4:1–14.
McKinney ML, Lockwood JL. Biotic homogenization: a few winners replacing many losers in the next mass extinction. TREE. 1999;11:450–3.
Williams P, Colla S, Xie Z. Bumblebee vulnerability: common correlates of winners and losers across three continents. Conserv Biol. 2008;23:931–40.
Rader R, Bartomeus I, Tylianakis JM, Lalibert E. The winners and losers of land use intensification: pollinator community disassembly is non-random and alters functional diversity. Divers Distrib. 2014;20:908–17.
Weiner CN, Werner M, Linsenmair KE, Blüthgen N. Land use impacts on mutualistic networks: disproportional declines in specialized pollinators via changes in flower composition. Ecology. 2014;95:466–74.
Kühsel S, Blüthgen N. High diversity stabilizes the thermal resilience of pollinator communities in intensively managed grasslands. Nat Commun. 2015;6:7989.
Slatyer RA, Hirst M, Sexton JP. Niche breadth predicts geographical range size: a general ecological pattern. Ecol Lett. 2013;16:1104–14.
Welter-Schultes F. European non-marine molluscs. A guide for species identification. Göttingen: Planet Poster Editions; 2012.
Wiese V. Die Landschnecken Deutschlands. 2nd ed. Wiebelsheim: Quelle & Meyer; 2016.
Glöer P. Süßwassermollusken. Ein Bestimmungsschlüssel für die Muscheln und Schnecken im Süßwasser der Bundesrepublik Deutschland. Göttingen: Deutscher Jugendbund für Naturbeobachtungen; 2017.
Pearce TA. When a snail dies in the forest, how long will the shell persist? Effect of dissolution and micro-bioerosion. Am Malacol Bull. 2008;26:111–7.
R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2010. http://www.R-project.org/. ISBN 3-900051-07-0.
Fox J, Weisberg S. An R companion to applied regression. 2nd ed. Thousand Oaks: Sage; 2011.
Wickham H, Franҫois R, Henry L, Müller K. "dplyr": A grammar of data manipulation. 2019. http://dplyr.tidyverse.org, https://github.com/tidyverse/dplyr.
Bates D, Maechler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1–48.
VanDerWal J, Falconi L, Januchowski S, Shoo L, Storlie C. SDMTools: species distribution modelling tools: tools for processing data associated with species distribution modelling exercises. 2014.
Falkner G, Obrdlík P, Castella E, Speight MCD. Shelled gastropoda of Western Europe. Munich: Friedrich-Held-Gesellschaft; 2001.
Frömming E. Biologie der mitteleuropäischen Landgastropoden. Berlin: Duncker & Humblot; 1954.
Kahl T, Bauhus J. An index of forest management intensity based on assessment of harvested tree volume, tree species composition and dead wood origin. Nat Conserv. 2014;7:15–27.
Blüthgen N, et al. A quantitative index of land-use intensity in grasslands: integrating mowing, grazing and fertilization. Basic Appl Ecol. 2012;13:207–20.
Kämper W, Weiner C, Kühsel S, Storm C, Thomas ELTZ, Blüthgen N. Evaluating the effects of floral resource specialisation and of nitrogen regulation on the vulnerability of social bees in agricultural landscapes. Apidologie. 2017;48(3):371–83.
Busch V, Klaus VH, Schäfer D, Prati D, Boch S, Müller J, Hölzel N, et al. Will i stay or will i go? Plant species-specific response and tolerance to high land-use intensity in temperate grassland ecosystems. J Veg Sci. 2019;30(4):674–86.
Jungbluth J, von Knorre D, Bößneck U, Groh K, Hackenberg E, Kobialka, Zettler M, et al. Rote Liste und Gesamtartenliste der Binnenmollusken (Schnecken und Muscheln; Gastropoda et Bivalvia) Deutschlands. 6. überarbeitete Fassung, Stand Februar 2019. Naturschutz Biolog Vielfalt. 2011;70(3):647–708.
We thank the managers of the three Exploratories, Kirsten Reichel-Jung, Iris Steitz, and Sandra Weithmann, Juliane Vogt, Miriam Teuscher and all former managers for their work in maintaining the plot and project infrastructure; Christiane Fischer for giving support through the central office, Andreas Ostrowski for managing the central data base, and Markus Fischer, Eduard Linsenmair, Dominik Hessenmöller, Daniel Prati, Ingo Schöning, François Buscot, Ernst-Detlef Schulze, and the late Elisabeth Kalko for their role in setting up the Biodiversity Exploratories project. Many thanks to all research assistants: Kevin Frank, Wiebke Kämper, Jessica Schneider, Andrea Hilpert, Matteo Trevisan, Matthias Brandt, Sebastian Schmelzle, Tewannakit Mermagen, Kathrin Ziegler, Annika Keil, Andreas Kerner, Katja Gruschwitz, and Kimberly Adam.
Open Access funding enabled and organized by Projekt DEAL. The work has partly been funded by the DFG Priority Program 1374 "Infrastructure-Biodiversity-Exploratories" (DFG BL860/8-3).
Ecological Networks, Technische Universität Darmstadt, Schnittspahnstraße 3, 64287, Darmstadt, Germany
Katja Wehner, Nadja K. Simons & Nico Blüthgen
Naturhistorisches Museum Mainz, Landessammlung für Naturkunde RLP, Reichklarastraße 1, 55116, Mainz, Germany
Carsten Renker
Department of Ecology and Ecosystem management, Technische Universität München, Hans-Carl-von-Carlowitz-Platz 2, 85350, Freising-Weihenstephan, Germany
Wolfgang W. Weisser
KW did the fieldwork, collected and determined snail species, performed the statistical analyses and wrote the manuscript. CR assisted in the species determination and commented on the manuscript. NKS assisted in the statistical analyses and commented on the manuscript. WWW and NB designed the study; NB also assisted in the statistical analyses and the paper writing. All authors approved the final version.
Correspondence to Katja Wehner.
The study complied with the fundamental principles of the Basel declaration for research in animals. The investigated species are not at risk of extinction. Fieldwork permits were issued by the responsible state environmental offices of Baden-Württemberg, Thüringen, and Brandenburg.
Additional file 1: Appendix 1.
Summary of significant effects of land-use parameters and abiotic factors in forests (forest management index Formi, proportion of non-native tress, proportion of dead wood with saw cuts, proportion of wood harvested, pH and soil moisture) and grasslands (land-use index LUI, mowing, grazing, fertilization, pH and soil moisture) on the community weighted mean of the maximum shell size, the number of offspring, light preference, humidity preference, drought resistance and inundation tolerance. * p < 0.05, ** p < 0.01, *** p < 0.001. ↓ negative effect, ↑ positive effect.
Additional file 2: Appendix 2.
Influence of the abundance-weighted mean (AWM) of the forest management index on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 3: Appendix 3.
Influence of the abundance-weighted mean (AWM) of the proportion of non-native trees on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 4: Appendix 4.
Influence of the abundance-weighted mean (AWM) of the proportion of deadwood with saw cuts on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 5: Appendix 5.
Influence of the abundance-weighted mean (AWM) of the proportion of wood harvested on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 6: Appendix 6.
Influence of the abundance-weighted mean (AWM) of soil pH on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 7: Appendix 7.
Influence of the abundance-weighted mean (AWM) of soil moisture on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 8: Appendix 8.
Influence of the abundance-weighted mean (AWM) of land-use intensity (LUI) on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 9: Appendix 9.
Influence of the abundance-weighted mean (AWM) of mowing on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 10: Appendix 10.
Influence of the abundance-weighted mean (AWM) of grazing on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 11: Appendix 11.
Influence of the abundance-weighted mean (AWM) of fertilization on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 12: Appendix 12.
Influence of the abundance-weighted mean (AWM) of soil pH on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 13: Appendix 13.
Influence of the abundance-weighted mean (AWM) of soil moisture on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use "winners", species in bold are land-use "losers".
Additional file 14: Appendix 14.
Relation of the abundance-weighted means (AWM) of the forest management index, proportion of non-native trees, proportion of dead wood with saw cuts, proportion of wood harvested, pH and soil moisture and the proportional occurrence of a certain species in forests.
Additional file 15: Appendix 15.
Relation of the abundance-weighted means (AWM) of the land-use intensity, mowing, grazing, fertilization, pH and soil moisture and the proportional occurrence of a certain species in forests.
Wehner, K., Renker, C., Simons, N.K. et al. Narrow environmental niches predict land-use responses and vulnerability of land snail assemblages. BMC Ecol Evo 21, 15 (2021). https://doi.org/10.1186/s12862-020-01741-1
Keywords: Land snails, Land-use intensity, Biodiversity Exploratories
Journal of Thermal Analysis and Calorimetry, January 2020, Volume 139, Issue 2, pp 1111–1120
Non-isothermal crystallization kinetics of UHMWPE composites filled by oligomer-modified CaCO3
Zexiong Wu
Zishou Zhang
Kancheng Mai
The processability of ultrahigh molecular weight polyethylene (UHMWPE) improved by oligomer-modified calcium carbonate (CaCO3) was observed in our previous work. In order to understand the effect of oligomer-modified CaCO3 on the crystallization of UHMWPE, the non-isothermal crystallization behavior and crystallization kinetics of UHMWPE composites filled by oligomer-modified CaCO3 were studied by differential scanning calorimetry in this work. The Jeziorny and Mo methods were used to describe the non-isothermal crystallization kinetics of the UHMWPE composites. The effects of modified filler content and cooling rate on the crystallization temperature and crystallization rate were discussed. The heterogeneous nucleation of modified CaCO3 slightly increases the crystallization temperature of UHMWPE. The crystallization enthalpy of the UHMWPE composites is significantly higher than that of UHMWPE. The crystallization rate of the UHMWPE composites depends on the filler content and cooling rate.
Keywords: Non-isothermal crystallization kinetics, Ultrahigh molecular weight polyethylene, Oligomer-modified calcium carbonate, Crystallization
Ultrahigh molecular weight polyethylene (UHMWPE) is an excellent crystalline thermoplastic [1, 2, 3, 4]. In comparison with other polyethylenes, UHMWPE has a lower coefficient of friction [5, 6, 7], higher mechanical strength [8, 9] and better abrasion resistance [10, 11, 12]. Thus, UHMWPE has broad application prospects in the pipeline industry [13, 14], biomedicine [15, 16, 17], the textile industry [18, 19] and so on. Nevertheless, UHMWPE has an extremely high molecular weight, so it is difficult to process by common processing methods. To address this situation, many researchers have prepared a series of UHMWPE-based compounds to improve its performance [9, 20, 21, 22, 23, 24]. Meanwhile, because UHMWPE is a crystalline polymer, the addition of other materials can change its crystallization behavior, which in turn influences its performance [25, 26, 27, 28, 29]. Therefore, it is necessary to study the crystallization behavior of modified UHMWPE materials.
Sattari et al. [30] reported the non-isothermal crystallization and melting behavior of UHMWPE hybrid composites reinforced with short carbon fiber (SCF) and nano-SiO2 particles. It was observed that adding SCF to UHMWPE increased the melting peak temperatures and the crystallinity of UHMWPE. The amount of nano-SiO2 had no obvious effect on the melting temperatures of UHMWPE and decreased its crystallinity. The Ozawa–Avrami method could be used to describe the non-isothermal crystallization kinetics of the composites.
Liu et al. [31] prepared novel antibacterial UHMWPE/chlorhexidine acetate (CA)–montmorillonite (MMT) composites and investigated the crystallization kinetics of UHMWPE, UHMWPE/MMT, UHMWPE/CA and UHMWPE/CA-MMT. It was observed that CA and MMT exhibited strong heterogeneous nucleation, increasing the crystallization temperatures of UHMWPE. Addition of CA lowered the melting temperature of UHMWPE due to its plasticization. The results of crystallization kinetics indicated that the addition of CA decreased the crystallization rate and broadened the range of crystal growth temperatures, while MMT increased the crystallization rate due to its heterogeneous nucleation.
Zhang et al. [32, 33] studied the non-isothermal crystallization kinetics of UHMWPE in liquid paraffin (LP) systems and used the Avrami method modified by Jeziorny and the Mo method to describe the influence of UHMWPE content and cooling rate on the crystallization mechanism and spherulitic structure of UHMWPE. Addition of LP increased the crystallization rate of UHMWPE. The Avrami plots of UHMWPE and UHMWPE/LP blends showed good linearity. At the primary crystallization stage, the Avrami exponent n varies around 5 and decreases slightly as the cooling rate decreases. The high values of n1 for UHMWPE and UHMWPE/LP blends may arise from their high viscosities, yielding a more complicated crystallization mechanism. In addition, the extent of secondary crystallization increases with increasing cooling rate. The Avrami exponent n2, ranging from 0.80 to 1.96, indicated a simpler crystallization mode. Further, the value of F(T) in the Mo method increased with an increase in relative crystallinity and UHMWPE content in the blends.
Shen et al. [34] investigated the influence of compatibilizer and dispersant on the non-isothermal crystallization behavior of HDPE/UHMWPE/hydroxyapatite blends. Analyzed by the Jeziorny method, the value of the Avrami exponent n is between 2.7 and 3.5. At the same temperature, adding compatibilizer and dispersant increased the crystallization rate and decreased the half crystallization time and the activation energy of crystallization. In addition, the compatibilizer and dispersant also improved the mechanical properties of the blends.
Besides the studies mentioned above, the effect of UHMWPE on the crystallization behavior and crystallization kinetics of HDPE has been reported [13, 35]. Nevertheless, few investigations focus on the crystallization behavior and the non-isothermal crystallization kinetics of UHMWPE composites filled by inorganic particles. In our previous work, a new UHMWPE composite filled by oligomer-modified CaCO3 was prepared in our laboratory, and it was found that the oligomer-modified CaCO3 can improve the processability of UHMWPE [36]. In order to understand the effect of oligomer-modified CaCO3 on the crystallization of UHMWPE, the crystallization behavior of the UHMWPE composites was investigated by DSC, and the non-isothermal crystallization kinetics was described by different kinetic models. The influence of the modified filler on UHMWPE crystallization is discussed in this article.
UHMWPE (Mw 2.2 × 106) was purchased from Mitsui Chemicals, Japan. PE wax (Product No.1020, Mw about 2000–5000 and melt point at 116 °C) was provided by SCG, Thailand. Micro-CaCO3 was provided by Keynes nanomaterial (Lianzhou, China). Benzoyl peroxide (BPO) and acrylic acid (AA) were purchased from Damao Chemical (Tianjin, China).
In order to obtain acrylic acid (AA)-modified calcium carbonate (AA-CaCO3), 500 g CaCO3 was added to 2.5 L ethyl alcohol at room temperature; then, 40 g AA was added and the mixture was stirred mechanically for 20 min so that the CaCO3 reacted thoroughly with the AA. The solvent was then volatilized at room temperature, and the product, AA-modified calcium carbonate (AA-CaCO3), was dried at 80 °C for 1 h.
The AA-CaCO3, PE wax (mass ratio 8:2) and 0.5 mass% BPO (relative to CaCO3) as initiator were mixed in a mixer chamber at 150 °C for 10 min. The long chain-modified CaCO3 (PEW-g-CaCO3) was obtained through the reaction between PEW and AA. Composites of UHMWPE filled by different contents of PEW-g-CaCO3 were prepared in a mixer chamber at 190 °C for 10 min.
Sample characterization
The melting and crystallization behavior of the UHMWPE composites was measured by differential scanning calorimetry (DSC) under a nitrogen atmosphere with a DSC-8500 (PerkinElmer, USA) calibrated with indium. About 2–3 mg of sample was first heated to 180 °C at a rate of 100 °C min−1 and held at this temperature for 3 min to eliminate the thermal history of the sample. Then, the sample was cooled at a rate of 5, 10, 15, 20 or 25 °C min−1 from 180 to 60 °C to obtain the non-isothermal crystallization curves. Finally, the sample was reheated to 180 °C at a rate of 10 °C min−1 to obtain the melting curves. The crystallinity of the sample (Xc) is calculated by Eq. 1.
$$ X_{\text{c}} \% = \frac{{\Delta H_{\text{m}} }}{{\Delta H_{\text{m}}^{0} \cdot w}} \times 100\% $$
where ΔHm is the melting enthalpy of sample measured by DSC, \( \Delta H_{\text{m}}^{0} \) is the melting enthalpy of 100% crystalline UHMWPE (290 J g−1) [30] and w is the mass content of UHMWPE in the composites.
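A quick numerical check of Eq. 1, with a hypothetical melting enthalpy and filler loading:

# Hypothetical DSC result for a composite with 20 wt% PEW-g-CaCO3.
dH_m = 130.0   # J g^-1, measured melting enthalpy of the sample
dH_m0 = 290.0  # J g^-1, melting enthalpy of 100% crystalline UHMWPE [30]
w = 0.80       # mass fraction of UHMWPE in the composite

Xc = dH_m / (dH_m0 * w) * 100
print(f"Xc = {Xc:.1f} %")  # 56.0 %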
Effect of modified-CaCO3 on non-isothermal crystallization behavior of UHMWPE
Figure 1 shows the crystallization curves of the UHMWPE composites cooled at different rates. The plots of crystallization peak temperature and crystallization enthalpy versus cooling rate are shown in Fig. 2. It is observed that the crystallization peak of UHMWPE shifts to lower temperatures and becomes wider with increasing cooling rate. The crystallization peak temperature of the UHMWPE composites is higher than that of UHMWPE, and the content of PEW-g-CaCO3 has little influence on it. The crystallization enthalpy of the composites also decreases with increasing cooling rate. At the same cooling rate, the crystallization enthalpy of the composites is significantly higher than that of UHMWPE and increases with the PEW-g-CaCO3 content at a cooling rate of 5 °C min−1. However, the content of PEW-g-CaCO3 has little influence on the crystallization enthalpy of the composites at high cooling rates.
Fig. 1 DSC crystallization curves for PEW-g-CaCO3/UHMWPE at different cooling rates. PEW-g-CaCO3/wt%: a 0, b 10 and c 20
Fig. 2 Plots of crystallization peak temperature and crystallization enthalpy versus cooling rate
The crystallization enthalpy of the composites decreases with increasing cooling rate, while the crystallization peak becomes wider. This suggests that at higher cooling rates the UHMWPE molecular chains have poor mobility and crystallize with difficulty, resulting in a wider crystallization peak. At the same cooling rate, addition of the modified filler increases the crystallization temperature of UHMWPE, which indicates heterogeneous nucleation by PEW-g-CaCO3. The higher crystallization enthalpy of UHMWPE is attributed to the improved mobility of the UHMWPE molecular chains caused by the oligomer-modified CaCO3.
In order to understand the effect of differently modified CaCO3 on the crystallization of UHMWPE, the non-isothermal crystallization behavior of UHMWPE composites filled by CaCO3, AA-CaCO3 and PEW-g-CaCO3 was investigated by DSC. The plots of crystallization peak temperature and crystallization enthalpy versus cooling rate are shown in Fig. 3. It can be observed from Fig. 3 that, at the same cooling rate, the crystallization peak temperatures rank from high to low as AA-CaCO3/UHMWPE > PEW-g-CaCO3/UHMWPE ≈ CaCO3/UHMWPE > UHMWPE. This indicates that the heterogeneous nucleation of AA-CaCO3, PEW-g-CaCO3 and CaCO3 can increase the crystallization peak temperature of UHMWPE, and that the heterogeneous nucleation of AA-CaCO3 is stronger than that of PEW-g-CaCO3 and CaCO3. The crystallization enthalpy of the UHMWPE composites is also higher than that of UHMWPE due to the heterogeneous nucleation of the fillers. It can also be seen that the crystallization enthalpy of PEW-g-CaCO3/UHMWPE is higher than that of AA-CaCO3/UHMWPE, CaCO3/UHMWPE and UHMWPE at a cooling rate of 5 °C min−1. In our previous work [36], it was found that the oligomer-modified CaCO3 can improve the processability of UHMWPE. It is suggested that the presence of oligomer on the surface of PEW-g-CaCO3 benefits the mobility of the UHMWPE molecular chains and increases the degree of crystallization. However, the crystallization enthalpy of PEW-g-CaCO3/UHMWPE is lower than that of AA-CaCO3/UHMWPE and CaCO3/UHMWPE at cooling rates above 5 °C min−1. It is considered that the mobility of the UHMWPE molecular chains in PEW-g-CaCO3/UHMWPE is restricted at high cooling rates and that the heterogeneous nucleation of PEW-g-CaCO3 is weaker than that of AA-CaCO3, so the crystallization enthalpy of PEW-g-CaCO3/UHMWPE decreases with increasing cooling rate.
Fig. 3 Plots of crystallization peak temperature and crystallization enthalpy versus cooling rate for UHMWPE composites filled by 10% filler
Effect of modified-CaCO3 on non-isothermal crystallization kinetics of UHMWPE
Several models have been used to describe the non-isothermal crystallization kinetics of polymers and their blends and composites, including the Ozawa, Jeziorny and Mo models. In our work, the effect of modified CaCO3 on the non-isothermal crystallization kinetics of UHMWPE was investigated using the Avrami theory as modified by Jeziorny, together with the Mo model.
Based on the Avrami theory modified by Jeziorny [33, 37, 38], the non-isothermal crystallization kinetics can be described as follows:
$$ X_{\text{t}} = 1 - \exp \left( { - Z_{\text{t}} t^{n} } \right) $$
$$ \lg [ - \ln \left( {1 - X_{\text{t}} } \right)] = n\lg t + \lg Z_{\text{t}} $$
$$ \lg Z_{\text{c}} = \frac{{\lg Z_{\text{t}} }}{R} $$
where n is the Avrami exponent, Zt and Zc are the Avrami and Jeziorny crystallization rate constants, respectively, and R is the cooling rate. The time-dependent relative crystallinity, Xt, can be calculated by Eqs. 5 and 6.
$$ X_{\text{T}} = \frac{{\mathop \smallint \nolimits_{{T_{0} }}^{T} \frac{{{\text{d}}H}}{{{\text{d}}T}}{\text{d}}T}}{{\mathop \smallint \nolimits_{{T_{0} }}^{{T_{\infty } }} \frac{{{\text{d}}H}}{{{\text{d}}T}}{\text{d}}T}} $$
where T0 and t0 are the crystallization onset temperature and time, T and t are an arbitrary crystallization temperature and time, T∞ and t∞ are the final crystallization temperature and time, respectively, and XT is the temperature-dependent relative crystallinity; during cooling, time and temperature are related through the cooling rate by t = (T0 − T)/R. If lg[−ln(1 − Xt)] is linear in lg t, the Avrami exponent n and the crystallization rate constant Zt can be obtained from the slope and intercept, and the crystallization rate constant Zc can then be calculated by Eq. (4).
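As a concrete illustration of this fitting procedure, the sketch below integrates a baseline-subtracted DSC exotherm into a relative-crystallinity curve and extracts n, Zt and Zc by linear regression. It is a minimal sketch: the function and variable names are ours, and the trapezoidal integration and 3–97% fitting window are assumptions, not details taken from this study.

```python
import numpy as np

def jeziorny_fit(time_min, heat_flow, cooling_rate):
    """Fit the Jeziorny-modified Avrami model to one DSC exotherm.

    time_min     : crystallization time in minutes (monotonic)
    heat_flow    : exothermic heat flow dH/dt (baseline-subtracted)
    cooling_rate : R in degC/min
    Returns (n, Zt, Zc, Xt).
    """
    # Relative crystallinity: partial integral over total integral (Eq. 5)
    cum = np.concatenate(([0.0], np.cumsum(
        0.5 * (heat_flow[1:] + heat_flow[:-1]) * np.diff(time_min))))
    Xt = cum / cum[-1]

    # Keep the interior of the transformation to avoid log(0) at the ends
    mask = (Xt > 0.03) & (Xt < 0.97)
    y = np.log10(-np.log(1.0 - Xt[mask]))   # lg[-ln(1 - Xt)]
    x = np.log10(time_min[mask])            # lg t

    n, lgZt = np.polyfit(x, y, 1)           # slope = n, intercept = lg Zt
    Zt = 10.0 ** lgZt
    Zc = 10.0 ** (lgZt / cooling_rate)      # Jeziorny correction (Eq. 4)
    return n, Zt, Zc, Xt
```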
Figures 4 and 5 show representative curves of the relative crystallinity of the UHMWPE composites versus temperature and time, respectively. The plot of the half crystallization time t1/2 (Table 1), obtained from Fig. 5, versus cooling rate is shown in Fig. 6. The half crystallization time of all samples decreases with increasing cooling rate, indicating that a faster cooling rate increases the crystallization rate and shortens the crystallization time. Moreover, the half crystallization time of the UHMWPE composites is lower than that of pure UHMWPE, indicating a higher crystallization rate in the composites filled with PEW-g-CaCO3.
Curves of relative crystallinity versus temperature of PEW-g-CaCO3/UHMWPE. PEW-g-CaCO3/wt%: a 0, b 10 and c 20
Curves of relative crystallinity versus time of PEW-g-CaCO3/UHMWPE. PEW-g-CaCO3/wt%: a 0, b 20 and c 10. Line: the experimental results; point: the kinetic calculation results
Table 1 Kinetic parameters of UHMWPE and its composites from the Jeziorny method (filler content/wt%, cooling rate R/°C min−1, half crystallization time t1/2/min, together with n, Zt and Zc)
Plots of half crystallization time versus cooling rate
According to Eq. (3), the plots of lg[−ln(1 − Xt)] versus lg t are shown in Fig. 7. The good linearity means that the Jeziorny method is suitable for describing the non-isothermal crystallization kinetics of UHMWPE and its composites. The Avrami exponent n and crystallization rate constant Zt obtained from the slope and intercept are listed in Table 1. The crystallization rate constant of all samples increases as the cooling rate is raised from 5 to 10 °C min−1 and then decreases with a further increase in cooling rate. The Avrami exponent n is about 5, which may result from the high melt viscosity of these systems and a correspondingly more complicated crystallization mechanism [32].
Plots of lg[−ln(1 − Xt)] − lg t for non-isothermal crystallization of PEW-g-CaCO3/UHMWPE. PEW-g-CaCO3/wt%: a 0, b 10 and c 20
The dependence of the crystallization rate constant of the UHMWPE composites on cooling rate is shown in Fig. 8. The crystallization rate constant depends on both the filler content and the cooling rate. UHMWPE and its composites exhibit a maximum crystallization rate at a cooling rate of 10 °C min−1; above 10 °C min−1 the crystallization rate decreases with increasing cooling rate. Below 10 wt% filler, the crystallization rate of the composites increases with filler content; above 10 wt%, it decreases. This suggests that high filler contents and high cooling rates hinder the movement of the molecular chains and thereby reduce the crystallization rate of UHMWPE.
Plots of crystallization rate constant versus cooling rate
To verify the kinetic calculation, the experimental results are compared with the results calculated from the kinetic constants in Fig. 5c. The experimental results agree with the kinetic calculation in the relative crystallinity range of 0–50%. Above 50% relative crystallinity, the experimental results deviate from the kinetic calculation, which may be due to secondary crystallization.
Mo method is generally used to investigate the non-isothermal crystallization kinetics of polymer and its composites [33, 37, 38].
$$ \lg R = \lg F(T) - b\lg t $$
where R is the cooling rate and F(T) = [K/Zt]1/m characterizes the crystallization rate: the larger the value of F(T), the slower the crystallization. If lg R is linear in lg t, the values of F(T) can be calculated from the intercept.
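A minimal sketch of the Mo fit is given below, assuming that for each fixed relative crystallinity the time needed to reach it at every cooling rate is obtained by linear interpolation; the data layout and function names are illustrative only.

```python
import numpy as np

def mo_fit(runs, X_targets=(0.2, 0.4, 0.6, 0.8)):
    """Mo analysis: runs is a list of (cooling_rate, time_min, Xt) tuples,
    one per DSC experiment. Returns {X: (F_T, b)}."""
    out = {}
    for X in X_targets:
        lgR, lgt = [], []
        for rate, t, Xt in runs:
            # time at which this run reaches relative crystallinity X
            t_X = np.interp(X, Xt, t)
            lgR.append(np.log10(rate))
            lgt.append(np.log10(t_X))
        slope, intercept = np.polyfit(lgt, lgR, 1)
        out[X] = (10.0 ** intercept, -slope)  # F(T) from intercept, b = -slope
    return out
```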
Figure 9 shows the plots of lg R versus lg t for UHMWPE and its composites. The plots of lg R are well linear in lg t, which means that the Mo method is suitable for describing the non-isothermal crystallization kinetics of UHMWPE and its composites. The values of F(T) calculated from the intercepts are plotted against relative crystallinity in Fig. 10. F(T) increases with increasing relative crystallinity, indicating that the crystallization rate slows at high crystallinity. The F(T) values of pure UHMWPE are higher than those of all composites, again indicating that PEW-g-CaCO3 accelerates the crystallization of UHMWPE. The lower F(T) of the composites filled with 10–20% PEW-g-CaCO3 likewise indicates a higher crystallization rate, consistent with the conclusion obtained by the Jeziorny method.
Plots of lg R − lg t for non-isothermal crystallization of PEW-g-CaCO3/UHMWPE. PEW-g-CaCO3/wt%: a 0, b 10 and c 20
Plots of F(T) versus relative crystallinity
To improve the properties of UHMWPE composites filled with CaCO3, oligomer-modified CaCO3 was prepared and its effect on the non-isothermal crystallization behavior and kinetics of UHMWPE was investigated by differential scanning calorimetry. The results show that the addition of oligomer-modified CaCO3 increases the crystallization temperature and crystallization enthalpy of UHMWPE, which is attributed to the heterogeneous nucleation of the modified filler and to the improved mobility of the UHMWPE molecular chains. The Jeziorny and Mo methods both describe the non-isothermal crystallization kinetics of UHMWPE and its composites well. The crystallization rate of the composites depends on the filler content and the cooling rate: UHMWPE and its composites exhibit a maximum crystallization rate at 10 °C min−1, above which the rate decreases with increasing cooling rate; the rate increases with filler content below 10 wt% and decreases above 10 wt%. High filler contents and high cooling rates hinder the movement of the molecular chains and reduce the crystallization rate of UHMWPE.
We acknowledge the support of this work by the National Key Research and Development Program of China (Grant No. 2016YFB0302302), the Natural Science Foundations of China (51573213, 51303215) and the Pearl River Nova Program of Guangzhou (201610010163).
Khasraghi SS, Rezaei M. Preparation and characterization of UHMWPE/HDPE/MWCNT melt-blended nanocomposites. J Thermoplast Compos. 2015;28:305–26.
Visco A, Yousef S, Galtieri G, Nocita D, Pistone A, Njuguna J. Thermal, mechanical and rheological behaviors of nanocomposites based on UHMWPE/paraffin oil/carbon nanofiller obtained by using different dispersion techniques. JOM. 2016;68:1078–89.
Forster AL, Forster AM, Chin JW, Peng JS, Lin CC, Petit S, Kang KL, Paulter N, Riley MA, Rice KD, Al-Sheikhly M. Long-term stability of UHMWPE fibers. Polym Degrad Stab. 2015;114:45–51.
Chen L, Zheng K, Fang Q. Effect of strain rate on the dynamic tensile behaviour of UHMWPE fibre laminates. Polym Test. 2017;63:54–64.
Saikko V. Effect of contact area on the wear and friction of UHMWPE in circular translation pin-on-disk tests. J Tribol. 2017;139:061606.
Chu YY, Chen XG, Tian LP. Modifying friction between ultra-high molecular weight polyethylene (UHMWPE) yarns with plasma enhanced chemical vapour deposition (PCVD). Appl Surf Sci. 2017;406:77–83.
Wang YZ, Yin ZW, Li HL, Gao GY, Zhang XL. Friction and wear characteristics of ultrahigh molecular weight polyethylene (UHMWPE) composites containing glass fibers and carbon fibers under dry and water-lubricated conditions. Wear. 2017;380–381:42–51.
Ruan FT, Bao LM. Mechanical enhancement of UHMWPE fibers by coating with carbon nanoparticles. Fiber Polym. 2014;15:723–8.
Tang G, Hu X, Tang TH, Claramunt C. Mechanical properties of surface treated UHMWPE fiber and SiO2 filled PMMA composites. Surf Interface Anal. 2017;49:898–903.
Kumar A, Bijwe J, Sharma S. Hard metal nitrides: role in enhancing the abrasive wear resistance of UHMWPE. Wear. 2017;378–379:35–42.
Sharma S, Bijwe JE, Panier S. Assessment of potential of nano and micro-sized boron carbide particles to enhance the abrasive wear resistance of UHMWPE. Compos B. 2016;99:312–20.
Sharma S, Bijwe J, Panier S, Sharma M. Abrasive wear performance of SiC-UHMWPE nano-composites-influence of amount and size. Wear. 2015;332–333:863–71.
He XL, Wang YH, Wang QT, Tang Y, Liu BP. Effects of addition of ultra-high molecular weight polyethylene on tie-molecule and crystallization behavior of unimodal PE-100 pipe materials. J Macromol Sci B. 2016;55:1007–21.
Pi L, Hu XY, Nie M, Wang Q. Role of ultrahigh molecular weight polyethylene during rotation extrusion of polyethylene pipe. Ind Eng Chem Res. 2014;53:13828–32.
Liu YM, Shi F, Bo L, Zhi W, Weng J, Qu SX. A novel alginate-encapsulated system to study biological response to critical-sized wear particles of UHMWPE loaded with alendronate sodium. Mater Sci Eng C. 2017;79:679–86.
Riveiro A, Soto R, Del Val J, Comesana R, Boutinguiza M, Quintero F, Lusquinos F, Pou J. Laser surface modification of ultra-high-molecular-weight polyethylene (UHMWPE) for biomedical applications. Appl Surf Sci. 2014;302:236–42.
Souza VC, Oliveira JE, Lima SJG, Silva LB. Influence of vitamin C on morphological and thermal behaviour of biomedical UHMWPE. Macromol Symp. 2014;344:8–13.
Fang XD, Wyatt T, Hong YF, Yao DG. Gel spinning of UHMWPE fibers with polybutene as a new spin solvent. Polym Eng Sci. 2016;56:697–706.
Huang CY, Wu JY, Tsai CS, Hsieh KH, Yeh JT, Chen KN. Effects of argon plasma treatment on the adhesion property of ultra high molecular weight polyethylene (UHMWPE) textile. Surf Coat Technol. 2013;231:507–11.
Chukov DI, Stepashkin AA, Gorshenkov MV, Tcherdyntsev VV, Kaloshkin SD. Surface modification of carbon fibers and its effect on the fiber-matrix interaction of UHMWPE based composites. J Alloys Compd. 2014;586(SUPPL 1):S459–63.
Panin CV, Kornienko LA, Suan TN, Ivanova LR, Poltaranin MA. The effect of adding calcium stearate on wear-resistance of ultra-high molecular weight polyethylene. Procedia Eng. 2015;113:490–8.
Puértolas JA, Kurtz SM. Evaluation of carbon nanotubes and graphene as reinforcements for UHMWPE-based composites in arthroplastic applications: a review. J Mech Behav Biomed Mater. 2014;39:129–45.
Yeh JT, Wang CK, Yu W, Huang KS. Ultradrawing and ultimate tensile properties of ultrahigh molecular weight polyethylene composite fibers filled with functionalized nanoalumina fillers. Polym Eng Sci. 2015;55:2205–14.
Dintcheva NT, Morici E, Arrigo R, Zerillo G, Marona V, Sansotera M, Magagnin L, Navarrini W. High performance composites containing perfluoropolyethers-functionalized carbon-based nanoparticles: rheological behavior and wettability. Compos B Eng. 2016;95:29–39.
Xu GY, Zhu QR. Studies on crystallization and melting behaviors of UHMWPE/MWNTs nanocomposites with reduced chain entanglements. Polym Polym Compos. 2017;25:495–506.
Doshi BN, Ghali B, Godleski-Beckos C, Lozynsky AJ, Oral E, Muratoglu OK. High pressure crystallization of vitamin e-containing radiation cross-linked UHMWPE. Macromol Mater Eng. 2015;300:458–65.
George A, Ngo HD, Bellare A. Influence of crystallization conditions on the tensile properties of radiation crosslinked, vitamin E stabilized UHMWPE. J Mech Behav Biomed Mater. 2014;40:406–12.
Shi XM, Bin YZ, Hou DS, Men YF, Matsuo M. Gelation/crystallization mechanisms of UHMWPE solutions and structures of ultradrawn gel films. Polym J. 2014;46:21–35.
Zuo JD, Liu SM, Zhao JQ. Cocrystallization behavior of HDPE/UHMWPE blends prepared by two-step processing way. Polym Polym Compos. 2015;23:59–64.
Sattari M, Mirsalehi SA, Khavandi A, Alizadeh O, Naimi-Jamal MR. Non-isothermal melting and crystallization behavior of UHMWPE/SCF/nano-SiO2 hybrid composites. J Therm Anal Calorim. 2015;122:1319–30.
Liu C, Qiu HT, Liu CJ, Zhang J. Study on crystal process and isothermal crystallization kinetics of UHMWPE/CA-MMT composites. Polym Compos. 2012;33:1987–92.
Zhang CF, Zhu BK, Ji GL, Xu YY. Studies on nonisothermal crystallization of ultra-high molecular weight polyethylene in liquid paraffin. J Appl Polym Sci. 2006;99:2782–8.
Zhang CF, Bai YX, Gu J, Sun YP. Crystallization kinetics of ultra high-molecular weight polyethylene in liquid paraffin during solid–liquid thermally induced phase separation process. J Appl Polym Sci. 2011;122:2442–8.
Shen HL, Zhang N. Nonisothermal crystallization kinetics of HDPE/UHMWPE/n-HA composites. Adv Mater Res. 2011;194–196:2351–4.
Song SJ, Wu PY, Ye MX, Feng JC, Yang YL. Effect of small amount of ultra high molecular weight component on the crystallization behaviors of bimodal high density polyethylene. Polymer. 2008;49:2964–73.
Wu ZX, Zhang ZS, Mai K. Preparation and thermal property of ultrahigh molecular weight polyethylene composites filled by calcium carbonate modified with long chain. J Thermoplast Compos Mater. 2018. https://doi.org/10.1177/0892705718807955.
Jeziorny A. Parameters characterizing the kinetics of the nonisothermal crystallization of poly(ethylene terephthalate) determined by DSC. Polymer. 1978;19:1142–4.
Song J, Zhang H, Ren M, Chen Q, Sun X, Wang S, Zhang H, Mo Z. Crystal transition of nylon-12,12 under drawing and annealing. Macromol Rapid Commun. 2005;26:487–90.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Key Laboratory of Polymeric Composites and Functional Materials of Ministry of Education, Guangdong Provincial Key Laboratory for High Performance Polymer-Based Composites, Materials Science Institute, School of Chemistry, Sun Yat-sen University, Guangzhou, People's Republic of China
Wu, Z., Zhang, Z. & Mai, K. J Therm Anal Calorim (2020) 139: 1111. https://doi.org/10.1007/s10973-019-08428-w
Received 25 July 2018
Multi-sensing based target tracking by using decision-making strategy with spatial and temporal properties
Liu Yang1,
Liu Xiuju2,
Jin Huixia1,
Fu Yuanyuan1 &
Zhang Chi1
Multi-sensing systems for target tracking have been addressed by many researchers from different fields. In this work, a multi-sensing system operating in the time, space, and frequency domains is first described. The system design is based on the spatial and temporal characteristics of the sensors and is therefore reliable and stable. The frequency output of the sensors is used to segment the detection scope into different domains and is simple to implement. To address the challenge of the decision-making strategy, this work combines the spatial and temporal properties with the determination mechanism and optimizes the system design, making it a promising basis for improving tracking accuracy. With this strategy, target detection results are obtained and evaluated by probability-based parameters. According to the statistical analysis, the proposed decision-making scheme achieves high detection accuracy and good working performance, providing a straightforward methodology for configuring multi-sensing systems.
The significance of sensors for target tracking and their applications has attracted a great deal of interest over the last decades. With the evolution of sensing technology, sensing platforms have become increasingly complex [1], and more information from sensors is available to improve tracking reliability and accuracy. Nevertheless, the demands of signal detection result in large data volumes and low tracking speed [2]. This problem is most pronounced in real-time tracking and object analysis. A multi-sensing system, which combines tracking outputs from multiple resources, provides users with comprehensive and complementary information [3]. The information from multiple sensors can thereby be integrated to gain a better understanding of the tracking task.
On the basis of previous research, multi-sensors have already been employed for traffic speed and travel time detection [4,5,6]. Indoor positioning is another such field, with recent publications demonstrating capabilities in healthcare monitoring [7, 8], surveillance [9, 10], and target group pattern generation [11, 12]. Multi-sensing systems outperform traditional tracking techniques in managing the flow of signals and coordinating sensor actions [13]. A tracking system equipped with sensors, functioning in an autonomous or semi-autonomous mode, must be capable of identifying the target from the sensor data and making decisions based on this information [14]. Information of different modalities can be combined through signal integration to obtain an exact understanding of the target. In a multi-sensing system, multiple sensors inevitably introduce data redundancy or data conflicts, because the data are stored in multiple databases with inconsistent attributes [15,16,17]. The availability of data, the connectivity and diversity of data sources, and the capabilities of data analytic methods make tracking precision an ongoing challenge. Without strategies for precise positioning, the sensor output can hardly be reliable. That is, improving the accuracy of the sensing system is a specific issue in tracking optimization, and its importance lies in being an underlying concern of the whole sensing system.
Many techniques have been explored to improve tracking accuracy. Alon Shalev Housfater applied sequential Monte Carlo methods to nonlinear input optimization in multi-sensor tracking systems [18], showing that the Monte Carlo algorithm converges rapidly to an optimal result. Somnath Deb et al. proposed an S-dimensional assignment algorithm as a more effective alternative for associating and estimating sensor outputs in surveillance [19]. For indoor tracking, a combination of a Kalman sensor group fusion architecture and an Alpha-Beta filter has been devised to handle randomly walking persons and positioning issues [20]. Recent research addresses this problem by implementing processing algorithms in multi-sensing systems.
In this paper, we consider the decision-making mechanism of tracking. A set of decision-making strategies is proposed for accuracy improvement. The remainder of this paper is organized as follows: the problem statement and the basic working model are presented in the next section. The decision-making strategy for target tracking is proposed in Section 3, followed by the mathematical analysis. Section 4 briefly describes the signal-processing principle and presents the experimental outcomes used to test the proposed methodology, together with the data analysis results. Conclusions and a discussion are given in the last section.
Basic concepts on spatial and temporal characteristics
We now describe the theory of spatial and temporal features of moving objects, which influences the accuracy demanded of tracking [21]. In this research, characteristics integration refers to the synergistic use of both the spatial and the temporal information provided by multiple sensory devices to accomplish the tracking task. Let
$$ R=\left\{T,S,f\right\} $$
be a three-dimensional domain, with T standing for the time domain, S for the spatial domain, and f for the frequency domain. During a specific time interval, the area occupied by a sensing node can be computed efficiently from the positioning coordinates Dx and Dy. The area of a single sensor is then:
$$ {S}_R={D}_x\times {D}_y $$
Hence, each sensor is normalized and defined in the three-dimension domain as shown in Fig. 1.
Sensing node definition in the three-dimension domain
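To make this representation concrete, the following sketch models a sensing node as a record in the R = {T, S, f} domain and computes its occupied area via Eq. 2; the class and field names, and the numbers in the usage line, are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SensingNode:
    """A sensor normalized in the three-dimension domain R = {T, S, f}."""
    t_interval: tuple[float, float]  # active time window T (s)
    dx: float                        # spatial extent along x (m)
    dy: float                        # spatial extent along y (m)
    freq: float                      # characteristic output frequency f (Hz)

    @property
    def area(self) -> float:
        # Occupation area of the node: S_R = Dx * Dy (Eq. 2)
        return self.dx * self.dy

node = SensingNode(t_interval=(0.0, 10.0), dx=2.5, dy=4.0, freq=50.0)
print(node.area)  # 10.0 square meters
```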
Building on the research in [22], a moving object is characterized by pronounced temporal and spatial features. One of the key facts gleaned from that research is that the frequency response can be used as a resolution for target description in dynamic state monitoring. Thereby, in the continuous-spatial domain, the sensor with frequency f is transformed to a coordinate-based representation (Fig. 2).
Spatial domain description of a sensor
A general tracking target rT in the plane is positioned by (xT, yT) in Cartesian coordinates. In the aforementioned three-dimension domain the object is delivered as rT = {tT, sT, fT}, which can also be expressed as the detection signal (xT, yT) at time tT. In general, a set of properties is required to form the concentration on the spatial domain; in this paper the variation scope for detecting is named ST for target interpretation. Since the time and spatial features are significantly correlated, the frequency within this period is distinguished by three thresholds th1, th2, and th3. According to the statistical analysis of our tracking signals, th1 marks the inflection of the frequency trend, at which point the direction reverses, while a frequency shift occurs between th2 and th3; the frequency is zero at both of these points, and frequencies between them can be regarded as negative for computation. Figure 3 shows the frequency curve for constructing the target property in the time domain.
Frequency variation of the target in the time domain
The consistency of the tracking target governs the domain transformation. Note that while the object is being scanned, it may move from one ST to another. In this way we also obtain a continuous multi-domain target in quantized form. The derived target with updated representation is written as rS = {tS, sS, fS}, where the subscripted variables are the corresponding marks in the new region. To extend the sensor detection from one scope to the entire range, system interference and measurement error must be considered for accurate positioning. Computation with differential position data is facilitated by the following formula, obtained on the basis of the statistical outcome:
$$ rs=\underset{-{f}_T}{\overset{f_T}{\int }}\phi \left({R}^n\right)\ln \left(\cos \left(\frac{3\pi }{2}\omega +\lambda \right)\right) dr $$
$$ \omega =\delta \pi f $$
where λ stands for the correlation coefficient of the spatial domain with the Cartesian coordinate system, and ω is a variant of the frequency f with respect to the angle δ, which is given and kept constant. The multi-domain figure based on the classical sensing signal of different orders is discrete in its dimension parameter Rn. Suppose there are two independent STs; then the variability of the object trajectories is calculated from the measurements of the two regions. Let the typical frequency thresholds {th1, th2, th3} denote the identity of an ST with its respective coefficient λ. The frequency variation accounting for different ST domains is illustrated in Fig. 4.
Frequency band change with different coefficients
The multiple sensors detect the designated target in a continuous time series, and the sensing frequencies are updated and transformed in a timely manner. Typical tracking parameters are quantified by selecting the proper value of the correlation coefficient λ for sensing, where a wider time interval [th1, th2] and a narrower [th2, th3] are obtained as the coefficient increases. The shapes of the frequency curves in Fig. 4 indicate that, for the same tracking step, detection responds more quickly in [th1, th2] with λ1 and in [th2, th3] with λ2, where λ1 ≤ λ2 for computing. Consequently, sensing parameters based on different function scopes can be integrated and further optimized to show how these values suit the tracking purposes. A desired correlation coefficient can be deduced through Eq. 5.
$$ \lambda =\frac{1}{2}\delta \sqrt[\frac{2}{3}]{\cos^2\left(\omega t\right)} $$
Since ω = δπf, we can also write:
$$ f=\phi \left(\lambda \right) $$
As long as the angle δ is fixed, the above formula can thus be used to determine the domain representation in an effective way.
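A small numeric sketch of Eqs. 4–6 follows. Here the 2/3-index radical is read as raising to the power 3/2, and the inverse f = ϕ(λ) is recovered by a simple grid search rather than a closed form; both readings, and the value of δ, are our assumptions.

```python
import numpy as np

DELTA = 0.8  # fixed angle delta; the numeric value is an assumption

def omega(f):
    """Eq. 4: omega = delta * pi * f."""
    return DELTA * np.pi * f

def corr_coeff(f, t):
    """Eq. 5: lambda = (delta/2) * (cos^2(omega t))^(3/2)."""
    c2 = np.cos(omega(f) * t) ** 2
    return 0.5 * DELTA * c2 ** 1.5

def freq_from_lambda(lam, t):
    """Eq. 6: f = phi(lambda), inverted here by a grid search."""
    f_grid = np.linspace(0.1, 100.0, 10000)
    lams = corr_coeff(f_grid, t)
    return f_grid[np.argmin(np.abs(lams - lam))]
```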
Target tracking methodology
Tracking overview
We treat target addressing as a bivariate resolution issue. The detection method starts by recording the frequency information received from the target. An ST domain identification of the target can be delivered as follows:
$$ {M}_d\left({p}_1,\cdots ,{p}_N\right)=\left\{\begin{array}{cc}0,& {SP}_T<\lambda \left|f<\phi \left(\lambda \right)\right.\\ {}1,& {SP}_T>\lambda \left|f>\phi \left(\lambda \right)\right.\end{array}\right. $$
where pk is the signal on the kth dimension; in this research the maximum k is three. Observe from Eq. 7 that the detection outcome takes values in the discrete set {0, 1}. The expression Md(p1, ⋯, pN) = 0 refers to invalid sensing signals, while for Md(p1, ⋯, pN) = 1 the object is addressed by the sensor outputs. The cumulative set of all measurements from the sensors collected in time sequences is denoted by Rk,d, where the subscripts k and d are the serial numbers of the sampling vector and the ST domain, respectively. The track trajectory formulation of a local sensor can be approximated using statistical data analysis:
$$ z\left({p}_1,\cdots {p}_N\right)=\frac{1}{\delta^{\frac{3}{2}}}\sum \limits_{d=1}^3{\left\Vert {R}_{k,d}\right\Vert}^{\lambda } $$
In general, two distinct decision-making mechanisms for target detection are employed [23]: (1) soft-decision making based on decoding and fusing the raw sensing data, and (2) hard-decision making based on deploying multiple parallel sensing elements in communication.
We refer readers to [24,25,26] for more detailed information on the decision making theory.
Because target tracking using multiple sensors provides better results than a single sensor, each local sensor tracker initializes track information, which is then transmitted for data fusion. Our task is to establish a sensing strategy that performs target tracking within the three-dimensional domain using spatial and temporal characteristics. A hardware system based on the ST principle is built to explore the possibilities of multi-domain detection and to test algorithms. Signal processing tools are used for data processing, aiming to resolve the overlap of signals coming from simultaneously operating sensing devices. Techniques for data identification and fusion are then implemented and tested.
Design of multi-sensing system
The data collection scheme depends on the target trajectories as well as on the sensor parameters and coordinates. The sensing output is given by measurements of frequency and power signals, which are in general finite sequences, a property that is clearly useful for the practical implementation of the sensing system [27, 28]. Considering the statistical characteristics of signal and noise, two estimation parameters, the detection probability Pd [29, 30] and the false alarm probability Pfa [31, 32], are introduced to evaluate the working performance in combination with Eq. 7. We have:
$$ {M}_d\left({p}_1,\cdots {p}_N\right)=\left\{\begin{array}{c}0,{SP}_T<\beta \left|f<\phi \left(\beta \right)\right.\\ {}1,{SP}_T>\beta \left|f>\phi \left(\beta \right)\right.\end{array}\right. $$
where β is the correlation threshold of the coefficient λ. As pointed out in Section 2, the tracking evolution according to the spatial and temporal characteristics directly influences the measurement. Using the detection probability and false alarm probability to determine the detection outcome, we obtain
$$ \left\{\begin{array}{c}{P}_{fa}=\left({M}_d\left({p}_i\right)=0\right)=P\Big({SP}_T<\beta \left|f<\phi \left(\beta \right)\Big)\right.\\ {}{P}_d=\left({M}_d\left({p}_i\right)=1\right)=P\left({SP}_T>\beta \left|f>\phi \left(\beta \right)\right.\right)\end{array}\right. $$
We shall thus use a more reliable extension of the target demonstration, a refinement built on the raw frequency and power data. As a consequence, the interpretations of Pd and Pfa are the result of a finite number of discrete values in the three-dimensional domain.
$$ \left\{\begin{array}{c}{P}_{fa}={e}^{-\vartheta (ST)}\sum \limits_{d=1}^3\frac{1}{d}\left(\frac{\delta \sqrt[\frac{2}{3}]{\cos^2\left(\omega t\right)}}{2\phi \left(\lambda \right)}\right)\\ {}{P}_d={e}^{\frac{-\vartheta (ST)}{2\phi \left(\lambda \right)}}\sum \limits_{d=1}^3\frac{1}{d}\left(\delta \sqrt[\frac{2}{3}]{\cos^2\left(\omega t\right)}\right)\end{array}\right. $$
where ϑ(ST) is the function associated with the frequency and power information in each ST. We can then define
$$ \vartheta (ST)=1+\lambda d\sqrt{\frac{\delta }{f}} $$
Meanwhile, f = ϕ(λ) in line with Eq. 6. The application of Eqs. 11 and 12 shows the connection of the detection accuracy with the spatial and temporal properties. If the frequency of a movement increases, the movement time is reduced by a factor and ϑ(ST) drops accordingly. Similarly, if the frequency-power function and the frequency decrease while the other independent variables remain the same, the detection probability is definitely improved.
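The sketch below evaluates Eqs. 11 and 12 for a single ST domain. Because the paper leaves the coupling between the summation index d and ϑ(ST) implicit, the domain index inside ϑ is fixed to 1 here; that choice and all parameter values are assumptions.

```python
import numpy as np

def theta_ST(lam, d, delta, f):
    """Eq. 12: theta(ST) = 1 + lambda * d * sqrt(delta / f)."""
    return 1.0 + lam * d * np.sqrt(delta / f)

def pd_pfa(lam, delta, f, t, phi_lam):
    """Eq. 11 for one ST domain; using d = 1 inside theta is an assumption."""
    c = delta * (np.cos(delta * np.pi * f * t) ** 2) ** 1.5
    harmonic = sum(1.0 / d for d in (1, 2, 3))   # the sum over d = 1..3
    th = theta_ST(lam, 1, delta, f)
    p_fa = np.exp(-th) * harmonic * c / (2.0 * phi_lam)
    p_d = np.exp(-th / (2.0 * phi_lam)) * harmonic * c
    return p_d, p_fa

print(pd_pfa(lam=0.3, delta=0.8, f=10.0, t=0.05, phi_lam=5.0))
```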
Note that the spatial and temporal properties govern the working performance of both hardware and algorithms, especially when the domain basis is given. From the sensing output we obtain the continuously parameterized Pd and Pfa, corresponding to the design of the hardware and software, respectively. The operating principle for configuring the sensing devices is as follows:
$$ \left\{\begin{array}{c}{P}_{fa}\le P\left({e}^{\delta \vartheta (k)}\sum \limits_{i=1}^N\beta \left|\frac{\delta {\cos}^2\left(\omega t+\alpha \right)}{2\phi \left(\lambda \right)}-\sqrt[\frac{2}{3}]{\cos^2\left(\omega t\right)}\right|\right)\\ {}{P}_d\ge P\left(\sum \limits_{i=1}^N\frac{1}{d}\left(\delta {\cos}^2\left(\omega t\right)\right)\right)\end{array}\right. $$
Similarly, the program design rules admit a feasible optimal solution:
$$ \left\{\begin{array}{c}{P}_{fa}\le P\left({e}^{\delta \vartheta (k)}\sum \limits_{d=1}^3\sqrt[\frac{2}{3}]{\cos^2\left(\omega t\right)}\frac{\delta }{d}\left|1-\frac{1}{2\phi \left(\lambda \right)}\right|\right)\\ {}{P}_d\ge P\left(\sum \limits_{d=1}^3\frac{1}{d}\left(\delta \sqrt[\frac{2}{3}]{\cos^2\left(\omega t\right)}\right)\right)\end{array}\right. $$
Figures 5 and 6 illustrate the variability of both hard decision-making and soft decision-making of the sensing system during the target movement presented above. Figure 5 clearly shows the trend of the detection results based on Eqs. 13 and 14. In practice, it is impossible to achieve 100% positioning accuracy in the presence of environmental noise and system clutter. Compared with the soft decision-making scheme, hard decision-making achieves a higher detection probability as well as a lower false alarm probability.
Decision-making strategies combined with spatial and temporal properties
Comparison of algorithm convergence
Since the spatial and temporal representations are models that depend on statistical studies, the decision-making strategy is assumed to integrate with them, and the working performance of target tracking can be re-estimated. The integrated result is shown in Fig. 6, and the quantitative statistical outcomes confirm a sharp decrease in the false alarm probability. Thereupon, we propose a working process that aims at both detecting and interpreting a target from the spatial and temporal characteristics corresponding to the three-dimensional responses, as exhibited in Fig. 7.
Block diagram of the proposed system working procedure
The target tracking steps are applied to obtain better tracking performance. To start with, the detection scope is divided into ST domains along the time series, and each sensor involved is made a candidate for further use. Next, the detection frequencies of the target as well as the spatial and temporal characteristics of each sensor are computed, from which the correlation coefficient of the system is determined. Lastly, the decision-making manner is set, with both soft and hard decision-making corresponding to the ST domain properties. The detection probability and false alarm probability, which represent the detection accuracy, are adjusted on the basis of the sensing outputs. The outcomes can be viewed as the representation of the decision-making strategy associated with the sensing system. Once the parameters are regulated, the tracking system is able to find an optimum solution (Fig. 8).
Circuit diagram of decision-making module
For the purpose of property integration, a chip with the decision-making circuit is designed and installed in the head of the sensor, associated with the sensing element. A signal-conditioning circuit of type SN74S13 from Texas Instruments is employed for real-time data decoding [33]. Two AND gates are applied for decision making, which both eliminate the common-mode signal and increase the system sensitivity. It is worth highlighting that the described circuit works in streaming mode, and hence no time delay is generated by the signal processing.
To realize the proposed method in target tracking, the platform is established. A fundamental experiment is carried out to demonstrate the working performance of the actual system through the developed multi-sensing system. Frequency information is detected and regulated through the sensing element, and is then applied in the spatial and temporal characteristic calculation and ST domain segmentation. Meanwhile, the collaboration controlling circuit provides the decision-making function for optimizing the sensor measurements. Thus, the detection scope of each sensor is re-arranged to obtain a higher tracking accuracy.
The experiment is conducted in an air-conditioned room with almost constant temperature, humidity, and air pressure. A multi-sensing system with twenty sensors is deployed, and one object moving within six ST domains is tracked. Each ST domain is quantified by its time, space, and frequency dimensions. The moving or positioning information of the target is recorded from the output of each sensor. The detection sampling rate is 1000 ch/s over a time series of 10 min. Given such a large amount of data, the detection signal intensity is computed by the following formula:
$$ \left\{\begin{array}{c}{I}_s=\frac{2{X}_0{\omega}_0}{1-\frac{f_0}{\omega_0}}\cos \left({\omega}_t-{\varphi}_t\right)\\ {}{\varphi}_t= arc\cot \left(\frac{\delta }{{\lambda \omega}_0}\right)\end{array}\right. $$
where φt is the original domain angle of ST.
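For reference, a direct numeric evaluation of Eq. 15 might look like the sketch below, reading cos(ωt − φt) as the cosine of ω·t minus the domain angle; that reading and the placeholder parameter values are assumptions.

```python
import numpy as np

def signal_intensity(t, X0, omega0, f0, omega, delta, lam):
    """Eq. 15: detection signal intensity I_s and domain angle phi_t.
    Reads cos(omega_t - phi_t) as cos(omega * t - phi_t)."""
    phi_t = np.arctan2(lam * omega0, delta)  # arccot(delta/(lam*omega0))
    amplitude = 2.0 * X0 * omega0 / (1.0 - f0 / omega0)
    return amplitude * np.cos(omega * t - phi_t), phi_t

# Placeholder values, not the experiment's settings
I_s, phi = signal_intensity(t=0.5, X0=1.0, omega0=100.0, f0=10.0,
                            omega=100.0, delta=0.8, lam=0.3)
print(I_s, phi)
```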
The decision-making variables are adjusted based on the system outcomes that generate optimum results via domain-feature analysis. To statistically evaluate the results, the working performance is shown in Fig. 9. Figure 9a presents the measurement accuracy for three examples. The differences between the distinct determination schemes are statistically significant when noise is added. With clutter added to the environment, the signal-to-noise ratio is initially set to −5 dB. The detection probabilities are 80%, 70%, and 80% for the hard decision-making scheme, the soft decision-making scheme, and the proposed decision-making scheme, respectively. The detection accuracy of all three keeps increasing as the noise declines. Specifically, the proposed determination strategy reaches nearly 99% when the signal-to-noise ratio exceeds 3.
a Detection accuracy with clutter. b PSD for fixed frequency and different sensor number. c Detection error for different tracking time
Despite that working accuracy, increasing the number of sensors does not always improve the results in general, or, if there is an improvement, the final results are far from optimal. Figure 9b indicates this effect for a 1/f signal spectrum. The power spectral density (PSD) is calculated at the same frequency using the fast Fourier transform technique. The maximum-PSD points for the different decision-making strategies are 12, 12, and 9 sensors, respectively. This result provides evidence that the configuration of the sensing system does influence the working efficiency.
In Fig. 9c, the impact of different time durations on the detection error rate is evaluated. Tests of 2 to 20 min are conducted, with fluctuations present for all three strategies. The best value, a 2% deviation rate, is obtained at eight tracking minutes using the proposed determination strategy; beyond that, increasing the duration is counterproductive.
Conclusions and future work
This work has presented a multi-sensing determination strategy to track objects within an established scope by using spatial and temporal characteristics. The decision-making scheme is analyzed in an attention-oriented task. Results indicate that the detection accuracy of the sensing system can be estimated via probability-based functions by segmenting the working region into a three-dimensional domain. The proposed domain-dividing method and decision-making principle improve the detection accuracy and provide an optimal configuration for the system development.
Firstly, the traditional tracking technique is replaced by a more efficient one to improve the working performance of target tracking. The spatial and temporal property is studied through the collection of tracking frequencies, and, combined with the real-time performance, the ST domains are generated.
Secondly, the decision-making strategy principle is also regulated. While two types of decision-making strategies (hard decision-making and soft decision-making) are widely employed, both are considered and analyzed, and a more applicable decision-making strategy is proposed. This change, integrated with the spatial and temporal features, is utilized in connection with the sensing element as a determination module.
The paper also presented a multi-sensing system setup and an optimization process for practical use. In addition, the entire basis for the system deployment is carefully computed and examined by statistical analysis.
This study offers an opportunity to improve target tracking accuracy through the integration of the three-dimensional outcomes. Experiments are conducted on the proposed system to verify its working performance, in comparison with two traditionally used methods. The testing results, even if imperfect, provide a direction for exploring tracking properties on the foundation of natural characteristics such as the working frequency.
Future work should pay more attention to more complex situations where multiple objects are positioned within the tracking scope, to explore whether the simple decision-making strategy can also be extended to the multi-target case. Although the system can precisely segment the region owing to the natural characteristics, it is still an open question whether the determination strategy could identify each target among all of them.
DMS: Decision-making strategy
MSS: Multi-sensing system
STC: Spatial and temporal characteristics
WSN: Wireless sensor network
M. Kalandros, L.Y. Pao, Multisensor covariance control strategies for reducing bias effects in interacting target scenarios. IEEE Trans. Aerosp. Electron. Syst. 41(1), 153–173 (2005)
S. Musick, R. Malhotra, Chasing the elusive sensor manager. Aerospace Electron. Conf. 1, 606–613 (1994)
T. Kerr, Modeling and evaluating an empirical INS difference monitoring procedure used to sequence SSBN NAVAID fixes. Navigation 28(4), 263–285 (1981)
C. Lundquist, L. Hammarstrand, F. Gustafsson, Road intensity based mapping using radar measurements with a probability hypothesis density filter. IEEE Trans. Signal Process. 59(4), 1397–1408 (2011)
J.S. Wasson, J.R. Sturdevant, D.M. Bullock, Real-time travel time estimates using media access control address matching. ITE J. 78(6), 20–23 (2008)
S. Young, in University of Maryland. Bluetooth traffic monitoring technology—concept of operation & deployment guidelines. Dissertation (2008)
J.A. Kirkup, D.D. Rowlands, D.V. Thiel, Team player tracking using sensors and signal strength for indoor basketball. IEEE Sensors J. 16(11), 4622–4630 (2016)
P.C. Liang, P. Krause, Real-time indoor patient movement pattern telemonitoring with one-meter precision, Eai International Conference on Wireless Mobile Communication and Healthcare (2014), pp. 141–144
G. Tang, X. Liu, C. Chen, et al., Active tracking using color silhouettes for indoor surveillance, International Conference on Wireless Communications and signal processing (2015), pp. 1–5
B.T. Vo, C.M. See, N. Ma, W.T. Ng, Multi-sensor joint detection and tracking with the Bernoulli filter. IEEE Trans. Aerosp. Electron. Syst. 48(2), 1385–1402 (2012)
A. Yaeli, P. Bak, G. Feigenblat, et al., Understanding customer behavior using indoor location analysis and visualization. IBM J. Res. Dev. 58, 3):1–3)12 (2014)
F. De Cillis, F. De Simio, L. Faramondi, et al., Indoor positioning system using walking pattern classification, Mediterranean Conference of Control and Automation (2014), pp. 511–516
E.H. Lee, T.L. Song, Multi-sensor track-to-track fusion with target existence in cluttered environments. IET Radar Sonar Navig. 11(7), 1108–1115 (2017)
H. Ken, Managing sensor performance uncertainty in a multi-sensor robotic system. Dissertation (University of South Florida, Tampa Bay, 1994)
V. Zadorozhny, Y.F. Hsu, in Scalable Uncertainty International Conference on Scalable Uncertainty Management: Scalable Uncertainty Management. Conflict-aware historical data fusion (2011), pp. 331–345
M.H. Habaebi, R.O. Khamis, A. Zyoud, M.R. Islam, RSS based localization techniques for ZigBee wireless sensor network, International Conference on Computer & Communication Engineering (2015), pp. 72–75
L. Matthies, S.A. Shafer, Error modeling in stereo navigation. IEEE J. Robot. Autom. 3(3), 239–248 (1987)
A.S. Housfater, Sequential Monte Carlo methods for multi-sensor tracking with applications to radar systems. Dissertation (Ryerson University, Toronto, 2006)
S. Deb, M. Yeddanapudi, K. Pattipati, Y. Bar-Shalom, A generalized S-D assignment algorithm for multisensor-multitarget state estimation. IEEE Trans. Aerosp. Electron. Syst. 33(2), 523–538 (1997)
A. Belmonte Hernandez, G. Hernandez Penaloza, F. Alvarez, G. Conti, Adaptive fingerprinting in multi-sensor fusion for accurate indoor tracking. IEEE Sensors J. 17(15), 4983–4998 (2017)
J.F. Soechting, Effect of target size on spatial and temporal characteristics of a pointing movement in man. Exp. Brain Res. 54(1), 121–132 (1984)
G. Sapiro, A. Cohen, A.M. Bruckstein, A subdivision scheme for continuous-scale B-splines and affine-invariant progressive smoothing. J Math Imaging Vision 7, 23–40 (1997)
Y. Cui, R.M. Voyles, J.T. Lane, A. Krishnamoorthy, M.H. Mahoor, A mechanism for real-time decision making and system maintenance for resource constrained robotic systems through ReFrESH. Auton. Robot. 39(4), 487–502 (2015)
D. Zhang, A joint response model for matched decision makers: exploring decision making mechanism for mutually-selected agents. Dissertation (Rensselaer Polytechnic Institute, Troy, 2016)
D.-W. Yue, H.H. Nguyen, Orthogonal DF cooperative relay networks with multiple-snr thresholds and multiple hard-decision detections. EURASIP J. Wirel. Commun. Netw. (2010). https://doi.org/10.1155/2010/169597
J.-T. Sung, H.-T. Pai, B.-H. Lee, Performance analysis for distributed classification fusion using soft-decision decoding in wireless sensor networks, EUC 2007 Embedded and Ubiquitous Computing (2007), pp. 623–634
M.H. Chen, P.F. Yan, A multiscale approach based on morphological filtering. IEEE Trans Pattern Anal Mach Intell 11(7), 694–700 (1989)
R. Pokrywka, Reducing false alarm rate in anomaly detection with layered filtering, International Conference on Computational Science (2008), pp. 396–404
J. Naganawa, H. Miyazaki, H. Tajima, Detection probability estimation model for wide area multilateration, Integrated Communications, Navigation and Surveillance Conference (2017), pp. 2B1_1–2B1_15
Q. Zheng, R. Yang, Z. Shan, J. Chen, Research on the detection probability of airdrop torpedo based on analytical method, International Conference on Progress in Informatics and Computing (2016), pp. 674–678
Y. Wang, Y. Zhang, Q. Zhang, S. Wu, Optimal selection of false alarm probability for dynamic spectrum access. IEEE Commun. Lett. 17(5), 844–847 (2013)
L. Anitori, M. Otten, P. Hoogeboom, False alarm probability estimation for compressive sensing radar, IEEE Radar Conference (2011), pp. 206–211
SN74S138A: 3-Line To 8-Line Decoders/ Demultiplexers datasheet, http://www.ti.com/product/SN74S138A/technicaldocuments?keyMatch=74S13&tisearch=Search-EN-Products
The authors would like to thank all of the funding bodies for their support.
This research is supported by the Natural Science Foundation of Hunan Province, China (Grant No. 2018JJ2023) and supported by Shandong Province Education and Science Planned Research Topics during the "12th Five-Year Plan" Special Computer Teaching Project, China (Grant No.YBJ15006).
College of Information and Electronic Engineering, Hunan City University, Yiyang, 413000, China
Liu Yang, Jin Huixia, Fu Yuanyuan & Zhang Chi
College of Computer, Heze University, Heze, 274000, Shandong, China
Liu Xiuju
LY was in charge of the major theoretical analysis, algorithm design, and numerical simulations; the other authors contributed to parts of the theoretical analysis and algorithm design. All authors read and approved the final manuscript.
Correspondence to Liu Xiuju.
Yang, L., Xiuju, L., Huixia, J. et al. Multi-sensing based target tracking by using decision-making strategy with spatial and temporal properties. J Wireless Com Network 2019, 117 (2019). https://doi.org/10.1186/s13638-019-1449-6
Target tracking
Recent Challenges & Avenues in Wireless Communication through Advance computational Intelligence or Deep learning methods
Published by Ganit Charcha | Category - Maths and Technology | 2014-09-23 03:49:04
Bitcoins are digital coins which are not issued by any government, bank, or organization, and rely on cryptographic protocols and a distributed network of users to mint, store, and transfer. The scheme was first suggested in 2008 by Satoshi Nakamoto, and became fully operational in January 2009. It has attracted a large number of users and a lot of media attention. How much is a bitcoin worth? Well, it's worth whatever somebody will pay for a unit of the online currency, which as I write this is 626.93 US Dollars, up by almost 100 times from 6.65 US Dollars two years ago. How does one mint a bitcoin? What is the maths behind it? Is it robust? We will answer some of these questions in this article.
Let's start by reflecting a little on why one needs Bitcoin in the first place. For that, we start with the concept of money. Money is a container for deferred payment, a promised value, and a verifiable record. A government, bank, or organization works as an authority to ensure verifiability and to provide the promised value. For deferred payment, e.g. settling a debt, a government or international treaty can provide the enforcement.
The premise of Bitcoin is the ever increasing volume of transactions over digital communication channels, where enforcing the definition of money is in the hands of financial institutions. This is a trust-based model with a third party. It is rather unlikely that you have never faced the problem inherent in such a system - where money was deducted from your account, but payment was not received in exchange for the goods you purchased. As the volume of transactions over digital channels increases, the volume of fraud, failure and dispute will increase. According to the Nilson Report, global card fraud amounted to $11.27 billion in calendar year 2012. Reversal of transactions becomes unavoidable because of failure, and in the presence of fraud, trust becomes distributed (the institution must trust you to perform a reversal).
If two willing parties could transact directly with each other without the need for a trusted third party, that would be a cost-effective solution, since maintaining distributed trust in the face of increasing fraud, failure and dispute is an ever growing cost. So how can you get some bitcoin? After all, it's just a string of 0s and 1s, so why not hack one up? Well, for money (genuine or fake) to work it must be acceptable to another party, and for digital money like Bitcoin what could be better than double-spending it (using the same money to transact many times)? To be able to do this, we must first understand the transaction process with a bitcoin before we can hack one up.
Recall that a function is defined by two sets $X$ and $Y$ and a rule $f$ which assigns to each element in $X$ one element in $Y$. The image $y$ of $x$ is denoted by $y = f(x)$. A preimage of $y$ is an element $x \in X$ for which $f(x) = y$. The set of all elements in $Y$ which have at least one preimage is called the image of $f$, denoted $Im(f)$. A function $f$ from a set $X$ to a set $Y$ is called a one-way function if $f(x)$ is "easy" to compute for all $x \in X$ but for "essentially all" elements $y \in Im(f)$ it is "computationally infeasible" to find any $x \in X$ such that $f(x) = y$. Let's consider an example: select primes $p = 48611$, $q = 53993$, form $n = pq = 2624653723$, and let $X = \{1, 2, 3, \ldots, n − 1\}$. Define a function $f$ on $X$ by $f(x) = r_{x}$ for each $x \in X$, where $r_{x}$ is the remainder when $x^{3}$ is divided by $n$. For instance, $f(2489991) = 1981394214$ since $2489991^{3} = 5881949859 \cdot n + 1981394214$. Computing $f(x)$ is a relatively simple thing to do, but reversing the procedure is much more difficult.
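We can check the worked example directly, since built-in modular exponentiation computes the remainder of $x^{3}$ divided by $n$ without forming the huge cube explicitly; this is a plain transcription of the definitions above.

```python
p, q = 48611, 53993
n = p * q
assert n == 2624653723

def f(x):
    # f(x) = x^3 mod n; pow with a modulus avoids forming x**3 in full
    return pow(x, 3, n)

print(f(2489991))          # 1981394214, matching the worked example
print(2489991 ** 3 // n)   # 5881949859, the quotient in that division
```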
A trapdoor one-way function is a one-way function $f$ from a set $X$ to a set $Y$ with the additional property that given some extra information (called the trapdoor information) it becomes feasible to find for any given $y \in Im(f )$, an $x \in X$ such that $f(x) = y$.
One-way functions and trapdoor one-way functions play the defining role among the cryptographic building blocks required for constructing the Bitcoin system. A cryptographic primitive which is fundamental in authentication, authorization, and non-repudiation is the digital signature. The purpose of a digital signature is to provide a means for an entity to bind its identity to a piece of information. The process of signing entails transforming the message and some secret information held by the entity into a tag called a signature.
A bitcoin can be thought of as a chain of transactions from one owner to the next, with owners identified by their public keys. In each transaction, the previous owner signs a hash of the transaction in which he received the bitcoins together with the public key of the next owner, using the secret signing key corresponding to his own public key. This signature (i.e., transaction) can then be added to the set of transactions that constitutes the bitcoin. Since each of these transactions references the previous transaction (i.e., in sending bitcoins, the current owner must specify where they came from), the transactions form a chain. To verify the validity of a bitcoin, a user can check the validity of each of the signatures in this chain.
Transaction Chain in Bitcoin Formation
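The sketch below is a toy version of such a chain, built on the third-party python-ecdsa package. Real Bitcoin transactions carry scripts, amounts and multiple inputs, so every structure and name here is a simplification for illustration, not the actual wire format.

```python
import hashlib
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

def transfer(prev_tx, sender_sk, recipient_vk):
    """Previous owner signs H(previous transaction || next owner's pubkey)."""
    payload = hashlib.sha256(prev_tx + recipient_vk.to_string()).digest()
    return payload + sender_sk.sign(payload)  # 32-byte hash + 64-byte signature

alice_sk = SigningKey.generate(curve=SECP256k1)
bob_sk = SigningKey.generate(curve=SECP256k1)

genesis = b"coinbase: minted 25 BTC to alice"
tx1 = transfer(genesis, alice_sk, bob_sk.get_verifying_key())

# Anyone can verify the link against alice's public key
payload, sig = tx1[:32], tx1[32:]
assert alice_sk.get_verifying_key().verify(sig, payload)
```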
To prevent double spending, all users of the system is made aware of all transactions. Double spending can then be identified when a user attempts to transfer a bitcoin after he has already done so. To determine which transaction came first, transactions are grouped into blocks and time stamped to vouch for their validity. Time stamped Blocks are formed into a chain, with each block referencing the previous one (and thus further reinforcing the validity of all previous transactions). This process yields a block chain, which is then publicly available to every user within the system.
We must then hack the timestamping system to have our bitcoin accepted. Yes, block formation is the core of valid bitcoin formation, and this is how bitcoins are generated in the first place. The figure illustrating Bitcoin block formation is shown below. In fact, this happens in the process of forming a block: each accepted block (i.e., each block incorporated into the block chain) is required to be such that, when all the data inside the block is hashed, the hash begins with a certain number of zeros. The timestamping / block formation mechanism here works as a proof-of-work system. A proof of work is a piece of data which is costly and time-consuming to produce so as to satisfy certain requirements (e.g. produce a hash using SHA256 which has 52 leading zeros). It must be trivial to check whether the data satisfies said requirements (that a hash has 52 leading zeros), while producing a valid proof of work involves a lot of trial and computation on average.
However, we are in a decentralized system with no central authority producing bitcoin blocks. To allow users to find this particular collection of data, blocks contain, in addition to a list of transactions, a nonce. Once someone finds a nonce that gives the block a correctly formatted hash, the block is broadcast in the same peer-to-peer manner as transactions. To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour: if they are generated too fast, the difficulty increases. The system is designed to generate only 21 million bitcoins in total. Finding a block currently comes with an attached reward of 25 BTC; this rate was 50 BTC until November 28, 2012 (block height 210,000), is expected to halve again in 2016, and will eventually drop to 0 in 2140.
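A toy proof-of-work loop might look like the following sketch; counting leading zero bits of a double SHA-256 is a simplification of Bitcoin's actual compact target encoding, and the difficulty value is chosen only so the search finishes quickly.

```python
import hashlib
from itertools import count

def mine(block_data: bytes, difficulty_bits: int = 20):
    """Try nonces until double-SHA256(data || nonce) has the required
    number of leading zero bits; verification is a single comparison."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        h = hashlib.sha256(hashlib.sha256(
            block_data + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, h.hex()

nonce, digest = mine(b"list of transactions | prev block hash")
print(nonce, digest)
```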
Subhas Kumar Ghosh (email: [email protected]) received his bachelor degree in Electrical Engineering from the Indian Institute of Technology, Kharagpur, India. His interests include computer security, graph theory, distributed computing, and approximation algorithms; his current interests also include text analysis and machine learning. Subhas has 25 international journal and conference publications. Over 19 years Subhas has worked with many corporate research labs, including Honeywell Labs and Siemens Corporate Technology. Recently he co-founded a text analysis start-up and works as a consultant for a Silicon Valley based big-data start-up.
WAIC, but Why? Generative Ensembles for Robust Anomaly Detection
Hyunsun Choi and Eric Jang and Alexander A. Alemi
Keywords: stat.ML, cs.LG
First published: 2018/10/02
Abstract: Machine learning models encounter Out-of-Distribution (OoD) errors when the data seen at test time are generated from a different stochastic generator than the one used to generate the training data. One proposal to scale OoD detection to high-dimensional data is to learn a tractable likelihood approximation of the training distribution, and use it to reject unlikely inputs. However, likelihood models on natural data are themselves susceptible to OoD errors, and even assign large likelihoods to samples from other datasets. To mitigate this problem, we propose Generative Ensembles, which robustify density-based OoD detection by way of estimating epistemic uncertainty of the likelihood model. We present a puzzling observation in need of an explanation -- although likelihood measures cannot account for the typical set of a distribution, and therefore should not be suitable on their own for OoD detection, WAIC performs surprisingly well in practice.
[link] Summary by Massimo Caccia 1 year ago
### Summary
Knowing when a model is qualified to make a prediction is critical to safe deployment of ML technology. Model-independent / Unsupervised Out-of-Distribution (OoD) detection is appealing mostly because it doesn't require task-specific labels to train. It is tempting to suggest a simple one-tailed test in which lower likelihoods are OoD (assigned by a Likelihood Model), but the intuition that In-Distribution (ID) inputs should have highest likelihoods _does not hold in higher dimension_. The authors propose to use the Watanabe-Akaike Information Criterion (WAIC) to circumvent this problem and empirically show the robustness of the approach.
### Counterintuitive Properties of Likelihood Models:
https://i.imgur.com/4vo0Ff5.png
So a GLOW model with Gaussian prior maps SVHN closer to the origin than Cifar (but never actually generates SVHN because Gaussian samples are on the shell). This is bad news for OoD detection.
### Proposed Methodology:
Use the WAIC criterion for OoD detection which gives an asymptotically correct estimate of the gap between the training set and test set expectations:
https://i.imgur.com/vasSxuk.png
Basically, the correction term subtracts the variance in likelihoods across independent samples from the posterior. This acts to robustify the estimate, ensuring that points that are sensitive to the particular choice of posterior are penalized. They use an ensemble of generative models as a proxy for posterior samples i.e. the ensembles acts as approximate posterior samples.
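As a rough sketch of the proposed score, assuming each ensemble member exposes a hypothetical `log_prob(x)` method (a stand-in API, not from any particular library):

```python
# WAIC-style OoD score from an ensemble of generative models:
# mean log-likelihood minus variance of log-likelihood across members.
import numpy as np

def waic_score(x, ensemble):
    logps = np.array([m.log_prob(x) for m in ensemble])  # one log p(x) per model
    # The variance term penalizes inputs whose likelihood is sensitive to
    # the particular posterior sample (ensemble member).
    return logps.mean(axis=0) - logps.var(axis=0)

# Inputs scoring below a threshold set on validation data would be
# flagged as out-of-distribution.
```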
Now, OoD can be detected with a Likelihood Model:
https://i.imgur.com/M3CDKOA.png
### Discussion
Interestingly, GLOW maps Cifar and other datasets INSIDE the Gaussian shell (which is an annulus of radius $\sqrt{dim} = \sqrt{3072} \approx 55.4$).
https://i.imgur.com/ERdgOaz.png
This is in itself quite disturbing, as it suggests that better flow-based generative models (for sampling) can be obtained by encouraging the training distribution to overlap better with the typical set in latent space.
Do Deep Generative Models Know What They Don't Know?
Eric Nalisnick and Akihiro Matsukawa and Yee Whye Teh and Dilan Gorur and Balaji Lakshminarayanan
Abstract: A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flow models to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.
[link] Summary by ameroyer 2 years ago
CNN predictions are known to be very sensitive to adversarial examples, which are samples generated to be wrongly classified with high confidence. On the other hand, probabilistic generative models such as `PixelCNN` and `VAEs` learn a distribution over the input domain and hence could be used to detect ***out-of-distribution inputs***, e.g., by estimating their likelihood under the data distribution. This paper provides interesting results showing that distributions learned by generative models are not yet robust enough to be employed in this way.
* **Pros (+):** convincing experiments on multiple generative models, more detailed analysis in the invertible flow case, interesting negative results.
* **Cons (-):** It would be interesting to provide further results for different datasets / domain shifts to observe whether this property can be quantified as a characteristic of the model or of the input data.
## Experimental negative result
Three classes of generative models are considered in this paper:
* **Auto-regressive** models such as `PixelCNN` [1]
* **Latent variable** models, such as `VAEs` [2]
* Generative models with **invertible flows** [3], in particular `Glow` [4].
The authors train a generative model $G$ on input data $\mathcal X$ and then use it to evaluate the likelihood on both the training domain $\mathcal X$ and a different domain $\tilde{\mathcal X}$. Their main (negative) result is showing that **a model trained on the CIFAR-10 dataset yields a higher likelihood when evaluated on the SVHN test dataset than on the CIFAR-10 test (or even train) split**. Interestingly, the converse, when training on SVHN and evaluating on CIFAR, is not true.
This result was consistently observed for various architectures including [1], [2] and [4], although the effect is less pronounced in the `PixelCNN` case.
Intuitively, this could come from the fact that both of these datasets contain natural images and that CIFAR-10 is strictly more diverse than SVHN in terms of semantic content. Nonetheless, these datasets vastly differ in appearance, and this result is counter-intuitive as it goes against the idea that generative models can reliably be used to detect out-of-distribution samples. Furthermore, this observation also confirms the general idea that higher likelihood does not necessarily coincide with better generated samples [5].
## Further analysis for invertible flow models
The authors further study this phenomenon in the invertible flow models case as they provide a more rigorous analytical framework (exact likelihood inference unlike VAE which only provide a bound on the true likelihood).
More specifically invertible flow models are characterized with a ***diffeomorphism*** (invertible function), $f(x; \phi)$, between input space $\mathcal X$ and latent space $\mathcal Z$, and choice of the latent distribution $p(z; \psi)$. The ***change of variable formula*** links the density of $x$ and $z$ as follows:
$$\int_x p_x(x)\,dx = \int_x p_z(f(x)) \left| \det \frac{\partial f}{\partial x} \right| dx$$
And the training objective under this transformation becomes
$$\arg\max_{\theta} \log p_x(\mathbf{x}; \theta) = \arg\max_{\phi, \psi} \sum_i \left[ \log p_z(f(x_i; \phi); \psi) + \log \left| \det \frac{\partial f_{\phi}}{\partial x_i} \right| \right]$$
Typically, $p_z$ is chosen to be Gaussian, and samples are built by inverting $f$, i.e., $z \sim p(\mathbf z),\ x = f^{-1}(z)$. And $f_{\phi}$ is built such that the log-determinant of the Jacobian in the previous equation can be computed efficiently.
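A minimal sketch of this objective for a single elementwise affine layer, where the log-determinant is trivial (real flows such as Glow stack many such invertible layers):

```python
# Change-of-variables log-likelihood for an elementwise affine flow
# z = exp(log_a) * x + b, whose Jacobian is diagonal.
import numpy as np

def flow_log_prob(x, log_a, b):
    z = np.exp(log_a) * x + b                         # f(x): invertible by construction
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi)).sum()  # standard Gaussian prior on z
    log_det = log_a.sum()                             # log |det df/dx| (density vs. volume term)
    return log_pz + log_det

dim = 4
x = np.random.randn(dim)
print(flow_log_prob(x, log_a=np.zeros(dim), b=np.zeros(dim)))
```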
First, they observe that the contribution of the flow can be decomposed into a ***density*** element (left term) and a ***volume*** element (right term), resulting from the change of variables formula. Experimental results with Glow [4] show that the higher density on SVHN mostly comes from the ***volume element contribution***.
Secondly, they try to directly analyze the difference in likelihood between two domains $\mathcal X$ and $\tilde{\mathcal X}$; which can be done by a second-order expansion of the log-likelihood locally around the expectation of the distribution (assuming $\mathbb{E} (\mathcal X) \sim \mathbb{E}(\tilde{\mathcal X})$). For the constant volume Glow module, the resulting analytical formula indeed confirms that the log-likelihood of SVHN should be higher than CIFAR's, as observed in practice.
## References
* [1] Conditional Image Generation with PixelCNN Decoders, van den Oord et al, 2016
* [2] Auto-Encoding Variational Bayes, Kingma and Welling, 2013
* [3] Density estimation using Real NVP, Dinh et al., ICLR 2015
* [4] Glow: Generative Flow with Invertible 1x1 Convolutions, Kingma and Dhariwal
* [5] A Note on the Evaluation of Generative Models, Theis et al., ICLR 2016
Critic Regularized Regression
Ziyu Wang and Alexander Novikov and Konrad Zolna and Jost Tobias Springenberg and Scott Reed and Bobak Shahriari and Noah Siegel and Josh Merel and Caglar Gulcehre and Nicolas Heess and Nando de Freitas
Keywords: cs.LG, cs.AI, stat.ML
Abstract: Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
[link] Summary by CodyWild 1 month ago
Offline reinforcement learning is potentially a high-value capability for the machine learning community to learn to do well, because there are many applications where it'd be useful to generate a learnt policy for responding to a dynamic environment, but where it'd be too unsafe or expensive to learn in an on-policy or online way, where we continually evaluate our actions in the environment to test their value. In such settings, we'd like to be able to take a batch of existing data - collected from a human demonstrator, or from some other algorithm - and be able to learn a policy from those pre-collected transitions, without being able to query the environment further by taking arbitrary actions.
There are two broad strategies for learning a policy from precollected transitions. One is to simply learn to mimic the action policy used by the demonstrator, predicting the action the demonstrator would take in a given state, without making use of reward data at all. This is Behavioral Cloning, and has the advantage of being somewhat more conservative (in terms of not experimenting with possibly-unsafe-or-low-reward actions the demonstrator never took), but this is also a disadvantage, because it's not possible to get higher reward than the demonstrator themselves got if you're simply copying their behavior. Another approach is to learn a Q function - estimating the value of a given action in a given state - using the reward data from the precollected transitions. This can also have some downsides, mostly in the direction of overconfidence. Q value Temporal Difference learning works by using the current reward added to the max Q value over possible next actions as the target for the current-state Q estimate. This tends to lead to overestimates, because regression to the mean effects mean that the highest value Q estimates are disproportionately likely to be noisy (possibly because they correspond to an action with little data in the demonstrator dataset). In on-policy Q learning, this is less problematic, because the agent can take the action associated with their noisily inaccurate estimate, and as a result get more data for that action, and get an estimate that is less noisy in future. But when we're in a fully offline setting, all our learning is completed before we actually start taking actions with our policy, so taking high-uncertainty actions isn't a valuable source of new information, but just risky.
The approach suggested by this DeepMind paper - Critic Regularized Regression, or CRR - is essentially a synthesis of these two possible approaches. The method learns a Q function as normal, using temporal difference methods. The distinction in this method comes from how to get a policy, given a learned Q function. Rather than simply taking the action your Q estimate says is highest-value at a particular point, CRR optimizes a policy according to the formula shown below. The f() function is a stand-in for various potential functions, all of which are monotonic with respect to the Q function, meaning they increase when the Q function does.
https://i.imgur.com/jGmhYdd.png
This basically amounts to a form of behavioral cloning loss (the part that maximizes the probability, under your policy, of the actions sampled from the demonstrator dataset), but weighted or, as the paper terms it, filtered, by the learned Q function. The higher the estimated Q value for a transition, the more weight is placed on that transition from the demo dataset having high probability under your policy. Rather than trying to mimic all of the actions of the demonstrator, the policy preferentially tries to mimic the demonstrator actions that it estimates were particularly high-quality. Different f() functions lead to different kinds of filtration. The `binary` version is an indicator function for the Advantage of an action (the Q value for that action at that state minus some reference value for the state, describing how much better the action is than other alternatives at that state) being greater than zero. Another, `exp`, uses exponential weightings which do a more "soft" upweighting or downweighting of transitions based on advantage, rather than the sharp binary cut on whether an action's advantage is above zero.
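A sketch of the filtered objective under those two choices of f(), assuming `log_pi` and `advantage` come from the policy and critic networks (the clipping constant is an assumption for stability, not necessarily the authors' value):

```python
# CRR-style policy loss: behavioral cloning on demonstrator actions,
# filtered by the learned critic's advantage estimates.
import numpy as np

def crr_policy_loss(log_pi, advantage, mode="exp", beta=1.0):
    if mode == "binary":
        weight = (advantage > 0).astype(float)               # keep only improving actions
    else:  # "exp": soft weighting, uses every sample at some weight
        weight = np.minimum(np.exp(advantage / beta), 20.0)  # clip (assumed constant)
    return -(weight * log_pi).mean()                         # weighted negative log-likelihood

log_pi = np.log(np.array([0.5, 0.1, 0.8]))  # policy prob. of the demo actions
adv = np.array([1.2, -0.3, 0.4])            # Q(s,a) - V(s) estimates
print(crr_policy_loss(log_pi, adv, mode="binary"))
```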
The authors demonstrate that, on multiple environments from three different environment suites, CRR outperforms other off-policy baselines - either more pure behavioral cloning, or more pure RL - and in many cases does so quite dramatically. They find that the sharper binary weighting scheme does better on simpler tasks, since the trade-off of fewer but higher-quality samples to learn from works there. However, on more complex tasks, the policy benefits from the exp weighting, which still uses and learns from more samples (albeit at lower weights), which introduces some potential mimicking of lower-quality transitions, but at the trade of a larger effective dataset size to learn from.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
Shiyu Liang and Yixuan Li and R. Srikant
Keywords: cs.LG, stat.ML
Abstract: We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is 95%.
[link] Summary by David Stutz 1 year ago
Liang et al. propose a perturbation-based approach for detecting out-of-distribution examples using a network's confidence predictions. In particular, the approach is based on the observation that neural networks make more confident predictions on images from the original data distribution (in-distribution examples) than on examples taken from a different distribution, i.e., a different dataset (out-distribution examples). This effect can further be amplified by using a temperature-scaled softmax, i.e.,
$ S_i(x, T) = \frac{\exp(f_i(x)/T)}{\sum_{j = 1}^N \exp(f_j(x)/T)}$
where $f_i(x)$ are the predicted logits and $T$ a temperature parameter. Based on these softmax scores, perturbations $\tilde{x}$ are computed using
$\tilde{x} = x - \epsilon \text{sign}(-\nabla_x \log S_{\hat{y}}(x;T))$
where $\hat{y}$ is the predicted label of $x$. This is similar to "one-step" adversarial examples; however, instead of minimizing the confidence of the true label, the confidence in the predicted label is maximized. This perturbation, applied to in-distribution and out-distribution examples, is illustrated in Figure 1 and meant to emphasize the difference in confidence. Afterwards, in- and out-distribution examples can be distinguished by simple thresholding on the predicted confidence, as shown in various experiments, e.g., on Cifar10 and Cifar100.
https://i.imgur.com/OjDVZ0B.png
Figure 1: Illustration of the proposed perturbation to amplify the difference in confidence between in- and out-distribution examples.
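A sketch of the two ingredients, temperature scaling and input perturbation, assuming a PyTorch-style `model` returning logits; this is an illustration of the method, not the authors' code (the values of T and epsilon are of the magnitude explored in the paper):

```python
# ODIN-style scoring: temperature-scaled softmax plus a one-step
# input perturbation that increases confidence in the predicted label.
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x) / T, dim=1)   # temperature-scaled softmax
    loss = -log_probs.max(dim=1).values.sum()        # -log S_yhat(x; T)
    loss.backward()
    # x~ = x - eps * sign(-grad_x log S_yhat(x; T)), as in the formula above
    x_pert = x - eps * torch.sign(x.grad)
    with torch.no_grad():
        score = F.softmax(model(x_pert) / T, dim=1).max(dim=1).values
    return score  # threshold this: high score = in-distribution
```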
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
Meta-learners' learning dynamics are unlike learners'
Neil C. Rabinowitz
First published: 2019/05/03
Abstract: Meta-learning is a tool that allows us to build sample-efficient learning systems. Here we show that, once meta-trained, LSTM Meta-Learners aren't just faster learners than their sample-inefficient deep learning (DL) and reinforcement learning (RL) brethren, but that they actually pursue fundamentally different learning trajectories. We study their learning dynamics on three sets of structured tasks for which the corresponding learning dynamics of DL and RL systems have been previously described: linear regression (Saxe et al., 2013), nonlinear regression (Rahaman et al., 2018; Xu et al., 2018), and contextual bandits (Schaul et al., 2019). In each case, while sample-inefficient DL and RL Learners uncover the task structure in a staggered manner, meta-trained LSTM Meta-Learners uncover almost all task structure concurrently, congruent with the patterns expected from Bayes-optimal inference algorithms. This has implications for research areas wherever the learning behaviour itself is of interest, such as safety, curriculum design, and human-in-the-loop machine learning.
[link] Summary by CodyWild 1 year ago
Meta learning, or, the idea of training models on some distribution of tasks, with the hope that they can then learn more quickly on new tasks because they have "learned how to learn" similar tasks, has become a more central and popular research field in recent years. Although there is a veritable zoo of different techniques (to an amusingly literal degree; there's an emergent fad of naming new methods after animals), the general idea is: have your inner loop consist of training a model on some task drawn from a distribution over tasks (be that maze learning with different wall configurations, letter identification from different languages, etc), and have the outer loop that updates some structural part of your model be based on improving generalization error on each task within the distribution. It's been demonstrated that meta-learned systems can in fact learn more quickly (at least when their tasks are "in distribution" relative to the distribution they were trained on, which is an important point to be cognizant of), but this paper is less interested with how much better or faster they're learning, and more interested in whether there are qualitative differences in the way normal learning systems and meta-trained learning systems go about learning a new task.
The author (oddly for DeepMind, which typically goes in for super long author lists, there's only the one on this paper) goes about this by studying simple learning tasks where it's easier for us to introspect into what each model is learning over time.
https://i.imgur.com/ceycq46.png
In the first test, he looks at linear regression in a simple setting: for each individual "task", data is generated according to a known true weight matrix (sampled from a prior over weight matrices), with some noise added in. Given this weight matrix, he takes the singular value decomposition (think: PCA), and so ends up with a factorized representation of the weights, where higher eigenvalues on the factors, or "modes", represent factors that capture larger-scale patterns explaining more variance, and lower eigenvalues are smaller-scale refinements on top of that. He can apply this same procedure to the weights the network has learned at any given point in training, and compare, to see how close the network is to having correctly captured each of these different modes. When normal learners (starting from a raw initialization) approach the task, they start by matching the large-scale (higher-eigenvalue) factors of variation, and then over the course of training improve performance on the higher-precision factors. By contrast, meta learners, in addition to learning faster, also learn large-scale and small-scale modes at the same rate. Similar analysis was performed and similar results found for nonlinear regression, where instead of PCA-style components, the functions generating the data were decomposed into different Fourier frequencies, and the normal learner learned the broad, low-frequency patterns first, whereas the meta learner learned them all at the same rate.
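A tiny numpy sketch of this mode-tracking analysis (the projection scheme is one reading of the setup, not necessarily the author's exact metric):

```python
# Decompose the true weight matrix with SVD, then measure how much of
# each mode the learner's current weights have captured.
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(8, 8))
U, s, Vt = np.linalg.svd(W_true)             # s: eigenvalue-like mode strengths

def mode_recovery(W_learned):
    # Project learned weights onto the true modes; 1.0 = mode fully matched.
    coeffs = np.diag(U.T @ W_learned @ Vt.T)
    return coeffs / s

W_partial = 0.9 * W_true                     # stand-in for a mid-training snapshot
print(np.round(mode_recovery(W_partial), 2)) # a normal learner recovers large-s modes first
```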
The paper finds intuition for this by showing that the behavior of the meta learners matches quite well against how a Bayes-optimal learner would update on new data points, in a world where that learner had a prior over the data-generating weights that matched the true generating process. So, under this framing, the process of meta learning is roughly equivalent to the model learning a prior corresponding to the task distribution it was trained on. This is, at a high level, what I think we all sort of thought was happening with meta learning, but it's pretty neat to see it laid out in a small enough problem where we can actually validate against an analytic model.
A bit of a meta (heh) point: I wish this paper had more explanation of why the author chose to use the specific eigenvalue-focused metrics of progression on task learning that he did. They seem reasonable, but I'd have been curious to see an explication of what is captured by these, and what might be captured by alternative metrics of task progress.
(A side note: the paper also contained a reinforcement learning experiment, but I both understood that one less well and also feel like it wasn't really that analogous to the other tests)
On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations
Cheney, Nicholas and Schrimpf, Martin and Kreiman, Gabriel
arXiv e-Print archive - 2017 via Local Bibsonomy
Keywords: dblp
Cheney et al. study the robustness of deep neural networks, especially AlexNet, with regard to randomly dropping or perturbing weights. In particular, the authors consider three types of perturbations: synapse knockouts set random weights to zero, node knockouts set all weights corresponding to a set of neurons to zero, and weight perturbations add random Gaussian noise to the weights of a specific layer. These perturbations are studied on AlexNet, considering the top-5 accuracy on ImageNet; perturbations are applied per layer. For example, Figure 1 (left) shows the influence on accuracy when knocking out synapses. As can be seen, the lower layers, especially the first convolutional layer, are impacted significantly by these perturbations. Similar observations, Figure 1 (right), are made for random perturbations of weights, although the impact is less significant. Especially high-level features, i.e., the corresponding layers, seem to be robust to this kind of perturbation. The authors also provide evidence that these results extend to the top-1 accuracy, as well as to other architectures. For VGG, however, the impact is significantly less pronounced, which may also be due to the employed dropout layers.
https://i.imgur.com/78T6Gg2.png
Figure 1: Left: Influence of setting weights in the corresponding layers to zero. Right: Influence of randomly perturbing weights of specific layers. Experiments are on ImageNet using AlexNet.
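A sketch of the three perturbation types on a single weight matrix (a numpy stand-in, not the authors' setup):

```python
# Synapse knockout, node knockout, and Gaussian weight perturbation,
# as described in the summary above.
import numpy as np

def synapse_knockout(w, frac, rng):
    mask = rng.random(w.shape) >= frac               # zero a random fraction of weights
    return w * mask

def node_knockout(w, n_units, rng):
    cols = rng.choice(w.shape[1], size=n_units, replace=False)
    w = w.copy()
    w[:, cols] = 0.0                                 # silence whole units (output columns)
    return w

def weight_noise(w, sigma, rng):
    return w + rng.normal(0.0, sigma, size=w.shape)  # Gaussian perturbation

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))
print(synapse_knockout(w, 0.1, rng).astype(bool).mean())  # ~0.9 of weights survive
```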
Deep Networks with Stochastic Depth
Huang, Gao and Sun, Yu and Liu, Zhuang and Sedra, Daniel and Weinberger, Kilian
Keywords: deeplearning, acreuser
[link] Summary by Martin Thoma 4 years ago
**Dropout for layers** sums it up pretty well. The authors built on the idea of [deep residual networks](http://arxiv.org/abs/1512.03385) to use identity functions to skip layers.
The main advantages:
* Training speed-ups by about 25%
* Huge networks without overfitting
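A minimal sketch of a residual block with stochastic depth (survival probability and test-time scaling follow the idea described above; the transform `f` stands in for the block's conv layers):

```python
# During training, the block's transform is dropped with probability
# 1 - p_survive, leaving only the identity skip; at test time the
# transform is kept but scaled by its survival probability.
import numpy as np

def stochastic_depth_block(x, f, p_survive, training, rng):
    if training:
        if rng.random() < p_survive:
            return x + f(x)        # block active
        return x                   # block skipped: pure identity
    return x + p_survive * f(x)    # expected-value scaling at test time

rng = np.random.default_rng(0)
x = np.ones(4)
f = lambda v: 0.1 * v              # stand-in for a conv-BN-ReLU residual branch
print(stochastic_depth_block(x, f, p_survive=0.8, training=True, rng=rng))
```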
## Evaluation
* [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html): 4.91% error ([SotA](https://martin-thoma.com/sota/#image-classification): 2.72 %) Training Time: ~15h
* [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html): 24.58% ([SotA](https://martin-thoma.com/sota/#image-classification): 17.18 %) Training time: < 16h
* [SVHN](http://ufldl.stanford.edu/housenumbers/): 1.75% ([SotA](https://martin-thoma.com/sota/#image-classification): 1.59 %) - trained for 50 epochs, beginning with an LR of 0.1, divided by 10 after epochs 30 and 35. Training time: < 26h
Fast R-CNN
Girshick, Ross B.
International Conference on Computer Vision - 2015 via Local Bibsonomy
[link] Summary by Joseph Paul Cohen 4 years ago
This method is based on improving the speed of R-CNN \cite{conf/cvpr/GirshickDDM14}
1. Where R-CNN would have two different objective functions, Fast R-CNN combines localization and classification losses into a "multi-task loss" in order to speed up training.
2. It also uses a pooling method based on \cite{journals/pami/HeZR015} called the RoI pooling layer that scales the input so that images don't have to be rescaled before being fed as input to the CNN. "RoI max pooling works by dividing the $h \times w$ RoI window into an $H \times W$ grid of sub-windows of approximate size $h/H \times w/W$ and then max-pooling the values in each sub-window into the corresponding output grid cell." (See the sketch after this list.)
3. Backprop is performed for the RoI pooling layer by taking the argmax of the incoming gradients that overlap the incoming values.
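A minimal numpy sketch of RoI max pooling as quoted in point 2 (fixed H x W output for any RoI size; no interpolation or batching):

```python
# Divide an h x w RoI window into an H x W grid and max-pool each
# sub-window into one output cell.
import numpy as np

def roi_max_pool(feature_map, roi, H=2, W=2):
    y0, x0, y1, x1 = roi                          # RoI corners on the feature map
    window = feature_map[y0:y1, x0:x1]
    h, w = window.shape
    ys = np.linspace(0, h, H + 1).astype(int)     # sub-window boundaries (~h/H each)
    xs = np.linspace(0, w, W + 1).astype(int)
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = window[ys[i]:ys[i+1], xs[j]:xs[j+1]].max()
    return out

fm = np.arange(36.0).reshape(6, 6)
print(roi_max_pool(fm, roi=(0, 0, 6, 6)))         # fixed-size output regardless of RoI size
```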
This method is further improved by the paper "Faster R-CNN" \cite{conf/nips/RenHGS15}
Deep Residual Learning for Image Recognition
He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian
Deeper networks should never have a higher **training** error than smaller ones. In the worst case, the layers should "simply" learn identities. It seems that this is not so easy with conventional networks, as they get much worse with more layers. So the idea is to add identity functions which skip some layers. The network only has to learn the **residuals**.
* Learning the identity becomes learning 0 which is simpler
* Loss in information flow in the forward pass is not a problem anymore
* No vanishing / exploding gradient
* Identities don't have parameters to be learned
The learning rate starts at 0.1 and is divided by 10 when the error plateaus. Weight decay of 0.0001 ($10^{-4}$), momentum of 0.9. They use mini-batches of size 128.
* ImageNet ILSVRC 2015: 3.57% (ensemble)
* CIFAR-10: 6.43%
* MS COCO: 59.0% mAP@0.5 (ensemble)
* PASCAL VOC 2007: 85.6% mAP@0.5
## See also
* [DenseNets](http://www.shortscience.org/paper?bibtexKey=journals/corr/1608.06993)
Neural Ordinary Differential Equations
Ricky T. Q. Chen and Yulia Rubanova and Jesse Bettencourt and David Duvenaud
Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
[link] Summary by wassname 2 years ago
Summary by senior author [duvenaud on hackernews](https://news.ycombinator.com/item?id=18678078).
A few years ago, everyone switched their deep nets to "residual nets". Instead of building deep models like this:
h1 = f1(x)
h2 = f2(h1)
y = f5(h4)
They now build them like this:
h1 = f1(x) + x
h2 = f2(h1) + h1
y = f5(h4) + h4
Where f1, f2, etc are neural net layers. The idea is that it's easier to model a small change to an almost-correct answer than to output the whole improved answer at once.
In the last couple of years a few different groups noticed that this looks like a primitive ODE solver (Euler's method) that solves the trajectory of a system by just taking small steps in the direction of the system dynamics and adding them up. They used this connection to propose things like better training methods.
We just took this idea to its logical extreme: What if we _define_ a deep net as a continuously evolving system? So instead of updating the hidden units layer by layer, we define their derivative with respect to depth instead. We call this an ODE net.
Now, we can use off-the-shelf adaptive ODE solvers to compute the final state of these dynamics, and call that the output of the neural network. This has drawbacks (it's slower to train) but lots of advantages too: We can loosen the numerical tolerance of the solver to make our nets faster at test time. We can also handle continuous-time models a lot more naturally. It turns out that there is also a simpler version of the change of variables formula (for density modeling) when you move to continuous time. | CommonCrawl |
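A small sketch contrasting the two views, with a stand-in dynamics function handed either to hand-rolled Euler steps or to scipy's off-the-shelf solver (the paper uses adaptive solvers with adjoint-based backprop, which this omits):

```python
# Residual-net view vs. ODE-net view of the same dynamics.
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, h):
    return np.tanh(h)              # stand-in for a learned layer f(h, t; theta)

h0 = np.array([0.5, -1.0])

# Residual-net view: h_{k+1} = h_k + dt * f(h_k), i.e. Euler steps.
h = h0.copy()
for _ in range(10):
    h = h + 0.1 * dynamics(0.0, h)

# ODE-net view: let a solver integrate the same dynamics from t=0 to 1.
sol = solve_ivp(dynamics, (0.0, 1.0), h0)
print(h, sol.y[:, -1])             # the two endpoints are close
```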
Performance of extended end-plate bolted connections subjected to static and blast-like loads
Ahmed A. Osman (ORCID: orcid.org/0000-0003-3497-106X) and Sherif A. Mourad
In this study, numerical models were developed to predict the behavior of steel extended end-plate moment connections subjected to static and blast-like loading. Two types of extended end-plate connections were considered, stiffened, and unstiffened, with pretensioned bolts. The models were verified by comparing the results with published experimental data. The models were used to compute the moment-rotation curves for the connection under static loading, and then under different blast durations. The pressure impulse diagram and the energy dissipation for the connection under dynamic loading were determined. The failure modes were examined, and the numerical results were compared with the simplified models presented in codes and standards. Improvement in the performance of the connection by adding one or two stiffeners was demonstrated. For the configuration studied, introducing a stiffener increased plastic dissipation energy for blast loading by 45% compared to the unstiffened connection, whereas under static loading, the plastic energy dissipation for stiffened connection, SC2, was higher than the unstiffened connection by 30%. A conservative estimate for the dynamic increase factor (DIF) was found to be 1.2 for steel yield stress and 1.05 for bolt failure.
Many researchers have investigated the behavior of end-plate connections under static, cyclic, or seismic loading. Examination of the dynamic behavior of steel connections was motivated by the interest in the seismic performance of steel frames (Popov et al. [1]). Many provisions are currently included in design codes based on experience from damage during earlier earthquakes [2]. However, the behavior of steel moment-resisting connections under blast loading still needs further attention.
Krauthammer [3] conducted a series of numerical studies on structural concrete and structural steel connection subjected to blast loads, in order to understand the effect of structural details on their behavior. Sabuwala et al. [4] presented the behavior of welded steel moment connections under blast loads using the finite element software ABAQUS [5]. The research investigated the behavior of these connections analytically under blast loading and suggested modifications to TM5-1300 [6].
Yim et al. [7] provided a study showing the load-impulse characterization of steel connections under blast loading. Lee et al. [8] presented an analytical approach to understand the nature of the blast wave and the complex interaction between blast loading and steel column behavior. Hadianfard et al. [9] studied the effect of the shape of column sections and of boundary conditions on the behavior and failure of steel columns under blast load. The study identified the importance of the elastic-plastic properties of sections and proposed criteria for choosing the best section and boundary conditions for columns to resist blast loading.
Grimsmo et al. [10] conducted an experiment to investigate the end-plate moment connection behavior under dynamic load where a trolley hits the connection. The research provided a global overview for the end-plate connections under dynamic loading through studying contact forces between the impact plate and the trolley, the velocity of the impacted column after applying the initial velocity, and the displacement of the connection under the impact test.
Yang et al. [11] provided numerical simulations of rigid steel beam-column joints under impact loads. On the simulated connections, the beam flanges are directly welded to the column and the shear force is transmitted through the fin plate from the beam web to the column flanges. The study provided practical recommendations for the design of welded steel joints under impact loading in accordance with the parameters studied.
Most of the previously mentioned studies focused on one type of moment connection (beam flanges directly welded to the columns, with a fin plate used to support the beam web). The available research on other moment connection types is limited to static and seismic loading, which confirms that the blast loading problem of steel end-plate moment connections is yet to be thoroughly investigated.
This research investigates numerically the performance of steel extended end-plate moment connections subjected to static and blast-like loading. Some of the connections represented in the literature [10, 12] are modeled using finite element software ABAQUS [5] to verify the results of numerical models against those of the experimental tests. These models are then used to apply blast loading to investigate the behavior of extended end-plate moment connections under blast loading. The models are used to compute the moment-rotation curves for the connection under different blast durations. The pressure impulse diagram and the energy dissipation for both static and dynamic loading are determined. The failure modes are examined, and the numerical results are also compared with the simplified models presented in AISC design guide No. 26 [13] and UFC340-02 [14].
Models and verification
Numerical model and static load validation
The connections studied by Shi et al. [12] were chosen to validate the finite element model under static loading as detailed below.
Model geometry
The dimensions of columns and beams of the connections in the finite element model were identical to those used by Shi et al. [12] and shown in Table 1. The typical connection prototype model is shown in Fig. 1, and details of two of the connections are shown in Table 2. A gradual load was applied to the tip of the beam up to failure to plot the moment rotation curve and other properties of the connections.
Table 1 Cross-section dimensions of beams and columns Shi et al. [12]
Connection prototype model including SC2 and SC3 tested by Shi et al. [12], and the new connection DR1 proposed by the authors
Table 2 Types and details of 2 of the 8 connections studied by Shi et al. [12]
Selected elements
Connections SC2 and SC3—two connections from the eight connections studied by Shi et al. [12]—were modeled for validation of the finite element model under static loading. The finite element program ABAQUS [5] was used for the modeling.
The nut and bolt head were considered as a single body together with the bolt shank. The threaded part of the bolt shank and the extended length of the bolt beyond each nut were ignored. The hexagonal shape of the bolt head and nut was replaced with a cylinder. The typical bolted joint is presented in Fig. 1. All plates, beams, and columns were modeled using 8-node first-order linear hexahedral (brick) elements with reduced integration (C3D8R). The 4-node linear tetrahedron elements (C3D4) were used for the bolts. Mesh sensitivity analysis was performed to obtain optimum mesh sizes for bolts, plates, and beams to achieve accurate results within an efficient running time. The threaded bolt diameter was considered as 18.65 mm (i.e., the threaded area equals 0.75 of the nominal bolt area). The details of the finite element meshing are shown in Figs. 2 and 3. The finite-sliding, surface-to-surface method was considered for all contacts. The tangential contact behavior between the plate elements was modeled using the penalty friction option with a friction coefficient value of 0.44 [12]. Hard contact was used for the connection between bolt-head/nut and plate elements to prevent penetration between steel surfaces. Since the bolts are more rigid than the hot-rolled sections, they were considered as master in the contact pair formulations.
End-plate moment connection simulation (SC2)
Finite element model of end-plate and bolt simulation (mesh size 5 mm)
Two models were created; one to input the bolt pretension and the second for applying displacement loading up to failure of connections. The whole procedure may be summarized in the following three steps:
Step 1. Bolt preloading/activating the contact elements.
Step 2: Fixing the bolt length.
Step 3: Applying the external load.
The first model performed the first two steps related to the bolt pretension. Then the results of the first model were imported into a second model through the predefined field option built into the ABAQUS software [5]. During the analysis of the first model for step 1, fictitious supports were added to the mid-surface of the bolt, and the bolt pretension force was applied to it. To allow the program to run and sense the contacts, these supports were deactivated at the beginning of step 2. This approach helped to eliminate the singularity errors, and the program continued the run with results corrected at the second step (i.e., fixing the bolt length). The extended end-plate connection consists of a steel H-shaped beam and column (dimensions shown in Table 1), high strength pretensioned bolts (grade 10.9), an end-plate (thickness 20 mm), and a column stiffener (12 mm). The specimens were fabricated using Q345B steel. Bolt diameter and bolt pretension values relevant to each test are shown in Table 2. A gradually increasing load was applied at the beam tip at a distance of 1200 mm from the column face. The thickness of the column flange at the interface with the end-plate connection was enlarged 100 mm above and below the extension of the end-plate to have the same thickness as the end-plate. Yield stress and ultimate strength values of the steel plates thicker than 16 mm are 363 and 537 MPa, respectively. Young's modulus was taken as 204,227 MPa and Poisson's ratio as 0.3. The stress-strain relationships for plates, beams, and columns as well as the high strength bolts were considered trilinear (Fig. 4).
Trilinear stress-strain curve for steel plates with thickness more than 16 mm and high strength bolts [13]
Fracture modeling
The failure criteria applied for all models was based on combining both shear and ductile failure of the elements [11].
Fracture models are defined by correlating tri-axial stress to fracture strain. In the models, the parameters given by ABAQUS are normalized by the material fracture strain so that steel with various fracture strains can be modeled. These parameters are then calibrated by existing experimental tests. It should be noted that the effects of strain rate and temperature are not considered. Due to possible ductile or shear failure of steel joints under impact loads, two formulae are derived, as expressed in Eqs. (1) and (2), which establish the relationship of triaxial stress and the normalized fracture plastic strain for ductile and shear failures [11].
$$ \varepsilon_{\mathrm{fd}}/\varepsilon_{\mathrm{u}} = \begin{cases} 1.13 & \text{for } T^{\sigma} \le -1/3 \\ 0.04 + 0.86\exp\left(-0.7\,T^{\sigma}\right) & \text{for } -1/3 < T^{\sigma} \le 10/3 \\ 0.12 & \text{for } T^{\sigma} > 10/3 \end{cases} \tag{1} $$

$$ \varepsilon_{\mathrm{fs}}/\varepsilon_{\mathrm{u}} = \begin{cases} 0.43 & \text{for } T^{\sigma} \le 5/3 \\ 0.38 + 0.40\exp\left(6.69\left(T^{\sigma} - 2\right)\right) & \text{for } 5/3 < T^{\sigma} \le 2 \\ 0.78 & \text{for } T^{\sigma} > 2 \end{cases} \tag{2} $$
εu is the ultimate plastic strain of steel;
εfd is the initial fracture plastic strain for ductile failure;
εfs is the initial fracture plastic strain for shear failure;
Tσ is the stress triaxiality.
Figure 5 shows the relationships between normalized strain and stress triaxiality. In the case of ductile failure, the normalized fracture strain is 1.13 when the stress triaxiality is less than − 1/3 and 0.12 when the stress triaxiality is greater than 10/3. A nonlinear reduction of the strain can be observed for stress triaxiality between − 1/3 and 10/3. For shear failure, however, the fracture strain increases from 0.43 to 0.78. Therefore, a critical value of 1.17 exists in the fracture strain-stress triaxiality curves. When the stress triaxiality is less than 1.17, shear failure is dominant over ductile failure; otherwise, ductile failure occurs prior to shear failure. Based on Fig. 5, the fracture models are set up in ABAQUS and employed in the numerical simulations.
Fracture model with stress triaxiality for steel material [12]
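A direct transcription of Eqs. (1) and (2) in plain Python (no ABAQUS), useful for checking values such as the crossover triaxiality of about 1.17 mentioned above:

```python
# Normalized fracture strain vs. stress triaxiality, Eqs. (1) and (2).
import math

def ductile_ratio(T):      # eps_fd / eps_u, Eq. (1)
    if T <= -1/3:
        return 1.13
    if T <= 10/3:
        return 0.04 + 0.86 * math.exp(-0.7 * T)
    return 0.12

def shear_ratio(T):        # eps_fs / eps_u, Eq. (2)
    if T <= 5/3:
        return 0.43
    if T <= 2:
        return 0.38 + 0.40 * math.exp(6.69 * (T - 2))
    return 0.78

T = 1.17
print(ductile_ratio(T), shear_ratio(T))  # nearly equal: the two curves cross here
```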
The main parameter required to calculate the fracture initiation strain in the ABAQUS input is the ultimate failure strain. The bolt failure strain is taken equal to 0.2 and that of the steel plates equal to 0.455 (engineering elastic plus plastic strain) [10]. Grimsmo et al. [10] did not provide the failure strain values directly but provided the fracture initiation strain values for bolts and plates as 0.07 and 0.16, respectively; by substituting in Eq. (1), considering a triaxiality for direct tension tests equal to − 1/3 [15], one may calculate the failure strain. The failure strain of steel plates calculated from Grimsmo et al. [10] was used in the FEM modeling of the Shi et al. [12] tests, as they involve a similarly high grade of steel and bolts and no fracture analysis data was available. When an element reaches the fracture strain as defined by Yang et al. [11], the program removes the element and redistributes stresses among the remaining adjacent elements up to complete failure of the component. Weld modeling was not considered, as no failure in the welds was observed; all welds were full penetration and stronger than the steel material itself.
Mesh sensitivity and validation
In order to choose a suitable size for the finite element mesh, connection SC2 was modeled with 4 mesh sizes for both bolts and end-plate. The chosen sizes were 10, 7.5, 5, and 2.5 mm. Figure 6 shows the moment-rotation curves of the experimental test versus the numerical models using the four different mesh sizes. The coarse meshes of size 10 mm and 7.5 mm were unable to predict the ultimate rotation and ultimate moment accurately. Meanwhile, the fine meshes of 2.5 and 5 mm predicted the ultimate rotation and moment accurately. The difference between the 2.5 and 5 mm results was not significant (i.e., the two curves almost coincide); hence, the 5 mm mesh was used throughout all numerical models of this paper to achieve the required accuracy within a reasonable running time (Fig. 6).
Mesh sensitivity for M-Ø curve for connection SC2
Dynamic problem validation
An experiment was conducted [10] to investigate the behavior of an end-plate moment connection under impact load. The published results of this experiment were used to validate the FEM results.
Connection geometry
Figure 7 shows the dimensions of the tested connection. It consisted of a column section of HEB220 fixed to two beams of HEA180 with end-plates of 12 mm thickness. Meanwhile, the bolts were M16 arranged as shown in Fig. 8.
Elevation view for the connection [10]
End-plate dimensions and bolts (M16) arrangement [10]
Material modeling
The steel material for members and plates was input to the FEM using a trilinear elastic-plastic relationship with a yield stress value of 413.7 MPa for the end-plate (Fig. 9). A trilinear relationship was likewise considered for the bolts.
Trilinear stress-strain curve for steel plates and high strength bolts [16]
Finite element model
A trolley was used to hit the impact plate at a speed of up to 12 m/s. The weight of the trolley used in the analysis equals the actual weight of 727 kg, and the impact plate was welded firmly to the bottom of the column [16]. The trolley was modeled as a concentrated mass of 727 kg, and the velocity of 12 m/s was applied to this point.
An ABAQUS model was created to validate the results of the numerical dynamic analysis against the published test results [10]. All plates, beams, and columns were modeled using 8-node first-order linear hexahedral (brick) elements with reduced integration (C3D8R) with a fine mesh of 5 mm for the end-plates; meanwhile, the bolts were modeled as tetrahedron elements (C3D4) with a mesh size of 5 mm. The finite element model is shown in Fig. 10, whereas a close-up of the meshing of the bolts and nuts is shown in Fig. 11. The finite-sliding, surface-to-surface method was considered for all contacts. The surface contact properties between the plate elements were modeled using the penalty friction option with a friction coefficient value of 0.2 for all contact surfaces [16]. Hard contact was used for the connection between bolt-head/nut and plate elements to prevent penetration between steel surfaces. The bolts are more rigid than the hot-rolled sections and thus were considered as master in the contact pair formulations. Dynamic bolt pretension was modeled using the same procedure explained previously. A bolt tightening value of 80 N m was applied to all bolts in this model. For more details on the experiments, refer to [10].
Finite element model and boundary conditions
Bolt and nut meshing/model meshing
The fracture model explained earlier was considered in this analysis. Calculated values of engineering failure strain of 0.455 and 0.185 were based on published fracture initiation values of 0.16 and 0.07 for the steel plates and bolts, respectively. The triaxiality value for direct tension tests was considered equal to − 1/3 when substituting in Eq. (1). Moreover, no fracture modeling was considered for the weld, as no failure was expected there. The effect of the dynamic load was incorporated in the material model by applying the strain rate effect using the Cowper-Symonds model [2], which is built into the ABAQUS code.
Comparison of results
Figure 12 presents a comparison of the force time history measured from the test [10] and that from the finite element analysis, whereas Fig. 13 presents the trolley velocity time history comparison. Figures 12 and 13 show close agreement between the validated FEM model and the published test results. The percentage of error in estimating the maximum force between the impact plate and the nose of the trolley was 3%. Moreover, Figs. 14 and 15 show that the model was able to predict the mode of failure and deformation.
Finite element vs. experimental testing for the force-column displacement relationship
Comparison between finite element and experimental testing for the velocity of trolley versus time
Deformation of the joint region immediately prior to fracture in simulation
Deformation of the joint region immediately prior to fracture in the test by [10]
Results and discussions
Model geometry and problem description
The behavior of the extended end-plate connection is investigated under static and blast-like loads. Two connections that were studied earlier by Shi et al. [12] were modeled: SC2 and SC3. A third connection, DR1, proposed by the authors and having two (double) rib stiffeners, was also examined (Fig. 1d).
For blast loading, the problem of internal fully vented blast loading is studied. Only blast pressure is considered in the analysis. The modeling procedure outlined earlier on static loading tests [12] was used in this stage, except that the applied load was blast loading.
Material and finite element modeling
In order to be able to apply the pretension on the connection subjected to dynamic loading, the approach presented by Krolo et al. [17] was used and may be summarized in the three steps mentioned in the "Selected elements" section.
Three models were investigated: SC2, SC3, and DR1. The effect of the dynamic load was incorporated in the material model by applying the strain rate effect using the Cowper-Symonds model [2], which is built into the ABAQUS code. The Cowper-Symonds equation is as follows:
$$ \frac{\sigma_d}{\sigma_o}=1+{\left(\frac{\varepsilon_o}{D}\right)}^{1/q} $$
where σd is the dynamic yield strength calculated as a function of the instantaneous strain rate, σ0 is the nominal static yield strength, ε0 is the instantaneous strain rate, and D and q are material constants, selected as D = 40.4 and q = 5 for steel materials [2]. The dynamic increase factor was calculated automatically by the program for each dynamic analysis case.
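A direct transcription of the Cowper-Symonds relation above with the constants used in this study (D = 40.4 1/s, q = 5 for mild steel); the strain rates shown are illustrative:

```python
# DIF = sigma_d / sigma_0 = 1 + (strain_rate / D) ** (1 / q)
def cowper_symonds_dif(strain_rate, D=40.4, q=5.0):
    return 1.0 + (strain_rate / D) ** (1.0 / q)

for rate in (0.1, 1.0, 10.0, 100.0):   # illustrative strain rates, 1/s
    print(f"rate={rate:>6} 1/s  DIF={cowper_symonds_dif(rate):.2f}")
```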
The same connection details used in the validation stage were used in this stage (i.e., blast loading), except that the beam and column lengths were updated to 3 m. The dynamic pressure was applied to the inner faces of both the beam and the column. A fine mesh of 5 mm was used for both end-plate and bolts to ensure accuracy, based on the mesh sensitivity analysis. The same fracture modeling of the "Fracture modeling" section was also used, and the same material stress-strain curves are shown in Figs. 4 and 5. The blast pressure was applied to the interior flanges of both the beam and the column for a given load duration, with the peak pressure increased until failure of the connection. Thus, for each blast duration, one obtains the pressure value that causes failure of the connection. Multiplying half the pressure value by the corresponding duration gives the impulse, allowing the pressure-impulse diagram to be plotted for each connection. The pressure-impulse diagram gives the maximum pressure value corresponding to the maximum impulse, which is defined as the area under the curve of the applied dynamic pressure for a specific explosion period. Points below the curve are considered safe; meanwhile, points above the curve are considered unsafe. Full details on the modeling may be found in Ref. [18].
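As a sketch of how each point of the pressure-impulse diagram is assembled from a triangular pulse (the pressure values below are illustrative placeholders, not results from the analysis):

```python
# For a triangular pressure pulse of duration td and peak P, the impulse
# is the area under the pressure-time curve, i.e. 0.5 * P * td.
durations = [0.005, 0.010, 0.015, 0.020]           # s, the durations analyzed below
failure_pressures = [4.0e6, 2.2e6, 1.6e6, 1.3e6]   # Pa, illustrative placeholders only

for td, p in zip(durations, failure_pressures):
    impulse = 0.5 * p * td                         # Pa*s
    print(f"td={td:.3f} s  P={p:.2e} Pa  I={impulse:.1f} Pa*s")
```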
Static load results
The steel design guide No. 16 [19] proposes equations that may be used to obtain numerical values for connection nominal resistance as follows (Fig. 16):
Geometry, yield line, and bolt force model [19]
End-plate yield equations
$$ M_n = M_p = F_{py}\, t_p^2\, Y \tag{4} $$
$$ Y = \frac{b_p}{2}\left[h_1\left(\frac{1}{p_{f,i}}+\frac{1}{s}\right)+h_0\left(\frac{1}{p_{f,o}}\right)-\frac{1}{2}\right]+\frac{2}{g}\left[h_1\left(p_{f,i}+s\right)\right] \tag{5} $$

$$ s=\frac{1}{2}\sqrt{b_p\, g} \tag{6} $$
Mn: Nominal moment resistance of connection
Mp: Nominal moment resistance due to end-plate yielding.
Fpy: Yield stress of steel.
The remaining parameters in Eqs. (5) and (6) are geometric parameters defined in Fig. 16.
Bolt rupture model
$$ M_n = M_{nb} = 2 P_t \left(h_0 + h_1\right) \tag{7} $$
Mnb: Nominal moment resistance due to bolt rupture.
Pt: Ultimate bolt tensile strength.
h0: Distance from compression flange centerline to the uppermost two bolts
h1: Distance from compression flange centerline to the second row of two bolts.
The failure of connection SC3 was due to end-plate yielding, and Mn was computed as 296.5 kN m. Meanwhile, for connections SC2 and DR1, bolt failure governed, and Mnb was found to be 348 kN m for SC2. Although the design guide does not provide a formula for the double rib stiffener connection (DR1), its capacity may be assumed equal to that of SC2, since both were governed by bolt failure.
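A sketch of the Design Guide 16 checks in Eqs. (4) to (7), with all geometry values hypothetical rather than those of the tested connections:

```python
# End-plate yield moment and bolt rupture moment per AISC Design Guide 16;
# the governing (smaller) mode controls the connection capacity.
import math

def end_plate_yield_moment(F_py, t_p, b_p, g, h0, h1, pfi, pfo):
    s = 0.5 * math.sqrt(b_p * g)                                   # Eq. (6)
    Y = (b_p / 2) * (h1 * (1 / pfi + 1 / s) + h0 / pfo - 0.5) \
        + (2 / g) * (h1 * (pfi + s))                               # Eq. (5)
    return F_py * t_p**2 * Y                                       # Eq. (4), N*mm

def bolt_rupture_moment(P_t, h0, h1):
    return 2 * P_t * (h0 + h1)                                     # Eq. (7), N*mm

# Hypothetical geometry (mm) and strengths (N, MPa), for illustration only.
Mp = end_plate_yield_moment(F_py=363, t_p=20, b_p=200, g=100,
                            h0=420, h1=340, pfi=45, pfo=45)
Mnb = bolt_rupture_moment(P_t=190e3, h0=420, h1=340)
print(min(Mp, Mnb) / 1e6, "kN*m governs")
```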
Figure 17 shows the strain distribution at ultimate load for connection SC3. Values indicated are the equivalent plastic strain of steel computed by ABAQUS. The high values shown in Fig. 17 indicate end-plate yielding. Figure 18 shows the strain distribution for SC2; the values of equivalent plastic strain for SC2 are much lower than for SC3. A closer look at the bolt strain (Fig. 19) showed excessive strain, which confirms the failure of the bolts. Table 3 shows the ultimate load and ultimate moment values obtained from the analysis model versus the published test values. Meanwhile, Fig. 20 shows the mode of connection failure for DR1 under static load and dynamic blast load (i.e., bolt rupture). The calculated moments shown in Table 3 are based on the equations given in AISC design guide 16 [19].
Strain distribution for connection SC3 at failure
Strain distribution for bolt for connection SC2
Table 3 Comparison of loading capacities between FEA and tests
Stress distribution for connection DR1 at failure
Blast-like load results
Figure 21 shows the model geometry and the location of blast load application. Typically, blast loading may be modeled using one of three alternatives: Arbitrary Lagrangian Eulerian (ALE), Load Blast Enhanced (LBE), and pressure-time history methods. Earlier research [20] concluded that the pressure-time history method can predict the displacement response due to blast loading with sufficient accuracy compared to the other two techniques, while providing substantial savings in computational time. Thus, in this research, the pressure-time history analysis technique was adopted, also to be consistent with the analysis methods of Design Guide No. 26 [13] and UFC 3-340-02 [14].
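For illustration, a pressure-time history of this kind reduces to a simple amplitude table; the sketch below tabulates a linearly decaying pulse with placeholder peak pressure and duration (the actual values would follow from the charge size and standoff):

```python
# Sketch: tabulate a linearly decaying (triangular) pressure-time history,
# the kind of amplitude table a pressure-time history analysis consumes.
# Peak pressure and duration are placeholder values, not design values.
peak_pressure = 1.0e6   # [Pa]
t_d = 0.005             # blast duration [s]
n = 11                  # number of table points

amplitude_table = [
    (i * t_d / (n - 1), peak_pressure * (1.0 - i / (n - 1)))
    for i in range(n)
]
for t, p in amplitude_table:
    print(f"t = {t:.4f} s, p = {p:12.1f} Pa")
```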
Blast pressure application on both column and beam inside flanges
Figure 22 shows the different blast durations and the corresponding pressure values for connection SC2. The same durations are used for SC3 and DR1 with different pressure values. The problem examined is an explosion inside a fully vented room of dimensions 6.0 × 6.0 × 6.0 m. Only half of the frame was considered in the analysis due to symmetry. For a fully vented room (i.e., one wall missing), the gas pressure may be neglected as per UFC 3-340-02 [14].
Blast pressure versus blast duration for connection SC2
It is observed that, for all blast durations considered, failure of the unstiffened connection SC3 was limited to end-plate yielding (Fig. 23), whereas for the stiffened connections SC2 and DR1 the lowermost four bolts failed, as shown in Fig. 24.
Finite element model of the SC3 connection, showing the connection failure mode (end-plate yielding)
Connection SC2 failure: rupture of the lowermost four bolts, shown for blast duration td = 0.005 s as an example
Figures 25, 26, and 27 show the moment-rotation time histories for all blast durations used in this research for connections SC2, SC3, and DR1. The blast durations applied to the models of connections SC2, SC3, and DR1 are 0.005, 0.01, 0.015, and 0.02 s. Comparing the figures, it is clear that connection SC3 (without the additional end-plate rib stiffener) has a higher rotational capacity.
Connection SC2—summary of the moment-rotation curve for all blast durations
Connection DR1—summary of the moment-rotation curve for all blast durations
A separate model was created for each blast duration, and the peak pressure was increased gradually until the pressure value causing connection failure was obtained. The resulting pressure-impulse diagrams for the three connections are shown in Fig. 28. Comparing SC2 and SC3 shows that connection SC2 has higher pressure resistance for the same blast duration; the increase is about 4%. Meanwhile, comparing SC3 with DR1 indicates that DR1 has about 6% higher pressure resistance.
Linear load-impulse diagram for connections SC2, SC3, and DR1
Plastic energy dissipation for connections SC2, SC3, and DR1 under blast loading (blast duration 0.02 s)
UFC 3-340-02 proposes a dynamic increase factor (DIF) of 1.05 for failure governed by ultimate stress (bolt failure), whereas the value increases to a range of 1.2 to 1.3 for failure governed by yielding (steel plate failure).
The dynamic yield stress is calculated as follows:
$$ F_{ds} = \mathrm{DIF} \times F_{ys} $$
Fds: Dynamic yield stress
Fys: Static yield stress
DIF: Dynamic increase factor
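As a worked (hypothetical) example, for a static yield stress of 355 MPa and a DIF of 1.2:

$$ F_{ds} = 1.2 \times 355\ \text{MPa} = 426\ \text{MPa} $$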
Applying this concept to connections SC2 and SC3, it is noted that using DIF = 1.05 provides a lower bound and underestimates the moment capacity of the connection under blast-like loading by 16 to 33%, whereas using a DIF of 1.41 provides an upper bound for the connection capacity (Table 4).
Table 4 Comparison of ultimate moments obtained by the numerical models vs. values calculated by UFC 3-340-02 and Design Guides No. 16 and 26, in addition to the DIF
Plastic energy dissipation for connections SC2, SC3, and DR1 under static loading
For connection SC3, the rotation capacity under dynamic loading was 14% higher than under static loading. Meanwhile, the rotation capacities of connections SC2 and DR1 under dynamic load were higher than under static loading by 22% and 23%, respectively. Hence, under dynamic loading, the connections show more ductile behavior compared to static loading conditions.
The plastic energy dissipation (Ep) curves are obtained directly as ABAQUS software output, computed according to the following equation:
$$ E_p = \int_0^t \left[ \int_V \sigma^{c}\, \dot{\varepsilon}^{pl}\, dV \right] dt $$
Ep: Plastic energy dissipation
σc: Undamaged stress
ε̇pl: Plastic strain rate
V: Volume
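For readers post-processing solver output themselves, the double integral reduces to numerical quadrature of the volume-summed dissipation rate over time; the sketch below uses made-up history arrays purely for illustration:

```python
import numpy as np

# Sketch: approximate E_p by trapezoidal integration of the volume-summed
# plastic dissipation rate over time. The arrays are made-up placeholders
# standing in for solver history output.
time = np.linspace(0.0, 0.02, 21)                        # [s]
volume_rate = 1.0e3 * np.sin(np.pi * time / 0.02) ** 2   # integral over V of
                                                         # sigma^c * eps_pl rate, [W]

# Trapezoidal rule over the time axis:
E_p = float(np.sum(0.5 * (volume_rate[1:] + volume_rate[:-1]) * np.diff(time)))
print(f"E_p ~ {E_p:.2f} J")
```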
Figure 29 shows that, under blast loading, the stiffened connection had 45% higher plastic energy dissipation than the unstiffened connection. Under static loading, the plastic energy dissipation of the stiffened connection was higher than that of the unstiffened connection by 30% to 37% (Fig. 30). For the stiffened connection, the plastic energy dissipation under blast load is 6.54 times that under static load (Figs. 29 and 30).
For the unstiffened connection, the plastic energy dissipation under blast loading is 5.95 times that under static load. The dynamic increase factor for the stiffened connections (i.e., governed by bolt rupture) was found to range from 1.05 to 1.37. Meanwhile, for the unstiffened connection (i.e., governed by end-plate yielding), it was found to range from 1.19 to 1.41.
The DIF range of 1.2 to 1.3 proposed by UFC 3-340-02 for yielding of steel plates is a good estimate, although 1.3 may be non-conservative in some cases. Also, considering a DIF equal to 1.05 for ultimate failure of bolts is generally acceptable, as shown in Table 4.
This paper describes the development of finite element models to simulate the behavior of end-plate connections under both static and blast loading. The numerical results are compared to experimental results [10, 12]. After verification of the model, blast loads are applied with durations ranging from 0.005 to 0.02 s. The work provides pressure-impulse diagrams for end-plate connections, which may be used as a guide to improve UFC 3-340-02 [14] and provide better insight for the design of end-plate connections under blast loading. Moment-rotation diagrams for different blast durations are also provided.
The research compares the performance of an unstiffened end-plate connection with two types of stiffened connections: one has a rib stiffener welded to the middle of the end-plate, and the other has two stiffeners welded at the edges.
Based on the analyses, the following conclusions were reached:
The rotation capacity of a given connection under dynamic loading was higher than its rotation capacity under static loading (by 14% for the unstiffened connection SC3).
It was observed that the stiffened connection SC2 had 45% higher plastic energy dissipation than the unstiffened connection under blast loading. However, under static loading, the plastic energy dissipation of the stiffened connection SC2 was higher than that of the unstiffened connection by a range of 30% to 37%.
A conservative estimate for the dynamic increase factor (DIF) was found to be 1.2 for steel yield stress, and 1.05 for bolt failure.
The dynamic rotation capacities were higher than static ones, and the connections under blast load showed better ductile behavior and higher energy dissipation than under static loading.
The presence of additional end-plate rib stiffeners improved the maximum pressure that can be sustained by the connection considering the same blast duration.
The rotation capacity of the unstiffened connection was higher than that of the stiffened connections.
The datasets generated and/or analyzed during the current study are available in the 4shared repository: https://www.4shared.com/rar/2Z3Hh8Mzea/new_work.html
AISC: American Institute of Steel Construction
DIF: Dynamic increase factor
UFC: Unified Facilities Criteria
Popov EP, Tsai K-C, Engelhart MD (1989) On seismic steel joints and connections. Eng Struct. 11(3):148–162. https://doi.org/10.1016/0141-0296(89)90003-5
D'Aniello M, Tartaglia R, Costanzo S, Landolfo R (2017) Seismic design of extended stiffened end-plate joints in the framework of Eurocodes. J Constructional Steel Res 128:512–527
Krauthammer T (1999) Blast-resistant structural concrete and steel connections. Int J Impact Eng 22:887–910. https://doi.org/10.1016/S0734-743X(99)00009-3
Sabuwala T, Linzell D, Krauthammer T (2005) Finite element analysis of steel beam to column connections subjected to blast loads. Int J Impact Eng 31:861–876. https://doi.org/10.1016/j.ijimpeng.2004.04.013
ABAQUS version 2014, finite element analysis software by Simulia, Dassault Systèmes
TM 5-1300 (1990) Structures to resist the effects of accidental explosions. Department of the Army, Washington, DC
Yim CY, Krauthammer T (2009) Load–impulse characterization for steel connection. Int J Impact Eng 36:737–745. https://doi.org/10.1016/j.ijimpeng.2008.09.005
Lee K, Kim T, Kim J (2009) Local response of W-shaped steel columns under blast loading. J Struct Eng Mech 31(1):25–38. https://doi.org/10.12989/sem.2009.31.1.025
Hadianfard MA, Farahani A, Jahromi AB (2012) On the effect of steel columns cross sectional properties on the behaviors when subjected to blast loading. J Struct Eng Mech 44(4):449–463. https://doi.org/10.12989/sem.2012.44.4.449
Grimsmo EL, Clausen AH, Langseth M, Aalberg A (2015) An experimental study of static and dynamic behaviour of bolted end-plate joints of steel. Int J Impact Eng 85:132–145. https://doi.org/10.1016/j.ijimpeng.2015.07.001
Yang B, Wang H, Yang Y, Kang S, Zhou X, Wang L (2018) Numerical study of rigid steel beam-column joints under impact loading. J Constructional Steel Res 147:62–73. https://doi.org/10.1016/j.jcsr.2018.04.004
Shi G, Shi Y, Wang Y, Bradford MA (2008) Numerical simulation of steel pretensioned bolted end-plate connections of different types and details. J Eng Struct 30(10):2677–2686. https://doi.org/10.1016/j.engstruct.2008.02.013
Gilsanz R, Hamburger R, Barker D, Smith JL, Rahimian A (2013) Design of blast resistant structures. Steel Design Guide No. 26, American Institute of Steel Construction, Chicago, IL
Unified Facilities Criteria (2008) Structures to resist the effect of accidental explosions. UFC 3-340-02
Jia L, Kuwamura H (2015) Ductile fracture model for structural steel under cyclic large strain loading. J Constructional Steel Res 106:110–121. https://doi.org/10.1016/j.jcsr.2014.12.002
Grimsmo EL, Clausen AH, Aalberg A, Langseth M (2016) A numerical study of beam-to-column joints subjected to impact. Eng Struct 120:103–115. https://doi.org/10.1016/j.engstruct.2016.04.031
Krolo P, Grandić D, Bulić M (2016) The guidelines for modelling the preloading bolts in the structural connection using finite element methods. J Comput Eng 2016, Article ID 4724312. https://doi.org/10.1155/2016/4724312
Abdel-Aziz A. Performance of end-plate bolted connections under blast loading. Ph.D. dissertation (under preparation), Structural Engineering Department, Faculty of Engineering, Cairo University, Egypt
Murray TM, Shoemaker WL (2016) Flush and extended multiple-row moment end-plate connections. Steel Design Guide No. 16, American Institute of Steel Construction, Chicago, IL
Abedini M, Zhang C, Mehrmashhadi J, Akhlaghi E (2020) Comparison of ALE, LBE and pressure time history methods to evaluate extreme loading effects in RC column. Structures. 28:456–466
The authors declare that no fund was received to perform this research.
Faculty of Engineering, Cairo University, Cairo, Egypt
Ahmed A. Osman & Sherif A. Mourad
The paper is based on the Ph.D. dissertation of AAO under the supervision of SAM. AAO wrote the initial draft of the manuscript under the supervision of SAM. Both authors developed the ideas and frameworks for the manuscript. All authors read and approved the final version of the manuscript.
Correspondence to Ahmed A. Osman.
The second author, SAM, is an associate editor for the Journal of Engineering and Applied Science.
Osman, A.A., Mourad, S.A. Performance of extended end-plate bolted connections subjected to static and blast-like loads. J. Eng. Appl. Sci. 68, 8 (2021). https://doi.org/10.1186/s44147-021-00001-3
Extended end-plate connection
Pretensioned bolts
Blast load
Pressure-impulse diagram
July 2015, 35(7): 3103-3131. doi: 10.3934/dcds.2015.35.3103
Asymptotics in shallow water waves
Robert McOwen and Peter Topalov
Northeastern University, 360 Huntington Avenue, Boston, MA 02115, United States
Received: August 2014. Revised: September 2014. Published: January 2015.
In this paper we consider the initial value problem for a family of shallow water equations on the line $\mathbb{R}$ with various asymptotic conditions at infinity. In particular we construct solutions with prescribed asymptotic expansion as $x\to\pm\infty$ and prove their invariance with respect to the solution map.
Keywords: spatial asymptotics, well-posedness of non-linear PDEs, Camassa-Holm equation, shallow water waves, groups of asymptotic diffeomorphisms.
Mathematics Subject Classification: 35Q35, 37K65, 35Q53, 37K1.
Citation: Robert McOwen, Peter Topalov. Asymptotics in shallow water waves. Discrete & Continuous Dynamical Systems - A, 2015, 35 (7) : 3103-3131. doi: 10.3934/dcds.2015.35.3103
Are these two graphs isomorphic? Why/Why not?
Are these two graphs isomorphic?
According to Bruce Schneier:
"A graph is a network of lines connecting different points. If two graphs are identical except for the names of the points, they are called isomorphic."
Schneier, B. "Graph Isomorphism"
From Applied Cryptography
John Wiley & Sons Inc.
According to a GeeksforGeeks article:
These two are isomorphic: And these two aren't isomorphic:
Manwani, C. "Graph Isomorphisms and Connectivity"
From GeeksforGeeks
https://www.geeksforgeeks.org/mathematics-graph-isomorphisms-connectivity/
According to a MathWorld article:
"Two graphs which contain the same number of graph vertices connected in the same way are said to be isomorphic."
Weisstein, Eric W. "Isomorphic Graphs."
From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/IsomorphicGraphs.html
The details are beyond me, but the MathWorld explanation seems to conflict with the first GeeksforGeeks example; the vertices appear the same, but they appear to be connected differently.
To add to the confusion, the same could be said for the second example. So I can't really deduce the facts.
Please try to keep answers as clear and simple as possible for the sake of understanding.
"Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things."
graph-theory graph-isomorphism graph-connectivity
tjt263
$\begingroup$ First of all, are you clear on the definition of isomorphic? $\endgroup$ – Mike Mar 9 at 19:24
$\begingroup$ You are not using definitions. Two things are isomorphic given an isomorphism, but you don't give one. Lacking one, common sense suggests "isomorphic" means for some isomorphism of a given kind. For graphs "isomorphic" assumes a certain kind of isomorphism. You are misusing descriptions that are too vague to be definitions. You quote MathWorld, but the explicitly informal introductory description, not the definition. Find formal definitions of "isomorphism" & "isomorphic". Also you seem to confuse the picture of a graph with the graph it pictures. Find a formal definition of "graph". $\endgroup$ – philipxy Mar 10 at 4:11
$\begingroup$ Words have meanings. You need to learn them. There is no royal road. "Explain like I was five" is said by people not willing to put in the effort to understand presentations they already found. This post is just asking, what does it mean for two graphs to be isomorphic, without any research effort--see the voting arrow mouseover texts. If you are stuck in some presentation(s) you should ask about where you are stuck, not ask for yet another one. You quote paraphrasings that are clearly from their context not definitions. They're useless. If you didn't know already, now you do. $\endgroup$ – philipxy Mar 10 at 9:50
$\begingroup$ Let's just say that @philipxy is completely right. The first sentence in the MathWorld article is merely an attempt at a gloss of the meaning of graph isomorphism, and not a precise statement. The precise definition is given in the second sentence, which you didn't even quote in your question! $\endgroup$ – user21820 Mar 10 at 10:13
$\begingroup$ I'm voting to close this question because I don't think that it is about mathematics as it is currently phrased. To be about mathematics, the objects being described need to be rigorously defined. For example, the terms "graph" and "graph isomorphism" have not been properly defined. $\endgroup$ – Xander Henderson Mar 10 at 12:21
The definition you quoted from MathWorld is too simplistic. Two graphs are isomorphic if there is some way to match up the vertices of one with the vertices of the other, so that the connections by edges are also matched up. The desired matching might not match vertices that are in the same positions in some drawings (for example, the top vertex in one picture need not match with the top vertex in another picture), nor does it necessarily match up vertices with similar-looking labels (like $v1$ and $v1'$). Any one-to-one correspondence between the vertices of one graph and the vertices of another graph is a candidate for an isomorphism --- a successful candidate if the edges then also match up. Travis's answer has given you an appropriate correspondence between the pentagon and the $5$-pointed star. You should check that it really works, i.e., that whenever two vertices of the pentagon are joined by an edge then (and only then) the corresponding vertices of the star are joined by an edge in the star.
A side comment: The fact that any one-to-one correspondence might serve as an isomorphism (if the edges match up correctly) is what makes it non-trivial to check whether two large graphs (i.e., graphs with many vertices and edges) are isomorphic. It's an open problem whether this checking can be done by an algorithm in a number of steps bounded by a polynomial function of the number of vertices. There has, however, been important progress recently; Babai has given an algorithm that's way more efficient than the brute force approach of checking all possible one-to-one correspondences.
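To make the brute-force idea concrete, here is a small Python sketch of the naive check (nothing like Babai's algorithm): it tries every bijection between the two vertex sets and tests whether it maps edges exactly onto edges. The pentagram's edge list below is an assumption, reconstructed from Travis's mapping rather than from the original picture.

```python
from itertools import permutations

def find_isomorphism(vertices1, edges1, vertices2, edges2):
    """Naive check: try every bijection V1 -> V2 and test whether it
    maps the edge set of graph 1 exactly onto the edge set of graph 2."""
    if len(vertices1) != len(vertices2) or len(edges1) != len(edges2):
        return None
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    for perm in permutations(vertices2):
        f = dict(zip(vertices1, perm))            # candidate bijection
        if {frozenset({f[u], f[v]}) for u, v in e1} == e2:
            return f                              # isomorphism found
    return None

# Pentagon (5-cycle) and pentagram (edges reconstructed from Travis's mapping):
pentagon_v = ["e1", "e2", "e3", "e4", "e5"]
pentagon_e = [("e1", "e2"), ("e2", "e3"), ("e3", "e4"), ("e4", "e5"), ("e5", "e1")]
star_v = ["c1", "c2", "c3", "c4", "c5"]
star_e = [("c1", "c3"), ("c3", "c5"), ("c5", "c2"), ("c2", "c4"), ("c4", "c1")]

print(find_isomorphism(pentagon_v, pentagon_e, star_v, star_e))
```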
Andreas Blass
$\begingroup$ So, are the first two graphs isomorphic or not? $\endgroup$ – tjt263 Mar 10 at 6:04
$\begingroup$ @tjt263 Yes. We can use many mappings. This one doesn't require much rearranging to see that it works: e1 -> c5, e2 -> c2, e3 -> c4, e4 -> c1, e5 -> c3; visually, you can move c1 to below c4 and c3 to below c5 to get the same shape as the left graph. $\endgroup$ – jaxad0127 Mar 10 at 6:18
$\begingroup$ @jaxad0127 but isn't the whole idea of isomorphism, that it doesn't have to be rearranged? I thought that was the whole point. $\endgroup$ – tjt263 Mar 10 at 7:02
$\begingroup$ @tjt263 No, "doesn't have to be rearranged" is the opposite of "the whole point". Read (the first paragraph of) my answer, and pay attention to the words in bold face. $\endgroup$ – Andreas Blass Mar 10 at 11:48
$\begingroup$ @tjt263 The definition of graph given in many of the answers should be a clue here: a list of vertices and a list of edges. Physical arrangement of vertices doesn't matter for graphs, in general. Related concepts, like planar graphs, may involve vertex placement. $\endgroup$ – jaxad0127 Mar 10 at 23:40
Both claims are correct.
Mapping $$e_1 \to c_1, \qquad e_2 \to c_3, \qquad e_3 \to c_5, \qquad e_4 \to c_2, \qquad e_5 \to c_4$$ maps the edges of the left graph precisely to those of the right graph, so that map defines an isomorphism of graphs.
The right graph has cycles of length $3$ (e.g., $aefa$) but the left graph does not, so the graphs cannot be isomorphic.
Travis
$\begingroup$ That's even more confusing. What are the cycle lengths of the first two? 5? You say the edges match precisely, then immediately after you seem to demonstrate the contrary. I mean.. I believe you, but I still don't get it. $\endgroup$ – tjt263 Mar 9 at 19:42
$\begingroup$ The two sets of comments apply separately to the two pairs of graphs. The first two graphs are cyclic of order $5$, so any cycle thereof has length $5$. $\endgroup$ – Travis Mar 9 at 19:51
$\begingroup$ But e2 corresponds with c2, and e3 corresponds with c3, and e4 with c4, and e5 with c5. It's about the vertices (not the edges) isn't it? $\endgroup$ – tjt263 Mar 9 at 20:38
$\begingroup$ There's no reason to consider that particular correspondence---it just happens that we've drawn the graph in a way that the $e_i$ are relatively positioned the same way as the $c_i$, but where we draw the vertices doesn't have anything to do with the graph itself. You should think of a graph as a pair $(V, E)$, where $V$ is a set of vertices and $E$ is a set of edges connecting those vertices. As the first pair illustrates, there's more than one way to draw a graph. $\endgroup$ – Travis Mar 9 at 21:19
$\begingroup$ In any case, whether a map between graphs is an isomorphism depends on both $V$ and $E$. For example, the graphs $K_1 \cup K_1$ and $K_2$ both have two vertices, but they are not isomorphic, as $K_2$ has one component but $K_1 \cup K_1$ has two. $\endgroup$ – Travis Mar 9 at 21:21
Here's something important to keep in the back of your mind when studying graphs: the definition of a graph. There are actually a few deviations in how one can define a graph, but this one will suffice for our purposes:
A (simple) graph $G$ is an ordered pair of sets $(V, E)$ (the sets of vertices and edges respectively), where $E$ consists of subsets of $V$ of cardinality $2$.
When the graph is finite (meaning $V$, and hence $E$, is a finite set), we can visually represent a graph by a diagram, assigning each point from $V$ to a distinct point in $\Bbb{R}^2$ (or occasionally, $\Bbb{R}^3$). If $\{u, v\} \in E$, then we draw a path from the point representing $u$ to the point representing $v$, going through no other point representing a point in $V$.
These diagrams are what we often tell people are "graphs", but they are really just a way to represent graphs. The first diagram represents the graph: $$G_1 = (\{e_1, e_2, e_3, e_4, e_5\},\{\{e_1, e_2\},\{e_2, e_3\}, \{e_3, e_4\}, \{e_4, e_5\}, \{e_5, e_1\}\}).$$ The second diagram represents the graph: $$G_2 = (\{c_1, c_4, c_2, c_5, c_3\},\{\{c_1, c_4\},\{c_4, c_2\}, \{c_2, c_5\}, \{c_5, c_3\}, \{c_3, c_1\}\}).$$ (Note the leading way in which I've decided to order the elements of my sets in defining $G_2$!)
Note that, if we took the picture of $G_1$ and, say, rotated it (keeping all the labels), then that picture would represent the same graph $G_1$. Not something isomorphic to $G_1$, I mean it would have exactly the same vertex and edge sets, i.e. the graph it represents would literally be $G_1$, even though it's a new picture. You could even start moving the vertices around independently of each other (again, keeping the same labels), and the diagram will continue to represent $G_1$.
In this way, we see that there is an enormous variety of diagrams to represent exactly the same graph.
Further, it is possible for multiple graphs to produce the same diagram (except with different labels on the vertices). If you draw, for example, $G_2$, with $c_1$ in the same position as $e_1$, $c_4$ where $e_2$ was, $c_2$ where $e_3$ was, $c_5$ where $e_4$ was, and $c_3$ where $e_5$ was, and connected up the adjacent vertices with line segments, it would come out to be the same diagram as the one for $G_1$, with different labels.
In that sense, we see that $G_1$ and $G_2$ are structurally the same graph, even though they share no vertices or edges! So, pictures introduce unnecessary variety through muddling up the positions of vertices, and the set definition of graphs introduces unnecessary variety by allowing label substitutions which don't affect the actual structure of the graph. How do we talk about two graphs being the same, in a way that doesn't throw up a false negative when the points are moved or renamed?
Enter, stage left, the concept of a graph isomorphism. If we have graphs $(V_1, E_1)$ and $(V_2, E_2)$, a graph isomorphism is a bijection $f : V_1 \to V_2$ with the property that $\{v, w\} \in E_1 \iff \{f(v), f(w)\} \in E_2$. So, the function preserves adjacency.
Two graphs are "the same" when an isomorphism exists between them. The isomorphism deals purely with the set definition of graphs (and hence doesn't care how you draw them), but will still exist even if you rename the vertices. We can therefore see that $G_1$ and $G_2$ are isomorphic, with an isomorphism as described above. The way I wrote $G_1$ and $G_2$ exposes this isomorphism clearly as well.
How can you tell that the other pair of graphs is not isomorphic? I think Travis covers this well in his answer. In the graph on the right, there are three vertices, e.g. $b, c, d$, such that any pair of them is an edge in the graph, i.e. $\{b, c\}, \{c, d\}, \{b, d\}$ are elements of the edge set. If an isomorphism $f$ existed, there would need to be points $f(b), f(c), f(d)$ such that $\{f(b), f(c)\}, \{f(c), f(d)\}, \{f(b), f(d)\}$ are all edges in the first graph. No such points $f(b), f(c), f(d)$ exist in the first graph (via quick exhaustive search), so no isomorphism exists. This implies that there's no way to rearrange the vertices from one diagram (and change their labels) to form the other diagram.
Summary (or tl;dr):
Graphs are defined using sets, not pictures!
The same graph may be drawn in many ways, so don't get distracted by vertices moving!
Isomorphisms don't care about the names of the vertices or their positions.
To see why the first pair are isomorphic, but the second pair aren't, see Travis's answer.
Theo Bendit
$\begingroup$ So far, I think this answer is making the most sense. When you talk about rotating the image, you must mean in 3D space? But I thought they were 2D only. $\endgroup$ – tjt263 Mar 10 at 7:30
$\begingroup$ I meant in 2D, but the same idea works for 3D too. Our pictures of graphs are in the (2D) plane, and if we turn the 2D plane, say, 90 degrees anticlockwise, we get a perfectly good picture of exactly the same graph (with the labels written vertically!). $\endgroup$ – Theo Bendit Mar 10 at 8:00
$\begingroup$ I don't see it. If you rotate the graph/plane 90°, it just looks like it's been turned on it's side (i.e. ↑ becomes ←). Surely you must be right and I'm wrong, but I'm obviously missing some crucial information here. It's like we're using the same words to describe totally different things. $\endgroup$ – tjt263 Mar 10 at 8:36
$\begingroup$ @tjt263 Yes, the picture has just been turned on its side. Note two things: the picture is different (in that the dots have moved position), but the graph is the same, i.e. the vertices are still $c_1, \ldots, c_n$ and the edges are still the same. I'm trying to demonstrate how the concept of a graph and the picture of the graph are not the same thing. $\endgroup$ – Theo Bendit Mar 11 at 9:09
This answer is an excuse to show an animation that a 5-year-old might understand. If you're looking for correct mathematical definitions, please switch to other answers or Wikipedia.
First, please be careful not to confuse:
a graph and the representation of a graph
a graph and a chart.
Isomorphism
In layman's terms, two graphs are isomorphic if there is a continuous movie transforming the representation of one graph into the other:
It is allowed to drag and rename vertices, it is not allowed to add or cut edges. The movie acts as an edge-preserving bijection, which is how graph isomorphism can be defined.
Here's the online tool I used.
Non-Isomorphism
The second example is here.
By dragging around vertices, you cannot create the triangles ($b,c,d$ or $a,e,f$) that are present in the other graph:
From a logical point of view, it isn't enough to show one failed attempt in order to prove that something isn't possible. To prove that the graphs aren't isomorphic, you could count their cliques.
Eric Duminil
$\begingroup$ @tjt263: Indeed, graph isomorphism doesn't care about coordinates, color, weight or name. The only relevant information is : "are these nodes connected to one another"? $\endgroup$ – Eric Duminil Mar 10 at 10:17
$\begingroup$ @tjt263: The first graph in your question can be perfectly described by "e1-e2-e3-e4-e5-e1". No other information is needed. It might be easier to understand isomorphism once you reduce a graph to its bare minimum. $\endgroup$ – Eric Duminil Mar 10 at 10:42
$\begingroup$ This begs the question. Eg define "continuous movie transforming one graph into the other". (While you're at it--the asker doesn't seem to know what a graph is.) Eg "By dragging around vertices, you cannot create the triangles that are present in the other graph."--Oh? Where is this justified for this movie? How does it imply there's no other movie? (Whatever a movie is.) Etc etc. Re (more) reasons one might downvote: The asker's comment(s). (After inexplicable introductory "This might be the best"--eg the immediate request for an explanation.) PS Please clarify via edits, not comments. $\endgroup$ – philipxy Mar 10 at 11:00
$\begingroup$ @tjt263: I can see I received two upvotes and one downvote. Downvoting isn't a problem per se, it's simply better when there's a comment associated to it. "Bar graph" and "graph" aren't related to each other. A graph can be defined and used without any visualization, a bar chart is a visualization. $\endgroup$ – Eric Duminil Mar 10 at 11:38
$\begingroup$ As @philipxy said, your answer begs the question, and in fact is significantly more complicated than the actual correct definition of isomorphism. You are not only involving continuous deformations, which aren't part of the correct definition, but also not explaining what those things are, nor how to prove based on your wrong definition when there is no isomorphism. $\endgroup$ – user21820 Mar 10 at 11:45
I think the thing you are missing here is that the only things in the definition of a graph are the vertices and the edges (in the general case; there are other, more specialized graphs). So the only things we should take from the visual representation of the graph are the vertices and the edges.
So even though the visual representation of a graph might have other properties like direction, dimension (2D or 3D representation), intersections, angles, or edge lengths, we should ignore them. What this means is that while dealing with a visual representation of a graph, we can move the points around in any dimension (keeping the edges intact), rotate the graph, or stretch the edges, and the graph stays the same. Not only are the graphs isomorphic, they are actually the same graph (apart from the differently named vertices).
It might help to work out the actual definitions of said graphs by hand as described by Theo Bendit and see for yourself that there is nothing different in graphs one and two, unlike in graphs three and four.
Tande
$\begingroup$ People keep saying this, but if you move the vertices, rotate the graph, stretch the edges, etc. it's not the same anymore is it? I mean, how can it be? We've literally just changed it, haven't we? The coordinates are different, or the sequence in which they're connected has changed, so the edges have changed. And so on. If they were the same.. they'd be the same! So they'd look the same. This seems obvious. But it's contrary to the general consensus. I know that you must be right and I must be wrong. But how? Or why? $\endgroup$ – tjt263 Mar 10 at 9:52
$\begingroup$ @tjt263 The thing here is to separate what a graph is from how we visualize what a graph is. A graph is a pair of a set of vertices and a set of unordered pairs of those vertices (i.e. edges). We can visualize these things in different ways by drawing them out in a descriptive way, but these visualisations are inherently limited. An analogous way would be to think of graphs as, say, tennis balls connected with rubber strings. We can move and stretch and reorder the balls and strings but the underlying structure (graph) stays the same. $\endgroup$ – Tande Mar 10 at 10:24
$\begingroup$ @tjt263 So the orientation or coordinates or the sequence are not part of the graph, but the representation of the graph. The only thing that matters in a graph is if they have the same amount of vertices connected in the same way (think about the tennis ball analogy). $\endgroup$ – Tande Mar 10 at 10:49
$\begingroup$ Okay, thanks. This is helpful. You said: The thing here is to seperate what a graph is from how we visualize what a graph is. What exactly is a graph then? Is a bar graph not really a graph? What we're doing here seems analogous to say, if we had a bar graph, and we started re-arranging the bars and changing the plotted values, and flipping the axes, then saying the information hasn't changed, when it obviously has. See what I'm getting at? $\endgroup$ – tjt263 Mar 10 at 10:57
$\begingroup$ @tjt263 Yes I can see where the confusion comes from, but no, it has not changed. What has changed is the visual representation of a graph, not the actual graph. When we speak of a graph in mathematical terms all we have is the info of the sets V and E (where the graph G itself is G=(V,E) ) of the vertices and edges respectively. That is all that a graph is in mathematics. Graph is a structure. We can't change the graph without changing the vertices (nodes) or edges. All else is part of other things, like the representation of the graph rather than the graph in itself. $\endgroup$ – Tande Mar 10 at 12:40
What is a "graph"?
So you're reading through some math about data structures and whatnot and you just can't seem to wrap your head around what a "graph" is exactly. You've graphed functions on graph paper as far back as secondary school and had no trouble with it, but now the little "lines and dots" diagrams you keep seeing pop up in the explanations of "trees" and "cycles" and "edges" just seem to have no connection to what you've learned before about graphing. You see something like the following:
And it just seems to make no sense - it is as if the "graphs" have no care at all for the position or scaling or orientation of the lines and points on the page!
There is a good reason for that. The dots and lines on the page are just a symbolic sketch of what the "graph" represents. The lines on the page have nothing to do with the "y=mx+b" equations you graphed in secondary school, and neither do the dots have anything to do with cartesian coordinates you may have plotted on those same pieces of graph paper. Instead, what is being represented on the page is the simple existence of a number of vertices (the dots) and the existence of connections between specific vertices (the lines); the actual "graph" itself being the abstract idea of a certain number of objects having a certain number of connections between them arranged in a specific way. Other than historical precedent, there isn't really any need for them to be called "graphs", nor for them to be pictorially represented by lines and dots. It's basically just because most textbook writers and textbook readers have meat-brains that are generally intuitive about 2D and 3D space that we keep this pictorial shorthand around. Computers would rather just be sent an easy-to-parse list of vertex objects and vertex connections, like the following:
(where we have ordered everything alphanumerically for our own convenience, though that is not required at all). In fact, one could systematically order every possible pairing of points and assign a binary 1/0 to the presence or lack of the corresponding connection (in practice that gets messy, as the number of possible connections scales approximately with O[n^2], while sparse graphs will more reasonably have around O[n] connections). A tiny sketch of this "listspace" encoding, with a hypothetical 4-vertex graph, follows below.
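```python
# Sketch: encode a graph as a vertex list plus a 0/1 flag for every
# systematically ordered vertex pair (hypothetical 4-vertex graph).
from itertools import combinations

vertices = ["a1", "a2", "a3", "a4"]
edges = {("a1", "a2"), ("a2", "a3"), ("a3", "a4"), ("a4", "a1")}

pairs = list(combinations(vertices, 2))   # the 6 possible connections
bits = [1 if p in edges or p[::-1] in edges else 0 for p in pairs]
print(list(zip(pairs, bits)))
```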
So what does all this have to do with your question? Well, it lets us explain why we can poke and prod at the dots and lines in our "meatspace" symbolic diagrams and still get a logical comparison of similarities and differences between two abstract "graphs". Even though things look remarkably transformed in "meatspace", the relations go unchanged in "listspace" and thus the abstract objects themselves go unchanged despite all the cosmetic "meatspace" changes we've used to simplify the correspondence (or lack thereof) between multiple objects.
Is Pentagon-A isomorphic to Pentagram-B?
How would we go about this problem? Well, first we could look at the two graphs in "meatspace" to see if it's super straightforward (like comparing a square to a rectangle). Then if that doesn't work, we can look in "listspace" to see if anything is simpler there.
In this case, both "meatspace" and "listspace" don't do us any favors. Our pictorial diagram has way more confusing crossings in one picture than in the other, and in our list, the default alphanumeric ordering is doing us no favors. We could try brute-forcing the list by assigning b1=a1, then b2=a2, and on down the line until something breaks, at which point we try a different combination of assignments (like a really inefficient Sudoku puzzle), but perhaps there's a way to simplify things in meatspace instead.
We can now use this relation to find a way to compare A directly to B: plugging a1->b1, a2->b3, etc. into the A-list gives a B'-list.
Comparing the B list to the B' list, every vertex and every specific vertex-to-vertex connection occurs the exact same number of times. They're isomorphic!
This isn't the only way we could go about it, though. Instead of simplifying things in "meatspace" we could try to simplify things in "listspace". What if we noticed that both shapes can be drawn "without taking your pen off the paper"? Then we could conclude that if there is a long a1-to-a2-to-a3... path through the list of connections in A, the two can only be isomorphic if there is an equivalent path through the Bs. Instead of listing connections alphanumerically, we could instead try listing a connected chain of entries:
Making a "longest chain" through our entries appears to work really well for this pair of examples. The pentagon has a very simple chain a1-a2-a3-a4-a5-a1 that loops back on itself, and the pentagram also loops back on itself with the b1-b3-b5-b2-b4-b1 which also loops back on itself. Both loops are 5-connections long, and there are no other chains or side loops to worry about. They must be isometric!
But wait! Everything seems to work out no matter what we do. What if we plug just any old thing in for the ai->bj conversion?
Well, if we accidentally guess 3/5 of the correct answers (relative to our other worked-out example) then we get something like...
and clearly our guess did something wrong, because two specific connections are missing from B' and two extra connections exist where they aren't supposed to be, indicating that this ai->bj conversion was not an isomorphic mapping. So, clearly, brute-forcing can hit bad answers.
Are these two hexagons with two extra edges isomorphic?
So now that we have a bunch of tools and tricks to fiddle around with these abstract "graph" things, how about those two examples with 6-points and 8-edges each?
Well, on the one hand, we can push and prod until we have no more crisscrossing happening in "meatspace"; then we have a set of three quadrangles (and everything else outside) on our B side, but on the A side we have two triangles, one quadrangle, and everything else outside. That doesn't line up correctly with the other pictogram no matter how we rotate our hexagon.
If we try to go from "listspace" instead, then we can try the "longest chain" trick again and get a 6-gon with two extra connections, but there is no way that we can reorder things such that the two remaining connections in A' have anything to do with the two remaining connections in B'. If, instead, we try to make a minimal chain, then we have 3-loops {a1,a2,a6} and {a3,a4,a5}, which we cannot recreate in any way with the 4-loop minimums of {b1,b2,b3,b6}, {b3,b4,b5,b6}, etc.
In fact, 6 vertices is few enough that all 6! possible mappings from ai->bj can be tested in a reasonable amount of time, showing that the graphs in fact have no working mappings and thus cannot be isomorphic to one another.
AmateurDotCounter
$\begingroup$ Excellent answer, with a clear explanation and nice looking diagrams. $\endgroup$ – Eric Duminil Mar 10 at 10:44
$\begingroup$ Eric Duminil, same to you. I just now saw your answer, and those animations are super slick! $\endgroup$ – AmateurDotCounter Mar 10 at 11:58
$\begingroup$ This suffers from the problems I mention in my comments on @EricDuminil's answer. You use a lot of undefined vague terms and you never justify your claims or answer the question. $\endgroup$ – philipxy Mar 10 at 13:00
Depressive symptoms are associated with social isolation in face-to-face interaction networks
Timon Elmer and Christoph Stadtfeld (ORCID: orcid.org/0000-0002-2704-2134)
Scientific Reports volume 10, Article number: 1444 (2020)
Individuals with depressive symptoms are more likely to be isolated in their social networks, which can further increase their symptoms. Although social interactions are an important aspect of individuals' social lives, little is known about how depressive symptoms affect behavioral patterns in social interaction networks. This article analyzes the effect of depressive symptoms on social interactions in two empirical settings (Ntotal = 123, Ndyadic relations = 2,454) of students spending a weekend together in a remote camp house. We measured social interactions between participants with Radio Frequency Identification (RFID) nametags. Prior to the weekend, participants were surveyed on their depressive symptoms and friendship ties. Using state-of-the-art social network analysis methods, we test four preregistered hypotheses. Our results indicate that depressive symptoms are associated with (1) spending less time in social interaction, (2) spending time with similarly depressed others, (3) spending time in pair-wise interactions rather than group interactions but not with (4) spending relatively less time with friends. By "zooming in" on face-to-face social interaction networks, these findings offer new insights into the social consequences of depressive symptoms.
Social interactions are the smallest building blocks of interpersonal social networks and are a prerequisite of the formation of functional social relationships. The lack of social interactions and social relationships (i.e., social isolation) can have detrimental effects on an individual's physical and psychological health. Social isolation increases the risk for coronary heart disease, stroke, and mortality1,2,3 and can negatively influence psychological health leading to depressive symptoms4,5.
But social isolation can also be the consequence of depressive symptoms. It is well established that individuals with depressive symptoms have less rewarding and more dysfunctional social relationships6,7,8. In that vein, longitudinal social network studies have shown that depressive symptoms affect the creation, maintenance, and termination of social ties9,10. While the effects of depressive symptoms have mostly been examined in self-reported friendship networks, many processes are in fact argued to operate on the more fine-grained level of social interactions9,11,12,13,14. Investigating the social processes on an interaction level can help us to understand how depressive symptoms contribute to being socially isolated. This paper thus develops and tests four preregistered hypotheses on how depressive symptoms affect face-to-face interactions in social networks.
The first hypothesis (depression-isolation hypothesis) states that depressive symptoms are associated with fewer social interactions. It has been argued that depressive symptoms are accompanied by a change of social skills and motivation to socialize (e.g., more reassurance seeking)7,15,16. Individuals with more depressive symptoms may experience fewer social interactions because: (1) they may elicit rejection from others as they induce a negative mood in their interaction partners17,18,19 and (2) they are likely to receive less reinforcement from the social environment, which contributes to a feeling of discomfort in social interactions and decreased social participation7,20,21. In line with these theoretical considerations, Brown and colleagues20 have reported a negative association between depressive symptoms and the amount of self-reported social interactions. At the same time, other studies reported no differences in the quantity of social interactions but only on qualitative aspects of social interactions22,23,24. These self-report-based findings, however, may entail measurement biases that are associated with how depressed individuals self-report social interactions (e.g., having more negative social self-perceptions)25,26. The use of a direct behavioral measure of social interactions that we propose in this article allows us to overcome these measurement biases.
The second hypothesis (depression-homophily hypothesis) states that individuals are more likely to interact with others who have a similar level of depression9,10. The tendency to bond with similar others (homophily)27 has been found to be one of the most consistent patterns in social networks. It is expected to be prevalent on the depression scale, as sharing emotional states with similar others can lead to more compassion and self-disclosure and thus to more rewarding interactions28.
The third hypothesis (depression-friendship hypothesis) states that individuals' depressive symptoms are associated with the relative time that they interact with friends. The direction of this association, however, is unclear. We assume that friends tend to spend time together29 and that they will be more aware of each other's mental health than non-friends (e.g., through signs of verbal or non-verbal behaviors in previous interactions)7. On the one hand, some evidence suggests that friends are less rejecting of individuals with depressive symptoms than strangers30. This would indicate a positive association. On the other hand, individuals with more depressive symptoms are more likely to interact with others in a way that focuses on their problems, seeks reassurance, and pushes others to solve their problems16. This tendency might be particularly noticeable when depressed individuals interact with their friends, as these relations are characterized by more self-disclosure31,32. This tendency may lead friends to avoid social interactions with individuals with more depressive symptoms. In one empirical study, Brown et al.20 showed that depressed individuals tend to interact with their friends less often, compared to healthy controls.
The fourth hypothesis (dyadic-isolation hypothesis) states that individuals' depressive symptoms are associated with a higher number of interactions in pairs (dyads), rather than interacting in groups of three or more. Depressed individuals may show a higher frequency of dyadic interaction because of their tendency of "discussing and revisiting problems, speculating about problems, and focusing on negative feelings" (p. 1830) in dyadic social interactions that are characterized by more self-disclosure (i.e., co-rumination)33. If co-rumination is more likely to occur in pairs, this could lead to an over-representation of dyadic interactions among depressed individuals.
The present study is situated in a context in which individuals (first-week undergraduate students) get to know each other in the process of an emerging social group. Two independent cohorts participated in this study (N1 = 73, N2 = 50). About 22% of the participants reported clinically relevant levels of depressive symptoms (more than 16 scale points) and 39% reported sub-clinical levels of depressive symptoms (between 9 and 16 scale points)34. Rather than using self-reports of social interactions, we collected fine-grained data on face-to-face interactions using newly developed Radio Frequency Identification Devices (RFID)35. Figure 1 shows a picture of an RFID badge, which is usually worn as part of a nametag. The badge automatically records when study participants face each other frontally in the very close proximity that is typically associated with a social interaction. Recently, RFID badges have been validated for measuring such face-to-face social interaction36. The data collected by the RFID badge are combined with self-reported data on friendship relations and depressive symptoms assessed prior to the social interactions. We apply state-of-the-art statistical methods of social network analysis37,38 that take into account that relational observations are not statistically independent. We, thereby, test four preregistered hypotheses (osf.io/xce9g) on the interplay between social interactions and depressive symptoms, while taking the role of preexisting friendship into account. Furthermore, we statistically control for the effects that the Big Five personality traits have on social interaction, as they are argued to affect social interactions39. The unique social network design allows us to test these relational hypotheses, which require data on a closed group of interacting individuals and on the depressive symptoms of (possibly) all individuals in the group.
A picture of an RFID badge.
Social isolation can be both a cause and a consequence of depressive symptoms, potentially trapping some individuals in a vicious cycle. Understanding the fine-grained interaction patterns of individuals with depressive symptoms can be a first step towards future interventions to break this vicious cycle of social isolation and depressive symptoms.
Description of the data
On average, individuals reported a depression score of 10.28 (SD = 5.25) in sample one and 11.98 (SD = 7.97) in sample two. According to the screening criteria defined by Radloff40, 15% of the respondents in sample one and 29% of sample two show clinically relevant levels of depressive symptoms. Similar prevalences have been measured in representative samples of university students41,42. In sample one, we also collected data on individuals who were in the same study group but chose not to attend this voluntary social event or signed up after all slots had been taken. Those individuals attending the weekend did not differ in their level of depressive symptoms from those who did not attend this voluntary event (N = 119), t(174) = 0.15, p = 0.881. A total number of 23,452 social interaction events were recorded in sample one and 12,225 in sample two. These numbers relate to the raw data of recorded RFID interactions over the whole weekend. The average duration of interactions was 94.51 (SD = 212.77) seconds and 86.81 (SD = 186.32) seconds, respectively. The large standard deviation indicates the amount of variability between pairs of students. These social interactions were aggregated to one adjacency matrix per sample, where each entry represents the total duration of social interactions between individuals i and j. Each participant on average interacted 16.87 hours (SD = 7.27) with others in sample one and 11.79 hours (SD = 6.41) in sample two. Figure 2 shows these interaction networks. Each individual is represented as a node, where the node color indicates the degree of depressive symptoms (dark red = high, yellow = low, grey = missing value). The thickness of ties denotes how long two individuals have interacted with each other. The networks exhibit typical social network structures — for example, interactions tend to cluster within certain regions of the network.
Durations of social interactions over the course of the data collection for sample one (a) and sample two (b); tie color and width = interaction duration, blue node frame = student organization member, color = depressive symptoms (dark red = high, yellow = low, grey = missing value), circles = females, squares = males, plotted with visone43.
On average, the participants reported 0.66 friendship ties (SD = 1.28) in sample one and 2.14 (SD = 2.13) in sample two. Because the participants of sample two had known each other for a week longer, more friendship relations had been established. In total, 48 ties (sample one) and 107 ties (sample two) were reported. Of those, 20 were mutual and 28 were asymmetric in sample one. In sample two, 78 friendship ties were mutual and 29 were asymmetric.
Before testing our hypotheses with multivariate social network methods, we – in the next paragraph – show how depressive symptoms and different aspects of social interactions correlate bivariately on the individual level. Table 1 shows the correlation coefficients of depressive symptoms with properties of the interaction network. These coefficients show that depressive symptoms are negatively correlated with how much time individuals spend in social interactions. Depressive symptoms do not correlate with the amount of time spent with friends (symmetrized measure). However, there is a negative correlation with the amount of time spent with mutual friends. We find no evidence for a correlation between depressive symptoms and the amount of time spent in dyadic interactions, but a negative correlation with the amount of time spent in group interactions. These differences between dyadic and group interactions are also reflected in the positive correlation of depressive symptoms with one's ratio of dyadic interactions in all social interactions.
Table 1 Pearson correlations between depressive symptoms and interaction aggregates.
Multi-group MRQAPs
To test the multivariate relationships between social interactions and individuals' attributes, we conducted a multi-group MRQAP analysis38. Parameters of a MRQAP can be interpreted exactly like parameters of a linear regression model, but because the assumption of independent observations is violated, MRQAPs rely on a permutation-based test to obtain statistical inference (more details on MRQAPs can be found in the methods section). The result of our MRQAP analysis is shown in Table 2, reporting the estimates for the observed network (\(\hat{\beta }\)) and comparing them with the β estimates obtained under 5,000 network permutations. The mean value of the estimate under the permuted dependent networks is indicated by \(E(\beta )\).
Table 2 Multi-group QAP results on log-transformed interaction durations of dyads.
The results of the multi-group MRQAPs support the notion of depression isolation; dyads with a high mean in depressive symptoms were less likely to interact. It has to be noted that the effect size of the estimate cannot be interpreted directly due to the log transformation of the dependent matrix. The following example should illustrate the size of this effect: The interaction time between two individuals with a depression score of 5 each is estimated to be 9.12 seconds per hour (exp(2.504-0.059*5)), whereas an interaction between two individuals with a depression score of 20 is estimated to last for only 3.76 seconds per hour (exp(2.504-0.059*20); considering that everything else is the reference category - for instance, that there is no friendship tie present).
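For readers who want to reproduce this arithmetic, here is a minimal sketch in Python; the coefficients are the intercept and depression mean estimates quoted above, and everything else is held at the reference category:

```python
import math

# Estimates quoted above (intercept and depression mean effect from Table 2)
intercept, b_dep_mean = 2.504, -0.059

low  = math.exp(intercept + b_dep_mean * 5)    # both scores 5:  ~9.1 sec/hour
high = math.exp(intercept + b_dep_mean * 20)   # both scores 20: ~3.8 sec/hour
print(low, high)
```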
There was a positive effect for depression similarity; this suggests that social interactions were more likely between individuals that reported a similar level of depressive symptoms (depression-homophily hypothesis). Moreover, the interaction between depression mean and depression similarity was a negative predictor of social interactions, showing that depression homophily is stronger at the lower (the less depressed) end of the scale.
The multivariate interplay between predictors of social interactions and their effect size can be shown with a selection table, where the estimates of a multivariate analysis are used to calculate an estimate for the dependent variable (i.e., social interaction duration) for various configurations of the predictors9. In our case, we want to show how various levels of depressive symptoms of individuals i and j predict the social interaction duration of the dyad yij, using the estimates of depression mean, depression similarity, and their interaction. The values for \({\hat{y}}_{ij}\) over the observed range of depressive symptoms of i and j (0 to 36) are shown in a heatmap in Fig. 3. Details on the computation of \({\hat{y}}_{ij}\) for this Figure can be found in the Supplementary Material (Section Computation of the Selection Table). For the case of two male students of sample one who are not friends and have the same age (i.e., all reference categories), Fig. 3 shows that interactions in which both individuals were highly depressed were the least likely; the most likely interactions were those between two individuals low in depressive symptoms, or between one highly depressed individual and one low in depressive symptoms.
\({\hat{y}}_{ij}\)(in sec/h) for depression values between 1 and 36 (i.e., the range of observed values) for the case of all reference categories (of e.g., gender, age, friendship ties).
To investigate how depressive symptoms are associated with the extent to which individuals interact with friends, we tested an interaction of depression mean with the symmetrized friendship matrix. There was no significant effect of depression mean with being friends in predicting social interactions (depression-friendship hypothesis). As noted earlier, friendship relations might be mutual or asymmetric (either both individuals consider the relationship a friendship or just one of the two). Neglecting this information might blur the distinction between weak and strong friendship ties. For this reason, we conducted additional analyses in which we included two matrices capturing the mutual and asymmetric friendship relations instead of one symmetrized friendship matrix.
In those analyses, we find a negative interaction effect of depression mean with being mutual friends in predicting social interactions (β = −0.084, p = 0.029), indicating that depressed individuals tend to interact less with their reciprocated friends than non-depressed individuals. Interestingly, the interaction of asymmetric friendship ties with depression mean was positive but did not predict interaction duration significantly (β = 0.069, p = 0.095). Details on these results are provided in Table S3 of the Supplementary Materials.
Beyond these depression-related findings, the multi-group MRQAP analysis shows significantly higher estimates for sample two, and the estimates increased with increasing mean age and increasing age similarity. Negative estimates were found for both individuals being female, indicating that interactions between two females are observed less often than interactions between two males. The overall explained variance of the model is R2 = 0.12, which is not very high, but considerable given the large set of factors that potentially affect the formation of social interactions between two individuals.
We conducted a number of robustness analyses of these multi-group MRQAP analyses: (1) for the two samples separately, (2) with a non-log-transformed dependent matrix, and (3) with non-merged RFID data (interactions of dyads that were no longer than 75 seconds apart had been merged, as recommended by Elmer et al.36 for improved validity). We also included measures of the Big Five personality traits in the model. The results of these robustness analyses can be found in Table S1 and Table S2 of the Supplementary Material. They indicate that the findings of this study are robust against different data treatments, within each sample, and when controlling for the effect of personality traits. The exceptions are the depression similarity effect, which is not a significant predictor in the separate analysis of sample two (β = 0.024, p = 0.142) or when modeling the non-log-transformed duration matrix (β = 0.557, p = 0.213), and the depression mean effect, which is not significant when modeling the non-log-transformed duration matrix (β = −0.474, p = 0.160).
Dyadic and group interactions
Finally, we tested the assumption that individuals with more depressive symptoms spend relatively more time in dyadic interactions than in group interactions (dyadic-isolation hypothesis). This hypothesis cannot be tested with the MRQAP, as the unit of analysis is beyond a dyadic relation. To account for the interdependencies between observations, we performed a permutation-based correlation test of depressive symptoms on the ratio of dyadic interactions in all social interactions. Permuting the dependent variable (i.e., the ratio) here follows the general logic of bivariate QAPs37. There was a positive correlation between an individual's ratio of dyadic interactions and depressive symptoms (r(121) = 0.263, p = 0.003, 5,000 Y-permutations). In other words, the more depressive symptoms an individual reports, the smaller is the proportion of group interactions in the total time spent in social interactions.
In this study, we investigated how individuals' depressive symptoms affect social interaction networks within two independent student communities spending a weekend socializing in a remote camp house. We find that individuals' depressive symptoms are associated with spending less time in social interactions. This is in line with our depression-isolation hypothesis. We also find that individuals tend to interact with others that have a similar level of depressive symptoms, as postulated by our depression-homophily hypothesis. This homophily effect is more pronounced on the lower end of the depression scale. We find no support for the depression-friendship hypothesis, stating that individuals' depressive symptoms are associated with the extent to which they interact with friends. In further explorations, we find that the likelihood of interacting with mutual friends (i.e., both individuals nominating each other) decreases with higher depression scores. We find no such effects for asymmetric friendship ties (i.e., only one friendship nomination). This might indicate that the hypothesized association depends on the strength of a friendship relation. In line with the dyadic-isolation hypothesis, depressive symptoms are associated with the sizes of interaction groups; individuals high in depressive symptoms are more likely to interact in dyads than in groups.
Besides generally lower levels of social interactions, network-specific behavior patterns of individuals with higher levels of depressive symptoms can additionally contribute to their vicious cycle of social isolation and depression. First, the tendency to interact with similarly depressed individuals can lead to more exposure to their dysfunctional attitudes and thus to being socially influenced to develop more depressive symptoms11. Second, because of the unique support that strong friends can provide (e.g., emotional support), a lack of interactions with them can lead to the development of more symptomatology44. Third, the tendency of depressed individuals to interact in pairs instead of groups could additionally contribute to the interaction partners' social isolation, as they are both more likely to become dyadically isolated and interact less with other individuals in a group setting.
These findings contribute to the broad literature on the association between depressive symptoms and social interactions. Prior studies have relied on self-reports of depression and interaction (e.g.,20,23). More objective measures of social interactions and social network research designs are, however, necessary to explore more complex relational phenomena.
To study the network dimension of social interaction and depressive symptoms, we apply established social network analysis methods (i.e., MRQAP)38. These take into account that observations were not sampled randomly from a large population (as in most other psychological studies) but consist of a closed community of individuals, where the dependence between individuals' depressive symptoms is at the core of the analysis (e.g., how likely is an interaction based on the similarity in depressive symptoms of two individuals). MRQAPs follow the general estimation intuition of a multivariate regression and are thus straightforwardly interpreted, as illustrated in our results.
The empirical setting of this study was unique in many ways. First, we measured social interactions with recently developed RFID badges that allowed us to observe individual behavior directly. Given the small number of studies on "actual" behavior, scholars have been encouraged to apply such methods to psychological research questions45. We deliberately formulated and tested our hypotheses on the interaction level, thereby zooming in on the processes that are usually measured through friendship ties11,12,13. We argue that friendship relations only capture a very specific (and somewhat abstract) form of relations31 and thus do not say much about the broad range of social contact individuals have in daily life. Although people tend to interact with their friends frequently, a large proportion of individuals' interactions are with non-friends. Social interactions, on the other hand, are the basic building blocks of social life and also occur frequently with non-friends. Most importantly, social interactions are at the level where social interventions can operate: interventions cannot change how many friends one has, but they can change with how many people one socially interacts.
Second, we combined these data with state-of-the-art sociometric data (friendships) and self-report data on depressive symptoms. We thus applied a multi-method approach that also allows us to take the association between friendship and social interactions into account.
Third, the fact that the students spent an entire weekend in a remote camp house constituted an isolated setting in which only social interactions between participants were possible. All attendees of the weekends participated in the RFID data collection, providing us with a full-range view of the social interaction dynamics of the participating individuals.
Fourth, we conducted additional analyses in which the effects of the Big Five personality traits on social interactions were statistically controlled for. The findings of this study are robust, even when taking the effects of the Big Five personality traits into account. Hence, depressive symptoms explain unique aspects of social interactions beyond those that can be explained by the Big Five personality traits.
This study also had a number of limitations. First, our empirical setting was a very specific population and context – a socializing weekend of first-semester students. Presumably, all participants felt a norm of being socially engaged at this event. At the same time, friendship relations had often been formed relatively recently. Future studies should investigate social interaction networks in different social settings. In that vein, the empirical settings were relatively small. Thus, the application of different types of social interaction measures (e.g., through smartphones46) could provide access to broader social settings. Second, in our samples about 20% of individuals reported depressive symptoms above a clinically relevant cutoff point. A further extension of this study would be to investigate and replicate the tested hypotheses using a sample in which individuals with diagnosed depression are oversampled (e.g., in a psychiatric ward). Nevertheless, given that the social impairment associated with depressive symptoms is argued to increase linearly with the number of symptoms reported47,48, our findings potentially provide reliable estimates for the social behavior of individuals with depression too. Third, our method of measuring social interaction was limited to assessing quantitative aspects of a social interaction but not qualitative aspects. Hence, we do not know how a potential social skill deficit of depressed individuals actually affected characteristics of social interactions (e.g., eye-contact avoidance of individuals with depression)7. Fourth, we aggregated the social interactions of the two samples over the data-collection period and thus leave out the temporal dynamics of these social interactions. This is suitable for testing the hypotheses in this article, because they relate to the overall amount of social interactions. Future studies, however, could aim at understanding how depressive symptoms relate to particular interaction sequences. For such research questions, time-stamped network analysis methods are a suitable framework49,50,51. Fifth, the undirected nature of the social interaction measure only allows us to draw conclusions about which interactions are more likely—and not which interactions depressed individuals seek, avoid, or terminate. Sixth, it is important to consider that effects between depression and social ties can go in both directions9,11,12,13,14,52: social ties can affect individuals' levels of depressive symptoms, and depressive symptoms can affect how individuals form and maintain social ties9,11,12,13,14. This article only focuses on the latter by showing how depressive symptoms predict social interactions. Future studies could investigate how social interactions on this weekend affected depressive symptoms later on.
Despite these limitations, our study has highlighted the strong effects that an individual's depressive symptoms have on social interactions. We have further demonstrated that social network designs and methodologies can offer us new insights on fundamental issues of psychology and behavioral studies. We believe that an in-depth understanding of the small-scale social consequences of depressive symptoms can help to design interventions targeting the downward spiral of depression and social isolation more effectively.
We investigated our research questions with two independent datasets of newly formed undergraduate student cohorts attending a voluntary social event on the first (sample one) and second (sample two) weekend of their studies. The data were collected in the context of the Swiss StudentLife study53. The data analyzed in this article and the analysis script can be downloaded from osf.io/4sj4s.
The first sample consisted of N1 = 73 individuals, of which 14 belonged to the student organization that organized the event. The second sample consisted of N2 = 50 individuals, including 14 student organization members. Prior to the weekend, 53 (73%; sample one) and 48 (96%; sample two) of the participants completed an online survey that assessed friendship ties within the cohort and depressive symptoms. None of the student organization members of sample one participated in the survey. All non-responses were treated as missing data. The first sample was predominately male (37% female), whereas the second sample was mostly female (60%). The mean ages of the two samples were 20.75 years (SD = 2.09) and 21.73 years (SD = 3.24), respectively. In total, there were 3,853 dyadic relations (\({N}_{1}^{dyads}=\frac{{N}_{1}({N}_{1}-1)}{2}=2{,}628\); \({N}_{2}^{dyads}=1{,}225\)), of which 2,454 (64%) remained after listwise deletion of missing data. Hence, the sample size for our analyses should be sufficiently large.
In the three days prior to the weekend (Tuesday to Thursday), participants were invited to complete the online questionnaire. The study was advertised as a broad investigation of social integration and the lives of students in their first year at university.
Before the arrival at the remotely located camp house, each participant was equipped with a badge that contained an active Radio Frequency Identification device (RFID; see Fig. 1), which allowed us to measure their social interactions35,36. The badge was covered with a piece of paper with the participant's name printed on it; hence, the RFID badge itself was not visible. Participants were briefed on the badge's functionality and purpose. All participants were instructed to wear the RFID badge at chest height during their time spent awake. In both samples, all of the participants agreed to wear the badge throughout the weekend. During the event, study confederates checked that the participants wore the badge correctly. After an initial excitement about the badges, participants soon seemed not to notice or discuss them frequently. The events were scheduled in late September 2016 from Friday 7 pm to Sunday 8 am (sample one) and in early October 2016 from Saturday 3 pm to Sunday 11 pm (sample two). During the course of the weekend, there were some organized activities (e.g., group games, lectures), but most of the time was unstructured so that participants could freely interact with each other (structured time was 120 minutes in sample 1 and 45 minutes in sample 2).
Social interactions
During the course of the weekend, social interactions were assessed using active Radio Frequency Identification (RFID) badges. The hardware consisted of 2.4 GHz RFID badges with real-time proximity and position tracking utilizing the Bluetooth low-energy protocol. RFID badges measure proximity to other RFID badges up to 1.6 meters. Because the signal is shielded towards the back by the participant's body, they only measure frontal face-to-face social interactions. The validity of RFID badges for measuring social interactions has been shown in Elmer et al.36.
To detect the signal between two RFID badges, both badges need to be close to each other (range 1-1.5 m)35 and to an RFID reader. RFID readers are designed to receive signals from RFID badges that are within a range of 10 meters from a reader. Before the arrival of the participants, the camp house was equipped with 8 RFID readers so that signals between RFID badges could be detected in every room of the house and in commonly used outside areas (e.g., the smoking area). We followed the recommendations by Elmer et al.36 to enhance the validity of RFID badges by merging interactions of the same dyad if the signals are no longer than 75 seconds apart. Robustness analyses conducted on data that was not processed in that way can be found in Table S1 and Table S2 of the Supplementary Materials. More details on the RFID badges to measure face-to-face interactions can be found elsewhere35,36.
The dependent variable in our subsequent analyses is the duration of these social interactions. In our case, the dependent variable is an adjacency matrix in which each cell indicates how long two individuals interacted with one another throughout the whole weekend. Hence, the adjacency matrix is undirected, symmetric, and weighted.
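To illustrate this preprocessing, here is a minimal sketch in Python. The event format (columns i, j, start, end in seconds) is an assumption for illustration, not the actual StudentLife data schema:

```python
import pandas as pd

# Hypothetical raw RFID events: one row per detected contact interval (seconds)
events = pd.DataFrame({
    "i":     [1,   1,   2],
    "j":     [2,   2,   3],
    "start": [0, 130, 400],
    "end":   [60, 200, 460],
})

def dyad_durations(events, gap=75):
    """Merge same-dyad signals <= `gap` seconds apart, then sum durations."""
    totals = {}
    for (i, j), grp in events.sort_values("start").groupby(["i", "j"]):
        cur_start = cur_end = None
        for _, row in grp.iterrows():
            if cur_end is not None and row["start"] - cur_end <= gap:
                cur_end = max(cur_end, row["end"])   # merge into one bout
            else:
                if cur_end is not None:
                    totals[(i, j)] = totals.get((i, j), 0) + cur_end - cur_start
                cur_start, cur_end = row["start"], row["end"]
        totals[(i, j)] = totals.get((i, j), 0) + cur_end - cur_start
    return totals  # {(i, j): total seconds}; fill a symmetric matrix from this

print(dyad_durations(events))  # {(1, 2): 200, (2, 3): 60} -- the 70 s gap merged
```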
Existing studies predominately use self-report measures to assess social interactions. Biases in the self-reports of individuals with depressive symptoms might contribute to differences in their self-reported interactions, as – for instance – depressed individuals tend to view things more negatively than non-depressed individuals26. With the RFID-based method of social interaction measurement, we aim to overcome these biases.
Friendship ties
Friendship ties were measured with the item "which of your fellow students would you call friends?" (German original: "Welche Deiner Mitstudierenden würdest Du als Freunde bezeichnen?"). Below the item, 20 name generators were displayed (i.e., text boxes where participants could enter the names of other individuals). An auto-complete function suggested the full names of other participants when one started to type in a text field. The nominations from that item were used to construct a binary adjacency matrix A, where each entry aij represents the nomination of individual j by individual i (0 = no nomination, 1 = nomination).
Because our statistical method requires the independent variables to be symmetric matrices (for details see Section Statistical Analyses), we constructed a symmetrized friendship matrix indicating if at least one i→j or j→i friendship nomination was present. To explore the unique contribution of weak and strong friendship ties, two additional adjacency matrices were created in which cells indicate if the tie is (i) a mutual (strong) friendship tie (i.e., i→j and j→i) or (ii) an asymmetric (weak) friendship tie (i.e., either i→j or j→i, but not a mutual tie). These measures can be used for explorations of friendship strength29 and stability54.
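As a small illustration of these constructions, here is a sketch with a hypothetical 3-person nomination matrix (the matrix values are made up):

```python
import numpy as np

# Directed nominations: A[i, j] = 1 if i named j as a friend (hypothetical)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 0, 0]])

sym    = ((A + A.T) > 0).astype(int)  # at least one nomination in the dyad
mutual = (A * A.T).astype(int)        # both i->j and j->i present
asym   = sym - mutual                 # exactly one of the two nominations
```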
Depressive symptoms
Depressive symptoms were measured with the German version of the Center for Epidemiologic Studies Depression Scale – Revised55, with 20 items on a 4-point scale ranging from 0 (occurred never or rarely) to 3 (occurred most of the time or always), reflecting how often the respective symptom was experienced during the preceding week. Sample items are, for instance, "feeling depressed" or "feeling everything one does is an effort". The depression score was computed by taking the sum of all 20 items. The total range of symptoms reflects the continuum between well-being and depression56. The items of this scale were highly internally consistent (Cronbach's alpha = 0.84).
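A trivial scoring sketch (the item values are made up; the cutoff follows the screening convention cited earlier in this article):

```python
# 20 CES-D items, each rated 0-3 for the preceding week (hypothetical values)
items = [1, 0, 2, 1, 0, 3, 1, 2, 0, 1, 1, 0, 2, 1, 0, 1, 2, 0, 1, 1]
score = sum(items)                 # possible range: 0-60
clinically_relevant = score > 16   # screening cutoff as used above
```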
Big five personality traits
We conducted additional analyses to control for the effects of personality traits on social interaction tendencies. The Big Five personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism) were measured with the 10-item version of the Big Five Inventory (BFI)57, where every trait is measured with two items, rated on a 5-point Likert scale ranging from "disagree strongly" (1) to "agree strongly" (5). A sample item for neuroticism is "I see myself as someone who: is relaxed, handles stress well" (inverse coded). The internal consistency of these items varied from α = 0.33 (agreeableness) to α = 0.80 (extraversion), and the mean values were between 2.87 (neuroticism) and 3.61 (openness).
We investigate our hypotheses using Multiple Regression Quadratic Assignment Procedures (MRQAP)37,38. In social network analysis, MRQAPs are considered a core method to analyze weighted networks. The MRQAP allows us to test the depression-isolation, depression-homophily, and depression-friendship hypotheses while accounting for the interdependent nature of the social network data.
There are several reasons for choosing this statistical model over other well-established statistical models such as Exponential Random Graph Models (ERGMs)58, Stochastic Actor-Oriented Models (SAOMs)59, or relational event models49,50. First, MRQAPs allow for the analysis of weighted social networks, making it possible to analyze our social interaction adjacency matrix, which contains a continuous measure of how long two individuals interacted with one another. Second, statistical models that allow the modeling of time-stamped network data (such as ours) cannot model the duration of social interactions, but only the decisions to create a social interaction. Hence, using such a model would misalign the focus of the analyses with that of our hypotheses: on the creation of interactions rather than their duration. Given the fluctuation in interaction signals in our data, the duration is a more reliable measure of social interactions than the creation. Third, the MRQAP method allows us to make statements about effect sizes, which – in other network models – is mostly problematic. The only major disadvantage of the MRQAP method is that we had to aggregate the time-stamped data to the duration over the whole weekend, thus losing information about the order and frequency in which interactions happened.
Mathematically, a MRQAP is defined similarly to a linear regression model but with data arranged in matrices instead of vectors:
$${y}_{ij}={\beta }_{0}+\mathop{\sum }\limits_{k=1}^{m}{\beta }_{k}\left({x}_{ij}^{k}\right)+{e}_{ij}$$
where y is the dependent matrix and m is the number of independent matrices xk. Parameters βk are coefficients and eij the error terms.
Indexes i and j represent two individuals in a given matrix. If xk represents a friendship network, \({x}_{ij}^{k}\) would indicate that i considers j a friend. Similarly, xk could represent the similarity between individuals, with \({x}_{ij}^{k}\), for example, indicating the difference in depressive symptoms of individuals i and j (i.e., depression similarity). In principle, parameters of a MRQAP can be interpreted like parameters of a linear regression model, as they are estimated with ordinary least squares (OLS) estimators. MRQAPs differ in only two ways from linear regression models. The first difference is that the unit of analysis in MRQAPs is the dyadic level. Hence, the dependent variable is an adjacency matrix of dyadic relations (i.e., yij is the time individuals i and j interacted). Also, the independent variables of a MRQAP need to be defined on a dyadic level; examples for friendship and depression similarity are given above. The second difference from linear regression models concerns the independence assumption. Social network data violate the assumption of independent observations: for instance, Person A's interactions with Person B cannot be assumed to be independent of Person A's interactions with Person C, because characteristics of Person A (e.g., being female) affect both interactions. For this reason, the standard errors obtained through OLS estimation cannot be used for statistical inference. MRQAPs consider the dependencies between observations by relying on permutation tests for statistical inference: the OLS regression results obtained with the observed adjacency matrix are compared to a large number of regression results in which the dependent matrix y has been permuted. According to Dekker et al.38, Y-permuted MRQAPs are (among the MRQAP methods) the most conservative method to obtain statistical inference—others are, for example, permutations of the independent variables. When permuting the dependent matrix y, random rows and columns are swapped jointly, while the independent variables x remain unaffected. This way, structural aspects of the dependent network are preserved (e.g., the outdegree distribution), while generating a distribution that assumes no association between y and the xk. Because of the permutation-based statistical inference of the MRQAP framework, no standard errors or confidence intervals of the estimates can be computed. Thus, we rely on p-values for statistical inference. However, we also report the results of a multivariate linear regression model in Table S1 of the Supplementary Materials, where confidence intervals are reported. Within the MRQAP framework, the p-value is calculated based on the percent rank of the estimate of the observed network in the distribution of estimates based on permuted networks. For instance, a percent rank of 0.99 indicates that 99 percent of the coefficients based on permuted networks are smaller than or equal to the observed estimate. The probability of observing larger estimates under the null hypothesis is thus p = 0.01 (two-sided p-value)37,38.
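The following is a minimal sketch of a Y-permutation MRQAP in Python. This is not the authors' R implementation (which is available at osf.io/4sj4s); the numpy-matrix interface and the two-sided p-value convention are assumptions chosen for illustration:

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares with an intercept column prepended
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mrqap_y_perm(y_mat, x_mats, n_perm=5000, seed=1):
    """Y-permutation MRQAP sketch: OLS on dyads, with inference obtained by
    jointly permuting rows and columns of the dependent matrix."""
    rng = np.random.default_rng(seed)
    n = y_mat.shape[0]
    mask = ~np.eye(n, dtype=bool)               # off-diagonal dyads only
    X = np.column_stack([m[mask] for m in x_mats])
    beta_obs = ols(X, y_mat[mask])
    count = np.zeros_like(beta_obs)
    for _ in range(n_perm):
        p = rng.permutation(n)
        y_perm = y_mat[np.ix_(p, p)]            # same permutation on rows/cols
        count += np.abs(ols(X, y_perm[mask])) >= np.abs(beta_obs)
    p_values = (count + 1) / (n_perm + 1)       # two-sided permutation p-values
    return beta_obs, p_values
```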
We analyze the two samples jointly and, therefore, use a multi-group MRQAP, in which the dependent matrices y of the two samples are permuted separately38,60. The implementation of a multi-group MRQAP function in R is made available in the public Open Science Framework repository of this study (osf.io/4sj4s).
Because the distribution of the residuals of the MRQAP model with this dependent matrix was highly skewed (s = 4.00), the linear regression assumption of normality of errors was violated. Thus, we log-transformed the dependent matrix (skewness of residuals after transformation: s = 0.26) following standard procedure in linear regression models. In Table S1 of the Supplementary Materials we also report results based on non-transformed variables.
The independent matrices \({x}_{ij}\) in our MRQAP model represent either dyad-level aggregates of individual's attributes (e.g., the difference in age of the two individuals) or dyadic relations (e.g., friendship nominations).
We test the depression-isolation hypothesis with the depression mean matrix, where each entry constitutes the mean depression score of the two individuals i and j. The depression-homophily hypothesis is tested with the depression similarity matrix, which consists of values representing the degree of similarity in depression (\({x}_{ij}=-|{v}_{i}-{v}_{j}|\), where \({v}_{i}\) is the depression value for individual i). Given that the reference category for the depression similarity effect is being identical on the depression score, the "raw" depression mean effect can be interpreted as the effect of both individuals being equally depressed. We included an interaction of these two matrices to account for differences in the importance of homophilic processes depending on the levels of depression.
The extent to which depressed individuals interact with their friends (depression-friendship hypothesis) is tested with an interaction of the depression mean and a friendship matrix. Friendship was defined as present when at least one of the two individuals in a dyad reported a friendship tie. Whether a friendship is mutual or asymmetric can be relevant and serve as an indicator of relationship strength29 and stability54. For this reason, we conducted additional analyses in which we considered the mutual and asymmetric friendship ties as separate independent matrices (i.e., a binary matrix indicating whether both individuals nominated each other as friends and a binary matrix indicating whether exactly one individual of the dyad nominated the other as a friend).
Additionally, we included a dummy variable indicating whether or not the data was collected in sample two. To control for the effect of gender, we added dummy matrices as independent variables for the case of at least one female being in the interaction and for both individuals being female. Age-related effects were included with a centered age mean matrix (\({x}_{ij}^{k}=\frac{({v}_{i}-\bar{v})+({v}_{j}-\bar{v})}{2}\)) and an age similarity matrix (\({x}_{ij}^{k}=-|{v}_{i}-{v}_{j}|\), where \({v}_{i}\) is the age value for individual i).
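A compact sketch of how such dyadic predictor matrices can be built from individual attribute vectors (the attribute values below are hypothetical):

```python
import numpy as np

dep = np.array([4.0, 12.0, 20.0])    # hypothetical depression scores
age = np.array([20.0, 21.0, 25.0])   # hypothetical ages

dep_mean = (dep[:, None] + dep[None, :]) / 2      # pairwise mean
dep_sim  = -np.abs(dep[:, None] - dep[None, :])   # x_ij = -|v_i - v_j|
dep_int  = dep_mean * dep_sim                     # mean x similarity interaction

age_c    = age - age.mean()
age_mean = (age_c[:, None] + age_c[None, :]) / 2  # centered age mean
age_sim  = -np.abs(age[:, None] - age[None, :])   # age similarity
```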
In the supplementary analyses, we control for the effects of the Big Five personality traits on social interactions. For this, we constructed two matrices for each trait that represent the centered mean value of i and j in the respective trait as well as their similarity in that trait.
Dyadic isolation is evaluated outside the MRQAP framework. For this, we computed the number of seconds that each individual spent in either a dyadic or group interaction (i.e., at least three individuals present in the social interaction). These two variables are then compared to each other with respect to an individual's depression scores to assess the degree of dyadic isolation. To test this hypothesis, we compute the Pearson correlation between an individual's depression score and the ratio of dyadic interactions in all social interactions. We compare this correlation to those of 5,000 permuted variables, representing the null distribution.
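A sketch of this permutation-based correlation test in Python (variable names are placeholders, not the actual analysis script):

```python
import numpy as np

def perm_cor_test(depression, dyadic_ratio, n_perm=5000, seed=1):
    """Pearson correlation with a Y-permutation null distribution."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(depression, dyadic_ratio)[0, 1]
    r_null = np.array([
        np.corrcoef(depression, rng.permutation(dyadic_ratio))[0, 1]
        for _ in range(n_perm)
    ])
    p = (np.sum(np.abs(r_null) >= np.abs(r_obs)) + 1) / (n_perm + 1)
    return r_obs, p
```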
The study was reviewed and approved by the institutional ethics committee of ETH Zürich (approval 2016-N-27). The study was carried out in accordance with the relevant guidelines and regulations. Informed consent was obtained from all participants.
The datasets generated during and/or analysed during the current study are available in the Open Science Framework repository, osf.io/4sj4s.
Valtorta, N. K., Kanaan, M., Gilbody, S., Ronzi, S. & Hanratty, B. Loneliness and social isolation as risk factors for coronary heart disease and stroke: Systematic review and meta-analysis of longitudinal observational studies. Heart 102, 1009–1016 (2016).
Holt-Lunstad, J., Smith, T. B., Baker, M., Harris, T. & Stephenson, D. Loneliness and Social Isolation as Risk Factors for Mortality: A Meta-Analytic Review. Perspect. Psychol. Sci. 10, 227–237 (2015).
Steptoe, A., Shankar, A., Demakakos, P. & Wardle, J. Social isolation, loneliness, and all-cause mortality in older men and women. Proc. Natl. Acad. Sci. 110, 5797–5801 (2013).
Kawachi, I. & Berkman, L. F. Social ties and mental health. J. Urban Heal. 78, 458–467 (2001).
Jose, P. E. & Lim, B. T. L. Social connectedness predicts lower loneliness and depressive symptoms over time in adolescents. Open J. Depress. 03, 154–163 (2014).
Barnett, P. A. & Gotlib, I. H. Psychosocial functioning and depression: Distinguishing among antecedents, concomitants, and consequences. Psychol. Bull. 104, 97–126 (1988).
Segrin, C. Social skills deficits associated with depression. Clin. Psychol. Rev. 20, 379–403 (2000).
Wenzel, A. & Kashdan, T. B. In Handbook of relationship initiation (eds. Wenzel, A. & Kashdan, T. B.) (Routledge, 2008).
Elmer, T., Boda, Z. & Stadtfeld, C. The co-evolution of emotional well-being with weak and strong friendship ties. Netw. Sci. 5, 278–307 (2017).
Schaefer, D. R., Kornienko, O. & Fox, A. M. Misery does not love company: Network selection mechanisms and depression homophily. Am. Sociol. Rev. 76, 764–785 (2011).
van Zalk, M. H. W., Kerr, M., Branje, S. J. T., Stattin, H. & Meeus, W. H. J. Peer contagion and adolescent depression: The role of failure anticipation. J. Clin. Child Adolesc. Psychol. 39, 837–848 (2010).
van Zalk, M. H. W., Kerr, M., Branje, S. J. T., Stattin, H. & Meeus, W. H. J. It takes three: selection, influence, and de-selection processes of depression in adolescent friendship networks. Dev. Psychol. 46, 927–938 (2010).
Giletta, M. et al. Friendship context matters: Examining the domain specificity of alcohol and depression socialization among adolescents. J. Abnorm. Child Psychol. 40, 1027–1043 (2012).
Cacioppo, J. T., Fowler, J. H. & Christakis, N. A. Alone in the Crowd: The Structure and Spread of Loneliness in a Large Social Network. J. Pers. Soc. Psychol. 97, 977–991 (2009).
Lewinsohn, P. M. In The psychology of depression: Contemporary theory and research (eds. Friedman, R. & Katz, M.) 157–78 (John Wiley & Sons Inc, 1974).
Joiner, T. E. In Handbook of depression (eds. Gotlib, I. & Hammen, C.) 1–30 (Guilford Press, 2008).
Coyne, J. C. Towards an interactional description of depression. Psychiatry Interpers. Biol. Process. 39, 28–40 (1976).
Coyne, J. C. Depression and the response of others. J. Abnorm. Psychol. 85, 186–193 (1976).
Joiner, T. E. & Katz, J. Contagion of depressive symptoms and mood: Meta-analytic review and explanations from cognitive, behavioral, and interpersonal viewpoints. Clin. Psychol. Sci. Pract. 6, 149–164 (1999).
Brown, L. H., Strauman, T., Barrantes-Vidal, N., Silvia, P. J. & Kwapil, T. R. An experience-sampling study of depressive symptoms and their social context. J. Nerv. Ment. Dis. 199, 403–409 (2011).
Libet, J. M. & Lewinsohn, P. M. Concept of Social Skill With Special Reference To the Behavior of Depressed Persons. J. Consult. Clin. Psychol. 40, 304–312 (1973).
Nezlek, J. B., Hampton, C. P. & Shean, G. D. Clinical depression and day-to-day social interaction in a community sample. J. Abnorm. Psychol. 109, 11–9 (2000).
Nezlek, J. B., Imbrie, M. & Shean, G. D. Depression and everyday social interaction. J. Pers. Soc. Psychol. 67, 1101–11 (1994).
Baddeley, J. L., Pennebaker, J. W. & Beevers, C. G. Everyday Social Behavior During a Major Depressive Episode. Soc. Psychol. Personal. Sci. 4, 445–452 (2012).
Gadassi, R. & Rafaeli, E. Interpersonal perception as a mediator of the depression-interpersonal difficulties link: A review. Pers. Individ. Dif. 87, 1–7 (2015).
Gotlib, I. H. Perception and recall of interpersonal feedback: Negative bias in depression. Cognit. Ther. Res. 7, 399–412 (1983).
McPherson, M., Smith-Lovin, L. & Cook, J. M. Birds of a feather: Homophily in social networks. Annu. Rev. Sociol. 27, 415–444 (2001).
Rook, K. S., Pietromonaco, P. R. & Lewis, Ma When are dysphoric individuals distressing to others and vice versa? Effects of friendship, similarity, and interaction task. J. Pers. Soc. Psychol. 67, 548–59 (1994).
Friedkin, N. E. A Guttman Scale for the Strength of an Interpersonal Tie. Soc. Networks 12, 239–252 (1990).
Segrin, C. Interpersonal Reactions to Dysphoria: The Role of Relationship with Partner and Perceptions of Rejection. J. Soc. Pers. Relat. 10, 83–97 (1993).
Bukowski, W. M., Motzoi, C. & Meyer, F. In Handbook of peer interactions, relationships, and groups (eds. Rubin, K. H., Bukowski, W. M. & Laursen, B.) 217–231 (Guilford Press, 2009).
Wei, M., Russell, D. W. & Zakalik, R. A. Adult attachment, social self-efficacy, self-disclosure, loneliness, and subsequent depression for freshman college students: A longitudinal study. J. Couns. Psychol. 52, 602–614 (2005).
Rose, A. J. Co-rumination in the friendships of girls and boys. Child Dev. 73, 1830–1843 (2002).
Meyer, T. D. & Hautzinger, M. Allgemeine Depressions-Skala (ADS). Diagnostica 47, 208–215 (2001).
Cattuto, C. et al. Dynamics of person-to-person interactions from distributed RFID sensor networks. PLoS One 5, 1–9 (2010).
Elmer, T., Chaitanya, K., Purwar, P. & Stadtfeld, C. The validity of RFID badges measuring face-to-face interactions. Behav. Res. Methods 51, 2120–2138 (2019).
Krackhardt, D. Predicting with networks: Nonparametric multiple regression analysis of dyadic data. Soc. Networks 10, 359–381 (1988).
Dekker, D., Krackhardt, D. & Snijders, T. A. B. Sensitivity of MRQAP tests to collinearity and autocorrelation conditions. Psychometrika 72, 563–581 (2007).
Selden, M. & Goodie, A. S. Review of the effects of Five Factor Model personality traits on network structures and perceptions of structure. Soc. Networks 52, 81–99 (2018).
Radloff, L. S. The CES-D Scale: A Self-Report Depression Scale for Research in the General Population. Appl. Psychol. Meas. 1, 385–401 (1977).
Eisenberg, D., Gollust, S. E., Golberstein, E. & Hefner, J. L. Prevalence and correlates of depression, anxiety, and suicidality among university students. Am. J. Orthopsychiatry 77, 534–542 (2007).
Mikolajczyk, R. T. et al. Prevalence of depressive symptoms in university students from Germany, Denmark, Poland and Bulgaria. Soc. Psychiatry Psychiatr. Epidemiol. 43, 105–112 (2008).
Brandes, U. & Wagner, D. In Graph Drawing Software. Mathematics and Visualization (eds. Jünger, M. & Mutzel, P.) 321–340 (Springer, 2004).
Lin, N., Ye, X. & Ensel, W. M. Social support and depressed mood: A structural analysis. J. Health Soc. Behav. 40, 344–359 (1999).
Baumeister, R. F., Vohs, K. D. & Funder, D. C. Psychology as the Science of Self-Reports and Finger Movements: Whatever Happened to Actual Behavior? Perspect. Psychol. Sci. 2, 396–403 (2007).
Eagle, N., Pentland, A. S. & Lazer, D. Inferring friendship network structure by using mobile phone data. Proc. Natl. Acad. Sci. USA 106, 15274–15278 (2009).
Backenstrass, M. et al. A comparative study of nonspecific depressive symptoms and minor depression regarding functional impairment and associated characteristics in primary care. Compr. Psychiatry 47, 35–41 (2006).
Gotlib, I. H., Lewinsohn, P. M. & Seeley, J. R. Symptoms versus a diagnosis of depression: differences in psychosocial functioning. J. Consult. Clin. Psychol. 63, 90–100 (1995).
Butts, C. T. A Relational Event Framework for Social Action. Sociol. Methodol. 38, 155–200 (2008).
Stadtfeld, C., Hollway, J. & Block, P. Dynamic Network Actor Models: Investigating Coordination Ties through Time. Sociol. Methodol. 47, 1–40 (2017).
Stadtfeld, C. & Geyer-Schulz, A. Analyzing event stream dynamics in two-mode networks: An exploratory analysis of private communication in a question and answer community. Soc. Networks 33, 258–272 (2011).
Santini, Z. I., Koyanagi, A., Tyrovolas, S. & Haro, J. M. The association of relationship quality and social networks with depression, anxiety, and suicidal ideation among older married adults: Findings from a cross-sectional analysis of the Irish Longitudinal Study on Ageing (TILDA). J. Affect. Disord. 179, 134–141 (2015).
Stadtfeld, C., Vörös, A., Elmer, T., Boda, Z. & Raabe, I. J. Integration in emerging social networks explains academic failure and success. Proc. Natl. Acad. Sci. USA 116, 792–797 (2019).
Hallinan, M. T. The process of friendship formation. Soc. Networks 1, 193–210 (1978).
Hautzinger, M. & Bailer, M. Allgemeine Depressionsskala [General Depression Scale]. (Hogrefe Verlag, 1993).
Siddaway, A. P., Wood, A. M. & Taylor, P. J. The Center for Epidemiologic Studies-Depression (CES-D) scale measures a continuum from well-being to depression: Testing two key predictions of positive clinical psychology. J. Affect. Disord. 213, 180–186 (2017).
Rammstedt, B. & John, O. P. Measuring personality in one minute or less: A 10-item short version of the Big Five Inventory in English and German. J. Res. Pers. 41, 203–212 (2007).
Robins, G., Pattison, P., Kalish, Y. & Lusher, D. An introduction to exponential random graph (p*) models for social networks. Soc. Networks 29, 173–191 (2007).
Snijders, T. A. B., van de Bunt, G. G. & Steglich, C. E. G. Introduction to stochastic actor-based models for network dynamics. Soc. Networks 32, 44–60 (2010).
Burnett Heyes, S. et al. Relationship Reciprocation Modulates Resource Allocation in Adolescent Social Networks: Developmental Effects. Child Dev. 86, 1489–1506 (2015).
For their help in conducting the study and the preparation of the manuscript we wish to thank Zsófia Boda, Anna Ekert-Centowska, Laura Bringmann, Prateek Purwar, Krishna Chaitanya, Julia von Fellenberg, and members of the Chair of Social Networks at ETH Zürich – in particular the StudentLife team. The Swiss StudentLife Study was supported by Swiss National Science Foundation Grant 10001A_169965 and the rectorate of ETH Zürich.
Social Networks Lab, Department of Humanities, Social and Political Sciences, ETH Zürich, Zürich, Switzerland
Timon Elmer & Christoph Stadtfeld
T. Elmer developed the study concept. Both authors contributed to the study design. The data collection was performed by T. Elmer. T. Elmer performed the data analysis and interpretation under the supervision of C. Stadtfeld. Both authors contributed to the writing of the manuscript. Both authors approved the final version of the manuscript for submission.
Correspondence to Timon Elmer.
Elmer, T., Stadtfeld, C. Depressive symptoms are associated with social isolation in face-to-face interaction networks. Sci Rep 10, 1444 (2020). https://doi.org/10.1038/s41598-020-58297-9
Difference between pointwise mutual information and log likelihood ratio
I came across some papers on statistical methods in natural language processing, particularly Ted Dunning's paper, and there I found the formula that he used to calculate the log likelihood ratio, which seemed awfully similar to the formula for pointwise mutual information. I would like to know the principal difference between these two measures.
natural-language likelihood-ratio mutual-information
m_amber
$\begingroup$ Please provide a link to the paper you are talking about. $\endgroup$ – Alexey Zaytsev Oct 28 '15 at 11:16
$\begingroup$ @Alexey this is the link aclweb.org/anthology/J93-1003 $\endgroup$ – m_amber Oct 28 '15 at 11:18
$\begingroup$ Not a silly question, by the way -- there seem to be 30 different ways of saying the same thing, and I find it kind of bothersome that so few people take the time to show overlaps and unifying principles. $\endgroup$ – senderle Mar 21 '17 at 17:36
They're very closely related. "Log-likelihood" is not a very useful name, because it sounds like it's going to be a log probability. But it's not that at all. "Log likelihood ratio," as used at times in the original paper, is more correct, and that immediately hints that we should expect to see this metric as a comparison (ratio) between two different probabilities. Furthermore, the formula is structured as a weighted average of logarithms, just like the basic formula for entropy ($\sum_i p_i \log(p_i)$). These all strongly hint that this metric is closely related to an information-theoretic metric like mutual information.
Here's the formula for a given $p_n$, $k_n$, and $n_n$ as it appears in the paper:
$$ \mathrm{logL}(p_n, n_n, k_n) = k_n \log(p_n) + (n_n - k_n) \log (1 - p_n) $$
As the paper itself notes, $p$ is just $k / n$, so we need only normalize by $n$ to get the entropy formula for a Bernoulli distribution (coin flips with a weighted coin, with probability $p$ of turning up heads):
$$ \begin{align} \mathrm{BernoulliEntropy}(p, n, k) & = \frac{k}{n} \log(p) + \frac{(n - k)}{n} \log (1 - p) \\ \mathrm{BernoulliEntropy}(p) & = p \log(p) + (1 - p)\log(1 - p) \end{align} $$
This normalization is the only real difference, and I find it a little puzzling that the author didn't adopt this simplified approach.
The other equation we need (using this new $n$-normalized formulation) is the formula for cross-entropy. It is almost identical, but it compares two different probabilities: it gives us a way to measure the "cost" of representing one probability distribution with another. (Concretely, suppose you compress data from one distribution with an encoding optimized for another distribution; this tells you how many bits it will take on average.)
$$ \mathrm{CrossEntropy}(p,p') = p \log(p') + (1 - p) \log (1 - p') $$
Note that if $p$ and $p'$ are the same, then this winds up being identical to Bernoulli entropy.
$$ \mathrm{BernoulliEntropy}(p) = \mathrm{CrossEntropy}(p, p) $$
To put these formulas together, we just have to specify the exact comparison we want to make. Suppose we have data from two different coin-flipping sessions, and we want to know whether the same coin was used in both sessions, or whether there were two different coins with different weights.
We propose a null hypothesis: the coin used was the same. To test that hypothesis, we need to perform a total of eight weighted log calculations. Four of them will be simple entropy calculations, while the other four will be cross-entropy calculations; the difference will be in the probabilistic model we use. We will use the data for each session to calculate two different probabilities: $p_{1h}$ and $p_{2h}$. Then we will use all the data to calculate just one combined probability $p_{ch}$. The cross-entropy calculations will compare session probabilities to combined probabilities; the entropy calculations will compare session probabilities to themselves. Under the null hypothesis, the values will be the same, and will cancel each other out.
Recall that the logarithm of a probability is always negative; for clarity later, the formulas below are negated so that they give positive values.
Entropy calculations:
Heads: $-p_{1h} \log(p_{1h})$
Tails: $-(1 - p_{1h}) \log(1 - p_{1h})$
Cross-entropy calculations:
Heads: $-p_{1h} \log(p_{ch})$
Tails: $-(1 - p_{1h}) \log(1 - p_{ch})$
Cross entropy is always equal to or greater than entropy, so we want to subtract entropy from cross entropy to get a meaningful value. If they are close to equal, then the result will be zero, and we accept the null hypothesis. If not, then the value is guaranteed to be positive, and for larger and larger values, we will be more and more inclined to reject the null hypothesis.
Recall that logarithms allow us to convert subtraction into division, so subtracting the first entropy value from the first cross entropy value gives
$$ \begin{align} & -p_{1h} \log(p_{ch}) - -p_{1h} \log(p_{1h}) \\ =\ & p_{1h} \log(p_{1h}) - p_{1h} \log(p_{ch}) \\ =\ & p_{1h} \log\frac{p_{1h}}{p_{ch}} \\ \end{align} $$
Combining all the terms and logarithms together, we get this definition of the combined, normalized log likelihood, $ \mathrm{logL'} $:
$$ \begin{align} \mathrm{logL'} &=\ p_{1h}\ \log\frac{p_{1h}}{p_{ch}} \\ &+ (1 - p_{1h})\ \log\frac{(1 - p_{1h})}{(1 - p_{ch})} \\ &+ p_{2h}\ \log\frac{p_{2h}}{p_{ch}} \\ &+ (1 - p_{2h})\ \log\frac{(1 - p_{2h})}{(1 - p_{ch})} \\ \end{align} $$
At this point, converting this to PMI is just a matter of reinterpreting the notation. $p_{1h}$ is the probability of turning up heads in the first session, and so we could also call it the conditional probability of heads given that we're looking only at data from the first session. We can do similar things to the other probabilities from each session:
$$ \begin{align} p_{1h} &= \mathrm{P}(h|c_1) \\ (1 - p_{1h}) &= \mathrm{P}(t|c_1) \\ p_{2h} &= \mathrm{P}(h|c_2) \\ (1 - p_{2h}) &= \mathrm{P}(t|c_2) \end{align} $$
The null hypothesis probabilities are not conditional but "prior" probabilities -- probabilities calculated without taking into account additional information about the sessions:
$$ \begin{align} p_{ch} &= \mathrm{P}(h) \\ 1-p_{ch} &= \mathrm{P}(t) \\ \end{align} $$
Now, by the definition of conditional probability we have
$$ \begin{align} \frac{\mathrm{P}(h,c)}{\mathrm{P}(c)} &= \mathrm{P}(h|c) \\ \frac{\mathrm{P}(h,c)}{\mathrm{P}(c)\mathrm{P}(h)} &= \frac{\mathrm{P}(h|c)}{\mathrm{P}(h)} \end{align} $$
And we see that by converting the $\mathrm{logL'}$ formulas to use conditional probability notation, we have (for example)
$$ \begin{align} & \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h|c_1)}{\mathrm{P}(h)} \\ \end{align} $$
This is exactly the (pointwise) formula for $\mathrm{D_{KL}}((h|c_1)\ ||\ h)$, the KL divergence of the conditional distribution of heads in session one from the prior distribution of heads. So the log likelihood, when normalized by the number of trials in each session, is the same as the sum, for each possible outcome, of KL divergences of conditional distributions from prior distributions. If you understand KL divergence, this provides a good intuition for how this test works: it measures the "distance" between the conditional and unconditional probabilities for each outcome. If the difference is large, then the null hypothesis is probably false.
The relationship between mutual information and KL divergence is well known. So we're nearly done. Starting from the above formula, we have
$$ \begin{align} & \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h|c_1)}{\mathrm{P}(h)} \\ =\ & \mathrm{P}(h|c_1)\ \log\frac{\mathrm{P}(h,c_1)}{\mathrm{P}(c_1)\mathrm{P}(h)} \\ =\ & \mathrm{P}(h|c_1) \cdot \mathrm{PMI}(h; c_1) \end{align} $$
Where the last version is based on the definition of Pointwise Mutual Information (as given here). Putting it all together:
$$ \begin{align} \mathrm{logL'} &= \mathrm{P}(h|c_1) \cdot \mathrm{PMI}(h; c_1) \\ &+ \mathrm{P}(t|c_1) \cdot \mathrm{PMI}(t; c_1) \\ &+ \mathrm{P}(h|c_2) \cdot \mathrm{PMI}(h; c_2) \\ &+ \mathrm{P}(t|c_2) \cdot \mathrm{PMI}(t; c_2) \\ \end{align} $$
We could recover the pre-normalized version by using total counts from the first and second sessions: multiplying $\mathrm{P}(h|c_n)$ by the number of trials in session $n$ gives the number of heads in session $n$, which recovers the original definition of $\mathrm{logL}$ at the top of this answer.
Dividing that number by the total number of trials would give $\mathrm{P}(h,c_n)$, converting this formula into the formula for mutual information, the weighted sum of PMI values for each outcome. So the difference between "log likelihood" and mutual information (pointwise or otherwise) is just a matter of normalization scheme.
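To see the whole equivalence numerically, here is a short Python sketch (the session counts are invented; the per-session normalization follows the derivation above):

```python
import math

def kl_term(p_cond, p_prior):
    # One pointwise term: P(outcome|session) * PMI(outcome; session)
    return p_cond * math.log(p_cond / p_prior)

def logL(p, n, k):
    # Dunning's un-normalized log likelihood term
    return k * math.log(p) + (n - k) * math.log(1 - p)

k1, n1 = 60, 100        # session 1: 60 heads in 100 flips
k2, n2 = 40, 80         # session 2: 40 heads in 80 flips
p1, p2 = k1 / n1, k2 / n2
pc = (k1 + k2) / (n1 + n2)   # combined "prior" P(h)

# Normalized log likelihood as a sum of weighted PMI terms
logL_prime = (kl_term(p1, pc) + kl_term(1 - p1, 1 - pc) +
              kl_term(p2, pc) + kl_term(1 - p2, 1 - pc))

# Same quantity from the original logL, normalized per session
check = ((logL(p1, n1, k1) - logL(pc, n1, k1)) / n1 +
         (logL(p2, n2, k2) - logL(pc, n2, k2)) / n2)
assert math.isclose(logL_prime, check)
```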
senderle
Designing bike networks using the concept of network clusters
Meisam Akbarzadeh1,
Syed Sina Mohri1 &
Ehsan Yazdian2
Applied Network Science volume 3, Article number: 12 (2018)
In this paper, a novel method is proposed for designing a bike network in urban areas. Based on the number of taxi trips within an urban area, a weighted network is abstracted: nodes are the origins and destinations of taxi trips, and the number of trips between them is abstracted as link weights. Data are extracted from the taxi smart card system of a real city. Communities, i.e. clusters, of this network are then detected using a modularity maximization method. Each community contains the nodes with the highest number of trips within the cluster and the lowest number of trips to other clusters. Within each community, the nodes close enough to each other to be traveled between by bicycle are detected as key points, and a set of non-dominated bike networks connecting these nodes is enumerated using a bi-objective optimization model. The total travel cost (distance or time) on the network and the total network length are considered as objectives. The method is applied to the city of Isfahan in Iran, and a total of seven regions with non-dominated bike networks are proposed.
Promotion of non-motorized transportation is a step toward sustainable urban development. The benefits of travel by cycling and walking include increased physical health, decreased dependence on fossil fuel combustion, decreased production of environmental pollutants, efficient use of the capacity of urban passages, and provision of more equitable conditions due to a lack of dependence on citizens' economic and car ownership status. Promotion of non-motorized forms of transportation requires proper infrastructure and service. In the case of cycling, the presence of bike lanes with suitable safety, geometric design and pavement can have a significant impact on citizens' willingness to use bicycles for short and medium range travels. Common methods of identifying suitable routes for the construction of a bike network are based on two principles: i) determination of urban passages suited for allocating the necessary width to bike lanes, and ii) identification of the origins and destinations of short and medium range travels. These origins and destinations can be identified by direct statistical surveys (through observation and questionnaires) or indirect use of past data (the outputs of comprehensive urban transportation plans that have been developed based on direct surveys).
Statistical surveys are based on rigorous scientific principles; however, the presence of inevitable errors (e.g. sampling error), the high cost of collecting an adequate sample, and the difficulty of securing the effective cooperation of respondents make these surveys a challenging phase of transportation studies. The widespread use of intelligent transportation systems, however, allows researchers to extract useful information about citizens' travel behavior without any direct engagement. Recently, automated vehicle location systems, automatic transit fare collection systems, speed cameras and license plate scanners have provided unprecedented access to the raw data necessary for the study of traffic behavior.
The method proposed in this paper is based on data pertaining to taxi trips and therefore does not require any direct survey. In this method, origins and destinations of short taxi trips are abstracted as vertices of a graph. Short trips are those within the feasible distance traversed by bike, assumed to be 4 km in this study. If a trip is made between two vertices, they become connected by an arc. The number of trips between two points is modeled as the weight of the arc connecting the corresponding vertices. Modeling the travel patterns as a graph paves the way for using the concept of communities, i.e. clusters, to identify the points with the most significant travel connections. On this basis, after detecting the graph communities, the points with the highest rates of short-range trips in each community were identified, and the best networks connecting these points were attained based on a bi-objective mathematical model. The first objective of the model minimizes the total travel cost (distance or time) on the network as the users' objective, while the second objective minimizes the total network length as the planners' objective. By trading off the users' and planners' objectives, the model therefore proposes a set of non-dominated (Pareto-optimal) bike networks.
The rest of the paper is organized as follows: a review of the application of graph theory to transportation networks, the use of information from taxi positioning systems, and methods of bike network design are presented in section "Review of literature". In section "Research method", a methodology for identifying the non-dominated bike networks in a city is proposed, based on integrating a community detection method with a bi-objective optimization problem. Section "Data and results" is devoted to analyzing the results of applying the presented method to a real case study of the Isfahan network.
Review of literature
The literature review in this study follows three streams: methods of analyzing complex networks and their applications, the use of information extracted from taxi positioning systems in urban planning, and methods of designing bike networks.
Graph theory and complex networks
Graph theory and complex networks have found many applications in air, sea, rail and land (highway and public) transportation networks. Previous studies in this field are mainly focused on the identification of a network's functional communities, vulnerability (Hu and Zhu 2009; Li and Cai 2007; Mohmand and Wang 2014), reliability (Duan and Lu 2014; Qian et al. 2012), evolution patterns (Jia et al. 2014; Roth et al. 2012), and comparative studies of different networks through performance measurements (Leng et al. 2014; Von Ferber et al. 2009; Xu et al. 2007).
One of the applications of the network-based approach is the identification of the potential communities of a network. In a graph, a community, also known as a cluster, is a subgraph whose vertices have a high degree of inter-connection and relatively low connection with vertices outside that subgraph. Figure 1 shows an example of communities in a simple network.
Communities of a sample network
In large and complex networks, communities cannot be detected by sheer intuition, but the literature provides several methods for this purpose. These methods can be grouped into two categories: division methods and aggregation methods (Clauset et al. 2004; Girvan and Newman 2002; Newman 2006; Newman and Girvan 2004; Pons and Latapy 2005; Radicchi et al. 2004; Wu and Huberman 2004). Division methods assume the entire network is one large community and then select the vertices most suited for isolation. These methods divide the network into its communities and continue this process until the generated communities exhibit the desired quality, i.e. when the vertices of each community have a high degree of inter-connection and relatively low connection with the vertices of other communities. Aggregation methods first assume each vertex is a minuscule community, and then determine the vertices most suited for the formation of a larger community (containing two vertices). These methods aggregate the vertices to form the most suitable communities, then examine the addition of the remaining vertices to the existing communities, and repeat this process until the generated communities exhibit the above-mentioned quality.
The usage of information of taxi positioning systems
The information obtained from the automated positioning systems of taxis has been used in numerous transportation and urban planning studies. Previous studies in this regard have mostly focused on developing and updating street maps (Cao and Krumm 2009; Lou et al. 2009), developing transportation routes and services based on frequent patterns of taxi trips (Chen et al. 2013; Wei et al. 2012; Ziebart et al. 2008), predicting the time and volume of traffic in city streets and identifying the points with frequent traffic jams (Castro et al. 2012; Gao et al. 2013; Liu et al. 2010b; Wang et al. 2009; Zhu et al. 2011), classifying land use by analyzing information on the arrival and departure of passengers over the space and time dimensions (Pan et al. 2013; Yuan et al. 2012), recommending optimal routes during rush hours based on routes selected by taxis (Liu et al. 2010a; Yuan et al. 2010), predicting the dynamic patterns of travel distribution by analyzing factors such as time, location of taxis, and weather conditions (Chang et al. 2009; Yue et al. 2012), identifying the unknown connections in the network of intra-urban travel (Zheng et al. 2011), and identifying the nearest source of passengers for vacant and roaming taxis (Veloso et al. 2011; Yuan et al. 2011).
This study applies a clustering method to taxi trip data gathered by a digital payment service to identify the potential locations (key bike nodes) of a city for setting up a bike network. Considering these potential locations as several small networks instead of the whole city network reduces the size of the problem while preserving the quality of the results for designing a bike network facility.
Bike network design
Several studies show that countries and cities with high cycling demand in Western Europe and North America have large networks of separate bike facilities (Fraser and Lock 2011; Furth 2012; Pucher et al. 2010). In contrast with other transportation network design problems, cyclists consider a broader range of factors when selecting routes, such as travel time, distance, comfort, slope, turn frequency, noise, pollution, etc. (Broach et al. 2012; Winters et al. 2011). Therefore, designing bike networks or routing bike lanes is usually done based on several different criteria. There is a difference between routing bike lanes and bike network design: the objective of the routing problem is to propose some best routes between a specific origin and destination (OD), while the bike network design problem considers a set of OD pairs and presents a set of directed bike lanes as a bike network (Buehler and Dill 2016; Hrncir et al. 2015; Mauttone et al. 2017; Song et al. 2014).
Buehler and Dill (2016), reviewing the literature, reported the different approaches to designing cycling infrastructure, such as links, nodes and networks. They concluded that designing a bike network as a whole remains the least explored approach for planning cycling infrastructure. The literature on the topic of bike network design is relatively scarce. Mesbah and Thompson (2011) presented a bi-level optimization model for bike network design. The upper level simultaneously maximized the share of bike trips while accounting for the impact on car travel time due to the reduction of street space. The lower level was a traffic assignment for both bikes and cars under a user-equilibrium hypothesis (Mesbah and Thompson 2011). Duthie and Unnikrishnan (2014) proposed a single-objective optimization model which aimed to decrease the total construction cost of a bike network in a city, under the assumption that the total bike OD demand in the network must be covered by the proposed network; the construction costs of the network were related to the links and intersections (Duthie and Unnikrishnan 2014). Mauttone et al. (2017), considering the interests of both planners and users, proposed a single-objective model for bike network design that minimized the distance of bike trips given by an OD matrix; the interest of the planner was captured by applying a budget constraint in the model (Mauttone et al. 2017).
This study, considering the objectives of both planners and users, proposes a bi-objective model for bike network design. In contrast with (Mauttone et al. 2017), we consider the interest of the planner as a model objective by minimizing the length of the proposed network. Also, the potential OD demands for the bike network are gathered by a digital payment service as revealed preference data; previous studies built the OD matrix with stated preference data collected by home surveys (Duthie and Unnikrishnan 2014; Mauttone et al. 2017; Mesbah and Thompson 2011). One of the big problems of previous studies of routing bike lanes or bike network design was the large size of the problem and the inability of exact methods to solve it (Hrncir et al. 2015; Mauttone et al. 2017; Song et al. 2014). By identifying the key OD pairs in each cluster that have the greatest potential of shifting to the bike network, this study decreases the size and complexity of the problem.
The objective of the present study is to determine the routes most suited for the development of a bike network by analyzing the matrix of taxi trips based on the data obtained from a digital payment service. Figure 2 shows the methodology as a flowchart.
The framework of the methodology
Steps are explained in detail in the following subsections.
Step 1: Extract the taxi trip data for a time period
Taxi trip data was extracted from the fare transaction system of Isfahan Taxi Organization. The database was anonymized and included the longitude and latitude of the trip origin and destination, and the boarding and alighting time of each passenger.
Step 2: Create a weighted graph based on trip patterns
Every trip origin and destination could potentially be considered as a node of the graph. This would yield a huge graph. Hence, trip origins and destinations close to each other were aggregated and contracted to one node. Corresponding trips of aggregated nodes were also aggregated. The method of aggregation is described in section "Data and results".
Step 3: Detect the communities of the graph
Community detection was conducted with the heuristic algorithm presented by Blondel et al. (2008), which is applicable to undirected networks. The algorithm consists of two consecutively repeating steps. In the first step, the algorithm considers each vertex of the network as a community. In the second step, the algorithm identifies the two vertices with the most interaction and groups them as one community. It then replaces these two vertices with one (virtual) vertex and repeats the first step. In this algorithm, the suitability of vertices for aggregation is determined by the value of modularity. Modularity (Q) is a variable that compares the density of intra-community and inter-community connections, and as a result, its value represents the quality of the formed communities. In a weighted network, this index is defined by the following equation:
$$ Q=\frac{1}{2m}\sum \limits_{i,j}\left[{A}_{ij}-\frac{K_i{K}_j}{2m}\right]\updelta \left({C}_i,{C}_j\right) $$
where $A_{ij}$ denotes the weight of the arc connecting vertex $i$ to vertex $j$; $K_i$ represents the total weight of all arcs connected to vertex $i$; $C_i$ is the community that includes vertex $i$; $\delta$ is a binary function which is 1 when $i$ and $j$ are in the same community, and 0 otherwise; and $2m$ is the total weight of all arcs in the network.
The community detection algorithm first selects an arbitrary vertex (i), separates it from its community and inserts it into the neighboring community (j), and then recalculates the resulting modularity index. It repeats this process for all vertices adjacent to vertex i, and ultimately adds the vertex i to the neighboring community with the maximum positive ΔQ (the difference between the modularity index of the target community and that of the original community). The change in the modularity index (ΔQ) is calculated by Eq. (2):
$$ \Delta Q=\left[\frac{\sum_{in}+{K}_{i,in}}{2m}-{\left(\frac{\sum_{tot}+{K}_i}{2m}\right)}^2\right]-\left[\frac{\sum_{in}}{2m}-{\left(\frac{\sum_{tot}}{2m}\right)}^2-{\left(\frac{K_i}{2m}\right)}^2\right] $$
where $\sum_{in}$ is the total weight of all arcs inside the community $C$; $K_{i,in}$ is the total weight of all arcs connecting the vertex $i$ to other vertices of the community $C$; and $\sum_{tot}$ is the total weight of all arcs connected to the vertices of the community $C$.
The algorithm repeats this process for all vertices in the network and continues until ΔQ cannot be improved any further. In the second step, the algorithm considers each formed community as one vertex and takes the total weight of connections between two communities of the first step as the weight of the new arc. This leads to the formation of a new network whose layout and properties depend on the output of the first step. The algorithm then repeats the entire process for the new network. This second step continues until ΔQ cannot be further improved, which marks the end of the algorithm's first cycle (iteration) and the start of a new cycle through re-initiation of step 1. These iterations continue until the modularity index cannot be improved any further.
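As an illustration of this step, the following is a minimal Python sketch using the Louvain implementation in NetworkX (≥ 2.8); the toy trip graph and its weights are invented for demonstration only:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

# Hypothetical weighted trip graph: (origin, destination, number of trips)
trips = [("A", "B", 120), ("B", "C", 95), ("A", "C", 80),
         ("D", "E", 110), ("E", "F", 70), ("C", "D", 5)]
G = nx.Graph()
G.add_weighted_edges_from(trips)   # stores weights under the 'weight' attribute

# Louvain modularity maximization (Blondel et al. 2008)
parts = louvain_communities(G, weight="weight", seed=42)
Q = modularity(G, parts, weight="weight")
print(parts, round(Q, 3))          # e.g. [{'A', 'B', 'C'}, {'D', 'E', 'F'}]
```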
Step 4: Select the links in each community suitable for passing bike lanes
The suitable distance for biking was assumed to be four kilometers. Hence, in each community, node pairs with distances of less than 4 km were selected as potential bike lane routes. Hereafter, these nodes are called the key points.
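This selection step can be sketched as follows (the node coordinates are hypothetical; the 4 km threshold is the one assumed in the paper):

```python
from itertools import combinations
from math import asin, cos, radians, sin, sqrt

def haversine_km(p1, p2):
    # Great-circle distance between two (lat, lon) points, in kilometers
    (lat1, lon1), (lat2, lon2) = p1, p2
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

nodes = {"n1": (32.650, 51.667), "n2": (32.661, 51.680), "n3": (32.700, 51.750)}
key_pairs = [(u, v) for u, v in combinations(nodes, 2)
             if haversine_km(nodes[u], nodes[v]) < 4.0]
```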
Step 5: Find the best bicycle route in each community
Routes connecting the key points in each community were enumerated to generate the choice set for selecting the best routes for constructing the bike network. An economical and desirable network should consider a trade-off between the goals of users and planners. Accordingly, we present a bi-objective optimization model for the bike network design problem to generate a set of non-dominated solutions and facilitate the process of decision making. The mixed integer formulation of the model is a variant of the fixed-charge multi-commodity network design problem (Magnanti and Wong 1984). The first objective is minimizing the total travel cost (distance) as the users' objective. The second objective minimizes the total length of the proposed directed bike network as the planners' objective. This objective covers the economic issues of constructing the bike network and conflicts with the first one. An example is illustrated in Fig. 3 to explain how the optimization model works. Assume a grid network in which five nodes have high values of short-length taxi trips.
Alternative bike networks within a community: an example
Figure 3 shows four possible networks selected as the non-dominated bike networks. A non-dominated solution is a solution for which no objective can be improved without deteriorating the other objectives. There is a large variety of classical methods for converting a multi-objective model into a single-objective one, and generally none of them can be said to be superior to the others (Hartikainen et al. 2012). For example, the values of the objectives (O1, O2) for these four networks are as follows: (100, 20), (80, 30), (60, 35), and (50, 40).
In this study, a weighting method with normalization is used to convert the bi-objective model to a single-objective one. The weighting method with normalization is an extension of the weighting method in which the objectives are normalized to return a value between zero and one (Grodzevich and Romanko 2006). Each objective is normalized by subtracting the value of the ideal solution of the objective function and dividing by the difference between the nadir and ideal solutions of the objective function. For a bi-objective model, the ideal solution for each objective is obtained by minimizing it without considering the other objective. Also, when the first objective is minimized, the value of the second objective is a nadir solution for the second objective, and vice versa. Equation (3) shows the process of normalization for objective i. Equation (4) demonstrates the new weighted objective constructed from the initial two objectives.
$$ {f}_i^{\prime }(x)=\frac{f_i(x)-{f^L}_i}{{f^N}_i-{f^L}_i} $$
$$ h(x)={w}_1{f}_1^{\prime }(x)+{w}_2{f}_2^{\prime }(x) $$
where $f_i(x)$ and $f_i^{\prime}(x)$ are the objective function $i$ and its normalized form, respectively; $f_i^L$ and $f_i^N$ are the ideal and nadir solutions of objective function $i$, respectively; $h(x)$ is the new weighted objective constructed from the initial two objectives; and $w_1$ and $w_2$ are the weights of the first and second objectives, respectively. In this method, the sum of the weights must equal one. In this study, to extract a set of non-dominated solutions, the weight of the first objective is increased from zero to one in steps of 0.1.
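A minimal sketch of Eqs. (3) and (4) over an enumerated candidate set (the (O1, O2) values are the illustrative ones from the Fig. 3 example; using the candidate-set extremes as ideal/nadir points is a simplification of the single-objective runs described above):

```python
def normalize(f, f_ideal, f_nadir):
    # Eq. (3): scale an objective value to [0, 1]
    return (f - f_ideal) / (f_nadir - f_ideal)

candidates = [(100, 20), (80, 30), (60, 35), (50, 40)]   # (O1, O2) per network
o1_vals, o2_vals = zip(*candidates)
ideal = (min(o1_vals), min(o2_vals))
nadir = (max(o1_vals), max(o2_vals))

for w1 in [i / 10 for i in range(11)]:                   # w1 = 0.0, 0.1, ..., 1.0
    w2 = 1 - w1
    h = [w1 * normalize(o1, ideal[0], nadir[0]) + w2 * normalize(o2, ideal[1], nadir[1])
         for o1, o2 in candidates]
    best = candidates[h.index(min(h))]                   # Eq. (4): pick the minimizer
    print(f"w1={w1:.1f}: best network (O1, O2) = {best}")
```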
Before describing the mathematical formulation of the bi-objective bike network design problem, the sets, indices, input parameters and decision variables used are described.
N : The set of network nodes
A : The set of network links
D : The set of network demands
s : The index for network demands
i, j : The index for network nodes
$L_{ij}$ : The length of link (i, j)
$Q_s$ : The amount of OD demand flow s
$O(s)$ : The origin node of OD demand flow s
$D(s)$ : The destination node of OD demand flow s
$M$ : A constant positive number, equal to the number of OD demand flows in the network
Decision variables
$x_{ij}^s$ : A binary decision variable; it is equal to one if link (i, j) is selected for routing OD demand flow s, and zero otherwise.
$Z_{ij}$ : A binary decision variable; it is equal to one if link (i, j) is selected as a network link, and zero otherwise.
$$ \mathit{\operatorname{Min}}\ {O}_1=\sum \limits_{s\in D}\sum \limits_{i\in N}\sum \limits_{j\in N,\left(i,j\right)\in A}{x}_{ij}^s\times {L}_{ij}\times {Q}_s $$
$$ \mathit{\operatorname{Min}}\ {O}_2=\sum \limits_{i\in N}\sum \limits_{j\in N,\left(i,j\right)\in A}{Z}_{ij}\times {L}_{ij} $$
$$ St: $$
$$ \sum \limits_{j\in N,\left(i,j\right)\in A}{x}_{ij}^s-\sum \limits_{j\in N,\left(j,i\right)\in A}{x}_{ji}^s=1\kern2.5em \forall i\in N,\forall s\in D, and\ i=O(s) $$
$$ \sum \limits_{j\in N,\left(j,i\right)\in A}{x}_{ji}^s-\sum \limits_{j\in N,\left(i,j\right)\in A}{x}_{ij}^s=1\kern2.5em \forall i\in N,\forall s\in D, and\ i=D(s) $$
$$ \sum \limits_{j\in N,\left(i,j\right)\in A}{x}_{ij}^s-\sum \limits_{j\in N,\left(j,i\right)\in A}{x}_{ji}^s=0\kern2.5em \forall i\in N,\forall s\in D, and\ i\ne \left\{O(s),D(s)\right\} $$
$$ \sum \limits_{s\in D}{x}_{ij}^s\le M{Z}_{ij}\kern2.25em \forall i,j\in N, and\left(i,j\right)\in A $$
$$ {x}_{ij}^s\ and\ {Z}_{ij}\in \left\{0,1\right\} $$
Equation (5) is the users' objective and minimizes the total travel distance in the network. Equation (6) is the system or planners' objective, which minimizes the total length of the bike network. Equations (7), (8), and (9) are the flow conservation constraints. Equation (7) ensures that for each OD pair, a network link departs from the origin. Equation (8) expresses that for each OD pair, a link must arrive at the destination of the OD demand. Equation (9) is for intersection nodes and ensures that if a link enters an intersection node, another link leaving it must exist. Based on Eq. (10), if link (i, j) is selected for transferring the OD flows, this link must be constructed in the network. Finally, Eq. (11) shows the nature of the decision variables (binary variables).
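To make Eqs. (5)-(11) concrete, here is a minimal sketch of the model on a toy instance using the open-source PuLP library (the instance data, the scalarization weights, and the CBC solver are assumptions for illustration; the paper itself solved the model with CPLEX):

```python
import pulp

# Toy instance: directed links with lengths L_ij (km), and OD demands s -> (O(s), D(s), Q_s)
links = {("a", "b"): 1.0, ("b", "a"): 1.0, ("b", "c"): 1.5, ("c", "b"): 1.5,
         ("a", "c"): 3.0, ("c", "a"): 3.0}
demands = {0: ("a", "c", 40), 1: ("c", "a", 25)}
nodes = {n for ij in links for n in ij}
M = len(demands)                                       # as defined in the notation list

prob = pulp.LpProblem("bike_network_design", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (demands, links), cat="Binary")  # routing variables
z = pulp.LpVariable.dicts("z", links, cat="Binary")             # construction variables

# Weighted-sum scalarization of Eqs. (5) and (6); normalization omitted for brevity
w1, w2 = 0.5, 0.5
O1 = pulp.lpSum(x[s][ij] * links[ij] * demands[s][2] for s in demands for ij in links)
O2 = pulp.lpSum(z[ij] * links[ij] for ij in links)
prob += w1 * O1 + w2 * O2

# Flow conservation, Eqs. (7)-(9): +1 at the origin, -1 at the destination, 0 elsewhere
for s, (o, d, _) in demands.items():
    for i in nodes:
        outflow = pulp.lpSum(x[s][ij] for ij in links if ij[0] == i)
        inflow = pulp.lpSum(x[s][ij] for ij in links if ij[1] == i)
        prob += outflow - inflow == (1 if i == o else -1 if i == d else 0)

# Linking constraint, Eq. (10): a used link must be built
for ij in links:
    prob += pulp.lpSum(x[s][ij] for s in demands) <= M * z[ij]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
built = [ij for ij in links if z[ij].value() == 1]
print(built)
```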
Data and results
Taxi trip data of Isfahan, Iran was used for implementing the model. Travel information for the period of 26 May 2014 to 30 May 2014 for all taxis equipped with the smart card system was obtained. The database contained the coordinates of trip origins and destinations, trip durations, and actual distances traveled, covering nearly fifty-three thousand trips.
The taxi trips made on workdays were used to form the weighted directed network G(N, E) consisting of n nodes and e links. In order to form a network with a tractable number of nodes, spatial aggregation was applied to the origin and destination points. Nodes were assumed to be located at the intersections and major trip-attracting areas of the city. Then every trip originating or ending within a circle of radius 200 m around them was aggregated. In other words, each node represents an area of trip generation and attraction with a radius of 200 m. The radius of these areas (200 m) was selected after considering the size of squares and intersections and the relative position of nearby taxi stations. Figure 4 shows an example of the aggregation of points within a 200-m radius of a square.
Spatial distribution of aggregated OD points
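One possible implementation of this aggregation step (the paper does not name its tooling, so density-based clustering with scikit-learn's haversine metric is an assumption, and the coordinates below are hypothetical):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical trip endpoints as (lat, lon) in degrees
points = np.array([[32.6501, 51.6670], [32.6504, 51.6675],
                   [32.7002, 51.7501], [32.7003, 51.7499]])

EARTH_RADIUS_KM = 6371.0
eps_km = 0.2                                  # 200 m aggregation radius
db = DBSCAN(eps=eps_km / EARTH_RADIUS_KM, min_samples=1, metric="haversine")
labels = db.fit_predict(np.radians(points))   # the haversine metric expects radians

# Each cluster becomes one graph node, located at the centroid of its points
for lbl in np.unique(labels):
    print(f"node {lbl}: centroid {points[labels == lbl].mean(axis=0)}")
```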
The priority of each area for designation as a node was determined based on the number of trips generated by and attracted to that area. After identifying the high-priority areas on the Isfahan map, it was observed that almost 70% of all trips made on workdays pertained to 114 nodes. The links of the network represent direct trips among the nodes, and after aggregating the trips made between nodes, each link was assigned a weight equal to the total number of trips made on that particular route. Links were assumed to be undirected, as trips made by bike would be bidirectional. Next, the links with very low trip counts (fewer than 5 trips per day) were eliminated, and the network of Isfahan's taxi trips on workdays was developed with 114 nodes and 1112 links. Figure 5 shows a view of the network.
Vertices and arcs of Isfahan's taxi network
The community detection algorithm detected seven clusters in the network. In each cluster, the node pairs with distances of less than four kilometers were considered key points to be located on future bike networks. The key points of the clusters are illustrated in Fig. 6.
The key points in each community for constructing bike networks
Once the key points in each community were determined, road networks around each key point were delineated to extract the non-dominated bike networks. The key points in communities 3, 4, 5, and 6 are situated on a direct path. Therefore, applying the proposed model to them was not necessary, and each has a single non-dominated solution that connects the key points of the cluster to each other with a two-directed path. The bike networks for these communities are shown in the Appendix. Figure 7 shows the proposed road network around the key points of community 7; the proposed road networks around the key points of communities 1 and 2 are shown in the Appendix. The proposed road network in community 7 consists of 14 nodes and 38 directed links. Among the 14 network nodes, 6 are demand nodes and the eight remaining nodes are intersection nodes. The input parameters for running the model, i.e. the lengths of network links and the OD demand flows, were extracted from Google Maps and the data of the taxi payment service, respectively.
The proposed road network around key points of community 7
The model was applied to the proposed road networks of communities 1, 2, and 7. The commercial software IBM ILOG CPLEX 12.6.1 was used for solving the bi-objective model on a device with an Intel(R) Core(TM) i7 CPU @ 2.13 GHz and 6 GB of RAM under the 64-bit Windows 7 operating system. In order to obtain an ideal solution for an objective in the bi-objective programming model, its weight and the weight of the other objective were set equal to 1 and 0, respectively. In this situation, the value of the other objective is equal to its nadir solution. Table 1 shows the obtained ideal and nadir solutions for all communities.
Table 1 The ideal and nadir solutions of bike networks of all communities
The results of finding the ideal and nadir solutions for communities 3, 4, 5, and 6 also confirm that these communities have just one solution that is optimal for both objectives. In order to find the non-dominated solutions for communities 1, 2, and 7, the weight of the first objective (w1) was increased from zero to one in increments of 0.1, meaning that the weight of the second objective (w2) decreased from one to zero in increments of 0.1. Table 2 shows the characteristics of the non-dominated solutions for all communities. Some of the weighting systems yield the same solutions, which is indicated in column 3 of Table 2.
Table 2 Characteristics of non-dominated networks for all communities
The model yielded 3, 5, and 6 non-dominated bike networks for communities 1, 2, and 7, respectively. A comparison of the objective values of the non-dominated bike networks in each community shows that by accepting a small increase in the users' objective, it is possible to make a considerable improvement in the planners' objective, and vice versa. For instance, consider networks 1 and 2 in community 2. The value of the first objective in network 2, in comparison to network 1, is increased by 2.3%, while the second objective is decreased by 52.3%. Therefore, with a little attention to the planners' objective, one can decrease the total network length by about 50% by choosing another non-dominated bike network. As another example, consider networks 4 and 5 in community 2. In this case, with a little attention to the users' objective, network 4 improves the users' objective by 39.9%, while the planners' objective is only increased by 2.1%. Figure 8 shows all non-dominated bike networks for community 7. The non-dominated bike networks for communities 1 and 2 are shown in the Appendix.
The proposed non-dominated bike networks for community 7
In order to evaluate the quality of the proposed model, its performance in terms of speed and trip coverage was compared to a random node selection approach. An extended covering area around communities 1 and 2 was selected, and key points were randomly selected. Figure 9 shows the assumed area and its nominated nodes as well as the road facilities. The matrix of network demand is shown in Appendix: Table A.1.
The road networks around the key points are situated in the picture area
The presented key points for community 2 in this area capture a larger OD flow than those of community 1. Therefore, the aim is to compare the values of the model objectives for the new and old (the key points of community 2) sets. We believe that the total OD flow between the key points in the newly proposed sets affects the values of the model objectives and consequently would undermine the comparison. For instance, if an alternative set were constructed by eliminating some key points of community 2, both model objectives would attain smaller values than before, and a dominating solution would be achieved. But this solution would be derived from diminishing the total OD flow, which is as important as the two model objectives. In order to have a valid comparison, we can either consider the amount of captured OD flow as an additional objective or select OD flows such that their sum is close to that of community 2.
We adopted the second approach in the proposed bi-objective bike network design model by randomly selecting key points from the network such that their total OD flows deviate by at most 5% from the total OD flows of community 2 (193 ± 10 trips). We linked Java with the commercial software IBM ILOG CPLEX 12.6.1 to choose a subset of the network demand periodically and solve the model.
Figure 10 compares the quality of the non-dominated solutions obtained for 10, 30, and 50 iterations for each weighting system with the proposed non-dominated solutions of community 2, which resulted from integrating network clustering with the optimization model. The total time for finding the communities of the network and solving the mathematical model for each weighting system was 14 s, while the recorded solution times without applying the clustering approach were 301, 169, and 91 s for 300, 500, and 700 iterations, respectively. Therefore, the proposed approach is faster than random key point generation integrated with bike network design.
Demonstrating the quality of the non-dominated bike networks based on the presented approach
A first look at the non-dominated solutions of both approaches shows that the random key point selection approach can produce solutions that are more optimal from the standpoint of the users' objective than the presented clustering approach (the solutions situated in the yellow box). But this is caused by compromising the total OD flow between the key points: investigating the total OD flow for the solutions situated in the yellow box shows that they have a total OD flow of up to 185 trips, while the proposed model was applied to a network with 193 trips. Also, the clustering approach produced more optimal solutions from the standpoint of the planners' objective, even with the reduced total OD flow in the network.
The proposed model belongs to the category of strategic problems for designing bike networks. In addition to identifying the links of a bike network, the number and position of bike stations are important. There are other, tactical problems concerned with locating the most suitable positions of bicycle stations and ensuring the adequate redistribution of bicycles. Adequate redistribution of bicycles increases the likelihood of stations servicing new passengers, increases fleet productivity, and reduces the fleet size required to provide adequate service, all of which in turn increase the demand for and desirability of the whole program. These problems can be integrated with bike network design in the future to propose a unified network.
A method for using the data collected by intelligent transportation system devices in planning urban infrastructure was proposed. This paper used taxi trip data to suggest a number of bike networks for a city. The aim of this study was to provide a conceptual framework and a suitable approach for this purpose. The results of this paper can be further improved by repeating the work with a wider range of data, different community detection methods, engagement with the attitudes of residents, and different values for the model parameters.
In this paper, the travel data of Isfahan's taxis were used to extract the common origins and destinations of trips made by citizens. Each set of relatively proximate points showing a high volume of exchange was then classified as one community. Ultimately, the authors proposed seven potential regions for setting up a bike network in Isfahan. In each community, the vertices spaced less than 4 km from each other were considered key points for designing a bike network. After identifying the key points in each community, and considering the road network types and their characteristics, a network connecting the key points in each community was proposed as a bike network.
Next, a bi-objective optimization model was applied to each community to find the non-dominated bike networks. The first objective of the model minimized the total travel distance in the network and was the users' objective. The second objective minimized the total network length and was the planners' objective.
The proposed method circumvents the need for the collection of massive stated data on travelers' trips and preferences. Since smart cards in buses and taxis are being rapidly embraced by cities, using their data does not incur extra charges. Although elegant, community detection is not a complicated or time-consuming process. Therefore, the proposed method can be applied in almost every urban context.
OD: Origin-Destination matrix/flow/pair
Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008:P10008
Broach J, Dill J, Gliebe J (2012) Where do cyclists ride? A route choice model developed with revealed preference GPS data. Transp Res A Policy Pract 46:1730–1740
Buehler R, Dill J (2016) Bikeway networks: a review of effects on cycling. Transp Rev 36:9–27
Cao L, Krumm J From GPS traces to a routable road map. In: Proceedings of the 17th ACM SIGSPATIAL international conference on advances in geographic information systems. Seattle: ACM; 2009. pp 3–12.
Castro PS, Zhang D, Li S Urban traffic modelling and prediction using large scale taxi GPS traces. In: International Conference on Pervasive Computing, 2012. Springer, pp 57–72
Chang H-W, Tai Y-C, Hsu JY-J (2009) Context-aware taxi demand hotspots prediction. Int J Bus Intelligence Data Mining 5:3–18
Chen C, Zhang D, Zhou Z-H, Li N, Atmaca T, Li S B-planner: night bus route planning using large-scale taxi GPS traces. In: Pervasive Computing and Communications (PerCom), 2013 IEEE International Conference on, 2013. IEEE, pp 225–233
Clauset A, Newman ME, Moore C (2004) Finding community structure in very large networks. Phys Rev E 70:066111
Duan Y, Lu F (2014) Robustness of city road networks at different granularities. Physica A 411:21–34
Duthie J, Unnikrishnan A (2014) Optimization framework for bicycle network design. J Transp Eng 140:04014028
Fraser SD, Lock K (2011) Cycling for transport and public health: a systematic review of the effect of the environment on cycling. Eur J Pub Health 21:738–743
Furth PG (2012) Bicycling infrastructure for mass cycling: a trans-Atlantic comparison. In: City cycling, pp 105–140
Gao M, Zhu T, Wan X, Wang Q Analysis of travel time patterns in urban using taxi gps data. In: Green Computing and Communications (GreenCom), 2013 IEEE and Internet of Things (iThings/CPSCom), IEEE International Conference on and IEEE Cyber, Physical and Social Computing, 2013. IEEE, pp 512–517
Girvan M, Newman ME (2002) Community structure in social and biological networks. Proc Natl Acad Sci 99:7821–7826
Grodzevich O, Romanko O (2006) Normalization and other topics in multi-objective optimization
Hartikainen M, Miettinen K, Wiecek MM (2012) PAINT: Pareto front interpolation for nonlinear multiobjective optimization. Comput Optim Appl 52:845–867
Hrncir J, Zilecky P, Song Q, Jakob M Speedups for Multi-Criteria Urban Bicycle Routing. In: OASIcs-OpenAccess Series in Informatics, 2015. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik,
Hu Y, Zhu D (2009) Empirical analysis of the worldwide maritime transportation network. Physica A 388:2061–2071
Jia T, Qin K, Shan J (2014) An exploratory analysis on the evolution of the US airport network. Physica A 413:266–279
Leng B, Zhao X, Xiong Z (2014) Evaluating the evolution of subway networks: evidence from Beijing subway network. EPL (Europhysics Letters) 105:58004
Li W, Cai X (2007) Empirical analysis of a scale-free railway network in China. Physica A 382:693–703
Liu L, Andris C, Ratti C (2010a) Uncovering cabdrivers' behavior patterns from their digital traces. Comput Environ Urban Syst 34:541–548
Liu S, Liu Y, Ni LM, Fan J, Li M Towards mobility-based clustering. In: Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, 2010b. ACM, pp 919–928
Lou Y, Zhang C, Zheng Y, Xie X, Wang W, Huang Y Map-matching for low-sampling-rate GPS trajectories. In: Proceedings of the 17th ACM SIGSPATIAL international conference on advances in geographic information systems, 2009. ACM, pp 352–361
Magnanti TL, Wong RT (1984) Network design and transportation planning: models and algorithms. Transp Sci 18:1–55
Mauttone A, Mercadante G, Rabaza M, Toledo F (2017) Bicycle network design: model and solution algorithm. Transportation Res Procedia 27:969–976
Mesbah M, Thompson R Optimal design of bike lane facilities in an urban Network In: Australian Transport Research Forum 2011 Proceedings, 2011. Citeseer, pp 28–30
Mohmand YT, Wang A (2014) Complex network analysis of Pakistan railways. Discrete Dynamics in Nature and Society, vol 2014
Newman ME (2006) Finding community structure in networks using the eigenvectors of matrices. Phys Rev E 74:036104
Newman ME, Girvan M (2004) Finding and evaluating community structure in networks. Phys Rev E 69:026113
Pan G, Qi G, Wu Z, Zhang D, Li S (2013) Land-use classification using taxi GPS traces. IEEE Trans Intell Transp Syst 14:113–123
Pons P, Latapy M (2005) Computing communities in large networks using random walks. In: Yolum P, Güngör T, Gürgen F, Özturan C (eds) Computer and Information Sciences - ISCIS 2005. Lecture Notes in Computer Science, vol 3733. Springer, Berlin, Heidelberg
Pucher J, Dill J, Handy S (2010) Infrastructure, programs, and policies to increase bicycling: an international review. Prev Med 50:S106–S125
Qian Y-S, Wang M, Kang H-X, Zeng J-W, Liu Y-F (2012) Study on the road network connectivity reliability of valley city based on complex network. Math Probl Eng Volume 2012, Article ID 430785, 14 pages
Radicchi F, Castellano C, Cecconi F, Loreto V, Parisi D (2004) Defining and identifying communities in networks. Proc Natl Acad Sci U S A 101:2658–2663
Roth C, Kang SM, Batty M, Barthelemy M (2012) A long-time limit for world subway networks. J R Soc Interface 2012:0259
Song Q, Zilecky P, Jakob M, Hrncir J Exploring pareto routes in multi-criteria urban bicycle routing. In: Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on, 2014. IEEE, pp 1781–1787
Veloso M, Phithakkitnukoon S, Bento C Urban mobility study using taxi traces. In: Proceedings of the 2011 international workshop on Trajectory data mining and analysis, 2011. ACM, pp 23–30
Von Ferber C, Holovatch T, Holovatch Y, Palchykov V (2009) Public transport networks: empirical analysis and modeling. Eur Phys J B 68:261–275
Wang H, Zou H, Yue Y, Li Q Visualizing hot spot analysis result based on mashup. In: Proceedings of the 2009 International Workshop on Location Based Social Networks, 2009. ACM, pp 45–48
Wei L-Y, Zheng Y, Peng W-C Constructing popular routes from uncertain trajectories. In: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, 2012. ACM, pp 195–203
Winters M, Davidson G, Kao D, Teschke K (2011) Motivators and deterrents of bicycling: comparing influences on decisions to ride. Transportation 38:153–168
Wu F, Huberman BA (2004) Finding communities in linear time: a physics approach. Eur Phys J B 38:331–338
Xu X, Hu J, Liu F, Liu L (2007) Scaling and correlations in three bus-transport networks of China. Physica A 374:441–448
Yuan J, Zheng Y, Xie X Discovering regions of different functions in a city using human mobility and POIs. In: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, 2012. ACM, pp 186–194
Yuan J, Zheng Y, Zhang C, Xie W, Xie X, Sun G, Huang Y T-drive: driving directions based on taxi trajectories. In: Proceedings of the 18th SIGSPATIAL International conference on advances in geographic information systems, 2010. ACM, pp 99–108
Yuan J, Zheng Y, Zhang L, Xie X, Sun G Where to find my next passenger. In: Proceedings of the 13th international conference on Ubiquitous computing, 2011. ACM, pp 109–118
Yue Y, H-d W, Hu B, Li Q-q, Li Y-g, Yeh AG (2012) Exploratory calibration of a spatial interaction model using taxi GPS trajectories. Comput Environ Urban Syst 36:140–153
Zheng Y, Liu Y, Yuan J, Xie X Urban computing with taxicabs. In: Proceedings of the 13th international conference on Ubiquitous computing, 2011. ACM, pp 89–98
Zhu T, Li C, Ma S, Wu D, Wang C An evaluation of travel time on urban road network. In: ITS Telecommunications (ITST), 2011 11th International Conference on, 2011. IEEE, pp 497–502
Ziebart BD, Maas AL, Dey AK, Bagnell JA Navigate like a cabbie: Probabilistic reasoning from observed context-aware behavior. In: Proceedings of the 10th international conference on Ubiquitous computing, 2008. ACM, pp 322–331
The authors would like to express their appreciation to Isfahan Municipality.
Data used for this research may be disclosed upon request to the corresponding author and by the consent of Isfahan Taxi Organization.
Department of Transportation Engineering, Isfahan University of Technology, Isfahan, Iran
Meisam Akbarzadeh & Syed Sina Mohri
Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
Ehsan Yazdian
Meisam Akbarzadeh
Syed Sina Mohri
MA designed the research, EY provided the data, and SSM prepared the data and ran the model. All authors have read and approved the manuscript.
Correspondence to Meisam Akbarzadeh.
The best bike networks for communities 3 and 4
The proposed networks for community 1 and 2
Table 3 OD flows for the network considered around community 7
Akbarzadeh, M., Mohri, S.S. & Yazdian, E. Designing bike networks using the concept of network clusters. Appl Netw Sci 3, 12 (2018). https://doi.org/10.1007/s41109-018-0069-0
Smart card data
Evaluation of anthocyanins in Aronia melanocarpa/BSA binding by spectroscopic studies
Jie Wei1,
Dexin Xu1,
Xiao Zhang1,
Jing Yang1 &
Qiuyu Wang1
The interaction between anthocyanins in Aronia melanocarpa (AMA) and bovine serum albumin (BSA) was studied in this paper by multispectral techniques such as fluorescence quenching titration, circular dichroism (CD) spectroscopy and Fourier transform infrared spectroscopy (FTIR). The results of the fluorescence titration revealed that AMA could strongly quench the intrinsic fluorescence of BSA by static quenching. The apparent binding constant $K_{SV}$ and number of binding sites $n$ of AMA with BSA were obtained by the fluorescence quenching method. The thermodynamic parameters, enthalpy change (ΔH) and entropy change (ΔS), were calculated to be 18.45 kJ mol−1 (> 0) and 149.72 J mol−1 K−1 (> 0), respectively, which indicated that the interaction of AMA with BSA was driven mainly by hydrophobic forces. The binding process was spontaneous, with a negative Gibbs free energy change. Based on Förster's non-radiative energy transfer theory, the distance r between the donor (BSA) and the acceptor (AMA) was calculated to be 3.88 nm. The conformations were analyzed using infrared spectroscopy and CD. The results of the multispectral techniques showed that the binding of AMA to BSA induced a conformational change of BSA.
Aronia melanocarpa Elliot is a member of the Rosaceae family, and Aronia melanocarpa fruits are one of the richest plant sources of anthocyanins. AMA are water-soluble plant pigments that have gained popularity due to their high anthocyanin content, with antioxidant, anti-inflammatory, antimicrobial, hepatoprotective, gastroprotective and other activities (Malinowska et al. 2013; Fares et al. 2011; Kokotkiewicz et al. 2010; Chrubasik et al. 2010). AMA have strong abilities in scavenging free radicals, improving immunity, anti-cancer, anti-aging, preventing cardiovascular disease and so on (Wei et al. 2017, 2016). The basic structure of AMA is shown in Scheme 1; the main monomer components are cyanidin-3-O-arabinoside, cyanidin-3-O-galactoside, cyanidin-3-O-glucoside and cyanidin-3-O-xyloside. In our previous studies, we carried out a series of optimizations of the extraction and purification of AMA, and its composition and biological activity were initially identified and studied (de Santiago et al. 2014). Based on these studies, it was found that AMA can inhibit the occurrence of diabetes and obesity, and regulate metabolic balance and the stability of the redox system; we also investigated the intervention of AMA in the aging mechanism of mice. Research also shows that AMA can be used as a food additive owing to its strong antioxidant capacity (Hassellund et al. 2012).
The structure of anthocyanins
Bovine serum albumin (BSA), one of the major components of plasma protein, is the most extensively studied serum albumin due to its structural homology with HSA (Manikandamathavan et al. 2017). We investigated the binding and associated energy transfer effects of AMA with BSA. A model of this interaction is proposed in which the intrinsic fluorescence of BSA is quenched by AMA binding through a static quenching procedure. Combining the thermodynamic parameters, it was found that the hydrophobic interaction between AMA and BSA played a major role in the binding. FTIR and CD analysis showed that AMA significantly affected the polarity and hydrophobicity of the tyrosine and tryptophan residues in BSA, which could influence the composition of the BSA secondary structure, alter the conformation of the protein, and further confirm the interaction between AMA and BSA.
The binding of AMA to BSA can alter the pharmacology and pharmacodynamics of these compounds, such as their distribution. Therefore, the study of the interaction between AMA and BSA through spectroscopic techniques is necessary; it lays the foundation for the study of the stability of AMA and BSA (Zhang et al. 2008a, b; Zhang et al. 2012; Sedighipoor et al. 2017).
Bovine serum albumin (BSA) was purchased from Xi'an Rui Xi Biological Technology Co., Ltd.; Aronia melanocarpa Elliot fruit was provided by the Liaoning Academy of Forestry (Shenyang, China); anthocyanin standards (cyanidin-3-O-arabinoside, cyanidin-3-O-galactoside, cyanidin-3-O-glucoside and cyanidin-3-O-xyloside) were purchased from Weikeqi Biotechnology Co., Ltd.
Fluorescence quenching titration
5.0 mL of anthocyanin solution (1.0 μM) was titrated by successive additions of BSA solution with a concentration of 1.0 × 10−5 mol L−1 at different temperatures (T = 297, 317, 337 K). The fluorescence quenching of bovine serum albumin (BSA) with the addition of AR1/AG 50 was recorded in the range of 290–450 nm by a fluorescence spectrofluorimeter. The widths of the excitation and emission slits were set at 5 nm, and the excitation wavelength was 280 nm. The temperature of the samples was maintained with recirculating water throughout the experiment. All fluorescence titration experiments were performed manually with a 100 μL microsyringe (Zhang et al. 2013).
Fourier transform infrared spectroscopy (FTIR) analysis
FTIR spectra of AMA and BSA were recorded on a Nicolet-6700 FTIR spectrometer via attenuated total reflection (ATR) at a resolution of 4 cm−1 with 32 scans in the range of 400–4000 cm−1 at room temperature. The corresponding absorbance contributions of the BSA and anthocyanin solutions were recorded and digitally subtracted with the same instrumental parameters, and the FTIR spectra were processed with OMNIC (Li et al. 2016).
Circular dichroism (CD) studies
The optical chamber of the CD spectrometer was deoxygenated with dry nitrogen before use and kept under a nitrogen atmosphere during the experiments. The scanning speed was 60 nm min−1, the spectral resolution was 0.2 nm, the response time was 0.25 s, and the slit width was 1 nm. The samples were scanned from 190 to 250 nm. The composition and content of the secondary structure of the protein were fitted using the Origin program, and the CDPro software was used to fit the secondary structure of the protein (BSA).
Molecular docking studies
Molecular docking was carried out to visualize the binding site of AMA on BSA. All docking calculations were performed with AutoDock 4.2.1.5 Tools (Molecular Graphics Laboratory, The Scripps Research Institute). The 3D structures of the four anthocyanins were downloaded from the PubChem Open Chemistry Database (https://pubchem.ncbi.nlm.nih.gov/substance). Both BSA and the four anthocyanin molecules were prepared using AutoDockTools 1.5.6 before docking. The docking was carried out with a 126 × 126 × 126 grid of 0.375 Å spacing covering the entire surface of BSA. The Lamarckian genetic algorithm, considered one of the most appropriate docking methods available in AutoDock, was used in the docking analysis (Paul et al. 2017).
Molecular dynamic (MD) simulations
Molecular docking simulations determine a general binding mode of the ligand. Nevertheless, an MD simulation of the ligand–protein complex was used to further investigate the effects of ligand binding on the conformation of the protein. The MD simulation was performed using the AutoDockTools-1.5.6 software package. The crystal structure of BSA was downloaded from the Protein Data Bank (RCSB). Models of the four anthocyanin monomers were constructed using the Chem3D 16.0 software package (Zhang et al. 2015).
Fluorescence spectra of interaction between different anthocyanins in Aronia melanocarpa and BSA
Qualitative analysis of the binding of AMA to BSA can be performed by examining fluorescence spectra. Generally, the fluorescence of a protein arises from three intrinsic fluorophores, namely the tryptophan, tyrosine and phenylalanine residues. The fluorescence quenching pattern of BSA is shown in Fig. 1. At an excitation wavelength of 280 nm, the maximum fluorescence emission wavelength (λmax) of BSA is about 330 nm. The fluorescence intensity at λmax decreased with increasing anthocyanin concentration, and λmax showed a red shift, indicating that the polarity of the microenvironment near the tryptophan and tyrosine residues of the protein increased while its hydrophobicity decreased. With increasing concentrations of the arabinoside and glucoside, the λmax of BSA showed a blue shift, indicating that the polarity of the binding cavity near the tryptophan residue weakened and the secondary structure changed (Gallo et al. 2013). Where λmax did not change significantly in Fig. 1, the microenvironment of the tryptophan residue can be taken as unchanged. According to the fluorescence data at λex = 280 nm, the quenching rates of cyanidin-3-O-arabinoside, cyanidin-3-O-galactoside, cyanidin-3-O-glucoside and cyanidin-3-O-xyloside were 25%, 31%, 30% and 32%, respectively. The different quenching rates may be related to the reaction process; the reaction between cyanidin-3-O-xyloside and BSA was the most pronounced (Zhang et al. 2008a, b; Unnikrishnan et al. 2014).
Fluorescence emission spectra of BSA suspension at an excitation wavelength of 280 nm in the presence of 0, 5, 10, 20, 30 and 40 μmol/L (a–f) Cyanidin-3-O-arabinoside (a), Cyanidin-3-O-galactoside (b), Cyanidin-3-O-glucoside (c) and Cyanidin-3-O-xyloside (d)
Quenching mechanism of BSA fluorescence by AMA
From Fig. 2 it is clear that the fluorescence of BSA is strongly quenched by the anthocyanins. The quenching constants were calculated according to the Stern–Volmer equation, Eq. (1):
$$ {\text{F}}_{0} /{\text{F}} = 1 + {\text{K}}_{\text{SV}} \cdot {\text{C}}_{\text{q}} = 1 + {\text{K}}_{\text{q}}\uptau_{0} {\text{C}}_{\text{q}} $$
wherein F0 and F are the fluorescence intensities in the absence and presence of the quencher, respectively; KSV is the Stern–Volmer dynamic quenching constant; Cq is the quencher concentration; Kq is the rate constant of the biomacromolecular quenching process; and τ0 is the lifetime of the fluorescent molecules in the absence of quencher (10−8 s). According to this formula, the Stern–Volmer (S–V) curve of BSA interacting with each of the four monomers can be obtained by plotting F0/F against Cq. The S–V curves of the protein were non-linear, indicating that the quenching of the endogenous BSA fluorescence was not caused by dynamic quenching but possibly by the formation of non-luminescent complexes between the fluorescent molecules and the quenchers (Soares et al. 2007).
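For illustration, the linear fit implied by Eq. (1) takes only a few lines (a minimal Python sketch; the concentrations and intensity ratios below are invented placeholders, not measured values):

import numpy as np

# Placeholder titration data: quencher concentration Cq (mol/L) and ratio F0/F
Cq = np.array([5e-6, 10e-6, 20e-6, 30e-6, 40e-6])
F0_over_F = np.array([1.05, 1.11, 1.22, 1.33, 1.45])

# Eq. (1): F0/F = 1 + KSV*Cq, so a straight-line fit gives KSV as the slope
KSV, intercept = np.polyfit(Cq, F0_over_F, 1)
tau0 = 1e-8                  # fluorophore lifetime without quencher (s)
Kq = KSV / tau0              # quenching rate constant (L mol^-1 s^-1)
print(f"KSV = {KSV:.3e} L/mol, Kq = {Kq:.3e} L/(mol s)")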
The Stern–Volmer curves of Cyanidin-3-O-arabinoside (a), Cyanidin-3-O-galactoside (b), Cyanidin-3-O-glucoside (c) and Cyanidin-3-O-xyloside (d) at 297, 317 and 337 K
Dynamic quenching constants of BSA at different temperatures
The dynamic quenching constant KSV of the protein at different temperatures (297, 317 and 337 K) and the rate constant of the dynamic quenching process Kq are shown in Table 1. If the fluorescence quenching mechanism of a protein is dynamic quenching, KSV generally increases with the temperature of the system, and the maximum diffusion-collision quenching constant of a quencher for a biological macromolecule is about 2 × 10^10 L mol−1 s−1. Here, however, the value of KSV decreased with increasing temperature, and the fluorescence quenching rate constants of the four anthocyanins were much larger than 2 × 10^10 L mol−1 s−1. This shows that the quenching mechanism of the four monomers was not dynamic quenching caused by diffusion and collision but static quenching caused by the formation of non-luminescent ground-state complexes between the fluorescent molecules and the quencher (Sun et al. 2017).
Table 1 The dynamic quenching constants of Cyanidin-3-O-arabinoside, Cyanidin-3-O-galactoside, Cyanidin-3-O-glucoside and Cyanidin-3-O-xyloside at 297, 317 and 337 K
Determination of binding constants, the number of binding sites and the type of binding
Double-logarithmic regression curves of the interaction of the four anthocyanins with BSA are shown in Fig. 3. When small molecules bind independently to a set of equivalent sites on a macromolecule, the equilibrium between free and bound molecules is given by Eq. (2):
$$ \log ({\text{F}}_{0} - {\text{F}})/{\text{F}} = \log K_{\text{s}} + {\text{n}}\log {\text{C}}_{\text{q}} $$
where KS and n are the apparent binding constant and the number of binding sites, respectively. Thus, plots (Fig. 3) of log (F0 − F)/F versus log Cq yielded KS values of 0.574 × 10^3, 0.484 × 10^3, 0.425 × 10^3 and 0.521 × 10^3 L mol−1 and n values of 0.9395, 0.9195, 0.9153 and 0.9265, respectively, at 297 K, as shown in Table 2. An n value approximately equal to 1 indicated a single binding site in the binding of AMA to BSA.
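The double-logarithmic fit of Eq. (2) likewise reduces to a single linear regression (a Python sketch with invented placeholder intensities):

import numpy as np

Cq = np.array([5e-6, 10e-6, 20e-6, 30e-6, 40e-6])   # quencher concentration (mol/L)
F0 = 700.0                                           # intensity without quencher (a.u.)
F = np.array([665.0, 630.0, 565.0, 512.0, 462.0])    # intensities with quencher (a.u.)

# log((F0 - F)/F) = log Ks + n log Cq: slope = n, intercept = log Ks
n_sites, logKs = np.polyfit(np.log10(Cq), np.log10((F0 - F) / F), 1)
print(f"n = {n_sites:.3f}, Ks = {10**logKs:.3e} L/mol")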
The double-logarithmic regression curves of log [(F0 − F)/F] versus log [Cq] of Cyanidin-3-O-arabinoside (a), Cyanidin-3-O-galactoside (b), Cyanidin-3-O-glucoside (c) and Cyanidin-3-O-xyloside (d) at 297, 317 and 337 K
Table 2 The binding constants and thermodynamic parameters of Cyanidin-3-O-arabinoside, Cyanidin-3-O-galactoside, Cyanidin-3-O-glucoside and Cyanidin-3-O-xyloside at 297, 317 and 337 K
The thermodynamic constants of ligand–macromolecule binding can be calculated according to the van't Hoff relations, Eqs. (3)–(5):
$$ \Delta {\text{H}} = {\text{d}}\left( {\frac{\Delta G}{T}} \right)/{\text{d}}\left( {\frac{1}{T}} \right) $$
$$ \Delta {\text{G}} = - {\text{RT}}\;\ln {\text{K}}_{\text{s}} $$
$$ \Delta {\text{G}} = \Delta {\text{H}} - {\text{T}}\Delta {\text{S}} $$
wherein KS is the binding constant at the corresponding temperature T; ΔH, ΔS and ΔG are, respectively, the enthalpy change, entropy change and free-energy change of the binding process; and R is the gas constant (8.314 J mol−1 K−1). The interaction forces between a small molecule and a macromolecule include hydrogen bonds, van der Waals forces, hydrophobic forces and electrostatic interactions; to elucidate the interaction of AMA with BSA, the thermodynamic parameters were therefore calculated. The values of ∆H, ∆G and ∆S are listed in Table 2. From the point of view of water structure, a positive ∆S value is frequently taken as evidence of hydrophobic interaction, and a negative ∆G value reveals that the interaction process is spontaneous (Pomar et al. 2005). The ΔG of the binding of each of the four monomers to BSA was less than 0, indicating that the reactions were spontaneous; ΔH(BSA) > 0 and ΔS(BSA) > 0 indicate that the interaction with BSA was mainly hydrophobic. Because ΔH(BSA) > 0, the reaction was endothermic and the KS value increased with increasing temperature.
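For reference, Eqs. (3)–(5) can be evaluated in a few lines (a minimal Python sketch; the two higher-temperature Ks values below are invented placeholders, not those of Table 2):

import numpy as np

R = 8.314                                  # gas constant (J mol^-1 K^-1)
T = np.array([297.0, 317.0, 337.0])        # temperatures (K)
Ks = np.array([0.574e3, 0.63e3, 0.70e3])   # binding constants (L/mol); last two invented

slope, _ = np.polyfit(1.0 / T, np.log(Ks), 1)
dH = -R * slope            # Eq. (3): the van't Hoff slope gives the enthalpy change
dG = -R * T * np.log(Ks)   # Eq. (4): free-energy change at each temperature
dS = (dH - dG) / T         # Eq. (5) rearranged: entropy change
print(dH, dG, dS)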
The infrared spectra are shown in Fig. 4. Changes in the infrared spectrum indicated that the four monomers caused a change in the secondary structure of BSA. Oxygen atoms and hydroxyl groups of the anthocyanins combine with the C=O and C–N groups of BSA through hydrogen bonding and hydrophobic interaction to form a complex, resulting in rearrangement of the BSA peptide chain and ultimately leading to secondary-structure changes (Hu et al. 2004).
FTIR spectra in the region 4000–400 cm−1 for Cyanidin-3-O-arabinoside (a), Cyanidin-3-O-galactoside (b), Cyanidin-3-O-glucoside (c) and Cyanidin-3-O-xyloside (d) and their polyphenol complexes
Circular dichroism analysis
Circular dichroism analysis is a useful method for estimating the secondary structure of protein molecules. The shape and particular wavelengths of CD spectra are very sensitive to the secondary structure of proteins. The secondary-structure changes of BSA in the presence of the four monomers were studied using circular dichroism spectroscopy. Figure 5 shows the CD analysis of BSA in the absence and presence of the four monomers, by which the interaction between AMA and BSA could be verified (Wawer et al. 2006; Slimestad et al. 2005). The secondary-structure contents fitted by the software are listed in Table 3; after addition of the four monomers, the α-helix content of BSA was almost unchanged, the β-sheet content increased, and the turn and random-coil contents decreased (Karnaukhova 2007). The CD results thus demonstrated that the interaction of the four monomers with BSA led to a change in the secondary structure of BSA, consistent with the infrared spectra (Sahu et al. 2008).
The far-UV CD spectra of Cyanidin-3-O-arabinoside (a), Cyanidin-3-O-galactoside (b), Cyanidin-3-O-glucoside (c) and Cyanidin-3-O-Xyloside (d) in absence and presence of BSA
Table 3 Secondary structure analysis from the BSA and AMA
Computational analysis of the binding between AMA and BSA
We carried out docking simulations to investigate the possible binding sites of the four anthocyanins on BSA. The binding energies of the 50 docking models are shown in Fig. 6, together with the region of BSA where these binding modes occur and where AMA may therefore bind. Consequently, the stabilizing effect contributed by AMA on the appendant structure of BSA may prevent the occurrence of domain swapping, so as to redirect BSA away from the fibril-forming pathway and into forming nontoxic, unstructured, off-pathway aggregates (Gao et al. 2005). We therefore speculate that AMA could inhibit the fibrillation of BSA. This observation is of particular significance as the four anthocyanins had higher stability after interacting with BSA.
Binding energies of the 50 docking models obtained with the AutoDock software. Panoramic view showing the binding modes between Cyanidin-3-O-arabinoside (a), Cyanidin-3-O-galactoside (b), Cyanidin-3-O-glucoside (c) and Cyanidin-3-O-xyloside (d), and BSA
It is reported that the anthocyanin content of Aronia melanocarpa is up to 1%, far higher than that of other plants (Olszewska and Michel 2009). The aim of the above research was therefore to clarify the binding mechanism of AMA with BSA. We provide valuable information about the interaction of AMA, a plant-based food additive, with BSA, an important carrier protein, which is of great significance for follow-up studies of AMA and BSA (Li et al. 2016), and we also provide useful information for understanding the pharmacological effects at the molecular level (Zhang et al. 2012).
A series of multispectral techniques and molecular docking studies was used to analyze the interaction between AMA and BSA. Fluorescence quenching showed that AMA can quench the fluorescence intensity of BSA through a static mechanism, making it possible to study the interaction of AMA with this protein using the Stern–Volmer equation.
The results obtained from FTIR and CD showed that the α-helix content of BSA did not change significantly in the absence and presence of the four monomers, while the β-sheet content increased and the turn and random-coil contents decreased. The interaction of the four substances with BSA resulted in changes in the conformation and secondary structure of BSA. According to Förster's non-radiative energy-transfer theory, the binding distance r between AMA and BSA was calculated; the result is consistent with static quenching, and the binding reaction is spontaneous and largely mediated by hydrophobic forces.
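For completeness, once the transfer efficiency E and the critical distance R0 are known, the binding distance follows directly from the standard Förster relation (a Python sketch; the values of E and R0 below are placeholders, not results of this study):

# E: measured transfer efficiency (E = 1 - F/F0); R0: Forster critical distance.
# Both values below are placeholders, not results of this study.
E, R0 = 0.25, 2.5                          # efficiency (-), R0 in nm
r = R0 * ((1.0 - E) / E) ** (1.0 / 6.0)    # from E = R0**6 / (R0**6 + r**6)
print(f"binding distance r = {r:.2f} nm")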
Molecular docking is a key tool in structural molecular biology. The goal of ligand–protein docking is to predict the predominant binding mode(s) of a ligand with a protein of known three-dimensional structure (Morris and Lim-Wilby 2008). The molecular docking results indicated that AMA can interact with BSA without breaking the secondary structure of BSA. Conformational studies of BSA indicate that Trp212 is involved in the interfacial formation of subdomains IIA and IIIA and that the two hydrophobic cavities are the major regions where small-molecule compounds bind to the protein; molecular modeling showed that the anthocyanins share the same binding site. In view of the above, the combination of the multispectral and molecular docking approaches described here is of great significance for studying AMA/BSA binding.
MD: molecular dynamics simulations
AMA: anthocyanins in Aronia melanocarpa
CD: circular dichroism spectroscopy
FTIR: Fourier transform infrared spectroscopy
Chrubasik C, Li G, Chrubasik S (2010) The clinical effectiveness of chokeberry: a systematic review. Phytother Res 24(8):1107. https://doi.org/10.1002/ptr.3226
de Santiago MCPA, Gouvêa ACMS, de Oliveira RLO, Borguini RG, Pacheco S, Nogueira RI, da de do Nascimento LSM, Freitas SP (2014) Analytical standards production for the analysis of pomegranate anthocyanins by HPLC. Braz J Food Technol 17(1):51–57. https://doi.org/10.1590/bjft.2014.008
Fares R, Bazzi S, Baydoun SE, Abdel-Massih RM (2011) The antioxidant and anti-proliferative activity of the Lebanese Olea europaea extract. Plant Foods Hum Nutr 66(1):58–63. https://doi.org/10.1007/s11130-011-0213-9
Gallo M, Vinci G, Graziani G, De Simone C, Ferranti P (2013) The interaction of cocoa polyphenols with milk proteins studied by proteomic techniques. Food Res Int 54(1):406–415. https://doi.org/10.1016/j.foodres.2013.07.011
Gao D, Tian Y, Bi S, Chen Y, Aimin Yu, Zhang H (2005) Studies on the interaction of colloidal gold and serum albumins by spectral methods. Spectrochim Acta A 62(4–5):1203–1208. https://doi.org/10.1016/j.saa.2005.04.026
Hassellund SS, Flaa A, Sandvik L, Kjeldsen SE, Rostrup M (2012) Effects of anthocyanins on blood pressure and stress reactivity: a double-blind randomized placebo-controlled crossover study. J Hum Hypertens 26(6):396–404. https://doi.org/10.1038/jhh.2011.41
Hu YJ, Liu Y, Wang JB, Xiao XH, Qu SS (2004) Study of the interaction between monoammonium glycyrrhizinate and bovine serum albumin. J Pharmaceut Biomed 36(4):915–919. https://doi.org/10.1016/j.jpba.2004.08.021
Karnaukhova E (2007) Interactions of human serum albumin with retinoic acid, retinal and retinyl acetate. Biochem Pharmacol 73(6):901–910. https://doi.org/10.1016/j.bcp.2006.11.023
Kokotkiewicz A, Jeremicz Z, Luczkiewicz M (2010) Aronia plants: a review of traditional use, biological activities, and perspectives for modern medicine. J Med Food 13(3):255–269. https://doi.org/10.1089/jmf.2009.0062
Li T, Cheng Z, Cao L, Jiang X, Fan L (2016) Data of fluorescence, UV–vis absorption and FTIR spectra for the study of interaction between two food colourants and BSA. Data Brief 8(C):755–783. https://doi.org/10.1016/j.dib.2016.06.025
Malinowska J, Oleszek W, Stochmal A, Olas B (2013) The polyphenol-rich extracts from black chokeberry and grape seeds impair changes in the platelet adhesion and aggregation induced by a model of hyperhomocysteinemia. Eur J Nutr 52(3):1049–1057. https://doi.org/10.1007/s00394-012-0411-8
Manikandamathavan VM, Thangaraj M, Weyhermuller T, Parameswari RP, Murthy NN, Punitha, Nair BU (2017) Novel mononuclear Cu (II) terpyridine complexes: impact of fused ring thiophene and thiazole head groups towards DNA/BSA interaction, cleavage and antiproliferative activity on HepG2 and triple negative CAL-51 cell line. Eur J Med Chem 135:434–446. https://doi.org/10.1016/j.ejmech.2017.04.030
Morris GM, Lim-Wilby M (2008) Molecular docking. Methods Mol Biol 443(443):365–382
Olszewska MA, Michel P (2009) Antioxidant activity of inflorescences, leaves and fruits of three Sorbus species in relation to their polyphenolic composition. Nat Prod Res 23(16):1507–1521
Paul S, Sepay N, Sarkar S, Roy P, Dasgupta S, Sardar PS, Majhi A (2017) Interaction of serum albumins with fluorescent ligand 4-azido coumarin: spectroscopic analysis and molecular docking studies. New J Chem 41(24):15392–15404. https://doi.org/10.1039/C7NJ02335A
Pomar F, Novo M, Masa A (2005) Varietal differences among the anthocyanin profiles of 50 red table grape cultivars studied by high performance liquid chromatography. J Chromatogr A 1094(1–2):34. https://doi.org/10.1016/j.chroma.2005.07.096
Sahu A, Kasoju N, Bora U (2008) Fluorescence study of the curcumin-casein micelle complexation and its application as a drug nanocarrier to cancer cells. Biomacromolecules 9(10):2905–2912. https://doi.org/10.1021/bm800683f
Sedighipoor M, Kianfar AH, Mahmood WAK, Azarian MH (2017) Synthesis and electronic structure of novel schiff bases Ni/Cu (II) complexes: evaluation of DNA/Serum protein binding by spectroscopic studies. Polyhedron 129:1–8. https://doi.org/10.1016/j.poly.2017.03.027
Slimestad R, Torskangerpoll K, Nateland HS, Johannessen T, Giske NH (2005) Flavonoids from black chokeberries. Aronia melanocarpa. J Food Compos Anal 18(1):61–68. https://doi.org/10.1016/j.jfca.2003.12.003
Soares S, Mateus N, De Freitas V (2007) Interaction of different polyphenols with bovine serum albumin (BSA) and human salivary alpha-amylase (HSA) by fluorescence quenching. J Agric Food Chem 55(16):6726–6735. https://doi.org/10.1021/jf070905x
Sun L, Gidley MJ, Warren FJ (2017) The mechanism of interactions between tea polyphenols and porcine pancreatic alpha-amylase: analysis by inhibition kinetics, fluorescence quenching, differential scanning calorimetry and isothermal titration calorimetry. Mol Nutr Food Res 61(10):1613. https://doi.org/10.1002/mnfr.20170032
Unnikrishnan B, Wei SC, Chiu WJ, Cang J, Hsu PH, Huang CC (2014) Nitrite ion-induced fluorescence quenching of luminescent BSA-Au(25) nanoclusters: mechanism and application. Analyst 139(9):2221–2228. https://doi.org/10.1039/c3an02291a
Wawer I, Wolniak M, Paradowska K (2006) Solid state NMR study of dietary fiber powders from Aronia, bilberry, black currant and apple. Solid State Nucl Magn Reson 30(2):106–113. https://doi.org/10.1016/j.ssnmr.2006.05.001
Wei J, Zhang G, Zhang X, Gao J, Fan J, Zhou Z (2016) Anthocyanin improving metabolic disorders in obese mice from Aronia melanocarpa. Indian J Pharm Educ Res 50(3):368–375. https://doi.org/10.5530/ijper.50.3.8
Wei J, Zhang G, Zhang X, Xu D, Gao J, Fan J, Zhou Z (2017) Anthocyanins from black chokeberry (Aronia melanocarpa Elliot) delayed aging-related degenerative changes of brain. J Agric Food Chem 65(29):5973–5984. https://doi.org/10.1021/acs.jafc.7b02136
Zhang G, Wang A, Jiang T, Guo J (2008a) Interaction of the irisflorentin with bovine serum albumin: a fluorescence quenching study. J Mol Struct 891(1–3):93–97. https://doi.org/10.1016/j.molstruc.2008.03.002
Zhang Y-Z, Zhou B, Liu Y-X, Zhou C-X, Ding X-L, Liu Y (2008b) Fluorescence study on the interaction of bovine serum albumin with P-aminoazobenzene. J FLUORESC 18(1):109–118. https://doi.org/10.1007/s10895-007-0247-4
Zhang G, Ma Y, Wang L, Zhang Y, Zhou J (2012) Multispectroscopic studies on the interaction of maltol, a food additive, with bovine serum albumin. Food Chem 133(2):264–270. https://doi.org/10.1016/j.foodchem.2012.01.014
Zhang J, Zhuang S, Tong C, Liu W (2013) Probing the molecular interaction of triazole fungicides with human serum albumin by multispectroscopic techniques and molecular modeling. J Agric Food Chem 61(30):7203–7211. https://doi.org/10.1021/jf401095n
Zhang X, Li M, Wang Y, Zhao Y (2015) Insight into the binding mode of a novel LSD1 inhibitor by molecular docking and molecular dynamics simulations. J Recept Signal Transduct 35(5):363–369. https://doi.org/10.3109/10799893.2015.1049360
WJ and WQY conceived and designed the study. XDX performed the experiments and wrote the manuscript. YJ and ZX reviewed and edited the manuscript. All authors read and approved the manuscript.
This research was supported by the National Natural Science Foundation of China (Grant No. 31701656).
School of Life Science of Liaoning University, Chongshan Middle road 66, Huanggu District, Shenyang, 110036, Liaoning, China
Jie Wei, Dexin Xu, Xiao Zhang, Jing Yang & Qiuyu Wang
Correspondence to Jie Wei or Qiuyu Wang.
Wei, J., Xu, D., Zhang, X. et al. Evaluation of anthocyanins in Aronia melanocarpa/BSA binding by spectroscopic studies. AMB Expr 8, 72 (2018) doi:10.1186/s13568-018-0604-5
Binding mode | CommonCrawl |
Research Letter
Wind tunnel measurements of turbulent boundary layer flows over arrays of ribs and cubes
Ziwei Mo1 & Chun-Ho Liu1 (ORCID: orcid.org/0000-0001-9980-0400)
Geoscience Letters volume 5, Article number: 16 (2018)
Understanding the effect of building morphology on the flow aloft is important to the ventilation and pollutant removal in cities. This study examines the dynamics over hypothetical urban areas in isothermal conditions using wind tunnel experiments. Different configurations of rib-type and cube-type arrays are designed to model hypothetical rough urban surfaces. The mean and fluctuating velocities are measured by hot-wire anemometry with X-wire probes. The results show that significant variations of fluctuating velocities and momentum fluxes are clearly observed in the near-wall region, depicting the inhomogeneous flow in response to the presence of roughness elements in the lower part of turbulent boundary layer. Comparing the variables over different rough surfaces, the roof-level fluctuating velocities and momentum fluxes increase with increasing surface roughness. Quadrant analyses and frequency spectra collectively suggest that the fresh air entrainment and aged air removal are enhanced over rougher surfaces. Larger energy-carrying turbulence motions contribute mostly to the more efficient ventilation over urban areas.
In the presence of building obstacles in urban areas, the atmospheric boundary layer (ABL) develops similarly to a rough-wall turbulent boundary layer (TBL; Pope 2000). The flow structure and turbulence behavior are highly modified over different types of surface roughness (Jiménez 2004). It is therefore important to study the flow characteristics of TBLs over rough surfaces.
Wind tunnel experiments are commonly performed to examine turbulent flows over rough surfaces (Raupach et al. 1991). Scaling down the dimensions of realistic urban areas in a wind tunnel offers a cost-effective platform for sensitivity tests with full control of variables and boundary conditions (Cermak 1981). A series of wind tunnel studies has been carried out to demystify the effects of roughness-element configurations on the flows in rough-wall TBLs (Britter and Hanna 2003; Salizzoni et al. 2008; Liu et al. 2015). ABL velocity profiles have been examined over arrays of ribs (Salizzoni et al. 2008; Ho and Liu 2017) and arrays of cubes (Cheng and Castro 2002a, b). Some of the aerodynamic parameters, such as the displacement height d and roughness length z0, were contrasted over different surface configurations. The effect of roughness elements on the roughness sublayer (RSL) was also investigated (Placidi and Ganapathisubramani 2015). In addition, the turbulence structure was characterized by autocorrelation, quadrant analyses and spectra over cube-type arrays (Castro et al. 2006). These experimental studies have enriched our understanding of turbulent flows in rough-wall TBLs. However, more wind tunnel results are needed to study the effect of surface configurations on the turbulence behavior and the associated street-level ventilation over urban areas.
In this study, a series of wind tunnel experiments are carried out to examine the flows in the TBLs over rib-type and cube-type arrays. Square aluminum bars and LEGO™ bricks are used to fabricate different configurations of hypothetical urban areas. The profiles of mean wind speeds and turbulence are sampled in each repeating unit of roughness element. The effect of sampling position and rough-surface configurations on the flows is contrasted. Quadrant analyses and frequency spectra are performed as well to elucidate the scale of motions governing the roof-level ventilation mechanism over urban areas.
An open-circuit, isothermal wind tunnel, located in the Department of Mechanical Engineering, The University of Hong Kong (Ho and Liu 2017; Mo and Liu 2018), is employed to perform the laboratory-scale experiments (Fig. 1a). The dimensions of its test section are 6 m long, 0.56 m wide and 0.56 m high. Repeating units of the reduced-scale models are glued onto the whole floor to generate a fully developed TBL (Kozmar 2010). The free-stream wind speed U∞ in the wind tunnel is monitored by a pitot-static tube installed upstream of the test section throughout the set of experiments to maintain steady wind conditions. The wind tunnel is equipped with a digital traverse system operated by National Instruments (NI 2018) motion-control modules (PCI-7390) for sensor positioning, whose spatial resolution is 1 mm in both the streamwise x and vertical z directions.
Schematic of wind tunnel (a) and configuration of roughness element. b AR = 1/2, c AR = 1/3, d h:2l and e h:6l
Roughness elements
Models of hypothetical urban areas are fabricated by idealized roughness elements in the wind tunnel test section. Two types of rough surfaces are considered in this study, namely, rib-type arrays and cube-type arrays. The rib-type arrays are assembled by square aluminum bars of size l (= 560 mm; long) × h (= 9 mm; wide) × h (= 9 mm; high). The ribs are placed evenly apart in crossflows, spanning the full width of the wind tunnel test section. Ten configurations of rib-type arrays are adopted by adjusting the separation between the ribs w. The roughness-element-height-to-separation (aspect) ratios (AR) are equal to 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/8, 1/10, 1/12, and 1/15. For the cube-type arrays, roughness elements are assembled by staggering LEGO® bricks on a LEGO® baseboard. The size of each piece of LEGO® brick is l (= 16 mm; long) × l (= 16 mm; wide) × h (= 11.4 mm; high, including the studs at the top). The separation among the LEGO® bricks is varied in the streamwise x direction, covering h:l, h:2l, h:3l, h:4l, h:5l, h:6l, h:7l and h:9l. In addition, the height of cube-type arrays is increased by mounting double (h:4l − D), triple (h:4l − T) and quadruple (h:4l − Q) layers of LEGO® bricks on the h:4l configuration. Examples of the roughness configurations (AR = 1/2, AR = 1/4, h:2l, and h:6l) are shown in Fig. 1b–e. A total of eleven configurations of cube-type array of roughness element are employed in the wind tunnel measurements.
Velocity measurements
The mean and fluctuating velocities are measured by a constant-temperature hot-wire anemometer (CTA). An X-wire probe is mounted to measure the streamwise u and vertical w velocity components. The sensing elements are made of 5-μm-diameter platinum-plated tungsten wires with a 2-mm active length set by copper electroplating. The included angle between the two wires is 100° (> 90°), which helps reduce the error due to inadequate yaw response in the elevated turbulence intensity of the near-wall region (Krogstad et al. 1992; Perry et al. 1987; Cheng and Castro 2002a, b). The CTA analog signal is digitized by a 24-bit NI data acquisition module (NI 9239) mounted in an NI CompactDAQ chassis (NI cDAQ-9188). The digital data are then collected by LabVIEW software on a digital computer. The voltage signal is converted to velocity based on the universal calibration scheme (Bruun 1971). The CTA-measured velocity is compared with the velocity measured by the (upstream) pitot-static tube, for which the regression coefficient R2 is up to 0.999. Seven vertical profiles are collected for each repeating unit of roughness element (Fig. 2), covering the top of the roughness elements (P1 and P7), the cavity top (P3, P4 and P5), the leeward edge (P2) and the windward edge (P6). A total of 96 sampling points are probed in each vertical profile, ranging from the roughness-element height z = h to a wall-normal distance z = 350 mm above the TBL. The sampling time is 66 s at each point and the sampling frequency is 2000 Hz. Over 2^17 data are collected at each point and the sampling duration for each array configuration is over 12 h.
Plan view of a rib-type and b cube-type arrays. Also shown are the sampling positions of the vertical profiles over rough surfaces (black solid circles)
Dynamics over different rough surfaces are analyzed based on the wind tunnel measurements. In the following section, overbar \(\overline{ \bullet }\), angle bracket \(\left\langle \bullet \right\rangle\) and double prime \(\bullet^{\prime\prime}\) (= \(\bullet - \left\langle {\overline{ \bullet } } \right\rangle\)) denote the temporal average, spatial average and fluctuating component, respectively. Temporal average \(\overline{ \bullet }\) is the averaged property during the sampling duration at each point while spatial average \(\left\langle \bullet \right\rangle\) is the averaged property at wall-normal distance z of seven vertical profiles measured at different streamwise positions x.
Turbulent boundary layer parameters
Based on the velocity measurements, the TBL parameters in this study are tabulated in Table 1. The TBL thickness δ is defined as the wall-normal distance z at which the spatio-temporal average of the mean wind speed converges to 99% of the free-stream value, \(\left. {\left\langle {\overline{u} } \right\rangle } \right|_{z = \delta } = 0.99U_{\infty }\) (Cheng and Castro 2002a, b). In this study, the free-stream wind speeds at the TBL top are in the ranges of 8–9 m s−1 and 10–11 m s−1 for the rib-type and cube-type arrays, respectively. The TBL thickness over the rib-type arrays is in the range 219 mm (12h) ≤ δ ≤ 304 mm (16h), larger than its cube-type counterpart, which is in the range 135 mm (5h) ≤ δ ≤ 219 mm (14h). The thicker TBLs over rib-type arrays are caused by the greater obstacle height together with the elevated aerodynamic resistance. The Reynolds number based on the free-stream wind speed and TBL thickness, Re∞ (= U∞δ/ν), is in the range 125,000 ≤ Re∞ ≤ 277,000 for rib-type arrays and 135,000 ≤ Re∞ ≤ 255,000 for cube-type arrays, sufficiently high for the effect of molecular viscosity to be neglected in the analyses.
Table 1 Parameters in the turbulent boundary layers over different rough surfaces
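As a small numerical illustration of the definition of δ in the preceding paragraph, the 99% threshold can be located from a profile as follows (a Python sketch; the profile below is a generic placeholder, not measured data):

import numpy as np

def bl_thickness(z, u_mean, U_inf):
    """First wall-normal distance where the mean speed reaches 0.99*U_inf."""
    return z[np.argmax(u_mean >= 0.99 * U_inf)]

# Placeholder profile with a generic power-law shape; measured <u>(z) would be used
z = np.linspace(0.01, 0.35, 96)
u = 10.0 * (z / 0.35) ** (1.0 / 7.0)
print(bl_thickness(z, u, 10.0))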
The friction velocity is defined as u* = (τw/ρ)^1/2, where τw is the total shear stress on the rough surface and ρ the fluid density. In wind tunnel measurements, the friction velocity is commonly estimated by the relationship u* (= \(\left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2}\)) obtained by averaging the turbulent momentum flux over the entire rough surface (Cheng and Castro 2002a; Salizzoni et al. 2008). Cheng et al. (2007) reported that u* estimated by averaging \(\overline{{u^{\prime\prime}w^{\prime\prime}}}\) in the inertial sublayer (ISL) was underestimated by 25% over staggered arrays of cubical elements compared with direct drag measurement. In the same studies, u* is also obtained by assuming it to be the maximum of the Reynolds shear stress, which is comparable with a corrected estimate defined as \(\left. {\left( {1 + 0.25} \right) \times \left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2} } \right|_{\text{ISL}}\) (Manes et al. 2011; Placidi and Ganapathisubramani 2015; Cheng et al. 2007). In this study, we adopt the conventional method by assuming that u* equals the peak \(\left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2}\). Although this introduces error (within 25% uncertainty) in the estimate of u*, the variation pattern of u* in this study is not significantly affected because a consistent method is used among the test cases. The friction velocity u* over the rib-type and cube-type arrays is estimated to lie in the ranges 0.36–0.67 m s−1 and 0.42–0.70 m s−1, respectively (Table 1). Using u* to fix the slope, the other two key rough-TBL parameters, the roughness length z0 and displacement height d, are determined by the best fit of the wind-tunnel-measured mean wind speed profiles to the theoretical logarithmic law of the wall (log law). As shown in Table 1, the displacement height is in the range 4.1 mm (0.2h) ≤ d ≤ 13.6 mm (0.72h) over the rib-type arrays and 3.6 mm (0.09h) ≤ d ≤ 5.8 mm (0.5h) over the cube-type arrays. The roughness length z0 is much smaller, ranging from 0.04 mm (0.002h) to 1.04 mm (0.06h) over rib-type arrays and from 0.02 mm (0.002h) to 0.52 mm (0.013h) over cube-type arrays. The drag coefficient Cd (= 2u*^2/U∞^2) is commonly used to measure the aerodynamic resistance of flows over (non-smooth) solid boundaries. It is found to be 4.1 × 10−3 ≤ Cd ≤ 10.1 × 10−3 over rib-type arrays and 3.6 × 10−3 ≤ Cd ≤ 7.9 × 10−3 over cube-type arrays.
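The log-law fitting just described can be sketched as follows (Python; the u* value, data points and assumed U∞ are placeholders rather than values from this study, with u* taken as given from the peak turbulent momentum flux):

import numpy as np
from scipy.optimize import curve_fit

kappa = 0.40          # von Karman constant
u_star = 0.55         # placeholder: taken from the peak of <u''w''>^(1/2)

def log_law(z, d, z0):
    """Log law with the slope fixed by u_star; d and z0 are the free parameters."""
    return (u_star / kappa) * np.log((z - d) / z0)

z = np.array([0.030, 0.040, 0.050, 0.065, 0.080])   # placeholder ISL heights (m)
u = np.array([5.10, 5.55, 5.90, 6.30, 6.60])        # placeholder mean speeds (m/s)
(d, z0), _ = curve_fit(log_law, z, u, p0=[0.005, 0.0005],
                       bounds=([0.0, 1e-6], [0.02, 0.01]))
Cd = 2.0 * u_star**2 / 10.0**2    # drag coefficient, with U_inf = 10 m/s assumed
print(d, z0, Cd)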
The variations of the TBL parameters δ, z0 and d are closely influenced by the configuration of the surface roughness. The relationship between TBL parameters and the aspect ratio (rib-type arrays) or packing density (frontal and plan solidities of cube-type arrays) has been evaluated in previous studies (Cheng et al. 2007; Placidi and Ganapathisubramani 2015; Ho and Liu 2017). In this paper, we use the drag coefficient Cd as the quantitative indicator of the different rough-surface configurations. Figure 3 plots z0, d and δ against Cd in both dimensional and dimensionless forms. Data over ribs and cubes obtained in previous studies (Cheng and Castro 2002a; Salizzoni et al. 2008; Placidi and Ganapathisubramani 2015) are also compared. There is a noticeable trend for z0 to increase with increasing Cd (Fig. 3a). However, the rates of increase of z0/h for cube-type and rib-type elements are significantly different, suggesting that the roughness-element height h is not the most appropriate characteristic length scale for normalization (Fig. 3b). The displacement height d does not show any obvious increase with increasing Cd for either rib-type or cube-type elements, while its dimensionless form d/h varies significantly (Fig. 3c, d). This suggests that d has no significant relation to Cd. It should be noted that there is large uncertainty in the estimate of d from the best fit of the measured mean wind profile to the log law. The TBL thickness δ increases slightly with increasing Cd, suggesting a possible relation between them (Fig. 3e). However, δ scaled by the roughness-element height h scatters with increasing Cd for the cube-type elements, so the two are different characteristic length scales (Fig. 3f).
Comparison of a roughness length z0, c displacement height d and e boundary layer thickness δ plotted against the drag coefficient Cd. Also shown in (b, d and f) are the corresponding properties normalized by the roughness-element height h
Velocity profiles
Velocity profiles measured at different positions
To compare the velocity profiles measured at different positions in a repeating unit of roughness element, Fig. 4 depicts the mean wind speed \(\overline{u}\), streamwise fluctuating velocity \(\overline{{u^{\prime\prime}u^{\prime\prime}}}^{1/2}\), vertical fluctuating velocity \(\overline{{w^{\prime\prime}w^{\prime\prime}}}^{1/2}\) and momentum flux \(\overline{{u^{\prime\prime}w^{\prime\prime}}}\) over the rib-type array of AR = 1/4 and the cube-type array of h:4l. The velocities are normalized by the free-stream wind speed U∞. The wall-normal distance is measured from roof level, z − h, and normalized by the TBL thickness δ. The gradients of mean wind speed are similar across the measurement positions; the mean wind speed is about 40% of U∞ at the roughness-element height (z = h). The differences in mean wind speed among the individual profiles and their spatial average are less than 6%. However, scattered data (deviations within 12% of the spatially averaged profiles) are found for \(\overline{{u^{\prime\prime}u^{\prime\prime}}}^{1/2}\) and \(\overline{{w^{\prime\prime}w^{\prime\prime}}}^{1/2}\) in the near-wall region (z − h < 0.1δ), demonstrating the inhomogeneous flows due to the presence of roughness elements in the lower TBL. This feature is even more noticeable for the vertical profiles of \(\overline{{u^{\prime\prime}w^{\prime\prime}}}\), which vary strongly (up to 80% deviation from the spatially averaged profiles) in the near-wall region, suggesting that significant dynamic effects are induced by individual roughness elements. The inhomogeneous flows are mainly located in z − h < 0.1δ over the rib-type arrays and even lower, z − h < 0.05δ, over the cube-type arrays. A constant turbulent momentum flux region, defined as the inertial sublayer (ISL), is revealed in 0.1δ < z − h < 0.3δ over rib-type arrays and 0.05δ < z − h < 0.15δ over cube-type arrays.
Dimensionless vertical profiles of mean velocity \(\bar{u}\), streamwise fluctuating velocity \(\overline{{u^{\prime\prime}u^{\prime\prime}}}^{1/2}\), vertical fluctuating velocity \(\overline{{w^{\prime\prime}w^{\prime\prime}}}^{1/2}\) and momentum flux \(\overline{{u^{\prime\prime}w^{\prime\prime}}}\) measured at different locations over the rib-type array AR = 1/4 (a–d) and cube-type array h:4l (e–h). Profiles P1–P7 are shown; the dark solid line is the spatially averaged profile
Velocity profiles measured over different rough surfaces
To compare the effect of rough-surface configurations on the dynamics, Fig. 5 shows the spatially averaged profiles of mean wind speed \(\left\langle {\bar{u}} \right\rangle\) (Fig. 5a), streamwise fluctuating velocity \(\left\langle {\overline{{u^{\prime\prime}u^{\prime\prime}}} } \right\rangle^{1/2}\) (Fig. 5b), vertical fluctuating velocity \(\left\langle {\overline{{w^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2}\) (Fig. 5c) and momentum flux \(\left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle\) (Fig. 5d) over an entire repeating unit of roughness element. The mean wind speed profiles are generally similar over the different surface configurations; nevertheless, noticeable differences are found in the lower TBL. The roof-level mean wind speeds over all the roughness elements are in the range 0.35U∞ < \(\left\langle {\bar{u}} \right\rangle_{z = h}\) < 0.5U∞. There are notable variations of \(\left\langle {\overline{{u^{\prime\prime}u^{\prime\prime}}} } \right\rangle^{1/2}\), \(\left\langle {\overline{{w^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2}\) and \(\left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle\) over the rib- and cube-type arrays, implying that the elevated roof-level turbulence intensity is attributable to the shear close to the solid boundary. The variation in velocities over the different rib- and cube-type arrays vanishes with increasing wall-normal distance. The mean wind speed profiles collapse in the outer TBL (z − h > 0.6δ), where the flows are barely affected by the surface roughness. The maxima of \(\left\langle {\overline{{u^{\prime\prime}u^{\prime\prime}}} } \right\rangle^{1/2}\), \(\left\langle {\overline{{w^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2}\) and \(\left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle\) reside in the near-wall region (z − h < 0.1δ); they increase with widening roughness-element separation, reach a plateau (over the rib-type array of AR = 1/8 and the cube-type array of h:4l), and then decrease with further increases in separation. This is because in the closely packed configurations (small separations among roughness elements), namely the skimming flow regime (Oke 1988), the flows seldom entrain into the cavity, resulting in a lower turbulence level. With increasing roughness-element separation, the turbulence level is enhanced by the interaction between the prevailing flows and the cavity flows. With further increases in separation, however, the surface becomes smoother again as the roughness elements are sparsely distributed, which lowers the turbulence level. The high turbulence indicates strong shear over the top of the roughness elements.
Dimensionless spatio-temporally averaged vertical profiles of flow properties over rib-type and cube-type arrays expressed as functions of dimensionless wall-normal distance (z − h)/δ. a, e Mean wind speed \(\left\langle {\bar{u}} \right\rangle\); b, f streamwise fluctuating velocity \(\left\langle {\overline{{u^{\prime\prime}u^{\prime\prime}}} } \right\rangle^{1/2}\); c, g vertical fluctuating velocity \(\left\langle {\overline{{w^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2}\); and d, h turbulent momentum flux \(\left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle\). Ribs: AR = 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/8, 1/12 and 1/15. Cubes: h:l, h:2l, h:3l, h:4l, h:5l, h:6l, h:7l, h:9l, h:4l − D, h:4l − T and h:4l − Q
Quadrant analyses
At the roof level, a substantial variation of turbulence level is observed over different sampling positions (Fig. 4) and over different rough-surface configurations (Fig. 5). To elucidate the momentum transfer between the prevailing flows and cavity flows, quadrant analyses are performed for data at the roof-level sampling points (z = h). Based on the instantaneously measured components of fluctuating streamwise u″ and vertical w″ velocity, events of momentum flux transport are categorized into four quadrants, namely, outward interaction Q1 (u″ > 0 and w″ > 0), ejection Q2 (u″ < 0 and w″ > 0), inward interaction Q3 (u″ < 0 and w″ < 0) and sweep Q4 (u″ > 0 and w″ < 0) (Wallace et al. 1972; Lu and Willmarth 1973; Wallace 2016). The momentum flux can be calculated by:
$$\overline{{u^{\prime\prime}w^{\prime\prime}}} = \int\limits_{ - \infty }^{ + \infty } {u^{\prime\prime}w^{\prime\prime}P\left( {u^{\prime\prime},w^{\prime\prime}} \right)du^{\prime\prime}dw^{\prime\prime}} ,$$
where P(u″, w″) is the joint probability density function (JPDF) of the fluctuating velocity components u″ and w″, and u″w″P(u″, w″) is the covariance integrand. The JPDF depicts the occurrence frequency of the fluctuating velocities u″ and w″ in each quadrant event, while the covariance integrand illustrates the contribution of each quadrant to the total momentum flux. Figure 6 shows the roof-level JPDF and covariance integrand over the roughness elements (Fig. 6a for rib-type arrays and Fig. 6e for cube-type arrays), on the leeward side (Fig. 6b, f), at the cavity top (Fig. 6c, g) and on the windward side (Fig. 6d, h). The JPDF is peaked at small fluctuating velocities over the roughness elements and on the leeward/windward sides, and spreads out into Q2 and Q4 at the cavity top. At the same time, the strength of Q2 and Q4 increases while Q1 and Q3 are suppressed (contour lines). The occurrence of Q2 and Q4 is more frequent than that of Q1 and Q3, indicating that ejection Q2 (u″ < 0 and w″ > 0) and sweep Q4 (u″ > 0 and w″ < 0) dominate the mechanism of roof-level transport, in line with previous studies (Wallace 2016). The larger values of the covariance integrand in Q2 and Q4 at the cavity top suggest that aged-air removal (w″ ≥ 0) and fresh-air entrainment (w″ ≤ 0) are driven by decelerating (u″ ≤ 0) and accelerating (u″ ≥ 0) air masses, respectively.
Shaded contours of joint probability density function (JPDF) P (u″, w″) and line contours of covariance integrand u″w″P (u″, w″) at canopy level (z = h) measured at different locations over the rib-type array of AR = 1/4 (a–d) and cube-type array of h:4l (e–h)
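For reference, this decomposition reduces to a few array operations (a minimal numpy sketch; uf and wf stand for the measured fluctuating-velocity records and the names are our own):

import numpy as np

def quadrant_fractions(uf, wf):
    """Share of the total momentum flux u''w'' carried by each quadrant event."""
    uw = uf * wf
    total = uw.sum()
    q1 = uw[(uf > 0) & (wf > 0)].sum() / total   # outward interaction
    q2 = uw[(uf < 0) & (wf > 0)].sum() / total   # ejection
    q3 = uw[(uf < 0) & (wf < 0)].sum() / total   # inward interaction
    q4 = uw[(uf > 0) & (wf < 0)].sum() / total   # sweep
    return q1, q2, q3, q4

# The JPDF P(u'', w'') itself can be estimated with a 2-D histogram:
# P, u_edges, w_edges = np.histogram2d(uf, wf, bins=50, density=True)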
Figure 7 compares the JPDF and covariance integrand at the cavity top (P4 in Fig. 2) over different rough-surface configurations. The JPDF clearly spreads out in the directions of ejection Q2 and sweep Q4 with increasing drag coefficient, while Q1 and Q3 are suppressed accordingly. The covariance integrands of Q2 and Q4 are strengthened with increasing aerodynamic resistance. It is thus suggested that air entrainment and removal are enhanced over rougher surfaces, resulting in more efficient roof-level ventilation.
Shaded contours of joint probability density function (JPDF) P (u″, w″) and line contours of covariance integrand u″w″P(u″, w″) at canopy height (z = h) of P4 measured over the rib-type arrays of a AR = 1, b AR = 1/2, c AR = 1/4, and d AR = 1/8 and cube-type arrays of e h:l, f h:2l, g h:4l, and h h:4l − T
Frequency spectra
Frequency spectra are calculated to examine the scales of the roof-level turbulence motions (El-Gabry et al. 2014). As in the quadrant analyses, using the data at the cavity top (P4 in Fig. 2), the instantaneous flow signal is processed with the Fast Fourier Transform (FFT) to convert it from the time domain to the frequency domain (Storey 2002; El-Gabry et al. 2014). As shown in Fig. 8, the energy spectra of u″ are higher than those of w″ by over an order of magnitude for f × h/u* < 1, but they decrease sharply for f × h/u* > 1 for the streamwise fluctuating velocity u″ and for f × h/u* > 10 for the vertical fluctuating velocity w″. The inertial subrange is clearly depicted for both u″ and w″, showing the energy cascade across the different scales of motion in isothermal conditions. The spectra of u″ and w″ are comparable for f × h/u* > 10 because of the isotropic small-scale motions. Comparing the energy spectra at different sampling positions (Fig. 8a, b for rib-type arrays and Fig. 8e, f for cube-type arrays), the energy spectra of both u″ and w″ are higher at the cavity top (P4) than at roof level (P1): large-scale motions enhance the turbulent transport at the cavity top. Comparing the energy spectra over different rough surfaces, larger-scale turbulence is found with increasing drag coefficient over the rib-type arrays, especially for the vertical fluctuating velocity w″ (taking cases AR = 1 and AR = 1/12, and h:l and h:4l − D, for example). The feature is mild for the cube-type arrays, probably because the drag coefficient is similar among the different cubical roughness elements. These results suggest that the vertical transports are governed by larger-scale turbulence as the drag coefficient increases, whereby the momentum transports are enhanced.
Frequency spectra of dimensionless streamwise Φ(u″u″/u*^2) and vertical Φ(w″w″/u*^2) turbulence intensities at canopy level z = h over hypothetical urban areas. a, b Ribs of AR = 1/4 and e, f cubes of h:4l measured at P1 (black), P2 (orange), P4 (magenta). c, d Ribs of AR = 1/2 (green), AR = 1/4 (blue) and AR = 1/8 (red) measured at P4. g, h Cubes of h:l (green), h:4l (blue) and h:4l − T (red) measured at P4
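A sketch of the spectral computation (Python; the velocity record below is a random placeholder, and u* and h are assumed values — only the 2000 Hz sampling rate is taken from the experiments; the u*^2 scaling is our assumption about the normalization):

import numpy as np
from scipy.signal import welch

fs = 2000.0                      # sampling frequency (Hz), as in the experiments
u_star, h = 0.55, 0.009          # assumed friction velocity (m/s) and element height (m)
uf = np.random.randn(2**17)      # placeholder for a measured fluctuating-velocity record

f, Puu = welch(uf, fs=fs, nperseg=4096)   # power spectral density of u''
f_nd = f * h / u_star                     # dimensionless frequency f*h/u*
Phi_uu = Puu / u_star**2                  # spectrum scaled by u*^2 (assumed scaling)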
TBLs over rib- and cube-type arrays were developed in the wind tunnel to examine the flow and turbulence characteristics. For the aerodynamic parameters, a notable trend is observed in which the roughness length z0 increases with increasing drag coefficient Cd, while the displacement height d scatters considerably and shows no clear trend with Cd. Significant variations of the fluctuating velocities and momentum flux are found in the near-wall region, demonstrating the inhomogeneous flows due to the presence of roughness elements in the lower part of the TBL. Comparing the velocities over different rough surfaces, the spatially averaged streamwise fluctuating velocity \(\left\langle {\overline{{u^{\prime\prime}u^{\prime\prime}}} } \right\rangle^{1/2}\), vertical fluctuating velocity \(\left\langle {\overline{{w^{\prime\prime}w^{\prime\prime}}} } \right\rangle^{1/2}\) and momentum flux \(\left\langle {\overline{{u^{\prime\prime}w^{\prime\prime}}} } \right\rangle\) in the near-wall region increase with widening separation among the roughness elements, reach a plateau (over the rib-type array of AR = 1/8 and the cube-type array of h:4l), and finally decrease with further increases in separation between roughness elements. Quadrant analyses and frequency spectra show that flow entrainment and air removal are enhanced over rougher surfaces. Larger-scale motions of turbulence also effectuate roof-level ventilation over urban areas.
Britter RE, Hanna SR (2003) Flow and dispersion in urban areas. Annu Rev Fluid Mech 35:469–496
Bruun H (1971) Interpretation of a hot wire signal using a universal calibration law. J Phys E Sci Instrum 4:225
Castro IP, Cheng H, Reynolds R (2006) Turbulence over urban-type roughness: deductions from wind-tunnel measurements. Bound Layer Meteorol 118:109–131
Cermak JE (1981) Wind tunnel design for physical modelling of atmospheric boundary layers. J Eng Mech Div ASCE 108:523–642
Cheng H, Castro IP (2002a) Near wall flow over urban-like roughness. Bound Layer Meteorol 104:229–259
Cheng H, Castro IP (2002b) Near-wall flow development after a step change in surface roughness. Bound Layer Meteorol 105:411–432
Cheng H, Hayden P, Robins AG, Castro IP (2007) Flow over cube arrays of different packing densities. J Wind Eng Ind Aerodyn 95:715–740
El-Gabry LA, Thurman DR, Poinsatte PE (2014) Procedure for determining turbulence length scales using hotwire anemometry. Technical Report NASA/TM—2014-218403, NASA Glenn Research Center, Cleveland, OH, United States
Ho YK, Liu CH (2017) A wind tunnel study of flows over idealised urban surfaces with roughness sublayer corrections. Theor Appl Climatol 130(1–2):305–320
Jiménez J (2004) Turbulent flows over rough walls. Annu Rev Fluid Mech 36:173–196
Kozmar H (2010) Scale effects in wind tunnel modelling of an urban atmospheric boundary layer. Theor Appl Climatol 100:153–162
Krogstad PA, Antonia R, Browne L (1992) Comparison between rough- and smooth-wall turbulent boundary layers. J Fluid Mech 245:599–617
Liu CH, Ng T, Wong CCC (2015) A theory of ventilation estimate over hypothetical urban areas. J Hazard Mater 296:9–16
Lu S, Willmarth W (1973) Measurements of the structure of the Reynolds stress in a turbulent boundary layer. J Fluid Mech 60(3):481–511
Manes C, Poggi D, Ridolfi L (2011) Turbulent boundary layers over permeable walls: scaling and near-wall structure. J Fluid Mech 687:141–170
Mo Z, Liu CH (2018) Wind tunnel measurements of pollutant plume dispersion over hypothetical urban areas. Build Environ. 132:357–366
NI (2018) National Instruments, Austin, TX. http://www.ni.com
Oke TR (1988) Street design and urban canopy layer climate. Energy Build 11:103–113
Perry AE, Lim KL, Henbest SM (1987) An experimental study of the turbulence structure in smooth- and rough-wall boundary layers. J Fluid Mech 177:437–466
Placidi M, Ganapathisubramani B (2015) Effects of frontal and plan solidities on aerodynamic parameters and the roughness sublayer in turbulent boundary layers. J Fluid Mech 782:541–566
Pope SB (2000) Turbulent flows. Cambridge University Press, Cambridge
Raupach MR, Antonia RA, Rajagopalan S (1991) Rough-wall turbulent boundary layers. Appl Mech Rev 44(1):1–25
Salizzoni P, Soulhac L, Mejean P, Perkins RJ (2008) Influence of a two-scale surface roughness on a neutral turbulent boundary layer. Bound Layer Meteorol 127:97–110
Storey BD (2002) Computing Fourier series and power spectrum with Matlab. TEX paper
Wallace JM (2016) Quadrant analysis in turbulence research: history and evolution. Annu Rev Fluid Mech 48:131–158
Wallace JM, Brodkey RS, Eckelmann H (1972) The wall region in turbulent shear flow. J Fluid Mech 54:39–48
ZM performed the experiments and drafted the manuscript. CHL performed data interpretation and drafted the manuscript. Both authors read and approved the final manuscript.
The data are available from the corresponding author on reasonable request.
This study is partly supported by the General Research Fund (GRF) 17210115 of The Hong Kong Research Grants Council (RGC).
Department of Mechanical Engineering, The University of Hong Kong, 7/F, Haking Wong Building, Pokfulam Road, Hong Kong, China
Ziwei Mo & Chun-Ho Liu
Correspondence to Chun-Ho Liu.
Mo, Z., Liu, C. Wind tunnel measurements of turbulent boundary layer flows over arrays of ribs and cubes. Geosci. Lett. 5, 16 (2018). https://doi.org/10.1186/s40562-018-0115-x
Momentum fluxes
Turbulent flows
Wind tunnel laboratory experiments
Asian Urban Meteorology and Climate | CommonCrawl |
Independent Segregation of Chromosomes
Dec. 4, 2012, 7:08 a.m. by Rosalind Team
Topics: Heredity, Probability
Mendel's Work Examined
Mendel's laws of heredity were initially ignored, as only 11 papers have been found that cite his paper between its publication in 1865 and 1900. One reason for Mendel's lack of popularity is that information did not move quite so readily as in the modern age; perhaps another reason is that as a friar in an Austrian abbey, Mendel was already isolated from Europe's university community.
It is fair to say that no one who did initially read Mendel's work fully believed that traits for more complex organisms, like humans, could be broken down into discrete units of heredity (i.e., Mendel's factors). This skepticism was well-founded in empirical studies of inheritance, which indicated a far more complex picture of heredity than Mendel's theory dictated. The friar himself admitted that representing every trait with a single factor was overly simplistic, and so he proposed that some traits are polymorphic, or encoded by multiple different factors.
Yet any hereditary model would ultimately be lacking without an understanding of how traits are physically passed from organisms to their offspring. This physical mechanism was facilitated by Walther Flemming's 1879 discovery of chromosomes in salamander eggs during cell division, followed by Theodor Boveri's observation that sea urchin embryos with chromatin removed failed to develop correctly (implying that traits must somehow be encoded on chromosomes). By the turn of the 20th century, Mendel's work had been rediscovered by Hugo de Vries and Carl Correns, but it was still unclear how Mendel's hereditary model could be tied to chromosomes.
Fortunately, Walter Sutton demonstrated that grasshopper chromosomes occur in matched pairs called homologous chromosomes, or homologs. We now know that the DNA found on homologous chromosomes is identical except for minor variations attributable to SNPs and small rearrangements, which are typically insertions and deletions. Sutton himself, working five decades before Watson & Crick and possessing no real understanding of DNA, actually surmised that variations to homologous chromosomes should somehow correspond to Mendel's alleles.
Yet it still remained to show how chromosomes themselves are inherited. Most multicellular organisms are diploid, meaning that their cells possess two sets of chromosomes; humans are included among diploid organisms, having 23 homologous chromosome pairs.
Gametes (i.e., sex cells) in diploid organisms form an exception and are haploid, meaning that they only possess one chromosome from each pair of homologs. During the fusion of two gametes of opposite sex, a diploid embryo is formed by simply uniting the two gametes' halved chromosome sets.
Mendel's first law can now be explained by the fact that during meiosis each gamete randomly selects one of the two available alleles of the particular gene.
Mendel's second law follows from the fact that gametes select nonhomologous chromosomes independently of each other; however, this law will hold only for factors encoded on nonhomologous chromosomes, which leaves open the inheritance of factors encoded on homologous chromosomes.
Consider a collection of coin flips. One of the most natural questions we can ask is: if we flip a coin 92 times, what is the probability of obtaining 51 "heads", vs. 27 "heads", vs. 92 "heads"?
Each coin flip can be modeled by a uniform random variable in which each of the two outcomes ("heads" and "tails") has probability equal to 1/2. We may assume that these random variables are independent (see "Independent Alleles"); in layman's terms, the outcomes of the two coin flips do not influence each other.
A binomial random variable $X$ takes a value of $k$ if $n$ consecutive "coin flips" result in $k$ total "heads" and $n-k$ total "tails." We write that $X \in \mathrm{Bin}(n, 1/2)$.
Given: A positive integer $n \leq 50$.
Return: An array $A$ of length $2n$ in which $A[k]$ represents the common logarithm of the probability that two diploid siblings share at least $k$ of their $2n$ chromosomes (we do not consider recombination for now).
Sample Dataset
5
Sample Output
0.000 -0.004 -0.024 -0.082 -0.206 -0.424 -0.765 -1.262 -1.969 -3.010
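A minimal Python sketch of one way to solve this follows; it assumes the number of shared chromosomes is Bin(2n, 1/2), since each of the 2n chromosomes is shared exactly when both siblings inherit the same homolog, which happens with probability 1/2. The function name and printing format are our own choices, and the printed values should match the sample output up to rounding in the final digit.

from math import comb, log10

def binomial_tail_logs(n):
    # P(X = j) for X ~ Bin(2n, 1/2)
    m = 2 * n
    pmf = [comb(m, j) / 2 ** m for j in range(m + 1)]
    # log10 P(X >= k) for k = 1..2n
    return [log10(sum(pmf[k:])) for k in range(1, m + 1)]

# the "+ 0.0" avoids printing "-0.000" for the first entry
print(" ".join("%.3f" % (round(v, 3) + 0.0) for v in binomial_tail_logs(5)))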
\usepackage{natbib}
\usepackage{amsfonts}
\newcommand{\del}{\mathbf{\nabla}}
\renewcommand{\d}{\partial}
\renewcommand{\vec}[1]{\mathbf{#1}}
\setlength\parindent{0pt}
\title{\textbf{Computer Based Modeling 1} \\ Part 2 – Fluid Flow \\ MENG11511}
\author{Harry \textsc{Morgan}}
\date{April 22, 2016}
The objective of the set exercise was to determine the result of using Laplace's equation to approximate the nature of fluid flow through a complex chamber. The application was simulated and visualised by producing a streamline plot for a 2-D system using a relaxation method.
\setcounter{tocdepth}{2}
\addcontentsline{toc}{chapter}{Contents}
\addcontentsline{toc}{chapter}{List of Figures}
\par Laplace's Equation, $\del^2 u=0$\cite{notes}, is a second-order partial differential equation capable of being adapted to provide an accurate approximation of various properties within different systems. This report will address the method and outcome of applying a relaxation method to the simulation of fluid flow through a chamber obstructed by several cylinders across its path. The simulation was modelled in MathWorks MATLAB\textsuperscript{\textregistered} and produces a visible representation of the streamlines in the system. The efficiency of this method will also be considered through the analysis of a convergence plot, where the size of the error is seen.\\
\par The aim of the simulation is to obtain, through iteration, a reasonable approximation of the shape and velocity of the flow around the cylinders. The height and length of the chamber are either specified by the user or take preset values stated at the beginning of the code. Due to the simple nature of the problem, the depth of the chamber remains unspecified and the problem is analysed in two dimensions. For simplicity, any drag between the fluid and the surfaces of the cylinders and boundaries is excluded from the calculation. The flow in this problem is considered to be entirely ideal and laminar.\\
\par Whilst preset values are included in this code, it was also important that the user could apply their own variables.
\section{Methodology}
\subsection{User Option}
\par The code begins with the option to use default variables, which are later defined, or to set custom ones. This is put in place by giving the user the option, upon running the code, of \textit{``Use Preset Variables? yes/no''} appearing in the command window. An input of \textit{yes} into the command window will run the code with defined variables for the initial velocity of flow, the resolution in the x and y directions, the channel dimensions, and the position and size of the cylinders. A \textit{no} response will bring up instructions to input the desired values in the command window. This is achieved with an \textbf{if} statement, which determines which part of the code should be executed following the corresponding yes/no response. An \textbf{elseif} component allows the code to be restarted if the question is answered incorrectly, in which case the user is shown an \textit{``error please retry''} message. In this code the user has the option to set the circles one-by-one, rather than defining all of the respective characteristics in separate arrays (i.e. an array for the x values of circle centres, y values of circle centres and radius of each circle), as shown below, in order to allow each circle to be defined as a separate entity, providing a more intuitive input sequence for the user.
\begin{figure}[h]
\begin{verbatim}
U=1;
X=4;
Y=2;
dx=0.0125;
dy=0.0125;
C1=[0.5,0.7,0.2];
C2=[1,1.4,0.3];
C6=[2.3,1.6,0.2];
\end{verbatim}
\caption{Default Settings for Code}
\label{streamplot}
\end{figure}
\subsection{Defining Required Values and Matrices}
\par Following this loop, the circle values are put into corresponding arrays, depending on their characteristic, in order to improve ease of access from later parts of the code. The resolutions, determined at the start of the code, are used to make arrays of values progressing in increments equal to this resolution, up to the relative length/width of the system. The vector size of this is then calculated and used to create a zeros matrix of the correct size in which to begin inputting stream function values. Before any physics is used, the arrays created are used to create a mesh grid, which will allow the selection of each element within the created matrix.
\subsection{Boundary Conditions}
The first instance of the use of stream flow follows this, with the input of the stream function \texttt{psi\_y = (U*Y).'}. In the code, this uses the earlier created y vector with increments of \texttt{dy}, which is then transposed to give a column vector that can be used for the creation of boundary conditions.
\begin{figure}[h]
$\psi_y=\left[\begin{array}{l}
0 \\
0.5\\
1\\
\end{array}\right]$
\caption{Example $\psi_y$ Transpose if $dy=0.5$}
\end{figure}
The while loop is preceded by the definition of the starting iteration as zero, and of the chosen tolerance and error. The tolerance and error determine at what point of convergence the solution can be considered correct. The boundary conditions are created, in this case, inside the while loop, in order to keep them constant throughout iterations. Both the walls and the cylinders cannot be penetrated, and hence can be considered to have constant $\psi$ values within their bodies and upon their surfaces; these are, however, independent of each other. The bottom wall, at $y=0$, can be set to a zero value. The top, at the maximum value, is defined by the result of $U \cdot Y$ at the maximum $Y$ value; for the preset values of $Y=2$ and $U=1$, this gives a maximum stream function value of 2. The cylinders are handled slightly differently, in order to increase the speed of the code. Rather than approaching each circle individually, a for loop is used to determine the mean value of $\psi$ within each circle. This is achieved by iterating the sequence in Figure 3 for each circle, taking values from the arrays defined earlier.
The left and right of the chamber are defined identically, increasing at steady intervals. These are created by selecting the desired column of the $\psi$ matrix and inputting the transposed $\psi_y$ vector. These stated conditions provide a $\psi$ matrix of unchanging values for these parts of the simulation.
\begin{figure}[!ht]
\begin{verbatim}
for i = 1:NC
    % indices of grid points inside circle i
    cyl = find(sqrt((x_array-Xcirc(1,i)).^2 + ...
                    (y_array-Ycirc(1,i)).^2) <= Rcirc(1,i));
    % hold the stream function constant over the cylinder body
    psi(cyl) = mean(psi(cyl));
end
\end{verbatim}
\caption{Circle Loop}
\end{figure}
\subsection{Iteration}
\par In order to fill the rest of the matrix and create streamlines, $\psi$ must be computed throughout the domain and updated according to the effect of the cylinders and boundaries. This is called a relaxation method. Initially, this was achieved using a \textbf{for} loop but, in the interest of increasing the speed and efficiency of the code, this section was vectorised\cite{matlab} over the values at each increment in $dx$ and $dy$. This section again used Laplace's equation, in its discretised (finite-difference) form $\psi_{i,j}=\tfrac{1}{4}\left(\psi_{i+1,j}+\psi_{i-1,j}+\psi_{i,j+1}+\psi_{i,j-1}\right)$, which holds when $dx=dy$.
\par The following few lines of code reset the $\psi$ values to those calculated by Laplace's equation; through iteration, these become more accurate and closer to the correct solution. In order to see this converge and eventually bring the loop to an end, the error value must be updated during each loop. To conclude the loop, the iteration count is incremented and a vector is created, in order to produce a convergence plot with the updated error, which is also vectorised. The loop ends only when the error falls below a preset tolerance, in this case 0.01.
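\par For illustration outside MATLAB, a minimal Python sketch of the same Jacobi-style relaxation is given below. It assumes $dx=dy$, applies only the wall and inlet/outlet boundary conditions, and omits the cylinders and convergence bookkeeping, so it demonstrates the vectorised update rule rather than reproducing the full program.
\begin{verbatim}
import numpy as np

U, X, Y, h = 1.0, 4.0, 2.0, 0.0125          # preset chamber and grid spacing
ny, nx = int(Y / h) + 1, int(X / h) + 1
psi = np.zeros((ny, nx))
psi[-1, :] = U * Y                          # top wall
psi[:, 0] = psi[:, -1] = U * np.linspace(0.0, Y, ny)  # inlet and outlet

err, tol = 1.0, 0.01
while err > tol:
    old = psi.copy()
    # vectorised five-point average over the interior (Laplace's equation)
    psi[1:-1, 1:-1] = 0.25 * (old[2:, 1:-1] + old[:-2, 1:-1]
                              + old[1:-1, 2:] + old[1:-1, :-2])
    err = np.abs(psi - old).max()
\end{verbatim}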
\subsection{Plotting}
\par After the loop, all that remains is the plotting stage. Four plots were chosen to help the user visualise the various characteristics of the code as well as the fluid flow. First the figure is created: a full-screen display with 4 subplots. The contour of $\psi$ values is plotted to show streamlines, with 50 streamlines, as appropriate to the size of the sub-plot. The axes were made equal in scale, to show a proportional simulation, and the circles were plotted clearly, to show the location of the cylinders and help emphasise the fluid flowing around them. The streamlines were made black, as opposed to a range of colours, to present the results more clearly.
\par The second sub-plot is a mesh grid, presenting the $\psi$ values at each point on the 2-D grid. This is shown with colour variation to improve the clarity of the results and make it easier to visualise the circles. It also helps highlight the behaviour of Laplace's equation, in the averaging of points across each cylinder's cross-section.
\par The third is the convergence plot, comparing the number of iterations to the size of the error produced. Initially, a plot with linear axes was produced; the rapid convergence made this hard to interpret, hence the requirement for a log scale on the x axis.
\par A quiver plot is produced to emphasise the results of the streamline plot, showing the direction and magnitude of the flow's velocity. Due to the resolution of the simulation, the arrows are small, so zooming in may be necessary.
\subsection{Error Conditions}
\par Finally, some error conditions are included, in case custom values would cause a problem in the production of figures. A limit is put on how high the resolution can be, by restricting how small the dx and dy values can be (below 0.0125 is prohibited). It was considered that an error message might be useful if the circles were outside the grid; however, this would have little effect on the fluid flow.
\subsection{GUI}
\par As an extension, the code was placed into a custom-made GUI. This allows for a more user-friendly interface, letting the user press one of 3 buttons to produce the same four graphs. The first, default, inputs the default settings into the code; the second allows the input of custom settings; the third closes the GUI.
\section{Results and Discussion}
\subsection{Contour Streamline Plot}
\includegraphics[width=\textwidth]{streamplot.png}
\caption{Contour Plot of Streamlines in Chamber}
The above plot shows the simulated streamlines around the cylinders obstructing the flow. Due to the boundary conditions, the input flow is identical to the output flow, and the flow does not cross any boundaries. This potentially shows an accurate approximation of laminar flow in this scenario. The flow is always fully attached to the cylinder surface and there is no boundary-layer separation. As this occurs independently of speed, the implicit assumption is that the fluid is infinitely viscous\cite{nasa}, which is of course impossible. Due to this, and the likelihood of some turbulence due to skin-friction drag at the boundaries, this model is unlikely to be effective in simulating any real-life problems with flow at significant velocity.
\subsection{Mesh Grid}
\includegraphics[width=10cm]{meshgrid.png}
\caption{Meshgrid of psi(streamfunction values) in the Chamber}
\label{meshgrid}
The mesh grid plot, whilst less useful than the streamline plot, does help the user to visualise the effect of Laplace's equation in the code. In the results from the run code, the ability to rotate the grid enables visualisation of the averaged-out results over the circles and the varying $\psi$ values at each point in the system.
\subsection{Iteration vs Error}
\includegraphics[width=10cm]{iter.png}
\caption{Log of Iteration Count against Error Size }
\label{iter}
The convergence of an iteration method like this is important in determining the accuracy of the approximation when the relaxation method is implemented. The results above show a very steep curve, even when a log scale is introduced, suggesting very rapid initial convergence. However, a very large number of iterations is required to reduce the error below the set tolerance level. This shows the method to be highly effective at gaining rough estimations of fluid flow, yet it heads asymptotically towards the correct value, hence there must be a point where the user deems the approximation ample for describing the flow.
\subsection{Quiver Plot}
\includegraphics[width=10cm]{quiver.png}
The quiver plot is not dissimilar to the contour plot analysed earlier. This plot, however, shows not only the direction of the flow but also its velocity at each point, obtained from spatial gradients of the stream function. This would be effective in presenting the flow, but, due to the high resolution of the code, the arrows appear very small and are not easy to interpret. If zoomed in, however, they effectively support the results of the contour plot.
\includegraphics[width=10cm]{gui.png}
The GUI is an effective way of making the interface more intuitive and, as seen above, offers the user the option to use, or not use, the default settings. It also provides an easy option to close the program. These options are presented as push buttons which, once pushed, run the code appropriate to their action. The GUI presents the results in an attractive layout, allowing the user to analyse the effectiveness and accuracy of the relaxation method on the problem at hand.
\section{Coding Problems and Potential Modifications}
The main issue with the code is the time it takes to run. This was addressed by vectorising certain loops to prevent iterations occurring at an unnecessary frequency. In terms of further improvements, an interpolation method would decrease the error from an earlier stage and reduce the need for further iteration, as would methods such as the successive over-relaxation method\cite{relaxation}. There are also potentially more built-in functions in MATLAB which may help decrease the need for iteration\cite{ox}. In terms of improving the experience for the user, a pop-up box with input fields could be added to eliminate the obscure use of the command window. This could be initially filled with the preset values, to indicate to the user the suggested dimensions and the correct method of inputting values.
\section{Conclusion}
\begin{itemize}
\item The GUI is a useful way of presenting all the information to the user, but would be improved by the addition of an options pop-up box.
\item The contour plot provides an effective simulation of how fluid would flow at very low velocity or high viscosity, but would not provide accurate solutions to real-world problems where flow is not entirely ideal and laminar.
\item The accuracy of the plot converges rapidly but is unlikely to ever reach an exact value.
\item The speed of the program increases as more elements are vectorised, and would increase further if interpolation were introduced.
\item The program can be adapted by the user to fit custom values for speed, dimensions, resolution and obstructions, making it versatile and applicable to different problems.
\item For the model to be entirely accurate, other aspects, such as frictional forces and irregularities, must be considered.
\end{itemize}
\medskip
\bibliographystyle{unsrt}
\bibliography{sample}
\end{document}
How to determine what group a Galois group is isomorphic to
Consider $x^{4}-2=(x+\sqrt[4]{2})(x-\sqrt[4]{2})(x+i\sqrt[4]{2})(x-i\sqrt[4]{2}) \in \mathbb{Q}[x]$. Let $K=\mathbb{Q}(\sqrt[4]{2},i)$ be the splitting field of $x^{4}-2$. Since $K$ is a splitting field and we are in characteristic 0, it follows that $K/\mathbb{Q}$ is Galois. Finally, $[K\colon\mathbb{Q}]=8$.
I want to compute the Galois group $K/\mathbb{Q}$. Since the extension is Galois, there are 8 elements in this group. It turns out that this group is isomorphic to the dihedral group of order 8 (I've seen examples of this but don't have a reference).
What steps would I take to reach the conclusion that this group is the dihedral group?
Specifically I know that each automorphism of the Galois group permutes the roots, but I don't see how to make the connection that these elements are the same as the dihedral group of order 8. I would appreciate a detailed analysis because I think this would allow me to apply these techniques to many other problems in Galois theory. Thanks.
abstract-algebra galois-theory dihedral-groups
Edison
\sqrt[n]{a} produces $\sqrt[n]{a}$. – Arturo Magidin Apr 20 '12 at 5:21
You can find in Lang's book on Algebra the study of extensions constructed by adjoining an $n$th root such as your $K/\mathbb Q$, and in particular the determination of their Galois groups, but if I recall correctly he only goes into detail for the case in which $n$ is odd, which is simpler. – Mariano Suárez-Álvarez Apr 20 '12 at 6:07
\root n\of a also produces $\root n\of a$. – Gerry Myerson Apr 20 '12 at 6:13
@GerryMyerson, that's really a plaintex-ism, which should probably be avoided in LaTeX sources. – Mariano Suárez-Álvarez Apr 20 '12 at 6:22
@Mariano, I'm a PlainTeX kind of guy. Perhaps this comment thread isn't the place to discuss it, but I'd like to know why Plain constructs should be avoided here - they seem to be supported. – Gerry Myerson Apr 20 '12 at 6:51
Since $x^4-2$ is irreducible over $\mathbb{Q}$ (Eisenstein's Criterion at $p=2$, for example), the Galois group is transitive on the four roots. There is an automorphism that maps $\sqrt[4]{2}$ to $i\sqrt[4]{2}$; call it $\rho$. This map either sends $i$ to $i$, or $i$ to $-i$.
If it maps $i$ to $i$, then consider $\sigma$, complex conjugation. We have that $\sigma\rho$ maps $i$ to $-i$ and $\sqrt[4]{2}$ to $-i\sqrt[4]{2}$, whereas $\rho\sigma$ maps $i$ to $-i$ but $\sqrt[4]{2}$ to $i\sqrt[4]{2}$. So $G$ is not abelian.
If $\rho$ maps $i$ to $-i$, then $\sigma\rho$ maps $\sqrt[4]{2}$ to $-i\sqrt[4]{2}$ and $i$ to $i$. Taking $(\sigma\rho)^3$ we obtain a map that sends $i$ to $i$ and $\sqrt[4]{2}$ to $i\sqrt[4]{2}$, so we are back in the previous case. So either way, $G$ is not abelian.
That means that it is either dihedral, or the quaternion group of order $8$ (the only nonabelian groups of order $8$). But in the quaternion group, the only element of order $2$ is central (it is $-1$), whereas complex conjugation, which is of order $2$ in the Galois group, is not central, as we just saw. That means that the Galois group must be the dihedral group of order $8$. Alternatively, in the quaternion group there is a single element of order $2$; but $G$ has at least two: complex conjugation and $\rho^2$. So $G$ must be dihedral, not quaternion.
Explicitly, our map $\rho$ that sends $\sqrt[4]{2}$ to $i\sqrt[4]{2}$ and $i$ to $i$ is of order $4$; complex conjugation $\sigma$ is of order $2$; now note that $\sigma\rho=\rho^3\sigma$ (both map $i$ to $-i$, and $\sqrt[4]{2}$ to $-i\sqrt[4]{2}$). This gives you explicitly the dihedral structure: $G$ contains $\langle \sigma,\rho\mid \sigma^2=\rho^4=1,\ \sigma\rho=\rho^3\sigma\rangle$, which is of order $8$, so $G$ is this group, which is the dihedral group of order $8$.
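For concreteness, writing $a=\sqrt[4]{2}$ and taking $\rho(a)=ia$, $\rho(i)=i$, and $\sigma$ to be complex conjugation, the eight automorphisms act as follows (this is just a tabulation of the relations above, so each line can be checked directly): $$\begin{array}{c|cc} & a \mapsto & i \mapsto \\ \hline 1 & a & i \\ \rho & ia & i \\ \rho^2 & -a & i \\ \rho^3 & -ia & i \\ \sigma & a & -i \\ \rho\sigma & ia & -i \\ \rho^2\sigma & -a & -i \\ \rho^3\sigma & -ia & -i \end{array}$$ For example, $\sigma\rho$ sends $a \mapsto -ia$ and $i \mapsto -i$, which is the row for $\rho^3\sigma$, confirming the relation $\sigma\rho=\rho^3\sigma$.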
Arturo Magidin
You don't say whether you know what the automorphisms are. If you do, you should be able to find automorphisms $f$ and $g$ with $f^4=1$, $g^2=1$, and $fg=gf^{-1}$, and that (together with knowing that $f$ and $g$ generate the group) is a presentation of the dihedral group.
${{\mathit H}^{0}}$ SIGNAL STRENGTHS IN DIFFERENT CHANNELS
The ${{\mathit H}^{0}}$ signal strength in a particular final state ${{\mathit x}}{{\mathit x}}$ is given by the cross section times branching ratio in this channel normalized to the Standard Model (SM) value, $\sigma $ $\cdot{}$ B( ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit x}}{{\mathit x}}$ ) $/$ ($\sigma $ $\cdot{}$ B( ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit x}}{{\mathit x}}$ ))$_{{\mathrm {SM}}}$, for the specified mass value of ${{\mathit H}^{0}}$ . For the SM predictions, see DITTMAIER 2011 , DITTMAIER 2012 , and HEINEMEYER 2013A. Results for fiducial and differential cross sections are also listed below.
${{\mathit \tau}^{+}}{{\mathit \tau}^{-}}$ Final State
$\bf{ 1.15 {}^{+0.16}_{-0.15}}$ OUR AVERAGE
$1.09$ ${}^{+0.18}_{-0.17}$ ${}^{+0.26}_{-0.22}$ ${}^{+0.16}_{-0.11}$ 1
2019 AQ
ATLS ${{\mathit p}}{{\mathit p}}$ , 13 TeV, ${{\mathit H}}$ $\rightarrow$ ${{\mathit \tau}}{{\mathit \tau}}$
2019 AF
CMS ${{\mathit p}}{{\mathit p}}$ , 13 TeV
$1.11$ ${}^{+0.24}_{-0.22}$ 3, 4
TEVA ${{\mathit p}}$ ${{\overline{\mathit p}}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit X}}$ , 1.96 TeV
CMS ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit W}}$ / ${{\mathit H}^{0}}{{\mathit Z}}$ , ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \tau}}{{\mathit \tau}}$ , 13 TeV
2019 AT
$0.98$ $\pm0.18$ 9
CMS ${{\mathit p}}{{\mathit p}}$ , 7, 8, 13 TeV
ATLS ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit W}}$ / ${{\mathit Z}}{{\mathit X}}$ , 8 TeV
$1.44$ ${}^{+0.30}_{-0.29}$ ${}^{+0.29}_{-0.23}$ 11
$1.43$ ${}^{+0.27}_{-0.26}$ ${}^{+0.32}_{-0.25}$ $\pm0.09$ 12
2015 AH
ATLS ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit X}}$ , 7, 8 TeV
CMS ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit X}}$ , 7, 8 TeV
CDF ${{\mathit p}}$ ${{\overline{\mathit p}}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit X}}$ , 1.96 TeV
ABAZOV
D0 ${{\mathit p}}$ ${{\overline{\mathit p}}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit X}}$ , 1.96 TeV
2012 AI
ATLS ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit X}}$ , 7 TeV
1 AABOUD 2019AQ use 36.1 fb${}^{-1}$ of data. The first, second and third quoted errors are statistical, experimental systematic and theory systematic uncertainties, respectively. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV and corresponds to 4.4 standard deviations. Combining with 7 TeV and 8 TeV results (AAD 2015AH), the observed significance is 6.4 standard deviations. The cross sections in the ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \tau}}{{\mathit \tau}}$ decay channel (${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV) are measured to $3.77$ ${}^{+0.60}_{-0.59}$ (stat) ${}^{+0.87}_{-0.74}$ (syst) pb for the inclusive, $0.28$ $\pm0.09$ ${}^{+0.11}_{-0.09}$ pb for VBF, and $3.1$ $\pm1.0$ ${}^{+1.6}_{-1.3}$ pb for gluon-fusion production. See their Table XI for the cross sections in the framework of simplified template cross sections.
2 SIRUNYAN 2019AF use 35.9 fb${}^{-1}$ of data. ${{\mathit H}^{0}}{{\mathit W}}$ /${{\mathit Z}}$ channels are added with a few updates on gluon fusion and vector boson fusion with respect to SIRUNYAN 2018Y. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV and corresponds to 5.5 standard deviations. The signal strengths for the individual production modes are: $1.12$ ${}^{+0.53}_{-0.50}$ for gluon fusion, $1.13$ ${}^{+0.45}_{-0.42}$ for vector boson fusion, $3.39$ ${}^{+1.68}_{-1.54}$ for ${{\mathit W}}{{\mathit H}^{0}}$ and $1.23$ ${}^{+1.62}_{-1.35}$ for ${{\mathit Z}}{{\mathit H}^{0}}$ . See their Fig. 7 for other couplings (${{\mathit \kappa}_{{V}}}$, ${{\mathit \kappa}_{{f}}}$).
3 AAD 2016AN perform fits to the ATLAS and CMS data at $\mathit E_{{\mathrm {cm}}}$ = 7 and 8 TeV. The signal strengths for individual production processes are $1.0$ $\pm0.6$ for gluon fusion, $1.3$ $\pm0.4$ for vector boson fusion, $-1.4$ $\pm1.4$ for ${{\mathit W}}{{\mathit H}^{0}}$ production, $2.2$ ${}^{+2.2}_{-1.8}$ for ${{\mathit Z}}{{\mathit H}^{0}}$ production, and $-1.9$ ${}^{+3.7}_{-3.3}$ for ${{\mathit t}}{{\overline{\mathit t}}}{{\mathit H}^{0}}$ production.
4 AAD 2016AN: In the fit, relative production cross sections are fixed to those in the Standard Model. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125.09 GeV.
5 AALTONEN 2013M combine all Tevatron data from the CDF and D0 Collaborations with up to 10.0 fb${}^{-1}$ and 9.7 fb${}^{-1}$, respectively, of ${{\mathit p}}{{\overline{\mathit p}}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 1.96 TeV. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV.
6 SIRUNYAN 2019AF use 35.9 fb${}^{-1}$ of data. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV and corresponds to 2.3 standard deviations.
7 SIRUNYAN 2019AT perform a combine fit to 35.9 fb${}^{-1}$ of data at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. This combination is based on SIRUNYAN 2018Y.
8 SIRUNYAN 2018Y use 35.9 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125.09 GeV and corresponds to 4.9 standard deviations.
9 SIRUNYAN 2018Y combine the result of 35.9 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV with the results obtained from data of 4.9 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV (KHACHATRYAN 2015AM). The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125.09 GeV and corresponds to 5.9 standard deviations.
10 AAD 2016AC measure the signal strength with ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit W}}$ / ${{\mathit Z}}{{\mathit X}}$ processes using 20.3 fb${}^{-1}$ of $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV.
11 AAD 2016K use up to 4.7 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and up to 20.3 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125.36 GeV.
12 AAD 2015AH use 4.5 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 20.3 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The third uncertainty in the measurement is theory systematics. The signal strength for the gluon fusion mode is $2.0$ $\pm0.8$ ${}^{+1.2}_{-0.8}$ $\pm0.3$ and that for vector boson fusion and ${{\mathit W}}$ $/$ ${{\mathit Z}}{{\mathit H}^{0}}$ production modes is $1.24$ ${}^{+0.49}_{-0.45}{}^{+0.31}_{-0.29}$ $\pm0.08$. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125.36 GeV.
13 CHATRCHYAN 2014K use 4.9 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV and 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV. See also CHATRCHYAN 2014AJ.
15 ABAZOV 2013L combine all D0 results with up to 9.7 fb${}^{-1}$ of ${{\mathit p}}{{\overline{\mathit p}}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 1.96 TeV. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV.
16 AAD 2012AI obtain results based on 4.7 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 7 TeV. The quoted signal strengths are given in their Fig. 10 for ${\mathit m}_{{{\mathit H}^{0}}}$ = 126 GeV. See also Fig. 13 of AAD 2012DA.
17 CHATRCHYAN 2012N obtain results based on 4.9 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$=7 TeV and 5.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$=8 TeV. The quoted signal strength is given for ${\mathit m}_{{{\mathit H}^{0}}}$=125.5 GeV. See also CHATRCHYAN 2013Y .
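As an aside on how listed values can be combined, the short sketch below computes a plain inverse-variance weighted mean. It symmetrizes errors by hand, ignores correlations and scale factors, and uses made-up inputs, so it only illustrates the idea behind an average line and is not the PDG averaging procedure.

import numpy as np

def weighted_average(values, errors):
    # inverse-variance weighted mean and its uncertainty
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, np.sum(w) ** -0.5

mu, sigma = weighted_average([1.09, 0.98], [0.31, 0.18])  # illustrative inputs
print("%.2f +/- %.2f" % (mu, sigma))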
AABOUD 2019AQ
PR D99 072001 Cross-section measurements of the Higgs boson decaying into a pair of $\tau$-leptons in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
SIRUNYAN 2019AF
JHEP 1906 093 Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at $\sqrt{s}=$ 13 TeV via Higgs boson decays to $\tau$ leptons
SIRUNYAN 2019AT
EPJ C79 421 Combined measurements of Higgs boson couplings in proton–proton collisions at $\sqrt{s}=13\,\text{TeV}$
SIRUNYAN 2018Y
PL B779 283 Observation of the Higgs boson decay to a pair of $\tau$ leptons with the CMS detector
AAD 2016AC
PR D93 092005 Search for the Standard Model Higgs Boson Produced in Association with a Vector Boson and Decaying into a Tau Pair in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 8 TeV with the ATLAS Detector
AAD 2015AH
JHEP 1504 117 Evidence for the Higgs-Boson Yukawa Coupling to tau Leptons with the ATLAS Detector
CHATRCHYAN 2014K
JHEP 1405 104 Evidence for the 125 GeV Higgs Boson Decaying to a Pair of ${{\mathit \tau}}$ Leptons
AALTONEN 2013M
PR D88 052014 Higgs Boson Studies at the Tevatron
ABAZOV 2013L
PR D88 052011 Combined Search for the Higgs Boson with the ${D0}$ Experiment
AAD 2012AI
PL B716 1 Observation of a New Particle in the Search for the Standard Model Higgs Boson with the ATLAS Detector at the LHC
CHATRCHYAN 2012N
PL B716 30 Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC
Pythagorean identity of Cosecant and Cotangent functions
$\csc^2{\theta}-\cot^2{\theta} \,=\, 1$
The identity states that subtracting the square of the cotangent of an angle from the square of its cosecant equals one; this is called the Pythagorean identity of the cosecant and cotangent functions.
In trigonometry, the cosecant and cotangent are two functions that have a direct relationship in square form, derived from the Pythagorean theorem. Hence, the relationship between the cosecant and cotangent functions in square form is called the Pythagorean identity of the cosecant and cotangent functions.
Let $\Delta BAC$ be a right triangle whose angle is denoted by the symbol theta ($\theta$).
The cosecant and cotangent functions are written as $\csc{\theta}$ and $\cot{\theta}$ respectively.
In the same way, their squares are written as $\csc^2{\theta}$ and $\cot^2{\theta}$ respectively in mathematical form.
Subtracting the cotangent squared of an angle from the cosecant squared of the same angle is equal to one, and this is called the Pythagorean identity of the cosecant and cotangent functions.
The Pythagorean identity of cosecant and cot functions is also written popularly in two other forms.
$\csc^2{x}-\cot^2{x} \,=\, 1$
$\csc^2{A}-\cot^2{A} \,=\, 1$
Remember that the angle of a right triangle can be denoted by any symbol, but the relation between the cosecant and cotangent functions must then be written in terms of that symbol.
Learn how to prove the Pythagorean identity of cosecant and cot functions in mathematical form by geometrical method.
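A compact version of that proof uses only the Pythagorean theorem. With hypotenuse $h$, opposite side $o$ and adjacent side $a$ for the angle $\theta$ (and assuming $\sin{\theta} \neq 0$, so that $o \neq 0$):
$$h^2 = o^2+a^2 \;\Rightarrow\; \frac{h^2}{o^2} = 1+\frac{a^2}{o^2} \;\Rightarrow\; \csc^2{\theta} = 1+\cot^2{\theta} \;\Rightarrow\; \csc^2{\theta}-\cot^2{\theta} = 1$$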
Network meta-analysis combining individual patient and aggregate data from a mixture of study designs with an application to pulmonary arterial hypertension
Howard HZ Thom1,
Gorana Capkun2,
Annamaria Cerulli2,
Richard M Nixon2 and
Luke S Howard3
© Thom et al.; licensee BioMed Central. 2015
Received: 26 May 2014
Accepted: 16 February 2015
Network meta-analysis (NMA) is a methodology for indirectly comparing, and strengthening direct comparisons of, two or more treatments for the management of disease by combining evidence from multiple studies. It is sometimes not possible to perform treatment comparisons as evidence networks restricted to randomized controlled trials (RCTs) may be disconnected. We propose a Bayesian NMA model that allows the inclusion of single-arm, before-and-after, observational studies to complete these disconnected networks. We illustrate the method with an indirect comparison of treatments for pulmonary arterial hypertension (PAH).
Our method uses a random effects model for placebo improvements to include single-arm observational studies into a general NMA. Building on recent research for binary outcomes, we develop a covariate-adjusted continuous-outcome NMA model that combines individual patient data (IPD) and aggregate data from two-arm RCTs with the single-arm observational studies. We apply this model to a complex comparison of therapies for PAH combining IPD from a phase-III RCT of imatinib as add-on therapy for PAH and aggregate data from RCTs and single-arm observational studies, both identified by a systematic review.
Through the inclusion of observational studies, our method allowed the comparison of imatinib as add-on therapy for PAH with other treatments. This comparison had not been previously possible due to the limited RCT evidence available. However, the credible intervals of our posterior estimates were wide so the overall results were inconclusive. The comparison should be treated as exploratory and should not be used to guide clinical practice.
Our method for the inclusion of single-arm observational studies allows the performance of indirect comparisons that had previously not been possible due to incomplete networks composed solely of available RCTs. We also built on many recent innovations to enable researchers to use both aggregate data and IPD. This method could be used in similar situations where treatment comparisons have not been possible due to restrictions to RCT evidence and where a mixture of aggregate data and IPD are available.
Individual patient data
Covariate adjustments
Observational evidence
Mixed treatment comparison
Decision making bodies for national health care providers, such as the National Institute for Health and Care Excellence (NICE) for the NHS in England and Wales or the Pharmaceutical Benefits Advisory Committee (PBAC) in Australia, have a need to consider all available treatments when making recommendations for clinical practice. There is rarely a single definitive study comparing these treatments and it is often necessary to synthesise the best available evidence to come to a decision [1].
Network meta-analysis (NMA) for indirect mixed treatment comparisons is a generalization of standard meta-analysis, which is used to combine the results of multiple studies, to the comparison of more than two treatments. It has become a well-established methodology for evidence synthesis [2,3] and is routinely used and recommended by NICE [4,5]. The gold standard of evidence to be included in an NMA is the randomized controlled trial (RCT), which includes a control arm and whose population is randomized to reduce bias and improve precision. The results are usually available from the literature only as aggregate data. Access to individual patient data (IPD), when available, can be used to understand the relationship between covariates and outcomes [6,7]. Methods for the inclusion of IPD in pairwise meta-analysis have been developed by Sutton et al. [8] and Riley et al. [9,10], and these were extended to the network meta-analysis of binary outcomes by Saramago et al. [7] and Donegan et al. [6]. This model can easily be adapted to continuous outcomes and provides a covariate-adjusted NMA model combining IPD and aggregate data.
One of the requirements to perform an NMA is to have a connected network [4], which can be challenging when not enough RCTs are available, as illustrated in Figure 1 for the case of a NICE technology assessment of follicular lymphoma [11]. This is often a problem for new indications in small populations or orphan diseases [12]. However, a decision on the most appropriate treatment is still needed, and including non-randomized studies to complete the network and conduct the comparison is a potential solution [13]. A commonly available type of non-randomized study is the single-arm observational study, or before-and-after study [14], in which outcomes in a group of patients are investigated before and after an intervention.
Example of a disconnected network from network meta-analysis of first-line treatments for stage III-IV follicular lymphoma.
Several methods have been proposed to incorporate such observational studies [15]. One approach is the three-level hierarchical model which allows the incorporation of evidence from many different study designs [16,17]. An example of such a model consists of an overall effect for each treatment j, which can be labelled d j . Treatment effects for each different type of study, such as an RCT effect φj1, a before-and-after study effect φj2, and a case-control study effect φj3, could then be normally distributed around this overall effect. At the bottom level of the hierarchy are the individual study effects δ jki for each treatment j, study type k, and study i, which could be normally distributed around the study type treatment effects φ jk . This approach has the advantage of keeping the inference from each type of trial separate but is not applicable in cases where the number of studies per study type per treatment is small.
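Written out, one version of this three-level structure is (the variance symbols here are illustrative rather than taken from the cited papers):
$$ \varphi_{jk} \sim N\left(d_j, \sigma^2_{type}\right), \qquad \delta_{jki} \sim N\left(\varphi_{jk}, \sigma^2_{study}\right) $$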
An alternative approach to including observational studies, and thus connecting the network, is that of propensity scores, which are the probabilities that a patient would be given a particular treatment on the basis of their background characteristics [18-20]. These probabilities are often estimated using logistic regression. However, a different propensity score model is required for each treatment, and a great many studies are therefore required for each model. This is a particular drawback if IPD is not available for most of the treatments. Another disadvantage is the difficulty of incorporating propensity scores into the existing covariate-adjusted NMA models.
A final alternative for including observational studies in disconnected networks is the method of constructing empirical priors informed by these observational studies [15,21]. These empirical priors inform parameter estimation via:
$$ P\left(\theta \mid Data\right) \propto L\left(\theta \mid RCTs\right)\times {\left[L\left(\theta \mid Obs\right)\right]}^{\alpha }P\left(\theta \right) $$
where the L(θ|RCTs) is the likelihood on the basis of the RCT evidence, L(θ|Obs) is the likelihood on the basis of the observational evidence, P(θ) is the prior, and α is a parameter representing the strength given to the observational evidence. If α = 1, for example, the observational evidence would be given the same weight as the RCT evidence. This approach shares the advantage of the hierarchical method in that it explicitly separates the RCT and observational evidence but also shares the disadvantage of the propensity scores method that it is difficult to merge with existing NMA models.
The method we choose to build upon is the construction of control arms for before-and-after studies by matching their baseline characteristics to those of control arms in included RCTs [20,22]. This analysis of covariance method uses regression models to estimate the effect of treatments not included in the study. We adapted this in a natural fashion to covariate-adjusted NMA models through an assumption of exchangeability (random effects) on the placebo effects of study arms. A similar approach was originally applied to meta-analysis [23] and has recently been proposed for the construction of baseline natural history models in NMA [24]. However, recent work has been critical of placing random effects on the trial-level baseline improvements, namely the placebo effect [24,25], as it interferes with the randomization of the RCTs and learns across trial information. Despite these concerns, cases such as the application we will discuss necessitate this approach, as it would otherwise not be possible to compare the treatments of interest due to the disconnectedness of the evidence network.
Illustrative example: mixed treatment comparison of combination therapies for pulmonary arterial hypertension
We will illustrate our method for the inclusion of before-and-after studies in a mixed treatment comparisons of therapies for pulmonary arterial hypertension (PAH). PAH is a rare disease characterised by progressive elevation of pulmonary vascular resistance leading to right heart failure and death [26]. Current treatments include endothelin receptor antagonists (ERA), phosphodiesterase-5 inhibitors (PDE5i), and prostacyclin analogues (Pr) [27]. These drugs are often used in combination to try to improve outcomes [28,29]. The anticancer therapy imatinib is an oral therapy which has also recently been studied in PAH and its use is of interest to clinicians. No systematic comparison of available monotherapies and combination therapies for PAH has been conducted and, in particular, imatinib as add-on to other combination therapies has not been investigated. Imatinib was being evaluated as an alternative to prostacyclins as additional therapy for patients on a combination of ERA and PDE5i. The comparison of imatinib with prostacyclins for this patient group was not previously possible on the basis of direct evidence or through indirect NMA comparisons restricted to RCTs and we thus took it as our primary objective for treatment comparison. However, our comparison should be viewed as illustrative and should not be used to guide clinical practice as the evidence is indirect and the analysis relies on a number of model assumptions that were necessary to facilitate the comparison.
Our primary evidence base for the NMA was the IPD from the IMPRES trial [30]. This was a randomized placebo-controlled Phase-III trial to investigate the efficacy and safety of imatinib as an add-on to combination therapy for the treatment of PAH. The study included patients with severe PAH who were receiving two or more PAH-specific treatments. Patients were initially on one of four combination treatments, namely ERA + PDE5i, ERA + Pr, PDE5i + Pr, or ERA + PDE5i + Pr; they were randomized within these background treatment groups to either imatinib or placebo and were followed up for at least 24 weeks. Patient group characteristics are reported in Table 1, which exhibits heterogeneity in baseline characteristics between randomized groups. The high dropout rates in this trial reflect the severity of the disease and the side-effects of the treatments.
Summary statistics of patients in IMPRES RCT
Baseline therapy
Add-on Therapy
Size¶
Drop out§
Mean 6MWD improvement
Mean age
Prop male
Mean STATUS
Mean 6MWD baseline
Mean PVR
ERA + PDE5i
Placebo*
2.54 (16.25)
43.7 (14.27)
ERA + Pr
ERA + PDE5i + Pr
−8.27 (10.47)
33.37 (11.54)
PDE5i + Pr
40 (14.59)
*ERA is any endothelin receptor antagonist, PDE5i is phosphodiesterase 5 inhibitor, and Pr is prostacyclins (oral, inhaled, intravenous or subcutaneous).
¶ Group size was number of patients taking 6MWD test at baseline and 24 weeks.
§ Dropout is number of patients dropping out of the study between baseline and 24 weeks. Dropout due to death, adverse events, consent withdrawal, protocol deviation, abnormal laboratory result, administrative error, or adverse reaction to study drug.
The continuous outcome of short term change in 6 minute walk distance (6MWD) from baseline, in meters, is used in licensing decisions by agencies such as the Food and Drug Administration [31] and as a result is the primary outcome in nearly all Phase-III trials in PAH. Although adjusting final 6MWD for baseline 6MWD is the recommended approach when analysing trial results [32], this was not possible as we had only aggregate data from most of the studies and many only reported the change outcome. We therefore chose change in 6MWD as our efficacy measure. Short term was defined as 12 weeks to 1 year, as clinical opinion was that patients would derive maximal benefit from treatments within 12 weeks.
Six covariates of interest were identified by a mixture of exploratory analysis of the IMPRES data and expert clinical opinion. The covariates identified were: the age at baseline (AGE), an indicator for whether a patient is male (SEX), the 6MWD at baseline (WALK), the World Health Organization functional class (STATUS), adapted from the New York Heart Association classification, and pulmonary vascular resistance (PVR). STATUS categorises the severity of PAH into one of four increasingly severe categories, ranging from no limitation of activity and no symptoms with ordinary physical activity to marked limitation of activity and symptoms with any activity, even at rest. PVR is a measure of the resistance of the pulmonary vasculature, calculated as the pressure drop across the pulmonary vascular bed divided by the pulmonary blood flow. Means of the covariates were used for the aggregate data.
Systematic literature review of studies in the literature
The results of a systematic literature review were available and were used to identify a network of studies to be included in the analysis. In this review, the MEDLINE® and EMBASE® databases were searched simultaneously. Patient Intervention Comparator Outcome Study type (PICOS) [33] criteria were followed and the quality assessment was performed according to the NICE checklist for RCTs [34]. Details of the PICOS terms are included in Additional file 1. Search terms included a combination of free-text and thesaurus terms relevant to PAH, ERA, prostacyclins, PDE5i, and RCTs, although case-control and cohort studies were also included. The Cochrane Central Register of Controlled Trials was also searched using a similar strategy. The relevance of each citation identified from the databases was assessed on the basis of title and abstract according to the PICOS criteria. As we wanted to explore the effects of covariates, studies that did not report two or more of the 6 covariates of interest were excluded, while IMPRES data were used to perform single imputation when only one covariate was missing. From this review, we identified and included 5 monotherapy [35-39] and 4 combination therapy [28,40-42] RCTs, summary statistics for which are provided in Table 2 and Table 3, respectively. Additionally, 6 before-and-after studies investigating monotherapies and combination therapies were included [43-48] and their summary statistics are reported in Table 4. PRISMA flowcharts are provided for the systematic searches in Figure 2 and Figure 3 and a PRISMA checklist is provided in Additional file 2 [49]. Although there were substantial differences across trials in the doses of the administered treatments, as recorded in Tables 2, 3 and 4, clinical opinion was that their effects would be comparable.
Details of included monotherapy RCTs*
Badesch 2002
Rubin 2002 (BREATHE-1)
Barst 2006 (STRIDE-2)
Barst 1996
Galie 2005 (SUPER-1)
Pr (iv ep)
PDE5i
Treatment dose
62.5 mg bosentan twice daily, increased to 125 mg twice daily after 4 weeks.
62.5 mg bosentan twice daily, increased to either 125 mg or 250 mg twice daily after 4 weeks.
mean dose of intravenous epoprostenol 9.2 ng/kg/min
80 mg sildenafil orally 3 times daily
Patients at end of trial
Change 6MWD
−6 (50.5)
−8 (9.5)
−6.5 (9.2)
−15 (33)
Baseline 6MWD
Sex (% male)
*ERA are endothelin receptor antagonists, PDE5i are phosphodiesterase 5 inhibitors, Pr are prostacyclin analogues. iv ep is intravenous epoprostenol.
Details of included combination therapy RCTs*
Barst 2011 (PHIRST-1)
Simonneau 2008 (PACES)
McLaughlin 2006 (STEP)
Humbert 2004 (BREATHE-2)
Pr (iv epo)
Pr (inh ilp)
ERA+ Pr (iv epo)
40 mg tadalafil once daily
3x20mg sildenafil daily, increased to 40 and 80 at 4 week intervals
5 μg inhaled iloprost
62.5 mg bosentan twice daily, increased to 125 mg twice daily after 4 weeks. Intravenous epoprostenol started at 2 ng/kg/min and increased up to 14 ± 2 ng/kg/min after 16 weeks.
40.2 (8.5)
1 (5.3)
348.9 (6.2)
323.04§
*E are endothelin receptor antagonists, P5 are phosphodiesterase 5 inhibitors, Pr are prostacyclin analogues. iv ep is intravenous epoprostenol, inh ilp is inhaled iloprost.
§Imputed based on linear model for baseline 6MWD with covariates for Age, Sex, mean STATUS, and mean right arterial pressure.
Details of included observational studies*
Jacobs 2009
Akagi 2008
Channick 2006
Hoeper 2003
Mathai 2007
Prostacyclin analogue
intravenous epoprostenol and subcutaneous treprostinil
intravenous epoprostenol
inhaled treprostinil
oral beraprost and inhaled iloprost
10-20 ng/kg/min subcutaneous treprostinil, 38.4 end of observation. 6-8 ng/kg/min intravenous epoprostenol, 16.2 end of observation
62.5 mg bosentan twice daily
6 on 30 mcg inhaled treprostinil 4 daily 6 on 45 mug 4 daily.
20 mg sildenafil (up to 100 mg included) once daily
Patients at end of study
957.8¶
*ERA are endothelin receptor antagonists, PDE5i are phosphodiesterase 5 inhibitors, Pr are prostacyclin analogues. iv ep is intravenous epoprostenol, inh ilp is inhaled iloprost, sc trep is subcutaneous treprostinil.
¶Imputed from linear model for PVR based on Age, right arterial pressure, MPAP and cardiac output.
PRISMA flowchart for selection of monotherapy and combination therapy RCTs.
PRISMA flowchart for selection of monotherapy and combination therapy observational studies.
Only two-arm RCTs were identified, with each arm involving the addition of some treatment or placebo to a group of patients who were either treatment-naïve or on some baseline treatment. In these studies, included patients had been on the baseline treatment for a period before randomization (e.g., ERA for at least 4 months prior to randomization [42]), which was assumed sufficient to derive maximal benefit from the baseline treatment. This assumption implies that any improvement was due to the additional treatment or the placebo effect.
The before-and-after studies were single-arm observational studies which reported the 6MWD of a group of patients on a particular background therapy before and after administering a new treatment. For example, Mathai et al. [47] studied the effect of initiating additional PDE5i therapy on a group of patients already on ERA monotherapy, thus providing evidence on the additional benefit of adding PDE5i over ERA alone.
The final evidence network of observational studies and RCTs for the NMA is shown in Figure 4. The treatment effects are labelled $\beta_i$ and are the expected short-term improvements in 6MWD. Arrow directions indicate the interpretation of these parameters; e.g., a positive $\beta_2$ means that prostacyclins are more effective than placebo. The network of primary interest is highlighted in bold, and the comparison of primary interest, $\beta_8 - \beta_6$, the effectiveness of imatinib against prostacyclins as an add-on to ERA + PDE5i, is highlighted by a bold, dashed, indirect link. This illustrates the necessity of including observational evidence, as this network would be disconnected had it been restricted to RCTs. Although this indirect comparison could have been conducted with evidence from only the IMPRES and Jacobs et al. studies, the inclusion of a wider range of evidence strengthens our estimates of the covariate adjustments and of the short-term placebo improvements in 6MWD in PAH patients. The following sections explain our development of a NMA model to estimate the parameters $\beta_i$ by synthesizing all available evidence. As only two-arm RCTs and single-arm observational studies were identified, the models we develop will not be designed for trials with more than two arms. This model development is summarized in Table 5.
Network of evidence for comparison of effectiveness of monotherapies and combination therapies for PAH. E are endothelin receptor antagonists, P5 are phosphodiesterase-5 inhibitors, Pr are prostacyclin analogues. Obs indicates that the study is observational, while all others are RCTs. IPD was only available for the IMPRES trial.
Summary of NMA models used for comparison of treatment combinations for PAH synthesising aggregate data from the literature and IPD from IMPRES study
M1 (aggregate data only): aggregates the IPD and includes observational data through random effects on change from baseline 6MWD.
M2 (aggregate data and IPD): extends M1 to combine aggregate data and IPD.
M3: extends M2 to include covariate adjustments on change from baseline 6MWD.
M4: extends M3 to include interactions between treatment effect and covariates.
M5: same as M4 but SEs in observational studies are inflated by a factor of 10 to downweight their evidence.
M6: same as M4 but a control arm is constructed for the observational studies in which all patients are assumed to deteriorate by 25 m from baseline in 6MWD.
Model M1: network meta-analysis of aggregate data from RCTs and observational studies
The first model we considered was a simple network meta-analysis of aggregated data from the IMPRES study and aggregate data from the literature. The mean short term change in 6MWD for each study i and arm j, \( {\overline{Y}}_{ij} \), was modelled as:
$$ {\overline{Y}}_{ij}\sim \mathrm{N}\left({\alpha}_i+{\theta}_{ij},\,{\mathrm{SE}}_{ij}^{2}\right) $$
where $\mathrm{SE}_{ij}$ is the standard error of the observed change in 6MWD in arm $j$ of study $i$. It should be noted that this parameterization is slightly different to that used in other network meta-analyses [15,50], as we are using a trial-level placebo effect $\alpha_i$ in combination with a trial-level effect of treatment, $\theta_{ij}$. The placebo effect is the mean improvement in 6MWD that a group of patients would experience if they entered trial $i$ and received only placebo, in addition to their background therapy, and is assumed to be the same for each of the arms of the trial. The effect of treatment in arm $j$ is a linear combination of the effects of additional treatments initiated in that arm at the start of the trial:
$$ {\theta}_{ij}\sim N\left({f}_{ij}\left(\boldsymbol{\beta} \right),{\sigma}_{\beta}^2\right); $$
where $\theta_{ij}$ is the effect of the additional treatment initiated in the $j$th arm of the $i$th study, $\boldsymbol{\beta}$ is the vector of treatment effects, and $f_{ij}$ is a linear function with coefficients $+1$ or $-1$.
In Equation (2), random effects with common variance \( {\sigma}_{\beta}^2 \) were placed on the treatment effects θ ij in the ith study and jth arm, as they were assumed to be exchangeable and independent. The entries of the treatment effects vector β are the treatment effect parameters β l , which we assumed to be fixed effects. For arms receiving only a placebo, it was assumed that θ iC = 0 so that the improvement in 6MWD is only the placebo effect α i . Two-arm trials with no placebo arm would have mean improvements of α i + θi1 and α i + θi2, where θi1 and θi2 are the effect of the treatment combinations in the first and second arms, respectively.
Observational studies consist of only one arm and their inclusion required an assumption about their α i. We assumed that these α i, the placebo improvements in each trial (which subsume the placebo effect), would be exchangeable across trials. In the model above, we expressed this by placing a Normal random effect with common mean α and variance \( {\sigma}_{\alpha}^2 \) on the α i s:
$$ {\alpha}_i\sim N\left(\alpha, {\sigma}_{\alpha}^2\right) $$
This use of random effects enables evidence from all RCT and before-and-after observational studies to estimate the expected change in 6MWD. Note that this assumption possibly interferes with randomization as the α i will be drawn towards the mean α and thus the treatment effects β may be biased. An alternative would be to treat the α i as fixed effects [24,25,51] and thus preserve randomization, but this would not allow the inclusion of before-and-after studies.
The linear functions f ij () were almost always single values, e.g. β6 for Jacobs et al., as the only additional treatment was prostacyclin analogues [43]. In the BREATHE-2 study [40], labelled study i for convenience, arm j = 1 was a treatment-naïve group started on bosentan (ERA) and intravenous epoprostenol (Pr), while arm j = 2 was a treatment-naïve group started on bosentan (ERA) alone. This was represented by the functions:
$$ {f}_{i1}\left(\boldsymbol{\beta} \right)={\beta}_1+{\beta}_3 $$
$$ {f}_{i2}\left(\boldsymbol{\beta} \right)={\beta}_1 $$
which could be read from Figure 4. The β i are our analogues of the basic parameters in the standard indirect treatment comparison model described in Dias et al. [5], while f ij (β) are our analogues of the functional parameters.
The choice of priors for α and the β l s was based on the assumption that no patient would change their walking distance by more than 400 meters, which implied a standard deviation of 200 meters. Assuming that the smallest study had at least 10 patients, this gave \( SE=\raisebox{1ex}{$200$}\!\left/ \!\raisebox{-1ex}{$\sqrt{10}$}\right. \) and therefore a prior variance for effects on the mean of SE² = 4000. We represented these prior beliefs via Normal distributions, which were judged appropriate in the context of changes in 6MWD through exploratory analysis of the IMPRES data and expert clinical opinion. For \( {\sigma}_{\beta}^2 \) and \( {\sigma}_{\alpha}^2 \), the vague assumptions that σ β ≤ 50 meters and σ α ≤ 50 meters were used, which expressed the belief that individual patients would not differ from the mean improvement in 6MWD by more than 100 meters. Following the recommendation of Lambert et al. [52], uniform priors representing this belief were placed on the standard deviations. These considerations gave the priors:
$$ \alpha \sim N\left(0,4000\right) $$
$$ {\beta}_l\sim N\left(0,4000\right) $$
$$ {\upsigma}_{\upbeta}\sim U\left(0,50\right) $$
$$ {\upsigma}_{\upalpha}\sim U\left(0,50\right) $$
which completed the specification of an NMA model for aggregate data only.
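To make the likelihood structure concrete, the following is a minimal sketch of model M1 in Python with PyMC, using invented toy data (two two-arm RCTs plus one single-arm observational study). The data values, variable names and network coefficients are assumptions for illustration only; the authors' actual implementation is the WinBUGS code provided in Additional file 3.

```python
import numpy as np
import pymc as pm

# Toy data, invented for illustration: five arms in total.
ybar   = np.array([5.0, 35.0, 4.0, 30.0, 41.0])   # mean change in 6MWD (m) per arm
se     = np.array([5.0,  6.0, 5.0,  6.0, 38.0])   # standard error of each arm mean
study  = np.array([0, 0, 1, 1, 2])                # study index of each arm
is_trt = np.array([0, 1, 0, 1, 1])                # 0 = placebo arm, so theta_iC = 0
# +1/-1/0 coefficient matrix encoding f_ij(beta) for two basic parameters
C = np.array([[0, 0],
              [1, 0],
              [0, 0],
              [0, 1],
              [0, 1]])

with pm.Model():
    alpha_mu = pm.Normal("alpha_mu", mu=0.0, sigma=4000 ** 0.5)  # alpha ~ N(0, 4000)
    sd_alpha = pm.Uniform("sd_alpha", lower=0.0, upper=50.0)
    sd_beta  = pm.Uniform("sd_beta", lower=0.0, upper=50.0)
    alpha_i  = pm.Normal("alpha_i", mu=alpha_mu, sigma=sd_alpha, shape=3)
    beta     = pm.Normal("beta", mu=0.0, sigma=4000 ** 0.5, shape=2)
    theta    = pm.Normal("theta", mu=pm.math.dot(C, beta), sigma=sd_beta, shape=5)
    # Placebo arms contribute alpha_i only; treated arms add their theta.
    pm.Normal("ybar_obs", mu=alpha_i[study] + is_trt * theta,
              sigma=se, observed=ybar)
    idata = pm.sample(2000, tune=1000)
```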
Model M2: network meta-analysis of IPD and aggregate data from RCTs and observational studies
We extended the aggregate data model described by Equation (1) in Section 2.1 to include individual patient data through the relation:
$$ {Y}_{ijk} \sim N\left({\alpha}_i+{\theta}_{ij},{\sigma}^2\right) $$
for the change in 6MWD for patient k of arm j and study i, where σ2 is a common variance parameter to be fit to the data. Although in general we would use a separate σ2, with a subscript, for each IPD trial, we have dropped the subscript to simplify the notation as our application only includes a single IPD trial. The treatment effects and placebo effects were as in the aggregate data model M1:
$$ {\theta}_{ij} \sim N\left({f}_{ij}\left(\boldsymbol{\beta} \right),{\sigma}_{\beta}^2\right) $$
Normal prior distributions were again assumed for the means of the Normal distributions and Uniforms were placed on the standard deviations. As in the specification of priors for α and the β l s in model M1, we reasoned that if a patient was assumed not to have an improvement exceeding 400 meters, the standard deviation should be 200 meters and the variance therefore 40000. These assumptions resulted in the priors:
$$ {\beta}_l\sim N\left(0,40000\right) $$
$$ \sigma \sim U\left(0,50\right). $$
As the evidence for the treatment effects β l came from both individual patient and aggregate (mean) level data, the 'vaguer' prior was used. The priors for the placebo effect α and for the standard deviations σ α and σ β were kept the same as in the aggregate data model of Section 2.1. This was appropriate as they have the same meaning in both the IPD and aggregate data models.
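In the M1 sketch above, this extension amounts to one extra likelihood term inside the same model block; the names y_ipd and ipd_arm below are hypothetical, standing for the patient-level outcomes and the arm index of each patient in the single IPD trial.

```python
    # Hypothetical M2 extension, placed inside the pm.Model() block sketched
    # above for M1: a patient-level likelihood for the single IPD trial.
    # y_ipd holds one change-from-baseline 6MWD value per patient, and
    # ipd_arm maps each patient to a row of the arm-level arrays above.
    # (Per the text, M2 also widens the beta priors to N(0, 40000).)
    sigma = pm.Uniform("sigma", lower=0.0, upper=50.0)  # common within-arm SD
    pm.Normal("y_ipd_obs",
              mu=alpha_i[study[ipd_arm]] + is_trt[ipd_arm] * theta[ipd_arm],
              sigma=sigma, observed=y_ipd)
```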
Model M3: across-study and within-study covariate adjustments on the placebo effect
To account for across-study heterogeneity, we extended the model to include covariate adjustments on the placebo effects, the α i s. A further advantage was that these adjustments for differences in the patient populations led to better assessments of the placebo improvement in the single-arm before-and-after studies due to their better explanation of the heterogeneity. We also adjusted for heterogeneity within the studies, which is between-patient heterogeneity, for which we had IPD. The model was defined for a mean covariate \( {\overline{X}}_{ij} \) and individual covariate X ijk as follows:
$$ {\overline{Y}}_{ij} \sim N\left({\alpha}_i+\varphi {\overline{X}}_{ij}+{\theta}_{ij},{SE}_{ij}^{2}\right) $$
$$ {Y}_{ijk} \sim N\left({\alpha}_i+\varphi {\overline{X}}_{ij}+\pi \left({X}_{ijk}-{\overline{X}}_{ij}\right)+{\theta}_{ij},{\sigma}^2\right) $$
$$ {\theta}_{ij}\sim N\left({f}_{ij}\left(\boldsymbol{\beta} \right),{\sigma}_{\beta}^2\right) $$
The last two equations are as in models M1 and M2. In this model, φ was the effect of the mean and accounted for across-study differences, while π was the effect of an individual's covariate and accounted for within-study differences.
Note that the difference between π and φ in Equation (8) quantifies ecological bias, a bias that arises when the effect of the mean of a covariate is different from the effect of the covariate itself; if π = φ then there would be no ecological bias.
Priors for α, β l , σ, σ α , and σ β were as in the model with no covariate adjustments of Section 2.2, while a vague Normal distribution for mean effects was used for φ and a vague Normal distribution for individual effects was used for π:
$$ \varphi \sim N\left(0,4000\right) $$
$$ \pi \sim N\left(0,40000\right) $$
which completed the NMA model combining IPD and aggregate data with covariate adjustments on the placebo effect.
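As a small numerical illustration of how the two coefficients act (all numbers below are invented), the study-mean covariate enters through φ while a patient's deviation from that mean enters through π, so a gap between the two is exactly what the ecological-bias remark above describes:

```python
import numpy as np

# Invented values: phi is the across-study (mean) effect and pi_ the
# within-study (individual) effect of a covariate such as AGE.
phi, pi_ = 0.5, -0.9
xbar_ij = 45.0                         # mean covariate in arm j of study i
x_ijk = np.array([40.0, 45.0, 50.0])   # individual patients' covariates
# Contribution added to alpha_i + theta_ij in the M3 likelihood:
adjustment = phi * xbar_ij + pi_ * (x_ijk - xbar_ij)
print(adjustment)   # [27.  22.5 18. ]; phi != pi_ signals ecological bias
```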
Model M4: within-study covariate adjustments on treatment effects
Our final extension was to include covariate adjustments for the effect of patient characteristics on the efficacy of treatments, the β l s in the models. Such a model would be useful for predicting efficacy and evaluating cost-effectiveness in patient subgroups with specific baseline characteristics. As only a small number of studies were available in our example for each treatment effect, it was not practical to account for across-study heterogeneity. We therefore restricted treatment effect covariate adjustments to the within-study level, and thus to only the treatment effect of imatinib for which IPD was available. The model was defined as
$$ {Y}_{ijk} \sim N\left({\alpha}_i+\varphi {\overline{X}}_{ij}+\pi \left({X}_{ijk}-{\overline{X}}_{ij}\right)+{\theta}_{ijk},{\sigma}^2\right) $$
$$ {\theta}_{ijk}\sim N\left({f}_{ij}\left(\boldsymbol{\beta} +\boldsymbol{\gamma} \left({X}_{ijk}-{\overline{X}}_{ij}\right)\right),{\sigma}_{\beta}^2\right) $$
where Equations (7) and (14) are modifications of Equations (8) and (2) to include patient-specific treatment effects. The elements γ l of γ were the effects of the covariate on the treatment effect β l . The linear functions f ij () therefore acted on linear combinations of the treatment effects β and their covariate adjustments γ.
The same priors as before were used for α, β l , σ, σ α , σ β , φ and π, while a Normal distribution for individual patient-level effects was used for the γ l , i.e.
$$ {\gamma}_l\sim N\left(0,40000\right) $$
This completed the specification of an NMA model for combining IPD and aggregate data from RCTs and observational studies with covariate adjustments on the placebo effect and on the treatment effect of imatinib. The models described in these sections are summarized in Table 5 and we applied them to the PAH example.
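For intuition, the interaction simply shifts a treatment effect by the patient's centred covariate value; a tiny sketch with invented numbers:

```python
import numpy as np

# Invented values: beta_8 is the treatment effect of imatinib and gamma_8 its
# interaction with a covariate; the patient-specific effect entering f_ij is
# beta_8 + gamma_8 * (x - xbar), as in the M4 treatment-effect model.
beta_8, gamma_8 = 39.0, 0.5
xbar = 45.0
x = np.array([40.0, 45.0, 50.0])       # individual patients' covariates
print(beta_8 + gamma_8 * (x - xbar))   # [36.5 39.  41.5]
```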
Covariate selection via DIC-based forward stepwise selection
Model M4 potentially includes covariates at three different levels and the full model space can be quite large. In our PAH example there are 6 possible covariates, so a total of 2^18 possible models. Although a model that includes all of these covariates would be highly adjustable to populations in which predictions are desired, it is necessary to avoid overfitting to the data. To avoid overfitting and produce robust predictions, we use the Deviance Information Criterion (DIC, [53]). This is a predictive criterion that balances fit and complexity. It is computationally infeasible to investigate the full model space so we instead apply DIC-based forward stepwise selection [54,55]. This allows us to search through the space of models using the following steps:
1. The initially chosen model has no covariates.
2. Fit extended models, each adding one extra covariate to the currently chosen model.
3. Choose the minimum-DIC model from among the original and extended models.
4. Return to step 2, stopping when no extension reduces the DIC.
Initially, for the PAH example with 6 covariates of interest, Step 2 involves a search of 18 possible models (the 6 covariates at each of the 3 levels). The second time through involves 17 possible models, and so on. This leads to a maximum of 18 + 17 + … + 1 = 171 models to search, which is computationally feasible.
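A schematic sketch of this greedy search is given below; fit_model is a hypothetical stand-in for fitting the NMA (in our case in WinBUGS) and returning its DIC, so everything here is illustrative rather than the authors' actual selection code.

```python
def forward_stepwise(candidates, fit_model):
    """DIC-based forward stepwise selection over candidate covariate terms.

    candidates: set of covariate terms (here 18: 6 covariates x 3 levels).
    fit_model: callable taking a set of included terms and returning its DIC.
    Fits the empty model plus at most 18 + 17 + ... + 1 = 171 extensions,
    instead of all 2**18 = 262,144 possible models.
    """
    chosen = set()
    best_dic = fit_model(chosen)                      # step 1: no covariates
    while True:
        trials = {cov: fit_model(chosen | {cov})      # step 2: add one at a time
                  for cov in candidates - chosen}
        if not trials or min(trials.values()) >= best_dic:
            return chosen, best_dic                   # stop: no extension lowers DIC
        best = min(trials, key=trials.get)            # step 3: minimum-DIC model
        chosen.add(best)
        best_dic = trials[best]                       # step 4: return to step 2
```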
All results presented here are from an implementation of the models described in Section 2, and summarized in Table 5, in the WinBUGS [56] software package. This is a Windows-based software package for Bayesian inference using Gibbs sampling. The code for these models is provided in Additional file 3 and the authors are happy to respond to any queries about its use. All results were sampled from 250 000 iterations of a single Markov chain Monte Carlo (MCMC) chain following a burn-in of 100 000 iterations. We also sampled a second chain from alternative initial values and confirmed that 250 000 iterations were sufficient for convergence on the basis of the Gelman-Rubin statistic [57].
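For reference, the convergence check can be reproduced in a few lines; the sketch below implements the standard textbook form of the Gelman-Rubin statistic (potential scale reduction factor) for a single scalar parameter, not the authors' exact diagnostic code.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor; chains has shape (n_chains, n_iter)."""
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)               # values near 1 indicate convergence

rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(2, 250_000))))  # ~1.0 for well-mixed chains
```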
Results of models M1 and M2: NMA with no covariate adjustments
Summary statistics of the posterior distributions of the placebo and treatment effects, on the scale of change in 6MWD in meters, for the comparison of imatinib against prostacyclin analogues as add-on to ERA and PDE5i from model M1, described in Section 2.1, are presented in Table 6 and Figure 5. This NMA combined only summary statistics from the IMPRES trial and did not make use of the available IPD. The posterior means and 95% credible intervals are comfortably within the prior ranges specified in Section 2.1. The summary of the placebo effect α implies that a randomly selected group of patients would be expected to have a mean 6MWD improvement of 4.78 meters, and for this mean to lie within the range of -4.8 and 14.6 meters with a probability of 95%, were they to enter a placebo arm of one of the studies. This is not unreasonable on the basis of the means and standard errors of the observed changes in 6MWD in the control arms of the RCTs, reported in Table 2 and Table 3. The wide and inconclusive 95% credible intervals for the treatment effects and comparison are indicative of the weakness of the evidence. Also provided in Table 6 and Figure 5 are the results of model M2, described in Section 2.2, which combined the available aggregate data with IPD from the IMPRES study. The means of the posterior distributions do not change very much, but the credible intervals for the parameters based on IPD from IMPRES, the α i (imatinib to ERA + PDE5i) and the treatment effect β8, shrink. This reduction in the width of the credible intervals is due to the complex interaction between the vague priors and the different parameterisation of model M2 from M1, and is not due to any improvement in the use of the evidence. Even vague priors are somewhat informative, and this is illustrated by the reduction in the credible intervals.
Results of four network meta-analyses: based on only aggregate data; combining IPD and aggregate data with no covariate adjustments; combining IPD and aggregate data with covariate adjustments for individual patient AGE, baseline STATUS and baseline PVR at within-study level; using the covariate adjusted IPD and aggregate data model with the observational studies down-weighted by inflating their standard errors by a factor of 10; using the covariate adjusted IPD and aggregate data model with constructed control arms for observational studies with an assumed deterioration of 25 meters in 6MWD
IPD and aggregate with no covariate adjustments
IPD and aggregate with covariate adjustments
Down-weighted observational studies
Constructed control arms
α i , Jacobs 2009
4.77 (-18.60,29.12)
4.55 (-19.07, 29.45)
−1.10 (-22.54, 17.77)
α i , (imatinib to ERA + PDE5i)
3.55 (-9.78, 17.12)
α (mean of α i s)
0.02 (-8.74, 8.78)
π 1 (AGE)
−0.90 (-1.52, -0.27)
π 3 (STATUS)
−11.89 (-28.20, 4.44)
π 5 (PVR)
β 6 (Pr to ERA + PDE5i)
35.98 (-44.15, 114.70)
22.64 (-168.30, 217.00)
β 8 (imatinib to ERA + PDE5i)
39.09 (2.72, 75.80)
39.91 (11.31, 68.36)
(β 8 − β 6 ) imatinib v Pr to ERA + PDE5i
Results sampled from 250 000 iterations following a burn-in of 100 000.
Forest plot of mean and 95% credible interval of posterior distribution for difference in treatment effect of imatinib and Pr given to patients on combination of ERA and PDE5i, on scale of short term change in 6MWD from baseline in meters.
Results of models M3 and M4: NMA of IPD and aggregate data with covariate adjustments
Summary statistics of the results of the application of the covariate adjusted NMA model of Section 2.3, model M3, to combining IPD from the IMPRES trial with aggregate data from the literature are reported in Table 6 and Figure 5, while further parameter estimates are provided in Additional file 4. We used DIC-based forward stepwise selection to choose the covariate adjustments at across-study and within-study level on the placebo effects and within-study level on the treatment effect of imatinib. It was found that the DIC-minimizing model had no across-study covariate adjustments on the placebo effect but had within-study adjustments for AGE, STATUS and PVR on the placebo effect.
The benefit of including IPD is again indicated by the reduction in the 95% credible interval for the treatment effect of imatinib added to ERA and PDE5i (β8) from that of model M1 of aggregated data. The 95% credible interval for the indirect comparison of imatinib against prostacyclins as add-on to ERA and PDE5i, (-76.65, 85.24) from model M3, remains approximately the same width as in the aggregate data model, (-83.70, 89.27) from model M1, as illustrated in Figure 5. This is because the effect of additional prostacyclins is based on only aggregate data. The directions of the effects of AGE (-0.90), STATUS (-11.89), and PVR (-0.05) on the expected 6MWD improvement of a patient in the IMPRES trial imply that older and sicker patients have a lower expected improvement, which is reasonable. The non-selection of across-study covariates indicates that the imputed values for missing covariates, such as PVR in Jacobs et al., have no effect on the results. That some values were imputed may affect the DIC-based selection, but this is unlikely to be a strong effect as the covariate adjustments were generally found to have little impact.
We further fit model M4, described in Section 2.4, and applied DIC-based stepwise selection to choose covariate adjustments on the treatment effect of imatinib. However, no such covariate adjustments were included so the chosen model M4 was identical to M3.
In addition to applying our NMA methodology to the PAH example, we also tested the impact of its assumptions through sensitivity analyses.
Sensitivity analysis, model S1: down-weighting the observational studies
In our standard NMA models M1, M2, M3 and M4, we gave equal weight to the results of the before-and-after observational studies and those of two-arm RCTs. An alternative to this assumption is to down-weight the results, recognizing internal bias due to lack of rigor, through a multiplicative adjustment to the standard errors of the results in either or both arms of the aggregate data, i.e.
$$ {\tilde{SE}}_i = {SE}_i/{\delta}_i $$
where δ i is the quality weight of the ith study, based on a subjective assessment. This is similar to the weighting of the empirical priors derived from observational evidence discussed in the background section [15,21]. A value of δ i = 1 would represent a study judged to be of the highest quality, whose evidence would be given full weight; this was the value we assigned to RCT data. Using the covariate-adjusted model of Section 3.2, we repeated the simulations with δ i = 0.1 for the observational studies, increasing their observed standard errors by a factor of 10 and thus down-weighting them relative to RCTs to represent their poorer quality.
For example, the observed change in 6MWD from baseline in Jacobs et al. was 41 meters with a standard error of 38, as reported in Table 4. This sensitivity analysis would assume that this standard error had been 380, substantially larger than any of the observed standard errors reported in Tables 2, 3 or 4 (maximum was about 50). We can therefore conclude that if our analysis is robust to down-weighting by a factor of δ i = 0.1, it is likely to be robust to most levels of uncertainty we could plausibly observe.
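The adjustment itself is a one-line computation; the sketch below simply replays the Jacobs et al. figures quoted above.

```python
# Quality-weight adjustment of sensitivity analysis S1: SE_tilde = SE / delta.
se_jacobs, delta = 38.0, 0.1   # observed SE (m) and assumed quality weight
se_tilde = se_jacobs / delta
print(se_tilde)                # 380.0, a 10-fold inflation of the SE
```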
The results from this sensitivity analysis, labeled model S1, are presented in Table 6 and Figure 5. The main change from the results of the models without down-weighting of the observational studies, models M1, M2 and M3 in Table 6, was the increased range of the 95% credible intervals for treatment effects estimated on the basis of observational studies, such as β6. The range of the 95% credible interval of the comparison of imatinib against prostacyclins as add-on to ERA + PDE5i was also increased by a large factor, as illustrated in Figure 5, due to its reliance on the down-weighted observational studies. The magnitude of the comparative effectiveness (β8 − β6) also increased substantially, but this is most likely due to the increased random variation illustrated by the expanded credible intervals.
This decrease in the accuracy of the treatment effect estimates and indirect comparisons indicates the influence of the observational studies. As the effect was largely on the accuracy of these estimates and not on their direction, it could be concluded that the NMA methodology was robust to the down-weighting of the observational studies, although its reliance on possibly weak and biased observational evidence was highlighted.
Sensitivity analysis, model S2: constructed control arms in the observational studies
The lack of control arms in the observational studies presented the difficulty of not knowing what would have happened had patients not been given additional treatment. The NMA models of Section 2 placed Normally distributed random effects on the expected improvements in patients who had entered a study but only received a placebo.
An alternative was to construct a control arm for the observational studies by making an assumption about \( {\overline{Y}}_{iC} \), the mean change in 6MWD for patients who did not receive additional therapy. As the Jacobs et al. study [43] looked at patients who were deteriorating on oral therapy, we assumed that patients' 6MWD would decrease during the trial if they were not given any new treatments. This study included patients whose 6MWD had decreased by 58 meters over a mean time of 20.6 months before entering the study. Our short term follow-up was approximately 24 weeks, which is less than half of 20.6 months, so we assumed a mean change of \( {\overline{Y}}_{iC}\approx -25\mathrm{m} \) would be observed in missing control arms over this short term follow-up. We further assumed that the standard error of the mean in this constructed control arm, SE iC , is the same as that observed in the treatment arm. We used these assumptions to construct control arms for all observational studies.
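In data-handling terms, the construction amounts to adding one assumed row per single-arm study; a small sketch with the Jacobs et al. numbers (the field names are illustrative):

```python
# Constructed control arm of sensitivity analysis S2 for a single-arm study:
# assume a 25 m deterioration and reuse the treatment arm's standard error.
jacobs = {"ybar_treatment": 41.0, "se_treatment": 38.0}
jacobs["ybar_control"] = -25.0                  # assumed mean change in 6MWD (m)
jacobs["se_control"] = jacobs["se_treatment"]   # same SE as the treatment arm
print(jacobs)
```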
We repeated the analysis using the covariate-adjusted model from Section 3.2 with the constructed control arms, giving the results, labeled model S2, presented in Table 6 and Figure 5. It was difficult to interpret the direction of the change in the placebo and treatment effects, due to the effect of the covariate adjustments. The direction of the comparison of imatinib against prostacyclins was shifted in favor of prostacyclins, which was expected given the conservative assumption about the control arms. However, the wide credible intervals and the overall direction of the comparisons remained, so the analysis was judged to be robust to this alternative assumption about the control arms.
In this paper we have considered the problem of how to perform a network meta-analysis when the RCT evidence does not form a complete network. Our proposal was to complete the network using single-arm before-and-after observational studies by building a covariate-adjusted random effects model on the placebo improvements. We built on recent innovations to construct a model which combines IPD and aggregate data from RCTs and before-and-after studies and allows for the inclusion of covariate adjustments for heterogeneity at the across- and within-study level on the placebo effect and at the within-study level on the treatment effect. Using this model, we performed a clinically novel comparison of the benefit of imatinib against prostacyclins as add-on therapy for PAH patients on a combination of ERA and PDE5i. This comparison was only possible through the inclusion of observational studies, as an evidence network restricted to RCTs would be disconnected.
As the credible intervals were very wide, the results of our application to PAH were considered to be inconclusive. This was due to the weakness of the evidence, as only a few studies, with small sample sizes, were available for each edge of the network. This data limitation may also be the reason why we found that covariate adjustments had little effect on the NMA results and that no across-study adjustments were included on DIC grounds, although we can also interpret this as evidence that heterogeneity had little effect on the NMA. It is possible that important covariates were not reported by IMPRES or other studies, or that reported covariates were incorrectly considered to be of no importance due to the weakness of the data. It is also possible that the stepwise selection algorithm missed important covariates, as it only investigates a small portion of the total of 2^18 possible models. A simulation/robustness study could address these concerns but would be computationally intensive, as the model selection step, even using stepwise selection to reduce the set of models under consideration, was resource-intensive. The best strategy to improve the practical utility of this application of NMA to PAH is to collect further evidence, ideally IPD from a new or existing RCT.
Apart from these data limitations, which are specific to the application, there are a number of limitations and untestable assumptions of the model itself. As in many meta-analysis and NMA models [15], we assumed the effects of particular treatments were the same across studies by placing a fixed effect on each β l . In cases where sufficient data are available, this could be relaxed to a random effects assumption where we assume the β l from different trials follow a common, possibly Normal, distribution. Our model also assumed, in Equation (2), that effects of additional treatments had the same variance \( {\sigma}_{\beta}^2 \) in all studies, no matter how many additional treatments were being administered. This is possibly implausible, as the effect of a combination of three new treatments should have a higher variance than the effect of a single new treatment. As in the case of fixed effects, this assumption could be relaxed in cases where sufficient data are available. A further simplification that limits the generalizability of our model is that it is restricted to single- or two-arm trials. To extend the model to trials with three or more arms would require careful consideration of correlation in treatment effects across arms and within studies [5,50].
An assumption of our model that is common to most NMA models is the transitivity or consistency of treatment effects across studies. This is the assumption that studies informing the comparison of treatment A against treatment B and of treatment A against treatment C can be used to inform the comparison of B against C. Our evidence network was sparse and contained only one loop, making it impractical to test for consistency of direct and indirect evidence using node-splitting [58] or other measures of inconsistency [59,60]. If more studies became available, it would be recommended to test that comparing ERA + Pr to ERA using the direct evidence [44,46] gave similar results, within some range of acceptability, to performing the comparison with only indirect evidence.
The principal assumption that allowed the inclusion of single-arm before-and-after observational studies was that the placebo effects, the α i s, were exchangeable, or that there was no a priori reason that there would be systematic differences between these effects. This assumption allowed us to model the α i s using a random effects distribution. We recommend the use of this assumption and our model in cases where networks are not densely populated or fully connected when restricted to RCT evidence, such as the PAH example. Decision makers would still need to give a recommendation on which treatment to use in such situations [13] and, indeed, in Australia the PBAC already considers non-randomized observational evidence, particularly in the absence of RCTs [1]. However, this type of evidence is considered to be weak and subject to bias by decision making bodies such as NICE [14]. Additionally, the GRADE scale, which is followed by PBAC, rates the quality of such evidence as low [61]. In cases where networks can be densely populated and fully connected by RCT evidence, this assumption may shrink placebo effects towards the mean and thus interfere with randomization [24,25,51]. In those cases, we recommend treating the α i as independent fixed effects, or nuisance parameters, and not including observational evidence.
In the PAH application, clinical opinion supported the assumption of exchangeable placebo effects, although this assumption is not testable statistically. That no across-study adjustments were included in the model selection step gave an indication that there were no systematic differences in these expected improvements and that our exchangeability assumption was warranted. The simple alternative of constructing a control arm for observational studies was investigated but was found to have little effect on the results. We also explored down-weighting the observational evidence and found, as expected, a reduction in the accuracy of our findings, but no change in the overall direction of the indirect comparisons results.
Our model can be criticized on the grounds that the single-arm studies contribute to the estimation of the distribution for the placebo effects α i . We considered an alternative formulation of our model where only the RCTs would contribute to this estimation and the α i s for the observational studies would be sampled separately from this distribution. This is the method proposed for baseline natural history models by Dias et al. [24]. However, our model is designed to be applied to cases where data would already be limited, such as the PAH example, so a further reduction of the evidence base would be undesirable, although in practice the contribution of the observational studies to the α i estimation will be limited.
A very simple alternative to a random effects assumption for the placebo effects is to use a single fixed effect α for the α i s. This is an assumption that all patient populations started on placebo have the same short term expected improvement in 6MWD and that any differences are due to the treatment or covariate effects. We repeated the NMA with this assumption and found that the results were similar in magnitude and direction to those of the random effects model and that the DIC was considerably higher, with 1889 for fixed effects versus 1870 for random effects. This DIC gives evidence in favor of our random effects model. The single fixed effect model was also not clinically plausible as there were many inherent differences in the studies so a common placebo effect would be difficult to justify.
Several additional sensitivity analyses were conducted. Firstly, as no across-study covariates were included, we applied our final NMA model to an evidence network which included the studies which were excluded due to non-reporting of covariates. This included one extra RCT [62] and three observational studies [63-65]. The results of this sensitivity analysis, not reported in this paper, were almost identical to those of the base case. Prior sensitivity analyses, where we tried prior distributions with greater variances, led us to conclude that the results were not dependent on our choice of prior parameters. Although non-normal priors could be easily implemented if the application required them, normal priors were judged to be appropriate for the continuous outcome of change in 6MWD through expert clinical opinion and exploratory analysis of the IMPRES data.
Aside from the extensions to multi-arm trials, separation of the placebo estimation between RCT and observational studies, and other possibilities so far discussed, there are a variety of directions for future extension of our methodology. One such direction would be to apply the model to non-continuous outcomes such as binary outcomes. NMA models combining IPD and aggregate data for binary outcomes have been discussed in the literature [6,7] and the use of a random effects model for placebo effects to include single-arm studies would be a straightforward extension. Our methods are also readily applicable to pairwise meta-analysis, as it was in this setting that the use of random effects modelling of placebo effects to include single-arm studies was first proposed [23]. Although in pairwise meta-analysis the model would no longer be justified on the grounds of completing evidence networks, it may be useful in cases where there are only a limited number of small RCTs and large, high-quality single-arm studies are available. An additional direction for research is the joint network meta-analysis of multivariate outcomes, such as PVR and change in 6MWD in PAH [66]. This approach would treat all covariates as responses and would account for missing values, a reason for exclusion of several studies, through a form of multiple imputation. This would have the advantage of using the evidence more consistently, rather than our approach of singly imputing missing covariates, such as PVR in Jacobs et al. [43]. However, this extension would require a greater evidence base than was available for the PAH example.
All of the limitations we have discussed should be kept in mind if applying our model in order to avoid being misled by the results of an analysis in which observational evidence is included. We would recommend conducting the sensitivity analyses we have described to ensure the model and the implications of its various assumptions are fully understood.
We have developed an extension of existing NMA methodology to allow the completion of disconnected networks of RCT evidence through the inclusion of single-arm before-and-after observational studies. This model also brings together many recent developments in network meta-analysis of IPD and aggregate data. Our application to PAH demonstrated the utility of our methodology as comparisons impossible to conduct on the basis of RCTs alone could be conducted through the inclusion of observational studies. Although IPD and covariate adjustments were found to make little difference to the results, we believe this model could be easily applied to many other disease areas and settings which require the inclusion of observational evidence. Our work therefore furthers the range of evidence synthesis problems that can be approached through NMA.
DIC: Deviance information criterion
ERA: Endothelin receptor antagonist
NMA: Network meta-analysis
PAH: Pulmonary arterial hypertension
PDE5i: Phosphodiesterase-5 inhibitors
Pr: Prostacyclin analogues
PVR: Pulmonary vascular resistance
IPD: Individual patient data
RCT: Randomized controlled trial
6MWD: 6 minute walk distance
STATUS: World Health Organization New York Health Assessment status
Novartis Pharma provided funding for HT and LH to complete this work while GC, RN, and AC were full-time employees of Novartis. Novartis Pharma permitted the publication of this manuscript. MAPI values, under contract to Novartis Pharma, provided the systematic review that informed the application. The authors are very grateful to the helpful comments received from our reviewers and the associate editor, in particular for pointing out the example of a disconnected network in the evaluation of treatments for follicular lymphoma.
Additional file 1: PICOS and Search Terms for systematic literature review.
Additional file 2: PRISMA checklist for the systematic literature review.
Additional file 3: WinBUGS code for Model M4: Covariate adjusted NMA of IPD and aggregate data.
Additional file 4: Further parameter estimates from NMA Model M3: Covariate adjusted NMA of IPD and aggregate data.
HT and LH were paid external consultants of Novartis while completing this work. GC, RN and AC are full-time employees of Novartis.
AC and GC led the project and conceived the PAH indirect comparison application. HT developed the methodology in consultation and under the supervision of RN and GC. LH provided clinical support throughout the project. All authors read and approved the final manuscript.
School of Social and Community Medicine, Bristol, UK
Novartis Pharma AG, Basel, Switzerland
National Heart & Lung Institute, Imperial College London, London, UK
PBAC: Guidelines for preparing submissions to the Pharmaceutical Benefits Advisory Committee (Version 4.4); available from http://www.pbac.pbs.gov.au/. Pharmaceutical Benefits Advisory Committee 2014.
Caldwell DM, Ades AE, Higgins JPT. Simultaneous comparison of multiple treatments: combining direct and indirect evidence. BMJ. 2005;331:897–900.
Sutton A, Ades AE, Cooper N, Abrams K. Use of indirect and mixed treatment comparisons for technology assessment. Pharmacoeconomics. 2008;26:753–67.
Dias S, Welton N, Sutton A, Ades A. NICE DSU Technical Support Document 1: Introduction to evidence synthesis for decision making; 2011; last updated April 2012; available from http://www.nicedsu.org.uk. National Institute for Health and Care Excellence 2012.
Dias S, Welton N, Sutton A, Ades A. NICE DSU Technical Support Document 2: A generalised linear modelling framework for pairwise and network meta-analysis of randomised controlled trials; 2011; last updated April 2014; available from http://www.nicedsu.org.uk. National Institute for Health and Care Excellence 2014.
Donegan S, Williamson P, D'Alessandro U, Garner P, Smith CT. Combining individual patient data and aggregate data in mixed treatment comparison meta-analysis: individual patient data may be beneficial if only for a subset of trials. Stat Med. 2013;32:914–30.
Saramago P, Sutton AJ, Cooper NJ, Manca A. Mixed treatment comparison using aggregate and individual participant level data. Stat Med. 2012;31:3516–36.
Sutton AJ, Kendrick D, Coupland CAC. Meta-analysis of individual- and aggregate-level data. Stat Med. 2008;27:651–69.
Riley RD, Steyerberg EW. Meta-analysis of a binary outcome using individual participant data and aggregate data. Res Synth Methods. 2010;1:2–19.
Riley RD, Lambert PC, Staessen JA, Wang J, Gueyffier F, Thijs L, et al. Meta-analysis of continuous outcomes combining individual patient data and aggregate data. Stat Med. 2008;27:1870–93.
Papaioannou D, Rafia R, Rathbone J, Stevenson M, Buckley Woods H. Rituximab for the first-line treatment of stage III-IV follicular lymphoma (review of TA 110); available from https://www.nice.org.uk/. Health Technology Assessment 2011.
Aronson JK. Rare diseases and orphan drugs. Br J Clin Pharmacol. 2006;61:243–5.
Reeves BC, Higgins JPT, Ramsay C, Shea B, Tugwell P, Wells GA. An introduction to methodological issues when including non-randomised studies in systematic reviews on the effects of interventions. Res Synth Methods. 2013;4:1–11.
NICE. Process and methods guides: methods for the development of NICE public health guidance (third edition); available from https://www.nice.org.uk/. National Institute for Health and Care Excellence 2012.
Welton NJ, Sutton AJ, Cooper NJ, Abrams KR, Ades AE. Evidence synthesis for decision making in healthcare. Chichester: John Wiley and Sons; 2012.
Prevost TC, Abrams KR, Jones DR. Hierarchical models in generalised synthesis of evidence: an example based on studies of breast cancer screening. Stat Med. 2000;19:3359–76.
Schmitz S, Adams R, Walsh C. Incorporating data from various trial designs into a mixed treatment comparison model. Stat Med. 2013;32:2935–49.
Rosenbaum P, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41–55.
d'Agostino RJ. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Stat Med. 1998;17:2265–81.
d'Agostino RJ, d'Agostino RS. Estimating treatment effects using observational data. JAMA. 2007;297:314–6.
Ibrahim JG, Chen MH. Power prior distributions for regression models. Stat Sci. 2000;15:46–60.
D'Agostino RS, Kwan H. Measuring effectiveness: what to expect without a randomized control group. Med Care. 1995;33:AS95–AS105.
Li Z, Begg CB. Random effects models for combining results from controlled and uncontrolled studies in a meta-analysis. J Am Stat Assoc. 1994;89:1523–7.
Dias S, Welton N, Sutton A, Ades A. Evidence synthesis for decision making 5: the baseline natural history model. Med Decis Making. 2013;33:657–70.
Senn S. Hans van Houwelingen and the art of summing up. Biom J. 2010;52:85–94.
Farber HW, Loscalzo J. Pulmonary arterial hypertension. N Engl J Med. 2004;351:1655–65.
Liu C, Liu K, Ji Z, Liu G. Treatments for pulmonary arterial hypertension. Respir Med. 2006;100:765.
Simonneau G, Rubin LJ, Galie N, Barst RJ, Fleming TR, Frost AE, et al. Addition of sildenafil to long-term intravenous epoprostenol therapy in patients with pulmonary arterial hypertension. Ann Intern Med. 2008;149:521–30.
Fox BD, Shimony A, Langleben D. Meta-analysis of monotherapy versus combination therapy for pulmonary arterial hypertension. Am J Cardiol. 2011;108:1177–82.
Hoeper MM, Barst RJ, Bourge RC, Feldman J, Frost AE, Galie N, et al. Imatinib mesylate as add-on therapy for pulmonary arterial hypertension: results of the randomized IMPRES study. Circulation. 2013;127:1128–38.
McLaughlin V, Badesch D, Delcroix M, Fleming TR, Gaine SP, Galie N, et al. End points and clinical trial design in pulmonary arterial hypertension. J Am Coll Cardiol. 2009;54:S97–S107.
Senn S. Change from baseline and analysis of covariance revisited. Stat Med. 2006;25:4334–44.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.
NICE. The guidelines manual, appendix C: methodology checklist: randomised controlled trials; available from https://www.nice.org.uk/. National Institute for Health and Care Excellence 2012, PMG6B.
Badesch DB, Bodin F, Channick RN, Frost A, Rainisio M, Robbins IM, et al. Complete results of the first randomized, placebo-controlled study of bosentan, a dual endothelin receptor antagonist, in pulmonary arterial hypertension. Curr Ther Res. 2002;63:227–47.
Rubin LJ, Badesch DB, Barst RJ, Galie N, Black CM, Keogh A, et al. Bosentan therapy for pulmonary arterial hypertension. N Engl J Med. 2002;346:896–903.
Barst RJ, Langleben D, Badesch D, Frost A, Lawrence EC, Shapiro S, et al. Treatment of pulmonary arterial hypertension with the selective endothelin-A receptor antagonist sitaxsentan. J Am Coll Cardiol. 2006;47:2049–56.
Barst RJ, Rubin LJ, Long WA, McGoon MD, Rich S, Badesch DB, et al. A comparison of continuous intravenous epoprostenol (prostacyclin) with conventional therapy for primary pulmonary hypertension. N Engl J Med. 1996;334:296–301.
Galie N, Beghetti M, Gatzoulis M, Granton J, Berger R, Lauer A, et al. BREATHE-5: bosentan improves hemodynamics and exercise capacity in the first randomized placebo-controlled trial in Eisenmenger physiology. Chest. 2005;128(4):496S.
Humbert M, Barst RJ, Robbins IM, Channick RN, Galie N, Boonstra A, et al. Combination of bosentan with epoprostenol in pulmonary arterial hypertension: BREATHE-2. Eur Respir J. 2004;24:353–9.
Barst RJ, Oudiz RJ, Beardsworth A, Brundage BH, Simonneau G, Ghofrani HA, et al. Tadalafil monotherapy and as add-on to background bosentan in patients with pulmonary arterial hypertension. J Heart Lung Transplant. 2011;30:632–42.
McLaughlin VV, Oudiz RJ, Frost A, Tapson VF, Srinivas M, Channick RN, et al. Randomized study of adding inhaled iloprost to existing bosentan in pulmonary arterial hypertension. Am J Respir Crit Care Med. 2006;174:1257–63.
Jacobs W, Boonstra A, Marcus JT, Postmus PE, Vonk-Noordegraaf A. Addition of prostanoids in pulmonary hypertension deteriorating on oral therapy. J Heart Lung Transplant. 2009;28:280–4.
Akagi S, Matsubara H, Miyaji K, Ikeda E, Dan K, Tokunaga N, et al. Additional effects of bosentan in patients with idiopathic pulmonary arterial hypertension already treated with high-dose epoprostenol. Circ J. 2008;72:1142–6.
Channick RN, Olschewski H, Seeger W, Staub T, Voswinckel R, Rubin LJ. Safety and efficacy of inhaled treprostinil as add-on therapy to bosentan in pulmonary arterial hypertension. J Am Coll Cardiol. 2006;48:1433–7.
Hoeper MM, Taha N, Bekjarova A, Gatzke R, Spiekerkoetter E. Bosentan treatment in patients with primary pulmonary hypertension receiving nonparenteral prostanoids. Eur Respir J. 2003;22:330–4.
Mathai SC, Girgis RE, Fisher MR, Champion HC, Housten-Harris T, Zaiman A, et al. Addition of sildenafil to bosentan monotherapy in pulmonary arterial hypertension. Eur Respir J. 2007;29:469–75.
Hoeper MM, Faulenbach C, Golpon H, Winkler J, Welte T, Niedermeyer J. Combination therapy with bosentan and sildenafil in idiopathic pulmonary arterial hypertension. Eur Respir J. 2004;24:1007–10.
Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.
Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med. 2004;23:3105–24.
Senn S, Gavini F, Magrez D, Scheen A. Issues in performing a network meta-analysis. Stat Methods Med Res. 2011;22:169–89.
Lambert PC, Sutton AJ, Burton PR, Abrams KR, Jones DR. How vague is vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS. Stat Med. 2005;24:2401–28.
Spiegelhalter DJ, Best NG, Carlin BP, van der Linde A. Bayesian measures of model complexity and fit. J R Statist Soc B. 2002;64:583–639.
Hocking RR. The analysis and selection of variables in linear regression. Biometrics. 1976;32:1–49.
Miller AJ. Selection of subsets of regression variables. J R Stat Soc Ser A. 1984;147:389–425.
Lunn DJ, Thomas A, Best N, Spiegelhalter D. WinBUGS - a Bayesian modelling framework: concepts, structure, and extensibility. Stat Comput. 2000;10:325–37.
Lunn DJ, Jackson CH, Best N, Thomas A, Spiegelhalter D. The BUGS book. New York: CRC Press; 2012.
Dias S, Welton NJ, Caldwell DM, Ades AE. Checking consistency in mixed treatment comparison meta-analysis. Stat Med. 2010;29:932–44.
Lu G, Ades A. Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc. 2006;101:447–59.
Dias S, Welton N, Sutton A, Caldwell D, Lu G, Ades A. Evidence synthesis for decision making 4: inconsistency in networks of evidence based on randomized controlled trials. Med Decis Making. 2013;33:641–56.
Guyatt GH, Oxman AD, Sultan S, Glasziou P, Akl EA, Alonso-Coello P, et al. GRADE guidelines: 9. Rating up the quality of evidence. J Clin Epidemiol. 2011;64:1311–6.
Wilkins MR, Paul GA, Strange JW, Tunariu N, Gin-Sing W, Banya WA, et al. Sildenafil versus endothelin receptor antagonist for pulmonary hypertension (SERAPH) study. Am J Respir Crit Care Med. 2005;171:1292–7.
Porhownik NR, Al-Sharif H, Bshouty Z. Addition of sildenafil in patients with pulmonary arterial hypertension with inadequate response to bosentan monotherapy. Can Respir J. 2008;15:427–30.
Ghofrani HA, Rose F, Schermuly RT, Olschewski H, Wiedeman R, Kreckel A, et al. Oral sildenafil as long-term adjunct therapy to inhaled iloprost in severe pulmonary arterial hypertension. J Am Coll Cardiol. 2003;42:158–64.
Ruiz MJ, Escribano P, Delgado JF, Jimenez C, Tello R, Gomez A, et al. Efficacy of sildenafil as a rescue therapy for patients with severe pulmonary arterial hypertension and given long-term treatment with prostanoids: 2-year experience. J Heart Lung Transplant. 2006;25:1353–7.
Ades AE, Welton NJ, Caldwell D, Price M, Goubar A, Lu G. Multiparameter evidence synthesis in epidemiology and medical decision-making. J Health Serv Res Policy. 2008;13 Suppl 3:12–22.
Antifungal activity of silver/silicon dioxide nanocomposite on the response of faba bean plants (Vicia faba L.) infected by Botrytis cinerea
Zakaria A. Baka and Mohamed M. El-Zahed
Bioresources and Bioprocessing, volume 9, Article number: 102 (2022)
Silicon (Si) and its nanomaterials could help plants cope with different negative effects of abiotic and/or biotic stresses. In this study, the antifungal role of silver/silicon dioxide nanocomposite (Ag/SiO2NC), biosynthesized using a cell-free supernatant of Escherichia coli D8, was investigated for controlling the growth parameters and yield of faba bean (Vicia faba L.) infected by Botrytis cinerea. This nanocomposite was characterized using UV-Vis spectroscopy, Fourier transform-infrared (FTIR) spectroscopy, transmission electron microscopy (TEM), zeta analysis, and X-ray diffraction (XRD). Positively charged Ag/SiO2NC (+ 31.0 mV) with spherical-shaped silver nanoparticles (AgNPs) showed strong in vitro antifungal activity, with a minimal inhibition concentration (MIC) value equal to 40 ppm. In vivo experiments revealed the good resistance of Ag/SiO2NC-treated plants against B. cinerea infection due to the increase in total phenolic content, peroxidase, and polyphenol oxidase activity. The ultrastructure of Ag/SiO2NC-treated plants showed normal cell morphology, including cell membranes and ellipsoidal-shaped chloroplasts with large starch grains. The concentration of silver in Ag/SiO2NC-treated plants was similar to that in the untreated control plants, indicating low release of AgNPs. All of these results are promising outcomes for the application of the biosynthesized Ag/SiO2NC as a safe and effective antifungal agent against B. cinerea.
Nowadays, plant pathogens, especially fungi, cause crop losses that threaten the food sufficiency of some countries, in addition to huge economic losses that may be estimated in the billions (Gennari et al. 2019). The faba bean plant is a multi-purpose crop that is often utilized as a common meal in poor nations and as animal feed in wealthy ones due to its high protein content (Brink et al. 2006). Owing to the continuous increase in demand, increasing the production of this crop is one of the agricultural goals in many nations, including Egypt, Sudan, Algeria, and others (Alaagib et al. 2022).
The faba bean is one of the most important strategic crops in Egypt, especially since it is one of the main meals on most Egyptian tables in the morning. It is considered a winter crop. During its growth stages, the faba bean plant needs special care to protect the crop from the pests and diseases that may infect it and cause great losses (Sahile et al. 2008). The humid climate in Egypt (especially in winter) is suitable for the emergence and spread of many fungal diseases (Bond et al. 1994; Ouda and Zohry 2022). Chocolate spot disease, caused by Botrytis cinerea, is considered one of the most important fungal diseases that affect the faba bean crop, and it causes huge losses in the case of early infection (Hanounik and Hawtin 1982). This chocolate spot disease caused yield losses of 60 to 80% among susceptible cultivars and up to 34% among resistant cultivars in some African regions (Dhull et al. 2022). The disease usually appears during December and increases in January and February. Reports indicated that B. cinerea chocolate spot disease is considered one of the most dangerous fungal diseases in Egypt and threatens the productivity of faba beans, causing losses of up to 50% (Omar 2021). Chocolate spot disease of faba bean plants consists of small, distinct red-brown lesions on leaves, stems and pods (Sardiña 1929). The infection transfers easily between faba bean plants, leading to the shedding of leaves and flowers and the killing of stems. Infected pods fail to produce seeds, but if infection occurs after the formation of the pod, the seeds formed inside will be shrunken and infected (Ellis and Waller 1974).
Thus, several studies have been motivated to develop an effective solution for protecting food and agricultural products from this fungal infection (Hasan et al. 2020). Nanotechnology, as a new technology, uses nanomaterials in pathogen detection, disease management and the avoidance of crop loss. The synthesis of nanomaterials by chemical and physical methods is highly costly and time-consuming. The biological methods for the synthesis of nanoscaled platforms are eco-friendly, uncomplicated, cost-effective, fast, and safe in comparison to chemical and physical methods (El Messaoudi et al. 2022). Nanomaterials can be more efficient and safer antifungal agents than chemical fungicides, herbicides, and fertilizers, through control of their pathway and release rate (Li et al. 2007).
Silicon (Si) is the second most prevalent element in the earth's crust, after oxygen, and has a critical role in the growth, metabolism, defence and development of various crops (Mukarram et al. 2021). Silicon has been reported as a strong inhibitor of fungal spore germination, germ tube elongation, and mycelial growth (Liu et al. 2010). Similarly, Si nanomaterials have been reported to have stronger impacts on plant growth and physiology than bulk Si (Tripathi et al. 2016). Besides Si, silver nanoparticles (AgNPs) are now among the most commercialized nanomaterials, with applications in several agricultural products as they kill plant pathogenic fungi and bacteria (Singh et al. 2015). The combination of Si and AgNPs opens up new possibilities for using Si nanomaterials as a growth elicitor in a variety of commercial crops.
Therefore, the objectives of this study were to biosynthesize the silver/silicon dioxide nanocomposite (Ag/SiO2NC) and investigate its antifungal activity against B. cinerea, in what is, to our knowledge, the first in vivo study of such a nanocomposite, and to evaluate its efficacy in controlling chocolate spot disease of Vicia faba L. caused by B. cinerea.
Materials and reagents
Silicon dioxide (SiO2) (granular, ≥ 99.9%), silver nitrate (AgNO3, crystals, ≥ 99.9%) and culture media were purchased from Sigma-Aldrich Chemie, Steinheim, Germany. The Agricultural Research Center in Giza, Egypt, provided seeds of the faba bean cultivar (Giza 429) and the fungicide Dithane M-45 (80% ethylene bis dithiocarbamate, 16% manganese, and 2% zinc). Escherichia coli D8 (AC: MF062579) and B. cinerea (AC: KP151604) were provided by the Laboratory of Microbiology, Faculty of Science, Damietta University, Egypt. Botrytis cinerea was cultivated on faba bean dextrose agar (FDA) and incubated for 7 days at 25 °C.
The absorbance measurements were performed using a UV-Vis spectrophotometer (Beckman DU-40, USA). An X'Pert X-ray powder diffractometer (Philips, D8-Bruker model), fitted with a Ni filter and Cu Kα radiation (λ = 1.5418 Å) at 40 kV and 30 mA, was used to record the Ag/SiO2NC X-ray diffraction (XRD) pattern. Fourier transform-infrared (FTIR) spectra were recorded using a KBr disc (KBr pellet) on a JASCO FTIR-410 spectrometer in the 4000-400 cm−1 region. Transmission electron microscopy (TEM, JEOL JEM-2100, Japan) and zeta potential analyses were carried out at the Electron Microscope Unit, Mansoura University, Egypt. All the previous instruments were used to characterize the biosynthesized nanocomposite. An atomic absorption spectrometer (PerkinElmer, PinAAcle-500, UK) was used to measure the concentration of silver in Ag/SiO2NC-treated plants.
Biosynthesis of Ag/SiO2NC
Ag/SiO2NC was prepared according to Sadeghi et al. (2013), as modified by El-Zahed et al. (2022). Briefly, AgNPs were biosynthesized in the presence of sunlight by mixing the cell-free supernatant of E. coli D8 (filtered from overnight bacterial growth, inoculated at a 0.5 McFarland standard, 1-2 × 10^8 CFU/ml) with 1.5 mM AgNO3 solution (1:1 v/v). After the formation of the brown colour (the first indicator of AgNPs formation), the reaction mixture was added to another beaker that contained 100 g of SiO2. The whole solution was stirred for 30 min until the SiO2 granules were brown in colour. Finally, the Ag/SiO2NC granules were centrifuged, washed 3 times with distilled water and collected. The product was dried at 50 °C for 24 h and at 185 °C for an additional 5 h. The dried Ag/SiO2NC was characterized using UV-Vis spectroscopy, FTIR, TEM, zeta potential analysis, and XRD.
Minimal inhibition concentration
The minimal inhibition concentration (MIC) of Ag/SiO2NC against B. cinerea was determined to investigate the antifungal activity of Ag/SiO2NC in comparison to its bulk materials. Faba bean dextrose broth (FDB) medium was prepared, distributed into different flasks, and autoclaved. Different concentrations of AgNO3, SiO2 and Ag/SiO2NC (1-200 ppm) were added to the FDB flasks separately and aseptically. Each flask was inoculated with a 5 mm disc of B. cinerea (7-day-old growth) and incubated at 25 °C for 7 days. The biomass was collected after the incubation period by filtration using dried, pre-weighed Whatman No. 1 filter paper (dried at 80 °C for 2 h), followed by vigorous washing with sterile distilled water to eliminate any medium components. The dry weight of the fungal biomass was calculated, and the % inhibition of fungal growth was determined relative to the control. Dithane M-45 was prepared similarly and used as a positive control. The inhibition percentage (I%) was calculated according to the equation of Topps and Wain (1957) as follows:
$$I\% = \frac{A - B}{A} \times 100,$$
where I% is the inhibition percentage, A is the dry weight of fungal biomass in the control, and B is the dry weight of fungal biomass in the treatment.
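For reference, this calculation can be scripted directly; the sketch below is a minimal example with hypothetical dry weights in grams:

```python
def inhibition_percentage(control_dry_wt, treated_dry_wt):
    """Topps and Wain (1957): I% = (A - B) / A * 100."""
    return (control_dry_wt - treated_dry_wt) / control_dry_wt * 100

# Hypothetical dry weights (g) for illustration only
print(round(inhibition_percentage(0.82, 0.21), 1))  # 74.4
```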
Field experiments
The experiment aimed to compare the antifungal activity of Ag/SiO2NC with that of a widely used fungicide (Dithane M-45). Both Ag/SiO2NC and Dithane M-45 were used at their MIC doses as antifungal agents. Faba bean seeds were surface sterilized in 0.01% mercuric chloride for 3 min, then washed several times with sterilized water to eliminate surplus disinfectant. Clay and sand were mixed in a 2:1 v/v ratio (Million et al. 1987) and autoclaved for 30 min. Three sterilized seeds per pot were planted in 20-cm plastic pots with the autoclaved potting mix. Plants were grown at an average temperature of 22 °C in the light and 10 °C in the dark from mid-December to the beginning of April. Seedlings were irrigated and kept under field growth conditions. Seedlings were watered with the antifungal agent solutions 28 days after planting, and the treatment was repeated after 7 days. Plants were infected by spraying with a B. cinerea spore suspension (2.5 × 10⁵ spores/ml) until wet, then covered with transparent polyethylene bags. For comparative purposes, a control group was left without infection and treatment. To estimate growth parameters, leaf samples were obtained 72 days after planting. After 130 days, the final harvest was carried out to obtain the yield and estimate growth characteristics.
Analyses of growth and yield parameters
Shoot and root lengths, fresh and dry weight of shoots, shoot diameter, leaf area, root fresh and dry weight, number of nodes, number of legumes, legume air-dry and oven-dry weight, number of seeds per legume, fresh and dry mass of seeds, and the weight of 100 seeds were measured for each harvest starting 70 days after planting. Dry weights were recorded after drying the samples at 80 ºC for 48 h in a hot air oven until constant weight. All weights were measured in grams (g). Harvest index, mobilization index, crop index and relative seed yield were also calculated according to Hall et al. (2013) as follows:
$$\text{Shoot or root distribution} = \frac{\text{Fresh mass}}{\text{Length}},$$
$$\text{Shoot or root density} = \frac{\text{Dry mass}}{\text{Length}},$$
$$\text{Harvest index} = \frac{\text{Seed weight (g)/plant}}{\text{Straw weight (g)/plant}} \times 100,$$
$$\text{Mobilization index} = \frac{\text{Crop weight (g)/plant}}{\text{Straw weight (g)/plant}} \times 100,$$
$$\text{Crop index} = \frac{\text{Seed weight (g)/plant}}{\text{Seed weight (g)/plant} + \text{Straw weight (g)/plant}} \times 100,$$
$$\text{Relative seed yield} = \frac{\text{Yield in treatment}}{\text{Yield in control}} \times 100.$$
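A worked sketch of these indices (all per-plant weights below are hypothetical, in grams):

```python
def harvest_index(seed_wt, straw_wt):
    return seed_wt / straw_wt * 100

def mobilization_index(crop_wt, straw_wt):
    return crop_wt / straw_wt * 100

def crop_index(seed_wt, straw_wt):
    return seed_wt / (seed_wt + straw_wt) * 100

def relative_seed_yield(yield_treatment, yield_control):
    return yield_treatment / yield_control * 100

# Hypothetical per-plant weights (g) for illustration only
seed, straw, crop = 18.0, 30.0, 52.0
print(harvest_index(seed, straw))                 # 60.0
print(round(mobilization_index(crop, straw), 1))  # 173.3
print(crop_index(seed, straw))                    # 37.5
print(relative_seed_yield(18.0, 15.0))            # 120.0
```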
Estimation of proline
The method described by Snell and Snell (1959) was used to determine proline. A combination of 4 ml of syrupy phosphoric acid (1:1 dilution) and 6 ml of glacial acetic acid was employed as a reagent, along with 0.25 g of ninhydrin, heated to 70 ºC until fully dissolved. One ml of the aqueous extract was pipetted into a Quickfit tube, followed by 1.0 ml of glacial acetic acid and then 1.0 ml of the reagent. A blank using the acid combination without ninhydrin was generated, and a reagent blank was also prepared. The samples and blanks were heated at 100 ºC for 60 min. The tubes then received 1.0 ml of glacial acetic acid and were allowed to cool to room temperature. The volume in each tube was adjusted to 5 ml with glacial acetic acid. The optical density of the generated colour was determined spectrophotometrically within one hour at 515 nm.
Estimation of total phenols
According to Singleton and Rossi (1965), total phenols in plants were determined as T = (c × v/m) × 100, where T represents the total phenolic content (mg catechol/100 g fresh weight), c represents the catechol concentration, v represents the volume utilized (ml), and m represents the plant mass (g). The total phenolic content of plants infected with B. cinerea was determined spectrophotometrically at 650 nm.
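A minimal sketch of the same calculation (inputs hypothetical):

```python
def total_phenols(c_catechol, volume_ml, mass_g):
    """T (mg catechol per 100 g fresh weight) = (c * v / m) * 100."""
    return c_catechol * volume_ml / mass_g * 100

# Hypothetical inputs for illustration only
print(total_phenols(c_catechol=0.05, volume_ml=10.0, mass_g=2.0))  # 25.0
```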
Estimation of peroxidase activity (POD)
At 4 °C, 0.5 g of leaf material was homogenized in a mortar with 30–40 ml of 0.02 M phosphate buffer (pH 7), filtered and centrifuged at 4000 rpm for 10 min. The extract was then made up to 100 ml with the buffer. Then, 0.1 ml of the extract was added to a reaction mixture of 0.5 ml of 1% H2O2 and 3 ml of pyrogallol phosphate buffer (0.05 M pyrogallol in 0.1 M phosphate buffer, pH 6). The production of purpurogallin caused a rise in absorbance at 420 nm, which was used to measure POD activity (Devi 2000). One enzyme unit is expressed per gram of fresh material per minute.
Estimation of polyphenol oxidase activity (PPO)
Using the extract previously prepared for POD estimation, the production of purpurogallin was used to measure PPO (Devi 2000). About 1 ml of the extract was added to a reaction mixture of 2 ml of 0.02 M phosphate buffer (pH 7) and 1 ml of 0.1 M pyrogallol. Then, 1 ml of 2.5 N H2SO4 was added to the reaction mixture after 1 min of incubation at 25 ºC. One enzyme unit is expressed per gram of fresh material per minute.
Estimation of total protein of faba bean seeds
The protein content of faba bean seeds was estimated according to Bradford (1976). A known weight of fresh seeds was macerated in a mortar with 2 ml of extraction buffer (0.2 M Tris–HCl, pH 6.8, 2% SDS, and 10% sucrose) and centrifuged at 4000 rpm for 15 min. The absorbance was measured at 595 nm against a blank made from 0.1 ml of the appropriate buffer plus 5 ml of protein reagent. Using bovine serum albumin solution as the standard protein, the amount of protein in the samples was determined from the standard curve.
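The step from A595 to protein concentration relies on a linear BSA standard curve; a minimal sketch of that interpolation (standard-curve values hypothetical):

```python
import numpy as np

# Hypothetical BSA standards: concentration (ug/ml) vs A595
bsa_conc = np.array([0.0, 100.0, 200.0, 400.0, 600.0, 800.0])
a595 = np.array([0.00, 0.11, 0.21, 0.40, 0.58, 0.75])

# Fit A595 = slope * conc + intercept
slope, intercept = np.polyfit(bsa_conc, a595, 1)

def protein_conc(sample_a595):
    """Invert the standard curve to get concentration in ug/ml."""
    return (sample_a595 - intercept) / slope

print(round(protein_conc(0.33), 1))  # ug/ml for a hypothetical sample
```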
Estimation of silver concentration content
The faba bean leaves and seeds were digested for 4 h with a solution of 10 ml concentrated nitric acid, 4 ml perchloric acid (60%), and 1 ml concentrated sulfuric acid. The digested contents were diluted with distilled water and filtered through Whatman No. 42 paper before the total silver content was measured using the atomic spectrometer (Issac and Johnson 1975).
Ultrastructural study
According to Hayat (1989), specimens for transmission electron microscopy (TEM) were processed by cutting faba bean leaves into small pieces and fixing them with 2.5% glutaraldehyde in 0.1 M phosphate buffer for 24 h at room temperature. The specimens were washed with phosphate buffer, post-fixed with 1% osmium tetroxide for 90 min at 4 °C, dehydrated through an ethanol gradient, and then transferred to 100% acetone. The specimens were embedded in resin (Epon 812, Switzerland) and cut into semi-thin and ultrathin sections with an ultramicrotome (RMC PT-XL PowerTome). The ultrathin sections were stained with uranyl acetate followed by lead citrate and examined under a TEM (JEOL JEM-2100) operated at 80 kV at the Electron Microscopy Unit, Mansoura University, Egypt.
The data were analysed by one-way ANOVA using SPSS version 18, with a significance threshold of p < 0.05. All experiments were carried out three times. All data are expressed as mean ± standard error (SE).
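As a minimal sketch of an equivalent analysis in Python (triplicate values hypothetical), using scipy for the one-way ANOVA and statsmodels for the Tukey HSD comparisons reported in the figures:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicates of a response variable for three treatments
control = [2.1, 2.3, 2.2]
infected = [3.4, 3.6, 3.5]
ag_sio2nc = [4.8, 5.0, 4.9]

f_stat, p_value = stats.f_oneway(control, infected, ag_sio2nc)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.4f}")

values = np.concatenate([control, infected, ag_sio2nc])
groups = ["control"] * 3 + ["infected"] * 3 + ["Ag/SiO2NC"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```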
Characterization of biosynthesized Ag/SiO2NC
The colour of the SiO2 granules changed from white to brown, suggesting that Ag/SiO2NC had been formed. Figure 1A shows the UV–Vis spectra of SiO2, AgNPs and Ag/SiO2NC. There are no apparent absorption peaks at 400–750 nm for pure SiO2; however, a clear absorption peak develops at about 415 nm for AgNPs and Ag/SiO2NC particles, which is the typical absorption of nanosilver. FTIR spectra of Ag/SiO2NC (Fig. 1B) were recorded between 400 and 4000 cm−1. Water bands corresponding to bending vibrations were identified in all the IR spectra, indicating that the powdered materials are hygroscopic. Si–O–Si and Si–OH absorptions are responsible for the strong bands detected at 1082, 785, and 458 cm−1. At roughly 462 and 693 cm−1, there is also a small amount of absorption owing to the stretching of Si–O–Ag bonds; the presence of this band in Ag/SiO2NC suggests that the AgNPs are bonded to oxygen of the silica. The morphology of the AgNPs was investigated using TEM (Fig. 1C, D, and E). AgNPs are found both on the surface of the silica and within the matrix. The TEM micrographs revealed a wide size dispersion of spherical AgNPs with diameters ranging from 12 to 29 nm. The coupling agent allows AgNPs with small particle sizes and large surfaces to mix well with the matrix in Ag/SiO2NC. Zeta potential analysis (Fig. 1F) confirmed the positive charge of the biosynthesized Ag/SiO2NC (+ 31.0 mV). The XRD patterns of SiO2 and Ag/SiO2NC (Fig. 1G) were investigated. There were no additional diffraction peaks for pure silica particles; the characteristic diffraction peaks of amorphous silica appeared at 15–25° (Wu et al. 2016). This amorphous character of SiO2 confirms its high capacity to adsorb AgNPs. The XRD patterns of AgNPs and Ag/SiO2NC showed peaks at 2θ values of 31.88°, 37.92°, 44.1°, 64.20° and 78.2°, corresponding to the reflections of the (110), (111), (200), (220) and (311) crystalline planes of the face-centred cubic (FCC) structure of AgNPs, suggesting that the AgNP coatings crystallized well on the surfaces of the amorphous SiO2; this might reduce the amorphous peak of Ag/SiO2NC and make it slightly broader and weaker than that of bulk SiO2 (Nguyen and Nguyen 2020; Kadhim et al. 2022). This coincides well with the XRD pattern of the AgNPs alone. In addition, the XRD does not show the typical silver oxide peaks, meaning that the Ag/SiO2NC coverage is pure AgNPs, not silver oxide or other contaminants.
Characterization of Ag/SiO2NC. A The UV–vis spectra of SiO2, AgNPs and Ag/SiO2NC. B The FTIR spectra of SiO2, AgNPs and Ag/SiO2NC. C TEM of SiO2, D AgNPs and E TEM of Ag/SiO2NC with bars scale = 100 nm. F Zeta potential measurement of Ag/SiO2NC. G The XRD patterns of SiO2, AgNPs and Ag/SiO2NC
The size of the nanoparticles was estimated from the Debye–Scherrer formula, d = 0.89λ/(β cos θ), where λ is the X-ray wavelength, β is the full width at half maximum of the X-ray diffraction peak and θ is the diffraction angle (Birks and Friedman 1946). The estimated mean nanoparticle diameter was 20.4 nm, in good agreement with the TEM results.
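A worked sketch of this calculation (the FWHM values are hypothetical; β must be in radians, and θ is half the 2θ peak position):

```python
import math

WAVELENGTH_NM = 0.15418  # Cu K-alpha

def scherrer_size(two_theta_deg, fwhm_deg, k=0.89):
    """Crystallite size d = k * lambda / (beta * cos(theta))."""
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * WAVELENGTH_NM / (beta * math.cos(theta))

# Hypothetical FWHM values for three of the observed Ag reflections
for two_theta, fwhm in [(37.92, 0.42), (44.1, 0.45), (64.20, 0.48)]:
    print(f"2-theta = {two_theta} deg: d = {scherrer_size(two_theta, fwhm):.1f} nm")
```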
Minimal inhibition concentration of Ag/SiO2NC against B. cinerea
Figure 2 shows that 40 and 60 ppm of Ag/SiO2NC and AgNPs, respectively (the MIC values), as well as higher concentrations, had a better fungicidal effect than lower concentrations. Ag/SiO2NC demonstrated a dose-dependent antifungal action against B. cinerea, while the MIC values of AgNO3 and SiO2 were 95 and 110 ppm, respectively. Although the Egyptian Ministry of Agriculture recommends Dithane M-45 as a very strong antifungal agent against chocolate spot disease caused by B. cinerea, the inhibition values of Ag/SiO2NC were close to those of Dithane M-45 at its MIC, indicating the nanocomposite's high efficiency against B. cinerea. Ag/SiO2NC also revealed greater antifungal potential against B. cinerea than AgNO3 and SiO2.
A The minimal inhibition concentration and B the inhibition percentage of AgNO3, SiO2, AgNPs, Ag/SiO2NC and Dithane M-45 against B. cinerea
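Operationally, the MIC can be read off the dose–response data as the lowest tested concentration at which growth is fully inhibited; a minimal sketch under that assumption (inhibition values hypothetical):

```python
def mic(concentrations_ppm, inhibition_pct, threshold=100.0):
    """Lowest tested concentration whose inhibition meets the threshold."""
    for conc, inhib in sorted(zip(concentrations_ppm, inhibition_pct)):
        if inhib >= threshold:
            return conc
    return None  # no tested dose reached the threshold

# Hypothetical dose-response for Ag/SiO2NC, for illustration only
conc = [1, 10, 20, 30, 40, 60, 100]
inhib = [8, 35, 61, 88, 100, 100, 100]
print(mic(conc, inhib))  # 40 (ppm)
```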
Field growth condition experiments
In comparison with the Ag/SiO2NC-treated, Dithane M-45-treated and control plants, chocolate spot symptoms emerged in the plants infected with B. cinerea and left untreated (Fig. 3). The affected plants had necrotic flecks on their leaves, the typical symptoms of chocolate spot disease. Furthermore, the green shoot length and flowering rate of the untreated infected faba bean plants were lower than those of the other plants.
Field growth condition experiments. A Non-infected and untreated control. B Infected and untreated control. C Infected and treated by Ag/SiO2NC at 40 ppm. D Infected and treated by Dithane M-45 at 40 ppm
Table 1 reveals that the pathogen reduced root biomass (fresh and dry mass), root length, root density, root distribution, and the root/shoot ratio compared with control values. In the presence or absence of the pathogen, the fungicide induced a significant reduction in all plant growth parameters except root distribution. Root biomass, length, and the root/shoot ratio all increased significantly after using Ag/SiO2NC.
Table 1 Effect of Ag/SiO2NC and Dithane M-45 on the growth vigour of faba bean plants root
Table 2 shows the differences in shoot growth vigour of the variously treated faba bean plants. The results revealed that untreated infected plants and fungicide-treated plants had considerable reductions in the shoot growth vigour of faba bean. Furthermore, as compared to control values, fungicide produced substantial reductions in shoot biomasses (fresh and dry weights), length, diameter, density, and leaf area. The treatment of faba bean plants with Ag/SiO2NC, on the other hand, resulted in a considerable rise in these parameters.
Table 2 Effect of Ag/SiO2NC and Dithane M-45 on the growth vigour of faba bean plants shoot
The impact of Ag/SiO2NC and Dithane M-45 on yield components of faba bean infected with B. cinerea is shown in Tables 3, 4. In comparison to control plants, infected plants and Dithane M-45-treated plants had a significant decrease in all yield components of faba bean plants, including a noticeable reduction in straw production per plant, relative seed yield, and biological yield. On the other hand, Ag/SiO2NC caused an enormous increase in practically all yield components of faba bean.
Table 3 Effect of Ag/SiO2NC and Dithane M-45 on yield and yield components of faba bean plants
Table 4 Effect of Ag/SiO2NC and Dithane M-45 on the yield and yield components of faba bean plants
Effect of Ag/SiO2NC on proline content
In comparison to the non-infected untreated control plants, infection of faba bean plants with B. cinerea increased proline concentration (Fig. 4). Furthermore, treatment with Ag/SiO2NC (MIC, 40 ppm) resulted in a considerable rise in proline concentration. As a result, Ag/SiO2NC might help faba bean plants become more physiologically resistant.
The proline content in leaf extract of faba bean plants (non-infected/infected) in the presence or absence of Ag/SiO2NC or Dithane M-45. Vertical bars represent the SE. Means denoted by similar letter are not significantly different at p ≤ 0.05 using Tukey–Kramer HSD test
Effect of Ag/SiO2NC on total phenols
While untreated infected plants showed a slight rise in phenol content, Ag/SiO2NC-treated plants showed a marked increase and stimulation of phenolic compounds (Fig. 5). The pathogen also caused a massive decrease in the yield of faba bean seeds.
The total phenols in yielded seeds of faba bean plants (non-infected/infected) in the presence or absence of Ag/SiO2NC or Dithane M-45. Vertical bars represent the SE. Means denoted by similar letter are not significantly different at p ≤ 0.05 using Tukey–Kramer HSD test
Effect of Ag/SiO2NC on the activity of POD and PPO
The activities of the tested defence enzymes (POD and PPO) were significantly increased (p ≤ 0.05) by the development of infection in faba bean plants compared with the control (Fig. 6). Moreover, treatment of faba bean plants with Ag/SiO2NC increased these enzyme activities to levels much higher than in the Dithane M-45-treated plants.
The activity of A POD and B PPO of faba bean plants (non-infected/infected) in the presence or absence of Ag/SiO2NC or Dithane M-45. Vertical bars represent the SE. Means denoted by similar letter are not significantly different at p ≤ 0.05 using Tukey–Kramer HSD test
The pathogen caused a significant reduction in the total protein content of the faba bean seeds. Furthermore, the total protein content of seeds from Ag/SiO2NC-treated plants increased significantly (p ≤ 0.05). Treatment with Ag/SiO2NC completely prevented the negative effects of B. cinerea infection compared with untreated infected plants. As shown in Fig. 7, Ag/SiO2NC-treated plants were more resistant to B. cinerea infection, accumulating more total phenols and total protein in the produced seeds than under the Dithane M-45 treatment.
The total protein content in yielded seeds of faba bean plants (non-infected/infected) in the presence or absence of Ag/SiO2NC or Dithane M-45. Vertical bars represent the SE. Means denoted by similar letter are not significantly different at p ≤ 0.05 using Tukey–Kramer HSD test
Ag/SiO2NC-treated plants displayed a slight increase in the silver concentration of stems, leaves and seeds compared with the other plants (Fig. 8).
The silver content in faba bean stems, leaves and seeds of faba bean plants (non-infected/infected) in the presence or absence of Ag/SiO2NC or Dithane M-45. Vertical bars represent the SE. Means denoted by similar letter are not significantly different at p ≤ 0.05 using Tukey–Kramer HSD test
The data in Fig. 9 revealed that the leaves of non-infected untreated faba bean had a normal cell plasma membrane, ellipsoidal chloroplasts with an organized membrane system of grana and intergranal lamellae, mitochondria, a big vacuole, and a thin cytoplasm. In addition, the chloroplasts were arranged close to the cell wall. In contrast, the infected untreated faba bean leaves showed a few round-shaped chloroplasts with an irregular membrane system. However, the Ag/SiO2NC-treated plants had normal thick cell walls, a normal plasma membrane, a big vacuole and well-organized chloroplasts with big starch grains. Furthermore, the nucleus of Ag/SiO2NC-treated plants had distinct electron-dense heterochromatin, electron-lucent euchromatin and an obvious large nucleolus.
TEM micrographs of faba bean leaves (non-infected/infected) in the presence or absence of Ag/SiO2NC where A is a whole cells view with scale bar = 20 μm and B, C are a highly magnified part of chloroplasts and chloroplasts next to nucleus, respectively, with scale bar = 5 or 2 μm. Ch chloroplast, S starch grains, N nucleus, HC electron-dense heterochromatin, EC electron-lucent euchromatin, W cell wall, CY cytoplasm, G grana system, IG intergranal lamellae, M mitochondria, NU nucleolus and V vacuole
Various approaches have proved their ability to manage chocolate spot disease and reduce yield losses of faba bean all around the world. Several fungicides and chemical compounds are effective in combating this disease (Anil et al. 2013). However, the use of fungicides is unfavourable because of high costs, negative effects on human health and the environment, and the fact that they kill beneficial soil microflora (Arora et al. 2018). In this regard, the need to adopt sustainable techniques and provide new approaches is fundamental. Nanotechnology, particularly green innovation, offers an impressive contribution to easing these challenges. It has prompted changes and advances in numerous technologies and can help develop different fields of the agricultural sector such as fertilizers, fungicides, composts, and industrial applications related to agriculture. Because of their novel properties, nanomaterials are considered potent antimicrobial agents and/or stabilizing transporters for fertilizers and pesticides, supporting controlled nutrient transfer and aiding in crop protection (Ashraf et al. 2021). Thus, this study aimed to supply a safe, inexpensive, and effective cutting-edge biosynthesized fungicidal nanocomposite (Ag/SiO2NC) for controlling chocolate spot disease of the faba bean plant, as well as to improve the growth and yield of faba bean plants. Previous studies (Shah et al. 2014; Abd-Alla et al. 2016; Mahakham et al. 2017) tested and documented the enhancing and/or antimicrobial action of Si, Ag, nanosilicon or AgNPs alone on faba bean plants, while the present study combined AgNPs and Si to obtain the dual nutritional and antimicrobial action of both materials. The present work is, to our knowledge, the first study of the in vivo antifungal activity of Ag/SiO2NC against chocolate spot disease of V. faba L. caused by B. cinerea.
The biosynthesized Ag/SiO2NC was characterized and showed a positive charge (+ 31.0 mV), as well as embedded, well-dispersed, spherical AgNPs (12–29 nm). Rodrigues et al. (2020) demonstrated the biosynthesis of Ag/SiO2NC with AgNPs of a mean size of 45 ± 12 nm and a negative charge (− 35.5 mV) using green tea extract. The positive charge of nanomaterials increases the effective electrostatic association with the negative charges of the microbial cell wall, allowing them to easily penetrate the cell membrane (El-Zahed et al. 2022). The FTIR and XRD results confirmed the purity of the Ag/SiO2NC particles, which formed exclusively from Si, oxygen and silver, in agreement with the results of Jeon et al. (2003) and Wei et al. (2014).
Sand and clay soil provide the best vegetative growth for plants in terms of plant height, leaf number per plant, leaf dry weight, and high inflorescences (Nabih 1991; Mazhar et al. 2010; Abd El Gayed and Attia 2018). In the in vivo experiments, clay and sand were mixed in a 2:1 v/v ratio. Clay represents an extreme for agricultural reclamation compared to sand because of its strong water-holding capacity, high productivity and cation exchange capacity due to its smectite mineralogy (Million et al. 1987). These sand–clay combinations are advantageous for growing crops. Sandy soil helps the root system of the plant benefit from all the nutrients provided to it and facilitates the penetration and spread of the root system, which increases soil aeration, including oxygen levels. In addition, the absence of these elements will lead to the yellowing of plant leaves, the growth of improper fruits, or even the death of plants (Million et al. 1987; Ghareeb et al. 2021). In general, this study aimed to provide a practical field model that simulates what the farmer faces during real cultivation, to achieve maximum benefit and assess the extent to which the findings can be applied. The results showed that Ag/SiO2NC treatments significantly increased plant growth and yield compared to control. Plants growing under natural conditions do not suffer from Si deficiencies (such as the control used in the field experiment). Although Si is considered a non-essential element in plant nutrition (Richmond and Sussman 2003), several studies documented that the exogenous application of Si and its compounds can stimulate the growth of most plant species and increase their yields (Romero-Aranda et al. 2006; Xie et al. 2014; Ismail et al. 2022). This effect of Si on plant growth is dose and crop specific. Generally, Si and its compounds, such as Ag/SiO2NC, affect plant growth through several mechanisms, including improving the translocation of minerals and metabolites necessary for seed setting, upregulating plant defense systems (Hasan et al. 2020), improving the ultrastructure of leaf organelles, including an increase in chlorophyll content, enlarged chloroplasts and a higher number of grana in leaves, which improve photosynthetic potential and efficiency (Zhu et al. 2004), enhancing plant water status (Abou-Baker et al. 2012), and alleviating unfavorable and toxic ions in soil (Tahir et al. 2006). Also, Si treatments were reported to increase potassium ion uptake and decrease sodium ion uptake, resulting in lower electrolytic leakage and lipid peroxidation compared to control plants, which is considered the major mechanism responsible for better growth and yield of plants (Al-aghabary et al. 2004).
Under B. cinerea infection, shoot length, shoot and root fresh weight, shoot and root dry weight, and leaf area decreased compared with the control, while these growth parameters increased when plants were treated with Ag/SiO2NC. The recorded yield data also revealed that B. cinerea infection decreased crop yield per plant, including the number of seeds per pod, seed weight per pod, total seed yield, and straw yield, compared to control and Ag/SiO2NC-treated plants. The results showed that using Ag/SiO2NC on infected faba bean plants reduced chocolate spot disease symptoms considerably. In vitro, Ag/SiO2NC was shown to be a more efficient inhibitor of B. cinerea (lower MIC value) than AgNO3 and SiO2 (Jeon et al. 2003; Wei et al. 2014; Abdul-Karim and Hussein 2022). B. cinerea caused a severe reduction in the shoot and root growth of faba bean plants (Mahmoud et al. 2011), which might be due to consumption of the faba bean plant by fungal hydrolytic enzymes that kill the infected plants (Elnahal et al. 2022). However, Ag/SiO2NC-treated plants showed a notable increase in both shoot and root growth parameters besides their yields. El-Flaah et al. (2021) and Hamed et al. (2019) documented the enhancement of the metabolic, physiological, and yield characteristics of faba bean following treatment with nanosilicon and nanosilver, respectively. In addition, the use of Ag/SiO2NC increased shoot biomass compared to root biomass (Garg and Singh 2018). Qados (2015) reported an increase in proline content when nanosilicon was used in faba bean plants infected with B. cinerea. Similarly, Ag/SiO2NC enhanced the accumulation of proline in faba bean plants (Sarkar et al. 2022). Ag/SiO2NC treatment induced a large increase in the content of soluble phenolic compounds (Farouk et al. 2017) and in the protective antioxidant enzymes POD and PPO (Polanco et al. 2014). Also, Fortunato et al. (2015) reported an increase in the activity of POD and PPO when B. cinerea-infected soybean plants were treated with silicon. B. cinerea infection decreased the total protein content of faba bean seeds (Rubiales and Khazaei 2022), in contrast to Ag/SiO2NC-treated plants, which showed a significant increase in protein content (Roohizadeh et al. 2014).
TEM micrographs showed several ultrastructural changes in host cell organelles after infection by B. cinerea. This pathogen produces hydrolytic enzymes that can degrade the cell wall (Elnahal et al. 2022), plasma membrane and middle lamella of plant cells (Kohmoto et al. 1993). Also, the mesophyll cells of infected plants had fewer chloroplasts (Farouk et al. 2017). On the other hand, Ag/SiO2NC treatment increased the chloroplast number without abnormal effects and with large starch granules. In accordance, Asgari et al. (2018) used nanosilicon in oat plants and reported a normal ultrastructure of chloroplasts with normal grana.
High concentrations of Ag ions cause noticeable changes in treated plants, including the degradation of cytoplasmic components inside cells through autophagy and negative effects on chloroplast ultrastructure. The fact that none of these changes were noticed after treating plants with Ag/SiO2NC points to the low rate at which AgNPs accumulated inside the treated plants. Additionally, compared with earlier research (Abou-Baker et al. 2012; Shah et al. 2014; Abd-Alla et al. 2016; Mahakham et al. 2017), the current study found that the accumulation of Ag ions in the stem, leaves, and seeds was only 256, 497, and 540 ng/g dry wt, respectively. The present study demonstrated that the examined faba bean stems, leaves and yielded seeds, analysed by atomic spectrometry, showed only a slight increase in silver content in Ag/SiO2NC-treated plants. This finding indicates the low release of AgNPs from Ag/SiO2NC, reducing their accumulation in treated plants. In addition, SiO2 plays an important role in reducing the accumulation of harmful ions inside plants (Hussain et al. 2019). Consequently, using SiO2 may be a method for decreasing the toxicity of AgNPs in plants and their concentration in grains. Compared with untreated plants under abiotic stress, it can increase chlorophyll content, promote potassium ion uptake, modify sodium ion levels, and lessen cell wall damage (Hussain et al. 2019). SiO2 may also help increase plant growth rate, biomass and productivity while lowering oxidative stress. These are promising outcomes for the application of the biosynthesized silver nanocomposite as a safe and effective antifungal agent against B. cinerea, limiting the adverse effects caused by the accumulation of silver ions in plants, which had restricted the use of these nanoparticles for fear of silver intoxication. Nevertheless, the toxicity of the biosynthesized silver nanocomposite requires further in vivo research with an animal model in future work.
The current study revealed that the silver/silicon dioxide nanocomposite (Ag/SiO2NC) may be used as a nutrient, antifungal, and growth and yield promoter in a variety of plants, including faba bean. Furthermore, the results of this study validated the effect of Ag/SiO2NC in suppressing chocolate spot disease of faba bean caused by B. cinerea by improving physiological and ultrastructural features. In addition, Ag/SiO2NC improved faba bean resistance to B. cinerea by increasing proline, phenols, and defense enzymes (peroxidase and polyphenol oxidase). Finally, the toxicity of Ag/SiO2NC needs to be verified in vivo with an animal model.
SiO2: Silicon dioxide
Ag/SiO2NC: Silver/silicon dioxide nanocomposite
UV–Vis: Ultraviolet–visible
FTIR: Fourier transform-infrared spectroscopy
XRD: X-ray diffraction
MIC: Minimal inhibition concentration
FDA: Faba bean dextrose agar
FDB: Faba bean dextrose broth
POD: Peroxidase
PPO: Polyphenol oxidase
ANOVA: One-way analysis of variance
Abd El Gayed ME, Attia EA (2018) Impact of growing media and compound fertilizer rates on growth and flowering of cocks comb (Celosia argentea) Plants. J Plant Prod 9:895–900. https://doi.org/10.21608/jpp.2018.36599
Abd-Alla MH, Nafady NA, Khalaf DM (2016) Assessment of silver nanoparticles contamination on faba bean-Rhizobium leguminosarum bv. viciae-Glomus aggregatum symbiosis: implications for induction of autophagy process in root nodule. Agric Ecosyst Environ 218:163–177. https://doi.org/10.1016/j.agee.2015.11.022
Abdul-Karim EK, Hussein HZ (2022) The biosynthesis of nanoparticles by fungi and the role of nanoparticles in resisting of pathogenic fungi to plants: a review. Basrah J Agric Sci 35:243–256. https://doi.org/10.37077/25200860.2022.35.1.18
Abou-Baker NH, Abd-Eladl M, Eid TA (2012) Silicon and water regime responses in bean production under soil saline. J Appl Sci Res 8:5698–5707
Alaagib SB, Yousif IEA, Alrwis KN, Baig MB, Reed MR (2022) Realizing food security through agricultural development in Sudan. In: Food security and climate-smart food systems. Springer, pp 289–301. https://doi.org/10.1007/978-3-030-92738-7_14
Al-aghabary K, Zhu Z, Shi Q (2004) Influence of silicon supply on chlorophyll content, chlorophyll fluorescence, and antioxidative enzyme activities in tomato plants under salt stress antioxidative enzyme activities in tomato. J Plant Nutr 27:2101–2115. https://doi.org/10.1081/PLN-200034641
Anil KS, Naresh C, Ra M, Anitha P (2013) An assessment of faba bean (Vicia faba L.) current status and future prospect. Afr J Agric Res 8:6634–6641. https://doi.org/10.5897/AJAR2013.7335
Arora NK, Fatima T, Mishra I, Verma M, Mishra J, Mishra V (2018) Environmental sustainability: challenges and viable solutions. Environ Sustain 1:309–340. https://doi.org/10.1007/s42398-018-00038-w
Asgari F, Majd A, Jonoubi P, Najafi F (2018) Effects of silicon nanoparticles on molecular, chemical, structural and ultrastructural characteristics of oat (Avena sativa L.). Plant Physiol Biochem 127:152–160. https://doi.org/10.1016/j.plaphy.2018.03.021
Ashraf SA, Siddiqui AJ, Abd Elmoneim OE, Khan MI, Patel M, Alreshidi M, Moin A, Singh R, Snoussi M, Adnan M (2021) Innovations in nanoscience for the sustainable development of food and agriculture with implications on health and environment. Sci Total Environ 768:144990. https://doi.org/10.1016/j.scitotenv.2021.144990
Birks LS, Friedman H (1946) Particle size determination from X-ray line broadening. J Appl Phys 17:687–692. https://doi.org/10.1063/1.1707771
Bond DA, Jellis GJ, Rowland GG, Guen J Le, Robertson LD, Khalil SA, Li-Juan L (1994) Present status and future strategy in breeding faba beans (Vicia faba L.) for resistance to biotic and abiotic stresses. In: Expanding the production and use of cool season food legumes. Springer, pp 592–616. https://doi.org/10.1007/978-94-011-0798-3_36
Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72:248–254. https://doi.org/10.1016/0003-2697(76)90527-3
Brink M, Belay G, De Wet JMJ (2006) Plant resources of tropical Africa 1: cereals and pulses. PROTA Foundation Wageningen, The Netherlands
Devi P (2000) Principles and methods in plant molecular biology, biochemistry and genetics. Agrobios India 41:57–59
Dhull SB, Kidwai MK, Siddiq M, Sidhu JS (2022) Faba (broad) bean production, processing, and nutritional profile. Dry Beans Pulses Prod Process Nutr. https://doi.org/10.1002/9781119776802.ch14
El Messaoudi N, El Khomri M, Ablouh E-H, Bouich A, Lacherai A, Jada A, Lima EC, Sher F (2022) Biosynthesis of SiO2 nanoparticles using extract of Nerium oleander leaves for the removal of tetracycline antibiotic. Chemosphere 287:132453. https://doi.org/10.1016/j.chemosphere.2021.132453
El-Flaah RF, El-Said RAR, Nassar MA, Hassan M, Abdelaal KAA (2021) Effect of Rhizobium, nano silica and ascorbic acid on morpho-physiological characters and gene expression of POX and PPO in faba bean (Vicia faba L.) under salinity stress conditions. Fresenius Environ Bull 30:5751–5764
Ellis MB, Waller JM (1974b) Botrytis fabae. CMI Descriptions of pathogenic fungi and bacteria, No. 432
Elnahal ASM, El-Saadony MT, Saad AM, Desoky E-SM, El-Tahan AM, Rady MM, AbuQamar SF, El-Tarabily KA (2022) The use of microbial inoculants for biological control, plant growth promotion, and sustainable agriculture: a review. Eur J Plant Pathol 162:759–792. https://doi.org/10.1007/s10658-021-02393-7
El-Zahed MM, Abou-Dobara MI, El-Sayed AK, Baka ZAM (2022) Ag/SiO2nanocomposite mediated by Escherichia coli D8 and their antimicrobial potential. Nov Biotechnol Chim 21:e1023. https://doi.org/10.36547/nbc.1023
Farouk S, Belal BEA, El-Sharkawy HHA (2017) The role of some elicitors on the management of Roumy Ahmar grapevines downy mildew disease and it's related to inducing growth and yield characters. Sci Hortic (amsterdam) 225:646–658. https://doi.org/10.1016/j.scienta.2017.07.054
Fortunato AA, Debona D, Bernardeli AMA, Rodrigues FA (2015) Defence-related enzymes in soybean resistance to target spot. J Phytopathol 163:731–742. https://doi.org/10.1111/jph.12370
Garg N, Singh S (2018) Arbuscular mycorrhiza Rhizophagus irregularis and silicon modulate growth, proline biosynthesis and yield in Cajanus cajan L. Millsp. (pigeonpea) genotypes under cadmium and zinc stress. J Plant Growth Regul 37:46–63. https://doi.org/10.1007/s00344-017-9708-4
Gennari P, Rosero-Moncayo J, Tubiello FN (2019) The FAO contribution to monitoring SDGs for food and agriculture. Nat Plants 5:1196–1197. https://doi.org/10.1038/s41477-019-0564-z
Ghareeb A, Khalil M, Helal AE-M (2021) Philodendron domesticum GS bunting plant responses to potting media. J Product Dev 26:491–512. https://doi.org/10.21608/jpd.2021.184827
Hall DO, Scurlock JMO, Bolhar-Nordenkampf HR, Leegood RC, Long SP (2013) Photosynthesis and production in a changing environment: a field and laboratory manual. Springer Dordrecht. https://doi.org/10.1007/978-94-011-1566-7
Hamed SM, Hagag ES, El-Raouf NA (2019) Green production of silver nanoparticles, evaluation of their nematicidal activity against Meloidogyne javanica and their impact on growth of faba bean. Beni-Suef Univ J Basic Appl Sci 8:1–12. https://doi.org/10.1186/s43088-019-0010-3
Hanounik SB, Hawtin GC (1982) Screening for resistance to chocolate spot caused by Botrytis fabae. In: Faba bean improvement. Springer, pp 243–250. https://doi.org/10.1007/978-94-009-7499-9_25
Hasan KA, Soliman H, Baka Z, Shabana YM (2020) Efficacy of nano-silicon in the control of chocolate spot disease of Vicia faba L. caused by Botrytis fabae. Egypt J Basic Appl Sci 7:53–66. https://doi.org/10.1080/2314808X.2020.1727627
Hayat MA (1989) Principles and techniques of electron microscopy. In: Biological applications, volume 3. Macmillan Press, New York, NY, pp 229–230
Hussain A, Rizwan M, Ali Q, Ali S (2019) Seed priming with silicon nanoparticles improved the biomass and yield while reduced the oxidative stress and cadmium concentration in wheat grains. Environ Sci Pollut Res 26:7579–7588
Ismail LM, Soliman MI, El-aziz MHA (2022) Impact of silica ions and nano silica on growth and productivity of pea plants under salinity stress. Plants 11:494–515. https://doi.org/10.3390/plants11040494
Issac RA, Johnson WC (1975) Collaborative study of wet and dry techniques for the elemental analysis of plant tissue by Atomic Absorption Spectrophotometer. J Assoc of Agric Chem 58:436. https://doi.org/10.1093/jaoac/58.3.436
Jeon H-J, Yi S-C, Oh S-G (2003) Preparation and antibacterial effects of Ag–SiO2 thin films by sol–gel method. Biomaterials 24:4921–4928. https://doi.org/10.1016/S0142-9612(03)00415-0
Kadhim FJ, Hammadi OA, Mutesher NH (2022) Photocatalytic activity of TiO2/SiO2 nanocomposites synthesized by reactive magnetron sputtering technique. J Nanophotonics 16:26005. https://doi.org/10.1117/1.JNP.16.026005
Kohmoto K, Itoh Y, Shimomura N, Kondoh Y, Otani H, Kodama M, Nishimura S, Nakatsuka S (1993) Isolation and biological activities of two host-specific toxins from the tangerine pathotype of Alternariaalternata. Phytopathology 83:495–502. https://doi.org/10.1094/phyto-83-495
Li Z, Chen J, Liu F, Liu A, Wang Q, Sun H, Wen L (2007) Study of UV-shielding properties of novel porous hollow silica nanoparticle carriers for avermectin. Pest Manag Sci Former Pestic Sci 63:241–246. https://doi.org/10.1002/ps.1301
Liu J, Zong Y, Qin G, Li B, Tian S (2010) Plasma membrane damage contributes to antifungal activity of silicon against Penicillium digitatum. Curr Microbiol 61:274–279. https://doi.org/10.1007/s00284-010-9607-4
Mahakham W, Sarmah AK, Maensiri S, Theerakulpisut P (2017) Nanopriming technology for enhancing germination and starch metabolism of aged rice seeds using phytosynthesized silver nanoparticles. Sci Rep 7:1–21. https://doi.org/10.1038/s41598-017-08669-5
Mahmoud YA-G, Abu El Souod SM, Alsokari S, Ismaei A-E, Attia M, Ebrahim MK (2011) Recent approaches for controlling brown spot disease of faba bean in Egypt. Egypt Acad J Biol Sci G Microbiol 3:41–53. https://doi.org/10.21608/EAJBSG.2011.16694
Mazhar AA, Abd El-Aziz NG, Habba E (2010) Impact of different soil media on growth and chemical constituents of Jatropha curcas L. seedlings grown under water regime. J Amer Sci 6:549–556
Million JB, Gonzalez RX, Carrier III WD, Sartain JB (1987) Production of vegetables on mixtures of sand tailings and waste phosphatic clay. In: Proceedings of 1987 symposium on mining, hydrology, sedimentology, and reclamation. University of Kentucky. pp 355–362
Mukarram M, Khan MMA, Corpas FJ (2021) Silicon nanoparticles elicit an increase in lemongrass (Cymbopogon flexuosus (Steud.) Wats) agronomic parameters with a higher essential oil yield. J Hazard Mater 412:125254. https://doi.org/10.1016/j.jhazmat.2021.125254
Nabih A (1991) Effect of some potting media and chemical fertilization on growth, flowering and corm productivity of Freesia refracta cv. Aurora J Agric Res Tanta Univ 17:713–733
Nguyen CMT, Nguyen VT (2020) Room-temperature polyol synthesis of Ag/SiO2 nanocomposite as a catalyst for 4-nitrophenol reduction. Adv Mater Sci Eng. https://doi.org/10.1155/2020/6650576
Omar SAM (2021) The importance of faba bean (Vicia faba L.) diseases in Egypt. In: Mitigating environmental stresses for agricultural sustainability in Egypt. Springer, pp 371–388. https://doi.org/10.1007/978-3-030-64323-2_13
Ouda S, Zohry AE-H (2022) Climate Extremes and Crops. In: Climate-smart agriculture. Springer, pp 93–114. https://doi.org/10.1007/978-3-030-93111-7_5
Polanco LR, Rodrigues FA, Nascimento KJT, Cruz MFA, Curvelo CRS, DaMatta FM, Vale FXR (2014) Photosynthetic gas exchange and antioxidative system in common bean plants infected by Colletotrichum lindemuthianum and supplied with silicon. Trop Plant Pathol 39:35–42. https://doi.org/10.1590/S1982-56762014000100005
Qados AMSA (2015) Mechanism of nanosilicon-mediated alleviation of salinity stress in faba bean (Vicia faba L.) plants. Am J Exp Agric 7:78–95. https://doi.org/10.9734/AJEA/2015/15110
Richmond KE, Sussman M (2003) Got silicon ? The non-essential beneficial plant nutrient. Curr Opin Plant Biol 6:268–272. https://doi.org/10.1016/S1369-5266(03)00041-4
Rodrigues MC, Rolim WR, Viana MM, Souza TR, Gonçalves F, Tanaka CJ, Bueno-Silva B, Seabra AB (2020) Biogenic synthesis and antimicrobial activity of silica-coated silver nanoparticles for esthetic dental applications. J Dent 96:103327. https://doi.org/10.1016/j.jdent.2020.103327
Romero-Aranda MR, Jurado O, Cuartero J (2006) Silicon alleviates the deleterious salt effect on tomato plant growth by improving plant water status. J Plant Physiol 163:847–855. https://doi.org/10.1016/j.jplph.2005.05.010
Roohizadeh G, Arbabian S, Tajadod G, Majd A, Salimpour F (2014) The study of sodium silicate effects on the total protein content, and the activities of catalase, peroxidase and superoxide dismutase of Vicia faba L. Bull Environ Pharmacol Life Sci 3:243
Rubiales D, Khazaei H (2022) Advances in disease and pest resistance in faba bean. Theor Appl Genet. https://doi.org/10.1007/s00122-021-04022-7
Sadeghi B, Ghammamy S, Sedaghat S (2013) Synthesis and characterization of silver-silica heterogeneous nanocomposite particles by lithium aluminum hydroxide reducing method. IJND 3:271–279. https://doi.org/10.7508/ijnd.2012.04.003
Sahile S, Ahmed S, Fininsa C, Abang MM, Sakhuja PK (2008) Survey of chocolate spot (Botrytis fabae) disease of faba bean (Vicia faba L.) and assessment of factors influencing disease epidemics in northern Ethiopia. Crop Prot 27:1457–1463. https://doi.org/10.1016/j.cropro.2008.07.011
Sardiña JR (1929) Una nueva especie de Botrytis que ataca a las Habas. Mem. R. Boletín La Real Soc Española Hist Nat 15:291–295
Sarkar MM, Mathur P, Roy S (2022) Silicon and nano-silicon: New frontiers of biostimulants for plant growth and stress amelioration. In: Silicon and nano-silicon in environmental stress management and crop quality improvement. Elsevier, pp 17–36. https://doi.org/10.1016/B978-0-323-91225-9.00010-8
Shah V, Collins D, Walker VK, Shah S (2014) The impact of engineered cobalt, iron, nickel and silver nanoparticles on soil bacterial diversity under field conditions. Environ Res Lett 9:024001. https://doi.org/10.1088/1748-9326/9/2/024001
Singh R, Shedbalkar UU, Wadhwani SA, Chopade BA (2015) Bacteriagenic silver nanoparticles: synthesis, mechanism, and applications. Appl Microbiol Biotechnol 99:4579–4593. https://doi.org/10.1007/s00253-015-6622-1
Singleton VL, Rossi JA (1965) Colorimetry of total phenolics with phosphomolybdic-phosphotungstic acid reagents. Am J Enol Vitic 16:144–158
Snell FD, Snell CT (1959) Colorimetric methods of analysis. 2ndEdition, Van Nostrand, New York, pp 78–139
Tahir MA, Rahmatullah T, Aziz M, Ashraf S, Kanwal S, Maqsood MA (2006) Beneficial effects of silicon in wheat (Triticum aestivum L.) under salinity stress. Pakistan J Bot 38:1715–1722
Tripathi DK, Singh VP, Ahmad P, Chauhan DK, Prasad SM (Eds.) (2016) Silicon and Nanotechnology: Role in Agriculture and Future Perspectives. In: Silicon in Plants: Advances and Future Prospects (1st ed.). CRC Press, pp 101-116. https://doi.org/10.1201/9781315369310
Wei L, Chen X, Gao X, Guo R, Xu B (2014) Preparation of Ag/SiO2 powder with light color and antibacterial performance. Powder Technol 253:424–428. https://doi.org/10.1016/j.powtec.2013.12.011
Wu ZG, Jia YR, Wang J, Guo Y, Gao JF (2016) Core-shell SiO2/Ag composite spheres: synthesis, characterization and photocatalytic properties. Mater Sci 34:806–810. https://doi.org/10.1515/msp-2016-0121
Xie Z, Song F, Xu H, Shao H, Song R (2014) Effects of silicon on photosynthetic characteristics of maize (Zea mays L.) on alluvial soil. Sci World J 2014:718716. https://doi.org/10.1155/2014/718716
Zhu Z, Wei G, Li J, Qian Q, Yu J (2004) Silicon alleviates salt stress and increases antioxidant enzymes activity in leaves of salt-stressed cucumber (Cucumis sativus L). Plant Sci 167:527–533. https://doi.org/10.1016/j.plantsci.2004.04.020
Acknowledgements and thanks are due to Damietta University for funding this work. The authors would like to thank Assist. Prof. Mahmoud Khalifa, Faculty of Science, Damietta University, for his excellent assistance during the scientific research.
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Supported by the Damietta University grant to Z. A. Baka.
Department of Botany and Microbiology, Faculty of Science, Damietta University, New Damietta, 34517, Egypt
Zakaria A. Baka & Mohamed M. El-Zahed
All authors have contributed equally to the work. All authors read and approved the manuscript.
Correspondence to Mohamed M. El-Zahed.
Baka, Z.A., El-Zahed, M.M. Antifungal activity of silver/silicon dioxide nanocomposite on the response of faba bean plants (Vicia faba L.) infected by Botrytis cinerea. Bioresour. Bioprocess. 9, 102 (2022). https://doi.org/10.1186/s40643-022-00591-7
Antifungal activity
Botrytis cinerea | CommonCrawl |
Studying driving behavior and risk perception: a road safety perspective in Egypt
Islam Sayed, Hossam Abdelgawad & Dalia Said (ORCID: orcid.org/0000-0002-4351-5456)
Roadway safety research indicates correlations among drivers' behavior, demographics, and the local environment, all of which affect risk perception and roadway crashes. This research examines these issues in an Egyptian context by addressing three groups: private car drivers, truck drivers, and public transportation drivers. A Driver Behavior Questionnaire (DBQ) was developed to capture information about drivers' behavior, personal characteristics, risk perception, and involvement in crashes. Risk perception was captured subjectively by exposing participants to various visual scenarios representing specific local conditions and asking them to rank their perception of each situation from a safety perspective.
Results indicated that the human factor, in particular failure to keep a safe following distance, was a major cause of crashes. The analyzed data was used to predict expected crash frequency based on personal attributes, such as age, driving experience, personality traits, and driving behavior, using negative binomial models. The study recommends the DBQ technique, combined with risk perception scenarios, as a means to understand drivers' characteristics and behaviors and to collect information on the crashes they experience.
Practically, the study findings provide a series of recommendations to the local authorities: introducing a traffic management and noise control act; raising awareness of driving etiquette; setting and enforcing driving-hours regulations; and considering specific training programs for beginner drivers.
World Health Organization (WHO) statistics for road crashes worldwide show that the number of deaths due to road crashes ranges between 1.25 and 1.35 million per year and that road crashes are the leading cause of death among young people [1]. The number of road crashes has decreased in developed countries due to several interventions. However, this is not the case in developing countries, which account for 90% of worldwide road crash fatalities. The US rate of road fatalities was 1.3 persons per 10,000 vehicles in 2016 [2], while the rate in Egypt was 11 road fatalities per 10,000 vehicles [1]. An equally startling statistic is that there are 4 deaths in Egypt per 100 km of roads [3], while the corresponding rates in the UK and USA are 0.47 and 0.92 deaths, respectively [1]. These statistics indicate an alarming number of fatalities in Egypt, resulting in a heavy toll on Egyptian welfare and the economy at large.
Accordingly, there is a strong need to study the relationship between drivers' behavior, risk perception, and roadway crashes. This research therefore tackles roadway crash issues in an Egyptian context by addressing three groups: general drivers, truck drivers, and public transportation drivers, with the aim of enhancing the safety of Egyptian roads and improving Egyptian driving behaviors. The Driver Behavior Questionnaire (DBQ) technique is adopted to capture information about drivers, their behavior, and their risk perception. The collected data was then used to predict the expected crash frequency using negative binomial models.
The research endeavors to investigate and quantify the impact of human behavior on road crashes in Egypt and to better understand driver conduct relevant to traffic safety. Specifically, the objective is to map the relationship between drivers' demographic characteristics, their history of traffic violations and crashes, and their level of risk perception while driving along Egyptian roads. Given the lack of an accurate and representative government roadway crash database, this research attempts to connect these dots through surveys and drivers' interviews, ultimately modeling the association of these variables as a step towards improving traffic safety in Egypt. Compounding the matter, research studies on the types of and reasons for negative driver behavior in Egypt are quite sporadic.
With the above objectives in mind, this paper is structured as follows: the background studies section summarizes relevant work on risk perception, drivers' behavior, driver demographics, and personality traits and their relation to roadway crashes; it also discusses techniques used to study driver behavior, including the Driver Behavior Questionnaire (DBQ), its structure, fields of interest, and data collection approaches. The "Methods" section then presents the research methodology and the steps followed to study drivers' negative behaviors, risk perception, and their relationship to traffic crashes and violations with respect to demographic factors. The results are then presented in descriptive analysis and crash prediction modeling subsections, followed by a dedicated discussion of the modeling results and key findings. Finally, the paper wraps up with conclusions, limitations, and potential for future research.
According to the background studies, about 90–95% of traffic incidents result from human actions. Thus, it is reasonable to infer that a crash is likely the fault of the driver and not the fault of the vehicle. Dahlen et al. (2012) [4] showed that aggressiveness in driving increases the risk of crashes and physical injuries. The study associated anger, which interferes with judgment and coordination behind the wheel, with lower driving performance and a higher likelihood of a crash. Aggressive behavior is the intention to harm or injure other drivers or pedestrians in any emotional or physical way. Alonso et al. (2019) [5] found that the perception of anger, aggressiveness, and risky behavior changes with the sociodemographic characteristics of the participants, and that people's attitudes and behaviors towards road safety reflect their perception. Tao et al. (2017) [6] found that personality traits and driving experience played a role in predicting the risk of traffic crashes. Regev et al. (2018) [7] studied the relationship between exposure, age, gender, and time of driving in the UK, showing that both low and high exposure, that is, time behind the wheel, is very dangerous and risky. In Egypt, Elshamly et al. (2017) [8] found that fatigue due to long driving hours and lack of sleep is the likeliest cause of truck crashes. These findings highlight the important role played by human factors in the risk of crash involvement among drivers. Vanlaar et al. (2006) [9] validated an empirical model of drivers' perception of the causes of road accidents and of differences in perception between participants, using face-to-face interviews in 23 countries in which respondents rated 15 causes of road accidents on a six-point ordinal scale. The model showed no relevant differences between the participants from the 23 countries; however, driving under the influence of alcohol or drugs was perceived as the most significant cause of road accidents, followed by mobile phone use.
Several methods have been used to examine driver behavior and characteristics, including GPS tracking devices, the Mobile-Sensor-Platform for Intelligent Recognition of Aggressive Driving (MIROAD), and visual reality systems. Other methods involve the use of the Driver Behavior Questionnaire (DBQ) technique. This research adopts the DBQ method to collect demographic information, driving behavior, drivers' crash history, and estimates the level of risk perception.
Reason et al. [10] developed the Driver Behavior Questionnaire (DBQ) to measure drivers' actions behind the wheel. The DBQ is one of the most widely used instruments for measuring self-reported driving behaviors. The DBQ method has been used for studies in China [11], Canada [12], Denmark [13], Latvia [14], Qatar [15], and other countries. The content and structure of the studies varied with the study scope, objective, and participants.
Despite the popularity of the DBQ, this research is the first attempt to implement the DBQ with Egyptian drivers. The survey is divided into four sections: (A) demographic characteristics, (B) the driver's traffic violations and crashes, (C) driver behavior, and (D) risk perception.
As stated earlier, the research objective is to study drivers' negative behaviors, risk perception, and their relationship to traffic crashes. In the absence of advanced technologies (such as driving simulators) for in-depth behavioral studies that simulate real driving, this study adopted the DBQ technique and developed a survey form to collect the required data on demographics, crash history, violation information, behaviors, perception, and personality traits. The data was then analyzed descriptively and statistically, and different statistical models were derived, tested, and compared. Accordingly, the best model was selected to predict the number of crashes from selected variables.
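The paper's actual covariates and coefficients are reported in the results; purely as a sketch of the modeling step, a negative binomial regression of self-reported crash counts on survey predictors can be fitted with statsmodels (all data below are synthetic, and the covariate names are illustrative assumptions rather than the study's actual variables):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 385  # matches the computed sample size

# Synthetic predictors standing in for survey variables
age = rng.integers(18, 65, n)
experience = rng.integers(1, 30, n)
daily_hours = rng.uniform(0.5, 10.0, n)

# Synthetic overdispersed crash counts
mu = np.exp(0.5 + 0.08 * daily_hours - 0.02 * experience)
crashes = rng.negative_binomial(1, 1.0 / (1.0 + mu))

X = sm.add_constant(np.column_stack([age, experience, daily_hours]))
model = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0))
print(model.fit().summary())
```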
Survey form design
As survey design is critical to achieving the study's objectives, a comparative literature review was synthesized to determine how previous researchers designed their DBQ instruments. The research team then summarized the most relevant studies to capture the most significant reported variables, categorized by demographics, crash history, violation information, behaviors, perception, and personality traits, as illustrated in Table 1. The comparative literature review identified 58 variables gleaned from previous studies (see Table 1), categorized as very significant (VS); correlated but not very significant (C); or not significant nor correlated at all (NS). Although the research team identified the most significant variables from state-of-the-art research as a starting point, first-hand knowledge of the local context and exploratory interviews with drivers accentuated the need to consider additional factors/questions, such as driving under stress, which seemed relevant in the Egyptian context. A 35-question instrument was developed, of which 17 questions focused on driver behavior, while the rest were designed to capture the other factors associated with roadway safety, such as the driver's characteristics, previous traffic violations, and crash history. Most of the questions were on a five-point Likert-type scale (1 = never, 2 = rarely, 3 = sometimes, 4 = most of the time, and 5 = always). The questions were divided into four sections, each with a set of related variables, as detailed in the following subsections:
Table 1 Comparative literature review and significance of reported variables*
Demographic characteristics
Questions in this section covered age, gender, driving experience, number of daily driving hours and trips, and education level, to study the relationship between each factor and traffic crashes. Capturing participants' age and gender is important for studying the relationship between drivers' age group, their driving behavior, and the occurrence of a traffic crash.
Education level and driving experience are used to study their effect on risky driving behavior, traffic violations, traffic crashes, and risk perception. As the literature reveals that driving experience plays an important role in road safety, the instrument addressed the number of hours of daily driving and the number of trips per day to measure drivers' exposure to potential roadway incidents.
Traffic violations and crash history
This section of the questionnaire was designed to elicit information about the driver's traffic violation and crash history, specifically the cause and number of these crashes in the previous 3 years, to serve as a basis for the statistical and descriptive analysis to investigate the relationship between traffic crashes, driving behavior, risk perception, and demographic factors.
Driving behavior
This section has 17 questions measuring participants' risky and aggressive behavior while driving; such behavior includes speeding, tailgating, distracted driving, and failure to wear a seatbelt. Other questions investigated the participant's reaction to aggressive or inappropriate behavior from other drivers/roadway users.
Risk perception
The participant's level of risk perception was captured by a specific technique. The research team mapped several selected roads to obtain real-life footage of various combinations of traffic conditions, driver populations, day/night, etc. Participants were then exposed to selected scenes from real-life situations captured on different roads in and around Cairo. These images depicted traffic violations, aggressive behavior, traffic safety, and road geometry concerns in 10 scenarios typical of driving on Egyptian roads (for example: overloading a vehicle, picking up/dropping off passengers along an urban highway, pedestrians crossing in front of highway traffic, and heavy trucks driving in the left lane). The objective of this part was to investigate the relationship of driver risk perception, aggressive behavior, and crash history with relevant demographic factors. The selected scenarios were presented to the participants to evaluate from their safety and perception point of view, by rating each scenario/situation on a scale from 1 to 5, with 1 being very safe and 5 being very dangerous.
The study relied on data collection by online questionnaires and field survey forms. It was expected that significant parameters for drivers in Egypt might differ from those found in similar studies in other countries, owing to differences in road conditions, driver behavior, safety warrants, culture and habits, traffic laws, and enforcement.
The sample size calculation indicated a sample size of 385, assuming a population size exceeding 1 million, a margin of error below 0.05, and a 95% confidence level:
$$ \mathrm{Sample\ size}=\frac{\dfrac{Z^2 \times P\,(1-P)}{e^2}}{1+\left(\dfrac{Z^2 \times P\,(1-P)}{e^2 N}\right)} $$
where N is the population size, e is the margin of error, Z is the Z-score (1.96, corresponding to the 95% confidence level), and P is the estimated population proportion.
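For readers who want to reproduce this calculation, a minimal Python sketch is given below; the function name and defaults are illustrative rather than from the paper, and P = 0.5 is assumed as the conventional worst-case proportion.

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran-style sample size with finite-population correction.

    p = 0.5 maximizes z^2 * p * (1 - p), giving the most conservative
    (largest) required sample.
    """
    base = (z ** 2) * p * (1 - p) / margin ** 2
    corrected = base / (1 + base / population)
    return math.ceil(corrected)

# For a population above 1 million, a 5% margin of error, and 95%
# confidence, the formula gives the 385 respondents cited in the text.
print(sample_size(1_000_000))  # 385
```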
The data collection process started with a small-scale pilot survey to ensure the terminology was clear. A second trial involving 35 participants confirmed the soundness of the questionnaire. The survey was then published on multiple communication channels in early 2019. Researchers also conducted personal interviews with drivers at factory loading stations, main public transportation terminals, and waiting areas around key attractions (e.g., shopping malls, cinemas, hospitals), resulting in 883 completed interviews: 515 private car drivers, 82 taxi drivers, 110 public bus drivers, 124 truck drivers, and 52 public transit drivers. After eliminating surveys with incomplete responses, the researchers had data from 824 participants.
Data collection stopped at this number as it exceeded the minimum sample size; with 824 valid responses, the confidence level can be increased to 99%. This survey targeted drivers across all of Egypt. With roughly 8.6 million registered vehicles in Egypt, and the number of drivers possibly reaching three times the number of registered vehicles, it is practically impossible to survey the entire population of drivers; in any case, the calculated sample size levels off beyond a certain population size. Due to geographical constraints in reaching truck, taxi, and public transportation drivers, the field interviews only captured drivers from Cairo and Giza. These drivers were all males, which was expected to result in an overrepresentation of males in the sample. Another point to consider is that the sample frame may be biased by the online communication channels used in the data collection; all of these issues are discussed in the limitations section at the end of the paper.
It is worth noting that approval and consent were part of the survey design, and the research did not use any personal data; participation was voluntary and anonymous. Confidentiality and the scientific value of the data were emphasized, highlighting that data would be used only for research purposes, to encourage participants to answer all questions sincerely; the researchers noticed that some drivers were afraid to participate, thinking the data might be shared with the traffic police. The data were then collected and initially explored through descriptive analysis to cluster the participants by driver category. Each category was described separately and illustrated by graphs and figures showing the distributions of answers among survey variables, as shown in the results section. The data were integrated into a logical format for further processing with the Statistical Package for the Social Sciences (SPSS®, version 22) and Minitab software. The analyzed data and variables were then used to produce predictive models estimating the number of likely crashes based on drivers' characteristics, presented in the modeling section after the descriptive analysis results discussed in the following section.
The researchers first analyzed the data and variables using descriptive analysis to identify the significant variables, as shown below.
Descriptive analysis
The descriptive analysis of the questionnaire was based on the participants' data, covering driver crashes and their reported causes, the driving behavior questions and the number of crashes related to these behaviors, and the risk perception ratings and percentages for each scenario based on the participants' opinions. Age and gender data were also presented, along with the drivers' experience and education and the number of daily trips for each driver.
The demographic analysis results show that 74.4% of the participants were males (including the bus, taxi, and truck drivers, who made up over 30% of the sample), and 70.7% held a university undergraduate or post-graduate degree. The results also show that 57.6% had five or more years of driving experience; of these, 52.5% were taxi, truck, or microbus drivers. Figure 1 indicates that the majority of participants (57.28%) were involved in one to three crashes in the last 3 years, categorized by the following variables, with the dominant value given in brackets: age [26–40 years], gender [male], years of driving experience [> 10 years], and education [university or post-graduate degree]. As shown in Fig. 1a, the greatest percentage of participants involved in the 1–3 crashes category were 26–40 years old, and more than 30% of this category had more than 10 years of driving experience. As shown in Fig. 1c, 189 participants (48.8%) with 10 or more years of experience were taxi, truck, or microbus drivers, and 188 of these 189 drivers (99.5%) usually drove 3–8 h per day. These drivers therefore had longer exposure times, increasing the probability of being involved in crashes.
Demographic variables and number of crashes
In this sample, 74.4% were males and, as can be seen from Fig. 1b, more than 40% of the participants in the 1–3 crashes category were males; females made up only 12% of this category. It should be noted that female drivers made up 25.6% of the sample, representing 211 participants; 71 of them held a post-graduate degree, and the rest had a university undergraduate degree.
The average crash frequency for the demographic variables deemed significant is shown in Fig. 2, holding all other variables fixed. In the demographic dimension, age was inversely proportional to the number of crashes. As shown in Fig. 2b, there was a relationship between exposure and the number of traffic crashes for public transportation and truck drivers who drove at least 3 h a day. The same holds for drivers in general: those who drove more than 3 h per day tended to have more crashes.
Data trend for the mean crashes versus demographic variables and trips
Number and reasons for crashes
The participants were asked about the number and causes of vehicular crashes they experienced in the previous 3 years. The number of crashes ranged widely from 0 to 16, with a mean of 2.04 and a median of 1.00 crashes per participant. The reported causes are summarized in Fig. 3. Participants believed their crashes were caused primarily (18.7%) by tailgating, that is, failure to keep a sufficiently safe distance from the car in front, followed by sudden swerving of their car or the car in front (16.36%). The third most likely cause (14.71%) was distracted driving, caused by mobile phone use or eating.
Causes of reported crashes
Driver behavior data
Driver behavior data show that 56.4% of the participants exceeded the posted speed limit, while 15.0% of respondents said they always overtake on the right-hand side. In addition, 40.1% said they use phones while driving, and 61.7% of drivers said they express anger or aggressiveness by using the headlight beam or honking the horn. Only 35.5% of drivers kept a safe following distance of more than 18.0 m when driving at 80 km/h. Finally, the survey results indicated that a significant 25.48% of the participants drove against the direction of traffic. While the authors acknowledge that this percentage is considerably higher than in any other country, it is noteworthy that 45% of respondents were public transportation and truck drivers; most of these drivers received only primary or preparatory education and rarely abide by driving rules. Therefore, this result was not a complete surprise, especially in the Greater Cairo Region, where the researchers interviewed these drivers. Figure 4 shows the variability in average crash frequency across the different levels of driving behavior. For example, drivers who regularly use the horn or high beam aggressively (Fig. 4a), tend to drive in the opposite direction (Fig. 4b), or tend to tailgate the front vehicle (Fig. 4c) are more likely to be involved in crashes; in other words, hostility on the highway is more likely to result in a roadway crash. Seatbelt use was inversely proportional to the average number of crashes (Fig. 4d); that is, drivers who tend to use seatbelts were less likely to have been involved in a crash in the last 3 years.
Data trend for the average mean crash frequency for each variable
As previously discussed in the DBQ setup, respondents were asked to rate specific scenes from real-life situations captured on different roads, based on their perception of the risky behavior in each scenario; the following paragraphs describe the scenes deemed most dangerous by the various types of drivers. The scenes depicted traffic violations, aggressive behavior, traffic safety issues, and road geometry concerns in 10 scenarios typically witnessed on Egyptian roads: a typical cross-section with no depicted risk, improper pavement marking or variable median width, illegal pickup/drop-off by public transportation, illegal/unsafe loading of heavy trucks, night driving with no lights, trucks driving in the fast (left) lane, pedestrians illegally crossing busy highways, illegal pickup/drop-off on highways, dangerous transport of people on top of goods on trucks, and driving against traffic.
The participants rated each situation on a scale from 1 to 5, with 1 being very safe and 5 being very dangerous. Respondents were grouped into three driver categories: truck drivers, public transportation (bus and taxi) drivers, and passenger car drivers. The differences between the three categories, presented in Fig. 5, provide interesting insights into how different groups of drivers perceive risk.
Level of risk perception among driver types
The results showed that 93.5% of the truck drivers rated a pedestrian illegally crossing a road with high-speed traffic as the most dangerous situation. Illegal picking up or dropping off on the highway came second (86% rated it very dangerous), and dangerous means of transporting passengers and cargo came third, with 82% of truck drivers rating it very hazardous. These three situations were perceived as the highest risk, as they are probably the main causes of heavy-vehicle crashes on Egyptian roads and pose the greatest danger to truck drivers. In contrast, these participants did not perceive trucks driving in the fast lane, driving against the direction of traffic, or illegal or dangerous truck loading as dangerous acts for truck drivers.
Public transportation drivers had a slightly different view. A total of 92.9% agreed with the truck drivers that an illegal pedestrian crossing was the most dangerous situation, followed by unsafe means of transportation (85%) and trucks driving in the fast (left) lane (82.4%). These three situations represent the risky situations public transportation drivers may encounter on Egyptian roads, and they increase the probability that these drivers will be involved in a crash. The public transportation drivers considered trucks traveling in the left lane very dangerous, though not more so than illegal pedestrian crossing. They also rated illegal picking up and dropping off as a normal scenario because, unlike truck drivers, they do this regularly.
Passenger car drivers agreed with public transportation drivers on the first and second most dangerous situations: illegal pedestrian crossing (94.6%) and dangerous means of transportation (90.9%). They reported truck drivers traveling in the fast lane as the third most dangerous situation (87.2%); this represents one of the most severe types of crashes on Egypt's highways, in which heavy vehicles collide with private cars and/or pedestrians. Regardless of driver class, all participants rated the exposure of vulnerable pedestrians crossing illegally as the most dangerous (93.9%), followed by dangerous means of transport (86.6%) and trucks driving in the left lane (79.3%).
It is noteworthy that 25% of the participants said they might drive in the wrong direction to reach their destination faster and had done this at least once. Moreover, only 36% of the participants felt that driving against traffic was a very dangerous act, reflecting the general perception in Egypt that, given traffic conditions, this is an acceptable way to drive.
Modeling procedure for the probability of crashes
After the data were initially wrangled to explore the results, it was essential to conduct statistical analysis and modeling to characterize the drivers' behavior. Given the nature of the collected data, regression techniques were utilized. In SPSS, the data and variables were analyzed to produce predictive models estimating the number of crashes likely to occur, based on the demographic factors, driver behavior, and risk perception. The dependent variable (exposure to traffic crashes) is a count, and the independent variables are the driver behavior, risk perception, and demographic variables. Regression models were applied to each cluster of variables to investigate the critical factors affecting the probability of crash occurrence within that cluster. For example, the demographic variables were examined against the number of crashes to determine how they affected the probability of crash occurrence. Each cluster was studied separately; then, all clusters were investigated together against the number of crashes.
Various regression analyses were conducted: linear regression (LR), negative binomial regression (NBR), and Poisson regression (PR). A series of tests and analyses was carried out to identify the most suitable model for the data. Crashes are count data and are usually modeled using Poisson and negative binomial regression; rare-event count data such as crash occurrences fit the Poisson distribution well (Washington et al. (2020) [23]). However, the Poisson distribution requires the mean of the count data to equal its variance, which is not the case in this research: the variance is significantly larger than the mean, implying that the data are over-dispersed. In many cases, over-dispersed count data are successfully modeled using the negative binomial distribution (Washington et al. (2020) [23]).
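A brief sketch of how the overdispersion check and the Poisson/negative binomial comparison could be reproduced in Python with statsmodels is shown below; the paper itself used SPSS and Minitab, and the file and column names here are placeholders, not the study's actual variable coding.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per respondent; 'crashes' is the 3-year crash count and the
# predictors are illustrative survey variables (file name is a placeholder).
df = pd.read_csv("dbq_survey.csv")

# Overdispersion check: Poisson regression assumes mean == variance.
print("mean:", df["crashes"].mean(), " variance:", df["crashes"].var())

poisson = smf.glm("crashes ~ age + daily_hours + daily_trips",
                  data=df, family=sm.families.Poisson()).fit()

# Variance well above the mean suggests overdispersion, so a negative
# binomial family is the usual remedy (dispersion alpha left at default).
negbin = smf.glm("crashes ~ age + daily_hours + daily_trips",
                 data=df, family=sm.families.NegativeBinomial()).fit()

print("Poisson AIC:", poisson.aic, " NB AIC:", negbin.aic)  # lower is better
```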
In the first step, the data and codes were reviewed again to examine the data distribution. Each of the collected variables is categorical, except for the crash count, which is measured as the number of crashes. Three models were run, PR, NBR, and LR, to assess the predictive capability of the variables. According to the initial results of the three models, with all constructs included, the NBR appears most appropriate in that it extracted a higher number of predictors. The variables were clustered in three dimensions: (1) driver behavior, (2) risk perception, and (3) demographic variables.
Model A: demographic variables model
The correlation analysis for the demographic variables is shown in Table 2. It indicates a strong positive correlation between age and driving experience. The two variables most positively correlated with the number of crashes are the number of driving hours and daily trips, while age is negatively correlated with the number of crashes. The internal consistency of the demographic variables was assessed by Cronbach's alpha reliability test [24]. The Cronbach's alpha scores for the initial and final trials of the demographic variables are shown in Table 4a. Using all the drivers' demographic variables results in a low reliability of alpha = 0.319. Thus, it was necessary to progressively drop variables from the model until an acceptable level of reliability was attained; dropping the gender, trip purpose, and education variables resulted in a reasonably acceptable alpha coefficient of 0.735. The model parameter estimates are presented in Table 5a, indicating that at the 5% significance level, only the age, number of daily driving hours, and number of daily trips variables can be retained to explain the predicted crash frequency.
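Cronbach's alpha has a simple closed form, so the reliability screening described above can be reproduced in a few lines; the sketch below is illustrative, with placeholder column names rather than the study's actual item labels.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / var(total))."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Illustrative use, mirroring the paper's screening: compute alpha on the
# full demographic block, then drop weak items and recompute until the
# coefficient reaches an acceptable level (0.319 -> 0.735 in Table 4a).
# demo_items = survey[["age", "experience", "daily_hours", "daily_trips"]]
# print(cronbach_alpha(demo_items))
```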
Table 2 Driver demographics variables correlation analysis matrix
Model B: driver behavior variables model
The correlation analysis for the driver behavior variables, shown in Table 3, indicates a strong positive correlation between speeding and illegal overtaking, mobile phone use, and lane changing. Speeding is strongly and positively correlated with wrong overtaking and frequent lane changing, behaviors that indicate an aggressive driving attitude. As with Model A above, the Cronbach's alpha reliability scale initially yielded an alpha coefficient of 0.532 when all the driver behavior variables were incorporated. Dropping the drug-influence, red-light-running, and illegal lane-changing variables resulted in a reasonably acceptable reliability of 0.720, as shown in Table 4b.
Table 3 Driver behavior variables correlation analysis matrix
Table 4 Cronbach's Alpha Test for Internal Consistency and Sensitivity Analysis
The model parameter estimates are presented in Table 5b. They indicate that, at the 5% significance level, only the exceeding-the-speed-limit and driving-in-the-opposite-direction variables among the driver behavior factors can be retained to explain the predicted crash frequency. At a less stringent significance level, the keeping-a-safe-following-distance, seatbelt-use, and honking-the-horn/using-the-high-beam variables were also retained, resulting in the significant model shown in Table 5b, together with goodness-of-fit tests assessing how strongly and precisely this model predicts the number of crashes.
Table 5 Model Parameters Estimation
Model C: combined demographics and drivers' behavior model
Combining all the variables from the demographic and behavior dimensions yields a mixed model that estimates the predicted crash frequency as a function of both dimensions. The model parameter estimates are presented in Table 5c, indicating that, at the 5% significance level or less, the following variables are significant: age, number of daily driving hours, number of daily trips, honking the horn/using the high beam, and driving in the opposite direction. The variable contributing the most to reducing the predicted crash frequency is age, indicating that senior drivers are less likely to be involved in a crash; of course, this conclusion holds only up to a certain age and depends on the participant's age group. The behavioral components of the model exhibit a higher contribution to the predicted crash frequency than the demographic ones.
Model D: adjusted demographics and drivers' behavior model
Using all the significant and logical variables retained from the previous models, and considering the Cronbach's alpha reliability measures and the correlation matrix, yields a mixed model that estimates the predicted crash frequency as a function of only the significant and logical variables. The model parameter estimates are presented in Table 5d, indicating that, at the 5% significance level or less, the following variables are significant: age, number of daily driving hours, number of daily trips, honking the horn/using high beams, driving in the opposite direction, and keeping a safe following distance.
Age and keeping a safe following distance contribute the most to reducing the predicted crash frequency, while driving in the opposite direction contributes the most to increasing it, compared to the other variables.
Model comparison
The parameter estimates presented in the modeling tables indicate that all models were significant. However, the model that best represented the sample was identified by comparing all models against well-known comparison criteria, starting with goodness of fit assessed by the chi-square test. The chi-square test checks the goodness of fit under the null hypothesis and determines whether two or more categorical random variables, such as age and crash involvement, are independent of each other; it is also used to compare the log-likelihoods of regression models under the null hypothesis, and hence to identify the model that best fits the data. Two further criteria, Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), are mathematical methods for evaluating how well a model fits the data it was generated from. AIC is used to compare candidate models and determine which best fits the data, while BIC can be interpreted as an estimate of the probability of a model being true, so a lower BIC indicates a model more likely to be close to the truth. Both criteria are based on various assumptions and asymptotic approximations.
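Both criteria have simple closed forms given a model's log-likelihood, number of parameters k, and sample size n; the following minimal Python sketch (illustrative, not tied to the paper's SPSS output) shows the definitions used for this kind of comparison.

```python
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike Information Criterion: 2k - 2*lnL (penalizes parameters)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian Information Criterion: k*ln(n) - 2*lnL (stronger penalty)."""
    return k * math.log(n) - 2 * log_likelihood

# Models fitted on the same data are ranked by these criteria;
# the lowest AIC/BIC marks the preferred model, as with Models A-D here.
```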
Model goodness of fit
All the models were tested for goodness of fit; the results showed that Model D, the adjusted demographics and drivers' behavior model, best represented the sample. Its Omnibus Test (likelihood ratio chi-square) value equals 99.235 at 6 degrees of freedom with significance p = 0.000; the deviance (720.037) divided by the degrees of freedom (816) gives a reported ratio of 0.912, and the Pearson chi-square (976.801) divided by the degrees of freedom (816) gives a reported ratio of 1.097.
Bayesian Information Criteria (BIC)
All the models were compared using the Bayesian Information Criterion (BIC). Model D had the lowest BIC, 3430.237; Model C had a BIC of 3484.213, Model B 3455.174, and Model A 3453.663. Having the lowest BIC, Model D is the model most likely to be close to the truth.
Akaike Information Criteria (AIC)
Comparing the models by the Akaike Information Criterion (AIC), Model D, the adjusted demographics and drivers' behavior model, scores 3392.524, the lowest and best value; Model C scores 3422.174, Model B 3422.174, and Model A 3430.093. The Consistent Akaike's Information Criterion (CAIC) for Model D, 3438.237, is likewise the lowest among all models.
Model testing
Testing the models showed that Model D is preferred by both the BIC and the AIC. When the models were tested against the McFadden pseudo R2 goodness-of-fit value, Model D (R2 = 0.912) was found to be a better fit for the data because it falls within the accepted range of values (0.4 to 0.9). Conclusively, the best model found in this research is Model D, defined by the following equation:
$$ \mathrm{Predicted}\ \mathrm{Crash}\ \mathrm{Count}=\exp \left( Int+{\beta}_1{x}_1+{\beta}_2{x}_2+\cdots +{\beta}_k{x}_k\right) $$
Predicted Crash Count = exp(0.685) × exp(0.096 × Honking Horn/Using High Beam) × exp(0.106 × Driving in Opposite Direction) × exp(−0.066 × Keeping Safe Following Distance) × exp(−0.116 × Age) × exp(0.075 × # of Daily Trips) × exp(0.045 × # of Daily Driving Hours).
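The fitted equation can be evaluated directly. The sketch below wraps the published Model D coefficients in a small Python function; the predictor names are illustrative shorthand, and the inputs must be coded exactly as in the survey (Likert levels and age categories), which the excerpt does not fully specify.

```python
import math

# Coefficients copied from the fitted Model D equation above.
MODEL_D = {
    "intercept": 0.685,
    "horn_or_high_beam": 0.096,
    "opposite_direction": 0.106,
    "safe_following_distance": -0.066,
    "age": -0.116,
    "daily_trips": 0.075,
    "daily_driving_hours": 0.045,
}

def predicted_crash_count(**predictors: float) -> float:
    """Evaluate Model D: exp(intercept + sum of beta_i * x_i)."""
    linear = MODEL_D["intercept"] + sum(
        MODEL_D[name] * value for name, value in predictors.items()
    )
    return math.exp(linear)

# Illustrative call only; inputs must follow the survey's own coding.
print(predicted_crash_count(horn_or_high_beam=3, opposite_direction=1,
                            safe_following_distance=2, age=2,
                            daily_trips=3, daily_driving_hours=4))
```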
The DBQ technique, combined with risk perception scenarios, can be used as an enabling tool to understand drivers' characteristics and behaviors and collect information on the crashes they experience, especially in cases where a structured periodic crash database is largely missing.
The key conclusions of this research can be summarized as follows:
Participants stated that their crashes were primarily attributed to tailgating and failure to keep a safe gap, while the modeling results added horn honking, use of high beams, and driving toward oncoming traffic as aggressive factors contributing to the predicted number of crashes.
Regardless of driver type (private car, public transportation, or truck drivers), all participants rated pedestrians illegally crossing a busy highway as the most dangerous behavior. Dangerous means of transporting cargo and passengers, and trucks illegally driving in the left or fast lane, were considered the second and third most hazardous.
The variables that contribute the most to reducing the predicted frequency of crashes are age and keeping a safe following distance.
Behavioral components of the model exhibit a greater contribution to predicted crash frequency than do the demographic ones.
In practice, this research has the potential to support the Ministry of Transport and the traffic police responsible for law enforcement in the following directions:
Introducing an Anti-Car-Honking Ordinance and a Traffic Management and Noise Control Act to enforce traffic control, maintain traffic order, and ensure traffic safety.
Raising awareness of driving etiquette rules to avoid the "flashing to dazzle" effect, and considering the inclusion of informative material in the driving test exam.
Setting and enforcing driving-hours regulations, as research links fatigue to crashes; this was clearly shown in the modeling results herein, especially since the majority of participants who reported long driving hours were truck drivers, followed by public transportation (bus and taxi) drivers.
Considering specific education and training programs for beginning drivers, including behind-the-wheel driver education addressing tailgating, driving in the opposite direction, and seatbelt use. Additionally, considering improvement schools for young offenders, as a non-trivial number of respondents were involved in 1–3 crashes in the last 3 years and the number of crashes decreases as the age group increases.
Although the survey was performed with special care to avoid response patterns as much as possible, one of the biggest limitations of this study was the self-reported online data, which can be associated with social desirability bias or poor understanding of the questionnaire. During data collection, geographical constraints arose, as the field interviews only captured drivers from Cairo and Giza. Recruiting passenger car drivers via social media also introduced limitations, such as skews in age and in educational categories toward university-degree and post-graduate-degree holders. In addition, the field interviews with truck, taxi, and public transportation drivers, all of whom were male, left males representing 74% of the total sample, so the male share is believed to be overrepresented; however, no official statistics report the number or percentage of female drivers in Egypt. As this bias was uncontrolled, and given time and effort limitations, the research had to accept it and mitigate it as shown in the modeling section. The collected data were also limited to 2019 only. Furthermore, the collected risk perception data could not be modeled due to unresolved issues, and the number of collisions reported by each participant could not be verified owing to constraints such as the anonymity agreement and the unavailability of data from the government.
Negative binomial regression was used after outperforming Poisson regression in the comparison. Four models were presented, and based on the model testing and comparison among them, only one was recommended for use in the formula predicting the number of crashes.
Opportunities for future research could include the following:
Harnessing the potential of emerging tools, like driving simulation, and initiating programs like naturalistic driving, even at a modest scale.
Adopting a structural equation model (SEM) to further extend the modeling effort presented in this paper by quantitatively studying multivariable relationships between measurement variables and latent variables.
All the materials, including but not limited to the descriptive analysis, tables, figures, statistical models, and equations, are included in the manuscript. In addition, all the relevant raw data, Excel sheets, questionnaire forms, collected data, and SPSS files are freely available from the corresponding author on reasonable request to any researchers who wish to use them for non-commercial purposes, while preserving the confidentiality and anonymity of the collected data.
CAPMAS:
Egyptian Central Agency for Public Mobilization and Statistics
DBQ:
Driver Behavior Questionnaire
MIROAD:
Mobile Intelligent Recognition of Aggressive Driving
LR:
Linear regression analysis
NBR:
Negative binomial regression analysis
PR:
Poisson regression analysis
SEM:
Structural equation model
World Health Organization. Global status report on road safety 2018: summary. No. WHO/NMH/NVI/18.20. World Health Organization, 2018.
Janstrup KH (2017) Road Safety Annual Report 2017. Technical University of Denmark: Lyngby, Denmark.
Central Agency for Public Mobilization and Statistics. CAPMAS (2018), DDI-EGY-CAPMAS-Road-2018.
Dahlen ER, Edwards BD, Tubré T, Zyphur MJ, Warren CR (2012) Taking a look behind the wheel: An investigation into the personality predictors of aggressive driving. Accid Anal Prev 45:1–9. https://doi.org/10.1016/j.aap.2011.11.012
Alonso F, Esteban C, Montoro L, Serge A (2019) Conceptualization of aggressive driving behaviors through a perception of aggressive driving scale (PAD). Transp Res F: Traffic Psychol Behav 60:415–426. https://doi.org/10.1016/j.trf.2018.10.032
Tao D, Zhang R, Qu X (2017) The role of personality traits and driving experience in self-reported risky driving behaviors and accident risk among Chinese drivers. Accid Anal Prev 99(Pt A):228–235. https://doi.org/10.1016/j.aap.2016.12.009
Regev S, Rolison JJ, Moutari S (2018) Crash risk by driver age, gender, and time of day using a new exposure methodology. J Saf Res 66:131–140. https://doi.org/10.1016/j.jsr.2018.07.002
Elshamly AF, Abd El-Hakim R, Afify HA (2017) Factors affecting accidents risks among truck drivers in Egypt. In: MATEC Web of Conferences, vol 124, p 04009. EDP Sciences
Vanlaar W, Yannis G (2006) Perception of road accident causes. Accid Anal Prev 38(1):155–161. https://doi.org/10.1016/j.aap.2005.08.007
Reason J, Manstead A, Stradling S, Baxter J, Campbell K (1990) Errors and violations on the roads: a real distinction? Ergonomics 33(10-11):1315–1332. https://doi.org/10.1080/00140139008925335
Zhang H, Qu W, Ge Y, Sun X, Zhang K (2017) Effect of personality traits, age and sex on aggressive driving: psychometric adaptation of the Driver Aggression Indicators Scale in China. Accid Anal Prev 103:29–36. https://doi.org/10.1016/j.aap.2017.03.016
Cordazzo ST, Scialfa CT, Bubric K, Ross RJ (2014) The driver behaviour questionnaire: A north American analysis. J Saf Res 50:99–107. https://doi.org/10.1016/j.jsr.2014.05.002
Martinussen LM, Hakamies-Blomqvist L, Møller M, Özkan T, Lajunen T (2013) Age, gender, mileage and the DBQ: the validity of the Driver Behavior Questionnaire in different driver groups. Accid Anal Prev 52:228–236. https://doi.org/10.1016/j.aap.2012.12.036
Perepjolkina V, Renge V (2011) Drivers' Age, Gender, Driving Experience, and Aggressiveness as Predictors of Aggressive Driving Behaviour. Signum Temporis 4(1):62–72. https://doi.org/10.2478/v10195-011-0045-2
Bener A, Özkan T, Lajunen T (2008) The driver behaviour questionnaire in Arab gulf countries: Qatar and United Arab Emirates. Accid Anal Prev 40(4):1411–1417. https://doi.org/10.1016/j.aap.2008.03.003
Al Naser NB, Hawas YE, Maraqa MA (2013) Characterizing driver behaviors relevant to traffic safety: a multistage approach. J Transp Saf Secur 5(4):285–313. https://doi.org/10.1080/19439962.2013.766291
Sümer N, Lajunen T, Özkan T (2002) Sürücü davranislarinin kaza riskindeki rolü: ihlaller ve hatalar (The role of driver behaviour in accident risk: violations and errors). In: International Traffic and Road Safety Congress & Fair
Blockey PN, Hartley LR (1995) Aberrant driving behaviour: errors and violations. Ergonomics 38(9):1759–1771. https://doi.org/10.1080/00140139508925225
Mesken J, Lajunen T, Summala H (2002) Interpersonal violations, speeding violations and their relation to accident involvement in Finland. Ergonomics 45(7):469–483. https://doi.org/10.1080/00140130210129682
Gueho L, Granie MA, Abric JC (2014) French validation of a new version of the Driver Behavior Questionnaire (DBQ) for drivers of all ages and level of experiences. Accid Anal Prev 63:41–48. https://doi.org/10.1016/j.aap.2013.10.024
Şimşekoğlu Ö, Nordfjærn T, Rundmo T (2012) Traffic risk perception, road safety attitudes, and behaviors among road users: a comparison of Turkey and Norway. J Risk Res 15(7):787–800. https://doi.org/10.1080/13669877.2012.657221
Ali EK, El-Badawy SM, Shawaly EA (2014) Young drivers behavior and its influence on traffic incidents. J Traffic Logist Eng 2(1):45–51. https://doi.org/10.12720/jtle.2.1.45-51
Washington S, Karlaftis MG, Mannering F, Anastasopoulos P (2020) Statistical and econometric methods for transportation data analysis. Chapman and Hall/CRC Press. https://doi.org/10.1201/9780429244018
Cortina JM (1993) What is coefficient alpha? An examination of theory and applications. J Appl Psychol 78(1):98–104. https://doi.org/10.1037/0021-9010.78.1.98
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Public Works Department, Traffic and Highway Engineering, Cairo University, Giza, Egypt
Islam Sayed, Hossam Abdelgawad & Dalia Said
Islam Sayed
Hossam Abdelgawad
Dalia Said
All the authors confirm contribution to the paper as follows: study conception and design: DS and HA. Data collection: Sayed. Analysis and interpretation of results: IS, DS, and HA. Draft manuscript preparation: IS, DS, and HA. Manuscript review: DS and HA. All authors reviewed the results and approved the final version of the manuscript.
IS is a senior highway engineer and is interested in road safety, driver behavior, risk perception, crash analysis, autonomous vehicles, and smart city research. Sayed is currently working at RAK Municipality, where he is responsible for planning and designing the strategic megaprojects of the Emirate of Ras Al Khaimah, after working for Parsons Corporation for more than 5 years. Sayed was part of the design team responsible for the roads and infrastructure design of some of Dubai's signature projects, such as EXPO 2020, Palm Deira, the Dubai One Way system, Dubai Design District, and Healthcare City, among others.
HA is an Associate Professor at Cairo University, Traffic and Highway Engineering, and also a Director of Urban Transport Technologies at SETS. He completed his Ph.D. in ITS from the University of Toronto. Much of his professional and academic experience has been accumulated in Egypt, the Middle East, and in Canada in Traffic management, Transportation planning, operations, modeling, and optimization; ITS specifications, technical requirements, and functional testing; Smart Mobility Systems Concepts, Vision, and Strategy; Data analytics and visualization, data-driven innovation in transportation and spatial data management; and Roadway safety audits, operational reviews, speed management and traffic calming measures.
DS is an Associate Professor at Cairo University. She completed her Ph.D. at Carleton University, Canada in 2008. She has taken part and led in several research projects in Canada and Egypt related to Traffic Safety on Highways, Driver Behaviour and its Relation to Geometric Design of Highways, and Using New Technologies for Capturing Driver Behaviour Parameters. She has received several prestigious scholarships and awards during her studies including awards by the Transportation Association of Canada, and the National Science and Engineering Research Council of Canada. She is also a Professional Engineer and is involved in several strategic transportation projects.
Correspondence to Dalia Said.
Sayed, I., Abdelgawad, H. & Said, D. Studying driving behavior and risk perception: a road safety perspective in Egypt. J. Eng. Appl. Sci. 69, 22 (2022). https://doi.org/10.1186/s44147-021-00059-z
Accepted: 13 December 2021
Drivers' behavior
Driver demographics
Roadway crashes
COVID-19: years of life lost (YLL) and saved (YLS) as an expression of the role of vaccination
Klára Hulíková Tesárková (ORCID: orcid.org/0000-0003-4315-1530) &
Dagmar Dzúrová (ORCID: orcid.org/0000-0003-0530-4997)
Scientific Reports volume 12, Article number: 18129 (2022) Cite this article
Lifestyle modification
When evaluating vaccine efficacy, the conventional measures include reduction of the risk of hospitalization and death. The number of patients dying with or without vaccination is often in the public spotlight. However, when evaluating public health interventions or the burden of disease, it is more illustrative to use mortality metrics that also take into account the prematurity of the deaths, such as years of life lost (YLL) or years of life saved (YLS) thanks to vaccination. We develop this approach to evaluate the difference in YLL and YLS between COVID-19 victims with and without completed vaccination in the autumn pandemic wave (October–December 2021) in Czechia. For the analysis, individual data on all COVID-19 deaths in the country during the studied period (N = 5797) were used. While 40.6% of the deaths occurred in cohorts with completed vaccination, these correspond to only 35.1% of the years of life lost. The role of vaccination is expressed using YLS and hypothetical numbers of deaths: the registered number of deaths is approximately 3.5 times lower than would be expected without vaccination. The results illustrate that vaccination is more effective in saving lives than simplistic comparisons suggest.
During the pandemic years 2020 and 2021, Czechia ranked among the worst countries in the world in deaths per capita (the total number of COVID-19 victims in this country of 10.5 million inhabitants surpassed 41 thousand in September 2022). There were 39,248 coronavirus-related deaths registered in the country from the beginning of the pandemic until March 15, 2022. This is staggeringly high compared to, for instance, Austria, a neighboring country of similar population size, which had less than half that number of coronavirus-related deaths (14,609; March 14, 2022)1. The burden of the epidemic is most often assessed by the number of deaths, but this is only a rough indicator, as it takes into account neither the age structure of the population nor the age of the victims2.
Demography offers many indicators that eliminate the influence of age structure and thus enable better and more correct regional or international comparisons. When evaluating public health interventions or the burden of disease, it is often preferable to use mortality metrics that take into account how premature the deaths are, or the age structure of the victims in general, such as years of life lost (YLL). YLL is an established measure in demography, also used for identifying and classifying the underlying causes of premature mortality3,4,5. The method was probably first used in the Global Burden of Disease Study3.
The YLL method has already been used as a metric to evaluate the effects of COVID-19 in several published articles6,7,8. Pifarré et al. compared the effects of COVID-19 using YLL in 81 countries, including Czechia6. The authors concluded that in highly developed countries the impact of COVID-19 was 2–9 times higher than that of common seasonal influenza (compared with the median influenza year in the same country). For Czechia, in terms of YLL, the impact of COVID-19 in 2020 was up to 5 times higher than that of common seasonal flu.
These publications and many others address the estimation of YLL in the context of the ongoing COVID-19 pandemic, but only a few address the use of the YLL method to evaluate the effect of vaccination against COVID-199.
Misinformation and vaccine hesitancy, potentially leading to refusal or delayed acceptance of COVID-19 vaccines, are considered a key factor in the high number of pandemic victims. When evaluating vaccine efficacy, the conventionally evaluated outcomes include the reduction in the risk of hospitalization and death. The number of patients dying with or without completed vaccination is also often in the public spotlight. In this context, it is worth mentioning a unique study by Joshua Goldstein et al.9, whose results support prioritizing vaccination at the oldest ages. The authors show that vaccinating the most vulnerable people provides the highest protection against deaths, both in terms of the number of deaths and in terms of YLL, but also in terms of the number of years of life saved (YLS). At the time the cited study9 was prepared, the authors had to rely on assumptions about vaccination coverage and efficiency to estimate the overall supposed effects. At present, in developed countries, vaccines are available to everyone regardless of age (except for children under 5 years of age), so it is no longer necessary to base such a study on many assumptions; rather, empirical data can be used to estimate the effects of vaccination (as in the analytical part of this paper). Still, the number of YLL or YLS can be used for its clarity and simplicity.
Because most people dying of COVID-19 are elderly, the opinion resonates that YLL and YLS due to COVID-19 are low and that the young population is spared the serious consequences of the disease. To allow evaluation of such assumptions, younger ages are also included in the analysis below.
The efficacy of COVID-19 vaccines can in principle be assessed in terms of both YLL in the population without completed vaccination and YLS in the population with completed vaccination. We can pose the more common question of how many years of life have been lost to COVID-19 in the populations with and without completed vaccination, as well as the less common question of how many years of life were saved among those with completed vaccination. The measure of YLS through COVID-19 vaccination is used only sporadically; however, its construction as an alternative to YLL is very straightforward9.
We aim to estimate YLL and YLS in connection with COVID-19 using data on completed vaccination and individual COVID-19-related deaths (see the Data and methods section for proper definitions of terms and variables). This study quantifies the years of life lost and saved associated with completed vaccination in the period up to the peak of the fifth Czech epidemic wave, dominated by a single variant, the Delta variant (October–December 2021)10. To our knowledge, this is a unique study of the effectiveness of vaccination using the YLL and YLS estimation method for the Delta period.
For the purpose of the study, we used the continuously collected and published data related to the COVID-19 pandemic in Czechia11. In the database, deaths related to COVID-19 are defined as the deaths of "individuals positively tested for SARS-CoV-2 (by PCR) regardless of the reasons for their deaths, and regardless of whether they died in a hospital or outside hospital care"12. Clearly, COVID-19 was not the underlying cause of death for all registered deaths; however, official statistics based on underlying causes are published with a much greater delay, and those numbers are not broken down by vaccination status. For this reason, we used the continuously registered data, in which deaths can be taken as directly or indirectly related to the disease. These data are also included in international databases and correspond to international standards of data evidence12.
The initial dataset used for the analysis included individual records of deaths related to COVID-19 in the three months from October 1 to December 31, 2021 (N = 5797), a period determined by the availability of data at the time of analysis (March 2022). Data were obtained from the Czech National Information System, which includes records of all individuals who tested positive for SARS-CoV-212,13.
The studied period was chosen for three main reasons: (1) a single variant, the Delta variant, was the dominant strain of COVID-1910; (2) completed vaccination was available to all persons over 12 years of age; and (3) the vaccines administered were suitable for the Delta variant14.
The following characteristics were available for all death records: age, sex, and information on the course of vaccination (dates of the first, second, and booster doses). We used only the population and deaths at ages 12 and above, because younger children were not vaccinated in Czechia at that time. The dataset was divided into two groups according to completed vaccination: (a) persons who died without previously completed vaccination and (b) persons with completed vaccination at the time of death. At least one dose of a single-dose vaccine, or at least two doses of a two-dose vaccine, is considered complete vaccination. Deaths of persons with incomplete vaccination (only one dose of a two-dose vaccine) are included in the first group ("without previously completed vaccination"); they are not studied separately because of the low number of cases.
Estimated years of life lost and saved
A measure of disease burden, the expected years of life lost (YLL), is often used for comparative purposes. In the calculation, each death is weighted as a function of age at the time of death, reflecting the fact that deaths at young ages correspond to more years of life lost (i.e., a longer average remaining length of life lost) than deaths at an advanced age4,5. The potential remaining length of life was estimated using the age- and sex-specific life expectancy before the onset of the pandemic, in 2019 (\({e}_{x, 2019}\)), as published by the Czech Statistical Office15.
Because several chronic diseases and health conditions, such as obesity and diabetes mellitus, are considered factors increasing the risk of severe COVID-19 outcomes, it could be argued that people who died from COVID-19 were often comorbid, with serious diseases that are in themselves associated with reduced life expectancy. In that case, it would seem inappropriate to use the average population life expectancy as the potential remaining length of life of those deceased in relation to COVID-19: their potential remaining length of life can be expected to be shorter on average, although the shortening can only be supposed or roughly estimated, as no data are available for its calculation.
That is why the potential years of life lost or saved are estimated using three scenarios based on different assumptions about the potential remaining length of life, all of them based on the official life tables for 201915.
In the first, baseline scenario, the life expectancy (\({e}_{x, 2019}\)) is used as the estimate of the potential remaining years of life for all persons deceased in relation to COVID-19. As mentioned above, this assumption likely overestimates the potential remaining length of life.
In the second scenario, we used the 70th percentile of remaining expected years of life for each sex and age from the official life tables for 201915 (i.e., the potential remaining length of life was taken as the number of years within which the first 30% of the population aged x die, according to the distribution of the survival function \({l}_{x}\) in the life tables; \({e}_{x, 2019, P70}\)). In general, the calculation is
$${e}_{x, 2019, P}={x}_{P}+\frac{{l}_{{x}_{P}}-P\times {l}_{x}}{{l}_{{x}_{P}}-{l}_{{x}_{P}+1}}-x$$
where P = 0.7 for the 70th percentile, \({l}_{x}\) is the survival function of the life tables, x is the exact age, and \({x}_{P}\) is the highest age at which \({l}_{{x}_{P}}\) is still higher than \(P\times {l}_{x}\).
In the third scenario, the 90th percentile was used: the potential remaining length of life at age x was set to the value corresponding to the death of the first 10% of the population of that age (based again on the 2019 life tables; \({e}_{x, 2019, P90}\), calculated as in Eq. (1) with P = 0.9).
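A minimal Python sketch of Eq. (1) follows, assuming the survival function l_x is available as an array indexed by age and that the percentile age is converted to remaining years by subtracting the exact age x.

```python
import numpy as np

def remaining_life_percentile(lx: np.ndarray, x: int, p: float) -> float:
    """Remaining years at age x until the first (1 - p) share of the
    life-table cohort alive at x has died (Eq. 1).

    lx : survival function l_x of the 2019 life table, indexed by age
    p  : 0.7 for the 70th percentile, 0.9 for the 90th
    """
    target = p * lx[x]
    # x_P: highest age at which l_x still exceeds p * l_x
    x_p = max(a for a in range(x, len(lx) - 1) if lx[a] > target)
    # linear interpolation within the year of age [x_P, x_P + 1)
    fraction = (lx[x_p] - target) / (lx[x_p] - lx[x_p + 1])
    return (x_p + fraction) - x
```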
Apart from the baseline scenario, the third scenario (based on \({e}_{x, 2019, P90}\)) can be taken as the more pessimistic one in its assumptions about the potential remaining length of life, while the second scenario (based on \({e}_{x, 2019, P70}\)) may come closest to the real potential. When dealing with the scenarios, it is important to keep in mind that the main aim is to compare the overall trends and the crucial differences driven by the scenarios' initial assumptions, not to interpret the detailed resulting values, which are based on estimates and assumptions.
In the equations, the number of deaths at age x is denoted \({D}_{x}\). The calculation was carried out separately for males and females and for individual ages; values of YLL were then aggregated into the age groups defined below.
$$\mathrm{YLL}=\sum_{x}{\mathrm{YLL}}_{x}=\sum_{x}{D}_{x}\times {e}_{x, 2019, scenario=i}$$
where \({e}_{x, 2019, scenario=i}\) equals \({e}_{x, 2019}\) for the baseline scenario, \({e}_{x, 2019, P70}\) for i = 2, and \({e}_{x, 2019, P90}\) for i = 3. Values of life expectancy (\({e}_{x, 2019}\)) for males and females, as well as \({e}_{x, 2019, P70}\) and \({e}_{x, 2019, P90}\), are presented in Fig. 1. On average, a man dying at the age of 65 loses 16.3 potential years of life in scenario 1 (corresponding to the 2019 male life expectancy at age 65), 11.3 years in scenario 2, and 4.6 years in scenario 3; for females, the three values are 19.9, 16.1, and 8.1 years, respectively. In Eq. (2), the potential remaining years of life act as a weight on the number of deaths at each age.
Potential years of remaining life (Eq. 1) according to age used in the three scenarios of the analysis for males and females based on the life tables for the year 2019. Source: author's calculation according to Eq. (1)15.
Years of life saved (YLS) through COVID-19 vaccination are proposed as a (more optimistic and, therefore, potentially more publicly acceptable) measure of the effect of vaccination on COVID-19-related mortality. Since this is the estimated effect of vaccination, the value of YLS is calculated only for the sub-population with completed vaccination. It is estimated as the difference between the hypothetical years of life lost (\({YLL}_{HYP}\)) in the sub-population with completed vaccination and the years of life lost based on registered COVID-19-related deaths in the population with completed vaccination (\({YLL}_{vac}\)) (Eq. 3).
$$YLS={YLL}_{HYP}-{YLL}_{vac}$$
The hypothetical years of life lost (\({YLL}_{HYP}\)) are calculated only for the population with completed vaccination and are based on the assumption that the risk of COVID-19-related death in the population with completed vaccination was equal to that in the population without completed vaccination (Eq. 4).
The first step in the calculation of the hypothetical years of life lost (\({YLL}_{HYP}\)) is the estimation of the hypothetical number of deaths at age x (\({D}_{HYP, x}\)) under the assumption that the risk of death in the population with completed vaccination was equal to the risk of death in the population without completed vaccination. It was estimated using the population size with completed vaccination according to age and sex (\({P}_{vac, x}\)) and age-specific quotients of lethality (risk of death) from COVID-19 in the population without completed vaccination (\({lq}_{unvac, x}\)):
$${D}_{HYP, x}= {P}_{vac, x}* {lq}_{unvac, x}={P}_{vac, x}*\frac{{D}_{unvac, x}}{{P}_{unvac, x}}$$
where \({P}_{vac, x}\) and \({P}_{unvac, x}\) are the estimated population sizes at age x at the beginning of the studied period (October 1, 2021) with completed vaccination (vac) and without completed vaccination (unvac), using the numbers of fully vaccinated persons from the official records [11], and \({D}_{unvac, x}\) is the registered number of COVID-19-related deaths at age x in the population without completed vaccination. The age-specific quotient of lethality (\({lq}_{unvac, x}\)) thus represents the risk that a person without completed vaccination dies from COVID-19 during the studied period.
If the risk of death in the population with completed vaccination were the same as in the population without completed vaccination (the assumption of a null effect of completed vaccination), the number of deaths among the population with completed vaccination would equal \({D}_{HYP, x}\). Using the average number of years of life lost per death in the population without completed vaccination (\(\frac{{YLL}_{unvac, x}}{{D}_{unvac, x}}\)), the hypothetical years of life lost are calculated as (the second step of the calculation, Eq. 5):
$${YLL}_{HYP}=\sum_{x}{D}_{HYP, x}*\frac{{YLL}_{unvac, x}}{{D}_{unvac, x}}$$
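Putting Eqs. (3) to (5) together for a single age group, a minimal sketch with invented inputs (variable names are ours) might look like this:

```python
# Hypothetical inputs for one age group and sex.
P_vac, P_unvac = 800_000, 300_000        # population sizes by vaccination status
D_unvac = 600                            # registered deaths, unvaccinated
YLL_unvac = 9_000                        # YLL among the unvaccinated
YLL_vac = 1_200                          # YLL among the vaccinated

lq_unvac = D_unvac / P_unvac             # age-specific lethality quotient
D_hyp = P_vac * lq_unvac                 # Eq. (4): hypothetical deaths
YLL_hyp = D_hyp * (YLL_unvac / D_unvac)  # Eq. (5): hypothetical YLL
YLS = YLL_hyp - YLL_vac                  # Eq. (3): years of life saved
print(round(D_hyp), round(YLL_hyp), round(YLS))
```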
In the calculations of years of life saved (YLS), we used age groups (12–44, 45–64, 65–84, 85+) instead of individual ages, and all calculations were again processed separately for males and females. Other results are also presented for these age groups. The group labeled 85+ represents the oldest population, aged 85 years and over, which is most often affected by chronic diseases; younger seniors are represented by the 65–84 group. The younger ages are divided into the 12–44 group, where only a marginal share of deaths occurs, and the 45–64 group, which could still be considered relatively young (of economically active age) but had already been significantly affected by the pandemic (see below, Table 1).
Table 1 Registered numbers of deaths within the population with and without completed vaccination, by males, females, and both sexes, during the period from October 1 to December 31, 2021.
All methods were carried out in accordance with relevant guidelines and regulations; no experiments on humans were done, and no human tissue samples or data were used.
Data are routinely collected in compliance with Czech legal regulations (Act on the Protection of Public Health). To use anonymized, retrospective data from this database there is no need for ethical approval.
Tables 1 and 2 summarise the total numbers of registered deaths related to COVID-19 by age group and vaccination status (\({D}_{vac}\) and \({D}_{unvac}\)), together with the estimated hypothetical number of deaths \({D}_{HYP}\) (Table 2). In the analyzed period from October 1 to December 31, 2021, 5797 deaths were reported in Czechia, of which 3441 (59.4% of the total) occurred in the population without completed vaccination. In terms of age, the highest number of victims was aged 65–84 (3570), followed by the oldest age group, 85 and over (1481). There were 61 deaths between the ages of 12 and 44. In terms of vaccination, the relative distribution was different: the highest proportion of victims without completed vaccination was in the lowest age category (86.9% of the 61 registered deaths at age 12–44). In the age group of 45–64 years, 549 people (80.1%) died without completed vaccination and 136 people (19.9%) with completed vaccination (Table 1).
Table 2 Estimated population sizes at the beginning of the studied period (October 1, 2021) with and without completed vaccination, hypothetical numbers of deaths (population with completed vaccination), and estimated saved numbers of deaths (population with completed vaccination).
However, the absolute numbers of deaths in the individual categories are influenced by the structure of the exposed population according to vaccination coverage and age (Table 2). By October 1, 2021, approximately 64% of the population aged 12 and older had completed vaccination. Coverage by completed vaccination was highest in the oldest age groups: around 83.5% at age 85+ and around 85.0% at age 65–84. Only 70.1% of the population aged 45–64 had completed vaccination, and 49.5% in the youngest age group (12–44 years).
Table 2 gives the estimates of hypothetical deaths in the population with completed vaccination (\({D}_{HYP}\)) for each age group and sex, assuming that the risk of death in the population with completed vaccination equals the risk of death in the population without completed vaccination. However, the population with completed vaccination had considerably lower COVID-19 mortality (on average 7.5 times lower; see below and Fig. 2). For comparison, the age-specific quotients of lethality (risk of death) from COVID-19 were calculated for the populations without (\({lq}_{unvac, x}\)) and with (\({lq}_{vac, x}\)) completed vaccination. These measures are presented in Fig. 2 as numbers of registered deaths per 100,000 inhabitants according to vaccination status, for males and females, by the defined age groups.
Number of deaths per 100,000 inhabitants according to vaccination status—population with and without completed vaccination, males (left), females (right). Source: author's calculation [11,15].
Figure 2 makes clear that the risk of COVID-19-related death increases significantly with age; however, the pace is higher in the population without completed vaccination. Setting aside the youngest age group (12–44), where the numbers of deaths are small, the risk of death is 6.4 times higher for the population without completed vaccination aged 85 and older than for the vaccinated population of the same age. In the age group 65–84 years the risk is 7.3 times higher, and in the age group 45–64 it is as much as 9.5 times higher for the sub-population without completed vaccination (Fig. 2).
Years of life lost (YLL)
The total of 5797 deaths led to more than 63 thousand years of life lost in the baseline scenario 1 (where the lost potential length of life equals the 2019 life expectancy), while the total reached more than 43 thousand years in alternative scenario 2 and more than 19 thousand in alternative scenario 3. In terms of YLL, Fig. 3 shows an even more pronounced effect of vaccination, i.e. a significantly lower number of years of life lost in the sub-population with completed vaccination. About two-thirds (64.9% in scenario 1, 66.8% in scenario 2, and 69.3% in scenario 3) of the years of life lost belong to victims without completed vaccination. Perhaps even more importantly, a significantly higher proportion of the years of life lost in the population without completed vaccination is caused by deaths at economically active ages (below 65 years): around 50% of YLL among males and around one-third among females. In the population with completed vaccination, the proportion of YLL below age 65 is around 20% for both sexes. More detailed results are included in the Supplementary information S1.
Years of life lost (YLL) by males, females, population with or without completed vaccination, and Scenarios 1–3 during the period of October 1 to December 31, 2021, Czechia. Source: author's calculation [11,15].
Years of life saved (YLS)
Table 3 provides an overview of the three calculated indicators for the population with completed vaccination: the hypothetical years of life lost (\({YLL}_{HYP}\)), the registered years of life lost (\({YLL}_{vac}\)), and the years of life saved (\(YLS\)), by age and sex. The YLS measure enables the effect of vaccination to be evaluated from a less usual perspective.
Table 3 The hypothetical years of life lost (\({YLL}_{HYP}\)), years of life lost (\({YLL}_{vac}\)), and years of life saved through COVID-19 vaccination (\(YLS\)) in the population with completed vaccination during the period of October 1 to December 31, 2021, Czechia, Scenarios 1–3.
In all three scenarios, the highest number of YLS is observed in the age group 65–84 years. However, in relative terms, the proportion of YLS out of the hypothetical YLL, i.e. the effect of vaccination, is similar across all age groups and all three scenarios, at around 85–90% (Table 3).
Under the assumption of no vaccination effect, the number of COVID-19-related deaths in the population with completed vaccination would have been almost 15 thousand higher during the studied period; that is, without vaccination against COVID-19, the number of COVID-19-related deaths would probably have been 3.5 times higher in the analyzed period (Table 2). Through vaccination, around 88% of the hypothetical YLL was prevented among inhabitants with completed vaccination. An equally important outcome is that the proportions of years of life saved out of the overall hypothetical years of life lost are almost identical across all age groups (from 85 to 89%, Table 3). The results are also robust across all scenarios: the conclusions do not depend on the initial assumptions about the potential remaining length of life.
As with any other study, this one has limitations. From a methodological point of view, it needs to be mentioned that the records of COVID-19-related deaths cannot be 100% complete, because some deaths are not precisely classified by cause and some cases of COVID-19 positivity went undetected owing to limited testing, for example. In this study, we do not consider the time elapsed since vaccination completion, which may play a significant role in vaccine efficiency [16], or the lag between vaccination and its protective effect [17]; these remain subjects for another study focused on the time dimension and the duration of vaccine effects. We also have no information about the health status or comorbidities of the victims, or of the vaccinated and unvaccinated populations in general.
The COVID-19 pandemic emphasized the need for an interdisciplinary approach, and the role of demography is irreplaceable above all in evaluating the pandemic's consequences for the population and the effectiveness of applied measures. The study has developed a method usable in this evaluation and contributes to the topic of vaccine effectiveness using demographic and mathematical methods. Metrics such as YLL should be considered when evaluating the impacts of population-wide interventions: assessing years of life lost is a good indicator of the effects of a pandemic, as it provides a much more relevant view than the crude mortality rate (number of deaths per population size) often used in practice. In this study, the YLL measure was used not only to illustrate the outcome of the pandemic but, above all, to evaluate the effect of completed vaccination. It provides clear evidence of the benefits of COVID-19 vaccination, and the YLS measure illustrates the advantage of the population with completed vaccination over the population without it.
From a public health perspective, the years of life lost, which assess how much life has been shortened in populations affected by COVID-19, matter greatly, but equally important are the years of life saved by interventions, in this study COVID-19 vaccination. This study quantifies the years of life lost and saved associated with completed vaccination in the period up to the peak of the fifth Czech pandemic wave (October–December 2021), when 5797 people died from COVID-19.
These results illustrate that vaccination is even more effective in saving lives than straightforward and often simplified comparisons suggest. In the case of Czechia, almost 15 thousand COVID-19-related deaths were potentially avoided among the population with completed vaccination. Vaccination helped to reduce YLL among the fully vaccinated by around 88% during the studied period, and the registered number of deaths is approximately 3.5 times lower than would be expected without vaccination.
This study demonstrates that COVID-19 vaccination saves lives and saves years of potential future lives.
Data used in this study for the analyses are not publicly available. De-identified individual-level data are available to the scientific community (authorized access only after registration at www.uzis.cz/index-en.php). All calculations were prepared in MS Excel using the equations described in the text of the paper. The datasets analyzed in the study are available in the repository of the Ministry of Health of the Czech Republic (https://onemocneni-aktualne.mzcr.cz/covid-19) and of the Czech Statistical Office (Complete life tables for the Czech Republic for 2019, https://www.czso.cz/csu/czso/life-tables-for-the-czech-republic-cohesion-regions-and-regions-2018-2019). Ethical approval was not required for this secondary analysis of data publicly available in the Czech National Information System.
ECDC. Latest situation update for the EU/EEA, as of 11 February 2022. (2022). at https://www.ecdc.europa.eu/en/cases-2019-ncov-eueea.
Hulíková Tesárková, K. & Dzúrová, D. The age structure of cases as the key of COVID-19 severity: Longitudinal population-based analysis of European countries during 150 days. Scand. J. Public Health 50(6), 738–747. https://doi.org/10.1177/14034948211042486 (2021).
Murray, C.J.L. & Lopez, A.D. (eds.). Global Burden of Disease: A comprehensive assessment of mortality and disability from diseases, injuries, and risk factors in 1990 and projected to 2020 (The Global Burden of Disease and Injury). (Harvard School of Public Health, 1996).
Mazzuco, S., Suhrcke, M. & Zanotto, L. How to measure premature mortality? A proposal combining "relative" and "absolute" approaches. Popul. Health Metrics 19, 41. https://doi.org/10.1186/s12963-021-00267-y (2021).
Martinez, R., Soliz, P., Caixeta, R. & Ordunez, P. Reflection on modern methods: Years of life lost due to premature mortality—a versatile and comprehensive measure for monitoring non-communicable disease mortality. Int. J. Epidemiol. 48(4), 1367–1376. https://doi.org/10.1093/ije/dyy254 (2019).
Pifarré Arolas, H. et al. Years of life lost to COVID-19 in 81 countries. Sci. Rep. 11, 3504. https://doi.org/10.1038/s41598-021-83040-3 (2021).
Mitra, A. K. et al. Potential years of life lost due to COVID-19 in the United States, Italy, and Germany: An old formula with newer ideas. Int. J. Environ. Res. Public Health 17(12), 4392. https://doi.org/10.3390/ijerph17124392 (2020).
Hanlon, P. et al. COVID-19—exploring the implications of long-term condition type and extent of multimorbidity on years of life lost: A modelling study. Wellcome Open Res. 5, 75. https://doi.org/10.12688/wellcomeopenres.15849.3 (2021).
Goldstein, J. R., Cassidy, T. & Wachter, K. W. Vaccinating the oldest against COVID-19 saves both the most lives and most years of life. Proc. Natl. Acad. Sci. 118, 11. https://doi.org/10.1073/pnas.2026322118 (2021).
Hodcroft, E.B. CoVariants: SARS-CoV-2 mutations and variants of interest. (2021). https://covariants.org.
Komenda, M., Panoška, P., Bulhart, V., Žofka, J., Brauner, T., Hak, J., Jarkovský, J., Mužík, J., Blaha, M., Kubát, J., Klimeš, D., Langhammer, P., Daňková, Š., Májek, O., Bartůňková, M. & Dušek, L. COVID‑19: Přehled aktuální situace v ČR. Onemocnění aktuálně. (Ministry of Health, Czech Republic, 2020). https://onemocneni-aktualne.mzcr.cz/COVID-19. ISSN 2694–9423.
Komenda, M. et al. Sharing datasets of the COVID-19 epidemic in the Czech Republic. PLoS ONE 17(4), e0267397. https://doi.org/10.1371/journal.pone.0267397 (2022).
Komenda, M. et al. Complex reporting of the COVID-19 epidemic in the Czech Republic: Use of an interactive web-based app in practice. J. Med. Internet Res. 22(5), e19367. https://doi.org/10.2196/19367 (2020).
WHO. Episode #44 - Delta variant and vaccines, Science conversation. (2021) at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/media-resources/science-in-5/episode-44---delta-variant-and-vaccines.
Czech Statistical Office. Life Tables for the Czech Republic, Cohesion Regions, and Regions—2018–2019: Complete life tables for the Czech Republic for 2019. (2020) at https://www.czso.cz/csu/czso/life-tables-for-the-czech-republic-cohesion-regions-and-regions-2018-2019.
Šmíd, M. et al. Protection by vaccines and previous infection against the Omicron variant of SARS-CoV-2. J. Infect. Dis. https://doi.org/10.1093/infdis/jiac161 (2022).
Li, H., Wang, L., Zhang, M., Lu, Y. & Wang, W. Effects of vaccination and non-pharmaceutical interventions and their lag times on the COVID-19 pandemic: Comparison of eight countries. PLoS Negl. Trop. Dis. 16(1), e0010101. https://doi.org/10.1371/journal.pntd.0010101 (2022).
The authors thank the staff of the Institute of Health Information and Statistics of the Czech Republic for allowing access to the database.
Open access funding provided by Charles University. This output was supported by the NPO "Systemic Risk Institute" (LX22NPO5101).
Department of Demography and Geodemography, Faculty of Sciences, Charles University, Prague, Czechia
Klára Hulíková Tesárková
Department of Social Geography and Regional Development, Faculty of Sciences, Charles University, Prague, Czechia
Dagmar Dzúrová
K.H.T. and D.D. participated in the conception and design of the study. K.H.T. performed the statistical analyses. K.H.T. and D.D. interpreted the data and wrote the manuscript together. Both authors agree to be responsible for all aspects of the work ensuring integrity and accuracy.
Correspondence to Dagmar Dzúrová.
Hulíková Tesárková, K., Dzúrová, D. COVID-19: years of life lost (YLL) and saved (YLS) as an expression of the role of vaccination. Sci Rep 12, 18129 (2022). https://doi.org/10.1038/s41598-022-23023-0
References of "Wang, Jun 50003303"
Privacy-preserving Recommender Systems Facilitated By The Machine Learning Approach
Wang, Jun
Doctoral thesis (2018)
Recommender systems, which play a critical role in e-business services, are closely linked to our daily life. For example, companies such as Youtube and Amazon are always trying to secure their profit by estimating personalized user preferences and recommending the most relevant items (e.g., products, news, etc.) to each user from a large number of candidates. State-of-the-art recommender systems are often built on top of collaborative filtering techniques, whose accuracy relies on precisely modeling user-item interactions by analyzing massive user historical data, such as browsing history, purchasing records, locations and so on. Generally, more data can lead to more accurate estimations and more commercial strategies, as such, service providers have incentives to collect and use more user data. On the one hand, recommender systems bring more income to service providers and more convenience to users; on the other hand, the user data can be abused, raising immediate privacy risks to the public. Therefore, how to preserve privacy while enjoying recommendation services becomes an increasingly important topic to both the research community and commercial practitioners. The privacy concerns can be disparate when constructing recommender systems or providing recommendation services under different scenarios. One scenario is that a service provider wishes to protect its data privacy from the inference attack, a technique that aims to infer more information (e.g., whether a record is in or not) about a database by analyzing statistical outputs; the other scenario is that multiple users agree to jointly perform a recommendation task, but none of them is willing to share their private data with any other users. Security primitives, such as homomorphic encryption, secure multiparty computation, and differential privacy, are immediate candidates to address privacy concerns. A typical approach to build efficient and accurate privacy-preserving solutions is to improve the security primitives, and then apply them to existing recommendation algorithms. However, this approach often yields a solution far from satisfactory in practice, as most users have a low tolerance to the latency increase or accuracy drop in recommendation services. The PhD program explores machine learning aided approaches to build efficient privacy-preserving solutions for recommender systems. The results of each proposed solution demonstrate that machine learning can be a strong assistant for privacy-preserving, rather than only a troublemaker.
Facilitating Privacy-preserving Recommendation-as-a-Service with Machine Learning
Wang, Jun ; Delerue Arriaga, Afonso ; Tang, Qiang et al
Poster (2018, October)
Machine-Learning-as-a-Service has become increasingly popular, with Recommendation-as-a-Service as one of the representative examples. In such services, providing privacy protection for users is an important topic. Reviewing privacy-preserving solutions which were proposed in the past decade, privacy and machine learning are often seen as two competing goals at stake. Though improving cryptographic primitives (e.g., secure multi-party computation (SMC) or homomorphic encryption (HE)) or devising sophisticated secure protocols has achieved remarkable progress, applying them in conjunction with state-of-the-art recommender systems often yields far-from-practical solutions. We tackle this problem from the direction of machine learning. We aim to design crypto-friendly recommendation algorithms, so as to obtain efficient solutions by directly using existing cryptographic tools. In particular, we propose an HE-friendly recommender system, referred to as CryptoRec, which (1) decouples user features from latent feature space, avoiding training the recommendation model on encrypted data; (2) only relies on addition and multiplication operations, making the model straightforwardly compatible with HE schemes. The properties turn recommendation-computations into a simple matrix-multiplication operation. To further improve efficiency, we introduce a sparse-quantization-reuse method which reduces the recommendation-computation time by $9\times$ (compared to using CryptoRec directly), without compromising the accuracy. We demonstrate the efficiency and accuracy of CryptoRec on three real-world datasets. CryptoRec allows a server to estimate a user's preferences on thousands of items within a few seconds on a single PC, with the user's data homomorphically encrypted, while its prediction accuracy is still competitive with state-of-the-art recommender systems computing over clear data. Our solution enables Recommendation-as-a-Service on large datasets at a nearly real-time (seconds) level.
Differentially Private Neighborhood-based Recommender Systems
Wang, Jun ; Tang, Qiang
in IFIP Information Security & Privacy Conference (2017, May)
Privacy issues of recommender systems have become a hot topic for society as such systems are appearing in every corner of our life. In contrast to the fact that many secure multi-party computation protocols have been proposed to prevent information leakage in the process of recommendation computation, very little has been done to restrict the information leakage from the recommendation results. In this paper, we apply the differential privacy concept to neighborhood-based recommendation methods (NBMs) under a probabilistic framework. We first present a solution, by directly calibrating Laplace noise into the training process, to find, in a differentially private way, the maximum a posteriori similarity parameters. Then we connect differential privacy to NBMs by exploiting a recent observation that sampling from the scaled posterior distribution of a Bayesian model results in provably differentially private systems. Our experiments show that both solutions allow promising accuracy with a modest privacy budget, and the second solution yields better accuracy if the sampling asymptotically converges. We also compare our solutions to the recent differentially private matrix factorization (MF) recommender systems, and show that our solutions achieve better accuracy when the privacy budget is reasonably small. This is an interesting result because MF systems often offer better accuracy when differential privacy is not applied.
A Probabilistic View of Neighborhood-based Recommendation Methods
in ICDM 2016 - IEEE International Conference on Data Mining series (ICDM) workshop CLOUDMINE (2016, December 12)
Probabilistic graphical models are an elegant framework to compactly represent complex real-world observations by modeling uncertainty and logical flow (conditionally independent factors). In this paper, we present a probabilistic framework of neighborhood-based recommendation methods (PNBM) in which similarity is regarded as an unobserved factor. Thus, PNBM reduces the estimation of user preference to maximizing a posterior over similarity. We further introduce a novel multi-layer similarity descriptor which models and learns the joint influence of various features under PNBM, and name the new framework MPNBM. Empirical results on real-world datasets show that MPNBM allows very accurate estimation of user preferences.
Privacy-preserving Friendship-based Recommender Systems
Tang, Qiang; Wang, Jun
in IEEE Transactions on Dependable and Secure Computing (2016, November)
Privacy-preserving recommender systems have been an active research topic for many years. However, until today, it is still a challenge to design an efficient solution without involving a fully trusted third party or multiple semitrusted third parties. The key obstacle is the large underlying user populations (i.e. huge input size) in the systems. In this paper, we revisit the concept of friendship-based recommender systems, proposed by Jeckmans et al. and Tang and Wang. These solutions are very promising because recommendations are computed based on inputs from a very small subset of the overall user population (precisely, a user's friends and some randomly chosen strangers). We first clarify the single prediction protocol and Top-n protocol by Tang and Wang, by correcting some flaws and improving the efficiency of the single prediction protocol. We then design a decentralized single protocol by getting rid of the semi-honest service provider. In order to validate the designed protocols, we crawl Twitter and construct two datasets (FMT and 10-FMT) which are equipped with auxiliary friendship information. Based on 10-FMT and the MovieLens 100k dataset with simulated friendships, we show that even if our protocols use a very small subset of the datasets, their accuracy can still be equal to or better than some baseline algorithm. Based on these datasets, we further demonstrate that the outputs of our protocols leak a very small amount of information about the inputs, and the leakage decreases when the input size increases. We finally show that the single prediction protocol is quite efficient but the Top-n is not. However, we observe that the efficiency of the Top-n protocol can be dramatically improved if we slightly relax the desired security guarantee.
Recommender Systems and their Security Concerns
Scientific Conference (2015, October)
Instead of simply using two-dimensional User × Item features, advanced recommender systems rely on additional dimensions (e.g. time, location, social network) in order to provide better recommendation services. In the first part of this paper, we survey a variety of dimension features and show how they are integrated into the recommendation process. As service providers collect more and more personal information, great privacy concerns arise for the public. On the other side, the service providers could also suffer from attacks launched by malicious users who want to bias the recommendations. In the second part of this paper, we survey attacks from and against recommender service providers, and existing solutions.
Privacy-Preserving Context-Aware Recommender Systems: Analysis and New Solutions
Tang, Qiang ; Wang, Jun
in Computer Security - ESORICS 2015 - 20th European Symposium on Research in Computer Security (2015, September)
Nowadays, recommender systems have become an indispensable part of our daily life and provide personalized services for almost everything. However, nothing is for free – such systems have also upset the society with severe privacy concerns because they accumulate a lot of personal information in order to provide recommendations. In this work, we construct privacy-preserving recommendation protocols by incorporating cryptographic techniques and the inherent data characteristics in recommender systems. We first revisit the protocols by Jeckmans et al. and show a number of security issues. Then, we propose two privacy-preserving protocols, which compute predicted ratings for a user based on inputs from both the user's friends and a set of randomly chosen strangers. A user has the flexibility to retrieve either a predicted rating for an unrated item or the Top-N unrated items. The proposed protocols prevent information leakage from both protocol executions and the protocol outputs. Finally, we use the well-known MovieLens 100k dataset to evaluate the performances for different parameter sizes.
Recalibrating Equus evolution using the genome sequence of an early Middle Pleistocene horse.
Orlando, Ludovic; Ginolhac, Aurélien ; Zhang, Guojie et al
in Nature (2013), 499(7456), 74-8
The rich fossil record of equids has made them a model for evolutionary processes. Here we present a 1.12-times coverage draft genome from a horse bone recovered from permafrost dated to approximately 560-780 thousand years before present (kyr BP). Our data represent the oldest full genome sequence determined so far by almost an order of magnitude. For comparison, we sequenced the genome of a Late Pleistocene horse (43 kyr BP), and modern genomes of five domestic horse breeds (Equus ferus caballus), a Przewalski's horse (E. f. przewalskii) and a donkey (E. asinus). Our analyses suggest that the Equus lineage giving rise to all contemporary horses, zebras and donkeys originated 4.0-4.5 million years before present (Myr BP), twice the conventionally accepted time to the most recent common ancestor of the genus Equus. We also find that horse population size fluctuated multiple times over the past 2 Myr, particularly during periods of severe climatic changes. We estimate that the Przewalski's and domestic horse populations diverged 38-72 kyr BP, and find no evidence of recent admixture between the domestic horse breeds and the Przewalski's horse investigated. This supports the contention that Przewalski's horses represent the last surviving wild horse population. We find similar levels of genetic variation among Przewalski's and domestic populations, indicating that the former are genetically viable and worthy of conservation efforts. We also find evidence for continuous selection on the immune system and olfaction throughout horse evolution. Finally, we identify 29 genomic regions among horse breeds that deviate from neutrality and show low levels of genetic variation compared to the Przewalski's horse. Such regions could correspond to loci selected early during domestication.
Bio II Module I
JessicaDyson1996
Select the correct taxonomic order from the highest (broadest most inclusive) to the lowest category
Kingdom, phylum, class, order, family, genus, species
Which of the following represents a valid criticism of the morphological definition of a species?
Individuals in breeding plumage look different from individuals of the same species outside breeding season
There are more than 350,000 species of beetles within the Coleoptera order of insects; more than any other order of animals on earth. This fact is an example of the concept of:
Species richness
Which of the following is true concerning the genetic material of bacteria?
Their genetic material may be double-stranded DNA or double-stranded RNA or single-stranded DNA or single-stranded RNA
The way fungi obtain nutrition makes them an important contributor to the environment because they
Decompose organic material and recycle it
Which of the following diseases is caused by a fungus?
Coccidiomycosis
An individual, elongated cell of a fungus is called a(n):
Hypha
One organism that demonstrates an exception to the definition of unicellular is Anabaena, which has heterocysts with very thick walls. The heterocysts carry out the process of ___________
Tropical rain forests occupy about 2% of the earth's land surface but contain 50-80% of all the terrestrial species on earth. This is an example of the concept of ________
Ecological diversity
Which of the following statements is true concerning viruses
Viruses consist of a protein coat surrounding some genetic material
Are antibiotics such as penicillin effective against viral infections in humans?
No, an antibiotic would have to kill the host human cell to kill the virus harbored inside
Which of the following is a zooplanktonic organism?
Stentor sp
What function does the lipopolysaccharide layer have for the organisms that possess it?
Defends the organism against the host's defenses
These hair-like structures are found on some bacteria, such as Neisseria gonorrhoae. They help the bacterium stick to the tissue of the host.
Fimbriae
What is an accurate definition for the term eukaryote?
An organism that has a membrane surrounding its genetic material (DNA)
These structures help some organisms survive in extreme circumstances such as freezing or dryness. These structures have a tough wall that may resist even in boiling temperatures. What are these structures?
Endospores
An organism such as Spirogyra is photosynthetic. It is classified as a(n) __________.
Where does a photosynthetic bacterium such as Synechocystis carry out the process of photosynthesis?
On a fold of the plasma membrane called a thylakoid
How does the Endangered Species Act define an endangered species?
A species that is at risk of extinction throughout all or a significant part of its range
Which type of organisms are used to produce biofuels (e.g. biodiesel)?
Chemoheterotrophs such as diatoms
Clostridium perfringens is a bacterium that can cause gas gangrene. It survives deep in human tissue. One way that wounds infected by this bacterium are treated is with oxygen (hyperbaric) therapy to kill the bacterium. C. perfringens is a(n):
Obligate anaerobe
What is the pathogen for syphillis?
Treponema pallidum bacterium
What is the pathogen for the bubonic plague?
Yersenia pestis bacterium
This disease kills many thousands of people (often children) each year. It kills by dehydrating the victim and depleting the body's electrolytes. It is caused by a comma-shaped bacterium.
Which type of organism produces calcium alginate (used as a wound dressing material)?
Something that transmits a disease, but does not cause it, is a(n):
Which type of organism produces carrageenan (a binder used in the food industry)?
The Endangered Species Act (ESA) helps to protect biodiversity by a sequence of actions. What is the first course of action taken by the agencies that administer the ESA?
Evaluate data compiled by hunters and fishermen and concerned citizens to identify species that may need protection
The class of antibiotics that includes penicillin work by which mechanism?
Interfering with the productions of peptidoglycan in the cell wall of microbes
The technique of Gram staining bacteria based on their cell structure. Which layer of the cell structure binds the crystal violet stain best?
Peptidoglycan
Pseudomonas aeruginosa is a gram-negative bacillus. What does it look like under the microscope after Gram staining?
Red, rod-shaped
Lichens represent a symbiotic relationship between a(n) _____ and a(n) ______.
Chemoheterotroph fungus and a photoautotroph algae
Which of the following organisms is associated with red tides in Florida?
In which phylum (division) of fungi are the reproductive structures produced in a sac?
Ascomycota ( ascomycetes )
Diatoms and amoebae (that's plural for amoeba) are classified in the kingdom ________
Protista
In which phylum (division) of fungi, are the reproductive structures produced on a club-like structure?
Basidiomycota (basidiomycetes)
Laminaria, the large alga commonly known as kelp, is often classified as a protist, but some scientists disagree with that classification. What is so different about Laminaria from a typical protist such as Paramecium?
It has gross differentiation: the blades have specialized floats and the blades are different from the stalks
These fungi live in nodules at the roots of plants such as tall trees in a tropical rainforest; they improve the nutrition and anchoring of the tree.
Mycorrhizae
Select the correct statement regarding fungal nutrition
All fungi are heterotrophic, releasing enzymes to digest organic material outside their body and absorbing the nutrients
The cell wall is composed chiefly of _____
Chitin
This is the general term for the filamentous (tubular) cell of all fungi
This type of fungi feeds on dead organic material
Saprobe
In what way are yeast different from other fungi?
Unicellular (undifferentiated)
A ______ is a fungal infection on the surface of plants. One such _______ is Ustilago which is edible to humans and very expensive.
This tissue in fungi connects the organism to its substrate (e.g. soil or slice of bread)
Produces oxygen:
How paramecium moves:
Malaria is:
Plasmodium falciparum
Amoebae are:
Hetertrophic and eat with their pseudopod
What is the cause of red tide?
This disease causes extreme diarrhea:
This is the name of the bubonic plague:
Yersinia pestis bacterium
Why does the CDC (Centers for Disease Control) recommend that you not use anti-bacterial soap at home?
Anti-bacterial soap promotes anti-biotic resistance among bacteria
Which molecule is present in the wall of all bacteria?
Which important role does Anabaena perform?
Why is the antibiotic penicillin not effective for treating viral infections?
Contains membrane-enclosed organelles.
Prokaryotic cell
Lacks a nucleus or other membrane-enclosed organelles
Translates genes into proteins
The "library" of genetic instructions that an organism inherits
Feedback regulation
the regulation of a process by its output or end product. Body reacting to different levels.
Bacteria and Archaea are both:
Prokaryotic
The three domains of life:
Domain Bacteria, Domain Archaea, Domain Eukarya
Domain Archaea
Some of the prokaryotes known as archaea live in Earth's extreme environments, such as salty lakes and boiling hot springs. Domain Archaea includes multiple kingdoms.
Domain Eukarya consists of:
Kingdom Plantae, Kingdom Fungi, Protists, and Kingdom Animalia
Cilia of a Paramecium
The cilia of the single-called Paramecium propel the organism through the pond water.
Charles Darwin created:
The theory of natural selection
Collecting and analyzing observations can lead to important conclusions based on this type of logic. Through induction, we derive generalizations from a large number of specific observations. "The sun always rises in the east", "All organisms are made of cells" are good examples of this.
Tentative answer to a well-framed question - an explanation on trial. Must lead to predictions.
A type of logic called deductive reasoning is also built into the use of hypotheses in science. While induction entails reasoning from a set of specific observations to reach a general conclusion, deductive reasoning involves logic that flows in the opposite directions, from the general to the specific.
Broader in scope than a hypothesis
All the organisms on your campus make up
Which of the following is a correct sequence of levels in life's hierarchy, proceeding downward from an individual?
Nervous system, brain, nervous tissue, nerve cell
Genetic information is encoded in the nucleotide sequences of:
It is DNA that transmits heritable information from parents to:
DNA sequences called _____ program a cell's protein production by being transcribed into mRNA, and then translated into specific proteins through a process called _____
Genes, Gene Expression
The large scale analysis of DNA sequences of a species (its genome) as well as the comparison of genomes between species.
Uses computational tools to deal with huge volumes of sequence data.
Which of the following is not an observation or inference on which Darwin's theory of natural selection is based?
Poorly adapted individuals never produce offspring
Systems biology is mainly an attempt to
Understand the behavior of entire biological systems by studying interactions among its component parts
Protists and bacteria are grouped into different domains because:
Protists have a membrane-bounded nucleus
Which of the following best demonstrates the unity among all organisms?
the structure and function of DNA
A controlled experiment is one that
tests experimental and control groups in parallel
Difference of hypothesis and a theory?
Hypotheses are usually narrow in scope, theories have broad explanatory power
Which of the following in an example of qualitative data?
The fish swam in a zigzag motion
The gametophyte generation is always the _____ generation in plants
Haploid
In liverworts, such as Marchantia, eggs are produced in a structure called a(n) ________
Archegonium
The rapidly mitotic area from which new growth occurs in all plants is called the ________
Apical meristem
The light reactions in photosynthesis produce _______
ATP and oxygen
In the process of photosynthesis, which molecule first captures the energy from light?
In the dark reactions of photosynthesis, this enzyme catalyzes the fixation of carbon to form sugar
Rubisco
This website was created for the predictions for the upcoming Congressional Elections made by the Political Statistics class at Montgomery Blair High School in Silver Spring, Maryland. Under the guidance of Mr. David Stein, this model (which we named the Overall Results of an Analytical Consideration of the Looming Elections a.k.a. ORACLE of Blair) was developed by a group of around 70 high school seniors, working diligently since the start of September. Apart from the youth and enthusiasm that went into making it, the advantage our model has over professionally developed models is transparency. Unlike professionals, we need not have any secrets in regards to how our predictions are generated. In fact, the sections that follow attempt to detail exactly how we come up with all of the numbers involved in our model. If you are interested by politics, statistics, education, or just agree with our predictions, please tell your friends and social media followers about the work that we've done.
First, some conventions used throughout the explanation of our methodology. All calculations made are based on the two party vote, meaning that any votes for a third-party or independent candidate do not count. For example, in Oregon's 3rd Congressional District, we say that in 2014 the Democratic vote percentage was \(78.7\%\), even though the Democratic candidate only got \(72.3\%\) of the actual vote. The \(78.7\%\) represents the ratio of votes cast for the Democrat to the total votes cast for either the Republican or the Democrat, hence it's a two-party vote percentage. The two-party vote percentage differs from the actual vote percentage as some votes are not counted due to the votes being cast for a third-party or an independent candidate.
Another convention used is that, when using a metric centered at \(0\), positive numbers mean a Democratic advantage and negative numbers mean a Republican advantage. For example, when talking about margin in the elections, we usually look at the Democratic margin, as opposed to the Republican margin. A margin of \(10\%\) means that the Democratic candidate received \(10\%\) more of the two-party vote than the Republican candidate. This means that the Democrat got \(55\%\) of the two-party vote. Going in the other direction, a Democratic margin of \(-20\%\) means that the Democratic candidate received \(20\%\) less of the two-party vote than the Republican candidate. This means that the Republican candidate prevailed with \(60\%\) of the two-party vote.
BPI and SEER
The Blair Partisan Index (BPI) is a metric that measures a district's partisan voting tendencies relative to the nation. In order to calculate this, we subtract the national Democratic vote percentage from a district's Democratic vote percentage for four recent elections: the 2012 and 2016 Presidential elections and the 2014 and 2016 House elections. The subtraction leads to the relative Democratic vote percentages of a district. For example, in AR-02 during the 2016 House election, the district cast \(39\%\) of their two-party vote for the Democratic nominee. In comparison, the nation as a whole cast \(50.57\%\) of their two-party vote for a Democratic nominee.
To be consistent with our predicted 2018 incumbency advantage (weights for SEER), we adjusted 2014 and 2016 House vote proportions based on incumbency in those years. Specifically, if there was a Democratic incumbent running we subtracted \(3.45\%\) (\(6.90\%/2\)) from the Democratic vote proportion and if there was a Republican incumbent running we added \(4.09\%\) (\(8.17\%/2\)) to the Democratic two-party vote proportion. In 2016, there was a Republican incumbent running for the House in AR-02. Therefore, we added \(4.09\%\) to the district's Democratic vote percentage, leading to an adjusted vote percentage of \(39\% + 4.09\% = 43.09\%\) for the 2016 House election. We then take the adjusted vote percentage and subtract the national average of \(50.57\%\) from it to get a relative 2016 House vote percentage of \(43.09\% - 50.57\% = -7.48\%\). The BPI is a weighted average of the relative vote percentages, weighted by the table below:
Relative 2012 Presidential Vote Weight: 0.133
Relative 2014 House Vote Weight: 0.278
Relative 2016 Presidential Vote Weight: 0.244
Relative 2016 House Vote Weight: 0.345
The BPI of a district with relative vote percentages \( v_1, v_2, v_3, v_4 \) is \( 0.133v_{1}+0.278v_{2}+0.244v_{3}+0.345v_{4} \). So for AR-02, the BPI would be \(0.133(-8.01\%) + 0.278(-1.35\%) + 0.244(-6.79\%)+0.345(-7.48\%)\), which is \(-5.68\%\).
Districts in which the 2014 and/or 2016 House elections did not have both a Democratic candidate and a Republican candidate pose a problem, since only one major party has a vote total, and that vote total is not meaningful. Therefore, if a past race was unopposed we did not use it in calculating that district's BPI. That is to say, we calculated the BPI as the weighted average of only the races with candidates from both major parties, using the above weights for each election and dividing the weighted sum by the sum of the weights for the elections used. As an example, consider AR-04, where there was no Democratic candidate for the House seat in 2016. AR-04 has relative vote percentages of \(-15.22\%\) for Obama in 2012, \(-2.85\%\) for the Democratic House candidate in 2014, and \(-18.33\%\) for Clinton in 2016. There was no incumbent running for the House seat in 2014, so no adjustment is needed. The BPI of AR-04 is therefore \([0.133(-15.22\%)+0.278(-2.85\%)+0.244(-18.33\%)]/(0.133+0.278+0.244)=-11.13\%\).
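A minimal sketch of the BPI calculation, including the renormalization used for unopposed races, is shown below; the function and variable names are ours, and None marks a race without both major-party candidates.

```python
# Weights for: 2012 Pres, 2014 House, 2016 Pres, 2016 House
W = [0.133, 0.278, 0.244, 0.345]

def bpi(relative_votes):
    """Weighted average of relative vote percentages, renormalized over
    the elections that actually had both major-party candidates."""
    pairs = [(w, v) for w, v in zip(W, relative_votes) if v is not None]
    return sum(w * v for w, v in pairs) / sum(w for w, _ in pairs)

print(round(bpi([-8.01, -1.35, -6.79, -7.48]), 2))   # AR-02: -5.68
print(round(bpi([-15.22, -2.85, -18.33, None]), 2))  # AR-04 (no 2016 House race)
```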
The Synthesized using Earlier Elections as Rationale (SEER) percentage is the first forecast given by the ORACLE of Blair process. SEER begins with a prediction of the Democratic margin, or by how much of the two-party vote the Democratic nominee will win or lose. The Democratic margin prediction is a scale of the BPI shifted by incumbency, incumbency being whether or not a Democratic or a Republican incumbent is running. The scale of the BPI and the shift based on incumbency are listed below:
Partisanship Weight: 2
Dem Incumbency Weight: +6.90%
Rep Incumbency Weight: -8.17%
By our positive-negative convention, we subtract \(8.17\%\) if a Republican incumbent is running and add \(6.90\%\) if a Democratic incumbent is running. So, as AR-02 has a BPI of \(-5.68\%\) and a Republican incumbent running, SEER predicts a Democratic margin of \(2 (-5.68\%) - 8.17\% = -19.53\%\), meaning SEER predicts that the Democratic nominee will lose by \(19.53\%\) of the two-party vote. This leads to the final SEER prediction of the Democratic two-party vote, which is calculated as \(50\% + 0.5 (margin)\). So for AR-02, SEER predicts that \(50\% + 0.5(-19.53\%) = 40.24\%\) of the two-party vote will go to the Democratic nominee.
We also assigned a standard deviation to each SEER prediction. Based on some tests of the predictive accuracy of past election results, we used a standard deviation of \(0.066\) if there is an incumbent (of either party) running in 2018 and \(0.074\) if there is not. These standard deviations are for the two-party vote proportion predictions.
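A minimal sketch of the SEER step follows; the function is ours, it works in percentage points rather than proportions (so the standard deviations become 6.6 and 7.4 points), and the incumbency codes "D"/"R" are our convention.

```python
def seer(bpi, incumbent=None):
    """SEER Democratic two-party vote prediction (percent) and its
    standard deviation, given a district's BPI and incumbency."""
    margin = 2 * bpi                       # partisanship weight of 2
    if incumbent == "D":
        margin += 6.90                     # Democratic incumbency shift
    elif incumbent == "R":
        margin -= 8.17                     # Republican incumbency shift
    vote = 50 + 0.5 * margin               # two-party vote share, percent
    sd = 6.6 if incumbent else 7.4         # std. dev., percentage points
    return vote, sd

print(seer(-5.68, "R"))                    # AR-02: (40.235, 6.6)
```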
National Mood Shift (bigmood)
To calculate an adjustment for the national mood, we started with an estimation of the expected major-party voter turnout in 2018. To get this, we started with the sum of Democratic and Republican voters in each Congressional election in 2014. In cases where the 2014 turnout in a district was problematic, due to there not being a contested House race (leading to low vote totals) or due to redistricting, we used the 2016 turnout in that district scaled down by the ratio of average 2014 contested-district turnout to average 2016 contested-district turnout (which is less than 1). In cases where the 2016 turnout is also problematic, we resorted to using the average 2014 turnout for those districts. We then determined, based on our SEER predictions for each district and the number of expected major-party voters in each district, the predicted total numbers of Democratic and Republican votes at the national level, from which we get the predicted national two-party Democratic vote.
We then compare this to current generic ballot polls. Generic ballot polls ask respondents nationwide whether they plan to vote for the Democratic Congressional candidate or for the Republican Congressional candidate in their districts without naming the candidate. We use a weighted average of generic ballot polls, averaged using the same method as used for polls in an individual district (discussed below); the weighted average Democratic two-party vote is referred to here as the current National Mood. We then find the difference between the current National Mood and the National fundamental prediction discussed in the first paragraph. Let us call this difference the National Mood Shift.
Each district has an elasticity (taken from 538), which quantifies how much more or less than the national average a district is affected by shifts in the national mood. The average elasticity is \(1.00\) (affected the same as the country as a whole). For example, the Democratic vote percentage of a hypothetical district with elasticity \(0.90\) is expected to shift by \(0.9\) points for every 1-point shift in the National Mood, in the same direction as the national mood.
To combine the National Mood with the SEER predictions, we add the product of the National Mood Shift and the elasticity for each district to the SEER prediction for the district to get the bigmood prediction for each district.
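As a sketch of the bigmood adjustment (all numbers below are illustrative, not our actual aggregates):

```python
national_mood = 54.0      # generic-ballot weighted average, Dem two-party %
seer_national = 52.5      # SEER predictions aggregated over 2014 turnout
shift = national_mood - seer_national      # national mood shift

def bigmood(seer_vote, elasticity):
    """Shift a district's SEER prediction (percent) by the national mood
    shift, scaled by the district's elasticity."""
    return seer_vote + elasticity * shift

print(bigmood(40.24, 0.90))                # hypothetical elasticity-0.90 district
```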
There are two sources of variation of the shift. The first is the generic ballot polling average. The standard deviation of the polling average (\(\sigma_{p}\)) isn't calculated from the standard deviations of the individual polls, but rather by looking at the relationship between the generic ballot average and the national popular House vote from 2002 to 2016. This yields a \(\sigma_{p}\) of \(1.38\) percentage points. The second source of variation is the variation of the SEER predictions, which persists when they are applied to the 2014 voter turnout. The standard deviation (\(\sigma_{q}\)) of the SEER-based projected national vote is therefore:
$$\sigma_{q} = \frac{\sqrt{\sum\limits_{i=1}^{435} t_{i}^{2}\sigma_{i}^{2}}}{\sum\limits_{i=1}^{435} t_{i}}$$
Where the \(i\)th district (from \(1\) to \(435\)) has 2014 two-party turnout \(t_{i}\) and SEER prediction standard deviation \(\sigma_{i}\). The overall standard deviation of the National Mood Shift is given by \(\sqrt{\sigma_{p}^{2}+\sigma_{q}^{2}}\).
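A short sketch of this computation in Python (the function and argument names are ours), under the section's assumption that the district projections are independent:

```python
import math

def national_seer_sd(turnouts, sigmas):
    """sigma_q: SD of the turnout-weighted national SEER projection.

    turnouts: 2014 two-party turnout t_i for each district.
    sigmas: SEER standard deviation sigma_i for each district.
    """
    numerator = math.sqrt(sum(t * t * s * s for t, s in zip(turnouts, sigmas)))
    return numerator / sum(turnouts)
```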
Averaging polls
For districts that have polls for their congressional races, we construct a weighted average of those polls. We start by getting for each poll: the two-party Democratic vote prediction \((0 < p < 1)\), the number of days before the election that the poll was finished (\(t\)), and the sample size of the poll (\(n\)). The polls are weighted by their ages, based on the relative values of this function for the different ages of the polls:
$$f(t)=e^{-t/30}$$
Therefore, if there are \(m\) polls in a district, and the \(a\)th poll is from \(t_{a}\) days before the election, it will have weight \(w_{a}\):
$$w_{a}=\frac{e^{\frac{-t_{a}}{30}}}{\sum\limits_{i=1}^m e^{\frac{-t_{i}}{30}}}$$
This ensures that old polls are weighted less than newer polls, that the marginal penalty for being one day older decreases as the polls get older, and that the weights sum to \(1\). We then get the weighted average of the predictions by summing the products of each poll's predicted Democratic two-party vote proportion and that poll's weight. However, there is some variation in polls which we include as uncertainty in our model. One source of this variation is sampling error. Given a poll with Democratic vote share \(p\), Republican vote share \((1-p)=q\), and sample size \(n\), the average sampling error \(se\) of such a poll is given by:
$$se=\sqrt{\frac{pq}{n}}$$
For now, we will say that the standard deviation \(\sigma\) of each poll (i.e. the average error) is its sampling error.
However, there are other sources of variation that we are unable to empirically calculate. We divide the polls into four grades (A, B, C, and D; taken from 538) based on the quality of the pollster's methodology. For each grade of polls, we took polls from previous House elections (starting in 2012) and found the average distance around the line of best fit for actual election results vs. poll results (i.e. the line that best predicted the actual election result given a poll result). Let's call this average distance the past poll standard deviation for each grade.
We then find the average sampling error of all polls of each grade. For each grade, we then add ((past poll standard deviation) - (average sampling error)) to the standard deviation of each poll with that grade. Although the numbers will change slightly as we add more polls with slightly different average sampling errors, we found that the average sampling error is generally between \(0.02\) and \(0.025\) regardless of grade. Here are approximate standard deviation increases for each grade:
[Table: for each poll grade, the standard deviation around the regression line and the estimated standard deviation increase.]
Given two polls \(P_{1}\) and \(P_{2}\) with standard deviations \(\sigma_{1}\) and \(\sigma_{2}\) and weights \(w_{1}\) and \(w_{2}\), we can find the standard deviation of the weighted sum of \(P_{1}\) and \(P_{2}\) with:
$$\sigma_{w_{1}P_{1}+w_{2}P_{2}}=\sqrt{w_{1}^{2}\sigma_{1}^{2}+w_{2}^{2}\sigma_{2}^{2}}$$
This can be generalized to more than two polls to get one 'weighted average' standard deviation for the weighted average of the polls in a district.
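Putting the age weighting, the sampling error, and the combined standard deviation together, a minimal Python sketch (our own function names, not the model's code) might look like:

```python
import math

def sampling_error(p, n):
    """Average sampling error of a poll with Dem share p and sample size n."""
    return math.sqrt(p * (1 - p) / n)

def poll_average(polls):
    """Weighted mean and standard deviation of a district's polls.

    polls: list of (p, t, sigma) tuples -- two-party Democratic share p,
    age in days t, and total per-poll standard deviation sigma (sampling
    error plus the grade-based increase described above).
    """
    raw = [math.exp(-t / 30) for _, t, _ in polls]   # f(t) = e^(-t/30)
    total = sum(raw)
    weights = [r / total for r in raw]               # weights sum to 1
    mean = sum(w * p for w, (p, _, _) in zip(weights, polls))
    sd = math.sqrt(sum((w * s) ** 2 for w, (_, _, s) in zip(weights, polls)))
    return mean, sd
```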
Weighting Polls vs Bigmood
For districts with polls, we take a weighted average of the bigmood prediction and the aggregate poll prediction to get our final prediction. The weight (\(0 < weight < 1\)) of the polls depends on 1) the number of polls, 2) the grade of each poll, and 3) the age (in days before the election) of each poll. We first calculate the Grade Point Sum (GPS) of all of the polls. If a district has \(m\) polls, and the \(i\)th poll has grade \(g_{i}\) and is from \(t_{i}\) days before the election, the GPS is:
$$GPS=\sum\limits_{i=1}^m G(g_{i})e^{\frac{-t_{i}}{167}}$$
Where \(G(g_{i})\) is \(0.177\) if \(g_{i}\) is A, \(0.151\) if \(g_{i}\) is B, \(0.130\) if \(g_{i}\) is C, and \(0.077\) if \(g_{i}\) is D.
The weighting of the aggregate poll prediction (\(w_{p}\)) is then given by:
$$w_{p}=\frac{1.9}{\pi} \cdot \arctan(6.12 \cdot GPS)$$
And the weighting of the bigmood prediction is (\(1-w_{p}\)).
The constants of the arctan function were chosen so that it has an asymptote of \(0.95\) (a district with a very large GPS has a poll weight very close to \( 0.95\)) and so that a district with two B polls, each finished the day of the election, will have poll weight \(0.6\).
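In code, this weighting reduces to a few lines. As a check, the CA-25 example worked later in this document gives \(GPS \approx 0.372\) and a poll weight of about \(0.70\). A minimal sketch (function and constant names ours):

```python
import math

GRADE_POINTS = {'A': 0.177, 'B': 0.151, 'C': 0.130, 'D': 0.077}

def poll_vs_bigmood_weight(polls):
    """Weight w_p of the aggregate poll prediction for a district.

    polls: list of (grade, t) tuples, t = days before the election.
    """
    gps = sum(GRADE_POINTS[g] * math.exp(-t / 167) for g, t in polls)
    return (1.9 / math.pi) * math.atan(6.12 * gps)

# CA-25's six polls (see the example below):
print(poll_vs_bigmood_weight(
    [('D', 44), ('A', 48), ('C', 117), ('C', 138), ('B', 264), ('B', 282)]
))  # ~0.70
```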
Blairvoyance
Blairvoyance provides a hypothesized poll result for districts without polls by interpolating on the demographics of the districts that do have polls. First, Blairvoyance relates districts to one another using selected demographics for each district. Figure 1 is a simplified example in which three districts are plotted against two axes representing two demographics; the plot is effectively a representation of the relationships between districts.
Figure 1. Blairvoyance demographics mapping
The closer two districts are in Figure 1, the more similar they are by a metric formed from the demographic data. The red dots represent the districts with polling data. Blairvoyance then moves the points that represent districts with polls along another dimension, which represents the poll result. Figure 2 shows the mapping of Figure 1 with a third axis for the poll result added, and Figure 3 shows the points being moved along that third dimension.
Figure 2. Blairvoyance mapping in 3-space
Figure 3. Blairvoyance adding in poll results
Keep in mind that our model contains more than two demographics, so Blairvoyance works in more than three dimensions. Blairvoyance then fits a curve relating demographics and poll results. The blue line in Figure 4 represents a potential fitted curve Blairvoyance might create in this simplified case. Then, in order to create a hypothesized poll result for a district without polls, it uses the poll result of the closest point on the fitted curve, as can be seen in Figure 5.
Figure 4. Blairvoyance curve fitting
Figure 5. Blairvoyance poll hypothesis
Blairvoyance takes in polls and returns poll hypotheses. Typically, polls are only done in districts with close races. Therefore, Blairvoyance will tend to hypothesize closer results than actual polls would show, and the weight of Blairvoyance should decrease as partisanship increases. The weight of Blairvoyance should also be at most that of the best poll we can have, which is an A-graded poll taken one day before the election. Let the weight of one A-graded poll taken one day before the election be \(w_a\). Then the weight of Blairvoyance is \(w_a (1 - 2|bigmood - 0.5|)\). This satisfies both properties: it is at most \(w_{a}\), and it decreases as partisanship increases. If some hypothetical district were completely partisan, Blairvoyance would not count toward the prediction of that district at all.
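This rule fits in one line of code; a minimal sketch (the function name is ours):

```python
def blairvoyance_weight(w_a, bigmood):
    """Weight of a Blairvoyance poll hypothesis.

    w_a: weight of one A-grade poll taken the day before the election.
    bigmood: bigmood Democratic two-party share, between 0 and 1.
    Equals w_a at bigmood = 0.5 and falls to 0 in a fully partisan district.
    """
    return w_a * (1 - 2 * abs(bigmood - 0.5))
```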
Calculating AUSPICE
The Agglomeration Utilizing Statistical Predictions and Inquiries Concerning Elections (AUSPICE) is the final election forecast that the ORACLE of Blair gives each district every run. AUSPICE is a combination of the distribution provided by bigmood and the averaged polls (or Blairvoyance in the absence of polls), weighted as described above. Suppose the weight of bigmood is \(w_b\) and the weight of the averaged polls (or Blairvoyance) for a district is \(w_l\). Suppose also that bigmood gives a mean of \(\mu_b\) and a standard deviation of \(\sigma_b\) for the proportion of the two-party vote going to the Democratic nominee, and the polls give a mean of \(\mu_l\) and a standard deviation of \(\sigma_l\). The AUSPICE for that district will then have a mean of \(w_l \mu_l + w_b \mu_b\) and a standard deviation of \(\sqrt{(w_l \sigma_l)^2+(w_b \sigma_b)^2}\).
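A direct translation of this combination into Python (names ours), checked against the CA-25 numbers worked in the example below:

```python
import math

def auspice(mu_b, sigma_b, w_b, mu_l, sigma_l, w_l):
    """Combine the bigmood and poll/Blairvoyance distributions."""
    mean = w_l * mu_l + w_b * mu_b
    sd = math.sqrt((w_l * sigma_l) ** 2 + (w_b * sigma_b) ** 2)
    return mean, sd

# CA-25: bigmood 0.471 (sd 0.066, weight 0.30),
#        polls   0.504 (sd 0.067, weight 0.70)
print(auspice(0.471, 0.066, 0.30, 0.504, 0.067, 0.70))  # ~(0.494, 0.051)
```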
National Predictions
At this point, we have predicted vote shares and standard deviations of vote shares, so it might seem that calculating the overall chance that a party gets a majority of seats is rather simple. However, this is not the case.
Rather than attempting an exact calculation, which would be computationally excessive, we simulate the entire House election \(10,000,000\) times each time we run our model. The probability that each party wins a majority of seats in the House is then estimated as the number of simulations in which this happens divided by \(10,000,000\).
A naïve approach to use here would be to simulate each district separately, using the numbers we already have. However, this approach has the implicit assumption that the district vote shares are all independent, which is certainly not the case. For one, the systematic bias of polls can often be caused by the same factors from district to district. Also, since our model is not perfect, it may be consistently off in one direction for most districts. Thus, our simulation must introduce some correlation between districts.
The way we chose to implement this is, for each simulation, to choose a number \(s\) from a normal distribution with mean \(0\) and variance \(\sigma^2\) (we will explain where \(\sigma\) comes from later). Then, for every district \(d\) with mean vote share \(\mu_d\) and variance of the vote share \(\sigma_d^2\), we choose a number \(v_d\) from a normal distribution with mean \(\mu_d\) and variance \(\sigma_d^2 - \sigma^2\) and let the simulated vote share be \(v_d + s\). Since \(v_d\) and \(s\) are independent, this means that the distribution of \(v_d + s\) has mean \(\mu_d\) and variance \(\sigma_d^2\), matching our earlier computations. The important part is that \(s\) is the same for all districts, meaning that a portion of the variance of the districts is shared.
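A vectorized sketch of this simulation (assuming NumPy; the names are ours, and the default simulation count is kept small here, whereas the production model uses \(10,000,000\) runs):

```python
import numpy as np

def simulate_house(mu, sigma_d, sigma, n_sims=10_000, majority=218):
    """Estimate P(Democratic House majority) with a shared national shock.

    mu, sigma_d: length-435 arrays of district means and standard deviations
    of the Democratic two-party vote share. sigma: SD of the shared shock
    (must be smaller than every sigma_d).
    """
    rng = np.random.default_rng()
    # Independent district draws with the reduced variance sigma_d^2 - sigma^2,
    ind = rng.normal(mu, np.sqrt(sigma_d**2 - sigma**2), (n_sims, len(mu)))
    # plus one draw s per simulation that is added to every district.
    s = rng.normal(0.0, sigma, (n_sims, 1))
    dem_seats = ((ind + s) > 0.5).sum(axis=1)
    return (dem_seats >= majority).mean()
```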
It suffices to find a suitable value for \(\sigma\). There is no clear way to do this, but we decided on the following: the outcome of the House is very correlated with the House popular vote, so a good choice for \(\sigma\) is one such that the resulting distribution of the House popular vote matches our uncertainty about it. Since the individual district variation (on the order of \(\sigma_d \sim 5\%\)) is all independent, the contribution to the House popular vote from these sources of variation is on the order of \(5\%/\sqrt{435} \sim 0.25\%\), which is far lower than the actual uncertainty. So we should have \(\sigma\) be our uncertainty about the House popular vote.
There are many ways to forecast the House popular vote, but the generic ballot is close to the best one can do. Therefore, we use the historical standard deviation of the national generic ballot polls versus the national popular vote, which corresponds to \(\sigma \approx 1.38\%\).
Example: California's 25th District
Please note that in this example calculations may be shown with intermediate rounding. However, the actual computations used to produce results did not use intermediate rounding. This district is currently represented by Republican Steve Knight. He is being challenged by Democrat Katie Hill. This section was updated on October 26th.
Obama received \(49.1\%\) of the two-party vote in this district in 2012. As a consequence of California's jungle primary system, there was no Democrat on the 2014 House ballot. We therefore used the aggregate two-party Democratic performance in the jungle primary, which was \(32.8\%\); there was no incumbent running in 2014. Clinton received \(53.6\%\) of the two-party vote in this district in 2016. There was a Democratic candidate for the House seat in 2016 and he got \(46.9\%\) of the vote; there was a Republican incumbent. Since there was a Republican incumbent in 2016, we adjust the Democrat's vote share in 2016 to be \(51.0\%\).
This district thus has a BPI of \(0.133(49.1-50) + 0.278(32.8-50) + 0.244(53.6-50)+0.345(51.0-50) = -3.59\%\). Therefore, as there is a Republican incumbent running in 2018, this district has a SEER prediction of \(50\% + 0.5(2(-3.59\%) - 8.17\%) = 42.4\%\). Since there is an incumbent, we will use a standard deviation of \(0.066\) for the SEER prediction.
Bigmood
Using the SEER predictions and the procedure described above in the National Mood Shift section, our nationwide fundamental prediction with a 2014 turnout model is that Democrats would get \(49.5\%\) of the two-party vote. The current (at the time of writing) generic ballot average is that Democrats will get \(54.3\%\) of the two-party vote. This implies an average shift of \(+4.8\%\) to the Democratic two-party vote in each district to get the bigmood predictions. This number will change as more generic ballot polls are conducted, but it is suitable for this example. CA-25 has an elasticity of \(0.97\), so it has a bigmood prediction of \(42.4\% + 0.97 \cdot 4.8\% = 47.1\%\).
This district has six polls:

| Poll # | Age (days) | Grade | Dem two-party share |
|--------|------------|-------|---------------------|
| 1      | 44         | D     | 0.521               |
| 2      | 48         | A     | 0.489               |
| 3      | 117        | C     | 0.500               |
| 4      | 138        | C     | 0.421               |
| 5      | 264        | B     | 0.556               |
| 6      | 282        | B     | 0.570               |
The standard deviation and weight of each poll are:

| Poll # | Standard deviation | Poll weight function \(e^{-t/30}\) | Poll weight |
|--------|--------------------|------------------------------------|-------------|
| 1      | 0.125              | 0.231                              | 0.498       |
| 2      | 0.056              | 0.202                              | 0.436       |
| 3      | 0.079              | 0.020                              | 0.044       |
| 4      | 0.079              | 0.010                              | 0.022       |
| 5      | 0.074              | 0.000                              | 0.000       |
| 6      | 0.063              | 0.000                              | 0.000       |
These standard deviations include the sampling error and the grade-based standard deviation increase discussed above in the Averaging Polls section. The weighted average of the polls is approximately \(0.498 \cdot 0.521+0.436 \cdot 0.489+0.044 \cdot 0.500+0.022 \cdot 0.421+0.000 \cdot 0.556+0.000 \cdot 0.570 = 0.504 = 50.4\%\). The weighted average has a standard deviation of approximately \(\sqrt{0.498^{2} \cdot 0.125^{2} + 0.436^{2} \cdot 0.056^{2} + 0.044^{2} \cdot 0.079^{2} + 0.022^{2} \cdot 0.079^{2} + 0.000^{2} \cdot 0.074^{2} + 0.000^{2} \cdot 0.063^{2}} = 0.067\).
Poll Weighting
This district's polls have \(GPS = 0.077e^{\frac{-44}{167}}+0.177e^{\frac{-48}{167}}+0.130e^{\frac{-117}{167}}+0.130e^{\frac{-138}{167}}+0.151e^{\frac{-264}{167}}+0.151e^{\frac{-282}{167}} = 0.372\). Therefore, the polls in this district have weight \(\frac{1.9}{π} \cdot \arctan(6.12 \cdot 0.372) = 0.70\). The bigmood prediction consequently has weight \(1-0.70 = 0.30\).
Final Prediction
The final prediction for CA-25 is that Hill will get on average \(0.70 \cdot 50.4\% + 0.30 \cdot 47.1\% = 49.4\%\) of the two-party vote. This has a standard deviation of \(\sqrt{0.70^{2} \cdot 0.067^{2} + 0.30^{2} \cdot 0.066^{2}} = 0.051\). Therefore, Hill getting at least \(50\%\) of the two-party vote has a \(z\)-score of \(z = \frac{0.50-0.494}{0.051} = 0.11\). This implies a win probability of \(45.4\%\). In other words, if this particular general election were held a very large number of times, we would expect Hill to win \(45.4\%\) of the time and Knight to win \(54.6\%\) of the time.
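The last step, converting a mean and standard deviation into a win probability, is one line with a normal CDF; a quick check in Python:

```python
from statistics import NormalDist

p_hill = 1 - NormalDist(mu=0.494, sigma=0.051).cdf(0.50)
print(round(p_hill, 3))  # ~0.453, matching the ~45.4% above up to rounding
```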
BlairOracle
BlairOracle is a project by seniors at Montgomery Blair High School in Silver Spring, Maryland. It was created during the Fall 2018 Political Statistics course taught by Mr. David Stein. Questions for the students about the model can be sent to [email protected], while Mr. Stein can be reached directly through the Blair website.
Any views or opinions expressed on this site are those of the students in Montgomery Blair High School's 2018 Political Statistics class and do not necessarily reflect the official position of Montgomery Blair High School.
How To Calculate Rate Of Change Over Time
How to calculate the rate of change over time for a variable
Acceleration is the rate of change in speed, or the change in speed per unit of time; power is the rate of doing work, or the amount of energy transferred per unit time; frequency is the number of occurrences of a repeating event per unit of time. For a physical example of a changing quantity, consider a closed container holding a gas at a certain temperature (Tg) and pressure (pg). When the container is opened, the gas escapes through the opening to the atmosphere in order to create a pressure balance. There will be no significant increase in the pressure of the atmosphere, but the pressure of the gas in the container changes over time at some rate.
I have an input value that changes steadily (at a constant rate, either increasing or decreasing), and Splunk is capturing every value with a timestamp. I am trying to find a way to calculate the acceleration of this input, which is the rate of change over time. Ideally, I would like to trigger an alert if a threshold in this rate of change is exceeded. For related-rates problems, the general recipe is to differentiate the relating equation with respect to the independent variable (probably time), and solve for the unknown rate of change in terms of known quantities. Example 3: Assume that a hot air balloon is rising, and that the ascent of the balloon is completely vertical.
Calculate a compound rate of growth by using the "Rule of 72": divide 72 by the number of years it takes an investment to double in value, and that is the approximate compound rate of growth over the period of time. In a related-rates setting, to find the rate of change of a cone's volume as the height changes, solve the equation for the volume of a cone ($\frac{\pi r^2 h}{3}$) for $h$, and find the derivative, using the given radius. For the rate of change as the radius changes, the same idea applies.
Let's make a table for the information we have about the distance, rate, and time Karen travels when she is going both upstream and downstream. We'll call the time it takes to row downstream x, which means that the time it takes to row upstream is x + 4. We'll start by calculating Karen's rates going upstream and downstream; when she is traveling against the current, she won't be able to row 10 …

Separately: below, we'll solve an example problem in which you receive two salary increases over a 10-year period and calculate a compounded annual growth rate for your salary over that time.
The Volume rate of change indicator measures the rate of change in volume over the past "n" sessions. In other words, the VROC measures the current volume by comparing it to the volume "n" periods or sessions ago.
To calculate month-over-month growth for a single month, simply take the difference between this month's total number of users and last month's total number of users, then divide by last month's total.
Calculate the average rate of change by dividing the net difference between the beginning and final values by the duration of the time period. Taking an average of the annual rates of change is an incorrect measure of the average rate of change when the values represent unequal time periods.
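These definitions are easy to make concrete in code. Here is a minimal Python sketch (the function names and sample numbers are ours, not tied to Splunk or any other tool mentioned above):

```python
def average_rate_of_change(values, times):
    """Net change divided by elapsed time (e.g., units per second)."""
    return (values[-1] - values[0]) / (times[-1] - times[0])

def acceleration(values, times):
    """Rate of change of the rate of change, from three timestamped samples."""
    r1 = (values[1] - values[0]) / (times[1] - times[0])
    r2 = (values[2] - values[1]) / (times[2] - times[1])
    # r1 and r2 are centered on the midpoints of their intervals,
    # which are (times[2] - times[0]) / 2 apart.
    return (r2 - r1) / ((times[2] - times[0]) / 2)

print(average_rate_of_change([10, 18, 30], [0, 2, 4]))  # 5.0 units per second
print(acceleration([10, 18, 30], [0, 2, 4]))            # 1.0 unit/s^2
```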
Power times prior times benefit minus cost of experimentation: (0.20 \times 0.30 \times 540) - 41 = -9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).
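For readers who want to rerun this, the arithmetic fits in a couple of lines; a minimal Python sketch (the function name is mine):

```python
def value_of_information(power, prior, benefit, cost):
    """Expected value of experimenting: power x prior x benefit - cost."""
    return power * prior * benefit - cost

print(value_of_information(0.20, 0.30, 540, 41))  # -8.6, i.e. about -$9
print(value_of_information(0.40, 0.30, 540, 41))  # +23.8: worth running
```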
Spaced repetition at midnight: 3.68. (Graphing preceding and following days: ▅▄▆▆▁▅▆▃▆▄█ ▄ ▂▄▄▅) DNB starting 12:55 AM: 30/34/41. Transcribed Sawaragi 2005, then took a walk. DNB starting 6:45 AM: 45/44/33. Decided to take a nap and then take half the armodafinil on awakening, before breakfast. I wound up oversleeping until noon (4:28); since it was so late, I took only half the armodafinil sublingually. I spent the afternoon learning how to do value of information calculations, and then carefully working through 8 or 9 examples for my various pages, which I published on Lesswrong. That was a useful little project. DNB starting 12:09 AM: 30/38/48. (To graph the preceding day and this night: ▇▂█▆▅▃▃▇▇▇▁▂▄ ▅▅▁▁▃▆) Nights: 9:13; 7:24; 9:13; 8:20; 8:31.
If you want to focus on boosting your brain power, Lebowitz says you should primarily focus on improving your cardiovascular health, which is "the key to good thinking." For example, high blood pressure and cholesterol, which raise the risk of heart disease, can cause arteries to harden, which can decrease blood flow to the brain. The brain relies on blood to function normally.
"Cavin's enthusiasm and drive to help those who need it is unparalleled! He delivers the information in an easy to read manner, no PhD required from the reader. 🙂 Having lived through such trauma himself he has real empathy for other survivors and it shows in the writing. This is a great read for anyone who wants to increase the health of their brain, injury or otherwise! Read it!!!"
I've been actively benefitting from nootropics since 1997, when I was struggling with cognitive performance and ordered almost $1000 worth of smart drugs from Europe (the only place where you could get them at the time). I remember opening the unmarked brown package and wondering whether the pharmaceuticals and natural substances would really enhance my brain.
In this large population-based cohort, we saw consistent robust associations between cola consumption and low BMD in women. The consistency of pattern across cola types and after adjustment for potential confounding variables, including calcium intake, supports the likelihood that this is not due to displacement of milk or other healthy beverages in the diet. The major differences between cola and other carbonated beverages are caffeine, phosphoric acid, and cola extract. Although caffeine likely contributes to lower BMD, the result also observed for decaffeinated cola, the lack of difference in total caffeine intake across cola intake groups, and the lack of attenuation after adjustment for caffeine content suggest that caffeine does not explain these results. A deleterious effect of phosphoric acid has been proposed (26). Cola beverages contain phosphoric acid, whereas other carbonated soft drinks (with some exceptions) do not.
A quick search for drugs that make you smarter will lead you to the discovery of piracetam. Piracetam is the first synthetic smart drug of its kind. All other racetams derive from Piracetam. Some are far more potent, but they may also carry more side effects. Piracetam is an allosteric modulator of acetylcholine receptors. In other words, it enhances acetylcholine synthesis which boosts cognitive function.
Use of and/or registration on any portion of this site constitutes acceptance of our User Agreement (updated 5/25/18) and Privacy Policy and Cookie Statement (updated 5/25/18). Your California Privacy Rights. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
The Stroop task tests the ability to inhibit the overlearned process of reading by presenting color names in colored ink and instructing subjects to either read the word (low need for cognitive control because this is the habitual response to printed words) or name the ink color (high need for cognitive control). Barch and Carter (2005) administered this task to normal control subjects on placebo and d-AMP and found speeding of responses with the drug. However, the speeding was roughly equivalent for the conditions with low and high cognitive control demands, suggesting that the observed facilitation may not have been specific to cognitive control.
"You know how they say that we can only access 20% of our brain?" says the man who offers stressed-out writer Eddie Morra a fateful pill in the 2011 film Limitless. "Well, what this does, it lets you access all of it." Morra is instantly transformed into a superhuman by the fictitious drug NZT-48. Granted access to all cognitive areas, he learns to play the piano in three days, finishes writing his book in four, and swiftly makes himself a millionaire.
(As I was doing this, I reflected how modafinil is such a pure example of the money-time tradeoff. It's not that you pay someone else to do something for you, which necessarily they will do in a way different from you; nor is it that you have exchanged money to free yourself of a burden of some future time-investment; nor have you paid money for a speculative return of time later in life like with many medical expenses or supplements. Rather, you have paid for 8 hours today of your own time.)
Do you start your day with a cup (or two, or three) of coffee? It tastes delicious, but it's also jump-starting your brain because of its caffeine content. Caffeine is definitely a nootropic substance—it's a mild stimulant that can alleviate fatigue and improve concentration, according to the Mayo Clinic. Current research shows that coffee drinkers don't suffer any ill effects from drinking up to about four cups of coffee per day. Caffeine is also found in tea, soda, and energy drinks. Not too surprisingly, it's also in many of the nootropic supplements that are being marketed to people looking for a mental boost. Take a look at these 7 genius brain boosters to try in the morning.
Still, the scientific backing and ingredient sourcing of nootropics on the market varies widely, and even those based in some research won't necessarily immediately, always or ever translate to better grades or an ability to finally crank out that novel. Nor are supplements of any kind risk-free, says Jocelyn Kerl, a pharmacist in Madison, Wisconsin.
Over the last few months, as part of a new research project, I have talked with five people who regularly use drugs at work. They are all successful in their jobs, financially secure, in stable relationships, and generally content with their lives. None of them have plans to stop using the drugs, and so far they have kept the secret from their employers. But as their colleagues become more likely to start using the same drugs (people talk, after all), will they continue to do so?
From its online reputation and product presentation to our own product run, Synagen IQ smacks of mediocre performance. A complete list of ingredients could have been convincing and decent, but the lack of information paired with the potential for side effects are enough for beginners to old-timers in nootropic use to shy away and opt for more trusted and reputable brands. There is plenty that needs to be done to uplift the brand and improve its overall ranking in the widely competitive industry.
We hope you find our website to be a reliable and valuable resource in your search for the most effective brain enhancing supplements. In addition to product reviews, you will find information about how nootropics work to stimulate memory, focus, and increase concentration, as well as tips and techniques to help you experience the greatest benefit for your efforts.
Another empirical question concerns the effects of stimulants on motivation, which can affect academic and occupational performance independent of cognitive ability. Volkow and colleagues (2004) showed that MPH increased participants' self-rated interest in a relatively dull mathematical task. This is consistent with student reports that prescription stimulants make schoolwork seem more interesting (e.g., DeSantis et al., 2008). To what extent are the motivational effects of prescription stimulants distinct from their cognitive effects, and to what extent might they be more robust to differences in individual traits, dosage, and task? Are the motivational effects of stimulants responsible for their usefulness when taken by normal healthy individuals for cognitive enhancement?
I stayed up late writing some poems and about how [email protected] kills, and decided to make a night of it. I took the armodafinil at 1 AM; the interesting bit is that this was the morning/evening after what turned out to be an Adderall (as opposed to placebo) trial, so perhaps I will see how well or ill they go together. A set of normal scores from a previous day was 32%/43%/51%/48%. At 11 PM, I scored 39% on DNB; at 1 AM, I scored 50%/43%; 5:15 AM, 39%/37%; 4:10 PM, 42%/40%; 11 PM, 55%/21%/38%. (▂▄▆▅ vs ▃▅▄▃▃▄▃▇▁▃)
A 100mg dose of caffeine (half of a No-Doz or one cup of strong coffee) with 200mg of L-theanine is what the nootropics subreddit recommends in their beginner's FAQ, and many nootropic sellers, like Peak Nootropics, suggest the same. In my own experiments, I used a pre-packaged combination from Nootrobox called Go Cubes. They're essentially chewable coffee cubes (not as gross as it sounds) filled with that same beginner dose of caffeine, L-theanine, as well as a few B vitamins thrown into the mix. After eating an entire box of them (12 separate servings—not all at once), I can say eating them made me feel more alert and energetic, but less jittery than my usual three cups of coffee every day. I noticed enough of a difference in the past two weeks that I'll be looking into getting some L-theanine supplements to take with my daily coffee.
The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free but we'll call it an hour over the 250 days. Recording mood/productivity is also a free sunk cost as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: 5 + (>5 \times 7.25) = >41.
The question of whether stimulants are smart pills in a pragmatic sense cannot be answered solely by consideration of the statistical significance of the difference between stimulant and placebo. A drug with tiny effects, even if statistically significant, would not be a useful cognitive enhancer for most purposes. We therefore report Cohen's d effect size measure for published studies that provide either means and standard deviations or relevant F or t statistics (Thalheimer & Cook, 2002). More generally, with most sample sizes in the range of a dozen to a few dozen, small effects would not reliably be found.
All clear? Try one (not dozens) of nootropics for a few weeks and keep track of how you feel, Kerl suggests. It's also important to begin with as low a dose as possible; when Cyr didn't ease into his nootropic regimen, his digestion took the blow, he admits. If you don't notice improvements, consider nixing the product altogether and focusing on what is known to boost cognitive function – eating a healthy diet, getting enough sleep regularly and exercising. "Some of those lifestyle modifications," Kerl says, "may improve memory over a supplement."
These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds."
And in his followup work, An opportunity cost model of subjective effort and task performance (discussion), Kurzban seems to have successfully refuted the blood-glucose theory, with few dissenters among commenting researchers. The more recent opinion seems to be that the sugar interventions serve more as a reward-signal indicating more effort is a good idea, not as refueling the engine of the brain (which would seem to fit well with research on procrastination).
Smart drugs act within the brain speeding up chemical transfers, acting as neurotransmitters, or otherwise altering the exchange of brain chemicals. There are typically very few side effects, and they are considered generally safe when used as indicated. Special care should be used by those who have underlying health conditions, are on other medications, pregnant women, and children, as there is no long-term data on the use and effects of nootropics in these groups.
Caffeine (Examine.com; FDA adverse events) is of course the most famous stimulant around. But consuming 200mg or more a day, I have discovered the downside: it is addictive and has a nasty withdrawal - headaches, decreased motivation, apathy, and general unhappiness. (It's a little amusing to read academic descriptions of caffeine addiction; if caffeine were a new drug, I wonder what Schedule it would be in and if people might be even more leery of it than modafinil.) Further, in some ways, aside from the ubiquitous placebo effect, caffeine combines a mix of weak performance benefits (Lorist & Snel 2008, Nehlig 2010) with some possible decrements, anecdotally and scientifically:
We've talked about how caffeine affects the body in great detail, but the basic idea is that it can improve your motivation and focus by increasing catecholamine signaling. Its effects can be dampened over time, however, as you start to build a caffeine tolerance. Research on L-theanine, a common amino acid, suggests it promotes neuronal health and can decrease the incidence of cold and flu symptoms by strengthening the immune system. And one study, published in the journal Biological Psychology, found that L-theanine reduces psychological and physiological stress responses—which is why it's often taken with caffeine. In fact, in a 2014 systematic review of 11 different studies, published in the journal Nutrition Review, researchers found that use of caffeine in combination with L-theanine promoted alertness, task switching, and attention. The reviewers note the effects are most pronounced during the first two hours post-dose, and they also point out that caffeine is the major player here, since larger caffeine doses were found to have more of an effect than larger doses of L-theanine.
With something like creatine, you'd know if it helps you pump out another rep at the gym on a sustainable basis. With nootropics, you can easily trick yourself into believing they help your mindset. The ideal is to do a trial on yourself. Take identical looking nootropic pills and placebo pills for a couple weeks each, then see what the difference is. With only a third party knowing the difference, of course.
Nootropics, also known as 'brain boosters,' 'brain supplements' or 'cognitive enhancers' are made up of a variety of artificial and natural compounds. These compounds help in enhancing the cognitive activities of the brain by regulating or altering the production of neurochemicals and neurotransmitters in the brain. It improves blood flow, stimulates neurogenesis (the process by which neurons are produced in the body by neural stem cells), enhances nerve growth rate, modifies synapses, and improves cell membrane fluidity. Thus, positive changes are created within your body, which helps you to function optimally irrespective of your current lifestyle and individual needs.
However, when I didn't stack it with Choline, I would get what users call "racetam headaches." Choline, as Patel explains, is not a true nootropic, but it's still a pro-cognitive compound that many take with other nootropics in a stack. It's an essential nutrient that humans need for functions like memory and muscle control, but we can't produce it, and many Americans don't get enough of it. The headaches I got weren't terribly painful, but they were uncomfortable enough that I stopped taking Piracetam on its own. Even without the headache, though, I didn't really like the level of focus Piracetam gave me. I didn't feel present when I used it, even when I tried to mix in caffeine and L-theanine. And while it seemed like I could focus and do my work faster, I was making more small mistakes in my writing, like skipping words. Essentially, it felt like my brain was moving faster than I could.
Minnesota-based Medtronic offers a U.S. Food and Drug Administration (FDA)-cleared smart pill called PillCam COLON, which provides clear visualization of the colon and is complementary to colonoscopy. It is an alternative for patients who refuse invasive colon exams, have bleeding or sedation risks or inflammatory bowel disease, or have had a previous incomplete colonoscopy. PillCam COLON allows more people to get screened for colorectal cancer with a minimally invasive, radiation-free option. The research focus for WCEs is on effective localization, steering and control of capsules. Device development relies on leveraging applied science and technologies for better system performance, rather than completely reengineering the pill.
Not all drug users are searching for a chemical escape hatch. A newer and increasingly normalized drug culture is all about heightening one's current relationship to reality—whether at work or school—by boosting the brain's ability to think under stress, stay alert and productive for long hours, and keep track of large amounts of information. In the name of becoming sharper traders, medical interns, or coders, people are taking pills typically prescribed for conditions including ADHD, narcolepsy, and Alzheimer's. Others down "stacks" of special "nootropic" supplements.
Even though smart drugs come with a long list of benefits, their misuse can cause negative side effects. Excess use can cause anxiety, fear, headaches, increased blood pressure, and more. Considering this, it is imperative to study usage instructions: how often can you take the pill, the correct dosage and interaction with other medication/supplements.
A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes."
The original "smart drug" is piracetam, which was discovered by the Romanian scientist Corneliu Giurgea in the early 1960s. At the time, he was looking for a chemical that could sneak into the brain and make people feel sleepy. After months of testing, he came up with "Compound 6215". It was safe, it had very few side effects – and it didn't work. The drug didn't send anyone into a restful slumber and seemed to work in the opposite way to that intended.
The miniaturization of electronic components has been crucial to smart pill design. As cloud computing and wireless communication platforms are integrated into the health care system, the use of smart pills for monitoring vital signs and medication compliance is likely to increase. In the long term, smart pills are expected to be an integral component of remote patient monitoring and telemedicine. As the call for noninvasive point-of-care testing increases, smart pills will become mainstream devices.
Ginsenoside Rg1, a molecule found in the plant genus panax (ginseng), is being increasingly researched as an effect nootropic. Its cognitive benefits including increasing learning ability and memory acquisition, and accelerating neural development. It targets mainly the NMDA receptors and nitric oxide synthase, which both play important roles in personal and emotional intelligence. The authors of the study cited above, say that their research findings thus far have boosted their confidence in a "bright future of cognitive drug development."
And yet aside from anecdotal evidence, we know very little about the use of these drugs in professional settings. The Financial Times has claimed that they are "becoming popular among city lawyers, bankers, and other professionals keen to gain a competitive advantage over colleagues." Back in 2008 the narcolepsy medication Modafinil was labeled the "entrepreneur's drug of choice" by TechCrunch. That same year, the magazine Nature asked its readers whether they use cognitive-enhancing drugs; of the 1,400 respondents, one in five responded in the affirmative.
That first night, I had severe trouble sleeping, falling asleep in 30 minutes rather than my usual 19.6±11.9, waking up 12 times (5.9±3.4), and spending ~90 minutes awake (18.1±16.2), and naturally I felt unrested the next day; I initially assumed it was because I had left a fan on (moving air keeps me awake) but the new potassium is also a possible culprit. When I asked, Kevin said:
Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'.
A provisional conclusion about the effects of stimulants on learning is that they do help with the consolidation of declarative learning, with effect sizes varying widely from small to large depending on the task and individual study. Indeed, as a practical matter, stimulants may be more helpful than many of the laboratory tasks indicate, given the apparent dependence of enhancement on length of delay before testing. Although, as a matter of convenience, experimenters tend to test memory for learned material soon after the learning, this method has not generally demonstrated stimulant-enhanced learning. However, when longer periods intervene between learning and test, a more robust enhancement effect can be seen. Note that the persistence of the enhancement effect well past the time of drug action implies that state-dependent learning is not responsible. In general, long-term effects on learning are of greater practical value to people. Even students cramming for exams need to retain information for more than an hour or two. We therefore conclude that stimulant medication does enhance learning in ways that may be useful in the real world.
To judge from recent reports in the popular media, healthy people have also begun to use MPH and AMPs for cognitive enhancement. Major daily newspapers such as The New York Times, The LA Times, and The Wall Street Journal; magazines including Time, The Economist, The New Yorker, and Vogue; and broadcast news organizations including the BBC, CNN, and NPR have reported a trend toward growing use of prescription stimulants by healthy people for the purpose of enhancing school or work performance.
The evidence? A 2012 study in Greece found it can boost cognitive function in adults with mild cognitive impairment (MCI), a type of disorder marked by forgetfulness and problems with language, judgement, or planning that are more severe than average "senior moments," but are not serious enough to be diagnosed as dementia. In some people, MCI will progress into dementia.
Since the discovery of the effect of nootropics on memory and focus, the number of products on the market has increased exponentially. The ingredients used in a supplement can tell you about the effectiveness of the product. Brain enhancement pills that produce the greatest benefit are formulated with natural vitamins and substances, rather than caffeine and synthetic ingredients. In addition to better results, natural supplements are less likely to produce side effects, compared with drugs formulated with chemical ingredients.
Finally, all of the questions raised here in relation to MPH and d-AMP can also be asked about newer drugs and even about nonpharmacological methods of cognitive enhancement. An example of a newer drug with cognitive-enhancing potential is modafinil. Originally marketed as a therapy for narcolepsy, it is widely used off label for other purposes (Vastag, 2004), and a limited literature on its cognitive effects suggests some promise as a cognitive enhancer for normal healthy people (see Minzenberg & Carter, 2008, for a review).
Scientists found that the drug can disrupt the way memories are stored. This ability could be invaluable in treating trauma victims to prevent associated stress disorders. The research has also triggered suggestions that licensing these memory-blocking drugs may lead to healthy people using them to erase memories of awkward conversations, embarrassing blunders and any feelings for that devious ex-girlfriend.
Do note that this isn't an extensive list by any means, there are plenty more 'smart drugs' out there purported to help focus and concentration. Most (if not all) are restricted under the Psychoactive Substances Act, meaning they're largely illegal to sell. We strongly recommend against using these products off-label, as they can be dangerous both due to side effects and their lack of regulation on the grey/black market.
While the commentary makes effective arguments — that this isn't cheating, because cheating is based on what the rules are; that this is fair, because hiring a tutor isn't outlawed for being unfair to those who can't afford it; that this isn't unnatural, because humans with computers and antibiotics have been shaping what is natural for millennia; that this isn't drug abuse anymore than taking multivitamins is — the authors seem divorced from reality in the examples they provide of effective stimulant use today.
That said, there are plenty of studies out there that point to its benefits. One study, published in the British Journal of Pharmacology, suggests brain function in elderly patients can be greatly improved after regular dosing with Piracetam. Another study, published in the journal Psychopharmacology, found that Piracetam improved memory in most adult volunteers. And another, published in the Journal of Clinical Psychopharmacology, suggests it can help students, especially dyslexic students, improve their nonverbal learning skills, like reading ability and reading comprehension. Basically, researchers know it has an effect, but they don't know what or how, and pinning it down requires additional research.
I took the pill at 11 PM the evening of (technically, the day before); that day I was a little lower on sleep than usual, since I had woken up an hour or half-hour early. I didn't yawn at all during the movie (merely mediocre to my eyes, with some questionable parts). It worked much the same as it did the previous time - as I walked around at 5 AM or so, I felt perfectly alert. I made good use of the hours and wrote up my memories of ICON 2011.
I started with the 10g of Vitality Enhanced Blend, a sort of tan dust. Used 2 little-spoonfuls (dust tastes a fair bit like green/oolong tea dust) into the tea mug and then some boiling water. A minute of steeping and… bleh. Tastes sort of musty and sour. (I see why people recommended sweetening it with honey.) The effects? While I might've been more motivated - I hadn't had caffeine that day and was a tad under the weather, a feeling which seemed to go away perhaps half an hour after starting - I can't say I experienced any nausea or very noticeable effects. (At least the flavor is no longer quite so offensive.)
Nootropics are a great way to boost your productivity. Nootropics have been around for more than 40 years and today they are entering the mainstream. If you want to become the best you, nootropics are a way to level up your life. Nootropics are always personal and what works for others might not work for you. But no matter the individual outcomes, nootropics are here to make an impact!
(I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I'm shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.)
The therapeutic effect of AMP and MPH in ADHD is consistent with the finding of abnormalities in the catecholamine system in individuals with ADHD (e.g., Volkow et al., 2007). Both AMP and MPH exert their effects on cognition primarily by increasing levels of catecholamines in prefrontal cortex and the cortical and subcortical regions projecting to it, and this mechanism is responsible for improving cognition and behavior in ADHD (Pliszka, 2005; Wilens, 2006).
Endoscopy surgeries, being minimally invasive, have become more popular in recent times. Latest studies show that there is an increasing demand for single incision or small incision type of surgery as an alternative to traditional surgeries. As aging patients are susceptible to complications, the usage of minimally invasive procedures is of utmost importance and the need of the hour. There are unexplained situations of bleeding, iron deficiency, abdominal pain, search for polyps, ulcers, and tumors of the small intestine, and inflammatory bowel disease, such as Crohn's disease, where capsule endoscopy diagnoses fare better than traditional endoscopy. Also, as capsule endoscopy is less invasive or non-invasive, as compared to traditional endoscopy, patients are increasingly preferring the usage of capsule endoscopy as it does not require any recovery time, which is driving the smart pill market.
But while some studies have found short-term benefits, Doraiswamy says there is no evidence that what are commonly known as smart drugs — of any type — improve thinking or productivity over the long run. "There's a sizable demand, but the hype around efficacy far exceeds available evidence," notes Doraiswamy, adding that, for healthy young people such as Silicon Valley go-getters, "it's a zero-sum game. That's because when you up one circuit in the brain, you're probably impairing another system."
Weyandt et al. (2009): large public university undergraduates (N = 390); prevalence 7.5% (past 30 days). The highest-rated reasons were to perform better on schoolwork, perform better on tests, and focus better in class; 21.2% had occasionally been offered the drugs by other students, 9.8% had occasionally or frequently purchased them from other students, and 1.4% had sold them to other students.
On 8 April 2011, I purchased from Smart Powders (20g for $8); as before, some light searching seemed to turn up SP as the best seller given shipping overhead; it was on sale and I planned to cap it so I got 80g. This may seem like a lot, but I was highly confident that theanine and I would get along since I already drink so much tea and was a tad annoyed at the edge I got with straight caffeine. So far I'm pretty happy with it. My goal was to eliminate the physical & mental twitchiness of caffeine, which subjectively it seems to do.
Those who have taken them swear they do work – though not in the way you might think. Back in 2015, a review of the evidence found that their impact on intelligence is "modest". But most people don't take them to improve their mental abilities. Instead, they take them to improve their mental energy and motivation to work. (Both drugs also come with serious risks and side effects – more on those later).
"Piracetam is not a vitamin, mineral, amino acid, herb or other botanical, or dietary substance for use by man to supplement the diet by increasing the total dietary intake. Further, piracetam is not a concentrate, metabolite, constituent, extract or combination of any such dietary ingredient. [...] Accordingly, these products are drugs, under section 201(g)(1)(C) of the Act, 21 U.S.C. § 321(g)(1)(C), because they are not foods and they are intended to affect the structure or any function of the body. Moreover, these products are new drugs as defined by section 201(p) of the Act, 21 U.S.C. § 321(p), because they are not generally recognized as safe and effective for use under the conditions prescribed, recommended, or suggested in their labeling."[33] | CommonCrawl |
Present Value of an Annuity
By Julia Kagan
Reviewed By Roger Wohlner
What Is the Present Value of an Annuity?
The present value of an annuity is the current value of future payments from an annuity, given a specified rate of return, or discount rate. The higher the discount rate, the lower the present value of the annuity.
The present value of an annuity refers to how much money would be needed today to fund a series of future annuity payments.
Because of the time value of money, a sum of money received today is worth more than the same sum at a future date.
You can use a present value calculation to determine whether you'll receive more money by taking a lump sum now or an annuity spread out over a number of years.
Understanding the Present Value of an Annuity
Because of the time value of money, money received today is worth more than the same amount of money in the future because it can be invested in the meantime. By the same logic, $5,000 received today is worth more than the same amount spread over five annual installments of $1,000 each.
The future value of money is calculated using a discount rate. The discount rate refers to an interest rate or an assumed rate of return on other investments over the same duration as the payments. The smallest discount rate used in these calculations is the risk-free rate of return. U.S. Treasury bonds are generally considered to be the closest thing to a risk-free investment, so their return is often used for this purpose.
Example of the Present Value of an Annuity
The formula for the present value of an ordinary annuity, as opposed to an annuity due, is below. (An ordinary annuity pays interest at the end of a particular period, rather than at the beginning, as is the case with an annuity due.)
P = \mathrm{PMT} \times \frac{1 - \frac{1}{(1+r)^n}}{r}

where:
P = Present value of an annuity stream
PMT = Dollar amount of each annuity payment
r = Interest rate (also known as discount rate)
n = Number of periods in which payments will be made
Assume a person has the opportunity to receive an ordinary annuity that pays $50,000 per year for the next 25 years, with a 6% discount rate, or take a $650,000 lump-sum payment. Which is the better option? Using the above formula, the present value of the annuity is:
\text{Present value} = \$50,000 \times \frac{1 - \frac{1}{(1+0.06)^{25}}}{0.06} = \$639,168
Given this information, the annuity is worth $10,832 less on a time-adjusted basis, so the person would come out ahead by choosing the lump-sum payment over the annuity.
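As a quick numerical check, the same calculation can be scripted; this is a minimal Python sketch (the helper name pv_ordinary_annuity is ours, not from any particular finance library):

```python
def pv_ordinary_annuity(pmt, r, n):
    """Present value of an ordinary annuity (payments at the end of each period)."""
    return pmt * (1 - 1 / (1 + r) ** n) / r

pv = pv_ordinary_annuity(pmt=50_000, r=0.06, n=25)
print(round(pv))  # 639168 -- less than the $650,000 lump sum
```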
An ordinary annuity makes payments at the end of each time period, while an annuity due makes them at the beginning. All else being equal, the annuity due will be worth more in the present.
With an annuity due, in which payments are made at the beginning of each period, the formula is slightly different. To find the value of an annuity due, simply multiply the above formula by a factor of (1 + r):
P = \mathrm{PMT} \times \frac{1 - \frac{1}{(1+r)^n}}{r} \times (1 + r)
So, if the example above referred to an annuity due, rather than an ordinary annuity, its value would be as follows:
\text{Present value} = \$50,000 \times \frac{1 - \frac{1}{(1+0.06)^{25}}}{0.06} \times (1 + 0.06) = \$677,518
In this case, the person should choose the annuity due option because it is worth $27,518 more than the $650,000 lump sum.
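In code, the annuity-due case needs only the extra (1 + r) factor; reusing the helper sketched above:

```python
pv_due = pv_ordinary_annuity(pmt=50_000, r=0.06, n=25) * (1 + 0.06)
print(round(pv_due))  # 677518 -- now worth more than the $650,000 lump sum
```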
EE5904/ME5404 | Neural Networks | Homework #3
By admin, April 19, 2021
The following is a brief analysis of the problems in EE5904/ME5404 Neural Networks Homework #3.
Important note: the due date is 22/03/2021. You should submit your scripts to the folder in LumiNus. Late submission is not allowed unless it is well justified. Please include the MATLAB or Python code as an attachment if a computer experiment is involved.
Please note that the MATLAB toolboxes for RBFN and SOM are not well developed. Please write your own code to implement RBFN and SOM instead of using the MATLAB toolbox.
EE5904/ME5404 Neural Networks Homework #3, Problem 1
Problem 1.
Q1. Function Approximation with RBFN (10 Marks) Consider using RBFN to approximate the following function:
y=1.2 \sin (\pi x)-\cos (2.4 \pi x), \quad \text { for } x \in[-1,1]
The training set is constructed by dividing the range [-1,1] using a uniform step length of 0.05, while the test set is constructed by dividing the range [-1,1] using a uniform step length of 0.01. Assume that the observed outputs in the training set are corrupted by random noise as follows.
y(i)=1.2 \sin (\pi x(i))-\cos (2.4 \pi x(i))+0.3 n(i)
where the random noise $n(i)$ is Gaussian noise with zero mean and a standard deviation of one, which can be generated by the MATLAB command randn. Note that the test set is not corrupted by noise. Perform the following computer experiments:
a) Use the exact interpolation method (as described on pages 16-21 in the slides of lecture five) and determine the weights of the RBFN. Assume the RBF is a Gaussian function with a standard deviation of 0.1. Evaluate the approximation performance of the resulting RBFN using the test set. (3 Marks)
b) Follow the strategy of "Fixed Centers Selected at Random" (as described on page 37 in the slides of lecture five): randomly select 20 centers among the sampling points. Determine the weights of the RBFN. Evaluate the approximation performance of the resulting RBFN using the test set. Compare it to the result of part a). (4 Marks)
c) Use the same centers and widths as those determined in part a) and apply the regularization method as described on pages 42-45 in the slides for lecture five. Vary the value of the regularization factor and study its effect on the performance of the RBFN. (3 Marks)
Proof.
This is a simple exercise in implementing an RBFN in MATLAB.
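Although the assignment allows either MATLAB or Python, a compact Python/NumPy sketch of part a) is given below. It assumes the Gaussian RBF has the form exp(-d^2 / (2*sigma^2)) with sigma = 0.1; some courses omit the factor of 2, so adjust if your lecture slides define it differently:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.arange(-1, 1.0001, 0.05)   # training inputs, uniform step 0.05
x_test = np.arange(-1, 1.0001, 0.01)    # test inputs, uniform step 0.01

def f(x):
    return 1.2 * np.sin(np.pi * x) - np.cos(2.4 * np.pi * x)

y_train = f(x_train) + 0.3 * rng.standard_normal(x_train.size)  # noisy observations
sigma = 0.1

def phi(x, centers):
    d = x[:, None] - centers[None, :]        # pairwise input-center distances
    return np.exp(-d ** 2 / (2 * sigma ** 2))

# Exact interpolation: every training point is a center, so the matrix is square
w = np.linalg.solve(phi(x_train, x_train), y_train)
y_hat = phi(x_test, x_train) @ w
print("test MSE:", np.mean((y_hat - f(x_test)) ** 2))
```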
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers $\mathbf{x} \in \mathbb{R}^{n}$. The output of the network is then a scalar function of the input vector, $\varphi: \mathbb{R}^{n} \rightarrow \mathbb{R},$ and is given by
\varphi(\mathbf{x})=\sum_{i=1}^{N} a_{i} \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)
where $N$ is the number of neurons in the hidden layer, $\mathbf{c}_{i}$ is the center vector for neuron $i$, and $a_{i}$ is the weight of neuron $i$ in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition [4][5]) and the radial basis function is commonly taken to be Gaussian:

\rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)=\exp \left[-\beta\left\|\mathbf{x}-\mathbf{c}_{i}\right\|^{2}\right]
The Gaussian basis functions are local to the center vector in the sense that
\lim _{\|x\| \rightarrow \infty} \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)=0
i.e. changing the parameters of one neuron has only a small effect for input values that are far away from the center of that neuron. Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of $\mathbb{R}^{n}$ [6]. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.
The parameters $a_{i}, \mathbf{c}_{i}$, and $\beta_{i}$ are determined in a manner that optimizes the fit between $\varphi$ and the data.
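Parts b) and c) of the homework correspond to two standard ways of determining those parameters. Continuing the sketch above (reusing rng, phi, x_train and y_train), and treating the regularization factor lam as a free knob to experiment with:

```python
# Part b): "fixed centers selected at random" -- 20 centers, least-squares weights
centers = x_train[rng.choice(x_train.size, size=20, replace=False)]
w20, *_ = np.linalg.lstsq(phi(x_train, centers), y_train, rcond=None)

# Part c): regularized (ridge) solution with all training points as centers
lam = 0.1
G = phi(x_train, x_train)
w_reg = np.linalg.solve(G.T @ G + lam * np.eye(x_train.size), G.T @ y_train)
```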
Theoretical motivation for normalization
There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density
$P(\mathbf{x} \wedge y)=\frac{1}{N} \sum_{i=1}^{N} \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right) \sigma\left(\left|y-e_{i}\right|\right)$
where the weights $\mathbf{c}_{i}$ and $e_{i}$ are exemplars from the data and we require the kernels to be normalized
\int \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right) d^{n} \mathbf{x}=1
\int \sigma\left(\left|y-e_{i}\right|\right) d y=1
The probability densities in the input and output spaces are
P(\mathbf{x})=\int P(\mathbf{x} \wedge y) d y=\frac{1}{N} \sum_{i=1}^{N} \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)
The expectation of y given an input $\mathbf{x}$ is
\varphi(\mathbf{x}) \stackrel{\text { def }}{=} E(y \mid \mathbf{x})=\int y P(y \mid \mathbf{x}) d y

where $P(y \mid \mathbf{x})$ is the conditional probability of $y$ given $\mathbf{x}$. The conditional probability is related to the joint probability through Bayes theorem: $P(y \mid \mathbf{x})=\frac{P(\mathbf{x} \wedge y)}{P(\mathbf{x})}$
which yields
\varphi(\mathbf{x})=\int y \frac{P(\mathbf{x} \wedge y)}{P(\mathbf{x})} d y
This becomes
\varphi(\mathbf{x})=\frac{\sum_{i=1}^{N} e_{i} \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)}{\sum_{i=1}^{N} \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)}=\sum_{i=1}^{N} e_{i} u\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)
when the integrations are performed.
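In code, the normalized architecture differs from the unnormalized one only by a row-wise normalization of the kernel activations; a sketch reusing phi from the homework example above, where e holds the exemplar outputs e_i:

```python
def normalized_rbf(x, centers, e):
    rho = phi(x, centers)                      # kernel activations
    u = rho / rho.sum(axis=1, keepdims=True)   # normalize across centers
    return u @ e                               # weighted sum of exemplars
```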
Local linear models
It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,
\varphi(\mathbf{x})=\sum_{i=1}^{N}\left(a_{i}+\mathbf{b}_{i} \cdot\left(\mathbf{x}-\mathbf{c}_{i}\right)\right) \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)
\varphi(\mathbf{x})=\sum_{i=1}^{N}\left(a_{i}+\mathbf{b}_{i} \cdot\left(\mathbf{x}-\mathbf{c}_{i}\right)\right) u\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right)
in the unnormalized and normalized cases, respectively. Here $\mathbf{b}_{i}$ are weights to be determined. Higher order linear terms are also possible.
This result can be written
\varphi(\mathbf{x})=\sum_{i=1}^{2 N} \sum_{j=1}^{n} e_{i j} v_{i j}\left(\mathbf{x}-\mathbf{c}_{i}\right)
e_{i j}=\left\{\begin{array}{ll}
a_{i}, & \text { if } i \in[1, N] \\
b_{i j}, & \text { if } i \in[N+1,2 N]
\end{array}\right.
v_{i j}\left(\mathbf{x}-\mathbf{c}_{i}\right) \stackrel{\text { def }}{=}\left\{\begin{array}{ll}
\delta_{i j} \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right), & \text { if } i \in[1, N] \\
\left(x_{i j}-c_{i j}\right) \rho\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right), & \text { if } i \in[N+1,2 N]
\end{array}\right.
in the unnormalized case and
v_{i j}\left(\mathbf{x}-\mathbf{c}_{i}\right) \stackrel{\text { def }}{=}\left\{\begin{array}{ll}
\delta_{i j} u\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right), & \text { if } i \in[1, N] \\
\left(x_{i j}-c_{i j}\right) u\left(\left\|\mathbf{x}-\mathbf{c}_{i}\right\|\right), & \text { if } i \in[N+1,2 N]
\end{array}\right.
in the normalized case.
Here $\delta_{i j}$ is a Kronecker delta function defined as
\delta_{i j}=\left\{\begin{array}{ll}
1, & \text { if } i=j \\
0, & \text { if } i \neq j
\end{array}\right.
Research article | Open | Published: 26 June 2015
A 3D approach to reconstruct continuous optical images using lidar and MODIS
HuaGuo Huang & Jun Lian
Monitoring forest health and biomass for changes over time in the global environment requires the provision of continuous satellite images. However, optical images of land surfaces are generally contaminated when clouds are present or rain occurs.
To estimate the actual reflectance of land surfaces masked by clouds and potential rain, 3D simulations by the RAPID radiative transfer model were proposed and conducted on a forest farm dominated by birch and larch in Genhe City, DaXing'AnLing Mountain in Inner Mongolia, China. The canopy height model (CHM) from lidar data were used to extract individual tree structures (location, height, crown width). Field measurements related tree height to diameter of breast height (DBH), lowest branch height and leaf area index (LAI). Series of Landsat images were used to classify tree species and land cover. MODIS LAI products were used to estimate the LAI of individual trees. Combining all these input variables to drive RAPID, high-resolution optical remote sensing images were simulated and validated with available satellite images.
Evaluations on spatial texture, spectral values and directional reflectance were conducted to show comparable results.
The study provides a proof-of-concept approach to link lidar and MODIS data in the parameterization of RAPID models for high temporal and spatial resolutions of image reconstruction in forest dominated areas.
Optical remote sensing images have been widely used in monitoring forest ecosystems. Spatial, temporal and spectral resolutions are the three key indicators to be considered in most applications. Spatial resolution has improved from a scale of hundreds of meters (e.g. Landsat 8) to one of a half-meter (e.g. GeoEye-1 or Worldview-2) with only a slight increase in the number of spectral bands. However, in forested areas, temporal resolution is generally reduced by frequent rain or cloud cover, which prevents users from continuously acquiring clear optical remote sensing images.
Temporally continuous satellite images are important for forest monitoring (Lunetta et al. 2004; Masek et al. 2008; Nitze et al. 2015) since forest reflectance varies with seasonality (Kobayashi et al. 2007; Xu et al. 2013). A few studies have been conducted to interpolate those contaminated images by rains or clouds. For example, Landsat images have been blended with MODerate-resolution Imaging Spectroradiometer (MODIS) data to create spatial and temporal fusion data (Gao et al. 2006; Hilker et al. 2009; Wu et al. 2012). Further, radiative transfer models have also been used to simulate a series of high temporal resolution images for future space earth observation missions (Inglada et al. 2011). However, spatial resolution is generally moderate due to the use of simple homogeneous radiative transfer models, which are not able to deal with high resolution simulation with diverse tree species and mountain shadows.
In recent years, light detection and ranging (lidar) has been a widely used tool for forest studies (Adams et al. 2012; Arno et al. 2013; Montesano et al. 2013). The greatest advantage of lidar is to provide direct measurements of very detailed 3D forest structures, so it can be used to reconstruct 3D trees to support the simulation of radar remote sensing signals (Lucas et al. 2006) and to study how 3D structures affect the quality of optical images (Barbier et al. 2011). By coupling lidar with high temporal data such as MODIS using 3D radiative transfer models, it will be possible to generate both high spatial and temporal resolution optical remote sensing images. However, very few studies were found using this approach. Therefore, we will test the possibility of that approach to simulate high-resolution optical satellite images on an arbitrary day of the growing season in a forested area.
Study site
The study site (Fig. 1), located in a 100 ha forested area (50°54′ N, 121°54′ E) of the Genhe Forestry Reserve, DaXing'AnLing Mountain in Inner Mongolia, China, belongs to a boreal moist and cold temperate forest, with an elevation ranging from 784 to 1142 m. Annual average precipitation is 450 to 550 mm, with sixty per cent falling in July and August. Annual average sunshine is 2594 h with a frost-free period of 80 days. Our study site occupied 75 % of the total area. The forest is mainly composed of Dahurian Larch (Larix gmelinii) and White Birch (Betula platyphylla Suk.). The understory vegetation of the larch forest is a single layer of evergreen shrubs (normally Ledum palustre L. or Rhododendron dauricum L.). L. palustre is generally a low shrub (less than 0.3 m), while the height of R. dauricum is around 1.5 m. Blueberries (Semen trigonellae) are widely distributed. The birch forest has an understory of grass or deciduous shrubs, such as Rosa acicularis, Spiraea sericea Turcz., or Rubus L.
Location of the Genhe study site in DaXing'AnLing Mountain, Inner Mongolia, China
The growing season typically begins in early May and senescence occurs in late September. In the summer of 2013, 18 field plots (45 m by 45 m) were established representing different combinations of forest types, density and leaf area index (LAI) (Table 1). The LAI, ranging from 1.44 to 3.51 m2∙m−2, was measured using LAI-2000 (LICOR Inc.) hemispheric data. The forest cover varies between 0.21 and 0.86.
Table 1 Plot variables at the study site
Based on inventory data of individual tree structures in plots L1 to L9, the DBH and crown length (L) of trees were regressed on tree height (H), where heights were derived by lidar. For convenience, both crowns of larch and birch are defined as spherical in shape.
Reflectance or transmittance spectra of leaves, branches and stems of birch and larch trees were measured in the field using the integrating sphere of ASD (Analytical Spectral Device, http://www.asdi.com/). Dry and wet soil spectra were the default soil spectra in a PROSAIL model (Jacquemoud et al. 2009). The re-sampled spectral curves are shown in Fig. 2.
Component reflectance in 18 bands
Airborne data
Small footprint full-waveform lidar data were acquired from August 16 to September 25 in 2012 (Mu et al. 2015). The system consisted of a Leica ALS60 with an integrated Leica RCD105 camera. The CCD camera produced natural color mosaic images with 0.2 m resolution. As for the lidar, the mean swath width was 1 km at a flying altitude of approximately 2700 m (over rough terrain). The scan angle was less than 35°. Waveforms were digitized with a frequency of 100 to 200 kHz. An average of eight reflected pulses per m2 was obtained over the sample plots. Point clouds were first classified by the TerraScan software (see www.terrasolid.com) to separate the ground points from other points. We used Delaunay triangulation and bilinear interpolation to generate a digital elevation model (DEM) from the ground returns. A DSM (digital surface model) was created using the maximum value within a 0.5 m window. The CHM (canopy height model) was calculated as the difference between the DSM and the DEM (Fig. 3).
Lidar derived CHM and DEM (1 km): a CHM, gray color representing tree height 0–30 m; b DEM, gray color representing elevation 770–895 m
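The CHM step itself is a per-pixel difference of the two rasters. A minimal NumPy sketch, assuming the DSM and DEM have already been gridded to the same extent and resolution (clipping negative values to zero is our choice, to suppress interpolation noise, and is not stated in the text):

```python
import numpy as np

def canopy_height_model(dsm, dem):
    """CHM = DSM - DEM, with small negative artifacts clipped to zero."""
    return np.clip(dsm - dem, 0.0, None)
```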
Individual tree crowns were segmented from the CHM using the "TreeVaW" tool (Popescu and Wynne 2004), which uses a circular window filter to segment trees and produces the location of each tree (x_i, y_i), its height (H_i) and crown radius (R_i).
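Conceptually, the circular-window filtering amounts to keeping CHM pixels that are the maximum of their neighborhood. The sketch below is a simplified fixed-window stand-in for TreeVaW (which actually varies the window size with tree height); the window size and the 2 m height cutoff are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, window=5, min_height=2.0):
    """Return (row, col, height) of CHM local maxima above a height cutoff."""
    local_max = chm == maximum_filter(chm, size=window)
    rows, cols = np.nonzero(local_max & (chm > min_height))
    return rows, cols, chm[rows, cols]
```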
Optical satellite images
Several scales of geo-referenced satellite images were used, including SPOT-6 (1 m), Landsat (30 m) and MODIS (250 m). Due to frequent cloud cover, SPOT and Landsat were not able to capture clear land surface images during rainy days, which happened mostly in the growing season (May to September). There was only one cloud-free SPOT-6 image, obtained on October 10, 2013. For the same reason, only three Landsat images were clear (May 5, 2013; Sep 9, 2013; Sep 29, 2013) in 2013. As well, we collected a Landsat 8 image on May 24, 2014.
A Gram-Schmidt Spectral Sharpening image fusion technique in ENVI 5.1 (ITT Exelis) was applied to produce pan-sharpened Landsat 7 or 8 multi-spectral images with a resolution of 15 m. This pan-sharpening method was selected because it preserves the original spectral information of the image and can be simultaneously applied to multispectral bands. The Landsat image pixel values (in digital numbers) were converted to top-of-atmosphere (TOA) spectral radiance, which was further converted to land surface reflectance using the Fast Line-of-sight Atmospheric Analysis of Hypercubes (FLAASH) atmospheric correction model with the atmospheric visibility parameter estimated from the MODIS aerosol product.
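For reference, the Landsat 8 DN-to-TOA-reflectance step follows the standard USGS convention: per-band multiplicative and additive rescaling coefficients and the sun elevation are read from the scene's MTL metadata file. A sketch (the coefficient values are scene-specific and must come from the metadata):

```python
import numpy as np

def toa_reflectance(dn, mult, add, sun_elevation_deg):
    """DN -> TOA reflectance using REFLECTANCE_MULT/ADD_BAND_x from the MTL file."""
    rho = mult * dn.astype(np.float64) + add             # TOA reflectance, uncorrected
    return rho / np.sin(np.radians(sun_elevation_deg))   # sun-angle correction
```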
MODIS 16-day 250 m NDVI images and the 500 m LAI products from Jan 1 to Dec 31, 2013 were downloaded. Because a maximum value filtering method was used, the NDVI and LAI products had significantly fewer cloud cover problems. The NDVI data were used to determine the phenology of the boreal forest, including birch, larch and understory, which allowed interpolation of Landsat images from clear to contaminated days. MODIS LAI products were utilized to determine the leaf area of each tree. Despite its low resolution, this is the only continuous global leaf area product, but with acceptable accuracy (Ahl et al. 2006).
MODIS Bidirectional Reflectance Factor (BRF) products in May of 2013 were collected for validation. The BRF curves were reconstructed from the kernel coefficients using the Algorithm for Model Bidirectional Reflectance Anisotropies of the Land Surface (AMBRALS) (Wanner et al. 1995; Huang et al. 2013b; Sharma et al. 2013).
RAPID model
RAPID is a 3D radiative transfer model, able to simulate reflectance images over complex 3D natural scenes at large scales (30 to 1000 m) with great efficiency (Huang et al. 2013a), implying that RAPID can simulate images at MODIS pixel scales (250 to 1000 m). The main input parameters of RAPID consist of 3D structures of the ground, trees, buildings and rivers, as well as reflectance and transmittance of leaves, branches, walls, water bodies and roads under a few sun and sensor angles. The main outputs are BRF curves and land surface reflectance images with defined spatial resolution (default 0.5 m).
Simulation framework
Figure 4 shows the 3D simulation framework, with integrated parameters extracted from lidar data, field plots data, Landsat images and MODIS images managed into the RAPID model to simulate optical images of a virtual sensor with several view angles, 18 spectral bands and a half-meter spatial resolution.
Simulation framework to generate time series of optical images
The sensor is an advanced version of the Compact High Resolution Imaging Spectrometer (CHRIS/PROBA). CHRIS is the only multi-angular sensor launched with both high spatial (17 m) and spectral resolution (20–40 nm) (Rautiainen et al. 2008; García Millán et al. 2014). For any selected target, five images with different viewing angles (−55°, −36°, 0°, 36° and 55°) were made within a short span of 2.5 min. The virtual sensor was placed above the canopy under clear sky conditions.
A large number of input parameters needed to be set in order to simulate seasonal variation. A few parameters, such as LAI and soil moisture, vary considerably over the growing season, while other parameters remain relatively stable. Given our relatively limited data source, we defined five basic assumptions to reduce the number of unknowns:
(1) The DTM (digital terrain model) remained unchanged, a reasonable assumption for forested areas;
(2) Tree crowns were ellipsoid or cone shaped, similar to geometric-optic models (Schaaf and Strahler 1994; Chen et al. 2012);
(3) Individual tree LAI (LAItree) was predicted from tree height (Xiao et al. 2006);
(4) We accepted a spherical leaf angle distribution (LADtree) for all trees due to missing measurements;
(5) Reflectance of non-vegetation objects, such as walls, water bodies and roads, remained constant, following precedents set in the existing literature (Wang et al. 2008) or the ENVI spectral library.
With these assumptions, there were two types of input parameters: fixed or dynamic. Fixed parameters were DEM, land cover map, individual tree map (coordinates, DBH, height, crown radius, crown length). DEM and land cover map were re-sampled to a resolution of 1 m. Land cover maps were generated using a decision tree method with six classes: bare soil, road, birch forest, larch forest, water surface and buildings. Decision rules were largely based on the Ratio Vegetation Index (RVI), the Normalized Difference Water Index (NDWI) and CHM. Dynamic parameters determined the seasonal change of reflectance, such as component reflectances, LAI and sun position, obtained mainly from time series of MODIS products, including NDVI, LAI and land surface temperature (LST).
Leaf reflectance
Leaf reflectance and transmittance were measured only once, which was not sufficient to represent optical features across the entire growing season. By running the PROSPECT model and varying its most sensitive input (leaf chlorophyll content) while fixing the others, seasonal leaf reflectance and transmittance could be simulated (Barry et al. 2009). Previous studies have shown that the amount of leaf chlorophyll is correlated with NDVI (Wu et al. 2008; Rulinda et al. 2011; Croft et al. 2013; Feng and Niu 2014). Therefore, we used a linear relationship between MODIS NDVI products (0.1 to 1.0) and the amount of leaf chlorophyll (10 to 100 μg∙cm−2).
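The NDVI-to-chlorophyll mapping described above is a straight linear rescaling; a small sketch (the function name is ours):

```python
import numpy as np

def chlorophyll_from_ndvi(ndvi):
    """Map NDVI in [0.1, 1.0] linearly to leaf chlorophyll in [10, 100] ug/cm^2."""
    ndvi = np.clip(ndvi, 0.1, 1.0)
    return 10.0 + 90.0 * (ndvi - 0.1) / 0.9
```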
Background reflectance
Seasonal variation in background reflectance was complex. However, soil moisture played a major role (Muller and Décamps 2001; Weidong et al. 2002; Whiting et al. 2004), which was then derived from TVDI (temperature vegetation dryness index) inferred from temperature and NDVI (Sandholt et al. 2002; Liang et al. 2014). TVDI is highly correlated with soil moisture (Holzman et al. 2014). Therefore, we estimated the soil reflectance as the weighted average of dry soil and wet soil reflectance, where the weights were TVDI and (1-TVDI) respectively. The background was defined as soil covered by a homogeneous shrub layer with a LAI of 0.5. Shrub leaves were assumed to have the same optical parameters as birch leaves.
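The TVDI-weighted soil spectrum is then a convex combination of the two endmember spectra; a one-line sketch (a TVDI of 1 corresponds to fully dry soil):

```python
def soil_reflectance(tvdi, dry_spectrum, wet_spectrum):
    """Blend dry and wet soil spectra with weights TVDI and (1 - TVDI)."""
    return tvdi * dry_spectrum + (1.0 - tvdi) * wet_spectrum
```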
Growing season
From the Landsat classification map, pure birch and larch pixels were selected to determine the beginning and final day (DOY) of the annual growing season, using the following phenology analysis.
First, MODIS time series NDVI data were fitted using a harmonic analysis (Jonsson and Eklundh 2004) to remove random noise. We refer to the maximum and minimum values of NDVI as NDVImax and NDVImin. The starting date was defined as the DOY when NDVI first exceeded 20 % of (NDVImax - NDVImin) between DOY 1 and DOY 180. Similarly, the final date was defined as the DOY when NDVI dropped back below 20 % of (NDVImax - NDVImin) between DOY 180 and DOY 365.
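A sketch of the thresholding step on the smoothed NDVI series (we read the end-of-season rule as the NDVI falling back below the same 20 % amplitude threshold; ndvi and doy are aligned 1-D arrays):

```python
import numpy as np

def growing_season(ndvi, doy):
    """Start/end DOY where smoothed NDVI crosses 20% of its annual amplitude."""
    thresh = ndvi.min() + 0.2 * (ndvi.max() - ndvi.min())
    start = doy[(doy <= 180) & (ndvi > thresh)].min()
    end = doy[(doy > 180) & (ndvi < thresh)].min()
    return start, end
```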
Temporal leaf area index of individual trees
It was difficult to calculate LAItree precisely. Instead, it was possible to allocate the MODIS LAI to individual trees. Based on assumption (3), LAItree is linearly related to tree height (Htree) for each species. Therefore, LAItree is a function of both species and DOY (see Equation 1).
$$ \mathrm{LAI}_{\mathrm{tree}} = f(\mathrm{species}) \times g(\mathrm{DOY}) \times H_{\mathrm{tree}} $$
where f is a coefficient relating Htree to LAItree, which is constant for each species, and g is a temporal correction factor. Plot LAI and individual tree heights in the field plots were used to calibrate the f values for birch and larch:
$$ \sum f_i \times H_i \times (\pi R_i^2) = \mathrm{LAI}_{\mathrm{plot}} \times \mathrm{Area}_{\mathrm{plot}} \;\Rightarrow\; f = \frac{\mathrm{LAI}_{\mathrm{plot}} \times \mathrm{Area}_{\mathrm{plot}}}{\sum H_i \times (\pi R_i^2)} $$
We calibrated f as 0.25 for birch trees and 0.20 for larch. From previous studies (Li et al. 2009; Liu and Jin 2013), we determined that the LAI of birch and larch varied with DOY and could be fitted with a polynomial equation. Both species showed very similar phenology in the spring without a difference on average (Delbart et al. 2005), so we used the MODIS LAI to calibrate the g value for both species:
$$ \sum g(\mathrm{DOY}) \times f_i \times H_i \times (\pi R_i^2) = \mathrm{LAI}_{\mathrm{MODIS}} \times \mathrm{Area}_{\mathrm{MODIS}} \;\Rightarrow\; g(\mathrm{DOY}) = \frac{\mathrm{LAI}_{\mathrm{MODIS}} \times \mathrm{Area}_{\mathrm{MODIS}}}{\sum f_i \times H_i \times (\pi R_i^2)} $$
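Putting Eqs. 1 and 3 together, one MODIS pixel's LAI can be allocated to the trees it contains. A sketch with our own argument names, where f is the per-tree species coefficient (0.25 for birch, 0.20 for larch) and heights/radii come from the lidar segmentation:

```python
import numpy as np

def allocate_modis_lai(lai_modis, area_modis, f, heights, radii):
    """Downscale one MODIS-pixel LAI to individual-tree LAIs (Eqs. 1 and 3)."""
    crown_area = np.pi * radii ** 2
    g = lai_modis * area_modis / np.sum(f * heights * crown_area)  # Eq. 3
    return g * f * heights                                         # Eq. 1
```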
Since the TreeVaW (Popescu and Wynne 2004) had not been tested in our study site, we manually segmented a few tree crowns in nine sub-plots with different tree densities in order to evaluate the accuracy of the extracted number of trees, height, location and crown radius.
We carried out four types of evaluations: (a) CCD image was used to check the pattern of simulated half-meter images; (b) Landsat images were used to check reflectance values of nadir images at the same date; (c) MODIS BRF products were used to compare simulated BRFs and (d) finally, we used four dates of Landsat images to evaluate temporal simulations.
Tree structure
Compared to manual segmentation, TreeVaW detected 88 % of the number of trees in sparse plots (Fig. 5a, b), but only 74 % in dense plots (Fig. 5c, d). Crown radii obtained from TreeVaW ranged from 0.59 to 0.71 m, lower than those from manual segmentation. The mean tree height error and location bias of detected trees were 0.88 and 0.91 m, respectively.
Comparisons of tree segmentation between manual operation and TreeVaW in a sparse subplot (a-b) and a dense subplot (c-d); (a) and (c) are manual results; (b) and (d) are TreeVaW results
Based on regression analysis of plot data, both tree DBH and crown length (L) were well predicted from tree height (H) with coefficients of determination larger than 0.80:
$$ \mathrm{DBH}(\mathrm{birch}) = 0.2466\,H^{1.5652} \quad (R^2 = 0.89) $$
$$ \mathrm{DBH}(\mathrm{larch}) = 0.1639\,H^{1.8704} \quad (R^2 = 0.82) $$
$$ L(\mathrm{birch}) = 0.5475\,H - 0.0118 \quad (R^2 = 0.87) $$
$$ L(\mathrm{larch}) = 0.6551\,H - 0.0731 \quad (R^2 = 0.55) $$
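These plot-calibrated allometries translate directly into code; a small sketch:

```python
def dbh_and_crown_length(h, species):
    """DBH and crown length L from tree height H, per the fitted equations."""
    if species == "birch":
        return 0.2466 * h ** 1.5652, 0.5475 * h - 0.0118
    return 0.1639 * h ** 1.8704, 0.6551 * h - 0.0731  # larch
```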
Figure 6 shows the smoothed NDVI curves for birch and larch-dominated forests. The starting date, final date and length of the growing season were estimated as DOY 140 (May 20), DOY 273 (Sep 30) and about 130 days. During the growing season, the birch forest had higher NDVI values than the larch forests, and the larch forests in the flat wetland area had significantly lower NDVI values than those in mountain areas.
Smoothed MODIS 16-day 250 m NDVI products in 2013
Land cover classification
The Landsat 8 image on May 24, 2014 was used to produce a 15 m classification map, given a suitable growing season and good image quality to distinguish birch and larch (Fig. 7a). Compared to the old forest map (Fig. 7b), the southern regions (1 and 2) visually matched much better than the northern regions (3 and 4). Fortunately, the major study area was located in regions 1 and 2, where the accuracy (around 75 %) was calculated from random sampling points. Major rules of the decision tree were the following: (1) forest vegetations = (RVI > 0 and NDWI > 0 and CHM > 2 m); (2) shrubs or grasses = (RVI > 0 and NDWI > 0 and CHM ≤ 2 m); (3) birch = ((1) and RVI > 7.0); (4) larch = ((1) and RVI ≤ 5.0); (5) mixed forests = ((1) and RVI > 5.0 and RVI ≤ 7.0).
Vegetation map: (a) classified image with the lightest greenness representing birch forests; (b) forest map with white color representing birch forests
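The quoted decision rules translate directly into array masks; a sketch with our own class codes (0 = other, 1 = shrub/grass, 2 = birch, 3 = larch, 4 = mixed forest), using the thresholds quoted above:

```python
import numpy as np

def classify_land_cover(rvi, ndwi, chm):
    """Apply the decision-tree thresholds to RVI, NDWI and CHM rasters."""
    cls = np.zeros(rvi.shape, dtype=np.uint8)
    veg = (rvi > 0) & (ndwi > 0)
    cls[veg & (chm <= 2)] = 1                     # shrubs or grasses
    forest = veg & (chm > 2)
    cls[forest & (rvi > 7.0)] = 2                 # birch
    cls[forest & (rvi <= 5.0)] = 3                # larch
    cls[forest & (rvi > 5.0) & (rvi <= 7.0)] = 4  # mixed forest
    return cls
```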
Forest understory in the Genhe Reserve was complex but of considerable value in identifying forest types (see Table 1). Some shrubs were evergreen, while grasses shed their leaves. Therefore, the vegetation detected in the SPOT-6 image (1 m, October) was used to define shrubs as evergreen vegetation, because only evergreens had green leaves at that time of the year (Fig. 8).
Determining vegetation as evergreen bush in winter season: (a) evergreen understory on Spot-6 image (red color); (b) CCD image; (c) CHM image
Comparisons of nadir images
Simulated nadir images (0.5 m resolution) were compared to the CCD image in Fig. 9. The spatial textures and land cover differences are consistent, but the simulated forests look sparser.
Comparing nadir image (0.5 m) with CCD: (a) simulated image (R = Near infrared (NIR), G = red, B = green); (b) airborne CCD mosaic image from multiple days
The spectral results were compared with Landsat 8 reflectance images on May 24, 2014 (Fig. 10). Both simulation and Landsat images showed typical vegetation reflectance spectra (low red reflectance and high near infrared (NIR) reflectance). Simulation results are significantly lower in blue bands (0.02 to 0.06).
Comparison of nadir top of canopy (TOC) reflectance image with a Landsat 8 image using linear stretch (0 to 0.3): (a) simulated image (0.5 m, R = NIR, G = red, B = green); (b) re-sampled 15 m image from (a); (c) Landsat 8 (15 m) on May 24, 2014 (R = NIR, G = red, B = green); (d) Spectral curves of dense and sparse canopies
Comparisons of BRF
Five pixels around the central study area showed variation in the BRF curves and were used as a reference to evaluate the RAPID BRF results (Fig. 11). Generally, the simulated BRF matches the shape of the MODIS BRF in spite of absolute biases in a few view directions. First, the simulated red BRF is higher than all MODIS BRFs when the view zenith angle (VZA) is between −50° and 40°. Second, in both red and NIR bands, the backscattering BRF at VZA larger than 50° is lower than the MODIS BRF.
Comparisons between MODIS BRF product and RAPID simulations: (a) red band (0.620–0.670 nm); (b) NIR band (0.841–0.876 nm)
Temporal results
Four Landsat images were used to check the simulation ability of temporal variations; the dynamic parameters of birch and larch trees are shown in Table 2.
Table 2 Dynamic parameters of birch or larch forests
Figure 12 compares the results between simulated and real images in stripes of birch and larch forest stands (600 m by 600 m). The resolutions were 15 m except for the Landsat TM image (30 m) on September 5, 2013. The birch stands (marked as A) showed significant variation in reflectance from brown (bare soil), red (green canopy) and pink (dense canopy) to mixed colors (discoloring canopy), which was reconstructed in the simulated images in spite of slightly different colors. In the lower part of the Landsat ETM+ image, a black no-data area showed up due to a sensor error (SLC-OFF). The results on Sept 5, 2013 showed larger discrepancies.
Comparison between simulated and Landsat images with false color composition (RGB = [NIR, RED, GREEN]); A and B represent birch and larch trees, respectively
Our main objective was to create and test a way to couple lidar data with temporal optical MODIS data in order to simulate high-resolution optical satellite images. A framework was built and tested at the Genhe Forest Farm. In spite of some biases and errors, the approach successfully produced temporal images with high spatial, spectral and angular resolutions, which confirms the feasibility of fusing lidar and MODIS data.
Major contributors on simulation
The framework included four main data sources: lidar, Landsat, MODIS and field data. To drive a 3D model, the most important inputs were the 3D scenes and the reflectance and transmittance of the 3D objects within them. Lidar was the first contributor, providing the 3D structures of individual trees and the background. Lidar-derived 3D structures were normally static, but 3D scene objects, especially their LAI, were dynamic. Therefore, we used an allocation method to downscale the MODIS LAI onto each tree, a technique not found in previous studies. Landsat images were used to classify birch and larch, supporting the generation of 3D trees.
The optical parameters of 3D objects were collected in the field or obtained from existing references; these were also dynamic. Therefore, MODIS NDVI data were used to calibrate leaf chlorophyll for the PROSPECT model, which then simulated dynamic leaf reflectance and transmittance. Background soil reflectance varied over time and was difficult to obtain. An alternative is to use TVDI to adjust soil reflectance, which is a more recent idea and needs to be evaluated in any future research.
Major errors
Although three types of evaluation on reflectance (spatial texture, BRF and Landsat simulation) demonstrated the capability to simulate temporal images, quantitative validation was still missing due to the many uncertainties in the entire workflow. We tried to address the major error sources and assess their uncertainty:
3D structure errors:
It has to be admitted that suppressed trees and irregular tree crowns are hard to detect from the CHM. A previous study has shown that the TreeVaW method can identify more than 95 % of the trees in planted forests but only 70 % in natural forests (Antonarakis et al. 2008). Although other detection algorithms may help improve the accuracy, inter-comparisons between detection methods found that the percentage of correctly detected trees was generally between 50 and 90 % (Kaartinen et al. 2012). In our study, the percentage of correct detections was between 74 and 90 %, which is consistent with the results above. The high level of missed detections leads to a stronger clumping effect and sparser forests (Figs. 7 and 8), which then results in larger reflectance biases induced by background uncertainties.
Unknown background:
Although we classified the evergreen bush, its background type and dynamic reflectance were almost unknown. Therefore, a very crude LAI of 0.5 was assumed for all understories. In fact, it is possible to retrieve forest background reflectance from satellite data (Canisius and Chen 2007; Pisek and Chen 2009; Pisek et al. 2010; Tuanmu et al. 2010; Rautiainen et al. 2011; Pisek et al. 2012). We will try these methods to invert background reflectance in later studies. We were not able to validate the TVDI-adjusted soil reflectance, which should also have directional effects. In practice, we used isotropic soil reflectance, which may explain the BRF biases at large angles in the backward view directions.
Leaf discoloring
In September, the leaves of both birch and larch changed color. However, these changes varied even among trees of the same species, probably as an effect of age, elevation or density, making it difficult to characterize individual trees. Therefore, the accuracy of the simulated discoloration during the growing season will be low. Continuous field observations are strongly suggested.
MODIS data uncertainty
The most recent MODIS LAI product is Collection 5 (a version code), which has uncertainties of around ±1.0 for relatively pure pixels (Fang et al. 2012). However, considering the low resolution of MODIS pixels, the uncertainties of the inverted LAIs are even larger for mixed pixels. The image matching between MODIS (1 km) and the CHM (0.5 m) is also tricky. However, as the only available product, it was used in our simulation framework. In a future study, we will use Landsat images to bridge the gap with higher resolution LAI products (Gao et al. 2014). The BRF biases between the simulation and MODIS can be partially attributed to the limitations of MODIS BRF in reconstructing higher and narrower hotspots (Huang et al. 2013b).
Landsat data uncertainty
Landsat images were used to compare simulated nadir reflectance and image textures. Figure 10 shows significant differences in the blue band, which can be largely explained by an atmospheric correction error because blue band reflectance should be lower after a correct removal of aerosol scatter. This atmospheric correction was carried out by using the FLAASH module of the ENVI 5.1 software, where the aerosol optical depth and water content were only estimated from images.
Efficiency problems
RAPID is relatively fast with 3D models, but running one case (1 km) still needs four to six hours at individual tree scale on a workstation (using 10 CPU cores). We are of the opinion that it is not feasible for the generation of operational products. However, for some scientific use, focused on local areas, it may be worthwhile to obtain images with very high resolutions (spatial, spectral and angular) for research within an acceptable time frame and cost structure. Because RAPID can run at scalable resolutions, 3D scenes of dense forests can be up-scaled to regular grids with a medium resolution (e.g. 5 to 10 m), which significantly improves calculation efficiency (less than 30 min) without much loss in accuracy. Furthermore, we can create a reference table of 3D scenes, classifying a study area into fewer categories with possible combinations of DEM, understory, tree locations, tree heights and tree LAI. The corresponding reflectance images will be simulated and stored as an image database. Once the database is created, a quick search method can be used to pick up desired images based on input parameters such as DEM, understory, tree distribution and LAI. In the current framework, we only dealt with the capability of coupling simulation. Improvements will be presented in our next study.
Scale issues
Scale effect and scaling have been big issues in remote sensing community. When models or algorithms at small scales are used at large scales, they may produce certain errors, especially for non-linear models (Tao et al. 2009). Scale issues constrain the accuracy of retrieval and limit the development of remote sensing applications. In this study, MODIS LAI products at a 1 km scale were down-scaled to the level of LAI of individual trees, using lidar at a scale of only a few meters, used to coincide with the RAPID model scale. Assuming that MODIS LAI products are scale-corrected, this scaling does not change total leaf area. Since LAItree was found to be correlated with tree height, the down-scaling model should also be linear without model scale issues. In comparison with satellite images, the simulated RAPID images should also be re-sampled to coincide with the satellite scale, including the MODIS and Landsat scales. Fortunately, reflectance or radiance images are scale independent.
Using MODIS and lidar data to reconstruct 3D scenes at an individual tree scale, the RAPID model is capable of simulating time series of images at spatial scales from 0.5 m to 1 km and temporal scales of a few days. Although the 3D scene estimation is not perfect, this type of multi-scale optical image dataset will be useful to support the understanding of scale problems.
We presented a simulation framework which links lidar with optical images to produce series of temporal images. The study provides a proof-of-concept approach to link lidar data in the parameterization of a RAPID model for temporal image reconstruction in forest dominated areas. Demonstrations were applied at the Genhe Forest Farm, a remote forest reserve in China. Evaluations on nadir reflectance, spatial textures and BRF confirmed that 3D simulation provides an insight look into how images vary over time. Many uncertainties were identified, which can be expected to be reduced in any future study. Strategies to improve efficiency are possible and discussed.
Adams T, Beets P, Parrish C (2012) Extracting more data from LiDAR in forested areas by analyzing waveform shape. Remote Sens 4(3):682–702. doi:10.3390/rs4030682
Ahl DE, Gower ST, Burrows SN, Shabanov NV, Myneni RB, Knyazikhin Y (2006) Monitoring spring canopy phenology of a deciduous broadleaf forest using MODIS. Remote Sens Environ 104(1):88–95. doi:10.1016/j.rse.2006.05.003
Antonarakis AS, Richards KS, Brasington J, Bithell M, Muller E (2008) Retrieval of vegetative fluid resistance terms for rigid stems using airborne lidar. J Geophys Res 113(G2):G02S07. doi:10.1029/2007JG000543
Arno J, Escola A, Valles JM, Llorens J, Sanz R, Masip J, Palacin J, Rosell-Polo JR (2013) Leaf area index estimation in vineyards using a ground-based LiDAR scanner. Precision Agric 14(3):290–306. doi:10.1007/s11119-012-9295-0
Barbier N, Proisy C, Vega C, Sabatier D, Couteron P (2011) Bidirectional texture function of high resolution optical images of tropical forest: An approach using LiDAR hillshade simulations. Remote Sens Environ 115(1):167–179. doi:10.1016/j.rse.2010.08.015
Barry KM, Newnham GJ, Stone C (2009) Estimation of chlorophyll content in Eucalyptus globulus foliage with the leaf reflectance model PROSPECT. Agr Forest Meteorol 149(6–7):1209–1213, http://dx.doi.org/10.1016/j.agrformet.2009.01.005
Canisius F, Chen JM (2007) Retrieving forest background reflectance in a boreal region from Multi-anglo Imaging SpectroRadiometer (MISR) data. Remote Sens Environ 107(1–2):312–321. doi:10.1016/j.rse.2006.07.023
Chen G, Wulder MA, White JC, Hilker T, Coops NC (2012) Lidar calibration and validation for geometric-optical modeling with Landsat imagery. Remote Sens Environ 124(0):384–393, http://dx.doi.org/10.1016/j.rse.2012.05.026
Croft H, Chen JM, Zhang Y, Simic A (2013) Modelling leaf chlorophyll content in broadleaf and needle leaf canopies from ground, CASI, Landsat TM 5 and MERIS reflectance data. Remote Sens Environ 133(0):128–140, http://dx.doi.org/10.1016/j.rse.2013.02.006
Delbart N, Kergoat L, Le Toan T, Lhermitte J, Picard G (2005) Determination of phenological dates in boreal regions using normalized difference water index. Remote Sens Environ 97(1):26–38. doi:10.1016/j.rse.2005.03.011
Fang H, Wei S, Liang S (2012) Validation of MODIS and CYCLOPES LAI products using global field measurement data. Remote Sens Environ 119:43–54. doi:10.1016/j.rse.2011.12.006
Feng M, Niu Z (2014) Chlorophyll content retrieve of vegetation using Hyperion data based on empirical models. Remote Sens Land Resour 26(1):71–77. doi:10.6046/gtzyyg.2014.01.13
Gao F, Anderson MC, Kustas WP, Houborg R (2014) Retrieving leaf area index from landsat using MODIS LAI products and field measurements. IEEE Geosci Remote Sens Lett 11(4):773–777. doi:10.1109/lgrs.2013.2278782
Gao F, Masek J, Schwaller M, Hall F (2006) On the blending of the Landsat and MODIS surface reflectance: predicting daily Landsat surface reflectance. IEEE Trans Geosci Remote Sens 44(8):2207–2218. doi:10.1109/tgrs.2006.872081
García Millán VE, Sánchez-Azofeifa A, Málvarez García GC, Rivard B (2014) Quantifying tropical dry forest succession in the Americas using CHRIS/PROBA. Remote Sens Environ 144(0):120–136, http://dx.doi.org/10.1016/j.rse.2014.01.010
Hilker T, Wulder MA, Coops NC, Seitz N, White JC, Gao F, Masek JG, Stenhouse G (2009) Generation of dense time series synthetic Landsat data through data blending with MODIS using a spatial and temporal adaptive reflectance fusion model. Remote Sens Environ 113(9):1988–1999. doi:10.1016/j.rse.2009.05.011
Holzman ME, Rivas R, Bayala M (2014) Subsurface soil moisture estimation by VI-LST method. IEEE Geosci Remote Sens Lett 11(11):1951–1955. doi:10.1109/lgrs.2014.2314617
Huang H, Qin W, Liu Q (2013a) RAPID: a radiosity applicable to porous individual objects for directional reflectance over complex vegetated scenes. Remote Sens Environ 132:221–237. doi:10.1016/j.rse.2013.01.013
Huang X, Jiao Z, Dong Y, Zhang H, Li X (2013b) Analysis of BRDF and albedo retrieved by kernel-driven models using field measurements. IEEE J Selected Topics Appl Earth Observ Remote Sens 6(1):149–161. doi:10.1109/jstars.2012.2208264
Inglada J, Hagolle O, Dedieu G (2011) A framework for the simulation of high temporal resolution image series. Paper presented at the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 24–29 July 2011
Jacquemoud S, Verhoef W, Baret F, Bacour C, Zarco-Tejada PJ, Asner GP, François C, Ustin SL (2009) PROSPECT + SAIL models: a review of use for vegetation characterization. Remote Sens Environ 113(Supplement 1 (0)):S56–S66, http://dx.doi.org/10.1016/j.rse.2008.01.026
Jonsson P, Eklundh L (2004) TIMESAT - a program for analyzing time-series of satellite sensor data. Comput Geosci 30(8):833–845. doi:10.1016/j.cageo.2004.05.006
Kaartinen H, Hyyppä J, Yu X, Vastaranta M, Hyyppä H, Kukko A, Holopainen M, Heipke C, Hirschmugl M, Morsdorf F, Næsset E, Pitkänen J, Popescu S, Solberg S, Wolf BM, Wu J-C (2012) An international comparison of individual tree detection and extraction using airborne laser scanning. Remote Sensing 4(4):950–974
Kobayashi H, Suzuki R, Kobayashi S (2007) Reflectance seasonality and its relation to the canopy leaf area index in an eastern Siberian larch forest: Multi-satellite data and radiative transfer analyses. Remote Sens Environ 106(2):238–252. doi:10.1016/j.rse.2006.08.011
Li GZ, Wang HX, Zhu JJ (2009) Monthly changes of leaf area index and canopy openness of Larix olgensis in mountainous regions in east Liaoning province. J Northeast Forestry Univ 37(7):13
Liang L, Zhao S-h, Qin Z-h, He K-x, Chen C, Luo Y-x, Zhou X-d (2014) Drought change trend using MODIS TVDI and its relationship with climate factors in china from 2001 to 2010. J Integr Agric 13(7):1501–1508, http://dx.doi.org/10.1016/S2095-3119(14)60813-3
Liu ZL, Jin GZ (2013) Estimation of leaf area index of secondary Betula platyphylla forest in Xiaoxing' an Mountains. Acta Ecologica Sinica 33(8):9
Lucas RM, Lee AC, Williams ML (2006) Enhanced simulation of radar backscatter from forests using LiDAR and optical data. IEEE Trans Geosci Remote Sens 44(10):2736–2754. doi:10.1109/tgrs.2006.881802
Lunetta RS, Johnson DM, Lyon JG, Crotwell J (2004) Impacts of imagery temporal frequency on land-cover change detection monitoring. Remote Sens Environ 89(4):444–454. doi:10.1016/j.rse.2003.10.022
Masek JG, Huang C, Wolfe R, Cohen W, Hall F, Kutler J, Nelson P (2008) North American forest disturbance mapped from a decadal Landsat record. Remote Sens Environ 112(6):2914–2926. doi:10.1016/j.rse.2008.02.010
Montesano P, Cook B, Sun G, Simard M, Nelson R, Ranson K, Zhang Z, Luthcke S (2013) Achieving accuracy requirements for forest biomass mapping: a spaceborne data fusion method for estimating forest biomass and LiDAR sampling error. Remote Sens Environ 130:153–170. doi:10.1016/j.rse.2012.11.016
Mu X, Zhang Q, Liu Q, Pang Y, Hu K (2015) A study on typical forest biomass mapping technology of great khingan using airborne laser scanner data. Remote Sens Technol Appl 30(2):220–225
Muller E, Décamps H (2001) Modeling soil moisture–reflectance. Remote Sens Environ 76(2):173–180, http://dx.doi.org/10.1016/S0034-4257(00)00198-X
Nitze I, Barrett B, Cawkwell F (2015) Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series. Int J Appl Earth Observ Geoinformation 34:136–146. doi:10.1016/j.jag.2014.08.001
Pisek J, Chen JM (2009) Mapping forest background reflectivity over North America with Multi-angle Imaging SpectroRadiometer (MISR) data. Remote Sens Environ 113(11):2412–2423. doi:10.1016/j.rse.2009.07.003
Pisek J, Chen JM, Miller JR, Freemantle JR, Peltoniemi JI, Simic A (2010) Mapping forest background reflectance in a boreal region using multiangle compact airborne spectrographic imager data. IEEE Trans Geosci Remote Sens 48(1):499–510. doi:10.1109/tgrs.2009.2024756
Pisek J, Rautiainen M, Heiskanen J, Mottus M (2012) Retrieval of seasonal dynamics of forest understory reflectance in a Northern European boreal forest from MODIS BRDF data. Remote Sens Environ 117:464–468. doi:10.1016/j.rse.2011.09.012
Popescu SC, Wynne RH (2004) Seeing the trees in the forest: using lidar and multispectral data fusion with local filtering and variable window size for estimating tree height. Photogrammetric Eng Remote Sens 70(5):589–604
Rautiainen M, Lang M, Mõttus M, Kuusk A, Nilson T, Kuusk J, Lükk T (2008) Multi-angular reflectance properties of a hemiboreal forest: An analysis using CHRIS PROBA data. Remote Sens Environ 112(5):2627–2642, http://dx.doi.org/10.1016/j.rse.2007.12.005
Rautiainen M, Mottus M, Heiskanen J, Akujarvi A, Majasalmi T, Stenberg P (2011) Seasonal reflectance dynamics of common understory types in a northern European boreal forest. Remote Sens Environ 115(12):3020–3028. doi:10.1016/j.rse.2011.06.005
Rulinda CM, Bijker W, Stein A (2011) The chlorophyll variability in Meteosat derived NDVI in a context of drought monitoring. Procedia Environ Sci 3(0):32–37, http://dx.doi.org/10.1016/j.proenv.2011.02.007
Sandholt I, Rasmussen K, Andersen J (2002) A simple interpretation of the surface temperature/vegetation index space for assessment of surface moisture status. Remote Sens Environ 79(2–3):213–224, http://dx.doi.org/10.1016/S0034-4257(01)00274-7
Schaaf CB, Strahler AH (1994) Validation of bidirectional and hemispherical reflectances from a geometric-optical model using ASAS imagery and pyranometer measurements of a spruce forest. Remote Sens Environ 49(2):138–144, http://dx.doi.org/10.1016/0034-4257(94)90050-7
Sharma RC, Kajiwara K, Honda Y (2013) Estimation of forest canopy structural parameters using kernel-driven bi-directional reflectance model based multi-angular vegetation indices. ISPRS J Photogrammetry Remote Sens 78:50–57. doi:10.1016/j.isprsjprs.2012.12.006
Tao X, Yan B, Wang K, Wu D, Fan W, Xu X, Liang S (2009) Scale transformation of Leaf Area Index product retrieved from multiresolution remotely sensed data: analysis and case studies. Int J Remote Sens 30(20):5383–5395. doi:10.1080/01431160903130978
Tuanmu M-N, Vina A, Bearer S, Xu W, Ouyang Z, Zhang H, Liu J (2010) Mapping understory vegetation using phenological characteristics derived from remotely sensed data. Remote Sens Environ 114(8):1833–1844. doi:10.1016/j.rse.2010.03.008
Wang JD, Zhang LX, Liu QH, Zhang B, Yin Q (2008) The spectrum knowledge database of typical land surface objects in China. Science Press, Beijing
Wanner W, Li X, Strahler AH (1995) On the derivation of kernels for kernel-driven models of bidirectional reflectance. J Geophys Res 100(D10):21077–21089. doi:10.1029/95JD02371
Weidong L, Baret F, Xingfa G, Qingxi T, Lanfen Z, Bing Z (2002) Relating soil surface moisture to reflectance. Remote Sens Environ 81(2–3):238–246, http://dx.doi.org/10.1016/S0034-4257(01)00347-9
Whiting ML, Li L, Ustin SL (2004) Predicting water content using Gaussian model on soil spectra. Remote Sens Environ 89(4):535–552, http://dx.doi.org/10.1016/j.rse.2003.11.009
Wu C, Niu Z, Tang Q, Huang W (2008) Estimating chlorophyll content from hyperspectral vegetation indices: Modeling and validation. Agr Forest Meteorol 148(8–9):1230–1241, http://dx.doi.org/10.1016/j.agrformet.2008.03.005
Wu M-Q, Wang J, Niu Z, Zhao Y-Q, Wang C-Y (2012) A model for spatial and temporal data fusion. J Infrared Millimeter Waves 31(1):80–84
Xiao C-W, Janssens IA, Curiel Yuste J, Ceulemans R (2006) Variation of specific leaf area and upscaling to leaf area index in mature Scots pine. Trees 20(3):304–310. doi:10.1007/s00468-005-0039-x
Xu G-c, Pang Y, Li Z-y, Zhao K-r, Liu L-x (2013) The changes of forest canopy spectral reflectance with seasons in Xiaoxing'anling. Spectrosc Spectral Anal 33(12):3303–3307
The authors gratefully acknowledge the Chinese National Basic Research Program (2013CB733401) and the Chinese Natural Science Foundation Project (41171278).
Key Laboratory for Silviculture and Conservation of Ministry of Education, Beijing Forestry University, Beijing, China
HuaGuo Huang & Jun Lian
Correspondence to HuaGuo Huang.
HH is the major and contact author for most work. JL processed a small part of the data. Both authors read and approved the final manuscript.
Keywords: Temporal interpolation, 3D remote sensing
Increased robustness of early embryogenesis through collective decision-making by key transcription factors
Ali Sharifi-Zarchi1,2,11,
Mehdi Totonchi2,3,
Keynoush Khaloughi2,
Razieh Karamzadeh2,4,
Marcos J. Araúzo-Bravo5,6,7,
Hossein Baharvand2,
Ruzbeh Tusserkani8,
Hamid Pezeshk9,10,
Hamidreza Chitsaz11 &
Mehdi Sadeghi10,12
Understanding the mechanisms by which hundreds of diverse cell types develop from a single mammalian zygote has been a central challenge of developmental biology. Conrad H. Waddington, in his metaphoric "epigenetic landscape", visualized early embryogenesis as a hierarchy of lineage bifurcations. In each bifurcation, a single progenitor cell type produces two different cell lineages. Tristable dynamical systems have been used to model such lineage bifurcations, and it has been shown that a genetic circuit consisting of two auto-activating transcription factors (TFs) with cross inhibitions can form a tristable dynamical system.
We used gene expression profiles of pre-implantation mouse embryos at the single cell resolution to visualize the Waddington landscape of the early embryogenesis. For each lineage bifurcation we identified two clusters of TFs – rather than two single TFs as previously proposed – that had opposite expression patterns between the pair of bifurcated cell types. The regulatory circuitry among each pair of TF clusters resembled a genetic circuit of a pair of single TFs; it consisted of positive feedbacks among the TFs of the same cluster, and negative interactions among the members of the opposite clusters. Our analyses indicated that the tristable dynamical system of the two-cluster regulatory circuitry is more robust than the genetic circuit of two single TFs.
We propose that a modular hierarchy of regulatory circuits, each consisting of two mutually inhibiting and auto-activating TF clusters, can form hierarchical lineage bifurcations with improved safeguarding of critical early embryogenesis against biological perturbations. Furthermore, our computationally fast framework for modeling and visualizing the epigenetic landscape can be used to obtain insights from experimental data of development at the single cell resolution.
More than six decades ago, Conrad H. Waddington portrayed a conceptual landscape of development (Fig. 1a). In his "epigenetic landscape", a ball that represents the whole or part of an egg or an embryo rolls down a sloping and undulating surface with several valleys that represent distinct organs or tissues [1]. Beyond its deceptive simplicity, the epigenetic landscape captures numerous facts of embryogenesis: (i) the decrease of differentiation potency during development, illustrated by the tilt of the landscape; (ii) the epigenetic barriers between sharply distinct cell fates, depicted as the hills between the valleys; and (iii) the derivation of distinct cell types from identical cells, portrayed as bifurcating valleys.
Waddington landscape of the mouse preimplantation embryo. a The original artwork of Waddington (we have added the arrows and the labels). b Principal component analysis (PCA) of the mouse preimplantation embryo gene expression profiles. Each point represents one cell, and the color of each point shows the developmental stage of the cell. c Schematic representation of mouse preimplantation embryonic development. d The computational Waddington landscape of the mouse early development based on the gene expression profiles. Each ball represents a single cell. PC: Principal component, ICM: Inner cell mass, TE: Trophectoderm, PE: Primitive endoderm, EPI: Epiblast
Waddington's innovation suggested that genetic interactions were the major determinants of a landscape's shape [1, 2]. In support of this idea, a genetic circuit of two TFs each stimulating itself (auto-activation) and repressing the activity of the other (mutual inhibition) has been shown to form a tristable dynamical system [3]. This system can model a lineage bifurcation, which is the differentiation of two distinct cell types from the common progenitor. The triple stable steady states or "attractors" represent the progenitor and two bifurcated lineages. In the progenitor cell state both TFs are expressed at balanced rates. In either of two bifurcated cell states, one TF is active or highly expressed whereas the other TF is silent or slightly expressed.
An example of the mutual-inhibition and auto-activation circuit between two TFs is the Gata1 versus Pu.1 circuit, which has been proposed to govern the bifurcation of common myeloid progenitors (Gata1+/Pu.1+) to either erythroids (Gata1+/Pu.1-) or myeloids (Gata1-/Pu.1+) [3]. Other examples of two-TF regulatory circuits suggested for lineage bifurcations are provided in Table 1. Furthermore, a hierarchy of mutual-inhibition and auto-activation circuits among several pairs of TFs is suggested for the hierarchy of cell type bifurcations during early development [4, 5] and pancreatic differentiation [6].
Table 1 Examples of two-TF regulatory circuits that are suggested for lineage bifurcations
As a major drawback, the two-TF circuit is highly dependent on the concentrations and functions of a single pair of TFs. In this model, a genetic or environmental perturbation that affects one of the TFs can change the behavior of the circuit and result in a deficient lineage bifurcation. Some experimental studies, however, show that cell differentiation is more robust.
For instance, the recent finding that the inner cell mass (ICM) is formed after complete inactivation of Oct4 expression [7] rejects the hypothesis that ICM vs. trophectoderm (TE) bifurcation is switched solely by the Oct4 versus Cdx2 circuitry.
Here we introduce a computational framework for modeling the epigenetic landscape. Using the single cell resolution gene expression profiles of preimplantation mouse embryonic cells [8] we visualize the Waddington landscape of early development. After analysis of the expression patterns of the key TFs that are suggested to form early lineage bifurcations, we provide an extended form of hierarchical regulatory circuitry in which each bifurcation is decided by two clusters of TFs, rather than two single TFs. We show that this extended circuitry is more robust against perturbations, which suggests it can better safeguard development.
The Waddington landscape of a preimplantation embryo
We constructed the epigenetic landscape of mouse preimplantation embryonic development using the expression profiles of 48 genes – mostly TFs – in 442 single pre-implantation embryonic cells [8]. For this purpose, we quantified three axes: cell type (x-axis), time of development (y-axis), and pseudo-potential function (z-axis, see methods for more details). Time of development was quantified according to the developmental stage of each cell in the dataset. We used principal component analysis (PCA) [9] to project the expression profiles of the cells into a two-dimensional space (Fig. 1b), in which the cells with similar fates during embryonic development (Fig. 1c) were clustered together. The angular coordinates of the cells in the PCA plot were used to put them across the x-axis of the epigenetic landscape. In this way the cells were sorted along the x-axis according to their types. We also defined a pseudo-potential function using the Gaussian mixture model and Boltzmann distribution, and computed the z-coordinates accordingly.
The result is shown in Fig. 1d. Each ball represents a single embryonic cell. The y-axis (back-to-front) shows different developmental stages from 1-cell (zygote) to 64-cell (blastocyst). The height of each region shows the pseudo-potential function level, which reflects both stability and differentiation potency. There is a single valley from the 1- to 16-cell stages that shows no significant difference between single embryonic cells at these stages. The first bifurcation appears at the 32-cell stage, where ICM is distinguished from TE. At the 64-cell stage the ICM cells undergo a second bifurcation that discriminates epiblast (EPI) from primitive endoderm (PE).
Regulatory circuitry of two transcription factors (TFs) can form lineage bifurcations
In order to inspect how the epigenetic landscape bifurcations were formed we examined the expression levels of four key TFs of preimplantation development: Oct4, Cdx2, Nanog and Gata4. These TFs were selected due to their known critical functions in the formation of early embryonic cell types [10, 11]. Our analysis shows that Oct4 is expressed in ICM and its sub-lineages, but becomes silent in the TE valley (Fig. 2a). In contrast, Cdx2 is overexpressed in the TE, and underexpressed in the ICM and its sub-lineages. Both Nanog and Gata4 are underexpressed in the TE valley, but have a sharp contrast in ICM sub-lineages. Nanog is overexpressed in the EPI and underexpressed in the PE cells, while Gata4 is overexpressed in the PE and underexpressed in the EPI valley.
Expression levels of four key transcription factors (TFs) in early embryogenesis. a The gene expression levels of Oct4, Cdx2, Nanog and Gata4 in the single cells of preimplantation embryos. The cells with the highest expression level of each TF are depicted in red, while the intermediate and the lowest expression levels are shown as white and blue, respectively. b The regulatory circuitry between Oct4 and Cdx2 (left), and Nanog and Gata4 (right). Green and red arrows show positive and negative regulatory interactions, respectively. TE: Trophectoderm, PE: Primitive endoderm, EPI: Epiblast
Competition in expression of Oct4 and Cdx2 is suggested to arise from the particular form of regulatory circuitry between them [12]. While binding of Oct4 to its own promoter has a positive regulatory effect, its binding to the Cdx2 promoter is suppressive. Similarly, Cdx2 activates itself but inhibits Oct4 (Fig. 2b, left). The regulatory circuitry between Nanog and Gata4/6 has a similar structure (Fig. 2b, right) [13, 14].
A set of ordinary differential equations (ODEs) has previously been used to model the regulatory circuitry between two generic TFs, such as A and B, with auto-activation and mutual inhibition [12] (see the Methods section for more details). Such ODEs form a tristable dynamical system that can be visualized in a force-field representation (Fig. 3a). Each grid point of the plot represents one system state with certain concentration levels of the TFs A and B, which are specified as the point dimensions. For each grid point, an arrow shows the direction of changes in the TF concentrations after a short period of time. The areas with longer arrows, in violet, represent the system states with a higher tendency to change. In contrast, the shorter red arrows represent the more stable states of the system.
Attractor states of the two-TF regulatory circuitry. a Force-field representation of the dynamical system of a regulatory circuitry consisting of two TFs with auto-activation and mutual-repression interactions. b Regulatory states of the TFs in the three enumerated attractor states. Highly expressed TFs and strong interactions are shown as thick lines, whereas thin lines represent intermediate expressions or interactions. Null expressions or interactions are depicted as dashed lines. c, d Phase space representations of the two-TF circuits. Red regions represent the highly stable states. c Both TFs have equal degradation rates. d The degradation rate of the transcription factor A is increased by 50 % (denoted by A*)
In attractor 1, as enumerated in Fig. 3a, A is highly expressed and B is silent, and this state is maintained through the positive and negative feedback loops (Fig. 3b, top). The same conditions hold for attractor 3, in which the dominant expression of B suppresses the expression of A and maintains a high abundance of B (Fig. 3b, bottom). In attractor 2, however, both TFs are expressed at lower, balanced rates (Fig. 3b, middle). In this attractor, the positive feedback each TF receives from auto-activation is in equilibrium with the negative feedback from the other TF. Attractor 2 represents a progenitor cell type, while attractors 1 and 3 denote the two bifurcated cell lineages.
Two-cluster regulatory circuitry can resist perturbations
Although the two-TF regulatory circuitry could account for a developmental bifurcation, we conjectured that this type of regulatory circuitry would be too sensitive. In other words, genetic mutations or environmental perturbations that affect the concentration or function of either TF could influence the bifurcation and the ratios of the cells that differentiate into either lineage, or even cause one cell type to vanish completely.
To test this conjecture, we computationally examined the effect of an increased degradation rate of one TF. As shown in Fig. 3c, the original two-TF circuit with similar degradation rates of both TFs forms three attractor states, indicated by red areas surrounded by the green epigenetic barriers. Increasing the degradation rate of protein A by 50 % in the ODE model significantly changes the positions of the stable states (Fig. 3d; the more degradable form of protein A is denoted by A*). While attractor 1 remains isolated, attractors 2 and 3 fuse together. As a result, during the lineage bifurcation it becomes much more likely for the progenitor cells in attractor 2 to differentiate into attractor 3 rather than attractor 1.
We hypothesized that the regulatory circuitry would be more robust against perturbations or noise if more TFs were involved in the formation of either branch of the bifurcation. To check this hypothesis we designed a new ODE system that represented a regulatory circuitry consisting of two clusters, with two TFs in each cluster. The TFs of the same cluster have positive mutual regulatory interactions, whereas the TFs of opposite clusters inhibit each other (Fig. 4a).
Attractor states of the two-clusters regulatory circuitry. a The regulatory circuitry consisting of two clusters: A and C in one cluster, and B and D in the other. The interactions between the members of the same cluster are positive, and the interactions between the TFs of different clusters are negative. b Phase space representation of the system. Red regions are highly stable. c, d Regulatory circuitry and phase space representation of two clusters, in which the degradation rate of the protein C is increased by 50 % (denoted as C*)
To show the 4-dimensional (4D) expression space of the 4 TFs as a 2D plot, we assigned the total expression of the TFs in each cluster to one axis (Fig. 4b). The pseudo-potential function of the two-cluster circuitry shows a tristable system, which is very similar to the two-TF model. Both TFs A and C, which belong to the same cluster, are highly expressed in attractor 1, whereas B and D are silent. In contrast, B and D are overexpressed in attractor 3, while A and C are silent. The progenitor attractor state 2 represents the equilibrium in which all TFs are expressed at balanced rates.
In the two-cluster circuit, we analyzed the effect of a 50 % increase in the degradation rate of protein C (Fig. 4c, d). The attractor areas are slightly shifted in the perturbed model (Fig. 4d) compared to the original two-cluster model (Fig. 4b). In particular, attractor 2 is slightly closer to attractor 3, due to the decreased concentration of protein C in the equilibrium state. However, all three attractors are maintained and none of them fuse together.
To gain quantitative insight into the robustness, we simulated the differentiation of four cell populations, each population having one of the regulatory circuitries shown in Fig. 3c, d and Fig. 4a, c (see the Methods section and the Additional file 1 for more details). We forced the cells to leave the progenitor state (attractor 2 in Figs. 3 and 4) and differentiate into the attractor states 1 or 3. This was done by gradually decreasing the auto-activation strengths of the TFs, as previously suggested [15].
In both the two-TF and the two-cluster circuits, the numbers of cells that differentiate into attractors 1 and 3 are very similar (at most a 1 % difference) when there is no perturbation. After increasing the degradation rate of one TF, only 3 % of the cells with the two-TF circuit differentiate into attractor 1. In contrast, the fraction of the cells with the two-cluster circuit that differentiate into attractor 1 is significantly higher (24 %). This simulation shows that one cell lineage (attractor 1) almost vanishes when the two-TF circuit is perturbed, while the two-cluster circuit is significantly more robust and safeguards differentiation into both lineages.
Early developmental bifurcations are switched by two clusters of TFs
We sought to determine whether the hypothesized TF clusters existed in the regulatory circuitry of early embryogenesis. For this reason, we analyzed the expression profiles of single mouse blastomeres at the 64-cell stage (Fig. 1b, c). Our analysis indicates three clusters of genes, which are mostly TFs (Fig. 5). The expression profiles of the genes in the same cluster are highly correlated, whereas lower or negative correlations are observed among the genes of different clusters. The first cluster consists of 17 genes, including Cdx2, Eomes and Gata3, which are highly expressed in TE. The second cluster includes 10 genes such as Gata4, Gata6 and Sox17 that mark PE cells. The 12 genes of the third cluster, including Nanog, Fgf4 and Sox2, are overexpressed in EPI cells. The genes of the TE cluster show lower coexpression with the genes of the other clusters. Some EPI genes are highly coexpressed with PE genes, which might reflect the limited time elapsed since the bifurcation of the EPI and PE cell types at the 64-cell stage.
Co-expressions of 48 genes in single blastocysts of the 64-cell stage mouse embryos. Each square shows the correlation value between expression profiles of two genes. Hierarchical clustering trees of the genes are shown in the top and left sides. There are three clusters of genes with high positive correlations, as indicated on the left side. The cell types in which each cluster is highly expressed are also shown
Through a literature search we compiled the experimentally validated regulatory interactions among the genes that pioneer early lineage bifurcations [8, 13, 16–27]. There are reports of positive interactions among Tead4, Eomes, Gata3, Cdx2, Elf5 and a number of other genes that are upregulated in TE cells (Fig. 6). The regulatory effects among Pou5f1 (Oct4), Nanog, Sox2 and Sall4, as key TFs of the ICM cells, are also positive. However, the TFs in one cluster have been shown to repress the TFs in the other cluster. This finding is in agreement with the structure of the two-cluster circuitry. A similar regulatory pattern can also be observed among the PE markers Gata4, Gata6, Sox17 and Sox7 in one cluster, and the EPI markers Nanog, Sox2 and Oct4 in the other cluster. Assigning the color of the cells on the epigenetic landscape based on the average expression level of each cluster confirmed the proposed TF clusters experimentally (see the Additional file 2).
Regulatory circuitry of lineage bifurcations in the mouse preimplantation embryo. Left side shows two clusters of genes that are active either in the ICM or the TE. The interactions among the genes of each cluster are positive, while the interactions between the members of distinct clusters are negative. Right side shows similar network for the EPI and the PE. ICM: inner cell mass, TE: trophectoderm, EPI: epiblast, PE: primitive endoderm
We computationally visualized the Waddington landscape of mouse preimplantation development using the experimental data and depicted the differentiation of cell lineages as bifurcations of the valleys. In this study, we modeled the dynamical system of a regulatory circuit consisting of two individual TFs with auto-activation and mutual inhibitions, which has been proposed for lineage bifurcation [5, 15, 18]. This circuit formed a tristable dynamical system with clear epigenetic barriers between the attractors. An increased degradation rate of one TF caused the epigenetic barrier between the progenitor and one of the lineage-committed cell states to break down. This experiment showed that the circuit of two individual TFs is not very robust, and that the ratios of the cells that commit to each lineage may be significantly affected by perturbations.
We investigated whether the presence of more TFs in the regulatory circuitry that governs a developmental bifurcation could lead to a more robust system. Extension of the initial circuit to a pair of clusters with multiple lineage-instructive TFs in each cluster, which activated themselves and inhibited the members of the other cluster, resulted in another tristable dynamical system, similar to the one formed by the two-TF circuit. In the extended network, however, the epigenetic barriers were not vastly affected by an increased decay rate of one TF, which was quantitatively confirmed by a simulation.
The positive feedbacks from the other TFs of the same cluster can buffer the effect of perturbations on a particular TF. This buffering property is somewhat similar to Waddington's original idea of "canalisation" – the capability of the system to recover after slight perturbations [1]. We expect this property to be even stronger in larger clusters of TFs having more positive feedback loops. This is in agreement with a suggestion by Waddington in the same book: "canalisations are more likely to appear when there are many cross links between the various processes, that is to say when the rate of change of any one variable is affected by the concentrations of many of the other variables" [1]. As the second property, the total expression of one TF cluster can overcome and inhibit the expression of the other TF cluster. We refer to these properties together as the collective decision-making of the TFs.
The extended regulatory circuitry was further illustrated by our analysis of the expression profiles of key TFs in mouse blastocysts. We identified three clusters of genes (mostly TFs) that represented the EPI, PE and TE cell types (Fig. 5). A literature review of the regulatory interactions among the members of each cluster confirmed the structure of the two-cluster regulatory circuitry and its role during early development (Fig. 6).
The proposed concept of two-cluster circuitry can be extended in a modular way to form a hierarchy of developmental bifurcations (Fig. 7). Early stages of development involve small numbers of cells, and a small change in the fate of each single cell is passed on to a large number of offspring cells. Thus, stronger safeguarding against perturbations is more crucial in early development. This can be achieved by the presence of more TFs in each cluster and/or stronger feedback loops. Later developmental bifurcations are less sensitive and might rely on smaller clusters or even individual TFs.
Developmental bifurcations are governed by a hierarchical regulatory circuitry. Each circuit consists of two clusters of transcription factors (TFs), with positive feedbacks within each cluster and negative feedbacks between the two clusters. Prior to each developmental bifurcation, the TFs of both corresponding clusters are expressed at a balanced state. In each post-bifurcation branch, one cluster is downregulated while the other is upregulated. This triggers the competitive expression of clusters that switch later bifurcations
To identify the TF clusters of each bifurcation circuit we suggest assigning the expression profiles of embryonic and adult cell types to the network of differentiation [28]. Then we can look for the differentially expressed TFs and chromatin remodelers between a pair of cell types and offspring lineages, which are bifurcated from the common progenitor cells. This can be a systematic method to identify cocktails essential for cell type conversions such as reprogramming and transdifferentiation [29].
While the proposed hierarchical regulatory circuitry provides a basis for better understanding and analysis of developmental bifurcations, we do not exclude more complicated mechanisms such as the role of signaling networks and morphogens. For example, during embryonic stem cell differentiation, Oct4 and Sox2 have mutual positive feedbacks and belong to the same cluster of upregulated TFs in the ICM and EPI. The repressive effects of Wnt3a and activin on Sox2, and also the inhibition of Oct4 by Fgf and retinoic acid, result in asymmetric upregulation of Sox2 in the mesendoderm and Oct4 in the neural ectoderm [30]. This example lends support to the concept that signaling cascade forces can dominate the regulatory interactions of TFs and eventually cause a TF cluster to split.
A second example of the cryptic mechanisms in bifurcation regulation is the presence of master and supportive TFs. In the symmetric computational model, we have assigned identical effects to the different TFs of the same cluster in determining the cell lineage. This can be further extended to an asymmetric model where one, or a small number of, TFs in each cluster are the master lineage indicators and the other members support their expression and function. The latter suggests that inactivation of different TFs in the same cluster will have different effects on the formation of the corresponding cell lineage, which is supported by experimental evidence [11].
There are even more aspects of cell biology that are critical for understanding development and differentiation. While gene-to-gene interactions are essential for the cells to differentiate, cell-to-cell communications are crucial for the embryo to balance the required quantity of each cell type, and to develop tissues and organs. As an example, the ICM and EPI cells secrete the Fgf4 signal, which binds to the Fgfr2 receptor on the membrane of TE and PE cells (Figs. 5 and 6). The development of TE and PE cells is significantly influenced by this signal [31, 32]; for instance, an increased Fgf4 concentration results in enhanced PE and diminished EPI cells [33]. As a result, the proportion of the cells that differentiate into either EPI or PE is balanced, which is another mechanism of developmental robustness. In the absence of signals and intercellular communication, development would terminate in a salt-and-pepper mixture of differentiated cell types without any pattern.
Cell division and epigenetic mechanisms such as DNA methylation and histone modifications are the other crucial factors that influence the starting point and shape of the epigenetic landscape for each cell. To address these biological aspects, we suggest assigning individualized epigenetic landscapes to different cells, which are dynamically changed by the inherited parental cytoplasm and epigenetic modifications, the environmental signals and the other mechanisms of intercellular communication [34–38]. Hence the cells that are divided from the same parent or the adjacent cells would have similar epigenetic landscapes, which bias their differentiation towards particular cell types of the same tissue. We expect that this comprehensive approach to the Waddington landscape will provide new insights to the developmental biology.
In this work we presented a framework for modeling the epigenetic landscape from single cell resolution gene expression profiles. We visualized the epigenetic landscape of mouse preimplantation embryogenesis based on the expression profiles of 48 genes in 442 embryonic cells [8], which resembled the original metaphoric Waddington landscape of cellular differentiation [1]. Next we sought to determine the regulatory circuitry that governs each developmental bifurcation.
We examined, through an ODE based model, the two-TF genetic circuits that were previously suggested to regulate lineage bifurcations [5]. Perturbation, in the form of an increased decay rate of one TF, severely changed the shape and position of the attractor states. It can be concluded that any factor with the potential to affect the expression or function of those TFs, such as genetic mutations, extrinsic stimuli and intrinsic noise, could divert the corresponding cell fate decision.
Next we developed a hierarchical regulatory network consisting of pairs of auto-activating and mutually inhibiting clusters of TFs. Our analysis showed the enhanced buffering capacity of the two-cluster regulatory circuitry against biological perturbations, due to the collective decision-making of the TFs. Our finding can serve as a further explanation for the determinism and robustness of embryonic development.
We employed two different approaches to model the cell differentiation processes. In the first approach we used the experimental data to visualize the Waddington landscape of early mouse embryogenesis and identified the clusters of the genes differentially expressed in each developmental bifurcation. In the second approach, we theoretically compared the dynamical systems generated by the smaller (two-TF) and the extended (two-cluster) regulatory circuitries, using ODE based models.
Waddington landscape: preprocessing of the experimental data
We obtained the expression profiles of 48 genes in 442 single mouse embryonic cells, from the zygote to the 64-cell stage, that were generated by the TaqMan qRT-PCR assay [8]. These genes were selected, after analyzing the expression levels of 802 TFs, on the basis of differential expression in blastomeres or known function in early development. The initial Ct values ranged from 10 to 28, and the expression values were assigned by subtracting the Ct values from the baseline value of 28 (see the Additional file 3). PCA was performed using the mean-subtracted expression values. The correlation heatmap of the genes was generated based on pairwise Spearman correlations of the expression profiles of the cells in the 64-cell stage.
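The preprocessing steps above are straightforward to reproduce. The sketch below, in R, assumes a hypothetical matrix ct of raw Ct values (one row per cell, one column per gene) and a vector stage labelling the developmental stage of each cell; neither name comes from the paper's released code.

```r
# Preprocessing sketch: expression = baseline (28) minus Ct.
expr <- 28 - ct

# PCA on the mean-subtracted expression values.
pca <- prcomp(scale(expr, center = TRUE, scale = FALSE))

# Spearman correlation heatmap of the genes at the 64-cell stage
# (cor() on a matrix correlates its columns, i.e. the genes).
expr64   <- expr[stage == 64, ]
gene_cor <- cor(expr64, method = "spearman")
pheatmap::pheatmap(gene_cor)
```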
Axes of Waddington landscape
In order to visualize the Waddington landscape of preimplantation development, we needed to define each dimension and compute it. There are three axes (dimensions) in the epigenetic landscape, as illustrated in Fig. 1a: (i) the x-axis (left-right), through which distinct cell fates are shown as different attractors (valleys); (ii) the y-axis (back-front), which shows the time of development, with early and late developmental stages located at the back and the front of the landscape, respectively; (iii) the z-axis (down-up), which represents a potential function that integrates both differentiation potency and stability [39]. The totipotent cells (zygotes) are placed at the highest valley. As the cells differentiate further into pluripotent, multipotent and then unipotent cells, they move towards the deeper and lower valleys. Furthermore, the stable cell states (attractors) are distinguished as valleys from the unstable and transient cell states that form hills.
It was straightforward to assign the y-axis of the cells, since the time of development was available for each cell. To establish the x-axis of the epigenetic landscape, we computed the principal components PC1 and PC2 of the gene expression profiles (Fig. 1b). The origin of coordinates was slightly moved into a cell-free region (PC1 = −0.5, PC2 = 0) to ensure that all the cells of the same fate are located on the same side of the origin and have close angular coordinates. Then the x-dimension of each cell was computed as its angular coordinate around the origin. Through this dimension reduction – from the initial gene expression profiles consisting of 48 dimensions into a single axis – we aimed to preserve the similarities and differences of the cells.
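A minimal sketch of this projection, continuing the assumed objects from the previous snippet; the shifted origin (PC1 = −0.5, PC2 = 0) follows the text.

```r
pc1 <- pca$x[, 1]
pc2 <- pca$x[, 2]
# x-axis of the landscape: angular coordinate of each cell
# around the shifted origin (-0.5, 0).
x_axis <- atan2(pc2, pc1 + 0.5)
```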
Pseudo-potential function of the Waddington landscape
We needed to define a form of potential function from the experimental data. The closed form of a potential function is restricted to gradient systems with stringent mathematical conditions that usually do not hold in biological systems [40]. As a result, most of the previous studies have defined pseudo- or quasi-potential functions based on many different methods: ODEs with path integration [15, 40], the Fokker-Planck equation [41, 42], Langevin dynamics [43], the Hamilton-Jacobi equation [44], drift-diffusion models [45], the Boltzmann distribution [46] and stochastic simulation [47]. Signaling network entropy, as a measure of promiscuity or undetermined lineage, is another framework used to define a pseudo-potential function based on experimental data [48].
In this study we employed the Boltzmann (Gibbs) distribution, which models the probability distribution of the particles in a system over various states with different energy levels [49]. It makes a connection between the energy levels and the probabilities of the particles being in each state. The Boltzmann distribution is expressed as the following equation:
$$ \frac{P(A)}{P(B)} = e^{-\Delta E / k_B T} $$
where A and B are two different states, P(x) is the probability of a particle being in state x, \(\Delta E\) is the energy difference that a particle should absorb/release to change its state from A to B, \(k_B\) is the Boltzmann constant, and T is the system temperature. Taking the logarithm of both sides gives:
$$ \ln \frac{P(A)}{P(B)} = \frac{-\Delta E}{k_B T} \quad\Rightarrow\quad E(A) - E(B) = -k_B T \left( \ln P(A) - \ln P(B) \right) $$
in which E(x) is the energy of a particle in state x. Taking the state B as the pseudo-potential reference results in:
$$ U(A) = -\rho \ln \left( P(A) \right) + \omega $$
where U is the pseudo-potential function. Both \(\rho\) and \(\omega\) are constants that scale the landscape and can be omitted in the visualization. To compute the pseudo-potential function we need to determine the probability of the cells being in each state, as follows.
Probability distribution of the cell states
At each developmental stage we assumed that the expression profiles of the cells of the same type were normally distributed along the x-axis after the dimension reduction. To check this assumption we produced Q-Q plots of the angular coordinates of the cells in the PC1–PC2 plane for each developmental stage (see the Additional file 4). Up to the 16-cell stage the points almost fit a single trend line. In the 32-cell stage there are two distinct segments, discriminating the ICM and TE cells. Each of the three segments in the 64-cell stage fits a different trend line, which shows that this stage is a mixture of three normal distributions, representing the EPI, PE and TE lineages. Furthermore, we performed the Shapiro-Wilk normality test [50], which confirmed the normality of several segments of different stages.
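These normality checks can be reproduced along the following lines. This is a sketch, with x_axis and stage as assumed above; the test is applied to one segment of one stage, here selected by a hypothetical logical vector is_te marking the TE cells.

```r
ang <- x_axis[stage == 64 & is_te]  # one segment of the 64-cell stage
qqnorm(ang); qqline(ang)            # Q-Q plot against a normal distribution
shapiro.test(ang)                   # Shapiro-Wilk normality test
```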
As a result, we considered that a cell population including m different cell types would follow a mixture of m normal distributions. Assuming \(\tau_k\) is the probability of a cell belonging to the k-th cell type (\(1 \le k \le m,\ \tau_k \ge 0,\ \sum_{k=1}^{m} \tau_k = 1\)), the mixture probability distribution function is:
$$ f(x) = \sum_{k=1}^{m} \tau_k \, \Phi_k \left( x \mid \mu_k, \Sigma_k \right) $$
where \(\mu_k\) and \(\Sigma_k\) are the mean and covariance matrix of all the cells of the k-th cell type, and \(\Phi_k\) is a Gaussian function defined as:
$$ \Phi_k \left( \boldsymbol{x}_i \mid \mu_k, \Sigma_k \right) = \frac{1}{\sqrt{2 \pi \left| \Sigma_k \right|}} \, e^{-\frac{1}{2} \left( \boldsymbol{x}_i - \mu_k \right)^T \Sigma_k^{-1} \left( \boldsymbol{x}_i - \mu_k \right)} $$
From the above equations we could calculate the pseudo-potential function:
$$ U(x) \propto - \ln \left(f(x)\right) $$
where x is any point on the x-axis of the epigenetic landscape (the projection of the gene expression profiles) at some particular developmental time. The mixture distribution and the pseudo-potential function were recalculated for each developmental stage with the available experimental data. Linear interpolation was used to fill the gaps between consecutive developmental stages. The landscape was visually tilted to show the reduced differentiation potency during development.
Selecting the number of different cell types and assigning each cell to one of the cell types can be done either manually (supervised) or computationally (unsupervised). To have an objective and automated framework, we used the unsupervised approach, via the "mclust" package [51] of the R statistical language. The projected expression profiles were given to the package to compute the probabilistic model parameters, including the number of cell types (clusters) and the mean and covariance values, based on a maximum likelihood criterion. For additional details, one may refer to the "mclust" package reference manual.
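As an illustration (not the paper's exact code), the density of a Gaussian mixture fitted with "mclust" can be converted directly into the pseudo-potential of one developmental stage; densityMclust selects the number of components by BIC.

```r
library(mclust)
# Fit a Gaussian mixture to the projected profiles of one stage.
dens <- densityMclust(x_axis[stage == 64])
grid <- seq(min(x_axis), max(x_axis), length.out = 200)
f_x  <- predict(dens, grid)   # mixture density f(x) on a grid
U    <- -log(f_x)             # pseudo-potential U(x) = -ln f(x)
```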
Dynamical modeling: Phase space representation of the regulatory circuitry of two transcription factors (TFs)
For the dynamical system analysis of the two-TF regulatory circuitry, we employed the following set of ODEs [3, 39]:
$$ \frac{du}{dt}=\alpha \frac{u^n}{s^n+{u}^n}+\beta \frac{s^n}{s^n+{v}^n}-\gamma u $$
$$ \frac{dv}{dt}=\alpha \frac{v^n}{s^n+{v}^n}+\beta \frac{s^n}{s^n+{u}^n}-\gamma v $$
where u and v are the concentrations of the pair of opposing TFs, and \(\alpha\) and \(\beta\) are the strengths of the positive and negative regulatory interactions, respectively. For simplicity we used the same protein degradation rate \(\gamma\) for both TFs. The term \( \frac{x^n}{s^n+x^n}\ \left(x\in \left\{u,v\right\}\right) \) is a sigmoid function that has the value 0 at x = 0, increases to 0.5 at x = s, and asymptotically approaches 1 at large values of x. It models the positive auto-activation regulatory effect of each TF. The steepness of the sigmoid function is defined by the power n. On the other hand, \( \frac{s^n}{s^n+x^n} \) is a decreasing sigmoid function that starts from 1 at x = 0 and approaches 0 at large x values, which models the mutual inhibitory effects.
In order to model the perturbation in the form of increased decay rate of a particular protein, we increased the degradation rate of the TF u by 50 %, as denoted by γ*:
$$ \frac{d{u}^{*}}{dt}=\alpha \frac{u^n}{s^n+{u}^n}+\beta \frac{s^n}{s^n+{v}^n}-{\gamma}^{*}u $$
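The two-TF force field can be written directly from these equations. The sketch below uses the parameter values given later in the Methods (n = 4, s = 0.5, α = 1.5, β = 1, γ = 1) and, as an assumption for convenience, exposes separate decay rates so that the 50 % perturbation of Fig. 3d becomes a one-argument change.

```r
two_tf_field <- function(u, v, alpha = 1.5, beta = 1,
                         gamma_u = 1, gamma_v = 1, n = 4, s = 0.5) {
  # Rates of change (du/dt, dv/dt) of the two mutually inhibiting,
  # auto-activating TFs, following the equations above.
  du <- alpha * u^n / (s^n + u^n) + beta * s^n / (s^n + v^n) - gamma_u * u
  dv <- alpha * v^n / (s^n + v^n) + beta * s^n / (s^n + u^n) - gamma_v * v
  c(du = du, dv = dv)
}

two_tf_field(1, 1)                  # unperturbed circuit (Fig. 3c)
two_tf_field(1, 1, gamma_u = 1.5)   # decay rate of u increased by 50 % (Fig. 3d)
```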
Phase space representation of the regulatory circuitry of two clusters of transcription factors (TFs)
To analyze the dynamical system of a gene regulatory circuitry consisting of two clusters of TFs, we generalized the previous two-TF model using the following equations:
$$ \frac{dx}{dt}=\frac{d{u}_1}{dt}+\frac{d{u}_2}{dt}=\eta \left({u}_1,{u}_2,{v}_1,{v}_2,\gamma \right)+\eta \left({u}_2,{u}_1,{v}_2,{v}_1,\gamma \right) $$
$$ \frac{dy}{dt}=\frac{d{v}_1}{dt}+\frac{d{v}_2}{dt}=\eta \left({v}_1,{v}_2,{u}_1,{u}_2,\gamma \right)+\eta \left({v}_2,{v}_1,{u}_2,{u}_1,\gamma \right) $$
where \(u_1\) and \(u_2\) are the concentrations of the two proteins of the first cluster, \(v_1\) and \(v_2\) denote the protein concentrations of the second cluster, and \(x = u_1 + u_2\) and \(y = v_1 + v_2\) are the total concentrations of the proteins in clusters 1 and 2, respectively. We defined the generic function \(\eta(a, b, c, d, \gamma)\) to compute the rate of change of the concentration of any protein a, based on the concentration values of the TFs a and b in one cluster, and c and d in the other cluster, as follows:
$$ \eta \left(a,b,c,d,\gamma \right)=\alpha \frac{a^n+{b}^n}{s^n+{a}^n+{b}^n}+\beta \frac{s^n}{s^n+{c}^n+{d}^n}-\gamma a $$
For the perturbation analysis, we used the increased degradation rate \(\gamma^*\) for the TF \(u_2\):
$$ \frac{dx}{dt}=\frac{d{u}_1}{dt}+\frac{d{u}_2^{*}}{dt}=\eta \left({u}_1,{u}_2,{v}_1,{v}_2,\gamma \right)+\eta \left({u}_2,{u}_1,{v}_2,{v}_1,{\gamma}^{*}\right) $$
In both models we used the following parameters: n = 4, s = 0.5, \(\alpha = 1.5\), \(\beta = 1\), \(\gamma = 1\) and \(\gamma^* = 1.5\). The sample space \((u, v) \in [0, 3]^2\) was used for analyzing the two-TF model, and \((u_1, u_2, v_1, v_2) \in [0, 3]^4\) for the two-cluster model.
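A direct transcription of the generic rate function η and of the two-cluster field, under the same parameter values; as above, this is a sketch rather than the authors' code, and the perturbed decay rate is exposed as an extra argument.

```r
eta <- function(a, b, c, d, gamma, alpha = 1.5, beta = 1, n = 4, s = 0.5) {
  # Rate of change of protein a, given its cluster partner b and the
  # opposing cluster members c and d (parameter names mirror the text).
  alpha * (a^n + b^n) / (s^n + a^n + b^n) +
    beta * s^n / (s^n + c^n + d^n) - gamma * a
}

cluster_field <- function(u1, u2, v1, v2, gamma = 1, gamma_u2 = 1) {
  dx <- eta(u1, u2, v1, v2, gamma) + eta(u2, u1, v2, v1, gamma_u2)
  dy <- eta(v1, v2, u1, u2, gamma) + eta(v2, v1, u2, u1, gamma)
  c(dx = dx, dy = dy)   # rates of the cluster totals x = u1 + u2, y = v1 + v2
}

cluster_field(1, 1, 1, 1)                  # unperturbed circuit (Fig. 4b)
cluster_field(1, 1, 1, 1, gamma_u2 = 1.5)  # decay of u2 increased by 50 % (Fig. 4d)
```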
Simulation of the cell differentiation in absence or presence of perturbation
For each of the four different regulatory circuitries depicted in Fig. 3c, d and Fig. 4a, c, we simulated the differentiation of 1000 cells. The initial expression rates of the TFs in each cell were drawn from a normal distribution with \(\mu = 1.5\) and sd = 1. With this choice of parameters, the majority of the cells were initially in the proximity of the progenitor state (attractor 2 of Figs. 3 and 4).
Each simulation continued for 100 steps, in which the expression rates of the TFs in each cell were slightly changed based on two factors: the dynamical system forces (the differential equations above) and a standard Gaussian noise (\(\mu = 0\), sd = 1). The strengths of these factors were tuned by two coefficients: the force-field coefficient had a constant value of 0.2 during the simulation, while the noise coefficient started at 0.5 and was gradually reduced during the simulation (multiplied by 0.98 in each step) to ensure the convergence of the experiment.
The auto-activation strength \(\alpha\) was 1.5 at the beginning of each simulation, but was gradually reduced (multiplied by 0.98 in each step). In this way, we forced the cells to leave the progenitor state 2 and differentiate into the attractor states 1 or 3. During this process, the stability of attractor 2 gradually decreased, resulting in a bistable system with only attractors 1 and 3. In each attractor of the bistable system, one TF was silent and the other was expressed at a slightly lower rate than in the initial circuit configuration, due to the lower value of \(\alpha\).
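The simulation loop can be sketched as below, reusing the two_tf_field sketch defined earlier. Two details are our assumptions, not stated in the text: concentrations are clipped to stay non-negative, and a cell is assigned to attractor 1 or 3 by comparing its final u and v.

```r
set.seed(1)
n_cells <- 1000
state   <- matrix(rnorm(2 * n_cells, mean = 1.5, sd = 1), ncol = 2)  # (u, v)
force_coef <- 0.2   # constant force-field coefficient
noise_coef <- 0.5   # initial noise coefficient
alpha      <- 1.5   # initial auto-activation strength

for (step in 1:100) {
  for (i in 1:n_cells) {
    f <- two_tf_field(state[i, 1], state[i, 2], alpha = alpha)
    state[i, ] <- pmax(state[i, ] + force_coef * f + noise_coef * rnorm(2), 0)
  }
  noise_coef <- noise_coef * 0.98   # decaying noise, for convergence
  alpha      <- alpha * 0.98        # decaying auto-activation strength
}

table(ifelse(state[, 1] > state[, 2], "attractor 1", "attractor 3"))
```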
The code was implemented in the R statistical language [52]. We used the packages "mclust" to generate the mixed Gaussian model, "rgl" for 3D visualization of the Waddington landscape, "ggplot2" for 2D visualization of the data [53], and "pheatmap" for visualization of the correlation heatmap. We also used the packages "grid", "gplots", "plyr", "Hmisc", and "Biobase".
Advantages and limitations
Our method of visualizing the Waddington landscape enables the use of experimental data at single cell resolution for this purpose. While we used the gene expression profiles of early embryonic cells, our method can be generalized to the analysis of high-throughput DNA methylation, histone modification and non-coding RNA expression profiles. It is computationally fast and can be used at whole-genome scale and with a large number of single cells. With time-course data, the same method can be applied to visualize the landscape of reprogramming, transdifferentiation or stem cell differentiation.
Our method interpolates the developmental time between each pair of successive sampling time points; hence, the closer the sampling time points, the more realistic the resulting landscape. The valley depth in this method mainly represents the number of cells assigned to the corresponding attractor state. This requires the data to be generated by random sampling of the different cell types. For the study of distant cell types, the quantity of cells and the depth of the attractors can be influenced by cell division rates. In this case we suggest combining our method with an indicator of differentiation potency or stability, such as the cellular network entropy [48].
Availability of supporting data
The preprocessed single-cell resolution gene expression profiles of mouse preimplantation embryonic cells [8] are provided in the Additional file 3. We have also provided in the same additional file the complete source code of this study in R programming language.
ODE:
Ordinary differential equation
ICM:
Inner cell mass
TE:
Trophectoderm
EPI:
Epiblast
PE:
Primitive endoderm
PCA:
Principal component analysis
PC:
Principal component
BIC:
Bayesian information criterion
qRT-PCR:
Quantitative reverse transcription polymerase chain reaction
Ct:
Cycle threshold
Waddington CH. The Strategy of the Genes. London: George Allen & Unwin; 1957.
Choudhuri S. From Waddington's epigenetic landscape to small noncoding RNA: some important milestones in the history of epigenetics research. Toxicol Mech Methods. 2011;21:252–74.
Huang S, Guo Y-P, May G, Enver T. Bifurcation dynamics in lineage-commitment in bipotent progenitor cells. Dev Biol. 2007;305:695–713.
Graf T, Enver T. Forcing cells to change lineages. Nature. 2009;462:587–94.
Foster DV, Foster JG, Huang S, Kauffman SA. A model of sequential branching in hierarchical cell fate determination. J Theor Biol. 2009;260:589–97.
Zhou JX, Brusch L, Huang S. Predicting pancreas cell fate decisions and reprogramming with a hierarchical multi-attractor model. PLoS One. 2011;6, e14752.
Wu G, Han D, Gong Y, Sebastiano V, Gentile L, Singhal N, et al. Establishment of totipotency does not depend on Oct4A. Nat Cell Biol. 2013;15:1089–97.
Guo G, Huss M, Tong GQ, Wang C, Sun LL, Clarke ND, et al. Resolution of cell fate decisions revealed by single-cell gene expression analysis from zygote to blastocyst. Dev Cell. 2010;18:675–85.
Pearson K. On lines and planes of closest fit to systems of points in space. Philosophical Magazine Series 6. 1901;2:559–72.
Chen L, Wang D, Wu Z, Ma L, Daley GQ. Molecular basis of the first cell fate determination in mouse embryogenesis. Cell Res. 2010;20:982–93.
Bergsmedh A, Donohoe ME, Hughes R-A, Hadjantonakis A-K. Understanding the molecular circuitry of cell lineage specification in the early mouse embryo. Genes. 2011;2:420–48.
Huang S. Reprogramming cell fates: reconciling rarity with robustness. Bioessays. 2009;31:546–60.
Chazaud C, Yamanaka Y, Pawson T, Rossant J. Early lineage segregation between epiblast and primitive endoderm in mouse blastocysts through the Grb2-MAPK pathway. Dev Cell. 2006;10:615–24.
Bessonnard S, De Mot L, Gonze D, Barriol M, Dennis C, Goldbeter A, et al. Gata6, Nanog and Erk signaling control cell fate in the inner cell mass through a tristable regulatory network. Development. 2014;141:3637–48.
Wang J, Zhang K, Xu L, Wang E. Quantifying the Waddington landscape and biological paths for development and differentiation. Proc Natl Acad Sci U S A. 2011;108:8257–62.
Zernicka-Goetz M, Morris SA, Bruce AW. Making a firm decision: multifaceted regulation of cell fate in the early mouse embryo. Nat Rev Genet. 2009;10:467–77.
Rossant J, Tam PPL. Blastocyst lineage formation, early embryonic asymmetries and axis patterning in the mouse. Development. 2009;136:701–13.
Andrecut M, Halley JD, Winkler DA, Huang S. A general model for binary cell fate decision gene circuits with degeneracy: indeterminacy and switch behavior in the absence of cooperativity. PLoS One. 2011;6, e19358.
Cockburn K, Rossant J. Making the blastocyst: lessons from the mouse. J Clin Invest. 2010;120:995–1003.
Wu G, Gentile L, Fuchikami T, Sutter J, Psathaki K, Esteves TC, et al. Initiation of trophectoderm lineage specification in mouse embryos is independent of Cdx2. Development. 2010;137:4159–69.
Ema M, Mori D, Niwa H, Hasegawa Y, Yamanaka Y, Hitoshi S, et al. Krüppel-like factor 5 is essential for blastocyst development and the normal self-renewal of mouse ESCs. Cell Stem Cell. 2008;3:555–67.
Nichols J, Zevnik B, Anastassiadis K, Niwa H, Klewe-Nebenius D, Chambers I, et al. Formation of pluripotent stem cells in the mammalian embryo depends on the POU transcription factor Oct4. Cell. 1998;95:379–91.
Plachta N, Bollenbach T, Pease S, Fraser SE, Pantazis P. Oct4 kinetics predict cell lineage patterning in the early mammalian embryo. Nat Cell Biol. 2011;13:117–23.
Jaenisch R, Young R. Stem cells, the molecular circuitry of pluripotency and nuclear reprogramming. Cell. 2008;132:567–82.
Young RA. Control of the embryonic stem cell state. Cell. 2011;144:940–54.
Avilion AA, Nicolis SK, Pevny LH, Perez L, Vivian N, Lovell-Badge R. Multipotent cell lineages in early mouse development depend on SOX2 function. Genes Dev. 2003;17:126–40.
Chambers I, Tomlinson SR. The transcriptional foundation of pluripotency. Development. 2009;136:2311–22.
Galvão V, Miranda JGV, Andrade RFS, Andrade JS, Gallos LK, Makse HA. Modularity map of the network of human cell differentiation. Proc Natl Acad Sci U S A. 2010;107:5750–5.
Pournasr B, Khaloughi K, Salekdeh GH, Totonchi M, Shahbazi E, Baharvand H. Concise review: alchemy of biology: generating desired cell types from abundant and accessible cells. Stem Cells. 2011;29:1933–41.
Thomson M, Liu SJ, Zou L-N, Smith Z, Meissner A, Ramanathan S. Pluripotency factors in embryonic stem cells regulate differentiation into germ layers. Cell. 2011;145:875–89.
Leunda Casi A, de Hertogh R, Pampfer S. Control of trophectoderm differentiation by inner cell mass-derived fibroblast growth factor-4 in mouse blastocysts and corrective effect of fgf-4 on high glucose-induced trophoblast disruption. Mol Reprod Dev. 2001;60:38–46.
Goldin SN, Papaioannou VE. Paracrine action of FGF4 during periimplantation development maintains trophectoderm and primitive endoderm. Genesis. 2003;36:40–7.
Yamanaka Y, Lanner F, Rossant J. FGF signal-dependent segregation of primitive endoderm and epiblast…. Development. 2010;137:715–24.
Camussi G, Deregibus MC, Bruno S, Cantaluppi V, Biancone L. Exosomes/microvesicles as a mechanism of cell-to-cell communication. Kidney Int. 2010;78:838–48.
Hervé J-C, Derangeon M. Gap-junction-mediated cell-to-cell communication. Cell Tissue Res. 2012;352:21–31.
Bukoreshtliev NV, Haase K, Pelling AE. Mechanical cues in cellular signalling and communication. Cell Tissue Res. 2012;352:77–94.
Xu L, Yang B-F, Ai J. MicroRNA transport: a new way in cell communication. J Cell Physiol. 2013;228:1713–9.
Gradilla A-C, Guerrero I. Cytoneme-mediated cell-to-cell signaling during development. Cell Tissue Res. 2013;352:59–66.
Ferrell Jr JE. Bistability, bifurcations, and Waddington's epigenetic landscape. Curr Biol. 2012;22:R458–66.
Bhattacharya S, Zhang Q, Andersen ME. A deterministic map of Waddington's epigenetic landscape for cell fate specification. BMC Syst Biol. 2011;5:85.
Micheelsen MA, Mitarai N, Sneppen K, Dodd IB. Theory for the stability and regulation of epigenetic landscapes. Phys Biol. 2010;7:026010.
Marco E, Karp RL, Guo G, Robson P, Hart AH, Trippa L, et al. Bifurcation analysis of single-cell gene expression data reveals epigenetic landscape. Proc Natl Acad Sci U S A. 2014;111:E5643–50.
Li C, Wang J. Quantifying cell fate decisions for differentiation and reprogramming of a human stem cell network: landscape and biological paths. PLoS Comput Biol. 2013;9, e1003165.
Lv C, Li X, Li F, Li T. Constructing the energy landscape for genetic switching system driven by intrinsic noise. PLoS One. 2014;9:e88167.
Morris R, Sancho-Martinez I, Sharpee TO, Izpisua Belmonte JC. Mathematical approaches to modeling development and reprogramming. Proc Natl Acad Sci U S A. 2014;111:5076–82.
Sisan DR, Halter M, Hubbard JB, Plant AL. Predicting rates of cell state change caused by stochastic fluctuations using a data-driven landscape model. Proc Natl Acad Sci. 2012;109:19262–7.
Li C, Wang J. Quantifying Waddington landscapes and paths of non-adiabatic cell fate decisions for differentiation, reprogramming and transdifferentiation. J R Soc Interface. 2013;10:20130787–7.
Banerji CRS, Miranda-Saavedra D, Severini S, Widschwendter M, Enver T, Zhou JX, et al. Cellular network entropy as the energy potential in Waddington's differentiation landscape. Sci Rep. 2013;3.
Kramers HA. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica. 1940;7:284–304.
Royston JP. An extension of Shapiro and Wilk's W test for normality to large samples. Applied Statistics. 1982;31:115.
Fraley C, Raftery AE. Model-based clustering, discriminant analysis and density estimation. J Am Stat Assoc. 2002;97:611–31.
R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2012.
Wickham H. Ggplot2: elegant graphics for data analysis. New York: Springer; 2009.
Nagashimada M, Ohta H, Li C, Nakao K, Uesaka T, Brunet J-F, et al. Autonomic neurocristopathy-associated mutations in PHOX2B dysregulate Sox10 expression. J Clin Invest. 2012;122:3145–58.
Zhou JX, Huang S. Understanding gene circuits at cell-fate branch points for rational cell reprogramming. Trends Genet. 2011;27:55–62.
The authors would like to express their appreciation to Ali Masoumi for his helpful ideas, and Rahim Tavassolian for creating the artwork of the gene regulatory networks. Data analysis was performed using the Computing Cluster Facility of the Institute for Research in Fundamental Sciences (IPM), Tehran, Iran.
Department of Bioinformatics, Institute of Biochemistry and Biophysics, University of Tehran, Tehran, Iran
Ali Sharifi-Zarchi
Department of Stem Cells and Developmental Biology at Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
Ali Sharifi-Zarchi, Mehdi Totonchi, Keynoush Khaloughi, Razieh Karamzadeh & Hossein Baharvand
Department of Genetics at Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran
Mehdi Totonchi
Department of Biophysics, Institute of Biochemistry and Biophysics, University of Tehran, Tehran, Iran
Razieh Karamzadeh
Computational Biology and Bioinformatics Group, Max Planck Institute for Molecular Biomedicine, Münster, Germany
Marcos J. Araúzo-Bravo
Group of Computational Biology and Systems Biomedicine, Biodonostia Health Research Institute, 20014, San Sebastián, Spain
IKERBASQUE, Basque Foundation for Science, 48011, Bilbao, Spain
School of Computer Science, Institute for Research in Fundamental Sciences, Tehran, Iran
Ruzbeh Tusserkani
School of Mathematics, Statistics and Computer Sciences, Center of Excellence in Biomathematics, College of Science, University of Tehran, Tehran, Iran
Hamid Pezeshk
School of Biological Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
Hamid Pezeshk & Mehdi Sadeghi
Computer Science Department, Colorado State University, Fort Collins, Colorado, 80523, USA
Ali Sharifi-Zarchi & Hamidreza Chitsaz
National Institute of Genetic Engineering and Biotechnology (NIGEB), Tehran, Iran
Mehdi Sadeghi
Correspondence to Mehdi Sadeghi.
MT, HB, MS, KK, MA and AS were involved in designing the project. HP proposed the statistical model. RT was involved in the design of the mathematical framework. MS, RT, HP and AS developed the computational models. MT, KK and MS reviewed the biological concepts. MA suggested the biological data and the analysis methods. AS analyzed the data and visualized the models. AS, KK, HB, MA and HC wrote and/or reviewed the manuscript. RK and MT designed the scientific concept and network figures. KK, MA, AS and MS proofread the manuscript. All authors read and approved the final manuscript.
Additional file 1: The simulation results of the differentiation of 1000 cells with the two-TF (plots 1 and 2) or the two-cluster regulatory circuitries (plots 3 and 4).
Additional file 2: The expression profiles of the TF clusters in the embryonic cells. We computed the average expression levels of the TFs of each cluster in each cell, and colored the cell accordingly. The cells with the highest expression level of each cluster are depicted in red, while the intermediate and the lowest expression levels are shown in white and blue, respectively. Three TF clusters responsible for EPI, PE and TE differentiation are shown.
Additional file 3: The complete source code of the study, in the R programming language, and the preprocessed data.
Additional file 4: The Q-Q plots of the angular coordinates of the gene expression profiles in the (PC1, PC2) plane.
Sharifi-Zarchi, A., Totonchi, M., Khaloughi, K. et al. Increased robustness of early embryogenesis through collective decision-making by key transcription factors. BMC Syst Biol 9, 23 (2015). https://doi.org/10.1186/s12918-015-0169-8
Waddington landscape
Early embryogenesis
Developmental bifurcations
Genetic circuit
Single cell analysis
On avoided words, absent words, and their application to biological sequence analysis
Yannis Almirantis1,
Panagiotis Charalampopoulos2,
Jia Gao2,
Costas S. Iliopoulos2,
Manal Mohamed2,
Solon P. Pissis2 &
Dimitris Polychronopoulos3
The deviation of the observed frequency of a word w from its expected frequency in a given sequence x is used to determine whether or not the word is avoided. This concept is particularly useful in DNA linguistic analysis. The value of the deviation of w, denoted by \(\textit{dev}(w)\), effectively characterises the extent of a word by its edge contrast in the context in which it occurs. A word w of length \(k>2\) is a \(\rho \)-avoided word in x if \(\textit{dev}(w) \le \rho \), for a given threshold \(\rho < 0\). Notice that such a word may be completely absent from x. Hence, computing all such words naïvely can be a very time-consuming procedure, in particular for large k.
In this article, we propose an \(\mathcal {O}(n)\)-time and \(\mathcal {O}(n)\)-space algorithm to compute all \(\rho \)-avoided words of length k in a given sequence of length n over a fixed-sized alphabet. We also present a time-optimal \(\mathcal {O}(\sigma n)\)-time algorithm to compute all \(\rho \)-avoided words (of any length) in a sequence of length n over an integer alphabet of size \(\sigma \). In addition, we provide a tight asymptotic upper bound for the number of \(\rho \)-avoided words over an integer alphabet and the expected length of the longest one. We make available an implementation of our algorithm. Experimental results, using both real and synthetic data, show the efficiency and applicability of our implementation in biological sequence analysis.
The systematic search for avoided words is particularly useful for biological sequence analysis. We present a linear-time and linear-space algorithm for the computation of avoided words of length k in a given sequence x. We suggest a modification to this algorithm so that it computes all avoided words of x, irrespective of their length, within the same time complexity. We also present combinatorial results with regards to avoided words and absent words.
The one-to-one mapping of a DNA molecule to a sequence of letters suggests that DNA analysis can be modelled within the framework of formal language theory [1]. For example, a region within a DNA sequence can be considered as a "word" on a fixed-sized alphabet in which some of its natural aspects can be described by means of certain types of automata or grammars. However, a linguistic analysis of the DNA needs to take into account many distinctive physical and biological characteristics of such sequences: The genome consists of coding regions that encode for polypeptide chains associated with biological functions as well as a plethora of regulatory and potentially functional non-coding regions, identified through multiple alignment of genomes of several organisms, and termed conserved non-coding elements (CNEs). In addition, it contains large non-coding regions most of which are not linked to any particular function. All these genomic components appear to have many statistical features in common with natural languages [2].
A computational tool oriented towards the systematic search for avoided words is particularly useful for in silico genomic research analyses. The search for absent words has already been undertaken in the recent past, and several results exist on the application and computation of such words [3,4,5,6]. However, words which may be present in a genome or in genomic sequences of a specific role (e.g., protein coding segments, regulatory elements, conserved non-coding elements etc.) but are strongly underrepresented—as we can estimate on the basis of the frequency of occurrence of their longest proper factors—may be of particular importance. They can be words of nucleotides which are hardly tolerated because they negatively influence the stability of the chromatin or, more generally, the functional genomic conformation; they can represent targets of restriction endonucleases which may be found in bacterial and viral genomes; or, more generally, they may be short genomic regions whose presence in wide parts of the genome is not tolerated for less known reasons. The understanding of such avoidances is becoming an interesting line of research (for recent studies, see [7, 8]).
On the other hand, short words of nucleotides may be systematically avoided in large genomic regions or whole genomes for entirely different reasons, i.e. just because they play important signaling roles which confine their appearance only in specific positions: consensus sequences for the initiation of gene transcription and of DNA replication are well-known such oligonucleotides. Other such cases may be insulators, sequences anchoring the chromatin on the nuclear envelope like lamina-associated domains, short sequences like dinucleotide repeat motifs with enhancer activity, and several other cases. Again, we cannot exclude that this area of research could lead to the identification of short sequences with regulatory activities that are still unknown.
Brendel et al. in [9] initiated research into the linguistics of nucleotide sequences that focuses on the concept of words in continuous languages—languages devoid of blanks—and introduced an operational definition of words. The authors suggested a method to measure, for each possible word w of length k, the deviation of its observed frequency from the expected frequency in a given sequence. The values of the deviation, denoted by \(\textit{dev}(w)\), were then used to identify words that are avoided among all possible words of length k. The typical length of avoided (or of overabundant) words of the nucleotide language was found to range from 3 to 5 (tri- to pentamers). The statistical significance of the avoided words was shown to reflect their biological importance. This work, however, was based on the very limited sequence data available at the time: only DNA sequences from two viral and one bacterial genomes were considered. Also note that k might change when considering eukaryotic genomes, the complex dynamics and function of which might impose a more demanding analysis. The authors in [10,11,12] have studied the concept of unusual words—based on different definitions than the ones Brendel et al. use for expectation and variance—focusing on the factors of a string, whereas based on Brendel et al. definitions, we consider here any word over the alphabet.
The computational problem can be described as follows. Given a sequence x of length n, an integer k, and a real number \(\rho < 0\), compute the set of \(\rho \)-avoided words of length k, i.e. all words w of length k for which \(\textit{dev}(w) \le \rho \). We call this set the \(\rho \)-avoided words of length k in x. Brendel et al. did not provide an efficient solution for this computation [9]. Notice that such a word may be completely absent from x. Hence a naïve computation of the set of \(\rho \)-avoided words must consider all possible \(\sigma ^k\) words, where \(\sigma \) is the size of the alphabet.
Here we present an \(\mathcal {O}(n)\)-time and \(\mathcal {O}(n)\)-space algorithm for computing all \(\rho \)-avoided words of length k in a sequence of length n over a fixed-sized alphabet. For words over an integer alphabet of size \(\sigma \), the algorithm requires time \(\mathcal {O}(\sigma n)\), which is optimal for sufficiently large \(\sigma \). We also present a time-optimal \(\mathcal {O}(\sigma n)\)-time algorithm to compute all \(\rho \)-avoided words (of any length) in a sequence of length n over an integer alphabet of size \(\sigma \). We provide a tight asymptotic upper bound for the number of \(\rho \)-avoided words over an integer alphabet and the expected length of the longest one. We also prove that the same asymptotic upper bound is tight for the number of \(\rho \)-avoided words of fixed length when the alphabet is sufficiently large.
As shown subsequently, the set of absent \(\rho \)-avoided words is a subset of the set of minimal absent words of a word. Hence the tight asymptotic bounds for \(\rho \)-avoided words are based on the proof we provide for the tightness of the known asymptotic bound on minimal absent words and the tightness of this bound for minimal absent words of fixed length over sufficiently large alphabets.
We make available an open-source implementation of our algorithm. Experimental results, using both real and synthetic data, show its efficiency and applicability. Specifically, using our method we confirm that restriction endonucleases which target self-complementary sites are not found in eukaryotic sequences [8]. In addition, we apply our algorithm in the case of CNEs, which are classes of sequences whose functions in our genomes remain largely enigmatic [13, 14]. We observe interesting patterns of occurring avoided words within CNEs compared to CNE-like sequences (surrogates) that are in accordance with their distinct sequence characteristics which classify them from other non-functional sequences [15, 16].
A preliminary version of this article has appeared in [17].
Terminology and technical background
Definitions and notation
We begin with basic definitions and notation generally following [18]. Let \(x=x[0]x[1] \cdots x[n-1]\) be a word of length \(n=|x|\) over a finite ordered alphabet \(\Sigma \) of fixed size \(\sigma \), i.e. \(\sigma = |\Sigma |=\mathcal {O}(1)\). We also consider the case of an integer alphabet; in this case each letter is replaced by its rank such that the resulting string consists of integers in the range \(\{1,\ldots ,n\}\). For two positions i and j on x, we denote by \(x[i \ldots j]=x[i]\cdots x[j]\) the factor (sometimes called subword) of x that starts at position i and ends at position j (it is empty if \(j < i\)), and by \(\varepsilon \) the empty word, word of length 0. We recall that a prefix of x is a factor that starts at position 0 (\(x[0\ldots j]\)) and a suffix is a factor that ends at position \(n-1\) (\(x[i \ldots n-1]\)), and that a factor of x is a proper factor if it is not x itself. A factor of x that is neither a prefix nor a suffix of x is called an \(\textit{infix}\) of x. We say that x is a power of a word y if there exists a positive integer k, \(k>1\), such that x is expressed as k consecutive concatenations of y; we denote that by \(x=y^k\).
Let \(w=w[0]w[1] \cdots w[m-1]\) be a word, \(0<m\le n\). We say that there exists an occurrence of w in x, or, more simply, that w occurs in x, when w is a factor of x. Every occurrence of w can be characterised by a starting position in x. Thus we say that w occurs at the starting position i in x when \(w=x[i \ldots i + m - 1]\). Further let f(w) denote the observed frequency, that is, the number of occurrences of a non-empty word w in word x. Note that overlapping occurrences are considered as distinct ones; e.g. \(f(\texttt {TT})=2\) in \(\texttt {TTT}\). If \(f(w) = 0\) for some word w, then w is called absent, otherwise, w is called occurring.
By \(f(w_p)\), \(f(w_s)\), and \(f(w_i)\) we denote the observed frequency of the longest proper prefix \(w_p\), suffix \(w_s\), and infix \(w_i\) of w in x, respectively. We can now define the expected frequency of word w, \(|w|>2\), in x as in Brendel et al. [9]:
$$\begin{aligned} E(w) = \frac{f(w_p) \times f(w_s)}{f(w_i)},\quad \text { if~ } f(w_i) >0; \text {~else~} E(w) = 0. \end{aligned}$$
The above definition can be explained intuitively as follows. Suppose we are given \(f(w_p)\), \(f(w_s)\), and \(f(w_i)\). Given an occurrence of \(w_i\) in x, the probability of it being preceded by w[0] is \(\frac{f(w_p)}{f(w_i)}\) as w[0] precedes exactly \(f(w_p)\) of the \(f(w_i)\) occurrences of \(w_i\). Similarly, this occurrence of \(w_i\) is also an occurrence of \(w_s\) with probability \(\frac{f(w_s)}{f(w_i)}\). Although these two events are not always independent, the product \(\frac{f(w_p)}{f(w_i)} \times \frac{f(w_s)}{f(w_i)}\) gives a good approximation of the probability that an occurrence of \(w_i\) at position j implies an occurrence of w at position \(j-1\). It can be seen then that by multiplying this product by the number of occurrences of \(w_i\) we get the above formula for the expected frequency of w.
Moreover, to measure the deviation of the observed frequency of a word w from its expected frequency in x, we define the deviation (\(\chi ^2\) test) of w as:
$$\begin{aligned} \textit{dev}(w) = \frac{f(w)-E(w)}{\max \{ \sqrt{E(w)}, 1\}}. \end{aligned}$$
For more details on the biological justification of these definitions see [9].
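To make these definitions concrete, the following minimal C++ sketch—our own illustration, not the authors' implementation—computes f, E and \(\textit{dev}\) by brute-force substring counting; it reproduces the values given in the example further below.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>

// Count the (possibly overlapping) occurrences of w in x.
static long freq(const std::string& x, const std::string& w) {
    long count = 0;
    for (std::string::size_type i = x.find(w); i != std::string::npos;
         i = x.find(w, i + 1))
        ++count;
    return count;
}

// E(w) = f(w_p) * f(w_s) / f(w_i), where w_p, w_s and w_i are the longest
// proper prefix, suffix and infix of w; E(w) = 0 if f(w_i) = 0. Assumes |w| > 2.
static double expectedFreq(const std::string& x, const std::string& w) {
    long fp = freq(x, w.substr(0, w.size() - 1));
    long fs = freq(x, w.substr(1));
    long fi = freq(x, w.substr(1, w.size() - 2));
    return fi > 0 ? static_cast<double>(fp) * fs / fi : 0.0;
}

// dev(w) = (f(w) - E(w)) / max(sqrt(E(w)), 1).
static double dev(const std::string& x, const std::string& w) {
    double e = expectedFreq(x, w);
    return (freq(x, w) - e) / std::max(std::sqrt(e), 1.0);
}

int main() {
    const std::string x = "AGCGCGACGTCTGTGT";
    std::cout << dev(x, "CGT") << '\n';  // -0.408248 (occurring word)
    std::cout << dev(x, "AGT") << '\n';  // -0.5 (absent word)
}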
Using the above definitions and a given threshold, we are in a position to classify a word w as either avoided or common in x. In particular, for a given threshold \(\rho < 0\), a word w is called \(\rho \)-avoided if \(\textit{dev}(w) \le \rho \). In this article, we consider the following computational problems.

Problem AvoidedWordsComputation: given a word x of length n, an integer \(k>2\), and a real number \(\rho < 0\), compute the set of \(\rho \)-avoided words of length k in x.

Problem AllAvoidedWordsComputation: given a word x of length n and a real number \(\rho < 0\), compute the set of \(\rho \)-avoided words (of any length) in x.
Suffix trees
In our algorithms, suffix trees are used extensively as computational tools. For a general introduction to suffix trees, see [18].
The suffix tree \(\mathcal {T}(x)\) of a non-empty word x of length n is a compact trie representing all suffixes of x. The nodes of the trie which become nodes of the suffix tree are called explicit nodes, while the other nodes are called implicit. Each edge of the suffix tree can be viewed as an upward maximal path of implicit nodes starting with an explicit node. Moreover, each node belongs to a unique path of that kind. Then, each node of the trie can be represented in the suffix tree by the edge it belongs to and an index within the corresponding path.
We use \(\mathcal {L}(v)\) to denote the path-label of a node v, i.e., the concatenation of the edge labels along the path from the root to v. We say that v is path-labelled \(\mathcal {L}(v)\). Additionally, \(\mathcal {D}(v)= |\mathcal {L}(v)|\) is used to denote the word-depth of node v. Node v is a terminal node, if and only if, \(\mathcal {L}(v) = x[i \ldots n-1]\), \(0 \le i < n\); here v is also labelled with index i. It should be clear that each occurring word w in x is uniquely represented by either an explicit or an implicit node of \(\mathcal {T}(x)\). The suffix-link of a node v with path-label \(\mathcal {L}(v)= \alpha y\) is a pointer to the node path-labelled y, where \(\alpha \in \Sigma \) is a single letter and y is a word. The suffix-link of v exists if v is a non-root internal node of \(\mathcal {T}(x)\). We denote by Child \((v,\alpha )\) the explicit node that is obtained from v by traversing the outgoing edge whose label starts with \(\alpha \in \Sigma \).
In any standard implementation of the suffix tree, we assume that each node of the suffix tree is able to access its parent. Note that once \(\mathcal {T}(x)\) is constructed, it can be traversed in a depth-first manner to compute the word-depth \(\mathcal {D}(v)\) for each node v. Let u be the parent of v. Then the word-depth \(\mathcal {D}(v)\) is computed by adding \(\mathcal {D}(u)\) to the length of the label of edge (u, v). If v is the root then \(\mathcal {D}(v) = 0\). Additionally, a depth-first traversal of \(\mathcal {T}(x)\) allows us to count, for each node v, the number of terminal nodes in the subtree rooted at v, denoted by \(\mathcal {C}(v)\), as follows. When internal node v is visited, \(\mathcal {C}(v)\) is computed by adding up \(\mathcal {C}(u)\) of all the nodes u, such that u is a child of v, and then \(\mathcal {C}(v)\) is incremented by 1 if v itself is a terminal node. If a node v is a leaf then \(\mathcal {C}(v) = 1\).
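The two depth-first computations just described can be sketched as follows in C++; the pointer-based node interface is a hypothetical one of ours, introduced only for illustration.

#include <vector>

// A generic suffix tree node (hypothetical interface). edgeLen is the
// length of the label of the edge entering this node from its parent.
struct STNode {
    std::vector<STNode*> children;
    int  edgeLen  = 0;
    bool terminal = false;  // true iff the path-label is a suffix of x
    int  D = 0;             // word-depth of the node
    long C = 0;             // number of terminal nodes in its subtree
};

// Single depth-first traversal: D(v) is set top-down as
// D(parent) + |edge label|, and C(v) is accumulated bottom-up.
static void preprocess(STNode* v, int parentDepth) {
    v->D = parentDepth + v->edgeLen;
    v->C = v->terminal ? 1 : 0;  // v itself may be a terminal node
    for (STNode* u : v->children) {
        preprocess(u, v->D);
        v->C += u->C;
    }
}
// Usage: preprocess(root, 0); leaves, being terminal, end up with C(v) = 1.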
Fig. 1 The suffix tree \(\mathcal {T}(x)\) for \(x= \texttt {AGCGCGACGTCTGTGT}\). Double-lined nodes represent terminal nodes labelled with the associated indices. The suffix-links for non-root internal nodes are dashed
Consider the word \(x=\texttt {AGCGCGACGTCTGTGT}\). Fig. 1 represents the suffix tree \(\mathcal {T}(x)\). Note that word \(\texttt {GCG}\) is represented by the explicit internal node v; whereas word \(\texttt {TCT}\) is represented by the implicit node along the edge connecting the node labelled 15 and the node labelled 9. Consider node v in \(\mathcal {T}(x)\); we have that \(\mathcal {L}(v) = \texttt {GCG}\), \(\mathcal {D}(v) = 3\), and \(\mathcal {C}(v)=2\).
Tight bounds on minimal absent words
Definition 1 ([4]) An absent word w of x is minimal if and only if all proper factors of w occur in x.
We first show that the known asymptotic upper bound on the number of minimal absent words of a word is tight.
Lemma 1 ([19]) The upper bound \(\mathcal {O}(\sigma n)\) on the number of minimal absent words of a word of length n over an alphabet of size \(\sigma \) is tight if \(2 \le \sigma \le n\).
To prove that the bound is tight, it suffices to construct a word with asymptotically this many minimal absent words.
Let \(\Sigma =\{a_1,a_2\}\), i.e. \(\sigma =2\), and consider the word \(x=a_2 a_1^{n-2} a_2\) of length n. All words of the form \(a_2 a_1^k a_2\) for \(0 \le k \le n-3\) are minimal absent words in x. Hence x has at least \(n-2=\Omega (n)\) minimal absent words.
Let \(\Sigma =\{a_1,a_2,a_3,\ldots ,a_\sigma \}\) with \(3 \le \sigma \le n\) and consider the word \(x=a_2 a_1^k a_3 a_1^k a_4 a_1^k\cdots a_i a_1^k a_{i+1} \cdots a_{\sigma } a_1^k a_1^m\), where \(k=\lfloor \frac{n}{\sigma -1}\rfloor -1\) and \(m=n-(\sigma -1)(k+1)\). Note that x is of length n. Further note that \(a_i a_1^j\) is a factor of x, for all \(2 \le i \le \sigma \) and \(0 \le j \le k\). Similarly, \(a_1^j a_l\) is a factor of x, for all \(3 \le l \le \sigma \) and \(0 \le j \le k\). Thus all proper factors of all the words in the set \(S=\{ a_i a_1^j a_l \, | \, 0 \le j \le k, \, 2 \le i \le \sigma , \, 3 \le l \le \sigma \}\) occur in x. However, the only words in S that occur in x are the ones of the form \(a_i a_1^k a_{i+1}\), for \(2 \le i < \sigma \). Hence x has at least \((\sigma -1)(\sigma -2)(k+1)-(\sigma -2)=(\sigma -1)(\sigma -2)\lfloor \frac{n}{\sigma -1}\rfloor -(\sigma -2)=\Omega (\sigma n)\) minimal absent words. \(\square \)
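The first construction in the proof is easy to check by brute force. The sketch below—our own illustration—enumerates minimal absent words directly from Definition 1: a word \(a y b\) (with a, b letters and y a possibly empty factor of x) is a minimal absent word iff it is absent while both \(a y\) and \(y b\) occur in x.

#include <iostream>
#include <set>
#include <string>

static std::set<std::string> minimalAbsentWords(const std::string& x,
                                                const std::string& sigma) {
    std::set<std::string> factors{""};  // all factors of x, plus the empty word
    for (std::string::size_type i = 0; i < x.size(); ++i)
        for (std::string::size_type l = 1; i + l <= x.size(); ++l)
            factors.insert(x.substr(i, l));
    std::set<std::string> maws;
    for (const std::string& y : factors)
        for (char a : sigma)
            for (char b : sigma) {
                std::string w = a + y + b;
                if (!factors.count(w) && factors.count(a + y) && factors.count(y + b))
                    maws.insert(w);
            }
    return maws;
}

int main() {
    const int n = 12;
    // x = a_2 a_1^{n-2} a_2 over a binary alphabet, as in the proof
    const std::string x = "b" + std::string(n - 2, 'a') + "b";
    // Prints n-1 = 11: the n-2 words b a^k b (0 <= k <= n-3), plus a^{n-1}.
    std::cout << minimalAbsentWords(x, "ab").size() << '\n';
}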
In the following lemma we show that, for sufficiently large alphabets, \(\mathcal {O}(\sigma n)\) is a tight asymptotic bound for the number of minimal absent words of fixed length.
Lemma 2 The upper bound \(\mathcal {O}(\sigma n)\) on the number of minimal absent words of fixed length of a word of length n over an alphabet of size \(\sigma \) is tight if \(\sqrt{n}+1 \le \sigma \le n\).
Let \(\Sigma =\{a_1, a_2, a_3,\ldots , a_\sigma \}\) be an alphabet of size \(\sigma \). We will show that we can construct words of any length n, with \( \sigma \le n \le \sigma (\sigma -1)\), that have \(\Omega (\sigma n)\) minimal absent words of length 3.
We first construct the strings (blocks) \(B_i= a_{i+1} a_i a_{i+2} a_i \cdots a_{i+j} a_i \cdots a_{\sigma } a_i\), for \(1\le i \le \sigma -1\). Note that \(|B_i|=2(\sigma -i)\) and that a letter \(a_i\) occurs in \(B_j\) if and only if \(j \le i\). We then consider the word \(x=B_1 B_2\cdots B_i\cdots B_{\sigma -1}\) which has length \(|x|=\sum _{i=1}^{\sigma -1} 2(\sigma -i) = \sigma (\sigma -1)\).
Now consider any prefix y of x with \(|y| > 2(\sigma -1)\). Then \(y=B_1 B_2 \cdots B_{j-1} \overline{B_{j}}\), where \(\overline{B_{j}}\) is a prefix of \(B_{j}\) for some \(j>1\). For any \(i < j\) the words of length 3 with \(a_i\) as the mid-letter that occur in y are the ones in the set \(U_i=\{a_{\ell } a_i a_{\ell } \mid 1 \le \ell \le i-2\} \cup \{a_k a_i a_{k+1}\mid i+1 \le k \le \sigma -1 \} \cup \{a_{i-2} a_i a_{i-1}\}\cup \{a_{\sigma } a_i a_{i+2}\}\), with the last singleton not included if \(i=j-1\) and \(\overline{B_{j}}=\varepsilon \). We thus have \(|U_i| \le \sigma \).
We notice that the strings of the form \(a_k a_i\) for all \(k \in P_i=\{1,2,\ldots ,\sigma \} \setminus \{i-1, i\}\) occur in y and similarly the strings of the form \(a_i a_{\ell }\) for all \(\ell \in S_i=\{1,2,\ldots ,\sigma \} \setminus \{i, i+1\}\) occur in y. Hence, all proper factors of all strings in \(V_i=\{a_k a_i a_{\ell } \mid k \in P_i, \ell \in S_i\}\) occur in y and \(|V_i|={(\sigma -2)}^2\). Then all the words in \(M_i = V_i \setminus U_i\) are minimal absent words of y of length 3 with mid-letter \(a_i\) and they are at least \({(\sigma -2)}^2-\sigma \). Now, since \(|B_i| < 2 \sigma \) for all i, we have that \(j > \frac{|y|}{2 \sigma }\). Hence \(\sum _{i=1}^{j-1} |M_i| \ge ({(\sigma -2)}^2-\sigma ) \times \frac{|y|}{2 \sigma }\). Since the sets \(M_i\) are pairwise disjoint it then follows that y has \(\Omega (\sigma |y|)\) minimal absent words of length 3.
Hence, given an alphabet of size \(\sigma \) we can construct words of any length n, such that \(2\sigma < n \le \sigma (\sigma -1)\), that have \(\Omega (\sigma n)\) minimal absent words of length 3.
Note that when \(\sigma \le n \le 2 \sigma \) the example of \(y=a_1 a_2 a_3 \cdots a_{\sigma }\) (possibly padded with \(a_{\sigma }\)'s) gives the desired result as at most \(\sigma \) out of the \({\sigma }^2\) possible combinations \(a_i a_j\) (of length 2) occur in y, while all proper factors of all such combinations occur in y.\(\square \)
Useful properties of avoided words
In this section, we provide some useful insights of a combinatorial nature which were not considered by Brendel et al. [9]. By the definition of \(\rho \)-avoided words it follows that a word w may be \(\rho \)-avoided even if it is absent from x. In other words, \(\textit{dev}(w) \le \rho \) may hold for either \(f(w) > 0\) (occurring) or \(f(w) = 0\) (absent).
Consider again the word \(x=\texttt {AGCGCGACGTCTGTGT}\), \(k=3\), and \(\rho =-0.4 \).
Word \(w_1= \texttt {CGT}\), at position 7 of x, is an occurring \(\rho \)-avoided word:
$$\begin{aligned} E(w_1) = 3\times 3/6 = 1.5,\text { } \textit{dev}(w_1) =(1-1.5)/\sqrt{1.5} = -0.408248. \end{aligned}$$
Word \(w_2 = \texttt {AGT}\) is an absent \(\rho \)-avoided word:
$$\begin{aligned} E(w_2) = 1\times 3/6 = 0.5,\text { } \textit{dev}(w_2) =(0- 0.5)/1 = -0.5. \end{aligned}$$
This means that a naïve computation should consider all possible \(\sigma ^k\) words. Then for each possible word w, the value of \(\textit{dev}(w)\) can be computed via pattern matching on the suffix tree of x. In particular, we can search for the occurrences of w, \(w_p\), \(w_s\), and \(w_i\) in x in time \(\mathcal {O}(k)\) [18]. In order to avoid this inefficient computation, we exploit the following crucial lemmas.
Lemma 3 Any absent \(\rho \)-avoided word w in x is a minimal absent word of x.
For w to be a \(\rho \)-avoided word it must hold that
$$\begin{aligned} \textit{dev}(w) = \frac{f(w)-E(w)}{\max \{ \sqrt{E(w)}, 1\}}\le \rho < 0. \end{aligned}$$
This implies that \(f(w)-E(w)<0\), which in turn implies that \(E(w)>0\) since \(f(w) = 0\). From \(E(w) = \frac{f(w_p) \times f(w_s)}{f(w_i)}>0\), we conclude that \(f(w_p)>0\) and \(f(w_s)>0\) must hold. Since \(f(w) = 0\), \(f(w_p)>0\), and \(f(w_s)>0\), w is a minimal absent word of x: all proper factors of w occur in x. \(\square \)
Lemma 4 Let w be a word occurring in x and \(\mathcal {T}(x)\) be the suffix tree of x. Then, if \(w_p\) is a path-label of an implicit node of \(\mathcal {T}(x)\), \(\textit{dev}(w) \ge 0\).
For any w that occurs in x it holds that \(f(w_i) \ge f(w_s)\), which implies that \(f(w_p) \ge \frac{f(w_p) \times f(w_s)}{f(w_i)} = E(w)\). Furthermore, by the definition of the suffix tree, if w occurs in x and \(w_p\) is a path-label of an implicit node then \(f(w_p) = f(w)\). It thus follows that \(f(w) - E(w) = f(w_p) - E(w) \ge 0\), and since \(\max \{1,\sqrt{E(w)}\} > 0\), the claim holds. \(\square \)
Lemma 5 The number of \(\rho \)-avoided words of length \(k>2\) in a word of length n over an alphabet of size \(\sigma \) is \(\mathcal {O}(\sigma n)\); in particular, this number is no more than \((\sigma + 1) n - k + 1\). The upper bound \(\mathcal {O}(\sigma n)\) is tight if \(\sqrt{n}+1 \le \sigma \le n\).
By Lemma 3, every \(\rho \)-avoided word is either occurring or a minimal absent word. It is known that the number of minimal absent words in a word of length n is smaller than or equal to \(\sigma n\) [20]. Clearly, the occurring \(\rho \)-avoided words in a word of length n are at most \(n - k + 1\). Therefore the number of \(\rho \)-avoided words of length k are no more than \((\sigma + 1) n - k + 1\). This implies that \(\mathcal {O}(\sigma n)\) is an asymptotic upper bound. In the case of an alphabet of size \(\sqrt{n}+1 \le \sigma \le n\), it follows from Lemma 2 that there exist words with \(\Omega (\sigma n)\) minimal absent words of a fixed length \(k>2\). Consider such a word x, the respective k, and some \(\rho \ge - \frac{1}{n}\). Let w be any minimal absent word of x. We have that \(f(w_p) \ge 1\), \(f(w_s) \ge 1\), and \(f(w_i) \le n\); and hence \(E(w) \ge \frac{1}{n}\). Since \(f(w)=0\), it follows that \(\textit{dev}(w) \le - \frac{1}{n} \le \rho \). Thus, every minimal absent word of x is \(\rho \)-avoided, and since there are \(\Omega (\sigma n)\) of them of length k, we conclude that \(\mathcal {O}(\sigma n)\) is a tight asymptotic bound in this case. \(\square \)
Avoided words algorithm
In this section, we present Algorithm AvoidedWords for computing all \(\rho \)-avoided words of length k in a given word x. The algorithm builds the suffix tree \(\mathcal {T}(x)\) for word x, and then prepares \(\mathcal {T}(x)\) to allow constant-time observed frequency queries. This is mainly achieved by counting the terminal nodes in the subtree rooted at node v for every node v of \(\mathcal {T}(x)\). Additionally during this pre-processing, the algorithm computes the word-depth of v for every node v of \(\mathcal {T}(x)\). By Lemma 3, \(\rho \)-avoided words are classified as either occurring or (minimal) absent, therefore Algorithm AvoidedWords calls Routines AbsentAvoidedWords and OccurringAvoidedWords to compute both classes of \(\rho \)-avoided words in x. The outline of Algorithm AvoidedWords is as follows.
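In outline—given here as a C++-style skeleton of ours, with hypothetical names, rather than the authors' verbatim pseudocode—the algorithm proceeds as follows; the two routines are only declared at this point and are described in the next two subsections.

#include <string>
#include <vector>

struct SuffixTree { /* suffix tree of x with D(v) and C(v); details omitted */ };

SuffixTree buildSuffixTree(const std::string& x);   // O(n) for a fixed-sized alphabet
void preprocessDepthsAndCounts(SuffixTree& T);      // computes D(v) and C(v) by DFS
std::vector<std::string> absentAvoidedWords(const SuffixTree& T, int k, double rho);
std::vector<std::string> occurringAvoidedWords(const SuffixTree& T, int k, double rho);

std::vector<std::string> avoidedWords(const std::string& x, int k, double rho) {
    SuffixTree T = buildSuffixTree(x);
    preprocessDepthsAndCounts(T);   // enables O(1) observed-frequency queries
    std::vector<std::string> out = absentAvoidedWords(T, k, rho);      // Lemma 3
    std::vector<std::string> occ = occurringAvoidedWords(T, k, rho);   // Lemma 4
    out.insert(out.end(), occ.begin(), occ.end());
    return out;
}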
Computing absent avoided words
In Lemma 3, we showed that each absent \(\rho \)-avoided word is a minimal absent word. Thus, Routine AbsentAvoidedWords starts by computing all minimal absent words in x; this can be done in time and space \(\mathcal {O}(n)\) for a fixed-sized alphabet or in time \(\mathcal {O}(\sigma n)\) for integer alphabets [4, 5]. Let \(\langle (i,j), \alpha \rangle \) be a tuple representing a minimal absent word in x, where for some minimal absent word w of length \(|w| > 2\), \(w = x[i \ldots j]\alpha \), \(\alpha \in \Sigma \); this representation is clearly unique.

Intuitively, the idea is to check the length of every minimal absent word. If a tuple \(\langle (i,j), \alpha \rangle \) represents a minimal absent word w of length \(k = j-i+2\), then the value of \(\textit{dev}(w)\) is computed to determine whether w is an absent \(\rho \)-avoided word. Note that, if \(w = x[i \ldots j]\alpha \) is a minimal absent word, then \(w_p= x[i \ldots j]\), \(w_i= x[i+1 \ldots j]\), and \(w_s = x[i+1 \ldots j]\alpha \) occur in x by Definition 1. Thus, there are three (implicit or explicit) nodes in \(\mathcal {T}(x)\) path-labelled \(w_p\), \(w_i\), and \(w_s\), respectively.
The observed frequencies of \(w_p\), \(w_i\), and \(w_s\) are already computed during the pre-processing of \(\mathcal {T}(x)\). For an explicit node v of \(\mathcal {T}(x)\), path-labelled \(w'= x[i' \ldots j']\), the value \(\mathcal {C}(v)\), which is the number of terminal nodes in the subtree rooted at v, is equal to the number of occurrences (observed frequency) of \(w'\) in x. For an implicit node along the edge (u, v) path-labelled \(w''\), the number of occurrences of \(w''\) is equal to \(\mathcal {C}(v)\) (and not \(\mathcal {C}(u)\)). The implementation of this procedure is given in Routine AbsentAvoidedWords.
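A compilable sketch of this routine follows; it is our paraphrase of the description above, not the published pseudocode, and the frequency accessors are hypothetical, assumed to answer in O(1) after the preprocessing of \(\mathcal {T}(x)\).

#include <algorithm>
#include <cmath>
#include <vector>

struct MinimalAbsentWord { int i, j; char alpha; };  // w = x[i..j]alpha

// Assumed O(1) accessors: the observed frequency of the factor x[i..j],
// i.e. C(v) of the node path-labelled x[i..j], and of x[i..j]alpha.
long factorFreq(int i, int j);
long factorFreqExtended(int i, int j, char alpha);
void report(const MinimalAbsentWord& m);  // output an absent rho-avoided word

void absentAvoidedWords(const std::vector<MinimalAbsentWord>& maws,
                        int k, double rho) {
    for (const MinimalAbsentWord& m : maws) {
        if (m.j - m.i + 2 != k) continue;  // keep only the length-k words
        double fp = factorFreq(m.i, m.j);                        // f(w_p), w_p = x[i..j]
        double fi = factorFreq(m.i + 1, m.j);                    // f(w_i), w_i = x[i+1..j]
        double fs = factorFreqExtended(m.i + 1, m.j, m.alpha);   // f(w_s), w_s = x[i+1..j]alpha
        double E  = fp * fs / fi;  // fi > 0, as w_i occurs in x by Definition 1
        double d  = (0.0 - E) / std::max(std::sqrt(E), 1.0);     // f(w) = 0
        if (d <= rho) report(m);
    }
}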
Computing occurring avoided words
Lemma 4 suggests that for each occurring \(\rho \)-avoided word w, \(w_p\) is a path-label of an explicit node v of \(\mathcal {T}(x)\). Thus, for each internal node v such that \(\mathcal {D}(v)= k-1\) and \(\mathcal {L}(v)= w_p\), Routine OccurringAvoidedWords computes \(\textit{dev}(w)\), where \(w =w_p \alpha \), \(\alpha \in \Sigma \), is a path-label of a child (explicit or implicit) node of v. Note that if \(w_p\) is a path-label of an explicit node v then \(w_i\) is a path-label of an explicit node u of \(\mathcal {T}(x)\); node u is well-defined and it is the node pointed at by the suffix-link of v. The implementation of this procedure is given in Routine OccurringAvoidedWords.
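A sketch of this routine in the same spirit (again our paraphrase; the node interface is hypothetical). Recall from the previous subsection that the observed frequency of a word ending within an edge equals \(\mathcal {C}\) of the explicit node at the lower end of that edge.

#include <algorithm>
#include <cmath>
#include <vector>

struct Node {
    int  D;                                  // word-depth
    long C;                                  // observed frequency of the path-label
    Node* suffixLink;                        // defined for non-root internal nodes
    bool internalNode() const;               // assumed
    std::vector<char> edgeLetters() const;   // first letters of outgoing edges
    const Node* child(char alpha) const;     // explicit node below edge 'alpha'
};

void report(const Node* v, char alpha);      // output L(v)alpha as rho-avoided

// Called for every explicit node v during a traversal of T(x).
void occurringAvoidedWords(const Node* v, int k, double rho) {
    if (!v->internalNode() || v->D != k - 1) return;
    const Node* u = v->suffixLink;           // path-labelled w_i for every such w
    for (char alpha : v->edgeLetters()) {
        double fw = v->child(alpha)->C;      // f(w), w = L(v)alpha
        double fp = v->C;                    // f(w_p) = f(L(v))
        double fi = u->C;                    // f(w_i)
        double fs = u->child(alpha)->C;      // f(w_s) = f(w_i alpha)
        double E  = fp * fs / fi;
        double d  = (fw - E) / std::max(std::sqrt(E), 1.0);
        if (d <= rho) report(v, alpha);
    }
}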
Analysis of the algorithm
Lemma 6 Given a word x, an integer \(k>2\), and a real number \(\rho < 0\), Algorithm AvoidedWords computes all \(\rho \)-avoided words of length k in x.
By definition, a \(\rho \)-avoided word w is either an absent \(\rho \)-avoided word or an occurring one. Hence, the proof of correctness relies on Lemmas 3 and 4. First, Lemma 3 indicates that an absent \(\rho \)-avoided word in x is necessarily a minimal absent word. Routine AbsentAvoidedWords considers each minimal absent word w and verifies if w is a \(\rho \)-avoided word of length k.
Second, Lemma 4 indicates that for each occurring \(\rho \)-avoided word w, \(w_p\) is a path-label of an explicit node v of \(\mathcal {T}(x)\). Routine OccurringAvoidedWords considers every child of each such node of word-depth k, and verifies if its path-label is a \(\rho \)-avoided word. \(\square \)
Lemma 7 Given a word x of length n over a fixed-sized alphabet, an integer \(k>2\), and a real number \(\rho < 0\), Algorithm AvoidedWords requires time and space \(\mathcal {O}(n)\); for integer alphabets, it requires time \(\mathcal {O}(\sigma n)\).
Constructing the suffix tree \(\mathcal {T}(x)\) of the input word x takes time and space \(\mathcal {O}(n)\) for a word over a fixed-sized alphabet [18]. Once the suffix tree is constructed, computing arrays \(\mathcal {D}\) and \(\mathcal {C}\) by traversing \(\mathcal {T}(x)\) requires time and space \(\mathcal {O}(n)\). Note that the path-labels of the nodes of \(\mathcal {T}(x)\) can be implemented in time and space \(\mathcal {O}(n)\) as follows: traverse the suffix tree to compute for each node v the smallest index i of the terminal nodes of the subtree rooted at v. Then \(\mathcal {L}(v) = x[i \ldots i+\mathcal {D}(v)-1]\).
Next, Routine AbsentAvoidedWords requires time \(\mathcal {O}(n)\). It starts by computing all minimal absent words of x, which can be achieved in time and space \(\mathcal {O}(n)\) over a fixed-sized alphabet [4, 5]. The rest of the procedure deals with checking each of the \(\mathcal {O}(n)\) minimal absent words of length k. Checking each minimal absent word w to determine whether it is a \(\rho \)-avoided word or not requires time \(\mathcal {O}(1)\). In particular, an \(\mathcal {O}(n)\)-time pre-processing of \(\mathcal {T}(x)\) allows the retrieval of the (implicit or explicit) node in \(\mathcal {T}(x)\) corresponding to the longest proper prefix of w in time \(\mathcal {O}(1)\) [21]. Finally, Routine OccurringAvoidedWords requires time \(\mathcal {O}(n)\). It traverses the suffix tree \(\mathcal {T}(x)\) to consider all explicit nodes of word-depth \(k-1\). Then for each such node, the procedure checks every (explicit or implicit) child of word-depth k. The total number of these children is at most \(n-k+1\). For every child node, the procedure checks whether its path-label is a \(\rho \)-avoided word in time \(\mathcal {O}(1)\) via the use of suffix-links.
For integer alphabets, the suffix tree can be constructed in time \(\mathcal {O}(n)\) [22] and all minimal absent words can be computed in time \(\mathcal {O}(\sigma n)\) [4, 5]. The efficiency of Algorithm AvoidedWords is then limited by the total number of words to be considered, which, by Lemma 5, is \(\mathcal {O}(\sigma n)\). Note that for integer alphabets, a batch of q Child \((v,\alpha )\) queries can be answered off-line in time \(\mathcal{O}(n+q)\) with the aid of radix sort (in Routine AbsentAvoidedWords) or on-line in time \(\mathcal{O}(q \log \sigma )\) (in Routine OccurringAvoidedWords).\(\square \)
Lemmas 5, 6 and 7 imply the first result of this article.
Theorem 1 Algorithm AvoidedWords solves Problem AvoidedWordsComputation in time and space \(\mathcal {O}(n)\). For integer alphabets, the algorithm solves the problem in time \(\mathcal {O}(\sigma n)\); this is time-optimal if \(\sqrt{n}+1 \le \sigma \le n\).
Optimal computation of all ρ-avoided words
Although the biological motivation is yet to be shown for this, we present here how we can modify Algorithm AvoidedWords so that it computes all \(\rho \)-avoided words (of all lengths) in a given word x of length n over an integer alphabet of size \(\sigma \) in time \(\mathcal {O}(\sigma n)\). We further show that this algorithm (AllAvoidedWords) is in fact time-optimal.
Based on Lemma 1 and similarly to the proof of Lemma 5 we obtain the following result.
Lemma 8 The number of \(\rho \)-avoided words in a word of length n over an alphabet of size \(2 \le \sigma \le n\) is \(\mathcal {O}(\sigma n)\) and this bound is tight.
It is clear that if we just remove the condition on the length of each minimal absent word in Line 2 of AbsentAvoidedWords we then compute all absent \(\rho \)-avoided words in time \(\mathcal {O}(\sigma n)\). In order to compute all occurring \(\rho \)-avoided words in x it suffices by Lemma 4 to investigate the children of explicit nodes. We can thus traverse the suffix tree \(\mathcal {T}(x)\) and for each explicit internal node, check for all of its children (explicit or implicit) whether their path-label is a \(\rho \)-avoided word. We can do this in \(\mathcal {O}(1)\) time as described. The total number of these children is at most \(2n-1\), as this is the bound on the number of edges of \(\mathcal {T}(x)\) [18]. This modified algorithm is clearly time-optimal for fixed-sized alphabets as it then runs in time \(\mathcal {O}(n)\). The time optimality for integer alphabets follows directly from Lemma 8. Hence we obtain the second result of this article.
Theorem 2 Algorithm AllAvoidedWords solves Problem AllAvoidedWordsComputation in time \(\mathcal {O}(\sigma n)\). This is time-optimal if \(2 \le \sigma \le n\).
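A minimal sketch of the modified traversal for the occurring words (our own illustration, with the same kind of hypothetical node interface as above): the word-depth test is dropped and every explicit internal node of word-depth at least 2 is examined, so that every tested word has length greater than 2.

#include <algorithm>
#include <cmath>
#include <vector>

struct Node {
    int  D; long C;
    Node* suffixLink;
    std::vector<Node*> children;
    bool internalNode() const;
    std::vector<char> edgeLetters() const;
    const Node* child(char alpha) const;
};

void report(const Node* v, char alpha);

// Traverse T(x); at most 2n - 1 children are tested overall.
void allOccurringAvoidedWords(const Node* v, double rho) {
    if (v->internalNode() && v->D >= 2) {
        const Node* u = v->suffixLink;
        for (char alpha : v->edgeLetters()) {
            double E = static_cast<double>(v->C) * u->child(alpha)->C / u->C;
            double d = (v->child(alpha)->C - E) / std::max(std::sqrt(E), 1.0);
            if (d <= rho) report(v, alpha);  // L(v)alpha is rho-avoided
        }
    }
    for (const Node* c : v->children) allOccurringAvoidedWords(c, rho);
}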
In [23], it is shown that all \(|\mathcal{A}|\) minimal absent words of a word x of length n over an integer alphabet can be computed in time \(\mathcal{O}(n+|\mathcal{A}|)\) and space \(\mathcal {O}(n)\). Computing minimal absent words and checking for each of them whether it is an avoided word is the bottleneck for Algorithms AvoidedWords and AllAvoidedWords. The result of [23] implies that for a word x of length n over an integer alphabet we can make both algorithms require time \(\mathcal{O}(n+|\mathcal{A}|)\) and space \(\mathcal {O}(n)\). We can do that by checking, for each minimal absent word output by the algorithm, whether it is avoided, instead of storing a representation of all of them and then performing the check.
As the complexity of algorithms AvoidedWords and AllAvoidedWords does not depend on the value of \(\rho \), one can use a negative \(\rho \) close to 0, sort the output \(\rho \)-avoided words with respect to \(\textit{dev}(w)\), and consider the extreme ones.
Lemma 9 The expected length of the longest \(\rho \)-avoided word in a word x of length n over an alphabet \(\Sigma \) of size \(\sigma >1\) is \(\mathcal {O}(\log _{\sigma } n)\) when the letters are independent and identically distributed random variables uniformly distributed over \(\Sigma \).
By Lemma 4 the length of the longest occurring word is bounded above by the word-depth of the deepest internal explicit node in \(\mathcal {T}(x)\) incremented by 1. We note that the greatest word-depth of an internal node corresponds to the longest repeated factor in word x. Moreover, for a word w to be a minimal absent word, \(w_i\) must appear at least twice in x (in the occurrences of \(w_p\) and \(w_s\)). Hence the length of the longest \(\rho \)-avoided word is bounded by the length of the longest repeated factor in x incremented by 2. The expected length of the longest repeated factor in a word is known to be \(\mathcal {O}(\log _{\sigma } n)\) [24] and hence the lemma follows. \(\square \)
Algorithm AvoidedWords was implemented as a program to compute the \(\rho \)-avoided words of length k in one or more input sequences; there is an option to run Algorithm AllAvoidedWords instead. The program was implemented in the C++ programming language and developed under GNU/Linux operating system. Our program makes use of the implementation of the compressed suffix tree available in the Succinct Data Structure Library [25]. The input parameters are a (Multi)FASTA file with the input sequence(s), an integer \(k > 2\), and a real number \(\rho < 0\). The output is a file with the set of \(\rho \)-avoided words of length k per input sequence. The implementation is distributed under the GNU General Public License, and it is available at http://github.com/solonas13/aw. The experiments were conducted on a Desktop PC using one core of Intel Core i5-4690 CPU at 3.50 GHz under GNU/Linux. The program was compiled with g++ version 4.8.4 at optimisation level 3 (−O3). We also implemented a brute-force approach for the computation of \(\rho \)-avoided words. We mainly used it to confirm the correctness of our implementation. Here we do not plot the results of the brute-force approach as it is easily understood that it is orders of magnitude slower than our approach.
Fig. 2 Experiment I. Elapsed time of Algorithm AvoidedWords using synthetic DNA (\(\sigma =4\)) and protein (\(\sigma =20\)) data of length 1M for variable k and variable \(\rho \)
Experiment I
To evaluate the time performance of our implementation, synthetic DNA (\(\sigma =4\)) and protein (\(\sigma =20\)) data were used. The input sequences were generated using a randomised script. In the first experiment, our task was to establish that the performance of the program does not essentially depend on k and \(\rho \); i.e., the elapsed time of the program remains unchanged up to some constant with increasing values of k and decreasing values of \(\rho \). As input datasets, for this experiment, we used a DNA and a protein sequence both of length 1M (1 million letters). For each sequence we used different values of k and \(\rho \). The results for elapsed time are plotted in Fig. 2. It becomes evident from the results that the time performance of the program remains unchanged up to some constant. The longer time required for the protein sequences for some values of k is explained by the increased number of branching nodes at this depth in the corresponding suffix tree due to the size of the alphabet (\(\sigma =20\)). To confirm this we counted the number of nodes considered by the algorithm to compute the \(\rho \)-avoided words for \(k=4\) and \(\rho =-10\) for both sequences. The number of considered nodes for the DNA sequence was 260 whereas for the protein sequence it was 1,585,510.
Fig. 3 Experiment II. Elapsed time and peak memory usage of Algorithm AvoidedWords using synthetic DNA (\(\sigma =4\)) and protein (\(\sigma =20\)) data of length 1–128M
Experiment II
In the second experiment, our task was to establish that the elapsed time and memory usage of the program grow linearly with n, the length of the input sequence. As input datasets, for this experiment, we used synthetic DNA and protein sequences ranging from 1 to 128M. For each sequence we used constant values for k and \(\rho \): \(k=8\) and \(\rho =-10\). The results, for elapsed time and peak memory usage, are plotted in Fig. 3. It becomes evident from the results that the elapsed time and memory usage of the program grow linearly with n. The longer time required for the protein sequences compared to the DNA sequences for increasing n is explained by the increased number of branching nodes at this depth (\(k=8\)) in the corresponding suffix tree due to the size of the alphabet (\(\sigma =20\)). To confirm this we counted the number of nodes considered by the algorithm to compute the \(\rho \)-avoided words for \(n=64\)M for both the DNA and the protein sequence. The number of nodes for the DNA sequence was 69,392 whereas for the protein sequence it was 43,423,082.
Fig. 4 Experiment III. Elapsed time and peak memory usage of Algorithm AvoidedWords using all chromosomes of the human genome
Experiment III
In the next experiment, our task was to evaluate the time and memory performance of our implementation with real data. As input datasets, for this experiment, we used all chromosomes of the human genome. Their lengths range from around 46M (chromosome 21) to around 249M (chromosome 1). For each sequence we used \(k=8\) and \(\rho =-10\). The results, for elapsed time and peak memory usage, are plotted in Fig. 4. The results with real data confirm that the elapsed time and memory usage of the program grow linearly with n.
Experiment IV
In an experiment with a prokaryote, we computed the set of avoided words for \(k=6\) (hexamers) and \(\rho =-10\) in the complete genome of Escherichia coli and sorted the output in increasing order of their deviation. The most avoided words were extremely enriched in self-complementary (palindromic) hexamers. In particular, within the output of 28 avoided words, 23 were self-complementary; and the 17 most avoided ones were all self-complementary. For comparison, we computed the set of avoided words for \(k=6\) and \(\rho =-10\) from a eukaryotic sequence: a segment of the human chromosome 21 (its leftmost segment devoid of N's) equal to the length of the E. coli genome. In the output of 10 avoided words, no self-complementary hexamer was found. Our results confirm that the restriction endonucleases which target self-complementary sites are not found in eukaryotic sequences [8].
Table 1 The number of avoided words, for \(k=10\) and \(\rho =-2\), for each concatenate of surrogates (Row 1); the number of avoided words of the corresponding CNE dataset (Row 2); and their ratio (Row 3)
Table 2 The number of avoided words, for \(k>2\) and \(\rho =-2\), for each concatenate of surrogates (Row 1); the number of avoided words of the corresponding CNE dataset (Row 2); and their ratio (Row 3)
Experiment V
Then, we proceeded to the examination of several collections of CNEs obtained through multiple sequence alignment between the human and other genomes. A detailed description of how those CNEs were identified can be found in [15]. For each CNE of these datasets, a sequence stretch (surrogate sequence) of non-coding DNA of equal length and equal GC content was taken at random from the repeat-masked human genome. The CNEs of each collection were concatenated into a single long sequence and the same procedure was followed for the corresponding surrogates. Seven CNE concatenates and the corresponding surrogate datasets were formed and used in this experiment. Using the proposed algorithm, we determined the avoided words for \(k=10\) (decamers) and \(\rho =-2\) for these fourteen datasets; the results are presented in Table 1. Table 2 shows the same for \(k>2\) (all avoided words) and \(\rho =-2\).
The first five CNE collections have been composed through multiple sequence alignment of the same set of genomes and they differ only in the thresholds of sequence similarity applied between the considered genomes: from 75–80 (the least conserved CNEs, which are thus expected to serve less demanding functional roles) to 95–100, which represent the extremely conserved non-coding elements (UCNEs or CNEs 95–100) [15]. The remaining two collections have been composed under different constraints and have been derived after alignment of genomes belonging to the Mammalian and Amniotic groups. In Tables 1 and 2, the last line shows the ratios formed by the numbers of avoided words of each concatenate of surrogates divided by the numbers of avoided words of the corresponding CNE dataset.
Two immediate results stem from inspection of Tables 1 and 2:
1. In all cases, the number of avoided words from the non-functional (surrogate) concatenate of sequences far exceeds that derived from the corresponding CNE dataset.
2. In the case of datasets with increasing degree of similarity between aligned genomes (from 75–80 to 95–100), the ratios of the numbers of avoided words show a clear increasing trend.
Both these findings can be understood on the basis of the difference in functionality, and thus tolerance to mutations, between CNE and surrogate datasets. One particularly frequent source of mutations is the slippage error during DNA replication; see e.g. reference [26]. Within a genomic sequence, this phenomenon causes the generation and increase in length, during evolutionary time, of polypyrimidine and polypurine nucleotide tracts. The expansion of those tracts is impeded to a considerable degree in the case of sequences which serve a functional role (as CNEs do) due to several constraints. On the other hand, in non-functional regions (as our surrogates mostly are) this procedure ceases to be tolerated only when it reaches the formation of a polypyrimidine/polypurine tract with length affecting the proper folding or other structural features of the chromatin. Then, selection eliminates it, while its longer proper factors are tolerated in sufficient numbers within the sequence, thus resulting in an avoided word. In support of this explanation is the observation that all lists of avoided words found by our algorithm in concatenates of surrogates exhibit a considerable enrichment in oligopurines and oligopyrimidines. Taking at random some examples, for \(k=10\), we notice: AAAAAAAAAT, AAAAAACCAC, ACAAAAAAAA, CTCCTCTTTT, etc.
Our second observation, i.e. the positive correlation between (1) the paucity of avoided decamers in CNE collections and (2) the similarity thresholds used for their identification, is in accordance with the above argument. CNEs extracted under a stricter requirement of sequence similarity between evolutionarily distant species are CNEs whose functionality is less tolerant to alterations due to random mutations in general. Hence, they are also less tolerant of the propagation of parasitic polypyrimidine/polypurine tracts within their sequences.
We presented an \(\mathcal {O}(n)\)-time and \(\mathcal {O}(n)\)-space algorithm to compute all \(\rho \)-avoided words of length k in a sequence of length n over a fixed-sized alphabet. For integer alphabets, our algorithm runs in time \(\mathcal {O}(\sigma n)\) and is optimal for a sufficiently large alphabet of size \(\sigma \). We also presented a time-optimal \(\mathcal {O}(\sigma n)\)-time algorithm to compute all \(\rho \)-avoided words (of any length) in a sequence of length n over an integer alphabet. Moreover, we provided a tight asymptotic upper bound for the number of \(\rho \)-avoided words over an integer alphabet and the expected length of the longest one.
In the process, we showed that the known asymptotic upper bound on the number of minimal absent words of a sequence is tight for integer alphabets. We also showed that the same asymptotic bound is tight for the number of minimal absent words of a fixed length if the alphabet is sufficiently large.
Finally, we made available an implementation of our algorithm. Experimental results, using both real and synthetic data, show its efficiency and applicability in biological sequence analysis.
Searls DB. The linguistics of DNA. Am Sci. 1992;80(6):579–91.
Mantegna RN, Buldyrev SV, Goldberger AL, Havlin S, Peng C-K, Simons M, Stanley HE. Linguistic features of noncoding DNA sequences. Phys Rev Lett. 1994;73(23):3169. doi:10.1103/PhysRevLett.73.3169.
Acquisti C, Poste G, Curtiss D, Kumar S. Nullomers: really a matter of natural selection? PLoS ONE. 2007;2(10):1022. doi:10.1371/journal.pone.0001022.
Barton C, Heliou A, Mouchard L, Pissis SP. Linear-time computation of minimal absent words using suffix array. BMC Bioinform. 2014;15(1):1. doi:10.1186/s12859-014-0388-9.
Barton C, Heliou A, Mouchard L, Pissis SP. Parallelising the computation of minimal absent words. In: Wyrzykowski R, Deelman E, Dongarra J, Karczewski K, Kitowski J, Wiatr K, editors. Parallel processing and applied mathematics—11th international conference, PPAM 2015, Krakow, Poland, September 6–9, 2015. Revised selected papers, Part II. lecture notes in computer science. vol. 9574. Berlin: Springer; 2015. p. 243–53. doi:10.1007/978-3-319-32152-3_23.
Crochemore M, Fici G, Mercas R, Pissis SP. Linear-time sequence comparison using minimal absent words and applications. In: Kranakis E, Navarro G, Chávez E, editors. LATIN 2016: theoretical informatics: 12th Latin American symposium, Ensenada, April 11–15, 2016, Proceedings. Lecture notes in computer science. Berlin: Springer; 2016. p. 334–46. doi:10.1007/978-3-662-49529-2_25.
Belazzougui D, Cunial F. Space-efficient detection of unusual words. In: International symposium on string processing and information retrieval. Berlin: Springer; 2015. p. 222–33. doi:10.1007/978-3-319-23826-5_22.
Rusinov I, Ershova A, Karyagina A, Spirin S, Alexeevski A. Lifespan of restriction-modification systems critically affects avoidance of their recognition sites in host genomes. BMC Genom. 2015;16(1):1. doi:10.1186/s12864-015-2288-4.
Brendel V, Beckmann JS, Trifonov EN. Linguistics of nucleotide sequences: morphology and comparison of vocabularies. J Biomol Struct Dyn. 1986;4(1):11–21. doi:10.1080/07391102.1986.10507643.
Apostolico A, Bock ME, Lonardi S, Xu X. Efficient detection of unusual words. J Comput Biol. 2000;7(1–2):71–94. doi:10.1089/10665270050081397.
Apostolico A, Bock ME, Lonardi S. Monotony of surprise and large-scale quest for unusual words. J Comput Biol. 2003;10(3–4):283–311. doi:10.1089/10665270360688020.
Apostolico A, Gong F-C, Lonardi S. Verbumculus and the discovery of unusual words. J Comput Sci Technol. 2004;19(1):22–41. doi:10.1007/BF02944783.
Harmston N, Barešić A, Lenhard B. The mystery of extreme non-coding conservation. Philos Trans R Soc B. 2013;368(1632):20130021. doi:10.1098/rstb.2013.0021.
Polychronopoulos D, Sellis D, Almirantis Y. Conserved noncoding elements follow power-law-like distributions in several genomes as a result of genome dynamics. PloS ONE. 2014;9(5):95437. doi:10.1371/journal.pone.0095437.
Polychronopoulos D, Weitschek E, Dimitrieva S, Bucher P, Felici G, Almirantis Y. Classification of selectively constrained DNA elements using feature vectors and rule-based classifiers. Genomics. 2014;104(2):79–86. doi:10.1016/j.ygeno.2014.07.004.
Polychronopoulos D, Krithara A, Nikolaou C, Paliouras G, Almirantis Y, Giannakopoulos G. Analysis and classification of constrained DNA elements with \(n\)-gram graphs and genomic signatures. In: Dediu AH, Martín-Vide C, Truthe B, editors. Berlin: Springer; 2014. p. 220–34. doi:10.1007/978-3-319-07953-0_18
Almirantis Y, Charalampopoulos P, Gao J, Iliopoulos CS, Mohamed M, Pissis SP, Polychronopoulos D. Optimal computation of avoided words. In: Algorithms in bioinformatics: 16th international workshop (WABI 2016). Berlin: Springer International Publishing. p. 1–13. doi:10.1007/978-3-319-43681-4_1.
Crochemore M, Hancart C, Lecroq T. Algorithms on strings. Cambridge: Cambridge University Press; 2007.
Charalampopoulos P, Crochemore M, Fici G, Mercas R, Pissis SP. Alignment-free sequence comparison using absent words (Under Review)
Mignosi F, Restivo A, Sciortino M. Words and forbidden factors. Theor Comput Sci. 2002;273(1):99–117. doi:10.1016/S0304-3975(00)00436-9.
Gawrychowski P, Lewenstein M, Nicholson PK. Weighted ancestors in suffix trees. Eur Symp Algorithms. 2014. doi:10.1007/978-3-662-44777-2.
Farach M. Optimal suffix tree construction with large alphabets. In: Proceedings, 38th annual symposium on foundations of computer science. New York City: IEEE; 1997. p. 137–43. doi:10.1109/SFCS.1997.646102.
Fujishige Y, Tsujimaru Y, Inenaga S, Bannai H, Takeda M. Computing DAWGs and minimal absent words in linear time for integer alphabets. In: Faliszewski P, Muscholl A, Niedermeier R, editors. 41st International symposium on mathematical foundations of computer science (MFCS 2016). Leibniz international proceedings in informatics (LIPIcs), vol. 58: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik; 2016. p. 1–14. doi:10.4230/LIPIcs.MFCS.2016.38.
Manber U, Myers G. Suffix arrays: a new method for on-line string searches. Siam J Comput. 1993;22(5):935–48. doi:10.1137/0222058.
Gog S, Beller T, Moffat A, Petri M. From theory to practice: plug and play with succinct data structures. In: International Symposium on experimental algorithms. Berlin: Springer; 2014. p. 326–37. doi:10.1007/978-3-319-07959-2_28.
Hile SE, Eckert KA. Positive correlation between DNA polymerase \(\alpha \)-primase pausing and mutagenesis within polypyrimidine/polypurine microsatellite sequences. J Mol Biol. 2004;335(3):745–59. doi:10.1016/j.jmb.2003.10.075.
YA and SPP conceived the study. PC, JG, MM, CSI, and SPP devised the algorithms. PC showed the tight asymptotic bounds. JG and SPP implemented the algorithms. YA, JG, SPP, and DP conceived and conducted the experiments. All authors contributed equally in writing up the manuscript. All authors read and approved the final manuscript.
Open access for this article was funded by King's College London.
This research was partially supported by the Leverhulme Trust. PC is supported by the Graduate Teaching Scholarship scheme of the Department of Informatics at King's College London. DP is supported by the UK Medical Research Council (MRC) postdoctoral scheme.
National Center for Scientific Research Demokritos, Neapoleos, 153 10, Athens, Greece
Yannis Almirantis
Department of Informatics, King's College London, The Strand, London, WC2R 2LS, UK
Panagiotis Charalampopoulos, Jia Gao, Costas S. Iliopoulos, Manal Mohamed & Solon P. Pissis
Computational Regulatory Genomics, MRC Clinical Sciences Centre (CSC), Du Cane Road, London, W12 0NN, UK
Dimitris Polychronopoulos
Panagiotis Charalampopoulos
Jia Gao
Costas S. Iliopoulos
Manal Mohamed
Solon P. Pissis
Correspondence to Solon P. Pissis.
Almirantis, Y., Charalampopoulos, P., Gao, J. et al. On avoided words, absent words, and their application to biological sequence analysis. Algorithms Mol Biol 12, 5 (2017). https://doi.org/10.1186/s13015-017-0094-z
Avoided words
Underrepresented words
Absent words
Suffix tree
Conserved non-coding elements
Ultraconserved elements | CommonCrawl |
Insecticide use pattern and phenotypic susceptibility of Anopheles gambiae sensu lato to commonly used insecticides in Lower Moshi, northern Tanzania
Elinas J. Nnko1,
Charles Kihamia1,
Filemoni Tenu2,
Zul Premji1 &
Eliningaya J. Kweka ORCID: orcid.org/0000-0001-8367-40083,4
BMC Research Notes volume 10, Article number: 443 (2017)
Evidence of insecticide resistance has been documented in different malaria endemic areas. Surveillance studies to allow prompt investigation of associated factors to enable effective insecticide resistance management are needed. The objective of this study was to assess insecticide use pattern and phenotypic susceptibility level of Anopheles gambiae sensu lato to insecticides commonly used in malaria control in Moshi, northern Tanzania.
A cross-sectional survey was conducted to assess insecticide usage patterns. Data were collected through closed- and open-ended questionnaires. The WHO diagnostic standard kit with doses of 0.1% bendiocarb, 0.05% deltamethrin, 0.75% permethrin and 4% DDT was used to determine knockdown time, mortality and resistance ratios of wild A. gambiae sensu lato. The questionnaire survey data were analysed using descriptive statistics and one-way analysis of variance, while susceptibility data were analysed by logistic regression with probit analysis using the SPSS program. The WHO criteria were used to evaluate the resistance status of the tested mosquito populations.
A large proportion of respondents (80.8%) reported having used insecticides, mainly for farming purposes (77.3%). Moreover, 93.3% of households reported usage of long lasting insecticidal nets. The most frequently used class of insecticides was organophosphates, with chlorpyrifos as the main active ingredient; Dursban was the brand most consistently reported. Very few respondents (24.1%) applied integrated vector control approaches, and this was significantly associated with the level of knowledge of insecticide use (P < 0.001). Overall knockdown time for A. gambiae s.l. was highest for DDT, followed by the pyrethroids (permethrin and deltamethrin), and lowest for bendiocarb. Anopheles gambiae s.l. showed susceptibility to bendiocarb, increased tolerance to permethrin and resistance to deltamethrin. The most effective insecticide against the tested populations was bendiocarb, with a resistance ratio ranging between 0.93 and 2.81.
Education on integrated vector management should be instituted and a policy change on insecticide of choice for malaria vector control from pyrethroids to carbamates (bendiocarb) is recommended. Furthermore, studies to detect cross resistance between pyrethroids and organophosphates should be carried out.
In Sub-Saharan Africa, species from the Anopheles gambiae complex and the Anopheles funestus group are the important malaria vectors [1, 2]. Of the eight sibling species of the A. gambiae complex, A. gambiae s.s. and A. arabiensis are the main malaria vectors across sub-Saharan Africa, including Tanzania [1,2,3,4,5,6]. Malaria is still a major cause of mortality and morbidity in sub-Saharan Africa, including Tanzania [7,8,9]. The government of Tanzania has extensively provided, and is scaling up, free distribution of long lasting insecticidal nets [10, 11], free anti-malarials [12] and rapid diagnostic kits [13] in all health facilities across the country as a strategy for strengthening malaria control. Vector control constitutes a major component of the global strategy for malaria control [14, 15]. Use of long lasting insecticidal nets (LLINs), indoor residual spraying (IRS) and larviciding are the pillars of malaria vector control programmes [16,17,18,19].
Development of insecticide resistance in targeted vector populations poses a major threat to malaria vector control, as it weakens the efficiency of insecticide-based intervention tools [20,21,22,23,24]. Resistance has been documented in all classes of insecticides used in public health, veterinary and agricultural pest control, including pyrethroids, carbamates, organophosphates and organochlorines [20, 21, 25,26,27]. Commonly used pesticides in agriculture and public health are organophosphates (such as fenitrothion, malathion and pirimiphos-methyl), organochlorines such as dichlorodiphenyltrichloroethane (DDT), carbamates (such as bendiocarb and propoxur) and pyrethroids (alphacypermethrin, bifenthrin, cyfluthrin, deltamethrin, lambdacyhalothrin, etofenprox). Currently, pyrethroids are the only insecticide class recommended for application in LLINs [28]. Residues of some of these pesticides have been found in soil and water from different areas practicing intensive agriculture and pesticide usage for higher yield productivity in vegetable gardens, cotton farms, horticulture and rice fields [29, 30]. Cross-resistance has been reported between DDT and pyrethroids, weakening control efforts [30].
Four different insecticide resistance mechanisms have been reported in malaria vectors: target site resistance, metabolic resistance, behavioural resistance and cuticular resistance [31,32,33,34]. Target site and metabolic resistance are the most common mechanisms [31, 35]. Phenotypic resistance is primarily identified by determining the knockdown time in minutes (KDT) and the mortality rate 24 h after exposure to insecticides [35].
Insecticide resistance in malaria vector mosquitoes has already been documented in Tanzania; however, resistance has not reached levels that could lead to operational failure [6, 36]. Although records from Tanzania show reduced susceptibility of malaria vectors to different insecticides in most areas, other studies have shown marginal susceptibility in a number of sentinel sites [6, 34]. Low susceptibility to 0.75% permethrin has been reported in Arumeru, Lower Moshi and Dar-es-salaam, with post-exposure mortalities of 92, 77 and 92%, respectively [6, 34].
The aim of this study was to assess insecticide use practice, knowledge, frequency and pattern of insecticide use, types of vector control tools and methods of vector control, in relation to the phenotypic insecticide resistance and resistance ratio of wild A. gambiae s.l. populations in the Lower Moshi rice irrigation scheme, against lambdacyhalothrin, permethrin, DDT and bendiocarb.
This study was carried out in Lower Moshi (37°20′E, 3°21′S; 700 m altitude), an intensive rice-irrigation area south of Mount Kilimanjaro in north-eastern Tanzania. Mosquitoes were collected from two hamlets (Mabogini and Rau Kati), which were selected based on differences in their agricultural practices. Most of the population in the area is engaged in agriculture and livestock production. Rice irrigation is the predominant activity, although other crops such as beans, maize and green vegetables are grown for subsistence. Insecticides are used for control of insect pests in agriculture and livestock production, as well as for control of human disease vectors such as mosquitoes. Two rivers, the Njoro and the Rau, provide water for irrigation. There are two growing seasons: the main one from June to October, and a second one, involving sporadic cultivation of rice, from September to February.
Sampling and sample selection technique
Semi-gravid adult A. gambiae s.l. mosquitoes were collected between May and June 2013 from Mabogini and Rau Kati. May and June fall within the long rainy season, when mosquito density is high. One central point was randomly selected in each village, followed by random selection of the direction in which household interviews were conducted (simple random sampling). After each household survey, the interviewer continued in the same direction, interviewing every subsequent head of household, or any adult above 18 years old available at the time of interview. In cases of non-response (call-backs were not implemented), the interviewer proceeded to the next household. Only one individual per household was interviewed. All households were visited in multi-household dwellings.
Data collection tools
Data collection for the cross-sectional survey utilised a structured questionnaire with both closed- and open-ended questions. The questionnaire was designed to capture all study variables, including: demographic characteristics of the study population; name of insecticide (trade, common and generic); insecticide ingredients; types of insecticides [lambdacyhalothrin, deltamethrin, permethrin and dichloro-diphenyl-trichloroethane (DDT)]; type of vector control tools (integrated, biological, environmental management, chemical); knowledge of insecticide use (manufacturer information, storage, dosage and concentration, safety precaution measures); frequency of insecticide application (daily, weekly, monthly); years of application; time of application (night/day); season of application; insecticide application technique (spraying, smearing, dipping, impregnation into a targeted object, etc.); form of insecticide (powder and concentrate, coils, sprays, wettable powder, insecticide chalks and jelly); and insecticide use (agriculture, veterinary or public health). The data collection tool for the susceptibility tests was a form capturing the information required for the test, as instructed in the WHO guidelines [37]. The form captured information such as mosquito stage (adult/larva), collection method (indoor/outdoor), type of breeding site (rice field, rainwater pool), mosquito information (age, species, dates collected and tested), insecticide information, storage conditions, test results, knockdown time and mortality. Susceptibility tests were carried out using WHO test kits for adult mosquitoes [37] with four insecticides: two pyrethroids, 0.05% deltamethrin (DE 271, manufactured September 2012, expiry September 2013) and 0.75% permethrin (PE 192, manufactured September 2012, expiry September 2013); a carbamate, 0.1% bendiocarb (BC 081, manufactured September 2011, expiry September 2014); and an organochlorine, 4% DDT (DD 150, manufactured August 2011, expiry August 2016). Impregnated papers were obtained from the WHO Collaborating Centre in Penang, Malaysia. A minimum of 100 Anopheles mosquitoes (4 replicates of 25 mosquitoes each) were collected for each susceptibility test. The numbers of knocked-down mosquitoes were recorded at intervals of 10, 15, 20, 30, 40, 50 and 60 min (1 h). Each test was accompanied by a control in which mosquitoes were exposed for 1 h to paper treated with silicone oil [Hangzhou Jessica Chemicals Co., Ltd (pyrethroid control)] or risella oil [manufactured at Shell's Pearl GTL plant in Qatar (DDT control)]. Bioassays were also carried out on the A. gambiae s.l. Kisumu susceptible strain (KMS strain). After exposure, mosquitoes were kept in paper cups and supplied with a 10% sugar solution at 25–27 °C, a light regime of 12L:12D and a relative humidity of 77 ± 2%; mortality was recorded after 24 h.
Mosquitoes sampling
A minimum of five houses were sampled randomly in each of the two hamlets every day. Mosquitoes were collected from cowsheds using a mechanical aspirator [38]. Mosquitoes were placed in paper cups covered with netting material and provided with 10% sucrose solution. They were placed in a cooler box and transported to the testing laboratory. Blood-fed mosquitoes were left for 24 h in the insectary to digest the blood meal to the semi-gravid stage. Insectary conditions were a light:dark cycle of 12L:12D and a relative humidity of 78 ± 2%. The mosquitoes were then used for insecticide susceptibility testing [31]. A laboratory susceptible colony was tested for the purpose of calculating the insecticide resistance ratio.
Insecticide susceptibility tests
To minimise the influence of a blood meal on exposure, fully fed mosquitoes were left overnight to digest the blood meal before exposure to insecticides. Only female A. gambiae s.l. were used for the susceptibility tests, in accordance with WHO criteria [39].
Morphological identification
Adult female A. gambiae s.l. mosquitoes were identified after susceptibility testing. Morphological identification was done using the key developed by Gillies and Coetzee [32].
Data management and analysis
Data were double-entered and compared for consistency before analysis. The data were coded before being entered into the Statistical Package for Social Scientists (SPSS) software for analysis. Descriptive statistics (frequencies and percentages) were calculated to characterise the study variables. Cross-tabulation was performed to determine relationships between the choice of vector control methods/approaches and other determinants (demographic characteristics, knowledge of insecticide use and practice, economic status, asset ownership and economic diversification) (Additional file 1). P values were used to interpret the significance of the statistical tests. Differences between compared groups were considered statistically significant when P < 0.05.
Household sample size estimation
The sample size was estimated at a 95% confidence level and a 5% margin of error, using a proportion of 50% for the unknown proportion of households knowledgeable on appropriate use of insecticides. The sample size formula is shown below:
$$N = \frac{Z^{2}\,P\,(100 - P)}{E^{2}}$$
where N = sample size; P = 50%, the assumed proportion of households knowledgeable on appropriate insecticide use (true proportion unknown); E = margin of error = 5%; and Z = 1.96, the z-value for a 95% confidence level. Thus N = 1.96² × 50 × (100 − 50)/5² ≈ 384.
A further 15% was added to allow for non-response, drop-outs or missing data, giving a sample size of (0.15 × 384) + 384 = 441.6. The calculated sample size was rounded off to 450 participants.
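As a quick arithmetic check, the calculation above can be reproduced in a few lines of Python; the sketch below only restates the standard formula, and the function name and rounding convention are ours, not the study's.

```python
from math import ceil

def survey_sample_size(p=50.0, e=5.0, z=1.96, nonresponse=0.15):
    """Sample size for estimating a proportion p (in %) with margin
    of error e (in %) at z-score z, inflated for non-response."""
    n = (z ** 2) * p * (100.0 - p) / (e ** 2)  # base size: 384.16
    return ceil(n * (1.0 + nonresponse))       # inflate and round up

print(survey_sample_size())  # 442, rounded off to 450 in the study
```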
Probit analysis was used to analyse mosquito susceptibility to the different insecticides [40]. In the analysis, the number of mosquitoes knocked down was treated as the response frequency, the total number of mosquitoes used per test as the total number observed, insecticide as a covariate and time as a factor. The natural response rate was calculated from the data. For the 24-h post-exposure mortality, descriptive statistics were used to explore the data by overall location, by type of insecticide and by both site and insecticide; mortality was the dependent variable, while site and insecticide were factors. The fifty percent knockdown time (KDT50) recorded for field-collected mosquitoes from Lower Moshi was compared with that of the A. gambiae Kisumu reference susceptible strain through estimates of KDT50 and the resistance ratio (RR). Abbott's formula was not used to correct the observed mortality in the adult susceptibility tests because there was no mortality in the control groups [41]. The World Health Organization standard criteria were used to evaluate the insecticide resistance/susceptibility status of the tested mosquito populations: a mortality in the range 98–100% indicates susceptibility; a mortality below 98% suggests the existence of resistance and calls for further investigation; if the observed mortality (corrected if necessary) is between 90 and 97%, the presence of resistance genes in the vector population must be confirmed; and a mortality below 90% confirms the existence of resistance genes in the tested population [37].
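For readers who want to reproduce this kind of analysis outside SPSS, the sketch below fits a probit curve (a normal CDF in log-time) to knockdown proportions and extracts KDT50 and the resistance ratio. The data values, the log-time metameter and the use of SciPy are our illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Illustrative knockdown data: minutes of exposure and the fraction
# of mosquitoes knocked down at each observation time.
t = np.array([10, 15, 20, 30, 40, 50, 60], dtype=float)
knocked = np.array([0.05, 0.12, 0.30, 0.62, 0.85, 0.95, 0.99])

def probit_curve(t, mu, sigma):
    # Knockdown probability modelled as a normal CDF in log10(time).
    return norm.cdf((np.log10(t) - mu) / sigma)

(mu, sigma), _ = curve_fit(probit_curve, t, knocked, p0=(1.4, 0.2))
kdt50 = 10 ** mu  # time at which 50% of mosquitoes are knocked down

kdt50_kisumu = 18.0          # assumed susceptible-strain reference
rr = kdt50 / kdt50_kisumu    # resistance ratio (RR)
print(f"KDT50 = {kdt50:.1f} min, RR = {rr:.2f}")
```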
Conceptual framework
Many factors may contribute to the susceptibility level of A. gambiae s.l. These include vector control methods, the type/class of vector control tool, and knowledge of insecticide use and practice. The vector control tool itself may, in turn, be affected by the demographic characteristics of respondents, knowledge of vector control and practice, insecticide application technique and frequency of insecticide application. The interrelation of all these factors may affect the susceptibility level, as summarised in Fig. 1.
Fig. 1 Conceptual framework used during study planning
Demographic characteristics of respondents
A total of 448 respondents participated in the study, of whom 39.7% (n = 178) were males and 60.3% (n = 270) were females. The mean age of respondents was 43.78 ± 13.49 years, and most were within the age group 46–55 years (41%, n = 181). The average number of people per household was 4.75 ± 1.9, and the majority of members had primary education (69.9%, n = 313). Most respondents were married (75.7%, n = 333), and most households had 0–2 children aged below 5 years (96.7%, n = 433). The total number of people per household most commonly ranged between 4 and 6 (61.2%, n = 274), and the majority of respondents reported farming as their main source of income (Table 1).
Table 1 Demographic characteristics of respondents (N = 448)
Insecticide usage pattern
The majority of respondents (80.8%, n = 320) reported having applied insecticides in the past 5 years, mainly against crop pests (77.3%, n = 307). They also used insecticides for veterinary purposes (killing insects, 30%, n = 119; nuisance control, 30.2%, n = 120) and for household purposes (malaria vector control, 30.2%, n = 202). The reported trend of insecticide use over the past 5 years increased for farming purposes (46.7%, n = 154) but decreased for public health uses (69.3%, n = 158). The most commonly used pesticides were dursban (49.6%, n = 148) for farming, cybadip (71.1%, n = 83) for veterinary use and Icon/Ngao (49.1%, n = 107) for public health pests. From reading the labels and material data sheets of the containers/packaging, the active ingredients were found to be chlorpyrifos (49.7%, n = 148) for farming, cypermethrin (77.9%, n = 81) for veterinary use and lambdacyhalothrin (58.6%, n = 116) for public health purposes. The major classes of pesticide used were organophosphates for farming purposes (55.4%, n = 165) and pyrethroids for both veterinary and public health purposes (89.42%, n = 93 and 89.1%, n = 179, respectively) (Table 2).
Table 2 Surveyed community insecticide use response pattern for farming, veterinary and domestic pests
Vector control tools
Overall, most respondents reported that insecticides (89.5%, n = 401) and environmental management (89.2%, n = 355) were the methods used for vector control, while very few reported using other vector control types (Table 3). Further analysis was done by producing composite variables for the methods used: integrated (a combination of three or more methods) and non-integrated (two methods or only one). The majority of respondents used non-integrated methods for vector control (75.9%, n = 302) compared with integrated ones (24.1%, n = 96).
Table 3 Vector control method options by respondents
Knowledge of insecticide use and practice
The majority of respondents (85.10%, n = 330) agreed that they were aware of where to get information on insecticides. The most commonly cited source of information was insecticide dealers and distributors (67.20%, n = 262), with reading the insecticide material data sheet cited least often (48.30%, n = 189) (Table 4). Many respondents also reported having knowledge of insecticide use (91.20%, n = 330), with checking the expiry date most frequently considered, followed by reading package labels (45.3%, n = 178) (Table 4). Total knowledge was determined by recoding and combining the variables for knowing the source of information (knowing where to get information; extension officer and veterinary officer; material data sheet; container label) and the important information considered (expiry date, certification logo, container label, language on the label, and knowing the important information to consider before using or buying an insecticide). Almost half of the respondents had a high level of knowledge of insecticide use and practice (51.8%, n = 184), while the rest had a low level of knowledge.
Table 4 Proportions of respondents' knowledge on insecticide use and practice
The knockdown time for wild Anopheles gambiae s.l. in Mabogini and Rau Kati
A total of 4200 wild A. gambiae s.l. adult mosquitoes were collected from May to June 2013 in the two hamlets (Mabogini and Rau Kati). In Mabogini, the lowest KDT50 was recorded for bendiocarb and the highest for DDT. The KDT95 in Rau Kati was likewise low for bendiocarb but high for DDT (Table 5). Overall, knockdown time was high for DDT, moderate for the pyrethroids (permethrin and deltamethrin) and lowest for bendiocarb (Table 5).
Table 5 Mean knockdown time for wild Anopheles gambiae s.l.
Resistance ratio of wild Anopheles gambiae s.l. against laboratory susceptible colony
The resistance ratio of wild A. gambiae to the laboratory colony based on KDT50 for bendiocarb, deltamethrin and permethrin was about twice that of the Kisumu susceptible strain, whereas for DDT it was almost the same as that of the Kisumu susceptible strain (Table 6). The mortality ratio was highest for bendiocarb, at 1.00 (Table 7).
Table 6 Resistance ratio of wild Anopheles gambiae s.l. against a susceptible laboratory strain for different insecticides
Table 7 Mortality ratio of wild Anopheles gambiae s.l. against a susceptible laboratory colony based on 24 h mean mortality
Mean mortality of wild Anopheles gambiae s.l. at 24 h
The study found that A. gambiae s.l. was highly susceptible to bendiocarb and DDT (mortality rates of 100 and 99.2%, respectively), showed increased tolerance to permethrin (mortality rate = 89.68%) and was resistant to deltamethrin (mortality rate = 69.96%) (Table 8).
Table 8 Mean mortality after 24 h and knockdown time for wild Anopheles gambiae s.l.
The present study investigated the insecticide usage pattern and the phenotypic susceptibility of A. gambiae sensu lato to commonly used insecticides in Lower Moshi, north-eastern Tanzania. Farming was reported to be the main income activity in the area, and the demographic characteristics were similar to those of other peri-urban areas of Tanzania, as reported in the 2012 National Census Survey [42].
The proportion of pesticides used for farming in developing countries has been shown to be slightly higher than in countries such as Thailand, where almost half of small-scale farmers used insecticides [43]. The main reason farmers use large amounts of insecticides is to increase yields by protecting crops against pests. Increased application of insecticides for farming purposes poses a critical challenge, as it may accelerate the spread of resistant strains of insect vectors, especially malaria vectors, in areas where agriculture is the main activity [35]. Even though a link between increases in insecticide resistance and farming has previously been reported, studies show that resistance may differ over short periods of time, between places and even over short distances [36, 44]. For example, in the Mwea irrigation scheme in Kenya, where large amounts of insecticide are used in rice production, A. arabiensis was found to be highly susceptible (with mortality of 94%) to all insecticides recommended for malaria vector control [45], indicating that no resistance genes are present in that vector population. Monitoring the development of insecticide resistance in areas where insecticide-based tools such as LLINs and IRS are being used should be reinforced, to avoid compromising vector control interventions [26]. This study found that usage of insecticides for malaria vector control (LLINs) was reported in 93.3% of households. However, this LLIN coverage differs from the Tanzania Demographic and Health Survey (TDHS), in which the national mean coverage was about fifty percent (50%), whereas in this study it was found to be 93.3% [45]. The difference can be explained by the fact that, at the time of the TDHS, LLINs had not yet been distributed in all regions of mainland Tanzania. Despite this variation in coverage, it can be concluded that the study area has exceeded the minimum Millennium Development Goal target of 80% LLIN coverage at the household level. The government of Tanzania has made extra efforts in the distribution and scaling up of LLINs for wide coverage and usage [10, 11]. It must be borne in mind that high coverage and usage of LLINs has been associated with increased insecticide resistance in A. gambiae, as in the case of Senegal [26]. High LLIN coverage increases the exposure of vectors to insecticides, which selects for tolerance and spreads resistance genes in wild malaria vector populations where those genes are already present [46, 47]. The study by Kulkarni and others showed that A. gambiae s.l. and A. funestus remained highly susceptible, with mortality rates of 87–100%, despite long-term insecticide-treated net use [45].
An increasing trend of insecticide usage for farming purposes, and decreasing use for veterinary and public health purposes, over the past 5 years was reported in this study. Similar observations were reported in a study of small-scale vegetable farmers in northern Tanzania [48]. In this study, chlorpyrifos and dursban were the main active ingredient and brand name reported, respectively. Another recent study, by Nkya and others, substantiated a relationship between agriculture and insecticide resistance in disease vectors, mainly mosquitoes, by showing that the intensity of pesticide usage is correlated with high resistance rates among malaria vectors [36]. The classes and active ingredients of the most commonly applied pesticides reported in this study are similar to those reported by small-scale farmers in Tanzania [48] and Thailand [43]; the only differences were in brand names, while the active ingredients were the same.
Data on vector control tool usage showed that environmental management and the use of insecticides were the most prevalent vector control methods, while other approaches, including biological control, were reported least. Although environmental management reduces vector breeding sites, the few survivors can still develop resistance due to the heavy use of insecticides. Moreover, the reported use of biological vector control approaches is rare compared with other irrigated rice-growing areas, such as in the Middle East [49, 50]. In studies done in Ethiopia, 28% of people reported the use of bio-pesticides, such as fungi, to control vectors, especially malaria vectors, as a biological method and an alternative for the management of insecticide resistance [51]. The concept of Integrated Vector Management (IVM) was developed from lessons learnt from integrated pest management in the agricultural sector; IVM aims to optimise and rationalise the use of resources and tools for vector control [52]. In countries such as Zambia, the application of IVM for malaria control has shown significant reductions in malaria compared with areas where IVM was not applied [53]. Moreover, the IVM approach helps keep vectors susceptible to insecticides and hence reduces resistance. This implies that, since the majority of participants do not apply IVM, this is probably one of the factors contributing to the observed increase in resistance to some of the insecticides used.
The majority of the study population was found to have primary education with basic reading and writing ability. Lack of secondary and tertiary education may reduce their capacity to read and understand instructions. A study in Ethiopia found that 44.5% of respondents obtained information on insecticides by reading the container or package label [51]. The Ethiopian results are slightly higher than the findings of this study, in which 33.60% could understand the information on labels. This implies that, even if respondents know where to get insecticides and information, a poor capacity for reading and interpreting that information may cause them to miss important details.
Analysis of phenotypic susceptibility is often recommended for detecting resistance within a population at an early stage, to inform policy makers and the choice of vector control tools [3, 4, 34, 54]. This study observed that the median knockdown time of A. gambiae has increased compared with other studies conducted in the same place [34, 55]. Similar studies have also observed that the median knockdown time has risen compared with that at sentinel sites such as Meru, Kyela and Muleba [6, 32, 33]. This implies that the susceptibility of A. gambiae to insecticides such as permethrin, deltamethrin and DDT, in terms of median knockdown time, has decreased, indicating that resistance has started to develop. The use of pyrethroids was found to be high, with very little use of DDT, within the study site. However, it must be borne in mind that, irrespective of the low application of DDT, there is a possibility of cross-resistance between pyrethroids and DDT, as commonly reported in other studies [6, 35, 55, 56]. This study further found that carbamates were not much in use. In other studies, the low usage of bendiocarb has been found to be associated with vector susceptibility to carbamates compared with other insecticides, making it a good option for future malaria vector control, as suggested elsewhere [54]. However, evidence of bendiocarb resistance has been reported in other countries [57].
The wild population of A. gambiae s.l. was found to be highly susceptible to bendiocarb (mortality rate of 100%) and DDT (mortality rate of 99.2%) but resistant to permethrin (mortality rate of 89.68%) and deltamethrin (mortality rate of 69.96%). Of the four insecticides tested in the Lower Moshi rice irrigation scheme, bendiocarb showed the most promising effectiveness for malaria vector control, as evidenced by its 100% mortality rate and low median knockdown time. The high effectiveness of bendiocarb in this area may be attributed to the fact that carbamate is the least applied insecticide class, used only in the form of carbaryl for veterinary purposes. However, despite its effectiveness, A. gambiae resistance to bendiocarb (mortality 33.3%) has been recorded elsewhere in Africa [57]; hence, its application should incorporate practices for maintaining insecticide effectiveness, such as an IVM approach. The present study also found that DDT is still highly effective (mortality rate of 99.2%), and a previous study at the same site reported similar results [55]. In some other areas, however, DDT resistance has been documented, including the Sahelian region of Burkina Faso [58]. In this study it was reported that DDT is either used at a low rate or not applied at all, which may explain why resistance has not yet developed in the area [55]. In other parts of Africa, A. gambiae s.l. has been observed to be resistant to permethrin [58], and a similar scenario was observed for deltamethrin. These results are consistent with findings in Burkina Faso, where A. gambiae s.l. was found to be resistant at all sites except Orodara [58]. The findings of this study are contrary to those of a previous study conducted in the same area, in which A. gambiae s.l. was observed to be susceptible to deltamethrin [6]. This can be associated with the increased use of pyrethroids in the area, especially those that share a mode of action with deltamethrin, including lambdacyhalothrin, which is mainly used for agricultural and public health purposes.
An increased resistance ratio for the pyrethroids (permethrin and deltamethrin) was observed compared with the previous study in the same area [6]. Moreover, when compared with other sites such as Dar-es-salaam and Kilombero, similar increases in the resistance ratio for pyrethroids have been observed. Interestingly, DDT showed a much lower resistance ratio than the other insecticides tested, and even lower than at other sentinel sites such as Ilala, Kilombero and Arumeru [6]. The increase in the resistance ratio for pyrethroids could be due to the use of insecticides in agriculture, as reported in this study; this matches observations from previous studies in Africa that intensive use of insecticides may end in insecticide resistance [36, 54]. Even though pyrethroids have shown an increased resistance ratio, they are still suggested as the insecticides of choice for malaria vector control because of their relatively low toxicity to humans, rapid knockdown effect and relative longevity (duration of 3–6 months when used for IRS).
Anopheles gambiae s.l. was highly susceptible to bendiocarb but showed increased tolerance to permethrin and deltamethrin. The most effective insecticide for malaria vector control observed at the study site was bendiocarb. Educational level was found to be a factor hindering best practices of insecticide use in this area.
DDT:
dichloro-diphenyl-trichloroethane
IRS:
indoor residual spraying
ITNs:
insecticide treated nets
IVM:
integrated vector management
KDT50:
knockdown time for 50% of exposed population
KMS:
Kisumu susceptible strain
SPSS:
Statistical Package for Social Scientists
TDHS:
Tanzania Demographic and Health Survey
Coetzee M, Craig M, le Sueur D. Distribution of African malaria mosquitoes belonging to the Anopheles gambiae complex. Parasitol Today. 2000;16:74–7.
Coetzee M, Hunt RH, Wilkerson R, Della Torre A, Coulibaly MB, Besansky NJ. Anopheles coluzzii and Anopheles amharicus, new members of the Anopheles gambiae complex. Zootaxa. 2013;3619:246–74.
Mnzava A, Kilama W. Observations on the distribution of the Anopheles gambiae complex in Tanzania. Acta Trop. 1986;43:277–82.
Kweka E, Mahande A, Nkya W, Assenga C, Lyatuu E, Mosha F, et al. Vector species composition and malaria infectivity rates in Mkuzi, Muheza District, north-eastern Tanzania. Tanzan J Health Res. 2008;10:46–9.
Temu EA, Minjas JN, Tuno N, Kawada H, Takagi M. Identification of four members of the Anopheles funestus (Diptera: Culicidae) group and their role in Plasmodium falciparum transmission in Bagamoyo coastal Tanzania. Acta Trop. 2007;102:119–25.
Kabula B, Tungu P, Matowo J, Kitau J, Mweya C, Emidi B, et al. Susceptibility status of malaria vectors to insecticides commonly used for malaria control in Tanzania. Trop Med Int Health. 2012;17:742–50.
Mlacha YP, Chaki PP, Malishee AD, Mwakalinga VM, Govella NJ, Limwagu AJ, et al. Fine scale mapping of malaria infection clusters by using routinely collected health facility data in urban Dar es Salaam, Tanzania. Geospatial Health. 2017;12:494.
Searle KM, Katowa B, Kobayashi T, Siame MNS, Mharakurwa S, Carpi G, et al. Distinct parasite populations infect individuals identified through passive and active case detection in a region of declining malaria transmission in southern Zambia. Malar J. 2017;16:154.
Dheda K, Gumbo T, Maartens G, Dooley KE, McNerney R, Murray M, et al. The epidemiology, pathogenesis, transmission, diagnosis, and management of multidrug-resistant, extensively drug-resistant, and incurable tuberculosis. Lancet Respir Med. 2017;5:291–360.
Renggli S, Mandike R, Kramer K, Patrick F, Brown NJ, McElroy PD, et al. Design, implementation and evaluation of a national campaign to deliver 18 million free long-lasting insecticidal nets to uncovered sleeping spaces in Tanzania. Malar J. 2013;12:85.
Bernard J, Mtove G, Mandike R, Mtei F, Maxwell C, Reyburn H. Equity and coverage of insecticide-treated bed nets in an area of intense transmission of Plasmodium falciparum in Tanzania. Malar J. 2009;8:65.
Hanson K, Goodman C. Testing times: trends in availability, price, and market share of malaria diagnostics in the public and private healthcare sector across eight sub-Saharan African countries from 2009 to 2015. Malar J. 2017;16:205.
Michael D, Mkunde SP. The malaria testing and treatment landscape in mainland Tanzania, 2016. Malar J. 2017;16:202.
Roll Back Malaria. World malaria report 2005. Geneva: World Health Organization and UNICEF; 2005.
Fillinger U, Ndenga B, Githeko A, Lindsay SW. Integrated malaria vector control with microbial larvicides and insecticide-treated nets in western Kenya: a controlled trial. Bull World Health Organ. 2009;87:655–65.
Mutagahywa J, Ijumba J, Pratap H, Molteni F, Mugarula F, Magesa S, et al. The impact of different sprayable surfaces on the effectiveness of indoor residual spraying using a micro encapsulated formulation of lambda-cyhalothrin against Anopheles gambiae s.s. Parasit Vectors. 2015;8:203.
Thawer N, Ngondi J, Mugalura F, Emmanuel I, Mwalimu C, Morou E, et al. Use of insecticide quantification kits to investigate the quality of spraying and decay rate of bendiocarb on different wall surfaces in Kagera region, Tanzania. Parasit Vectors. 2015;8:242.
Helinski M, Nuwa A, Protopopoff N, Feldman M, Ojuka P, Oguttu D, et al. Entomological surveillance following a long-lasting insecticidal net universal coverage campaign in Midwestern Uganda. Parasit Vectors. 2015;8:458.
Okia M, Ndyomugyenyi R, Kirunda J, Byaruhanga A, Adibaku S, Lwamafa D, et al. Bioefficacy of long-lasting insecticidal nets against pyrethroid-resistant populations of Anopheles gambiae s.s. from different malaria transmission zones in Uganda. Parasit Vectors. 2013;6:130.
Aizoun N, Aikpon R, Padonou G, Oussou O, Oke-Agbo F, Gnanguenon V, et al. Mixed-function oxidases and esterases associated with permethrin, deltamethrin and bendiocarb resistance in Anopheles gambiae s.l. in the south–north transect Benin, West Africa. Parasit Vectors. 2013;6:223.
Jones C, Haji K, Khatib B, Bagi J, Mcha J, Devine G, et al. The dynamics of pyrethroid resistance in Anopheles arabiensis from Zanzibar and an assessment of the underlying genetic basis. Parasit Vectors. 2013;6:343.
Wanjala C, Zhou G, Mbugi J, Simbauni J, Afrane Y, Ototo E, et al. Insecticidal decay effects of long-lasting insecticide nets and indoor residual spraying on Anopheles gambiae and Anopheles arabiensis in western Kenya. Parasit Vectors. 2015;8:588.
Sovi A, Azondekon R, Aikpon R, Govoetchan R, Tokponnon F, Agossa F, et al. Impact of operational effectiveness of long-lasting insecticidal nets (LLINs) on malaria transmission in pyrethroid-resistant areas. Parasit Vectors. 2013;6:319.
Aikpon R, Agossa F, Osse R, Oussou O, Aizoun N, Oke-Agbo F, et al. Bendiocarb resistance in Anopheles gambiae s.l. populations from Atacora department in Benin, West Africa: a threat for malaria vector control. Parasit Vectors. 2013;6:192.
Ndiath MO, Sougoufara S, Gaye A, Mazenot C, Konate L, Faye O, et al. Resistance to DDT and pyrethroids and increased kdr mutation frequency in A. gambiae after the implementation of permethrin-treated nets in Senegal. PLoS ONE. 2012;7:e31943.
Nardini L, Christian R, Coetzer N, Koekemoer L. DDT and pyrethroid resistance in Anopheles arabiensis from South Africa. Parasit Vectors. 2013;6:229.
Zaim M, Aitio A, Nakashima N. Safety of pyrethroid treated mosquito nets. Med Vet Entomol. 2000;14:1–5.
Kishimba MA, Henry L, Mwevura H, Mmochi AJ, Mihale M, Hellar H. The status of pesticide pollution in Tanzania. Talanta. 2004;64:48–53.
Abuelmaali SA, Elaagip AH, Basheer MA, Frah EA, Ahmed FTA, Elhaj HFA, et al. Impacts of agricultural practices on insecticide resistance in the malaria vector Anopheles arabiensis in Khartoum State, Sudan. PLoS ONE. 2013;8:e80549.
Corbel V, N'Guessan R. Distribution, mechanisms, impact and management of insecticide resistance in malaria vectors: a pragmatic review. 2013.
Kabula B, Kisinza W, Tungu P, Ndege C, Batengana B, Kollo D, et al. Co-occurrence and distribution of East (L1014S) and West (L1014F) African knock-down resistance in Anopheles gambiae sensu lato population of Tanzania. Trop Med Int Health. 2014;19:331–41.
Kabula B, Tungu P, Malima R, Rowland M, Minja J, Wililo R, et al. Distribution and spread of pyrethroid and DDT resistance among the Anopheles gambiae complex in Tanzania. Med Vet Entomol. 2014;28:244–52.
Matowo J, Jones C, Kabula B, Ranson H, Steen K, Mosha F, et al. Genetic basis of pyrethroid resistance in a population of Anopheles arabiensis, the primary malaria vector in Lower Moshi, north-eastern Tanzania. Parasit Vectors. 2014;7:274.
Ranson H, N'Guessan R, Lines J, Moiroux N, Nkuni Z, Corbel V. Pyrethroid resistance in African anopheline mosquitoes: what are the implications for malaria control? Trends Parasitol. 2011;27:91–8.
Nkya T, Poupardin R, Laporte F, Akhouayri I, Mosha F, Magesa S, et al. Impact of agriculture on the selection of insecticide resistance in the malaria vector Anopheles gambiae: a multigenerational study in controlled conditions. Parasit Vectors. 2014;7:480.
WHO. Test procedures for insecticide resistance monitoring in malaria vector mosquitoes. 2013.
WHO. Manual on practical entomology in malaria. Part II. Methods and techniques. Geneva, Switzerland: World Health Organization; 1975.
WHO. Test procedures for insecticide resistance monitoring in malaria vector mosquitoes. Geneva: World Health Organisation; 2013.
Finney DJ. Probit analysis. 3rd ed. Cambridge: Cambridge University Press; 1971.
Abbott W. A method of computing the effectiveness of an insecticide. J Am Mosq Control Assoc. 1987;3:302–3.
Tanzania Bureau of Statistics. Tanzania population and housing census 2012. Dar-es-salaam: Tanzania Bureau of Statistics; 2012.
Plianbangchang P, Jetiyanon K, Wittaya-Areekul S. Pesticide use patterns among small-scale farmers: a case study from Phitsanulok, Thailand. 2009.
Bonner K, Mwita A, McElroy PD, Omari S, Mzava A, Lengeler C, et al. Design, implementation and evaluation of a national campaign to distribute nine million free LLINs to children under five years of age in Tanzania. Malar J. 2011;10:73.
Kulkarni MA, Malima R, Mosha FW, Msangi S, Mrema E, Kabula B, et al. Efficacy of pyrethroid-treated nets against malaria vectors and nuisance-biting mosquitoes in Tanzania in areas with long-term insecticide-treated net use. Trop Med Int Health. 2007;12:1061–73.
Choi KS, Christian R, Nardini L, Wood OR, Agubuzo E, Muleba M, et al. Insecticide resistance and role in malaria transmission of Anopheles funestus populations from Zambia and Zimbabwe. Parasit Vectors. 2014;7:1–8.
Ochomo E, Bayoh NM, Kamau L, Atieli F, Vulule J, Ouma C, et al. Pyrethroid susceptibility of malaria vectors in four Districts of western Kenya. Parasit Vectors. 2014;7:9.
Ngowi A, Mbise T, Ijani A, London L, Ajayi O. Pesticides use by smallholder farmers in vegetable production in northern Tanzania. Crop Prot (Guildf, Surrey). 2007;26:1617.
Nguyen T, Nguyen H, Nguyen T, Vu S, Tran N, Le T, et al. Field evaluation of the establishment potential of wmelpop Wolbachia in Australia and Vietnam for dengue control. Parasit Vectors. 2015;8:563.
Tran TT, Olsen A, Viennet E, Sleigh A. Social sustainability of Mesocyclops biological control for dengue in South Vietnam. Acta Trop. 2015;141, Part A:54–9.
Amera T, Abate A. An assessment of the pesticide use, practice and hazards in the Ethiopian Rift Valley. Addis Ababa: Institute for Sustainable Development and PAN UK; 2008.
WHO. WHO position statement on integrated vector management. 2008.
Chanda E, Masaninga F, Coleman M, Sikaala C, Katebe C, MacDonald M, et al. Integrated vector management: the Zambian experience. Malar J. 2008;7:164.
Ranson H, Abdallah H, Badolo A, Guelbeogo W, Kerah-Hinzoumbe C, Yangalbe-Kalnone E, et al. Insecticide resistance in Anopheles gambiae: data from the first year of a multi-country study highlight the extent of the problem. Malar J. 2009;8:299.
Mahande AM, Dusfour I, Matias JR, Kweka EJ. Knockdown resistance, rdl alleles, and the annual entomological inoculation rate of wild mosquito populations from lower Moshi, northern Tanzania. J Glob Infect Dis. 2012;4:114.
Munhenga G, Masendu HT, Brooke BD, Hunt RH, Koekemoer LK. Pyrethroid resistance in the major malaria vector Anopheles arabiensis from Gwave, a malaria-endemic area in Zimbabwe. Malar J. 2008;7:247.
Matambo TS, Abdalla H, Brooke BD, Koekemoer LL, Mnzava A, Hunt RH, et al. Insecticide resistance in the malarial mosquito Anopheles arabiensis and association with the kdr mutation. Med Vet Entomol. 2007;21:97–102.
Kerah-Hinzoumbé C, Péka M, Nwane P, Donan-Gouni I, Etang J, Samè-Ekobo A, et al. Insecticide resistance in Anopheles gambiae from south-western Chad, Central Africa. Malar J. 2008;7:3.1–4.
EJN and EJK conceived, designed and implemented the protocol for data collection, analysis and interpretation. CK and ZP critically reviewed the protocol and data collection tools. FT performed data analysis. All authors reviewed the manuscript critically. All authors read and approved the final manuscript.
The authors wish to thank the heads of household and the farmers who participated in interviews during this study.
The authors declare that they have no competing interests.
All data generated or analysed during this study are included in this published article.
Ethics approval and consent to participate
The Ethical Review Board of Muhimbili University of Health and Allied Sciences and the Moshi Rural District Council approved this study to be conducted at this site. Written informed consent was obtained from each head of household after all information about the objectives of the study had been provided.
Mkuranga District Council, the Tanzania Commission for Science and Technology (COSTECH) and the National Research Foundation (NRF) (Grant No. TZ-RSA/JRP/RG.2013.08) financed the study but had no role in the study design, data analysis or manuscript writing, nor in its findings or conclusions.
Department of Parasitology and Medical Entomology, Muhimbili University of Health and Allied Sciences, P.O. Box 65011, Dar es Salaam, Tanzania
Elinas J. Nnko, Charles Kihamia & Zul Premji
National Institute for Medical Research, Amani Medical Research Centre, Muheza, P.O. Box 81, Tanga, Tanzania
Filemoni Tenu
Tropical Pesticides Research Institute, Division of Livestock and Human Health Disease Vector Control, Mosquito Section, P.O. Box 3024, Arusha, Tanzania
Eliningaya J. Kweka
Department of Medical Parasitology and Entomology, School of Medicine, Catholic University of Health and Allied Sciences, P.O. Box 1464, Mwanza, Tanzania
Correspondence to Eliningaya J. Kweka.
Additional file 1. The questionnaire used for data collection from head of households.
Nnko, E.J., Kihamia, C., Tenu, F. et al. Insecticide use pattern and phenotypic susceptibility of Anopheles gambiae sensu lato to commonly used insecticides in Lower Moshi, northern Tanzania. BMC Res Notes 10, 443 (2017) doi:10.1186/s13104-017-2793-4
Resistance ratio
Anopheles gambiae s.l. | CommonCrawl |
March 2002, Volume 44, Issue 1, pp 111–126
Sublattices of regular elements
D. D. Anderson
E. W. Johnson
Richard L. Spellerberg II
Let $L$ be an r-lattice, i.e., a modular multiplicative lattice that is compactly generated, principally generated, and has greatest element 1 compact. We consider certain subsets of $L$ consisting of "regular elements": $L_f = \{0\} \cup \{A \in L \mid (0:A) = 0\}$; $L_{sr} = \{0\} \cup \{A \in L \mid$ there is a compact element $X \leqslant A$ with $(0:X) = 0\}$; $L_r = \{0\} \cup \{A \in L \mid$ there is a principal element $X \leqslant A$ with $(0:X) = 0\}$; and $L_{rg} = \{0\} \cup \{A \in L \mid A = \bigvee_\alpha X_\alpha$ where each $X_\alpha$ is a principal element with $(0:X_\alpha) = 0\}$. The first three subsets $L_f$, $L_{sr}$ and $L_r$ are augmented filters $\mathcal{L}^0$ on $L$, i.e., $\mathcal{L}^0 = \mathcal{L} \cup \{0\}$ where $\mathcal{L}$ is a multiplicatively closed subset of $L$ such that $A \in \mathcal{L}$ and $B \geqslant A$ with $B \in L$ implies $B \in \mathcal{L}$, and hence are sublattices of $L$ closed under multiplication. We first consider the more general situation of augmented filters on $L$. These results are then applied to study the four previously defined subsets for $L$ an $r$-lattice or Noether lattice (i.e., an $r$-lattice with ACC). Finally, we give a brief discussion of how the results for augmented lattices can be applied to subsets of $L$ which are "regular" with respect to an $L$-module.
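For a concrete illustration (our example, not one taken from the paper), let $L$ be the lattice of ideals of $R = \mathbb{Z}/6\mathbb{Z}$ with ideal multiplication. Here $(0:(2)) = (3) \neq 0$ and $(0:(3)) = (2) \neq 0$, while $(0:R) = 0$, so $L_f = \{0\} \cup \{R\}$; and since $R = (1)$ is principal, the four subsets $L_f$, $L_{sr}$, $L_r$ and $L_{rg}$ all coincide in this small example.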
Keywords: general situation; regular element; greatest element; principal element; compact element
© Kluwer Academic Publishers 2002
1. Department of Mathematics, The University of Iowa, Iowa City, U.S.A.
2. Department of Mathematics, Simpson College, Indianola, U.S.A.
Anderson, D.D., Johnson, E.W. & Spellerberg II, R.L. Sublattices of regular elements. Periodica Mathematica Hungarica (2002) 44: 111–126. https://doi.org/10.1023/A:1014932204184
Introduction to recurrent neural networks.
Evolving a hidden state over time
Common structures of recurrent networks
Bidirectionality
Previously, I've written about feed-forward neural networks as a generic function approximator and convolutional neural networks for efficiently extracting local information from data. In this post, I'll discuss a third type of neural network, the recurrent neural network, for learning from sequential data.
For some classes of data, the order in which we receive observations is important. As an example, consider the two following sentences:
"I'm sorry... it's not you, it's me."
"It's not me, it's you... I'm sorry."
These two sentences are communicating quite different messages, but this can only be interpreted when considering the sequential order of the words. Without this information, we're unable to disambiguate from the collection of words: {'you', 'sorry', 'me', 'not', 'im', 'its'}.
Recurrent neural networks allow us to formulate the learning task in a manner which considers the sequential order of individual observations.
In this section, we'll build the intuition behind recurrent neural networks. We'll start by reviewing standard feed-forward neural networks and build a simple mental model of how these networks learn. We'll then build on that to discuss how we can extend this model to a sequence of related inputs.
Recall that neural networks perform a series of layer by layer transformations to our input data. The hidden layers of the network form intermediate representations of our input data which make it easier to solve the given task.
This is demonstrated in the example below. Observe how our input space is warped into one which allows for a linear decision boundary to cleanly separate the two classes. At a high level, you can think of the hidden layers as "useful representations" of the original input data.
Now let's consider how we can leverage this insight for a sequence of related observations.
Let's first focus on the initial value in the sequence. As we calculate the forward pass through the network, we build a "useful representation" of our input in the hidden layers (the activations in these layers define our hidden state), continuing on to calculate an output prediction for the initial time-step.
When considering the next time-step in the sequence, we want to leverage any information we've already extracted from the sequence.
In order to do this, our next hidden state will be calculated as a combination of the previous hidden state and latest input.
The basic method for combining these two pieces of information is shown below; however, there exist other more advanced methods that we'll discuss later (gated recurrent units, long short-term memory units). Here, we have one set of weights $w_{ih}$ to transform the input to a hidden layer representation and a second set of weights $w_{hh}$ to bring along information from the previous hidden state into the next time-step.
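To make this concrete, here's a minimal NumPy sketch of a single recurrent update; the tanh nonlinearity and the dimensions are illustrative choices of mine, while `w_ih` and `w_hh` match the weights described above.

```python
import numpy as np

hidden_size, input_size = 16, 8
rng = np.random.default_rng(0)

# Shared weights: input-to-hidden (w_ih) and hidden-to-hidden (w_hh).
w_ih = rng.normal(0.0, 0.1, (hidden_size, input_size))
w_hh = rng.normal(0.0, 0.1, (hidden_size, hidden_size))
b = np.zeros(hidden_size)

def rnn_step(h_prev, x):
    """Combine the previous hidden state with the latest input."""
    return np.tanh(w_ih @ x + w_hh @ h_prev + b)
```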
We can continue performing this same calculation of incorporating new information to update the value of the hidden state for an arbitrarily long sequence of observations.
By always remembering the previous hidden state, we're able to chain a sequence of events together. This also allows us to backpropagate errors to earlier timesteps during training, often referred to as "backpropagation through time".
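Unrolling that update across a sequence amounts to a simple loop, with each hidden state feeding into the next. Continuing the sketch above, every hidden state is kept so that errors can later be backpropagated through time:

```python
def forward(inputs, h0):
    """Run the recurrent cell over a whole sequence of inputs."""
    h, states = h0, []
    for x in inputs:
        h = rnn_step(h, x)  # new state = f(previous state, input)
        states.append(h)
    return states

sequence = [rng.normal(size=8) for _ in range(5)]
states = forward(sequence, h0=np.zeros(16))
```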
One of the benefits of recurrent neural networks is the ability to handle arbitrary length inputs and outputs. This flexibility allows us to define a broad range of tasks. In this section, I'll discuss the general architectures used for various sequence learning tasks.
One to many RNNs are used in scenarios where we have a single input observation and would like to generate an arbitrary length sequence related to that input. One example of this is image captioning, where you feed in an image as input and output a sequence of words to describe the image. For this architecture, we take our prediction at each time step and feed that in as input to the next timestep, iteratively generating a sequence from our initial observation and following predictions.
Many to one RNNs are used to look across a sequence of inputs and make a single determination from that sequence. For example, you might look at a sequence of words and predict the sentiment of the sentence. Generally, this structure is used when you want to perform classification on sequences of data.
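As a rough sketch of this many-to-one pattern (continuing the NumPy example above; the readout weights `w_out` and the softmax are my additions), a sequence classifier can simply read off the final hidden state:

```python
def classify(inputs, h0, w_out):
    """Many-to-one: summarise the sequence with the last hidden
    state, then map it to class probabilities."""
    h = h0
    for x in inputs:
        h = rnn_step(h, x)
    scores = w_out @ h                             # (num_classes,)
    return np.exp(scores) / np.exp(scores).sum()   # softmax
```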
Many to many (same) RNNs are used for tasks in which we would like to predict a label for each observation in a sequence, sometimes referred to as dense classification. For example, if we would like to detect named entities (person, organization, location) in sentences, we might produce a label for every single word denoting whether or not that word is part of a named entity. As another example, you could feed in a video (sequence of images) and predict the current activity in each frame.
Many to many (different) RNNs are useful for translating a sequence of inputs into a different but related sequence of outputs. In this case, both the input and the output can be arbitrary length sequences and the input length might not always be equal to the output length. For example, a machine translation model would be expected to translate "how are you" (input) into "cómo estás" (output) even though the sequence lengths are different.
One of the weaknesses of an ordinary recurrent neural network is that we can only use the set of observations which we have already seen when making a prediction. As an example, consider training a model for named entity recognition. Here, we want the model to output the start and end of phrases which contain a named entity. Consider the following two sentences:
"I can't believe that Teddy Roosevelt was your great grandfather!"
"I can't believe that Teddy bear is made out of chocolate!"
In both sentences, the word "Teddy" appears in the same left-hand context; if you only read the input sequence from left to right, it's hard to tell whether or not you should mark "Teddy" as the start of a name.
Ideally, our model output would look something like this when reading the first sentence (roughly following the inside–outside–beginning tagging format).
When determining whether or not a token is the start of a name, it would sure be helpful to see which tokens follow after it; a bidirectional recurrent neural network provides exactly that. Here, we process the sequence reading from left-to-right and right-to-left in parallel and then combine these two representations such that at any point in a sequence you have knowledge of the tokens which came before and after it.
We have one set of recurrent cells which process the sequence from left to right...
... and another set of recurrent cells which process the sequence from right to left.
Thus, at any given time-step we have knowledge of all of the tokens which came before the current time-step and all of the tokens which came after that time-step.
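A minimal way to express this combination (continuing the same sketch, with separate forward and backward cells assumed) is to run the sequence in both directions and concatenate the two hidden states at each position:

```python
def bidirectional(inputs, step_fw, step_bw, h0):
    """Concatenate left-to-right and right-to-left hidden states."""
    fwd, h = [], h0
    for x in inputs:            # process left to right
        h = step_fw(h, x)
        fwd.append(h)
    bwd, h = [], h0
    for x in reversed(inputs):  # process right to left
        h = step_bw(h, x)
        bwd.append(h)
    bwd.reverse()
    # Each position now sees tokens both before and after it.
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```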
One key component that I glossed over previously is that the recurrent layer's weights are shared across time-steps. This provides us with the flexibility to process arbitrary length sequences, but also introduces a unique challenge when training the network.
For a concrete example, suppose you've trained a recurrent neural network as a language model (predict the next word in a sequence). As you're generating text, it might be important to know whether the current word is inside quotation marks. Let's assume this is true and consider the case where our model makes a wrong prediction because it wasn't paying attention to whether or not the current time-step is inside quotation marks. Ideally, you want a way to send back a signal to the earlier time-step where we entered the quotation mark to say "pay attention!" to avoid the same mistake in the future. Doing so requires sending our error signal back through many time-steps. (As an aside, Karpathy has a famous blog post which shows that a character-level RNN language model can indeed pay attention to this detail.)
Let's consider what the backpropagation step would look like to send this signal to earlier time-steps.
As a reminder, the backpropagation algorithm states that we can define the relationship between a given layer's weights and the final loss using the following expression:
$$ \frac{{\partial E\left( w \right)}}{{\partial w^{(l)}}} = {\left( {{\delta ^{(l + 1)}}} \right)^T}{a^{(l)}} $$
where ${\delta ^{(l)}}$ (our "error" term) can be calculated as:
$$ {\delta ^{(l)}} = {\delta ^{(l + 1)}}{w ^{(l)}}f'\left( {{a^{(l)}}} \right) $$
This allows us to efficiently calculate the gradient for any given layer by reusing the terms already computed at layer $l+1$. However, notice how there's a term for the weight matrix, ${w ^{(l)}}$, included in the computation at every layer. Now recall that I earlier mentioned recurrent layers share weights across time-steps. This means that the same exact value is being multiplied every time we perform this layer-by-layer backpropagation through time.
Let's suppose one of the weights in our matrix is 0.5 and we're attempting to send a signal back 10 time-steps. By the time we've backpropagated to $t-10$, we've multiplied the overall gradient expression by $0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 \cdot 0.5 = 0.00098$. This has the effect of drastically reducing the magnitude of our error signal! This phenomenon is known as the "vanishing gradient" problem which makes it very hard to learn using a vanilla recurrent neural network. The same problem can occur when the weight is greater than one, introducing an exploding gradient, although this is slightly easier to manage thanks to a technique known as gradient clipping.
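The shrinkage is easy to verify numerically, and the usual remedy for the exploding case, gradient clipping, takes only a couple of lines (an illustrative sketch; the max-norm value is arbitrary):

```python
import numpy as np

print(0.5 ** 10)  # 0.0009765625 -- the error signal all but vanishes
print(1.5 ** 10)  # 57.67        -- the exploding counterpart

def clip_gradient(grad, max_norm=5.0):
    """Rescale the gradient whenever its norm exceeds max_norm."""
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad
```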
In following posts, we'll look at two common variations of the standard recurrent cell which alleviate this problem of a vanishing gradient.
Learning Long-Term Dependencies with Gradient Descent is Difficult
On the difficulty of training Recurrent Neural Networks
A Simple Way to Initialize Recurrent Networks of Rectified Linear Units
Lectures/Notes
Stanford CS231n: Lecture 10 | Recurrent Neural Networks
Stanford CS231n Winter 2016: Lecture 10: Recurrent Neural Networks, Image Captioning, LSTM
Stanford CS224n: Lecture 8: Recurrent Neural Networks and Language Models
Stanford CS230: Recurrent Neural Networks Cheatsheet
MIT 6.S094: Recurrent Neural Networks for Steering Through Time
University of Toronto CSC2535: Lecture 10 | Recurrent neural networks
The Unreasonable Effectiveness of Recurrent Neural Networks
Simplified modelling and backstepping control of the long arm agricultural rover
Napasool Wongvanich (ORCID: orcid.org/0000-0002-2000-8259)1,
Sungwan Boksuwan1 &
Abdulhafiz Chesof1
Advances in Difference Equations, volume 2020, Article number: 701 (2020)
This paper presents the development of a simplified model and controller for the long arm system of an agricultural rover, extending the modelling methodology of previous work. The methodology initially assumes a flexible model; through the use of the integral-based parameter identification method, the identified parameters are then correlated to an energy function, allowing construction of a friction-induced nonlinear vibration model. To also capture the effect of time delay, a delay model was considered, in the form of a second order delay differential equation. Both families of models were applied to identify and characterise a specialised long arm system. The nonlinear model was found to give a significant improvement over the standard linear model in fitting the data, which was further enhanced by including the time delay. A backstepping controller was also designed for both model families. Results show that the delay model expends less control effort than the simpler non-delay model.
The advent of robotics technology in recent decades has fuelled rapid growth in agricultural robotics, not only to meet the increasing demands for alternatives to human labour in agricultural production due to the difficulty of finding and retaining workers [1], but also to satisfy environmental and food safety needs [2]. Such growth has garnered significant research in recent years [3, 4]. Different types of agricultural robots have been developed from the days of the Gerrish tractor robot in 1984 [5]. These robots operate in a wide range of agriculture processes, including harvesting [6–8], weed control [9–11] and spraying [12–14]. It is obvious that the use of a robot arm is essential to reaching the required targets.
Agricultural processes such as harvesting and spraying of tropical fruits such as rambutan (Nephelium lappaceum) and durian (Durio zibethinus), however, necessitate a mobile robotic platform with elongated arms that extend to at least four metres in height. As the arms move, significant vibrations are felt at their tips, which must be mathematically modelled and controlled; this issue is also prevalent in the computer numerical control (CNC) machine tools of industrial robots [15].
Models of oscillatory vibration usually assume linear damping, in which the responses are readily decomposed into various modal frequencies. The common algorithm that follows this line of approach is Prony's method [16, 17]. This approach is similar to the well-known concept of the Fourier transform, except that an exponential decay term is added to the trigonometric basis function. Variants of Prony's method include the use of the total least squares technique instead of ordinary least squares [18] and the matrix pencil method [19]. Other approaches include Kalman-based estimation [20–22], distributed frequency domain optimisation [23] and the second order generalised integrator [24, 25].
More advanced types of vibration modelling also include friction-induced models, which can be separated into two main types. The first type views the vibration from a tribological viewpoint, where the friction coefficient varies with the relative velocity between the structures in contact. Such variation can be described by the Stribeck model, or by polynomial or even exponential functions [26, 27]. The second type emphasises the structural aspects of the vibration-generation process. In this regard, the prediction of friction-induced vibration (FIV) has been conducted through sensitivity approaches and probabilistic models [28, 29], hybrid meta models [30] and fuzzy approaches [31]. More recent approaches to estimating the FIV include the use of observer designs to estimate the states [32–34].
The surveyed models and methods generally tend to presuppose a definite, complicated model structure first, and then fit a complex model to the data. Should this initial model assumption fail to capture the responses, more complex observers are then introduced. The approach taken to analyse the vibration of the agricultural mobile rover in this work extends the concept of the work done by the first author in 2015 [35]. In other words, Sect. 2 of this paper presents the integral-based identification methodology for the vibration model without including the delay effects. The same formulation and concept are then used for the vibration model with the delay included. A corresponding integral-based identification method for the linear model is also presented for the purpose of comparison. Theoretical analyses are also given on the Lyapunov stability of the delay as well as the non-delay models. In Sect. 3, the developed model and identification methods are applied to a specialised rover arm system, where more complex dynamics are uncovered based on the identified damping and stiffness values. A validation procedure for the developed model is also given as proof of concept. A backstepping controller is designed in Sect. 3.6 based on the developed model of Sect. 3.2. The paper is concluded in Sect. 4.
Modified integral-based identification for the long arm rover without delay
Consider the normalised second order differential equation:
$$\begin{aligned} \theta '' + c \theta ' + k \theta = b u(t), \end{aligned}$$
where \(\theta \equiv \theta (t)\) represents the angular movement,
c is the damping constant,
k is the spring constant,
u is the input.
To designate some degree of flexibility for the model of damping as well as stiffness, these variables can be made time-varying. The simplest such time-varying function is the piecewise constant function defined as follows:
$$\begin{aligned} c(t) = \textstyle\begin{cases} c_{1}, & T_{0}< t< T_{1}, \\ \;\;\vdots \\ c_{n}, & T_{n-1} < t < T_{\mathrm{end}}, \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned} k(t) = \textstyle\begin{cases} k_{1}, & T_{0}< t< T_{1}, \\ \;\;\vdots \\ k_{n}, & T_{n-1} < t < T_{\mathrm{end}}. \end{cases}\displaystyle \end{aligned}$$
The system of Equation (1) can now be rewritten as follows:
$$\begin{aligned} \theta '' + c(t) \theta ' + k(t) \theta = b u(t), \end{aligned}$$
where the functions \(c(t)\) and \(k(t)\) are defined in Equations (2) and (3). Furthermore, the following quantities are also defined:
$$\begin{aligned} &\Delta t= T_{i} - T_{i-1},\qquad T_{0}=0, \end{aligned}$$
$$\begin{aligned} &\Delta t= \text{User defined time interval}. \end{aligned}$$
For future reference, let the measurement times \(t_{i}^{(j)}\) be defined as follows:
$$\begin{aligned} t_{i}^{(j)} \equiv \text{ Measurement instant $t_{i}$ of section $j$, $i=1,\ldots,N$ and $j=1,\ldots,n$}. \end{aligned}$$
Define also the operator
$$\begin{aligned} I_{T_{i},t}^{(k)} F = \underbrace{ \int _{T_{i}}^{t}\cdots \int _{T_{i}}^{t}}_{ \text{k times}} F \,d t\cdots d t. \end{aligned}$$
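As a computational aside (not from the paper itself), on uniformly sampled data the operator of Equation (8) amounts to k applications of a cumulative quadrature rule. A minimal NumPy sketch, assuming uniform sampling and the trapezoidal rule:

```python
import numpy as np

def iterated_integral(F, dt, k):
    """Apply I^{(k)}_{T_i, t} to uniformly sampled values F (starting at t = T_i):
    integrate k times from the start of the section using the trapezoidal rule."""
    result = np.asarray(F, dtype=float)
    for _ in range(k):
        result = np.concatenate(
            ([0.0], np.cumsum(0.5 * (result[1:] + result[:-1]) * dt)))
    return result

t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
print(iterated_integral(np.ones_like(t), dt, 2)[-1])  # ~0.5, i.e. t^2/2 at t = 1
```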
Applying the operator of Equation (8) onto Equation (4) yields
$$\begin{aligned} \theta (t) - \theta _{i-1} - \alpha _{i-1} (t - T_{i-1}) +c_{i} I_{T_{i-1},t}^{(1)} \theta (t) +k_{i} I_{T_{i-1},t}^{(2)} \theta (t) = b I_{T_{i-1},t}^{(2)} u(t). \end{aligned}$$
Here, the initial conditions are defined as follows:
$$\begin{aligned} \theta _{i-1} = \theta (T_{i-1}),\qquad d\theta _{i-1} = \theta '(T_{i-1}). \end{aligned}$$
The integral reconstruction model for the vibration system can now be written as follows:
$$\begin{aligned} \theta _{\mathrm{model},i}(t) = \theta _{i-1} + \alpha _{i-1} ( t - T_{i-1}) - c_{i} I_{T_{i-1},t}^{(1)} \theta (t) - k_{i} I_{T_{i-1},t}^{(2)} \theta (t) + b I_{T_{i-1},t}^{(2)} u(t), \end{aligned}$$
where the parameters \(\alpha _{i-1}\) are
$$\begin{aligned} \alpha _{i-1} = d \theta _{i-1} + c_{i} \theta _{i-1}. \end{aligned}$$
Figure 1 shows a possible scenario at the joins in the neighbourhood of \(t=T_{i}, i=1,\ldots,n\). In practice, the angle measurements \(\theta _{\mathrm{meas}}\) will be obtained from an encoder, whose additive quantisation noise implies that the value of \(\theta _{\mathrm{meas}}\) may not equal \(\theta _{\mathrm{model}}\) at the joins. This phenomenon introduces possible discontinuities at the joins. A simple method of resolving this discontinuity is to identify the sections piece by piece and resolve the discontinuity at the end of every section. The integral reconstructor for the first section is written as follows:
$$\begin{aligned} &\theta _{\mathrm{model},0}(t)= \theta _{0} + \alpha _{0} (t-T_{0}) - c_{1} I_{T_{0},t}^{(1)} \theta -k_{1} I_{T_{0},t}^{(2)} \theta + b I_{T_{0},t}^{(2)} u, \end{aligned}$$
$$\begin{aligned} &\alpha _{0}= \text{Equation (12) with $i=1$}. \end{aligned}$$
Substituting \(\theta _{\mathrm{meas}}=\theta (t)\) in Equation (13) for \(t \in \{t_{1}^{(1)},\ldots,t_{N}^{(1)} \}\) yields a system of equations which can be summarised into a matrix equation
$$\begin{aligned} \mathbf{A} \mathbf{p}_{0} = \mathbf{b}, \end{aligned}$$
$$\begin{aligned} &\mathbf{A}= \begin{bmatrix} 1 & t_{1}^{(1)} - T_{0} & -I_{T_{0},t_{1}^{(1)}}^{(1)} \theta _{\mathrm{meas}}(t) &-I_{T_{0},t_{1}^{(1)}}^{(2)} \theta _{\mathrm{meas}}(t) &-I_{T_{0},t_{1}^{(1)}}^{(2)} u \\ 1 & t_{2}^{(1)} - T_{0} & -I_{T_{0},t_{2}^{(1)}}^{(1)} \theta _{\mathrm{meas}}(t) &-I_{T_{0},t_{2}^{(1)}}^{(2)} \theta _{\mathrm{meas}}(t)&-I_{T_{0},t_{2}^{(1)}}^{(2)} u \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ 1 & t_{N}^{(1)} - T_{0} & -I_{T_{0},t_{N}^{(1)}}^{(1)} \theta _{\mathrm{meas}}(t) &-I_{T_{0},t_{N}^{(1)}}^{(2)} \theta _{\mathrm{meas}}(t)&-I_{T_{0},t_{N}^{(1)}}^{(2)} u \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} &\mathbf{p}_{0}= \begin{bmatrix} \theta _{0} \\ \alpha _{0} \\ c_{1} \\ k_{1} \\ b \end{bmatrix},\qquad \mathbf{b}= \begin{bmatrix} \theta _{\mathrm{meas}}(t_{1}^{(1)}) \\ \theta _{\mathrm{meas}}(t_{2}^{(1)}) \\ \vdots \\ \theta _{\mathrm{meas}}(t_{N}^{(1)}) \end{bmatrix}. \end{aligned}$$
Equations (15)–(17) are solved by linear least squares subject to the constraints
$$\begin{aligned} c_{1} >0,\qquad k_{1} >0. \end{aligned}$$
The result yields the unknown parameters that are the elements of vector \(\mathbf{p}_{0}\). Substituting the elements of \(\mathbf{p}_{0}\) into Equation (13) yields an integral reconstructor model for the angle \(\theta (t)\) of the first section from the data segmentation.
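For illustration only, a sketch of how Equations (15)–(18) might be set up and solved numerically; the helper and variable names are assumptions, not code from the paper, and SciPy's bounded least squares stands in for whatever constrained solver the authors used:

```python
import numpy as np
from scipy.optimize import lsq_linear

def cumint(F, dt):
    """Cumulative trapezoidal integral of F with zero initial value."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * dt)))

def identify_first_section(theta_meas, u, t):
    """Set up A p0 = b of Equations (15)-(17) and solve with c_1 > 0, k_1 > 0."""
    dt = t[1] - t[0]
    I1_th = cumint(theta_meas, dt)
    I2_th = cumint(I1_th, dt)
    I2_u = cumint(cumint(u, dt), dt)
    # unknowns p0 = [theta_0, alpha_0, c_1, k_1, b]; column signs follow Eq. (13)
    A = np.column_stack([np.ones_like(t), t - t[0], -I1_th, -I2_th, I2_u])
    bounds = ([-np.inf, -np.inf, 0.0, 0.0, -np.inf], np.inf)
    return lsq_linear(A, theta_meas, bounds=bounds).x
```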
The discontinuity of \(\theta (t)\) at the joins
To ensure \(C_{0}\) continuity at the join \(k=i-1\), the initial condition \(\theta _{i-1}\) can be computed thus:
$$\begin{aligned} \theta _{i-1} \equiv \text{Equation (13) with $t=T_{i}$ for $i=2,\ldots,n$}. \end{aligned}$$
The value of b, which was identified for the first section, is assumed to be constant for all sections. This assumption is based on the fact that the arm is balanced and thus no sudden change of inertia is possible across the different time sections. The knowledge of the initial conditions for the ith section now leaves only three unknowns to be identified in the model of Equation (11). In this respect, setting \(\theta (t) \equiv \theta _{\mathrm{model},i}(t)\) and input \(u(t) \equiv u_{\mathrm{data},i}(t)\) for the time instants \(t \in \{t_{1}^{(i)},\ldots,t_{N}^{(i)}\}\) yields a system of N equations in three unknowns:
$$\begin{aligned} &\mathbf{A}_{i} \mathbf{p} = \mathbf{b}_{i}, \end{aligned}$$
$$\begin{aligned} &\mathbf{A}_{i} = \begin{bmatrix} t_{1}^{(i)}-T_{i-1} & -I_{T_{i-1},t_{1}^{(i)}}^{(1)} \theta _{\mathrm{meas}}(t) & -I_{T_{i-1},t_{1}^{(i)}}^{(2)} \theta _{\mathrm{meas}}(t) \\ t_{2}^{(i)}-T_{i-1} & -I_{T_{i-1},t_{2}^{(i)}}^{(1)} \theta _{\mathrm{meas}}(t) & -I_{T_{i-1},t_{2}^{(i)}}^{(2)} \theta _{\mathrm{meas}}(t) \\ \vdots & \vdots &\vdots \\ t_{N}^{(i)}-T_{i-1} & -I_{T_{i-1},t_{N}^{(i)}}^{(1)} \theta _{\mathrm{meas}}(t) & -I_{T_{i-1},t_{N}^{(i)}}^{(2)} \theta _{\mathrm{meas}}(t) \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} &\mathbf{b}_{i}= \begin{bmatrix} \theta _{i}(t_{1}) - b I_{T_{i-1},t_{1}^{(i)}}^{(2)} u(t) \\ \theta _{i}(t_{2}) - b I_{T_{i-1},t_{2}^{(i)}}^{(2)} u(t) \\ \vdots \\ \theta _{i}(t_{N}) - b I_{T_{i-1},t_{N}^{(i)}}^{(2)} u(t) \end{bmatrix}. \end{aligned}$$
Equations (20)–(22) can now be solved by linear least squares subject to the constraints
$$\begin{aligned} c_{i} >0, \qquad k_{i} >0. \end{aligned}$$
The result of such an operation will now provide the values for the damping \(c_{i}\) and stiffness \(k_{i}\) along with the integral reconstructor model for the angular movement \(\theta _{\mathrm{model}}(t)\). Figure 2 shows the algorithm for identifying the time-varying non-delay model.
Algorithm 1: algorithm for identifying the time-varying damping \(c(t)\) and the time-varying stiffness \(k(t)\) of Equation (4)
Integral-based identification for the long arm rover with delay
As a comparison, consider now the normalised second order delay differential equation given by
$$\begin{aligned} \theta ''(t-\tau ) + c \theta '(t-\tau ) + k \theta (t-\tau ) = u(t), \end{aligned}$$
where τ represents the delay, and the variable θ and the associated parameters c and k retain their meaning from the non-delay case in Sect. 2.1. Inserting also the time-varying models for the damping and stiffness as provided by Equations (2) and (3) gives
$$\begin{aligned} \theta ''(t-\tau ) + c(t) \theta '(t-\tau ) + k(t) \theta (t-\tau ) = u(t). \end{aligned}$$
Note that the time segmentation intervals for Equation (25) are again given by Equations (5) and (6). The time-delayed function \(\theta (t-\tau )\) and its derivatives are usually difficult to model. However, under the assumption that the time delay τ is small, it is possible to approximate them by first-order Taylor expansions:
$$\begin{aligned} &\theta (t-\tau )= \theta (t) - \tau \theta '(t), \end{aligned}$$
(26a)
$$\begin{aligned} &\theta '(t-\tau )= \theta '(t) - \tau \theta ''(t), \end{aligned}$$
(26b)
$$\begin{aligned} &\theta ''(t-\tau )= \theta ''(t) - \tau \theta '''(t). \end{aligned}$$
(26c)
Substituting Equation (26a)–(26c) into Equation (25), we obtain:
$$\begin{aligned} \theta ''' + a_{1,i} \theta '' + a_{2,i} \theta ' + a_{3,i} \theta = b u(t), \end{aligned}$$
where
$$\begin{aligned} &a_{1,i}= c_{i} - \frac{1}{\tau },\qquad a_{2,i} = k_{i} - \frac{c_{i}}{\tau }, \end{aligned}$$
$$\begin{aligned} &a_{3,i}=-\frac{k_{i}}{\tau },\qquad b = \frac{1}{\tau }. \end{aligned}$$
Applying the operator of Equation (8) to Equation (27) gives
$$\begin{aligned} &\theta (t)= \theta _{i-1} + \beta _{1,i} ( t- T_{i-1}) + \beta _{2,i} (t-T_{i-1})^{2} - a_{1,i} I_{T_{i-1},t}^{(1)} \theta \\ &\phantom{\theta (t)=}{}-a_{2,i} I_{T_{i-1},t}^{(2)} \theta -a_{3,i} I_{T_{i-1},t}^{(3)} \theta + b I_{T_{i-1},t}^{(3)} u(t), \end{aligned}$$
$$\begin{aligned} &\beta _{1,i}= d \theta _{i-1}+a_{1,i} \theta _{i-1}, \end{aligned}$$
$$\begin{aligned} &\beta _{2,i}= \frac{dd\theta _{i-1}}{2} + a_{1,i} \frac{d\theta _{i-1}}{2} + a_{2,i} \theta _{i-1}, \end{aligned}$$
where the initial conditions are
$$\begin{aligned} dd\theta _{i-1} = \theta ''(T_{i-1}),\qquad d\theta _{i-1} = \theta '(T_{i-1}). \end{aligned}$$
The discontinuities at every section joins are resolved using the method presented in Sect. 2.1. In this light the integral reconstructor model for the first section is written as follows:
$$\begin{aligned} \theta _{\mathrm{model},0}(t)= {}&\theta _{0} + \beta _{1,1} (t-T_{0}) + \beta _{2,1} (t-T_{0})^{2} -a_{1,1} I_{T_{0},t}^{(1)} \theta \\ &{}- a_{2,1} I_{T_{0},t}^{(2)} \theta -a_{3,1} I_{T_{0},t}^{(3)} \theta +b I_{T_{0},t}^{(3)} u(t). \end{aligned}$$
Substituting the values of \(\theta \equiv \theta _{\mathrm{data}}(t)\) and \(u(t) \equiv u_{\mathrm{applied}}(t)\) for the times of \(t \in \{ t_{0}^{(1)},\ldots,t_{N}^{(1)} \}\) will give a matrix equation
$$\begin{aligned} &\mathbf{M} \mathbf{p}_{0}= \mathbf{b}_{0}, \end{aligned}$$
$$\begin{aligned} &\mathbf{M}= \begin{bmatrix} 1&(t_{1}^{(1)}-T_{0})&(t_{1}^{(1)}-T_{0})^{2}&-I_{T_{0},t_{1}^{(1)}}^{(1)} \theta & -I_{T_{0},t_{1}^{(1)}}^{(2)} \theta & -I_{T_{0},t_{1}^{(1)}}^{(3)} \theta & -I_{T_{0},t_{1}^{(1)}}^{(3)} u \\ 1&(t_{2}^{(1)}-T_{0})&(t_{2}^{(1)}-T_{0})^{2}&-I_{T_{0},t_{2}^{(1)}}^{(1)} \theta & -I_{T_{0},t_{2}^{(1)}}^{(2)} \theta & -I_{T_{0},t_{2}^{(1)}}^{(3)} \theta & -I_{T_{0},t_{2}^{(1)}}^{(3)} u \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\ 1&(t_{N}^{(1)}-T_{0})&(t_{N}^{(1)}-T_{0})^{2}&-I_{T_{0},t_{N}^{(1)}}^{(1)} \theta & -I_{T_{0},t_{N}^{(1)}}^{(2)} \theta & -I_{T_{0},t_{N}^{(1)}}^{(3)} \theta & -I_{T_{0},t_{N}^{(1)}}^{(3)} u \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} &\mathbf{b}_{0}= \begin{bmatrix} \theta _{\mathrm{data}}({t_{1}^{(1)}}) \\ \theta _{\mathrm{data}}({t_{2}^{(1)}}) \\ \vdots \\ \theta _{\mathrm{data}}({t_{N}^{(1)}}) \end{bmatrix}, \end{aligned}$$
where \(\theta _{\mathrm{data}}({t_{j}^{(1)}})\) denotes the angular data at \(t=t_{j}\) of the first section. The system identification algorithm begins with the solving of Equations (35)–(37) by linear least squares subject to the conditions
$$\begin{aligned} a_{1,1} >0,\qquad a_{2,1} >0,\qquad a_{3,1} >0,\qquad b >0. \end{aligned}$$
The result of this process yields the unknown parameters that belong to the elements of \(\mathbf{p}_{0}\), whose vector is then substituted into Equation (34) to obtain the integral reconstructor model for the angle \(\theta (t)\) for the first section of the segmentation.
The initial conditions for the beginning of the ith segmentation are evaluated thus:
$$\begin{aligned} \theta _{i-1}&= \text{ Equation (30) for $t=T_{i}$ with $i=2,\ldots,n$. } \end{aligned}$$
Again note that the variable b is assumed to be constant throughout all sections. The knowledge of the initial conditions implies that only five parameters now remain to be identified in the model of Equation (30). Setting \(\theta (t) \equiv \theta _{\mathrm{model},i}(t)\) and input \(u(t) \equiv u_{\mathrm{data},i}(t)\) for the time instants \(t \in \{t_{1}^{(i)},\ldots,t_{N}^{(i)}\}\) yields the matrix equation
$$\begin{aligned} &\mathbf{M}_{i} \mathbf{p}_{i}= \mathbf{b}_{i}, \end{aligned}$$
$$\begin{aligned} &\mathbf{M}_{i}= \begin{bmatrix} (t_{1}^{(i)}-T_{i-1})&(t_{1}^{(i)}-T_{i-1})^{2}&-I_{T_{i-1},t_{1}^{(i)}}^{(1)} \theta & -I_{T_{i-1},t_{1}^{(i)}}^{(2)} \theta & -I_{T_{i-1},t_{1}^{(i)}}^{(3)} \theta \\ (t_{2}^{(i)}-T_{i-1})&(t_{2}^{(i)}-T_{i-1})^{2}&-I_{T_{i-1},t_{2}^{(i)}}^{(1)} \theta & -I_{T_{i-1},t_{2}^{(i)}}^{(2)} \theta & -I_{T_{i-1},t_{2}^{(i)}}^{(3)} \theta \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ (t_{N}^{(i)}-T_{i-1})&(t_{N}^{(i)}-T_{i-1})^{2}&-I_{T_{i-1},t_{N}^{(i)}}^{(1)} \theta & -I_{T_{i-1},t_{N}^{(i)}}^{(2)} \theta & -I_{T_{i-1},t_{N}^{(i)}}^{(3)} \theta \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} &\mathbf{b}_{i}= \begin{bmatrix} \theta _{\mathrm{data}}(t_{1}^{(i)}) - b I_{T_{i-1},t_{1}^{(i)}}^{(3)} u(t) \\ \theta _{\mathrm{data}}(t_{2}^{(i)}) - b I_{T_{i-1},t_{2}^{(i)}}^{(3)} u(t) \\ \vdots \\ \theta _{\mathrm{data}}(t_{N}^{(i)})- b I_{T_{i-1},t_{N}^{(i)}}^{(3)} u(t) \end{bmatrix}. \end{aligned}$$
Equations (40)–(42) are again solved by linear least squares, subject to the constraints
$$\begin{aligned} &a_{i,1} >0,\qquad a_{i,2} >0, \qquad a_{i,3} >0, \end{aligned}$$
(43a)
$$\begin{aligned} &\vert a_{i,1} - a_{i-1,1} \vert < \gamma _{1} \vert t_{i} - t_{i-1} \vert , \end{aligned}$$
(43b)
$$\begin{aligned} &\vert a_{i,2} - a_{i-1,2} \vert < \gamma _{2} \vert t_{i} - t_{i-1} \vert , \end{aligned}$$
(43c)
$$\begin{aligned} &\vert a_{i,3} - a_{i-1,3} \vert < \gamma _{3} \vert t_{i} - t_{i-1} \vert . \end{aligned}$$
(43d)
The result of such an operation will now provide the values for the parameters \(a_{i,1}\), \(a_{i,2}\) and \(a_{i,3}\) along with the integral reconstructor model for the angular movement \(\theta _{\mathrm{model}}(t)\). Figure 3 shows the algorithm for identifying the time delay model. Note that Equations (43b)–(43d) place Lipschitz constraints on the derivatives of \(a_{i,1}\), \(a_{i,2}\) and \(a_{i,3}\) to make sure that they are bounded. Were these constraints not placed on the derivatives, it would be possible to choose a very small ϵ such that the modelled response \(\theta _{\mathrm{model}}(t)\) gets very close to \(\theta _{\mathrm{true}}(t)\), and yet the values of \(a_{i,1}\), \(a_{i,2}\) and \(a_{i,3}\) do not resemble the true functions. In fact, previous work by Wongvanich et al. [35] has shown that the identified parameters can oscillate without bound about the true values, even though the modelled response matches the true data very well.
Algorithm 2: algorithm for identifying the time-varying \(a_{1}(t)\), \(a_{2}(t)\) and \(a_{3}(t)\) functions of Equation (25)
Theoretical analyses
This section gives the theoretical analyses for the vibration model, both without and with the delay.
Model without delay
Lemma 1
Consider the homogeneous second order differential equation
$$\begin{aligned} \theta ''(t) + c(t) \theta '(t) + k(t) \theta (t) =0. \end{aligned}$$
The required Lyapunov function can be written as follows:
$$\begin{aligned} V &= \frac{A}{2} z_{1}^{2} + \frac{B}{2} z_{2}^{2} + G z_{1} z_{2}, \end{aligned}$$
$$\begin{aligned} A = \frac{\alpha _{2} k(t) + \alpha _{1}}{c(t)} + \frac{\alpha _{1} c(t)}{k(t)},\qquad B = \frac{\alpha _{2} k(t) + \alpha _{1}}{c(t)}, \qquad G = \frac{\alpha _{1}}{k(t)} \end{aligned}$$
$$\begin{aligned} z_{1} \equiv z_{1}(t) = \theta,\qquad z_{2} \equiv z_{2}(t) = \theta '. \end{aligned}$$
The proof of this lemma is similar to the one given in [36]. □
Lemma 2
Consider the homogeneous second order differential equation defined in Equation (44). The system will have global asymptotic stability if there exists a number Q such that
$$\begin{aligned} \max \Bigl[\sup_{t\in [t,T_{\mathrm{end}} ]} c(t), \sup_{t\in [t,T_{\mathrm{end}} ]} k(t) \Bigr] < Q. \end{aligned}$$
Consider the Lyapunov candidate function of Equation (45) with parameters A, B and G as defined in Equation (46). To ensure that A, B and G are finite, select \(Q_{1}\) and \(Q_{2}\) such that
$$\begin{aligned} \sup_{t\in [t,T_{\mathrm{end}} ]} c(t) < Q_{1}, \qquad\sup_{t \in [t,T_{\mathrm{end}} ]} k(t) < Q_{2}. \end{aligned}$$
Hence the resulting upper bound is \(Q = \max [Q_{1}, Q_{2} ]\). □
Model with delay
Lemma 3
Consider the third order homogeneous system
$$\begin{aligned} \theta '''(t) + a(t) \theta ''(t) + b(t) \theta '(t) + r(t) \theta (t) =0, \end{aligned}$$
$$\begin{aligned} &a \equiv a(t) \quad\textit{and}\quad 0< a(t)< a_{m}, \\ &b \equiv b(t) \quad\textit{and}\quad 0< b(t)< b_{m}, \\ &r \equiv r(t) \quad\textit{and}\quad 0< r(t)< r_{m}. \end{aligned}$$
The required Lyapunov function is written as follows:
$$\begin{aligned} V &= \frac{1}{2} a r \biggl(z_{1} + \frac{z_{2}}{a} \biggr)^{2} + \frac{1}{2} \biggl(b-\frac{r}{a} \biggr) z_{2}^{2} + (z_{3} + a z_{2})^{2}, \end{aligned}$$
where the states are chosen as follows:
$$\begin{aligned} z_{1} \equiv z_{1}(t) = \theta, \qquad z_{2} \equiv z_{2}(t) = \theta ',\qquad z_{3} \equiv z_{3}(t) = \theta ''. \end{aligned}$$
Consider the third order differential equation
$$\begin{aligned} \theta ''' + F_{1,\mathrm{true}} \theta '' + F_{2,\mathrm{true}} \theta ' + \frac{K_{\mathrm{true}}}{D} \theta = 0, \end{aligned}$$
where \(F_{1,\mathrm{true}} \equiv F_{1,\mathrm{true}}(t)\), \(F_{2,\mathrm{true}} \equiv F_{2,\mathrm{true}}(t)\) and \(K_{\mathrm{true}} \equiv K_{\mathrm{true}}(t)\). The system of Equation (52) will have global asymptotic stability if there exists a number M such that
$$\begin{aligned} \max \biggl[\sup_{t\in [t,T_{\mathrm{end}} ]} \biggl( \frac{F_{1,\mathrm{true}} K_{\mathrm{true}}}{D} \biggr), \sup_{t\in [t,T_{\mathrm{end}} ]} \biggl( \frac{F_{1,\mathrm{true}} K_{\mathrm{true}} D - K_{\mathrm{true}} - F_{2,\mathrm{true}}^{2}}{D} \biggr) \biggr] < M. \end{aligned}$$
The following Lyapunov function is written by applying Lemma 3:
$$\begin{aligned} V={}& \frac{1}{2} \biggl[ \frac{F_{1,\mathrm{true}} K_{\mathrm{true}}}{D} \biggl(z_{1} + \frac{z_{2}}{F_{1,\mathrm{true}}} \biggr)^{2} + \biggl(K_{\mathrm{true}} \biggl(1- \frac{1}{D F_{2,\mathrm{true}}} \biggr)-\frac{F_{2,\mathrm{true}}}{D} \biggr) z_{2}^{2} \biggr] \end{aligned}$$
$$\begin{aligned} &{}+\frac{1}{2} (z_{3} + F_{1,\mathrm{true}} z_{2})^{2}. \end{aligned}$$
To keep the Lyapunov function of Equation (55) finite, first choose \(M_{1}\) and \(M_{2}\) so that the coefficients of \((z_{1} + \frac{z_{2}}{F_{1,\mathrm{true}}} )^{2}\) and \(z_{2}^{2}\) are finite:
$$\begin{aligned} &\sup_{t\in [t,T_{\mathrm{end}} ]} \frac{F_{1,\mathrm{true}} K_{\mathrm{true}}}{D} < M_{1}, \\ &\sup_{t\in [t,T_{\mathrm{end}} ]} \biggl(K_{\mathrm{true}} \biggl(1- \frac{1}{D F_{2,\mathrm{true}}} \biggr)-\frac{F_{2,\mathrm{true}}}{D} \biggr) < M_{2}. \end{aligned}$$
The resulting upper bound M is thus:
$$\begin{aligned} \max \biggl[\sup_{t\in [t,T_{\mathrm{end}} ]} \biggl( \frac{F_{1,\mathrm{true}} K_{\mathrm{true}}}{D} \biggr), \sup_{t\in [0,T_{\mathrm{end}} ]} \biggl( \frac{F_{1,\mathrm{true}} K_{\mathrm{true}} D - K_{\mathrm{true}} - F_{2,\mathrm{true}}^{2}}{D} \biggr) \biggr] < M. \end{aligned}$$
Having established the global asymptotic stability for the system of Equation (27), it is now possible to establish the convergence of our integral reconstructor model. In this respect, we propose the following theorem.
Consider the following third order nonlinear differential equation:
$$\begin{aligned} \theta '''+ F_{1,\mathrm{true}} \theta '' + F_{2,\mathrm{true}} \theta ' + \frac{K_{\mathrm{true}}}{D} \theta = u(t), \quad t \in [T_{0},T_{\mathrm{end}}], \end{aligned}$$
$$\begin{aligned} &\theta _{\mathrm{true}}(0)= \theta _{0},\qquad \theta '_{\mathrm{true}}(0) = d\theta _{0}, \\ &F_{i,\mathrm{true}} \equiv F_{i,\mathrm{true}}(t)>0,\qquad K_{\mathrm{true}} \equiv K_{\mathrm{true}}(t) >0, \qquad F_{i,\mathrm{true}} \in C^{0}, K_{\mathrm{true}} \in C^{0}, \\ &\sup_{t\in [0,T_{\mathrm{end}} ]} \vert F_{i,\mathrm{true}} \vert \quad \textit{is finite} \quad\textit{and}\quad \sup_{t\in [0,T_{\mathrm{end}} ]} \vert K_{i,\mathrm{true}} \vert \quad\textit{is finite}. \end{aligned}$$
Define also the following functions:
$$\begin{aligned} &F_{i,\mathrm{model},k}(t)= \sum_{j=1}^{k} a_{i,j,\mathrm{model}}^{(k)} \bigl[ u\bigl(t- \Delta t (j-1)\bigr)-u(t-j\Delta t) \bigr],\quad i=1,2, \\ &K_{\mathrm{model},k}(t)= \sum_{j=1}^{k} a_{3,j,\mathrm{model}}^{(k)} \bigl[ u\bigl(t- \Delta t (j-1)\bigr)-u(t-j\Delta t) \bigr], \\ &u(t-t_{k})= \textit{ unit step function at }t=t_{k}. \end{aligned}$$
If the parameters \(a_{i,j,\mathrm{model}}^{(k)}, i=1,2,3\), are functions satisfying the Lipschitz conditions
$$\begin{aligned} &\bigl\vert a_{1,\mathrm{model}}^{(j+1)}-a_{1,\mathrm{model}}^{(j)} \bigr\vert < \gamma _{1} \vert t_{j}-t_{j-1} \vert ,\qquad \gamma _{1} = \max_{t\in [t,T_{\mathrm{end}} ]} \bigl\vert \dot{F}_{1,\mathrm{true}}(t) \bigr\vert , \end{aligned}$$
$$\begin{aligned} &\bigl\vert a_{2,\mathrm{model}}^{(j+1)}-a_{2,\mathrm{model}}^{(j)} \bigr\vert < \gamma _{2} \vert t_{j}-t_{j-1} \vert ,\qquad \gamma _{2} = \max_{t\in [t,T_{\mathrm{end}} ]} \bigl\vert \dot{F}_{2,\mathrm{true}}(t) \bigr\vert , \end{aligned}$$
$$\begin{aligned} &\bigl\vert a_{3,\mathrm{model}}^{(j+1)}-a_{3,\mathrm{model}}^{(j)} \bigr\vert < \gamma _{3} \vert t_{j}-t_{j-1} \vert ,\qquad \gamma _{3} = \max_{t\in [t,T_{\mathrm{end}} ]} \bigl\vert \dot{K}_{\mathrm{true}}(t) \bigr\vert , \end{aligned}$$
and the limit of \(\theta _{\mathrm{model},n}\) approaches the true angular function \(\theta _{\mathrm{true}}\), then
$$\begin{aligned} &\lim_{k\to \infty } F_{i,\mathrm{model},k}(t) = F_{i,\mathrm{true}}(t),\quad i=1,2, \end{aligned}$$
$$\begin{aligned} &\lim_{k\to \infty } K_{\mathrm{model},k}(t) = K_{\mathrm{true}}(t). \end{aligned}$$
The integral reconstructor model for the system of Equation (56) is as follows:
$$\begin{aligned} \theta _{\mathrm{model}}(t)= {}&\theta _{i-1} + \beta _{1,i} (t-T_{i-1}) +\beta _{2,i} (t-T_{i-1})^{2} - I_{T_{i-1},t}^{(1)} F_{1,\mathrm{true}} \theta \\ &{} - I_{T_{i-1},t}^{(2)} F_{2,\mathrm{true}} \theta - I_{T_{i-1},t}^{(3)} K_{\mathrm{true}} \theta + I_{T_{i-1},t}^{(3)} u(t). \end{aligned}$$
We first consider the case \(t=0\), and proceed by contradiction. Suppose there exist N̄ and \(\delta _{1},\delta _{2},\delta _{3}>0\) such that, for \(k>\bar{N}\), \(|F_{1,\mathrm{model},k}(0)-F_{1,\mathrm{true}}(0)| > \delta _{1}\), \(|F_{2,\mathrm{model},k}(0)-F_{2,\mathrm{true}}(0)| > \delta _{2}\) and \(|K_{\mathrm{model},k}(0)-K_{\mathrm{true}}(0)| > \delta _{3}\).
Since the constituent functions of \(F_{i,\mathrm{model},k}, i=1,2\), and \(K_{\mathrm{model},k}\) are the \(a_{i,\mathrm{model},k}, i=1,2,3\), which are Lipschitz functions, there will exist a time interval \([0,dt^{*}]\), independent of k, on which \(F_{i,\mathrm{model},k}\) and \(K_{\mathrm{model},k}\) cannot intersect the true functions \(F_{i,\mathrm{true}}\) and \(K_{\mathrm{true}}\) respectively. Therefore,
$$\begin{aligned} &F_{1,\mathrm{model},k}(t) - F_{1,\mathrm{true}}(t)> \delta _{1}^{*}\quad \text{for all } t \in \bigl[0,dt^{*}\bigr], \\ &F_{2,\mathrm{model},k}(t) - F_{2,\mathrm{true}}(t)> \delta _{2}^{*}\quad \text{for all } t \in \bigl[0,dt^{*}\bigr], \\ &K_{\mathrm{model},k}(t) - K_{\mathrm{true}}(t)> \delta _{3}^{*}\quad \text{for all } t \in \bigl[0,dt^{*}\bigr]. \end{aligned}$$
If \(\theta '(0) \leq 0\), then it is possible to choose \(\tilde{dt}^{*} < dt^{*}\) such that \(\theta _{\mathrm{true}}(t) \leq 0\) for \(t \in [0,dt^{*}]\). Hence the quantities \(F_{i,\mathrm{model},k} - F_{i,\mathrm{true}}(t)\) and \(K_{\mathrm{model},k}-K_{\mathrm{true}}\) (or \(F_{i,\mathrm{true}}-F_{i,\mathrm{model},k}\) and \(K_{\mathrm{true}}-K_{\mathrm{model},k}\)) cannot change sign in that time. Thus, the error between the integral reconstruction function and the true value is as follows:
$$\begin{aligned} \epsilon \bigl(dt^{*}\bigr) &= \bigl\vert \theta _{\mathrm{true}} \bigl(dt^{*}\bigr) - \theta _{\mathrm{model}}\bigl(dt^{*} \bigr) \bigr\vert \\ &= I_{0,t}^{(3)} \vert F_{1,\mathrm{true}}-a_{i,1} \vert + I_{0,t}^{(3)} \vert F_{2,\mathrm{true}}-a_{i,2} \vert + I_{0,t}^{(3)} \vert K_{\mathrm{true}}-a_{i,3} \vert \\ & > \delta ^{**} >0. \end{aligned}$$
Equation (61) contradicts the assumption that the limit of \(\theta _{\mathrm{model},k}\) will approach \(\theta _{\mathrm{true}}\). Hence it follows that \(F_{1,\mathrm{model},k} \to F_{1,\mathrm{true}}(0)\), \(F_{2,\mathrm{model},k} \to F_{2,\mathrm{true}}(0)\), \(K_{\mathrm{model},k} \to K_{\mathrm{true}}(0)\).
For the case of \(t>0\), we also prove by contradiction. Suppose now that there exists a time \(t_{0}>0\), the smallest time such that \(F_{1,\mathrm{model},k}\) does not approach \(F_{1,\mathrm{true}}(t_{0})\), \(F_{2,\mathrm{model},k}\) does not approach \(F_{2,\mathrm{true}}(t_{0})\) and \(K_{\mathrm{model},k}\) does not approach \(K_{\mathrm{true}}(t_{0})\). Hence,
$$\begin{aligned} &\vert F_{\mathrm{model},i} - F_{\mathrm{true},i} \vert > \delta _{F,i},\quad \text{for $k>N$}, i=1,2, \end{aligned}$$
$$\begin{aligned} &\vert K_{\mathrm{model}} - K_{\mathrm{true}} \vert > \delta _{K} \quad\text{for $k>N$}. \end{aligned}$$
Using the same concept as for the case of \(t=0\), since the functions \(a_{i,j,\mathrm{model}}, i=1,2,3\), are Lipschitz functions, there exists \(dt^{*}>0\) such that
$$\begin{aligned} &F_{1,\mathrm{model},k} - F_{1,\mathrm{true}} < \delta _{1}^{*}, \\ &F_{2,\mathrm{model},k} - F_{2,\mathrm{true}} < \delta _{2}^{*}, \\ &K_{\mathrm{model},k} - K_{\mathrm{true}} < \delta _{3}^{*}. \end{aligned}$$
The above statement implies that it is possible to find a smaller time \(t< t_{0}\) such that the limit of \(F_{1,\mathrm{model},k}\) does not approach \(F_{1,\mathrm{true}}(t)\), the limit of \(F_{2,\mathrm{model},k}\) does not approach \(F_{2,\mathrm{true}}(t)\) and \(K_{\mathrm{model},k}\) does not approach \(K_{\mathrm{true}}(t)\). This contradicts the assumption that \(t_{0}\) is the smallest such value. The statement is thus proved for the case \(t>0\). □
Identification with the pure linear model
The analyses given in the previous section mean that the use of constraints on the identified parameters is possible. This is because the nonlinear least squares problem has been converted, through integral reconstruction, to a corresponding linear least squares problem, which is guaranteed to yield a unique solution. Two possible constraints exist for the models considered in this work, one for the non-delay model and the other for the delay model.
Non-delay model
For the non-delay model, the simplest such constraint is the set of equality constraints \(c_{1}=c_{2}=\cdots=c_{n}\) and \(k_{1}=k_{2}=\cdots=k_{n}\). In other words, the damping and stiffness parameters are assumed constant across the entire data set. This constraint also corresponds to a fully linear model without delay. Setting \(\theta (t) = \theta _{\mathrm{model}}(t)\) and \(u(t) = u_{\mathrm{applied}}(t)\) for the times \(t \in \{ t_{0},\ldots,t_{N} \}\) gives a matrix equation which is written as follows:
$$\begin{aligned} &\mathbf{M}_{\mathrm{lin}} \mathbf{p}_{\mathrm{lin}}= \mathbf{b}_{\mathrm{lin}}, \end{aligned}$$
$$\begin{aligned} &\mathbf{M}_{\mathrm{lin}}= \begin{bmatrix} 1 & t_{1} - t_{0} & -I_{t_{0},t_{1}}^{(1)} \theta _{\mathrm{meas}}(t) & -I_{t_{0},t_{1}}^{(2)} \theta _{\mathrm{meas}}(t) &I_{t_{0},t_{1}}^{(2)} u_{\mathrm{appl}}(t_{1}) \\ 1 & t_{2} - t_{0} & -I_{t_{0},t_{2}}^{(1)} \theta _{\mathrm{meas}}(t) & -I_{t_{0},t_{2}}^{(2)} \theta _{\mathrm{meas}}(t) &I_{t_{0},t_{2}}^{(2)} u_{\mathrm{appl}}(t_{2}) \\ \vdots & \vdots &\vdots & \vdots & \vdots \\ 1 & t_{N} - t_{0} & -I_{t_{0},t_{N}}^{(1)} \theta _{\mathrm{meas}}(t) & -I_{t_{0},t_{N}}^{(2)} \theta _{\mathrm{meas}}(t) &I_{t_{0},t_{N}}^{(2)} u_{\mathrm{appl}}(t_{N}) \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} &\mathbf{b}_{\mathrm{lin}}= \begin{bmatrix} \theta _{\mathrm{meas}}(t_{1}) \\ \theta _{\mathrm{meas}}(t_{2}) \\ \vdots \\ \theta _{\mathrm{meas}}(t_{N}) \end{bmatrix}. \end{aligned}$$
Equations (64)–(66) can duly be solved by linear least squares to yield the linear model without delay.
Delay model
Applying the equality constraints to the delay case means that \(a_{1,1}=a_{1,2}=\cdots=a_{1,n}\), \(a_{2,1}=a_{2,2}=\cdots=a_{2,n}\) and \(a_{3,1}=a_{3,2}=\cdots=a_{3,n}\). Setting \(\theta (t) = \theta _{\mathrm{model}}(t)\) and \(u(t) = u_{\mathrm{applied}}(t)\) for the times of \(t \in \{ t_{0},\ldots,t_{N} \}\), as was done for the non-delay case, now gives a matrix equation which is written as follows:
$$\begin{aligned} \mathbf{M}_{\mathrm{lin},d} \mathbf{p}_{\mathrm{lin},d} = \mathbf{b}_{\mathrm{lin},d}, \end{aligned}$$
$$\begin{aligned} &\mathbf{M}_{\mathrm{lin},d} = \begin{bmatrix} \mathbf{1} & (\mathbf{t}-t_{0}) &(\mathbf{t}-t_{0})^{2} & - \mathbf{I}_{1,2,3} & \mathbf{u}_{\mathrm{appl}} \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} &\mathbf{I}_{1,2,3} = \begin{bmatrix} I_{t_{0},t_{1}}^{(1)} \theta _{\mathrm{meas}}(t) & I_{t_{0},t_{1}}^{(2)} \theta _{\mathrm{meas}}(t)&I_{t_{0},t_{1}}^{(3)} \theta _{\mathrm{meas}}(t) \\ I_{t_{0},t_{2}}^{(1)} \theta _{\mathrm{meas}}(t) & I_{t_{0},t_{2}}^{(2)} \theta _{\mathrm{meas}}(t)&I_{t_{0},t_{2}}^{(3)} \theta _{\mathrm{meas}}(t) \\ \vdots &\vdots &\vdots \\ I_{t_{0},t_{N}}^{(1)} \theta _{\mathrm{meas}}(t) & I_{t_{0},t_{N}}^{(2)} \theta _{\mathrm{meas}}(t)&I_{t_{0},t_{N}}^{(3)} \theta _{\mathrm{meas}}(t) \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} &\mathbf{t} = \begin{bmatrix} t_{1} \\ t_{2} \\ \vdots \\ t_{N} \end{bmatrix}, \qquad\mathbf{1} :=\text{ vector of ones}, \qquad\mathbf{u}_{\mathrm{appl}} = \begin{bmatrix} u_{\mathrm{appl}}(t_{1}) \\ u_{\mathrm{appl}}(t_{2}) \\ \vdots \\ u_{\mathrm{appl}}(t_{N}) \end{bmatrix}. \end{aligned}$$
In a similar fashion to the non-delay model, Equations (67)–(70) are again solved by linear least squares. The result from this solving process yields the unknown parameters as well as the model for the delay case. Figure 4 depicts the algorithm for identifying the linear model, both for the non-delay and the delay cases.
Algorithm 3: algorithm for identifying the linear model for the delay and non-delay cases
Figure 5 shows the schematic of the specialised robot arm system used for orchard spraying. The arm contains two links where the first link is denoted by \(L_{1}\) and the second link by \(L_{2}\) for future reference. The link \(L_{1}\) is connected to a rotational joint \(J_{1}\) on one end, and joint \(J_{2}\) on the other end. Each joint is tethered to the next one and each has wires which are, in turn, connected to a wireless transmitter and a central microcontroller for receiving a PWM command wirelessly from a workstation.
The schematic of the arm system
A separate data acquisition system is developed, in which an inertial measurement unit (IMU) is fixed at the centre of gravity of each arm. The IMU has an on-board accelerometer as well as a gyroscope, and is connected to a microcontroller via an I2C protocol; the microcontroller of the acquisition system is in turn connected to the workstation via a wireless transmitter. Once a PWM command is sent from the workstation, the acquisition system transmits the angular velocity ω and angular acceleration α signals for the roll, pitch and yaw axes back to the workstation.
Data preparation and preprocessing
The data used are the angular velocity data for the roll, pitch and yaw axes of both links, where the command is a step from 25 degrees to 35 degrees. In symbols, the command \(u(t)\) is given as follows:
$$\begin{aligned} u(t) = 25 + 10 H(t-t_{s}), \end{aligned}$$
where \(H(t)\) is the Heaviside step function and \(t_{s}=8.5\text{ s}\) is the time of the step change. Figure 6 shows the raw data resulting from the command of Equation (71) for \(L_{1}\); Figure 7 depicts the corresponding raw data for \(L_{2}\). Since these data have to be integrated with respect to time to yield the angular positions for the three axes, incurring integration drift in the process, a compensation mechanism must be in place. In this respect, consider the following drift model for the roll, pitch and yaw axes of both links:
$$\begin{aligned} &\theta _{L_{1},\mathrm{yaw}}(t) = a_{L_{1},\mathrm{yaw}} + b_{L_{1},\mathrm{yaw}} t, \end{aligned}$$
$$\begin{aligned} &\theta _{L_{1},\mathrm{pitch}}(t) = a_{L_{1},\mathrm{pitch}} + b_{L_{1},\mathrm{pitch}} t, \end{aligned}$$
$$\begin{aligned} &\theta _{L_{1},\mathrm{roll}}(t) = a_{L_{1},\mathrm{roll}} + b_{L_{1},\mathrm{roll}} t. \end{aligned}$$
As an example, to find the parameters of Equation (72a), setting \(\theta _{L_{1},\mathrm{yaw}} \equiv \theta _{L_{1},\mathrm{yaw}, \mathrm{meas}}(t)\) for \(t \in \{t_{0},\ldots,t_{N}\}\) yields the matrix equation
$$\begin{aligned} &\mathbf{S} \mathbf{p} = \mathbf{b}_{\mathrm{preproc}}, \end{aligned}$$
$$\begin{aligned} &\mathbf{S} = \begin{bmatrix} 1 & t_{1} \\ 1 & t_{2} \\ \vdots & \vdots \\ 1 & t_{N} \end{bmatrix},\qquad \mathbf{b}_{\mathrm{preproc}} = \begin{bmatrix} \theta _{L_{1},\mathrm{yaw}}(t_{1}) \\ \theta _{L_{1},\mathrm{yaw}}(t_{2}) \\ \vdots \\ \theta _{L_{1},\mathrm{yaw}}(t_{N}) \end{bmatrix}. \end{aligned}$$
Solving Equations (73)–(74) by linear least squares yields the parameters of vector p, which can be substituted into Equation (72a) to obtain the model for \(\theta _{L_{1},\mathrm{yaw}}\). This model is then subtracted from the numerically integrated \(\omega _{L_{1},\mathrm{yaw}}\) data to obtain the data required for parameter identification. The procedure is repeated for \(\theta _{L_{1},\mathrm{roll}}\), \(\theta _{L_{1},\mathrm{pitch}}\), \(\theta _{L_{2},\mathrm{roll}}\), \(\theta _{L_{2},\mathrm{pitch}}\) and \(\theta _{L_{2},\mathrm{yaw}}\). The result of this preprocessing is shown in Figs. 8–9.
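For illustration, a minimal sketch of this drift-compensation step, assuming uniform sampling and trapezoidal integration (the function names are illustrative, not from the paper):

```python
import numpy as np

def detrend_angle(omega, t):
    """Integrate angular velocity to angle, then remove the linear drift model
    of Equation (72) fitted by the least squares of Equations (73)-(74)."""
    dt = t[1] - t[0]
    theta_raw = np.concatenate(
        ([0.0], np.cumsum(0.5 * (omega[1:] + omega[:-1]) * dt)))
    S = np.column_stack([np.ones_like(t), t])          # columns [1, t] of Eq. (74)
    p, *_ = np.linalg.lstsq(S, theta_raw, rcond=None)  # p = [a, b] of Eq. (72)
    return theta_raw - S @ p                           # drift-compensated angle
```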
Raw angular velocity data for roll, pitch and yaw axes of link \(L_{1}\)
Raw angular movement data for roll, pitch and yaw axes of link \(L_{1}\)
It is apparent from Figs. 8–9 that a step command at joint \(J_{2}\) from 25 to 35 degrees induces significant vibration, most visible on the roll and yaw axes of link \(L_{2}\). The trend for both signals is an exponential decay, which can be fitted by the methods presented in Sect. 2. The pitch vibration for both links, however, is infinitesimal; its signal is swamped by the accelerometer noise and is therefore unusable. Hence the presentation and discussion of results will focus on applying the methods of Sect. 2 to link \(L_{2}\).
Parameter identification
Identification with non-delay model
Consider the application of Algorithm 1 to the roll and yaw data of \(L_{2}\), where the vibration is most visible. The data were taken from \(t=8.5\text{ s}\) to \(t=12\text{ s}\), with a sampling period of \(T_{s} = 0.02\text{ s}\). The time interval Δt is taken to be 0.1 s, which gives \(N=35\) values for \(c(t)\) and \(k(t)\). These values are plotted in Figs. 10–11 for the roll and yaw axes, respectively.
Damping and stiffness for the roll axis
Damping and stiffness for the yaw axis
It is apparent from Figs. 10 and 11 that both \(c_{\mathrm{roll}}\) and \(c_{\mathrm{yaw}}\) exhibit an increasing trend. The stiffness \(k_{\mathrm{roll}}\) shows an exponentially decreasing trend as time progresses, while \(k_{\mathrm{yaw}}\) does not increase beyond \(k_{\mathrm{yaw}}=15\). These trends show that it is possible to correlate \(c_{\mathrm{roll}}\), \(c_{\mathrm{yaw}}\), \(k_{\mathrm{roll}}\) and \(k_{\mathrm{yaw}}\) to energy used in order to accurately model the responses.
Damping and stiffness as a function of energy
A change in damping and stiffness with respect to time as was seen in Figs. 10 and 11 suggests that there must also be a change of kinetic energy used by the system, possibly to overcome the stiction in the gears as well as mechanical backlashes. To model these changes in energy, suppose that the damping and stiffnesses are modelled in terms of the kinetic energy-like function
$$\begin{aligned} &c_{\mathrm{roll}}\equiv c_{\mathrm{roll}}\bigl(v^{2}\bigr),\qquad k_{\mathrm{roll}} \equiv k_{\mathrm{roll}}\bigl(v^{2}\bigr), \end{aligned}$$
$$\begin{aligned} &c_{\mathrm{yaw}}\equiv c_{\mathrm{yaw}}\bigl(v^{2}\bigr),\qquad k_{\mathrm{yaw}} \equiv k_{\mathrm{yaw}}\bigl(v^{2}\bigr). \end{aligned}$$
To find the relationships of the form given in Equations (76) and (77), the dampings and stiffnesses are then plotted against the average velocity-squared function defined as follows:
$$\begin{aligned} \bar{v}_{i}^{2} = \int _{T_{i-1}}^{T_{i}} v^{2}(t) \,dt. \end{aligned}$$
Figures 12 and 13 plot the damping and stiffness against the average kinetic energy-like function. It is seen from these figures that as the value of the kinetic energy-like function approaches zero, the damping \(c_{\mathrm{roll}}\) quickly increases with respect to the energy-like function. This slope becomes significantly flatter as the value of the energy-like function approaches about 20. The value of \(k_{\mathrm{roll}}\) exhibits a diode-like phenomenon: \(k_{\mathrm{roll}}\) is close to zero initially, then increases exponentially once the kinetic energy-like function approaches 40. This phenomenon is seen again in \(c_{\mathrm{yaw}}\), whose value is close to zero initially and increases exponentially with a decreasing slope as its value reaches about 0.7. The values of \(k_{\mathrm{yaw}}\) are close to zero initially and quickly approach 15 once the kinetic energy-like function reaches 0.2. Each of the four parameters exhibits a nonlinear phenomenon, which can be mathematically described by a nonlinear piecewise function
$$\begin{aligned} &c_{\mathrm{roll}}= \textstyle\begin{cases} m_{1} (1 - \exp (b_{1} v^{2}) ) &\text{if } 0 < v^{2} < 30, \\ m_{2} v^{2} + b_{2} & \text{if } 30 < v^{2} < 300, \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned} &k_{\mathrm{roll}}= \textstyle\begin{cases} 0 &\text{if } 0 < v^{2} < 40, \\ a_{1} (v^{2})^{3} + a_{2} (v^{2})^{2} + a_{3} v^{2} + a_{4} & \text{if } 40 < v^{2} < 300, \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned} &c_{\mathrm{yaw}}= \textstyle\begin{cases} b_{1} (v^{2})^{3} + b_{2} (v^{2})^{2} + b_{3} v^{2} + b_{4} &\text{if } 0 < v^{2} < 30, \\ m_{3} v^{2} + b_{3} & \text{if } 30 < v^{2} < 300, \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned} &k_{\mathrm{yaw}}= a \tanh \bigl(\beta v^{2} - \gamma \bigr). \end{aligned}$$
The parameters of Equations (79)–(82) were identified through nonlinear regression analyses. Figures 14–15 compare the \(c_{\mathrm{roll}}\), \(k_{\mathrm{roll}}\), \(c_{\mathrm{yaw}}\) and \(k_{\mathrm{yaw}}\) data against the fitted relations, showing a close fit between the parameters and the models.
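As an illustration of this kind of nonlinear regression, a sketch fitting the tanh relation of Equation (82) with SciPy's curve_fit; the data values and initial guess below are placeholders, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def k_yaw_model(v2, a, beta, gamma):
    """Equation (82): k_yaw = a * tanh(beta * v^2 - gamma)."""
    return a * np.tanh(beta * v2 - gamma)

# Placeholder data standing in for the identified k_yaw values from Algorithm 1.
v2_data = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
k_yaw_data = np.array([0.5, 2.0, 9.0, 13.5, 14.8, 15.0])

params, _ = curve_fit(k_yaw_model, v2_data, k_yaw_data, p0=[15.0, 4.0, 0.2])
print(params)  # fitted [a, beta, gamma]
```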
Damping and stiffness for the roll axis against the average kinetic energy-like function
Damping and stiffness for the roll axis against the fitted model of Equations (79) and (80)
Damping and stiffness for the yaw axis against the fitted model of Equations (81) and (82)
Once the nonlinear relationships for \(c_{\mathrm{roll}}\), \(k_{\mathrm{roll}}\), \(c_{\mathrm{yaw}}\) and \(k_{\mathrm{yaw}}\) have been modelled, they can be substituted into the original differential equation, Equation (1), to construct a nonlinear model for the long arm rover. To measure the performance of the matches, a mean absolute error metric is used. Figures 16–17 plot the true angular movement data for the yaw and roll axes against the resimulations of Equation (1), with \(c_{\mathrm{roll}}\), \(k_{\mathrm{roll}}\), \(c_{\mathrm{yaw}}\) and \(k_{\mathrm{yaw}}\) given by Equations (79)–(82). It is apparent from both figures that the model provides an accurate match to the data: the percentage match is 97.8% for the roll data and 97.5% for the yaw data.
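The paper does not state exactly how the percentage match is derived from the mean absolute error; one plausible choice, normalising by the data range, is sketched below:

```python
import numpy as np

def percent_match(theta_data, theta_model):
    """Percentage match: mean absolute error normalised by the data range."""
    mae = np.mean(np.abs(theta_data - theta_model))
    return 100.0 * (1.0 - mae / (theta_data.max() - theta_data.min()))
```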
Model match between the identified nonlinear model against the measured roll angle response
Model match between the identified nonlinear model against the measured yaw angle response
Identification with the delay model
Consider now the application of Algorithm 2 to the same angular roll and yaw data of Fig. 9. Here the data are again taken from \(t=8.5\text{ s}\) to \(t=12\text{ s}\). The value of Δt is chosen to be 0.1167 s, yielding \(N=30\) values for each of the coefficients \(a_{1}(t)\), \(a_{2}(t)\) and \(a_{3}(t)\). These values are plotted in Figs. 18 and 19 for the roll and yaw axes respectively.
The identified coefficients for the yaw axis
The identified coefficients for the roll axis
It is seen from Figs. 18 and 19 that the value of \(a_{2,\mathrm{yaw}}(t)\) is small when \(t<0.5\) seconds and saturates at \(a_{2,\mathrm{yaw}}(t)=15\). The functions \(a_{1,\mathrm{roll}}(t)\) and \(a_{1,\mathrm{yaw}}(t)\) take on very small values initially, but increase exponentially as time progresses. The value of \(a_{3,\mathrm{roll}}(t)\), however, stays at 15 for about 2.5 seconds, and thereafter decreases exponentially to zero. These figures suggest that the changes in the \(a_{1}\), \(a_{2}\), \(a_{3}\) parameters can be explained by correlating them with the kinetic energy-like function, as was done in the non-delay case. Specifically, we again suppose that the \(a_{1}\), \(a_{2}\) and \(a_{3}\) functions are defined in terms of the kinetic energy-like function:
$$\begin{aligned} &a_{1,\mathrm{yaw}}\equiv a_{1,\mathrm{yaw}}\bigl(v^{2}\bigr),\qquad a_{2,\mathrm{yaw}} \equiv a_{2,\mathrm{yaw}}\bigl(v^{2}\bigr),\qquad a_{3,\mathrm{yaw}} \equiv a_{3,\mathrm{yaw}}\bigl(v^{2}\bigr), \end{aligned}$$
$$\begin{aligned} &a_{1,\mathrm{roll}}\equiv a_{1,\mathrm{roll}}\bigl(v^{2}\bigr),\qquad a_{2,\mathrm{roll}} \equiv a_{2,\mathrm{roll}}\bigl(v^{2}\bigr),\qquad a_{3,\mathrm{roll}} \equiv a_{3,\mathrm{roll}}\bigl(v^{2}\bigr). \end{aligned}$$
In this light, Figs. 20–21 plot the \(a_{1,\mathrm{yaw}}\), \(a_{2,\mathrm{yaw}}\), \(a_{3,\mathrm{yaw}}\), \(a_{1,\mathrm{roll}}\), \(a_{2,\mathrm{roll}}\) and \(a_{3,\mathrm{roll}}\) responses as a function of the average kinetic energy. It is apparent from these figures that both \(a_{1,\mathrm{yaw}}\) and \(a_{1,\mathrm{roll}}\) functions are close to zero for energy levels less than 0.15, and they increase at an exponential rate thereafter. Both functions appear to saturate at a higher energy point. The functions \(a_{2,\mathrm{yaw}}\) and \(a_{2,\mathrm{roll}}\) and \(a_{3,\mathrm{yaw}}\) and \(a_{3,\mathrm{roll}}\) all follow hyperbolic tangent trends, where the parameters take on a very small value for low kinetic energy and exponentially increase once a threshold value is reached, before saturating at a higher energy value.
The six functions clearly exhibit nonlinear phenomena, which are again explained by nonlinear piecewise functions of the form
$$\begin{aligned} &a_{1,\mathrm{yaw}}= \textstyle\begin{cases} 0.1 &\text{if } 0 < v^{2} < 0.1, \\ m_{2} v^{2} + b_{2} & \text{if } 0.1 < v^{2} < 0.65, \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned} &a_{2,\mathrm{yaw}}= \alpha _{a2,\mathrm{yaw}} \tanh \bigl(\beta _{a2,\mathrm{yaw}} v^{2} + \gamma _{a2,\mathrm{yaw}}\bigr), \end{aligned}$$
$$\begin{aligned} &a_{1,\mathrm{roll}}= \alpha _{a1,\mathrm{roll}} \tanh \bigl(\beta _{a1,\mathrm{roll}} v^{2} + \gamma _{a1,\mathrm{roll}}\bigr), \end{aligned}$$
$$\begin{aligned} &a_{3,\mathrm{roll}}= \alpha _{a3,\mathrm{roll}} \tanh \bigl(\beta _{a3,\mathrm{roll}} v^{2} + \gamma _{a3,\mathrm{roll}}\bigr). \end{aligned}$$
The parameters of Equations (85)–(90) were again identified through nonlinear regression analyses. Figures 22–23 compare the \(a_{i,\mathrm{yaw}}\) and \(a_{i,\mathrm{roll}}\), \(i=1,2,3\), data against the fitted models, illustrating a close fit between the two.
Comparison between the coefficients against their models for the roll axis
Comparison between the coefficients against their models for the yaw axis
The nonlinear models for the roll and yaw axes can now be written as follows:
$$\begin{aligned} &\theta _{\mathrm{roll}}'''(t)+a_{1,\mathrm{roll}}(kE) \theta ''_{\mathrm{roll}}(t) + a_{2,\mathrm{roll}}(kE) \theta '_{\mathrm{roll}}(t) + a_{3,\mathrm{roll}}(kE) \theta _{\mathrm{roll}}(t) = b u(t). \end{aligned}$$
$$\begin{aligned} &\theta _{\mathrm{yaw}}'''(t)+a_{1,\mathrm{yaw}}(kE) \theta ''_{\mathrm{yaw}}(t) + a_{2,\mathrm{yaw}}(kE) \theta '_{\mathrm{yaw}}(t) + a_{3,\mathrm{yaw}}(kE) \theta _{\mathrm{yaw}}(t) = b u(t). \end{aligned}$$
Figures 24 and 25 depict the comparisons between the fitted models of Equations (91) and (92) and the angular movement data. The fit is again very close: the mean absolute error was 0.5% for the yaw angle and 0.4% for the roll angle. Hence the nonlinear time delay model yields a more accurate description of the long arm rover than its non-delay counterpart.
Model match between the identified nonlinear model with delay against the measured roll angle response
Model match between the identified nonlinear model with delay against the measured yaw angle response
System identification of the linear model
Consider the application of Algorithm 3 of Fig. 4 to the roll and yaw data of Fig. 9, using the non-delay model. The identified parameters are \(c_{r}=8.48\) and \(k_{r}=30.2\); the corresponding identified parameters for the yaw data are \(c_{y}=4.99\) and \(k_{y}=19.87\). Figure 26 plots the response data against the identified models. It is seen that the non-delay linear model captures the descent of both angles accurately, but cannot capture the oscillation that occurs after \(t=0.5\) seconds.
Yaw and roll angle responses against the identified linear model
Figure 27 compares the true measured data against both linear models. It is seen that the delay model does indeed improve on the fit of the non-delay model, as evidenced by an improved mean squared error of 0.0028 for the yaw angle response, compared with 0.0036 for the non-delay model. Neither model, however, could capture the oscillations occurring after \(t=1.5\) seconds.
Yaw and roll angle responses against the identified linear model with delay
Validation
An important step in modelling and identification is the process of validation. To subject the proposed algorithms to the validation test, the algorithms are applied separately to the step responses with the following inputs:
$$\begin{aligned} &u_{40}(t)= 40 + 20 H(t-t_{s,40}), \end{aligned}$$
$$\begin{aligned} &u_{60}(t)= 60 + 20 H(t-t_{s,60}), \end{aligned}$$
where \(t_{s,40} = 5.6\text{ s}\) and \(t_{s,60} = 4.7\text{ s}\) are the step change times for each of the data sets. The identified parameters for the linear constant damping model without delay are as follows:
$$\begin{aligned} &c_{\mathrm{yaw},40}= 3.45,\qquad k_{\mathrm{yaw},40} = 13.30,\qquad c_{\mathrm{roll},40} = 8.46,\qquad k_{\mathrm{roll},40} = 36.7, \end{aligned}$$
$$\begin{aligned} &c_{\mathrm{yaw},60}= 4.74,\qquad k_{\mathrm{yaw},60} = 15.07,\qquad c_{\mathrm{roll},60} = 5.93,\qquad k_{\mathrm{roll},60} = 16.3. \end{aligned}$$
The identified parameters for the linear constant model with delay are as follows:
$$\begin{aligned} &a_{1,\mathrm{yaw},40}= 4.35,\qquad a_{2,\mathrm{yaw},40} = 28.4,\qquad a_{3,\mathrm{yaw},40} = 38.6 , \\ &a_{1,\mathrm{roll},40}= 13.45,\qquad a_{2,\mathrm{roll},40} = 123.30,\qquad a_{3,\mathrm{roll},40} = 620.3, \end{aligned}$$
$$\begin{aligned} &a_{1,\mathrm{yaw},60}= 5.77,\qquad a_{2,\mathrm{yaw},60} = 22.4,\qquad a_{3,\mathrm{yaw},60} = 29.22, \\ &a_{1,\mathrm{roll},60}= 9.09,\qquad a_{2,\mathrm{roll},60} = 53.4,\qquad a_{3,\mathrm{roll},60} = 126.0. \end{aligned}$$
For the purpose of comparison, we define the following models for the linear and nonlinear categories. For the linear models without delay, the models are:
$$\begin{aligned} LM_{\mathrm{yaw},j}\equiv{}& \text{ Model of Equation (1) with $c=c_{\mathrm{yaw},j}$, $k=k_{\mathrm{yaw},j}$ }, \\ & j=\{25,40,60\}. \end{aligned}$$
$$\begin{aligned} LM_{\mathrm{roll},j}\equiv {}&\text{ Model of Equation (1) with $c=c_{\mathrm{roll},j}$, $k=k_{\mathrm{roll},j}$ }, \\ & j=\{25,40,60\}. \end{aligned}$$
For the linear model with delay, the models are:
$$\begin{aligned} &LMD_{\mathrm{yaw},j}\equiv \text{ Model of Equation (27) with $a_{1}=a_{1,\mathrm{yaw},j}$,} \\ &{}\phantom{LMD_{\mathrm{yaw},j}\equiv}\text{$a_{2}=a_{2,\mathrm{yaw},j}$, $a_{3}=a_{3,\mathrm{yaw},j}$,}\quad j=\{25,40,60 \}. \end{aligned}$$
$$\begin{aligned} &LMD_{\mathrm{roll},j}\equiv \text{ Model of Equation (27) with $a_{1}=a_{1,\mathrm{roll},j}$,} \\ &{}\phantom{LMD_{\mathrm{roll},j}\equiv}\text{$a_{2}=a_{2,\mathrm{roll},j}$, $a_{3}=a_{3,\mathrm{roll},j}$,}\quad j=\{25,40,60 \}. \end{aligned}$$
For the nonlinear models without delay, the models are:
$$\begin{aligned} &NLM_{\mathrm{yaw},j}\equiv \text{ Model of Equation (1) with $c=c_{\mathrm{yaw},j}\bigl(v^{2}\bigr)$, $k=k_{\mathrm{yaw},j} \bigl(v^{2}\bigr)$,} \\ &\phantom{NLM_{\mathrm{yaw},j}\equiv} j=\{25,40,60\}, \end{aligned}$$
$$\begin{aligned} &NLM_{\mathrm{roll},j}\equiv \text{ Model of Equation (1) with $c=c_{\mathrm{roll},j}\bigl(v^{2}\bigr)$, $k=k_{\mathrm{roll},j} \bigl(v^{2}\bigr)$,} \\ &\phantom{NLM_{\mathrm{roll},j}\equiv} j=\{25,40,60\}. \end{aligned}$$
For the nonlinear models with delay, the models are:
$$\begin{aligned} &NLMD_{\mathrm{yaw},j}\equiv \text{ Model of Equation (27) with $a_{1}=a_{1,\mathrm{yaw},j}\bigl(v^{2}\bigr)$,} \\ &\phantom{NLMD_{\mathrm{yaw},j}\equiv} \text{$a_{2}=a_{2,\mathrm{yaw},j}\bigl(v^{2}\bigr)$, $a_{3}=a_{3,\mathrm{yaw},j}\bigl(v^{2}\bigr)$},\quad j= \{25,40,60 \}, \end{aligned}$$
(100a)
$$\begin{aligned} &NLMD_{\mathrm{roll},j}\equiv \text{ Model of Equation (27) with $a_{1}=a_{1,\mathrm{roll},j}\bigl(v^{2}\bigr)$,} \\ & \phantom{NLMD_{\mathrm{roll},j}\equiv}\text{$a_{2}=a_{2,\mathrm{roll},j}\bigl(v^{2}\bigr)$, $a_{3}=a_{3,\mathrm{roll},j}\bigl(v^{2}\bigr)$},\quad j= \{25,40,60\}. \end{aligned}$$
(100b)
To compare the linear models without delay against the nonlinear models, the models \(LM_{\mathrm{yaw},40}\), \(LM_{\mathrm{roll},40}\), \(LM_{\mathrm{yaw},60}\) and \(LM_{\mathrm{roll},60}\), as well as their nonlinear counterparts, are tested against the 25–35 degrees step change data for the yaw and roll axes, respectively. Similarly, the 40–60 degrees step change data are used as validation data for the models \(LM_{\mathrm{yaw},25}\), \(LM_{\mathrm{roll},25}\), \(LM_{\mathrm{yaw},60}\) and \(LM_{\mathrm{roll},60}\), as well as their nonlinear counterparts. Lastly, the 60–80 degrees step change data are used as validation data for the models \(LM_{\mathrm{yaw},25}\), \(LM_{\mathrm{roll},25}\), \(LM_{\mathrm{yaw},40}\) and \(LM_{\mathrm{roll},40}\) and their nonlinear counterparts. Table 1 depicts the mean absolute error of the validating data against the models for the roll responses, and Table 2 does the same for the yaw responses. It is apparent from these tables that the nonlinear models give significantly smaller errors for both the yaw and roll responses, for all the validating data.
Table 1 Error comparison of the linear and nonlinear models without delay for the roll responses
Table 2 Error comparison of the linear and nonlinear models without delay for the yaw responses
To compare the linear models with delay, the models \(LMD_{\mathrm{yaw},40}\) and \(LMD_{\mathrm{roll},40}\), and \(LMD_{\mathrm{yaw},60}\) and \(LMD_{\mathrm{roll},60}\), together with their nonlinear counterparts, are again tested against the 25–35 degrees step change data, as was done in the non-delay case. The 40–60 degrees step change data are used as validation data for the models \(LMD_{\mathrm{yaw},25}\), \(LMD_{\mathrm{roll},25}\), \(LMD_{\mathrm{yaw},60}\) and \(LMD_{\mathrm{roll},60}\), as well as their nonlinear versions. The 60–80 degrees step change data are used as validation data for the models \(LMD_{\mathrm{yaw},25}\), \(LMD_{\mathrm{roll},25}\), \(LMD_{\mathrm{yaw},40}\) and \(LMD_{\mathrm{roll},40}\), as well as their nonlinear counterparts. Table 3 depicts the mean absolute error of the validating data against the models for the roll responses, while Table 4 gives the corresponding errors for the yaw responses. It is again apparent that the nonlinear models give significantly smaller errors than the corresponding linear models. Note also that these errors are smaller than the corresponding errors for the non-delay case, thereby completing the validation of the proposed models. Note, finally, that the concept undertaken in this paper, for both families of models, is to initially assume a flexible model structure and, through system identification, unveil more complicated relationships between the underlying physical quantities. This concept differs from the ones normally seen in the literature, where a complex structure of vibration-induced models, including finite elements and statistical distributions, must first be assumed.
Table 3 Error comparison of the linear and nonlinear models with delay for the roll responses
Table 4 Error comparison of the linear and nonlinear models with delay for the yaw responses
Backstepping controller design
Once the models for the long arm rover have been attained, a controller can be designed for both the non-delay model and the model with delay. To simplify the control design and as a proof of concept, the roll and yaw angles are controlled separately. Since both models can be transformed into strict feedback form, a backstepping controller is suitable. This section details the control design and presents some results.
Consider the system of Equation (1), with the understanding that the variable θ denotes either \(\theta _{\mathrm{roll}}\) or \(\theta _{\mathrm{yaw}}\), and the parameters \(c \equiv c_{\mathrm{roll}}(KE)\) or \(c_{\mathrm{yaw}}(KE)\) and \(k \equiv k_{\mathrm{roll}}(KE)\) or \(k_{\mathrm{yaw}}(KE)\). The states are thus:
$$\begin{aligned} x_{1} = \theta,\qquad x_{2} = \theta '. \end{aligned}$$
The state space equations are then
$$\begin{aligned} &\dot{x}_{1}= x_{2}, \end{aligned}$$
$$\begin{aligned} &\dot{x}_{2}= -c x_{2} - k x_{1} + b u(t). \end{aligned}$$
In order to stabilise the first equation, we can then define two new state variables
$$\begin{aligned} z_{1} = x_{1},\qquad z_{2} = x_{2} - \alpha _{1}, \end{aligned}$$
where the variable \(\alpha _{1}\) represents a virtual controller. Define a Lyapunov candidate function
$$\begin{aligned} V_{1} = \frac{1}{2} z_{1}^{2}. \end{aligned}$$
Differentiating Equation (102) along the trajectory of Equation (101a)–(101b) yields
$$\begin{aligned} \dot{V}_{1} = z_{1} \dot{z}_{1} = z_{1} (z_{2} + \alpha _{1}). \end{aligned}$$
The variable \(z_{2}\) inside the parentheses of Equation (103) will be driven to zero in the next stage of the design. Hence, choosing the virtual controller \(\alpha _{1}=-c_{1} z_{1}\), where \(c_{1}>0\), and under the assumption that \(z_{2}=0\), we obtain \(\dot{V}_{1} = -c_{1} z_{1}^{2} <0\) for \(z_{1}\neq 0\), which means that \(x_{1}\) is globally exponentially stable.
The derivative of \(z_{2}\) with respect to time is
$$\begin{aligned} \dot{z}_{2}={}& \dot{x}_{2} - \dot{\alpha } _{1} \\ ={}&{-}c x_{2} -k x_{1} + b u(t) + c_{1} (z_{2} + \alpha _{1}). \end{aligned}$$
The new Lyapunov function \(V_{2}\) is now defined as follows:
$$\begin{aligned} V_{2} = V_{1} + \frac{1}{2} z_{2}^{2}. \end{aligned}$$
$$\begin{aligned} \dot{V}_{2} &= \dot{V}_{1} + z_{2} \dot{z}_{2} \\ &= -c_{1} z_{1}^{2} +z_{1} z_{2} + z_{2} \bigl( -c x_{2} - k x_{1} + b u(t) + c_{1} z_{2} + c_{1} \alpha _{1}\bigr). \end{aligned}$$
Designing \(u(t)\) as \(u(t) = \frac{1}{b} (c x_{2} + k x_{1} - z_{1} -c_{1} \alpha _{1} -(c_{1} + c_{2}) z_{2} )\) then makes the derivative of the Lyapunov function \(\dot{V}_{2} = -c_{1} z_{1}^{2} -c_{2} z_{2}^{2} <0\), ensuring the global asymptotic stability of the closed loop system.
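As an illustration of this design, the sketch below simulates the closed loop of Equations (101a)–(101b) under the control law just derived; the plant parameters \(c\), \(k\), \(b\) and the gains are arbitrary placeholder values, not the identified ones from this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

c, k, b = 2.0, 5.0, 1.0       # placeholder plant parameters
c1, c2 = 5.0, 5.0             # backstepping gains, both > 0

def closed_loop(t, x):
    x1, x2 = x
    z1 = x1
    alpha1 = -c1 * z1                      # virtual controller
    z2 = x2 - alpha1
    # Control law u(t) from the Lyapunov design above
    u = (c * x2 + k * x1 - z1 - c1 * alpha1 - (c1 + c2) * z2) / b
    return [x2, -c * x2 - k * x1 + b * u]

# Start from a 4-degree offset and zero angular velocity
sol = solve_ivp(closed_loop, (0.0, 3.0), [np.deg2rad(4.0), 0.0])
print(sol.y[0, -1])  # the angle settles at the zero equilibrium
```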
The generic form of the system to be controlled under the delay model is
$$\begin{aligned} \theta '''(t) + a_{1} \theta ''(t) + a_{2} \theta '(t) + a_{3} \theta (t) = \rho u(t). \end{aligned}$$
Again it is understood that \(a_{1} \equiv a_{1,\mathrm{yaw}}(KE)\) or \(a_{1,\mathrm{roll}}(KE)\), \(a_{2} \equiv a_{2,\mathrm{yaw}}(KE)\) or \(a_{2,\mathrm{roll}}(KE)\), \(a_{3} \equiv a_{3,\mathrm{yaw}}(KE)\) or \(a_{3,\mathrm{roll}}(KE)\). The goal of the control is to design a controller \(u(t)\) that brings \(\theta (t)\) to zero with global asymptotic stability. The state variables for the system of Equation (107) are
$$\begin{aligned} x_{1} = \theta,\qquad x_{2} = \theta ',\qquad x_{3} = \theta ''. \end{aligned}$$
The system of Equation (107) can be rewritten as follows:
$$\begin{aligned} &\dot{x}_{1}= x_{2}, \\ &\dot{x}_{2}= x_{3}, \\ &\dot{x}_{3}= -a_{3} x_{1} -a_{2} x_{2} - a_{1} x_{3} + \rho u(t). \end{aligned}$$
In order to stabilise the first equation, we can then define two new state variables as follows:
$$\begin{aligned} z_{1} = x_{1},\qquad z_{2} = x_{2} - \alpha _{1}, \end{aligned}$$
(108)
where the variable \(\alpha _{1}\) in Equation (108) is the virtual controller. Defining the Lyapunov candidate function in much the same way as was defined in Equation (102) and differentiating the Lyapunov function along the trajectory of the system yields
$$\begin{aligned} \dot{V}_{1} = z_{1} \dot{z}_{1} = z_{1} (z_{2} + \alpha _{1}). \end{aligned}$$
Designing \(\alpha _{1} = -c_{1} z_{1}\) will thus make
$$\begin{aligned} \dot{V}_{1} = -c_{1} z_{1}^{2} + z_{1} z_{2}. \end{aligned}$$
Since we are designing for \(z_{2}\) to eventually reach zero, the value of \(\dot{V}_{1}\) will be less than zero, and thus the top equation is stabilised. Furthermore,
$$\begin{aligned} \dot{\alpha }_{1} = -c_{1} \dot{z}_{1} = -c_{1} (z_{2} + \alpha _{1}). \end{aligned}$$
We next design another virtual controller to stabilise the second equation. In this light the third new state variable is defined as follows:
$$\begin{aligned} z_{3} = x_{3} - \alpha _{2}. \end{aligned}$$
The second Lyapunov candidate function is defined as follows:
$$\begin{aligned} V_{2} = V_{1} + \frac{1}{2} z_{2}^{2}. \end{aligned}$$
(111)
The derivative of Equation (111) with respect to time along the trajectory of the system is
$$\begin{aligned} \dot{V}_{2} = \dot{V}_{1} + z_{2} \dot{z}_{2}. \end{aligned}$$
The value of \(\dot{z}_{2}\) is
$$\begin{aligned} \dot{z}_{2} &= \dot{x}_{2} - \dot{\alpha }_{1} \\ &=x_{3} + c_{1} (z_{2} + \alpha _{1}) \\ &=z_{3} + \alpha _{2} + c_{1} (z_{2} + \alpha _{1}). \end{aligned}$$
Substituting the value of \(\dot{z}_{2}\) in Equation (112) yields
$$\begin{aligned} \dot{V}_{2} = -c_{1} z_{1}^{2} + z_{1} z_{2} + z_{2} \bigl(z_{3} + \alpha _{2} + c_{1} (z_{2} + \alpha _{1})\bigr). \end{aligned}$$
Designing \(\alpha _{2}\) to be
$$\begin{aligned} \alpha _{2} = -z_{1} - c_{1} (z_{2} + \alpha _{1}) - c_{2} z_{2}, \end{aligned}$$
results in \(\dot{V}_{2} = -c_{1} z_{1}^{2} - c_{2} z_{2}^{2} + z_{2} z_{3}\), which is negative definite once \(z_{3}\) is driven to zero in the next stage, meaning \(z_{2}\) will be stabilised. Furthermore,
$$\begin{aligned} \dot{\alpha }_{2} &= -\dot{z}_{1} - c_{1} \dot{x}_{2} - c_{2} \dot{z}_{2} \\ &=-(1 + c_{1} c_{2}) (z_{2} + \alpha _{1}) - (c_{1} + c_{2}) x_{3}. \end{aligned}$$
The third Lyapunov candidate function is now defined as follows:
$$\begin{aligned} V_{3} = V_{2} + \frac{1}{2} z_{3}^{2}. \end{aligned}$$
$$\begin{aligned} \dot{V}_{3}={}& \dot{V}_{2} + z_{3} \dot{z}_{3} \\ ={}&{-}c_{1} z_{1}^{2} -c_{2} z_{2}^{2} +z_{2} z_{3} +z_{3} \bigl[-a_{3} x_{1} -a_{2} x_{2} -a_{1} x_{3} + \rho u(t) - \dot{\alpha }_{2}\bigr]. \end{aligned}$$
Designing \(u(t)\) to be
$$\begin{aligned} u(t) = \frac{1}{\rho } \bigl[ a_{1} x_{3} + a_{2} x_{2} + a_{3} x_{1} + \dot{\alpha }_{2} - z_{2} -c_{3} z_{3} \bigr], \end{aligned}$$
will force \(\dot{V}_{3} = -c_{1} z_{1}^{2} -c_{2} z_{2}^{2} -c_{3} z_{3}^{2} <0\), which is negative definite, keeping the closed loop system globally asymptotically stable.
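A minimal sketch of assembling this control law in code is given below; the helper function is hypothetical, with the model coefficients \(a_{1}\), \(a_{2}\), \(a_{3}\), \(\rho\) and the gains supplied by the caller.

```python
def backstepping_u(x1, x2, x3, a1, a2, a3, rho, c1, c2, c3):
    """Backstepping control law for the third-order delay model derived above."""
    z1 = x1
    alpha1 = -c1 * z1                              # first virtual controller
    z2 = x2 - alpha1
    alpha2 = -z1 - c1 * (z2 + alpha1) - c2 * z2    # second virtual controller
    z3 = x3 - alpha2
    alpha2_dot = -(1.0 + c1 * c2) * (z2 + alpha1) - (c1 + c2) * x3
    return (a1 * x3 + a2 * x2 + a3 * x1 + alpha2_dot - z2 - c3 * z3) / rho
```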
In practice, only the value of \(x_{1}\) is measured, while those of \(x_{2}\) and \(x_{3}\) are not. These unmeasured states can, however, be estimated through the integral reconstruction model as follows:
$$\begin{aligned} &\hat{x}_{2}=-a_{3} I^{(2)} x_{1,\mathrm{data}} -a_{2} I^{(1)} x_{1,\mathrm{data}} -a_{1} x_{1,\mathrm{data}} + \rho I^{(2)} u(t), \end{aligned}$$
$$\begin{aligned} &\hat{x}_{3}=-a_{3} I^{(1)} x_{1,\mathrm{data}} -a_{2} x_{1,\mathrm{data}} -a_{1} \hat{x}_{2} + \rho I^{(1)} u(t). \end{aligned}$$
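One way of realising these integral reconstructors numerically is with cumulative trapezoidal integration of the sampled angle and input signals, as sketched below; the sampling grid and signal arrays are assumed inputs, and initial-condition terms are neglected as in the equations above.

```python
import numpy as np

def cumint(t, y):
    """Cumulative trapezoidal integral I^(1) of a sampled signal y(t)."""
    out = np.zeros_like(y, dtype=float)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def reconstruct_states(t, x1_data, u_data, a1, a2, a3, rho):
    """Integral reconstruction of the unmeasured states x2 and x3 from x1."""
    I1_x1 = cumint(t, x1_data)
    I2_x1 = cumint(t, I1_x1)
    I1_u = cumint(t, u_data)
    I2_u = cumint(t, I1_u)
    x2_hat = -a3 * I2_x1 - a2 * I1_x1 - a1 * x1_data + rho * I2_u
    x3_hat = -a3 * I1_x1 - a2 * x1_data - a1 * x2_hat + rho * I1_u
    return x2_hat, x3_hat
```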
Controller response results
For the non-delay model, Fig. 28 shows the results of the backstepping control responses when \(c_{1} =5\) and \(c_{2} = 5\). It is seen that the yaw angle increases from zero to about 0.02 before settling at zero by \(t=3\) seconds. The roll angle gradually decays and finally settles to zero at around one second.
Backstepping control response results for the non-delay model with parameters \(c_{1}=5\), \(c_{2} =5\)
Figure 29 shows the case of using \(c_{1}=20\), \(c_{2}=100\). The yaw angle in this case decreases to zero within 1 second, while the roll angle decreases from about 2 degrees to zero in about 1 second, thereafter remaining at the zero equilibrium.
Backstepping control response results for the non-delay model with parameters \(c_{1}=20\), \(c_{2} =100\)
For the delay model, Fig. 30 shows the case of using \(c_{1}=10\), \(c_{2}=5\), \(c_{3}=5\). In this case the yaw angle deviates by about 0.008 degrees before returning to zero within three seconds. The roll angle decreases from about 4 degrees to zero within one second and stays at the zero equilibrium thereafter. Similar results were obtained for the case of \(c_{1}=10\), \(c_{2}=10\), \(c_{3}=5\), shown in Fig. 31. In this case, however, the peak deviation of the yaw angle decreases by 0.002 degrees, to 0.006 degrees, before the response returns to the zero equilibrium.
Backstepping control response results for the delay model with parameters \(c_{1}=10\), \(c_{2} =5\), \(c_{3}=5\)
Backstepping control response results for the delay model with parameters \(c_{1}=10\), \(c_{2} =10\), \(c_{3}=5\)
Overall, the controller based on the delay model requires less control effort to keep the yaw angle stabilised about the zero equilibrium, while a similar effort must be exerted to bring the roll angle down to the zero equilibrium. These results imply that, even though the backstepping controller does a good job of handling the nonlinearities and modelling errors, the controller itself, like many such nonlinear controllers, is designed on the principle of Lyapunov stability, which does not take control effort penalties into account. In this respect, the use of a better model lessens the control effort required by the nonlinear controller.
Conclusions
This work has developed and extended a modelling methodology, building on previous research, to characterise nonlinear damping and stiffness in second order systems. The methodology initially assumes a flexible model and, through the use of the integral-based parameter identification method, identifies piecewise constant damping and stiffness parameters. The identified parameters are then correlated to the square of the velocity, effectively an energy-like function, to allow the construction of a nonlinear vibration model. To capture the effect of the time delay, a time delay model in the form of a second order delay differential equation was also considered. Theoretical analyses of the global stability of both model families, namely the non-delay model and the delay model, were also carried out.
For both model families, coefficients were first identified across the entire data set over equally spaced intervals. These coefficients were then correlated to the energy-like function to yield a nonlinear piecewise hyperbolic function, which effectively reveals the structural aspects of the vibration, namely that the energy levels also matter in the generation of vibration. Both families of models were applied to a specialised robot arm system designed for orchard spraying. The nonlinear model gave significant improvements over the standard linear model in data fitting, which were further enhanced by the addition of the time delay consideration. This concept differs from the friction-induced models normally seen in the literature, where a complicated structure must be assumed at the outset to model the system.
To demonstrate the model in action with control, a backstepping controller was designed for both the non-delay model and the delay model. It was found that, overall, the use of the delay model expends less control effort to keep both the yaw and roll angles about the zero equilibrium. Even though the backstepping controller, being a nonlinear controller, is robust in handling the nonlinearities and modelling errors, the use of a better model lessens the required control effort.
The datasets generated and used to support the findings of this work are available from the corresponding author upon reasonable request.
This work is supported by the Research Seed Grant for New Lecturer, KMITL Research Fund, King Mongkut's Institute of Technology Ladkrabang, Thailand.
Department of Instrumentation and Control Engineering, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Chalong Krung Road, Bangkok, Thailand
Napasool Wongvanich, Sungwan Boksuwan & Abdulhafiz Chesof
NW provided the mathematical model, conducted the system identification, designed the backstepping controller and set up the original manuscript draft. SB improved the manuscript. AC developed the data acquisition system and collected the data needed for the system identification analysis. All authors read and approved the final version of the manuscript.
Correspondence to Napasool Wongvanich.
All authors consent to the publication of this manuscript.
Wongvanich, N., Boksuwan, S. & Chesof, A. Simplified modelling and backstepping control of the long arm agricultural rover. Adv Differ Equ 2020, 701 (2020). https://doi.org/10.1186/s13662-020-03158-y
Agricultural rover
Backstepping control
Experiments on flows in channels with spatially distributed heating
A. Inasawa, K. Taneda, J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 872 / 10 August 2019
Published online by Cambridge University Press: 07 June 2019, pp. 177-197
Print publication: 10 August 2019
Flows in channels exposed to spatially distributed heating were investigated. Such flows are of interest as theoretical analyses suggest that heating leads to the reduction of pressure losses. A special apparatus providing the means for the creation of well-controlled spatially periodic heating with the desired intensity as well as precise control of the flow rate in flows with small Reynolds numbers was constructed. The apparatus works with air and provides optical access to the flow interior. The relevant theory has been generalized to handle the temperature fields measured in the experiments. The experiments were carried out for Reynolds numbers $Re<20$ and at a single Rayleigh number based on the peak-to-peak temperature difference and channel half-height of $Ra_{p}=3500$ . Flow visualization and particle image velocimetry measurements demonstrated the formation of two-dimensional steady rolls whose size was dictated by $Re$ , with the largest rolls observed for the smallest $Re$ and the roll size being gradually reduced as $Re$ increased until their complete elimination at the largest $Re$ used in the experiment. An excellent agreement between the theoretically and experimentally determined complex flow fields was found. Wall shear stresses extracted from the velocity measurements agree with their theoretical counterparts within the expected accuracy. The agreement between the experimental and theoretical velocity fields and their unique relation with the corresponding pressure fields indirectly verify the heating-induced pressure-gradient-reducing effect.
Drag reduction and instabilities of flows in longitudinally grooved annuli
H. V. Moradi, J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 865 / 25 April 2019
Print publication: 25 April 2019
The primary and secondary laminar flows in annuli with longitudinal grooves and driven by pressure gradients have been analysed. There exist geometric configurations reducing pressure losses in primary flows in spite of an increase of the wall wetted area. The parameter ranges when such flows exist have been determined using linear stability theory. Two types of secondary flows have been identified. The first type has the form of the classical travelling waves driven by shear and modified by the grooves. The axisymmetric waves dominate for sufficiently large radii of the annuli while different spiral waves dominate for small radii. The secondary flow topology is unique in the former case and has the form of axisymmetric rings propagating in the axial direction. Topologies in the latter case are not unique, as spiral waves with left and right twists can emerge under the same conditions, resulting in flow structures varying from spatial rings to rhombic forms. The most intense motion of this type occurs near the walls. The second type of secondary flow has the form of travelling waves driven by inertial effects with characteristics very distinct from the shear waves. Its critical Reynolds number increases proportionally to $S^{-2}$ , where $S$ denotes the groove amplitude, while the amplification rates increase proportionally to $S^{2}$ . These waves exist only if $S$ is above a well-defined minimum and their axisymmetric forms dominate, with the most intense motion occurring near the annulus mid-section. Geometries that give preference to the latter waves have been identified. It is shown that the drag-reducing topographies stabilize the classical travelling waves; these waves are driven by viscous shear, so reduction of this shear decreases their amplification. The same topographies destabilize the new waves; these waves are driven by an inviscid mechanism associated with the formation of circumferential inflection points, and an increase of the groove amplitude increases their amplification. The flow conditions when the presence of grooves can be ignored, i.e. the annuli can be treated as being hydraulically smooth, have been determined.
Reduction of pressure losses and increase of mixing in laminar flows through channels with long-wavelength vibrations
J. M. Floryan, Sahab Zandi
Pressure losses and mixing in vibrating channels were analysed. The vibrations in the form of long-wavelength travelling waves were considered. Significant reduction of pressure losses can be achieved using sufficiently fast waves propagating downstream, while significant increase of such losses is generated by waves propagating upstream. The mechanisms responsible for pressure losses were identified and discussed. The interaction of the pressure field with the waves can create a force which assists the fluid movement. A similar force can be created by friction, but only under conditions leading to flow separation. An analysis of particle trajectories was carried out to determine the effect of vibrations on mixing. A significant transverse particle movement takes place, including particle trajectories with back loops. The downstream-propagating out-of-phase waves provide a large reduction of pressure gradient and significant potential for mixing intensification. Analysis of energy requirements demonstrates that it is possible to identify waves which reduce power requirements, i.e. the cost of actuation is smaller than the energy savings associated with the reduction of pressure gradient. The fast forward moving waves provide an opportunity for the development of alternative propulsion methods which can be more efficient than methods based on the pressure difference.
Natural convection and thermal drift
Arman Abtahi, J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 826 / 10 September 2017
Print publication: 10 September 2017
An analysis of natural convection in a horizontal, geometrically non-uniform slot exposed to spatially non-uniform heating has been carried out. The upper plate is smooth and isothermal, and the lower plate has sinusoidal corrugations with a sinusoidal temperature distribution. The distributions of the non-uniformities are characterized in terms of the wavenumber $\alpha$ and their relative position is expressed in terms of the phase difference $\Omega_{TL}$. The analysis is limited to heating conditions which do not give rise to secondary motions in the absence of the non-uniformities. The heating creates horizontal temperature gradients which lead to the formation of vertical and horizontal pressure gradients which drive the motion regardless of the intensity of the heating. When the hot spots (points of maximum temperature) overlap either with the corrugation tips or with the corrugation bottoms, convection assumes the form of pairs of counter-rotating rolls whose size is dictated by the heating/corrugation wavelengths. The formation of a net horizontal flow, referred to as thermal drift, is observed for all other relative positions of the hot spots and corrugation tips. Both periodic heating as well as periodic corrugations are required for the formation of this drift, which can be directed in the positive as well as in the negative horizontal directions depending on the phase difference between the heating and corrugation patterns. The most intense convection and the largest drift occur for wavelengths comparable to the slot height, and their intensities increase proportionally to the heating intensity as well as proportionally to the corrugation amplitude, with the drift being a very strong function of the phase difference. Convection creates forces at the plates which would cause horizontal displacement of the corrugated plate and deform the corrugations if such effects were allowed. Tangential forces generated by the uniform heating always contribute to the corrugation buildup while similar forces generated by the periodic heating contribute to the buildup only when the hot spots overlap with the upper part of the corrugation. The processes described above are qualitatively similar for all Prandtl numbers $Pr$, with the intensity of convection and the magnitude of the drift increasing with a reduction in $Pr$.
Natural convection in a corrugated slot
Journal: Journal of Fluid Mechanics / Volume 815 / 25 March 2017
Print publication: 25 March 2017
Analysis of natural convection in a horizontal slot formed by two corrugated isothermal plates has been carried out. The analysis is limited to subcritical Rayleigh numbers $Ra$ where no secondary motion takes place in the absence of corrugations. The corrugations have a sinusoidal form characterized by the wavenumber, the upper and lower amplitudes and the phase difference. The most intense convection occurs for corrugation wavelengths comparable to the slot height; it increases proportionally to $Ra$ and proportionally to the corrugation height. Placement of corrugations on both plates may either significantly increase or decrease the convection depending on the phase difference between the upper and lower corrugations, with the strongest convection found for corrugations being in phase, i.e. a 'wavy' slot, and the weakest for corrugations being out of phase, i.e. a 'converging–diverging' slot. It is shown that the shear forces would always contribute to the corrugation build-up if erosion was allowed, while the role of pressure forces depends on the location of the corrugations as well as on the corrugation height and wavenumber, and the Rayleigh number. Placing corrugations on both plates results in the formation of a moment which attempts to change the relative position of the plates. There are two limiting positions, i.e. the 'wavy' slot and the 'converging–diverging' slot, with the latter being unstable. The system would end up in the 'wavy' slot configuration if relative movement of the two plates was allowed. The presence of corrugations affects the conductive heat flow and creates a convective heat flow. The conductive heat flow increases with the corrugation height as well as with the corrugation wavenumber; it is largest for short-wavelength corrugations. The convective heat flow is relevant only for wavenumbers of $O(1)$ , it increases proportionally to $Ra^{3}$ and proportionally to the second power of the corrugation height. Convection is qualitatively similar for all Prandtl numbers $Pr$ , with its intensity increasing for smaller $Pr$ and with the heat transfer augmentation increasing for larger $Pr$ .
Flow dynamics and enhanced mixing in a converging–diverging channel
S. W. Gepner, J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 807 / 25 November 2016
Print publication: 25 November 2016
An analysis of flows in converging–diverging channels has been carried out with the primary goal of identifying geometries which result in increased mixing. The model geometry consists of a channel whose walls are fitted with spanwise grooves of moderate amplitudes (up to 10 % of the mean channel opening) and of sinusoidal shape. The groove systems on each wall are shifted by half of a wavelength with respect to each other, resulting in the formation of a converging–diverging conduit. The analysis is carried out up to Reynolds numbers resulting in the formation of secondary states. The first part of the analysis is based on a two-dimensional model and demonstrates that increasing the corrugation wavelength results in the appearance of an unsteady separation whose onset correlates with the onset of the travelling wave instability. The second part of the analysis is based on a three-dimensional model and demonstrates that the flow dynamics is dominated by the centrifugal instability over a large range of geometric parameters, resulting in the formation of streamwise vortices. It is shown that the onset of the vortices may lead to the elimination of the unsteady separation. The critical Reynolds number for the vortex onset initially decreases as the corrugation amplitude increases but an excessive increase leads to the stream lift up, reduction of the centrifugal forces and flow stabilization. The flow dynamics under such conditions is again dominated by the travelling wave instability. Conditions leading to the formation of streamwise vortices without interference from the travelling wave instability have been identified. The structure and the mixing properties of the saturated states are discussed.
Groove-induced changes of discharge in channel flows
Yu Chen, J. M. Floryan, Y. T. Chew, B. C. Khoo
Journal: Journal of Fluid Mechanics / Volume 799 / 25 July 2016
Print publication: 25 July 2016
The changes in discharge in pressure-driven flows through channels with longitudinal grooves have been investigated in the laminar flow regime and in the turbulent flow regime with moderate Reynolds numbers ( $Re_{2H}\approx 6000$ ) using both analytical and numerical methodologies. The results demonstrate that the long-wavelength grooves can increase discharge by 20 %–150 %, depending on the groove amplitude and the type of flow, while the short-wavelength grooves reduce the discharge. It has been shown that the reduced geometry model applies to the analysis of turbulent flows and the performance of grooves of arbitrary form is well approximated by the performance of grooves whose shape is represented by the dominant Fourier mode. The flow patterns, the turbulent kinetic energy as well as the Reynolds stresses were examined to identify the mechanisms leading to an increase in discharge. It is shown that the increase in discharge results from the rearrangement of the bulk fluid movement and not from the suppression of turbulence intensity. The turbulent kinetic energy and the Reynolds stresses are rearranged while their volume-averaged intensities remain the same as in the smooth channel. Analysis of the interaction of the groove patterns on both walls demonstrates that the converging–diverging configuration results in the greatest increase in discharge while the wavy channel configuration results in a reduction in discharge.
Drag reduction in a thermally modulated channel
M. Z. Hossain, J. M. Floryan
Flow in a horizontal channel exposed to external heating which results in sinusoidal temperature variations along the upper and lower walls with a phase shift between them has been studied using a combination of analytical and numerical methods. The most intense convection is observed when the upper and lower hot spots are located above each other. It has been demonstrated that the heating results in a significant reduction of the pressure gradient required to drive the flow when compared to a similar flow in an isothermal channel. The drag reduction is associated with the formation of separation bubbles which insulate the stream from direct contact with the bounding walls. The fluid inside of the bubbles rotates due to horizontal density gradients, which further reduces the required pressure gradient. The magnitude of the drag reduction depends on the phase shift between the heating patterns and can increase by up to threefold when compared to the drag reduction which can be achieved by heating only one wall. A detailed analysis of the associated heat fluxes has been presented.
New instability mode in a grooved channel
A. Mohammadi, H. V. Moradi, J. M. Floryan
It is known that longitudinal grooves may stabilize or destabilize the travelling wave instability in a channel flow depending on the groove wavenumber. These waves reduce to the classical Tollmien–Schlichting waves in the absence of grooves. It is shown that another class of travelling wave instability exists if grooves with sufficiently high amplitude and proper wavelengths are used. It is demonstrated that the new instability mode is driven by the inviscid mechanism, with the disturbance motion having the form of a wave propagating in the streamwise direction with phase speed approximately four times larger than the Tollmien–Schlichting wave speed and with its streamwise wavelength being approximately twice the spanwise groove wavelength. The instability motion is concentrated mostly in the middle of the channel and has a planar character, i.e. the dominant velocity components are parallel to the walls. A significant reduction of the corresponding critical Reynolds number can be achieved by increasing the groove amplitude. Conditions that guarantee the flow stability in a grooved channel, i.e. the grooved surface behaves as a hydraulically smooth surface, have been identified.
Flow in a meandering channel
J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 770 / 10 May 2015
Print publication: 10 May 2015
A comprehensive analysis of the pressure-gradient driven flow in a meandering channel has been presented. This geometry is of interest as it can be used for the creation of streamwise vortices which magnify the transverse transport of scalar quantities, e.g. heat transfer. The linear stability theory has been used to determine the meandering wavelengths required for the vortex formation. It has been demonstrated that reduction of the wavelength results in the onset of flow separation which, when combined with the wall geometry, results in an effective channel narrowing: the stream 'lifts up' above the wall and becomes nearly rectilinear, thus eliminating vortex-generating centrifugal forces. Increase of the wavelength also leads to a nearly rectilinear stream, as the slope of the wall modulations becomes negligible. As shear-driven instability may interfere with the formation of vortices, the conditions leading to the onset of such instability have also been investigated. The attributes of the geometry which lead to the most effective vortex generation without any interference from the shear instabilities and with the smallest drag penalty have been identified.
Mixed convection in a periodically heated channel
Mixed convection in a channel with flow driven by a pressure gradient and subject to spatially periodic heating along one of the walls has been studied. The pattern of the heating is characterized by the wavenumber ${\it\alpha}$ and its intensity is expressed in terms of the Rayleigh number $\mathit{Ra}_{p}$ . The primary convection has the form of counter-rotating rolls with the wavevector parallel to the wavevector of the heating. The resulting net heat flow between the walls increases proportionally to $\mathit{Ra}_{p}$ but the growth saturates when $\mathit{Ra}_{p}=O(10^{3})$ . The most effective heating pattern corresponds to ${\it\alpha}\approx 1$ , as this leads to the most intense transverse motion. The primary convection is subject to transition to secondary states with the onset conditions depending on ${\it\alpha}$ . The conditions leading to transition between different forms of secondary motion have been determined using the linear stability theory. Three patterns of secondary motion may occur at small Reynolds numbers $\mathit{Re}$ , i.e. longitudinal rolls, transverse rolls and oblique rolls, with the critical conditions varying significantly as a function of ${\it\alpha}$ . An increase of ${\it\alpha}$ leads to the elimination of the longitudinal rolls and, eventually, to the elimination of the oblique rolls, with the transverse rolls assuming the dominant role. For large ${\it\alpha}$ , the transition is driven by the Rayleigh–Bénard mechanism; while for ${\it\alpha}=O(1)$ , the spatial parametric resonance dominates. The global flow characteristics are identical regardless of whether the heating is applied at the lower or the upper wall.
Drag reduction in heated channels
Daniel Floryan, J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 765 / 25 February 2015
Print publication: 25 February 2015
It is known that the drag for flows driven by a pressure gradient in heated channels can be reduced below the level found in isothermal channels. This reduction occurs for spatially modulated heating and is associated with the formation of separation bubbles which isolate the main stream from direct contact with the solid wall. It is demonstrated that the use of a proper combination of spatially distributed and spatially uniform heating components results in an increase in the horizontal and vertical temperature gradients which lead to an intensification of convection which, in turn, significantly increases the drag reduction. An excessive increase of the uniform heating leads to breakup of the bubbles and the formation of complex secondary states, resulting in a deterioration of the system performance. This performance may, under certain conditions, still be better than that achieved using only spatially distributed heating. Detailed calculations have been carried out for the Prandtl number $\mathit{Pr}=0.71$ and demonstrate that this technique is effective for flows with a Reynolds number $\mathit{Re}<10$ ; faster flows wash away separation bubbles. The question of net gain remains to be settled as it depends on the method used to achieve the desired wall temperature and on the cost of the required energy. The presented results provide a basis for the design of passive flow control techniques utilizing heating patterns as controlling agents.
Stability of flow in a channel with longitudinal grooves
Journal: Journal of Fluid Mechanics / Volume 757 / 25 October 2014
Published online by Cambridge University Press: 25 September 2014, pp. 613-648
Print publication: 25 October 2014
The travelling wave instability in a channel with small-amplitude longitudinal grooves of arbitrary shape has been studied. The disturbance velocity field is always three-dimensional with disturbances which connect to the two-dimensional waves in the limit of zero groove amplitude playing the critical role. The presence of grooves destabilizes the flow if the groove wavenumber $\beta$ is larger than $\beta_{tran}\approx 4.22$, but stabilizes the flow for smaller $\beta$. It has been found that $\beta_{tran}$ does not depend on the groove amplitude. The dependence of the critical Reynolds number on the groove amplitude and wavenumber has been determined. Special attention has been paid to the drag-reducing long-wavelength grooves, including the optimal grooves. It has been demonstrated that such grooves slightly increase the critical Reynolds number, i.e. such grooves do not cause an early breakdown into turbulence.
Instabilities of natural convection in a periodically heated layer
Published online by Cambridge University Press: 19 September 2013, pp. 33-67
Natural convection in an infinite horizontal layer subject to periodic heating along the lower wall has been investigated using a combination of numerical and asymptotic techniques. The heating maintains the same mean temperatures at both walls while producing sinusoidal temperature variations along one horizontal direction, with its spatial distribution characterized by the wavenumber $\alpha$ and the amplitude expressed in terms of a Rayleigh number $Ra_{p}$. The primary response of the system takes the form of stationary convection consisting of rolls with the axis orthogonal to the heating wave vector and structure determined by the particular values of $Ra_{p}$ and $\alpha$. It is shown that for sufficiently large $\alpha$ convection is limited to a thin layer adjacent to the lower wall with a uniform conduction zone emerging above it; the temperature in this zone becomes independent of the heating pattern and varies in the vertical direction only. Linear stability of the above system has been considered and conditions leading to the emergence of secondary convection have been identified. Secondary convection gives rise to either longitudinal rolls, transverse rolls or oblique rolls at the onset, depending on $\alpha$. The longitudinal rolls are parallel to the primary rolls and the transverse rolls are orthogonal to the primary rolls, and both result in striped patterns. The oblique rolls lead to the formation of convection cells with aspect ratio dictated by their inclination angle and formation of rhombic patterns. Two mechanisms of instability have been identified. In the case of $\alpha = O(1)$, parametric resonance dominates and leads to a pattern of instability that is locked in with the pattern of heating according to the relation $\delta_{cr} = \alpha / 2$, where $\delta_{cr}$ denotes the component of the critical disturbance wave vector parallel to the heating wave vector. The second mechanism, the Rayleigh–Bénard (RB) mechanism, dominates for large $\alpha$, where the instability is driven by the uniform mean vertical temperature gradient created by the primary convection, with the critical disturbance wave vector $\delta_{cr} \rightarrow 1.56$ for $\alpha \rightarrow \infty$ and the fluid response becoming similar to that found in the case of a uniformly heated wall. Competition between these mechanisms gives rise to non-commensurable states in the case of longitudinal rolls and the appearance of soliton lattices, to the formation of distorted transverse rolls, and to the appearance of the wave vector component in the direction perpendicular to the forcing direction. A rapid stabilization is observed when the heating wavenumber is reduced below $\alpha \approx 2.2$ and no instability is found when $\alpha < 1.6$ in the range of $Ra_{p}$ considered. It is shown that $\alpha$ plays the role of an effective pattern control parameter and its judicious selection provides a means for the creation of a wide range of flow responses.
Pressure losses in grooved channels
A. Mohammadi, J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 725 / 25 June 2013
Published online by Cambridge University Press: 14 May 2013, pp. 23-54
Print publication: 25 June 2013
The effects of small-amplitude, two-dimensional grooves on pressure losses in a laminar channel flow have been analysed. Grooves with an arbitrary shape and an arbitrary orientation with respect to the flow direction have been considered. It has been demonstrated that losses can be expressed as a superposition of two parts, one associated with change in the mean positions of the walls and one induced by flow modulations associated with the geometry of the grooves. The former effect can be determined analytically, while the latter has to be determined numerically and can be captured with an acceptable accuracy using reduced-order geometry models. Projection of the wall shape onto a Fourier space has been used to generate such a model. It has been found that in most cases replacement of the actual wall geometry with the leading mode of the relevant Fourier expansion permits determination of pressure losses with an error of less than 10 %. Detailed results are given for sinusoidal grooves for the range of parameters of practical interest. These results describe the performance of arbitrary grooves with the accuracy set by the properties of the reduced-order geometry model and are exact for sinusoidal grooves. The results show a strong dependence of the pressure losses on the groove orientation. Longitudinal grooves produce the smallest drag, and oblique grooves with an inclination angle of $\sim 42^{\circ}$ exhibit the largest flow turning potential. Detailed analyses of the extreme cases, i.e. transverse and longitudinal grooves, have been carried out. For transverse grooves with small wavenumbers, the dominant part of the drag is produced by shear, while the pressure form drag and the pressure interaction drag provide minor contributions. For the same grooves with large wavenumbers, the stream lifts up above the grooves due to their blocking effect, resulting in a change in the mechanics of drag formation: the contributions of shear decrease while the contributions of the pressure interaction drag increase, leading to an overall drag increase. In the case of longitudinal grooves, drag is produced by shear, and its rearrangement results in a drag decrease for long-wavelength grooves in spite of an increase of the wetted surface area. An increase of the wavenumber leads to the fluid being squeezed from the troughs and the stream being forced to lift up above the grooves. The shear is nearly eliminated from a large fraction of the wall but the overall drag increases due to reduction of the effective channel opening. It is shown that properly structured grooves are able to eliminate wall shear from the majority of the wetted surface area regardless of the groove orientation, thus exhibiting the potential for the creation of drag-reducing surfaces. Such surfaces can become practicable if a method for elimination of the undesired pressure and shear peaks through proper groove shaping can be found.
Flows in annuli with longitudinal grooves
Analysis of pressure losses in laminar flows through annuli fitted with longitudinal grooves has been carried out. The additional pressure gradient required in order to maintain the same flow rate in the grooved annuli, as well as in the reference smooth annuli, is used as a measure of the loss. The groove-induced changes can be represented as a superposition of a pressure drop due to a change in the average position of the bounding cylinders and a pressure drop due to flow modulations induced by the shape of the grooves. The former effect can be evaluated analytically while the latter requires explicit computations. It has been demonstrated that a reduced-order model is an effective tool for extraction of the features of groove geometry that lead to flow modulations relevant to drag generation. One Fourier mode from the Fourier expansion representing the annulus geometry is sufficient to predict pressure losses with an accuracy sufficient for most applications in the case of equal-depth grooves. It is shown that the presence of the grooves may lead to a reduction of pressure loss in spite of an increase of the surface wetted area. The drag-decreasing grooves are characterized by the groove wavenumber $M/ {R}_{1} $ being smaller than a certain critical value, where $M$ denotes the number of grooves and ${R}_{1} $ stands for the radius of the annulus. This number marginally depends on the groove amplitude and does not depend on the flow Reynolds number. It is shown that the drag reduction mechanism relies on the re-arrangement of the bulk flow that leads to the largest mass flow taking place in the area of the largest annulus opening. The form of the optimal grooves from the point of view of the maximum drag reduction has been determined. This form depends on the type of constraints imposed. In general, the optimal shape can be described using the reduced-order model involving only a few Fourier modes. It is shown that in the case of equal-depth grooves, the optimal shape can be approximated using a special form of trapezoid. In the case of unequal-depth grooves, where the groove depth needs to be determined as part of the optimization procedure, the optimal geometry, consisting of the optimal depth and the corresponding optimal shape, can be approximated using a delta function. The maximum possible drag reduction, corresponding to the optimal geometry, has been determined.
Drag reduction due to spatial thermal modulations
M. Z. Hossain, D. Floryan, J. M. Floryan
Journal: Journal of Fluid Mechanics / Volume 713 / 25 December 2012
It is demonstrated that a significant drag reduction for pressure-driven flows can be realized by applying spatially distributed heating. The heating creates separation bubbles that separate the stream from the bounding walls and, at the same time, alter the distribution of the Reynolds stress, thereby providing a propulsive force. The strength of this effect is of practical interest for heating with wavenumbers $\alpha = O(1)$ and for flows with small Reynolds numbers and, thus, it is of potential interest for applications in micro-channels. Explicit results given for a very simple sinusoidal heating demonstrate that the drag-reducing effect increases proportionally to the second power of the heating intensity. This increase saturates if the heating becomes too intense. Drag reduction decreases as $\alpha^{4}$ when the heating wavenumber becomes too small, and as $\alpha^{-7}$ when the heating wavenumber becomes too large; this decrease is due to the reduction in the magnitude of the Reynolds stress. The drag reduction can reach up to 87 % for the heating intensities of interest and heating patterns corresponding to the most effective heating wavenumber.
Effect of streamwise-periodic wall transpiration on turbulent friction drag
M. QUADRIO, J. M. FLORYAN, P. LUCHINI
In this paper a turbulent plane channel flow modified by a distributed transpiration at the wall, with zero net mass flux, is studied through direct numerical simulation (DNS) using the incompressible Navier–Stokes equations. The transpiration is steady, uniform in the spanwise direction, and varies sinusoidally along the streamwise coordinate. The transpiration wavelength is found to dramatically affect the turbulent flow, and in particular the frictional drag. Long wavelengths produce large drag increases even with relatively small transpiration intensities, thus providing an efficient means for improved turbulent mixing. Shorter wavelengths, on the other hand, yield an unexpected decrease of turbulent friction. These opposite effects are separated by a threshold of transpiration wavelength, shown to scale in viscous units, related to a longitudinal length scale typical of the near-wall turbulence cycle. Transpiration is shown to affect the flow via two distinct mechanisms: steady streaming and direct interaction with turbulence. They modify the turbulent friction in two opposite ways, with streaming being equivalent to an additional pressure gradient needed to drive the same flow rate (drag increase) and direct interaction causing reduced turbulent activity owing to the injection of fluctuationless fluid. The latter effect overwhelms the former at small wavelengths, and results in a (small) net drag reduction. The possibility of observing large-scale streamwise-oriented vortical structures as a consequence of a centrifugal instability mechanism is also discussed. Our results do not demonstrate the presence of such vortices, and the same conclusion can be arrived at through a stability analysis of the mean velocity profile, even though it is possible that a higher value of the Reynolds number is needed to observe the vortices.
Transient disturbance growth in a corrugated channel
J. SZUMBARSKI, J. M. FLORYAN
Published online by Cambridge University Press: 10 November 2006, pp. 243-272
Transient growth of small disturbances may lead to the initiation of the laminar–turbulent transition process. Such growth in a two-dimensional laminar flow in a channel with a corrugated wall is analysed. The corrugation has a wavy form that is completely characterized by its wavenumber and amplitude. The maximum possible growth and the form of the initial disturbance that leads to such growth have been identified for each form of the corrugation. The form that leads to the largest growth for a given corrugation amplitude, i.e. the optimal corrugation, has been found. It is shown that the corrugation acts as an amplifier for disturbances that are approximately optimal in the smooth channel case but has little effect in the other cases. The interplay between the modal (asymptotic) instability and the transient growth, and the use of the variable corrugation for modulation of the growth are discussed.
Wall-transpiration-induced instabilities in plane Couette flow
Published online by Cambridge University Press: 02 July 2003, pp. 151-188
Linear stability of Couette flow modified by transpiration applied at the lower wall is considered. It is shown that transpiration can induce flow instability resulting in the appearance of streamwise-vortex-like structures. It is argued that the instability is driven by centrifugal forces associated with streamline curvature. The conditions leading to the onset of the instability depend on the amplitude and wavelength of the transpiration and can be expressed in terms of the critical Reynolds number. The global critical conditions describing the minimum critical Reynolds number required for the onset of the instability for the specified amplitude of the transpiration regardless of its wavelength are also given. The threshold amplitude required for the onset varies approximately as $\sim Re^{-1.15}$ for large $Re$, where the Reynolds number used is based on the velocity difference between the walls and the channel half-width. The existence of a global threshold, below which the instability cannot occur regardless of the amplitude of the transpiration, has been demonstrated. This threshold corresponds approximately to $Re=84$.
Investigating the single cell dynamics of Saccharomyces cerevisiae using microfluidics
Nayak, Sujata
UC San Diego Electronic Theses and Dissertations (2013)
Systems biology has grown immensely in the wake of the human genome project. In recent years, there has been a tremendous increase in measurement capabilities (e.g., microarray and proteomic technologies, improved reporter genes). However, future success depends not only on effective measurement techniques but also on the design and implementation of appropriate experimental stimuli. In this project, we investigate experimental approaches where the long-term dynamics of single cells subjected to a dynamic environment can be observed. We use microfluidic technology to develop a device where cells can be subjected to a stable and precise chemical gradient. We overcome the typical problem with many earlier gradient devices, where the high fluid flow needed to maintain the gradient renders such devices undesirable for the study of yeast cells. We use the gradient device along with fluorescence microscopy and molecular biology techniques to study gradient sensing and cell polarization during mating in the model organism Saccharomyces cerevisiae. We generalize the chemical gradient device such that the direction of the gradient can be specified as a function of time. The response of yeast cells to spatiotemporal signals generated by this device reveals aspects of yeast polarization adaptation that are unlikely to be observed in static environments. An integrated computational and experimental analysis of the pheromone response of yeast will provide a detailed understanding of gradient sensing in yeast. Because MAP kinase signaling cascades and cell polarization machinery are conserved in most eukaryotes, understanding of the pheromone pathway should lead to improved models of cell polarization and gradient sensing in more complex organisms.
Searching for Organics on the Dwarf Planet Ceres
Nayak, Michael
UC Santa Cruz Electronic Theses and Dissertations (2016)
The Herschel Space Observatory recently detected the presence of water vapor in observations of Ceres, bringing it into the crosshairs of the search for the building blocks of life in the solar system. I present a mission concept designed in collaboration with the NASA Ames Research Center for a two-probe mission to the dwarf planet Ceres, utilizing a pair of small low-cost spacecraft. The primary spacecraft will carry both a mass and an infrared spectrometer to characterize the detected vapor. Shortly after its arrival a second and largely similar spacecraft will impact Ceres to create an impact ejecta "plume" timed to enable a rendezvous and sampling by the primary spacecraft. This enables additional subsurface chemistry, volatile content and material characterization, and new science complementary to the Dawn spacecraft, the first to arrive at Ceres. Science requirements, candidate instruments, rendezvous trajectories, spacecraft design and comparison with Dawn science are detailed.
Confocal Scanning Laser Ophthalmoscopy (CSLO)-based Topographic Change Analysis in progressing glaucomatous and stable eyes
Nayak, Jagannath Sam
Independent Study Projects (2018)
To assess the performance, in an independent population, of previously published confocal scanning laser ophthalmoscopy Topographic Change Analysis (TCA) parameter cut-offs for discriminating between progressing glaucoma, stable glaucoma, and healthy eyes. Five published TCA cut-offs were applied to the following four groups: 54 glaucomatous eyes (at study baseline examination) progressing by optic disc stereophotograph assessment, 79 glaucomatous eyes progressing by standard automated perimetry guided progression analysis (GPA), 72 stable glaucoma eyes (patients tested 5 times over 5 weeks), and 135 healthy eyes. All eyes were imaged at least four times by Heidelberg Retina Tomograph (HRT; Heidelberg Engineering, Heidelberg, Germany) as part of the Diagnostic Innovations in Glaucoma Study (DIGS) and African Descent and Glaucoma Evaluation Study (ADAGES). Sensitivity and specificity for classifying progressed and stable eyes, respectively, were reported. The two TCA parameters providing the best sensitivity/specificity trade-off were the 95% cut-off for the largest clustered super-pixel area within the optic disc margin (sensitivity of 0.922 in stereophotograph progressors and specificity of 0.778 in stable glaucoma eyes) and the Moderate Criteria (largest clustered super-pixel area within the optic disc margin ≥ 1% of the disc area with ≥ 50 μm mean depth change). These cut-offs detected progression over a similar time frame. Specificity in healthy eyes was lower than in stable glaucoma eyes. Previously published HRT TCA parameters can discriminate between progressing and stable glaucoma eyes in an independent population with good sensitivities and specificities. Low specificity of TCA in healthy eyes might be due to the effects of aging on optic disc topography, evidenced by the long follow-up in this group.
A Joint Measurement of $\nu_{\mu}$-Disappearance and $\nu_{e}$-Appearance in the NuMI beam using the NOvA Experiment
Nayak, Nitish
UC Irvine Electronic Theses and Dissertations (2021)
The discovery of neutrino oscillations provides the first indication of a lepton flavor violating (LFV) process, one that isn't predicted by the Standard Model. As such, NOvA is part of a rich experimental program to constrain unknown parameters in the neutrino oscillation model, described for three neutrino flavors using the PMNS unitary matrix. It is a long-baseline experiment utilizing two detectors, a Near Detector at Fermilab and a Far Detector in Ash River, Minnesota, for a total baseline of 810 km. It receives a predominantly $\nu_{\mu}$/$\bar{\nu}_{\mu}$ beam peaking at 1.8 GeV from the NuMI beam facility at Fermilab. There are four oscillation channels used in the analysis, $\nu_{\mu} \rightarrow \nu_{\mu}$, $\nu_{\mu} \rightarrow \nu_{e}$, $\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{\mu}$ and $\bar{\nu}_{\mu}\rightarrow \bar{\nu}_{e}$. With a total exposure of $13.6\times10^{20}$ and $12.5\times10^{20}$ protons on target for the neutrino and anti-neutrino beam modes respectively, $82$ candidates are seen in the $\nu_{\mu} \rightarrow \nu_{e}$ channel for a total predicted background of $26.8$ events. Similarly, $33$ candidates are seen in the corresponding anti-neutrino channel for a total predicted background of $14.0$ events. In the $\nu_{\mu}\rightarrow\nu_{\mu}$ ($\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{\mu}$) channel, $211$ ($105$) candidates are seen with an expectation of $1156.1$ ($488.1$) events at no oscillations.
Consequently, this dissertation reports a measurement for oscillation parameters based on a joint fit for the spectra in these four channels, which is given by : $\Delta m^{2}_{32} = (2.41\pm 0.07)\times10^{-3}$ eV$^{2}$, $\sin^{2}\theta_{23} = 0.57^{+0.04}_{-0.03}$ (UO) and $\delta_{CP} = 0.82\pi^{+0.27\pi}_{-0.87\pi}$.
In addition, evidence for $\bar{\nu}_{e}$ appearance is seen at a leading $4.2\sigma$ confidence level. The oscillation analysis improves upon previous iterations in several areas, including particle identification, event reconstruction and cosmic background rejection. A principal component analysis (PCA)-based technique is also implemented for decorrelating important flux and cross-section systematics. Finally, new improvements are proposed in the areas of energy estimation and confidence interval construction.
Topological Quantum Computing with Majorana Zero Modes and Beyond
Knapp, Christina
UC Santa Barbara Electronic Theses and Dissertations (2019)
Topological quantum computing seeks to store and manipulate information in a protected manner using topological phases of matter. Information encoded in the degenerate state space of pairs of non-Abelian anyons or defects is robust to local perturbations, reducing its susceptibility to environmental errors and potentially providing a scalable approach to quantum computing. However, topological quantum computing faces significant challenges, not least of which is identifying an experimentally accessible platform supporting non-Abelian topological physics. In this thesis, we critically analyze topological quantum computing with Majorana zero modes, non-Abelian defects of a topological superconductor. We identify intrinsic error sources for Majorana-based systems and propose quantum computing architectures that minimize their effects. Additionally, we consider a new approach for realizing and detecting non-Abelian topological defects in fractional Chern insulators.
Topological quantum computing is predicated on the idea that braiding non-Abelian anyons adiabatically can implement quantum gates fault tolerantly. However, any braiding experiment will necessarily depart from the strict adiabatic limit. We begin by analyzing the nature of diabatic errors for anyon braiding, paying particular attention to how such errors scale with braiding time. We find that diabatic errors are unfavorably large and worryingly sensitive to details of the time evolution. We present a measurement-based correction protocol for such errors, and illustrate its application in a particular Majorana-based qubit design.
We next propose designs for Majorana-based qubits operated entirely by a measurement-based protocol, thereby avoiding the diabatic errors discussed above. Our designs can be scaled into large two dimensional arrays amenable to long-term quantum computing goals, whose core components are testable in near-term devices. These qubits are robust to quasiparticle poisoning, anticipated to be one of the dominant error sources coupling to Majorana zero modes. We demonstrate that our designs support topologically protected Clifford operations and can be augmented to a universal gate set without requiring additional control parameters.
While topological protection greatly suppresses errors, residual coupling to noise limits the lifetimes of our proposed Majorana-based qubits. We analyze the dephasing times for our quasiparticle-poisoning-protected qubits by calculating their charge distribution using a particle number-conserving formalism. We find that fluctuations in the electromagnetic environment couple to an exponentially suppressed topological dipole moment. We estimate dephasing times due to $1/f$ noise, thermal quasiparticle excitations, and phonons for different qubit sizes.
The residual errors discussed above will necessarily require error correction for a sufficiently long quantum computation. We develop physically motivated noise models for Majorana-based qubits that can be used to analyze the performance of a quantum error correcting code. We apply this noise model to estimate pseudo-thresholds for a small subsystem code, identifying the relative importance of different error processes from a fault tolerance perspective. Our results emphasize the necessity of suppressing long-lived quasiparticle excitations that can spread across the code.
Finally, we turn our attention to a different platform that could host non-Abelian topological defects: fractional Chern insulators in graphene. We study the edge states of fractional Chern insulators using the field theory of fractional quantum Hall edges supplemented with a symmetry action. We find that lattice symmetries impose a quantized momentum difference for edge electrons in a fractional state of a $C=2$ Chern band. This momentum difference can be used to selectively contact the different edge states, thereby allowing detection of topological defects in the bulk with a standard four terminal measurement. Our proposal could be implemented in graphene subject to an artificially patterned lattice.
Learning Containment Metaphors
Nayak, Sushobhan;
Mukerjee, Amitabha
Proceedings of the Annual Meeting of the Cognitive Science Society, Volume 34 (2012)
Executive functions in schizophrenia : defining and refining the constructs
Savla, Gauri Nayak
Executive functions are among the strongest neurocognitive predictors of functional disability among people with schizophrenia. However, there remains considerable debate about what constitutes executive functions, the extent to which they are uniquely impaired above and beyond other cognitive abilities, and their relationship with clinical and everyday functioning correlates of schizophrenia. The aim of the current study was to simultaneously assess multiple executive functioning abilities, as measured by the Delis-Kaplan Executive Function System (D-KEFS), among people with schizophrenia (SCs) compared to demographically-matched healthy comparison subjects (HCs), to assess for differential impairment among specific multi-level abilities and basic cognitive skills, to clarify the construct of "executive functions" in schizophrenia, and to examine the relationship of specific executive functions to psychopathology and everyday functioning. In this study, SCs, on average, had consistently worse multi-level executive functions in comparison to HCs. The differences between ipsative performances on multi-level tasks (e.g., switching) and basic cognitive tasks (e.g., motor speed) were greater among SCs than among HCs on some, but not all, executive functioning tasks. Although the specific member components varied among SCs and HCs, exploratory factor analyses, with the two groups examined separately, both revealed two-factor solutions (cognitive flexibility/switching and abstraction/conceptualization). Latent profile analysis of the D-KEFS scores in the SCs indicated three distinct profiles, i.e., mildly impaired, average, and high average-to-superior. The high functioning group was characterized by higher levels of premorbid functioning (as estimated with education and word reading performance). Within-group, ipsative comparisons indicated that those in the mildly impaired group did worse on abstraction tasks and better on switching tasks compared with each subject's own respective mean performance, while those in the high average group had the opposite pattern. Path models indicated intact working memory as necessary for intact cognitive flexibility, and intact abstraction as necessary for intact logical reasoning and sorting abilities. Severity of thought disorder was associated with worse performance in terms of cognitive flexibility/switching and abstraction/conceptualization; there were no other significant relationships found between severity of psychopathology and executive functioning. Both D-KEFS factors were also significantly correlated with functional capacity, but not with level of independence in current living situation or quality of life.
The Magical Geometry of 1D Quantum Liquids
Plamadeala, Eugeniu
We investigate the edge properties of Abelian topological phases in two spatial dimensions. We discover that many of them support multiple fully chiral edge phases, with surprising and measurable experimental consequences. Using the machinery of conformal field theory and integral quadratic forms we establish that distinct chiral edge phases correspond to genera of positive-definite integral lattices. This completes the notion of bulk-boundary correspondence for topological phases. We establish that by tuning inter-channel interactions the system can be made to transition between the different edge phases without closing the bulk gap.
Separately we construct a family of one-dimensional models, called Perfect Metals, with no relevant mass-generating operators. These theories describe stable quantum critical phases of interacting fermions, bosons or spins in a quantum nanowire. These models rigorously answer a long-standing question about the existence of stable metallic phases in one and two spatial dimensions in the presence of generic disorder. Separately, they are the first example of a stable phase of an infinite parallel array of coupled Luttinger liquids.
We perform a detailed study of the transport properties of Perfect Metals and show that in addition to violating the Wiedemann-Franz law, they naturally exhibit a low power-law dependence of electric and thermal conductivities on temperature all the way to zero temperature. We dub a system with this phenomenological set of properties a hyperconductor because, in some sense, hyperconductors are better conductors than superconductors, which may have thermal conductivities that are exponentially small in temperature.
I. Seismic Moment Tensor Analysis of Micro-Earthquakes in an Evolving Fluid-Dominated System, II. Ambient Noise Cross-Correlation for Evaluating Velocity Structure and Instrument Orientations in a Geothermal Environment
Nayak, Avinash
UC Berkeley Electronic Theses and Dissertations (2017)
This dissertation presents a detailed analysis of recorded seismic waves in terms of their source and their propagation through the Earth in multiple scenarios. First, I investigate the source mechanisms of some highly unusual seismic events associated with the formation of a large sinkhole at Napoleonville salt dome, Assumption Parish, Louisiana in August 2012. I implemented a grid-search approach for automatic detection, location and moment tensor inversion of these events. First, the effectiveness of this technique is demonstrated using low frequency (0.1-0.2 Hz) displacement waveforms and two simple 1D velocity models for the salt dome and the surrounding sedimentary strata for computation of Green's functions in the preliminary analysis. In the revised, and more detailed, analysis, I use Green's functions computed using a finite-difference wave propagation method and a 3D velocity model that incorporates the currently known approximate geometry of the salt dome and the overlying anhydrite-gypsum cap rock, and features a large velocity contrast between the high velocity salt dome and the low velocity sediments overlying and surrounding it. I developed a method for source-type-specific inversion of moment tensors utilizing long-period complete waveforms and first-motion polarities, which is useful for assessing confidence and uncertainties in the source-type characterization of seismic events. I also established an empirical method to rigorously assess uncertainties in the centroid location, MW and the source type of the events at the Napoleonville salt dome through changing network geometry, using the results of synthetic tests with real seismic noise. During 24-31 July 2012, the events with the best waveform fits are primarily located at the western edge of the salt dome at most probable depths of ~0.3-0.85 km, close to the horizontal positions of the cavern and the future sinkhole. The data are fit nearly equally well by opening crack moment tensors in the high velocity salt medium or by isotropic volume-increase moment tensors in the low velocity sediment layers. The addition of more stations further constrains the events to slightly shallower depths and to the lower velocity media just outside the salt dome, with preferred isotropic volume-increase moment tensor solutions. I find that Green's functions computed with the 3D velocity model generally result in a better fit to the data than Green's functions computed with the 1D velocity models, especially for the smaller amplitude tangential and vertical components, and result in better resolution of event locations and event source type. The dominant seismicity during 24-31 July 2012 is characterized by the steady occurrence of seismic events with similar locations and moment tensor solutions at a near-characteristic inter-event time. The steady activity is sometimes interrupted by tremor-like sequences of multiple events in rapid succession, followed by quiet periods of little or no seismic activity, in turn followed by the resumption of seismicity with a reduced seismic moment-release rate. The dominant volume-increase moment tensor solutions and the steady features of the seismicity indicate a crack-valve-type source mechanism possibly driven by pressurized natural gas.
Accurate and properly calibrated velocity models are essential for the recovery of correct seismic source mechanisms. I retrieved empirical Green's functions in the frequency range ~0.2–0.9 Hz for interstation distances ranging from ~1 to ~30 km (~0.22 to ~6.5 times the wavelength) at The Geysers geothermal field, northern California, from cross-correlation of ambient seismic noise recorded by a wide variety of sensors. I directly compared noise-derived Green's functions with normalized displacement waveforms of complete single-force synthetic Green's functions computed with various 1D and 3D velocity models using the frequency-wavenumber integration method and a 3D finite-difference wave propagation method, respectively. These comparisons provide an effective means of evaluating the suitability of different velocity models to different regions of The Geysers, and assessing the quality of the sensors and the noise cross-correlations. In the T-Tangential, R-Radial, Z-Vertical reference frame, the TT, RR, RZ, ZR and ZZ components (first component: force direction, second component: response direction) of noise-derived Green's functions show clear surface-wave and even body-wave phases for many station pairs. They are also broadly consistent in phase and relative inter-component amplitudes with the synthetic Green's functions for the known local seismic velocity structure that was derived primarily from body wave travel-time tomography, even at interstation distances less than one wavelength. I also found anomalously large amplitudes in the TR, TZ, RT and ZT components of noise-derived Green's functions at small interstation distances (≲4 km) that can be attributed to ~10°-30° sensor misalignments at many stations, inferred from analysis of longer period teleseismic waveforms. After correcting for sensor misalignments, significant residual amplitudes in these components for some longer interstation distance (≳8 km) paths are better reproduced by the 3D velocity model than by the 1D models incorporating known values and fast axis directions of crack-induced shear-wave anisotropy in the geothermal field. I also analyzed the decay of Fourier spectral amplitudes of the TT component of the noise-derived Green's functions at 0.72 Hz with distance in terms of geometrical spreading and attenuation. While there is considerable scatter in the amplitudes of noise-derived Green's functions, the average decay is consistent with the decay expected from the amplitudes of synthetic Green's functions and with the decay of tangential component local-earthquake ground-motion amplitudes with distance at the same frequency.
Sesquinaries, Magnetics and Atmospheres: Studies of the Terrestrial Moons and Exoplanets
The surface brightness of Deimos, groove patterns on Phobos, crustal magnetic anomalies on the Moon and the composition of exoplanet atmospheres represent some of the most interesting and puzzling questions in planetary science. Why is Deimos significantly brighter and smoother than its partner moon Phobos? What is the origin of the crater chain "grooves" on Phobos? Are the magnetic anomalies in the lunar South Pole-Aitken basin a remnant of the basin's formation, or do they owe their existence to a primordial period of lunar dynamo activity? And finally, as visible wavelength telescopes are designed and tested for space-based exoplanet detections, can we use observed albedo spectra to determine radius, gravity, cloud pressure heights and atmospheric compositions for these planets? I use dynamical modeling, magnetic inversions and Markov Chain Monte Carlo retrievals to address these questions. Major findings include 1) the likelihood of isotropic redistribution of reaccreted ejected material on Deimos, 2) the creation of hemispherical catenae from the creation of primary craters on Phobos, which match the locations and geomorphology of several existing grooves well, 3) the first directional magnetic survey of South Pole-Aitken basin anomalies, and a larger than expected diversity in recovered paleopole directions, and 4) the critical importance of considering the effects of planet phase in exoplanet atmosphere retrievals; changing planet phase, when combined with low signal-to-noise observations, can cause several orders of magnitude of uncertainty in atmospheric methane composition and cloud pressure height, among others.
Imaging nodal knots in momentum space through topolectrical circuits
Ching Hua Lee, Amanda Sutrisno, Tobias Hofmann, Tobias Helbig, Yuhan Liu, Yee Sin Ang, Lay Kee Ang, Xiao Zhang, Martin Greiter & Ronny Thomale
Knots are intricate structures that cannot be unambiguously distinguished with any single topological invariant. Momentum space knots, in particular, have been elusive due to their requisite finely tuned long-ranged hoppings. Even if constructed, probing their intricate linkages and topological "drumhead" surface states will be challenging due to the high precision needed. In this work, we overcome these practical and technical challenges with RLC circuits, transcending existing theoretical constructions which necessarily break reciprocity, by pairing nodal knots with their mirror image partners in a fully reciprocal setting. Our nodal knot circuits can be characterized with impedance measurements that resolve their drumhead states and image their 3D nodal structure. Doing so allows for reconstruction of the Seifert surface and hence knot topological invariants like the Alexander polynomial. We illustrate our approach with large-scale simulations of various nodal knots and an experiment which maps out the topological drumhead region of a Hopf-link.
In the pursuit of ever more exotic topological states, contemporary research has witnessed a shift from established topological insulator platforms with \({\mathbb{Z}}\) or \({{\mathbb{Z}}}_{2}\) topology to photonic, mechanical, and acoustic metamaterials1,2,3 that mimic topological nodal semimetals4,5,6,7,8,9,10. The conceptual transfer from conventional electronic materials to such artificial structures allows for unprecedented control over individual couplings, and further permits access to any spectral regime of the band structure without limitations, as, e.g., implied by the chemical potential for electronic matter. The recent introduction of electric circuits for topological engineering11,12,13,14,15,16,17 brought about even greater accessibility and fine tuning, as well as much reduced cost. Most importantly, however, circuit connections transcend locality and dimensionality constraints, putting the implementation of couplings between distant sites of a high-dimensional system and nearest-neighbor connections on equally accessible footing. Furthermore, density of states divergences18 and even admittance bandstructure15,19 can be obtained with just impedance and voltage/current measurements, respectively.
Among topological structures, knots rank as among the most exotic, being intimately connected to Chern-Simons theory which underlies the braiding of quasiparticles20,21. In real space, knots are ubiquitous, being present in protein and polymer structures, optical vortices22 and, of course, everyday-life ropes. In momentum space, knotted configurations of band structure crossings (nodes) demonstrate their topological intricacies even more spectacularly, with their special "drumhead" surface modes generalizing the Fermi arcs of ordinary nodal semimetals.
To realize and image momentum space nodal knots in RLC circuits, two challenges have to be overcome. First, RLC circuits are reciprocal due to their components being symmetric from both ends, but mathematical models of nodal knots proposed thus far23,24,25,26,27 imply broken reciprocity. This apparent limitation has prevented nodal knot circuits from being developed so far, despite successes in non-knotted nodal loop circuits and metamaterials28,29,30,31. Second, the momentum knots are subextensive 1D features of the 3D Brillouin zone (BZ), and great finesse is required in imaging them.
In this work, we show how these challenges can be overcome via (i) a special scheme for designing nodal knot circuits with mirror-image partners, (ii) a new robust impedance measurement approach for imaging nodal knots and their accompanying drumhead surface states, and (iii) an instructive experimental demonstration of how the topological drumhead region of a nodal knot can be imaged.
Designer nodal knots from braids
The most natural route to realizing momentum space knots is via a 3D lattice with band intersections (nodes) along particular knotted trajectories. A generic reciprocal lattice with band intersections minimally contains two sites per unit cell, and can be written as a reciprocal (momentum) space graph Laplacian
$$J(\mathbf{k}) = l_0\,\mathbb{I} + \mathrm{Re}\,f(\mathbf{k})\,\tau_x + \mathrm{Im}\,f(\mathbf{k})\,\tau_z,$$
where $l_0$ is a uniform offset, $f(\mathbf{k})$ is an even function of $\mathbf{k}$, and $\tau_x$, $\tau_z$ are the Pauli matrices. Nodes occur whenever its two eigenvalues (bands) $l_0 \pm \sqrt{[\mathrm{Re}\,f(\mathbf{k})]^2 + [\mathrm{Im}\,f(\mathbf{k})]^2} = l_0 \pm |f(\mathbf{k})|$ coincide, i.e., yielding a vanishing gap $2|f(\mathbf{k})| = 0$. This is a complex constraint equivalent to the intersection of two level sets given by $\mathrm{Re}\,f(\mathbf{k}) = 0$ and $\mathrm{Im}\,f(\mathbf{k}) = 0$, which hence traces out a 1D nodal line in the 3D BZ. Note that we have excluded $\tau_y$ terms, which will break the nodal line into isolated Weyl points. Generically, the locus of $f(\mathbf{k}) = 0$ can correspond to broken arcs or arbitrarily intertwined closed loops. The topologically most interesting cases occur when a loop links nontrivially with itself, forming a nodal knot, or when multiple loops inseparably entangle to form a nodal link. In the following, we shall first show how $f(\mathbf{k})$ can be constructed based on a desired knot or link structure, without restricting ourselves to any particular physical implementation. Subsequently, we show why its corresponding Laplacian $J(\mathbf{k})$ can be most suitably implemented by an RLC circuit.
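To make the level-set picture concrete, here is a minimal numerical sketch (our own illustration, with a simple unknotted nodal ring as the placeholder $f$); it builds the two-band Laplacian of Eq. (1) on a grid and keeps the $\mathbf{k}$-points where the gap $2|f(\mathbf{k})|$ nearly vanishes.

```python
# Minimal sketch (our own illustration): build the two-band Laplacian of Eq. (1)
# for a placeholder f(k) hosting an unknotted nodal ring, and locate the nodal
# set as the intersection of the level sets Re f = 0 and Im f = 0.
import numpy as np

tau_x = np.array([[0, 1], [1, 0]], dtype=complex)
tau_z = np.array([[1, 0], [0, -1]], dtype=complex)

def laplacian(f_k, l0=0.0):
    """J(k) = l0*I + Re f(k) tau_x + Im f(k) tau_z, with bands l0 +/- |f(k)|."""
    return l0 * np.eye(2) + f_k.real * tau_x + f_k.imag * tau_z

ks = np.linspace(-np.pi, np.pi, 60, endpoint=False)
KX, KY, KZ = np.meshgrid(ks, ks, ks, indexing="ij")
f = (np.cos(KX) + np.cos(KY) + np.cos(KZ) - 2) + 1j * np.sin(KZ)

# Keep grid points where the gap 2|f(k)| nearly vanishes: a nodal ring at kz = 0.
nodal = np.argwhere(np.abs(f) < 0.05)
print(len(nodal), "near-nodal grid points")

# Sanity check: the two admittance bands touch on the nodal set.
print(np.linalg.eigvalsh(laplacian(f[tuple(nodal[0])])))
```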
To design $f(\mathbf{k})$, the first step is to unambiguously specify a desired knot or link. Intuitively, we can visualize a knot/link as a braid closure32, i.e., as a collection of intertwining strands with their permuted ends joined together. (Fig. 1: The number of linked components is equal to the number of cycles in the decomposition of the permutation.) The precise sequence of the strand crossings identifies the knot/link, and is annotated as a braid word $\sigma_1^{\pm}\sigma_2^{\pm}\ldots$, with $\sigma_i$ indicating that the $i$th strand crosses above the $(i+1)$th strand from the left, and $\sigma_i^{-1}$ if the crossing is from below. Two non-adjacent crossings commute: $\sigma_i\sigma_j = \sigma_j\sigma_i$ for $|i - j| \geq 2$; less obvious is the braid relation $\sigma_i\sigma_j\sigma_i = \sigma_j\sigma_i\sigma_j$, which plays a fundamental role in the Yang–Baxter equation33. Note that due to the braid relation, as well as the Markov moves that swap the closing strands34, more than one braid word can correspond to a desired knot. Nevertheless, the specification of the braid uniquely identifies the knot. For instance, $\sigma_1^2$ gives the Hopf-link, while $\sigma_1^3$ gives the trefoil knot (Fig. 1).
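The braid relation quoted above can be checked independently in a matrix representation of the braid group. The sketch below uses the unreduced Burau representation of the three-strand braid group, which is standard knot-theory material rather than anything taken from this paper, and verifies $\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$ symbolically.

```python
# Verify the braid relation in the unreduced Burau representation of B_3
# (textbook data; used here only as an independent consistency check).
import sympy as sp

t = sp.symbols("t")
block = sp.Matrix([[1 - t, t], [1, 0]])   # local action of sigma_i on strands i, i+1

sigma1 = sp.diag(block, 1)                # sigma_1 acts on strands (1, 2)
sigma2 = sp.diag(1, block)                # sigma_2 acts on strands (2, 3)

lhs = (sigma1 * sigma2 * sigma1).expand()
rhs = (sigma2 * sigma1 * sigma2).expand()
assert lhs == rhs                         # the Yang-Baxter / braid relation holds
print(lhs)
```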
Fig. 1: Nodal knots from braids.
a Braid operations $\sigma_i$ and $\sigma_i^{-1}$ represent the over/under-crossing of strand $i$ with strand $i + 1$ as we travel upwards. A braid consists of a series of braid operations, and can be closed to form a knot or link (in this case a link between three loops). b A braid closure can be embedded onto the 3D BZ torus in different ways through different choices of $F(\mathbf{k})$. Depending on its topological charge density distribution of Eq. (4), it can produce different numbers of copies of the knots in the BZ, i.e. a single copy ($F_1$) or two mirror-imaged copies ($F_2$). c–f Various examples of simple nodal knots/links defined by Eq. (3), some of which we shall explicitly construct in circuit band structures later. c Hopf-link with $\sigma = \sigma_1^2$ and $f(z, w) = (z - w)(z + w)$. d Trefoil knot with $\sigma = \sigma_1^3$ and $f(z, w) = (z - w^{3/2})(z + w^{3/2})$. e 3-link with $\sigma = (\sigma_1\sigma_2\sigma_1)^2$ and $f(z, w) = z(z^2 - w^2)$. f Figure-8 knot with $\sigma = (\sigma_2^{-1}\sigma_1)^2$ and $f(z, w) = 64z^3 - 12z(3 + 2(w^2 - \bar{w}^2)) - 14(w^2 + \bar{w}^2) - (w^4 - \bar{w}^4)$35.
The next step is to find an explicit form of f(k) that gives the knot/link corresponding to a desired braid. Mathematically, the knot/link exists as the kernel of the mapping \(f:{{\mathbb{T}}}^{3}\to {\mathbb{C}}\), which maps k in the 3D BZ \({{\mathbb{T}}}^{3}\) onto a complex number f(k). To make sure that f incorporates the information from the braid, we decompose it into a composition of mappings
$$\mathbb{T}^3 \xrightarrow{F} \mathbb{C}^2 \xrightarrow{\bar{f}} \mathbb{C},$$
i.e., $f(\mathbf{k}) = \bar{f}(F(\mathbf{k}))$, where $F(\mathbf{k}) = (z, w)$ maps $\mathbf{k}$ onto two complex numbers $z(\mathbf{k})$ and $w(\mathbf{k})$ in an auxiliary braiding space, which then yields $f$ via the braiding map $\bar{f}(z(\mathbf{k}), w(\mathbf{k})) = f(\mathbf{k})$. To concretely understand this decomposition, we first note that a braid closure lives in the space $\mathbb{C} \times S^1$, since the position of $N$ strands can be given by complex coordinates $z_1(s), z_2(s), \ldots, z_N(s)$, where $s \in [0, 2\pi]$ is the periodic vertical "time" coordinate (Fig. 1a). Each braid operation corresponds to two half-revolutions (windings) between two particles, i.e. $\sigma_i^{\pm}$ corresponds to $z_{i+1} - z_i \to e^{\pm i\pi}(z_{i+1} - z_i)$ with increasing $s$. We thus define $\bar{f}(z, w)$ by analytical continuation to complex $s = -i\log w$ as
$$\bar{f}(z, e^{is}) = \prod_{j=1}^{N} \left(z - z_j(s)\right),$$
such that points satisfying the nodal constraint $\bar{f}(z, w) = 0$ lie exactly along the trajectories $z_j(s)$. To use Eq. (3), one expresses each $z_j(s)$ as a time Fourier series containing $w = e^{is}$, i.e., a polynomial in $w$, such that $\bar{f}(z, w)$ becomes a Laurent polynomial of $z$ and $w$. For instance, a Hopf braid can be parametrized by $z_1(s) = -z_2(s) = e^{is} = w$, which yields $\bar{f}(z, w) = (z - w)(z + w) = z^2 - w^2$. This can be directly generalized to the braid of a $(p, q)$ torus knot, which consists of $p$ strands each of which twists for $q$ revolutions before closure: $z_j(s) = e^{\frac{i}{p}(2\pi j + qs)}$, yielding $\bar{f}(z, w) = z^p - w^q$. Next, we need a criterion for suitable functions $F(\mathbf{k}) = (z(\mathbf{k}), w(\mathbf{k}))$ that express $z$ and $w$ in terms of $\mathbf{k}$. Ideally, $F(\mathbf{k})$ should be able to "curl up" the braiding space $\mathbb{C} \times S^1$ into a solid torus in the 3D BZ, such that knots given by braid closures are faithfully mapped into nodal knots in the 3D BZ35 (Fig. 1). How this "curling" is accomplished is quantified by the winding number
$$n = -\frac{1}{2\pi^2} \int_{\mathrm{BZ}} d^3\mathbf{k}\; \epsilon_{\mu\nu\rho\gamma}\, N_\mu\, \partial_{k_x} N_\nu\, \partial_{k_y} N_\rho\, \partial_{k_z} N_\gamma,$$
where $\mu, \nu, \rho, \gamma \in \{1, 2, 3, 4\}$ and $z(\mathbf{k}) = N_1(\mathbf{k}) + iN_2(\mathbf{k})$, $w(\mathbf{k}) = N_3(\mathbf{k}) + iN_4(\mathbf{k})$. It measures how many times the braid winds around the BZ. Generically, one will choose an $F(\mathbf{k})$ with winding $n = \pm 1$ to guarantee a one-to-one mapping from a specific braid closure to a nodal knot in the BZ. An important caveat, however, is that $n = \pm 1$ is not possible for a passive RLC circuit implementation due to its reciprocal nature. In the discussion surrounding Eq. (7) later, we shall explain how this seeming obstacle can be avoided systematically.
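As a quick numerical check of this construction, the sketch below (our own illustration) builds the torus-knot braiding map $\bar{f}(z, w) = z^p - w^q$ from the strand parametrization given above and verifies that every strand trajectory lies in its kernel, using the trefoil $(p, q) = (2, 3)$ as an example.

```python
# Check that the strands z_j(s) = exp(i(2*pi*j + q*s)/p) of a (p, q) torus-knot
# braid satisfy the nodal constraint f(z, w) = z^p - w^q = 0 along w = e^{is}.
import numpy as np

p, q = 2, 3                                  # trefoil knot
s = np.linspace(0, 2 * np.pi, 200)
w = np.exp(1j * s)                           # periodic "time" coordinate
for j in range(p):
    z_j = np.exp(1j * (2 * np.pi * j + q * s) / p)
    assert np.allclose(z_j**p - w**q, 0)     # strand lies in the kernel of f
print("all", p, "strands satisfy f(z_j(s), e^{is}) = 0")
```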
Our approach outlined so far generalizes existing approaches in the literature: In the approach of Ezawa23, F(k) was chosen to be certain generalized Hopf fibrations, but there was no freedom of choosing f(z, w) for more general knot constructions; f(z, w) was further explored in ref. 36 in real space, but not in a toroidal momentum BZ where a nodal bandstructure can be found.
Characterizing nodal knot topology
A key feature of nodal knots is their interesting topological structure. Knotted lines of singularities in momentum space can be viewed as generalizations of Weyl points. In place of isolated sources of topological (Berry) flux, there are intertwined loops of "branch cuts". While signatures of nontrivial knot topology can manifest as optical non-linearity enhancements in electronic nodal materials37,38, we shall see that circuit implementations allow the nodal knots themselves to be directly reconstructed.
To mathematically characterize different knots, we first introduce the knot group. The knot group of a given knot $K$ is the fundamental group $\pi_1(\mathbb{T}^3 \setminus K)$ of its complement in its ambient space, which in our context is the 3-torus BZ $\mathbb{T}^3$. Physically, the complement $\mathbb{T}^3 \setminus K$ is the part of the BZ containing non-degenerate eigenmodes, and the knot group indexes the space of non-trivial closed paths within this phase space. In the simple case of a nodal ring (unknot), $\pi_1(\mathbb{T}^3 \setminus K)$ consists of equivalence classes of trajectories characterized by their winding number around the ring, and is thus given by integer-valued Berry phase windings $\mathbb{Z}$. In more complicated knots, there can be several inequivalent sets of windings, corresponding to different unique homotopy generators of $\mathbb{T}^3 \setminus K$. For instance, the knot group of a $(p, q)$ torus knot is given by $\langle x, y \mid x^p = y^q \rangle$, since a path that winds $p$ times around the "equator" can be deformed into one that winds $q$ times around the "pole". In the special case of the trefoil knot with $(p, q) = (2, 3)$, the knot group $\langle x, y \mid x^2 = y^3 \rangle$ is also isomorphic to the braid group with three strands: $\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$, as is evident from identifying $x = \sigma_1\sigma_2\sigma_1$ and $y = \sigma_1\sigma_2$. Yet, in general, the presentation of the knot group can take diverse reparametrized forms (e.g. $\langle x, y \mid xyx^{-1}yx = yxy^{-1}xy \rangle$ for the figure-8 knot), and is hence by itself insufficient for topological classification.
In order to faithfully distinguish topologically inequivalent knots, various knot invariants have been developed. Simple invariants such as the linking number or knot signature can be easily computed by examining the crossings, but only have limited discriminatory power. A more sophisticated approach involves the Chern Simons path integral20, which encapsulates topological information on the nodal singularities through certain knot polynomials, i.e., Jones polynomials, depending on the chosen gauge group. In our physical setup with classical circuits, another well-established invariant known as the Alexander polynomial will be most experimentally accessible. Starting from the topological surface "Drumhead" modes, one can reconstruct the Seifert surface, which is an orientable surface in the 3D BZ whose boundary is the nodal knot/link, and compute the Alexander polynomial from its homology properties.
Surface states of knots
Since nodal knots/links consist of closed loops, they form the boundary of topological surface drumhead modes in the projected 2D surface BZ. Intuitively, drumhead modes can be construed as Fermi arcs traced out by Weyl points moving along the nodal lines. If a nodal structure were to be deformed across a topological transition, i.e., till the loops of a Hopf link intersect, the shape of the drumhead regions along suitable projections must also transition discontinuously, i.e. from two overlapping regions to two disjoint regions. For each possible surface termination, the drumhead regions form the surface projections (shadow) of a tight, i.e. minimal-area, Seifert surface (Fig. 2). In this sense, the drumhead modes on differently oriented boundary surfaces are just different "holographic" projections of the same tight Seifert surface living in the 3D BZ. Note that a Seifert surface is itself not a topological invariant, since it is not unique: for instance, $\mathrm{Re}[f(\mathbf{k})] > 0$, $\mathrm{Re}[f(\mathbf{k})] < 0$, $\mathrm{Im}[f(\mathbf{k})] > 0$ and $\mathrm{Im}[f(\mathbf{k})] < 0$ are all valid Seifert surfaces, albeit not all tight.
Fig. 2: Seifert surfaces from topological surface states.
Projected surface states on the (001) surface of the a Hopf-link with $\sigma = \sigma_1^2$, b Borromean rings with $\sigma = (\sigma_2^{-1}\sigma_1)^3$ and c 3-link with $\sigma = (\sigma_1\sigma_2\sigma_1)^2$. We can observe multiple folded layers of the surface on top of one another. Note that a different parametrization was used to plot these surfaces, as compared to Fig. 1. Interestingly, b and c both contain three loops, but b is totally unlinked upon removal of any single loop, while c still reduces to a Hopf-link upon removal of any loop. d How a Seifert surface can be obtained from the drumhead states. By comparing the same nodal crossings across drumhead states from different surfaces (Left), one can deduce the over/under-crossings in a knot diagram. The interior of this knot can then be systematically promoted into "surface layers" bounded by appropriately defined crossings (Center), which can further be arranged into a layer arrangement where its homology loops (i.e., $\alpha_1$) are evident.
To construct a topological invariant such as the Alexander polynomial, we hence need information on how the Seifert surface links with itself: we consider the linking of its first-homology loops $\alpha_1, \alpha_2, \ldots, \alpha_l$ with the loops $\alpha_1', \alpha_2', \ldots, \alpha_l'$ of a lifted Seifert surface defined from an infinitesimally shifted Laplacian $L'(\mathbf{k}) = L(\mathbf{k}) - \epsilon\tau_j$, with $j = x$ or $z$. This shift creates a parallel Seifert surface infinitesimally displaced in a way consistent with the knot orientation given by the vector $\nabla_{\mathbf{k}}\mathrm{Re}\,f(\mathbf{k}) \times \nabla_{\mathbf{k}}\mathrm{Im}\,f(\mathbf{k})$. The $l \times l$ Seifert matrix $S_{ij}$, which captures the twisting structure of the Seifert surface, is then given by the linking number of $\alpha_i$ and $\alpha_j'$, with $l$ being the number of homology generators34,39. From that, one can obtain the Alexander polynomial invariant as
$$A(t) = t^{-l/2} \det\left[S - tS^T\right].$$
For instance, as further elaborated on in the methods section, $A(t) = t + t^{-1} - 1$ for the trefoil knot. General heuristics for constructing and visualizing the Seifert surface for a given nodal bandstructure are outlined in Fig. 2d.
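As a concrete illustration of Eq. (5), the sketch below evaluates the Alexander polynomial from a Seifert matrix symbolically. The genus-1 Seifert matrix used for the trefoil is standard knot-theory data rather than something extracted from this paper; it reproduces the $A(t) = t + t^{-1} - 1$ quoted above.

```python
# Alexander polynomial A(t) = t^(-l/2) det(S - t S^T) from an l x l Seifert matrix.
import sympy as sp

t = sp.symbols("t")

def alexander(S):
    S = sp.Matrix(S)
    l = S.shape[0]                        # number of homology generators
    return sp.expand(t ** sp.Rational(-l, 2) * (S - t * S.T).det())

# Standard genus-1 Seifert matrix of the trefoil (assumed, not from the paper):
print(alexander([[-1, 1], [0, -1]]))      # -> t - 1 + 1/t
```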
Constructing and measuring knots in circuits
Having detailed their mathematical construction and characterization, we now describe how nodal knots can be concretely implemented and detected in electrical RLC circuits via both simulations and experiments. An RLC circuit with $N$ nodes can be represented by an undirected network with graph nodes (junctions) $\alpha = 1, \ldots, N$ connected by resistors, inductors and capacitors. Its behavior is fully characterized by Kirchhoff's law at each junction, which takes the matrix form
$$I_\alpha = J_{\alpha\beta} V_\beta,$$
where $I_\alpha$ is the external current entering junction $\alpha$ and $V_\beta$ is the potential at junction $\beta$. Each entry $J_{\alpha\beta}$ of the Laplacian $J$ physically represents an admittance (AC conductance): in the submatrix spanned by junctions $(\alpha, \beta)$, an element with impedance $r_{ab}$ contributes $r_{ab}^{-1}\left(\begin{smallmatrix}1 & -1\\ -1 & 1\end{smallmatrix}\right)$ to the Laplacian, where $r_{ab} = R$, $i\omega L$ and $(i\omega C)^{-1}$ for the RLC components, respectively. The strictly reciprocal (symmetric) nature of these components constrains the possible forms of the Laplacian. In particular, for a circuit array with two sites per unit cell, $\mathrm{Re}\,f(\mathbf{k})$ and $\mathrm{Im}\,f(\mathbf{k})$ in the Laplacian of Eq. (1) must be even40 in powers of $\mathbf{k}$. This constraint severely restricts the prospects of faithfully "curling" a braid into a 3D BZ, such that each desired braid crossing is mapped one-to-one onto the resultant nodal structure. This is because nodal knots necessarily contain unpaired 2D Chern phase slices, which require reciprocity breaking. Mathematically, it corresponds to the impossibility of achieving an $F(\mathbf{k})$ winding of $|n| = 1$ (Eq. (4)) without sine terms. Primarily for this reason, nodal knots have not appeared in existing linearized reciprocal circuit architectures, or related settings of classical topological matter.
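This Kirchhoff bookkeeping is straightforward to mechanize. The sketch below (our own illustration; function names and component values are made up) assembles $J$ of Eq. (6) from a list of two-terminal RLC elements.

```python
# Sketch (our own; names and values are made up): assemble the Laplacian J of
# Eq. (6) from two-terminal elements, each contributing (1/r)[[1,-1],[-1,1]] on
# the submatrix of its junctions, with r = R, i*omega*L or 1/(i*omega*C).
import numpy as np

def circuit_laplacian(n_nodes, elements, omega):
    """elements: list of (a, b, kind, value), kind in {'R', 'L', 'C'};
    b = None grounds the element, leaving only the diagonal term."""
    J = np.zeros((n_nodes, n_nodes), dtype=complex)
    for a, b, kind, value in elements:
        r = {"R": value, "L": 1j * omega * value, "C": 1 / (1j * omega * value)}[kind]
        J[a, a] += 1 / r
        if b is not None:
            J[b, b] += 1 / r
            J[a, b] -= 1 / r
            J[b, a] -= 1 / r
    return J

# One LC unit: capacitor C between junctions 0 and 1, each junction grounded by L.
L, C = 1e-6, 1e-9
J = circuit_laplacian(2, [(0, 1, "C", C), (0, None, "L", L), (1, None, "L", L)],
                      omega=1 / np.sqrt(L * C))
print(J)  # diagonal admittances cancel at resonance omega^2 = 1/(L*C)
```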
In this work, our key insight is to instead realize pairs of nodal knots related by mirror symmetry, such that reciprocity does not have to be broken. This can be achieved via a mapping F(k) = (z(k), w(k)) such as
$$z = \cos 2k_z + \tfrac{1}{2} + i\left(\cos k_x + \cos k_y + \cos k_z - 2\right),\\ w = \sin k_x + i\sin k_y,$$
which possesses opposite windings of n ≈ ±1 in each of the two halves of the 3D BZ given by kz > 0 and kz < 0 (Fig. 1b). Provided that w is raised only to even powers in \(\bar{f}(z,w)\), the Laplacian will be even in k, and hence realizable in an RLC, and as such reciprocal, circuit.
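As a numerical sanity check of Eq. (7), one can scan the BZ for near-zeros of $f = z^2 - w^2$, in which $w$ enters only in even powers; the sketch below (our own; grid size and threshold are arbitrary choices) confirms that the nodal points split evenly between the two mirror-related halves of the BZ.

```python
# Scan the 3D BZ for the mirror-image pair of Hopf links defined by Eq. (7)
# with f(k) = z(k)^2 - w(k)^2.
import numpy as np

ks = np.linspace(-np.pi, np.pi, 80, endpoint=False)
KX, KY, KZ = np.meshgrid(ks, ks, ks, indexing="ij")

z = np.cos(2 * KZ) + 0.5 + 1j * (np.cos(KX) + np.cos(KY) + np.cos(KZ) - 2)
w = np.sin(KX) + 1j * np.sin(KY)
f = z**2 - w**2                       # w appears only in even powers (reciprocal)

nodal = np.abs(f) < 0.05              # near-nodal grid points
print(np.count_nonzero(nodal & (KZ > 0)),   # Hopf-link in the kz > 0 half...
      np.count_nonzero(nodal & (KZ < 0)))   # ...and its mirror image in kz < 0
```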
The overwhelming advantage of topolectrical circuit array implementations is that nodal structures naturally manifest as robust impedance peaks, i.e., electrical resonances. Consider a multi-terminal measurement with input currents and potentials given by the $I_\alpha$ and $V_\beta$ components, respectively (cf. Eq. (6)). In general, the impedance $Z_{ab}$ between modes $a$ and $b$ is given by
$$Z_{ab} = \sum_{\lambda} \frac{\left|\psi_\lambda(a) - \psi_\lambda(b)\right|^2}{j_\lambda},$$
where $j_\lambda$ and $\psi_\lambda$ are the corresponding eigenvalues and eigenvectors of the circuit Laplacian $J$. Note that the modes $a$, $b$ are not necessarily the real-space nodes $\alpha$, $\beta$ appearing in Eq. (6); in the translation-invariant circuits that we consider, they can also refer to quasi-momentum modes from the Fourier decomposition of multiterminal measurements. Importantly, for circuits designed such that $j_\lambda \approx 0$ along the nodal loops/knots or their drumhead regions, $Z_{ab}$ should signal pronounced divergences (resonances) when either $a$ or $b$ coincides with the nodal regions. More generally, $Z_{ab}$ should diverge strongly whenever the Laplacian exhibits a zero-eigenvalue flat band with divergent density of states, since $j_\lambda \approx 0$ for extensively many $\lambda$, unless $\psi_\lambda(a) = \psi_\lambda(b)$ at terminals $a$, $b$.
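A sketch of how Eq. (8) turns an eigen-decomposition into a two-point impedance. We restrict ourselves to the lossless case, where $J = iH$ with $H$ real symmetric so that the eigenbasis is orthonormal; this simplification is our own assumption, not a statement from the paper.

```python
# Sketch of Eq. (8), restricted to a lossless reciprocal circuit where J = i*H
# with H real symmetric, so the eigenbasis is orthonormal (our simplification).
import numpy as np

def impedance(J, a, b, tol=1e-12):
    """Two-point impedance Z_ab = sum_lambda |psi_lambda(a) - psi_lambda(b)|^2 / j_lambda."""
    H = np.real(J / 1j)                   # J = i*H  ->  H = J / i
    eigvals, psi = np.linalg.eigh(H)
    Z = 0j
    for lam, h in enumerate(eigvals):
        j_lam = 1j * h                    # admittance eigenvalue of J itself
        if abs(j_lam) > tol:              # skip exact zero modes
            Z += abs(psi[a, lam] - psi[b, lam]) ** 2 / j_lam
    return Z
```

Terminals that overlap a mode with $j_\lambda \to 0$ then yield the divergent impedance used throughout this work to image nodal sets.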
For the sake of concreteness, we specialize to a periodic circuit network with a repeated unit cell structure. This allows us to rewrite Eq. (6) as
$$I_{(\mathbf{x}, i)} = J_{(\mathbf{x}, i),(\mathbf{y}, j)} V_{(\mathbf{y}, j)},$$
with $\mathbf{x}$, $\mathbf{y}$ labeling the unit cell positions in the circuit, while $i, j \in \{1, 2\}$ labels the two sublattice nodes inside each unit cell. By exploiting the translational invariance of the unit cells in the circuit, $J_{(\mathbf{x},i),(\mathbf{y},j)} = J_{i,j}(\mathbf{x} - \mathbf{y})$, we can find the irreducible representations of the translational group of $J$ by a Fourier transformation in the real-space coordinates
$$J_{i,j}(\mathbf{k}) = \sum_{\mathbf{r}} J_{i,j}(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}.$$
In Eq. (10), we sum over all unit cell positions $\mathbf{r}$ in the circuit network. We define the Fourier transformation of $J$ to be in the directions perpendicular to the open boundary surface. The dimension of the resulting matrix $J(\mathbf{k})$ is fixed by the number of circuit nodes that do not transform into each other by translation. By diagonalizing $J(\mathbf{k})$, we find the admittance band structure $j_n(\mathbf{k})$, $n \in \{1, \ldots, \dim(J(\mathbf{k}))\}$, of the circuit network as a mapping of quasi-momentum $\mathbf{k}$ to admittance eigenvalues of $J$. The fully periodic circuit network is then constructed such that the admittance band eigenvalues are given by the absolute value of $f$, $j_\pm(\mathbf{k}) = \pm|f(\mathbf{k})|$. The kernel of the fully periodic admittance band structure features one-dimensional closed nodal loops in its 3D BZ, induced by the corresponding mapping $\mathbb{T}^3 \to \mathbb{C}$ inherited from the function $f(\mathbf{k})$. In an experimental setting, it is possible to extract the admittance band structure by performing $N$ linearly independent measurement steps, where $N$ is the number of inequivalent nodes in the network. Each step consists of a local excitation of the circuit network and a global measurement of the voltage response, from which all components of the Laplacian in reciprocal space can be extracted. Consequently, the admittance band structure is found by diagonalizing $J(\mathbf{k})$ for each $\mathbf{k}$.
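The Fourier-and-diagonalize pipeline of Eq. (10) can be prototyped in a few lines. In the sketch below (our own), the hopping dictionary is a toy placeholder rather than the actual Hopf-link circuit; it maps a unit-cell offset $\mathbf{r}$ to the $2 \times 2$ block $J_{i,j}(\mathbf{r})$.

```python
# Bloch Laplacian J(k) from real-space blocks J_{i,j}(r), then admittance bands
# j_n(k) by diagonalization (toy couplings; assumes J(k) is Hermitian).
import numpy as np

hoppings = {
    (0, 0, 0):  np.array([[ 2.0, -1.0], [-1.0,  2.0]]),
    (1, 0, 0):  np.array([[-0.5,  0.0], [ 0.0, -0.5]]),
    (-1, 0, 0): np.array([[-0.5,  0.0], [ 0.0, -0.5]]),
}

def J_bloch(k):
    k = np.asarray(k)
    return sum(block * np.exp(-1j * np.dot(k, r)) for r, block in hoppings.items())

def admittance_bands(k):
    return np.linalg.eigvalsh(J_bloch(k))   # j_n(k) for each quasi-momentum k

print(admittance_bands([np.pi / 3, 0.0, 0.0]))
```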
In the following, we show Xyce41 simulation results of the prescribed measurement procedure with periodic (Fig. 3) as well as open boundary conditions (Fig. 4) for circuits featuring a Hopf-link, trefoil knot and figure-8 knot. The experimental details for the Hopf-link are described in the Methods section.
Before proceeding to more involved nodal knots, we illustrate our approach through the simplest example of a nontrivially linked nodal structure, the Hopf-link (Fig. 1c). With $f(\mathbf{k}) = z(\mathbf{k})^2 - w(\mathbf{k})^2$ ($z_{1,2}(s) = \pm e^{is}$ in Eq. (3)), it is the simplest possible nontrivial nodal structure, with at most next-nearest neighbor (NNN) unit cells connected by capacitors C, C/2, C/4 or inductors L, L/2, L/4 in each direction (see "Methods"). In steady-state Xyce AC simulations, where the frequency parameter is set by the external excitation, the impedance peaks at $\omega^2 = \frac{1}{LC}$ indeed accurately delineate the two interlinked nodal rings, as shown in Fig. 3a. Its surface projections are even more accurately resolved as drumhead regions when the measurements are taken on open boundary surfaces normal to $\hat{x}$ and $\hat{y}$, as shown in Fig. 4a. No drumheads are expected for $\hat{z}$ open boundary surfaces, since there is another mirror-image nodal structure related by $k_z \to -k_z$.
Fig. 3: Simulated nodal structure measurements under PBCs.
Points in reciprocal space corresponding to admittance eigenvalues smaller than a threshold $j_s$ are colored black; they collectively delineate the theoretically computed nodal links or knots (orange). a Two entangled unknots forming the Hopf-link. The black dots combine the simulation results for circuit dimensions of (22 × 22 × 16), (23 × 23 × 20), (16 × 22 × 19), (22 × 22 × 14), (25 × 20 × 23) and (25 × 24 × 23). The admittance threshold is chosen to be $j_s = 0.00335\ \Omega^{-1}$. b A trefoil knot, showing the combined simulations of circuit system sizes of (20 × 20 × 20), (21 × 21 × 21), (24 × 15 × 15), (21 × 20 × 25), (18 × 19 × 17), (17 × 18 × 21), (23 × 21 × 19), (19 × 25 × 23) and (20 × 20 × 22). The admittance threshold is chosen to be $j_s = 0.0032\ \Omega^{-1}$. c A figure-8 knot with (23 × 23 × 23), (20 × 20 × 25), (20 × 20 × 21), (19 × 16 × 18), (17 × 14 × 16), (19 × 25 × 25) and (25 × 21 × 22) unit cells in the respective directions. The admittance threshold is chosen to be $j_s = 0.0037\ \Omega^{-1}$.
Fig. 4: Simulated drumhead state measurements under various OBCs.
The black diamond-shaped points indicate points in reciprocal space with admittance eigenvalues smaller than their respective admittance thresholds $j_x$, $j_y$ corresponding to x and y open boundaries, as obtained from Xyce circuit simulations. These points are contained in regions of the surface BZ which are bounded by the projected 3D theoretically computed bulk nodal structures (colored green, red and blue). a Two entangled unknots forming the Hopf-link. The black dots combine the simulation results for circuit dimensions of (22 × 22 × 16), (23 × 23 × 20), (16 × 22 × 19), (22 × 22 × 14), (25 × 20 × 23) and (25 × 24 × 23). The admittance thresholds are chosen to be $j_x = 0.0027\ \Omega^{-1}$ and $j_y = 0.0020\ \Omega^{-1}$. b A trefoil knot, showing the combined simulations of circuit system sizes of (20 × 20 × 20), (21 × 21 × 21), (24 × 15 × 15), (21 × 20 × 25), (18 × 19 × 17), (17 × 18 × 21), (23 × 21 × 19), (19 × 25 × 23) and (20 × 20 × 22). The admittance thresholds are chosen to be $j_x = 0.0030\ \Omega^{-1}$ and $j_y = 0.0025\ \Omega^{-1}$. c A figure-8 knot with (23 × 23 × 23), (20 × 20 × 25), (20 × 20 × 21), (19 × 16 × 18), (17 × 14 × 16), (19 × 25 × 25) and (25 × 21 × 22) unit cells in the respective directions. The admittance thresholds are chosen to be $j_x = 0.0028\ \Omega^{-1}$ and $j_y = 0.0032\ \Omega^{-1}$.
We next consider the trefoil knot, which is defined by $f(\mathbf{k}) = z(\mathbf{k})^2 - w(\mathbf{k})^3$. Even after topology-preserving real-space truncations (see "Methods"), it still necessitates longer-ranged connections, but circuit networks can conveniently accommodate such couplings. In Figs. 3b and 4b, we present simulation results for the detailed imaging of this nontrivially knotted nodal loop and its drumhead surface projections, which also show remarkable agreement with theoretical expectations.
Our approach can also be conveniently applied to more obscure non-torus knots, where $f(z, w)$ is not a polynomial in $z$ and $w$. For illustration, we simulate a circuit whose nodal structure is a figure-8 knot, with $f(\mathbf{k}) = 64z(\mathbf{k})^3 - 12z(\mathbf{k})\left(3 + 2\left(w(\mathbf{k})^2 - \bar{w}(\mathbf{k})^2\right)\right) - 14\left(w(\mathbf{k})^2 + \bar{w}(\mathbf{k})^2\right) - \left(w(\mathbf{k})^4 - \bar{w}(\mathbf{k})^4\right)$, where $w(\mathbf{k}), \bar{w}(\mathbf{k}) = \sin k_x \pm i \sin k_y$. The figure-8 knot belongs to the more general class of lemniscate knots, for which the equivalent braid cannot be expressed as the braiding of $p$ strands with $q$ revolutions and which hence requires both $w$ and $\bar{w}$ to appear in $f(\mathbf{k})$35. Despite its ostensibly more complicated appearance, its nodal structure and surface drumhead states, shown in Figs. 3c and 4c, respectively, can be easily obtained from impedance measurements.
Experimental mapping of surface drumhead states
A highlight of this work is the experimental verification of our design of momentum-space nodal structures. Due to the topological significance of surface drumhead states, as well as their extensively large density of states, our experiment involves mapping the drumhead state of the nodal Hopf-link shown in Fig. 4a, where $k_y$ and $k_z$ are synthetic coordinates. This surface was chosen due to the distinctive "double-lobed" structure of the drumhead state, which should prominently show up as a region of elevated topolectrical impedance.
The first step in experimental circuit design is to simplify the real-space lattice structure. After optimal truncation and tuning of the x-direction couplings (see "Methods"), we obtained a slightly modified Hopf-link with qualitatively similar double lobes in its drumhead region (Fig. 5a). Note that unlike the topological drumhead modes themselves, the elevated region contains extra "ridges and valleys" due to additional contributions from other bands in Eq. (8). This circuit is physically implemented with an array of connected printed circuit boards (PCBs), each representing one unit cell, which can be adjusted to accurately correspond to different $(k_y, k_z)$ points by tuning the inductors (Fig. 6 of Methods). Enabled by individually addressing the nodes, our tuning approach allows each inductance to be reliably adjusted by −50% to +25% of its original manufactured value, realizing, to our knowledge, the most accurately tunable circuit in the topolectrical-circuit literature to date. To realize the required variety of capacitance values, we have implemented each logical capacitor as an appropriate parallel configuration of a few commercially available capacitors (see "Methods"). All parametric tunings are relegated to the inductances, since variable inductors are more reliably tuned than variable capacitors in practice.
Fig. 5: Simulated impedances vs. experimental measurements.
a Hopf-link (dark cyan) and the drumhead region (orange) of elevated impedance it encloses, computed in the "clean" limit, free of parasitic resistances and component uncertainty. Simulation was performed with N = 30 unit cells at a resonant frequency of 795.7 kHz for the circuit Laplacian in Eqs. (15) to (18) (in "Methods"), truncated from that of Fig. 4a to facilitate experimental construction. b Impedance map of the same circuit, but simulated for our N = 9 experimental setup with empirically determined parasitic inductor and capacitor resistances RpL = 0.11 Ω and RpC = 0.03 Ω, and capacitor/inductor tolerances of 1%. c Corresponding experimentally measured impedance (crosses) with a distinct elevated region, which agrees well with simulation (lighter background contours from (b)). The frequency used is 740 kHz, offset from the predicted 795.7 kHz to account for uncertainties in the tuning circuitry (see "Methods" and Supplementary Table 3).
Fig. 6: Schematic and PCB implementation of Hopf-Link experiment.
a Schematic of our 2-leg ladder LC circuit array, whose Laplacian takes the form of the Hopf-link at resonance when the component admittances are chosen according to (15)–(18). b Each rectangle in (a) corresponds to a parallel combination of an inductor and a logical capacitor whose specifications are indicated in Supplementary Table 1. As explained in the main text, each inductor can be accurately tuned to vary ky, kz near the drumhead region. c Schematic representation of one repeating unit cell of the circuit used to construct final experiment. The switches are set to open when the inductors are being tuned, and closed when the impedance of the entire circuit is measured to map out the drumhead region. d Experimental PCB realization of one repeating unit. Visible are the inductors equipped with ferrite rods or shorted wire loops, which respectively increase/decrease the inductances in a tunable manner. e Renderings of the same PCB to emphasize its physical structure. Each large cylinder represents a variable inductor, while the components prefixed by "C" represent capacitors that are connected in parallel to form the logical capacitors in Fig. 9. Detailed specifications of these components are given in Supplementary Tables 1 and 2.
While the topological robustness of drumhead states increases with the number of unit cells N, so do the destabilizing contributions from parasitic resistances and component uncertainties. As simulated in Fig. 5b for realistic component values, we have found that a rather low N = 9 already gives rise to a robustly visible drumhead region of elevated impedance. Importantly, this robustness is well corroborated by the experimental impedance data presented in Fig. 5c. Even with only 14 (ky, kz) data points, each obtained through careful tuning, we observed a very high fidelity between the expected and measured impedance values, as also visually evident from the almost perfect match of the blue/red (low/high impedance) points between simulation and experiment (Fig. 7 of Methods). To mitigate the effects of parasitic resistance and component uncertainty, we have also taken advantage of a machine learning algorithm that chooses (ky, kz) sampling points that are most impervious to these uncertainties (Fig. 8 of Methods).
Fig. 7: Experimental vs. simulations with ideal/nonideal components.
a Simulated impedance map for N = 9 with ideal components as specified by Supplementary Table 1, with no parasitic resistance or uncertainty; an elevated drumhead region is clearly visible. White regions denote impedance values above 600 Ω. b Contour plot of the simulated impedance map for the same scenario as in (a), but with parasitic resistances RpL = 0.11 Ω, RpC = 0.03 Ω and random variation in inductor and capacitor values u ∈ [−0.01, 0.01], giving Zsim., overlaid with the experimentally measured impedance \({Z}_{\exp .}\) (colored crosses); see Supplementary Table 3. Also indicated is the normalized error of the measured points, \({\mathrm{Normalized}}\,{\mathrm{Error}}={(| {Z}_{{\mathrm{exp}} .}-{Z}_{{\mathrm{sim}}.}| /{Z}_{{\mathrm{sim}}.})}^{2}\). c Plot of the Log of the expected simulated impedance, Log(Zsim.), vs. the Log of the experimentally measured impedance, \({\rm{Log}}({Z}_{\exp .})\) (blue and red dots). The blue dots represent points in the low-lying regions, while the red dots are located near the drumhead region. The gray dashed line is computed by least-squares regression. Gray crosses represent the mean of the simulated impedance within a 0.03 radius in k space surrounding a particular (ky, kz) point, while error bars represent the standard deviation of the impedance within that radius, as listed in Supplementary Tables 4–14. For (ky, kz) points in regions with very high local standard deviation, the gray cross may not coincide with the blue/red dots. The correlation coefficient between Log(Zsim.) and \({\rm{Log}}({Z}_{\exp .})\) is 0.743, and increases to 0.863 when the three borderline points with the largest variance are excluded.
Fig. 8: Machine learning optimization of (ky, kz) measurement points.
a Initial suboptimal set of (ky, kz) sampling points for the drumhead region, subject to a relatively relaxed criterion of \(\mathrm{log}\,| {Z}_{{\rm{avg}}}-{Z}_{{\rm{SD}}}|\, > \, 5.2\), where ZSD is the standard deviation of the impedance subject to 1% tolerance in the capacitances and inductances with parasitic resistances RpL = 0.11 Ω, RpC = 0.03 Ω. While possessing higher impedance than points outside the drumhead region, for which \(\mathrm{log}\,| {Z}_{{\rm{avg}}}| \, <\, 4.8\), these still suffer from significant uncertainty effects (motley of colors). b The Nearest-Neighbor algorithm sets an allowed region (light blue) for new (ky, kz) points, which are at most a distance 0.1 away from at least two existing good sampling points. c New randomly generated unfiltered sampling points in the allowed region. d Output consisting of new sampling points filtered according to the more stringent criteria \(\mathrm{log}\,| {Z}_{{\rm{ideal}}}|\, > \, 5.7,\quad | ({Z}_{{\rm{avg}}}-{Z}_{{\rm{ideal}}})/{Z}_{{\rm{ideal}}}|\, <\, 0.2,\quad {Z}_{{\rm{SD}}}/{Z}_{{\rm{ideal}}}\,<\, 0.2\), which only need to be sieved out from the allowed region.
Besides conclusively demonstrating the experimental viability of mapping out nodal drumhead states, our experiment also pushes the state of the art in tunable topolectrical circuits, where even minute unevenness between unit cells can potentially affect the circuit band structure significantly. As further elaborated in the "Methods" section, further refinement of this technique through micro-controllers can lead to even more accurate automated tuning that can eventually realize topological pumping in quasiperiodic (Aubry-André-Harper) circuits.
We have introduced an experimentally accessible approach for realizing generic momentum-space nodal knots. Our proposed systems can be easily implemented in RLC circuit setups, whose nodal admittance band structure is directly characterizable via impedance measurements. A key theoretical novelty for accomplishing this is our choice of momentum-space embedding functions z(k), w(k), which permits the knotting (and not just linking) of momentum-space nodal structures without breaking reciprocity. This not only allows for easy implementation of almost any desired knot from its corresponding braid, but also for a robust surface drumhead state characterization of the knots. Combined with multi-terminal impedance measurements in the bulk, our RLC nodal knot framework provides unprecedentedly direct access to the Seifert surface structure and knot invariants. Our approach is explicitly demonstrated through large-scale simulations of three different nodal knot circuits, as well as an experiment which maps out the drumhead surface state of a nodal Hopf-link. This establishes a proof of principle for realizing any nodal knot in a topolectrical circuit.
As the next refinement step of the analytic simulation of the electronic setup, one needs to take into account parasitic resistances, in particular those deriving from the inductors. Here, the dissipative, i.e., non-Hermitian generalization of our idealized Hermitian circuit setup opens up yet another unexplored territory of topological matter42,43,44, namely non-Hermitian nodal knot systems40,45. We defer this analysis to future work. To directly remedy the parasitic effect of the inductors, the most viable solution is to increase the AC frequency scale at which the nodal knots are observed into the Megahertz regime. This would also help with the higher spatial integration of our nodal knot circuits. Setting up a new generation of Megahertz topolectrical circuits will hence be a prioritized experimental objective for the future.
Circuit simulation details
This section elaborates on the setup of the circuits that we simulated. As detailed in the main text, the desired knot or link is given by the kernel of a knot function f(z, w) that maps the 3D BZ \({{\mathbb{T}}}^{3}\) to a complex number in \({\mathbb{C}}\). The first step in determining the circuit design is the construction of the function f(z, w) from the corresponding braid through the procedure outlined above. In the next step, we find suitable functions z(k) and w(k) that faithfully map the knot to the kernel of f(k). To implement the corresponding function f(k) in a circuit environment, i.e., a tight-binding lattice that preserves reciprocity, we implement two mirror images of the circuit in the BZ that are related by kz → −kz. The Laplacian for the circuit simulations is then set up as (note the slightly different definition of f from Eq. 1 of the main text)
$$J({k}_{x},{k}_{y},{k}_{z})=i{\omega }_{0}C\left[\operatorname{Im}f({k}_{x},{k}_{y},{k}_{z})\,{\tau }_{x}+\operatorname{Re}f({k}_{x},{k}_{y},{k}_{z})\,{\tau }_{z}\right].$$
The circuit connections are then designed such that they form the Laplacian J(k). This is achieved by expanding the real and imaginary parts of f into single cosine terms and implementing the separated terms as internodal connections in the circuit. These connections need to fulfill two criteria. First, they need to realize the proper real-space linkage between two nodes to replicate the specified term in the (2 × 2) Fourier-transformed Laplacian. Second, the magnitude of these elements must scale with the prefactor of the corresponding cosine term. A positive value is implemented by a capacitor and a negative value by an inductor. Finally, we need to account for the total node conductance in the circuit setup by implementing adequate grounding terms. The scales of the capacitances and inductances are chosen to be C = 1 nF and L = 10 μH, yielding a resonance frequency of
$${f}_{0}=\frac{1}{2\pi \sqrt{LC}}\approx 1.592\ {\rm{MHz}}.$$
f0 will be the operating frequency for all performed simulations, where signatures of the prescribed nodal knots or links emerge. At this specific frequency, the inductances defined act as negative capacitances due to their π relative phase shifts. For reasons of numerical stability, we include additional ground connections of Cground = 100 nF and Rground = 1 kΩ at every node in the circuit. These terms just enter as an identity matrix contribution \({l}_{0}{\mathbb{I}}\) and can be subtracted out after the band structure has been reconstructed from the simulation data. The Laplacian of the circuit is then shifted as \(J({\bf{k}})\to J({\bf{k}})+{l}_{0}{\mathbb{I}}\), and its two band admittance spectrum is given by
$${j}_{\pm }({k}_{x},{k}_{y},{k}_{z})={l}_{0}\pm i{\omega }_{0}C\,\sqrt{{\left(\operatorname{Re}f\right)}^{2}+{\left(\operatorname{Im}f\right)}^{2}}={l}_{0}\pm i{\omega }_{0}C\,| f| .$$
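As a quick numerical illustration, the resonance condition and this two-band admittance spectrum can be evaluated in a few lines of Python. This is a minimal sketch; the knot function and the embedding z(k), w(k) used here are placeholder assumptions (the standard Hopf-link polynomial f = z2 − w2, with a figure-8-style choice of w), not necessarily the exact functions used in our simulations.

import numpy as np

C, L = 1e-9, 10e-6                        # 1 nF and 10 uH, as in the text
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))   # ~1.592 MHz
w0 = 2 * np.pi * f0

def admittance_bands(f_k, l0=0.0):
    # j_pm = l0 +/- i*w0*C*|f(k)|; we return the imaginary parts after the
    # (subtractable) grounding shift l0
    return l0 + w0 * C * np.abs(f_k), l0 - w0 * C * np.abs(f_k)

# placeholder embedding of the 3D BZ into (z, w) -- an assumption made
# purely for illustration
k = np.linspace(-np.pi, np.pi, 41)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
w = np.sin(kx) + 1j * np.sin(ky)
z = np.cos(kx) + np.cos(ky) + np.cos(kz) - 2 + 1j * np.sin(kz)
j_plus, j_minus = admittance_bands(z**2 - w**2)   # Hopf-link choice f = z^2 - w^2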
To recreate the admittance band structure, we use the measurement scheme initially described in ref. 19. There, and in all our simulations, each measurement step consists of a local excitation of the circuit at one node through an AC driving voltage via a shunt resistance, and a global measurement of the total voltage profile at all nodes in the circuit. The shunt resistance enables the measurement of the input current that is fed into the circuit.
From the global response of the circuit, we can reconstruct the Fourier coefficients of J in reciprocal space and diagonalize J(k) for every k. This measurement procedure must be repeated M times, where M is the number of non-equivalent nodes in the circuit network, in order to reconstruct the full Laplacian J(k). From the admittance band structure, we then distill the closed nodal loops of the specified model by selecting the imaginary admittance eigenvalues that are smaller than a globally chosen upper threshold. This upper bound is selected such that the valley points corresponding to the zero nodal points on the knot or link are recovered, but no additional points appear in regions with small gradients close to the nodal line. Due to the discretization of the BZ, we recover only a discrete set of nodal points in the BZ. This drawback can be counterbalanced to some degree by simulating circuit networks with different dimensions in terms of unit cells. This way, we enhance our grid resolution in reciprocal space and obtain a more precise result due to an increased number of data points on the knot or link.
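A minimal post-processing sketch of this reconstruction, assuming the full real-space Laplacian has already been assembled from the M excitation steps (shown for a 1D chain of unit cells for brevity; the 3D case Fourier-transforms each lattice direction in turn):

import numpy as np

def bloch_laplacian(J_real, n_cells, m):
    # J_real: (n_cells*m) x (n_cells*m) circulant real-space Laplacian,
    # with m non-equivalent nodes per unit cell
    blocks = J_real.reshape(n_cells, m, n_cells, m)
    ks = 2 * np.pi * np.arange(n_cells) / n_cells
    Jk = np.zeros((n_cells, m, m), dtype=complex)
    for i, kk in enumerate(ks):
        for n in range(n_cells):
            Jk[i] += blocks[0, :, n, :] * np.exp(1j * kk * n)
    return ks, Jk

def nodal_candidates(ks, Jk, threshold, l0=0.0):
    # keep k-points whose smallest admittance eigenvalue (imaginary part,
    # after subtracting the grounding offset l0) lies below the threshold
    keep = []
    for kk, J in zip(ks, Jk):
        ev = np.linalg.eigvals(J)
        if np.min(np.abs((ev - l0).imag)) < threshold:
            keep.append(kk)
    return np.asarray(keep)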
Similarly, the OBC simulations are evaluated by extracting admittance eigenvalues smaller than a chosen limit. These points in the projected BZ form 2D areas, as shown in Supplementary Fig. 2, which correspond to projections of the Seifert surface bounded by the corresponding link or knot onto the open boundary surface. The corresponding zero-admittance eigenstates amount to the so-called drumhead states, which are exponentially localized at the boundary with inverse localization lengths given by their imaginary gaps46,47. With these preliminary explanations, the only remaining prerequisite for performing the individual simulations is the specification of the employed knot function f(z, w) and the functions z(k) and w(k). Note that since f(k) in general consists of an exponential tail of distant couplings in real space46,48,49, some gap-preserving real-space truncation of its real and imaginary parts is necessary for actual implementations. For the most part, this presents no additional challenges, and can be adapted to conform to the specifications of available electronic components. We also need to define an upper admittance threshold for resonance to extract the nodal points from the obtained simulation data.
We perform Xyce simulations for different system sizes in order to increase the resolution of the knot in the BZ. Since the reciprocal space consists of discrete points of allowed quasi-momenta for any finite number of unit cells, we cannot trace out the knot exactly. The density of samples can be increased by enlarging the system size, but this raises the computational cost. Our alternative approach is to create several copies of the same setup with varying system sizes. Choosing the numbers of unit cells as co-primes of one another increases the sampling density of the combined momentum grid without the need to create a very large system.
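The gain from coprime system sizes is easy to see numerically; a small sketch along one lattice direction:

import numpy as np

def momentum_grid(sizes):
    # union of allowed quasi-momenta k = 2*pi*m/N for several system sizes N;
    # coprime sizes interleave instead of coinciding, densifying the grid
    ks = np.concatenate([2 * np.pi * np.arange(N) / N for N in sizes])
    return np.unique(np.round(ks, 12))

print(len(momentum_grid([20])))          # 20 distinct k-points
print(len(momentum_grid([20, 21, 23])))  # 62: only k = 0 is shared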
In the Xyce simulations, we create SPICE netlists which represent a circuit network consisting of capacitors and inductors described by a Laplacian of the form of (11), and perform AC analyses on them. In order to simulate one step of the measurement procedure needed to reconstruct the admittance band structure, we connect an ideal voltage source via a shunt resistor to the circuit. As the amplitude of the voltage and the shunt resistance can be chosen arbitrarily in a simulation, we used 1 V and 1 Ω. The AC analysis frequency is given by f0.
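For illustration, the structure of such a netlist can be generated programmatically. The sketch below emits one single-node excitation step with the stated 1 V source, 1 Ω shunt and grounding elements; the LC component lines are placeholders standing in for the actual knot-circuit connections, not our full netlist.

def write_netlist(path, f_ac=1.592e6, node="n001"):
    lines = [
        "* one excitation step of the band-reconstruction scheme",
        "Vdrive in 0 AC 1",              # 1 V AC driving voltage
        f"Rshunt in {node} 1",           # 1 Ohm shunt to read the input current
        f"Cg {node} 0 100n",             # grounding for numerical stability
        f"Rg {node} 0 1k",
        f"L1 {node} 0 10u",              # placeholder LC couplings
        f"C1 {node} 0 1n",
        f".ac lin 1 {f_ac:.6g} {f_ac:.6g}",
        f".print ac v({node}) i(Vdrive)",
        ".end",
    ]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")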
Drumhead state experiment
The objective of our experiment is to reconstruct the surface topological drumhead state of the simplest illustrative nodal structure, the Hopf-link, as shown in Supplementary Fig. 2a. To this end, the physical circuit must possess a Laplacian LC that is proportional to the Laplacian L of a Hopf-link at a particular resonant AC frequency ω0. For a streamlined implementation, we deformed the Hopf-link Laplacian from the Supplementary Information such that it contains only up to nearest-neighbor (NN) connections along the surface normal \(\hat{x}\) while retaining a qualitatively similar nodal structure (Fig. 5). Explicitly, we require
$${\left.{L}^{C}\right|}_{\omega = {\omega }_{0}}=i\omega \begin{pmatrix}{L}_{z}^{C}&{L}_{x}^{C}\\ {L}_{x}^{C}&-{L}_{z}^{C}\end{pmatrix}\Bigg|_{\omega = {\omega }_{0}}\propto i{\omega }_{0}\begin{pmatrix}{L}_{z}&{L}_{x}\\ {L}_{x}&-{L}_{z}\end{pmatrix},$$
where the components of the deformed Hopf-link Laplacian are given by \({L}_{z}=4\cos {k}_{x}(2-\cos {k}_{y}-\cos {k}_{z})-2(5-4\cos {k}_{y}+\cos 2{k}_{y}+\cos ({k}_{y}-{k}_{z})-4\cos {k}_{z}-\cos 4{k}_{z}+\cos ({k}_{y}+{k}_{z}))\) and \({L}_{x}=(1+2\cos 2{k}_{z})(\cos {k}_{z}+\cos {k}_{y}+\cos {k}_{x}-2)\). One way to satisfy Eq. (14) is to design the physical circuit such that its corresponding components \({L}_{x}^{C},{L}_{z}^{C}\) are of the forms
$${L}_{x}^{C}=-{t}_{AB}-2v\cos {k}_{x},$$
$${L}_{z}^{C}=2t(1-\cos {k}_{x})+{g}_{A}+{t}_{AB}+2v$$
where v, t, tAB, gA and gB depend parametrically on ky, kz as follows:
$$\begin{aligned}v&=1+2\cos 2{k}_{z}\\ t&=4(2-\cos {k}_{y}-\cos {k}_{z})\\ {t}_{AB}&=2(\cos {k}_{y}+\cos {k}_{z}-2)(1+2\cos 2{k}_{z})\\ {g}_{A}&=6+4\cos 2{k}_{y}+4\cos ({k}_{y}-{k}_{z})-12\cos {k}_{z}+4\cos 2{k}_{z}-2(5+2\cos 2{k}_{z})\cos {k}_{y}-2\cos 3{k}_{z}-4\cos 4{k}_{z}+4\cos ({k}_{y}+{k}_{z})\\ {g}_{B}&=-2-4\cos 2{k}_{y}-4\cos ({k}_{y}-{k}_{z})+4\cos {k}_{z}+4\cos 2{k}_{z}+2(3-2\cos 2{k}_{z})\cos {k}_{y}-2\cos 3{k}_{z}+4\cos 4{k}_{z}-4\cos ({k}_{y}+{k}_{z})\end{aligned}$$
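For concreteness, these (ky, kz)-dependent coupling strengths translate directly into code; the sketch below follows the expressions above, with the parenthesization cos(ky ± kz) as reconstructed here.

import numpy as np

def couplings(ky, kz):
    v = 1 + 2 * np.cos(2 * kz)
    t = 4 * (2 - np.cos(ky) - np.cos(kz))
    tAB = 2 * (np.cos(ky) + np.cos(kz) - 2) * (1 + 2 * np.cos(2 * kz))
    gA = (6 + 4 * np.cos(2 * ky) + 4 * np.cos(ky - kz) - 12 * np.cos(kz)
          + 4 * np.cos(2 * kz) - 2 * (5 + 2 * np.cos(2 * kz)) * np.cos(ky)
          - 2 * np.cos(3 * kz) - 4 * np.cos(4 * kz) + 4 * np.cos(ky + kz))
    gB = (-2 - 4 * np.cos(2 * ky) - 4 * np.cos(ky - kz) + 4 * np.cos(kz)
          + 4 * np.cos(2 * kz) + 2 * (3 - 2 * np.cos(2 * kz)) * np.cos(ky)
          - 2 * np.cos(3 * kz) + 4 * np.cos(4 * kz) - 4 * np.cos(ky + kz))
    return {"v": v, "t": t, "tAB": tAB, "gA": gA, "gB": gB}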
The Laplacian defined by these couplings can be realized with an LC circuit array in the form of a 2-leg ladder with N unit cells (rungs) and 2N nodes in total (Fig. 9). Each term x ∈ {t, v, −t, tAB, gA, gB} is represented by a parallel configuration of a tunable inductor Lx and capacitor Cx of appropriate value, such that its admittance
$${G}_{x}({k}_{y},{k}_{z})=i\omega {C}_{x}+\frac{1}{i\omega {L}_{x}}=i\omega {C}_{0}\left({c}_{x}-\frac{1}{{\omega }^{2}{L}_{x}{C}_{0}}\right)$$
is of the required (ky, kz)-dependent value t, v, −t, tAB, gA or gB at a particular ω = ω0. As elaborated later, it suffices to vary only the inductances to sweep through the entire range of (ky, kz) stipulated by the size of the drumhead region in Fig. 5. Here C0 is an arbitrarily defined reference capacitance value that offers a free rescaling degree of freedom in the tuning, and cx is the corresponding dimensionless capacitance of element x. Each element proportional to \(2(1-\cos {k}_{x})\) couples two neighboring unit cells, while each term in the off-diagonal \({L}_{x}^{C}\) couples the upper and lower rungs. Note that our proposed circuit requires only LC components, i.e., inductors and capacitors, with positive and negative resistors truncated off without appreciably changing the shape of the drumhead region. That said, with the contact and parasitic resistances intrinsic to an experimental circuit, some of these resistances will be inevitably reintroduced. These, however, also lead to no significant modification of the drumhead region, as verified via a simulation with realistic amounts of parasitic resistances and component uncertainty (Fig. 7b).
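Inverting Eq. (18) for the inductance gives the tuning target of each coupling unit; a minimal sketch, with C0 and ω0 as the free design parameters discussed above:

def required_inductance(x_target, c_x, C0, w0):
    # solve  c_x - 1/(w0**2 * L_x * C0) = x_target  for L_x; the result is
    # physical (positive) only when c_x > x_target, otherwise c_x itself
    # must be enlarged, e.g. by an extra parallel capacitor
    denom = c_x - x_target
    if denom <= 0:
        raise ValueError("target not reachable by inductor tuning alone")
    return 1.0 / (w0**2 * C0 * denom)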
Fig. 9: Tuning of variable inductors.
Illustration of tuning methods of the variable inductors and circuit diagram of impedance measurement circuit for tuning inductors and imaging the drumhead region. a By adding a ferrite rod to the top of a fixed-value inductor, the inductance may be increased by up to 25%. b By surrounding the inductor with a shorted wire loop, the inductance may be decreased by up to 50%. c Experimental implementation of ferrite rod and wire loop. A plastic straw and modeling clay is used to hold the ferrite rod/wire loop in place after tuning. d External circuit used to measure and tune the impedance of each coupling unit (logical component), as described by (19).
Our circuit is built from interconnected PCBs, each representing one unit cell, as shown in Fig. 9c. With a strategic choice of C0 and frequency ω0, it is possible to scan through the entire relevant range of ky, kz by tuning the inductances alone. As elaborated later, this can be accurately achieved through the use of ferrite rods and shorted wire loops within/around each inductor. The required fixed capacitances are realized by combining commercially available capacitors in parallel into logical capacitors. The specifications of these logical components, as well as those of their underlying physical capacitors, are detailed in Supplementary Tables 1 and 2.
A major consideration in topolectrical circuit design is that imperfections from parasitic/contact resistances and component uncertainties should not significantly change the measured impedance and hence the Laplacian band structure. Inductors are commonly manufactured with ±10% value uncertainty and a typical parasitic resistance that scales at a rate of 2.45 Ω per 1 mH. In theory, the impact of parasitic resistance can be decreased by increasing the inductance, but this fails in practice because larger inductors typically require longer wires, which in turn increases the parasitic resistance. Capacitors, on the other hand, are commonly manufactured with ±5% uncertainty and have negligible parasitic resistance compared to PCB trace wires, which contribute 0.024 Ω/cm. For capacitors, the effect of parasitic resistance may be decreased by picking smaller capacitances; however, smaller capacitors require larger inductors for the same measurement frequency, which again increases parasitic resistance, or require a higher frequency. Therefore one ideally chooses ω0 to be as high as the signal generator and impedance measurement equipment allow, and then chooses a value of C0 that minimizes the effect of parasitic resistance in the capacitors, yet is not so small as to inflate the inductor values and their parasitic resistances. Such imperfections can be modeled as additional serial resistances on inductances and capacitances that are rescaled by a factor of 1 + u, with u a random variable, as illustrated in Fig. 9b for the measured impedance across the entire circuit (between nodes 1A and NB).
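These imperfection figures admit a simple numerical model. The sketch below draws disordered impedances with series parasitic resistances using the quoted rates; it is a hypothetical helper for such simulations, with the capacitor's trace-wire resistance lumped into a fixed RpC.

import numpy as np

rng = np.random.default_rng(0)

def inductor_impedance(L, w, tol=0.10):
    u = rng.uniform(-tol, tol)          # +/-10% manufacturing spread
    Rp = 2.45 * (L / 1e-3)              # 2.45 Ohm per mH of inductance
    return Rp + 1j * w * L * (1 + u)

def capacitor_impedance(Cap, w, tol=0.05, RpC=0.03):
    u = rng.uniform(-tol, tol)          # +/-5% capacitor tolerance
    return RpC + 1.0 / (1j * w * Cap * (1 + u))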
Impedance data measurement and analysis: To map out the drumhead state, we measured the impedance across the first and last nodes of the circuit (1A and NB) at a number of strategically determined (ky, kz) points that are relatively insensitive to component disorder, as elaborated in the following subsection. As presented in Fig. 5, the drumhead region is indeed clearly visible as a region of elevated impedance, in close agreement with the imperfection-corrected simulation (Fig. 7b). The simulation also revealed that parasitic resistance generally decreases the impedance contrast by reducing the high impedance in the drumhead region and raising the low impedance outside of it. Component uncertainty increases the variance of the measured impedance at each individual (ky, kz) point, such that a larger number of measured (ky, kz) points is needed to average over the noise.
While a very large N would yield the most topologically robust drumhead state in an ideal setting, in practice it would also introduce much larger accumulated parasitic resistances and total component uncertainties, not to mention the copious resources needed. As such, we have built our circuit with N = 9 cells as a compromise between topological localization, noise and cost. The complete setup is pictured in Fig. 10. After tuning all inductors in accordance with the ky, kz values, the impedance across the entire circuit is measured by attaching nodes 1A and NB to a voltage divider and observing the voltage drop across the circuit using an oscilloscope. After correcting for possible frequency shifts due to uncertainties in the tuning circuitry (elaborated later), we indeed measured a distinctive cluster of elevated impedances in the drumhead region, as shown in Fig. 5c and analyzed in Fig. 7.
Fig. 10: Experimental setup for impedance measurement.
a Full view of setup for impedance measurement of the complete circuit array. The oscilloscope generates the AC voltage signal for the impedance measurement, as well as for tuning each variable inductor of each coupling unit (logical component). Schematics of the circuits are described in Figs. 9 and 6. b Close up of measurement circuit with AC voltage supply, test resistor forming a voltage divider, inductor currently being tuned, and wire connecting to oscilloscope to measure voltage across the coupling unit.
Even though the experimental setup also suffers from imperfect tuning of inductor values and additional parasitic resistances from the solders linking the repeating PCBs (unit cells), we are still able to reliably distinguish the low/high impedance points and hence delineate the correct drumhead region. Experimentally, we were able to measure 5 points in the low-lying regions, \({Z}_{\exp .}<160\,\Omega\), 6 points in the elevated region, \({Z}_{\exp .}> 250\,\Omega\), and 3 points in the borderline region between them. The 5 points in the low-lying region correspond to the "inside" of the elevated region, ky < 0.5, 0.6 < kz < 1.0, and the region to the left of the elevated region, kz < 0.4. The square root of the average normalized error of each measured point was 0.2, and the coefficient of correlation between simulated and measured data was 0.743. When excluding the three points (ky, kz) = (0.9, 1.28), (0.96, 1.07), (0.07, 1.26), which according to simulation lie in regions with very high local variance within a small k radius (see Fig. 7b, c, and Supplementary Tables 4, 5, and 6), the coefficient of correlation increases to 0.863. These relatively unstable points were chosen for measurement in order to map a complete ring around the drumhead, but are difficult to measure due to the extreme variance in the region kz > 1.0. In larger circuits with much higher N, for example N = 30, the region kz > 1.0 becomes easier to measure due to the reduced variance at higher N (see Fig. 5a).
Machine learning assisted selection of sampling points: To minimize the effect of uncertainties and reduce the number of (ky, kz) points needed to reconstruct a prominent drumhead region of elevated impedance, we used a Nearest-Neighbor machine learning algorithm to select (ky, kz) sampling points which are optimally impervious to capacitor and inductor uncertainties, see Fig. 8. This is important for reducing experimental costs, as well as reducing the impact of inevitable component uncertainties.
We associate each sampling point with Zavg, the impedance of a particular (ky, kz) point averaged over a large number of randomly generated component uncertainties within the tolerance range (±1% for both inductors and capacitors) and a fixed parasitic resistance of RpL = 0.11 Ω, RpC = 0.03 Ω. ZSD is the corresponding standard deviation over many samples of a particular (ky, kz) point simulated with component uncertainty and parasitic resistance, and Zideal is the predicted impedance without component uncertainty or parasitic resistance. To ensure the integrity of the measured data, we first require that ∣Zavg − Zideal∣/∣Zideal∣ is small; this is not guaranteed when the impedance depends highly nonlinearly on the capacitances and inductances. Furthermore, ZSD/Zideal should be minimized too, so as to mitigate the variance caused by the uncertainties.
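In code, these statistics amount to a simple Monte Carlo loop over component draws. The solver circuit_impedance below is a hypothetical stand-in for the full circuit model (a single representative scaling per draw, for brevity; in the actual simulations every component receives its own independent deviation):

import numpy as np

def impedance_statistics(circuit_impedance, ky, kz, n_samples=500, tol=0.01,
                         rng=np.random.default_rng(1)):
    # circuit_impedance(ky, kz, uL, uC, parasitics) is a hypothetical solver
    # returning the 1A-to-NB impedance with inductors/capacitors scaled by
    # (1+uL)/(1+uC) and, if parasitics=True, RpL = 0.11 Ohm, RpC = 0.03 Ohm
    Zs = np.array([circuit_impedance(ky, kz, rng.uniform(-tol, tol),
                                     rng.uniform(-tol, tol), True)
                   for _ in range(n_samples)])
    Z_ideal = circuit_impedance(ky, kz, 0.0, 0.0, False)
    Z_avg, Z_SD = np.abs(Zs).mean(), np.abs(Zs).std()
    return Z_avg, Z_SD, np.abs(Z_ideal)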
Given an initial set of sampling points, our selection algorithm improves on them according to the aforementioned metrics, and outputs a more desirable set of points. As elaborated in Fig. 8, the Nearest-Neighbor unsupervised learning algorithm efficiently determines a smaller allowed search space, allowing the filtering of desirable measurement points to be performed with much less computational resources compared to a brute force approach. We have optimized the selection of sampling points only within the drumhead region, since high impedance points are more sensitive to parasitic resistance and component uncertainty.
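A compact sketch of the two stages, the allowed-region proposal and the stringent filtering, with the thresholds quoted in Fig. 8 (the exact proposal distribution used in our algorithm may differ):

import numpy as np

def propose_points(good_pts, n_new, rng, r=0.1):
    # allowed region: within distance r of at least two existing good points
    good = np.asarray(good_pts)
    out = []
    while len(out) < n_new:
        p = good[rng.integers(len(good))] + rng.uniform(-r, r, size=2)
        if (np.linalg.norm(good - p, axis=1) <= r).sum() >= 2:
            out.append(p)
    return np.array(out)

def keep_point(Z_ideal, Z_avg, Z_SD):
    # stringent output criteria of Fig. 8d
    return (np.log(abs(Z_ideal)) > 5.7
            and abs((Z_avg - Z_ideal) / Z_ideal) < 0.2
            and Z_SD / abs(Z_ideal) < 0.2)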
Tuning of each unit cell through variable inductors: To realize the Laplacian at a specific (ky, kz) point, the admittance of each coupling unit, i.e., logical component x, must be tuned to correspond to Gx in (18). This may be done by attaching each coupling unit to a voltage divider and AC power supply, and observing the voltage drop across the coupling unit using an oscilloscope, see Fig. 10. The coupling unit is placed in series with a calibrating resistance Rt = 1.5 kΩ, a Vsupply = 5 Vpp voltage is supplied across the entire circuit, and the voltage amplitude Vx between the ends of the coupling unit is measured. The inductance of the variable inductor in each coupling unit is tuned until Vx matches
$${V}_{x}=\frac{{V}_{{\rm{supply}}}}{1+{R}_{{\rm{t}}}{G}_{x}({k}_{y},{k}_{z})}\quad {\rm{for}}\quad x\in [t,v,-t,{t}_{AB},{g}_{A},{g}_{B}].$$
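In practice this target voltage is precomputed for every coupling unit and (ky, kz) point; a one-line helper, with Vsupply = 5 Vpp and Rt = 1.5 kΩ as stated above:

def target_voltage(G_x, V_supply=5.0, R_t=1.5e3):
    # Eq. (19); G_x is complex, so the measurable quantity is the amplitude
    return abs(V_supply / (1.0 + R_t * G_x))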
Each variable inductor is tuned using ferrite rods or shorted wire loops placed close to the fixed-value inductors, see Fig. 6. To increase the inductance, a ferrite rod is placed closer to the inductor to better align the internal magnetic fields in it. Conversely, to decrease the inductance, a wire loop is used to "shield" the inductor from any change in magnetic field, thereby decreasing its self-inductance. The wire loop decreases the inductance of the original fixed-value inductor through an opposing induced current, as derived below. First, the e.m.f. induced in the wire loop is equal to
$$\epsilon =-\frac{d}{dt}{\Phi }_{{\rm{w}}l}=-j\omega {L}_{{\rm{m}}}i(t),$$
where Lm is the mutual inductance between the fixed-value inductor and the wire loop, and i(t) is the current running through the fixed-value inductor. The current in the wire loop is then
$${i}_{{\rm{w}}l}=\frac{\epsilon }{{Z}_{{\rm{w}}l}}=\frac{-j\omega {L}_{{\rm{m}}}i(t)}{j\omega {L}_{{\rm{w}}l}+{R}_{{\rm{w}}l}},$$
where Lwl is the self-inductance of the wire loop, and Rwl is the resistance of the wire loop. At sufficiently high AC frequencies, we may ignore the resistance in the wire loop, such that the current induced in the wireloop is simply
$${i}_{{\rm{w}}l}(t)\approx -\frac{{L}_{{\rm{m}}}}{{L}_{{\rm{w}}l}}i(t).$$
The total flux on the original fixed-value inductor is then
$$\Phi =L\,i(t)+{L}_{{\rm{m}}}{i}_{{\rm{w}}l}(t)\approx \left(L-\frac{{L}_{{\rm{m}}}^{2}}{{L}_{{\rm{w}}l}}\right)i(t),$$
implying a decreased effective inductance of the original fixed-value inductor:
$${L}_{{\rm{eff}}}=\frac{\Phi }{i(t)}\approx L-\frac{{L}_{{\rm{m}}}^{2}}{{L}_{{\rm{w}}l}}.$$
Using a combination of the ferrite rod and wire loop, we were able to alter the inductance of a fixed-value inductor component by −50% to +25% of its original manufactured value.
With this range of variable inductances and the stipulated values of the logical components given in Supplementary Table 1, we selected default fixed inductor values of 39 μH for the t, v coupling units, and 10 μH for the remaining −t, tAB, gA, gB units. The default capacitor values were selected to reproduce the reference point (ky, kz) = (1.02, 0.75) via Eq. (18) without any alteration of the fixed-value inductors. Because capacitors are only sold in a restricted set of standard values, we used parallel combinations of several standard capacitors to make up the capacitances needed in all of the coupling units. The combinations used in the experiment for the coupling units t, v, −t, tAB, gA, gB are shown in Supplementary Table 1. In addition to the variable inductors, removable 470 or 1800 pF capacitors are sometimes connected in parallel to certain coupling units to reach (ky, kz) points beyond the tuning range of the variable inductors alone. See Supplementary Table 2 for a complete list of component part numbers used in the experiment.
Offsetting calibration uncertainty: In the experiment, all variable inductor values are calibrated by a voltage divider as illustrated by Fig. 6d. As suggested by (19), they are crucially dependent on the known value of the calibrating resistance Rt. In particular, suppose that Rt has a manufacturing uncertainty ΔRt. Then since the calibration voltage depends only on the product RtGx, the admittance of component Gx will also sustain a measurement error of ΔGx/Gx = − ΔRt/Rt. Since Gx is related to the frequency via \({G}_{x}=i\omega {C}_{x}+{(i\omega {L}_{x})}^{-1}\) in (18), the effect of a nonzero ΔRt can be offset by shifting the measurement frequency window by
$$\Delta \omega \approx \frac{\Delta {G}_{x}}{d{G}_{x}/d\omega }=-\frac{{G}_{x}\,\Delta {R}_{{\rm{t}}}}{{R}_{{\rm{t}}}\,d{G}_{x}/d\omega }=-\frac{i\omega {C}_{x}+\frac{1}{i\omega {L}_{x}}}{i{C}_{x}-\frac{1}{i{\omega }^{2}{L}_{x}}}\,\frac{\Delta {R}_{{\rm{t}}}}{{R}_{{\rm{t}}}}=-\omega \,\frac{{\omega }^{2}-{\omega }_{x}^{2}}{{\omega }^{2}+{\omega }_{x}^{2}}\,\frac{\Delta {R}_{{\rm{t}}}}{{R}_{{\rm{t}}}},$$
where \({\omega }_{x}^{2}={({L}_{x}{C}_{x})}^{-1}\) is the resonant frequency of coupling unit x. As such, calibration uncertainties can be offset by a small shift (in this case empirically determined to be all close to −60 kHz) in the measurement frequency up to leading order, allowing the drumhead region to still be faithfully mapped out.
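This leading-order compensation is straightforward to evaluate per coupling unit; a minimal sketch of the closed-form expression above:

def frequency_offset(w, L_x, C_x, dRt_over_Rt):
    # leading-order measurement-frequency shift that offsets a fractional
    # calibration error dRt/Rt in the divider resistance; w_x^2 = 1/(L_x*C_x)
    wx2 = 1.0 / (L_x * C_x)
    return -w * (w**2 - wx2) / (w**2 + wx2) * dRt_over_Rt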
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Lu, L., Fu, L., Joannopoulos, J. D. & Soljačić, M. Weyl points and line nodes in gyroid photonic crystals. Nat. photonics 7, 294 (2013).
Meeussen, A. S., Paulose, J. & Vitelli, V. Geared topological metamaterials with tunable mechanical stability. Phys. Rev. X 6, 041029 (2016).
Lin, J. Y., Hu, N. C., Chen, Y. J., Lee, C. H. & Zhang, X. Line nodes, Dirac points, and Lifshitz transition in two-dimensional nonsymmorphic photonic crystals. Phys. Rev. B 96, 075438 (2017).
Lv, B. et al. Experimental discovery of Weyl semimetal TaAs. Phys. Rev. X 5, 031013 (2015).
Soluyanov, A. A. et al. Type-II Weyl semimetals. Nature 527, 495 (2015).
Ezawa, M. Loop-nodal and point-nodal semimetals in three-dimensional honeycomb lattices. Phys. Rev. Lett. 116, 127202 (2016).
Bzdušek, T., Wu, Q., Rüegg, A., Sigrist, M. & Soluyanov, A. A. Nodal-chain metals. Nature 538, 75 (2016).
Bian, G. et al. Topological nodal-line fermions in spin-orbit metal PbTaSe2. Nat. Commun. 7, 10556 (2016).
Neupane, M. et al. Observation of topological nodal fermion semimetal phase in ZrSiS. Phys. Rev. B 93, 201104 (2016).
Zhong, C. et al. Three-dimensional Pentagon Carbon with a genesis of emergent fermions. Nat. Commun. 8, 15641 (2017).
Ningyuan, J., Owens, C., Sommer, A., Schuster, D. & Simon, J. Time- and site-resolved dynamics in a topological circuit. Phys. Rev. X 5, 021031 (2015).
Albert, V. V., Glazman, L. I. & Jiang, L. Topological properties of linear circuit lattices. Phys. Rev. Lett. 114, 173902 (2015).
Imhof, S. et al. Topolectrical-circuit realization of topological corner modes. Nat. Phys. 14, 925 (2018).
Hofmann, T., Helbig, T., Lee, C. H., Greiter, M. & Thomale, R. Chiral voltage propagation and calibration in a topolectrical chern circuit. Phys. Rev. Lett. 122, 247702 (2019).
Lu, Y. et al. Probing the Berry curvature and Fermi arcs of a Weyl circuit. Phys. Rev. B 99, 020302 (2019).
Liu, Y. et al. Topological corner modes in a brick lattice with nonsymmorphic symmetry. Phys. Rev. B 102, 035142 (2020).
Li, L., Lee, C. H. & Gong, J. Emergence and full 3D-imaging of nodal boundary Seifert surfaces in 4D topological matter. Commun. Phys. 2, 1 (2019).
Lee, C. H. et al. Topolectrical circuits. Commun. Phys. 1, 39 (2018).
Helbig, T. et al. Band structure engineering and reconstruction in electric circuit networks. Phys. Rev. B 99, 161114 (2019).
Witten, E. Quantum field theory and the Jones polynomial. Commun. Math. Phys. 121, 351 (1989).
Nayak, C., Simon, S. H., Stern, A., Freedman, M. & Sarma, S. D. Non-Abelian anyons and topological quantum computation. Rev. Mod. Phys. 80, 1083 (2008).
Dennis, M. R., King, R. P., Jack, B., O'Holleran, K. & Padgett, M. J. Isolated optical vortex knots. Nat. Phys. 6, 118 (2010).
Ezawa, M. Topological semimetals carrying arbitrary Hopf numbers: Fermi surface topologies of a Hopf link, Solomon's knot, trefoil knot, and other linked nodal varieties. Phys. Rev. B 96, 041202 (2017).
Bi, R., Yan, Z., Lu, L. & Wang, Z. Nodal-knot semimetals. Phys. Rev. B 96, 201305 (2017).
Chang, P.-Y. & Yee, C.-H. Weyl-link semimetals. Phys. Rev. B 96, 081114 (2017).
Chen, W., Lu, H.-Z. & Hou, J.-M. Topological semimetals with a double-helix nodal link. Phys. Rev. B 96, 041102 (2017).
Yan, Z. et al. Nodal-link semimetals. Phys. Rev. B 96, 041103 (2017).
Yan, Q. et al. Experimental discovery of nodal chains. Nat. Phys. 14, 461 (2018).
Takahashi, Y., Kariyado, T. & Hatsugai, Y. Edge states of mechanical diamond and its topological origin. N. J. Phys. 19, 035003 (2017).
Luo, K. et al. Topological Nodal States in Circuit Lattice. Research 2018, 6793752 (2018).
Gao, W. et al. Experimental observation of photonic nodal line degeneracies in metacrystals. Nat. Commun. 9, 950 (2018).
Alexander, J. W. Topological invariants of knots and links. Trans. Am. Math. Soc. 30, 275 (1928).
Yang, C. N. & Ge, M.-L. Braid Group, Knot Theory, and Statistical Mechanics II (World Scientific, 1994).
Murasugi, K. Knot Theory and its Applications (Springer Science & Business Media, 2007).
Bode, B., Dennis, M. R., Foster, D. & King, R. P. Knotted fields and explicit fibrations for lemniscate knots. Proc. R. Soc. A 473, 20160829 (2017).
Bode, B. & Dennis, M. R. Constructing a polynomial whose nodal set is any prescribed knot or link. J. Knot Theory Ramif. 28, 1850082 (2019).
Lee, C. H. et al. Enhanced higher harmonic generation from nodal topology. Phys. Rev. B 102, 035138 (2020).
Tai, T. & Lee, C. H. Anisotropic non-linear optical response of nodal loop materials. Preprint at https://arxiv.org/abs/2006.16851 (2020).
Li, L., Lee, C. H. & Gong, J. Realistic floquet semimetal with exotic topological linkages between arbitrarily many nodal loops. Phys. Rev. Lett. 121, 036401 (2018).
Lee, C. H. et al. Tidal surface states as fingerprints of non-Hermitian nodal knot metals. Preprint at https://arxiv.org/abs/1812.02011 (2018b).
Sandia National Laboratories. Xyce Parallel Electronic Simulator: Version 6.8. https://xyce.sandia.gov/ (2018).
Ezawa, M. Non-Hermitian higher-order topological states in nonreciprocal and reciprocal systems with their electric-circuit realization. Phys. Rev. B 99, 201411 (2019a).
Ezawa, M. Electric circuits for non-Hermitian Chern insulators. Phys. Rev. B 100, 081401 (2019b).
Hofmann, T. et al. Reciprocal skin effect and its realization in a topolectrical circuit. Phys. Rev. Res. 2, 023265 (2020).
Luo, K., Feng, J., Zhao, Y. X. & Yu, R. Nodal manifolds bounded by exceptional points on non-Hermitian honeycomb lattices and electrical-circuit realizations. Preprint at https://arxiv.org/abs/1810.09231 (2018b).
He, L. & Vanderbilt, D. Exponential decay properties of Wannier functions and related quantities. Phys. Rev. Lett. 86, 5341 (2001).
Lee, C. H. & Ye, P. Free-fermion entanglement spectrum through Wannier interpolation. Phys. Rev. B 91, 085119 (2015).
Lee, C. H., Arovas, D. P. & Thomale, R. Band flatness optimization through complex analysis. Phys. Rev. B 93, 155155 (2016).
Lee, C. H., Claassen, M. & Thomale, R. Band structure engineering of ideal fractional Chern insulators. Phys. Rev. B 96, 165150 (2017).
The work in Würzburg is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Project-ID 258499086 - SFB 1170 and through the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter – ct.qmat Project-ID 39085490 - EXC 2147. T. Helbig was supported by a Ph.D. scholarship of the Studienstiftung des deutschen Volkes, Germany. X.Z. is supported by the National Natural Science Foundation of China (Grant No. 11874431), the National Key R&D Program of China (Grant No. 2018YFA0306800), and the Guangdong Science and Technology Innovation Youth Talent Program (Grant No. 2016TQ03X688). A.S., Y.S.A., and L.K.A. are supported by A*STAR-IRG (A1783c0011) and Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant (2018-T2-1-007). Open access funding provided by Projekt DEAL.
Department of Physics, National University of Singapore, Singapore, 117542, Singapore
Ching Hua Lee
Science, Mathematics and Technology, Singapore University of Technology and Design, Singapore, 487372, Singapore
Amanda Sutrisno, Yee Sin Ang & Lay Kee Ang
Institute for Theoretical Physics and Astrophysics, University of Würzburg, Am Hubland, Würzburg, D-97074, Germany
Tobias Hofmann, Tobias Helbig, Martin Greiter & Ronny Thomale
Department of Physics, The University of Chicago, Chicago, IL, 60637, USA
Yuhan Liu
School of Physics, Sun Yat-sen University, Guangzhou, 510275, China
Yuhan Liu & Xiao Zhang
C.H.L. conceptualized and initiated the project, designed the experiment and wrote most of the manuscript. T. Hofmann and T. Helbig performed the numerical simulations and provided circuit expertise. Y.L. and X.Z. provided support on the mathematical aspects. A.S. performed the experiment under the guidance of Y.S.A. L.K.A., M.G. and R.T. took on advisory roles and wrote parts of the manuscript. The manuscript reflects the contributions of all authors.
Correspondence to Ching Hua Lee, Yee Sin Ang, Xiao Zhang or Ronny Thomale.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Lee, C.H., Sutrisno, A., Hofmann, T. et al. Imaging nodal knots in momentum space through topolectrical circuits. Nat Commun 11, 4385 (2020). https://doi.org/10.1038/s41467-020-17716-1
Spectral theory for subnormal operators
by R. G. Lautzenheiser
Trans. Amer. Math. Soc. 255 (1979), 301-314
We give an example of a subnormal operator T such that ${\mathbb{C}} \setminus \sigma (T)$ has an infinite number of components, $\operatorname {int} (\sigma (T))$ has two components U and V, and T cannot be decomposed with respect to U and V. That is, it is impossible to write $T = {T_1} \oplus {T_2}$ with $\sigma ({T_1}) = \overline U$ and $\sigma ({T_2}) = \overline V$. This example shows that Sarason's decomposition theorem cannot be extended to the infinitely-connected case. We also use Mlak's generalization of Sarason's theorem to prove theorems on the existence of reducing subspaces. For example, if X is a spectral set for T and $K \subset X$, conditions are given which imply that T has a nontrivial reducing subspace $\mathcal {M}$ such that $\sigma (T|\mathcal {M}) \subset K$. In particular, we show that if T is a subnormal operator and if $\Gamma$ is a piecewise ${C^2}$ closed Jordan curve which intersects $\sigma (T)$ in a set of measure zero on $\Gamma$, then $T = {T_1} \oplus {T_2}$ with $\sigma ({T_1}) \subset \sigma (T) \cap \overline {\operatorname {ext} (\Gamma )}$ and $\sigma ({T_2}) \subset \sigma (T) \cap \overline {\operatorname {int} (\Gamma )}$.
P. R. Ahern and Donald Sarason, On some hypo-Dirichlet algebras of analytic functions, Amer. J. Math. 89 (1967), 932–941. MR 221286, DOI 10.2307/2373411
William Arveson, Subalgebras of $C^{\ast }$-algebras. II, Acta Math. 128 (1972), no. 3-4, 271–308. MR 394232, DOI 10.1007/BF02392166
S. K. Berberian, A note on operators whose spectrum is a spectral set, Acta Sci. Math. (Szeged) 27 (1966), 201–203. MR 203458
C. A. Berger, A strange dilation theorem, Notices Amer. Math. Soc. 12 (1965), 590; A brief note on the existence of spectral sets, preprint.
Errett Bishop, A minimal boundary for function algebras, Pacific J. Math. 9 (1959), 629–642. MR 109305, DOI 10.2140/pjm.1959.9.629
Andrew Browder, Introduction to function algebras, W. A. Benjamin, Inc., New York-Amsterdam, 1969. MR 0246125
Kevin F. Clancey, Examples of nonnormal seminormal operators whose spectra are not spectral sets, Proc. Amer. Math. Soc. 24 (1970), 797–800. MR 254643, DOI 10.1090/S0002-9939-1970-0254643-X
K. F. Clancey and C. R. Putnam, Normal parts of certain operators, J. Math. Soc. Japan 24 (1972), 198–203. MR 313843, DOI 10.2969/jmsj/02420198
A. M. Davie and B. K. Øksendal, Rational approximation on the union of sets, Proc. Amer. Math. Soc. 29 (1971), 581–584. MR 277725, DOI 10.1090/S0002-9939-1971-0277725-6
Peter L. Duren, Theory of $H^{p}$ spaces, Pure and Applied Mathematics, Vol. 38, Academic Press, New York-London, 1970. MR 0268655
C. Foiaş, Some applications of spectral sets. I, Harmonic-spectral measure, Acad. R. P. Romîne Stud. Cerc. Mat. 10 (1959), 365-401; English transl., Amer. Math. Soc. Transl. (2) 61 (1967), 25-62.
Theodore W. Gamelin, Uniform algebras, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1969. MR 0410387
T. W. Gamelin and John Garnett, Pointwise bounded approximation and Dirichlet algebras, J. Functional Analysis 8 (1971), 360–404. MR 0295085, DOI 10.1016/0022-1236(71)90002-4
A. Gleason, Function algebras, in Seminar on Analytic Functions, Vol. II, Institute for Advanced Study, Princeton, N.J., 1957, pp. 213-226.
Paul R. Halmos, A Hilbert space problem book, D. Van Nostrand Co., Inc., Princeton, N.J.-Toronto, Ont.-London, 1967. MR 0208368
Edwin Hewitt and Karl Stromberg, Real and abstract analysis, Graduate Texts in Mathematics, No. 25, Springer-Verlag, New York-Heidelberg, 1975. A modern treatment of the theory of functions of a real variable; Third printing. MR 0367121
Kenneth Hoffman, Banach spaces of analytic functions, Prentice-Hall Series in Modern Analysis, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1962. MR 0133008
R. G. Lautzenheiser, Spectral sets, reducing subspaces, and function algebras, Ph. D. Thesis, Indiana University, 1973.
Arnold Lebow, On von Neumann's theory of spectral sets, J. Math. Anal. Appl. 7 (1963), 64–90. MR 156220, DOI 10.1016/0022-247X(63)90078-7
W. Mlak, Decompositions and extensions of operator valued representations of function algebras, Acta Sci. Math. (Szeged) 30 (1969), 181–193. MR 285914
W. Mlak, Partitions of spectral sets, Ann. Polon. Math. 25 (1971/72), 273–280. MR 301515, DOI 10.4064/ap-25-3-273-280
C. R. Putnam, The spectra of subnormal operators, Proc. Amer. Math. Soc. 28 (1971), 473–477. MR 275215, DOI 10.1090/S0002-9939-1971-0275215-8
C. R. Putnam, Invariant subspaces of certain subnormal operators, J. Functional Analysis 17 (1974), 263–273. MR 0358394, DOI 10.1016/0022-1236(74)90040-8
C. R. Putnam, Generalized projections and reducible subnormal operators, Duke Math. J. 43 (1976), no. 1, 101–108. MR 394272, DOI 10.1215/S0012-7094-76-04310-6
C. R. Putnam, Peak sets and subnormal operators, Illinois J. Math. 21 (1977), no. 2, 388–394. MR 482339, DOI 10.1215/ijm/1256049423
Donald Sarason, On spectral sets having connected complement, Acta Sci. Math. (Szeged) 26 (1965), 289–299. MR 188797
G. L. Seever, Operator representations of uniform algebras. I, preprint.
Joseph G. Stampfli, A local spectral theory for operators. IV. Invariant subspaces, Indiana Univ. Math. J. 22 (1972/73), 159–167. MR 296734, DOI 10.1512/iumj.1972.22.22014
M. Tsuji, Potential theory in modern function theory, Maruzen Co. Ltd., Tokyo, 1959. MR 0114894
A. G. Vituškin, Analytic capacity of sets in problems of approximation theory, Uspehi Mat. Nauk 22 (1967), no. 6 (138), 141–199 (Russian). MR 0229838
Johann von Neumann, Eine Spektraltheorie für allgemeine Operatoren eines unitären Raumes, Math. Nachr. 4 (1951), 258–281 (German). MR 43386, DOI 10.1002/mana.3210040124
Bhushan L. Wadhwa, A hyponormal operator whose spectrum is not a spectral set, Proc. Amer. Math. Soc. 38 (1973), 83–85. MR 310690, DOI 10.1090/S0002-9939-1973-0310690-3
John Wermer, Dirichlet algebras, Duke Math. J. 27 (1960), 373–381. MR 121671
Donald R. Wilken, Lebesgue measure of parts for $R(X)$, Proc. Amer. Math. Soc. 18 (1967), 508–512. MR 216297, DOI 10.1090/S0002-9939-1967-0216297-8
Donald R. Wilken, The support of representing measures for $R(X)$, Pacific J. Math. 26 (1968), 621–626. MR 236713, DOI 10.2140/pjm.1968.26.621
James P. Williams, Minimal spectral sets of compact operators, Acta Sci. Math. (Szeged) 28 (1967), 93–106. MR 217636
Lawrence Zalcmann, Analytic capacity and rational approximation, Lecture Notes in Mathematics, No. 50, Springer-Verlag, Berlin-New York, 1968. MR 0227434, DOI 10.1007/BFb0070657
M. S. Mel′nikov, The Gleason parts of the algebra $R(X)$, Mat. Sb. (N.S.) 101(143) (1976), no. 2, 293–300 (Russian). MR 0425619
Scott W. Brown, Some invariant subspaces for subnormal operators, Integral Equations Operator Theory 1 (1978), no. 3, 310–333. MR 511974, DOI 10.1007/BF01682842
Jim Agler, An invariant subspace theorem, Bull. Amer. Math. Soc. (N.S.) 1 (1979), no. 2, 425–427. MR 520079, DOI 10.1090/S0273-0979-1979-14627-5
MSC: Primary 47B20; Secondary 47A15
MathSciNet review: 542882 | CommonCrawl |
Co@Carbon and Co3O4@Carbon nanocomposites derived from a single MOF for supercapacitors
Engao Dai, Jiao Xu, Junjie Qiu, Shucheng Liu, Ping Chen & Yi Liu
Scientific Reports volume 7, Article number: 12588 (2017)
Developing a composite electrode containing both carbon and a transition metal/metal oxide as the supercapacitor electrode can combine the merits and mitigate the shortcomings of both components. Herein, we report a simple strategy to prepare hybrid nanostructures of Co@Carbon and Co3O4@Carbon by pyrolysis of a single MOF precursor. Co-based MOF (Co-BDC) nanosheets with a regular parallelogram-slice morphology have been prepared by a bottom-up synthesis strategy. One-step pyrolysis of Co-BDC produces a porous carbon layer incorporating well-dispersed Co and Co3O4 nanoparticles. The as-prepared cobalt-carbon composites exhibit a thin-layer morphology and a large specific surface area with hierarchical porosity. These features significantly improve the ion-accessible surface area for charge storage and shorten the ion transport length in the thin dimension, thus contributing to a high specific capacitance. Improved capacitance performance was successfully realized for the asymmetric supercapacitors (ASCs) (Co@Carbon//Co3O4@Carbon), better than that of the symmetric supercapacitors (SSCs) based on Co@Carbon and Co3O4@Carbon materials (i.e., Co@Carbon//Co@Carbon and Co3O4@Carbon//Co3O4@Carbon). The working voltage of the ASCs can be extended to 1.5 V, and they show remarkably high power capability in aqueous electrolyte. This work provides a controllable strategy for nanostructured carbon-metal and carbon-metal oxide composite electrodes from a single precursor.
The carbon-based electrical double-layer supercapacitors (EDLCs) have excellent cyclic stability and long service lifetime, since the electrode undergoes no chemical change during the charge/discharge processes1,2. However, the energy density of current commercial carbon-based EDLCs is much lower than that of an electrochemical battery. Such a low energy density cannot fulfill the needs of energy storage devices for hybrid electric vehicles, wind farms and solar power plants. Recently, 2-D carbon nanostructures, which combine high surface area, high electronic conductivity and high mechanical strength, have become very attractive for flexible energy-storage devices and for improving the charge/discharge reaction kinetics of supercapacitor electrodes3,4,5,6,7. Graphene, graphene oxide (GO) and reduced graphene oxide (rGO) are representative 2-D carbon nanosheets. Graphene and GO exhibit great mechanical strength, excellent electronic conductivity as well as high specific surface area, which makes them promising candidates for supercapacitor electrodes. However, the synthesis of graphitically ordered mesoporous carbon remains a challenge. Heat treatment at temperatures above 2000 °C is the traditional route, yet the process is rather energy costly. Employing inorganic additives for catalytic graphitization of amorphous carbon at relatively low temperature (<1000 °C) is an alternative to the high-temperature method. Initial work has reported that metal salts containing Fe, Co, Ni, Ti, W, and Mn exhibit catalytic graphitization behavior at low temperature8,9,10,11. When hybridized with other metal or metal oxide nanoparticles (NPs) to form carbon/metal/metal oxide composites, multi-functionality is achieved through the combination of carbon and metal or metal oxide12,13,14,15,16,17. In these composites, the carbon nanostructure serves as the physical support of the metal/metal oxide particles, and its structure determines the architecture of the whole composite. The high electronic conductivity of carbon nanostructures benefits the rate capability and power density at large charge/discharge currents. The electro-activities of metal/metal oxide NPs contribute to the high specific capacitance and high energy density of the composite electrodes. A synergistic effect can thus be expected, and the materials cost can be reduced.
In most cases, metal cations deposited on carbonaceous materials are reduced chemically or physically to form carbon/metal/metal oxide nanocomposites. In these processes, heterogeneous dispersion and agglomeration of the particles are problematic. As a relatively new class of porous materials, metal-organic frameworks (MOFs), usually constructed from metal ions (or clusters) and organic ligands with diversified and tailorable structures, have been demonstrated to be suitable templates/precursors affording uniform metal (oxide) NPs distributed throughout porous carbon via pyrolysis, in which high porosity and long-range structural ordering can be partially preserved18,19,20,21,22,23,24,25. The agglomeration of metal (oxide) clusters is limited by the presence of the surrounding polymers, which are stabilized and slowly carbonized to form metal (oxide)-core/carbon-shell architectures25,26,27.
In the present work, we have constructed an elegant nanostructure in which cobalt NPs are overcoated by ultrathin carbon layers, using a Co-containing MOF (Co-BDC) as both self-sacrificing template and precursor. Bottom-up synthesis is an interesting strategy to produce highly crystalline and intact MOF nanosheets9. The bottom-up synthesis of MOF nanosheets and their conversion into cobalt-carbon nanocomposites is schematically illustrated in Fig. 1. A topmost solution of cobalt(II) acetate tetrahydrate and a bottom solution of 1,4-benzenedicarboxylic acid (BDCA) are separated by an intermediate solvent layer (a mixture of N,N-dimethylformamide and acetonitrile). Under static conditions, diffusion of Co2+ cations and BDCA linker precursors into this space segment causes a slow supply of the MOF nutrients to an intermediate region where the growth of MOF crystals occurs locally. Finally, the Co-BDC nanosheets were calcined in an inert atmosphere at 700 °C to obtain Co@Carbon, or in an oxygen atmosphere at 400 °C to obtain Co3O4@Carbon. The two designed hybrid electrodes show a special core-shell structure and porous features, which give rise to a high specific capacitance and good rate capability.
Synthetic scheme for the preparation of Co-BDC nanosheets and cobalt-carbon composites.
Typical XRD patterns of Co@Carbon and Co3O4@Carbon are displayed in Fig. 2(a). The XRD patterns of the samples agree well with the cubic Co (JCPDS: 15–0806) and Co3O4 (JCPDS: 42–1467) phases, respectively, together with a broad hump from amorphous carbon. No peaks from impurity phases can be detected, demonstrating the high purity of the hierarchical Co@Carbon and Co3O4@Carbon hybrids.
(a) XRD, (b) Raman spectra, (c) N2 adsorption and desorption isotherms, and (d) TG of samples.
The Raman spectrum of Co@Carbon, shown in Fig. 2(b), displays the well-documented D band at 1328 cm−1 and G band at 1591 cm−1, further confirming the existence of carbon in the hybrid. The D band is assigned to typical disorder, while the G band is characteristic of graphitic carbon28. The intensity ratio between the D and G bands (ID/IG ~1.0) indicates that the carbon in the hierarchical Co@Carbon material is largely amorphous. This disorder suggests that the as-obtained composites have a large amount of void space to accommodate volume expansion and provide numerous electro-active sites for redox reactions. The emergence of the 2D band at 2645 cm−1 indicates the existence of ultrathin carbon layers in the Co@Carbon hybrids29,30. For Co3O4@Carbon, the F2g, Eg and A1g peaks assigned to Co3O4 were observed in the Raman spectrum (inset of Fig. 2(b))31.
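As an aside for readers reproducing this analysis, the following minimal sketch (Python; the function name and synthetic arrays are hypothetical, not from this work) estimates ID/IG from peak heights near the nominal band positions. A rigorous treatment would fit Lorentzian or Breit-Wigner-Fano line shapes instead of taking raw maxima.

```python
import numpy as np

def intensity_ratio(shift_cm, counts, d_center=1328.0, g_center=1591.0, window=50.0):
    """Estimate the Raman I_D/I_G ratio from peak heights.

    shift_cm : 1-D array of Raman shifts (cm^-1)
    counts   : 1-D array of measured intensities
    The D and G intensities are taken as the maximum counts within
    +/- `window` cm^-1 of the nominal band positions.
    """
    def peak_height(center):
        mask = np.abs(shift_cm - center) < window
        return counts[mask].max()
    return peak_height(d_center) / peak_height(g_center)

# Synthetic spectrum: two Gaussian bands of equal height, so I_D/I_G ~ 1,
# i.e. the signature of strongly disordered (amorphous) carbon.
x = np.linspace(1000, 2000, 2000)
y = np.exp(-((x - 1328) / 40) ** 2) + np.exp(-((x - 1591) / 30) ** 2)
print(round(intensity_ratio(x, y), 2))  # ~1.0
```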
Nitrogen adsorption and desorption measurements of the as-synthesized Co@Carbon and Co3O4@Carbon were performed to obtain more information on the porous structure (Fig. 2(c)). The type IV isotherms with H3 hysteresis loops indicate the presence of a mesoporous structure. The BET surface areas of Co@Carbon and Co3O4@Carbon are about 109.6 m2 g−1 and 23.6 m2 g−1, respectively. The inset of Fig. 2(c) shows the pore-size distributions calculated by the Barrett-Joyner-Halenda (BJH) method, which give average pore sizes of about 9.6 nm for Co@Carbon and 34.8 nm for Co3O4@Carbon. We believe that the mesoporous structure is critical for facilitating the transfer of electrons and ions at the electrode/electrolyte interface and offers many active sites for fast electrochemical reactions, which may lead to a great enhancement of the electrochemical properties.
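For context on how such surface areas are derived, the sketch below (Python; synthetic data, since the raw isotherms are not tabulated here) fits the standard linearized BET equation over the conventional 0.05–0.30 relative-pressure range; the N2 cross-sectional area of 0.162 nm2 is the usual textbook assumption. A monolayer capacity near 25 cm3 (STP) g−1 corresponds to roughly the 109.6 m2 g−1 reported for Co@Carbon.

```python
import numpy as np

N_A = 6.022e23          # molecules per mole
SIGMA_N2 = 0.162e-18    # m^2, cross-sectional area of an adsorbed N2 molecule
V_MOLAR = 22414.0       # cm^3 (STP) per mole of gas

def bet_surface_area(p_rel, v_ads_cc_g):
    """BET surface area (m^2/g) from an N2 adsorption isotherm.

    p_rel      : relative pressures p/p0 (use the 0.05-0.30 range)
    v_ads_cc_g : adsorbed volume at STP per gram of sample (cm^3/g)
    """
    y = 1.0 / (v_ads_cc_g * (1.0 / p_rel - 1.0))      # BET transform
    slope, intercept = np.polyfit(p_rel, y, 1)        # linear BET plot
    v_m = 1.0 / (slope + intercept)                   # monolayer capacity, cm^3/g
    return v_m / V_MOLAR * N_A * SIGMA_N2             # m^2/g

# Synthetic illustration built from the ideal BET isotherm (not real data)
p = np.linspace(0.05, 0.30, 10)
vm_true, c = 25.0, 100.0
v = vm_true * c * p / ((1 - p) * (1 + (c - 1) * p))
print(round(bet_surface_area(p, v), 1))               # ~109 m^2/g for vm ~ 25 cm^3/g
```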
Transmission electron microscopy (TEM) images of the Co-BDC precursor are shown in Fig. 3(a–c) at different magnifications. The Co-BDC exhibits the morphology of regular parallelogram-shaped slices with widths and lengths up to several micrometers. Scanning electron microscopy (SEM) was employed to further investigate the as-collected samples, as shown in Fig. 3(d–f). The corresponding magnified SEM images indicate that the Co-BDC exhibits a uniform sheet-shaped morphology with layer thicknesses down to 100 nm.
(a–c) TEM and (d–f) SEM of Co-BDC nanosheets.
TEM was also used to collect more information on Co@Carbon, as displayed in Fig. 4(a–c). The images show that Co nanoparticles of around 10 to 30 nm in diameter are monodispersed within the thin carbon layer. As shown by SEM (Fig. 4(d–f)), the Co@Carbon is composed of thin carbon nanosheets about 150 nm thick, with a coarse surface decorated with Co nanoparticles. Heat treatment of Co-MOFs in O2 resulted in nanometer-sized spherical cobalt-carbon composites, Co3O4@Carbon (Fig. 4(g–k)). HRTEM characterization disclosed that these small particles are carbon-shell/cobalt-core hybrids (Fig. 5(c)), with Co3O4 particle diameters ranging from 10 to 30 nm. The EDS spectra of the materials are shown in Fig. 4(f,l), and the elemental compositions determined by EDS are presented in Tables S1 and S2. High-resolution TEM (HRTEM) analysis (Fig. 5) showed that the cobalt particles are single-crystalline and that carbon layers cover the particle surfaces. The HRTEM images reveal fringe spacings of around 0.20 nm and 0.46 nm, corresponding to the (111) planes of cubic Co and Co3O4, respectively.
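These assignments can be sanity-checked with the cubic interplanar-spacing relation d = a/√(h2 + k2 + l2). The sketch below (Python) uses textbook lattice parameters commonly associated with the cited JCPDS cards (a ≈ 0.3545 nm for fcc Co; a ≈ 0.8084 nm for spinel Co3O4); these constants are assumptions of the sketch, not values measured in this work.

```python
import math

def d_spacing(a_nm, h, k, l):
    """Interplanar spacing of a cubic lattice: d = a / sqrt(h^2 + k^2 + l^2)."""
    return a_nm / math.sqrt(h * h + k * k + l * l)

# Assumed textbook lattice parameters for the two cubic phases
print(round(d_spacing(0.3545, 1, 1, 1), 3))  # ~0.205 nm -> the 0.20 nm Co fringe
print(round(d_spacing(0.8084, 1, 1, 1), 3))  # ~0.467 nm -> the 0.46 nm Co3O4 fringe
```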
(a–c) TEM, (d,e) SEM, and (f) EDS of Co@Carbon; (g–i) TEM, (j,k) SEM, and (l) EDS of Co3O4@carbon.
HRTEM of (a,b) Co@Carbon and (c) Co3O4@Carbon.
The TEM and SEM images above bring out several notable features of both cobalt-carbon composites: (a) a typical few-layer carbon character with a pronounced layered nature, (b) uniform cobalt-nanoparticle loading across the extended sheets, and (c) hierarchical porosity ranging from uniformly dispersed nanometer-scale pores to scattered mesopores. According to thermogravimetric analysis (TGA) in a N2 atmosphere, Co-MOFs start to decompose at 700 °C (Fig. 2(d)). One can speculate that in situ carbonization of Co-MOFs leads to the formation of cobalt clusters surrounded by the ligand framework. The agglomeration of cobalt clusters is limited by the surrounding ligands, which are stabilized and slowly carbonized to form the cobalt-core/carbon-shell architecture. We believe that this in situ process promotes strong interfacial interaction between the cobalt and carbon in the samples. The intimate combination of cobalt with electronically conducting carbon thus allows rapid and efficient charge transport, which can greatly improve the electrochemical properties.
The X-ray photoelectron spectroscopy (XPS) spectra of the as-obtained cobalt-carbon nanocomposites are presented in Fig. 6. The peak centered at 284.6 eV, observed in both samples, corresponds to the characteristic C 1s peak. For Co@Carbon, the peak located at 778.5 eV is assigned to Co 2p3/2, indicating the presence of zero-valent Co32. The presence of zero-valent Co together with carbon confirms the successful formation of the Co@Carbon material. XPS revealed a Co/C atomic ratio of about 6% in the Co@Carbon hybrid. For Co3O4@Carbon, the binding-energy peak at 779.8 eV can be attributed to mixed Co2+ and Co3+ states33,34, consistent with the formation of Co3O4@Carbon. Moreover, the Co/C atomic ratio determined by XPS is higher than 50%, probably because most of the carbon is burned off during sintering in the oxygen atmosphere.
XPS of cobalt-carbon composites.
The electrochemical performance of Co@Carbon and Co3O4@Carbon was evaluated in a three-electrode configuration with a 6 M KOH electrolyte. Representative cyclic voltammetry (CV) curves of Co@Carbon are shown in Fig. 7(a) at scan rates varying from 10 to 100 mV s−1. These curves show a combination of pseudocapacitive and electric double-layer capacitive behavior between −0.3 and 0.2 V vs. the Ag/AgCl electrode, indicating good capacitive properties. An obvious redox peak was observed, attributed to the reversible change of the cobalt oxidation state between Co2+ and Co3+. With increasing scan rate, the shape of the CV curves did not change much and the redox peaks remained, indicating the high rate capability of the Co@Carbon electrode. The Co@Carbon electrode was further tested by galvanostatic charge/discharge measurements (Fig. 7(b)). The charge/discharge curves at different current densities exhibited an almost symmetric shape with a small voltage drop, indicating that the Co@Carbon electrode has a small internal resistance and excellent capacitive properties. The capacitance values are estimated to be 109, 100, 90, 81, 73 and 52 F g−1 at current densities of 0.25, 0.5, 1, 2, 3 and 7 A g−1, respectively. The electrochemical performance of Co@Carbon was further investigated by electrochemical impedance spectroscopy (EIS) from 105 to 10−2 Hz, and the Nyquist plots are drawn in Fig. 7(d). In the high-frequency region, the intercept on the real axis reflects the electrolyte resistance (Rs), which is about 2.0 Ω. The semicircle in the middle-frequency region is related to the charge-transfer resistance (Rct); its value is low (~4.0 Ω) owing to the nanosheet structure with its high mesoporous surface area, which allows the electrolyte to easily wet the electrode material. The sloped line in the low-frequency region is related to the Warburg impedance of ion diffusion/transport to the electrode surface; the steeper the line, the faster the ion diffusion. Thus, Co@Carbon displays favorable ion diffusion and excellent supercapacitor performance, especially at high current density.
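The Rs/Rct read-off described above can be expressed as a minimal sketch (Python; synthetic Nyquist data and hypothetical function names). It takes Rs as the high-frequency real-axis intercept and Rct as the width of the semicircle; a proper analysis would instead fit an equivalent circuit such as a Randles cell.

```python
import numpy as np

def rs_rct_from_nyquist(z_real, z_imag_neg):
    """Rough Rs/Rct read-off from Nyquist data (ohms).

    z_real     : Re(Z), ordered from high to low frequency
    z_imag_neg : -Im(Z), same ordering
    Rs is the high-frequency real-axis intercept; Rct is the semicircle
    width, i.e. Re(Z) at the local minimum of -Im(Z) past the apex, minus Rs.
    """
    rs = z_real[0]
    apex = int(np.argmax(z_imag_neg[: len(z_imag_neg) // 2]))
    end = apex + int(np.argmin(z_imag_neg[apex:]))
    return rs, z_real[end] - rs

# Synthetic data: Rs = 2 ohm, Rct = 4 ohm semicircle plus a 45-degree tail
theta = np.linspace(np.pi, 0.0, 50)
z_re = 2.0 + 2.0 * (1.0 + np.cos(theta))              # sweeps 2 -> 6 ohm
z_im = 2.0 * np.sin(theta)                            # semicircle, radius Rct/2
z_re = np.append(z_re, 6.0 + np.linspace(0.1, 2.0, 20))
z_im = np.append(z_im, np.linspace(0.1, 2.0, 20))     # Warburg-like tail
print(rs_rct_from_nyquist(z_re, z_im))                # -> approximately (2.0, 4.0)
```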
(a) Cyclic voltammetry curves, (b) galvanostatic charge-discharge curves, (c) specific capacitance at different current densities, and (d) Nyquist plots of the Co@Carbon electrode.
Representative CV curves of Co3O4@Carbon at different scan rates are shown in Fig. 8(a). The CV shows typical pseudocapacitive behavior with an obvious redox peak, attributed to the reversible conversion between Co2+ and Co3+. The discharge behavior of Co3O4@Carbon was examined by GCD in the potential range from 0 to 0.3 V at current densities of 1~10 A g−1. The GCD curves (Fig. 8(b)) of Co3O4@Carbon exhibited a more pronounced voltage plateau due to the redox reaction. The specific capacitance was calculated to be 261, 171, 148, 128 and 50 F g−1 at current densities of 1, 2, 3, 4 and 10 A g−1, respectively. The capacitance of Co3O4@Carbon at 1 A g−1 is about 2.4 times that of Co@Carbon (261 vs. 109 F g−1). The Faradaic capacitance of Co3O4 at around 0.19 V, associated with the conversion from Co3+ to Co2+, is mainly responsible for the high capacitance. Moreover, the Rct value of Co3O4@Carbon (∼2 Ω) is lower than that of Co@Carbon (∼4 Ω). These results indicate that the thin carbon coating on Co3O4 plays an essential role in the enhanced performance of the composite electrode, which can be attributed to the unique structure of Co3O4@Carbon and the synergetic effect between carbon and Co3O4. In this structure, the thin carbon films coating the Co3O4 interconnect with each other to form conductive networks, promoting electron transfer. Moreover, the pores between the Co3O4@Carbon nanostructures can serve as electrolyte-ion reservoirs11, which not only ensure close contact between the electrode material and the electrolyte but also provide a stable supply of electrolyte ions, leading to rapid ion transport. Furthermore, the incorporation of carbon leads to a higher surface area of the Co3O4@Carbon composite and a smaller Co3O4 particle size, which not only shortens the ion-diffusion length but also provides more active surface area for the redox reaction.
(a) Cyclic voltammetry curves, (b) galvanostatic charge-discharge curves, (c) specific capacitance at different current densities, and (d) Nyquist plots of the Co3O4@Carbon electrode.
To explore the electrochemical performance of the materials toward practical application, asymmetric supercapacitors (ASCs) were fabricated using the as-synthesized Co3O4@Carbon as the positive electrode and Co@Carbon as the negative electrode in 6 M KOH. The mass ratio of the negative electrode to the positive electrode was set according to charge-balance theory. The CV curves of the ASCs (Fig. 9(a)) remain symmetric in shape even when the potential window is extended to high working voltages, indicating ideal capacitive properties with good reversibility. The working voltage of the asymmetric supercapacitors can thus be extended to 1.5 V, indicating the potential of the assembled system for practical application. The charge-discharge performance of the supercapacitors is also demonstrated by the galvanostatic charge-discharge curves in Fig. 9(b), which show nearly symmetric, linear charge and discharge characteristics with no obvious internal voltage drop at different current densities, suggesting highly reversible charge-discharge behavior. The specific capacitance was calculated from the GCD curves based on the total mass loading of active material on the two electrodes. The specific capacitance at a current density of 1 A g−1 was 17.9 F g−1, which decreased only to 8.9 F g−1 even at the high current density of 4 A g−1. The Nyquist plots of the ASCs (Fig. 9(c)) show a low Rct (~4 Ω) and a steep slope in the low-frequency range, indicating fast ion diffusion. To evaluate the cycling behavior of the as-fabricated ASCs, 1000 charge-discharge cycles were run at a current density of 10 A g−1. As shown in Fig. 10, the specific capacitance remained close to its initial value over the 1000 cycles, implying that the composite is a stable electrode material under cycling. Power density and energy density are important parameters for characterizing the performance of supercapacitor devices. Figure 9(d) gives the Ragone plot of energy and power densities for the fabricated asymmetric supercapacitors, which achieve a high energy density of 8.8 Wh kg−1 at a power density of 375 W kg−1. The variation of specific energy and specific power with applied current density is summarized in Table 1; the values obtained for the ASCs show improved performance.
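For reference, charge balance q+ = q− with q = C·m·ΔV fixes the electrode mass ratio as m+/m− = (C−·ΔV−)/(C+·ΔV+). The sketch below (Python) illustrates this with the three-electrode values reported above at low current density; it is only an illustration, since the capacitance values the authors actually used for balancing are not stated.

```python
def mass_ratio_pos_to_neg(c_pos, dv_pos, c_neg, dv_neg):
    """Charge balance q+ = q- with q = C * m * dV gives
    m+/m- = (C- * dV-) / (C+ * dV+)."""
    return (c_neg * dv_neg) / (c_pos * dv_pos)

# Illustration with the three-electrode values reported at low current density:
# positive Co3O4@Carbon: ~261 F/g over 0 to 0.3 V (dV = 0.3 V)
# negative Co@Carbon:    ~109 F/g over -0.3 to 0.2 V (dV = 0.5 V)
print(round(mass_ratio_pos_to_neg(261.0, 0.3, 109.0, 0.5), 2))  # ~0.70
```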
(a) Cyclic voltammetry curves at a scan rate of 100 mV s−1, (b) galvanostatic charge-discharge curves, (c) Nyquist plots, and (d) Ragone plots of the ASCs (Co@Carbon//Co3O4@Carbon), SSCs1 (Co@Carbon//Co@Carbon) and SSCs2 (Co3O4@Carbon//Co3O4@Carbon).
Cycle performance of Co@Carbon//Co3O4@Carbon ASCs at a current density of 10 A g−1.
Table 1 Various performance parameters for Co@Carbon//Co3O4@Carbon ASCs.
To further demonstrate the advantage of the ASCs, we also fabricated two types of symmetric supercapacitors (SSCs): Co@Carbon//Co@Carbon and Co3O4@Carbon//Co3O4@Carbon. The SSC based on Co@Carbon//Co@Carbon exhibits a maximal specific energy of only 1.24 Wh kg−1 at a specific power of 500 W kg−1, while the SSC based on Co3O4@Carbon//Co3O4@Carbon exhibits a maximal specific energy of 0.97 Wh kg−1 at the same specific power. The variation of specific energy and specific power of the two SSCs with applied current density is summarized in Tables S3 and S4. The specific energy obtained for the ASCs is almost seven times that of the SSCs. We attribute this to the specially designed ASCs combining the advantages of the two materials to provide both high specific energy and high specific power: the pseudocapacitive Co3O4 helps attain higher current sweeps, while the porous carbon is mainly responsible for providing a stable, wide potential window, which makes a major contribution to the high performance of the ASCs.
In summary, porous metal/carbon and metal oxide/carbon materials with well-controlled pore structures have been successfully prepared by a single MOF-templating approach. We present a bottom-up synthesis strategy for preparing Co-MOF nanosheets with micrometre lateral dimensions and nanometre thickness. Cobalt-carbon hybrids (Co@Carbon and Co3O4@Carbon) with high surface areas were obtained by one-step heat treatment of the Co-MOFs. Improved capacitance performance was realized for ASCs utilizing the as-synthesized Co3O4@Carbon as the positive electrode and Co@Carbon as the negative electrode in aqueous electrolyte. These specially designed ASCs combine the advantages of the two materials to provide high specific energy and specific power. The results presented here are of technological interest, as these carbon/metal (oxide) composites are promising candidates for supercapacitors.
Material preparation
Cobalt 1,4-benzenedicarboxylate (Co-BDC) was synthesized using Co2+ as the metal cation and 1,4-benzenedicarboxylic acid (H2BDC) as the organic ligand. A linker solution composed of 10 mg of H2BDC dissolved in a mixture of 2 mL of N,N-dimethylformamide (DMF) and 1 mL of CH3CN was employed as the bottom liquid layer; a mixture of 1 mL of DMF and 1 mL of CH3CN was the spacer layer; and a solution of 10 mg of Co(CH3COO)2·4H2O in 1 mL of DMF and 2 mL of CH3CN was the top, metal-containing layer. Synthesis took place at 35 °C for 24 hours under static conditions. The solid product was recovered by centrifugation, washed three times with DMF (1 mL each step), and then dried at 80 °C for 2 h in a thermostat box. For the synthesis of Co@Carbon nanosheets, Co-BDC was placed in a tube furnace under N2 gas flow at 700 °C for 2 h with a heating rate of 3 °C/min. For the synthesis of Co3O4@Carbon, Co-BDC was placed in a tube furnace under O2 gas flow at 400 °C for 1 h with a heating rate of 3 °C/min.
The crystal phase of all samples was characterized by powder X-ray diffraction (PANalytical X'Pert Powder diffractometer) with Cu Kα radiation. The morphology and microstructure of the synthesized materials were characterized using a transmission electron microscope (TEM, Tecnai G2 F20). Nitrogen adsorption/desorption isotherms were measured at 77 K using a surface-area and pore-size analyzer (3H-2000PS4). Raman spectra were recorded using a Renishaw inVia instrument with a 633 nm laser.
Electrochemical measurements were carried out in a three-electrode system with 6 M KOH aqueous solution as the electrolyte; a Pt plate acted as the counter electrode and an Ag/AgCl electrode served as the reference electrode. The working electrode consisted of active material, conductive graphite and PTFE in a mass ratio of 8:1:1. The resulting paste was pressed onto Ni foam and vacuum-dried for 10 h.
Cyclic voltammetry and galvanostatic charge-discharge investigations were carried out using a CHI660E electrochemical workstation (ChenHua, Shanghai). The specific capacitance was calculated from the galvanostatic charge-discharge curves using the following equation:
$$C=\frac{I\times \Delta t}{m\times \Delta V}$$
where I (A) is the discharge current, Δt (s) is the discharge time, ΔV (V) is the voltage window, and m (g) is the mass of active electrode material.
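A minimal sketch (Python) of this calculation; the input numbers are hypothetical, chosen only so that the result lands near the 109 F g−1 reported for Co@Carbon at 0.25 A g−1.

```python
def specific_capacitance_f_g(i_amps, dt_s, mass_g, dv_volts):
    """C = I * dt / (m * dV), in F/g, from a galvanostatic discharge."""
    return i_amps * dt_s / (mass_g * dv_volts)

# Hypothetical discharge: 1 mA for 218 s over a 0.5 V window with 4 mg
# of active material (i.e. 0.25 A/g) gives ~109 F/g.
print(specific_capacitance_f_g(1e-3, 218.0, 4e-3, 0.5))  # 109.0
```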
For the fabrication of symmetric supercapacitors, two working electrodes prepared as above served as the positive and negative electrodes. The two electrodes and a separator were combined with 6 M KOH as the electrolyte to assemble the full cell. The energy density E (Wh kg−1) was calculated using the following equation:
$$E=\frac{0.5\,CV^{2}}{3.6}$$
where C (F g−1) is the specific capacitance of the cell and V (V) is the potential window. The power density P (W kg−1) was calculated by the following equation:
$$P=\frac{3600\,E}{t}$$
where E (Wh kg−1) is the energy density and t (s) is the elapsed time during the discharge period.
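Both conversions hide unit factors (J g−1 to Wh kg−1, hours to seconds), so a short sketch (Python) may help; the inputs are purely illustrative and are not the device values of Table 1.

```python
def energy_density_wh_kg(c_f_g, v_volts):
    """E = 0.5 * C * V^2 / 3.6, converting J/g to Wh/kg."""
    return 0.5 * c_f_g * v_volts ** 2 / 3.6

def power_density_w_kg(e_wh_kg, t_s):
    """P = 3600 * E / t, converting Wh/kg and seconds to W/kg."""
    return 3600.0 * e_wh_kg / t_s

# Illustrative cell: C = 20 F/g over a 1.5 V window, discharging in 60 s
e = energy_density_wh_kg(20.0, 1.5)   # 6.25 Wh/kg
p = power_density_w_kg(e, 60.0)       # 375 W/kg
print(e, p)
```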
Bose, S. et al. Carbon-based nanostructured materials and their composites as supercapacitor electrodes. J. Mater. Chem. 22, 767–784 (2012).
Zhang, L. L. & Zhao, X. S. Carbon-based materials as supercapacitor electrodes. Chem. Soc. Rev. 38, 2520–2531 (2009).
Zhu, Y. et al. Carbon-based supercapacitors produced by activation of graphene. Science 332, 1537–1541 (2011).
Xia, X. et al. Graphene sheet/porous NiO hybrid film for supercapacitor applications. Chem. Eur. J. 17, 10898–10905 (2011).
Wu, M. S., Lin, Y. P., Lin, C. H. & Lee, J. T. Formation of nano-scaled crevices and spacers in NiO-attached graphene oxide nanosheets for supercapacitors. J. Mater. Chem. 22, 2442–2448 (2012).
Zhao, Y. Q. et al. MnO2/graphene/nickel foam composite as high performance supercapacitor electrode via a facile electrochemical deposition strategy. Mater. Lett. 76, 127–130 (2012).
Chen, S., Zhu, J., Wu, X., Han, Q. & Wang, X. Graphene oxide-MnO2 nanocomposites for supercapacitors. ACS Nano 4, 2822–2830 (2010).
Kiang, C. H., Goddard, W. A., Beyers, R., Salem, J. R. & Bethune, D. S. Catalytic effects of heavy metals on the growth of carbon nanotubes and nanoparticles. J. Phys. Chem. Solids 57, 35–39 (1996).
Rodenas, T. et al. Metal-organic framework nanosheets in polymer composite materials for gas separation. Nat. Mater. 14, 48–55 (2015).
Zhi, M., Xiang, C., Li, J., Li, M. & Wu, N. Nanostructured carbon–metal oxide composite electrodes for supercapacitors: a review. Nanoscale 5, 72–88 (2013).
Cao, X. et al. Reduced graphene oxide-wrapped MoO3 composites prepared by using metal-organic frameworks as precursor for all-solid-state flexible supercapacitors. Adv. Mater. 27, 4695–4701 (2015).
Lu, X. P. et al. Macroporous carbon/nitrogen-doped carbon nanotubes/polyaniline nanocomposites and their application in supercapacitors. Electrochimica Acta 189, 158–165 (2016).
Song, Y. H. et al. A green strategy to prepare metal oxide superstructure from metal-organic frameworks. Scientific Reports 5, 8401–8408 (2015).
Song, Y. H., Li, X., Sun, L. L. & Wang, L. Metal/metal oxide nanostructures derived from metal-organic frameworks. RSC Advances 5, 7267–7279 (2015).
Wang, L. et al. Nitrogen-doped porous carbon/Co3O4 nanocomposites as anode materials for lithium-ion batteries. ACS Appl. Mater. Interfaces 6, 7117–7125 (2014).
Zhang, F., Hao, L., Zhang, L. J. & Zhang, X. G. Solid-state thermolysis preparation of Co3O4 nano/micro superstructures from metal-organic framework for supercapacitors. Int. J. Electrochem. Sci. 6, 2943–2954 (2011).
Salunkhe, R. R. et al. Asymmetric supercapacitors using 3D nanoporous carbon and cobalt oxide electrodes synthesized from a single metal-organic framework. ACS Nano 9, 6288–6296 (2015).
Li, H., Eddaoudi, M., O'Keeffe, M. & Yaghi, O. M. Design and synthesis of an exceptionally stable and highly porous metal-organic framework. Nature 402, 276–279 (1999).
Mulfort, K. L. & Hupp, J. T. Chemical reduction of metal-organic framework materials as a method to enhance gas uptake and binding. J. Am. Chem. Soc. 129, 9604–9605 (2007).
Bureekaew, S. et al. One-dimensional imidazole aggregate in aluminium porous coordination polymers with high proton conductivity. Nat. Mater. 8, 831–836 (2009).
Liu, B., Shioyama, H., Akita, T. & Xu, Q. Metal-organic framework as a template for porous carbon synthesis. J. Am. Chem. Soc. 130, 5390–5391 (2008).
Xing, C. C. et al. Structural evolution of Co-based metal organic frameworks in pyrolysis for synthesis of core-shells on nanosheets: Co@CoOx@Carbon-rGO composites for enhanced hydrogen generation activity. ACS Appl. Mater. Interfaces 8, 15430–15438 (2016).
Ma, X., Zhou, Y. X., Liu, H., Li, Y. & Jiang, H. L. A MOF-derived Co-CoO@N-doped porous carbon for efficient tandem catalysis: dehydrogenation of ammonia borane and hydrogenation of nitro compounds. Chem. Commun. 52, 7719–7722 (2016).
Kim, J. et al. CNTs grown on nanoporous carbon from zeolitic imidazolate frameworks for supercapacitors. Chem. Commun. 52, 13016–13019 (2016).
Huang, M. et al. MOF-derived bi-metal embedded N-doped carbon polyhedral nanocages with enhanced lithium storage. J. Mater. Chem. A 5, 266–274 (2017).
Zhi, L. J. et al. Precursor-controlled formation of novel carbon/metal and carbon/metal oxide nanocomposites. Adv. Mater. 20, 1727–1731 (2008).
Zhang, H. B. et al. Surface-plasmon-enhanced photodriven CO2 reduction catalyzed by metal-organic-framework-derived iron nanoparticles encapsulated by ultrathin carbon layers. Adv Mater. 28, 3703–3710 (2016).
Sun, L. et al. From coconut shell to porous graphene-like nanosheets for high-power supercapacitors. J. Mater. Chem. A 1, 6462–6470 (2013).
Ferrari, A. C. & Basko, D. M. Raman spectroscopy as a versatile tool for studying the properties of graphene. Nat. Nanotechnol. 8, 235–246 (2013).
Ferrari, A. C. Raman spectroscopy of graphene and graphite: disorder, electron-phonon coupling, doping and nonadiabatic effects. Solid State Commun. 143, 47–57 (2007).
Jiao, Q., Fu, M., You, C., Zhao, Y. & Li, H. Preparation of hollow Co3O4 microspheres and their ethanol sensing properties. Inorg. Chem. 51, 11513–11520 (2012).
Zhang, L. J. et al. Highly graphitized nitrogen-doped porous carbon nanopolyhedra derived from ZIF-8 nanocrystals as efficient electrocatalysts for oxygen reduction reactions. Nanoscale 6, 6590–6602 (2014).
McIntyre, N. S. & Cook, M. G. X-ray photoelectron studies on some oxides and hydroxides of cobalt, nickel, and copper. Anal. Chem. 47, 2208–2213 (1975).
Tan, B. J., Klabunde, K. J. & Sherwood, P. M. A. XPS studies of solvated metal atom dispersed (SMAD) catalysts. Evidence for layered cobalt-manganese particles on alumina and silica. J. Am. Chem. Soc. 113, 855–861 (1991).
This work is supported by the National Natural Science Foundation of China (No. 21261006).
School of Physical Sciences, Guizhou University, Guiyang, 550025, China
Engao Dai, Jiao Xu, Junjie Qiu, Shucheng Liu, Ping Chen & Yi Liu
Engao Dai is the first author. Yi Liu and Engao Dai designed and carried out the experiments. Jiao Xu, Junjie Qiu, Shucheng Liu and Ping Chen collected the experimental data. Results were analyzed and interpreted by Yi Liu and Engao Dai. The manuscript was written by Yi Liu.
Correspondence to Yi Liu.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
K. W. Roggenkamp, A necessary and sufficient condition for orders in direct sums of complete skewfields to have only finitely many nonisomorphic indecomposable integral representations, Bull. Amer. Math. Soc. 76 (1970), 130–134. MR 284466, DOI https://doi.org/10.1090/S0002-9904-1970-12398-9
Klaus W. Roggenkamp, Projective homomorphisms and extensions of lattices, J. Reine Angew. Math. 246 (1971), 41–45. MR 274485, DOI https://doi.org/10.1515/crll.1971.246.41
A. V. Roĭter, On the representations of the cyclic group of fourth order by integral matrices, Vestnik Leningrad. Univ. 15 (1960), no. 19, 65–74 (Russian, with English summary). MR 0124418
A. V. Roĭter, Categories with division and integral representations, Soviet Math. Dokl. 4 (1963), 1621–1623. MR 0194494
A. V. Roĭter, On a category of representations, Ukrain. Mat. Ž. 15 (1963), 448–452 (Russian). MR 0159856
A. V. Roĭter, Integer-valued representations belonging to one genus, Izv. Akad. Nauk SSSR Ser. Mat. 30 (1966), 1315–1324 (Russian). MR 0213391
A. V. Roĭter, Divisibility in the category of representations over a complete local Dedekind ring, Ukrain. Mat. Ž. 17 (1965), no. 4, 124–129 (Russian). MR 0197534
A. V. Roĭter, $E$-systems of representations, Ukrain. Mat. Ž. 17 (1965), no. 2, 88–96 (Russian). MR 0190206
A. V. Roĭter, An analog of the theorem of Bass for modules of representations of noncommutative orders, Dokl. Akad. Nauk SSSR 168 (1966), 1261–1264 (Russian). MR 0202772
A. V. Roĭter, Unboundedness of the dimensions of the indecomposable representations of an algebra which has infinitely many indecomposable representations, Izv. Akad. Nauk SSSR Ser. Mat. 32 (1968), 1275–1282 (Russian). MR 0238893
A. V. Roĭter, On the theory of integral representations of rings, Mat. Zametki 3 (1968), 361–366 (Russian). MR 231859
Joseph J. Rotman, Notes on homological algebras, Van Nostrand Reinhold Co., New York-Toronto, Ont.-London, 1970. Van Nostrand Reinhold Mathematical Studies, No. 26. MR 0409590
V. P. Rud′ko, Tensor algebra of integral representations of a cyclic group of order $p^{2}$, Dopovīdī Akad. Nauk Ukraïn. RSR Ser. A 1967 (1967), 35–39 (Ukrainian, with Russian and English summaries). MR 0209370
A. I. Saksonov, On group rings of finite $p$-groups over certain integral domains, Dokl. Akad. Nauk BSSR 11 (1967), 204–207 (Russian). MR 0209372
A. I. Saksonov, Group-algebras of finite groups over a number field, Dokl. Akad. Nauk BSSR 11 (1967), 302–305 (Russian). MR 0210795
O. F. G. Schilling, The Theory of Valuations, Mathematical Surveys, No. 4, American Mathematical Society, New York, N. Y., 1950. MR 0043776
Hans Schneider and Julian Weissglass, Group rings, semigroup rings and their radicals, J. Algebra 5 (1967), 1–15. MR 213453, DOI https://doi.org/10.1016/0021-8693%2867%2990021-X
Sudarshan K. Sehgal, On the isomorphism of integral group rings. I, Canadian J. Math. 21 (1969), 410–413. MR 255706, DOI https://doi.org/10.4153/CJM-1969-044-9
C. S. Seshadri, Triviality of vector bundles over the affine space $K^{2}$, Proc. Nat. Acad. Sci. U.S.A. 44 (1958), 456–458. MR 102527, DOI https://doi.org/10.1073/pnas.44.5.456
C. S. Seshadri, Algebraic vector bundles over the product of an affine curve and the affine line, Proc. Amer. Math. Soc. 10 (1959), 670–673. MR 164972, DOI https://doi.org/10.1090/S0002-9939-1959-0164972-1
Michael Singer, Invertible powers of ideals over orders in commutative separable algebras, Proc. Cambridge Philos. Soc. 67 (1970), 237–242. MR 252378, DOI https://doi.org/10.1017/s0305004100045503
D. L. Stancl, Multiplication in Grothendieck rings of integral group rings, J. Algebra 7 (1967), 77–90. MR 223428, DOI https://doi.org/10.1016/0021-8693%2867%2990068-3
228. E. Steinitz, Rechteckige Systeme und Moduln in algebraischen Zahlenkörpern. I, II, Math. Ann. 71 (1911), 328-354; 72 (1912), 297-345.
Jan Rustom Strooker, Faithfully projective modules and clean algebras, J. J. Groen & Zoon, N.V., Leiden, 1965. Dissertation, University of Utrecht, Utrecht, 1965. MR 0217115
Richard G. Swan, Projective modules over finite groups, Bull. Amer. Math. Soc. 65 (1959), 365–367. MR 114842, DOI https://doi.org/10.1090/S0002-9904-1959-10376-1
Richard G. Swan, The $p$-period of a finite group, Illinois J. Math. 4 (1960), 341–346. MR 122856
Richard G. Swan, Induced representations and projective modules, Ann. of Math. (2) 71 (1960), 552–578. MR 138688, DOI https://doi.org/10.2307/1969944
Richard G. Swan, Projective modules over group rings and maximal orders, Ann. of Math. (2) 76 (1962), 55–61. MR 139635, DOI https://doi.org/10.2307/1970264
Richard G. Swan, The Grothendieck ring of a finite group, Topology 2 (1963), 85–110. MR 153722, DOI https://doi.org/10.1016/0040-9383%2863%2990025-9
R. G. Swan, Algebraic $K$-theory, Lecture Notes in Mathematics, No. 76, Springer-Verlag, Berlin-New York, 1968. MR 0245634
Richard G. Swan, Invariant rational functions and a problem of Steenrod, Invent. Math. 7 (1969), 148–158. MR 244215, DOI https://doi.org/10.1007/BF01389798
Richard G. Swan, The number of generators of a module, Math. Z. 102 (1967), 318–322. MR 218347, DOI https://doi.org/10.1007/BF01110912
Shuichi Takahashi, Arithmetic of group representations, Tohoku Math. J. (2) 11 (1959), 216–246. MR 109848, DOI https://doi.org/10.2748/tmj/1178244583
Shuichi Takahashi, A characterization of group rings as a special class of Hopf algebras, Canad. Math. Bull. 8 (1965), 465–475. MR 184988, DOI https://doi.org/10.4153/CMB-1965-033-5
Olga Taussky, On a theorem of Latimer and MacDuffee, Canad. J. Math. 1 (1949), 300–302. MR 30491, DOI https://doi.org/10.4153/cjm-1949-026-1
Olga Taussky, Classes of matrices and quadratic fields, Pacific J. Math. 1 (1951), 127–132. MR 43064
Olga Taussky, Classes of matrices and quadratic fields. II, J. London Math. Soc. 27 (1952), 237–239. MR 46335, DOI https://doi.org/10.1112/jlms/s1-27.2.237
Olga Taussky, Unimodular integral circulants, Math. Z. 63 (1955), 286–289. MR 72890, DOI https://doi.org/10.1007/BF01187938
Olga Taussky, On matrix classes corresponding to an ideal and its inverse, Illinois J. Math. 1 (1957), 108–113. MR 94326
Olga Taussky, Matrices of rational integers, Bull. Amer. Math. Soc. 66 (1960), 327–345. MR 120237, DOI https://doi.org/10.1090/S0002-9904-1960-10439-9
Olga Taussky, Ideal matrices. I, Arch. Math. 13 (1962), 275–282. MR 150165, DOI https://doi.org/10.1007/BF01650074
Olga Taussky, Ideal matrices. II, Math. Ann. 150 (1963), 218–225. MR 156862, DOI https://doi.org/10.1007/BF01396991
Olga Taussky, On the similarity transformation between an integral matrix with irreducible characteristic polynomial and its transpose, Math. Ann. 166 (1966), 60–63. MR 199206, DOI https://doi.org/10.1007/BF01361438
Olga Taussky, The discriminant matrices of an algebraic number field, J. London Math. Soc. 43 (1968), 152–154. MR 228473, DOI https://doi.org/10.1112/jlms/s1-43.1.152
Olga Taussky and John Todd, Matrices with finite period, Proc. Edinburgh Math. Soc. (2) 6 (1940), 128–134. MR 2829, DOI https://doi.org/10.1017/s0013091500024627
Olga Taussky and John Todd, Matrices of finite period, Proc. Roy. Irish Acad. Sect. A 46 (1941), 113–121. MR 0003607
Olga Taussky and Hans Zassenhaus, On the similarity transformation between a matrix and its transpose, Pacific J. Math. 9 (1959), 893–896. MR 108500
John G. Thompson, Vertices and sources, J. Algebra 6 (1967), 1–6. MR 207863, DOI https://doi.org/10.1016/0021-8693%2867%2990009-9
254. A. Troy, Integral representations of cyclic groups of order p, Ph.D. Thesis, University of Illinois, Urbana, Ill., 1961.
Kôji Uchida, Remarks on Grothendieck rings, Tohoku Math. J. (2) 19 (1967), 341–348. MR 227253, DOI https://doi.org/10.2748/tmj/1178243284
S. Ullom, Normal bases in Galois extensions of number fields, Nagoya Math. J. 34 (1969), 153–167. MR 240082
S. Ullom, Galois cohomology of ambiguous ideals, J. Number Theory 1 (1969), 11–15. MR 237473, DOI https://doi.org/10.1016/0022-314X%2869%2990022-5
Yutaka Watanabe, The Dedekind different and the homological different, Osaka Math. J. 4 (1967), 227–231. MR 227210
André Weil, Basic number theory, Die Grundlehren der mathematischen Wissenschaften, Band 144, Springer-Verlag New York, Inc., New York, 1967. MR 0234930
261. A. R. Whitcomb, The group ring problem, Ph.D. thesis, University of Chicago, Chicago, Ill., 1968.
Oscar Zariski and Pierre Samuel, Commutative algebra, Volume I, The University Series in Higher Mathematics, D. Van Nostrand Company, Inc., Princeton, New Jersey, 1958. With the cooperation of I. S. Cohen. MR 0090581
263. H. Zassenhaus, Neuer Beweis der Endlichkeit der Klassenzahl bei unimodularer Aquivalenz endlicher ganzzahliger Substitutionsgruppen, Abh. Math. Sem. Univ. Hamburg 12 (1938), 276-288.
Hans Zassenhaus, Über die Äquivalenz ganzzahliger Darstellungen, Nachr. Akad. Wiss. Göttingen Math.-Phys. Kl. II 1967 (1967), 167–193 (German). MR 230759
Janice Zemanek, On the semisimplicity of integral representation rings, Bull. Amer. Math. Soc. 76 (1970), 778–779. MR 269757, DOI https://doi.org/10.1090/S0002-9904-1970-12547-2
Ana Abras (ORCID: orcid.org/0000-0001-7989-1579), Rita K. Almeida, Pedro Carneiro & Carlos Henrique L. Corseuil
The frequency of labor inspections in Brazil increased in the late 1990s. In the years that followed, between 2003 and 2007, formal employment expanded significantly in the country. This paper examines whether these city-level changes in labor inspections could be a significant factor contributing to the increase in the number of formal labor contracts at the city level. We exploit unique administrative data on formal employment, covering different indicators of job and worker flows (including job creation, destruction, reallocation, accessions, and separations) between 1996 and 2006, together with data on the intensity of labor inspections, both at the city level. The results show that increases in the enforcement of labor market regulations at the subnational level led to an increase in gross and net formal job creation rates and in accession rates, in a period when Brazilian GDP and formal employment were growing and informality rates were declining. In contrast, increases in the enforcement of regulations are not significantly correlated with changes in the rate of job destruction. This finding is robust to different specifications and is consistent with a model where formal jobs become more attractive to workers when the enforcement of different types of labor regulations increases.
JEL Classification: J21, J63, E24, H80, C23
As more micro-level data becomes available, the understanding of labor market adjustment has benefited considerably from a literature looking into job and worker flows as the main outcome variables.[1] This new approach has unveiled new results on labor market adjustments to changes in the environment, such as business cycle fluctuations (Shimer 2012) or minimum wage shocks (Brochu and Green 2013). This paper looks at a shock that has been substantively overlooked: how labor markets in emerging economies react to changes in the enforcement of labor legislation. In spite of the importance of the topic, the literature has not been conclusive about the relation between the enforcement of labor regulations and job flow rates in emerging economies.[2] On the one hand, enforcement of labor regulations increases (formal) labor costs and could lead to lower rates of formal job creation. On the other hand, enforcement can directly impact job creation through the regularization of informal jobs at the plant level. Moreover, with higher compliance, formal sector jobs can become more attractive to workers, lowering job destruction and separations.
This paper exploits unique Brazilian administrative data at the city level to answer this question. In particular, it exploits city-level information on job and worker flows, and administrative data on the enforcement of labor market regulations, captured by the incidence of labor inspections across cities between 1996 and 2006. We measure the enforcement of labor regulations using the frequency of labor inspections at the city level. During this period, Brazil went through an important labor market expansion, with formal sector employment growing 7% on average and average rates of job creation and job destruction of 15.4 and 8.6%, respectively. Along with the increase in employment, average annual GDP growth of 2.6% contributed to the decline in labor market informality from 54.9 to 51.5%.[3] Simultaneously, during this period labor inspections covering fundamental aspects of the de jure labor code, such as contributions to the job severance fund, also increased significantly.[4]
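The text reports these rates without defining them; job and worker flow rates of this kind are conventionally constructed from plant-level employment changes in the style of Davis and Haltiwanger. The following is a minimal sketch of those standard definitions (the paper's exact construction may differ), with e_{it} denoting the formal employment of plant i in year t:

\[
JC_t = \frac{\sum_{i:\, \Delta e_{it} > 0} \Delta e_{it}}{E_t}, \qquad
JD_t = \frac{\sum_{i:\, \Delta e_{it} < 0} \left| \Delta e_{it} \right|}{E_t}, \qquad
\Delta e_{it} = e_{it} - e_{i,t-1},
\]
\[
E_t = \sum_i \tfrac{1}{2} \left( e_{it} + e_{i,t-1} \right), \qquad
\text{net job creation}_t = JC_t - JD_t, \qquad
\text{job reallocation}_t = JC_t + JD_t.
\]

Worker flows are defined analogously from gross hires H_{it} and gross separations S_{it}: the accession rate is \(\sum_i H_{it}/E_t\) and the separation rate is \(\sum_i S_{it}/E_t\). Accessions exceed job creation (and separations exceed job destruction) whenever workers churn through continuing jobs, which is why the paper tracks both sets of indicators.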
Simply relating labor regulations to aggregate indicators of job growth or job flows would, however, be unlikely to yield a good estimate of the impact of enforcing labor regulation on job flows. The main empirical challenge lies in the fact that enforcement of labor regulations is, in practice, not randomly distributed across Brazilian cities. On the one hand, enforcement may be stronger in cities where reports of labor violations are more frequent, as a significant part of inspections is triggered by anonymous reporting. On the other hand, enforcement may be stronger in richer and larger cities, which also tend to have better institutions. Moreover, over this period Brazilian firms likely faced other policy shocks possibly affecting their patterns of job and worker flows. Two examples are the expansion of firms' business credit lines (Catão et al. 2009) and significant tax simplification programs for small businesses (Fajnzylber et al. 2011).
To mitigate this concern, we consider a simple reduced-form equation exploiting time and within-country variation, across cities, in the enforcement of labor market regulations and in the rates of job creation and job destruction. In other words, our reduced form compares changes over time in the degree of enforcement of labor regulations at the city level and relates that variation to changes in job and worker flow rates. The advantage is that, by exploiting subnational variation, this reduced form accounts for any time-varying nationwide shocks that could have simultaneously contributed to the increase in employment formality during this period. In addition, our data include a robust set of time-varying observable characteristics at the city level (e.g., city GDP, the distribution of plant size, the share of educated workers at the plant level, total city population, and total city homicide rates). Furthermore, because we exploit city-level panel data, we can also account for unobservable, state-specific time trends.
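The estimating equation itself is not spelled out in this introduction. A minimal two-way fixed-effects sketch consistent with the description above (our notation, not necessarily the paper's exact specification) is:

\[
y_{ct} = \beta\, \mathit{Inspections}_{ct} + X_{ct}'\gamma + \alpha_c + \delta_t + \theta_{s(c)}\, t + \varepsilon_{ct},
\]

where y_{ct} is a job or worker flow rate in city c and year t, Inspections_{ct} measures inspection intensity, X_{ct} stacks the city controls listed above, \alpha_c and \delta_t are city and year fixed effects, and \theta_{s(c)} t allows state-specific linear trends. A corresponding estimation sketch in Python, with hypothetical file and column names and assuming the linearmodels package, might look like:

import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical city-year panel; all column names here are illustrative,
# not the paper's actual variable names.
df = pd.read_csv("city_panel.csv")
df = df.set_index(["city", "year"])  # PanelOLS expects an entity-time MultiIndex

# Regress a job flow rate on inspection intensity with city (entity) and
# year (time) fixed effects plus observable city controls; state-specific
# trends are omitted here for brevity.
model = PanelOLS.from_formula(
    "job_creation_rate ~ 1 + inspections_per_firm + log_gdp"
    " + log_population + homicide_rate + EntityEffects + TimeEffects",
    data=df,
)
res = model.fit(cov_type="clustered", cluster_entity=True)  # cluster by city
print(res.summary)

Clustering at the city level reflects the level at which the enforcement variable varies; with few clusters per state, clustering at the state level would be a natural robustness check.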
Our findings suggest that, all else constant, cities facing an increase in the enforcement of labor market regulations tend to have higher rates of worker flows on both margins: accessions and separations. More stringent enforcement is also related to increases in job creation rates. In contrast, changes in job destruction rates, as measured in our administrative data set, are not related to changes in the degree of enforcement of labor regulations. Our main findings are robust to the inclusion of state-level time trends and to considering different subnational samples.
The paper draws upon and contributes to different literatures. First, it relates to the literature relating aggregate indicators of job growth and job flows to country regulations and institutions. While earlier empirical cross-country work did not find a significant link between labor regulations and job reallocation (see Bertola and Rogerson 1997; Davis and Haltiwanger 1999), more recent findings show that, even after accounting for differences across countries in technology and sector composition, there is still sizeable unaccounted variation in job reallocation across countries. This unexplained variation can be related to institutional or policy variables or to measurement error inherent in cross-country studies (see Bartelsman et al. 2009). The literature examining the institutional role in explaining this cross-country variation in job reallocation shows that labor regulations may play an important role.
Difference-in-difference estimations exploiting cross-country variation in firing and hiring costs show a strong and negative relationship between restrictive regulation and the reallocation of resources (e.g., Micco and Pages 2004; Haltiwanger et al. 2010). Because our work explores within-country variation in the degree of enforcement of labor regulations, our empirical approach bypasses some of the measurement issues and assumptions of previous research by using time and within-country variation in de facto regulation within a single country. As the enforcement of policies is not uniform across regions in Brazil, we can discuss a tighter link between the degree of stringency of de facto labor market regulations and job reallocation in cities under a similar institutional environment.
Secondly, it relates to the literature studying the impact of labor regulations on firm dynamics and labor market outcomes. The literature here is extensive and considers several dimensions of labor market regulations, from compliance with mandated benefits (such as unemployment or health benefits) or the minimum wage to alternative employment protection measures. The theoretical predictions on how these regulations affect firm outcomes are diverse. While the literature on the effects of mandated benefits on labor market outcomes in developing countries has produced mixed results,Footnote 5 the impacts of employment protection rules likely vary for different workers and firms.Footnote 6 Because our empirical approach explores variation in labor inspections, we are effectively considering the de facto enforcement of a diverse mix of labor policies. This has advantages and disadvantages relative to exploiting variation in de jure regulations. On the one hand, one cannot identify the effect of each individual regulation; on the other hand, any effect identified already reflects the interaction of de facto regulations and of their enforcement, which is ultimately what impacts individuals.
Thirdly, we relate to the literature analyzing how changes in the enforcement of labor market regulations impact labor market outcomes. This work was initiated by Almeida and Carneiro (2009), who proxied enforcement of labor regulations with the average number of labor inspections in the city.Footnote 7 Almeida and Carneiro (2012) look at the impacts of enforcement of labor regulations on different labor market variables, also exploring administrative city-level data on labor inspections. Exploring only the within-country variation across cities, they show that, in response to a rise in labor inspections, there is an increase in formal employment, a decrease in informal employment, a rise in non-employment, a decline in wages at the top of the formal wage distribution, and an increase in informal wages. All of the movement from the informal to the formal sector is among the self-employed. There is little change in the employment and wages of those who are informal employees. Almeida and Carneiro (2012) argue that, in the early 1990s, labor inspectors started enforcing compliance with mandated benefits, namely contributions in advance to the job severance fund and job severance payments upon dismissal. As a result of increased enforcement, formal workers support more generous mandated benefits by receiving lower wages. The value that workers place on these benefits is potentially higher than their cost to employers because they are untaxed. In addition, wage rigidity (e.g., through minimum wages) may prevent downward adjustment at the bottom of the wage distribution. This causes formal sector jobs at the bottom of the wage distribution to become more attractive to informal workers, leading them to switch to the formal sector.
Our work contributes to this literature by exploiting within-country and time series variation in the enforcement of labor market regulations between 1996 and 2006 and focusing now on job and worker flows, including indicators of job creation, job destruction, reallocation, accessions, and separations. In addition, our results are aligned with Almeida and Carneiro (2012) on how labor inspections are related to employment in the formal sector. There are different ways that labor inspections can affect job and worker flows, and the direction of the relationship is not clear a priori. On the one hand, inspections can directly impact job creation through the regularization of informal jobs at the plant level. Indirectly, with higher compliance, formal sector jobs can become more attractive to workers, lowering job destruction and separations. On the other hand, more inspections increase the cost of formal jobs and can lead establishments to shed workers. Almeida and Carneiro (2012) show that in cities with more frequent inspections, formal employment tends to be higher. This finding is fully consistent with our results of higher formal net employment growth and job creation rates in cities with more frequent inspections.
One fact is worth highlighting. Unlike Almeida and Carneiro (2012), who explore the Brazilian population census, our paper explores administrative data only on formally registered firms. It thus only captures formal labor contracts, and one cannot make inferences about the relationship between inspections and the subsequent regularization of labor contracts, as we do not observe informality.
This paper is organized as follows. Section 2 presents an overview of the main changes in the procedures of labor inspection implemented in Brazil, arguing that changes in these policies have made them more effective. Section 3 discusses the data used and indicators computed. Section 4 presents the empirical methodology and the proposed reduced form. Section 5 reports the main descriptive statistics for the final sample, and Section 6 discusses the empirical results. Section 7 discusses conclusions and main policy implications.
Labor inspections in Brazil
Starting in 1995, the Ministry of Labor and Employment (MTE), under the Secretary of Work Inspection (SIT), implemented a series of reforms aiming to increase the efficiency of inspections.Footnote 8 The reform emphasized a new way of monitoring the outcomes of labor inspections (see Miguel 2004). The primary objective was the standardization of the results of labor inspections at the national level. The creation of the Federal System of Labor Inspection (SFIT) was an important tool for this aim. First, the system allowed the creation of a routine to plan labor inspections throughout the country. Schedules with targeted outcomes (goals) began to be sent annually by the various Regional Offices of Labor to create a system of inspections. This reform made policies less reactive to complaints about labor standards and more proactive and based on long-term planning. In addition, the reform introduced financial incentives so that labor inspections became more efficient. The system awarded bonuses linked to performance, granted in accordance with the enforcement goals initially established. These goals generally considered the number of inspected plants and the total financial amount collected from fines. It is worth noting that the bonus system is not the only incentive mechanism: Pires (2011) argues that the formation of regional and sector teams with common goals is an incentive mechanism additional to individual bonuses.Footnote 9
The reform also involved a change in the motivation of labor inspections. Miguel (2004) states that "the main objective was to make inspection less punitive and more educational, thus making it more effective from a social-economic point of view". In this context, it is important to highlight two actions: (i) the creation in 1996 of a handbook entitled "Mediator's Manual", which contained advice for resolving labor conflicts, and (ii) the increase in the options available to the labor inspector, beginning in 2001, when "tables of understanding" were permitted to debate solutions to difficult-to-solve irregularities during audit visits. Pires (2008) suggests that this new approach contributed to enabling the labor inspector to fulfill his role in a more efficient manner. Almeida (2008) also explores this point, arguing that this type of strategy is particularly successful in non-metropolitan cities that agglomerate small businesses.Footnote 10
Therefore, since 1995, the inspector (i) became more focused in his actions, (ii) received stronger incentives to pursue evasion more intensively (with pay-for-performance schemes), and (iii) had more resources to address any irregularities found. We argue that these changes in labor inspections in Brazil were accompanied by an improvement in inspection-related indicators. Table 1 illustrates this point.
Table 1 Country means for labor inspection variables, 1996–2000 and 2001–2006
Column (1) of Table 1 displays the annual growth in the rate of formalization of workers following inspections, captured by the number of workers registered during labor inspections divided by the number of workers covered by these same inspections. This ratio increases from 1.8% in 1996–2000 to 2.6% in 2001–2006. Column (2) shows that between these two periods, the annual average number of plants inspected by each inspector decreased from over 141 to less than 120. Therefore, the increase in the rate of formalization seems to have been driven by better and more targeted inspections rather than by more inspections.
This hypothesis is indirectly supported in the third column, which shows that the mean size (captured by total number of employees) of the inspected plant increased by almost 50%, changing from 50 to 74.Footnote 11 If we consider that informal labor contracts are less frequent in larger plants, then an increase in the mean size of inspected plants may be related to a decrease in the rate of labor contracts that become registered because of the inspections. Nevertheless, the first column of Table 1 indicates an increase in the rate of contracts registered following the labor inspections. These facts are reconciled if, throughout this period, there is an increase in the effectiveness of inspections.
Finally, the last column of Table 1 shows that the percentage of inspected plants which have been fined remained stable at around 18%. This suggests that the higher rate of formalization was not the result of applying harsher penalties.Footnote 12 That is an indication that the improvement in labor inspection, with respect to combating informality, is due to a more effective performance on the part of the labor inspector.Footnote 13 Table 2 shows the inspection intensity measured at the city level and used in the regression analysis: the number of visits over the number of plants. This indicator is stable from 1996 to 2006, with small dips in 1998 and 2004.
Table 2 Inspection intensity by year
Note that this increased effectiveness of labor inspections could have come from any combination of the three dimensions outlined above. Identifying the specific contributions of each of these dimensions is beyond the scope of this work.
Another change in labor inspection in Brazil occurred outside the SIT. Since 1998, the Public Ministry of Work (MPT) has played an increasingly active role in labor inspection, acting in parallel to the SIT. The most noteworthy fact is that, in the last decade, five priorities were chosen for the SIT, one of them being the regularization of labor contracts. One should also take the performance of the MPT into account in the analysis. However, because we lack information about the MPT's results, we will focus the analysis on labor inspection under the SIT.
Data and indicators
In the empirical work, we explore different data sets covering the period between 1996 and 2006. First, we explore the annual social information report (Relação Anual de Informações Sociais, RAIS) (see Appendix for the correspondence between RAIS and SFIT data), published yearly by the Brazilian Ministry of Labor and Employment and capturing all sectors of the economy (agriculture, industry, and services).Footnote 14 This is our source of information to compute measures of job and worker flows. We consider the total number of accessions (Ai,t) and separations (Si,t) each year (denoted with subscript t) in every formal plant, i.e., every plant registered with the tax authority (denoted with subscript i).Footnote 15 We compute net employment growth in each establishment (Δni,t), which is the basic input for the job flow measures.
We also compute job and worker flow measures aggregated at the city level between 1996 and 2006. Job creation and destruction rates at the city level (denoted by subscript j) are defined as in Davis et al. (1996):
$$ JC_{j,t} = 100 \times \left[ \sum_{i \in j} \Delta n_{i,t} \, I\!\left(\Delta n_{i,t} > 0\right) / N_{j,t} \right], $$
$$ JD_{j,t} = 100 \times \left[ \sum_{i \in j} \left|\Delta n_{i,t}\right| I\!\left(\Delta n_{i,t} \le 0\right) / N_{j,t} \right], $$
JCj,t and JDj,t denote the rates of job creation and destruction for year t and city j, respectively. The two job flow rates are based on the change in employment, i.e., the net of accessions and separations (Δni,t), at plant i in city j between years t−1 and t. When this variation is positive, it contributes to job creation in the city; when it is negative, it contributes to job destruction in the city. This condition appears in the above formulas through the indicator function I(.). Nj,t denotes the average city employment over the 12 consecutive months of year t, and it is used to normalize both rates.
Two job flow variables derived from the job creation and destruction rates are also considered in the analysis: the net job growth rate and the job reallocation rate. These measures allow us to look at distinct aspects of the labor market, namely the net increase in job positions and the increase in job churning. It is possible that the policy studied has no effect on employment growth at the city level but increases job reallocation within cities, with job creation and job destruction occurring simultaneously at distinct plants within the same city. For instance, jobs may flow from plants with a higher inspection probability to plants with a lower probability of being inspected.
$$ NET_{j,t} = JC_{j,t} - JD_{j,t} $$
$$ REALL_{j,t} = JC_{j,t} + JD_{j,t} $$
Finally, we also aggregate accessions and separations at the city level, computing:
$$ A_{j,t} = 100 \times \sum_{i \in j} A_{i,t} / N_{j,t} $$
$$ S_{j,t} = 100 \times \sum_{i \in j} S_{i,t} / N_{j,t} $$
where Ai,t and Si,t are accessions and separations of workers at the plant level, as previously defined. Accessions are the sum of hires (H), rehires (RH), and transfers from other establishments in the same firm. Separations are the sum of quits (Q), fires (F), discharges (D), and transfers (TO) to other establishments in the firm. The average employment over the 12 consecutive months of year t is used to normalize both rates. Using the RAIS data, it is possible to identify month-to-month changes in employment and the average employment within the year. The upside of this approach is that we avoid the autocorrelation that would be introduced in the regression by defining variables using the previous year's information.
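To make these definitions concrete, the following is a minimal sketch (in Python with pandas; not the authors' code, and all column names and values are hypothetical) of how the city-level flow rates could be computed from a plant-level panel:

```python
import pandas as pd

# Hypothetical plant-level panel: one row per plant and year, with the city,
# the within-year average employment (N_it), accessions (A_it), and
# separations (S_it).
df = pd.DataFrame({
    "plant": [1, 1, 2, 2],
    "city":  ["A", "A", "A", "A"],
    "year":  [1996, 1998, 1996, 1998],
    "emp":   [10.0, 14.0, 8.0, 5.0],
    "acc":   [4, 6, 1, 0],
    "sep":   [1, 2, 3, 3],
}).sort_values(["plant", "year"])

df["dn"] = df.groupby("plant")["emp"].diff()   # Delta n_{i,t}
df["jc_num"] = df["dn"].clip(lower=0)          # Delta n_{i,t} * I(Delta n > 0)
df["jd_num"] = (-df["dn"]).clip(lower=0)       # |Delta n_{i,t}| * I(Delta n <= 0)

# Aggregate plant-level numerators and employment to the city-year level;
# summing plant-level average employment approximates N_{j,t}.
city = df.dropna(subset=["dn"]).groupby(["city", "year"]).sum(numeric_only=True)
flows = pd.DataFrame({
    "JC": 100 * city["jc_num"] / city["emp"],
    "JD": 100 * city["jd_num"] / city["emp"],
    "A":  100 * city["acc"] / city["emp"],
    "S":  100 * city["sep"] / city["emp"],
})
flows["NET"] = flows["JC"] - flows["JD"]       # net job growth
flows["REALL"] = flows["JC"] + flows["JD"]     # job reallocation
print(flows.round(1))
```

Note that the sketch computes Δni,t between consecutive observed years, so the first observed year of each plant drops out of the flow computation.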
Second, we explore annual information on labor inspections at the city level, collected by the Brazilian Federal System of Labor Inspection (SFIT), which is part of the Ministry of Labor. The data are available at the city level for the years 1996, 1998, 2000, 2002, 2004, and 2006. Our period of analysis coincides with these years since this is the most restrictive data set in the time dimension. We measure labor inspections in city j with an indicator of the average frequency of total labor inspections per plant in the city, where the number of plants in each city is computed using RAIS:
$$ FR_{j,t} = LI_{j,t} / \sum_{i} I\!\left(i \in j\right), $$
where LIj,t is the total number of visits by labor inspectors in city j during year t and I(iϵj) is the same indicator function used before.
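Continuing the sketch above (again with hypothetical values), the indicator can be computed by dividing city-year inspection counts from SFIT by the number of plants observed in RAIS:

```python
# sum_i I(i in j): the number of formal plants per city-year in RAIS.
plants = df.groupby(["city", "year"])["plant"].nunique()

# LI_{j,t}: total inspection visits per city-year from SFIT
# (hypothetical values for illustration).
visits = pd.Series({("A", 1996): 2, ("A", 1998): 3})

fr = (visits / plants).rename("FR")   # FR_{j,t}
```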
Third, we use additional variables to control for differences across cities/years, such as yearly city-level GDP, current city government expenses as a fraction of GDP, agricultural sector GDP, service sector GDP, population, and the number of homicides. GDP-related information was taken from IPEADATA, while population and homicide data are available from DataSUS.Footnote 16
Empirical methodology
We consider a simple reduced form equation relating the different measures of job flow and job reallocation with enforcement of labor regulations, measured by labor inspections. As described in the previous section, we consider different dependent variables of interest: total job creation and destruction rates (JC and JD), net job growth (NET, equal to JC minus JD), reallocation rate (REALL, equal to JC plus JD), accession rate (A), and separation rate (S).
$$ Y_{jt} = \beta\, FR_{jt} + \gamma X_{jt} + \mu_t + \alpha_j + \varepsilon_{jt} $$
where Yjt denotes the value of the dependent variable of interest in city j and year t, with t = 1996, 1998, 2000, 2002, 2004, and 2006; FRjt captures the frequency of labor inspections in city j in year t; and Xjt captures time-varying city-level characteristics such as the average, median, and 75th percentile of plant size in the city; city-level GDP; the share of establishments in agriculture; the share of establishments in industry; the average, median, and 75th percentile of the share of workers with secondary education in the establishment at the city level; total city population; and total city homicide rates. These time-varying city-level controls account for city characteristics that may simultaneously affect job and worker flow measures and be related to the intensity of labor inspections at the city level. μt are year dummies that capture macro shocks; αj are city dummies that capture unobserved time-invariant city-level characteristics; and ε captures unobservable shocks to our dependent variable of interest. We estimate Eq. (8) using weighted ordinary least squares, where employment at the city level is used as the weight.
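As an illustration only (not the authors' estimation code; the panel below is a hypothetical toy with an abridged set of controls), Eq. (8) could be estimated by weighted least squares with explicit city and year dummies:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical city-year panel (in practice, built by merging the flow
# rates, the FR indicator, and the X_jt controls described above).
panel = pd.DataFrame({
    "city": ["A"]*3 + ["B"]*3 + ["C"]*3,
    "year": [1996, 1998, 2000]*3,
    "Y":    [15.0, 18.0, 17.5, 14.0, 16.5, 16.0, 12.0, 13.0, 14.5],
    "FR":   [0.10, 0.25, 0.30, 0.08, 0.12, 0.15, 0.05, 0.06, 0.10],
    "gdp":  [1.0, 1.1, 1.2, 2.0, 2.2, 2.3, 0.8, 0.9, 1.0],
    "emp":  [500, 560, 575, 800, 830, 845, 300, 310, 330],
})

# Weighted OLS with city fixed effects (alpha_j), year dummies (mu_t),
# and city employment as the regression weight.
model = smf.wls("Y ~ FR + gdp + C(city) + C(year)",
                data=panel, weights=panel["emp"])
res = model.fit()
print(res.params["FR"])   # beta, the coefficient of interest
```

With thousands of cities, a panel estimator that absorbs the fixed effects rather than estimating explicit dummies would be more practical, but the specification is the same.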
We will also test the robustness of the results by including in Eq. (8) state-level time trends, possibly correlated with city-level trends in the enforcement of labor regulations and with trends in job flow rates. Hence, we will estimate the following specification:
$$ Y_{jt} = \beta\, FR_{jt} + \gamma X_{jt} + \mu_t + \theta_{st} + \alpha_j + \varepsilon_{jt} $$
where all the notation is as above and θst captures state-specific trends.
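Under one reading of θst, consistent with footnote 18's count of 27 states × 6 years = 162 terms, the state trends enter as a full set of state-by-year effects, which in the sketch above amounts to adding an interaction of state and year dummies:

```python
# Same hypothetical panel as above, plus the state of each city.
panel["state"] = panel["city"].map({"A": "S1", "B": "S1", "C": "S2"})

# theta_st as state-by-year effects (27 states x 6 years = 162 terms in
# the paper; here just the toy panel). Dummies redundant with C(year)
# are handled by the fitting routine; only the FR coefficient matters.
model_trends = smf.wls(
    "Y ~ FR + gdp + C(city) + C(year) + C(state):C(year)",
    data=panel, weights=panel["emp"],
)
```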
Tables 3 and 4 report the evolution of our main dependent variables of interest at the aggregate level between 1996 and 2006. Aggregation from the city level to the national level uses the average city-year employment level as weights, analogous to the use of average plant employment in (3). Table 3 shows that throughout this period, there has been an increase in average job creation rates and a decrease in job destruction. Column (1) shows that JC rises from 14.9% in 1996 to 18.1% in 2000 and then stabilizes around 17%. Column (2) suggests a noisier evolution for JD, but with two distinct levels: a higher one in the 1990's (around 10%) and a lower one in the 2000's (around 8.5%). Columns (3) and (4) show that net job growth increased substantially, while reallocation rates stayed approximately constant. The difference in time patterns across the measures in Table 3 highlights that each captures a distinct feature of the labor market, justifying their simultaneous use in the empirical work. The same can be said about using worker flow measures as a complement to job flow measures.
Table 3 Job flows, 1996–2006
Table 4 Worker flows, 1996–2006
Table 4 presents the evolution of worker flows between 1996 and 2006. Column (1) reports accession rates, which have increased substantially, especially in the 2000's, when JC was relatively stable. Separation rates, in column (2), display a U-shape, varying from just above 40% in 1996 to their lowest value of 37.2% in 2000. Again, this contrasts with the evolution of JD, which was stable. It is interesting to note that worker flow rates are on average higher than job flow rates in the data. Although this should be the case, since workers can move over and above the shifts in jobs, the difference between the two rates is substantial. In the US economy, where workers and firms are arguably among the least encumbered in moving, hiring and separation rates in 2010 stood at only 18.7 and 18.5%, respectively (Hyatt and Spletzer 2013).
In addition to the time series variation, our identification strategy relies on the variation across cities in these indicators. Table 5 reports the within-country cross-sectional variation in the main outcomes of interest for measures of job and worker flows and for the inspection intensity. The ratio between the 90th and 10th percentiles reaches almost 2 for job creation and passes this mark for job destruction. The analogous ratios for both worker flow measures are also higher than 2.
Table 5 Statistics for worker and job flows, inspection indicator, and establishment characteristics at the city level
Table 5 reports substantial variation in the distribution of inspection intensity in our sample. The 10th and 90th percentiles go from less than 0.10 to 0.36. Almeida (2008) suggests that the logistics of labor inspection vary significantly with the size of the city and the size of its establishments. The cross-sectional variation in the main variables of our analysis related to job and worker flows may thus be driven by city characteristics. This suggests that accounting for city differences helps to isolate part of the confounding effects that might jeopardize the interpretation of our estimates in a univariate regression analysis.
Table 6 reports the intensity in labor inspections, between 1996 and 2006, depending on the firm and city size (we differentiate cities with more than 1000 establishments, cities with 100–1000 establishments, and cities with less than 100 establishments). Results show that inspection intensity is higher among larger firms (with more than 20 employees) and in larger cities (with more than 1000 establishments).
Table 6 Inspection intensity, by average plant and city size
Figure 1 shows the average inspection intensity indicator at city level aggregated by selected states in different regions. Results show sizeable spatial and temporal variation in inspection intensity across the sample. Lastly, Table 7 reports summary statistics for the main control variables.
Fig. 1 Inspection intensity indicator by year for selected states
Table 7 Summary statistics for all city-level time-varying control variables
The results of estimating Eq. (8) by ordinary least squares are reported in columns (1) through (6) of Table 8 for job creation and destruction rates, net job creation, job reallocation, accessions, and separations, respectively. Panel A does not control for city-level time-varying characteristics (Xjt), while panel B includes all the city time-varying characteristics summarized in Table 7. The findings reported in columns (1) to (4) of panel A show no significant relation between enforcement intensity and average city-level job flow rates. However, the findings in the last two columns of panel A of Table 8 show that an increase in the enforcement of labor market regulations, as captured by an increased number of inspections per plant, is correlated with increased rates of worker flows in the city. The results in panel A of Table 8 show that a one standard deviation increase in inspection intensity is associated with a 0.79 percentage point increase in accession rates and a 0.55 percentage point increase in separation rates.
Table 8 Enforcement of labor Regulations and job flow rates
In panel B of Table 8, we control for several city-level time-variant characteristics to account for the fact that labor and product market conditions at the city level likely change over time. The results reported in columns (5) and (6) show that the main finding for worker flows holds; furthermore, in this reduced form, there is a substantive positive correlation between the intensity of labor inspections and the level of gross and net job creation at the city level. In particular, in cities with more frequent labor inspections, there are statistically significantly higher city-level rates of job creation, net growth, separation, and accession. The findings reported in panel B of Table 8 show that an increase of one standard deviation in inspection intensity at the city level is associated with city-level job creation and net growth rates 0.26 and 0.30 percentage points higher, respectively. The same increase in inspections is associated with accession and separation rates 0.79 and 0.49 percentage points higher, respectively.Footnote 17
Table 9 Enforcement of labor regulations and job flow rates, controlling for state time trends
The positive effect of labor inspections on both margins of worker flows may seem surprising. However, the main data we exploit, RAIS, only capture formal sector jobs, even if the establishment keeps some unregistered workers. Hence, what is computed as an accession may represent a worker flow within the establishment from an informal to a formal position. That could explain the positive effect on accessions. The positive effect on separations can be interpreted in two ways: first, as an employer's reaction aimed at cost reduction, since dismissing employees compensates for the labor cost increase due to the aforementioned formalization; second, for a given turnover rate, the more formal workers the establishment has, the more separations will be registered in RAIS. This interpretation depends on firms employing part of their workforce under informal contracts. Hence, the magnitude of our results should increase with the propensity of firms to hire workers under informal contracts. We will provide some suggestive evidence consistent with this interpretation.
Because the degree of enforcement of labor regulations varies at the city level and over time, it is not possible to account in the reduced form for unobservable city trends. Nevertheless, we acknowledge that there may be trends at a sub-national level, correlated with changes in enforcement (for example, changes in the quality of other institutions at the sub-national level). Hence, we test the robustness of the main specification to the inclusion of state level trends. The main findings are reported in Table 9.Footnote 18 Reassuringly, our main findings are robust to the inclusion of state-time trends.
As shown in the previous sections, both enforcement levels and labor market flows vary systematically across cities and establishments of different sizes in our sample. To check the robustness of the results across these groups, Eq. (8) is estimated separately for different groups of cities depending on their average size (proxied by population and establishment size). Table 10 reports the results for cities with fewer and more than 10,000 persons in panels A and B, respectively. The coefficients for job creation and worker flow rates reported in columns (1) and (2) remain positive and statistically significant only for small cities, suggesting that enforcement of labor regulations produces stronger impacts there. Table 11 reports results for cities with an average plant size of fewer than 10 employees in panel A and an average plant size of 10 or more employees in panel B. Interestingly, results are positive and statistically significant for small average establishments in Table 11 for all job and worker flow rates except job destruction.
Table 10 Enforcement of labor regulations and job flow rates, by average population size in the city
Table 11 Enforcement of labor regulations and job flow rates, by average plant size in the city
As we have claimed, our estimated effects of labor inspection are more intense in settings with a higher incidence of informal labor contracts, such as small municipalities. We interpret this outcome as follows. De jure rules are the same for all cities, but large cities are likely to already face scrutiny by labor inspectors and comply with regulations, while establishments in small cities, facing fewer inspections, are more likely to infringe the rules. Hence, the marginal effect of more enforcement in small cities and establishments is higher, forcing employers to adjust hiring and firing in the face of de facto stringent rules. Since the firm-size distribution in Brazil is left-skewed, the aggregate results show a positive correlation between flows and inspection frequency.
In this paper, we explore the relationship between the enforcement of labor market regulations and job and worker flow measures. We explore city-level data across Brazilian cities, between 1996 and 2006, to identify whether and how the enforcement of labor regulations is related to different indicators of job and worker flows. The analysis is based on unique city-level and time series administrative data for Brazil, exploring both the census of all formal plants in Brazil (RAIS) and administrative data on labor inspections. Both data sets are collected by the Brazilian Ministry of Labor and Employment.
Our results suggest that increases in the enforcement of labor market regulations at the city level are strongly correlated with higher job creation rates. Similar positive relationships hold between labor inspections and net growth, reallocation, and accession and separation rates. The estimates are consistent across samples split by city and establishment size.
These findings are in line with Almeida and Carneiro (2012). There, the authors find that, in response to a rise in labor inspections, there is also an increase in formal employment, together with a decrease in informal employment, a rise in non-employment, a decline in wages at the top of the formal wage distribution, and an increase in informal wages. Their argument is that, as inspectors started enforcing compliance with mandated benefits, formal workers pay for more generous mandated benefits by receiving lower wages. The value that workers place on these benefits is potentially higher than their cost to employers because they are untaxed. In addition, wage rigidity (e.g., through minimum wages) prevents downward adjustment at the bottom of the wage distribution. This causes formal sector jobs at the bottom of the wage distribution to become more attractive to informal workers, leading them to switch to the formal sector. In the process, wages in the informal sector adjust upwards.
The emphasis on flows as opposed to stocks motivated the terminology "flow approach" to the labor market, as referred to by Blanchard and Portugal (2001) and others since then.
Examples include Haltiwanger et al. (2010), Bartelsman et al. (2009), and Boeri et al. (2008).
Data from IPEADATA (http://www.ipeadata.gov.br). Informality is defined as the ratio of self-employed plus unregistered workers to the sum of employers, self-employed, registered, and unregistered workers in the labor market.
Inspections cover a wide range of mandated benefits and rules including social security and unemployment insurance contributions, maximum working hours, registration cards, minimum wage compliance, and subsidies for commuting and transportation expenses.
The literature on the effects of mandated benefits on labor market outcomes in developing countries has produced mixed results. The question of whether benefits impact employment and wages remains only partially answered, since different authors have found both increases and reductions in employment and wages after relevant labor market reforms. Take the case of two studies on social security in Latin America that reached opposite conclusions. Gruber (1997) analyzes the social security reforms in Chile in the eighties, which sharply reduced payroll taxes. The results point to wage shifting following lower taxes and little employment effect, regardless of the choice of estimation technique. Closer to our work, Kugler and Kugler (2002) use a panel of manufacturing firms in Colombia to assess wage and employment outcomes after a government attempt to improve social security funding with higher payroll taxes. Using variation in tax rates and compliance between firms and industries, the authors find that the adjustment happened largely through unemployment instead of wages.
Employment protection rules may vary for different types of workers and firms, hence their potential to generate misallocation and change the optimal choice of labor input and firm size. One example can be found in Kugler and Pica's (2008) study of the impact of an increase in employment protection costs for small firms in Italy. Difference-in-difference exercises for a regulation change in 1990 indicate that higher firing costs lowered the turnover rates of small firms. Small businesses were also less likely to enter the market after the reform. Employment protection can also affect the pace of worker flows. In the case of Chile, Montenegro and Pages (2004) estimate the effect of severance payments on the job loss and job finding rates of different workers. Employees with shorter tenure bring lower dismissal costs; this is the case of young and female workers, who display higher chances of dismissal over the cycle and higher job finding rates.
Almeida and Carneiro (2009) explore firm-level data from the World Bank's Enterprise Surveys for Brazil to relate a more stringent enforcement of labor market regulations to the number of hires and fires among formal firms. The results suggest that, on average, firms facing an increased probability of being inspected (by 1 percentage point) employ 0.38% fewer workers than similar firms. They suggest that more intensive inspections inhibit the use of informal hires and impose a cost on firms, decreasing the level of new formal hires. It is worth stressing, however, that their data also cannot identify whether workers have a formalized work contract. It is possible that the registration of a worker who was already employed, albeit informally, may not be counted as a hire. This may lead to an underestimation of the impacts of labor inspections on total employment.
The government's concern with reducing the fiscal deficit at this time is pointed to as a strong motivation to pursue an increase in the efficacy of labor inspection. One of the items commonly inspected is the deposits in the FGTS (the job severance fund). Cardoso and Lage (2007), for example, encourage the reader to make this association.
The author also suggests that a collective mechanism can be more effective than a bonus mechanism.
The author also points to the relevance of an integrated performance of the inspection with other public agents, such as SEBRAE, state secretaries, and the public state ministry. This appears to bring synergies by virtue of the fact that such agents have similar objectives with respect to combating informality, understood in a broader sense.
Cardoso and Lage (2007) report that labor inspections started to focus on large firms, attributing this behavior to the incentives given to inspectors, who would rather visit larger firms.
It is often argued that labor fines may have their efficacy diminished in Brazil given the limited reach of Labor Courts in the country. See Magalhães (2010).
SFIT data show that the improvement in the performance of labor inspections is not limited to the regularization of workers. For example, the percentage of irregularities solved out of total irregularities found during inspections rose from 71.1% in 1996–2000 to 84.5% in 2001–2006.
These data are adequate for our analysis as they include establishments/plants of all sizes as long as they are formally registered. This contrasts with some firm-level data for selected developing countries, which only capture firms employing more than a threshold number of workers.
In the text, we use interchangeably the terms plant and establishment, since they both refer to the unit of observation available in the data.
This information is available at http://www.ipeadata.gov.br, while population and homicides information are available at http://www.datasus.gov.br.
The back-of-the-envelope calculations multiply the standard deviation of the inspection intensity explanatory variable (0.16) by the coefficients of interest in panel B of Table 8.
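As a worked instance of this calculation, using the accession-rate effect quoted in the text:

$$ 0.16 \times \hat{\beta}_{A} \approx 0.79 \;\Rightarrow\; \hat{\beta}_{A} \approx 0.79 / 0.16 \approx 4.9. $$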
There is a total of 27 states in Brazil, and we exploit data for 6 years, for a total of 162 state trends.
Almeida M. Além da Informalidade: Entendendo Como os Fiscais e Agentes de Desenvolvimento Promovem a Formalização e o Crescimento de Pequenas e Médias Empresas. IPEA, Texto para Discussão n. 1.353; 2008.
Almeida R, Carneiro P. Enforcement of labor regulation and firm size. J Comp Econ. 2009;37:28–46.
Almeida R, Carneiro P. Enforcement of labor regulation and informality. Am Econ J Appl Econ. 2012;4(3)
Bartelsman E, Haltiwanger J, Scarpetta S. Measuring and analyzing cross-country differences in firm dynamics. In: Dunne, Jensen, Roberts, editors. Producer dynamics: new evidence from micro data. NBER/University of Chicago Press; 2009.
Bertola G, Rogerson R. Institutions and labor reallocation. European Econ Review. 1997;41(6):1147–71.
Blanchard O, Portugal P. What hides behind an unemployment rate: comparing Portuguese and U.S. labor markets. Am Econ Review. 2001;91(1):187–207.
Boeri T, Helppie B, Macis M. Labor regulations in developing countries: a review of the evidence and directions for future research. World Bank, Social Protection Discussion Paper 0833; 2008.
Brochu P, Green D. The impact of minimum wages on labor market transitions. Economic J. 2013;123:1203–35.
Cardoso A, Lage T. As Normas e os Fatos. Rio de Janeiro: Editora FGV; 2007.
Catão L, Pagés C, Rosales. Financial dependence, formal credit and informal jobs: new evidence from Brazilian household data. RES Working Papers 4642, Inter-American Development Bank, Research Department; 2009.
Davis S, Haltiwanger J, Schuh S. Job creation and destruction. MIT Press; 1996.
Davis S, Haltiwanger J. Gross job flows. In: Handbook of labor economics, vol. 3. North-Holland; 1999.
Fajnzylber P, Maloney WF, Montes-Rojas GV. Does formality improve micro-firm performance? Evidence from the Brazilian SIMPLES program. J Dev Eco. 2011;94(2):262–76.
Gruber J. The incidence of payroll taxation: evidence from Chile. J Labor Econ. 1997;15(3):S72–S100.
Haltiwanger J, Scarpetta S, Schweiger H. Cross-country differences in job reallocation: the role of industry, firm size and regulations. European Bank for Reconstruction and Development, Working Paper 116; 2010.
Hyatt H, Spletzer J. The recent decline in employment dynamics. IZA J Labor Econ. 2013;2:5.
Kugler A, Kugler M. Effects of payroll taxes on employment and wages: evidence from the Colombian social security reform. Center for Research on Economic Development and Policy Reform, Working Paper No. 134; 2002.
Kugler A, Pica G. Effects of employment protection on job and worker flows: evidence from the 1990 Italian reform. Labour Econ. 2008;15(1):78–95.
Magalhães, H. La inspección (o no) del trabajo y las sanciones (o no) por incumplimiento de la legislación laboral en Brasil, mimeo 2010.
Micco A, Pagés C. Employment protection and gross job flows: a difference-in-difference approach. Inter-American Development Bank, Research Department; 2004.
Miguel A. A Inspeção do Trabalho no Governo FHC: Análise sobre a Política de Fiscalização do Trabalho. Dissertação de mestrado, Programa de Pós-Graduação em Ciências Sociais. São Carlos: Universidade Federal de São Carlos; 2004.
Montenegro C, Pagés C. Who benefits from labor market regulation? Chile 1960–1998. NBER Working Paper No. 0850; 2004.
Pires R. Promoting sustainable compliance: styles of labour inspection and compliance outcomes in Brazil. Int Labour Review. 2008;147:199–249.
Pires, R. Beyond the fear of discretion: flexibility, performance, and accountability in the management of regulatory bureaucracies. Reg Govern. 2011;5:43–69.
Shimer, R. Reassessing the Ins and Outs of Unemployment. Review of Economic Dynamics, 2012;15(2):127–148.
We gratefully acknowledge suggestions made by seminar participants at IPEA (2012), the 2011 meeting of the Brazilian Economic Association, and the conference Reforming Minimum Wage and Labor Regulation Policy in Developing and Transition Economies held at Beijing Normal University (2014). We are very grateful to the Ministério do Trabalho e Emprego for sharing the data on labor inspections.
Ana Abras thanks the Fundação de Amparo à Pesquisa de São Paulo (FAPESP) for the Post-Doctoral Fellowship under which part of the work was undertaken.
Universidade Federal do ABC, Rua Arcturus 3, São Bernardo do Campo, São Paulo, 09606-070, Brazil
Ana Abras
World Bank, 1818 H Street, NW, Washington, D.C., 20433, USA
Rita K. Almeida
University College London, Gower Street, London, WC1E 6BT, UK
Pedro Carneiro
Instituto de Pesquisa Econômica Aplicada, Av. Pres. Antônio Carlos, 51, Centro, Rio de Janeiro, RJ, 20020-010, Brazil
Carlos Henrique L. Corseuil
Correspondence to Ana Abras.
Non-compatibility between RAIS and SFIT in the case of new cities
After the 1988 Constitution, there was a spurt in the creation of new cities in Brazil. This phenomenon was concentrated in the early 1990s, but one can still observe new cities entering the sample from 1996 onwards. This raises the issue that SFIT and RAIS do not incorporate new city codes into the data set at the same time: while SFIT includes the new city in the year it is created, RAIS only registers it in the following year. The mismatch creates a few problematic cases when merging the two data sets. In order to keep the information from the year when a new city is created, we proceed as follows (a minimal code sketch is given after the list):
1. Identify city codes appearing in SFIT but not in RAIS in each year.
2. Check whether a code appearing only in SFIT in a given year can be found in RAIS in the following year.
3. If a code satisfies conditions 1 and 2 above, identify the group of firms with the new city code.
4. Within the group from step 3, identify the sub-group of firms appearing in RAIS in the previous year.
5. Impute the new city codes to the information from the labor inspection database.
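The following sketch (in Python with pandas; column names are hypothetical, and it reflects one reading of the procedure above) illustrates the matching for a single year t:

```python
import pandas as pd

def impute_new_city_codes(sfit_t: pd.DataFrame,
                          rais_t: pd.DataFrame,
                          rais_t1: pd.DataFrame) -> pd.DataFrame:
    """Return year-t RAIS with new-city codes imputed (steps 1-5 above).

    sfit_t:  SFIT records for year t, with a 'city' column.
    rais_t:  RAIS records for year t, with 'firm' and 'city' columns.
    rais_t1: RAIS records for year t+1, same columns.
    """
    # Steps 1-2: codes in SFIT but not in RAIS in year t that appear
    # in RAIS in the following year.
    new_codes = (set(sfit_t["city"]) - set(rais_t["city"])) & set(rais_t1["city"])
    # Step 3: firms carrying one of the new city codes in year t+1.
    firms_new = rais_t1[rais_t1["city"].isin(new_codes)]
    # Step 4: the sub-group of those firms already present in RAIS in year t.
    firms_known = firms_new[firms_new["firm"].isin(rais_t["firm"])]
    # Step 5: impute the new city code to these firms' year-t records so
    # that they can be merged with the year-t inspection (SFIT) data.
    mapping = firms_known.drop_duplicates("firm").set_index("firm")["city"]
    out = rais_t.copy()
    out["city"] = out["firm"].map(mapping).fillna(out["city"])
    return out
```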
Abras, A., Almeida, R., Carneiro, P. et al. Enforcement of labor regulations and job flows: evidence from Brazilian cities. IZA J Develop Migration 8, 24 (2018) doi:10.1186/s40176-018-0129-3
Formal employment growth
Job flows
Enforcement of labor market regulations
Panel data | CommonCrawl |
Schedule for: 18w5085 - Hydraulic Fracturing: Modeling, Simulation, and Experiment
Arriving in Banff, Alberta on Sunday, June 3 and departing Friday June 8, 2018
16:00 - 17:30 Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre)
17:30 - 19:30 Dinner ↓
A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
(Vistas Dining Room)
20:00 - 22:00 Informal gathering (Corbett Hall Lounge (CH 2110))
07:00 - 08:45 Breakfast ↓
Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building.
08:45 - 09:00 Introduction and Welcome by BIRS Staff ↓
A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions.
(TCPL 201)
09:00 - 09:30 Sau-Wai Wong: HYDRAULIC FRACTURE MODELING AND DESIGN - A PERSPECTIVE ON HOW THINGS HAVE CHANGED FROM CONVENTIONAL TO UNCONVENTIONAL RESERVOIRS ↓
In hydraulic fracture stimulation of conventional reservoirs (e.g. tight gas and deep water unconsolidated sands), the use of sophisticated design models is almost indispensable. These hydraulic fractures are typically single fracture treatments and executed from near vertical wellbores. It is well understood that the post-fracture productivity is directly linked to achieving an optimum hydraulic fracture conductivity, which is governed largely by propped fracture length and width. From an engineering and operation execution point of view, the goal is to pump into the fracture the desired (large) volume of proppant without encountering premature 'screen-out'. Therefore, the prediction of fracture geometry and the design of pad volume become critical for propped fracture design. Models are calibrated on-site with dedicated mini-frac tests prior to main propped fracture treatments. Two important calibration parameters are fluid efficiency (leak-off behavior) and minimum in-situ stress (stress profile/contrast). The injection pressure during fracturing, which is readily available, is a valuable source of information and is often analyzed and compared with model predictions for fracture diagnostics. In practice, a wide range of models have been employed successfully. However, such considerations do not appear to be important for unconventional resources, where multiple fractures are pumped from a long horizontal well. In fact, multi-fracced horizontal well technology has advanced through field trials and experimentation without much help from modelling or understanding of multiple-fracture mechanics. Perhaps one reason is the lower risk of screening out. This could be due to 1) the extremely low permeability of unconventional shales, which renders the use of high proppant concentrations unnecessary, and 2) the treatment of multiple fractures in one stage of pumping, which allows one or two of the fractures to screen out without causing an unacceptable rise in pumping pressure. In fact, with the pumping of tens and even a hundred fractures in one horizontal well, the 'system' appears to tolerate some 'non-performing' fractures without impairing the ultimate production. Conventional wisdom has it that fracture length should be maximized, but in the development of onshore unconventional resources, the horizontal wells are spaced ever more closely to each other, and consequently, the fracture length may not need to be long in order to access the reserves. Operators have successfully fractured and produced from unconventional reservoirs without the use of advanced modelling technology. This raises the questions: what areas of research and model design parameters should we focus on? Can we avoid the 'details' while dealing with the 'big picture', such as fracture spacing, horizontal well length/direction, the well's landing depth, and their impact on cost and production? Are research and model development sufficiently guided and tested by field data/observations?
09:30 - 10:00 Alexei Savitski: Outstanding Challenges in Modeling Hydraulic Fracturing in Unconventionals: What We Do Not Know and What We Cannot Do. ↓
Significant progress in developing ultra-tight unconventional resources has been achieved with horizontal drilling and massive hydraulic fracturing. These technologies enable economic production from sub-microDarcy rocks; however, they introduce significant uncertainty. The wells are drilled from pads and in the subsurface are spaced at about 300-400 m (1000-1300 ft). The wells are then stimulated in stages with a variable number of sleeves or perforating clusters, which brings uncertainty about the injection rates into each hydraulic fracture. Poor geological characterization of potential fracture barriers and of natural fractures results in significant uncertainties about the created geometries of hydraulic fractures and the distribution of proppant. The in-situ fracture conductivity is also poorly understood. This incomplete list of sources of uncertainty in hydraulic fracturing stimulation of unconventional wells explains a dilemma faced by the operators: to make development decisions based on physics-based numerical modeling or based on field experience and statistical analysis. The latter becomes a viable alternative in view of the enormous number of producing wells and the increasing amount of completion and production data. This presentation reviews outstanding challenges in modeling hydraulic fracturing that need to be addressed to make physics-based numerical modeling relevant to the development of unconventional fields. These challenges can be divided into problems that are understood but difficult to solve or implement (e.g., modeling of the wellbore hydraulics for multi-cluster treatments or numerical efficiency of solving integrated multi-well problems) and those that are not yet understood (proppant transport in realistic non-planar rough fractures in the presence of geological heterogeneities). Addressing these challenges will require advances in numerical modeling, field data acquisition, and laboratory experimentation.
10:00 - 10:30 Coffee Break (TCPL Foyer)
10:30 - 11:00 Olga Kresse: SLB Hydraulic Fracture Simulators: modeling challenges ↓
The main purpose of this presentation is to give a short review of the existing hydraulic fracture models in Schlumberger and discuss their specific features, advantages, disadvantages, and modeling challenges. Models such as Planar3D and UFM will be presented, and the main modeling challenges will be discussed: - Height growth modeling in heterogeneous layered media. - Interaction with pre-existing natural fractures (crossing criterion). What we have modeled so far is for weak interfaces. Some aspects need to be better understood: mineralized fracs (it is not completely certain how they affect the crossing behavior); how smaller fracs or defects affect fracture bifurcation when the fracture propagates along the dominant weak plane; and leakoff into the interfaces (NFs). - Interaction between closely spaced hydraulic fractures/branches (stress shadow effect) and stability issues: numerical or real? How to deal with numerical instabilities? - 3D effects and CPU efficiency.
11:00 - 11:30 Robert Viesca: Fluid-induced faulting ↓
Subsurface fluid injection is often followed by observations of an enlarging cloud of microseismicity. The cloud's diffusive growth is thought to be a direct response to the diffusion of elevated pore fluid pressure reaching pre-stressed faults, triggering small instabilities; the observed high rates of this growth are interpreted to reflect a relatively high permeability of a fractured subsurface [e.g., Shapiro, GJI 1997]. We investigate an alternative mechanism for growing a microseismic cloud: the elastic transfer of stress due to slow, aseismic slip on a subset of the pre-existing faults in this damaged subsurface. We show that the growth of the slipping region of the fault may be self-similar in a diffusive manner. While this slip is driven by fluid injection, we show that, for critically stressed faults, the apparent diffusion of this slow slip may quickly exceed the poroelastically driven diffusion of the elevated pore fluid pressure. We also examine recent field injection experiments providing time series, measured at the borehole, of both fluid pressure as well as the relative displacement of a fault cross-cutting the borehole [Guglielmi et al., 2015]. We couple a hydrogeologic model for fluid flow from the borehole with a model for an expanding shear rupture of the fault. We find that such a model reproduces the observed time history, with a Bayesian inversion providing uncertainties of the model parameters for host rock stiffness and frictional strength, fault zone storage and permeability, as well as the pre-injection stress state. Remarkably, we also find that the inferred rupture front outpaces the region of significant pore pressure increase.
11:30 - 13:00 Lunch ↓
Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building.
13:00 - 14:00 Guided Tour of The Banff Centre ↓
Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus.
(Corbett Hall Lounge (CH 2110))
14:00 - 14:30 Christine Detournay: Investigation of Kerogen's Effects on Hydraulic Fracturing using XSite ↓
Strain-softening/hardening tensile laws, derived from laboratory tests on nano-cantilever beams of kerogen rich shales (KRS), were implemented as user-defined spring models in XSITE, a DEM-based lattice code developed by Itasca. The macroscopic tensile strength and toughness properties of the simulated KRS materials were obtained by performing direct, self-similar notched tension tests on microscopic samples with low, medium, and high kerogen contents in the lattice code. The notched tests were characterized by two dimensionless numbers, i.e., the ratio of initial crack size over sample width and the crack resolution in the lattice (the ratio of initial crack size over the lattice resolution). With the first ratio and lattice resolution fixed, parametric studies were performed to generate log-log plots of the critical tensile stress versus the crack resolution in the lattice. The macroscopic tensile strength of the simulated KRS material was estimated from the horizontal plateau obtained at low resolution values in the plots. The toughness corresponding to the condition of LEFM (linear elastic fracture mechanics) was measured from the segment observed at high values of resolution in the log-log plots, where the slope was −1/2. The toughness values derived from the log-log plots were larger for higher kerogen content, but overall were quite small compared to typical values for shale. Some interesting phenomena were observed in the simulations, e.g., the extent of the process zone near crack tips appeared to remain constant in the tests at nominally low kerogen content; the value of crack resolution beyond which LEFM applies increases as the kerogen content increases (which is consistent with the lower brittleness, or higher plasticity level, observed experimentally at high kerogen content). Using the calibrated macroscopic tensile strength and toughness, fluid injection tests were simulated in meter-scale numerical models at uniform nominal kerogen content. The numerical injection results indicated that, as kerogen content increases, the cluster injection pressure and hydraulic fracture radius increase but the maximum aperture decreases. The behavior is consistent with that expected from the evolution of a penny-shaped crack in the viscosity-dominated regime.
14:30 - 15:00 Wei Fu: On the Hydraulic Fracture Propagation Influenced by Spatially-Varied Natural Fracture Properties ↓
Natural fractures are widely observed in unconventional oil and gas reservoirs through core samples, image logs, mineback experiments, and outcrop studies. The existence of natural fractures can strongly impact the hydraulic fracture propagation, potentially influencing the effectiveness of reservoir stimulation and hydrocarbon production. Past studies on hydraulic fracture-natural fracture interaction typically assume uniform properties on natural fractures that persist through the entire height of the reservoir/hydraulic fracture, which allow simplification of problems within a two-dimensional (2D) framework. Recent field observations, however, demonstrate that natural fractures can be partially cemented and/or with a height that is less than the reservoir/hydraulic fracture height. These spatially-varied natural fracture properties can influence the morphology of propagation patterns and fracture network differently and require a three-dimensional (3D) consideration. In this study, we present a series of analogue laboratory experiments for hydraulic fractures crossing partially-cemented and/or non-persistent natural fractures. It is observed that a strong enough region on an otherwise weak natural fracture is sufficient to promote crossing. Also, a hydraulic fracture is able to engulf limited-height weak natural fractures and continue propagation after crossing. Guided by experimental observations, an analytical criterion based on linear elastic fracture mechanics is derived to capture the dependence of crossing/no crossing behaviors on spatially-varied natural fracture properties, including the proportion of the cemented region, natural fracture height, and cementation strength. The criterion is further compared with fully-coupled 3D lattice simulations and good agreements are achieved.
15:30 - 16:00 Andrew Bunger: Swarm Theory Framework for Evaluating Suitability of Models for Predicting Simultaneous Growth of Multiple Hydraulic Fractures ↓
Swarming morphologies, that is, those involving multiple aligned members separated by a finite spacing, emerge from systems involving the interplay of three fundamental drivers: 1) Alignment: move in the same direction as neighboring members; 2) Avoidance: do not run into other members; and 3) Attraction: do not move too far away from other members. As with other systems resulting in swarm-like morphologies, simulation of multiple hydraulic fractures requires a model accounting for the interplay of these fundamental drivers. Specifically, alignment corresponds to the control of the ambient stress field on hydraulic fracture orientation that leads to the predominance of certain strike directions. Avoidance drives hydraulic fractures to separate from one another and/or suppress one another's growth due to the energetic consequences of propagation within the region of elevated compressive stresses surrounding each fracture. Finally, attraction arises from the reduction of viscous energy dissipation associated with splitting the injected fluid among many growing hydraulic fractures rather than just one dominant fracture. When combined, the theoretically predicted alignment and emergent spacing in hydraulic fracture swarms match field observations for both hydraulic fractures and naturally occurring dyke swarm analogues. Unfortunately, however, some of the most tempting simplifications, such as neglecting fluid flow or using a two-dimensional modeling domain, result in omitting or fundamentally altering the energetics associated with one or more of the three drivers of hydraulic fracture swarms. As a result, certain simplifications result in a complete loss of model fidelity. On the other hand, reasonably accurate simulations can be obtained from heavily simplified models as long as they preserve the three basic drivers and first principles such as volume balance.
16:00 - 16:30 Delal Gunaydin: Laboratory Experimentation on Simultaneous Propagation of Multiple Hydraulic Fractures ↓
Stress shadowing, a well-known effect that occurs in multi-stage hydraulic fracture operations when the hydraulic fractures are placed close to each other, is an important challenge to obtaining the highest estimated ultimate recovery (EUR) from a horizontal well. In industry, the most common practices of wellbore completion include three to five perforation clusters (i.e., entry points from the cased wellbore to the formation) per stage. Ideally, each cluster takes the same fluid volume during the hydraulic fracturing operations, leading to uniform stimulation of the reservoir. However, because of stress shadowing, some of the clusters tend to dominate others, resulting in unequal growth of the hydraulic fractures. Motivated by a need to validate and benchmark models used to select perforation spacing, fluid viscosity, injection rate, and so forth that will minimize the negative impacts of stress shadowing, our research focuses on laboratory experiments on the behavior of multiple, simultaneously growing hydraulic fractures. The experimental results show the impact of fracture spacing, fracture height, and number of fractures on multiple fracture growth. We demonstrate qualitative similarity in many respects to existing numerical simulations. However, we also find that certain predicted geometries are apparently less stable than others when subjected to the natural perturbations associated with laboratory experiments.
16:30 - 17:00 Innokentiy Protasov: Modeling simultaneous growth of multiple pseudo-3D hydraulic fractures with a fixed mesh algorithm ↓
Numerical modeling is one of the tools that can be used for designing an optimal hydraulic fracturing treatment. One approach is to solve a fully 3D problem of fracture propagation numerically. However, the numerical solution of the latter problem is computationally expensive, which may preclude its use for problems involving optimization or sensitivity analysis. At the same time, there are more specialized models that typically rely on a series of assumptions but are substantially faster to run. For instance, such models include plane strain, radial, Perkins-Kern-Nordgren (PKN), and pseudo-3D (P3D) hydraulic fractures. The primary aim of this talk is to present a numerical model that extends the aforementioned specialized models for a single fracture into a hydraulic fracturing simulator for multiple cracks. This is done by developing an algorithm for a single fracture, which is then extended to multiple cracks. The numerical algorithm utilizes a fixed mesh approach, in which the fracture grows by extension of the tip elements that are eventually split into two parts. The tip element extension utilizes the universal asymptotic solution that originates from the problem of a semi-infinite crack, which includes the effects of toughness, fluid viscosity, and leak-off. The algorithm has been tested against the solution for a plane strain hydraulic fracture in different regimes. In addition, the approach has been extended and tested against the enhanced PKN and enhanced P3D models. One of the advantages of the developed model is the fixed mesh methodology, which enabled us to extend the model to multiple fractures that can change their direction of propagation with time. The extension to multiple fractures poses the additional challenging problem of solving for the elastic interaction between the cracks. To address this problem, we use the Displacement Discontinuity Method, modified by using elliptical fracture elements. To check the accuracy of the developed simulator, its predictions are compared to a reference solution computed using the Implicit Level Set Algorithm.
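The fixed-mesh tip logic described above hinges on inverting a tip asymptote: given the opening computed in the tip element, infer the distance to the fracture front. A minimal sketch for the toughness-dominated limit only, where the LEFM asymptote $w = (K'/E')\sqrt{s}$ inverts in closed form (the universal asymptote with viscosity and leak-off requires a numerical inversion; all numbers below are hypothetical):

```python
import numpy as np

def front_distance_lefm(w_tip, Kp, Ep):
    """Distance s from the tip-element location to the front, obtained by
    inverting the LEFM tip asymptote w = (Kp / Ep) * sqrt(s)."""
    return (w_tip * Ep / Kp) ** 2

# Standard scaled parameters: K' = 4*sqrt(2/pi)*K_Ic, E' = E/(1 - nu^2).
K_Ic, E, nu = 1.0e6, 20e9, 0.25            # Pa*sqrt(m), Pa, -
Kp = 4.0 * np.sqrt(2.0 / np.pi) * K_Ic
Ep = E / (1.0 - nu**2)
print(front_distance_lefm(1e-4, Kp, Ep))   # front offset in metres (~0.45)
```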
17:00 - 17:30 Guanyi Lu: Time-dependent hydraulic fracture initiation and propagation ↓
In engineering design for multi-stage HF treatments of horizontal well stimulation, it is ideal to promote simultaneous growth of all fractures in each stage in order to reduce the number of non-producing perforation clusters. While increased attention has been given to studies of multiple HF growth, time dependence is not typically considered as a factor affecting HF initiation and subsequent growth. A combined experimental and modeling study is carried out to explore the occurrence of time-dependent initiation of single/multiple hydraulic fracture(s) and their subsequent propagation. By showing the existence of HF initiation at wellbore pressures that are insufficient to induce instantaneous initiation, and by explaining that the underlying mechanism is stable growth of the hydraulic fracture under subcritical conditions, this research leads to new insights for promoting more even growth of multiple hydraulic fractures in multi-stage HF treatments. Furthermore, our experimental results indicate that the time delay associated with hydraulic fracture initiation can be affected by various factors, such as the fluid viscosity and acidity and the confining stresses, thereby leading to the practically-relevant outcome that fluid(s) can be chosen to promote initiation and growth of multiple hydraulic fractures and/or single hydraulic fractures under conditions where the required wellbore pressure for instantaneous initiation cannot be reached.
07:00 - 09:00 Breakfast (Vistas Dining Room)
09:00 - 09:30 John Napier: Simulation of hydraulic fracture propagation using unstructured triangular mesh elements ↓
A computational method is presented for the simulation of fluid-driven fracture propagation in a planar crack using unstructured triangular mesh elements. The crack-opening elastic interactions between elements are determined using the displacement discontinuity boundary element method. A moving mesh, incorporating appropriate fracture tip asymptotic representations that depend on the fluid viscosity and the fracture toughness, is advanced and periodically regenerated at the crack front. The method includes special logic to represent intermediate viscosity-toughness crack tip asymptotic behaviour. The approach is validated using the simple geometry of a penny-shaped hydraulic fracture and is applied as well to the case of fracture propagation in a discontinuous stress field.
09:30 - 10:00 Egor Dontsov: Hydraulic fracture regimes and their applications ↓
Hydraulic fracturing is a technique for stimulating oil and gas wells, in which a viscous fluid is injected deep into a rock formation to produce high-conductivity channels that facilitate the flow of hydrocarbons back to the surface. Even for simple fracture geometries, such as plane strain or axisymmetric fractures, the solution features interesting behavior due to the interplay of physical mechanisms associated with the fluid viscosity, fracture toughness, and fluid leak-off into the formation. In particular, it is known that there are four types of self-similar solutions that correspond to the so-called regimes of propagation. The latter solutions occur for limiting parameters that correspond to the domination of one physical process, such as viscosity or toughness. The global solution, on the other hand, gradually transitions from one regime (or self-similar solution) to another in time. The "structure" of the global solution in the parametric space is investigated for plane strain and radially symmetric fractures; that is, the location of the solution relative to the limiting cases is obtained for any problem parameters. The developments are extended to the case of a planar fracture driven by a power-law fluid in an anisotropic (but homogeneous) rock formation. Propagation of multiple closely spaced hydraulic fractures with limited entry design is studied with respect to the regime of propagation. It is found that the fracture shapes evolve from "pancakes" to a "flower" during the transition from the viscosity- to the toughness-dominated regimes. In the former case, all fractures are mostly radially symmetric and have approximately the same size. In the latter, the fracture "flower" is formed, in which each fracture has the shape of a petal; there is almost no overlap between the fractures if one observes them from the side. This enables one to influence the geometry of multiple fractures in field applications by controlling the regime of propagation.
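For the radial geometry, the position of the solution between the viscosity- and toughness-dominated end members is commonly tracked with a single dimensionless toughness that grows slowly in time; the scaling below follows Savitski and Detournay (2002). A minimal sketch with hypothetical input values:

```python
import numpy as np

def dimensionless_toughness(t, K_Ic, mu, Q, E, nu):
    """K = K' * (t^2 / (mu'^5 * Q^3 * E'^13))^(1/18) for a radial fracture,
    with K' = 4*sqrt(2/pi)*K_Ic, mu' = 12*mu, E' = E/(1 - nu^2)."""
    Kp, mup, Ep = 4*np.sqrt(2/np.pi)*K_Ic, 12*mu, E/(1 - nu**2)
    return Kp * (t**2 / (mup**5 * Q**3 * Ep**13)) ** (1/18)

# Roughly viscosity-dominated if K << 1, toughness-dominated if K >> 1.
print(dimensionless_toughness(t=600.0, K_Ic=1e6, mu=1e-3, Q=0.05, E=20e9, nu=0.25))
```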
10:30 - 11:00 Thomasina Ball: Static and dynamic fluid-driven fracturing of adhered elastica ↓
The geometry and propagation of fluid-driven fractures is determined by a competition between the flow of viscous fluid, the elastic deformation of the solid, and the energy required to create new surfaces through fracturing. To date, much research has focused on the formation of idealised penny-shaped cracks in elastic media [1]. However, the dynamics of fluid-driven fracturing of thin adhered elastica remain unexplored and unobserved, and provide an experimentally accessible and theoretically simpler setting in which to assess the underlying physical processes. We present a theoretical and experimental approach to model a 'fracture' produced when fluid is injected from a point source between a solid horizontal plane and an elastic sheet, which is adhered to the plane. Divergence of viscous stresses necessitates the formation of a vapour tip between the fluid front and fracture front. This results in two dynamical regimes of spreading: viscosity dominant spreading controlled by the flow of viscous fluid into the vapour tip, and adhesion dominant spreading controlled by the energy required to fracture the two layers. Constant flux experiments using clear elastic sheets (PDMS) enable new, direct measurements of the vapour tip and confirm the existence of spreading regimes controlled by viscosity and adhesion. We extend this work to consider the possibility of turbulent flow within the body of the fracture and assess the scale of the laminar tip at the fracture front. This analysis identifies the transition from turbulent to laminar control of the spreading, or equivalently the transition from bulk to tip control. These processes primarily feature industrially in the hydraulic fracturing of shale [2], but are also commonplace in nature, from magmatic intrusions in the Earth's crust [3, 4], to the propagation of cracks at the base of glaciers [5]. [1] D. I. Garagash and E. Detournay, "The Tip region of a Fluid-Driven fracture in an Elastic Medium," J. Appl. Mech. 67, 183-192 (1999) [2] E. Detournay, "Mechanics of Hydraulic Fractures," Annu. Rev. Fluid Mech. 48, 311-339 (2016) [3] C. Michaut, "Dynamics of Magmatic Intrusions in the Upper Crust: Theory and Applications to Laccoliths on Earth and the Moon," J. Geophys. Res. 116, 1-19 (2011) [4] A. M. Rubin, "Propagation of Magma Filled Cracks," Annu. Rev. Earth Planet. Sci. 23, 287-336 (1995) [5] V. C. Tsai and J. R. Rice, "A Model for Turbulent Hydraulic Fracture and Application to Crack Propagation at Glacier Beds," J. Geophys. Res. Earth Surf. 115, 1-18 (2010)
11:00 - 11:30 Brice Lecampion: Slickwater is not water ↓
In this talk, we will review the implications of the use of high-rate slickwater hydraulic fracture treatments as often performed in unconventional gas reservoirs. In particular, due to the very large injection rates used (up to 25 barrels per minute in a multistage context when not all the fractures within a stage propagate), the assumption of laminar flow in the fracture may be challenged - at least in the near-wellbore region. This is particularly striking if one takes the properties of water to compute a fracture inlet Reynolds number: e.g., one obtains inlet Reynolds numbers up to 4,000 for a PKN fracture geometry [6]. However, in practice, so-called "friction reducers" are always added in small quantities to the injected water in order to reduce the pressure drop in the wellbore (where the flow is turbulent) and thus minimize the pumping energy required on site (i.e., the number of pumping trucks). These friction reducers are high-molecular-weight polymers (typically polyacrylamide-based), whose micellar structures completely change the transition from laminar to turbulent flow [5] - making the water "slick". The effect of the addition of these polymers does saturate at a finite concentration, where the so-called maximum drag reduction asymptote is reached. Such a "saturating" concentration is actually quite low, such that it is always targeted in engineering practice. We will present limiting solutions for hydraulic fracture growth in the case of turbulent maximum-drag-reduction flow for both the height-contained (PKN) and radial fracture geometries in the zero-toughness limit. In particular, we will show that most turbulent flow regimes can be recast as modified power-law fluids, as first discussed in [4] for the turbulent rough Gauckler-Manning-Strickler regime. This allows one to partly adapt a number of existing solutions for hydraulic fracture growth [3, 1, 2]. We will discuss our results in light of typical operational parameters, highlighting the importance of the drag reduction of slickwater at large Reynolds number. References [1] Adachi, J. I. and Detournay, E. [2002], 'Self-similar solution of a plane-strain fracture driven by a power-law fluid', International Journal for Numerical and Analytical Methods in Geomechanics 26(6), 579–604. [2] Madyarova, M. and Detournay, E. [2004], Radial Fracture driven by a Power-law Fluid in a Permeable Elastic Rock, Technical report, Schlumberger. Report of UMN to Modeling & Mechanics Group, EAD, Schlumberger. [3] Savitski, A. and Detournay, E. [2002], 'Propagation of a penny-shaped fluid-driven fracture in an impermeable rock: asymptotic solutions', International Journal of Solids and Structures 39(26), 6311–6337. [4] Tsai, V. and Rice, J. R. [2010], 'A model for turbulent hydraulic fracture and application to crack propagation at glacier beds', J. Geoph. Res. - Earth Surface. [5] Virk, P. S. [1975], 'Drag reduction fundamentals', AIChE Journal 21(4), 625–656. [6] Zia, H. and Lecampion, B. [2017], 'Propagation of a height contained hydraulic fracture in turbulent flow regimes', International Journal of Solids and Structures 110-111, 265–278.
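The maximum drag reduction limit mentioned above is usually summarized by Virk's asymptote for the Fanning friction factor, $1/\sqrt{f} = 19.0 \log_{10}(Re\sqrt{f}) - 32.4$ (Virk, 1975), replacing the Prandtl-Karman law of ordinary turbulent pipe flow. A minimal fixed-point solver for illustration:

```python
import numpy as np

def fanning_friction_virk(Re, tol=1e-12, max_iter=200):
    """Fanning friction factor on Virk's maximum-drag-reduction asymptote:
    1/sqrt(f) = 19.0*log10(Re*sqrt(f)) - 32.4, solved by fixed-point iteration."""
    f = 0.01                                   # initial guess
    for _ in range(max_iter):
        f_new = 1.0 / (19.0 * np.log10(Re * np.sqrt(f)) - 32.4) ** 2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

for Re in (4_000, 40_000, 400_000):
    print(Re, fanning_friction_virk(Re))
```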
11:30 - 13:30 Lunch (Vistas Dining Room)
13:30 - 13:50 Group Photo ↓
Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo!
14:00 - 14:30 Dmitry Garagash: What good is Linear Elastic Fracture Mechanics in Hydraulic Fracturing? ↓
Fluid-driven fracture presents an interesting case of crack elasticity and fracture propagation nonlinearly coupled to fluid flow. With the exceptions of a few numerical studies, previous hydraulic fracture modeling efforts have been based on the premise of Linear Elastic Fracture Mechanics (LEFM): specifically, that the damage (aka cohesive) zone associated with the rock breakage near the advancing fracture front is lumped into a singular point, under the tacit assumption that the extent of the cohesive zone is small compared to lengthscales of other physical processes relevant in the HF propagation. The latter include the dissipation in the viscous fluid flow in the fracture channel, of which the fluid lag - a region adjacent to the fracture tip filled with fracturing fluid volatiles and/or infiltrated formation pore fluid - is the extreme manifestation. In this work, we address the validity of the LEFM approach in hydraulic fracturing by considering the solution in the near-tip region of a cohesive fracture driven by a Newtonian fluid in an impermeable linear elastic rock. First, we show that the solution in general possesses an intricate structure supported by a number of nested lengthscales (a general sentiment for HF), on which different dissipation processes are realized (or are dominant). The latter processes can be cataloged as (1) dissipation in the fracture cohesive zone, "c", parameterized by the fracture energy $G_c$ (cohesive energy release per unit fracture advance), the peak cohesive stress $\sigma_c$ destroyed by fracturing, and the corresponding fracture aperture scale $w_c = G_c/\sigma_c$; (2) the LEFM "reduction" of the cohesive zone process, "$k$", quantified by $G_c$, but with the cohesive zone replaced by a singularity ($\sigma_c \rightarrow \infty$ and $w_c \rightarrow 0$); (3) viscous fluid dissipation associated with the fluid lag region, "o", parameterized by an equivalent fracture energy $G_o = \sigma_o w_o$, where $\sigma_o$ is the in-situ confining stress (signifying the fracturing fluid pressure drop in the lag region from a value $\sim\sigma_o$ to near zero) and $w_o$ is the corresponding fracture aperture scale given previously by Garagash and Detournay (2000); and (4) the viscous dissipation along the rest of the fracture (away from the fluid lag), "m" (Desroches et al, 1994). Furthermore, each of the above limiting processes corresponds to a distinct solution asymptote. The HF tip solution structure is bookended by the "c" or "o" (solid or fluid process zones) asymptotes near the tip and by the "m" asymptote away from the tip, while the LEFM "k" asymptote may emerge within the transitional region, as an intermediate asymptote, depending on the values of the two governing parameters: the cohesive-to-lag fracture energy ratio $G_c/G_o$ and the cohesive-to-in-situ stress ratio $\sigma_c/\sigma_o$. For typical sets of parameters representative of both field hydraulic fractures and their lab siblings, $G_c/G_o$ is either $\sim 1$ (low-viscosity frac. fluid) or $\ll 1$ (high-viscosity frac. fluid). Under the above conditions, $\sigma_c/\sigma_o \gg 1$ is shown to be required for the appearance of the LEFM intermediate asymptote near the HF tip.
Since $\sigma_c \sim 1$ MPa for most rocks, it can be easily recognized that the latter condition for the relevance of the LEFM to hydraulic fracturing is mostly realized in laboratory experiments conducted under reduced levels of confining stress, and would almost never occur in the field (with the exception of very-near-surface fracturing and/or highly overpressured permeable formations).
14:30 - 15:00 Alena Bessmertnykh: Effects of Herschel-Bulkley fluid rheology and proppant on the near tip region of a hydraulic fracture ↓
Hydraulic fracturing is a process in which fractures are generated in the rock by injection of highly pressurized fluid. The hydraulic fracturing technique, together with horizontal drilling, has made it possible to effectively increase oil and gas recovery from low-permeability shale formations. The global behavior of a hydraulic fracture is strongly influenced by the processes occurring near the fracture tip, which are related to rock toughness, fluid viscosity, and leak-off. The near-tip region is modeled as a semi-infinite fracture. The governing equations include the elasticity equation, the lubrication equation, and a propagation criterion. Analytical solutions can be found only for the particular cases of the toughness-, viscosity-, and leak-off-dominated regimes of propagation. To find a general solution, we employ a non-singular formulation to solve the problem numerically. To study the effect of fluid yield stress, the problem of a semi-infinite fracture driven by a Herschel-Bulkley fluid is investigated. Numerical results demonstrate that the yield stress influences the fracture width solution at larger distances from the tip, while the solution follows the behavior of a power-law fluid ahead of this zone. An analytical solution for the yield-stress-dominated regime is obtained and the boundaries of its applicability are found. The near-tip behavior of a hydraulic fracture can also be strongly affected by proppant - the granular material mixed with the fracturing fluid to prevent the fracture from closing after the pressure is removed. Proppant can accumulate near the fracture tip due to settling, bridging, and/or dehydration of the slurry. To investigate the effect of proppant, the problem of a semi-infinite fracture with a localized proppant plug near the tip is analyzed for the case of a Newtonian fluid. Fluid filtration through the proppant plug is modeled according to Darcy's law. The boundaries of the proppant plug are determined by a particle-size-dependent bridging criterion and the total volume of particles. Proppant causes a noticeable pressure drop over the plug, which in turn leads to fracture widening behind the proppant. The effect of proppant can be equivalently represented by a stress-barrier solution without proppant. Expressions for the magnitude and location of the stress jump are explicitly calculated. Results indicate that such a representation leads to a solution that agrees reasonably well with the numerical solution with proppant.
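For reference, the Herschel-Bulkley law combines a yield stress with power-law behavior, $\tau = \tau_0 + K\dot\gamma^{\,n}$ for $\tau > \tau_0$, with no flow below the yield stress. A minimal sketch of the rheology and its apparent viscosity (parameter values hypothetical):

```python
import numpy as np

def herschel_bulkley_stress(gdot, tau0, K, n):
    """Shear stress tau = tau0 + K * gdot**n of a flowing Herschel-Bulkley fluid."""
    return tau0 + K * np.asarray(gdot, dtype=float) ** n

def apparent_viscosity(gdot, tau0, K, n):
    """Apparent viscosity tau/gdot; diverges as gdot -> 0 due to the yield stress."""
    gdot = np.asarray(gdot, dtype=float)
    return herschel_bulkley_stress(gdot, tau0, K, n) / gdot

# tau0 = 0 recovers a power-law fluid; tau0 = 0 and n = 1 recover a Newtonian fluid.
print(apparent_viscosity(np.logspace(-2, 3, 6), tau0=5.0, K=0.8, n=0.6))
```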
15:30 - 16:00 Fatima-Ezzahra Moukhtari: A semi-infinite hydraulic fracture driven by a shear thinning fluid ↓
Although a large number of fluids used in hydraulic fracturing practice exhibit a shear-thinning behaviour, little is known about the impact of such a complex fluid rheology on the propagation of a hydraulic fracture. We focus our investigation on the configuration of a semi-infinite hydraulic fracture propagating at a constant velocity in an impermeable, linearly elastic material. We allow for the occurrence of a fluid-free region of a-priori unknown length at the fracture tip. We use the Carreau rheological model in order to properly account for the shear thinning of the fracturing fluid between the low and large shear-rate Newtonian limits. We solve this problem numerically, combining a Gauss-Chebyshev method for the discretization of the elasticity equation and the quasi-static fracture propagation condition with a finite difference scheme for the width-averaged lubrication flow. This yields a system of non-linear equations for the fluid pressure in the filled region of the fracture and the extent of the fluid lag region near the fracture tip. We show that for a Carreau rheology, the solution depends on four dimensionless parameters: a dimensionless toughness (a function of the fracture velocity, confining stress, material and fluid parameters), a dimensionless transition shear stress (related to both fluid and material behaviour), the fluid shear-thinning index, and the amplitude of the shear-thinning behaviour of the fluid (captured by the ratio of the high and low shear-rate viscosities). The solution exhibits a complex structure with up to four distinct asymptotic regions as one moves away from the fracture tip: a region governed by the classical linear elastic fracture mechanics behaviour near the tip, a high-shear-rate viscosity asymptote and a power-law asymptotic region in the intermediate field, and a low-shear-rate viscosity asymptote far away from the fracture tip. The occurrence and order of magnitude of the extent of these different viscous asymptotic regions are obtained analytically. Our results also quantify how shear thinning drastically reduces the size of the fluid lag compared to a Newtonian fluid. We also investigate the response obtained with simpler rheological models (power-law, Ellis). In most cases, the power-law model does not accurately match the predictions obtained with a Carreau rheology. In the zero-lag limit, the Ellis model properly reproduces the results of a Carreau rheology, albeit only for a dimensionless transition shear stress below a critical value whose expression is given analytically as a function of the shear-thinning index and magnitude.
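The Carreau model referred to above interpolates between the two Newtonian plateaus, $\mu(\dot\gamma) = \mu_\infty + (\mu_0 - \mu_\infty)\left[1 + (\lambda\dot\gamma)^2\right]^{(n-1)/2}$. A minimal sketch (parameter values hypothetical):

```python
import numpy as np

def carreau_viscosity(gdot, mu0, mu_inf, lam, n):
    """Carreau viscosity: mu_inf + (mu0 - mu_inf)*(1 + (lam*gdot)^2)^((n-1)/2)."""
    gdot = np.asarray(gdot, dtype=float)
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gdot) ** 2) ** ((n - 1) / 2)

# Low shear rates -> mu0 plateau; high shear rates -> mu_inf plateau; in
# between, a power-law region of slope (n - 1) on a log-log plot.
mu = carreau_viscosity(np.logspace(-3, 6, 10), mu0=1.0, mu_inf=1e-3, lam=1.0, n=0.4)
print(mu[0], mu[-1])
```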
16:00 - 16:30 Zhiqiao Wang: The Tip Region of a Near-Surface Hydraulic Fracture ↓
This talk investigates the tip region of a hydraulic fracture propagating near a free surface via the related problem of the steady fluid-driven peeling of a thin elastic layer from a rigid substrate. The solution of this problem requires accounting for the existence of a fluid lag, as the pressure singularity that would otherwise exist at the crack tip is incompatible with the underlying linear beam theory governing the deflection of the thin layer. These considerations lead to the formulation of a nonlinear traveling-wave problem with a free boundary, which is solved numerically. The scaled solution depends on only one number K, which has the meaning of a dimensionless toughness. The asymptotic viscosity- and toughness-dominated regimes, corresponding respectively to small and large K, represent the end members of a family of solutions. It is shown that the far-field curvature can be interpreted as an apparent toughness, which is a universal function of K. In the viscosity regime, the apparent toughness does not depend on K, while in the toughness regime, it is equal to K. By noting that the apparent toughness represents an intermediate asymptote for the layer curvature under certain conditions, obtaining time-dependent solutions for propagating near-surface hydraulic fractures can be greatly simplified. Indeed, any such solution can be constructed by a matched asymptotics approach, with the outer solution corresponding to a uniformly pressurized fracture and the inner solution to the tip solution derived in this talk.
16:30 - 17:00 Gennady Mishuris: The role of fluid induced shear traction on the surface of a hydraulically driven crack. ↓
We discuss the Hydraulic Fracture (HF) model introduced in [1], accounting for the hydraulically induced shear stresses at the crack faces. The model utilizes a general form of the boundary integral operator alongside a revised fracture propagation condition based on the critical value of the energy release rate. The tip asymptotics of the revised model is always consistent with that of Linear Elastic Fracture Mechanics. We have found that the energy release rate criterion takes a more general form in this case and, in fact, plays the role of a natural regulariser in the numerical simulations. As a result, the hydraulically induced tangential tractions may play a significant role in the small-toughness and viscosity-dominated regimes of crack propagation, while for other regimes the reported results are close to those obtained with the classic model. We have also found that, in the case of small-toughness or viscosity-dominated regimes, the crack redirection angle may change rather significantly under mixed-mode loading [2]. Certain aspects of the recent discussion on the topic [3-5] will be presented and commented upon. The potential of the revised formulation in tackling some challenges of HF modelling will be demonstrated. References [1] Wrobel, M., Mishuris, G., & Piccolroaz, A. (2017). Energy release rate in hydraulic fracture: Can we neglect an impact of the hydraulically induced shear stress? International Journal of Engineering Science, 111, 28–51. [2] Perkowska, M., Piccolroaz, A., Wrobel, M., & Mishuris, G. (2017). Redirection of a crack driven by viscous fluid. International Journal of Engineering Science, 121, 182–193. [3] Linkov, A. M. (2017). On influence of shear traction on hydraulic fracture propagation. Material Physics and Mechanics, 32, 272–277. [4] Linkov, A. M. (2018). Response to the paper by M. Wrobel, G. Mishuris, A. Piccolroaz "Energy release rate in hydraulic fracture: Can we neglect an impact of the hydraulically induced shear stress?" International Journal of Engineering Science, 127, 217–219. [5] Wrobel, M., Mishuris, G., & Piccolroaz, A. On the impact of tangential traction on the crack surfaces induced by fluid in hydraulic fracture: Response to the letter of A.M. Linkov. Int. J. Eng. Sci. (2018) 127, 217–219
17:00 - 17:30 Will Steinhardt: Hydraulic Fracture as a Sensitive Material Probe ↓
Hydraulic fractures occur miles underground, below complex, layered, heterogeneous rocks, making direct measurements of their dynamics or structure extremely challenging. As such, these fractures are typically studied in the lab within blocks of classically brittle materials like glass, PMMA, or rocks that are hydraulically broken with air or fluid (Bunger (2008), Alpern (2012)). Developments in polymer science have shown that heavily cross-linked hydrogels behave nearly identically, both qualitatively and quantitatively, to these same brittle materials and thus are another good material in which to study hydraulic fractures (Livne et al (2004)). We have developed a system to study hydraulic fractures within these hydrogels, which have the benefits of highly tunable material properties, optical clarity, and fracture speeds and breakdown pressures 2-3 orders of magnitude lower than PMMA. Using a combination of fast-camera photography and laser sheet microscopy, we can study the three-dimensional morphology and dynamics of hydraulic fractures at extremely high spatiotemporal fidelity. The fractures in the gels show excellent agreement with the tip asymptotics outlined in Rice (1968) and Spence and Sharp (1985). However, we also observe instabilities in the propagating fracture front that generate small steps, which leave behind "step lines" that segment an otherwise smooth fracture surface. We show that the density of these lines is the result of increasing mechanical heterogeneity, which we can control in our system, and that at high density the lines interact, resulting in a very rough and uneven fracture surface. This has important practical implications, as roughness can be a dominant effect in hydraulic fracture propagation, as well as acting as a nucleation point for the clogging of proppants.
17:30 - 19:30 Dinner (Vistas Dining Room)
08:30 - 09:00 Nancy Shengnan Chen: Optimization of Well Placement and Fracture Design for Multi-Well Pads in Unconventional Tight Reservoirs ↓
Well pads with multiple horizontal wells are widely used to develop unconventional tight and shale oil reservoirs, driven by both economic and environmental considerations. Thus, determining the optimal well spacing, placement configuration, and stimulation design is critical to optimizing hydrocarbon production from multi-well pads in unconventional reservoirs. Recent research efforts have been devoted to maximizing the oil/gas production or the NPV in unconventional reservoirs by utilizing analytical models and reservoir simulations. However, owing to the complexity and computationally expensive simulation of the field-scale problem, the optimization process is mostly restricted to parametric-sensitivity analysis, where a single variable is varied while the others are held constant. In this work, a new Generalized Differential Evolution (GDE) algorithm has been developed and successfully applied to optimize the well placement as well as the fracture parameters of a multi-well pad under constraints. A new well-completion economic model based on the field dataset is developed and incorporated into the optimization framework, allowing us to find a practical optimum scenario for multi-well pad development. A field case in the Cardium tight oil reservoir is finally used to demonstrate the successful application of the newly developed optimization framework. The optimum solutions show that a well spacing between 230 and 280 m is the optimum range for multi-well pad development in the Cardium tight formation. The optimum fracture half-length ranges from 82 to 97 m, and the optimum value of fracture conductivity is between 220 and 240 md·m. Under an optimal design of well placements and fracture parameters, the proppant pumped per stage ranges from 15 to 20 tonnes and the fracturing fluid injection volume is between 100 and 130 m³ per stage. In summary, the relationship between the overall NPV and the total fracture volume is complicated, and it is of practical importance to optimize the total fracture volume and strike a balance between oil production and stimulation cost in order to achieve a higher NPV.
09:00 - 09:30 Mary F. Wheeler: Diffusive Fracture Network Representations in Tight Formations ↓
We describe methodologies and robust flow and mechanics algorithms for modeling diffusive fracture network representations in tight formations. These include a priori and a posteriori error estimates for modeling Biot systems, generating natural fracture networks, and applying phase field for stimulation.
09:30 - 10:00 Sanghyun Lee: Phase field modeling for fracture propagation in porous medium ↓
The computational modeling of the formation and growth of pressurized, fluid-filled fractures in poroelastic media is difficult for complex fracture topologies. Here we study fracture propagation by approximating the lower-dimensional fracture surface using a phase-field function. The major advantages of using phase-field modeling for crack propagation are: i) it is a fixed-topology approach in which remeshing is avoided; ii) the crack propagation and joining paths are automatically determined based on energy minimization; and iii) joining and branching of multiple cracks do not require any additional techniques. Recently, the phase-field approach has been widely employed for different applications and implemented in various software packages. The two-field displacement phase-field system solves a fully-coupled constrained minimization problem due to the crack irreversibility. Here, this constrained optimization problem is handled by using an active set strategy. The pressure is obtained by using a diffraction equation where the phase-field variable serves as an indicator function that distinguishes between the fracture and the reservoir. The two systems are then coupled via a fixed-stress iteration. In addition, we couple with a transport system for proppant-filled fractures by using a power-law fluid system. The numerical discretization in space is based on Galerkin finite elements for displacements and phase-field, and an Enriched Galerkin method is applied for the pressure equation and transport equation in order to obtain local mass conservation. Nonlinear equations are treated with Newton's method. Predictor-corrector dynamic mesh refinement allows us to capture a more accurate interface of the fractures with a reasonable number of degrees of freedom. In addition, we will discuss how to couple this phase-field model to multiscale and optimization problems.
10:30 - 11:00 Erwan Tanne: A variational phase field model of hydraulic fracturing ↓
Since their inception in the mid-90's, variational phase-field models of fracture [1] have steadily gained popularity. One of their strengths is the ability to handle complex topologies with unknown crack paths and the interaction between multiple cracks, which is a fundamental requirement for the numerical simulation of hydraulic fracturing in complex situations. Following the technique of [2], crack propagation subject to a given pressure $p$ acting along the fracture surfaces of a brittle material occupying a domain $\Omega$ is computed as the minimizer of the energy functional $$ \mathcal{E}_\ell (u,\alpha)= \int_{\Omega} \frac{1}{2} (1-\alpha)^2 \mathtt{A} e(u):e(u)\, \mathrm{d}x - p \int_{\Omega} \alpha\, \mathrm{div}(u)\, \mathrm{d}x + \frac{3G_c}{8} \int_{\Omega} \left( \frac{\alpha}{\ell} + \ell | \nabla \alpha |^2 \right) \mathrm{d}x $$ where $u$ denotes the displacement field, $\alpha$ the phase field representing the fracture geometry, and $\mathtt{A}$ the Hooke's law tensor. As in [2], in the case of a single pre-existing line or penny-shaped crack in an infinite medium, the pressure and volume of fluid recovered from minimizers of this functional can be compared with solutions from the literature [3]. This formalism can also be used to address the issue of hydraulic stimulation of multiple cracks. Symmetry arguments are routinely used to suggest that the propagation of an infinite array of cracks of equal length in an infinite reservoir is possible. Yet a simple stability analysis reveals that this is not the case and that loss of symmetry is always energetically favored [4]. References [1] Bourdin, B., Francfort, G., and Marigo, J.-J., Numerical experiments in revisited brittle fracture. J. Mech. Phys. Solids, (2000), 48(4) 797-826 [2] Bourdin, B., Chukwudozie, C., and Yoshioka, K. (2012). A variational approach to the numerical simulation of hydraulic fracturing. In Proceedings of the 2012 SPE Annual Technical Conference and Exhibition, volume SPE 159154. [3] Sneddon, I. and Lowengrub, M. (1969). Crack problems in the classical theory of elasticity. The SIAM series in Applied Mathematics. John Wiley & Sons. [4] Tanné, E. (2017). Variational phase-field models from brittle to ductile fracture: nucleation and propagation. PhD thesis, Université Paris-Saclay, Ecole Polytechnique.
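The single pre-existing crack comparison mentioned above is typically made against the classical plane-strain solution of Sneddon [3] for a uniformly pressurized line crack of half-length $l$ in an infinite medium, which, with $E' = E/(1-\nu^2)$, gives the opening and fluid volume

$$ w(x) = \frac{4 p\, l}{E'} \sqrt{1 - \frac{x^2}{l^2}}, \qquad V = \int_{-l}^{l} w(x)\, \mathrm{d}x = \frac{2 \pi p\, l^2}{E'}; $$

the penny-shaped configuration is benchmarked against the analogous axisymmetric solution.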
11:00 - 11:30 Keita Yoshioka: A phase-field hydromechanical model of reservoir simulation ↓
Since their inception in the late 90's, phase-field models of fracture simulation have steadily gained popularity. One of the appeals is the ability to handle complex topologies with unknown crack paths on relatively coarse meshes, as well as multiple-crack interaction, which is a fundamental requirement for the numerical simulation of hydraulic fracturing in complex situations and is technically more difficult to achieve with many other methods. In this talk, we will first describe the construction of a phase-field based coupled hydromechanical reservoir simulator. We will then revisit the problem of a single hydraulic fracture propagating in an infinite impermeable medium in order to validate the computation of fracture width and fracture pressure from the phase-field model. Finally, we will show how a phase-field description of a system of cracks can be leveraged to model flow in a fractured porous medium, describe the coupling of the flow and mechanics problems, and illustrate the properties of this model through various numerical simulations.
13:30 - 17:30 Free Afternoon (Banff National Park)
13:30 - 17:30 Dmitry Garagash: Field Trip to Sulphur Mountain Peak ↓
The excursion to the Sulphur Mountain peak offers an unparalleled 360-degree view of the Canadian Rockies and the Bow River Valley, in addition to the along-the-ridge boardwalk to the Cosmic Ray Station and the mountain peak. The mountain can be ascended by the Banff Gondola (~15 min) or by a hiking trail (1.5-2 hours). https://www.banffjaspercollection.com/attractions/banff-gondola/ The switch-back hiking trail is 5.5 km long with a ~650 m elevation gain, and is rated as moderate. The mountain top offers an observation deck, a restaurant/wine bar, and a gift shop. If time allows, the Banff Upper Hot Springs are located a few minutes' walk from the Gondola/mountain-trail base, and provide a way to relax in the naturally hot spring water and open pool after the mountaineering exercise. http://www.hotsprings.ca/banff-upper-hot-springs (bring your swimsuits).
09:00 - 09:30 Thomas-Peter Fries: Explicit-implicit XFEM for Hydraulic Fracturing with emphasis on transport models on curved crack surfaces ↓
The eXtended Finite Element Method (XFEM) has developed into a standard tool in fracture mechanics. The method enriches the approximation space such that inner-element cracks are considered without loss of accuracy. The hybrid explicit-implicit XFEM uses both an explicit surface mesh and the implicit level-set method for the description of the crack geometry. The implicit description is needed for the integration in cut elements, and it defines where to enrich and how. The explicit description facilitates the (non-planar) crack propagation and provides the basis for solving general transport models on the surface mesh in order to account for the fluid in Hydraulic Fracturing. One may span the full range from the Reynolds equation, through general scalar advection-diffusion equations, up to the Stokes and Navier-Stokes equations. Because the crack surfaces in hydraulic fracturing may be non-planar, these models have to be extended from the flat case to the situation on curved manifolds. Tangential differential calculus and surface operators play an important role, and approximations based on finite elements have to be provided. Future research will show which of these transport models are necessary and sufficient in the field of hydraulic fracturing.
09:30 - 10:00 Robert Gracie: Sequential Coupling Schemes for Hydraulic Fracture Simulation ↓
Hydraulic fracturing (HF) is a coupled process involving the simultaneous consideration of both solid deformation and fracturing of the rock mass and the flow of the fracturing fluid. While fully coupled and simple iterative schemes have been shown in the literature, relatively little focus has been placed on the effectiveness and stability of the iterative schemes. This is in contrast to the porous media simulation literature. In the context of the Finite Element Method (FEM) and the eXtended FEM, the most commonly adopted iterative scheme for HF simulation is analogous to the drained split, which in the context of porous media simulation has been shown to be unstable. In this presentation we contrast and compare the HF drained split with a new HF split analogous to the stable undrained split developed for porous media. Through two-dimensional examples of non-planar hydraulic fracture propagation and a benchmark comparison with the KGD model, it will be shown that the new HF undrained split has superior performance in terms of accuracy and load step size, leading to increased computational efficiency.
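A minimal sketch of the sequential structure under discussion: each time step alternates a mechanics solve and a flow solve until the exchanged fields stop changing. This is a generic staggered skeleton with stand-in operator names (not the authors' formulation); the drained/undrained distinction lies in which quantity is held fixed during the mechanics solve.

```python
def staggered_step(p, w, solve_mechanics, solve_flow, tol=1e-8, max_iter=50):
    """One sequentially coupled HF time step (p, w: numpy arrays).

    solve_mechanics(p) -> w : aperture given the current pressure field.
    solve_flow(w)      -> p : pressure given the current aperture field.
    What is frozen inside solve_mechanics (the pressure itself, or a
    combination mimicking an undrained response) is what distinguishes
    a drained-like split from an undrained-like split.
    """
    for k in range(max_iter):
        w_new = solve_mechanics(p)
        p_new = solve_flow(w_new)
        if max(abs(p_new - p).max(), abs(w_new - w).max()) < tol:
            return p_new, w_new, k + 1
        p, w = p_new, w_new
    raise RuntimeError("staggered iteration did not converge")
```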
10:30 - 11:00 Adrian Lew: Simulation of Thermally and Hydraulically Driven Fractures with Universal Meshes ↓
We describe our approach to simulating curvilinear brittle fractures. Key to our approach is the ability to compute the values of the stress intensity factors around the crack tip with a high order of accuracy, in practice fourth order. The practical consequences of this feature are that (a) converged crack paths can be obtained with relatively coarse meshes and, more importantly, (b) it is not necessary to refine the mesh around the tip at each crack step, except, perhaps, around high-curvature regions of the crack paths. The ability to compute accurate stress intensity factors relies on two novel developments in my group: (a) the use of Universal Meshes to deform an underlying mesh so that it precisely matches the geometry of the fracture as it evolves, and (b) the computation of high-order solutions to elasticity problems with singularities. We will briefly illustrate this method through the simulation of the propagation of thermally driven cracks. We will then focus on the application of Universal Meshes to simulate hydraulic fractures with lag in two dimensions. The key advantage here is that the Universal Mesh provides a mesh of good quality on the crack faces for the computation of the lubrication equations. We are still working on the method for the computation of high-order solutions in the presence of the fluid, so we only obtain first-order convergence of the stress intensity factors in this case.
11:00 - 11:30 Erfan Sarvaramini: 3D Simulation of Stimulated Rock Volume Evolution during Hydraulic Fracturing ↓
Hydraulic fracturing in naturally fractured rocks often leads to the creation of a stimulated zone of enhanced permeability in which the target formation is irreversibly deformed through shear dilation of natural fractures, plastic deformation, and induced bulk damage. The currently dominant modeling approach - explicitly accounting for each fracture with microscale resolution of the fracture network (e.g., discrete fracture network or distinct element method) - is computationally expensive and complex. There also remain large uncertainties with respect to the natural fracture distribution and reservoir parameters. Addressing these issues leads to the need for an up-scaled continuum model that is able to capture, in an average sense, the irreversible behavior of naturally fractured rock masses. We present a novel mathematical approach with the goal of simulating the evolution of the Stimulated Rock Volume (SRV) in a 2D/3D geomechanical model. This is achieved by introducing a homogenized, non-local, poro-elastic-plastic continuum zone for the stimulated region, described by an internal characteristic length scale. The up-scaled mechanism of fracturing and deformation is described by a non-local Drucker-Prager model coupled to a Biot poroelastic medium, and implemented within a standard Galerkin Finite Element Method framework. We first quantify the evolution of the SRV and the pressure change in the reservoir for a typical example of hydraulic fracture stimulation in a tight formation. After the creation of a sufficiently large SRV, the well is shut in for an extended period of time and the wellbore pressure is allowed to fall off. The analysis of post-shut-in pressure curves confirms the existence of the well-known flow regimes - storage and bilinear flow - characteristic of simple bi-wing hydraulic fractures in homogeneous rocks. Using the existing analytical solution for the finite-conductivity fracture, the flow capacity of the stimulated zone is calculated and correlated to the size of the stimulated zone through the non-local length scale. The performance of the developed methodology is tested by considering examples of 2D and 3D SRV calculations. For each example, the stimulated zone and the fluid pressure in relation to the local in-situ stress field are quantified. The influence of reservoir complexities, such as sedimentary layering, a complex initial in-situ stress field, and wellbore effects on the evolution of the SRV and fluid pressure will be discussed.
13:30 - 14:00 Anthony Peirce: Monitoring Evolving Hydraulic Fracture Growth using Tiltmeters and a combined Extended Kalman Filter-Implicit Level Set Algorithm ↓
The inversion of remote tilt measurements to determine the geometry of an evolving hydraulic fracture (HF) is a classically ill-posed problem, because the elliptic PDE that governs the behaviour of the displacement gradient field rapidly smooths the geometric details of the fracture with distance. Indeed, it can be shown that only the first few moments of the crack opening displacement field can reasonably be estimated from such measurements. On the other hand, numerical and analytic models of evolving HF cannot be expected to provide a completely accurate prediction of evolving HF geometries in the field, due to the large number of uncertainties in the data and the un-modeled dynamics from physical processes that have, of necessity, been ignored. Our approach is to feed the time series of tilt data as input to the implicit level set algorithm (ILSA) model for an evolving HF via the Extended Kalman Filter (EKF). From an inversion point of view, the dynamics from the coupled ILSA model enable the tilt-data snapshots in the time series to be connected, whereas in previous inversion algorithms these data were regarded as independent. We illustrate the EKF-ILSA algorithm using numerical experiments for planar HF propagating in 3D elastic media. By varying the confining stress field, synthetic tiltmeter data are generated that result in substantial changes to the geometry of the evolving HF. The ILSA model is assumed to have no knowledge of this confining stress field except for feedback from the tiltmeters via the Extended Kalman Filter; indeed, without this feedback the ILSA HF model would propagate with radial symmetry. We compare the EKF-ILSA estimates of the fracture geometry and width with those of the HF used to generate the synthetic data, with and without Gaussian noise. We also present results in which the algorithm is tested on real field data from a mining situation in which HF have been deliberately generated to enhance caving in longwall coal mining. The model is able to detect asymmetry in the growth of the HF, which is corroborated by measured intersection times of the HF with monitoring boreholes.
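The assimilation loop described above follows the standard Extended Kalman Filter structure: propagate the fracture state with the nonlinear ILSA model, then correct it with each tilt snapshot through linearized observation operators. A textbook predict/update sketch with stand-in operators (not the EKF-ILSA implementation itself):

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One EKF cycle.

    x, P : state estimate and its covariance
    f(x) : nonlinear state propagator (here: one ILSA-like growth step)
    F, H : Jacobians of f and h at the current estimate
    h(x) : predicted observations (tilts);  z : measured tilt vector
    Q, R : process and measurement noise covariances
    """
    x_pred = f(x)                            # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))     # update with tilt data
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```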
14:00 - 14:30 Emmanuel Detournay: Hydraulic Fracture in Highly Permeable Rock ↓
Models of hydraulic fractures in conventional reservoirs assume that Carter's leak-off law - the leak-off rate is proportional to the inverse of the square root of the time elapsed since exposure to fracturing fluid - is applicable. The validity of Carter's leak-off law stems from the cake-building properties of the fracturing fluid. In some situations where water is essentially the fracturing fluid, Carter's leak-off law can also be justified (through a reinterpretation of the leak-off coefficient) as an early-time solution of the diffusion equation. However, in water-flooding operations of very permeable reservoirs, the fracture propagates in a region where the pore pressure perturbations caused by the injection of water are quasi-stationary. The talk will present the construction of a new class of solutions for hydraulic fractures propagating under these asymptotic conditions. We will first present a KGD-type model of a hydraulic fracture created by injecting fluid in weak, poorly consolidated rocks. By further assuming a "small" or negligible toughness (with the consequence that the crack aperture is "small"), we prove that the system is characterized by two asymptotic fracture propagation regimes: rock-flow dominated at small time and fracture-flow dominated at large time. The timescale that legislates the transition between the small- and large-time asymptotic regimes is shown to be a strongly nonlinear function of a dimensionless injection rate. The rock-flow-dominated regime is characterized by an increasing injection pressure, while the fracture-flow-dominated regime is associated with an injection pressure decreasing with time. The peak injection pressure takes place during the transition between the two regimes. The KGD model collapses, however, when the total crack length (two wings) becomes larger than the thickness of the reservoir layer, assumed to be bounded by impermeable strata. The changing geometric ratio of the constant crack height to its length affects the fracture compliance, i.e., the relationship between fracture aperture and net pressure. As this ratio decreases, the non-local elastic interaction characteristic of the KGD model progressively vanishes, and for a length-to-height ratio approximately larger than 5 the compliance becomes essentially local, as in the PKN model. It will be shown that the evolution of the fracture from a KGD to a PKN mode causes an unexpected reversal of the regime of propagation, with the rock-flow-dominated regime in the PKN geometry becoming the long-term solution.
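For reference, Carter's law as invoked above states that the local leak-off velocity depends only on the time elapsed since the fracture front first exposed that point to fluid,

$$ g(x,t) = \frac{C_L}{\sqrt{t - t_0(x)}}, $$

where $C_L$ is the leak-off coefficient and $t_0(x)$ is the arrival time of the front at position $x$ (the precise coefficient convention varies between authors).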
14:30 - 15:00 Denis Esipov: The fully coupled 3D numerical model of hydraulic fracturing: ways to improve and possible applications ↓
The model and algorithm for the numerical solution of the three-dimensional problem of hydraulic fracture initiation and subsequent propagation will be presented. The model is fully coupled and takes into account three important processes: elastic deformation of the rock, fluid flow in the fracture, and the fracture's further propagation in the rock. The mathematical model consists of three groups of equations, each of which is responsible for one of the processes defined above. The elasticity equations are solved by the dual boundary element method (DBEM), and the lubrication equations by the finite element method (FEM) improved by a simple conservative correction. This correction allows us to preserve the total volume of injected fluid at the discrete level. The fracture propagation criterion gives a system of non-linear equations, which is solved by a special modification of the relaxation method. In the early stages of propagation, we need to explicitly consider the fluid lag, which in general varies along the fracture front. This substantially increases the required computational resources. One way to overcome this challenge is to use an approximation of the behavior of the variables near the fracture front (tip). Are the already-developed asymptotic solutions applicable here? The results obtained with the model include the initiation pressure for a realistic configuration of a perforated well, the shape of the fracture, its position and orientation, as well as the possibility of reorientation and the size of the domain in which it reorients. The cementing and casing of the well can be taken into account. From the oilfield engineer's point of view, the model can be useful for understanding the early stage of hydraulic fracturing, where there are many stop cases that sometimes lead to an unsuccessful hydraulic fracturing treatment.
15:30 - 16:00 Sergey Golovin: Modelling of a planar hydraulic fracture with three different approaches ↓
In the talk, we present our recent developments in the modelling of a planar hydraulic fracture in an inhomogeneous reservoir. The hierarchy of models includes the Enhanced Pseudo-3D (EP3D) model [1] coupled with proppant transport, the Planar 3D model under the modification of the Implicit Level Set Algorithm (ILSA) [2], and the Planar 3D Biot model that accounts for the effect of poroelasticity [3]. In the EP3D model, the three-dimensional fracture is modeled in terms of quantities that are averaged along the fracture's height dimension. This model is computationally fast, but is limited to a certain shape of the fracture and takes the layered structure of the reservoir into account only in terms of the confining stress difference. This model is used for the development of the coupling procedure with the proppant transport module. For the latter we use a one-speed transport model with an effective viscosity varying with particle concentration. We demonstrate the effects of the development of the Saffman-Taylor instability, proppant bridging, and breakage of the proppant plug due to the displacement instability. The Planar 3D model describes fracture development in a layered reservoir, where the effects of poroelasticity are neglected. The model is implemented using a modification of the ILSA approach [2]. In particular, the model accounts for the inhomogeneity of the elastic properties of the reservoir (only a layered structure of the reservoir is allowed), and is able to simulate cases of local fracture closure due to fluid re-distribution and/or leak-off. The most advanced Planar 3D Biot model describes the fully coupled interaction of the stresses with the fluid filtration. The numerical model is implemented using the Finite Element Method and allows us to account for an arbitrary inhomogeneity of all physical characteristics of the reservoir. In particular, we show that in the case of a layered structure of the formation, where the layers differ only by permeability, the fracture propagation can demonstrate counter-intuitive non-monotonic behavior. Both the Planar 3D and Planar 3D Biot models are thoroughly verified by comparison with a fast approximate solution for the radial fracture [4], and matched with existing experimental data [5]. Finally, we present results of the multi-parameter and multi-objective optimization of the Net Present Value and the Fracture Production depending on the applied flow rate, the volume of fluid and proppant, and other characteristics of the fracturing process. The optimization is based on fast algorithms for the estimation of fracture characteristics [6], on a module for computing the production of a multiply-fractured wellbore, and on the application of genetic algorithms [7] for the construction of the Pareto front in the space of objective functions. The work was supported by the Ministry of Science and Education of the Russian Federation (grant 2016-220-05-2642). Literature 1. E.V. Dontsov, A. P. Peirce. (2015) An enhanced Pseudo-3D model for hydraulic fracturing accounting for viscous height growth, non-local elasticity, and lateral toughness, Eng. Fract. Mech., V. 142, p 116-139 2. E.V. Dontsov, A.P. Peirce. (2017) A multiscale Implicit Level Set Algorithm (ILSA) to model hydraulic fracture propagation incorporating combined viscous, toughness, and leak-off asymptotics. Comput. Methods Appl. Mech. Engrg. V. 313. P. 53–84. 3. A.N. Baykin, S.V. Golovin. (2016) Modelling of hydraulic fracture development in inhomogeneous poroelastic medium.
J. Phys.: Conf. Ser., V. 722. 012003 4. Dontsov, E.V. (2016) An approximate solution for a penny-shaped hydraulic fracture that accounts for fracture toughness, fluid viscosity, and leak-off. R.Soc. Open Sci. V. 3. P. 160737. 5. R. Wu, A.P. Bunger, R.G. Jeffrey, E. Siebrits. (2008) A comparison of numerical and experimental results of hydraulic fracture growth into a zone of lower confining stress. ARMA 08-267 6. Dontsov, E. V. (2016) An approximate solution for a penny-shaped hydraulic fracture that accounts for fracture toughness, fluid viscosity and leak-off. Royal Society open science V.3 N.12 P. 160737. 7. Deb, K. (2001) Multi-objective optimization using evolutionary algorithms. John Wiley & Sons.
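As a toy illustration of the final optimization step (not the authors' implementation), non-dominated filtering extracts a Pareto front from a set of candidate designs scored on two objectives; the design values below are made up.

```python
# Extract a Pareto front from candidate fracturing designs (toy data;
# both objectives, NPV and production, are to be maximized).
import numpy as np

rng = np.random.default_rng(2)
designs = rng.random((200, 2))   # columns: [NPV, production], made-up values

def pareto_front(points):
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some point is >= p everywhere and > p somewhere
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(designs)
print(f"{len(front)} non-dominated designs out of {len(designs)}")
```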
16:00 - 16:30 Ali Rezaei: A Fast Multipole Displacement Discontinuity Method for Hydraulic Fracture Simulation
A fast multipole method (FMM) is used to decrease the computational time of a fully coupled poroelastic hydraulic fracture model with a controllable effect on its accuracy. The hydraulic fracture model is based on the fully poroelastic formulation of the displacement discontinuity method (DDM), which is a special formulation of the boundary element method (BEM). DDM is a powerful and efficient method for problems involving fractures. However, this method becomes slow as the number of spatial elements increases, or as necessary details such as poroelasticity, which makes the solution history-dependent, are added to the model. FMM is a technique to expedite matrix-vector multiplications within a controllable error without forming the matrix explicitly. Several examples are provided to show the efficiency of the proposed approach in problems with many degrees of freedom (in time and space). Examples include hydraulic fracturing of a horizontal well and randomly distributed pressurized fractures at different orientations with respect to the horizontal stresses. The results are compared to the conventional DDM in terms of computational processing time and accuracy. It is demonstrated that FMM may decrease the computation time by up to 70 times with a negligible error. The tip displacements obtained with both methods are then used to compare the computed stress intensity factors (SIFs) in modes I and II, which are needed for fracture propagation. The error of the SIF calculation using the proposed modification was also found to be negligible. Consequently, the method does not affect the estimation of the fracture propagation direction. Accordingly, the proposed algorithm may be used for fracture propagation studies while substantially reducing the processing time.
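To get a feel for why FMM-type acceleration works, here is a minimal numerical check (not the paper's poroelastic DDM): the interaction block between two well-separated point clusters is numerically low-rank, which is exactly the structure that fast multipole and hierarchical methods compress.

```python
# Interactions between well-separated clusters form a numerically
# low-rank block; toy 1/r kernel between two separated point clouds.
import numpy as np

rng = np.random.default_rng(3)
src = rng.random((300, 2))            # source cluster near the origin
tgt = rng.random((300, 2)) + 5.0      # well-separated target cluster

K = 1.0 / np.linalg.norm(tgt[:, None, :] - src[None, :, :], axis=2)
s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(f"300x300 block, numerical rank ~ {rank}")   # far smaller than 300
```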
16:30 - 17:00 Peter Grassl: Hydraulic fracture of a porous thick-walled hollow sphere
The aim of this work is to analyse the nonlinear response of a porous thick-walled hollow sphere subjected to inner fluid pressure. In particular, we aim to find out how the Biot coefficient and Poisson's ratio of the material influence the fracture process. Spherical symmetry is assumed, so that the hydro-mechanical fracture problem is expressed by means of an ordinary differential equation for the radial displacement with respect to the radius of the sphere. For the elastic response, we derive the analytical hydro-mechanical solution based on the work in [2], which is a hydro-mechanical extension of the well-known mechanical case described, for instance, in [1]. It is assumed that the material is fully saturated and that the applied fluid pressure changes so slowly that a steady state always exists. For the extension to fracture, the crack openings are smeared out into an inelastic strain, which is used in a one-dimensional damage model for the stiffness reduction. The resulting nonlinear ordinary differential equation, which is presented here for the general case of non-zero Poisson's ratio, is solved numerically by means of a finite difference scheme. A sensitivity study reveals that both the Biot coefficient and Poisson's ratio have a very strong influence on the hydro-mechanical response of the thick-walled hollow sphere. References [1] S. P. Timoshenko, J. N. Goodier, Theory of Elasticity. McGraw-Hill, 1987. [2] P. Grassl, C. Fahy, D. Gallipoli and S. J. Wheeler, On a 2D hydro-mechanical lattice approach for modelling hydraulic fracture. Journal of the Mechanics and Physics of Solids, Vol. 75, pp. 104-118, 2015.
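For orientation, here is a minimal sketch of the purely elastic backbone of the problem: Lamé's solution for a thick-walled sphere under internal pressure, the well-known mechanical case of reference [1]. The numbers are illustrative and the hydro-mechanical coupling is omitted.

```python
# Lamé stresses in an elastic thick-walled sphere under internal pressure p.
import numpy as np

a, b, p = 1.0, 2.0, 1.0                 # inner radius, outer radius, inner pressure
r = np.linspace(a, b, 5)
c = p * a**3 / (b**3 - a**3)
sigma_r = c * (1 - b**3 / r**3)         # radial stress: -p at r=a, 0 at r=b
sigma_t = c * (1 + b**3 / (2 * r**3))   # hoop stress, tensile throughout

for ri, sr, st in zip(r, sigma_r, sigma_t):
    print(f"r = {ri:.2f}: sigma_r = {sr:+.3f}, sigma_theta = {st:+.3f}")
```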
17:00 - 17:30 Shmuel Rubinstein: Instabilities in advancing hydraulic fracture fronts
Fracture surface roughness can play a significant role in enhanced oil recovery, due to its impact on the fluid dynamics within the thin fracture. In theory, fracture surfaces should be smooth; however, undulations and crinkles of the fracture front resulting from heterogeneities and dynamic instability produce a complex fracture surface. These effects are fast, multi-scale and generically three-dimensional, and as such are intractable both experimentally and computationally. Experimentally, these difficulties are mitigated by studying fracture dynamics in brittle hydrogels. In these transparent materials, the fracture dynamics are slow and can be visualized. We combine high-speed photography and laser sheet microscopy and directly observe in full 3D how roughness is dynamically generated by the fracture front. Specifically, I will discuss the behavior of small step-like discontinuities that form, propagate, and interact with each other along the advancing front. The interaction of these step lines can create significant relief along the fracture front and can even result in fracture propagation on separate parallel planes.
09:00 - 10:00 Andrew Bunger: Open Technical Discussion (TCPL 201)
10:30 - 11:30 Emmanuel Detournay: Concluding Remarks (TCPL 201)
11:30 - 12:00 Checkout by Noon
5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to check out of the guest rooms by 12 noon.
(Front Desk - Professional Development Centre)
12:00 - 13:30 Lunch from 11:30 to 13:30 (Vistas Dining Room) | CommonCrawl |
Journal of Electrical Engineering and Technology
The Korean Institute of Electrical Engineers (대한전기학회)
Journal of Electrical Engineering and Technology (JEET), the official publication of the Korean Institute of Electrical Engineers (KIEE) published bimonthly, released its first issue in March 2006. The journal is open to submissions from scholars and experts in the wide areas of electrical engineering technologies. The scope of the journal includes all issues in the field of electrical engineering and technology, including techniques for electrical power engineering, electrical machinery and energy conversion systems, electrophysics and applications, and information and controls. Papers based on novel methodologies and implementations, and creative and innovative electrical engineering associated with the four scopes, are particularly welcome, but submissions are not restricted to the above topics. JEET publishes in conformity with publication ethics codes based on COPE (Committee on Publication Ethics: http://publicationethics.org/). Additionally, JEET complies strictly with the general research ethics codes of the KIEE (http://www.kiee.or.kr). Reviews and tutorial articles on contemporary subjects are strongly encouraged. All papers are reviewed by at least three independent reviewers, and authors of all accepted papers are required to complete a copyright form transferring all rights to the KIEE. For more detailed information about manuscript preparation, please visit the web site of the KIEE at http://www.kiee.or.kr or contact the secretariat of JEET.
http://home.jeet.or.kr/ KSCI KCI SCOPUS SCIE
Fault Current Limiting Characteristics of Separated and Integrated Three-Phase Flux-Lock Type SFCLs
Lim, Sung-Hun 289
https://doi.org/10.5370/JEET.2007.2.3.289
The fault current limiting characteristics of the separated and the integrated three-phase flux-lock type superconducting fault current limiters (SFCLs) were analyzed. The three-phase flux-lock type SFCL consists of three flux-lock reactors and three high-$T_c$ superconducting (HTSC) elements. In the integrated three-phase flux-lock type SFCL, the three flux-lock reactors are wound on the same iron core; in the separated three-phase flux-lock type SFCL, they are wound on three separate iron cores. The integrated three-phase flux-lock type SFCL showed fault current limiting characteristics different from those of the separated SFCL, in that a fault in one phase could affect the sound phases, resulting in quenching of the HTSC element in a sound phase. Through computer simulation applying numerical analysis to the three-phase equivalent circuit, the fault current limiting characteristics of the separated and the integrated three-phase flux-lock type SFCLs were compared according to the ground fault type.
A Special Protection Scheme Against a Local Low-Voltage Problem and Zone 3 Protection in the KEPCO System
Yun, Ki-Seob;Lee, Byong-Jun;Song, Hwa-Chang 294
This paper presents a special protection scheme, which was established in the KEPCO (Korea Electric Power Corporation) system, against a critically low voltage profile in a part of the system after a double-circuit tower outage. Without establishing the scheme, the outage triggers the operation of a zone 3 relay and trips the component. This sequence of events possibly leads to a blackout of the local system. The scheme consists of an inter-substation communication network using PITR (Protective Integrated Transmitter and Receiver) for acquisition of the substations' data, and under-voltage load shedding devices. This paper describes the procedure for determining the load shedding in the scheme and the experiences of the implementation.
Investigation and Mitigation of Overvoltage Due to Ferroresonance in the Distribution Network
Sakarung, Preecha;Bunyagul, Teratam;Chatratana, Somchai 300
This paper reports an investigation of overvoltages caused by ferroresonance in a distribution system consisting of a bank of open-delta single-phase voltage transformers. The high voltage sides of the voltage transformers are connected to the distribution system via three single-phase fuse cutouts. Due to the saturation characteristic of the transformer cores, ferroresonance can occur when the transformer is energized or de-energized by single-phase switching of the fuse cutouts. A simulation tool based on EMTP is used to investigate the overvoltages on the high side of the voltage transformer. Bifurcation diagrams are used to present the ferroresonance behavior over ranges of different operating conditions. The simulation results are in good agreement with the results from experiments on 22 kV voltage transformers. A mitigation technique with additional damping resistors in the secondary windings of the voltage transformers is introduced. A brief discussion is given of the physical phenomena related to the overvoltage and the damage to voltage transformers.
Measurement of Transient Electric Field Emission from a 245 kV Gas Insulated Substation Model during Switching
Rao, M. Mohana;Thomas, M. Joy;Singh, B.P. 306
The transient fields generated during switching operations in a Gas Insulated Substation (GIS) are associated with high-frequency components of the order of a few tens of MHz. These transient fields leak into the external environment of the gas-insulated equipment and can interfere with nearby electronics. Measurements of the transient fields are thus required to characterise the interference caused by switching phenomena in such substations. In view of the above, E-field emission measurements during switching operations have been carried out on a 245 kV GIS model, using a resonant dipole antenna and a D-dot sensor. The characteristics of the E-fields, i.e., the frequency spectra and their levels, have been analysed and are reported in the paper. The suitability of the measurements has been confirmed by comparing the frequency spectra of the measured and computed transient fields.
A New Concept of Power Flow Analysis
Kim, Hyung-Chul;Samann, Nader;Shin, Dong-Geun;Ko, Byeong-Hun;Jang, Gil-Soo;Cha, Jun-Min 312
The solution of the power flow is one of the most important problems in electrical power systems. Traditional methods such as the Gauss-Seidel and Newton-Raphson (NR) methods have drawbacks, such as sensitivity to initial values, abnormal operating solutions, and divergence under heavy loads. In order to overcome these problems, a power flow solution incorporating a genetic algorithm (GA) is introduced in this paper. General genetic algorithm operators, an arithmetic crossover, and a non-uniform mutation operator are suggested to solve the power flow problem. While an abnormal solution cannot be obtained by the NR method, multiple power flow solutions can be obtained by the GA method. Under heavy load, both normal and abnormal solutions can be obtained by the proposed method. In this paper, a floating-point representation instead of a binary representation is introduced for accuracy. Simulation results have been compared with those of traditional methods.
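A minimal sketch of the idea, assuming nothing about the paper's actual test systems: a floating-point GA with the two named operators, arithmetic crossover and non-uniform mutation, driving a stand-in nonlinear mismatch function to zero (real power-flow equations would replace it).

```python
# Toy GA with arithmetic crossover and non-uniform mutation on a
# floating-point encoding; the fitness is a stand-in nonlinear system,
# not a real power-flow mismatch.
import numpy as np

rng = np.random.default_rng(1)

def mismatch(x):                     # pretend power-balance residual
    return (x[0]**2 + x[1] - 1.2)**2 + (x[0] + x[1]**2 - 0.8)**2

def arithmetic_crossover(a, b):
    lam = rng.random()
    return lam * a + (1 - lam) * b

def nonuniform_mutation(x, gen, max_gen, lo=-2.0, hi=2.0):
    y = x.copy()
    i = rng.integers(len(x))
    span = (hi - lo) * (1 - gen / max_gen) ** 2    # shrinks over generations
    y[i] = np.clip(y[i] + rng.uniform(-span, span), lo, hi)
    return y

pop, max_gen = rng.uniform(-2, 2, size=(40, 2)), 200
for gen in range(max_gen):
    pop = pop[np.argsort([mismatch(p) for p in pop])]   # rank by fitness
    elite = pop[:10]
    children = [nonuniform_mutation(
        arithmetic_crossover(elite[rng.integers(10)], elite[rng.integers(10)]),
        gen, max_gen) for _ in range(30)]
    pop = np.vstack([elite, children])

pop = pop[np.argsort([mismatch(p) for p in pop])]
print("best solution:", pop[0], "residual:", mismatch(pop[0]))
```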
Sheath Circulating Current Analysis of a Crossbonded Power Cable Systems
Jung, Chae-Kyun;Lee, Jong-Beom;Kang, Ji-Won 320
The sheath in underground power cables serves as a layer to prevent moisture ingress into the insulation layer and to provide a path for earth return current. Nowadays, owing to the maturity of manufacturing technologies, there are normally no problems with the quality of the sheath itself. However, after the cable is laid in the cable tunnel and is operating as part of the transmission network, network construction and some unexpected factors may cause problems for the sheath. One of them is a high sheath circulating current. In a power cable system, a uniform configuration of the cables between sections is sometimes difficult to achieve because of geometrical limitations. This causes an increase in the sheath circulating current, which results in increased sheath loss and decreased permissible current. This paper studies the various characteristics and effects of the sheath circulating current, and then explains why the sheath current rises in an underground power cable system. A newly designed device known as the Power Cable Current Analyser, together with ATP simulations and analytical equations, is used for this analysis.
Load Flow Analysis for Distribution Automation System based on Distributed Load Modeling
Yang, Xia;Choi, Myeon-Song;Lim, Il-Hyung;Lee, Seung-Jae 329
In this paper, a new load flow algorithm is proposed on the basis of distributed load modeling in radial distribution networks. Since correct state data in distribution power networks are fundamental for all distribution automation algorithms in the Distribution Automation System (DAS), distribution network load flow is essential to obtain the state data. DAS Feeder Remote Terminal Units (FRTUs) are used to measure and acquire the necessary data for load flow calculations. In case studies, the proposed algorithm has been shown to be more accurate than a conventional algorithm; it has also been tested on a simple radial distribution system.
An Efficient Implementation of Decentralized Optimal Power Flow
Kim, Balho H. 335
In this study, we present an approach to parallelizing OPF that is suitable for distributed implementation and is applicable to very large inter-connected power systems. The approach could be used by utilities for optimal economy interchange without disclosing details of their operating costs to competitors. It could also be used to solve several other computational tasks, such as state estimation and power flow, in a distributed manner. The proposed algorithm was demonstrated with several case study systems.
An Analysis on Price Limits of Imported Power via Northeast Asian Power System Ties
Chung, Koo-Hyung;Kim, Balho H. 342
This paper presents an engineering approach to deriving the optimum price levels of transacted power. Under the assumption that power import is possible through system interconnections in the Northeast Asia region, the upper price limit at which imported power remains economically efficient was derived with respect to the time and amount of power import. The proposed approach was demonstrated based on the data from the National Power Development Planning in 2004 with the WASP model.
Analysis of an Electromagnetic Actuator for Circuit Breakers
Shin, Dong-Kyu;Choi, Myung-Jun;Kwon, Jung-Lok;Jung, Hyun-Kyo 346
In this paper, we present an analysis of dynamic characteristics of an electromagnetic actuator for circuit breakers. It is indispensable to simultaneously analyze magnetic, electric, and mechanical phenomena to obtain the dynamic characteristics of the electromagnetic actuator because these phenomena are closely related to each other in an electromagnetic actuator system. The magnetic equations are computed by using the finite element method (FEM). The electric equations and the mechanical equations, which include the time derivative terms, are calculated by using the time difference method (TDM). The calculated results, which have been obtained by means of the FEM and the TDM, are presented with experimental data.
Bearing Fault Diagnosis Using Fuzzy Inference Optimized by Neural Network and Genetic Algorithm
Lee, Hong-Hee;Nguyen, Ngoc-Tu;Kwon, Jeong-Min 353
A bearing diagnostics method using fuzzy inference based on vibration data is presented in this paper. Both time-domain and frequency-domain features are used as input data for bearing fault detection. The Adaptive Network-based Fuzzy Inference System (ANFIS) and a Genetic Algorithm (GA) are proposed to select the fuzzy model input and output parameters. Training results give an optimized fuzzy inference system for bearing diagnosis based on measured vibration data. The result is also tested with other sets of bearing data to illustrate the reliability of the chosen model.
A Novel Solid State Controller for Parallel Operated Isolated Asynchronous Generators in Pico Hydro Systems
Singh, Bhim;Kasal, Gaurav Kumar 358
This paper deals with a novel solid state controller (NSSC) for parallel operated isolated asynchronous generators (IAG) feeding 3-phase 4-wire loads in constant power applications, such as uncontrolled pico hydro turbines. AC capacitor banks are used to meet the reactive power requirement of asynchronous generators. The proposed NSSC is realized using a set of IGBTs (Insulated gate bipolar junction transistors) based current controlled 4-leg voltage source converter (CC-VSC) and a DC chopper at its DC bus, which keeps the generated voltage and frequency constant in spite of changes in consumer loads. The complete system is modeled in MATLAB along with simulink and PSB (power system block set) toolboxes. The simulated results are presented to demonstrate the capability of isolated generating system consisting of NSSC and parallel operated asynchronous generators driven by uncontrolled pico hydro turbines and feeding 3-phase 4-wire loads.
Characteristics of Pulse MIG Arc Welding with a Wire Melting Rate Change by Current Polarity Effect
Kim, Tae-Jin;Lee, Jong-Pil;Min, Byung-Duk;Yoo, Dong-Wook;Kim, Cheul-U 366
Joining thin aluminum alloy is difficult using most welding techniques. Many of the problems are associated with burn-through caused by the high heat input. Common welding techniques are TIG (Tungsten Inert Gas), MIG (Metal Inert Gas), and pulse-MIG welding. The pulse-MIG method provides more control of the heat balance in the welding arc by taking advantage of the different arc characteristics obtained with each of the two polarities. In this paper, we propose a new welding method controlled by a DSP 320C32, and present the characteristics and experimental results of this method: voltage, current, welding bead, and penetration.
High-Efficiency Ballast for HID Lamp using Soft-Switching Multi-Level Inverter
Lee, Baek-Haeng;Kim, Hee-Jun 373
Soft switching was applied to a multi-level inverter to enhance the performance of the high-intensity discharge (HID) ballast used in vehicle headlights. The electrical properties were investigated, and a steady-state model of the ballast was derived using mathematical methods. The result was used in analyzing the power characteristics. The model was confirmed by experiment.
Reliable Ethernet Architecture with Redundancy Scheme for Railway Signaling Systems
Hwang, Jong-Gyu;Jo, Hyun-Jeong 379
Recently, vital devices of railway signaling systems have been computerized in order to ensure safe train operation. Due to this computerization, networking interfaces between these devices have gradually become necessary. It is therefore important that there be reliable communication links in the signaling systems. Network technologies are applied in real-time industrial control systems, and numerous studies are being carried out on computer network technology for vital control systems such as railway signaling systems. In deploying these studies, we must consider costs, reliability, safety assurance techniques, compatibility, etc. In this paper, we propose an Ethernet architecture for railway signaling systems and precisely describe the computer network characteristics of vital railway signaling systems. We then demonstrate the experimental results of the proposed network algorithm, which is based on switched Ethernet technology with a redundancy scheme.
Quantitative Reliability Assessment for Safety Critical System Software
Chung, Dae-Won 386
In recent times, an essential issue in the replacement of old analogue I&C with computer-based digital systems in nuclear power plants has become the quantitative assessment of software reliability. Software reliability models have been successfully applied to many industrial applications, but have the unfortunate drawback of requiring failure data from which one can formulate a model. Software that is developed for safety critical applications is frequently unable to produce such data, for at least two reasons. First, the software is frequently one-of-a-kind, and second, it rarely fails. Safety critical software is normally expected to pass every unit test, producing precious little failure data. The basic premise of the rare-events approach is that well-tested software does not fail under normal routine inputs, which means that failures must be triggered by unusual input data and computer states. The failure data found under reasonable test cases and testing times for these conditions should be considered for the quantitative reliability assessment. In this paper, we present a quantitative reliability assessment methodology for safety critical software in rare-failure cases.
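For context, a standard back-of-envelope bound (not the method proposed in the paper): if software survives n independent tests with zero failures, the 95% upper confidence bound on its failure probability is roughly 3/n.

```python
# Zero-failure upper bound: require (1 - p)^n >= 1 - confidence,
# giving p <= 1 - (1 - confidence)**(1/n), approximately 3/n at 95%.
def failure_bound(n, confidence=0.95):
    return 1 - (1 - confidence) ** (1.0 / n)

for n in (100, 1000, 10000):
    print(f"{n:5d} failure-free tests: p <= {failure_bound(n):.2e}")
```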
Tunable Properties of Ferroelectric Thick Films With MgO Added on (BaSr)TiO3
Kim, In-Sung;Song, Jae-Sung;Jeong, Soon-Jong;Jeon, So-Hyun;Chung, Jun-Ki;Kim, Won-Jeong 391
MgO-enhanced $(Ba_{0.6}Sr_{0.4})TiO_3$ (BST) thick films have been fabricated by a tape casting and firing method for tunable devices in the microwave frequency band. In order to improve the ferroelectric properties, the MgO-enhanced BST composite thick films were asymmetrically annealed by a focused halogen beam method. The dielectric constants of the composite thick films changed from 1050 to 1300 at 100 kHz after 60 s and 150 s of annealing by the focused halogen beam. Even though no prominent changes were observed in the thick films before and after annealing in terms of chemical composition and surface morphology, it is clear that the average particle size of the thick films, calculated by Scherrer's formula, was increased by annealing. Furthermore, a strong correlation between particle size and dielectric constant of the composite thick films has been observed: the dielectric constant increases with increased particle size. This has been attributed to the increased volume of ferroelectric domains due to the increased particle sizes. As a result, the tuning range was improved by halogen beam annealing.
An Application of the Novel Techniques Detecting Partial Discharge Employable to GIS Using Optical Sensor
Ryu, Cheol-Hwi;Jung, Seung-Yong;Koo, Ja-Yoon;Yeon, Man-Seung 396
A novel technique has been proposed and related experimental work has been performed in order to detect partial discharges and the location of possible defects introduced into Gas Insulated Switchgear. For this purpose, a prototype HV Pockels sensor has been developed and then employed in order to investigate its field applicability for finding the location of defects using a 170 kV GIS mock-up. Our proposed sensor enables us to measure the electric field variation due to PD occurrence. In addition, different PD patterns are observed, which might be dependent on the location and the distance of the sensor with respect to the PD source. Throughout this work, its linear response as a function of the applied voltage has been proved to be admissible. The position of the PD source might also be distinguished by comparing the PD patterns.
Uncertainty Analysis in Potential Transformer Calibration Using a High Voltage Capacitance Bridge
Jung, Jae-Kap;Lee, Sang-Hwa;Kang, Jeon-Hong;Kwon, Sung-Won;Kim, Myung-Soo 401
Precise absolute measurement of the errors in a potential transformer (PT) can be achieved using a high voltage capacitance bridge (HVCB) and a capacitive divider. The uncertainty in a PT measurement using the HVCB system was evaluated by considering all factors affecting the calibration of a PT. The expanded uncertainties are found to be not more than $30{\times}10^{-6}$ for ratio and $30~\mu\mathrm{rad}$ for phase up to the primary voltage of $V_p=22~\mathrm{kV}$. For the same PTs, the errors measured at KRISS (Korea Research Institute of Standards and Science) using our bridge coincide well with those measured at NMIA (National Measurement Institute of Australia) and PTB (Physikalisch-Technische Bundesanstalt) within the corresponding uncertainties.
Design of a Smart Gas Sensor System for Room Air-Cleaner of Automobile (Thick-Film Metal Oxide Semiconductor Gas Sensor)
Kim, Jung-Yoon;Shin, Tae-Zi;Yang, Myung-Kook 408
It is almost impossible to secure the reproducibility and stability of a commercial thick-film metal oxide semiconductor gas sensor, since it is very difficult to keep the manufacturing environment consistent. Thus it is widely known that general semiconductor-oxide gas sensors are not appropriate for precise measurement systems. In this paper, an output-characteristic analyzer for the various thick-film metal oxide semiconductor gas sensors used to assess the air quality within an automobile is proposed and examined. The output characteristics analyzed in a normal air chamber are grouped by sensor rank and used to fill out a characteristic table of the thick-film metal oxide semiconductor gas sensors. The characteristic table is used to determine the rank of the sensor that is installed in the current air cleaner system of an automobile. The proposed air control system can also adopt on-demand operation that takes into account the history of the passenger's manual control.
Category Archives: Page 3 model
Page 3 model: Bees
by Eleanor Doman. Published on 23 October 2019.
If you model rabbits under ideal circumstances, you may find that the number of pairs of rabbits each month follows the Fibonacci sequence.
In this case, 'ideal circumstances' is a euphemism for nonsense, as your assumptions would include blatant untruths such as "rabbits mate once a month every month except their first month alive", "a pair of rabbits gives birth to exactly one pair of rabbits per month", and "the hutch is infinitely big (and hence Starsky is very squashed)".
Fibonacci numbers, however, are not completely absent from nature. They accurately describe a vastly superior animal: the honeybee.
Male bees (drones) come from unfertilised eggs, and so they only have one parent — the queen.
Female bees (workers or queens) come from fertilised eggs and so have two parents — the current queen and a drone.
If you follow a drone's family tree backwards, you will see that a drone has 1 parent, 2 grandparents, 3 great-grandparents, 5 great-great-grandparents, 8 great-great-great-grandparents, and so on.
The number of ancestors of a male bee follows the Fibonacci sequence.
Who would've expected that?!
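A minimal sketch that turns the two parentage rules above into code; counting ancestors generation by generation reproduces the Fibonacci numbers.

```python
# Count a drone's ancestors generation by generation.
# A drone (male) has 1 parent: a queen.
# A queen (female) has 2 parents: a queen and a drone.
def bee_ancestors(generations):
    males, females = 1, 0  # generation 0: the drone himself
    counts = []
    for _ in range(generations):
        # every male came from 1 female; every female from 1 male + 1 female
        males, females = females, males + females
        counts.append(males + females)
    return counts

print(bee_ancestors(8))  # [1, 2, 3, 5, 8, 13, 21, 34] -- Fibonacci
```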
Posted in Page 3 model | Tagged issue-10-fun
Page 3 model: Game of Thrones
by Eleanor Doman. Published on 14 March 2019.
Be warned: this article is dark and full of spoilers.
One of the best parts of getting into a series is getting to know and love the main characters. However, in Game of Thrones (or A Song of Ice and Fire, for you purists), this can be a heart-breaking activity. Who will survive to the end and who will bite the dust? No one knows, but perhaps maths can lend a hand.
Image: Andrew Beveridge
Andrew Beveridge and Jie Shan used network theory to investigate who the main characters of Game of Thrones are. The diagram above shows all the interactions between characters during the seventh series: the larger characters are more central, as determined by the PageRank algorithm.
However, it only takes one swing of an axe to drastically change the network…
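A toy version of the computation, assuming a made-up edge list rather than the real interaction data from the Beveridge-Shan study:

```python
# Rank characters by PageRank, as in the Beveridge-Shan analysis.
# The edge weights below are invented for illustration; the real study
# counts how often two character names appear close together in the text.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Jon", "Daenerys", 12), ("Jon", "Sansa", 9), ("Jon", "Arya", 5),
    ("Daenerys", "Tyrion", 11), ("Tyrion", "Cersei", 7),
    ("Cersei", "Jaime", 10), ("Sansa", "Arya", 8),
])

ranks = nx.pagerank(G, weight="weight")
for name, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {score:.3f}")
```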
Page 3 model: Ponytails
by Hugo Castillo Sánchez. Published on 18 October 2018.
The ponytail hairstyle is synonymous with comfort and simplicity: what was once considered a traditional schoolgirl style has nowadays become popular again thanks to clever styling. But trying to work out what shape someone's ponytail will be has puzzled scientists and artists since Leonardo da Vinci.
In 2012, scientists from the University of Cambridge and the University of Warwick developed the ponytail shape equation (PSE) to unravel some of the mysteries of the ponytail. Their model takes into account gravity ($g$), the elasticity of the hair, the presence of random curliness of hair, and an outward swelling pressure that arises from collisions between the component hairs (which explains how a bundle of hair swells).
This equation can be used to find $R$, the radius of the ponytail, in terms of $s$, the arc length along the ponytail. The length at which gravity bends the hair is $l$, $L$ is the length of the ponytail, $P$ is the pressure due to the hairband, $A$ is the bending modulus, and $\rho$ is the hair's density.
The Rapunzel number, $\text{Ra}$, of a ponytail is the ratio $L/l$. This dimensionless number determines the effect of gravity on hair. When $\text{Ra}<1$, the hair doesn't bend much, leading to a thin, straight ponytail. When $\text{Ra}>1$, the hair bends strongly under gravity leading to a wide, bushy ponytail.
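A tiny sketch of the classification, taking $l \approx 5$ cm as a representative bending length for human hair (an illustrative value, not a measurement):

```python
# Classify a ponytail by its Rapunzel number Ra = L / l.
def rapunzel(L_cm, l_cm=5.0):
    Ra = L_cm / l_cm
    shape = "thin and straight (Ra < 1)" if Ra < 1 else "wide and bushy (Ra > 1)"
    return Ra, shape

for L in (3, 25):
    Ra, shape = rapunzel(L)
    print(f"L = {L:2d} cm: Ra = {Ra:.1f} -> {shape}")
```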
The relevance of this equation is that it could help in understanding the structure of materials made up of fiber and depicting hair realistically in animation and video games. But most importantly, if you want to look good at a party or a maths conference, simply calculate your Rapunzel number and pop on a hairband that exerts the correct pressure.
RE Goldstein, PB Warren, RC Ball, Shape of a Ponytail and the Statistical Physics of Hair Fiber Bundles, Physical Review Letters, 108, 078101, (2012).
RE Goldstein, (2016, September 11), Leonardo, Rapunzel and the Mathematics of Hair.
Page 3 model: Frictional unemployment
If I had a pound for every time someone assumed I studied maths because I wanted to be an economist without writing essays, I'd have enough to make it worth following the stock market. However, once the indignation fades, I can see the attraction—there are a lot of interesting uses of mathematics in economics. One of the most basic, yet most important, is modelling unemployment.
Unemployment might be caused by too few jobs in an area. Or, it may also be due to a lack of information being provided to employers or potential workers: there may be perfectly good jobs available that qualified workers simply don't know about. This sort of unemployment is called frictional unemployment.
We split the labour force $L$ into two separate populations: employed ($N$) and unemployed ($U$). We then define $s$ to be the rate at which employed people lose their jobs and $f$ to be the rate at which unemployed people find work.
The rate of change in unemployment is:
\begin{align*}
\frac{\text{d} U}{\text{d}t}&=\text{number becoming unemployed} -\text{number entering work}\\
&=sN(t)-fU(t)
\end{align*}
If we assume that the total size of the labour force is constant, then this leads us to:
$$\frac{\text{d}u}{\text{d}t}+(s+f)u=s,$$
where $u$ is the proportion of the labour force that is unemployed. A lovely first order ODE, which can be solved using the integrating factor method (an exercise left for the reader). Simple enough that even an economist would understand!
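Here is the reader's exercise done numerically, with purely illustrative monthly rates; the integrating-factor solution is $u(t) = u^* + (u_0 - u^*)\mathrm{e}^{-(s+f)t}$ with steady state $u^* = s/(s+f)$.

```python
# Solve du/dt + (s + f) u = s via the integrating-factor solution.
# The rates below are illustrative, not calibrated to any real economy.
import math

s, f, u0 = 0.02, 0.30, 0.10   # separation rate, job-finding rate, initial unemployment
u_star = s / (s + f)

for t in (0, 5, 10, 50):
    u = u_star + (u0 - u_star) * math.exp(-(s + f) * t)
    print(f"t = {t:2d}: u = {u:.4f}")

print(f"steady state u* = {u_star:.4f}")
```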
Page 3 model: Crowd control
by Sean Jamshidi. Published on 18 October 2017.
Being part of a crowd is something that we all have to experience from time to time. Whether it's in a busy shop or commuting to work, the feeling of being swept along by those around us is all too familiar. The ubiquity of the situation, and the huge amount of data available from CCTV footage, makes crowd dynamics a favourite subject for mathematical modelling.
One popular method is known as the social force model, which applies Newton's second law to each member of the crowd. Each individual accelerates to maintain their 'desired velocity', and this is balanced against forces from physical obstacles as well as the social force that maintains polite distance between people—a mathematical interpretation of personal space!
Lanes naturally form when people walk in opposite directions. Image: Dirk Helbing and Peter Molnar
Huge simulations of up to a million pedestrians have been run, which show the model's remarkable powers. If groups of people want to travel in opposite directions along a bridge, for example, lanes of alternating direction naturally form to minimise "bumping".
When two crowds meet at a gap, the walking direction oscillates. Image: Dirk Helbing and Peter Molnar
Some of the results are more unexpected. For example, if people try and move too fast then it can actually slow them down via an increase in 'friction' that results from pushing. Further, it can be shown that two narrow doors are a more effective way of leaving a room than one big door, so putting a bollard in the middle of an exit actually speeds people up!
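A toy two-pedestrian step of the model, with made-up parameters (real implementations calibrate these against crowd footage):

```python
# One social-force time step: each pedestrian accelerates towards a
# desired velocity and is pushed away from others by an exponentially
# decaying "social" force. All parameter values are illustrative.
import numpy as np

def step(pos, vel, desired, dt=0.05, tau=0.5, A=2.0, B=0.3):
    force = (desired - vel) / tau              # relax towards desired velocity
    for i in range(len(pos)):                  # pairwise social repulsion
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            if r > 1e-9:
                force[i] += A * np.exp(-r / B) * d / r
    vel = vel + dt * force                     # Newton's second law, unit mass
    return pos + dt * vel, vel

pos = np.array([[0.0, 0.0], [2.0, 0.05]])      # two walkers, slightly offset
vel = np.zeros((2, 2))
desired = np.array([[1.0, 0.0], [-1.0, 0.0]])  # heading towards each other
for _ in range(40):
    pos, vel = step(pos, vel, desired)
print(pos)                                      # they deflect and pass each other
```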
Still, not much solace when you're stuck in a Christmas scramble at Woolworths…
Helbing D and Molnar P (1997). Self-organization phenomena in pedestrian crowds. In: Schweitzer F (ed.) From individual to collective dynamics, 569–577.
Page 3 model: Bread
Bread is a staple of many diets. From delicious garlic bread to crunchy pizza, it's enjoyed throughout the world. But have you ever wondered what mathematics lies just beneath the crust? Thankfully DR Jefferson, AA Lacey and PA Sadd at Heriot-Watt University have! No? Well, we're going to tell you anyway.
Bread dough is initially a bubbly liquid, with bubbles connected to other bubbles in a 'matrix'. These bubbles will collapse, provided that both the temperature and temperature gradient are high enough. To start with, the bubbles at the surface (which is hotter than the interior) reach a temperature at which they are likely to fracture. At this point, the temperature gradient is also high, with plenty of cooler liquid dough nearby. However, when the temperature of the interior has increased sufficiently to allow the bubbles inside to burst, the temperature gradient is much lower, the matrix has set, there is less liquid dough nearby, and so less collapse can take place.
But that's not all! We can refine the model by considering the movement of the 'crust boundary' (where bubbles collapse) as the dough rises, as well as the vaporisation of moisture inside the bubbles. Both of these allow for the transfer of heat and affect the thermodynamics of the whole process.
So in the future, please try to remember all the maths that worked hard to ensure the crustiness of your bread! And, on that note, we're off to get pizza…
Jefferson DR, Lacey AA & Sadd PA 2007 Crust density in bread baking: Mathematical modelling and numerical solutions. Applied Mathematical Modelling 31 (2) 209–225.
Jefferson DR, Lacey AA & Sadd PA 2007 Understanding crust formation during baking. Journal of Food Engineering 75 (4) 515–521.
Posted in Page 3 model | Tagged Bread, issue-05-fun, model, Page 3
Page 3 model: Hallucinations
by Chalkdust. Published on 3 October 2016.
You might think that maths and psychedelic hallucinations tend not to mix very well. But you would be mistaken! There are a series of visual hallucinations known as form constants that are highly geometric, and a mathematical model of them has provided us with some fascinating insight into how our visual cortex (the part of the brain that processes the information we receive from our eyes) works.
These hallucinations were first observed in patients who had taken mescaline, a psychedelic drug produced from a cactus found in South America. Form constants have subsequently been reported in a number of other altered states such as sensory deprivation, waking/falling asleep states, near death experiences and by individuals with synaesthesia. Some people even report seeing these patterns after closing their eyes and applying firm pressure to both eyelids for a few seconds!
The mathematical model we referred to was described in a paper by Bressloff et al., and is based on anatomical features of our brain. It seems that the visual cortex has certain symmetry properties, such as reflective, translational and even a novel shift-twist symmetry. Its electrical activity can be represented mathematically and, after a bit of group theory, some eigenvectors and a couple of transformations, the resulting equations have steady state solutions that are remarkably similar to the observed hallucinogenic experiences. Groovy!
Disclaimer: Chalkdust does not advocate pressing hard on your eyelids.
[Written in collaboration with Samuel Mills. Pikachu adapted from picture by Matt Levya, CC BY 2.0; Hallucination pictures taken with kind permission from PC Bressloff, JD Cowan, M Golubitsky, PJ Thomas and MC Wiener, What geometric visual hallucinations tell us about the visual cortex, Neural Computation 14(3) (2002), 473–91.]
Page 3 model: Traffic flow
by Chalkdust. Published on 13 March 2016.
Have you ever reached the start of a traffic jam fearing the worst—road works, an accident, a fallen tree—to later discover no clear reason for the delay? Then you fell victim to one of the strangest traffic phenomena: the phantom traffic jam. This ghostly foe may seem supernatural, but can actually be predicted through the theory of shockwaves.
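One way to make the shockwave statement concrete, assuming the standard kinematic-wave (LWR) description that the article doesn't spell out: a jump between two traffic states moves at the Rankine-Hugoniot speed, and for a jam that speed is negative, so the front creeps backwards along the road.

```python
# LWR shockwave speed c = (q2 - q1) / (k2 - k1).
# Numbers are illustrative: flow q in vehicles/hour, density k in vehicles/km.
def shock_speed(k1, q1, k2, q2):
    return (q2 - q1) / (k2 - k1)

free_flow = (20.0, 1800.0)   # light traffic
jam       = (90.0, 900.0)    # dense, slow traffic

c = shock_speed(*free_flow, *jam)
print(f"shock speed = {c:.1f} km/h")  # negative: the jam front moves upstream
```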
Page 3 model: The Duckworth–Lewis method
If there are two things that typify an English summer, they are cricket and rainy days. Unfortunately, the two very often come together, which makes it very difficult to decide who should win a limited overs cricket match when rain stops play.
In these cases, a statistical model known as the Duckworth–Lewis method, devised by statistician Frank Duckworth and mathematician Tony Lewis, settles the issue (and provokes copious debate amongst Lord's Long Room members as they sip their champagne).
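The official Duckworth–Lewis tables are proprietary, but the published functional form of the resource curve is $Z(u,w) = Z_0(w)\,(1 - \mathrm{e}^{-b(w)u})$, where $u$ is overs remaining and $w$ is wickets lost; here is a sketch with purely illustrative parameters.

```python
# Shape of the Duckworth-Lewis resource function. The values of Z0 and b
# below are purely illustrative, not the official parameters.
import math

Z0 = {0: 280.0, 5: 130.0}   # asymptotic run-scoring potential
b  = {0: 0.035, 5: 0.07}    # how quickly resources saturate

def resource(u, w):
    return Z0[w] * (1 - math.exp(-b[w] * u))

full = resource(50, 0)      # resources of a full 50-over innings, no wickets down
for overs_left, wickets in [(50, 0), (25, 0), (25, 5)]:
    pct = 100 * resource(overs_left, wickets) / full
    print(f"{overs_left} overs left, {wickets} wickets lost: {pct:.0f}% resources")
```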
Page 3 model: When zombies attack
Run? Kill? Ask nicely? (Source: flickr.com/joelf)
Every issue we feature another great model on page 3 of our magazine. This issue it's this:
\begin{align*}
S' &= \Pi - \beta S Z - \delta S,\\
Z' &= \beta S Z + \zeta R - \alpha S Z,\\
R' &= \delta S + \alpha S Z - \zeta R.
\end{align*}
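A quick numerical integration of the model (parameter values are illustrative, not the ones from the original article):

```python
# Integrate the S-Z-R system above with scipy.
import numpy as np
from scipy.integrate import solve_ivp

Pi, beta, delta, zeta, alpha = 0.0, 0.0095, 0.0001, 0.05, 0.005

def szr(t, y):
    S, Z, R = y
    return [Pi - beta * S * Z - delta * S,
            beta * S * Z + zeta * R - alpha * S * Z,
            delta * S + alpha * S * Z - zeta * R]

sol = solve_ivp(szr, (0, 30), [500.0, 1.0, 0.0])
S, Z, R = sol.y[:, -1]
print(f"after 30 days: S = {S:.0f}, Z = {Z:.0f}, R = {R:.0f}")
```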
Posted in Page 3 model | Tagged fun, issue-01-fun, models, zombies | CommonCrawl |
Osama Khalil
Department of Mathematics, Ohio State University, 231 W 18th Ave., Columbus, OH 43210, USA
Received January 2018 Revised July 2018 Published November 2018
We study the problem of rigidity of closures of totally geodesic plane immersions in geometrically finite manifolds containing rank 1 cusps. We show that the key notion of K-thick recurrence of horocycles fails generically in this setting. This property played a key role in the recent breakthroughs of McMullen, Mohammadi and Oh. Nonetheless, in the setting of geometrically finite groups whose limit sets are circle packings, we derive two density criteria for non-closed geodesic plane immersions, and show that closed immersions give rise to surfaces with finitely generated fundamental groups. We also obtain results on the existence and isolation of proper closed immersions of elementary surfaces.
Keywords: Geodesic planes, geometrically finite manifolds, unipotent flows.
Mathematics Subject Classification: 22F30, 37A17, 51M10.
Citation: Osama Khalil. Geodesic planes in geometrically finite manifolds. Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 881-903. doi: 10.3934/dcds.2019037
R. Benedetti and C. Petronio, Lectures on Hyperbolic Geometry, Universitext, Springer-Verlag, Berlin, 1992. doi: 10.1007/978-3-642-58158-8.
B. H. Bowditch, Geometrical finiteness for hyperbolic groups, Journal of Functional Analysis, 113 (1993), 245-317. doi: 10.1006/jfan.1993.1052.
F. Dal'bo, Topologie du feuilletage fortement stable, Annales de l'Institut Fourier, 50 (2000), 981-993.
P. Eberlein, Geodesic flows on negatively curved manifolds, I, Ann. of Math. (2), 95 (1972), 492-510. doi: 10.2307/1970869.
R. L. Graham, J. C. Lagarias, C. L. Mallows, A. R. Wilks and C. H. Yan, Apollonian circle packings: Number theory, J. Number Theory, 100 (2003), 1-45. doi: 10.1016/S0022-314X(03)00015-5.
L. Keen, B. Maskit and C. Series, Geometric finiteness and uniqueness for Kleinian groups with circle packing limit sets, J. Reine Angew. Math., 436 (1993), 209-219.
G. A. Margulis, Indefinite quadratic forms and unipotent flows on homogeneous spaces, Banach Center Publications, 23 (1989), 399-409.
F. Maucourant and B. Schapira, On topological and measurable dynamics of unipotent frame flows for hyperbolic manifolds, arXiv e-prints, February 2017.
C. T. McMullen, A. Mohammadi and H. Oh, Geodesic planes in hyperbolic 3-manifolds, Inventiones Mathematicae, 209 (2017), 425-461. doi: 10.1007/s00222-016-0711-3.
C. T. McMullen, A. Mohammadi and H. Oh, Horocycles in hyperbolic 3-manifolds, Geometric and Functional Analysis, 26 (2016), 961-973. doi: 10.1007/s00039-016-0373-8.
H. Oh and N. Shah, The asymptotic distribution of circles in the orbits of Kleinian groups, Inventiones Mathematicae, 187 (2012), 1-35. doi: 10.1007/s00222-011-0326-7.
M. Ratner, Raghunathan's topological conjecture and distributions of unipotent flows, Duke Math. J., 63 (1991), 235-280. doi: 10.1215/S0012-7094-91-06311-8.
N. Shah, Closures of totally geodesic immersions in manifolds of constant negative curvature, Group Theory from a Geometrical Viewpoint (Trieste, 1990), World Scientific, (1991), 718-732.
Figure 1. Apollonian circle packing (solid). Inversions through dual circles (dashed) generate a geometrically finite group containing rank-$1$ parabolic subgroups
Figure 2. Proof of Lemma 3.2
Why doesn't the bidentate ligand form dative bonding three times?
I was looking at this question where it asked you to work out the final complex formed when an excess of 1,2-diaminoethane is added to and aqueous solution of copper (II) sulphate.
So the equation I worked out would have been: $$\ce{[Cu(H2O)6]^2+ + 3(NH2CH2CH2NH2)-> [Cu(NH2CH2CH2NH2)3]^2+ + 6H2O}.$$
Therefore I worked out the complex to be: $$\ce{[Cu(NH2CH2CH2NH2)3]^2+}$$
However the answer was : $$\ce{[Cu(NH2CH2CH2NH2)2(H2O)2]^2+}$$
coordination-compounds transition-metals
Viv
Jahn-Teller effect – orthocresol♦ May 29 '16 at 11:02
@orthocresol What other transition metals would show this effect? (Is it all elements in row 11, or is this effect unique to copper(II)?) – Viv May 29 '16 at 11:31
Look at the Wikipedia page, it has a good explanation… Ag(II) and Au(II) are very rare anyway – orthocresol♦ May 29 '16 at 11:53
Orthocresol already pointed you towards the Jahn-Teller effect. The effect is there in $\ce{[Cu(H2O)6]^2+}$, the hexaaquacopper(II) cation, but more pronounced in the tetraamminediaquacopper(II) cation $\ce{[Cu(NH3)4(H2O)2]^2+}$. Compare an image of the complex as shown below:
Even the naked eye can see that the copper-oxygen bonds are much longer than the copper-nitrogen ones. Measuring the distances in crystal structures containing that cation gives $203~\mathrm{pm}$ for the $\ce{Cu-N}$ bonds and $251~\mathrm{pm}$ for $\ce{Cu-O}$.[1] Why is this the case? To answer that we need to refer to the molecular orbital scheme of a typical octahedral complex:[2]
The three lowest groups of orbitals containing six MOs in total are typically populated by the ligand lone pairs which form the coordinate bond. The next set of two groups — five orbitals in total — are the metal d-orbitals that can (or cannot, in the case of $\mathrm{t_{2g}}$) interact with the ligands. For copper(II), we need to fill in nine metal d-electrons, meaning that the $\mathrm{t_{2g}}$ level is fully populated while $\mathrm{e_g}$ has three electrons in two orbitals — a degenerate state.
This degenerate state is not energy-optimal. Energy could be released if we somehow manage to unequalise the $\mathrm{e_g}$ orbitals, give them different energies and have the lower-energy one be fully populated. Remember that the $\mathrm{e_g}$ orbitals are $\mathrm{d}_{x^2-y^2}$ and $\mathrm{d}_{z^2}$. In a Gedankenexperiment, remove the two ligands in the $z$-direction symmetrically away from the metal centre. Any orbital that has a $z$ contribution should now be stabilised — most notably, that includes $\mathrm{d}_{z^2}$. The orbital scheme looks slightly different, the complex's symmetry is no longer $O_\mathrm{h}$ but $D_\mathrm{4h}$, but we have gained energy by being able to differentiate between the $\mathrm{d}_{x^2-y^2}$ and $\mathrm{d}_{z^2}$ orbitals. This is the Jahn-Teller effect.
Consequences of the effect include that ammine ligands preferentially coordinate copper(II) equatorially, and that the tetraamminediaqua complex is the preferred one at high aqueous ammonia concentrations. Hexaamminecopper(II) can be prepared only in liquid ammonia, to the best of my knowledge.
Now let's consider using $\ce{en}$ (ethylenediamine) rather than $\ce{NH3}$. The bridge between the two amino groups is long enough to bridge across one edge of the octahedron but not any further. Replacing all the equatorial ligands with $\ce{en}$ makes sense because we will get another very symmetric complex $\ce{[Cu(en)2(H2O)2]^2+}$, and because these are the ones that are replaced first. If we now wanted to replace the two other water molecules, we would create a rather distorted $\ce{[Cu(en)3]^2+}$ complex, where two $\ce{en}$ molecules have one short, strong and one long, weak bond. This is an unfavourable condition, since the former shorter, stronger bonds have to be weakened to get there. In short, $\ce{en}$ behaves like ammonia and only occupies the equatorial positions, not the axial ones. A consequence again of Jahn-Teller.
[1]: Data taken from Professor Klüfers' internet scriptum for his general and inorganic chemistry course at the LMU Munich (section 23.4).
[2]: First presented in this answer and originally taken from Professor Klüfers' coordination chemistry course.
C. Z. Jiang and X. C. Xiao, "Norm-based adaptive coefficient ZNN for solving the time-dependent algebraic Riccati equation," IEEE/CAA J. Autom. Sinica, vol. 10, no. 1, pp. 298–300, Jan. 2023. doi: 10.1109/JAS.2023.123057
Norm-Based Adaptive Coefficient ZNN for Solving the Time-Dependent Algebraic Riccati Equation
Chengze Jiang and Xiuchun Xiao
Corresponding author: The authors are with the School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang 524088, China (e-mail: [email protected]; [email protected])
Figure 1. Experimental results of simulations. (a) Comparison of the residual error $\|E(t)\|_F$ between different models in the noise-free case; (b) the logarithm of the residual error $\|E(t)\|_F$.
Figure 2. Comparison among different parameter values adopted for the NACZNN model (3).
Figure 3. Comparison of the robustness of different models. (a) Residual error $\|E(t)\|_F$ of models perturbed by the constant noise $\Theta = [2]^{4\times 1}$; (b) residual error $\|E(t)\|_F$ of models perturbed by the time-varying noise $\Theta(t) = t\times[2]^{4\times 1}$.
Effects of subsampling on characteristics of RNA-seq data from triple-negative breast cancer patients
Alexey Stupnikov1,
Galina V Glazko2 &
Frank Emmert-Streib1,3
Chinese Journal of Cancer volume 34, Article number: 36 (2015)
Data from RNA-seq experiments provide a wealth of information about the transcriptome of an organism. However, the analysis of such data is very demanding. In this study, we aimed to establish robust analysis procedures that can be used in clinical practice.
We studied RNA-seq data from triple-negative breast cancer patients. Specifically, we investigated the subsampling of RNA-seq data.
The main results of our investigations are as follows: (1) the subsampling of RNA-seq data gave biologically realistic simulations of sequencing experiments with a smaller sequencing depth, whereas a direct scaling of count matrices did not; (2) the saturation of results required an average sequencing depth larger than 32 million reads and an individual sequencing depth larger than 46 million reads; and (3) for an abrogated feature selection, higher moments of the distribution of all expressed genes had a higher sensitivity for signal detection than the corresponding mean values.
Our results reveal important characteristics of RNA-seq data that must be understood before one can apply such an approach to translational medicine.
In recent years, next-generation sequencing technology for generating RNA-seq data has gained considerable interest [1–4] in the biological [5, 6] and biomedical literature [7, 8]. Such data are frequently used, e.g., for identifying alternative splicing, finding differentially expressed genes, or detecting differentially expressed pathways [9–14]. The conventional analysis pipeline for RNA-seq data first maps the reads to genes for a given annotation, resulting in a high-dimensional count vector for each sample. Thereafter, these integer count vectors are normalized and further processed with statistical inference methods. Altering parameters of the preprocessing steps, e.g., aligning procedure, summarization of reads, choice of annotation, and normalization techniques, can change the output of a gene expression analysis drastically. This effect has been studied for different normalization procedures [15].
So far, a major focus has been placed on methods for identifying differentially expressed genes from RNA-seq data [16–18] because such analysis methods that are simpler than, e.g., network-based approaches yet provide meaningful insights into the basic biological functioning of different physiological conditions. Some of these methods assume that the count distribution of individual genes follows a Poisson distribution, whereas others assume a negative binomial distribution for their model. Interestingly, it has been argued that the negative binomial distribution does not perform well under specific conditions [18].
In this study, we carried out an analysis of RNA-seq count distributions for two biological conditions: triple-negative breast cancer (TNBC) samples and TNBC-free samples. The TNBC-free samples corresponded to the same cell types as TNBC samples but were from normal tissue; they formed a control group. For each biological sample, we repeatedly performed a subsampling of mapped reads and thus simulated new samples with a different sequencing depth. For these surrogate gene expression data sets, we studied and compared a variety of properties of their RNA-seq count distributions. We describe the biological data we used for our analysis and the preprocessing steps we applied, and we introduce a procedure, Depth of Sequencing Iterative Reduction Estimator (DESIRE), for subsampling RNA-seq data.
The whole data set consists of 6 groups, including a total of 168 samples [19]. We randomly selected four samples of TNBC tumors from the primary tumor group and four samples of healthy breast tissues from TNBC-free group. This selection allowed us to estimate the main statistical entities under investigation. Other samples were not considered in our analysis.
Data preprocessing
To use RNA-seq data for a gene expression analysis, certain preprocessing steps must be performed. These include alignment of reads, count matrix computation, and normalization.
After the data were extracted from The Sequence Read Archive [20], we performed the alignment with Bowtie 2 [21] allowing 1 mismatch; human genome version hg38 [22] was taken as the most recent reference version available at the time the analysis was conducted. To obtain a count vector for a sample (i.e., the number of reads mapped to a gene for all genes), we used the featureCounts function available from the Rsubread package for the R language [23]. During this procedure, the total number of fragments mapped to particular gene positions was summarized. We followed the steps usually implemented for differential gene expression analysis, so various gene isoforms were not of interest. We focused on the gene level for the summarization, not the exon level. The overall process is shown in Fig. 1.
Preprocessing steps of our analysis. Short reads as provided in Fastq files are aligned with Bowtie 2, resulting in Sam files. Application of our method Depth of Sequencing Iterative Reduction Estimator (DESIRE) extracts a defined subsample of size f, resulting in updated Sam files. Finally, featureCounts, a function from the Rsubread package for R, is applied to obtain the count vector for one sample.
In recent years, a number of different normalization methods have been suggested for the modification of the integer counts for the genes [15]. We preferred "counts per million" (CPM), defined by
$$c_{i} = \frac{{N_{i} \times 10^{6} }}{{N_{\text{lib}} }}$$
over "reads per kilobase per million" (RPKM) [24], given by
$$c_{i} = \frac{{N_{i} \times 10^{6} }}{{N_{\text{lib}} \times L_{i} }}.$$
Here, $i$ corresponds to the index of a gene; $N_{i}$ is the number of integer counts (reads) for gene $i$; $N_{\text{lib}}$ is the total number of reads in the library, i.e., the total number of reads per sample; and $L_{i}$ is the length of an exon (in kilobases).
$$N_{\text{lib}} = \sum\limits_{i} {N_{i} }$$
When choosing CPM, we followed the argument reported in [18], as the relative difference in expression levels between conditions was the matter of interest.
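As an illustration, the following minimal Python sketch computes the two normalizations defined above; the count vector and gene lengths are toy values chosen for demonstration only, not data from this study:

```python
import numpy as np

def cpm(counts):
    # counts per million: scale each count by the library size (total reads)
    return counts * 1e6 / counts.sum()

def rpkm(counts, lengths_kb):
    # reads per kilobase per million: CPM further divided by exon length (kb)
    return cpm(counts) / lengths_kb

counts = np.array([120.0, 0.0, 35.0, 980.0, 4500.0])  # toy counts for 5 genes
lengths_kb = np.array([2.1, 0.8, 1.5, 3.0, 0.9])      # toy exon lengths

print(cpm(counts))
print(rpkm(counts, lengths_kb))
```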
Depth of sequencing iterative reduction estimator (DESIRE)
It is commonly accepted that the depth of the sequencing can affect the results of an analysis [25–28]. However, these papers considered only results of a bioinformatics analysis and did not study the details of the count distributions. Another example is the study that addressed the question of the optimal sequencing depth [29].
To study the influence of the sequencing depth on a gene expression analysis, we developed a resampling procedure based on the subsampling of the data. By subsampling, we used only a fraction, f, of the total amount of available data in a systematic manner [30]. Another name for such a procedure used in the literature is m out of n bootstrap, whereas m < n and the bootstrap samples are drawn without replacement [31]. Our procedure, DESIRE, has the following underlying ideas.
For each biological sample, we drew a number of replicates of a smaller sequencing depth. To accomplish this, a particular portion, f, of reads, ranging from 10 to 90%, was randomly drawn from a biological sample without replacement. This process was repeated R times, resulting in R simulated replicates for one simulated sequencing depth f. For our analysis, we used R = 24, resulting in a total of 240 subsampled data sets for a single biological sample for the 10 different sequencing depths, $f \in \{0.1, \dots, 0.9, 1.0\}$.
The specific value of R is not crucial. However, if it is large, the computational complexity would increase without resulting in significant improvements in the statistical estimates of our analysis. On the other hand, values of R much lower than 24 potentially result in unstable results. The particular number of R = 24 considered the number of nodes in our computer cluster available for our analysis.
A schematic overview of the DESIRE procedure is shown in Figs. 2 and 3. It is important to note that the simulated sequencing depth, f, refers to all reads of the genome and not to the reads of a single gene. In this way, DESIRE simulates actual biological experiments conducted for a smaller sequencing depth. If we drew a fraction f of the reads for each gene independently, the resulting samples would not correspond to results produced by next-generation sequencing technology, e.g., on an Illumina platform.
Overview of the DESIRE procedure.
Generation of R replicates for a given sequencing depth using only a fraction, f, of the original data in a biological sample. Hence, each of the R generated data sets is a subsample of the original biological sample.
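A minimal sketch of the core resampling step is given below; it assumes the aligned reads of one Sam file are already loaded as a list of records (the file handling of the actual DESIRE implementation is omitted, and the function and variable names are ours):

```python
import random

def desire_subsample(aligned_reads, f, R, seed=0):
    """Draw R replicates, each containing a fraction f of the aligned reads,
    sampled without replacement to mimic a smaller sequencing depth."""
    rng = random.Random(seed)
    n_keep = int(f * len(aligned_reads))
    return [rng.sample(aligned_reads, n_keep) for _ in range(R)]

# toy usage: each element stands for one alignment record of a Sam file
reads = [f"read_{i}" for i in range(1000)]
replicates = desire_subsample(reads, f=0.3, R=24)
print(len(replicates), len(replicates[0]))  # 24 replicates, 300 reads each
```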
We calculated the count vectors using Entrez annotation from Bioconductor, database org.Hs.eg.db 2.9.0, which consisted of 23,648 (protein-coding and noncoding) genes [32].
The purpose of our study was to learn about the influence of the sequencing depth on inferred biological results. For this reason, we investigated 4 layers of complexity. First, we compared differences between an explicit subsampling of reads and a direct scaling of count matrices. The results from this analysis demonstrated that a subsampling via DESIRE was necessary to obtain realistic surrogates of sequencing experiments with a smaller sequencing depth. Second, we studied the absolute expression of genes and their growth. Third, we investigated the growth rate of the number of expressed genes. Fourth, we analyzed differences in the distributional shape of expressed genes between TNBC patients and TNBC-free patients. For each of these analysis steps, we used data generated by the DESIRE procedure.
Differences between subsampling of reads and direct scaling of count matrices
Our first analysis investigated differences between a subsampling of reads via the DESIRE procedure and a direct scaling of count matrices. The results of this analysis justified our approach for the following sections.
The basic idea of DESIRE is to draw randomly aligned reads, as provided by a Sam file, and create a new auxiliary Sam file corresponding to a new sequencing experiment with a smaller sequencing depth. We compared this with a direct scaling of count matrices, whereas the scaling was obtained by multiplying the components of the count matrices, $c_{ij}$, with a constant factor f that corresponds to the simulated sequencing depth because
$$\frac{\text{Total number of scaled counts}}{\text{Total number of counts}} = \frac{\sum_{i,j} f \times c_{ij}}{\sum_{i,j} c_{ij}} = f$$
Hence, this simple scaling of a count matrix resulted in the desired simulated sequencing depth for a sample.
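The two approaches can be contrasted at the gene level with a small sketch. Real subsampling acts on reads; as a gene-level approximation of it we draw a fraction f of the total reads without replacement via a multivariate hypergeometric distribution, whereas direct scaling simply multiplies the counts by f (the count vector below is simulated toy data, not from this study):

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.negative_binomial(2, 0.001, size=20000)  # toy count vector
f, theta = 0.3, 10

scaled = f * counts  # direct scaling of the count matrix

# gene-level approximation of read subsampling: draw f of all reads,
# distributed over genes according to the original counts
subsampled = rng.multivariate_hypergeometric(counts, int(f * counts.sum()))

print("expressed genes (scaled):    ", int((scaled >= theta).sum()))
print("expressed genes (subsampled):", int((subsampled >= theta).sum()))
```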
For one TNBC-free sample (SRR1313211), the difference between counts obtained via our DESIRE procedure and the direct scaling method of count matrices is shown in Fig. 4. Specifically, the number of expressed genes (Y axis), depending on the sequencing depth f (X axis), for different values of a threshold parameter is presented in Fig. 4. By the number of expressed genes, we meant the number of genes that have a short read count $c_{ij}$ of $\Theta \in \{1, 10, 50, 100\}$ or larger, i.e., $c_{ij} \geq \Theta$, where $\Theta$ is the threshold parameter. All results are for raw count values, not normalized values, and each dot corresponds to the result from one data set.
Comparison of the subsampling of reads via the DESIRE procedure (blue) and a direct scaling of count matrices (red). The obtained numbers of expressed genes depending on the sequencing depth for four different threshold parameters (1, 10, 50, 100) are shown.
For all threshold values and all sequencing depths that we investigated, there were distinct differences between the two approaches (Fig. 4). Similar results were also observed in other patient samples (not shown). From these results, we concluded that the computationally efficient shortcut via a direct scaling of count matrices did not lead to the same results as the DESIRE procedure. Hence, the scaled count matrices did not correspond to sequencing experiments with a smaller sequencing depth but had an unclear biological interpretation. For this reason, the DESIRE procedure needs to be used for simulating realistic sequencing experiments because only in this way do the resulting data have a clear interpretation in biological terms. In the following sections, we used the DESIRE procedure for this purpose.
We would like to note that neither our statistic, the number of expressed genes, nor the specific threshold $\Theta$ was crucial for our conclusion; other statistics led to similar results. For our following analysis, it was important only that there was a difference but not how each individual measure was affected. However, for particular measures that are used, e.g., as test statistics for hypothesis tests or as distance metrics for clustering, it might be interesting to quantify these differences more specifically.
Absolute expression of genes
In this analysis, we studied the influence of the sequencing depth on the number of expressed genes. The results for a TNBC-free patient (SRR1313211) and a TNBC patient (SRR1313133), exemplary for all samples studied, are shown in Fig. 5; the number of expressed genes (Y axis), depending on the sequencing depth f (X axis) for different values of a threshold parameter $\Theta \in \{1, 10, 50, 100\}$, are also presented. All results are for raw count values, not normalized values, and for each sequencing depth f, we generated R = 24 subsampled data sets for which box plots are shown.
Triple-negative breast cancer (TNBC)-free sample SRR1313211 and TNBC sample SRR1313133. The number of expressed genes for different filtering thresholds (1, 10, 50, 100) depends on the sequencing depth. The blue curves correspond to fitted Gompertz functions. All results are for raw (unnormalized) count values.
The first impression of the overall behavior was intuitive: the larger the sequencing depth, the higher the probability of obtaining at least $\Theta$ reads for a gene, if it was expressed. Less intuitive was the fact that for all samples and all thresholds, there was no saturation in the number of expressed genes; this number continued to grow, which suggests that even the maximally available sequencing depth was not sufficient to achieve a saturation of the measurements. In addition, this pointed to possible errors in either the sequencing or the alignment of reads because it was biologically implausible to assume that almost all 23,648 genes considered by our analysis were actually expressed for $\Theta = 1$ (Fig. 5). This may open the possibility to quantify such errors statistically.
From the obtained results in Fig. 5 and the results from 6 further samples that looked qualitatively similar (not shown), we attempted to estimate the optimal sequencing depth in the following two ways using the available sequencing depth of the samples used for our analysis (TNBC samples: 34974017, 46677107, 17574408, and 24440340; TNBC-free samples: 25900791, 43454785, 31426867, and 33517581). Estimator (I)—average sequencing depth: the first estimator centers on average properties of our samples. Given that the average number of short reads per sample was 32,245,737 ± 9,710,593 (averaged over the 8 samples) and the fact that none of the growth curves saturated, we estimated that the average number of reads necessary for a saturation must be larger than 32,245,737. Estimator (II)—individual sequencing depth: the second estimator centers on the individual samples. The largest sequencing depth of our samples was 46,677,107, and even this sample did not lead to a saturating growth. Hence, a conservative estimate requires an individual sequencing depth larger than 46,677,107.
The variability of all results, e.g., the interquartile range (IQR) of the box plots, was in general quite small. However, for larger $\Theta$ values, the IQR was even further decreased, which showed that the estimation of the number of expressed genes was even more stable for larger expression threshold values, corresponding to a more stringent filtering for expressed genes.
For a quantitative comparison between the TNBC and TNBC-free patient samples, we compared the mean of median values of the number of expressed genes, for different sequencing depths f, to test the null hypothesis:
$$H_{0|f}: \text{mean}(\text{median}_{\text{TNBC}|f}) = \text{mean}(\text{median}_{\text{TNBC-free}|f})$$
by a two-sample t test. Each comparison was based on 4 samples per condition. Here, for instance, $\text{median}_{\text{TNBC}|f}$ indicates the conditional median value of TNBC patients, conditioned on the sequencing depth f. The other conditional symbols have a similar meaning.
The results of these hypothesis tests are shown in Table 1. For a significance level of α = 0.05, only one result for a left-sided test was significant, for f = 0.1. However, all other P values from the left-sided comparison were approximately 5%, indicating a tendency to differ, though not significantly. This is plausible because we know that the samples from TNBC and TNBC-free patients corresponded to two different physiological conditions but that these differences affected some, but not all, biological processes, e.g., the hallmarks of cancer [33]. Hence, if samples are compared as a whole, as in our case, using only the mean of the medians of the number of expressed genes as a test statistic and not adjusting for different types of biological processes, e.g., using information from the gene ontology database [34], this signal is too weak to be detected. On the other hand, we found that the number of expressed genes in TNBC patients was smaller than that in TNBC-free patients because there was a clear asymmetry between the left- and right-sided P values, always leading to the relation
Table 1 Results of two-sample t tests comparing the total number of expressed genes for various sequencing depths
$$p\text{-value}_{\text{left-sided}} \ll p\text{-value}_{\text{right-sided}}$$
This relation indicated that, on average, there were fewer genes expressed in TNBC patients than in the corresponding control samples, independent of the sequencing depth.
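The one-sided comparisons can be reproduced with SciPy, assuming a recent version in which scipy.stats.ttest_ind accepts the alternative keyword; the two groups below are purely illustrative numbers, not the values underlying Table 1:

```python
import numpy as np
from scipy import stats

# hypothetical median numbers of expressed genes at one depth f,
# four samples per condition (illustrative values only)
tnbc = np.array([16100.0, 15800.0, 16350.0, 15900.0])
tnbc_free = np.array([16500.0, 16400.0, 16900.0, 16250.0])

for alternative in ("two-sided", "less", "greater"):
    t_stat, p_val = stats.ttest_ind(tnbc, tnbc_free, alternative=alternative)
    print(f"{alternative:>9}: t = {t_stat:.3f}, p = {p_val:.4f}")
```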
Growth rate of the number of expressed genes
Next, we compared the growth of the number of expressed genes depending on the sequencing depth (Fig. 5). For this reason, we fitted Gompertz growth functions [35] given by
$$f(x) = a \exp\left(-b \exp\left(-cx\right)\right)$$
Here a, b, and c are parameters of the Gompertz function to be fitted and c is called the growth rate. For our quantitative comparison, we used the fitted values of c.
We used Gompertz growth functions because the number of (expressed) genes of an organism is limited; hence, the number of genes with counts above a certain threshold must eventually saturate. Growth curves, such as the Gompertz function or the logistic function [36, 37], have the natural constraint of being limited from above and, hence, provide a natural choice for a constrained regression function. Table 2 shows the growth rates and their standard deviations for all 8 samples and the 4 threshold values, $\Theta \in \{1, 10, 50, 100\}$.
Table 2 Fitted growth factor values and standard deviations for the Gompertz functions
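A sketch of the curve fitting with scipy.optimize.curve_fit is shown below; the depth fractions and gene counts are synthetic values generated from a Gompertz curve with added noise, so the recovered growth rate c is known by construction:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(x, a, b, c):
    # Gompertz growth function with upper asymptote a and growth rate c
    return a * np.exp(-b * np.exp(-c * x))

rng = np.random.default_rng(0)
f = np.linspace(0.1, 1.0, 10)                  # sequencing depth fractions
n_genes = gompertz(f, 23000, 1.2, 2.5) + rng.normal(0, 100, f.size)

popt, pcov = curve_fit(gompertz, f, n_genes, p0=(23000, 1.0, 1.0))
perr = np.sqrt(np.diag(pcov))                  # parameter standard deviations
print(f"growth rate c = {popt[2]:.3f} +/- {perr[2]:.3f}")
```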
From a visual inspection, there were only slight differences between the different conditions. For this reason, we quantified the results to test the null hypothesis that there was no difference in the values of the growth rates, i.e.,
$$H_{0|f}: \text{mean}(c_{\text{TNBC}|f}) = \text{mean}(c_{\text{TNBC-free}|f}),$$
for depth f by a two-sample t test. Again, each comparison was based on 4 samples per condition.
To identify direction-specific effects, we also performed hypothesis tests for two-sided, left-sided, and right-sided comparisons. The results of these hypothesis tests are shown in Table 3. Overall, for a significance level of α = 0.05, none of these hypothesis tests was significant. However, the right-sided P values were not much larger than 0.05, hinting at a tendency in the data to be different, like the comparison of the median number of expressed genes above.
Table 3 Results from comparing the growth rates of the fitted Gompertz functions for TNBC and TNBC-free patients
A normalization of the data does not remove the growth property observed in Fig. 5; normalized data exhibit qualitatively the same behavior. For $\Theta = 1$, this was obvious because the normalization led to a scaling of the data without changing the zero values. For $\Theta > 1$, it was less intuitive but followed from our numerical analysis (results not shown).
Distributional shape of expressed genes
Last, we studied the distributional shape of expressed gene values (and not of their numbers) by estimating individually for each parameter configuration its mean value, variance, skewness, and kurtosis. Here, we mean the distribution over all genes within a sample, and not the count distribution of individual genes across samples. Because every distribution with existing moments was fully characterized by all of its moments, either via its moment-generating function or via its probability generating function [38, 39], our analysis was an approximation of the distributional shape because we limited our focus to 4 dimensions.
Specifically, for each condition (TNBC versus TNBC-free) and each sequencing depth ($f \in \{0.1, \dots, 0.9\}$), we generated R = 24 data sets, giving a total of 432 data sets, and applied the expression threshold $\Theta = 1$ to each data set as a filter. In the following analysis, we distinguished between CPM-normalized and raw (unnormalized) data by estimating the mean, variance, skewness, and kurtosis of the distributions of expression values of the genes. The results of this analysis are shown in Fig. 6 and Tables 4 and 5, which include results for raw (unnormalized) data in Columns 3 and 4. The first observation from Fig. 6 is that a normalization of the data was absolutely necessary to obtain stable results across different sequencing depths. This is clearly visible for the mean and variance values because they showed increasing values for larger sequencing depths. In this respect, even a simple CPM normalization counterbalanced this effect, leading to stable expression patterns across different sequencing depths. This also illustrated that the choice of normalization method significantly affects the statistical properties of a distribution and the results of statistical inference, such as differential gene expression analysis, as also observed in [15]. From a visual comparison of the moments for TNBC and TNBC-free patients, we observed clear differences in the variance, less clear differences in the kurtosis, and no clear differences in the mean and skewness. For a quantification of the comparison between the moments for TNBC and TNBC-free patients, we tested the following null hypothesis by a two-sample t test:
$$H_{0|f}: \text{mean}(m_{\text{TNBC}|f}) = \text{mean}(m_{\text{TNBC-free}|f}),$$
for depth f and $m \in \{$mean, variance, skewness, kurtosis$\}$, indicating the four moments we studied. Each comparison was based on nine samples per condition because we pooled the median values across the different sequencing depths for each condition and each measure m. The results of this analysis are shown in Table 6. Overall, the mean values were essentially indistinguishable (with P values of approximately 1.0), but the other three moments were significantly different at a two-sided significance level of α = 0.05. Specifically, for the kurtosis and skewness, the left-sided tests were significant; for the variance, the right-sided test was significant. That means that for kurtosis and skewness, the values of the moments were higher in TNBC-free patients than in TNBC patients, whereas for variance, these values were lower. This result is interesting because, commonly, a disease is associated with instability or disorder, but a decreasing variance suggested less variability in the expression values of the genes.
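The four moments can be estimated per sample as in the following sketch; the log-normal sample stands in for a CPM-normalized expression vector and is not meant to reproduce the values of Tables 4 and 5:

```python
import numpy as np
from scipy import stats

def distribution_moments(values, theta=1.0):
    """Mean, variance, skewness and kurtosis of the expressed genes
    (values >= theta) within one sample."""
    expressed = values[values >= theta]
    return (expressed.mean(),
            expressed.var(ddof=1),
            stats.skew(expressed),
            stats.kurtosis(expressed))  # excess kurtosis by default

sample = np.random.default_rng(2).lognormal(mean=2.0, sigma=1.5, size=20000)
mean, var, skew, kurt = distribution_moments(sample)
print(f"mean={mean:.2f}, var={var:.2f}, skew={skew:.2f}, kurtosis={kurt:.2f}")
```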
Results for the 4 moments: mean, variance, skewness, and kurtosis (rows). Columns 1 and 2: normalized data; Columns 3 and 4: raw data; Columns 1 and 3: TNBC patients; Columns 2 and 4: TNBC-free patients.
Table 4 Moments for TNBC-free patients
Table 5 Moments for TNBC patients
Table 6 Results from pooled (across different sequencing depths) two-sample t tests for the 4 moments of the gene expression distributions
In this paper, we studied various effects of differing sequencing depth on distributional aspects of gene expression data obtained from RNA-seq experiments. From our analysis, we found 3 main results.
The subsampling of RNA-seq data gave biologically realistic simulations of next-generation sequencing experiments with smaller sequencing depth, but a direct scaling of count matrices did not. This is an important finding because, first of all, it demonstrated that the conceptually simpler and computationally more efficient approach of a direct scaling of count matrices led to data sets with an unclear biological interpretation. This is of course a major problem because whatever results were obtained from such data sets, e.g., using them for identifying differentially expressed genes, the meaning is at best unclear and possibly even uninterpretable in the sense that replicated next-generation sequencing experiments would not result in data with such a characteristic.
To obtain saturating results, we estimated an average sequencing depth of >32 million reads and an individual sequencing depth of >46 million reads. The literature gives context-specific suggestions. For instance, for detecting rare transcripts in humans, >200 million paired-end reads should be used, and for the accurate quantification of genes across the entire expression range, >80 million reads per sample should be used [29, 40]. However, for the identification of differentially expressed genes, 36 million reads per sample may be sufficient [29].
For future studies, it would be interesting to derive improved bounds for optimal sequencing depths with respect to two complementary aspects. The first aspect involves distinguishing different application domains because the optimal sequencing depth is likely to depend on the bioinformatics analysis. For gene expression data from DNA microarray experiments, such differences have already been known for, e.g., methods identifying differentially expressed genes and methods for identifying differentially expressed gene sets [41–43]. Second, in this study, we considered only simple statistical estimators for the optimal sequencing depth; however, more elaborate approaches are possible, e.g., by exploiting the results from the fitted growth curves.
For an abrogated feature selection, i.e., using all expressed genes that have read counts of $\Theta = 1$ or larger, the higher moments of the distribution of expressed genes showed a much better sensitivity for the signal detection of differing phenotypic conditions than the corresponding mean values (Table 6). This could be further explored by designing statistical tests that use such higher moments as a test statistic. A potential advantage of such tests over, e.g., conventional mean-based tests such as a t test or ANOVA could be a reduced sample size requirement, as suggested by our results. However, this requires a further detailed analysis.
The subsampling of RNA-seq data allows us to explore important aspects of gene expression data. These must be understood before such high-throughput data types can be used for applications in translational medicine.
McGettigan PA. Transcriptomics in the RNA-seq era. Curr Opin Chem Biol. 2013;17(1):4–11.
Marguerat S, Bähler J. RNA-seq: from technology to biology. Cell Mol Life Sci. 2010;67(4):569–79.
Metzker ML. Sequencing technologies–the next generation. Nat Rev Genet. 2009;11(1):31–46.
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009;10(1):57–63.
Mortazavi A, Williams BA, McCue K, Schaeffer L, Wold B. Mapping and quantifying mammalian transcriptomes by RNA-seq. Nat Methods. 2008;5(7):621–8.
Peng Z, Cheng Y, Tan BC, Kang L, Tian Z, Zhu Y, et al. Comprehensive analysis of RNA-Seq data reveals extensive RNA editing in a human transcriptome. Nat Biotechnol. 2012;30(3):253–60.
Beane J, Vick J, Schembri F, Anderlind C, Gower A, Campbell J, et al. Characterizing the impact of smoking and lung cancer on the airway transcriptome using RNA-Seq. Cancer Prev Res. 2011;4(6):803–17.
Sinicropi D, Qu K, Collin F, Crager M, Liu ML, Pelham RJ, et al. Whole transcriptome RNA-Seq analysis of breast cancer recurrence risk using formalin-fixed paraffin-embedded tumor tissue. PLoS One. 2012;7(7):e40092.
Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010;11(10):R106.
Rahmatallah Y, Emmert-Streib F, Glazko G. Comparative evaluation of gene set analysis approaches for RNA-Seq data. BMC Bioinform. 2014;15:397.
Nicolae M, Mangul S, Mandoiu II, Zelikovsky A. Estimation of alternative splicing isoform frequencies from RNA-Seq data. Algorithm Mol Biol. 2011;6(1):9.
Robinson MD, Oshlack A. A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol. 2010;11(3):R25.
Trapnell C, Pachter L, Salzberg SL. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics. 2009;25(9):1105–11.
Wang L, Feng Z, Wang X, Wang X, Zhang X. DEGseq: an R package for identifying differentially expressed genes from RNA-seq data. Bioinformatics. 2010;26(1):136–8.
Dillies MA, Rau A, Aubert J, Hennequet-Antier C, Jeanmougin M, Servant N, et al. A comprehensive evaluation of normalization methods for illumina high-throughput RNA sequencing data analysis. Brief Bioinform. 2013;14(6):671–83.
Wu H, Wang C, Wu Z. A new shrinkage estimator for dispersion improves differential expression detection in RNA-seq data. Biostatistics. 2013;14(2):232–43.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26(1):139–40.
Law C, Chen Y, Shi W, Smyth G. Voom: precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014;15(2):R29.
Varley KE, Gertz J, Roberts BS, Davis NS, Bowling KM, Kirby MK, et al. Recurrent read-through fusion transcripts in breast cancer. Breast Cancer Res Treat. 2014;146(2):287–97.
Leinonen R, Sugawara H, Shumway M. International Nucleotide Sequence Database Collaboration. The sequence read archive. Nucleic Acids Res. 2010;39:D19–21.
Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012;9(4):357–9.
Karolchik D, Barber GP, Casper J, Clawson H, Cline MS, Diekhans M, et al. The UCSC genome browser database: 2014 update. Nucleic Acids Res. 2014;42:D764–70.
Liao Y, Smyth GK, Shi W. featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics. 2013;30(7):923–30.
Fumagalli M. Assessing the effect of sequencing depth and sample size in population genetics inferences. PLoS One. 2013;8(11):e79667.
Rapaport F, Khanin R, Liang Y, Pirun M, Krek A, Zumbo P, et al. Comprehensive evaluation of differential gene expression analysis methods for RNA-seq data. Genome Biol. 2013;14(9):R95.
Robinson DG, Storey JD. subSeq: determining appropriate sequencing depth through efficient read subsampling. Bioinformatics. 2014;30(23):3424–6.
Liu Y, Zhou J, White KP. RNA-seq differential expression studies: more sequence or more replication? Bioinformatics. 2014;30(3):301–4.
Sims D, Sudbery I, Ilott NE, Heger A, Ponting CP. Sequencing depth and coverage: key considerations in genomic analyses. Nat Rev Genet. 2014;15(2):121–32.
Politis DN, Romano JP, Wolf M. Subsampling Springer series in statistics. Berlin: Springer; 1999.
Bickel PJ, Gotze F, van Zwet W. Resampling fewer than n observations: gains, losses and remedies for losses. Statist Sinica. 1997;7(1):1–31.
Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004;5(10):R80.
Hanahan D, Weinberg RA. The hallmarks of cancer. Cell. 2000;100(1):57–70.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, et al. Gene ontology: tool for the unification of biology. Gene ontology consortium. Nat Genet. 2000;25(1):25–9.
Laird AK. Dynamics of tumour growth. Br J Cancer. 1964;18(3):490.
Emmert-Streib F. Structural properties and complexity of a new network class: Collatz step graphs. PLoS One. 2013;8(2):e56461.
Harrell FE. Regression modeling strategies. New York: Springer; 2001.
Casella G, Berger RL. Statistical inference. Belmont: Duxbury Press; 2002.
Feller W. An introduction to probability theory and its applications. New York: Wiley; 1968.
Tarazona S, Garcia-Alcalde F, Dopazo J, Ferrer A, Conesa A. Differential expression in RNA-seq: a matter of depth. Genome Res. 2011;21(12):2213–23.
Emmert-Streib F, Tripathi S, de Matos Simoes R. Harnessing the complexity of gene expression data from cancer: from single gene to structural pathway methods. Biol Direct. 2012;7:44.
Hung JH, Yang TH, Hu Z, Weng Z, DeLisi C. Gene set enrichment analysis: performance evaluation and usage guidelines. Brief Bioinform. 2012;13(3):281–91.
Steinhoff C, Vingron M. Normalization and quantification of differential expression in gene expression microarrays. Brief Bioinform. 2006;7(2):166–77.
AS performed the analysis, interpreted the results, and wrote the article. GVG interpreted the results and wrote the article. FES conceived the study, performed the analysis, interpreted the results, and wrote the article. All authors read and approved the final manuscript.
We would like to thank Manuel Salto-Tellez, Ricardo de Matos Simoes, and Shailesh Tripathi for fruitful discussions. GVG was supported in part by the Arkansas Biosciences Institute under Grant (No. UL1TR000039) and the IDeA Networks of Biomedical Research Excellence (INBRE) Grant (No. P20RR16460).
Compliance with ethical guidelines
Competing interests The authors declare that they have no competing interests.
Computational Biology and Machine Learning Laboratory, Faculty of Medicine, Health and Life Sciences, School of Medicine, Dentistry and Biomedical Sciences, Center for Cancer Research and Cell Biology, Queen's University Belfast, 97 Lisburn Road, Belfast, BT9 7JL, UK
Alexey Stupnikov & Frank Emmert-Streib
Division of Biomedical Informatics, University of Arkansas for Medical Sciences, Little Rock, AR, 72205, USA
Galina V Glazko
Computational Medicine and Statistical Learning Laboratory, Department of Signal Processing, Tampere University of Technology, Korkeakoulunkatu 1, Tampere, 33720, Finland
Frank Emmert-Streib
Alexey Stupnikov
Correspondence to Frank Emmert-Streib.
Stupnikov, A., Glazko, G.V. & Emmert-Streib, F. Effects of subsampling on characteristics of RNA-seq data from triple-negative breast cancer patients. Chin J Cancer 34, 36 (2015). https://doi.org/10.1186/s40880-015-0040-8
Received: 16 January 2015
Keywords: RNA-seq data; Statistical robustness; High-dimensional biology
BMC Medical Informatics and Decision Making
AliClu - Temporal sequence alignment for clustering longitudinal clinical data
Kishan Rama1,3,
Helena Canhão2,
Alexandra M. Carvalho1 &
Susana Vinga3 (ORCID: orcid.org/0000-0002-1954-5487)
BMC Medical Informatics and Decision Making volume 19, Article number: 289 (2019)
Patient stratification is a critical task in clinical decision making since it can allow physicians to choose treatments in a personalized way. Given the increasing availability of electronic medical records (EMRs) with longitudinal data, one crucial problem is how to efficiently cluster the patients based on the temporal information from medical appointments. In this work, we propose applying the Temporal Needleman-Wunsch (TNW) algorithm to align discrete sequences with the transition time information between symbols. These symbols may correspond to a patient's current therapy, their overall health status, or any other discrete state. The transition time information represents the duration of each of those states. The obtained TNW pairwise scores are then used to perform hierarchical clustering. To find the best number of clusters and assess their stability, a resampling technique is applied.
We propose the AliClu, a novel tool for clustering temporal clinical data based on the TNW algorithm coupled with clustering validity assessments through bootstrapping. The AliClu was applied for the analysis of the rheumatoid arthritis EMRs obtained from the Portuguese database of rheumatologic patient visits (Reuma.pt). In particular, the AliClu was used for the analysis of therapy switches, which were coded as letters corresponding to biologic drugs and included their durations before each change occurred. The obtained optimized clusters allow one to stratify the patients based on their temporal therapy profiles and to support the identification of common features for those groups.
The AliClu is a promising computational strategy to analyse longitudinal patient data by providing validated clusters and by unravelling the patterns that exist in clinical outcomes. Patient stratification is performed in an automatic or semi-automatic way, allowing one to tune the alignment, clustering, and validation parameters. The AliClu is freely available at https://github.com/sysbiomed/AliClu.
The increasing availability of clinical data and the increased investments in healthcare are driving research on building better clinical decision support systems for the effective personalization of treatments. In this context, machine learning and data mining techniques are becoming ubiquitous, helping to provide high-quality care systems and improve the long-term health of patients.
Patients' health records are being stored in electronic medical records (EMRs) and consist of a variety of data, such as demographics, medical history, laboratory test results, medications, and allergies. These EMR systems are designed to store patients' data across time, thereby providing large longitudinal cohorts. Exploring the disease heterogeneity and patterns in these datasets is a challenging task. Several issues contribute to the difficulty of this task: the exponential number of all possible combinations in patients' trajectories, the variability in their temporal scales, and the complexity of their representations.
We address the problem of learning temporal patterns in EMR data by using a combined approach of (temporal) alignment and hierarchical clustering. More specifically, we use the Temporal Needleman-Wunsch (TNW) algorithm [1] to align discrete sequences with the time information between symbols and, subsequently, perform hierarchical clustering using the obtained pairwise scores. The TNW algorithm is an extension of the traditional Needleman-Wunsch (NW) [2] for global sequence alignment. The TNW takes into account the matches between symbols, as in the NW algorithm, and also adds a penalization term for the differences in the time values between two sequences. Other temporal alignment methods, such as dynamic time warping, are not adequate for dealing with this type of data; they just provide general trends for matching continuous-time signals [3–6].
The TNW is particularly interesting when utilizing data representing given events or states (coded as symbols) and their corresponding durations. Treatment switching provides us with an excellent example of this type of temporal sequence data. Starting at instant 0 with Treatment A, its failure after $t_A$ may lead to switching to Treatment B with a duration of $t_B$, and then switching again to Treatment F, which is still ongoing ($t_F$ represents that duration). In this case, we would have a patient profile given by the sequence
$$0.A,t_{A}.B,t_{B}.F,t_{F}.Z,$$
which includes symbols and numeric values and where Z is a special symbol representing that the last therapy has not yet failed. It is worth noting that the discrete states (A, B and F in this example) can also be obtained through the discretization of the continuous features. Additionally, the times representing the durations of the states are completely general with the only restriction being that they are measured at the same scale for all patients.
State-of-the-art alignment approaches usually involve multiple sequence alignment techniques that use the progressive alignment heuristic: they are fast, scalable and widely used. The most popular methods include Clustal Omega [7], MAFFT [8], and MUSCLE [9]. These methods were essentially developed for aligning DNA or protein sequences, which are time-invariant sequences composed by letters.
In this work, we focus specifically on using the temporal information present in clinical data for pairwise sequence alignment. In this regard, the literature includes mostly alignment algorithms for continuous time series data [4–6]. A very well known approach is Dynamic Time Warping (DTW) [3], which warps the time axis of the sequences to achieve alignment. It is also based on dynamic programming, such as the NW algorithm [2], but it does not incorporate a gap penalty. Pairwise alignment using Hidden Markov Models (HMMs) also constitutes an alternative [10]; however, it is not trivial to directly include temporal data.
Motivated by the need for a sequence alignment method that can assess the similarity between two sequences in the same way as the NW or HMM does while also accounting for the time that elapses between events, Syed and Das developed the TNW algorithm [1] that can be applied to healthcare data to find similar patients based on medical histories.
An alternative approach could be simply applying traditional sequence alignments such as the NW to sequences after some pre-processing step. This step would account for the temporal information between events by repeating an event several times to create the sequences to be aligned. For example, the temporal sequence "0.A,5.B" could be transformed to "AAAAAB", where the five As in the latter sequence represent the five units of time that elapsed from "A" to "B". Then, the NW algorithm can be applied. Several drawbacks exist in this approach; namely, the need to divide the time intervals between events in windows and the longer sequences that are created, thus increasing the computational time of the alignments. The TNW algorithm overcomes these issues and does not require any additional transformation of the original data. The absence of related works in the literature on this algorithm motivated us to test it on the Reuma.pt dataset [11].
The main goal of this work is to obtain clusters of patients by analysing longitudinal medical data specifically, clinical data. Clustering patients with similar treatment profiles would allow for identifying the common features of those groups and delineate strategies to improve treatment outcomes.
In the literature, several studies are found that try to achieve the same objective. In [12], Docampo et al. present a cluster analysis of clinical data to identify fibromyalgia subgroups. Their approach is a two-step clustering process. In the first step, the clinical variables are clustered by using partitioning around medoids. The number of clusters is found by using silhouette plots and Calinski's index. In the second step, synthetic patient indices are calculated for each sample and dimension in order to find the patient subgroups.
In another work [13], Garg et al. proposed two techniques based on survival trees to cluster patients into clinically meaningful groups according to their expected lengths of stay. Their techniques are more applicable to survival analysis using survival data.
In [14], the authors investigated whether subgroups of patients with non-specific lower back pain could be identified by applying hierarchical cluster analysis to a dataset that contained 6-month clinical courses of patients with measurements of bothersomeness. An initial step was required before using the clustering algorithm, which consisted of condensing the courses of each patient into four parameters. These parameters were obtained by fitting a regression line to the courses and computing the slopes and intercepts. After the parameters were defined for each patient, hierarchical clustering utilizing Ward's method was applied. In order to determine the optimal number of clusters, they analysed the resulting dendrograms with Calinski's criterion, which was also used in [12]. Regarding the results, four clusters were found with distinct clinical courses, which showed that it is possible to find clinically meaningful clusters based on the temporal evolution of the variable under study. Note that, in this work, the temporal information between measurements is not directly used; instead, the parameters of a line fitted to the clinical courses are estimated.
In addition to the clustering approaches discussed before, a model-based clustering method was proposed for clustering individuals based on measurements taken over time [15]. The authors apply their method to data from pregnant women to identify hormone trajectories. One important aspect of this approach is that the method requires the specification of the number of clusters to be fit to the model. In their work, it was known that data were divided into two groups; hence, they knew the number of clusters to select.
However, this number was also confirmed by the Bayesian information criterion that they used to choose the number of clusters.
To the best of our knowledge, the AliClu is a novel approach for addressing this type of mixed, longitudinal data that takes into account both the sequence of states and their durations. The TNW algorithm allows one to align similar medical histories by considering the temporal information and also penalising missing events by inserting gaps. Furthermore, the AliClu provides clustering validation using bootstrapping, which allows one to tune the input parameters to find the best number of clusters and to identify the most homogeneous patient strata. The AliClu is fully implemented and freely available for further applications.
The pipeline of the proposed method, which is named the AliClu, is illustrated in Fig. 1. In the first step, the complete raw data are pre-processed to obtain the temporal sequences. Then, in the second step, pairwise temporal sequence alignment is performed, and a similarity matrix is obtained. The third step consists of converting the similarity matrix into distances. Agglomerative clustering is then performed by using this distance matrix, and finally, the clustering results are validated via a bootstrapping approach. The obtained patient stratification can be graphically represented to ease the clinical interpretation. Each step of this pipeline is detailed as follows.
The proposed AliClu approach. First, raw data is pre-processed to obtain PE sequences. Then, pairwise sequence alignment is performed and a similarity matrix S is obtained. Next, S is converted into a distance matrix D. Agglomerative clustering is then performed with this distance matrix D. Validation of the clustering results is accomplished via a bootstrapping approach. In the end, retrieved clusters are analysed by the clinicians
Data pre-processing
This pre-processing step creates temporal sequences for each patient from EMRs. Patients' records are typically available in panel data format, in which each patient is spread over different lines, one for each medical appointment, and the columns contain the features of interest measured over time. In this work, we consider that each patient experiences a sequence of events over time. Let A and B be two consecutive events of interest for a given patient, with the time-distance t between them; a prefix-encoded (PE) sequence for that patient is then defined as 0.A,t.B.
In this pre-processing phase, the PE sequences are built for each patient, requiring information about the patient's ID, the event under study, and the time between two consecutive events. These features must be taken from the panel data. In the data set, the time may be formatted as a date or just a number in any time unit (e.g., seconds, minutes, or days). Depending on the time format, two types of pre-processing steps are implemented. We refer the interested reader to the Additional file 1 for further details.
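A minimal pandas sketch of this pre-processing step is given below; the column names, the toy visit records, and the censoring date are assumptions made for illustration and will differ from the actual input format:

```python
import pandas as pd

# toy panel data: one row per therapy start (hypothetical column names)
visits = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "start": pd.to_datetime(["2010-01-05", "2011-03-10", "2013-06-01",
                             "2012-02-01", "2012-11-15"]),
    "therapy": ["A", "B", "F", "A", "C"],
})
last_followup = pd.Timestamp("2014-01-01")  # assumed censoring date

def pe_sequence(g):
    g = g.sort_values("start")
    ends = g["start"].shift(-1).fillna(last_followup)
    durations = (ends - g["start"]).dt.days.tolist()
    therapies = g["therapy"].tolist()
    parts = ["0." + therapies[0]]
    parts += [f"{d}.{s}" for d, s in zip(durations[:-1], therapies[1:])]
    parts.append(f"{durations[-1]}.Z")  # Z marks the ongoing last therapy
    return ",".join(parts)

print(visits.groupby("patient_id").apply(pe_sequence))
```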
Temporal sequence alignment
After building the prefix-encoded (PE) sequences, it is possible to align all patient pairs using the TNW algorithm [1]. The TNW guarantees convergence to the optimal alignment for a given scoring scheme, gap penalty g, and temporal penalty $T_p$. Notwithstanding, alignments can drastically change depending on the choice of these parameters, and this is the reason why they should be carefully chosen.
The information of the retrieved alignments is summarized into an N×N similarity matrix S, where N is the number of patients in the data. In this matrix, the entry value (i,j) gives the alignment score of the i-th and j-th patients. Due to symmetry, only N×(N−1)/2 entries need to be computed.
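For intuition, a simplified NW-style dynamic program over (time, symbol) pairs is sketched below, using a linear penalty $T_p \cdot |\Delta t|$ on matched transitions; this is only an approximation of the TNW scoring of [1], whose exact penalty form and boundary conditions may differ:

```python
def tnw_score(seq1, seq2, match=1.0, mismatch=-1.0, g=0.5, tp=0.1):
    """Global alignment score for two temporal sequences given as lists of
    (time, symbol) pairs; matched symbols pay a penalty tp*|t1 - t2|."""
    n, m = len(seq1), len(seq2)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = -g * i          # leading gaps in seq2
    for j in range(1, m + 1):
        F[0][j] = -g * j          # leading gaps in seq1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (t1, s1), (t2, s2) = seq1[i - 1], seq2[j - 1]
            s = match if s1 == s2 else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s - tp * abs(t1 - t2),
                          F[i - 1][j] - g,      # gap in seq2
                          F[i][j - 1] - g)      # gap in seq1
    return F[n][m]

# "0.A,5.B" versus "0.A,7.B" encoded as (time, symbol) pairs
print(tnw_score([(0, "A"), (5, "B")], [(0, "A"), (7, "B")]))  # 1.8
```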
Before using the agglomerative clustering algorithm, we need to convert the similarity matrix S, which was obtained in the previous step, into a distance matrix D. To this end, we take the symmetric value of each score and then shift it by adding the maximum similarity score in matrix S. This shift is made in order to make all scores greater than or equal to zero. In summary, the distance matrix is computed as follows: $a = \max_{i<j} S_{ij}$, with $i,j = 1, \dots, N$, and $D = -S + a\,(\mathbf{1} \cdot \mathbf{1}^{T})$, where $\mathbf{1} = (1, \dots, 1)^{T} \in \mathbb{R}^{N}$.
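In code, this conversion is a one-liner over the score matrix; the small matrix below is a made-up example, and the diagonal is zeroed afterwards so that D is a valid zero-self-distance matrix for the clustering step:

```python
import numpy as np

def similarity_to_distance(S):
    # shift the negated scores by the maximum pairwise score (over i < j)
    a = S[np.triu_indices_from(S, k=1)].max()
    D = -S + a
    np.fill_diagonal(D, 0.0)  # self-distances set to zero
    return D

S = np.array([[3.0, 1.8, -0.5],
              [1.8, 3.0,  0.2],
              [-0.5, 0.2, 3.0]])
print(similarity_to_distance(S))
```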
Clustering of temporal sequence alignments
The dissimilarity matrix obtained is then used to perform agglomerative clustering [16]. The resulting groups can be depicted in a dendrogram, a tree showing the order and distances of the merges performed during the clustering procedure. Five different linkage functions are used, namely, single, complete, average, centroid, and Ward's method. Since hierarchical clustering methods do not explicitly set the number of clusters, the AliClu additionally provides an automatic bootstrapping-based validation technique proposed by Mucha [17] that selects the best number according to several clustering indices. These indices include Rand [18], the adjusted Rand (AR) [19], Fowlkes and Mallows (FM) [20], Jaccard, and the adjusted Wallace (AW) [21].
The pseudo-code of the cluster and validation procedure is given in Algorithm 1. The inputs of the algorithm are the distance matrix D for the agglomerative clustering algorithm, the number of bootstrap samples M, the linkage criterion L, and the minimum $K_{\min}$ and the maximum $K_{\max}$ numbers of clusters to be analysed. The output is the statistics of all the clustering indices described above, namely, the medians, means, and variances for all the bootstrap samples, which are calculated for each analysed number of clusters (between $K_{\min}$ and $K_{\max}$).
The algorithm begins by performing agglomerative clustering on distance matrix D in Step 1. Then, an outer loop starts in Step 2, corresponding to a bootstrapping procedure. From Steps 3 to 5, a bootstrapped sample is generated, and agglomerative clustering is performed on it. Then, an inner loop computes the clustering indices between the clustering of the original patients and the clustering of the bootstrapped sample (Steps 6-10). In Step 8, the obtained dendrograms Z and Z′ are cut to retrieve q clusters (in each), where $K_{\min} \leq q \leq K_{\max}$. After running the outer loop M times, the statistics of the clustering indices are computed (Step 11).
The output of Algorithm 1 helps to select the best number of clusters in the data, herein k. The right candidate is the one that yields the highest number of maximum average values over the clustering indices. To corroborate this choice, the standard deviation of the clustering indices for each k can be taken into account. The choice of k can be automatic or semi-automatic. In the latter case, the results, composed of the dendrograms and the averages and standard deviations of the obtained clustering indices, are given to the user for manual inspection and further selection.
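A condensed sketch of this validation loop, using only the adjusted Rand index and Ward linkage, could look as follows (the full Algorithm 1 additionally tracks the Rand, Fowlkes-Mallows, Jaccard, and adjusted Wallace indices together with their medians and variances):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def bootstrap_ari(D, k, M=100, seed=0):
    """Average adjusted Rand index between the clustering of the full
    distance matrix D (cut at k clusters) and M bootstrapped clusterings."""
    rng = np.random.default_rng(seed)
    Z = linkage(squareform(D, checks=False), method="ward")
    labels = fcluster(Z, k, criterion="maxclust")
    scores = []
    for _ in range(M):
        idx = rng.choice(len(D), size=len(D), replace=True)
        Db = D[np.ix_(idx, idx)]
        Zb = linkage(squareform(Db, checks=False), method="ward")
        labels_b = fcluster(Zb, k, criterion="maxclust")
        scores.append(adjusted_rand_score(labels[idx], labels_b))
    return float(np.mean(scores)), float(np.std(scores))

# candidate numbers of clusters would then be screened as:
# for k in range(k_min, k_max + 1): print(k, bootstrap_ari(D, k))
```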
After obtaining the best number of clusters k according to these criteria, the stability of each individual cluster is then assessed in Algorithm 2, again via the bootstrapping approach [17]. The inputs of this algorithm are the number of clusters k, the clusters themselves $\{A_1, \dots, A_k\}$, the linkage criterion L, and the number of bootstrapped samples M. The output is the stability measures of the obtained clusters, which are assessed by the criteria described as follows.
The algorithm starts with resampling. For each bootstrapped sample, a dendrogram Z′ is obtained by performing agglomerative clustering on the sample (Steps 2-4). Then, a collection of k clusters $\{B_1, \dots, B_k\}$ is obtained by cutting the dendrogram Z′ (Step 5). From Steps 6 to 11, as proposed by Mucha [17], three different measures are computed for each cluster $A_j$, $1 \leq j \leq k$, namely, $\tau_{j}^{\ast}$ (the Jaccard index), $\gamma_{j}^{\ast}$ (the recovery rate) and $\eta_{j}^{\ast}$ (the Dice coefficient). These indices provide a measure of the similarity between cluster $A_j$ and its most similar cluster in $\{B_1, \dots, B_k\}$. Finally, in Step 12, the stability of the retrieved clusters is assessed by computing the average values of $\tau_{j}^{\ast}$, $\gamma_{j}^{\ast}$ and $\eta_{j}^{\ast}$, and by analysing the corresponding standard deviations.
As discussed in [17], it is difficult to set an appropriate threshold that denotes that a cluster is stable. Therefore, we followed the rule of thumb and considered stable clusters to be the ones that yield high average values (close to one) and low standard deviations for $\tau_{j}^{\ast}$, $\gamma_{j}^{\ast}$ and $\eta_{j}^{\ast}$.
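The per-cluster Jaccard measure $\tau_{j}^{\ast}$ can be sketched as below, comparing one original cluster against the best-matching cluster of a bootstrap partition over the same resampled patients; the recovery rate $\gamma_{j}^{\ast}$ and Dice coefficient $\eta_{j}^{\ast}$ follow the same pattern with different set-overlap formulas:

```python
import numpy as np

def cluster_jaccard(labels_a, labels_b, j):
    """Jaccard index between cluster j of partition labels_a and its
    best-matching cluster in partition labels_b (same set of patients)."""
    A_j = set(np.flatnonzero(labels_a == j))
    best = 0.0
    for c in np.unique(labels_b):
        B_c = set(np.flatnonzero(labels_b == c))
        best = max(best, len(A_j & B_c) / len(A_j | B_c))
    return best

# averaging cluster_jaccard over M bootstrap partitions yields tau_j^*
```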
Algorithm 3 presents the overall proposed method for obtaining clusters from PE sequences. Its inputs are the raw data, the scoring system SS, the temporal penalty $T_p$, and the gap-related parameters ($g_{\min}$, $g_{\max}$ and $g_{istep}$) required by the TNW; the number of bootstrapped samples M, for Algorithm 1 and Algorithm 2; the linkage criterion L; and the minimum $K_{\min}$ and the maximum $K_{\max}$ numbers of clusters.
The initial step of the algorithm pre-processes the raw data to produce PE sequences (Step 1). The gap penalty of the TNW algorithm is then set to range from $g_{\min}$ to $g_{\max}$ at incremental steps of $g_{istep}$ (Step 2 and Step 7). For each value of the gap penalty g, pairwise temporal alignment using the TNW is performed, which outputs a similarity matrix S (Step 4). Then, S is converted into a distance matrix D (Step 5). Clustering is then performed by running Algorithm 1 (Step 6).
When the cycle from Steps 3 to 7 ends, there are several results to explore: one for each of the numbers of clusters ($K_{\min}, \dots, K_{\max}$) and gap penalties ($g_{\min}$ to $g_{\max}$ with $g_{istep}$). In Step 8, the final number of clusters k is obtained from these results. As stated before, if an automatic procedure is chosen, the final number of clusters k retrieved in this step is that which results in the most frequent higher average values for the clustering indices. In this case, the chosen gap penalty g is the one that yields the best average values for the clustering indices for the final number of clusters. In the semi-automatic option, the full results for different k and g, including the dendrograms, averages and standard deviations of the clustering indices, are provided to the user, who then determines the final number of clusters k and gap parameter g to be further used. In Step 9, the stability of the retrieved clusters is assessed by running Algorithm 2.
The run-time complexity of the TNW is $O(n^2)$, and that of agglomerative clustering is $O(N^3)$, where n is the length of the PE sequences and N is the number of patients in the data. Moreover, computing the cluster stability in Algorithm 2 for Steps 6–11 takes $O(K_{\max}^{2} \times N)$. Therefore, the AliClu algorithm takes
$$O(\Delta G\times n^{2} + \Delta G\times M\times \Delta K\times N^{3} + M\times K_{\max}^{2} \times N)$$
time, where \(\Delta G=\left \lceil \frac { g_{\max }-g_{\min }+1}{g_{istep}}\right \rceil \) is the number of gaps analysed (gmin to gmax with gistep), M is the number of bootstrapped samples, and ΔK=Kmax−Kmin+1 is the number of clusters considered (from Kmin to Kmax).
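For a quick sanity check of these counting formulas, the snippet below evaluates ΔG and ΔK for hypothetical parameter settings (the values chosen here are illustrative only):

```python
import math

# Hypothetical settings, used only to exercise the counting formulas above
g_min, g_max, g_istep = 0.1, 2.0, 0.1
k_min, k_max = 2, 30

delta_g = math.ceil((g_max - g_min + 1) / g_istep)  # gap penalties analysed
delta_k = k_max - k_min + 1                          # cluster counts analysed
print(delta_g, delta_k)  # -> 29, 29
```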
Synthetic datasets
We first evaluated the AliClu using synthetic datasets, which provide a proof of concept in a controlled scenario where the true cluster labels are known a priori, making it easy to assess the merits of the method. The synthetic datasets consisted of temporal sequences generated by continuous-time Markov chains in a variety of parameter settings.
We concluded that the AliClu successfully found the correct clusters in more than 80% of the cases for datasets containing two well-separated clusters. Moreover, the linkage method that produced the best results for the agglomerative clustering was Ward's method; thus, it was adopted in the remaining experiments. The complete study of the AliClu behaviour on each of the synthetic problems is available in the Additional file 1, along with all the details regarding the sequence generation and clustering evaluation.
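As a rough illustration of how such synthetic sequences can be produced, the sketch below samples one event sequence from a continuous-time Markov chain; the generator matrix and all parameter values are hypothetical and do not reproduce the settings detailed in Additional file 1:

```python
import numpy as np

def sample_ctmc_sequence(Q, states, t_max, rng):
    """Sample one (symbol, dwell-time) sequence from a CTMC with
    generator matrix Q until the time horizon t_max is reached."""
    seq, t = [], 0.0
    s = rng.integers(len(states))
    while t < t_max:
        rate = -Q[s, s]
        if rate <= 0:                      # absorbing state
            break
        dwell = rng.exponential(1.0 / rate)
        t += dwell
        seq.append((states[s], round(dwell, 2)))
        probs = Q[s].copy()
        probs[s] = 0.0
        probs /= rate
        s = rng.choice(len(states), p=probs)
    return seq

rng = np.random.default_rng(0)
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])
print(sample_ctmc_sequence(Q, "ABC", t_max=10.0, rng=rng))
```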
The Reuma.pt database
We then applied the AliClu to biologic therapy switching for rheumatoid arthritis (RA) patients in a real-life longitudinal cohort – the Reuma.pt database [11].
Reuma.pt [11] is a Portuguese nationwide database developed by the Portuguese Society of Rheumatology. It stores the EMRs of rheumatic patients as structured and narrative data with the goal of monitoring the disease's progression and assuring treatment effectiveness and safety. In this study, we focus on patients with rheumatoid arthritis (RA) being treated with biologic therapies at one centre. The retrieved data include 426 patients diagnosed with RA who were followed up regularly, approximately every three to six months, which resulted in a total of 9305 medical appointments.
RA is an immune-mediated inflammatory rheumatic disease that causes pain and swelling in the wrists and small joints of the hands and feet. RA treatments can mitigate these symptoms, prevent joint damage, and provide a better quality of life to the patients. Traditional therapies consist of using conventional disease-modifying antirheumatic drugs (DMARDs), either as monotherapy or in combinations. When patients fail to respond to conventional DMARDs, modern biologic therapies are tried. Unlike conventional DMARDs, biologic ones are produced using biotechnology and are genetically engineered to act like natural proteins in the human immune system.
The goal of RA treatment is to induce the disease's remission by controlling the inflammation. This approach would relieve the symptoms, prevent joint and organ damage, improve physical functioning and overall well-being, and reduce long-term complications. It is crucial to identify the most effective RA treatments early in the disease's progression. In this regard, we used the AliClu to analyse biologic therapy switching, where PE sequences are built by interspersing biologic drugs that are coded as letters and include their durations. The optimized clusters allow for the stratification of RA patients based on their temporal therapy profiles and identification of common features of these groups. Patients starting new biologic therapies can then benefit from these insights.
Clustering of biologic therapy switches
Data of the 426 RA patients concerning biologic therapy switches from the Reuma.pt database were preprocessed to build the PE sequences. Figure 2 presents the statistics regarding the number of biologic drugs taken by patients. Almost 60% of the patients had only one biologic drug recorded (no switches). Patients who have taken five or more drugs are rare: three patients have taken five, two have taken six, and two have taken seven different treatments. We stress that when switching therapies, a patient never goes back to taking the previous biologic drug.
Percentage of biologic drugs taken by Rheumatoid Arthritis (RA) patients. Almost 60% of the patients had only one biologic drug. Patients that have taken five or more biologic drugs are rare; three patients have taken five, two patients have taken six, and another two have taken seven biologic drugs
For this particular dataset, the drugs were coded as follows: A – etanercept, B – infliximab, C – rituximab, D – adalimumab, E – anakinra, F – abatacept, G – tocilizumab, and H – golimumab. These drugs correspond to distinct active therapeutic principles and are prescribed in different stages of the disease.
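As an illustration of the preprocessing step, the snippet below encodes a patient's therapy history (drug letters plus durations) into a single sequence string; the token format shown is a plausible stand-in and not necessarily the exact PE format AliClu uses internally:

```python
def encode_therapy(history, end_symbol="Z"):
    """Encode a therapy history, given as (drug_letter, duration) tuples,
    into one symbol/duration sequence ending with the end-of-follow-up
    marker. Illustrative format only."""
    tokens = [f"{drug}:{duration}" for drug, duration in history]
    tokens.append(end_symbol)
    return "-".join(tokens)

# e.g. etanercept (A) for 2.1 years, then adalimumab (D) for 1.4 years
print(encode_therapy([("A", 2.1), ("D", 1.4)]))  # -> "A:2.1-D:1.4-Z"
```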
Given the PE sequences, Algorithm 3 is run with Kmax=30, and all other input parameters are set to their default values. The scoring system is 1 for a match and −1.1 for a mismatch of the drug representation, the temporal penalty is Tp=0.25, and the number of bootstrapped samples is M=1000. Moreover, in this experiment, the AliClu is used in a semi-automatic manner (Step 12 of Algorithm 1 and Step 8 of Algorithm 3 are subject to user input).
We concluded that Ward's linkage leads to superior results in terms of the clustering indices and clinical information, and that a gap penalty of g=0.7 and a temporal penalty of Tp=0.25 correspond to balanced choices with respect to the other input parameters. It is noteworthy that these choices are data dependent and serve as a proof of concept, since a full analysis and optimization of the clustering parameters would be out of the scope of the present work.
The running time recorded for this final setting was approximately 1 hour on a machine with a 2.6 GHz Intel Core i7 processor and 16 GB of 2400 MHz DDR4 memory. This corresponds to approximately 3.8 seconds for each gap and replicate analysed for the full range of cluster numbers.
Figure 3 shows the dendrogram obtained when using this parameter set, i.e., g=0.7 and a temporal penalty of Tp=0.25. The averages of the five clustering indices obtained with Algorithm 1 are presented in Table 1.
Dendrogram of the agglomerative hierarchical clustering of Rheumatoid Arthritis (RA) patients. Dendrogram of Ward's method hierarchical clustering with gap penalty g=0.7 and temporal penalty Tp=0.25. Twenty-five clusters were selected based on the analysis of the clustering indices and clinical interpretation
Table 1 Average values of five clustering indices for the dendrogram of Fig. 3
Three of the measures, namely, the Rand, FM, and Jaccard, indicate the existence of 26 clusters; the AW indicates that k=25, and the AR indicates that k=25, 26 and 27. In this case, not all averages point to the same number of clusters k; therefore, a more careful and refined analysis is required.
We complemented this analysis with the standard deviation of the AR, which is presented in Fig. 4. The minimum standard deviation of the AR is achieved for k=25, which, combined with the information provided in Table 1 and Fig. 4, leads to the selection of 25 clusters.
Standard deviation of Adjusted Rand (AR) versus the number of clusters. Standard deviation of AR versus number of clusters for dendrogram in Fig. 3. There is a downward trend of the standard deviation when increasing the number of clusters. The minimum value is attained with 25 clusters
The stability of the 25 clusters was then assessed through the medians, averages and standard deviations of η∗, τ∗ and γ∗ (Table 2). As expected, the three statistics of η∗ are always smaller than those of τ∗ and γ∗. For some clusters, the medians and averages of the three measures are not as high as is desirable to consider the clusters stable. Moreover, the medians and averages of τ∗ and γ∗ are not the same in all clusters. Notwithstanding, in clusters 20, 21, 22, 23, 24, and 25 (also the ones with the most observations), those values are the same, and they are high enough for the clusters to be considered stable.
Table 2 Stability of the 25 clusters for Ward's method, g=0.7, and Tp=0.25
Clusters visualization
Visualization is an essential task in any clustering process since it provides an intuitive way to validate clusters. Due to the characteristics of the clustered PE sequences, we propose a graph representation that summarizes the information regarding the sequences that belong to a given cluster. Therein, each node represents either a biologic drug symbol ("A" to "H") or the special symbol "Z", which marks the end of the sequence, signalling that from that point on there is no information regarding the therapy's success or failure. Each edge represents a therapy switch (from one biologic drug to another), and the value on an edge is the median of the times between the corresponding drug switches in that cluster.
The colour of an edge represents the transition probability from one biologic drug to another. This probability is computed as the number of times the switch occurs divided by the total number of transitions in that cluster. A grey scale is used for the edges in this regard: a darker edge means that switches between the linked biologic drugs occurred frequently in that cluster.
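A minimal sketch of this graph representation using networkx is shown below; the transition data are invented for illustration, and the grey-scale encoding is a simple approximation of the scheme just described:

```python
import statistics
import networkx as nx
import matplotlib.pyplot as plt

def cluster_graph(transitions):
    """Build the switch graph for one cluster. `transitions` maps
    (src, dst) -> list of switch times observed within the cluster.
    Edge label = median time; darker edge = more frequent switch."""
    total = sum(len(t) for t in transitions.values())
    G = nx.DiGraph()
    for (src, dst), times in transitions.items():
        prob = len(times) / total
        G.add_edge(src, dst,
                   label=f"{statistics.median(times):.1f}",
                   color=str(1.0 - prob))   # grey-scale string, 0 = black
    return G

G = cluster_graph({("A", "D"): [1.2, 2.0, 0.8], ("A", "G"): [1.5],
                   ("D", "Z"): [2.2, 1.9]})
pos = nx.circular_layout(G)
colors = [G[u][v]["color"] for u, v in G.edges]
nx.draw(G, pos, with_labels=True, edge_color=colors, node_color="lightgrey")
nx.draw_networkx_edge_labels(G, pos, nx.get_edge_attributes(G, "label"))
plt.show()
```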
The clusters with higher stability correspond to easily interpretable therapy profiles, including monotherapies (no switches). For example, these include clusters with only etanercept (A; Cluster 25 – 101 patients), only infliximab (B; Cluster 24 – 46 patients), or minor or no switches for the majority of the patients in that group, such as the cluster with adalimumab (D; Cluster 23 – 37 patients), where some patients switch to golimumab (H), and vice-versa (Cluster 20 – 19 patients). These clusters are represented in Fig. 5. Less stable clusters may also provide relevant clinical information regarding the longitudinal profile of the therapy. For example, Cluster 14 (with 10 patients) defines a more elaborate structure of therapy switches, which corresponds to a more complex medical interpretation. Patients started with a TNF inhibitor agent (etanercept, A). If the patient's therapy failed after some time (secondary failure), the patient could be switched to a new TNF inhibitor (adalimumab, D). After two TNF inhibitor agents failed, the patients were switched to another class of drugs. The next drug can be either a B cell antibody (rituximab, C) or an IL-6 inhibitor (tocilizumab, G). Sometimes, patients do not respond at all to the first TNF inhibitor agent (primary failure) or they develop severe adverse reactions. In those cases, the rheumatologist can decide to go directly from etanercept (A) to tocilizumab (G) and switch the drug class earlier. This example shows a direct, meaningful interpretation of the obtained clusters from a medical point of view and highlights the advantages of patient stratification using longitudinal data.
Cluster Visualization. Graph representation of selected clusters based on stability measures and clinical interpretation. Drug codes: A - Etanercept; B - Infliximab; C - Rituximab; D - Adalimumab; E - Anakinra; F - Abatacept; G - Tocilizumab; H - Golimumab. Z - Follow-up/end
We propose the AliClu, a method that combines temporal sequence alignment and agglomerative hierarchical clustering to find groups in longitudinal data containing sequences of symbols and numeric values. The AliClu includes a clustering validation strategy based on bootstrapping and uses several clustering indices, such as the (adjusted) Rand, Fowlkes–Mallows, Jaccard, and adjusted Wallace, to choose the best number of groups to consider for each particular dataset. The stability of the obtained clusters is then assessed through resampling and by using the Jaccard index, the recovery rate, and the Dice coefficient. The AliClu can either be run entirely automatically or in a semi-automatic way, which requires user input regarding the chosen parameters. The final clusters are depicted in graphs where each node represents a symbol and each edge (a state switch) is labelled with the median transition time, with its weight representing the estimated conditional probability of switching.
The AliClu was tested using synthetic data generated with continuous-time Markov chain models, showing that it can separate sequences generated with different parameters. The AliClu was then run using the Portuguese Rheumatic Diseases Register (Reuma.pt), the national database for all the rheumatic patients treated with biologic agents. In particular, the rheumatoid arthritis (RA) patients' therapy information, including the sequence of drugs taken and their durations, was used as the input. The procedure allowed us to stratify RA patients in a clinically relevant way by creating groups of similar treatment profiles. The clusters obtained depict the treatment switches between different drugs, their median duration times and their probabilities.
The AliClu provides a parameter-setting, validation, and visualization procedure for the automatic clustering of temporal sequence data, and it has promising applications for patient stratification using electronic medical record (EMR) data.
Availability and requirements
Project name: AliClu
Project home page: https://github.com/sysbiomed/AliClu
Operating system(s): Platform independent
Programming language: Python
Other requirements: Python3 (in Linux or Windows) and Anaconda (in Mac OS)
Any restrictions to use by non-academics: None
Availability of data and material
AliClu is available at https://github.com/sysbiomed/AliClu. Data from Reuma.pt are not publicly available. Synthetic data is provided along with AliClu to ease its use.
AR:
adjusted Rand
AW:
adjusted Wallace
DMARD:
disease-modifying antirheumatic drugs
EMR:
Electronic Medical Records
FM:
Fowlkes and Mallows
PE:
prefix-encoded
TNW:
Temporal Needleman-Wunsch
Syed H, Das AK. Temporal Needleman-Wunsch. In: 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA). IEEE: 2015. https://doi.org/10.1109/dsaa.2015.7344785.
Needleman SB, Wunsch CD. A General Method Applicable to the Search for Similarities in the Amino Acid Sequence of Two Proteins. J Mol Biol. 1970; 48:443–53.
Sakoe H, Chiba S. Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans Acoust Speech Sig Process. 1978; 26:43–9.
Zhou F, la Torre FD. Canonical time warping for alignment of human behavior. In: Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009. Vancouver: Curran Associates, Inc.: 2009. p. 2286–94.
Kulkarni K, Evangelidis G, Cech J, Horaud R. Continuous action recognition based on sequence alignment. Int J Comput Vis. 2015; 112(1):90–114. https://doi.org/10.1007/s11263-014-0758-9.
Fischer B, Roth V, Buhmann JM. Time-series alignment by non-negative multiple generalized canonical correlation analysis. BMC Bioinformatics. 2007; 8(10):4.
Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, Lopez R, McWilliam H, Remmert M, Söding J, Thompson JD, Higgins DG. Fast, scalable generation of high-quality protein multiple sequence alignments using clustal omega. Mol Syst Biol. 2011; 7(1):539.
Katoh K, Standley DM. Mafft multiple sequence alignment software version 7: Improvements in performance and usability. Mol Biol Evol. 2013; 30(4):772–80.
Edgar RC. Muscle: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res. 2004; 32(5):1792–7.
Eddy SR. Profile hidden Markov models,. Bioinformatics. 1998; 14(9):755–63. https://doi.org/10.1093/bioinformatics/14.9.755.
Canhão H, Faustino A, Martins F, et al. Reuma.pt - The Rheumatic Diseases Portuguese Register. Acta Reumatologica Portuguesa. 2011; 36(1):45–56.
Docampo E, Collado A, Escaramís G, Carbonell J, Rivera J, Vidal J, Alegre J, Rabionet R, Estivill X. Cluster analysis of clinical data identifies fibromyalgia subgroups. PLOS ONE. 2013; 8(9):1–7. https://doi.org/10.1371/journal.pone.0074873.
Garg L, McClean S, Meenan BJ, Millard P. Phase-type survival trees and mixed distribution survival trees for clustering patients' hospital length of stay. Informatica. 2011; 22(1):57–72.
Axén I, Bodin L, Bergström G, Halasz L, Lange F, Lövgren PW, Rosenbaum A, Leboeuf-Yde C, Jensen I. Clustering patients on the basis of their individual course of low back pain over a six month period. BMC Musculoskelet Disord. 2011; 12(1):99. https://doi.org/10.1186/1471-2474-12-99.
De la Cruz-Mesía R, Quintana FA, Marshall G. Model-based clustering for longitudinal data. Comput Stat Data Anal. 2008; 52(3):1441–57. https://doi.org/10.1016/j.csda.2007.04.005.
Saxena A, Prasad M, Gupta A, Bharill N, Patel OP, Tiwari A, Er MJ, Ding W, Lin C-T. A review of clustering techniques and developments. Neurocomputing. 2017; 267:664–81.
Mucha H-J. In: Decker R, Lenz H-J, editors. Advances in Data Analysis. Berlin, Heidelberg: Springer; 2007. p. 115–122.
Rand WM. Objective criteria for the evaluation of clustering methods. J Am Stat Assoc. 1971; 66:846–50.
Hubert L, Arabie P. Comparing partitions. J Classif. 1985; 2(1):193–218.
Fowlkes EB, Mallows CL. A method for comparing two hierarchical clusterings. J Am Stat Assoc. 1983; 78:553–69.
Wallace DL. A method for comparing two hierarchical clusterings: Comment. J Am Stat Assoc. 1983; 78:569–76.
We acknowledge all Reuma.pt contributors.
The authors acknowledge funding from the Portuguese Foundation for Science and Technology (Fundação para a Ciência e a Tecnologia - FCT) under contracts INESC-ID (UID/CEC/50021/2019) and IT (UID/EEA/50008/2019), and projects PREDICT (PTDC/CCI-CIF/29877/2017), PERSEIDS (PTDC/EMS-SIS/0642/2014) and NEUROCLINOMICS2 (PTDC/EEI-SII/1937/2014). The funders had no role in the design of the study, collection, analysis and interpretation of data, or writing the manuscript.
Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais, 1 - Torre Norte Piso 10., Lisboa, 1049-001, Portugal
Kishan Rama
& Alexandra M. Carvalho
CEDOC, EpiDoC Unit, NOVA Medical School, National School of Public Health, Universidade NOVA de Lisboa, Rua do Instituto Bacteriológico, n∘ 5 Lab 2.9., Lisboa, 1150-082, Portugal
Helena Canhão
INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Rua Alves Redol 9, Lisboa, 1000-029, Portugal
& Susana Vinga
Search for Kishan Rama in:
Search for Helena Canhão in:
Search for Alexandra M. Carvalho in:
Search for Susana Vinga in:
KR implemented the algorithms, performed the computational experiments and wrote the first draft of the manuscript (all authors made the required updates). HC provided the data, clinical insights and interpretation. AMC and SV conceived the study, supervised the research, generated the final results and manuscript. All authors contributed to the final draft, read and approved the final version of the manuscript.
Correspondence to Susana Vinga.
Reuma.pt was approved by the National Data Protection Board (Comissão Nacional de Proteção de Dados – CNPD, Portugal) and by the Ethics Committee of Centro Hospitalar Lisboa Norte (CHLN) - Hospital de Santa Maria (HSM), Lisbon, Portugal. Patients signed Reuma.pt's informed and written consent.
SV is member of the Editorial Board of BMC Bioinformatics. KR, HC, and AMC declare that they have no competing interests.
Additional file 1 Supplementary Information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Rama, K., Canhão, H., Carvalho, A.M. et al. AliClu - Temporal sequence alignment for clustering longitudinal clinical data. BMC Med Inform Decis Mak 19, 289 (2019) doi:10.1186/s12911-019-1013-7
October 2017, 37(10): 5065-5083. doi: 10.3934/dcds.2017219
The global stability of 2-D viscous axisymmetric circulatory flows
Huicheng Yin 1 and Lin Zhang 2,*
School of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, China
Department of Mathematics and IMS, Nanjing University, Nanjing 210093, China
* Corresponding author: Lin Zhang
Received February 2015 Revised May 2017 Published June 2017
Fund Project: The authors are supported by the NSFC (No. 11571177) and A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions
In this paper, we study the global existence and stability problem of a perturbed viscous circulatory flow around a disc. This flow is described by the two-dimensional Navier-Stokes equations. By introducing suitable weighted energy spaces and establishing a priori estimates, we show that the 2-D circulatory flow is globally stable in time when the perturbations of the corresponding initial-boundary values are sufficiently small.
Keywords: Compressible Navier-Stokes equations, circulatory flow, weighted energy space, global existence.
Mathematics Subject Classification: Primary: 35L70, 35L65; Secondary: 35L67, 76N15.
Citation: Huicheng Yin, Lin Zhang. The global stability of 2-D viscous axisymmetric circulatory flows. Discrete & Continuous Dynamical Systems - A, 2017, 37 (10) : 5065-5083. doi: 10.3934/dcds.2017219
Figure 1. Subsonic case of a viscous flow around a disc
Figure 2. Supersonic-sonic-subsonic case of a viscous flow around a disc
Astrophysics > Cosmology and Nongalactic Astrophysics
[Submitted on 14 Apr 2017 (v1), last revised 19 Jun 2017 (this version, v2)]
Title:Exploring Cosmic Origins with CORE: B-mode Component Separation
Authors:M. Remazeilles, A. J. Banday, C. Baccigalupi, S. Basak, A. Bonaldi, G. De Zotti, J. Delabrouille, C. Dickinson, H. K. Eriksen, J. Errard, R. Fernandez-Cobos, U. Fuskeland, C. Hervías-Caimapo, M. López-Caniego, E. Martinez-González, M. Roman, P. Vielva, I. Wehus, A. Achucarro, P. Ade, R. Allison, M. Ashdown, M. Ballardini, R. Banerji, N. Bartolo, J. Bartlett, D. Baumann, M. Bersanelli, M. Bonato, J. Borrill, F. Bouchet, F. Boulanger, T. Brinckmann, M. Bucher, C. Burigana, A. Buzzelli, Z.-Y. Cai, M. Calvo, C.-S. Carvalho, G. Castellano, A. Challinor, J. Chluba, S. Clesse, I. Colantoni, A. Coppolecchia, M. Crook, G. D'Alessandro, P. de Bernardis, G. de Gasperis, J.-M. Diego, E. Di Valentino, S. Feeney, S. Ferraro, F. Finelli, F. Forastieri, S. Galli, R. Genova-Santos, M. Gerbino, J. González-Nuevo, S. Grandis, J. Greenslade, S. Hagstotz, S. Hanany, W. Handley, C. Hernandez-Monteagudo, M. Hills, E. Hivon, K. Kiiveri, T. Kisner, T. Kitching, M. Kunz, H. Kurki-Suonio, L. Lamagna, A. Lasenby, M. Lattanzi, J. Lesgourgues, A. Lewis, M. Liguori, V. Lindholm, G. Luzzi, B. Maffei, C.J.A.P. Martins, S. Masi, D. McCarthy, J.-B. Melin, A. Melchiorri, D. Molinari, A. Monfardini, P. Natoli, M. Negrello, A. Notari, A. Paiella, D. Paoletti, G. Patanchon, M. Piat, G. Pisano, L. Polastri, G. Polenta, A. Pollo, V. Poulin
M. Quartin, J.-A. Rubino-Martin, L. Salvati, A. Tartari, M. Tomasi, D. Tramonte, N. Trappe, T. Trombetti, C. Tucker, J. Valiviita, R. Van de Weijgaert, B. van Tent, V. Vennin, N. Vittorio, K. Young, M. Zannoni (for the CORE collaboration)
Abstract: We demonstrate that, for the baseline design of the CORE satellite mission, the polarized foregrounds can be controlled at the level required to allow the detection of the primordial cosmic microwave background (CMB) $B$-mode polarization with the desired accuracy at both reionization and recombination scales, for tensor-to-scalar ratio values of ${r\gtrsim 5\times 10^{-3}}$. We consider detailed sky simulations based on state-of-the-art CMB observations that consist of CMB polarization with $\tau=0.055$ and tensor-to-scalar values ranging from $r=10^{-2}$ to $10^{-3}$, Galactic synchrotron, and thermal dust polarization with variable spectral indices over the sky, polarized anomalous microwave emission, polarized infrared and radio sources, and gravitational lensing effects. Using both parametric and blind approaches, we perform full component separation and likelihood analysis of the simulations, allowing us to quantify both uncertainties and biases on the reconstructed primordial $B$-modes. Under the assumption of perfect control of lensing effects, CORE would measure an unbiased estimate of $r=\left(5 \pm 0.4\right)\times 10^{-3}$ after foreground cleaning. In the presence of both gravitational lensing effects and astrophysical foregrounds, the significance of the detection is lowered, with CORE achieving a $4\sigma$-measurement of $r=5\times 10^{-3}$ after foreground cleaning and $60$% delensing. For lower tensor-to-scalar ratios ($r=10^{-3}$) the overall uncertainty on $r$ is dominated by foreground residuals, not by the 40% residual of lensing cosmic variance. Moreover, the residual contribution of unprocessed polarized point-sources can be the dominant foreground contamination to primordial B-modes at this $r$ level, even on relatively large angular scales, $\ell \sim 50$. Finally, we report two sources of potential bias for the detection of the primordial $B$-modes.[abridged]
Comments: 87 pages, 32 figures, 4 tables, expanded abstract. Updated to match version accepted by JCAP
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA); Instrumentation and Methods for Astrophysics (astro-ph.IM)
DOI: 10.1088/1475-7516/2018/04/023
Cite as: arXiv:1704.04501 [astro-ph.CO]
(or arXiv:1704.04501v2 [astro-ph.CO] for this version)
From: Mathieu Remazeilles [view email]
[v1] Fri, 14 Apr 2017 18:00:01 UTC (7,621 KB)
[v2] Mon, 19 Jun 2017 20:46:36 UTC (7,624 KB)
Stem extract of Albizia richardiana exhibits potent antioxidant, cytotoxic, antimicrobial, anti-inflammatory and thrombolytic effects through in vitro approach
Mohammad Nazmul Islam ORCID: orcid.org/0000-0002-4021-106X1,2,
Homyra Tasnim3,
Laiba Arshad4,
Md. Areeful Haque1,5,
Syed Mohammed Tareq1,
A. T. M. Mostafa Kamal1,
Md. Masudur Rahman1,
A S. M. Ali Reza1,
Kazi Ashfak Ahmed Chowdhury1 &
Abu Montakim Tareq1
Albizia richardiana belongs to the Fabaceae family, and its different parts, such as fruits, flowers, bark, and roots, are used medicinally. The study reports the in vitro anti-inflammatory, thrombolytic, cytotoxic and antimicrobial activity of the methanolic extract of A. richardiana stem and its different fractions.
The methanolic extract of A. richardiana stem (MEAR) was fractionated with n-hexane (HXFAR), carbon tetrachloride (CTFAR), chloroform (CFAR), and water (AQFAR), and subjected to the DPPH scavenging assay and total phenol content (TPC) determination. The cytotoxic activity was evaluated by the brine shrimp lethality bioassay, while the disk diffusion method was used for the antimicrobial study. The anti-inflammatory and thrombolytic activities of the extracts were evaluated by hypotonic solution induced hemolysis, heat-induced hemolysis and human blood clot lysis, respectively.
All the extracts exhibited excellent antioxidant activity in the DPPH scavenging assay, and the maximum total phenol content was observed for HXFAR. The extracts showed moderate LC50 values in the brine shrimp lethality bioassay, while CTFAR exhibited potential antimicrobial activities against sixteen different microorganisms. In the anti-inflammatory assays, all the extracts exhibited significant (P < 0.0001) protection against lysis of the human erythrocyte membrane induced by heat and hypotonic solution, as compared to the standard acetylsalicylic acid. Extremely significant (P < 0.0001) clot lysis was found for MEAR (16.66%), compared with 70.94% for the standard drug streptokinase.
All the fractions revealed significant free radical scavenging activity. Moreover, CTFAR showed a wide spectrum of antimicrobial activity. Thus, the results of the present study provide scientific evidence for the use of Albizia richardiana as traditional medicine.
Plants are a resourceful option for producing drugs, which is important to keep the world healthy [1]. Plants are used to cure and prevent diseases, and 75% of the world population depends on herbal drugs for their primary healthcare [2, 3]. Albizia richardiana King & Prain belongs to the Fabaceae family and is found in Asia, Africa and Australia [4]. Different parts like fruits, flowers, bark, and roots are used medicinally [5]. A. richardiana is used in treating depression, poor appetite, impaired blood circulation, tightness in the chest, eye problems, blurred vision and back pain [6]. The bark of this plant has been reported to contain several secondary metabolites, such as carbohydrates, saponins, glucosides, glycosides and alkaloids. The bark extract was also reported to have significant antioxidant activity, while its lethal concentration (LC50) indicated extreme toxicity. Its hypoglycemic and antimicrobial activities were also evaluated [7].
Secondary metabolites of plants are not necessary for the survival of the plant but are important for its development, reproduction, growth, and protection [8]. Phenolics are widely distributed in plant species and are useful under physical and biological stress [9, 10]. Phenolic compounds are beneficial for human health owing to their various biological effects, such as antioxidant, antimicrobial, anticancer, and anti-allergic effects [11].
Reactive oxygen species (ROS) include both oxygen radicals and non-radical derivatives that are generated during normal metabolic processes [12]. Excessive production of ROS may lead to oxidative chain reactions that overwhelm the antioxidants in the body. This imbalance may affect health and molecular function [13,14,15], and may cause several diseases such as neurodegenerative diseases, cancer, allergies, and cardiac problems [16]. Antioxidants (present in the human body or obtained from food or plants) can suppress these oxidative reactions or free radicals, thereby restricting oxidative damage [17]. Thus, dietary intake of plant-derived antioxidants may reduce or prevent oxidative stress and the need for additional medicine [18].
The present research study was designed to investigate the in vitro anti-inflammatory, thrombolytic, cytotoxic and antimicrobial activity of the methanolic extract of A. richardiana and its different fractions. To our knowledge, no such study has been designed or published using the stem of A. richardiana.
Sample collection and preparation
A. richardiana stem was collected from the hilly area of Chittagong, Bangladesh, and was identified by Professor Dr. Shaikh Bokhtear Uddin, Department of Botany, University of Chittagong, Bangladesh. The stem was cut into small pieces and dried in sunlight for a couple of days. For better grinding, the sun-dried stem was further dried in an oven. The dried stem was then crushed with a high-capacity grinding machine, and the final product was a coarse powder of A. richardiana (0.5 kg).
Folin-Ciocalteu reagent (10-fold diluted) (Merck, St. Louis, MO, USA), methanol, chloroform, carbon tetrachloride (Merck, Darmstadt, Germany), brine shrimp eggs (SK brand, Thailand), and vincristine sulphate (Sigma-Aldrich Co.) were used. The human RBCs were collected from a 70 kg disease-free male of fair complexion. The collected RBCs were kept in a test tube with the anticoagulant EDTA under standard conditions of temperature (23 ± 2 °C) and relative humidity (55 ± 10%). A lyophilized alteplase (streptokinase) vial of 15,00,000 I.U. was purchased from Beacon Pharmaceuticals Ltd. All other chemicals were of analytical grade.
Test microorganisms
Gram-positive bacteria (Bacillus cereus, Bacillus megaterium, Bacillus subtilis, Sarcina lutea, and Staphylococcus aureus), gram-negative bacteria (Escherichia coli, Salmonella paratyphi, Salmonella typhi, Shigella boydii, Shigella dysenteriae, Pseudomonas aeruginosa, Vibrio mimicus, Vibrio parahaemolyticus) and fungal strains (Aspergillus niger, Candida albicans, Saccharomyces cerevisiae) were used for the antimicrobial assay. The microorganisms were provided by the State University of Bangladesh.
Extraction of plant material
The coarse powder of the stem was macerated in 1.5 L of methanol in an amber glass bottle. The bottle was kept in a dry place with occasional shaking and stirring. After 14 days, the mixture was filtered through cotton and Whatman filter paper #1, respectively. The volume of the filtrate was reduced by evaporation at atmospheric temperature until 70% of the solvent had evaporated. Finally, a crude semi-solid methanolic extract of A. richardiana was obtained (5 g).
Solvent- solvent partitioning
The crude extract was fractionated using the solvent-solvent partitioning protocol designed by Kupchan et al. and modified by Van Wagenen et al. [19]. All the fractions (HXFAR, CTFAR, CFAR, and AQFAR) were evaporated to dryness for further analysis.
DPPH scavenging assay
To evaluate the antioxidant potential of the plant extract, the DPPH assay was used, employing the method of Brand-Williams et al., 1995 [20]. Briefly, 2 mL of each serially diluted test sample (500 μg/mL to 0.977 μg/mL) was mixed with 3 mL of a DPPH methanol solution (20 μg/mL). Following a 30 min reaction period (25 °C) in a dark place, the absorbance was measured at 517 nm with a UV spectrophotometer. Finally, IC50 values were calculated from the graph plotting the concentration of the sample against the percentage inhibition of free radicals. Ascorbic acid and butylated hydroxytoluene were used as positive controls.
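For illustration, the snippet below shows one way to compute the percentage inhibition and estimate an IC50 by linear interpolation of the inhibition curve; the absorbance-to-inhibition formula is the standard DPPH calculation, while the data points are entirely hypothetical:

```python
import numpy as np

def percent_inhibition(abs_control, abs_sample):
    """% DPPH scavenging = (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100

def ic50_from_curve(concentrations, inhibitions):
    """Estimate IC50 by linear interpolation of the %-inhibition curve,
    assuming inhibition increases with concentration and crosses 50%."""
    return float(np.interp(50.0, inhibitions, concentrations))

conc = np.array([0.977, 1.953, 3.906, 7.813, 15.625, 31.25])  # ug/mL
inh = np.array([12.0, 21.0, 33.0, 46.0, 58.0, 71.0])          # hypothetical
print(round(ic50_from_curve(conc, inh), 1))                    # -> 10.4 ug/mL
```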
Total phenolic component analysis
The total phenolic content of A. richardiana stem was assessed using the previously described method of Skerget et al. (2005) [21]. Folin-Ciocalteu reagent was used as the oxidizing agent, and gallic acid was used as the reference. The A. richardiana stem extract (2 mg) was dissolved in distilled water to obtain a sample concentration of 2 mg/mL. Afterwards, a mixture consisting of 0.5 mL extract solution (conc. 2 mg/mL), 2.5 mL Folin-Ciocalteu reagent (diluted ten times with water) and 2 mL Na2CO3 (7.5% w/v) solution was prepared and incubated for 20 min at room temperature. Finally, the absorbance of the mixture was determined with a UV spectrophotometer at 760 nm, and the total phenolic content of the sample was calculated from the absorbance. A standard curve was also prepared from gallic acid solutions of different concentrations. The phenolic content is expressed as mg of GAE (gallic acid equivalent)/g of the extract.
Cytotoxic activity assay
The brine shrimp lethality bioassay was performed to assess possible cytotoxic activity (Meyer et al., 1982) [22]. Brine shrimp eggs collected from pet shops were hatched in simulated seawater (prepared by dissolving 38 g sea salt in one liter of distilled water) with a constant oxygen supply until they matured into nauplii. The test sample was taken in a vial and dissolved in 200 μL pure dimethyl sulfoxide (DMSO), from which 100 μL of solution was each time transferred to a test tube containing 5 mL simulated seawater and 10 shrimp nauplii. Test samples of different concentrations were prepared applying the sequential dilution method. For validation of the test method, and to ensure that the obtained results reflect only the activity of the test agent and nullify the effects of other possible factors, positive and negative control groups were used. As a positive control, vincristine sulfate was added to DMSO to obtain different concentrations through serial dilutions, which were later added to pre-marked test tubes containing 5 mL simulated seawater and 10 shrimp nauplii. A comparative analysis was then done between the results of the test agent and the positive control group. For the negative control, 100 μL DMSO was added to each of the pre-marked test tubes filled with simulated seawater (5 mL) and shrimp nauplii (n = 10). Rapid mortality of the brine shrimps would indicate that the test is invalid, as the nauplii would have died for reasons other than the cytotoxicity of the compounds. After 24 h, visual inspection with a magnifying glass was done to count the survivors, and concentration-mortality data were analyzed statistically using an IBM-PC program. The median lethal concentration (LC50) was used to express the effectiveness or concentration-mortality relationship of the plant product. It represents the concentration of a chemical that produces death in half of the test subjects after a specific exposure period.
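As a sketch of how such concentration-mortality data can be reduced to an LC50, the following snippet regresses % mortality on log10(concentration) and solves for the 50% point; this simple graphical stand-in (with invented data) approximates the probit-style analysis typically applied to this bioassay:

```python
import numpy as np

def lc50(concentrations_ug_ml, dead, total=10):
    """Estimate LC50 from concentration-mortality data by regressing
    % mortality on log10(concentration) and solving for 50%."""
    x = np.log10(concentrations_ug_ml)
    y = 100.0 * np.asarray(dead) / total
    slope, intercept = np.polyfit(x, y, 1)
    return 10 ** ((50.0 - intercept) / slope)

conc = [31.25, 62.5, 125, 250, 500]   # ug/mL, hypothetical dilution series
dead = [1, 3, 5, 7, 9]                # nauplii dead out of 10 after 24 h
print(round(lc50(conc, dead), 1))     # -> 125.0 ug/mL
```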
Antimicrobial assay
In this study, the antimicrobial property of the crude extract as well as the fractions of A. richardiana was tested using the disc diffusion method [23, 24]. The bacterial and fungal strains used for the experiment were obtained from the State University of Bangladesh. Standard ciprofloxacin (30 μg/disc) and blank discs were used as positive and negative controls, respectively. Nutrient agar medium was used to assess the sensitivity of the organisms to the test materials. The sample discs, the standard antibiotic discs and the control discs were gently placed on the previously marked zones of the agar plates pre-inoculated with the test bacteria and fungi. The plates were then incubated at 37 °C for 24 h. After incubation, the clear zone of inhibition surrounding the discs was measured, which determines the antimicrobial potency of the test agents.
Anti-inflammatory activity
To evaluate anti-inflammatory activity, hypotonic solution induced hemolysis and heat-induced hemolysis techniques were adopted.
Hypotonic solution induced hemolysis
Hypotonic solution induced hemolysis was evaluated using a previously described method [25]. Each test sample consisted of stock erythrocyte (RBC) suspension (0.50 mL) mixed with 5 mL of hypotonic solution (50 mM NaCl) in 10 mM sodium phosphate buffered saline (pH 7.4) containing either one of the extracts (2 mg/mL) or acetylsalicylic acid (0.10 mg/mL). Acetylsalicylic acid was used as the reference standard. The mixtures were incubated for 10 min at room temperature and then centrifuged for 10 min at 3000 g, and the absorbance (O.D.) of the supernatant was determined at 540 nm using a Shimadzu UV spectrometer. The percentage inhibition of either hemolysis or membrane stabilization was calculated using the following equation:
$$ \%\mathrm{Inhibition}\ \mathrm{of}\ \mathrm{hemolysis}=100\times \left\{\left({\mathrm{OD}}_1-{\mathrm{OD}}_2\right)/{\mathrm{OD}}_1\right\} $$
OD1 = Optical density of hypotonic-buffered saline solution alone (control) and
OD2 = Optical density of test sample in hypotonic solution.
Heat-induced hemolysis
Heat-induced hemolysis was evaluated using the standard protocol [25]. Aliquots (5 mL) of the isotonic buffer containing 1 mg/mL of the different extractives were taken into two duplicate sets of centrifuge tubes. The vehicle, in the same amount, was added to another tube as a control. Erythrocyte suspension (30 μL) was added to each tube and mixed with gentle inversion. One pair of the tubes was incubated at 54 °C for 20 min in a water bath, while the other pair was maintained at 0–5 °C in an ice bath. The reaction mixtures were centrifuged for 3 min at 1300 g, and the absorbance of the supernatant was measured at 540 nm. The percentage inhibition of hemolysis was calculated according to the equation:
$$ \%\mathrm{Inhibition}\ \mathrm{of}\ \mathrm{hemolysis}=100\ \mathrm{x}\ \left[1-\left({\mathrm{OD}}_2-{\mathrm{OD}}_1/{\mathrm{OD}}_3-{\mathrm{OD}}_1\right)\right] $$
OD1 = test sample unheated, OD2 = test sample heated and OD3 = control sample heated
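A small helper translating the two formulas above into code may make the bookkeeping explicit; the optical-density readings used in the example are hypothetical:

```python
def inhibition_hypotonic(od_control, od_sample):
    """% inhibition of hypotonic-solution-induced hemolysis:
    100 * (OD1 - OD2) / OD1."""
    return 100 * (od_control - od_sample) / od_control

def inhibition_heat(od_unheated, od_heated_sample, od_heated_control):
    """% inhibition of heat-induced hemolysis:
    100 * (1 - (OD2 - OD1) / (OD3 - OD1))."""
    return 100 * (1 - (od_heated_sample - od_unheated) /
                  (od_heated_control - od_unheated))

print(inhibition_hypotonic(0.80, 0.30))   # -> 62.5
print(inhibition_heat(0.05, 0.12, 0.54))  # -> ~85.7
```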
Thrombolytic activity assay
To investigate the thrombolytic activity of the methanol extract and the different fractions of A. richardiana, 10 mg of the methanol extract and of each of its fractions were added to different vials, each containing 1 mL of distilled water. Aliquots (5 mL) of venous blood were drawn from healthy volunteers (without a history of oral contraceptive or anticoagulant therapy) and distributed in 10 different pre-weighed sterile vials (1 mL/tube). The vials were then incubated at 37 °C for 45 min. In order to determine the clot weight, the vials were weighed again after clot formation. A 100 μL aqueous solution of each fraction and of the crude extract was added separately to each vial containing a pre-weighed clot. Also, 100 μL of streptokinase (SK) and 100 μL of distilled water were separately added to control vials as positive and negative controls, respectively. All the vials were then incubated at 37 °C for 90 min and observed for clot lysis. Once the incubation was over, the released fluid was removed, and the vials were weighed again to express the percentage of clot lysis from the difference in weight measured before and after clot lysis.
$$ \%\ \mathrm{Clot\ lysis}=\left(\mathrm{weight\ of\ released\ clot}/\mathrm{clot\ weight}\right)\times 100 $$
where the weight of the released clot is the difference between the vial weight before lysis and the vial weight after removal of the released fluid.
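The weighing-based bookkeeping can be sketched as follows; the vial weights are hypothetical:

```python
def percent_clot_lysis(w_empty_vial, w_vial_with_clot, w_vial_after_lysis):
    """% clot lysis = (released clot weight / initial clot weight) * 100,
    computed from the pre-weighed-vial differences described above."""
    clot = w_vial_with_clot - w_empty_vial
    released = w_vial_with_clot - w_vial_after_lysis
    return released / clot * 100

# Hypothetical vial weights in grams
print(round(percent_clot_lysis(5.00, 5.80, 5.67), 1))  # -> 16.2
```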
Values are represented as mean ± SEM (n = 3). P < 0.05 was considered statistically significant in comparison to the control group; one-way ANOVA was performed using GraphPad Prism (version 7).
DPPH scavenging activity
The antioxidant activity by the DPPH assay and the total phenol content (TPC) are summarized in Tables 1 and 2, respectively. A. richardiana showed significant antioxidant activity for CFAR (5.49 μg/mL), HXFAR (11.26 μg/mL) and CTFAR (12.98 μg/mL) in comparison to the standard ascorbic acid (14.14 μg/mL), followed by the other two fractions, AQFAR (81.99 μg/mL) and MEAR (155.32 μg/mL).
Table 1 IC50 values of methanolic extract of Albizia richardiana and standard reference ascorbic acid with regression equation
Table 2 Effect of different extractives of stem of Albizia richardiana on Total phenol content
The total phenol content in A. richardiana was evaluated with the Folin-Ciocalteu reagent and expressed as GAE/g of dried extract. The regression equation found from standard gallic acid was: Y = 0.0162x + 0.0215; R² = 0.9985. The highest total phenolic content was found in the n-hexane fraction of A. richardiana (54.05 mg GAE/g). The order of TPC was as follows: HXFAR > AQFAR > CTFAR > MEAR > CFAR.
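The conversion from a Folin-Ciocalteu absorbance reading to mg GAE/g of extract can be sketched as below, using the calibration just reported; the unit handling (2 mg/mL sample, calibration in µg/mL GAE) and the absorbance reading are illustrative assumptions:

```python
def tpc_mg_gae_per_g(absorbance, sample_conc_mg_ml=2.0,
                     slope=0.0162, intercept=0.0215):
    """Convert a 760 nm absorbance into total phenolic content using the
    gallic acid calibration Y = 0.0162x + 0.0215 (x in ug/mL GAE).
    ug GAE per mg of extract equals mg GAE per g of extract."""
    gae_ug_ml = (absorbance - intercept) / slope
    return gae_ug_ml / sample_conc_mg_ml

print(round(tpc_mg_gae_per_g(0.90), 1))  # -> 27.1 mg GAE/g (hypothetical)
```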
Oxidative damage results from an imbalance between the antioxidant defense system and free radical generation [26], which leads to damage of proteins, nucleic acids and lipids [27]. Oxidative stress is initiated by tissue infection or injury caused by physical trauma, hyperoxia, chemicals (toxins) or excessive exercise, which produce increased amounts of xanthine oxidase, disruption of oxidative phosphorylation and excess reactive oxygen species (ROS). Excess production of ROS causes several complications such as cardiac disease, diabetes, aging, and Parkinson's disease [28]. In biological systems, phenolic and flavonoid contents are important for antioxidant activity, as reported by scientists [29, 30]. Phenolic compounds are important in plant defense and signaling pathways. The mechanism by which phenolic compounds inhibit free radicals is the transfer of an H-atom from the hydroxyl group (OH). Phenolic compounds are also associated with a reduced incidence of neurodegenerative diseases [31]. In our experiment, the extract of A. richardiana and its fractions were found to be enriched in antioxidant activity.
Cytotoxic activity
The brine shrimp lethality assay revealed LC50 values of the test fractions ranging between 106.10–194.85 μg/mL under the light condition and 124.83–182.16 μg/mL under the dark condition, whereas standard vincristine sulfate showed 0.27 μg/mL (Table 3). Among all the extracts of the stem of A. richardiana, the lowest LC50 was demonstrated by CTFAR (106.10 μg/mL), followed by HXFAR (194.78 μg/mL), MEAR (188.59 μg/mL), AQFAR (181.95 μg/mL) and CFAR (156.98 μg/mL) under the light condition, and by CTFAR (124.83 μg/mL), followed by MEAR (195.05 μg/mL), AQFAR (182.16 μg/mL), CFAR (156.62 μg/mL) and HXFAR (141.34 μg/mL) under the dark condition. Generally, the higher the lethality, the lower the LC50, and vice versa. An LC50 value over 1000 μg/mL is considered non-toxic, 500–1000 μg/mL weakly toxic, 100–500 μg/mL moderately toxic, and less than 100 μg/mL highly toxic [32]. Here, all the extracts showed moderate toxicity; hence, caution is required in treatment to avoid overdosing.
Table 3 LC50 values of methanolic extract of Albizia richardiana at light and dark condition with regression equation as well as standard vincristine sulfate
Anti-microbial activity
In the antimicrobial screening performed at a dose of 400 μg/disc by the disc diffusion method, the carbon tetrachloride soluble fraction (CTFAR) exhibited the highest inhibition of microbial growth, with zones of inhibition ranging from 7 to 23 mm. The HXFAR and CFAR fractions exhibited minimal inhibition of microbial growth, while AQFAR did not exhibit any. The results indicate that CTFAR possesses better antimicrobial activity against gram-positive and gram-negative bacteria as well as fungi, and can be studied further to explore potent antimicrobial agents (Table 4).
Table 4 Antimicrobial activity of test samples of Albizia richardiana
The antimicrobial potential of the plant was studied via the zones of inhibition of different gram-positive bacteria, gram-negative bacteria and fungi. The different fractions of A. richardiana showed their respective activities; apart from CTFAR, none of the fractions exhibited strong inhibition of the growth of the microorganisms. From these results, it can be suggested that the methanol, n-hexane and carbon tetrachloride extracts of A. richardiana might act as broad-spectrum antibacterial agents against the tested organisms.
The anti-inflammatory activity is presented in Figs. 1 and 2. The methanol extract and the different fractions of A. richardiana stem at a concentration of 2.0 mg/mL significantly (P < 0.0001) protected against lysis of the human erythrocyte membrane induced by heat and hypotonic solution, as compared to the standard acetylsalicylic acid (0.10 mg/mL). The erythrocyte membrane resembles the lysosomal membrane, and as such, the effect of drugs on the stabilization of erythrocytes can be extrapolated to the stabilization of the lysosomal membrane [33]. In both the hypotonic solution and heat-induced conditions, the highest membrane stabilizing activity was demonstrated by CTFAR (62.29 ± 0.41% and 85.71 ± 0.71%, respectively). Significant membrane stabilizing activity was also exhibited by the other extracts in both conditions. The results clearly indicate that the extracts of the stem of A. richardiana were highly effective in membrane stabilization, preventing the lysis of erythrocytes induced by hypotonic solution and heat. Previous studies have reported that flavonoids exert profound stabilizing effects on lysosomes both in vitro and in vivo in experimental animals, while tannins and saponins have the ability to bind cations and other biomolecules and are able to stabilize the erythrocyte membrane [34, 35].
Effect of different extracts of stem of Albizia richardiana on hypotonic solution-induced haemolysis of erythrocyte membrane. Values are represented as mean ± SEM (n = 3). d P < 0.0001 was considered statistically significant in comparison to the positive control acetylsalicylic acid
Effect of different extracts of stem of Albizia richardiana on heat-induced haemolysis of erythrocyte membrane. Values are represented as mean ± SEM (n = 3). d P < 0.0001 and b P < 0.01 were considered statistically significant in comparison to the positive control acetylsalicylic acid
Thrombolytic activity
In the thrombolytic study, among all the extracts only MEAR exhibited extremely significant (P < 0.0001) clot lysis, while the activity of all other extracts of A. richardiana was found to be negligible (Fig. 3). The thrombolytic activities were: methanol extract (MEAR, 16.66%), chloroform soluble fraction (CFAR, 2.15%), hexane soluble fraction (HXFAR, 3.22%) and carbon tetrachloride soluble fraction (CTFAR, 4.68%), while AQFAR showed no thrombolytic activity. Therefore, it can be concluded that the extracts of A. richardiana showed very poor clot lysis activity compared to the standard substance streptokinase (SK, 70.94%).
Thrombolytic activity of different extractives of stem of Albizia richardiana on human blood. Values are represented in Mean ± SEM (n = 3). d P < 0.0001 considered as statistically significant in comparison to control group
The formation of clots in blood vessels is a serious problem in the circulation. A thrombus blocks the blood vessel and impedes blood flow, depriving tissues of their normal oxygen supply. Plasmin, which is activated by tissue plasminogen activator (tPA), lyses the blood clots formed by thrombin, and fibrinolytic drugs dissolve thrombi in the coronary arteries to restore blood flow [36]. In our present study, clot lysis was found to be negligible in comparison to the standard drug streptokinase, with only MEAR found to be significant.
This study suggests that the extracts of A. richardiana have significant free radical scavenging and anti-inflammatory activity. Among the fractions, the carbon tetrachloride fraction of the stem extract showed a promising antimicrobial effect. Notably, among all the extracts only the methanol extract exhibited a significant clot lysis effect, while that of all other extracts of A. richardiana was found to be negligible. These results suggest that A. richardiana can be a potential source of biological activity. However, further investigations are required to isolate and evaluate the specific compounds from A. richardiana stems.
MEAR:
Methanolic extract of Albizia richardiana
HXFAR:
N-hexane fraction of Albizia richardiana
CTFAR:
Carbon tetrachloride fraction of Albizia richardiana
CFAR:
Chloroform fraction of Albizia richardiana
AFAR:
Aqueous fraction of Albizia richardiana
DMSO:
Dimethyl sulfoxide
GPB:
Gram positive bacteria
GNB:
Gram negative bacteria
Sandberg F, Corrigan D. Natural remedies: their origins and uses. CRC Press; 2001.
Schulz V, Hänsel R, Tyler VE. Rational phytotherapy: a physician's guide to herbal medicine. Psychology Press; 2001.
Bodeker G, Ong CK. WHO global atlas of traditional, complementary and alternative medicine. World Health Organization; 2005.
Allan GJ, Porter JM. Tribal delimitation and phylogenetic relationships of Loteae and Coronilleae (Faboideae: Fabaceae) with special reference to Lotus: evidence from nuclear ribosomal ITS sequences. Am J Bot. 2000;87(12):1871–81.
Joycharat N, Thammavong S, Limsuwan S, Homlaead S, Voravuthikunchai SP, Yingyongnarongkul B-E, et al. Antibacterial substances from Albizia myriophylla wood against cariogenic Streptococcus mutans. Arch Pharm Res. 2013;36(6):723–30.
Xinrong Y, Anmin C, Fang S, Bingyi F, Jinlin Q, Yingfu M, Quan L, Yuan G, Shuqian W, Werner H, Zhemin G, editors. Encyclopedic reference of traditional Chinese medicine. Springer Science & Business Media; 2003.
Rahman M, Jahan Shetu H, Sukul A, Rahman I. Phytochemical and biological evaluation of Albizia richardiana Benth, Fabaceae family. World J Pharm Res. 2015;4990:168–76.
Park CH, Yeo HJ, Kim NS, Eun PY, Kim S-J, Arasu MV, et al. Metabolic profiling of pale green and purple kohlrabi (Brassica oleracea var. gongylodes). Appl Biol Chem. 2017;60(3):249–57.
Higdon JV, Delage B, Williams DE, Dashwood RH. Cruciferous vegetables and human cancer risk: epidemiologic evidence and mechanistic basis. Pharmacol Res. 2007;55(3):224–36.
Douglas CJ. Phenylpropanoid metabolism and lignin biosynthesis: from weeds to trees. Trends Plant Sci. 1996;1(6):171–8.
Jahangir M, Kim HK, Choi YH, Verpoorte R. Health-affecting compounds in Brassicaceae. Compr Rev Food Sci Food Saf. 2009;8(2):31–43.
Thomas C. Oxygen radicals and the disease process. CRC Press; 1998.
Kaushik A, Jijta C, Kaushik JJ, Zeray R, Ambesajir A, Beyene L. FRAP (Ferric reducing ability of plasma) assay and effect of Diplazium esculentum (Retz) Sw. (a green vegetable of North India) on central nervous system; 2012.
Dudonne S, Vitrac X, Coutiere P, Woillez M, Merillon JM. Comparative study of antioxidant properties and total phenolic content of 30 plant extracts of industrial interest using DPPH, ABTS, FRAP, SOD, and ORAC assays. J Agric Food Chem. 2009;57(5):1768–74.
Antolovich M, Prenzler PD, Patsalides E, McDonald S, Robards K. Methods for testing antioxidant activity. Analyst. 2002;127(1):183–98.
Mates JM, Perez-Gomez C, Nunez de Castro I. Antioxidant enzymes and human diseases. Clin Biochem. 1999;32(8):595–603.
Gutteridge JM. Biological origin of free radicals, and mechanisms of antioxidant protection. Chem Biol Interact. 1994;91(2–3):133–40.
García-Andrade M, González-Laredo R, Rocha-Guzmán N, Gallegos-Infante J, Rosales-Castro M, Medina-Torres L. Mesquite leaves (Prosopis laevigata), a natural resource with antioxidant capacity and cardioprotection potential. Ind Crop Prod. 2013;44:336–42.
VanWagenen BC, Larsen R, Cardellina JH, Randazzo D, Lidert ZC, Swithenbank C. Ulosantoin, a potent insecticide from the sponge Ulosa ruetzleri. J Org Chem. 1993;58(2):335–7.
Brand-Williams W, Cuvelier ME, Berset C. Use of a free radical method to evaluate antioxidant activity. LWT Food Sci Technol. 1995;28(1):25–30.
Škerget M, Kotnik P, Hadolin M, Hraš AR, Simonič M, Knez Ž. Phenols, proanthocyanidins, flavones and flavonols in some plant materials and their antioxidant activities. Food Chem. 2005;89(2):191–8.
Meyer B, Ferrigni N, Putnam J, Jacobsen L, Nichols DJ, McLaughlin JL. Brine shrimp: a convenient general bioassay for active plant constituents. Planta Med. 1982;45(05):31–4.
Barry AL. The antimicrobic susceptibility test: principles and practices. Lippincott Williams & Wilkins; 1976.
Bauer AW, Kirby WM, Sherris JC, Turck M. Antibiotic susceptibility testing by a standardized single disk method. Am J Clin Pathol. 1966;45(4):493–6.
Shinde U, Phadke A, Nair A, Mungantiwar A, Dikshit V, Saraf M. Membrane stabilizing activity—a possible mechanism of action for the anti-inflammatory activity of Cedrus deodara wood oil. Fitoterapia. 1999;70(3):251–7.
Rock CL, Jacob RA, Bowen PE. Update on the biological characteristics of the antioxidant micronutrients: vitamin C, vitamin E, and the carotenoids. J Am Diet Assoc. 1996;96(7):693–702 quiz 3-4.
McCord JM. The evolution of free radicals and oxidative stress. Am J Med. 2000;108(8):652–9.
Rao AL, Bharani M, Pallavi V. Role of antioxidants and free radicals in health and disease. Adv Pharmacol Toxicol. 2006;7(1):29–38.
Blokhina O, Virolainen E, Fagerstedt KV. Antioxidants, oxidative damage and oxygen deprivation stress: a review. Ann Bot. 2003;91 Spec No:179–94.
Pandey KB, Rizvi SI. Plant polyphenols as dietary antioxidants in human health and disease. Oxidative Med Cell Longev. 2009;2(5):270–8.
Santos-Sánchez NF, Salas-Coronado R, Villanueva-Cañongo C, Hernández-Carlos B. Antioxidant compounds and their antioxidant mechanism. In: Antioxidants. IntechOpen; 2019.
Nguta J, Mbaria J, Gakuya D, Gathumbi P, Kabasa J, Kiama S. Biological screening of Kenyan medicinal plants using Artemia salina (Artemiidae). Pharmacologyonline. 2011;2:458–78.
Omale J, Okafor PN. Comparative antioxidant capacity, membrane stabilization, polyphenol composition and cytotoxicity of the leaf and stem of Cissus multistriata. Afr J Biotechnol. 2008;7(17).
Oyedapo O. Biological activity of Phyllanthus amarus extracts on Sprague-Dawley rats. Nig J Biochem Mol Biol. 2001;26:202–26.
El-Shabrawy O, El-Gindi O, Melek F, Abdel-Khalik S, Haggag M. Biological properties of saponin mixtures of Fagonia cretica and Fagonia mollis. Fitoterapia. 1997;68(3):219–22.
Laurence D. Ethics and law in clinical pharmacology. Br J Clin Pharmacol. 1989;27(6):715.
The authors are thankful to the Department of Pharmacy, International Islamic University Chittagong, Chittagong, Bangladesh, and the Department of Pharmacy, State University of Bangladesh, Dhaka, Bangladesh, for their research facilities and support.
This work was conducted with the individual funding of all the authors.
Department of Pharmacy, International Islamic University Chittagong, Chittagong-4318, Bangladesh
Mohammad Nazmul Islam, Md. Areeful Haque, Syed Mohammed Tareq, A. T. M. Mostafa Kamal, Md. Masudur Rahman, A S. M. Ali Reza, Kazi Ashfak Ahmed Chowdhury & Abu Montakim Tareq
Department of Pharmacy, State University of Bangladesh, Dhaka-1205, Bangladesh
Mohammad Nazmul Islam
Dhaka Medical College and Hospital, Dhaka, 1000, Bangladesh
Homyra Tasnim
Department of Pharmacy, Forman Christian College (A Chartered University), Lahore, Pakistan
Laiba Arshad
Drug and Herbal Research Centre, Faculty of Pharmacy, Universiti Kebangsaan Malaysia, 50300, Kuala Lumpur, Malaysia
Md. Areeful Haque
Syed Mohammed Tareq
A. T. M. Mostafa Kamal
Md. Masudur Rahman
A S. M. Ali Reza
Kazi Ashfak Ahmed Chowdhury
Abu Montakim Tareq
MNI, SMT, MMK, MRM, SMR, MAH and CKA together planned and designed the research. SMT arranged the facilities for the research and supervised it. MNI conducted the entire laboratory work. MNI and AMT contributed to the study design and interpreted the results, working on the statistical analysis. MNI, HT, AMT, SMR, LA and MAH participated in drafting the manuscript and thoroughly checked and revised it for necessary changes in format, grammar and English standard. All authors read and agreed on the final version of the manuscript.
Correspondence to Mohammad Nazmul Islam.
All authors have agreed to publish all materials belonging to this article.
The authors declare that they have no competing interests.
Islam, M.N., Tasnim, H., Arshad, L. et al. Stem extract of Albizia richardiana exhibits potent antioxidant, cytotoxic, antimicrobial, anti-inflammatory and thrombolytic effects through in vitro approach. Clin Phytosci 6, 60 (2020). https://doi.org/10.1186/s40816-020-00212-w
6.6: Centripetal Force
[ "article:topic", "authorname:openstax", "centripetal force", "ideal banking", "banked curve", "Coriolis force", "inertial force", "noninertial frame of reference", "license:ccby", "showtoc:no", "transcluded:yes", "source-phys-4001" ]
https://phys.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fphys.libretexts.org%2FCourses%2FJoliet_Junior_College%2FPhysics_201_-_Fall_2019%2FBook%253A_Physics_(Boundless)%2F6%253A_Applications_of_Newton%2F6.06%253A_Centripetal_Force
6: Applications of Newton
Explain the equation for centripetal acceleration
Apply Newton's second law to develop the equation for centripetal force
Use circular motion concepts in solving problems involving Newton's laws of motion
In Motion in Two and Three Dimensions, we examined the basic concepts of circular motion. An object undergoing circular motion, like one of the race cars shown at the beginning of this chapter, must be accelerating because it is changing the direction of its velocity. We proved that this centrally directed acceleration, called centripetal acceleration, is given by the formula
\[a_{c} = \frac{v^{2}}{r}\]
where v is the velocity of the object, directed along a tangent line to the curve at any instant. If we know the angular velocity \(\omega\), then we can use
\[a_{c} = r \omega^{2} \ldotp\]
Angular velocity gives the rate at which the object is turning through the curve, in units of rad/s. This acceleration acts along the radius of the curved path and is thus also referred to as a radial acceleration.
An acceleration must be produced by a force. Any force or combination of forces can cause a centripetal or radial acceleration. Just a few examples are the tension in the rope on a tether ball, the force of Earth's gravity on the Moon, friction between roller skates and a rink floor, a banked roadway's force on a car, and forces on the tube of a spinning centrifuge. Any net force causing uniform circular motion is called a centripetal force. The direction of a centripetal force is toward the center of curvature, the same as the direction of centripetal acceleration. According to Newton's second law of motion, net force is mass times acceleration: \(F_{net} = ma\). For uniform circular motion, the acceleration is the centripetal acceleration: \(a = a_{c}\). Thus, the magnitude of centripetal force \(F_{c}\) is
\[F_{c} = ma_{c} \ldotp\]
By substituting the expressions for centripetal acceleration ac (\(a_{c} = \frac{v^{2}}{r}; a_{c} = r \omega^{2}\)), we get two expressions for the centripetal force Fc in terms of mass, velocity, angular velocity, and radius of curvature:
\[F_{c} = m \frac{v^{2}}{r}; \quad F_{c} = mr\omega^{2} \ldotp \label{6.3}\]
You may use whichever expression for centripetal force is more convenient. Centripetal force \(\vec{F}_{c}\) is always perpendicular to the path and points to the center of curvature, because \(\vec{a}_{c}\) is perpendicular to the velocity and points to the center of curvature. Note that if you solve the first expression for r, you get
\[r = \frac{mv^{2}}{F_{c}} \ldotp\]
This implies that for a given mass and velocity, a large centripetal force causes a small radius of curvature—that is, a tight curve, as in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\): The frictional force supplies the centripetal force and is numerically equal to it. Centripetal force is perpendicular to velocity and causes uniform circular motion. The larger the Fc, the smaller the radius of curvature r and the sharper the curve. The second curve has the same v, but a larger Fc produces a smaller r′.
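For readers who want to check such relations numerically, here is a minimal Python sketch (the function names are our own) that evaluates both expressions and confirms they agree when \(v = r\omega\):

```python
def centripetal_force_v(m, v, r):
    """F_c = m v^2 / r, using the tangential speed v in m/s."""
    return m * v**2 / r

def centripetal_force_w(m, omega, r):
    """F_c = m r omega^2, using the angular velocity omega in rad/s."""
    return m * r * omega**2

m, r, v = 900.0, 500.0, 25.0             # kg, m, m/s (the values of Example 1 below)
omega = v / r                            # rad/s, since v = r * omega
print(centripetal_force_v(m, v, r))      # 1125.0 N
print(centripetal_force_w(m, omega, r))  # 1125.0 N, the same force
```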
Example \(\PageIndex{1}\): What Coefficient of Friction Do Cars Need on a Flat Curve?
Calculate the centripetal force exerted on a 900.0-kg car that negotiates a 500.0-m radius curve at 25.00 m/s.
Assuming an unbanked curve, find the minimum static coefficient of friction between the tires and the road, static friction being the force that keeps the car from slipping (Figure \(\PageIndex{2}\)).
Figure \(\PageIndex{2}\): This car on level ground is moving away and turning to the left. The centripetal force causing the car to turn in a circular path is due to friction between the tires and the road. A minimum coefficient of friction is needed, or the car will move in a larger-radius curve and leave the roadway.
We know that \(F_{c} = m \frac{v^{2}}{r}\). Thus $$F_{c} = m \frac{v^{2}}{r} = \frac{(900.0\; kg)(25.00\; m/s)^{2}}{(500.0\; m)} = 1125\; N \ldotp$$
Figure \(\PageIndex{2}\) shows the forces acting on the car on an unbanked (level ground) curve. Friction is to the left, keeping the car from slipping, and because it is the only horizontal force acting on the car, the friction is the centripetal force in this case. We know that the maximum static friction (at which the tires roll but do not slip) is \(\mu_{s}\) N, where \(\mu_{s}\) is the static coefficient of friction and N is the normal force. The normal force equals the car's weight on level ground, so N = mg. Thus the centripetal force in this situation is $$F_{c} = f = \mu_{s} N = \mu_{s} mg \ldotp$$Now we have a relationship between centripetal force and the coefficient of friction. Using the equation $$F_{c} = m \frac{v^{2}}{r} \ldotp$$we obtain $$m \frac{v^{2}}{r} = \mu_{s} mg \ldotp$$We solve this for \(\mu_{s}\), noting that mass cancels, and obtain $$\mu_{s} = \frac{v^{2}}{rg} \ldotp$$Substituting the knowns, $$\mu_{s} = \frac{(25.00\; m/s)^{2}}{(500.0\; m)(9.80\; m/s^{2})} = 0.13 \ldotp$$(Because coefficients of friction are approximate, the answer is given to only two digits.)
The coefficient of friction found in part (b) is much smaller than is typically found between tires and roads. The car still negotiates the curve if the coefficient is greater than 0.13, because static friction is a responsive force, able to assume any value up to but no more than \(\mu_{s}\)N. A higher coefficient would also allow the car to negotiate the curve at a higher speed, but if the coefficient of friction is less, the safe speed would be less than 25 m/s. Note that mass cancels, implying that, in this example, it does not matter how heavily loaded the car is to negotiate the turn. Mass cancels because friction is assumed proportional to the normal force, which in turn is proportional to mass. If the surface of the road were banked, the normal force would be less, as discussed next.
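Because the mass cancels, the result reduces to \(\mu_{s} = v^{2}/rg\), which is easy to reuse; a minimal Python sketch (the function name is our own):

```python
def min_static_mu(v, r, g=9.80):
    """Minimum static friction coefficient on an unbanked curve:
    from m v^2 / r = mu_s m g, the mass cancels and mu_s = v^2 / (r g)."""
    return v**2 / (r * g)

print(round(min_static_mu(25.00, 500.0), 2))  # 0.13, matching the example
```

The same function applies to the exercise below once the speed is converted from km/h to m/s.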
Exercise \(\PageIndex{1}\)
A car moving at 96.8 km/h travels around a circular curve of radius 182.9 m on a flat country road. What must be the minimum coefficient of static friction to keep the car from slipping?
Let us now consider banked curves, where the slope of the road helps you negotiate the curve (Figure \(\PageIndex{3}\)). The greater the angle \(\theta\), the faster you can take the curve. Race tracks for bikes as well as cars, for example, often have steeply banked curves. In an "ideally banked curve," the angle \(\theta\) is such that you can negotiate the curve at a certain speed without the aid of friction between the tires and the road. We will derive an expression for \(\theta\) for an ideally banked curve and consider an example related to it.
Figure \(\PageIndex{3}\): The car on this banked curve is moving away and turning to the left.
For ideal banking, the net external force equals the horizontal centripetal force in the absence of friction. The components of the normal force N in the horizontal and vertical directions must equal the centripetal force and the weight of the car, respectively. In cases in which forces are not parallel, it is most convenient to consider components along perpendicular axes—in this case, the vertical and horizontal directions.
Figure \(\PageIndex{3}\) shows a free-body diagram for a car on a frictionless banked curve. If the angle \(\theta\) is ideal for the speed and radius, then the net external force equals the necessary centripetal force. The only two external forces acting on the car are its weight \(\vec{w}\) and the normal force of the road \(\vec{N}\). (A frictionless surface can only exert a force perpendicular to the surface—that is, a normal force.) These two forces must add to give a net external force that is horizontal toward the center of curvature and has magnitude \(\frac{mv^{2}}{r}\). Because this is the crucial force and it is horizontal, we use a coordinate system with vertical and horizontal axes. Only the normal force has a horizontal component, so this must equal the centripetal force, that is,
\[N \sin \theta = \frac{mv^{2}}{r} \ldotp\]
Because the car does not leave the surface of the road, the net vertical force must be zero, meaning that the vertical components of the two external forces must be equal in magnitude and opposite in direction. From Figure \(\PageIndex{3}\), we see that the vertical component of the normal force is N cos \(\theta\), and the only other vertical force is the car's weight. These must be equal in magnitude; thus,
\[N \cos \theta = mg \ldotp\]
Now we can combine these two equations to eliminate N and get an expression for \(\theta\), as desired. Solving the second equation for N = \(\frac{mg}{(\cos \theta)}\) and substituting this into the first yields
\[\begin{split} mg \frac{\sin \theta}{\cos \theta} & = \frac{mv^{2}}{r} \\ mg \tan \theta & = \frac{mv^{2}}{r} \\ \tan \theta & = \frac{v^{2}}{rg} \ldotp \end{split}\]
Taking the inverse tangent gives
\[\theta = \tan^{-1} \left(\dfrac{v^{2}}{rg}\right) \ldotp \label{6.4}\]
This expression can be understood by considering how \(\theta\) depends on v and r. A large \(\theta\) is obtained for a large v and a small r. That is, roads must be steeply banked for high speeds and sharp curves. Friction helps, because it allows you to take the curve at greater or lower speed than if the curve were frictionless. Note that \(\theta\) does not depend on the mass of the vehicle.
Example \(\PageIndex{2}\): What Is the Ideal Speed to Take a Steeply Banked Tight Curve?
Curves on some test tracks and race courses, such as Daytona International Speedway in Florida, are very steeply banked. This banking, with the aid of tire friction and very stable car configurations, allows the curves to be taken at very high speed. To illustrate, calculate the speed at which a 100.0-m radius curve banked at 31.0° should be driven if the road were frictionless.
We first note that all terms in the expression for the ideal angle of a banked curve except for speed are known; thus, we need only rearrange it so that speed appears on the left-hand side and then substitute known quantities.
\[\tan \theta = \frac{v^{2}}{rg},\]
we get
\[v = \sqrt{rg \tan \theta} \ldotp\]
Noting that tan 31.0° = 0.601, we obtain

\[v = \sqrt{(100.0\; m)(9.80\; m/s^{2})(0.601)} = 24.3\; m/s \ldotp\]

This is just about 87 km/h, consistent with a steeply banked and rather sharp curve. Tire friction enables a vehicle to take the curve at significantly higher speeds.
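A minimal Python check of the two banked-curve relations (the function names are our own):

```python
import math

def ideal_speed(theta_deg, r, g=9.80):
    """v = sqrt(r g tan(theta)): the speed that needs no friction on a banked curve."""
    return math.sqrt(r * g * math.tan(math.radians(theta_deg)))

def ideal_bank_angle(v, r, g=9.80):
    """theta = arctan(v^2 / (r g)), returned in degrees (Equation 6.4)."""
    return math.degrees(math.atan(v**2 / (r * g)))

print(round(ideal_speed(31.0, 100.0), 1))       # 24.3 m/s, as in the example
print(round(ideal_bank_angle(24.3, 100.0), 1))  # ~31.1 degrees, recovering the angle
```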
Airplanes also make turns by banking. The lift force, due to the force of the air on the wing, acts at right angles to the wing. When the airplane banks, the pilot is obtaining greater lift than necessary for level flight. The vertical component of lift balances the airplane's weight, and the horizontal component accelerates the plane. The banking angle shown in Figure \(\PageIndex{4}\) is given by \(\theta\). We analyze the forces in the same way we treat the case of the car rounding a banked curve.
Figure \(\PageIndex{4}\): In a banked turn, the horizontal component of lift is unbalanced and accelerates the plane. The normal component of lift balances the plane's weight. The banking angle is given by \(\theta\). Compare the vector diagram with that shown in Figure 6.22.
What do taking off in a jet airplane, turning a corner in a car, riding a merry-go-round, and the circular motion of a tropical cyclone have in common? Each exhibits inertial forces—forces that merely seem to arise from motion, because the observer's frame of reference is accelerating or rotating. When taking off in a jet, most people would agree it feels as if you are being pushed back into the seat as the airplane accelerates down the runway. Yet a physicist would say that you tend to remain stationary while the seat pushes forward on you. An even more common experience occurs when you make a tight curve in your car—say, to the right (Figure \(\PageIndex{5}\)). You feel as if you are thrown (that is, forced) toward the left relative to the car. Again, a physicist would say that you are going in a straight line (recall Newton's first law) but the car moves to the right, not that you are experiencing a force from the left.
Figure \(\PageIndex{5}\): (a) The car driver feels herself forced to the left relative to the car when she makes a right turn. This is an inertial force arising from the use of the car as a frame of reference. (b) In Earth's frame of reference, the driver moves in a straight line, obeying Newton's first law, and the car moves to the right. There is no force to the left on the driver relative to Earth. Instead, there is a force to the right on the car to make it turn.
We can reconcile these points of view by examining the frames of reference used. Let us concentrate on people in a car. Passengers instinctively use the car as a frame of reference, whereas a physicist might use Earth. The physicist might make this choice because Earth is nearly an inertial frame of reference, in which all forces have an identifiable physical origin. In such a frame of reference, Newton's laws of motion take the form given in Newton's Laws of Motion. The car is a noninertial frame of reference because it is accelerated to the side. The force to the left sensed by car passengers is an inertial force having no physical origin; it is due purely to the inertia of the passenger, not to some physical cause such as tension, friction, or gravitation. The car, as well as the driver, is actually accelerating to the right.
A physicist will choose whatever reference frame is most convenient for the situation being analyzed. There is no problem to a physicist in including inertial forces and Newton's second law, as usual, if that is more convenient, for example, on a merry-go-round or on a rotating planet. Noninertial (accelerated) frames of reference are used when it is useful to do so. Different frames of reference must be considered in discussing the motion of an astronaut in a spacecraft traveling at speeds near the speed of light, as you will appreciate in the study of the special theory of relativity.
Let us now take a mental ride on a rapidly rotating playground merry-go-round (Figure \(\PageIndex{6}\)). You take the merry-go-round to be your frame of reference because you rotate together. When rotating in that noninertial frame of reference, you feel an inertial force that tends to throw you off; this is often referred to as a centrifugal force (not to be confused with centripetal force). Centrifugal force is a commonly used term, but it does not actually exist; in Earth's frame of reference, there is no force trying to throw you off. You must hang on tightly to counteract your inertia, which is what people commonly call the centrifugal force. You must hang on to make yourself go in a circle because otherwise you would go in a straight line, right off the merry-go-round, in keeping with Newton's first law. But the force you exert acts toward the center of the circle.
Figure \(\PageIndex{6}\): (a) A rider on a merry-go-round feels as if he is being thrown off. This inertial force is sometimes mistakenly called the centrifugal force in an effort to explain the rider's motion in the rotating frame of reference. (b) In an inertial frame of reference and according to Newton's laws, it is his inertia that carries him off (the unshaded rider has Fnet = 0 and heads in a straight line). A force, Fcentripetal, is needed to cause a circular path.
This inertial effect, carrying you away from the center of rotation if there is no centripetal force to cause circular motion, is put to good use in centrifuges (Figure \(\PageIndex{7}\)). A centrifuge spins a sample very rapidly, as mentioned earlier in this chapter. Viewed from the rotating frame of reference, the inertial force throws particles outward, hastening their sedimentation. The greater the angular velocity, the greater the centrifugal force. But what really happens is that the inertia of the particles carries them along a line tangent to the circle while the test tube is forced in a circular path by a centripetal force.
Figure \(\PageIndex{7}\): Centrifuges use inertia to perform their task. Particles in the fluid sediment settle out because their inertia carries them away from the center of rotation. The large angular velocity of the centrifuge quickens the sedimentation. Ultimately, the particles come into contact with the test tube walls, which then supply the centripetal force needed to make them move in a circle of constant radius.
Let us now consider what happens if something moves in a rotating frame of reference. For example, what if you slide a ball directly away from the center of the merry-go-round, as shown in Figure \(\PageIndex{8}\)? The ball follows a straight path relative to Earth (assuming negligible friction) and a path curved to the right on the merry-go-round's surface. A person standing next to the merry-go-round sees the ball moving straight and the merry-go-round rotating underneath it. In the merry-go-round's frame of reference, we explain the apparent curve to the right by using an inertial force, called the Coriolis force, which causes the ball to curve to the right. The Coriolis force can be used by anyone in that frame of reference to explain why objects follow curved paths and allows us to apply Newton's laws in noninertial frames of reference.
Figure \(\PageIndex{8}\): Looking down on the counterclockwise rotation of a merry-go-round, we see that a ball slid straight toward the edge follows a path curved to the right. The person slides the ball toward point B, starting at point A. Both points rotate to the shaded positions (A' and B') shown in the time that the ball follows the curved path in the rotating frame and a straight path in Earth's frame.
Up until now, we have considered Earth to be an inertial frame of reference with little or no worry about effects due to its rotation. Yet such effects do exist—in the rotation of weather systems, for example. Most consequences of Earth's rotation can be qualitatively understood by analogy with the merry-go-round. Viewed from above the North Pole, Earth rotates counterclockwise, as does the merry-go-round in Figure \(\PageIndex{8}\). As on the merry-go-round, any motion in Earth's Northern Hemisphere experiences a Coriolis force to the right. Just the opposite occurs in the Southern Hemisphere; there, the force is to the left. Because Earth's angular velocity is small, the Coriolis force is usually negligible, but for large-scale motions, such as wind patterns, it has substantial effects.
The Coriolis force causes hurricanes in the Northern Hemisphere to rotate in the counterclockwise direction, whereas tropical cyclones in the Southern Hemisphere rotate in the clockwise direction. (The terms hurricane, typhoon, and tropical storm are regionally specific names for cyclones, which are storm systems characterized by low pressure centers, strong winds, and heavy rains.) Figure \(\PageIndex{9}\) helps show how these rotations take place. Air flows toward any region of low pressure, and tropical cyclones contain particularly low pressures. Thus winds flow toward the center of a tropical cyclone or a low-pressure weather system at the surface. In the Northern Hemisphere, these inward winds are deflected to the right, as shown in the figure, producing a counterclockwise circulation at the surface for low-pressure zones of any type. Low pressure at the surface is associated with rising air, which also produces cooling and cloud formation, making low-pressure patterns quite visible from space. Conversely, wind circulation around high-pressure zones is clockwise in the Southern Hemisphere but is less visible because high pressure is associated with sinking air, producing clear skies.
Figure \(\PageIndex{9}\): (a) The counterclockwise rotation of this Northern Hemisphere hurricane is a major consequence of the Coriolis force. (b) Without the Coriolis force, air would flow straight into a low-pressure zone, such as that found in tropical cyclones. (c) The Coriolis force deflects the winds to the right, producing a counterclockwise rotation. (d) Wind flowing away from a high-pressure zone is also deflected to the right, producing a clockwise rotation. (e) The opposite direction of rotation is produced by the Coriolis force in the Southern Hemisphere, leading to tropical cyclones. (credit a and credit e: modifications of work by NASA)
The rotation of tropical cyclones and the path of a ball on a merry-go-round can just as well be explained by inertia and the rotation of the system underneath. When noninertial frames are used, inertial forces, such as the Coriolis force, must be invented to explain the curved path. There is no identifiable physical source for these inertial forces. In an inertial frame, inertia explains the path, and no force is found to be without an identifiable source. Either view allows us to describe nature, but a view in an inertial frame is the simplest in the sense that all forces have origins and explanations.
November 2011, 30(4): 1145-1159. doi: 10.3934/dcds.2011.30.1145
Towards the Chern-Simons-Higgs equation with finite energy
Hyungjin Huh
Department of Mathematics, Chung-Ang University, Seoul 156-756, South Korea
Received: October 2009; Revised: January 2011; Published: May 2011
Under the Coulomb gauge condition, the Chern-Simons-Higgs equations are formulated as a hyperbolic system coupled with elliptic equations. We consider a solution of the Chern-Simons-Higgs equations with finite energy and show how to obtain an $H^1$ solution with one exceptional term $\phi\partial_t A_0$, from which the model equations (63) are proposed.
Keywords: Wente's inequality, Null form, Wave-Sobolev space $\mathcal{H}^{s, \theta}$, Chern-Simons-Higgs, Coulomb gauge.
Mathematics Subject Classification: Primary: 35L15, 35L45; Secondary: 35Q4.
Citation: Hyungjin Huh. Towards the Chern-Simons-Higgs equation with finite energy. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1145-1159. doi: 10.3934/dcds.2011.30.1145
Brain Informatics
Pattern recognition of spectral entropy features for detection of alcoholic and control visual ERP's in multichannel EEGs
T. K. Padma Shri and N. Sriraam
Brain Informatics 2017, 4:61
https://doi.org/10.1007/s40708-017-0061-y
Received: 28 August 2016
Accepted: 9 January 2017
This paper presents a novel ranking method to select spectral entropy (SE) features that discriminate alcoholic and control visual event-related potentials (ERPs) in the gamma sub-band (30–55 Hz) derived from a 64-channel electroencephalogram (EEG) recording. The ranking is based on a t test statistic that rejects the null hypothesis that the group means of SE values in alcoholics and controls are identical. The SE features with high ranks are indicative of maximal separation between their group means. Various sizes of top-ranked feature subsets are evaluated by applying principal component analysis (PCA) and k-nearest neighbor (k-NN) classification. Even though ranking does not influence the performance of the classifier significantly when all 61 active channels are selected, the classification efficiency is directly proportional to the number of principal components (pcs). The effect of ranking and PCA on classification is predominantly observed with reduced feature subsets of (N = 25, 15) top-ranked features. Results indicate that for N = 25, the proposed ranking method improves the k-NN classification accuracy from 91 to 93.87% as the number of pcs increases from 5 to 25. With the same number of pcs, the k-NN classifier responds with accuracies of 84.42–91.54% with non-ranked features. Similarly, for N = 15 and the number of pcs varying from 5 to 15, ranking enhances k-NN detection accuracies from 88.9 to 93.08%, as compared to 86.75–91.96% without ranking. This shows that the detection accuracy is increased by 6.5 and 2.8%, respectively, for N = 25, whereas it is enhanced by 2.2 and 1%, respectively, for N = 15 in comparison with non-ranked features. In the proposed t test ranking method for feature selection, the pcs of only the top-ranked feature candidates take part in the classification process and hence provide better generalization.
Visual event-related potentials (visual ERP)
Electroencephalogram (EEG)
Spectral entropy (SE)
Gamma sub-band
Principal components (pcs)
k-Nearest neighbor (k-NN) classifier
Alcoholism is a chronic disease that is addictive and progressive in nature. Genetic, environmental, and psychosocial factors determine the extent to which alcoholism turns into alcohol abuse. Many studies have shown the ill effects of alcoholism on various organs of the body, especially on the brain [1–6]. Prefrontal dysfunction in alcoholics is well documented [7]. Alcohol consumption releases dopamine into the nucleus accumbens and prefrontal cortex, which is hypothesized to reinforce the drinking habit [8]. Studies have also revealed that alcohol consumption affects the non-alcoholic offspring of alcoholic parents [9, 10]. One of the simplest and most cost-effective tools to study the real-time effects of alcoholism is the EEG recorded on the scalp. When the EEG is recorded with an internal or external stimulus, ERPs exhibit cerebral activity that characterizes the spatiotemporal changes in the human brain over a period of time due to alcoholism [4, 5, 11]. These changes persist even after long-term abstinence from alcohol [12]. The dynamic processes of the brain, such as memory, attention, and cognitive processing [13, 14], are correlated with the synchronizations of phase-locked peaks generated by ERPs. These dynamic processes exhibit themselves at different frequencies known as the delta, theta, alpha, beta, and gamma waves. Changes in the characteristics of these waves due to alcoholism have also been reported by many studies [15–17].
Studies on visual ERPs of alcoholics [5, 13, 18–21] have reported a reduction in evoked gamma oscillations during the processing of a visual object recognition task. As these oscillations are correlated with cognitive processes such as selective attention and working memory, there is a need to identify specific regions of the brain that are largely influenced by alcohol consumption during these event-related oscillations. Therefore, identification of channels with high discrimination between alcoholic and control groups, using features extracted from visual ERPs of a multichannel EEG recording in the gamma band, needs to be investigated. Feature subset selection from 64-channel EEG recordings of alcoholics and controls has been reported [22–25] in the literature. All the above studies use the same alcohol EEG dataset as the current study, with visual ERPs of 30 alcoholic and 30 control subjects. In one of these studies [22], seven out of 61 EEG channels are selected based on genetic algorithm (GA) optimization to effectively discriminate alcoholics and controls, providing average classification accuracies of 94.3 and 81.8% with a multilayered perceptron-back propagation (MLP-BP) network and a fuzzy art map (FA) classifier, respectively. Studies in [23] have shown that the use of PCA for reducing the number of channels resulted in classification accuracies of 95.83, 94.06, 86.01, and 75.13% for 61, 16, 8, and 4 channels, respectively. The correlation between the selected optimal subsets of channels for alcoholics was explored in a study [24] that selected the channel subset based on the mean gamma band power. The vectors consisting of mean gamma band power from highly correlated channels were used to train a least-squares SVM classifier to discriminate alcoholics from their control counterparts. An average classification accuracy of 80% was reported among different pairs of retained active channels. Another study [25] reported using nonlinear parameters such as ApEn, sample entropy, Lyapunov exponent, and higher-order spectra for feature extraction and detection of alcoholics, with a maximum accuracy of 91.7% with a support vector machine (SVM) classifier. The channel selection in that study is based on a statistical t test, and only seven statistically significant channels were used for classification. In all these studies, even though the selection of a subset of channels is based on a certain criterion, the merit of each of the individual channels within the subset is not weighed in terms of its ability to separate alcoholics from controls. Also, the selected channels are not correlated with their positions on the scalp. Hence the main objective of the proposed research is not only to explore the possibility of identifying and retaining those channels which show more dissimilar activity in the visual ERPs of alcoholics and controls, but also to rank them in terms of their ability to discriminate the groups. The novelty of this study lies in ranking and reducing the dimensionality of multichannel EEG data using the t test and PCA for the localization of visual ERPs of the alcoholic and control groups. In the proposed study, the spectral entropies of the 61 active channels (the remaining three are reference electrodes) in the gamma sub-band constitute the feature vector for each subject and are used as features to classify alcoholics and controls. The extracted SE features are ranked using statistical t test analysis, and the dimensionality of the ranked feature vector is reduced by applying PCA.
The validity of the proposed ranking method in identifying channels of high significance (as far as the effect of alcohol on visual ERPs in these regions is concerned) is evaluated by applying the ranked and reduced feature set as input to a k-NN classifier. For a predetermined set of ranked channels (N = 61, 25, 15), PCA is performed and a subset of pcs is chosen from these ranked channels to evaluate the performance of the k-NN classifier for pattern recognition. This method is repeated for a set of non-ranked features of the same size; the order of the non-ranked channels is the one specified in the EEG dataset. The classifier results are cross-validated using hold-out cross-validation.
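A minimal Python sketch of this evaluation protocol with scikit-learn, assuming a hypothetical feature matrix X of shape (1200, 61) holding the SE values, a label vector y (1 = alcoholic, 0 = control), and a rank_order array produced by the t test ranking; the neighborhood size k is our own placeholder, since the paper does not state it here:

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def evaluate_subset(X, y, rank_order, n_channels, n_pcs, k=3):
    """Hold-out evaluation of k-NN on the top-ranked SE channels."""
    X_sub = X[:, rank_order[:n_channels]]        # keep the N top-ranked channels
    # 50/50 hold-out split, mirroring the 600 train / 600 test epochs
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_sub, y, test_size=0.5, stratify=y, random_state=0)
    pca = PCA(n_components=n_pcs).fit(X_tr)      # fit PCA on the training half only
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(pca.transform(X_tr), y_tr)
    return accuracy_score(y_te, clf.predict(pca.transform(X_te)))

# e.g. evaluate_subset(X, y, rank_order, n_channels=25, n_pcs=15)
```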
The rest of the paper is organized as follows: Sect. 2 presents the methodology and implementation. Section 3 presents results and discussions, followed by conclusions and future work in Sect. 4.
2 The proposed method
In our earlier work [26–29], good classification accuracy was obtained, but the channels which contribute to the discrimination of alcoholics were not identified. In our recently published work [21], a statistical measure called SEPCOR is used to rank the features, and the classification results are very impressive. In order to further evaluate statistically significant features, ranking and reduction of SE features are performed on the visual ERPs of alcoholics/controls using the t test and PCA in the proposed study, for studying the impact of alcohol on specific regions of the brain.
Figure 1 shows the proposed schematic flow.
Schematic of the proposed method
2.1 EEG data
The EEG dataset is from the open EEG database of the State University of New York Health Centre. These data arise from a large number of studies examining EEG correlates of genetic predisposition to alcoholism. It contains measurements from 64 electrodes (61 active channels + 3 reference channels) placed on the subjects' scalps, sampled at 256 Hz [30]. The database consists of 64-channel EEG recordings of ten alcoholic and ten control subjects while performing a visual object recognition task. The picture objects were chosen from the 1980 Snodgrass and Vanderwart picture set [31]. A single object S1 or two objects S1 and S2 were presented as visual stimuli to each of the subjects, both in the S1–S2 matched condition and in the S1–S2 unmatched condition. Ten trials were conducted in each condition to acquire the training set; the test set used the same ten alcoholic and ten control subjects, but with ten out-of-sample runs per subject per paradigm. This accounts for a total of 600 visual ERP patterns for the training data and 600 visual ERP patterns for the testing data, each of them lasting for one second.
2.2 Data preprocessing
Eye blink artifact removal and gamma sub-band extraction
Eye blinks produce a 100–200-µV artifact potential lasting about 250 ms [32]. Other sources of artifact include surface muscle activity (>30 Hz), body movement, etc. Independent component analysis (ICA) is performed on the entire EEG dataset to separate these artifacts and obtain artifact-free EEG epochs for further processing [21].
Studies have reported that early phase-locked gamma is evoked in selective attention and is larger in response to attended stimuli than to unattended stimuli, particularly in the frontal lobe [19, 20]. It is also observed that the visual feature binding process is synchronized with the gamma band [18, 32]. For this reason, a sixth-order elliptic band-pass filter is used to extract the gamma sub-band range of 30–55 Hz. The phase distortions caused by filtering are compensated by applying the filter in the forward/reverse direction.
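A minimal sketch of this step with SciPy; the passband ripple (0.5 dB) and stopband attenuation (40 dB) are assumed values, since the paper specifies only the order and the 30–55 Hz band, and we read "sixth-order" as the order of the final band-pass (an order-3 low-pass prototype):

```python
from scipy.signal import ellip, sosfiltfilt

fs = 256.0   # sampling rate of the EEG records, in Hz

# An order-3 prototype yields a 6th-order band-pass; the ripple and
# attenuation values (0.5 dB, 40 dB) are assumptions, not stated in the paper.
sos = ellip(3, 0.5, 40, [30.0, 55.0], btype='bandpass', fs=fs, output='sos')

def gamma_band(epoch):
    """Zero-phase (forward/reverse) filtering of an EEG epoch, which
    compensates the phase distortion introduced by the IIR filter."""
    return sosfiltfilt(sos, epoch, axis=-1)
```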
2.3 Spectral entropy feature extraction
Entropy estimation provides a measure of disorderliness, and hence yields important information regarding the complexity of the processes involved in a system. The greater the disorderliness (complexity), the higher the entropy value. Neurophysiological evidence shows that as the cortex becomes unconscious, there is a true decrease in entropy occurring at the neuronal level [33]. Recently, entropy estimation of EEG signals has been used to explain how the EEG signals change with time, either in the frequency or in the phase domain [34–37]. The change in information entropy within the EEG may reflect a real-time information transfer within the cortex.
Spectral entropy (SE) computation uses Shannon's entropy formula to represent the power spectral densities as probabilities. Accordingly, the normalized SE corresponding to the frequency range \([f_{1}, f_{2}]\) is calculated from 1-s epochs of the 61-channel visual ERPs of the alcoholic and control groups as follows:
$$\mathrm{SE}\left[f_{1}, f_{2}\right] = -\frac{1}{\log N\left[f_{1}, f_{2}\right]} \sum_{f_{i} = f_{1}}^{f_{2}} P_{n}(f_{i}) \log P_{n}(f_{i})$$
where \(P_{n}(f_{i})\) represents the probability of the \(i\)th frequency component and \(N[f_{1}, f_{2}]\) is the number of frequency components in the band. The step-by-step computation is explained in our earlier published work [21].
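A minimal NumPy sketch of this computation for a single 1-s epoch (the PSD here is a plain periodogram; the paper's exact spectral estimator may differ):

```python
import numpy as np

def spectral_entropy(epoch, fs=256.0, band=(30.0, 55.0)):
    """Normalized spectral entropy of an epoch over [f1, f2], in [0, 1]."""
    n = len(epoch)
    psd = np.abs(np.fft.rfft(epoch))**2             # one-sided power spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])  # keep gamma-band bins
    n_bins = int(mask.sum())                        # N[f1, f2] in Eq. (1)
    p = psd[mask] / psd[mask].sum()                 # probabilities P_n(f_i)
    p = p[p > 0]                                    # guard against log(0)
    return -np.sum(p * np.log(p)) / np.log(n_bins)
```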
Each 1-s, 61-channel EEG data epoch (61 channels × 256 samples/s) is represented by a 61-component SE vector (61 × 1), called the SE feature vector. The entire feature set dimension is equal to (61 × 1200), corresponding to a total of 1200 (test + train) visual ERPs comprising both alcoholics and controls. Figure 2 shows a sample plot of SE in all 61 channels for a single alcoholic and a single control subject, and Fig. 3 represents the SE plot for all 61 channels of the entire dataset consisting of 600 visual ERPs of alcoholics and 600 ERPs of controls. It is seen that in some channel locations (for example, 4 and 10), the gamma sub-band SE feature is more discriminative between groups, whereas in the channel locations between 4 and 10, there is no difference in the computed SE feature between the two groups. This suggests that only in certain locations on the scalp does the SE feature, being a measure of complexity, differ between the alcoholic and control groups, while in other channel locations the complexity measure is not distinctive. This can be observed in other channel locations as well, leading to a very important conclusion: the channels with highly discriminative SE measures between groups identify themselves as better candidates for the feature classification task. A detailed discussion of the same has been published by the same authors in [21].
Spectral entropy plot of a single alcoholic/control subject
Plot of spectral entropy features for the entire dataset [21]
2.4 Ranking by t test and PCA
Spectral entropy estimation is applied to extract features from the visual ERP responses of alcoholics and controls. Next, the SE values of each channel are ranked based on their ability to discriminate alcoholics from their control counterparts by using a hypothesis-testing (t test) method (Eq. 2). The suitability of the data for the t test is determined by fitting the entire 61-channel SE data to a normal distribution, and the goodness of fit is evaluated using the Kolmogorov–Smirnov test. Since the proposed classification problem has a single outcome variable, detecting either an alcoholic or a non-alcoholic, and the data are parametric, with the population parameter specified [38], the selection of an independent t test is justified for feature ranking. The Welch two-sample t test is considered for the analysis, which is defined as (2):
$$t = \frac{\bar{x}_{1} - \bar{x}_{2}}{\sqrt{\dfrac{s_{1}^{2}}{n_{1}} + \dfrac{s_{2}^{2}}{n_{2}}}}$$
where \(\bar{x}_{1}, \bar{x}_{2}\) are the sample means, \(s_{1}^{2}, s_{2}^{2}\) are the sample variances and \(n_{1}, n_{2}\) are the sample sizes of group 1 (alcoholic) and group 2 (control), respectively.
The t test statistic plot in Fig. 4 gives information regarding the difference in class means of the SE feature for each channel. The ranking of channels is based not only on the statistical significance (p value) but also on the difference in class means: the larger the difference in class means, the higher the ranking for that channel. For example, from Fig. 4 it is seen that channels such as 4, 12, 15, 19, and 30 possess t statistic values that maximize the class separation relative to other channels with lower statistics. As an example, Fig. 5 shows the bar plot of the SE mean for the first five channels of both the alcoholic and control groups. The fourth channel has the largest difference in SE mean and hence is chosen as a candidate with higher ranking. Similarly, the first channel scores the lowest ranking among these channels, as its difference in class means is zero. This procedure is performed on all 61 channels, and each channel is ranked based on the difference in class means. This ensures that it is not sufficient for a channel to have p value < 0.05; the difference in class means must also be large to achieve a high ranking. The channels are ranked based on the statistically significant values (p < 0.05) obtained by performing the t test on the 61 channels of both groups with a 95% confidence interval.
Fig. 4 Plot of t test statistic for alcoholic and control SE feature vectors
Fig. 5 Bar plot of SE mean values in arbitrarily selected channels for alcoholic/control subjects
The ranking method assigns the highest ranks to those channels in which the p values are less than 0.05. Higher ranks indicate that the visual ERP signals in these locations are strongly influenced by the consumption of alcohol as compared with the control group. The regions where the SE features score lower ranks are less discriminative between the two groups.
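A compact sketch of this ranking step is shown below (an illustration under our reading of the procedure, not the authors' code; the array names are hypothetical). It applies the Welch t test of Eq. (2) channel-wise and orders the significant channels by the magnitude of the t statistic, which for fixed sample sizes grows with the difference in class means.

```python
import numpy as np
from scipy.stats import ttest_ind

def rank_channels(se_alc, se_ctl, alpha=0.05):
    """Rank channels by the Welch two-sample t test.

    se_alc, se_ctl: SE matrices of shape (n_epochs, n_channels),
    one row per visual ERP epoch. Only channels with p < alpha are
    ranked; among those, a larger |t| earns a higher rank."""
    t, p = ttest_ind(se_alc, se_ctl, axis=0, equal_var=False)  # Welch version
    significant = np.flatnonzero(p < alpha)
    return significant[np.argsort(-np.abs(t[significant]))]

# ranked = rank_channels(se_alc, se_ctl)  # ranked[0] would be channel F8 here
```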
2.4.1 Principal component analysis (PCA)
In order to further reduce the dimensionality of the features and to study the effect of the pcs, PCA is applied to both ranked and non-ranked channels.
Implementation steps
The aim here is to transform an N × n matrix X into an N × d matrix Y, where X represents the original SE feature space of dimension N × n, N is the number of samples (1200), n is the number of channels (61), d is the dimension of the feature subspace (arbitrarily selected as 25 and 15), and Y is the PCA-transformed feature matrix. The steps involved are:
Subtract the mean of the SE values from each channel variable of the N × n matrix (shifting the data to the origin).
Compute the covariance matrix V, of n × n dimension, as below:
$$V = \frac{1}{N - 1}X^{T} X;$$
i.e.,
$$V_{i,j} = \frac{1}{N - 1}\mathop \sum \limits_{k = 1}^{N} X_{k,i} X_{k,j}$$
The diagonal elements of V represent the variance of variable i, whereas the off-diagonal elements represent the covariance between variables i and j.
Compute the eigenvectors of the covariance matrix V.
Identify the d eigenvectors that correspond to the d largest eigenvalues; these form the new basis for the transformed data space.
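The steps above can be summarized by the following sketch (illustrative; the original work was implemented in MATLAB, and this Python version only mirrors the listed steps):

```python
import numpy as np

def pca_transform(X, d):
    """Transform the N x n SE feature matrix X into the N x d matrix Y
    by projecting onto the d leading principal components."""
    Xc = X - X.mean(axis=0)                        # step 1: mean-centre each channel
    V = (Xc.T @ Xc) / (X.shape[0] - 1)             # step 2: n x n covariance matrix
    evals, evecs = np.linalg.eigh(V)               # step 3: eigendecomposition (V is symmetric)
    basis = evecs[:, np.argsort(evals)[::-1][:d]]  # step 4: d largest eigenvalues
    return Xc @ basis                              # Y: PCA-transformed features

# Y = pca_transform(SE_features, d=25)             # SE_features: 1200 x 61
```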
2.5 k-NN classification
In order to validate the results obtained by the proposed ranking method, the ranked and non-ranked SE feature vectors are applied as inputs to a k-NN classifier. The k-nearest neighbor (k-NN) algorithm is a supervised nonparametric classifier [39, 40] used to classify patterns based on the closest training pattern vectors in the feature space. The purpose of selecting the k-NN classifier over other robust classification methods such as SVM and MLP is to make a preliminary evaluation of the ranked features. The advantage of k-NN classification is that it is a nonparametric method in which no assumption is made about the distribution of the data, and that it is simple to implement. However, it is sensitive to the local structure of the dataset. To address this problem, a 50% holdout cross-validation is performed. Initially, all 61 ranked features are applied to the k-NN classifier, and the performance is evaluated in terms of classification accuracy and computational time. The classifier performance is then evaluated by applying predetermined sets of ranked and non-ranked features (arbitrarily selected as 61, 25, and 15) with different numbers of pcs (chosen as 61, 25, and 15). For each of these cases, the discriminatory behavior of the classifier is studied and compared. The k-NN classifier algorithm is implemented on the MATLAB platform.
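A sketch of the 50% holdout evaluation is given below (illustrative only; the number of neighbours k is an assumption, as this passage does not fix it):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def knn_holdout_accuracy(Y, labels, k=3, seed=0):
    """50% holdout validation of a k-NN classifier on the
    (PCA-reduced) SE features."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        Y, labels, test_size=0.5, stratify=labels, random_state=seed)
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)  # classification accuracy on the held-out half

# acc = knn_holdout_accuracy(Y, labels)  # labels: 0 = control, 1 = alcoholic
```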
Gamma deficits manifest themselves as cognitive deficits in the selective attention and working memory of alcoholics [19, 20]. Deficits in gamma band power in response to target stimuli in alcoholics, particularly in the frontal lobe, are well documented in numerous studies [5, 21].
The SE features plotted for the entire dataset (Fig. 3), consisting of alcoholic and control subjects, indicate apparent differences in spectral entropy between alcoholics and controls in some channels. It is seen that the alcoholic SE values exceed those of controls in many scalp locations, indicating more complexity (disorderliness) in these regions. Thus the SE values reflect a measure of complexity in specific regions of the brain, directly correlating with the differences in cognitive and information processing between the two groups. To identify those regions in which alcoholics have specific cognitive and working-memory deficits, ranking of features is performed based on statistical (t test) hypothesis testing. The criterion for ranking the 61-channel SE coefficients is a two-sample t test performed on both groups; the resulting statistics are plotted in Fig. 4. The t test values are used to test the null hypothesis that the group means are identical. For brevity, the t test statistics and the electrode positions associated with the best five ranked feature indices are shown in Table 1. It can be observed that channel number 4 (F8), corresponding to the right frontal position of the brain, is assigned the highest rank and has the largest t test statistic, indicating the maximum difference in the mean SE features of the two groups. The right occipital region of the brain is assigned the second highest ranking, as indicated by the t test statistic. This result is clinically important as it reflects the impact of alcohol on the visual ERP signal of an alcoholic subject performing a visual working-memory task. Figure 6 shows the specific electrode positions on the scalp associated with the best five ranks.
Table 1 Electrode positions of the first five ranked channels and their corresponding t test statistics (ranked channel index, t test statistic, electrode position):
Rank 1: Right frontal (F8)
Rank 2: Right occipital (O2)
Rank 3: Left temporal
Rank 4: Right intermediate region between fronto-parietal and frontal
Rank 5: Left fronto-temporal
Fig. 6 Active electrode positions (in red) with the first five ranks (color figure online)
The number of ranked/non-ranked channels (N) is chosen arbitrarily as 61 (all), 25, and 15. As far as the pcs are concerned, the maximum number is limited by the value of N; i.e., if N = 25, the maximum number of pcs is 25, and below that arbitrarily chosen numbers of 15 and 5 are considered. The result plots show a smooth interpolation between the arbitrarily selected numbers of pcs for both the ranked and non-ranked cases.
Figures 7, 8, and 9 show the effect of the number of pcs on the classification accuracy, computation time, and receiver operating characteristic (ROC) curve for both the ranked and non-ranked cases, with N = 25. In this case, the proposed ranking method improves the k-NN classification accuracy from 91 to 93.87% as the number of pcs increases from 5 to 25; with the same numbers of pcs, the k-NN classifier yields accuracies of 84.42–91.54% with non-ranked features. Similarly, for N = 15 and the number of pcs varying from 5 to 15, ranking enhances the k-NN detection accuracy from 88.9 to 93.08%, as compared with 86.75–91.96% without ranking (refer to Table 2). Any number of pcs ≥ 15 results in an improvement of approximately 1%. With 5 and 15 pcs, the detection accuracy increases by 6.5 and 2.8%, respectively, for N = 25, and by 2.2 and 1%, respectively, for N = 15, in comparison with non-ranked features. Also, the minimum number of pcs required to achieve a classification accuracy above 90% is 15 and 11 for N = 25 and N = 15, respectively. The ranking method preserves its superiority for numbers of pcs greater than 5 in both the N = 15 and N = 25 cases. The PCA-reduced ranked set represents an effective subset of statistically significant features to be applied to the k-NN classifier for the evaluation of ranked features.
Fig. 7 k-NN classifier performance for alcoholic data
Fig. 8 Computation time using k-NN classifier for alcoholic data
Fig. 9 ROC curve using k-NN classifier for alcoholic data
Table 2 Effect of the number of pcs on k-NN efficiency (%) and computation time (s) for ranked and non-ranked features
Figure 7 shows that the classification accuracy improves with an increase in the number of pcs for ranked features compared with their non-ranked counterparts. This indicates that the pcs of ranked channels carry more information for the classification of alcoholics than those of the non-ranked set of channels. Figure 8 shows the computation time required for classification in both the ranked and non-ranked cases as a function of the number of pcs. As can be seen, ranking does not impose any limitation on computation time. Figure 9 shows the receiver operating characteristic (ROC) of the classifier for both ranked and non-ranked channels. A good ROC curve always shifts toward the top-left corner, ensuring more true-positive instances than false-positive ones. As expected, the ranked case has its ROC pushed further toward the top-left corner than the non-ranked case.
A channel with the first rank is obviously the best channel, as it possesses maximum information regarding the discrimination between alcoholics and controls. In the proposed study, the F8 channel is seen to be the best. This result confirms the findings in the literature that, during target stimuli, there is a gamma band power deficit in alcoholics, especially in the frontal lobe. Interestingly, the second rank is achieved by the O2 channel (associated with the occipital region, with the visual cortex underneath). This result correlates with the visual object recognition task used as the stimulus while recording the EEG in the database under consideration. These two results are claimed to be the important findings of this study. The accuracy and the computation time undoubtedly increase with the number of pcs. The pcs of ranked channels carry better information for classification than their non-ranked counterparts.
All the classification accuracies shown are with respect to 50% holdout cross-validation. The holdout validation method is simple to perform and ensures fast computation. Irrespective of ranking or non-ranking of features, the computation time remains the same, showing that there is no additional computational overhead in classifying ranked channels. Because of this, the overall CPU time can be greatly reduced when handling large EEG datasets. The computation time reported comprises the total processor time required for ranking the channels, dimensionality reduction using PCA, and classification. In our earlier studies [26–29], other robust classifiers such as MLP, SVM, and probabilistic neural network (PNN) were used for the classification of SE and parametric features extracted from the gamma band visual ERPs of the same alcoholic/control EEG dataset. Even though the PNN classifier responds with excellent classification accuracy close to 100%, it does not generalize as well as other classifiers, and its execution time is proportional to the size of the training set [41]. Because of this, it requires large memory and a more representative training set. The current study uses a supervised k-NN classifier for a preliminary assessment of the ranked features. Since artifact-free datasets are used in the proposed study, the results obtained here are claimed to be more precise and accurate.
The results obtained in the proposed study are clinically important with respect to the impact of alcohol on the visual ERPs of alcoholics and controls. The highly ranked channels are located in the frontal, fronto-parietal, temporal, and occipital regions of the brain. In particular, the perception and cognitive processing of a visual stimulus is known to evoke potentials in the fronto-parietal and occipital regions of the brain [5]. The results reflect the discriminatory nature of behavioral sensory control, attention, memory, emotion, and vision in alcoholics with respect to their control counterparts. Interestingly, the channel associated with the occipital region is ranked second (Table 1), indicating the influence of alcohol on the visual ERP signal produced while performing an object recognition task. Further investigation may be beneficial in studying these aspects in alcoholic patients and in finding the underlying neural mechanisms associated with alcoholism and alcohol dependence.
A comparison of the proposed method with previous studies (on the same EEG dataset) in the literature is shown in Table 3. Even though the accuracies and the number of reduced PCA features in this study may not be as impressive as those in previous studies, especially that published by the same authors [21], the results are useful in understanding the contribution of the pcs of ranked channels to enhancing class separation, and hence help in localizing the effects of alcohol on different regions of the brain. This assists in identifying regions of the brain in which the neuronal activities are strongly influenced by alcoholism and alcohol dependence, leading to cognitive deficits.
Table 3 Comparison of the proposed method with previous studies using the same EEG dataset (columns: feature selection method, average classification accuracy, number of selected channels, and average computation time in s)

Existing methods:
Spectral ratios of δ to γ band (7 spectral ratios) + GA + NN + FAM classifiers [22] (train + 200 test vectors, classification time only)
γ sub-band power + PCA + k-NN classifier [23] (computation time not discussed)
Mean γ power and correlation coefficient measure between channels + SVM classifier [24]
Nonlinear feature extraction (Hurst, Lyapunov exponent, higher-order spectra, ApEn, SaEn) + SVM classifier [25]
Spectral entropy features + SEPCOR + k-NN + MLP classifiers [21], reported per correlation threshold: classification accuracy (k-NN, MLP), number of SEPCOR feature vectors, and computation time in s (k-NN, MLP)

Proposed method: spectral entropy features with t test ranking + PCA + k-NN + MLP classifier, with no. of pcs = 25, reported for k-NN
In previous studies involving the same EEG dataset, the eye blink artifacts were rejected online, causing a loss of information. In this study, artifact-free EEG datasets are used, in which the artifacts are removed using ICA. This results in more precise classification results, as only artifact-free EEG data epochs are presented to the proposed algorithm. In our previous studies [26–29], the SE feature extraction was implemented on an unprocessed dataset with motion and muscle artifacts, and EEG epochs containing eye blink artifacts were rejected, leading to loss of information; in our work published later [21] and in the proposed study, artifact-free EEG alcoholic/control datasets are used.
This paper proposes a robust method to statistically rank the SE features of a 64-channel EEG recording for the identification of visual ERPs produced in the brain that are highly discriminative between the alcoholic and control groups. The method uses SE features computed on gamma sub-band visual ERPs of a 64-channel alcoholic/control EEG recording. The proposed statistical t test ranking method uniquely identifies channels with maximal separability between class means. Further evaluation of the top-ranked features is done by applying PCA to subsets of various sizes. As the number of pcs is increased, the classification accuracy improves with ranking. Ranking and reducing the features allows only the best features to be used for classification and hence may provide better generalization.
The effect of the pcs on the performance of the k-NN classifier is exploited in the proposed study using ranking and non-ranking procedures. Previous studies have explored several feature selection methods in the gamma sub-band range for the identification of alcoholics using the same database. However, the effectiveness of features in identifying channels with maximum class separation has not been explored in these studies. The proposed method weighs each feature in terms of its capability to maximize the separation between class means. Also, the use of ICA to separate cranial muscle activity (>30 Hz) and motion artifacts results in a more valid EEG dataset for processing.
The k-NN classifier takes almost the same amount of computation time irrespective of ranked or non-ranked channels. The results obtained are clinically significant, as the frontal, temporal, and occipital regions of the brain score higher ranks in terms of the SE information discriminating alcoholics from controls. In particular, the second rank associated with the occipital region directly correlates with the visual object recognition task performed while recording the EEG data under study. These results may also help in understanding the underlying cortical functions of the selected (ranked) regions of the brain in alcoholic patients, and may help in reducing the number of channels required in EEG recordings of alcoholics. Future work lies in validating these results on a different alcoholic EEG database. The proposed ranking method may also be applied to other time- and/or frequency-domain features and evaluated using other robust classifiers such as SVM and MLP.
Finally, the proposed study strongly suggests that the pcs of the top-ranked channels directly correlate with those regions of the brain eliciting reduced responses in the gamma range, causing cognitive and memory deficits in alcoholics. This may help in exploring the impact of alcohol on visual ERPs.
We thank Prof. Henri Begleiter of the Neurodynamics Laboratory at the State University of New York Health Centre at Brooklyn, USA, for sharing the EEG database in the public domain. We would also like to thank Dr. Vivek Benegal, Professor, NIMHANS, Bangalore, India, for his invaluable suggestions.
We hereby declare that we have no conflict of interest.
Department of Electronics and Communication, Manipal Institute of Technology, Manipal University, Manipal, Karnataka, 576104, India
Department of Medical Electronics, M.S. Ramaiah Institute of Technology (An Autonomous Institute, Affiliated to Visvesvaraya Technological University), Bangalore, Karnataka, 560054, India
References

1. Pfefferbaum A, Sullivan EV, Rosenbloom MJ, Shear PK, Mathalon DH, Lim KO (1993) Increase in brain cerebrospinal fluid volume is greater in older than in younger alcoholic patients: a replication study and CT/MRI comparison. Psychiatry Res Neuroimaging 50:257–274
2. Chen ACH, Porjesz B, Rangaswamy M, Kamarajan C, Tang YQ, Jones KA (2007) Reduced frontal lobe activity in subjects with high impulsivity and alcoholism. Alcohol Clin Exp Res 31:156–165
3. Butterworth RF (2003) Hepatic encephalopathy—a serious complication of alcoholic liver disease. Alcohol Res Health 27:143–145
4. Campanella S, Petit G, Maurage P, Kornreich C, Verbanck P, Noël X (2009) Chronic alcoholism: insights from neurophysiology. J Clin Neurophysiol 39:191–207
5. Porjesz B, Begleiter H (2003) Alcoholism and human electrophysiology. Alcohol Res Health 27:153–160
6. Courtney KE, Polich J (2010) Binge drinking effects on EEG in young adult humans. Int J Environ Res Public Health 7:2325–2336
7. Padmanabhapillai A, Tang YQ, Ranganathan M, Rangaswamy M, Jones KA, Chorlian DB (2006) Evoked gamma band response in male adolescent subjects at high risk for alcoholism during a visual oddball task. Psychophysiology 62:262–271
8. Wackernah RC, Minnick MJ, Clapp P (2014) Alcohol use disorder: pathophysiology, effects, and pharmacologic options for treatment. Int J Subst Abuse Rehabil 5:1–12
9. Rangaswamy M, Jones KA, Porjesz B, Chorlian DB, Padmanabhapillai A, Kamarajan C (2007) Delta and theta oscillations as risk markers in adolescent offspring of alcoholics. Int J Psychol Physiol 63(1):3–15
10. Kamarajan C, Porjesz B, Jones K, Chorlian D, Padmanabhapillai A, Rangaswamy M (2006) Event related oscillations in offspring of alcoholics: neurocognitive disinhibition as a risk for alcoholism. Biol Psychiatry 59:625–634
11. Wong DF, Maini A, Rousset OG, Brasíc JR (2003) Positron emission tomography—a tool for identifying the effects of alcohol dependence on the brain. Alcohol Res Health 27:161–173
12. Gansler DA et al (2000) Hypoperfusion of inferior frontal brain regions in abstinent alcoholics: a pilot SPECT study. J Stud Alcohol 61:32–37
13. Başar E, Guntekin E (2008) A review of brain oscillations in cognitive disorders and the role of neurotransmitters. Brain Res 1235:172–193
14. Varner J, Rohrbaugh JW, Stapleton JM, Zubovic EA, Eckardt MJ (1990) Attention deficits in alcoholic brain syndrome. In: Annual IEEE international conference on Engineering in Medicine and Biology Society, vol 12, no 2
15. Rangaswamy M et al (2007) Delta and theta oscillations as risk markers in adolescent offspring of alcoholics. Int J Psychophysiol 63:3–15
16. Andrew C et al (2010) Induced theta oscillations as biomarkers for alcoholism. Clin Neurophysiol 121(3):350–358
17. Enoch MA et al (1999) Association of low-voltage alpha EEG with a subtype of alcohol use disorders. Alcohol Clin Exp Res 23:1312–1319
18. Basar E (2004) Memory and brain dynamics: oscillations integrating attention, perception, learning & memory. CRC Press, Boca Raton
19. Basar E, Karakas S, Schurmann M (1999) Are cognitive processes manifested in event-related gamma, alpha, theta and delta oscillations in the EEG? Neurosci Lett 259:165–168
20. Yordanova J, Banaschewski T, Kolev V (2001) Abnormal early stages of task stimulus processing in children with attention-deficit hyperactivity disorder: evidence from event-related gamma oscillations. Clin Neurophysiol 112:1096–1108
21. Padma Shri TK, Sriraam N (2016) Spectral entropy feature subset selection using SEPCOR to detect alcoholic impact on gamma sub band visual event related potentials of multichannel EEG. Appl Soft Comput 46:441–451
22. Palaniappan R, Raveendran P (2002) Using genetic algorithm to identify the discriminatory subset of multi-channel spectral bands for visual response. Appl Soft Comput 2:48–60
23. Ong K-M, Thung K-H, Wee C-Y, Parameswaran R. Selection of a subset of EEG channels using PCA to classify alcoholics and non-alcoholics. In: International conference on engineering in medicine and biology, pp 4195–4198
24. Shooshtari MA, Setarehdan SK (2009) Selection of optimal EEG channels for classification of signals correlated with alcohol abusers. In: Proceedings of the IEEE 10th international conference on signal processing, pp 1–4
25. Rajendra Acharya U, Vinitha Sree S, Chattopadhyay S, Suri J (2012) Automated diagnosis of normal and alcoholic EEG signals. Int J Neural Syst 22:1–11
26. Padma Shri TK, Sriraam N (2012) Performance evaluation of classifiers for detection of alcoholics using electroencephalograms (EEG). J Med Imaging Health Inform 2:289–295
27. Padma Shri TK, Sriraam N (2012) EEG based detection of alcoholics using spectral entropy with neural network classifiers. In: International conference on biomedical engineering (ICoBE), Penang, Malaysia, 27–28 Feb 2012
28. Padma Shri TK, Sriraam N (2012) EEG based detection of alcoholics: a selective review. Int J Biomed Clin Eng 1:59–76
29. Padma Shri TK, Sriraam N (2012) Statistical analysis of spectral entropy features for the detection of alcoholics based on electroencephalogram (EEG) signals. Int J Biomed Clin Eng 1(2):34–41
30. Zhang XL, Begleiter H, Porjesz B, Wang W, Litke A (1995) Event related potentials during object recognition tasks. Brain Res Bull 38(6):531–538
31. Snodgrass JG, Vanderwart M (1980) A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. J Exp Psychol Hum Learn Memory 6(2):174–215
32. Palaniappan R (2005) Discrimination of alcoholic subjects using second order autoregressive modeling of brain signals evoked during visual stimulus perception. World Acad Sci Eng Technol 12:640–645
33. Steyn-Ross ML (1999) Theoretical electroencephalogram stationary spectrum for a white-noise-driven cortex: evidence for a general anesthetic-induced phase transition. Phys Rev E 60:7299–7311
34. Sleigh JW, Olofsen E, Dahan A, de Goede J, Steyn-Ross A (2001) Entropies of the EEG: the effects of general anesthesia. In: 5th conference on memory, anesthesia and consciousness, New York
35. Vahaplar A, Cengiz Çelikoğlu C, Özgören M (2011) Entropy in dichotic listening EEG recordings. Math Comput Appl 16(1):43–52
36. Yordanova J, Kolev V, Rosso OA, Schürmann M, Sakowitz OW, Özgören M, Başar E (2002) Wavelet entropy analysis of event-related potentials indicates modality-independent theta dominance. J Neurosci Methods 117:99–109
37. Viertiö-Oja H, Maja V, Särkelä M, Talja P, Tenkanen N, Tolvanen-Laakso H, Paloheimo M, Vakkuri A, Yli-Hankala A, Meriläinen P (2004) Description of the Entropy algorithm as applied in the Datex-Ohmeda S/5 Entropy Module. Acta Anaesthesiol Scand 48:154–161
38. Duncan L (2012) Statistical tests-1. Presentations from Harvard School of Public Health. rmhs.ku.edu.tr
39. http://ocw.mit.edu/courses/electrical/MIT6_034F10_tutorial
40. http://www.cs.utah.edu/~piyush/teaching
41. Specht DF (1990) Probabilistic neural networks. Neural Netw 3:109–118
Non-stationary phase of the MALA algorithm
Juan Kuntz, Michela Ottobre and Andrew M. Stuart

Stochastics and Partial Differential Equations: Analysis and Computations, volume 6, pages 446–499 (2018)
The Metropolis-Adjusted Langevin Algorithm (MALA) is a Markov Chain Monte Carlo method which creates a Markov chain reversible with respect to a given target distribution, \(\pi ^N\), with Lebesgue density on \({\mathbb {R}}^N\); it can hence be used to approximately sample the target distribution. When the dimension N is large a key question is to determine the computational cost of the algorithm as a function of N. The measure of efficiency that we consider in this paper is the expected squared jumping distance (ESJD), introduced in Roberts et al. (Ann Appl Probab 7(1):110–120, 1997). To determine how the cost of the algorithm (in terms of ESJD) increases with dimension N, we adopt the widely used approach of deriving a diffusion limit for the Markov chain produced by the MALA algorithm. We study this problem for a class of target measures which is not in product form and we address the situation of practical relevance in which the algorithm is started out of stationarity. We thereby significantly extend previous works which consider either measures of product form, when the Markov chain is started out of stationarity, or non-product measures (defined via a density with respect to a Gaussian), when the Markov chain is started in stationarity. In order to work in this non-stationary and non-product setting, significant new analysis is required. In particular, our diffusion limit comprises a stochastic PDE coupled to a scalar ordinary differential equation which gives a measure of how far from stationarity the process is. The family of non-product target measures that we consider in this paper are found from discretization of a measure on an infinite dimensional Hilbert space; the discretised measure is defined by its density with respect to a Gaussian random field. The results of this paper demonstrate that, in the non-stationary regime, the cost of the algorithm is of \({{\mathcal {O}}}(N^{1/2})\) in contrast to the stationary regime, where it is of \({{\mathcal {O}}}(N^{1/3})\).
Metropolis–Hastings algorithms are Markov Chain Monte Carlo (MCMC) methods used to sample from a given probability measure, referred to as the target measure. The basic mechanism consists of employing a proposal transition density q(x, y) in order to produce a reversible Markov chain \(\{x^k\}_{k=0}^{\infty }\) for which the target measure \(\pi \) is invariant [11]. At step k of the chain, a proposal move \(y^{k}\) is generated by using q(x, y), i.e. \(y^{k} \sim q(x^k, \cdot )\). Then such a move is accepted with probability \(\alpha (x^k, y^k)\):
$$\begin{aligned} \alpha \big (x^k,y^k\big )= \min \left\{ 1, \frac{\pi \big (y^k\big ) q\big (y^k,x^k\big )}{\pi \big (x^k\big ) q\big (x^k,y^k\big )} \right\} . \end{aligned}$$
The computational cost of this algorithm when the state space has high dimension N is of practical interest in many applications. The measure of computational cost considered in this paper is the expected squared jumping distance, introduced in [19] and related works. Roughly speaking [we will be more precise about this in Sect. 1.2 below, see the comments before (1.8)], if the size of the proposal moves is too large, i.e. if we propose moves which are too far away from the current position, then such moves tend to be frequently rejected; on the other hand, if the algorithm proposes moves which are too close to the current position, then such moves will most likely be accepted, but the chain will not have moved very far. In either extreme case the chain tends to get stuck and exhibits slow mixing, increasingly so as the dimension N of the state space increases. It is therefore clear that one needs to strike a balance between these two opposite scenarios; in particular, the optimal size of the proposed moves (i.e., the proposal variance) will depend on N. If the proposal variance scales with N like \(N^{-\zeta }\), for some \(\zeta >0\), then we say that the cost of the algorithm, in terms of ESJD, is of the order \(N^{\zeta }\).
A widely used approach to tackle this problem is to study diffusion limits for the algorithm. Indeed the scaling used to obtain a well defined diffusion limit corresponds to the optimal scaling of the proposal variance (see Remark 1.1). This problem was first studied in [19], for the Random Walk Metropolis algorithm (RWM); in that work it is assumed that the algorithm is started in stationarity and that the target measure is in product form. In the case of the MALA algorithm, the same problem was considered in [20, 21], again in the stationary regime and for product measures. In this setting, the cost of RWM has been shown to be \({{\mathcal {O}}}(N)\), while the cost of MALA is \({{\mathcal {O}}}(N^{\frac{1}{3}})\). The same \({{\mathcal {O}}}(N^{\frac{1}{3}})\) scaling for MALA, in the stationary regime, was later obtained in the setting of non-product measures defined via density with respect to a Gaussian random field [17]. In the paper [6] extensions of these results to non-stationary initializations were considered, however only for Gaussian targets. For Gaussian targets, RWM was shown to scale the same in and out of stationarity, whilst MALA scales like \({{\mathcal {O}}}(N^{\frac{1}{2}})\) out of stationarity. In [12, 13] the RWM and MALA algorithms were studied out of stationarity for quite general product measures, and the RWM method was again shown to scale the same in and out of stationarity. For MALA the appropriate scaling was shown to differ in and out of stationarity and, crucially, the scaling out of stationarity was shown to depend on a certain moment of the potential defining the product measure. In this paper we contribute further understanding of the MALA algorithm when initialized out of stationarity by considering non-product measures defined via density with respect to a Gaussian random field. Considering such a class of measures has proved fruitful, see e.g. [15, 17]. Also relevant to this strand of literature is the work [5].
In this paper our primary contribution is the study of diffusion limits for the MALA algorithm, out of stationarity, in the setting of general non-product measures defined via density with respect to a Gaussian random field. Significant new analysis is needed for this problem because the work of [17] relies heavily on stationarity in analyzing the acceptance probability, whilst the work of [13] uses propagation of chaos techniques, unsuitable for non-product settings.
The challenging diffusion limit obtained in this paper is relevant both to the picture just described and, more generally, to the widespread practical use of the MALA algorithm. The understanding we obtain about the MALA algorithm when applied to realistic non-product targets is one of the main motivations for the analysis that we undertake in this paper. The diffusion limit we find is given by an SPDE coupled to a one-dimensional ODE. The evolution of such an ODE can be taken as an indicator of how close the chain is to stationarity (see Remark 1.1 for more details). The scaling adopted to obtain such a diffusion limit shows that the cost of the algorithm is of order \(N^{1/2}\) in the non-stationary regime, as opposed to the stationary phase, where the cost is of order \(N^{1/3}\). It is important to recognize that, for measures absolutely continuous with respect to a Gaussian random field, algorithms exist which require \({{\mathcal {O}}}(1)\) steps in and out of stationarity; see [7] for a review. Such methods were suggested by Radford Neal in [16], and developed by Alex Beskos for conditioned stochastic differential equations in [4], building on the general formulation of Metropolis–Hastings methods in [23]; these methods are analyzed from the point of view of diffusion limits in [18]. It thus remains open and interesting to study the MALA algorithm out of stationarity for non-product measures which are not defined via density with respect to a Gaussian random field; the results in [12], however, demonstrate the substantial technical barriers to doing so. An interesting starting point for such work might be the study of non-i.i.d. product measures as pioneered by Bédard [2, 3].
Setting and the main result
Let (\({\mathcal {H}}, \langle \cdot , \cdot \rangle , \Vert \cdot \Vert \)) be an infinite dimensional separable Hilbert space and consider the measure \(\pi \) on \({\mathcal {H}}\), defined as follows:
$$\begin{aligned} \frac{d\pi }{d\pi _0} \propto \exp ({-\varPsi }), \qquad \pi _0:={\mathcal {N}}(0,{\mathcal {C}}). \end{aligned}$$
That is, \(\pi \) is absolutely continuous with respect to a Gaussian measure \(\pi _0\) with mean zero and covariance operator \({\mathcal {C}}\), and \(\varPsi \) is a real-valued functional with domain \({\tilde{{\mathcal {H}}}} \subseteq {\mathcal {H}}\), \(\varPsi : {\tilde{{\mathcal {H}}}}\rightarrow {\mathbb {R}}\). Measures of the form (1.2) naturally arise in Bayesian nonparametric statistics and in the study of conditioned diffusions [10, 22]. In Sect. 2 we will give the precise definition of the space \({\tilde{{\mathcal {H}}}}\) and identify it with an appropriate Sobolev-like subspace of \({\mathcal {H}}\) (denoted by \({\mathcal {H}}^s\) in Sect. 2). The covariance operator \({\mathcal {C}}\) is a positive, self-adjoint, trace class operator on \({\mathcal {H}}\), with eigenbasis \(\{\lambda _j^2, \phi _j\} \):
$$\begin{aligned} {\mathcal {C}}\phi _j= \lambda _j^2 \phi _j, \quad \forall j \in {\mathbb {N}}, \end{aligned}$$
and we assume that the set \(\{\phi _j\}_{j \in {\mathbb {N}}}\) is an orthonormal basis for \({\mathcal {H}}\).
We will analyse the MALA algorithm designed to sample from the finite dimensional projections \(\pi ^N\) of the measure (1.2) on the space
$$\begin{aligned} X^N:=\text {span}\{\phi _j\}_{j=1}^N \subset {\mathcal {H}}\end{aligned}$$
spanned by the first N eigenvectors of the covariance operator. Notice that the space \(X^N\) is isomorphic to \({\mathbb {R}}^N\). To clarify this further, we need to introduce some notation. Given a point \(x \in {\mathcal {H}}\), \({\mathcal {P}}^N(x):=\sum _{j=1}^N\left\langle \phi _j,x \right\rangle \phi _j\) is the projection of x onto the space \(X^N\), and we define the approximations of the functional \(\varPsi \) and of the covariance operator \({\mathcal {C}}\):
$$\begin{aligned} \varPsi ^N:=\varPsi \circ {\mathcal {P}}^N \quad \text{ and } \quad {\mathcal {C}}_N:={\mathcal {P}}^N\circ {\mathcal {C}}\circ {\mathcal {P}}^N. \end{aligned}$$
With this notation in place, our target measure is the measure \(\pi ^N\) (on \(X^N \cong {\mathbb {R}}^N \)) defined as
$$\begin{aligned} \frac{d\pi ^N}{d\pi _0^N}(x)=M_{\varPsi ^N}e^{-\varPsi ^N(x)}, \qquad \pi _0^N:={\mathcal {N}}(0,{\mathcal {C}}_N), \end{aligned}$$
where \(M_{\varPsi ^N}\) is a normalization constant. Notice that the sequence of measures \(\{\pi ^N\}_{N\in {\mathbb {N}}}\) approximates the measure \(\pi \) (in particular, the sequence \(\{\pi ^N\}_{N\in {\mathbb {N}}}\) converges to \(\pi \) in the Hellinger metric, see [22, Section 4] and references therein). In order to sample from the measure \(\pi ^N\) in (1.6), we will consider the MALA algorithm with proposal
$$\begin{aligned} y^{k,N}=x^{k,N}+\delta {\mathcal {C}}_N\nabla \log \pi ^N\big (x^{k,N}\big )+ \sqrt{2 \delta }\, {\mathcal {C}}_N^{1/2} \xi ^{k,N}, \end{aligned}$$
$$\begin{aligned} \xi ^{k,N}=\sum _{i=1}^N \xi _i\phi _i, \quad \xi _i {\mathop {\sim }\limits ^{{\mathcal {D}}}} {\mathcal {N}}(0,1) \text{ i.i.d. }, \end{aligned}$$
and \(\delta >0\) is a positive parameter. We rewrite \(y^{k,N}\) as
$$\begin{aligned} y^{k,N}=x^{k,N}-\delta \bigl (x^{k,N}+ {\mathcal {C}}_N \nabla \varPsi ^N\big (x^{k,N}\big )\bigr )+ \sqrt{2 \delta }\, {\mathcal {C}}_N^{1/2} \xi ^{k,N}. \end{aligned}$$
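To fix ideas, the following finite-dimensional sketch (an illustration only, not the implementation analysed in this paper) performs one step of the chain with proposal (1.7) and the accept-reject rule (1.1); the functions psi and grad_psi are user-supplied, and \({\mathcal {C}}_N\) is taken diagonal in its eigenbasis with entries lam2 \(=\lambda _j^2\):

```python
import numpy as np

def mala_step(x, psi, grad_psi, lam2, ell, rng):
    """One MALA step in R^N with C_N = diag(lam2) and delta = ell/sqrt(N)."""
    N = x.size
    delta = ell / np.sqrt(N)

    def log_target(z):      # log pi^N(z) up to an additive constant
        return -psi(z) - 0.5 * np.sum(z**2 / lam2)

    def prop_mean(z):       # drift part of the proposal (1.7)
        return z - delta * (z + lam2 * grad_psi(z))

    def log_q(z, w):        # log q(z, w), with w ~ N(prop_mean(z), 2*delta*C_N)
        return -np.sum((w - prop_mean(z))**2 / lam2) / (4.0 * delta)

    y = prop_mean(x) + np.sqrt(2.0 * delta * lam2) * rng.standard_normal(N)
    log_alpha = log_target(y) + log_q(y, x) - log_target(x) - log_q(x, y)
    return y if np.log(rng.uniform()) < log_alpha else x  # accept with prob. (1.1)
```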
The proposal defines the kernel q and enters the accept-reject criterion \(\alpha \), which is added to preserve detailed balance with respect to \(\pi ^N\) (more details on the algorithm will be given in Sect. 2.2). The proposal is a discretization of a \(\pi ^N\)-invariant diffusion process with time step \(\delta \); in the MCMC literature \(\delta \) is often referred to as the proposal variance. The accept-reject criterion compensates for the discretization, which destroys the \(\pi ^N\)-reversibility. A crucial parameter to be appropriately chosen in order to optimize the performance of the algorithm is \(\delta \); such a choice will depend on the dimension N of the state space. To be more precise, set \(\delta =\ell N^{-\zeta }\), where \(\ell , \zeta \) are two positive parameters, the latter being, for the time being, the more relevant to this discussion. As explained when outlining the context of this paper, if \(\zeta \) is too large (so that \(\delta \) is too small) then the algorithm will tend to move very slowly; if \(\zeta \) is too small, then the proposed moves will be very large and the algorithm will tend to reject them very often. In this paper we show that, if the algorithm is started out of stationarity then, in the non-stationary regime, the optimal choice of \(\zeta \) is \(\zeta =1/2\). In particular, if
$$\begin{aligned} \delta =\ell /\sqrt{N} \end{aligned}$$
then the acceptance probability is \({{\mathcal {O}}}(1)\). Furthermore, starting from the Metropolis–Hastings chain \(\{x^{k,N}\}_{k\in {\mathbb {N}}}\), we define the continuous interpolant
$$\begin{aligned}&x^{(N)}(t)=(N^{1/2}t-k)x^{k+1,N}+(k+1-N^{1/2}t)x^{k,N}, \quad \nonumber \\&t_k\le t< t_{k+1}, \text{ where } t_k=\frac{k}{N^{1/2}}. \end{aligned}$$
This process converges weakly to a diffusion process. The precise statement of such a result is given in Theorem 4.2 (and Sect. 4 contains heuristic arguments which explain how such a result is obtained). In proving the result we will use the fact that W(t) is an \({\mathcal {H}}^s\)-valued Brownian motion with covariance \({\mathcal {C}}_s\), with \({\mathcal {H}}^s\) a (Hilbert) subspace of \({\mathcal {H}}\) and \({\mathcal {C}}_s\) the covariance in this space. Details of these spaces are given in Sect. 2, see in particular (2.4) and (2.5). Below, \(C([0,T];{\mathcal {H}}^s)\) denotes the space of \({\mathcal {H}}^s\)-valued continuous functions on [0, T], endowed with the uniform topology; \(\alpha _{\ell }, h_{\ell }\) and \(b_{\ell }\) are real-valued functions, which we will define immediately after the statement, and \(x^{k,N}_j\) denotes the jth component of the vector \(x^{k,N}\in X^N\) with respect to the basis \(\{\phi _1,\ldots ,\phi _N\}\) (more details on this notation are given in Sect. 2.1).
Main Result
Let \(\{x^{k,N}\}_{k\in {\mathbb {N}}}\) be the Metropolis–Hastings Markov chain to sample from \(\pi ^N\) and constructed using the MALA proposal (1.7) (i.e. the chain (2.14)) with \(\delta \) chosen to satisfy (1.8). Then, for any deterministic initial datum \(x^{0,N}={\mathcal {P}}^N(x^0)\), where \(x^0\) is any point in \({\mathcal {H}}^s\), the continuous interpolant \(x^{(N)}\) defined in (1.9) converges weakly in \(C([0,T];{\mathcal {H}}^s)\) to the solution of the SDE
$$\begin{aligned} dx(t)=- h_{\ell }(S(t)) \bigl (x(t)+{\mathcal {C}}\nabla \varPsi (x(t)) \bigr ) \, dt+\sqrt{2h_{\ell }(S(t))} \, dW(t) , \quad x(0)=x^0, \end{aligned}$$
where \(S(t) \in {\mathbb {R}}_+:=\{s\in {\mathbb {R}}: s\ge 0\}\) solves the ODE
$$\begin{aligned} dS(t)=b_{\ell }(S(t))\, dt, \qquad S(0):= \lim _{N \rightarrow \infty } \frac{1}{N}\sum _{j=1}^N \frac{\left| x_j^{0,N} \right| ^2}{\lambda _j^2} . \end{aligned}$$
In the above the initial datum S(0) is assumed to be finite and W(t) is an \({\mathcal {H}}^s\)-valued Brownian motion with covariance \({\mathcal {C}}_s\).
The functions \(\alpha _{\ell }, h_{\ell }, b_{\ell }: {\mathbb {R}}\rightarrow {\mathbb {R}}\) in the previous statement are defined as follows:
$$\begin{aligned} \alpha _{\ell }(s)&= 1\wedge e^{\ell ^2 (s-1)/2} \end{aligned}$$
$$\begin{aligned} h_{\ell }(s)&= \ell \alpha _{\ell }(s) \end{aligned}$$
$$\begin{aligned} b_{\ell }(s)&= 2\ell (1-s)\left( 1\wedge e^{\ell ^2 (s-1)/2} \right) = 2 (1-s) h_{\ell }(s). \end{aligned}$$
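As a quick numerical illustration of these definitions (not part of the analysis below), one can integrate the ODE (1.11) with drift \(b_{\ell }\) by forward Euler and observe the convergence \(S(t)\rightarrow 1\) established in Theorem 3.1; the step size and horizon are arbitrary:

```python
import numpy as np

def b_ell(s, ell):
    """Drift (1.14) of the limiting ODE for S(t)."""
    return 2.0 * ell * (1.0 - s) * min(1.0, np.exp(ell**2 * (s - 1.0) / 2.0))

def integrate_S(S0, ell=1.0, T=10.0, dt=1e-3):
    """Forward-Euler integration of dS = b_ell(S) dt on [0, T]."""
    S = S0
    for _ in range(int(T / dt)):
        S += dt * b_ell(S, ell)
    return S

# Both integrate_S(3.0) and integrate_S(0.2) return values close to 1,
# whether started above or below the fixed point S = 1.
```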
Remark 1.1
We make several remarks concerning the main result.
Since the effective time-step implied by the interpolation (1.9) is \(N^{-1/2}\), the main result implies that the number of steps required by the Markov chain in its non-stationary regime is \({{\mathcal {O}}}(N^{1/2})\). A more detailed discussion on this fact can be found in Sect. 4.
Notice that Eq. (1.11) evolves independently of Eq. (1.10). Once the MALA algorithm (2.14) is introduced and an initial state \(x^0\in {\tilde{{\mathcal {H}}}}\) is given such that S(0) is finite, the real valued (double) sequence \(S^{k,N}\),
$$\begin{aligned} S^{k,N}:=\frac{1}{N} \sum _{i=1}^N \frac{\left| x^{k,N}_i\right| ^2}{\lambda _i^2} \end{aligned}$$
started at \(S_0^N:=\frac{1}{N} \sum _{i=1}^N \frac{\left| x^{0,N}_i\right| ^2}{\lambda _i^2}\) is well defined. For fixed N, \(\{S^{k,N}\}_k\) is not, in general, a Markov process (however it is Markov if e.g. \(\varPsi =0\)). Consider the continuous interpolant \(S^{(N)}(t)\) of the sequence \(S^{k,N}\), namely
$$\begin{aligned} S^{(N)}(t)=(N^{1/2}t-k)S^{k+1,N}+(k+1-N^{1/2}t)S^{k,N}, \quad t_k\le t< t_{k+1}, \,\, t_k=\frac{k}{N^{\frac{1}{2}}}.\nonumber \\ \end{aligned}$$
In Theorem 4.1 we prove that \(S^{(N)}(t)\) converges in probability in \(C([0,T];{\mathbb {R}})\) to the solution of the ODE (1.11) with initial condition \(S_0:=\lim _{N\rightarrow \infty }S_0^N\). Once such a result is obtained, we can prove that \(x^{(N)}(t)\) converges to x(t). We want to stress that the convergence of \(S^{(N)}(t)\) to S(t) can be obtained independently of the convergence of \(x^{(N)}(t)\) to x(t).
Let \(S(t):{\mathbb {R}}\rightarrow {\mathbb {R}}\) be the solution of the ODE (1.11). We will prove (see Theorem 3.1) that \(S(t) \rightarrow 1\) as \(t\rightarrow \infty \); this is also consistent with the fact that, in stationarity, \(S^{k,N}\) converges to 1 as \(N \rightarrow \infty \) (for every \(k>0\)), see Remark 4.1. In view of this and the above comment, S(t) (or \(S^{k,N}\)) can be taken as an indication of how close the chain is to stationarity. Moreover, notice that \(h_{\ell }(1)=\ell \); heuristically one can then argue that the asymptotic behaviour of the law of x(t), the solution of (1.10), is described by the law of the following infinite dimensional SDE:
$$\begin{aligned} dz(t)=-\ell (z(t)+{\mathcal {C}}\nabla \varPsi (z(t)))dt+ \sqrt{2\ell } dW(t). \end{aligned}$$
It was proved in [9, 10] that (1.17) is ergodic with unique invariant measure given by (1.2). Our deduction concerning computational cost is made on the assumption that the law of (1.10) does indeed tend to the law of (1.17), although we will not prove this here as it would take us away from the main goal of the paper which is to establish the diffusion limit of the MALA algorithm.
In [12, 13] the diffusion limit for the MALA algorithm started out of stationarity and applied to i.i.d. target product measures is given by a non-linear equation of McKean-Vlasov type. This is in contrast with our diffusion limit, which is an infinite-dimensional SDE. The reason why this is the case is discussed in detail in [14, Section 1.2]. The discussion in the latter paper is in the context of the Random Walk Metropolis algorithm, but it is conceptually analogous to what holds for the MALA algorithm and for this reason we do not spell it out here.
In this paper we make stronger assumptions on \(\varPsi \) than are required to prove a diffusion limit in the stationary regime [17]. In particular we assume that the first derivative of \(\varPsi \) is bounded, whereas [17] requires only boundedness of the second derivative. Removing this assumption on the first derivative, or showing that it is necessary, would be of interest but would require different techniques to those employed in this paper and we do not address the issue here.
The proposal we employ in this paper is the standard MALA proposal. It can be seen as a particular case of the more general proposal introduced in [4, equation (4.2)], see also [1]; in our notation this proposal can be written as
$$\begin{aligned} y^{k+1,N}= x^{k,N} +\delta \big \{-(1-\theta )x^{k,N}-\theta y^{k+1,N}- {\mathcal {C}}_N \nabla \varPsi ^N \big (x^{k,N}\big )\big \}+\sqrt{2 \delta } \xi ^{k,N}.\nonumber \\ \end{aligned}$$
In the above, \(\theta \in [0,1]\) is a parameter. The choice \(\theta = 0\) corresponds to our proposal. When \(\theta = 1/2\), the resulting algorithm is well posed in infinite dimensions; as a consequence a diffusion limit is obtained, in and out of stationarity, without scaling \(\delta \) with respect to N; see Remark 4.3. When \(\theta \ne 1/2\) the algorithms all suffer from the curse of dimensionality: it is necessary to scale \(\delta \) inversely with a power of N to obtain an acceptable acceptance probability. In this paper we study how the efficiency decreases with N when \(\theta =0\); results analogous to the ones we prove here will hold for any \(\theta \ne 1/2\), but proving them at this level of generality would lengthen the article without adding insight. Furthermore, for non-Gaussian priors practitioners might use the algorithm with \(\theta =0\), and so our results shed light on that case; if the prior is actually Gaussian, practitioners should use the algorithm with \(\theta = \frac{1}{2}\). There is no reason to use any other value of \(\theta \) in practice, as far as we are aware.
Structure of the paper
The paper is organized as follows. In Sect. 2 we introduce the notation and the assumptions that we use throughout this paper. In particular, Sect. 2.1 introduces the infinite dimensional setting in which we work, Sect. 2.2 discusses the MALA algorithm and the assumptions we make on the functional \(\varPsi \) and on the covariance operator \({\mathcal {C}}\). Section 3 contains the proof of existence and uniqueness of solutions for the limiting Eqs. (1.10) and (1.11). With these preliminaries in place, we give, in Sect. 4, the formal statement of the main results of this paper, Theorems 4.1 and 4.2. In this section we also provide heuristic arguments outlining how the main results are obtained. The complete proof of these results builds on a continuous mapping argument presented in Sect. 5. The heuristics of Sect. 4 are made rigorous in Sects. 6–8. In particular, Sect. 6 contains some estimates of the size of the chain's jumps and the growth of its moments, as well as the study of the acceptance probability. In Sects. 7 and 8 we use these estimates and approximations to prove Theorems 4.1 and 4.2, respectively. Readers interested in the structure of the proofs of Theorems 4.1 and 4.2 but not in the technical details may wish to skip the ensuing two sections (Sects. 2 and 3) and proceed directly to the statement of these results and the relevant heuristics discussed in Sect. 4.
Notation, algorithm, and assumptions
In this section we detail the notation and the assumptions (Sects. 2.1 and 2.3, respectively) that we will use in the rest of the paper.
Let \(\left( {\mathcal {H}}, \langle \cdot , \cdot \rangle , \Vert \cdot \Vert \right) \) denote a real separable infinite dimensional Hilbert space, with the canonical norm induced by the inner-product. Let \(\pi _0\) be a zero-mean Gaussian measure on \({\mathcal {H}}\) with covariance operator \({\mathcal {C}}\). By the general theory of Gaussian measures [8], \({\mathcal {C}}\) is a positive, trace class operator. Let \(\{\phi _j,\lambda ^2_j\}_{j \ge 1}\) be the eigenfunctions and eigenvalues of \({\mathcal {C}}\), respectively, so that (1.3) holds. We assume a normalization under which \(\{\phi _j\}_{j \ge 1}\) forms a complete orthonormal basis of \({\mathcal {H}}\). Recalling (1.4), we specify the notation that will be used throughout this paper:
x and y are elements of the Hilbert space \({\mathcal {H}}\);
the letter N is reserved to denote the dimensionality of the space \(X^N\) where the target measure \(\pi ^N\) is supported;
\(x^N\) is an element of \(X^N\)\(\cong {\mathbb {R}}^N\) (similarly for \(y^N\) and the noise \(\xi ^N\));
for any fixed \(N \in {\mathbb {N}}\), \(x^{k,N}\) is the kth step of the chain \(\{x^{k,N}\}_{k \in {\mathbb {N}}} \subseteq X^N\) constructed to sample from \(\pi ^N\); \(x^{k,N}_i\) is the ith component of the vector \(x^{k,N}\), that is \(x^{k,N}_i:=\langle x^{k,N}, \phi _i\rangle \) (with abuse of notation).
For every \(x \in {\mathcal {H}}\), we have the representation \(x = \sum _{j\ge 1} \; x_j \phi _j\), where \(x_j:=\langle x,\phi _j\rangle .\) Using this expansion, we define Sobolev-like spaces \({\mathcal {H}}^s, s \in {\mathbb {R}}\), with the inner-products and norms defined by
$$\begin{aligned} \langle x,y \rangle _s = \sum _{j=1}^\infty j^{2s}x_jy_j \qquad \text {and} \qquad \Vert x\Vert ^2_s = \sum _{j=1}^\infty j^{2s} \, x_j^{2}. \end{aligned}$$
The space \(({\mathcal {H}}^s, \langle \cdot , \cdot \rangle _s)\) is also a Hilbert space. Notice that \({\mathcal {H}}^0 = {\mathcal {H}}\). Furthermore \({\mathcal {H}}^s \subset {\mathcal {H}}\subset {\mathcal {H}}^{-s}\) for any \(s >0\). The Hilbert–Schmidt norm \(\Vert \cdot \Vert _{\mathcal {C}}\) associated with the covariance operator \({\mathcal {C}}\) is defined as
$$\begin{aligned} \left| \left| x\right| \right| _{{\mathcal {C}}}^2 := \sum _{j=1}^{\infty } \lambda _j^{-2} x_j^2= \sum _{j=1}^{\infty } \frac{\left| \langle x, \phi _j\rangle \right| ^2}{\lambda _j^2},\qquad x\in {\mathcal {H}}, \end{aligned}$$
and it is the Cameron–Martin norm associated with the Gaussian measure \({\mathcal {N}}(0,{\mathcal {C}})\). Such a norm is induced by the scalar product
$$\begin{aligned} \langle x, y\rangle _{{\mathcal {C}}} :=\langle {\mathcal {C}}^{-1/2}x, {\mathcal {C}}^{-1/2}y \rangle , \qquad x,y\in {\mathcal {H}}. \end{aligned}$$
Similarly, \({\mathcal {C}}_N\) defines a Hilbert–Schmidt norm on \(X^N\),
$$\begin{aligned} \left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2:=\sum _{j=1}^{N} \frac{\left| \langle x^N, \phi _j\rangle \right| ^2}{\lambda _j^2},\qquad x^N\in X^N, \end{aligned}$$
which is induced by the scalar product
$$\begin{aligned} \langle x^N, y^N\rangle _{{\mathcal {C}}_N} :=\left\langle {\mathcal {C}}_N^{-1/2}x^N, {\mathcal {C}}_N^{-1/2}y^N \right\rangle , \qquad x^N,y^N\in X^N. \end{aligned}$$
For \(s \in {\mathbb {R}}\), let \(L_s : {\mathcal {H}}\rightarrow {\mathcal {H}}\) denote the operator which is diagonal in the basis \(\{\phi _j\}_{j \ge 1}\) with diagonal entries \(j^{2s}\),
$$\begin{aligned} L_s \,\phi _j = j^{2s} \phi _j, \end{aligned}$$
so that \(L^{\frac{1}{2}}_s \,\phi _j = j^s \phi _j\). The operator \(L_s\) lets us alternate between the Hilbert space \({\mathcal {H}}\) and the interpolation spaces \({\mathcal {H}}^s\) via the identities:
$$\begin{aligned} \langle x,y \rangle _s = \left\langle L^{\frac{1}{2}}_s x,L^{\frac{1}{2}}_s y \right\rangle \qquad \text {and} \qquad \Vert x\Vert ^2_s =\left\| L^{\frac{1}{2}}_s x\right\| ^2. \end{aligned}$$
Since \(\left| \left| L_s^{-1/2} \phi _k\right| \right| _{s} = \left| \left| \phi _k\right| \right| =1\), we deduce that \(\{{\hat{\phi }}_k:=L^{-1/2}_s \phi _k \}_{k \ge 1}\) forms an orthonormal basis of \({\mathcal {H}}^s\). An element \(y\sim {\mathcal {N}}(0,{\mathcal {C}})\) can be expressed as
$$\begin{aligned} y=\sum _{j=1}^{\infty } \lambda _j \rho _j \phi _j \qquad \text{ with } \qquad \rho _j{\mathop {\sim }\limits ^{{\mathcal {D}}}}{\mathcal {N}}(0,1) \,\,\text{ i.i.d }. \end{aligned}$$
If \(\sum _j \lambda _j^2 j^{2s}<\infty \), then y can be equivalently written as
$$\begin{aligned} y=\sum _{j=1}^{\infty } (\lambda _j j^s) \rho _j (L_s^{-1/2} \phi _j) \qquad \text{ with } \qquad \rho _j{\mathop {\sim }\limits ^{{\mathcal {D}}}}{\mathcal {N}}(0,1) \,\,\text{ i.i.d }. \end{aligned}$$
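To make (2.2)–(2.3) concrete, the sketch below draws the first N Karhunen–Loève coefficients of \(y\sim {\mathcal {N}}(0,{\mathcal {C}})\); the algebraic eigenvalue decay \(\lambda _j = j^{-\kappa }\) is purely an illustrative assumption:

```python
import numpy as np

def sample_prior_coefficients(N, kappa=1.0, rng=None):
    """Coefficients y_j = lambda_j * rho_j of a truncated draw from
    N(0, C), as in (2.2), with the illustrative choice lambda_j = j^(-kappa)."""
    rng = np.random.default_rng() if rng is None else rng
    j = np.arange(1, N + 1)
    return j ** (-kappa) * rng.standard_normal(N)

# With kappa = 1, sum_j lambda_j^2 j^(2s) is finite exactly for s < 1/2,
# so these truncated draws define an element of H^s for those s.
```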
For a positive, self-adjoint operator \(D : {\mathcal {H}}\mapsto {\mathcal {H}}\), its trace in \({\mathcal {H}}\) is defined as
$$\begin{aligned} {\mathrm{Trace}}_{{\mathcal {H}}}(D) \;{:=}\; \sum _{j=1}^\infty \langle \phi _j, D \phi _j \rangle . \end{aligned}$$
We stress that in the above \( \{ \phi _j \}_{j \in {\mathbb {N}}} \) is an orthonormal basis for \(({\mathcal {H}}, \langle \cdot , \cdot \rangle )\). Therefore, if \({\tilde{D}}:{\mathcal {H}}^s \rightarrow {\mathcal {H}}^s\), its trace in \({\mathcal {H}}^s\) is
$$\begin{aligned} {\mathrm{Trace}}_{{\mathcal {H}}^s}({\tilde{D}}) \;{=}\; \sum _{j=1}^\infty \left\langle L_s^{-\frac{1}{2}} \phi _j, {\tilde{D}} L_s^{-\frac{1}{2}} \phi _j \right\rangle _s. \end{aligned}$$
Since \({\mathrm{Trace}}_{{\mathcal {H}}^s}({\tilde{D}})\) does not depend on the orthonormal basis, the operator \({\tilde{D}}\) is said to be trace class in \({\mathcal {H}}^s\) if \({\mathrm{Trace}}_{{\mathcal {H}}^s}({\tilde{D}}) < \infty \) for some, and hence any, orthonormal basis of \({\mathcal {H}}^s\). Because \({\mathcal {C}}\) is defined on \({\mathcal {H}}\), the covariance operator
$$\begin{aligned} {\mathcal {C}}_s=L_s^{1/2} {\mathcal {C}}L_s^{1/2} \end{aligned}$$
is defined on \({\mathcal {H}}^s\). Thus, for all values of s such that \({\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)=\sum _j \lambda _j^2 j^{2s}< \infty \), we can think of y as a mean zero Gaussian random variable with covariance operator \({\mathcal {C}}\) in \({\mathcal {H}}\) and \({\mathcal {C}}_s\) in \({\mathcal {H}}^s\) [see (2.2) and (2.3)]. In the same way, if \({\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)< \infty \), then
$$\begin{aligned} W(t)= \sum _{j=1}^{\infty } \lambda _j w_j(t) \phi _j= \sum _{j=1}^{\infty }\lambda _j j^r w_j(t) {\hat{\phi }}_j, \end{aligned}$$
where \(\{ w_j(t)\}_{j \ge 1}\) a collection of i.i.d. standard Brownian motions on \({\mathbb {R}}\), can be equivalently understood as an \({\mathcal {H}}\)-valued \({\mathcal {C}}\)-Brownian motion or as an \({\mathcal {H}}^s\)-valued \({\mathcal {C}}_s\)-Brownian motion.
We will make use of the following elementary inequality,
$$\begin{aligned} \left| \left\langle x,y \right\rangle \right| ^2=\left| \sum _{j=1}^{\infty } (j^s x_j)(j^{-s}y_j)\right| ^2 \le \left| \left| x\right| \right| _{s}^2 \left| \left| y\right| \right| _{-s}^2,\qquad \forall x \in {\mathcal {H}}^s,\quad y \in {\mathcal {H}}^{-s}. \end{aligned}$$
Throughout this paper we study sequences of real numbers, random variables and functions, indexed by either (or both) the dimension N of the space on which the target measure is defined or the chain's step number k. In doing so, we find the following notation convenient.
Two (double) sequences of real numbers \(\{A^{k,N}\}\) and \(\{B^{k,N}\}\) satisfy \(A^{k,N} \lesssim B^{k,N}\) if there exists a constant \(K>0\) (independent of N and k) such that
$$\begin{aligned} A^{k,N}\le KB^{k,N}, \end{aligned}$$
for all N and k such that \(\{A^{k,N}\}\) and \(\{B^{k,N}\}\) are defined.
If the \(A^{k,N}\)s and \(B^{k,N}\)s are random variables, the above inequality must hold almost surely (for some deterministic constant K).
If the \(A^{k,N}\)s and \(B^{k,N}\)s are real-valued functions on \({\mathcal {H}}\) or \({\mathcal {H}}^s\), \(A^{k,N}= A^{k,N}(x)\) and \(B^{k,N}= B^{k,N}(x)\), the same inequality must hold with K independent of x, for all x where the \(A^{k,N}\)s and \(B^{k,N}\)s are defined.
As is customary, \({\mathbb {R}}_+:=\{s\in {\mathbb {R}}: s \ge 0\}\) and for all \(b \in {\mathbb {R}}_+\) we let \([b]=n\) if \(n\le b < n+1\) for some integer n. Finally, for time-dependent functions we will use the notations S(t) and \(S_t\) interchangeably.
A natural variant of the MALA algorithm stems from the observation that \(\pi ^N\) is the unique stationary measure of the SDE
$$\begin{aligned} dY_t={\mathcal {C}}_N\nabla \log \pi ^N(Y_t)dt+\sqrt{2}dW^N_t, \end{aligned}$$
where \(W^N\) is an \(X^N\)-valued Brownian motion with covariance operator \({\mathcal {C}}_N\). The algorithm consists of discretising (2.7) using the Euler–Maruyama scheme and adding a Metropolis accept-reject step so that the invariance of \(\pi ^N\) is preserved. The variant on MALA which we study is therefore a Metropolis–Hastings algorithm with proposal
$$\begin{aligned} y^{k,N} =x^{k,N}- \delta \left( x^{k,N}+ {\mathcal {C}}_N \nabla \varPsi ^N\big (x^{k,N}\big )\right) + \sqrt{2\delta } {\mathcal {C}}_N^{1/2} \xi ^{k,N}, \end{aligned}$$
$$\begin{aligned} \xi ^{k,N}:= \sum _{j=1}^N \xi ^{k,N}_j \phi _j, \quad \xi ^{k,N}_j \sim {\mathcal {N}}(0,1){ \text{ i.i.d }}. \end{aligned}$$
We stress that the Gaussian random variables \(\xi ^{k,N}_i\) are independent of each other and of the current position \(x^{k,N}\). Motivated by the considerations made in the introduction (and that will be made more explicit in Sect. 4.1), in this paper we fix the choice
$$\begin{aligned} \delta :=\frac{\ell }{N^{1/2}}. \end{aligned}$$
If at step k the chain is at \(x^{k,N}\), the algorithm proposes a move to \(y^{k,N}\) defined by Eq. (2.8). The move is then accepted with probability
$$\begin{aligned} \alpha ^N\big (x^{k,N},y^{k,N}\big ):=1\wedge \frac{\pi ^N\big (y^{k,N}\big ) q^N\big (y^{k,N}, x^{k,N}\big )}{\pi ^N\big (x^{k,N}\big ) q^N\big (x^{k,N}, y^{k,N}\big )}, \end{aligned}$$
where, for any \(x^N, y^N \in {\mathbb {R}}^N \simeq X^N\),
$$\begin{aligned} q^N\big (x^N,y^N\big )\propto e^{-\frac{1}{4\delta }\Vert \big (y^N-x^N\big )-\delta \,{\mathcal {C}}_N\nabla \log \pi ^N\big (x^N\big )\Vert ^2_{{\mathcal {C}}_N}}. \end{aligned}$$
If the move to \(y^{k,N}\) is accepted then \(x^{k+1,N}=y^{k,N}\); if it is rejected, the chain remains where it was, i.e. \(x^{k+1,N}=x^{k,N}\). In short, the MALA chain is defined as follows:
$$\begin{aligned} x^{k+1,N}:=\gamma ^{k,N} y^{k,N}+ \big (1-\gamma ^{k,N})x^{k,N},\qquad x^{0,N}:={\mathcal {P}}^N(x^0\big ), \end{aligned}$$
where in the above
$$\begin{aligned} \gamma ^{k,N}{\mathop {\sim }\limits ^{{\mathcal {D}}}}{\mathrm{Bernoulli}}\big ( \alpha ^N\big (x^{k,N},y^{k,N}\big )\big ); \end{aligned}$$
that is, conditioned on \((x^{k,N},y^{k,N})\), \(\gamma ^{k,N}\) has Bernoulli law with mean \(\alpha ^N(x^{k,N},y^{k,N})\). Equivalently, we can write
$$\begin{aligned} \gamma ^{k,N}=\mathbf{{1}}_{\big \{U^{k,N}\le \alpha ^N\big (x^{k,N},y^{k,N}\big )\big \}}, \end{aligned}$$
with \(U^{k,N}{\mathop {\sim }\limits ^{{\mathcal {D}}}}\) Uniform\(\,[0,1]\), independent of \(x^{k,N}\) and \(\xi ^{k,N}\).
For fixed N, the chain \(\{x^{k,N}\}_{k\ge 1}\) lives in \(X^N \cong {\mathbb {R}}^N\) and samples from \(\pi ^N\). However, in view of the fact that we want to study the scaling limit of such a chain as \(N \rightarrow \infty \), the analysis is cleaner if it is carried out in \({\mathcal {H}}\); therefore, the chain that we analyse is the chain \(\{x^k\}_{k}\subseteq {\mathcal {H}}\) defined as follows: the first N components of the vector \(x^k \in {\mathcal {H}}\) coincide with \(x^{k,N}\) as defined above; the remaining components are not updated and remain equal to their initial value. More precisely, using (2.8) and (2.12), the chain \(x^k\) can be written in a component-wise notation as follows:
$$\begin{aligned} x^{k+1}_i=x^{k+1,N}_i =x^{k,N}_i- \gamma ^{k,N} \left[ \frac{\ell }{N^{1/2}}\left( x^{k,N}_i+ \big [{\mathcal {C}}_N \nabla \varPsi ^N\big (x^{k,N}\big )\big ]_i\right) + \sqrt{\frac{2 \ell }{N^{1/2}}} \lambda _i \,\xi ^{k,N}_i \right] \qquad \forall i\le N \end{aligned}$$
$$\begin{aligned} x_i^{k+1}=x^k_i=0 \qquad \forall i\ge N+1. \end{aligned}$$
For the sake of clarity, we specify that \([{\mathcal {C}}_N \nabla \varPsi ^N(x^{k,N})]_i\) denotes the ith component of the vector \({\mathcal {C}}_N \nabla \varPsi ^N(x^{k,N}) \in {\mathcal {H}}^s\). From the above it is clear that the update rule (2.14) only updates the first N coordinates (with respect to the eigenbasis of \({\mathcal {C}}\)) of the vector \(x^k\). Therefore the algorithm evolves in the finite-dimensional subspace \(X^N\). From now on we will avoid using the notation \(\{x^k\}_k\) for the "extended chain" defined in \({\mathcal {H}}\), as it can be confused with the notation \(x^N\), which instead is used throughout to denote a generic element of the space \(X^N\).
We conclude this section by remarking that, if \(x^{k,N}\) is given, the proposal \(y^{k,N}\) only depends on the Gaussian noise \(\xi ^{k,N}\). Therefore the acceptance probability will be interchangeably denoted by \(\alpha ^N\big (x^N,y^N\big )\) or \(\alpha ^N\big (x^N,\xi ^N\big )\).
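For concreteness, here is a minimal sketch of the algorithm just described, written in the coordinates of the eigenbasis of \({\mathcal {C}}\). The eigenvalue decay \(\lambda _j=j^{-\kappa }\) and the functional \(\varPsi \) of Example 2.1 below are assumed purely for illustration; the acceptance probability is computed directly from the ratio in (2.10)–(2.11).

```python
import numpy as np

# Minimal sketch of the MALA variant (2.8)-(2.13), in the coordinates of the
# eigenbasis of C. Assumed for illustration: lambda_j = j^{-kappa} and
# Psi(x) = sqrt(1 + ||x||_s^2), i.e. the functional of Example 2.1.
N, kappa, s, ell = 200, 1.0, 0.25, 1.0
delta = ell / np.sqrt(N)                       # the scaling (2.9)
j = np.arange(1, N + 1)
lam2 = j ** (-2 * kappa)                       # lambda_j^2

def grad_psi(x):                               # gradient of Psi in H
    return (j ** (2 * s)) * x / np.sqrt(1 + np.sum(j ** (2 * s) * x ** 2))

def log_target(x):                             # log pi^N, up to a constant
    return -np.sqrt(1 + np.sum(j ** (2 * s) * x ** 2)) - 0.5 * np.sum(x ** 2 / lam2)

def log_q(x, y):                               # log proposal density (2.11)
    drift = -(x + lam2 * grad_psi(x))          # C_N grad log pi^N(x)
    return -np.sum((y - x - delta * drift) ** 2 / lam2) / (4 * delta)

def mala_step(x, rng):
    xi = rng.standard_normal(N)
    y = x - delta * (x + lam2 * grad_psi(x)) + np.sqrt(2 * delta) * np.sqrt(lam2) * xi
    logQ = log_target(y) + log_q(y, x) - log_target(x) - log_q(x, y)
    if np.log(rng.uniform()) <= logQ:          # accept with probability 1 ^ e^Q
        return y, True
    return x, False

rng = np.random.default_rng(1)
x = 0.5 * np.sqrt(lam2) * rng.standard_normal(N)   # start out of stationarity
accepts = 0
for k in range(500):
    x, ok = mala_step(x, rng)
    accepts += ok
print(f"empirical acceptance rate: {accepts / 500:.3f}")
```

With the scaling (2.9) the empirical acceptance rate stays of order one, in line with the discussion of Sect. 4.1 below.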
Assumptions
In this section, we describe the assumptions on the covariance operator \({\mathcal {C}}\) of the Gaussian measure \(\pi _0 {\mathop {\sim }\limits ^{{\mathcal {D}}}}{\mathcal {N}}(0,{\mathcal {C}})\) and those on the functional \(\varPsi \). We fix a distinguished exponent \(s\ge 0\) and assume that \(\varPsi : {\mathcal {H}}^s\rightarrow {\mathbb {R}}\) and \({\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)<\infty \). In other words, \({\mathcal {H}}^s\) is the space that we were denoting with \({\tilde{{\mathcal {H}}}}\) in the introduction. Since
$$\begin{aligned} {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)= \sum _{j=1}^{\infty } \lambda _j^2 j^{2s}, \end{aligned}$$
the condition \({\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)<\infty \) implies that \(\lambda _j j^s \rightarrow 0\) as \(j \rightarrow \infty \). Therefore the sequence \(\{\lambda _j j^s\}_j\) is bounded:
$$\begin{aligned} \lambda _j j^s \le C, \end{aligned}$$
for some constant \(C>0\) independent of j.
For each \(x \in {\mathcal {H}}^s\) the derivative \(\nabla \varPsi (x)\) is an element of the dual \({\mathcal {L}}({\mathcal {H}}^s,{\mathbb {R}})\) of \({\mathcal {H}}^s\), comprising the linear functionals on \({\mathcal {H}}^s\). However, we may identify \( {\mathcal {L}}({\mathcal {H}}^s,{\mathbb {R}})={\mathcal {H}}^{-s}\) and view \(\nabla \varPsi (x)\) as an element of \({\mathcal {H}}^{-s}\) for each \(x \in {\mathcal {H}}^s\). With this identification, the following identity holds
$$\begin{aligned} \left| \left| \nabla \varPsi (x)\right| \right| _{{\mathcal {L}}({\mathcal {H}}^s,{\mathbb {R}})} = \left| \left| \nabla \varPsi (x)\right| \right| _{-s}. \end{aligned}$$
To avoid technical complications we assume that the gradient of \(\varPsi (x)\) is bounded and globally Lipschitz. More precisely, throughout this paper we make the following assumptions.
Assumption 2.1
The functional \(\varPsi \) and covariance operator \({\mathcal {C}}\) satisfy the following:
Decay of Eigenvalues \(\lambda _j^2\) of \({\mathcal {C}}\): there exists a constant \(\kappa > s+\frac{1}{2}\) such that
$$\begin{aligned} j^{-\kappa }\lesssim \lambda _j \lesssim j^{-\kappa }. \end{aligned}$$
Domain of \(\varPsi \): the functional \(\varPsi \) is defined everywhere on \({\mathcal {H}}^s\).
Derivatives of \(\varPsi \): The derivative of \(\varPsi \) is bounded and globally Lipschitz:
$$\begin{aligned} \left| \left| \nabla \varPsi (x)\right| \right| _{-s} \lesssim 1,\qquad \left| \left| \nabla \varPsi (x)- \nabla \varPsi (y)\right| \right| _{-s} \lesssim \left| \left| x-y\right| \right| _{s}. \end{aligned}$$
The condition \(\kappa > s+\frac{1}{2}\) ensures that \({\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s) < \infty \). Consequently, \(\pi _0\) has support in \({\mathcal {H}}^s\) (\(\pi _0({\mathcal {H}}^s)=1\)). \(\square \)
Example 2.1
The functional \(\varPsi (x) = \sqrt{1+\left| \left| x\right| \right| _{s}^2}\) satisfies all of the above.
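Indeed, writing \(x_j=\left\langle x,\phi _j \right\rangle \), a direct computation (a quick check of the first bound in (2.19); the Lipschitz bound follows from a very similar calculation) gives
$$\begin{aligned} \nabla \varPsi (x)=\frac{L_s\, x}{\sqrt{1+\left| \left| x\right| \right| _{s}^2}}, \qquad \left| \left| \nabla \varPsi (x)\right| \right| _{-s}^2 = \frac{\sum _{j} j^{-2s}\big (j^{2s} x_j\big )^2}{1+\left| \left| x\right| \right| _{s}^2} = \frac{\left| \left| x\right| \right| _{s}^2}{1+\left| \left| x\right| \right| _{s}^2}\le 1. \end{aligned}$$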
Our assumptions on the change of measure (that is, on \(\varPsi \)) are less general than those adopted in [14, 17] and related literature (see references therein). This is for purely technical reasons. In this paper we assume that \(\varPsi \) grows at most linearly. If \(\varPsi \) were assumed to grow quadratically, which is the case in the mentioned works, finding bounds on the moments of the chain \(\{x^{k,N}\}_{k\ge 1}\) (much needed in all of the analysis) would become more involved than it already is; see Remark C.1. However, under our assumptions, the measure \(\pi \) (or \(\pi ^N\)) is still, generically, of non-product form. \(\square \)
We now explore the consequences of Assumption 2.1. The proofs of the following lemmas can be found in Appendix A.
Lemma 2.1
Suppose that Assumption 2.1 holds. Then
The function \({\mathcal {C}}\nabla \varPsi (x)\) is bounded and globally Lipschitz on \({\mathcal {H}}^s\), that is
$$\begin{aligned} \left| \left| {\mathcal {C}}\nabla \varPsi (x)\right| \right| _{s}\lesssim 1 \quad \text{ and } \quad \left| \left| {\mathcal {C}}\nabla \varPsi (x)-{\mathcal {C}}\nabla \varPsi (y)\right| \right| _{s}\lesssim \left| \left| x-y\right| \right| _{s}. \end{aligned}$$
Therefore, the function \(F(z):=-z-{\mathcal {C}}\nabla \varPsi (z)\) satisfies
$$\begin{aligned} \left| \left| F(x) - F(y)\right| \right| _{s} \lesssim \left| \left| x-y\right| \right| _{s} \quad \text{ and } \quad \left| \left| F(x)\right| \right| _{s} \lesssim 1+ \left| \left| x\right| \right| _{s}. \end{aligned}$$
The function \(\varPsi (x)\) is globally Lipschitz and therefore also \(\varPsi ^N(x):=\varPsi ({\mathcal {P}}^N(x))\) is globally Lipschitz:
$$\begin{aligned} \left| \varPsi ^N(y)-\varPsi ^N(x)\right| \lesssim \left| \left| y-x\right| \right| _{s}. \end{aligned}$$
Before stating the next lemma, we observe that by definition of the projection operator \({\mathcal {P}}^N\) we have that
$$\begin{aligned} \nabla \varPsi ^N={\mathcal {P}}^N\circ \nabla \varPsi \circ {\mathcal {P}}^N. \end{aligned}$$
Lemma 2.2
Suppose that Assumption 2.1 holds. Then the following holds for the function \(\varPsi ^N\) and for its gradient:
If the bounds (2.19) hold for \(\varPsi \), then they hold for \(\varPsi ^N\) as well:
$$\begin{aligned} \left| \left| \nabla \varPsi ^N(x)\right| \right| _{-s}\lesssim 1,\qquad \left| \left| \nabla \varPsi ^N(x)- \nabla \varPsi ^N(y)\right| \right| _{-s} \lesssim \left| \left| x-y\right| \right| _{s}. \end{aligned}$$
Moreover,
$$\begin{aligned} \left| \left| {\mathcal {C}}_N\nabla \varPsi ^N(x)\right| \right| _s\lesssim 1, \end{aligned}$$
$$\begin{aligned} \left| \left| {\mathcal {C}}_N\nabla \varPsi ^N(x)\right| \right| _{{\mathcal {C}}_N}\lesssim 1. \end{aligned}$$
We stress that in (2.24)–(2.26) the constant implied by the use of the notation "\( \lesssim \)" (see end of Sect. 2.1) is independent of N. Lastly, in what follows we will need the fact that, due to our assumptions on the covariance operator,
$$\begin{aligned} {\mathbb {E}}\left| \left| {\mathcal {C}}_N^{1/2} \xi ^N\right| \right| _{s}^2 \lesssim 1, \quad \hbox { uniformly in}\ N, \end{aligned}$$
where \(\xi ^N:=\sum _{j=1}^N\xi _j\phi _j\) and \(\xi _j {\mathop {\sim }\limits ^{{\mathcal {D}}}} {\mathcal {N}}(0,1)\) i.i.d.; see [15, (2.32)] or [14, first proof of Appendix A].
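In fact, (2.27) is immediate from Assumption 2.1, since
$$\begin{aligned} {\mathbb {E}}\left| \left| {\mathcal {C}}_N^{1/2} \xi ^N\right| \right| _{s}^2 = {\mathbb {E}}\sum _{j=1}^N j^{2s}\lambda _j^2 \left| \xi _j\right| ^2 = \sum _{j=1}^N \lambda _j^2 j^{2s} \le {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)<\infty . \end{aligned}$$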
Existence and uniqueness for the limiting diffusion process
The main results of this section are Theorems 3.1, 3.2 and 3.3. Theorems 3.1 and 3.2 are concerned with establishing existence and uniqueness for Eqs. (1.10) and (1.11), respectively. Theorem 3.3 states the continuity of the Itô maps associated with Eqs. (1.10) and (1.11). The proofs of the main results of this paper (Theorems 4.1 and 4.2) rely heavily on the continuity of such maps, as we illustrate in Sect. 5. Once Lemma 3.1 below is established, the proofs of the theorems in this section are completely analogous to the proofs of those in [14, Section 4]. For this reason, we omit them and refer the reader to [14]. In what follows, recall that the definition of the functions \(\alpha _{\ell }, h_{\ell }\) and \(b_{\ell }\) has been given in (1.12), (1.13) and (1.14), respectively.
Lemma 3.1
The functions \(\alpha _{\ell }(s)\), \(h_{\ell }(s)\) and \(\sqrt{h_{\ell }(s)}\) are positive, globally Lipschitz continuous and bounded. The function \(b_{\ell }(s)\) is globally Lipschitz and it is bounded above but not below. Moreover, for any \(\ell >0\), \(b_{\ell }(s)\) is strictly positive for \(s\in [0,1)\), strictly negative for \(s>1\) and \(b_{\ell }(1)=0\).
Proof of Lemma 3.1
When \(s>1\), \(\alpha _{\ell }(s)=1\), while for \(s\le 1\) \(\alpha _{\ell }(s)\) has bounded derivative; therefore \(\alpha _{\ell }(s)\) is globally Lipschitz. A similar reasoning gives the Lipschitz continuity of the other functions. The further properties of \(b_{\ell }\) are straightforward from the definition. \(\square \)
In the case of (1.11) we have the following.
Theorem 3.1
For any initial datum \(S(0) >0\), there exists a unique solution S(t) to the ODE (1.11). The solution is strictly positive for any \(t>0\), it is bounded and has continuous first derivative for all \(t\ge 0\). In particular
$$\begin{aligned} \lim _{t\rightarrow \infty } S(t) =1 \, \end{aligned}$$
$$\begin{aligned} 0\le \min \{S(0),1\}\le S(t) \le \max \{S(0), 1\} \, . \end{aligned}$$
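The convergence \(S(t)\rightarrow 1\) is easy to visualise numerically. The following sketch (our illustration) integrates (1.11) by forward Euler; since (1.14) is not reproduced here, we take for \(b_{\ell }\) the expression that emerges from the heuristic drift computation of Sect. 4.2, \(b_{\ell }(s)=2\ell (1-s)\,\alpha _{\ell }(s)\) with \(\alpha _{\ell }(s)=1\wedge e^{\ell ^2(s-1)/2}\); this formula is an assumption of the sketch.

```python
import numpy as np

# Sketch of the fluid ODE (1.11), dS/dt = b_ell(S(t)). The formula for b_ell
# below is the one consistent with the heuristic drift computation of
# Sect. 4.2; it is an assumption of this illustration, not a copy of (1.14).
ell = 1.0

def alpha(s):                 # alpha_ell(s) = 1 ^ exp(ell^2 (s - 1) / 2)
    return min(1.0, np.exp(ell ** 2 * (s - 1.0) / 2.0))

def b(s):                     # positive for s < 1, zero at 1, negative for s > 1
    return 2.0 * ell * (1.0 - s) * alpha(s)

def solve(S0, T=10.0, dt=1e-3):
    S = S0
    for _ in range(int(T / dt)):   # forward Euler
        S += dt * b(S)
    return S

for S0 in (0.1, 0.5, 2.0, 5.0):
    print(f"S(0) = {S0:4}  ->  S(10) ~ {solve(S0):.4f}")   # all approach 1
```

Whatever the starting point, the trajectories are monotone towards 1, consistent with the bounds in the statement of Theorem 3.1.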
For (1.10) we have that:
Theorem 3.2
Let Assumption 2.1 hold and consider Eq. (1.10), where W(t) is any \({\mathcal {H}}^s\)-valued \({{{\mathcal {C}}}}_s\)-Brownian motion and S(t) is the solution of (1.11). Then for any initial condition \( x^0\in {\mathcal {H}}^s\) and any \(T>0\) there exists a unique solution of Eq. (1.10) in the space \(C([0,T]; {\mathcal {H}}^s)\).
Consider the deterministic equations
$$\begin{aligned} dz(t)=[-z(t)-{\mathcal {C}}\nabla \varPsi (z(t))] h_{\ell }(S(t)) \, dt + d\zeta (t),\qquad z(0)=z^0 \end{aligned}$$
$$\begin{aligned} d{\mathfrak {S}}(t)=b_{\ell }({\mathfrak {S}}(t)) \, dt+ dw(t),\qquad {\mathfrak {S}}(0)={\mathfrak {S}}^0, \end{aligned}$$
where S is the solution of (1.11), \(z^0\in {\mathcal {H}}^s\), \({\mathfrak {S}}^0\in {\mathbb {R}}\), and \(\zeta \) and w are functions in \(C([0,T];{\mathcal {H}}^s)\) and \(C([0,T];{\mathbb {R}})\), respectively. Throughout the paper, we endow the spaces \(C([0,T];{\mathcal {H}}^s)\) and \(C([0,T];{\mathbb {R}})\) with the uniform topology. The following is the starting point of the continuous mapping arguments presented in Sect. 5.
Theorem 3.3
Suppose that Assumption 2.1 is satisfied. Both (3.2) and (3.3) have unique solutions in \(C([0,T];{\mathcal {H}}^s)\) and \(C([0,T];{\mathbb {R}})\), respectively. The Itô maps
$$\begin{aligned} {\mathcal {J}}_1: {\mathcal {H}}^s\times C([0,T]; {\mathcal {H}}^s)&\longrightarrow C([0,T];{\mathcal {H}}^s) \\ (z^0,\zeta )&\longrightarrow z \end{aligned}$$
$$\begin{aligned} {\mathcal {J}}_2: {\mathbb {R}}_+ \times C([0,T]; {\mathbb {R}})&\longrightarrow C([0,T]; {\mathbb {R}}) \\ ({\mathfrak {S}}^0, w)&\longrightarrow {\mathfrak {S}} \end{aligned}$$
are continuous.
Main theorems and heuristics of proofs
In order to state the main results, we first set
$$\begin{aligned} {\mathcal {H}}^s_{\cap }:=\left\{ x \in {\mathcal {H}}^s: \lim _{N \rightarrow \infty } \frac{1}{N}\sum _{i=1}^N \frac{\left| x_i \right| ^2}{ \lambda _i^2}< \infty \right\} , \end{aligned}$$
where we recall that in the above \(x_i:= \left\langle x,\phi _i \right\rangle \).
Theorem 4.1
Let Assumption 2.1 hold and let \(\delta =\ell /N^{\frac{1}{2}}\). Let \(x^0\in {\mathcal {H}}^s_{\cap }\) and \(T>0\). Then, as \(N\rightarrow \infty \), the continuous interpolant \(S^{(N)}(t)\) of the sequence \(\{S^{k,N}\}_{k\in {\mathbb {N}}} \subseteq {\mathbb {R}}_+\) (defined in (1.16)), started at \(S^{0,N}=\frac{1}{N}\sum _{i=1}^N \left| x_{i}^{0} \right| ^2 / \lambda _i^2 \), converges in probability in \(C([0,T]; {\mathbb {R}})\) to the solution S(t) of the ODE (1.11) with initial datum \(S^0:=\lim _{N\rightarrow \infty }S^{0,N}\).
For the following theorem recall that the solution of (1.10) is interpreted precisely through Theorem 3.2 as a process driven by an \({\mathcal {H}}^s-\)valued Brownian motion with covariance \({\mathcal {C}}_s\), and solution in \(C([0,T];{\mathcal {H}}^s).\)
Theorem 4.2
Let Assumption 2.1 hold and let \(\delta =\ell /N^{\frac{1}{2}}\). Let \(x^0\in {\mathcal {H}}^s_{\cap }\) and \(T>0\). Then, as \(N \rightarrow \infty \), the continuous interpolant \(x^{(N)}(t)\) of the chain \(\{x^{k,N}\}_{k\in {\mathbb {N}}} \subseteq {\mathcal {H}}^s\) (defined in (1.9) and (2.14), respectively) with initial state \(x^{0,N}:={\mathcal {P}}^N(x^0)\), converges weakly in \(C([0,T]; {\mathcal {H}}^s)\) to the solution x(t) of Eq. (1.10) with initial datum \(x^0\). We recall that the time-dependent function S(t) appearing in (1.10) is the solution of the ODE (1.11), started at \(S(0):= \lim _{N \rightarrow \infty } \frac{1}{N}\sum _{i=1}^N \left| x_i^{0} \right| ^2 / \lambda _i^2\).
Both Theorems 4.1 and 4.2 assume that the initial datum of the chains \(x^{k,N}\) is assigned deterministically. From our proofs it will be clear that the same statements also hold for random initial data, as long as (i) \(x^{0,N}\) is not drawn at random from the target measure \(\pi ^N\) or from any other measure which is a change of measure from \(\pi ^N\) (i.e. we need to be starting out of stationarity) and (ii) \(S^{0,N}\) and \(x^{0,N}\) have bounded moments (bounded uniformly in N) of sufficiently high order and are independent of all the other sources of noise present in the algorithm. Notice moreover that the convergence in probability of Theorem 4.1 is equivalent to weak convergence, as the limit is deterministic.
The rigorous proof of the above results is contained in Sects. 5–8. In the remainder of this section we give heuristic arguments to justify our choice of scaling \(\delta \propto N^{-1/2}\) and we explain how one can formally obtain the (fluid) ODE limit (1.11) for the double sequence \(S^{k,N}\) and the diffusion limit (1.10) for the chain \(x^{k,N}\). We stress that the arguments of this section are only formal; therefore, we often use the notation "\(\simeq \)", to mean "approximately equal". That is, we write \(A\simeq B\) when \(A=B+\) "terms that are negligible" as N tends to infinity; we then justify these approximations, and the resulting limit theorems, in the following Sects. 5–8.
Heuristic analysis of the acceptance probability
As observed in [17, equation (2.21)], the acceptance probability (2.10) can be expressed as
$$\begin{aligned} \alpha ^N\big (x^N,\xi ^N\big )= 1\wedge e^{Q^N\big (x^N,\xi ^N\big )}, \end{aligned}$$
where, using the notation (2.1), the function \(Q^N(x,\xi )\) can be written as
$$\begin{aligned} Q^N(x^N, \xi ^N)&:= - \frac{\delta }{4} \left( \left| \left| y^N\right| \right| _{{\mathcal {C}}_N}^2 - \left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2\right) + r^N (x^N, \xi ^N) \end{aligned}$$
$$\begin{aligned}&= \left[ \frac{\delta ^2}{2}\left( \left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2- \left| \left| {\mathcal {C}}_N^{1/2}\xi ^N\right| \right| _{{\mathcal {C}}_N}^2 \right) \right] - \frac{\delta ^3}{4} \left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2 \nonumber \\&\quad -\left( \frac{\delta ^{3/2}}{\sqrt{2}} - \frac{\delta ^{5/2}}{\sqrt{2}} \right) \langle x^N, {\mathcal {C}}_N^{1/2} \xi ^N\rangle _{{\mathcal {C}}_N}+r_{\varPsi }^N (x^N, \xi ^N). \end{aligned}$$
We do not give here a complete expression for the terms \(r^{N}(x^N,\xi ^N)\) and \(r^N_{\varPsi }(x^N,\xi ^N)\). For the time being it is sufficient to point out that
$$\begin{aligned} r^N\big (x^N,\xi ^N\big )&:=I_2^N+ I_3^N \nonumber \\ r^N_{\varPsi }\big (x^N,\xi ^N\big )&:= r^N\big (x^N,\xi ^N\big ) + \frac{\left( \delta ^2-\delta ^3 \right) }{2} \big \langle x^N, {\mathcal {C}}_N \nabla \varPsi ^N\big (x^N\big )\big \rangle _{{\mathcal {C}}_N}\nonumber \\&\quad - \frac{\delta ^3}{4}\big \Vert {\mathcal {C}}_N \nabla \varPsi ^N\big (x^N\big ) \big \Vert _{{\mathcal {C}}_N}^2+ \frac{\delta ^{5/2}}{\sqrt{2}} \big \langle {\mathcal {C}}_N \nabla \varPsi ^N\big (x^N\big ), {\mathcal {C}}_N^{1/2}\xi ^N \big \rangle _{{\mathcal {C}}_N} \end{aligned}$$
where \(I_2^N\) and \(I_3^N\) will be defined in (6.10) and (6.11), respectively. Because \(I_2^N\) and \(I_3^N\) depend on \(\varPsi \), \(r^N_{\varPsi }\) contains all the terms where the functional \(\varPsi \) appears; moreover \(r^N_{\varPsi }\) vanishes when \(\varPsi =0\). The analysis of Sect. 6 (see Lemma 6.4) will show that with our choice of scaling, \(\delta = \ell / N^{1/2}\), the terms \(r^N\) and \(r^N_{\varPsi }\) are negligible (for N large). Let us now illustrate the reason behind our choice of scaling. To this end, set \(\delta = \ell / N^{\zeta }\) and observe the following two simple facts:
$$\begin{aligned} S^{k,N}= \frac{1}{N}\sum _{j=1}^N \frac{\left| x^{k,N}_j\right| ^2}{\lambda _j^2}= \frac{1}{N} \left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 \end{aligned}$$
$$\begin{aligned} \left| \left| {\mathcal {C}}_N^{1/2}\xi ^N\right| \right| _{{\mathcal {C}}_N}^2=\sum _{i=1}^N \left| \xi _i\right| ^2 \simeq N, \end{aligned}$$
the latter fact being true by the Law of Large Numbers. Neglecting the terms containing \(\varPsi \), at step k of the chain we have, formally,
$$\begin{aligned} Q^N(x^{k,N}, \xi ^{k+1,N}) \simeq&\frac{\ell ^2}{2} N^{1-2\zeta } \left( S^{k,N}- 1 \right) \end{aligned}$$
$$\begin{aligned}&- \frac{\ell ^3}{4} N^{1-3\zeta } S^{k,N} - \frac{\ell ^{3/2}}{\sqrt{2}} N^{(1-3\zeta )/2} \frac{\langle x^{k,N}, {\mathcal {C}}_N^{1/2} \xi ^{k,N}\rangle _{{\mathcal {C}}_N}}{\sqrt{N}} \nonumber \\\end{aligned}$$
$$\begin{aligned}&+ \frac{\ell ^{5/2}}{\sqrt{2}} N^{(1-5\zeta )/2} \frac{\langle x^{k,N}, {\mathcal {C}}_N^{1/2} \xi ^{k,N}\rangle _{{\mathcal {C}}_N}}{\sqrt{N}}. \end{aligned}$$
The above approximation (which, we stress again, is only formal and will be made rigorous in subsequent sections) has been obtained from (4.4) by setting \(\delta = \ell / N^{\zeta }\) and using (4.6) and (4.7), as follows:
$$\begin{aligned} \frac{\delta ^2}{2} \left[ \left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2- \left| \left| {\mathcal {C}}_N^{1/2}\xi ^N\right| \right| _{{\mathcal {C}}_N}^2\right]&\simeq (4.8),\\ - \delta ^3 \frac{\left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2}{4} - \frac{\delta ^{3/2}}{\sqrt{2}} \langle x^N, {\mathcal {C}}_N^{1/2} \xi ^N\rangle _{{\mathcal {C}}^N}&\simeq (4.9), \nonumber \\ + \frac{\delta ^{5/2}}{\sqrt{2}} \langle x^N, {\mathcal {C}}_N^{1/2} \xi ^N\rangle _{{\mathcal {C}}^N}&= (4.10). \nonumber \end{aligned}$$
Looking at the decomposition (4.8)–(4.10) of the function \(Q^N\), we can now heuristically explain why we are led to choose \(\zeta =1/2\) when we start the chain out of stationarity, as opposed to the scaling \(\zeta =1/3\) appropriate when the chain is started in stationarity. This is explained in the following remark.
First notice that the expression (4.4) and the approximation (4.8)–(4.10) for \(Q^N\) are valid both in and out of stationarity, as the former is only a consequence of the definition of the Metropolis–Hastings algorithm and the latter is implied just by the properties of \(\varPsi \) and by our definitions.
If we start the chain in stationarity, i.e. \(x_0^N\sim \pi ^N\) (where \(\pi ^N\) has been defined in (1.6)), then \(x^{k,N} \sim \pi ^N\) for every \(k \ge 0\). As we have already observed, \(\pi ^N\) is absolutely continuous with respect to the Gaussian measure \(\pi _0^N \sim {\mathcal {N}}(0, {\mathcal {C}}_N)\); because all the almost sure properties are preserved under this change of measure, in the stationary regime most of the estimates of interest need to be shown only for \(x^N \sim \pi _0^N\). In particular if \(x^N \sim \pi _0^N\) then \(x^N\) can be represented as \(x^N= \sum _{i=1}^N \lambda _i \rho _i \phi _i\), where \(\rho _i\) are i.i.d. \({\mathcal {N}}(0,1)\). Therefore we can use the law of large numbers and observe that \(\Vert x^N\Vert _{{\mathcal {C}}^N}^2=\sum _{i=1}^N \left| \rho _{i} \right| ^2 \simeq N \).
Suppose we want to study the algorithm in stationarity and we therefore make the choice \(\zeta =1/3\). With the above point in mind, notice that if we start in stationarity then by the Law of Large Numbers \(N^{-1}\sum _{i=1}^N \left| \rho _{i} \right| ^2= S^{k,N}\rightarrow 1\) (as \(N\rightarrow \infty \), with speed of convergence \(N^{-1/2}\)). Moreover, if \(x^N \sim \pi _0^N\), by the Central Limit Theorem the term \(\langle x^N, {\mathcal {C}}_N^{1/2} \xi ^N\rangle _{{\mathcal {C}}_N}/\sqrt{N}\) is O(1) and converges to a standard Gaussian. With these two observations in place we can then heuristically see that, with the choice \(\zeta =1/3\), the term in (4.10) is negligible as \(N\rightarrow \infty \) while the terms in (4.9) are O(1). The term in (4.8) can be better understood by looking at the LHS of (4.11) which, with \(\zeta =1/3\) and \(x^N \sim \pi _0^N\), can be rewritten as
$$\begin{aligned} \frac{\ell ^2}{2N^{2/3}} \sum _{i=1}^N (\left| \rho _i\right| ^2- \left| \xi _i\right| ^2 ). \end{aligned}$$
The expected value of the above expression is zero. If we apply the Central Limit Theorem to the i.i.d. sequence \(\{\left| \rho _i\right| ^2- \left| \xi _i\right| ^2 \}_i\), (4.12) shows that (4.8) is \(O(N^{1/2-2/3})\) and therefore negligible as \(N \rightarrow \infty \). In conclusion, in the stationary case the only O(1) terms are those in (4.9); therefore one has the heuristic approximation
$$\begin{aligned} Q^N(x,\xi ) \sim {\mathcal {N}} \left( -\frac{\ell ^3}{4}, \frac{\ell ^3}{2}\right) . \end{aligned}$$
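Indeed, with \(\zeta =1/3\), \(S^{k,N}\rightarrow 1\) and \(\langle x^N, {\mathcal {C}}_N^{1/2} \xi ^N\rangle _{{\mathcal {C}}_N}/\sqrt{N}\) converging to a standard Gaussian Z, the two surviving terms in (4.9) give
$$\begin{aligned} Q^N \simeq - \frac{\ell ^3}{4} - \frac{\ell ^{3/2}}{\sqrt{2}}\, Z, \qquad Z\sim {\mathcal {N}}(0,1), \end{aligned}$$
which has precisely mean \(-\ell ^3/4\) and variance \(\ell ^3/2\).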
For more details on the stationary case see [17].
If instead we start out of stationarity the choice \(\zeta =1/3\) is problematic. Indeed in [6, Lemma 3] the authors study the MALA algorithm to sample from an N-dimensional isotropic Gaussian and show that if the algorithm is started at a point \(x^0\) such that \(S(0) <1\), then the acceptance probability degenerates to zero. Therefore, the algorithm stays stuck in its initial state and never proceeds to the next move, see [6, Figure 2] (to be more precise, as N increases the algorithm will take longer and longer to get unstuck from its initial state; in the limit, it will never move with probability 1). Therefore the choice \(\zeta =1/3\) cannot be the optimal one (at least not irrespective of the initial state of the chain) if we start out of stationarity. This is still the case in our context and one can heuristically see that the root of the problem lies in the term (4.8). Indeed if out of stationarity we still choose \(\zeta =1/3\) then, like before, (4.9) is still order one and (4.10) is still negligible. However, looking at (4.8), if \(x^0\) is such that \(S(0)<1\) then, when \(k=0\), (4.8) tends to minus infinity; recalling (4.2), this implies that the acceptance probability of the first move tends to zero. To overcome this issue and make \(Q^N\) of order one (irrespective of the initial datum) so that the acceptance probability is of order one and does not degenerate to 0 or 1 when \(N \rightarrow \infty \), we take \(\zeta =1/2\); in this way the terms in (4.8) are O(1), all the others are small. Therefore, the intuition leading the analysis of the non-stationary regime hinges on the fact that, with our scaling,
$$\begin{aligned} Q^N(x^{k,N}, \xi ^{k,N}) \simeq \frac{\ell ^2}{2}(S^{k,N} -1); \end{aligned}$$
$$\begin{aligned} \alpha ^N(x^{k,N}, \xi ^{k,N}) = (1 \wedge e^{Q^N(x^{k,N}, \xi ^{k,N})}) \simeq \alpha _{\ell }\big (S^{k,N}\big ), \end{aligned}$$
where the function \(\alpha _{\ell }\) on the RHS of (4.14) is the one defined in (1.12). The approximation (4.13) is made rigorous in Lemma 6.4, while (4.14) is formalized in Sect. 6.1 (see in particular Proposition 6.1).
Finally, we mention for completeness that, by arguing similarly to what we have done so far, if \(\zeta < 1/2\) then the acceptance probability of the first move tends to zero when \(S(0)<1\). If \(\zeta >1/2\) then \(Q^N \rightarrow 0\), so the acceptance probability tends to one; however the size of the moves is small and the algorithm explores the phase space slowly.
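This degeneracy is easy to observe numerically. The sketch below (our illustration, with \(\varPsi \equiv 0\) and \(\ell =1\), so that \(\pi ^N\) is exactly Gaussian) estimates the acceptance probability of the first move from a state with \(S(0)=1/2\): in the coordinates \(z_j=x_j/\lambda _j\) the proposal (2.8) reads \(z'=(1-\delta )z+\sqrt{2\delta }\,\xi \) and \(Q^N\) can be evaluated exactly.

```python
import numpy as np

# Illustration of the scaling discussion: Psi = 0, ell = 1, target N(0, C_N).
# In the coordinates z_j = x_j / lambda_j the target is N(0, I_N) and the
# proposal (2.8) becomes z' = (1 - delta) z + sqrt(2 delta) xi.
rng = np.random.default_rng(1)

def first_move_accept(N, zeta, S0, reps=200):
    delta = N ** (-zeta)
    acc = 0.0
    for _ in range(reps):
        z = np.sqrt(S0) * np.ones(N)            # initial state with S(0) = S0
        xi = rng.standard_normal(N)
        zp = (1.0 - delta) * z + np.sqrt(2.0 * delta) * xi
        # log of the Metropolis-Hastings ratio (4.2) for the Gaussian target
        logQ = (0.5 * np.sum(z ** 2) - 0.5 * np.sum(zp ** 2)
                + np.sum((zp - (1.0 - delta) * z) ** 2) / (4.0 * delta)
                - np.sum((z - (1.0 - delta) * zp) ** 2) / (4.0 * delta))
        acc += min(1.0, np.exp(logQ))
    return acc / reps

for N in (100, 1000, 10000):
    print(N, first_move_accept(N, 1/3, 0.5), first_move_accept(N, 1/2, 0.5))
# zeta = 1/3: acceptance collapses to 0 as N grows (the chain gets stuck);
# zeta = 1/2: it stabilises around 1 ^ exp((S0 - 1)/2), in line with (4.13).
```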
Notice that in stationarity the function \(Q^N\) is, to leading order, independent of \(\xi \); that is, \(Q^N\) and \(\xi \) are asymptotically independent (see [17, Lemma 4.5]). This can be intuitively explained because in stationarity the leading order term in the expression for \(Q^N\) is the term with \(\delta ^3 \Vert x\Vert ^2\). We will show that also out of stationarity \(Q^N\) and \(\xi \) are asymptotically independent. In this case such an asymptotic independence can, roughly speaking, be motivated by the approximation (4.13) (as the interpolation of the chain \(S^{k,N}\) converges to a deterministic limit). The asymptotic correlation of \(Q^N\) and the noise \(\xi \) is analysed in Lemma 6.5.
When one employs the more general proposal (1.18), assuming \(\varPsi \equiv 0\), the expression for \(Q^N\) becomes
$$\begin{aligned} Q^N\big (x^{k,N}, y^{k,N}\big ) = -\frac{\delta }{4} (1-2\theta ) \left( \Vert x^{k,N}\Vert _{{\mathcal {C}}^N}^2 - \Vert y^{k,N}\Vert _{{\mathcal {C}}^N}^2 \right) . \end{aligned}$$
So, if \(\theta =1/2\), the acceptance probability would be exactly one (for every N), i.e. the algorithm would be sampling exactly from the prior; hence there is no need to rescale \(\delta \) with N.
Heuristic derivation of the weak limit of \(S^{k,N}\)
Let Y be any function of the random variables \(\xi ^{k,N}\) and \(U^{k,N}\) (introduced in Sect. 2.2), for example the chain \(x^{k,N}\) itself. Here and throughout the paper we use \({\mathbb {E}}_{x^0}\left[ Y\right] \) to denote the expected value of Y with respect to the laws of the variables \(\xi ^{k,N}\) and \(U^{k,N}\), with the initial state \(x^0\) of the chain given deterministically; in other words, \({\mathbb {E}}_{x^0}(Y)\) denotes expectation with respect to all the sources of randomness present in Y. We will use the notation \({\mathbb {E}}_k \left[ Y\right] \) for the conditional expectation of Y given \(x^{k,N}\), \({\mathbb {E}}_k \left[ Y\right] :={\mathbb {E}}_{x^0}\left[ Y\left| x^{k,N}\right. \right] \) (we should really be writing \({\mathbb {E}}_k^N\) in place of \({\mathbb {E}}_k\), but to improve readability we will omit the further index N). Let us now decompose the chain \(S^{k,N}\) into its drift and martingale parts:
$$\begin{aligned} S^{k+1,N}=S^{k,N}+\frac{1}{\sqrt{N}} b_{\ell }^{k,N}+ \frac{1}{N^{1/4}}D^{k,N}, \end{aligned}$$
$$\begin{aligned} b_{\ell }^{k,N}:=\sqrt{N}{\mathbb {E}}_k [S^{k+1,N}-S^{k,N}] \end{aligned}$$
$$\begin{aligned} D^{k,N}:= N^{1/4}\left[ S^{k+1,N}-S^{k,N} - \frac{1}{\sqrt{N}}b_{\ell }^{k,N}\big (x^{k,N}\big )\right] . \end{aligned}$$
In this subsection we give the heuristics which underlie the proof, given in subsequent sections, that the approximate drift \(b_{\ell }^{k,N}= b_{\ell }^{k,N}\big (x^{k,N}\big )\) converges to \(b_{\ell }(S^{k,N})\), where \(b_{\ell }\) is the drift of (1.11), while the approximate diffusion \(D^{k,N}\) tends to zero. This formally gives the result of Theorem 4.1. Let us formally argue such a convergence result. By (4.6) and (2.12),
$$\begin{aligned} S^{k+1,N}= \frac{1}{N} \sum _{j=1}^N \frac{\left| x^{k+1,N}_j\right| ^2}{\lambda _j^2} = \frac{1}{N} \left( \gamma ^{k,N}\left| \left| y^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 + (1-\gamma ^{k,N})\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 \right) . \end{aligned}$$
Therefore, again by (4.6),
$$\begin{aligned} b_{\ell }^{k,N}=\sqrt{N} {\mathbb {E}}_k \big [S^{k+1,N} -S^{k,N}\big ]&= \frac{1}{\sqrt{N}} {\mathbb {E}}_k \left[ \gamma ^{k,N}\left( \left| \left| y^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 -\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 \right) \right] \nonumber \\&= \frac{1}{\sqrt{N}} {\mathbb {E}}_k \left[ \big (1 \wedge e^{Q^N\big (x^{k,N},y^{k,N}\big )}\big ) \left( \left| \left| y^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 \right. \right. \nonumber \\&\quad \left. \left. -\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 \right) \right] , \end{aligned}$$
where the second equality is a consequence of the definition of \(\gamma ^{k,N}\) (by a reasoning completely analogous to the one in [14, last proof of Appendix A]; see also (4.24)). Using (4.3) (with \(\delta =\ell /\sqrt{N}\)), the fact that \(r^N\) is negligible and the approximation (4.13), the above gives
$$\begin{aligned} b_{\ell }^{k,N}=\sqrt{N} {\mathbb {E}}_k [S^{k+1,N} -S^{k,N}] \simeq - \frac{4}{\ell } \left( 1 \wedge e^{\ell ^2 \big (S^{k,N}-1\big )/2} \right) \frac{\ell ^2}{2} \big (S^{k,N}-1\big ) = b_{\ell }\big (S^{k,N}\big ). \end{aligned}$$
The above approximation is made rigorous in Lemma 7.5. As for the diffusion coefficient, it is easy to check (see proof of Lemma 7.2) that
$$\begin{aligned} N {\mathbb {E}}_k [S^{k+1,N} -S^{k,N}]^2 <\infty . \end{aligned}$$
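Assuming (as the proof of Lemma 7.2 shows) that this bound is uniform over N and k, the martingale term is indeed small: since \(D^{k,N}=N^{1/4}\big (S^{k+1,N}-S^{k,N}-{\mathbb {E}}_k[S^{k+1,N}-S^{k,N}]\big )\),
$$\begin{aligned} {\mathbb {E}}_k \left| D^{k,N}\right| ^2 = \sqrt{N}\, {\mathbb {E}}_k\big [S^{k+1,N}-S^{k,N}-{\mathbb {E}}_k[S^{k+1,N}-S^{k,N}]\big ]^2 \le \sqrt{N}\, {\mathbb {E}}_k \big [S^{k+1,N}-S^{k,N}\big ]^2 \lesssim \frac{1}{\sqrt{N}}. \end{aligned}$$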
Hence the approximate diffusion tends to zero and one can formally deduce that (the interpolant of) \(S^{k,N}\) converges to the ODE limit (1.11).
Heuristic analysis of the limit of the chain \(x^{k,N}\).
The drift-martingale decomposition of the chain \(x^{k,N}\) is as follows:
$$\begin{aligned} x^{k+1,N}=x^{k,N}+\frac{1}{N^{1/2}}\varTheta ^{k,N}+\frac{1}{N^{1/4}}L^{k,N}\end{aligned}$$
where \(\varTheta ^{k,N}=\varTheta ^{k,N}\big (x^{k,N}\big )\) is the approximate drift
$$\begin{aligned} \varTheta ^{k,N}:=\sqrt{N} {\mathbb {E}}_k \left[ x^{k+1,N}-x^{k,N}\right] \end{aligned}$$
$$\begin{aligned} L^{k,N}:=N^{1/4} \left[ x^{k+1,N}- x^{k,N} - \frac{1}{\sqrt{N}} \varTheta ^{k,N}\big (x^{k,N}\big ) \right] \end{aligned}$$
is the approximate diffusion. In what follows we will use the notation \(\varTheta (x,S)\) for the drift of Eq. (1.10), i.e.
$$\begin{aligned} \varTheta (x, S)= F(x)h_{\ell }(S), \quad (x, S) \in {\mathcal {H}}^s\times {\mathbb {R}}, \end{aligned}$$
with F(x) defined in Lemma 2.1. Again, we want to formally argue that the approximate drift \(\varTheta ^{k,N}\big (x^{k,N}\big )\) tends to \(\varTheta (x^{k,N}, S^{k,N})\) and the approximate diffusion \(L^{k,N}\) tends to the diffusion coefficient of Eq. (1.10).
Approximate drift
As a preliminary consideration, observe that
$$\begin{aligned} {\mathbb {E}}_k \left( \gamma ^{k,N}{\mathcal {C}}_N^{1/2} \xi ^{k,N}\right) = {\mathbb {E}}_k \left( \left( 1 \wedge e^{Q^N(x^{k,N}, \xi ^{k,N})} \right) {\mathcal {C}}_N^{1/2} \xi ^{k,N}\right) , \end{aligned}$$
see [14, equation (5.14)]. This fact will be used throughout the paper, often without mention. Coming to the chain \(x^{k,N}\), a direct calculation based on (2.8) and on (2.12) gives
$$\begin{aligned} x^{k+1,N} - x^{k,N} = - \gamma ^{k,N}\delta \big (x^{k,N} + {\mathcal {C}}_N \nabla \varPsi ^N\big (x^{k,N}\big )\big ) + \gamma ^{k,N}\sqrt{2 \delta } {\mathcal {C}}_N^{1/2} \xi ^{k,N}.\quad \end{aligned}$$
Therefore, with the choice \(\delta = \ell /\sqrt{N}\), we have
$$\begin{aligned} \varTheta ^{k,N}&=\sqrt{N}{\mathbb {E}}_k \big [x^{k+1,N} -x^{k,N}\big ] \nonumber \\&= -\ell {\mathbb {E}}_k \left[ \big (1 \wedge e^{Q^N(x^{k,N},\xi ^{k,N})}\big ) \big (x^{k,N}+ {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\big ) \right] \nonumber \\&\quad +{N^{1/4}} \sqrt{2\ell } {\mathbb {E}}_k \left[ \big (1 \wedge e^{Q^N(x^{k,N},\xi ^{k,N})}\big ) {\mathcal {C}}_N^{1/2} \, \xi ^{k,N}\right] \end{aligned}$$
The addend in (4.26) is asymptotically small (see Lemma 6.5 and notice that this addend would just be zero if \(Q^N\) and \(\xi ^{k,N}\) were uncorrelated); hence, using the heuristic approximations (4.13) and (4.14),
$$\begin{aligned} \varTheta ^{k,N}=\sqrt{N}{\mathbb {E}}_k [x^{k+1,N} -x^{k,N}]&\simeq - \ell \alpha _{\ell }\big (S^{k,N} \big )\big (x^{k,N}+ {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\big )\nonumber \\&{\mathop {=}\limits ^{(1.13)}} - h_{\ell }\big (S^{k,N}\big ) \big (x^{k,N}+ {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\big );\nonumber \\ \end{aligned}$$
the right hand side of the above is precisely the limiting drift \(\varTheta (x^{k,N},S^{k,N})\).
Approximate diffusion
We now look at the approximate diffusion of the chain \(x^{k,N}\):
$$\begin{aligned} L^{k,N}:= N^{1/4} (x^{k+1,N}-x^{k,N}-{\mathbb {E}}_k(x^{k+1,N}-x^{k,N}) ). \end{aligned}$$
By definition,
$$\begin{aligned} {\mathbb {E}}_k\left| \left| L^{k,N}\right| \right| _{s}^2&= \sqrt{N}{\mathbb {E}}_k \left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^2 - \sqrt{N}\left| \left| {\mathbb {E}}_k\left( x^{k+1,N}-x^{k,N}\right) \right| \right| _{s}^2. \end{aligned}$$
By (4.27) the second addend in the above is asymptotically small. Therefore
$$\begin{aligned} {\mathbb {E}}_k\left| \left| L^{k,N}\right| \right| _{s}^2&\simeq \sqrt{N}{\mathbb {E}}_k \left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^2\\&{\mathop {\simeq }\limits ^{(2.12), (4.25)}} {2\ell } {\mathbb {E}}_k \left| \left| \gamma ^{k,N}{\mathcal {C}}_N^{1/2} \xi ^{k,N}\right| \right| _{s}^2\\&= {2\ell }{\mathbb {E}}_k\sum _{j=1}^N j^{2s}\lambda _j^2\left( 1 \wedge e^{Q^N(x^{k,N},\xi ^{k,N})} \right) \left| \xi ^{k,N}_j\right| ^2. \end{aligned}$$
The above quantity is carefully studied in Lemma 6.6. However, intuitively, the heuristic approximation (4.14) (and the asymptotic independence of \(Q^N\) and \(\xi \) that (4.14) is a manifestation of) suffices to formally derive the limiting diffusion coefficient [i.e. the diffusion coefficient of (1.10)]:
$$\begin{aligned} {\mathbb {E}}_k\left| \left| L^{k,N}\right| \right| _{s}^2&\simeq 2\ell \sum _{j=1}^N j^{2s}\lambda _j^2 {\mathbb {E}}_k \left[ \big (1 \wedge e^{Q^N\big (x^{k,N},y^{k,N}\big )} \big )\left| \xi _j^{k,N}\right| ^2\right] \\&\simeq 2\ell \sum _{j=1}^N j^{2s}\lambda _j^2 {\mathbb {E}}_k \left[ \big (1 \wedge e^{\ell ^2\big (S^{k,N}-1\big )/2} \big )\left| \xi _j^{k,N}\right| ^2\right] \\&\simeq 2\ell \sum _{j=1}^N j^{2s}\lambda _j^2 \big (1 \wedge e^{\ell ^2\big (S^{k,N}-1\big )/2} \big )\\&\simeq 2\ell \, {\mathrm{Trace}}({\mathcal {C}}_s)\alpha _{\ell }\big (S^{k,N}\big ){\mathop {=}\limits ^{(1.13)}}2{\mathrm{Trace}}({\mathcal {C}}_s)\,h_{\ell }\big (S^{k,N}\big ). \end{aligned}$$
Continuous mapping argument
In this section we outline the argument which underlies the proofs of our main results. In particular, the proofs of Theorems 4.1 and 4.2 hinge on the continuous mapping arguments that we illustrate in the following Sects. 5.1 and 5.2, respectively. The details of the proofs are deferred to the next three sections: Sect. 6 contains some preliminary results that we employ in both proofs, Sect. 7 contains the proof of Theorem 4.1 and Sect. 8 that of Theorem 4.2.
Continuous mapping argument for (3.3)
Let us recall the definition of the chain \(\{S^{k,N}\}_{k\in {\mathbb {N}}}\) and of its continuous interpolant \(S^{(N)}\), introduced in (1.15) and (1.16), respectively. From the definition (1.16) of the interpolated process and the drift-martingale decomposition (4.15) of the chain \(\{S^{k,N}\}_{k\in {\mathbb {N}}}\) we have that for any \(t \in [t_k, t_{k+1})\),
$$\begin{aligned} S^{(N)}(t)&= (N^{1/2}t-k) \left[ S^{k,N}+\frac{1}{\sqrt{N}} b_{\ell }^{k,N}+ \frac{1}{N^{1/4}}D^{k,N}\right] + (k+1-tN^{1/2}) S^{k,N} \\&= S^{k,N} +(t-t_k) b_{\ell }^{k,N} + N^{1/4} (t-t_k) D^{k,N}. \end{aligned}$$
Iterating the above we obtain
$$\begin{aligned} S^{(N)}(t)&=S^{0,N} + (t-t_k) b_{\ell }^{k,N}+\frac{1}{\sqrt{N}} \sum _{j=0}^{k-1}b_{\ell }^{j,N}+ {w^N(t)}, \end{aligned}$$
$$\begin{aligned} w^N(t):=\frac{1}{N^{1/4}}\sum _{j=0}^{k-1}D^{j,N}+N^{1/4}(t-t_k) D^{k,N}\quad t_k\le t <t_{k+1}. \end{aligned}$$
The expression for \(S^{(N)}(t)\) can then be rewritten as
$$\begin{aligned} S^{(N)}(t) = S^{0,N}+\int _0^t b_{\ell }(S^{(N)}(v)) dv+ {\hat{w}}^N(t), \end{aligned}$$
having set
$$\begin{aligned} {\hat{w}}^N(t):=e^N(t)+w^N(t), \end{aligned}$$
$$\begin{aligned} e^N(t):=(t-t_k) b_{\ell }^{k,N}+\frac{1}{\sqrt{N}} \sum _{j=0}^{k-1}b_{\ell }^{j,N}-\int _0^t b_{\ell }(S^{(N)}(v)) dv. \end{aligned}$$
Equation (5.2) shows that
$$\begin{aligned} S^{(N)}={\mathcal {J}}_2(S^{0,N},{\hat{w}}^N), \end{aligned}$$
where \({\mathcal {J}}_2\) is the Itô map defined in the statement of Theorem 3.3. By the continuity of the map \({\mathcal {J}}_2\), if we show that \({\hat{w}}^N\) converges in probability in \(C([0,T]; {\mathbb {R}})\) to zero, then \(S^{(N)}(t)\) converges in probability to the solution of the ODE (1.11). We prove convergence of \({\hat{w}}^N\) to zero in Sect. 7. In view of (5.3), we show the convergence in probability of \({\hat{w}}^N\) to zero by proving that both \(e^N\) (Lemma 7.1) and \(w^N\) (Lemma 7.2) converge in \(L_2(\varOmega ; C([0,T]; {\mathbb {R}}))\) to zero. Because \(\{S^{0,N}\}_{N\in {\mathbb {N}}}\) is a deterministic sequence that converges to \(S^0\), we then have that \((S^{0,N},{\hat{w}}^N)\) converges in probability to \((S^0,0)\).
Continuous mapping argument for (3.2)
We now consider the chain \(\{x^{k,N}\}_{k\in {\mathbb {N}}}\subseteq {\mathcal {H}}^s\), defined in (2.14). We act analogously to what we have done for the chain \(\{S^{k,N}\}_{k\in {\mathbb {N}}}\). So we start by recalling the definition of the continuous interpolant \(x^{(N)}\), Eq. (1.9) and the notation introduced at the beginning of Sect. 4.3. An argument analogous to the one used to derive (5.2) shows that for any \(t\in [t_k,t_{k+1})\)
$$\begin{aligned} x^{(N)}(t)&=x^{0,N}+ (t-t_k) \varTheta ^{k,N}+ \frac{1}{\sqrt{N}}\sum _{j=0}^k\varTheta ^{j,N}+{\eta ^N(t)}\nonumber \\&= x^{0,N}+\int _0^t \varTheta (x^{(N)}(v),S(v)) dv+ {\hat{\eta }}^N(t), \end{aligned}$$
$$\begin{aligned} {\hat{\eta }}^N(t)&:=d^N(t)+\upsilon ^N(t)+\eta ^N(t), \end{aligned}$$
$$\begin{aligned} {\eta }^N(t)&:={N^{1/4}(t-t_k)L^{k,N}+\frac{1}{N^{1/4}} \sum _{j=1}^{k-1}L^{j,N}}, \end{aligned}$$
$$\begin{aligned} d^N(t)&:= (t-t_k) \varTheta ^{k,N}+ \frac{1}{\sqrt{N}}\sum _{j=0}^{k-1}\varTheta ^{j,N} - \int _0^t \varTheta (x^{(N)}(v),S^{(N)}(v))dv, \end{aligned}$$
$$\begin{aligned} \upsilon ^N(t)&:=\int _0^t \left[ \varTheta (x^{(N)}(v),S^{(N)}(v))- \varTheta (x^{(N)}(v),S(v))\right] dv. \end{aligned}$$
Equation (5.5) implies that
$$\begin{aligned} x^{(N)}={\mathcal {J}}_1(x^{0,N},{\hat{\eta }}^N), \end{aligned}$$
where \({\mathcal {J}}_1\) is the Itô map defined in the statement of Theorem 3.3. In Sect. 8 we prove that \({\hat{\eta }}^N\) converges weakly in \(C([0,T];{\mathcal {H}}^s)\) to the process \(\eta \), the diffusion part of Eq. (1.10), i.e.
$$\begin{aligned} \eta (t):=\int _0^t \sqrt{2h_{\ell }(S(v))} dW_v, \end{aligned}$$
with \(W_v\) a \({\mathcal {H}}^s\)-valued \({\mathcal {C}}_s\)-Brownian motion. Looking at (5.6), we prove the weak convergence of \({\hat{\eta }}^N\) to \(\eta \) by the following steps:
We prove that \(d^N\) converges in \(L_2(\varOmega ; C([0,T]; {\mathcal {H}}^s))\) to zero (Lemma 8.1);
using the convergence in probability (in \(C([0,T]; {\mathbb {R}})\)) of \(S^{(N)}\) to S, we show convergence in probability (in \(C([0,T]; {\mathcal {H}}^s)\)) of \(\upsilon ^N\) to zero (Lemma 8.2);
we show that \(\eta ^N\) converges weakly in \(C([0,T]; {\mathcal {H}}^s)\) to the process \(\eta \), defined in (5.11) (Lemma 8.3).
Because \(\{x^{0,N}\}_{N\in {\mathbb {N}}}\) is a deterministic sequence that converges to \(x^0\), the above three steps (and Slutsky's Theorem) imply that \((x^{0,N},{\hat{\eta }}^N)\) converges weakly to \((x^0,\eta )\). Now observe that \(x(t)={\mathcal {J}}_1(x^0, \eta (t))\), where x(t) is the solution of the SDE (1.10). The continuity of the map \({\mathcal {J}}_1\) (Theorem 3.3), (5.10) and the Continuous Mapping Theorem then imply that the sequence \(\{x^{(N)}\}_{N\in {\mathbb {N}}}\) converges weakly to the solution of the SDE (1.10), thus establishing Theorem 4.2.
Preliminary estimates and analysis of the acceptance probability
This section gathers several technical results. In Lemma 6.1 we study the size of the jumps of the chain. Lemma 6.2 contains uniform bounds on the moments of the chains \(\{x^{k,N}\}_{k\in {\mathbb {N}}}\) and \(\{S^{k,N}\}_{k\in {\mathbb {N}}}\), much needed in Sects. 7 and 8. In Sect. 6.1 we detail the analysis of the acceptance probability. This allows us to quantify the correlations between \(\gamma ^{k,N}\) and the noise \(\xi ^{k,N}\) (Sect. 6.2). Throughout the paper, when referring to the function \(Q^N\) defined in (4.3), we use interchangeably the notation \(Q^N(x^{k,N}, y^{k,N})\) and \(Q^N(x^{k,N}, \xi ^{k,N})\) (as we have already remarked, given \(x^{k,N}\), the proposal \(y^{k,N}\) is only a function of \(\xi ^{k,N}\)).
Lemma 6.1
Let \(q\ge 1/2\) be a real number. Under Assumption 2.1 the following holds:
$$\begin{aligned} {\mathbb {E}}_k{\left| \left| y^{k,N}-x^{k,N}\right| \right| _{s}^{2q}}\lesssim \frac{1}{N^{q/2}}\left( 1+\left| \left| x^{k,N}\right| \right| _{s}^{2q}\right) \end{aligned}$$
$$\begin{aligned} {\mathbb {E}}_k{\left| \left| y^{k,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2q}}\lesssim \big (S^{k,N}\big )^q+N^{q/2}. \end{aligned}$$
$$\begin{aligned} {\mathbb {E}}_k{\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^{2q}}\lesssim \frac{1}{N^{q/2}}\left( 1+\left| \left| x^{k,N}\right| \right| _{s}^{2q}\right) , \end{aligned}$$
$$\begin{aligned} {\mathbb {E}}_k{\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2q}}\lesssim \big (S^{k,N}\big )^q+N^{q/2}. \end{aligned}$$
Proof of Lemma 6.1
By definition of the proposal \(y^{k,N}\), Eq. (2.8),
$$\begin{aligned} \left| \left| y^{k,N}-x^{k,N}\right| \right| _{s}^{2q} =&\, \left| \left| \delta \big (x^{k,N}+{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\big )+\sqrt{2\delta } {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{s}^{2q}\\ \lesssim&\,\frac{1}{N^q}\left( \left| \left| x^{k,N}\right| \right| _{s}^{2q} +\left| \left| {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{s}^{2q}\right) \nonumber \\&+\frac{1}{N^{q/2}} \left| \left| {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{s}^{2q}. \end{aligned}$$
Thus, using (2.25) and (2.27), we have
$$\begin{aligned} {\mathbb {E}}_k\left| \left| y^{k,N}-x^{k,N}\right| \right| _{s}^{2q}&\lesssim \frac{1}{N^q} \left( 1+\left| \left| x^{k,N}\right| \right| _{s}^{2q}\right) +\frac{1}{N^{q/2}}\\&\lesssim \frac{1}{N^{q/2}}\left( 1+\left| \left| x^{k,N}\right| \right| _{s}^{2q}\right) , \end{aligned}$$
which proves (6.1). Equation (6.2) follows similarly:
$$\begin{aligned} {\mathbb {E}}_k{\left| \left| y^{k,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2q}}\lesssim&\, \frac{1}{N^q}\left( \left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2q}+ \left| \left| {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{{\mathcal {C}}_N}^{2q}\right) \\&+\frac{1}{N^{q/2}}{\mathbb {E}}_k \left| \left| {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2q}. \end{aligned}$$
Since \(\left| \left| {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2}=\sum _{j=1}^N(\xi ^{k,N}_j)^2\) has chi-squared law, applying Stirling's formula for the Gamma function \(\varGamma :{\mathbb {R}}\rightarrow {\mathbb {R}}\) we obtain
$$\begin{aligned} {\mathbb {E}}_k\left| \left| {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2q}\lesssim \frac{\varGamma (q+N/2)}{\varGamma (N/2)}\lesssim N^q. \end{aligned}$$
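For completeness, we recall the standard chi-squared moment computation behind this bound: if \(Z_N:=\sum _{j=1}^N(\xi ^{k,N}_j)^2\sim \chi ^2_N\), then
$$\begin{aligned} {\mathbb {E}}_k\, Z_N^{q} = 2^q\, \frac{\varGamma (q+N/2)}{\varGamma (N/2)}, \qquad \frac{\varGamma (q+N/2)}{\varGamma (N/2)}\sim \left( \frac{N}{2}\right) ^{q} \quad \text{ as } N\rightarrow \infty , \end{aligned}$$
the asymptotics being exactly the content of Stirling's formula used above.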
Hence, using (2.26), the desired bound follows. Finally, recalling the definition of the chain, Eq. (2.12), the bounds (6.3) and (6.4) are clearly a consequence of (6.1) and (6.2), respectively, since either \(x^{k+1,N}=y^{k,N}\) (if the proposed move is accepted) or \(x^{k+1,N}=x^{k,N}\) (if the move is rejected). \(\square \)
Lemma 6.2
If Assumption 2.1 holds, then, for every \(q\ge 1\), we have
$$\begin{aligned} {\mathbb {E}}_{x^0}\big (S^{k,N}\big )^q&\lesssim 1 \end{aligned}$$
$$\begin{aligned} {\mathbb {E}}_{x^0}{\left| \left| x^{k,N}\right| \right| _{s}^q}&\lesssim 1, \end{aligned}$$
uniformly over \(N \in {\mathbb {N}}\) and \(k \in \{0, 1 ,\ldots ,[T\sqrt{N}]\}\).
The proof of this lemma can be found in Appendix C. \(\square \)
Acceptance probability
The main result of this section is Proposition 6.1, which we obtain as a consequence of Lemma 6.3 (below) and Lemma 6.2. Proposition 6.1 formalizes the heuristic approximation (4.14).
Lemma 6.3
(Acceptance probability) Let Assumption 2.1 hold and recall definitions (4.2) and (1.12). Then the following holds:
$$\begin{aligned} {\mathbb {E}}_k \left| \alpha ^N(x^{k,N},\xi ^{k,N})-\alpha _\ell \big (S^{k,N}\big )\right| ^{2}\lesssim \frac{1+\big (S^{k,N}\big )^2+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}}. \end{aligned}$$
Before proving Lemma 6.3, we state Proposition 6.1.
Proposition 6.1
If Assumption 2.1 holds then
$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}_{x^0}{\left| \alpha ^N(x^{k,N},y^{k,N})-\alpha _\ell \big (S^{k,N}\big )\right| ^{2}}=0. \end{aligned}$$
Proof of Proposition 6.1
This is a corollary of Lemmas 6.3 and 6.2. \(\square \)
Proof of Lemma 6.3
The function \(z\mapsto 1\wedge e^z\) on \({\mathbb {R}}\) is globally Lipschitz with Lipschitz constant 1. Therefore, by (1.12) and (4.2),
$$\begin{aligned} {\mathbb {E}}_k \left| \alpha ^N(x^{k,N},y^{k,N})-\alpha _\ell \big (S^{k,N}\big )\right| ^{2}\le {\mathbb {E}}_k \left| Q^N(x^{k,N},y^{k,N})-\frac{\ell ^2\big (S^{k,N}-1\big )}{2}\right| ^{2}. \end{aligned}$$
The result is now a consequence of (6.15) below. \(\square \)
To analyse the acceptance probability it is convenient to decompose \(Q^N\) as follows:
$$\begin{aligned} Q^N\big (x^N,y^N\big )=I_1^N\big (x^N,y^N\big )+I_2^N\big (x^N,y^N\big )+I_3^N\big (x^N,y^N\big ) \end{aligned}$$
$$\begin{aligned} I_1^N\big (x^N,y^N\big )&:=-\frac{1}{2}\left[ \left| \left| y^N\right| \right| _{{\mathcal {C}}_N}^2 -\left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2\right] \nonumber \\&\quad -\frac{1}{4\delta } \left[ \left| \left| x^N-(1-\delta )y^N\right| \right| _{{\mathcal {C}}_N}^2-\left| \left| y^N-(1-\delta )x^N\right| \right| _{{\mathcal {C}}_N}^2\right] \nonumber \\&=-\frac{\delta }{4}\left( \left| \left| y^N\right| \right| _{{\mathcal {C}}_N}^2-\left| \left| x^N\right| \right| _{{\mathcal {C}}_N}^2\right) , \end{aligned}$$
$$\begin{aligned} I_2^N\big (x^N,y^N\big )&:=-\frac{1}{2}\left[ \left\langle x^N-(1-\delta )y^N,{\mathcal {C}}_N\nabla \varPsi ^N\big (y^N\big ) \right\rangle _{{\mathcal {C}}_N}\right. \nonumber \\&\quad \left. - \left\langle y^N-(1-\delta )x^N,{\mathcal {C}}_N\nabla \varPsi ^N\big (x^N\big ) \right\rangle _{{\mathcal {C}}_N}\right] \nonumber \\&\quad -\big (\varPsi ^N\big (y^N\big )-\varPsi ^N\big (x^N\big )\big ), \end{aligned}$$
$$\begin{aligned} I_3^N\big (x^N,y^N\big )&:=-\frac{\delta }{4}\left[ \left| \left| {\mathcal {C}}_N\nabla \varPsi ^N\big (y^N\big )\right| \right| _{{\mathcal {C}}_N}^2 -\left| \left| {\mathcal {C}}_N\nabla \varPsi ^N\big (x^N\big )\right| \right| _{{\mathcal {C}}_N}^2\right] . \end{aligned}$$
Lemma 6.4
Let Assumption 2.1 hold. With the notation introduced above, we have:
$$\begin{aligned} {\mathbb {E}}_k \left| I_1^N\big (x^{k,N},y^{k,N}\big )-\frac{\ell ^2\big (S^{k,N}-1\big )}{2}\right| ^2&\lesssim \frac{\left| \left| x^{k,N}\right| \right| _{s}^2}{N^2}+\frac{\big (S^{k,N}\big )^2}{\sqrt{N}}+\frac{1}{N} \end{aligned}$$
$$\begin{aligned} {\mathbb {E}}_k \left| I_2^N\big (x^{k,N},y^{k,N}\big )\right| ^2&\lesssim \frac{1+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}} \end{aligned}$$
$$\begin{aligned} {\mathbb {E}}_k \left| I_3^N\big (x^{k,N},y^{k,N}\big )\right| ^2&\lesssim \frac{1}{N}. \end{aligned}$$
Consequently,
$$\begin{aligned} {\mathbb {E}}_k \left| Q^N\big (x^{k,N},y^{k,N}\big )-\frac{\ell ^2\big (S^{k,N}-1\big )}{2}\right| ^{2}\lesssim \frac{1+\big (S^{k,N}\big )^2+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}}. \end{aligned}$$
Proof of Lemma 6.4
We consecutively prove the three bounds in the statement; the bound (6.15) then follows by combining them with the decomposition of \(Q^N\) given above.
Proof of (6.12). Using (2.8), we rewrite \(I_1^N\) as
$$\begin{aligned}&I_1^N\big (x^{k,N},y^{k,N}\big )\\&\quad =-\frac{\delta }{4}\left( \left| \left| (1-\delta ) x^{k,N}-\delta {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )+\sqrt{2\delta } {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^2-\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2\right) . \end{aligned}$$
Expanding the above we obtain:
$$\begin{aligned} I_1^N\big (x^{k,N},y^{k,N}\big )-\frac{\ell ^2\big (S^{k,N}-1\big )}{2}&= -\left( \frac{\delta ^2}{2}\left| \left| {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 -\frac{\ell ^2}{2}\right) \nonumber \\&\quad +(r_{\varPsi }^N - r^N)+r_{\xi }^N+r_x^N, \end{aligned}$$
where the difference \((r_{\varPsi }^N - r^N)\) is defined in (4.5) and we set
$$\begin{aligned} r^N_{\xi }&:= -\frac{(\delta ^{3/2}-\delta ^{5/2})}{\sqrt{2}} \left\langle x^{k,N},{\mathcal {C}}_N^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}, \end{aligned}$$
$$\begin{aligned} r^N_{x}&:= -\frac{\delta ^3}{4}\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2. \end{aligned}$$
For the reader's convenience we rearrange (4.5) below:
$$\begin{aligned} r_{\varPsi }^N - r^N&= \frac{\delta ^2-\delta ^3}{2}\left\langle x^{k,N},{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big ) \right\rangle _{{\mathcal {C}}_N} \nonumber \\&\quad -\frac{\delta ^3}{4}\left| \left| {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{{\mathcal {C}}_N}^2 +\frac{\delta ^{5/2}}{\sqrt{2}}\left\langle {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big ),{\mathcal {C}}_N^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}. \end{aligned}$$
We now bound all of the above terms, starting with (6.19). To this end, let us observe the following:
$$\begin{aligned} \left| \left\langle x^{k,N},{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big ) \right\rangle _{{\mathcal {C}}_N}\right| ^2&=\left| \sum _{i=1}^N x^{k,N}_i [\nabla \varPsi ^N\big (x^{k,N}\big )]_i\right| ^2 \end{aligned}$$
$$\begin{aligned}&{\mathop {\le }\limits ^{(2.6)}} \left| \left| x^{k,N}\right| \right| _{s}^2 \Vert \nabla \varPsi ^N\big (x^{k,N}\big )\Vert _{-s}^2 {\mathop {\lesssim }\limits ^{(2.24)}} \left| \left| x^{k,N}\right| \right| _{s}^2. \end{aligned}$$
$$\begin{aligned} {\mathbb {E}}_k \left| \left| {\mathcal {C}}_N^{1/2} \xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 = {\mathbb {E}}_k \sum _{j=1}^N \left| \xi _j\right| ^2 = N, \end{aligned}$$
$$\begin{aligned} \left| \left\langle {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big ),{\mathcal {C}}_N^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}\right| ^2 \le \left| \left| {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{{\mathcal {C}}_N}^2\left| \left| {\mathcal {C}}_N^{1/2} \xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 {\mathop {\lesssim }\limits ^{(2.26)}}N. \end{aligned}$$
From (6.19), (6.20), (2.26) and the above,
$$\begin{aligned} {\mathbb {E}}_k \left| r_{\varPsi }^N-r^N\right| ^2 \lesssim \frac{\left| \left| x^{k,N}\right| \right| _{s}^2}{N^2}+\frac{1}{N^{3/2}}. \end{aligned}$$
By (6.17),
$$\begin{aligned} {\mathbb {E}}_k \left| r^N_{\xi }\right| ^2&\lesssim \frac{1}{N^{3/2}} {\mathbb {E}}_k\left| \left\langle x^{k,N},{\mathcal {C}}_N^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}\right| ^2\nonumber \\&= \frac{1}{N^{3/2}}{\mathbb {E}}_k \left( \sum _{i=1}^N \frac{x_i^{k,N} \xi _i^{k,N}}{\lambda _i} \right) ^2 = \frac{1}{\sqrt{N}}S^{k,N}, \end{aligned}$$
where in the last equality we have used the fact that \(\{\xi _i^{k,N}:i=1,\ldots ,N\}\) are independent, zero mean, unit variance normal random variables (independent of \(x^{k,N}\)) and (4.6). As for \(r^N_{x}\),
$$\begin{aligned} {\mathbb {E}}_k \left| r_x^N\right| ^2 \lesssim \frac{1}{N^3}\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^4{\mathop {=}\limits ^{(4.6)}}\frac{(S^{k,N})^2}{N}. \end{aligned}$$
Lastly,
$$\begin{aligned} {\tilde{r}}^N:=\frac{\delta ^2}{2}\left| \left| {\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 -\frac{\ell ^2}{2}=\frac{\ell ^2}{2}\left( \frac{1}{N}\sum _{j=1}^N\xi ^2_j-1\right) . \end{aligned}$$
Since \(\sum _{j=1}^N\xi ^2_j\) has a chi-squared law, \({\mathbb {E}}_k\left| {\tilde{r}}^N\right| ^2\lesssim {\mathrm{Var}}\left( N^{-1}\sum _{j=1}^N\xi ^2_j\right) \lesssim N^{-1}\), by (6.5). Combining all of the above, we obtain the desired bound.
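The chi-squared variance bound used here is elementary but easy to check empirically; a minimal Python sketch (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 100, 200_000
chi_over_N = rng.chisquare(df=N, size=reps) / N   # N^{-1} sum_{j=1}^N xi_j^2
print(chi_over_N.var(), 2.0 / N)                  # sample variance vs. exact Var = 2/N
```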
Proof of (6.13). From (6.10),
$$\begin{aligned} I_2^N\big (x^{k,N},y^{k,N}\big )&=-\left[ \varPsi ^N(y^{k,N})-\varPsi ^N\big (x^{k,N}\big ) -\left\langle y^{k,N}-x^{k,N},\nabla \varPsi ^N\big (x^{k,N}\big ) \right\rangle \right] \\&\quad +\frac{1}{2}\left\langle y^{k,N}-x^{k,N},\nabla \varPsi ^N(y^{k,N})-\nabla \varPsi ^N\big (x^{k,N}\big ) \right\rangle \\&\quad +\frac{\delta }{2}\left( \left\langle x^{k,N},\nabla \varPsi ^N\big (x^{k,N}\big ) \right\rangle -\left\langle y^{k,N},\nabla \varPsi ^N(y^{k,N}) \right\rangle \right) =:\sum _{j=1}^3d_j, \end{aligned}$$
where \(d_j\) is the addend on line j of the above array. Using (2.22), (2.24), (2.6) and Lemma 6.1, we have
$$\begin{aligned} {\mathbb {E}}_k \left| d_1\right| ^{2}\lesssim {\mathbb {E}}_k \left| \left| y^{k,N}-x^{k,N}\right| \right| _s^{2} \lesssim \frac{1+\left| \left| x^{k,N}\right| \right| _{s}^{2}}{\sqrt{N}}. \end{aligned}$$
By the first inequality in (2.24),
$$\begin{aligned} \left| \left| \nabla \varPsi ^N(y^{k,N})-\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{-s}\lesssim 1. \end{aligned}$$
Consequently, again by (2.6) and Lemma 6.1,
$$\begin{aligned} {\mathbb {E}}_k \left| d_2\right| ^{2}\lesssim {\mathbb {E}}_k \left| \left| y^{k,N}-x^{k,N}\right| \right| _s^{2}\lesssim \frac{1+\left| \left| x^{k,N}\right| \right| _{s}^{2}}{\sqrt{N}}. \end{aligned}$$
Next, applying (2.6) and (2.24) gives
$$\begin{aligned} \left| {d_3}\right|&\le \frac{\left| \left| x^{k,N}\right| \right| _{s}\left| \left| \nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{-s} +\left| \left| y^{k,N}\right| \right| _{s}\left| \left| \nabla \varPsi ^N(y^{k,N})\right| \right| _{-s}}{\sqrt{N}}\\&\lesssim \frac{\left| \left| x^{k,N}\right| \right| _{s}+\left| \left| y^{k,N}\right| \right| _{s}}{\sqrt{N}} \lesssim \frac{\left| \left| x^{k,N}\right| \right| _{s}+\left| \left| y^{k,N}-x^{k,N}\right| \right| _{s}}{\sqrt{N}}. \end{aligned}$$
Thus, applying Lemma 6.1 then gives the desired bound.
Proof of (6.14). This follows directly from (2.25). \(\square \)
Correlations between acceptance probability and noise \(\xi ^{k,N}\)
Recall the definition of \(\gamma ^{k,N}\), Eq. (2.13), and let
$$\begin{aligned} \varepsilon ^{k,N}:= \gamma ^{k,N}{\mathcal {C}}_N^{1/2}\xi ^{k,N}. \end{aligned}$$
The study of the properties of \(\varepsilon ^{k,N}\) is the object of the next two lemmata, which play a central role in the analysis: Lemma 6.5 (together with Lemma 6.2) establishes the decay of correlations between the acceptance probability and the noise \(\xi ^{k,N}\); Lemma 6.6 formalizes the heuristic arguments presented in Sect. 4.3.2.
Lemma 6.5. If Assumption 2.1 holds, then
$$\begin{aligned} \left| \left| {{\mathbb {E}}_k \varepsilon ^{k,N}}\right| \right| _{s}^2\lesssim \frac{1+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}}. \end{aligned}$$
Moreover,
$$\begin{aligned} \left\langle {{\mathbb {E}}_k \varepsilon ^{k,N}},x^{k,N} \right\rangle _{s}{={\mathbb {E}}_k\left\langle \gamma ^{k,N}{\mathcal {C}}_N^{1/2}\xi ^{k,N},x^{k,N} \right\rangle _{s}}\lesssim \frac{1}{N^{1/4}}\left( 1+\left| \left| x^{k,N}\right| \right| _{s}^2\right) . \end{aligned}$$
Lemma 6.6. Let Assumption 2.1 hold. Then, with the notation introduced so far,
$$\begin{aligned} \left| {\mathbb {E}}_k \left| \left| \varepsilon ^{k,N}\right| \right| _{s}^2-{\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)\alpha _{\ell }\big (S^{k,N}\big )\right| \lesssim \frac{1+S^{k,N}+\left| \left| x^{k,N}\right| \right| _{s}}{N^{1/4}}. \end{aligned}$$
The proofs of the above lemmata can be found in Appendix B. Notice that if \(\xi ^{k,N}\) and \(\gamma ^{k,N}\) (equivalently \(\xi ^{k,N}\) and \(Q^{N}\)) were uncorrelated, the statements of Lemmas 6.5 and 6.6 would be trivially true.
Proof of Theorem 4.1
As explained in Sect. 5.1, due to the continuity of the map \({\mathcal {J}}_2\) (defined in Theorem 3.3), in order to prove Theorem 4.1 all we need to show is convergence in probability of \({\hat{w}}^N(t)\) to zero. Looking at the definition of \({\hat{w}}^N(t)\), Eq. (5.3), the convergence in probability (in \(C([0,T];{\mathbb {R}})\)) of \({\hat{w}}^N(t)\) to zero is a consequence of Lemmas 7.1 and 7.2 below. We prove Lemma 7.1 in Sect. 7.1 and Lemma 7.2 in Sect. 7.2.
Lemma 7.1. Let Assumption 2.1 hold and recall the definition (5.4) of the process \(e^N(t)\); then
$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}_{x^0}\left( \sup _{t\in [0,T]}\left| e^N(t)\right| \right) ^2=0. \end{aligned}$$
Lemma 7.2. Let Assumption 2.1 hold and recall the definition (5.1) of the process \(w^N(t)\); then
$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}_{x^0}\left( \sup _{t\in [0,T]}\left| w^N(t)\right| \right) ^2=0. \end{aligned}$$
Analysis of the drift
In view of what follows, it is convenient to introduce the piecewise constant interpolant of the chain \(\{S^{k,N}\}_{k\in {\mathbb {N}}}\):
$$\begin{aligned} {\bar{S}}^{(N)}(t):=S^{k,N}, \quad t_k\le t<t_{k+1}, \end{aligned}$$
where \(t_k= k/\sqrt{N}\).
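For concreteness, the interpolant admits a two-line implementation; the following Python sketch is purely illustrative (the array S_chain holding \(S^{0,N},S^{1,N},\ldots \) is a hypothetical input):

```python
import numpy as np

def S_bar(S_chain, N, t):
    # Piecewise-constant interpolant: returns S^{k,N} for t_k <= t < t_{k+1}, t_k = k / sqrt(N).
    k = min(int(np.floor(t * np.sqrt(N))), len(S_chain) - 1)
    return S_chain[k]
```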
From (7.1), for any \(t_k\le t<t_{k+1}\) we have
$$\begin{aligned} \int _0^tb_{\ell }({\bar{S}}^{(N)}_v)dv&= \int _{t_k}^tb_{\ell }({\bar{S}}^{(N)}_v)dv +\sum _{j=1}^{k-1}\int _{t_{j-1}}^{t_j}b_{\ell }({\bar{S}}^{(N)}_v)dv\\&=(t-t_k)b_{\ell }\big (S^{k,N}\big )+ \frac{1}{\sqrt{N}}\sum _{j=1}^{k-1}b_{\ell }(S^{j,N}). \end{aligned}$$
With this observation, we can then decompose \(e^N(t)\) as
$$\begin{aligned} e^N(t)=e^N_1(t)- e^N_2(t), \end{aligned}$$
where
$$\begin{aligned} e^N_1(t)&:=(t-t_k) \big (b_{\ell }^{k,N}-b_{\ell }\big (S^{k,N}\big )\big )+\frac{1}{\sqrt{N}} \sum _{j=0}^{k-1}\left[ b_{\ell }^{j,N}-b_{\ell }(S^{j,N})\right] \end{aligned}$$
$$\begin{aligned} e_2^N(t)&:=\int _0^t \left[ b_{\ell }( S^{(N)}_v)-b_{\ell }({\bar{S}}^{(N)}_v) \right] dv. \end{aligned}$$
The result is now a consequence of Lemmas 7.3 and 7.4 below, which we first state and then consecutively prove. \(\square \)
Lemma 7.3. Let Assumption 2.1 hold; then
$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}_{x^0}\left( \sup _{t\in [0,T]} \left| e_1^N(t) \right| \right) ^2=0. \end{aligned}$$
Proof (Lemma 7.3) Denoting \(E^{k,N}:=b_{\ell }^{k,N}-b_\ell \big (S^{k,N}\big )\), by (discrete) Jensen's inequality we have
$$\begin{aligned} \sup _{t\in [0,T]}\left| e_1^N(t)\right| ^2&=\sup _{t\in [0,T]}\left| (t-t_k) E^{k,N}+\frac{1}{\sqrt{N}} \sum _{j=0}^{k-1}E^{j,N}\right| ^2\\&\lesssim \frac{1}{\sqrt{N}} \sum _{j=0}^{[T\sqrt{N}]-1}\left| E^{j,N}\right| ^2. \end{aligned}$$
Using Lemma 7.5 below, we obtain
$$\begin{aligned} \frac{1}{\sqrt{N}}\sum _{j=0}^{[T\sqrt{N}]-1}\left| E^{j,N}\right| ^2\lesssim \frac{1}{\sqrt{N}}\sum _{j=0}^{[T\sqrt{N}]-1}\frac{1+\big (S^{j,N}\big )^4 +\left| \left| x^{j,N}\right| \right| _{s}^4}{\sqrt{N}}. \end{aligned}$$
Taking expectations on both sides and applying Lemma 6.2 completes the proof. \(\square \)
Lemma 7.5. Let Assumption 2.1 hold. Then, for any \(N \in {\mathbb {N}}\) and \(k\in \{0, 1 ,\ldots , [T\sqrt{N}]\}\),
$$\begin{aligned} \left| E^{k,N} \right| ^2= \left| b_{\ell }^{k,N}-b_{\ell }\big (S^{k,N}\big )\right| ^2\lesssim \frac{1+\big (S^{k,N}\big )^4+\left| \left| x^{k,N}\right| \right| _{s}^4}{\sqrt{N}}. \end{aligned}$$
Proof (Lemma 7.5) Set
$$\begin{aligned} Y^N_k:=\frac{\left| \left| y^{k,N}\right| \right| _{{\mathcal {C}}_N}^2 -\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2}{\sqrt{N}},\qquad {\tilde{Y}}^N_k:=2\ell (1-S^{k,N}). \end{aligned}$$
Then, from (4.19), (4.2), (1.12) and (1.14), we obtain
$$\begin{aligned} \left| b_{\ell }^{k,N}-b_{\ell }\big (S^{k,N}\big )\right| ^2=&\,\left| {\mathbb {E}}_k \left( \alpha ^N\big (x^{k,N},y^{k,N}\big )Y^N_k\right) - \alpha _\ell \big (S^{k,N}\big ){\tilde{Y}}^N_k\right| ^2 \\ \le&\, {\mathbb {E}}_k \left| \alpha ^N\big (x^{k,N},y^{k,N}\big )Y^N_k- \alpha _\ell \big (S^{k,N}\big ){\tilde{Y}}^N_k \right| ^2\\ \lesssim&\, \,{\mathbb {E}}_k \left[ \left| \alpha ^N\big (x^{k,N},y^{k,N}\big )\right| ^2\left| Y^N_k-{\tilde{Y}}^N_k \right| ^2\right] \\&+{\mathbb {E}}_k \left[ \left| {\tilde{Y}}^N_k \right| ^2\left| \alpha ^N\big (x^{k,N},y^{k,N}\big )- \alpha _\ell \big (S^{k,N}\big )\right| ^2\right] . \end{aligned}$$
Since \(|\alpha ^N\big (x^{k,N},y^{k,N}\big )|\le 1\) and \({\tilde{Y}}^N_k\) is a function of \(x^{k,N}\) only, we can further estimate the above as follows:
$$\begin{aligned} \left| b_{\ell }^{k,N}-b_{\ell }\big (S^{k,N}\big )\right| ^2\lesssim {\mathbb {E}}_k \left| Y^N_k-{\tilde{Y}}^N_k \right| ^2+\left| {\tilde{Y}}^N_k\right| ^2{\mathbb {E}}_k \left| \alpha ^N\big (x^{k,N},y^{k,N}\big )- \alpha _{\ell }\big (S^{k,N}\big )\right| ^2. \end{aligned}$$
From the definition of \(I_1^N\), Eq. (6.9), we have
$$\begin{aligned} Y^N_k=-\frac{4}{\ell }I_1^N\big (x^{k,N},y^{k,N}\big ). \end{aligned}$$
Hence
$$\begin{aligned} Y^N_k-{\tilde{Y}}^N_k= -\frac{4}{\ell } \left[ I_1^N - \frac{\ell ^2}{2}\big (S^{k,N}-1\big ) \right] , \end{aligned}$$
which implies
$$\begin{aligned} {\mathbb {E}}_k (Y^N_k-{\tilde{Y}}^N_k)^2\lesssim&\,{\mathbb {E}}_k \left( I_1^N\big (x^{k,N},y^{k,N}\big )-\ell ^2 \big (S^{k,N}-1\big )/2\right) ^2\\ {\mathop {\lesssim }\limits ^{(6.12)}}&\frac{\left| \left| x^{k,N}\right| \right| _{s}^2}{N^2}+\frac{\big (S^{k,N}\big )^2}{\sqrt{N}}+\frac{1}{N}. \end{aligned}$$
As for the second addend in (7.4), Lemma 6.3 gives
$$\begin{aligned} \left| {\tilde{Y}}^N_k\right| ^2{\mathbb {E}}_k \left| \alpha ^N\big (x^{k,N},y^{k,N}\big )- \alpha _\ell (S^{k,N})\right| ^2\lesssim & {} (1+\big (S^{k,N}\big )^2)\left( \frac{1+\big (S^{k,N}\big )^2+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}}\right) \\\lesssim & {} \frac{1+\big (S^{k,N}\big )^4+\left| \left| x^{k,N}\right| \right| _{s}^4}{\sqrt{N}}. \end{aligned}$$
Combining the above two bounds and (7.4) gives the desired result. \(\square \)
Proof (Lemma 7.4) By Jensen's inequality,
$$\begin{aligned} \left( \sup _{t\in [0,T]}\left| \int _0^tb_{\ell }( S^{(N)}_v)-b_{\ell }({\bar{S}}^{(N)}_v)dv\right| \right) ^2\lesssim \int _0^T\left| b_{\ell }( S^{(N)}_v)-b_{\ell }({\bar{S}}^{(N)}_v)\right| ^2dv. \end{aligned}$$
Since \(b_{\ell }\) is globally Lipschitz,
$$\begin{aligned} \int _0^T\left| b_{\ell }({\bar{S}}^N(v)) -b_{\ell }(S^N(v)) \right| ^2dv \lesssim&\,\int _0^T\left| {\bar{S}}^N(v) -S^N(v) \right| ^2dv\\ =&\,\sum _{k=0}^{[T\sqrt{N}]-1}\int _{t_k}^{t_{k+1}}\left| {\bar{S}}^N(v) -S^N(v)\right| ^2dv\\&+\int _{t_{[T\sqrt{N}]}}^{T}\left| {\bar{S}}^N(v) -S^N(v)\right| ^2dv\\ \lesssim&\,\frac{1}{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]-1}(S^{k+1,N}-S^{k,N})^2. \end{aligned}$$
From (4.18) and (4.6),
$$\begin{aligned} \left| S^{k+1,N}-S^{k,N}\right|&\lesssim \frac{1}{N} \left( \Vert y^{k,N}\Vert _{{\mathcal {C}}^N}^2- \Vert x^{k,N}\Vert _{{\mathcal {C}}^N}^2 \right) \\&{\mathop {\lesssim }\limits ^{(7.5)}} \frac{1}{\sqrt{N}} I_1^N\big (x^{k,N},y^{k,N}\big )\\&= \frac{1}{\sqrt{N}} \left( I_1^N\big (x^{k,N},y^{k,N}\big ) - \frac{\ell ^2 \big (S^{k,N}-1\big )}{2} \right) + \frac{1}{\sqrt{N}}\frac{\ell ^2 \big (S^{k,N}-1\big )}{2}. \end{aligned}$$
Combining the above with (6.12) we obtain
$$\begin{aligned} {\mathbb {E}}_k {(S^{k+1,N}-S^{k,N})^2} \lesssim \frac{1+\big (S^{k,N}\big )^2+\left| \left| x^{k,N}\right| \right| _{s}^2}{N}. \end{aligned}$$
Taking expectations and applying Lemma 6.2 concludes the proof. \(\square \)
Analysis of noise
Notice that we can write \(w^N\) as the linear interpolation
$$\begin{aligned} w^N(t)=(N^{1/2}t-k)M^{k,N}+(k+1-N^{1/2}t)M^{k-1,N}\qquad \forall t_k\le t<t_{k+1}, \end{aligned}$$
of the array
$$\begin{aligned} M^{k,N}:=\frac{1}{N^{1/4}}\sum _{j=0}^{k-1}D^{j,N},\qquad \forall k=1,\ldots ,[T\sqrt{N}]+1. \end{aligned}$$
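A minimal Python sketch of this interpolation (the array D of increments \(D^{j,N}\) is a hypothetical input; indices are arranged so that \(w^N(t_k)=M^{k,N}\), with the convention \(M^{0,N}=0\)):

```python
import numpy as np

def w_N(D, N, t):
    # M[k] = N^{-1/4} * sum_{j=0}^{k-1} D[j], with the convention M[0] = 0.
    M = np.concatenate([[0.0], np.cumsum(D) / N**0.25])
    u = t * np.sqrt(N)                     # continuous index; u lies in [k, k+1) on [t_k, t_{k+1})
    k = min(int(np.floor(u)), len(M) - 2)
    return (u - k) * M[k + 1] + (k + 1 - u) * M[k]
```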
It follows from the definition of \(D^{k,N}\) in (4.17) and Lemma 6.2 that \(\{M^{k,N}\}_{k\ge 1}\) is a discrete-time \({\mathbb {P}}_{x^0}\)-martingale with respect to the filtration generated by \(\{x^{k,N}\}_{k\ge 1}\). Since
$$\begin{aligned} \sup _{t\in [0,T]}\left| w^N(t)\right| =\sup _{k\in \{1,\ldots ,[T\sqrt{N}]+1\}}\left| M^{k,N}\right| , \end{aligned}$$
Doob's \(L^p\) inequality implies that
$$\begin{aligned} {\mathbb {E}}_{x^0}\left( \sup _{t\in [0,T]}\left| w^N(t)\right| \right) ^2&\lesssim {\mathbb {E}}_{x^0}\left( \sup _{k\in \{1,\ldots ,[T\sqrt{N}]+1\}}\left| M^{k,N}\right| ^2 \right) \\&=\frac{1}{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]}{\mathbb {E}}_{x^0}\left| {D^{k,N}}\right| ^2, \end{aligned}$$
where the equality follows from the orthogonality of the increments of the martingale \(\{M^{k,N}\}_{k\ge 1}\). From the definition of \(D^{k,N}\), Eq. (4.17), we have that
$$\begin{aligned} \frac{{\mathbb {E}}_{x^0}\left| D^{k,N}\right| ^2}{\sqrt{N}}&={\mathbb {E}}_{x^0}\left[ S^{k+1,N}-S^{k,N} -{\mathbb {E}}_k \left( {S^{k+1,N}-S^{k,N}}\right) \right] ^2\\&\lesssim {\mathbb {E}}_{x^0}\left| S^{k+1,N}-S^{k,N} \right| ^2{\lesssim }\frac{1}{{N}}, \end{aligned}$$
where the last inequality is a consequence of (7.6) and Lemma 6.2. The result follows immediately. \(\square \)
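The combination of Doob's \(L^2\) inequality with the orthogonality of martingale increments used in this proof can be illustrated on a toy example; a minimal simulation sketch with i.i.d. \(\pm 1\) increments (the constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
reps, K = 50_000, 64
D = rng.choice([-1.0, 1.0], size=(reps, K))   # i.i.d. +-1 martingale differences
M = np.cumsum(D, axis=1)                      # M_k = D_1 + ... + D_k
lhs = (np.abs(M).max(axis=1) ** 2).mean()     # estimate of E sup_k |M_k|^2
rhs = 4.0 * K                                 # Doob's L^2 bound: 4 * sum_k E|D_k|^2 = 4K
print(lhs, "<=", rhs)
```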
The idea behind the proof is the same as in Sect. 7 above. First we introduce the piecewise constant interpolant of the chain \(\{x^{k,N}\}_{k\in {\mathbb {N}}}\)
$$\begin{aligned} {\bar{x}}^{(N)}(t)=x^{k,N} \quad \text{ for } \,\, t_k\le t < t_{k+1}. \end{aligned}$$
Due to the continuity of the map \({\mathcal {J}}_1\) (Theorem 3.3), all we need to prove is the weak convergence of \({\hat{\eta }}^N(t)\) to zero (see Sect. 5.2). Looking at the definition of \({\hat{\eta }}^N(t)\), Eq. (5.6), this follows from Lemmas 8.1, 8.2 and 8.3 below. We prove Lemmas 8.1 and 8.2 in Sect. 8.1 and Lemma 8.3 in Sect. 8.2.
Lemma 8.1. Let Assumption 2.1 hold and recall the definition (5.8) of the process \(d^N(t)\); then
$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}_{x^0}\left( \sup _{t\in [0,T]}\left| d^N(t)\right| \right) ^2=0. \end{aligned}$$
Lemma 8.2. If Assumption 2.1 holds, then \(\upsilon ^N\) (defined in (5.9)) converges in probability in \(C([0,T]; {\mathcal {H}}^s)\) to zero.
Lemma 8.3. Let Assumption 2.1 hold. Then the interpolated martingale difference array \(\eta ^N(t)\) defined in (5.7) converges weakly in \(C([0,T]; {\mathcal {H}}^s)\) to the stochastic integral \(\eta (t)\), defined in Eq. (5.11).
Analysis of drift
Proof (Lemma 8.1)
For all \(t\in [t_k,t_{k+1})\), we can write
$$\begin{aligned} (t-t_k) \varTheta (x^{k,N},S^{k,N})+\frac{1}{\sqrt{N}}\sum _{j=0}^{k-1}\varTheta \big (x^{j,N},S^{j,N}\big ) =\int _0^t\varTheta \big ({\bar{x}}^{(N)}(v),{\bar{S}}^{(N)}(v)\big )dv. \end{aligned}$$
Therefore, we can decompose \(d^N(t)\) as
$$\begin{aligned} d^N(t)=d^N_1(t)+d^N_2(t), \end{aligned}$$
where
$$\begin{aligned} d_1^N(t):=(t-t_k)\left[ \varTheta ^{k,N}-\varTheta \big (x^{k,N},S^{k,N}\big )\right] +\frac{1}{\sqrt{N}}\sum _{j=0}^{k-1}\left[ \varTheta ^{j,N}-\varTheta \big (x^{j,N},S^{j,N}\big )\right] \end{aligned}$$
$$\begin{aligned} d_2^N(t):=\int _0^t \left[ \varTheta \big ({\bar{x}}^N(v), {\bar{S}}^N (v)\big ) - \varTheta \big ({x}^{(N)}(v), {S}^{(N)}(v)\big )\right] dv. \end{aligned}$$
The statement is now a consequence of Lemmas 8.4 and 8.5. \(\square \)
Lemma 8.4. Let Assumption 2.1 hold; then
$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}_{x^0}\left( \sup _{t\in [0,T]}\left| \left| d_1^N(t)\right| \right| _{s} \right) ^2= 0. \end{aligned}$$
Before proving Lemma 8.4, we state and prove the following Lemma 8.6. We then consecutively prove Lemmas 8.4, 8.5 and 8.2. Recall the definitions of \(\varTheta \) and \(\varTheta ^{k,N}\), equations (4.23) and (4.21), respectively.
Lemma 8.6. Let Assumption 2.1 hold and set
$$\begin{aligned} p^{k,N}:=\varTheta ^{k,N}-\varTheta (x^{k,N},S^{k,N}). \end{aligned}$$
Then
$$\begin{aligned} {\mathbb {E}}_{x^0}\left| \left| p^{k,N}\right| \right| _{s}^2\lesssim&\sum _{j=N+1}^\infty (\lambda _jj^s)^4+\frac{1}{\sqrt{N}}. \end{aligned}$$
Proof (Lemma 8.6) Recalling (4.26) and (6.24), we have
$$\begin{aligned} \left| \left| p^{k,N}\right| \right| _{s}^2&\lesssim \sqrt{N}\left| \left| {\mathbb {E}}_k\varepsilon ^N_k\big (x^{k,N}\big )\right| \right| _{s}^2 \end{aligned}$$
$$\begin{aligned}&\quad + \left| \left| \alpha _\ell \big (S^{k,N}\big )F\big (x^{k,N}\big )- \left[ {\mathbb {E}}_k \alpha ^N\big (x^{k,N},y^{k,N}\big )\right] \big (x^{k,N}+{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\big )\right| \right| _{s}^2, \end{aligned}$$
where the function F that appears in the above has been defined in Lemma 2.1. The term on the RHS of (8.3) has been studied in Lemma 6.5. To estimate the addend in (8.4) we use (2.25), the boundedness of \(\alpha _{\ell }\) and Lemma 6.3. A straightforward calculation then gives
$$\begin{aligned} (8.4)\lesssim&\, \left[ \alpha _\ell \big (S^{k,N}\big ) - {\mathbb {E}}_k \alpha ^N\big (x^{k,N},y^{k,N}\big )\right] ^2\left| \left| x^{k,N}+{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{s}^2\\&+ \left| \left| \alpha _{\ell }\big (S^{k,N}\big ) \left[ F\big (x^{k,N}\big ) - \big (x^{k,N}+{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\big ) \right] \right| \right| _{s}^2\\ \lesssim&\, \frac{1+\big (S^{k,N}\big )^4+\left| \left| x^{k,N}\right| \right| _{s}^4}{\sqrt{N}} +\left| \left| {\mathcal {C}}\nabla \varPsi \big (x^{k,N}\big )-{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{s}^2. \end{aligned}$$
From the definition of \(\varPsi ^N\) and \(\nabla \varPsi ^N\), Eqs. (1.5) and (2.23), respectively,
$$\begin{aligned} \left| \left| {\mathcal {C}}\nabla \varPsi \big (x^{k,N}\big )-{\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\right| \right| _{s}^2&= \left| \left| {\mathcal {C}}\nabla \varPsi \big (x^{k,N}\big )-{\mathcal {C}}_ N {\mathcal {P}}^N\big (\nabla \varPsi \big (x^{k,N}\big )\big )\right| \right| _{s}^2 \\&=\sum _{j=N+1}^\infty (\lambda _jj^s)^4{\mathbb {E}}\left[ j^{-2s}\big (\nabla \varPsi \big (x^{k,N}\big )\big )_j^2 \right] \\&\lesssim \sum _{j=N+1}^\infty (\lambda _jj^s)^4, \end{aligned}$$
having used (2.24) in the last inequality. The statement is now a consequence of Lemma 6.2. \(\square \)
Proof (Lemma 8.4) Following steps analogous to those taken in the proof of Lemma 7.3, the statement is a direct consequence of Lemma 8.6, after observing that the summation \(\sum _{j=N+1}^\infty (\lambda _jj^s)^4\) is the tail of a convergent series and hence tends to zero as \(N \rightarrow \infty \). \(\square \)
Proof (Lemma 8.5) By the definition of \(\varTheta \), Eq. (4.23), we have
$$\begin{aligned}&\left| \left| \varTheta ({\bar{x}}^N(t), {\bar{S}}^N (t)) - \varTheta ({x}^N(t), {S}^N (t)) \right| \right| _{s}\\&\quad =\left| \left| F({\bar{x}}^N(t))h_{\ell }({\bar{S}}^N(t)) - F({x}^{(N)}(t))h_{\ell }({S}^{(N)}(t)) \right| \right| _{s}. \end{aligned}$$
Applying (2.20) and (2.25) and using the fact that \(h_\ell \) is globally Lipschitz and bounded, we get
$$\begin{aligned}&\left| \left| \varTheta ({\bar{x}}^N(t), {\bar{S}}^N (t)) - \varTheta ({x}^N(t), {S}^N (t)) \right| \right| _{s}\\&\quad \lesssim \left| \left| {\bar{x}}^N(t)-{x}^{(N)}(t)\right| \right| _{s}+(1+\left| \left| {\bar{x}}^N(t)\right| \right| _{s})\left| {\bar{S}}^N (t)-S^{(N)} (t)\right| . \end{aligned}$$
Thus, from the definitions (1.16), (7.1), (1.9) and (8.1), if \(t_k\le t<t_{k+1}\), we have
$$\begin{aligned}&\left| \left| \varTheta ({\bar{x}}^N(t), {\bar{S}}^N (t)) - \varTheta ({x}^N(t), {S}^N (t)) \right| \right| _{s}\\&\quad \lesssim (t\sqrt{N}-k)\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}\\&\qquad +(t\sqrt{N}-k)\left( 1+\left| \left| x^{k,N}\right| \right| _{s}\right) \left| S^{k+1,N}-S^{k,N}\right| . \end{aligned}$$
Applying (6.3) and (7.6) one then concludes
$$\begin{aligned}&{\mathbb {E}}_k \left| \left| \varTheta ({\bar{x}}^N(t), {\bar{S}}^N (t)) - \varTheta ({x}^N(t), {S}^N (t)) \right| \right| _{s}^2\\&\quad \lesssim (t\sqrt{N}-k)^2\left( \frac{1+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}} +\frac{\left| \left| x^{k,N}\right| \right| _{s}^4+\big (S^{k,N}\big )^4}{N}\right) . \end{aligned}$$
The remainder of the proof is analogous to the proof of Lemma 7.4. \(\square \)
Proof (Lemma 8.2) For an arbitrary but fixed \(\varepsilon >0\), we need to argue that
$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {P}}\left[ \sup _{t\in [0,T]} \left| \left| \upsilon ^N(t)\right| \right| _{s}\ge \varepsilon \right] =0. \end{aligned}$$
From the definition of \(\upsilon ^N\) we have
$$\begin{aligned} \sup _{t\in [0,T]}\left| \left| \upsilon ^N(t)\right| \right| _{s}\le \int _0^T \left| \left| F(x^{(N)}(v))\right| \right| _{s}\left| S^{(N)}(v)-S(v)\right| dv. \end{aligned}$$
Using (2.21) and the fact that \(\left| \left| x^{(N)}(t)\right| \right| _{s}\le \left| \left| x^{k,N}\right| \right| _{s}+\left| \left| x^{k+1,N}\right| \right| _{s}\) (which is a simple consequence of (1.9)), for any \(t\in [t_k,t_{k+1})\)
$$\begin{aligned} \sup _{t\in [0,T]}\left| \left| \upsilon ^N(t)\right| \right| _{s}&\le \left( \sup _{t\in [0,T]} \left| S^{(N)}(t)-S(t)\right| \right) \int _0^T\left| \left| F(x^{(N)}(v))\right| \right| _{s}dv\\&\lesssim \underbrace{\left( \sup _{t\in [0,T]}\left| S^{(N)}(t)-S(t)\right| \right) }_{=:a^N}\underbrace{\left( 1+\frac{1}{\sqrt{N}} \sum _{j=0}^{[T\sqrt{N}]-1}\left| \left| x^{j,N}\right| \right| _{s}\right) }_{=:u^N}. \end{aligned}$$
Using Markov's inequality and Lemma 6.2, given any \(\delta >0\), it is straightforward to find a constant M such that \({\mathbb {P}}\left[ u^N> M\right] \le \delta \) for every \(N\in {\mathbb {N}}\). Thus
$$\begin{aligned} {\mathbb {P}}\left[ \sup _{t\in [0,T]}\left| \left| \upsilon ^N(t)\right| \right| _{s}\ge \varepsilon \right]&\le {\mathbb {P}}\left[ a^N u^N\ge \varepsilon \right] \\&= {\mathbb {P}}[a^N u^N\ge \varepsilon , u^N\le M]+ {\mathbb {P}}[a^N u^N\ge \varepsilon , u^N> M]\\&\le {\mathbb {P}}\left[ a^N\ge \varepsilon /M\right] +{\mathbb {P}}\left[ u^N> M\right] \le {\mathbb {P}}\left[ a^N\ge \varepsilon /M\right] +\delta . \end{aligned}$$
Since \(\delta \) was arbitrary, the result follows from the fact that \(S^{(N)}\) converges in probability to S (Theorem 4.1). \(\square \)
The proof of Lemma 8.3 is based on [14, Lemma 8.9]. For the reader's convenience, we restate [14, Lemma 8.9] below as Lemma 8.7. In order to state such a lemma let us introduce the following notation and definitions. Let \(k_N:[0,T] \rightarrow {\mathbb {Z}}_+\) be a sequence of nondecreasing, right continuous functions indexed by N, with \(k_N(0)=0\) and \(k_N(T)\ge 1\). Let \({\mathcal {H}}\) be any Hilbert space and \(\{X^{k,N}, {\mathcal {F}}^{k,N}\}_{0\le k \le k_N(T)}\) be an \({\mathcal {H}}\)-valued martingale difference array (MDA), i.e. a double sequence of random variables such that \({\mathbb {E}}[X^{k,N}\vert {\mathcal {F}}_{k-1}^N ]=0\) and \({\mathbb {E}}[\Vert { X^{k,N}}\Vert ^2\vert {\mathcal {F}}_{k-1}^N ]< \infty \) almost surely, where the sigma-algebras satisfy \({\mathcal {F}}^{k-1, N} \subseteq {\mathcal {F}}^{k,N}\). Consider the process \({\mathcal {X}}^N(t)\) defined by
$$\begin{aligned} {\mathcal {X}}^N(t):=\sum _{k=1}^{k_N(t)}X^{k,N}, \end{aligned}$$
if \(k_N(t)\ge 1\) and \(k_N(t) > \lim _{v\rightarrow 0+} k_N(t-v)\) and by linear interpolation otherwise. With this set up we recall the following result.
Lemma 8.7 (Lemma 8.9 [14]) Let \(D:{\mathcal {H}}\rightarrow {\mathcal {H}}\) be a self-adjoint positive definite trace class operator on \(({\mathcal {H}}, \left| \left| \cdot \right| \right| )\). Suppose the following limits hold in probability:
(i) there exists a continuous and positive function \(f:[0,T]\rightarrow {\mathbb {R}}_+\) such that
$$\begin{aligned} \lim _{N\rightarrow \infty } \sum _{k=1}^{k_N(T)} {\mathbb {E}}\bigg ({\left| \left| X^{k,N}\right| \right| }^2\vert {\mathcal {F}}_{k-1}^N\bigg )= {\mathrm{Trace}}_{{\mathcal {H}}}(D) \int _0^T f(t) dt \, ; \end{aligned}$$
(ii) if \(\{{\phi }_j\}_{j\in {\mathbb {N}}}\) is an orthonormal basis of \({\mathcal {H}}\) then
$$\begin{aligned} \lim _{N\rightarrow \infty } \sum _{k=1}^{k_N(T)} {\mathbb {E}}\bigg (\langle X^{k,N},{\phi }_j \rangle \langle X^{k,N},{\phi }_i \rangle \vert {\mathcal {F}}_{k-1}^N\bigg )=0\, \quad \text{ for } \text{ all } \,\, i\ne j\, ; \end{aligned}$$
(iii) for every fixed \(\epsilon >0\),
$$\begin{aligned} \lim _{N \rightarrow \infty } \sum _{k=1}^{k_N(T)} {\mathbb {E}}\bigg ({\left| \left| X^{k,N}\right| \right| }^2 \mathbf{1}_{\left\{ {\left| \left| X^{k,N}\right| \right| }^2\ge \epsilon \right\} } \vert {\mathcal {F}}_{k-1}^N \bigg )=0, \qquad \text{ in } \text{ probability }, \end{aligned}$$
where \({\mathbf {1}}_A\) denotes the indicator function of the set A. Then the sequence \({\mathcal {X}}^N\) converges weakly in \(C([0,T]; {\mathcal {H}})\) to the stochastic integral \(t\mapsto \int _0^t \sqrt{f(v)} dW_v\), where \(W_t\) is an \({\mathcal {H}}\)-valued D-Brownian motion.
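To see Lemma 8.7 at work in the simplest scalar setting \({\mathcal {H}}={\mathbb {R}}\), \(D=1\), \(k_N(t)=[t\sqrt{N}]\), one can take \(X^{k,N}=\sqrt{f(t_k)/\sqrt{N}}\,\xi _k\) with i.i.d. standard normal \(\xi _k\), so that conditions (i)-(iii) can be checked directly and \({\mathcal {X}}^N(T)\) has variance close to \(\int _0^T f\). A hedged Python sketch (the choice \(f(t)=1+t\) is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, reps = 1.0, 10_000, 4_000
kT = int(T * np.sqrt(N))                  # k_N(T) = [T sqrt(N)]
t = np.arange(kT) / np.sqrt(N)            # grid points t_k = k / sqrt(N)
f = 1.0 + t                               # f(t) = 1 + t on [0, T]
X = np.sqrt(f / np.sqrt(N)) * rng.normal(size=(reps, kT))   # scalar MDA
var_emp = X.sum(axis=1).var()             # empirical variance of X^N(T)
print(var_emp, T + T**2 / 2)              # limit variance: int_0^T f(t) dt = 3/2
```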
We apply Lemma 8.7 in the Hilbert space \({\mathcal {H}}^s\), with \(k_N(t)=[t\sqrt{N}]\), \(X^{k,N}=L^{k,N}/{N}^{1/4}\) [\(L^{k,N}\) is defined in (4.22)] and \({\mathcal {F}}_k^N\) the sigma-algebra generated by \(\{\gamma ^{h,N}, \xi ^{h,N}, \, 0\le h\le k\}\) to study the sequence \(\eta ^N(t)\), defined in (5.7). We now check that the three conditions of Lemma 8.7 hold in the present case.
Note that by the definition of \(L^{k,N}\), \({\mathbb {E}}[L^{k,N}\vert {\mathcal {F}}_{k-1}^N]={\mathbb {E}}_k [L^{k,N}]\) almost surely. We need to show that the limit
$$\begin{aligned} \lim _{N\rightarrow \infty }\frac{1}{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]} {\mathbb {E}}_k \left| \left| L^{k,N}\right| \right| _{s}^2 = 2 \, {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s) \int _0^T h_{\ell }(S(u))du, \end{aligned}$$
holds in probability. By (4.28),
$$\begin{aligned} \frac{1}{\sqrt{N}} {\mathbb {E}}_k \left| \left| L^{k,N}\right| \right| _{s}^2&= {\mathbb {E}}_k \left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^2 - \left| \left| {\mathbb {E}}_k\left( x^{k+1,N}-x^{k,N}\right) \right| \right| _{s}^2. \end{aligned}$$
From the above, if we prove
$$\begin{aligned} {\mathbb {E}}_{x^0}\sum _{k=0}^{[T\sqrt{N}]}\left| \left| {\mathbb {E}}_k\left( x^{k+1,N}-x^{k,N}\right) \right| \right| _{s}^2 \rightarrow 0 \quad \text{ as } N\rightarrow \infty , \end{aligned}$$
$$\begin{aligned}&\lim _{N\rightarrow \infty }\sum _{k=0}^{[T\sqrt{N}]} {\mathbb {E}}_k \left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^2 \nonumber \\&\quad = 2 \, {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s) \int _0^T h_{\ell }(S(u))du, \quad \text{ in } \text{ probability }, \end{aligned}$$
then (8.5) follows. We start by proving (8.6):
$$\begin{aligned} \left| \left| {\mathbb {E}}_k\left( x^{k+1,N}-x^{k,N}\right) \right| \right| _{s}^2 {\mathop {\lesssim }\limits ^{(2.14)}}&\frac{1}{N}\left| \left| x^{k,N}+{\mathcal {C}}_N \nabla \varPsi ^N(x^{k,N})\right| \right| _{s}^2 \\&+\frac{1}{\sqrt{N}} \left| \left| {\mathbb {E}}_k \left( \gamma ^{k,N}({\mathcal {C}}_N)^{1/2}\xi ^{k,N}\right) \right| \right| _{s}^2\\ \lesssim&\, \frac{1}{N} \left( 1+ \left| \left| x^{k,N}\right| \right| _{s}^2\right) , \end{aligned}$$
where the last inequality follows from (2.25) and (6.25). The above and (6.7) prove (8.6). We now turn to (8.7):
$$\begin{aligned}&\left| \sum _{k=0}^{[T\sqrt{N}]} {\mathbb {E}}_k \left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^2-2 \, {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s) \int _0^T h_{\ell }(S(u))du \right| \\&\quad {\mathop {\lesssim }\limits ^{(2.14)}} \frac{1}{N}\sum _{k=0}^{[T\sqrt{N}]} {\mathbb {E}}_k \left| \left| x^{k,N}+{\mathcal {C}}_N \nabla \varPsi ^N(x^{k,N})\right| \right| _{s}^2\\&\quad \qquad + \frac{1}{N^{3/4}}\sum _{k=0}^{[T\sqrt{N}]} {\mathbb {E}}_k \left| \langle x^{k,N}+{\mathcal {C}}_N \nabla \varPsi ^N(x^{k,N}), {\mathcal {C}}_N^{1/2}\xi ^{k,N}\rangle _s\right| \\&\quad \qquad + \left| \frac{2\ell }{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]}{\mathbb {E}}_k \left| \left| \gamma ^{k,N}{\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{s}^2 -2 \, {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s) \int _0^T h_{\ell }(S(u))du \right| .\\ \end{aligned}$$
The first two addends tend to zero in \(L^1\) as N tends to infinity due to (2.25), (2.27) and Lemma 6.2. As for the third addend, we decompose it as follows
$$\begin{aligned}&\left| \frac{2\ell }{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]}{\mathbb {E}}_k \left| \left| \gamma ^{k,N}{\mathcal {C}}_N^{1/2}\xi ^{k,N}\right| \right| _{s}^2 -2 \, {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s) \int _0^T h_{\ell }(S(u))du \right| \nonumber \\&\quad {\mathop {\lesssim }\limits ^{(1.13), (6.24)}} \left| \frac{\ell }{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]}{\mathbb {E}}_k \left| \left| \varepsilon ^{k,N}\right| \right| _{s}^2 - \frac{\ell }{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]}{\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)\alpha _{\ell }\big (S^{k,N}\big )\right| \nonumber \\&\qquad \qquad + \left| \frac{1}{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]}{\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s)h_{\ell }\big (S^{k,N}\big )- {\mathrm{Trace}}_{{\mathcal {H}}^s}({\mathcal {C}}_s) \int _0^T h_{\ell }(S(u))du\right| . \end{aligned}$$
Convergence to zero in \(L^1\) of the first term in the above follows from Lemmas 6.2 and 6.6. As for the term in (8.8), we use the identity
$$\begin{aligned} \int _0^Th_{\ell }({\bar{S}}^{(N)}(u))du =\left( T-\frac{[T\sqrt{N}]}{\sqrt{N}}\right) h_{\ell }\big (S^{[T\sqrt{N}],N}\big ) +\frac{1}{\sqrt{N}}\sum _{k=0}^{[T\sqrt{N}]}h_{\ell }\big (S^{k,N}\big ), \end{aligned}$$
to further split it, obtaining:
$$\begin{aligned} (8.8)\lesssim&\, \left| \int _0^T h_{\ell }({\bar{S}}^{(N)}(u))- h_{\ell }(S^{(N)}(u))du \right| \end{aligned}$$
$$\begin{aligned}&+\left| \int _0^T h_{\ell }(S^{(N)}(u))- h_{\ell }(S(u))du \right| \end{aligned}$$
$$\begin{aligned}&+\left( T-\frac{[T\sqrt{N}]}{\sqrt{N}}\right) h_{\ell }(S^{[T\sqrt{N}],N}). \end{aligned}$$
Convergence (in \(L^1\)) of (8.9) to zero follows from the same calculations leading to (7.6), the global Lipschitz property of \(h_{\ell }\), and Lemma 6.2. The addend in (8.10) tends to zero in probability since \(S^{(N)}\) tends to S in probability in \(C([0,T];{\mathbb {R}})\) (Theorem 4.1), and the third addend is clearly small. The limit (8.7) then follows.
Condition (ii) of Lemma 8.7 can be shown to hold with similar calculations, so we will not show the details.
Finally, using (6.3), condition (iii) follows by a calculation completely analogous to the one in [14, Section 8.2]; we omit the details here. \(\square \)
In this paper, we commit a slight abuse of our notation by writing \({\mathcal {C}}_s\) to mean the covariance operator on the Sobolev-like subspace \({\mathcal {H}}^s\) and \({\mathcal {C}}_N\) to mean that on the finite dimensional subspace \(X^N\) as defined in (1.5). We distinguish these two by always employing N as the subscript for the latter, and lower case letters such as s or r for the former.
Notice that \(S^{k,N}\) is only a function of \(x^{k,N}\).
Note that in the limit the dependence of the drift on \(S^{k,N}\) becomes explicit.
Beskos, A., Girolami, M., Lan, S., Farrell, P., Stuart, A.: Geometric MCMC for infinite-dimensional inverse problems. J. Comput. Phys. 335, 327–351 (2017)
Bédard, M.: Weak convergence of Metropolis algorithms for non-i.i.d. target distributions. Ann. Appl. Probab. 17(4), 1222–1244 (2007)
Bédard, M., Rosenthal, J.: Optimal scaling of Metropolis algorithms: heading toward general target distributions. Can. J. Stat. 36(4), 483–503 (2008)
Beskos, A., Roberts, G., Stuart, A., Voss, J.: An MCMC method for diffusion bridges. Stochast. Dyn. 8(3), 319–350 (2008)
Breyer, L., Piccioni, M., Scarlatti, S.: Optimal scaling of MALA for nonlinear regression. Ann. Appl. Probab. 14(3), 1479–1505 (2004)
Christensen, O., Roberts, G., Rosenthal, J.: Scaling limits for the transient phase of local Metropolis–Hastings algorithms. J. R. Stat. Soc. Ser. B Stat. Methodol. 67(2), 253–268 (2005)
Cotter, S., Roberts, G., Stuart, A., White, D., et al.: MCMC methods for functions: modifying old algorithms to make them faster. Stat. Sci. 28(3), 424–446 (2013)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Encyclopedia of Mathematics and Its Applications. Cambridge University Press, Cambridge (1992)
Hairer, M., Stuart, A., Voss, J.: Analysis of SPDEs arising in path sampling. Part II: the nonlinear case. Ann. Appl. Probab. 17(5–6), 1657–1706 (2007)
Hairer, M., Stuart, A., Voss, J., Wiberg, P.: Analysis of SPDEs arising in path sampling. Part I: the Gaussian case. Commun. Math. Sci. 3, 587–603 (2005)
Hastings, W.: Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97–109 (1970)
Jourdain, B., Lelièvre, T., Miasojedow, B.: Optimal scaling for the transient phase of Metropolis–Hastings algorithms: the longtime behavior. Bernoulli 20(4), 1930–1978 (2014)
Jourdain, B., Lelièvre, T., Miasojedow, B.: Optimal scaling for the transient phase of the random walk Metropolis algorithm: the mean-field limit. Ann. Appl. Probab. 25(4), 2263–2300 (2015)
Kuntz, J., Ottobre, M., Stuart, A.: Diffusion limit for the Random Walk Metropolis algorithm out of stationarity. arXiv preprint (2016)
Mattingly, J., Pillai, N., Stuart, A.: Diffusion limits of the random walk Metropolis algorithm in high dimensions. Ann. Appl. Probab. 22(3), 881–930 (2012)
Neal, R.M.: Regression and classification using Gaussian process priors (with discussion). In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (eds.) Bayesian statistics 6. Oxford University Press (1998). https://www.cs.toronto.edu/~radford/ftp/val6gp.pdf
Pillai, N., Stuart, A., Thiéry, A.: Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions. Ann. Appl. Probab. 22(6), 2320–2356 (2012)
Pillai, N., Stuart, A., Thiéry, A.: Noisy gradient flow from a random walk in Hilbert space. Stoch. Partial Differ. Equ. Anal. Comput. 2(2), 196–232 (2014)
Roberts, G., Gelman, A., Gilks, W.: Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab. 7(1), 110–120 (1997)
Roberts, G., Rosenthal, J.: Optimal scaling of discrete approximations to Langevin diffusions. J. R. Stat. Soc. Ser. B Stat. Methodol. 60(1), 255–268 (1998)
Roberts, G., Tweedie, R.: Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli 2(4), 341–363 (1996)
Stuart, A.: Inverse problems: a Bayesian perspective. Acta Numerica 19, 451–559 (2010)
Tierney, L.: A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Probab. 8(1), 1–9 (1998)
A.M. Stuart acknowledges support from AMS, DARPA, EPSRC, ONR. J. Kuntz gratefully acknowledges support from the BBSRC in the form of the Ph.D. studentship BB/F017510/1. M. Ottobre and J. Kuntz gratefully acknowledge financial support from the Edinburgh Mathematical Society.
Imperial College London, London, SW7 2AZ, UK
Juan Kuntz
Mathematics Department, Heriot Watt University, Edinburgh, EH14 4AS, UK
Michela Ottobre
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, 91125, USA
Andrew M. Stuart
Correspondence to Michela Ottobre.
Appendix: Proofs of the results in Sect. 2
The bounds (2.20) are a consequence of (2.19). We show how to obtain the second bound in (2.20):
$$\begin{aligned} \left| \left| {\mathcal {C}}\nabla \varPsi (x)-{\mathcal {C}}\nabla \varPsi (y)\right| \right| _{s}^2&= \sum _{j=1}^{\infty } \lambda _j^4 j^{2s} \left[ \left( \nabla \varPsi (x)-\nabla \varPsi (y)\right) _j\right] ^2\\&=\sum _{j=1}^{\infty } (\lambda _j j^{s})^4 j^{-2s} \left[ \left( \nabla \varPsi (x)-\nabla \varPsi (y)\right) _j\right] ^2\\&\lesssim \Vert \nabla \varPsi (x)-\nabla \varPsi (y)\Vert _{-s}^2 {\mathop {\lesssim }\limits ^{(2.19)}} \Vert x-y\Vert _s^2, \end{aligned}$$
where in the above we have used (2.17) and \(\left( \nabla \varPsi (x)-\nabla \varPsi (y)\right) _j\) denotes the jth component of the vector \(\nabla \varPsi (x)-\nabla \varPsi (y)\). With analogous calculations one can obtain the first bound in (2.20). As for the second equation in (2.21):
$$\begin{aligned} \left| \left| F(z)\right| \right| _{s}&\lesssim \left| \left| z\right| \right| _{s}+ \Vert {\mathcal {C}}\nabla \varPsi (z)\Vert _{s} {\mathop {\lesssim }\limits ^{(2.20)}} 1+ \left| \left| z\right| \right| _{s}. \end{aligned}$$
Similarly for the first bound in (2.21). The proof of Eq. (2.22) is standard, so we only sketch it: consider a line joining points x and y, \(\gamma (t)= x+t(y-x), t \in [0,1]\). Then
$$\begin{aligned} \varPsi (\gamma (1))-\varPsi (\gamma (0))&=\varPsi (y)-\varPsi (x)\\&= \int _0^1 dt \,\left\langle \nabla \varPsi (\gamma (t)), y-x \right\rangle \lesssim \left| \left| y-x\right| \right| _{s}, \end{aligned}$$
having used (2.19) and (2.6) in the last inequality. An analogous calculation to the above can be done for \(\varPsi ^N\), after proving (2.24) below. \(\square \)
The bounds (2.24) and (2.25) are just consequences of the definition of \(\varPsi ^N\) and \(\nabla \varPsi ^N\) and the analogous properties of \(\varPsi \). For the sake of clarity we spell out only how to obtain (2.25):
$$\begin{aligned} \left| \left| {\mathcal {C}}_N \nabla \varPsi ^N(x)\right| \right| _{s}^2&{\mathop {=}\limits ^{(2.23)}} \left| \left| {\mathcal {C}}_N {\mathcal {P}}^N\nabla \varPsi ({\mathcal {P}}^N(x))\right| \right| _{s}^2 = \sum _{j=1}^N j^{2s}\lambda _j^4\left[ \nabla \varPsi ({\mathcal {P}}^N(x)) \right] _j^2\\&\le \sum _{j=1}^{\infty } j^{2s}\lambda _j^4\left[ \nabla \varPsi ({\mathcal {P}}^N(x)) \right] _j^2\le \left| \left| {\mathcal {C}}\nabla \varPsi ({\mathcal {P}}^N(x))\right| \right| _{s}^2{\mathop {\lesssim }\limits ^{(2.20)}}1. \end{aligned}$$
As for (2.26), using (2.17):
$$\begin{aligned} \Vert {\mathcal {C}}_N \nabla \varPsi ^N(x)\Vert _{{\mathcal {C}}_N}^2&=\sum _{j=1}^{N} \lambda _j^2 \left[ \left( \nabla \varPsi ^N(x)\right) _j\right] ^2 \lesssim \sum _{j=1}^{\infty } j^{-2s} \left[ \left( \nabla \varPsi ^N(x)\right) _j\right] ^2 \\&= \Vert \nabla \varPsi ^N(x)\Vert _{-s}^2\lesssim 1. \end{aligned}$$
\(\square \)
Appendix: Proofs of Lemmas 6.5 and 6.6
To prove Lemmas 6.5 and 6.6 we decompose \(Q^N(x^{k,N}, \xi ^{k,N})\) into the sum of a term \(Q^N_j\) that depends on \(\xi _j^{k,N}\) (the jth component of \(\xi ^{k,N}\)) and a term \(Q_{j,\perp }^N\) that is independent of \(\xi _j^{k,N}\):
$$\begin{aligned} Q^N=Q^N_j+Q_{j,\perp }^N, \end{aligned}$$
$$\begin{aligned} Q^N_j&:=\left( \frac{\ell ^{5/2}}{\sqrt{2}N^{5/4}} -\frac{\ell ^{3/2}}{\sqrt{2}N^{3/4}}\right) \frac{x^{k,N}_j \xi ^{k,N}_j}{\lambda _j}+\frac{\ell ^{5/2}}{\sqrt{2}N^{5/4}}\lambda _j\xi ^{k,N}_j\big (\nabla \varPsi ^N\big (x^{k,N}\big )\big )_j \nonumber \\&\qquad -\frac{\ell ^2}{2N}\big (\xi _j^{k,N}\big )^2 +I_2^N\big (x^{k,N},y^{k,N}\big )+I_3^N\big (x^{k,N},y^{k,N}\big ). \end{aligned}$$ (B.1)
We recall that \(I_2^N\) and \(I_3^N\) have been defined in Sect. 6. Therefore, using (6.8),
$$\begin{aligned} Q_{j,\perp }^N= Q^N-Q^N_j=I_1^N+{\tilde{Q}}_j^N, \end{aligned}$$
$$\begin{aligned} {\tilde{Q}}_j^N&:=-\left( \frac{\ell ^{5/2}}{\sqrt{2}N^{5/4}} -\frac{\ell ^{3/2}}{\sqrt{2}N^{3/4}}\right) \frac{x^{k,N}_j \xi ^{k,N}_j}{\lambda _j} -\frac{\ell ^{5/2}}{\sqrt{2}N^{5/4}}\lambda _j\xi ^{k,N}_j\big (\nabla \varPsi ^N\big (x^{k,N}\big )\big )_j \nonumber \\&\qquad +\frac{\ell ^2}{2N}\big (\xi _j^{k,N}\big )^2. \end{aligned}$$
Proof (Lemma 6.5) Eq. (6.26) is a consequence of the definition (6.24) and the estimate (6.25). Thus, all we have to do is establish the latter. Recalling that \(\{{\hat{\phi }}_j\}_{j\in {\mathbb {N}}}:= \{j^{-s}\phi _j\}_{j\in {\mathbb {N}}}\) is an orthonormal basis for \({\mathcal {H}}^s\), we proceed as in the proof of [17, Lemma 4.7] and obtain
$$\begin{aligned} \left| \left\langle {{\mathbb {E}}_k \varepsilon ^{k,N}},{\hat{\phi }}_j \right\rangle _{s}\right| ^2\lesssim j^{2s}\lambda _j^2{\mathbb {E}}_k \left[ {Q^N_j\big (x^{k,N},\xi ^{k,N}\big )}\right] ^2 \end{aligned}$$
where \(Q^N_j\) has been defined in (B.1). Thus
$$\begin{aligned} \left| \left\langle {{\mathbb {E}}_k \varepsilon ^{k,N}},{\hat{\phi }}_j \right\rangle _{s}\right| ^2\lesssim&j^{2s}\lambda _j^2\left( N^{-3/2}{(x^{k,N}_j)^2{\mathbb {E}}_k \xi _j^{2}}\lambda _j^{-2}+N^{-5/2}\lambda _j^2{\mathbb {E}}_k \left[ {\xi _j^2\big (\nabla \varPsi ^N\big (x^{k,N}\big )\big )_j^2}\right] \right) \\&+ j^{2s}\lambda _j^2{\mathbb {E}}_k\big (\left| I_2^N\right| ^2+\left| I_3^N\right| ^2\big )+ j^{2s}\lambda _j^2N^{-2}\\ \lesssim&\, N^{-3/2}{\mathbb {E}}_k \big (j^sx^{k,N}_j\big )^2 +N^{-5/2}j^{-2s}\big (\nabla \varPsi ^N\big (x^{k,N}\big )\big )_j^2\\&+j^{2s}\lambda _j^2N^{-2}+ j^{2s}\lambda _j^2\frac{1+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}}, \end{aligned}$$
where the second inequality follows from the boundedness of the sequence \(\{\lambda _j\}\), (6.13) and (6.14). Summing over j and applying (2.24) we obtain (6.25). \(\square \)
Proof (Lemma 6.6) By definition of \(\varepsilon ^{k,N}\), and because \(\gamma ^{k,N}=[\gamma ^{k,N}]^2\) (as \(\gamma ^{k,N}\) can only take the values 0 or 1),
$$\begin{aligned} {\mathbb {E}}_k \left| \left| \varepsilon ^{k,N}\right| \right| _{s}^2= & {} \sum _{j=1}^N j^{2s}\lambda _j^2{\mathbb {E}}_k \left[ \gamma ^{k,N}\left| \xi ^{k,N}_j\right| ^2 \right] \\= & {} \sum _{j=1}^N j^{2s}\lambda _j^2{\mathbb {E}}_k \left[ \left( 1\wedge e^{Q^N\big (x^{k,N},y^{k,N}\big )}\right) \left| \xi ^{k,N}_j\right| ^2\right] . \end{aligned}$$
Using the above, the Lipschitzianity of the function \(s \mapsto 1\wedge e^s\), (B.2) and the independence of \(Q_{j,\perp }^N\) and \(\xi _j^{k,N}\), we write
$$\begin{aligned}&\left| {\mathbb {E}}_k \left| \left| \varepsilon ^{k,N}\right| \right| _{s}^2 - {\mathrm{Trace}}({\mathcal {C}}_s)\alpha _{\ell }\big (S^{k,N}\big ) \right| \nonumber \\&\quad = \left| {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2\left( 1\wedge e^{Q^N}\right) \left| \xi _j\right| ^2 -{\mathrm{Trace}}({\mathcal {C}}_s)\alpha _{\ell }\big (S^{k,N}\big ) \right| \nonumber \\&\quad \le \left| {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2\left( 1\wedge e^{Q^N_{j,\perp }}\right) \left| \xi _j\right| ^2 - {\mathrm{Trace}}({\mathcal {C}}_s)\alpha _{\ell }\big (S^{k,N}\big ) \right| \nonumber \\&\qquad +\left| {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2 \left[ \left( 1\wedge e^{Q^N}\right) -\left( 1\wedge e^{Q^N_{j,\perp }}\right) \right] \left| \xi _j\right| ^2 \right| \nonumber \\&\quad \lesssim \left| \sum _{j=1}^N j^{2s}\lambda _j^2 {\mathbb {E}}_k\left( 1\wedge e^{Q^N_{j,\perp }}\right) - {\mathrm{Trace}}({\mathcal {C}}_s)\alpha _{\ell }\big (S^{k,N}\big ) \right| \end{aligned}$$
$$\begin{aligned}&\qquad + \left| {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2 \left| Q_j^N \right| \left| \xi _j\right| ^2 \right| \end{aligned}$$
We now proceed to bound the addends in (B.4) and (B.5), starting with the latter. Using (B.1) and (B.3), we write
$$\begin{aligned} {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2 \left| Q_j^N \right| \left| \xi _j\right| ^2\le&\, {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2 \left| I_2^N \right| \left| \xi _j\right| ^2+ {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2 \left| I_3^N \right| \left| \xi _j\right| ^2\nonumber \\&+{\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2 \left| {\tilde{Q}}_j^N \right| \left| \xi _j\right| ^2\nonumber \\ \lesssim&\, \sum _{j=1}^N j^{2s}\lambda _j^2 \sqrt{{\mathbb {E}}_k\left| I_2^N \right| ^2} + {\mathbb {E}}_k \sum _{j=1}^N j^{2s}\lambda _j^2 \sqrt{{\mathbb {E}}_k\left| I_3^N \right| ^2} \nonumber \\&+ \sum _{j=1}^N j^{2s}\lambda _j^2 {\mathbb {E}}_k \left( \left| {\tilde{Q}}_j^N \right| \left| \xi _j\right| ^2\right) \nonumber \\ \lesssim&\,\frac{1+\left| \left| x^{k,N}\right| \right| _{s}}{N^{1/4}}+ \sum _{j=1}^N j^{2s}\lambda _j^2 {\mathbb {E}}_k \left( \left| {\tilde{Q}}_j^N \right| \left| \xi _j\right| ^2\right) , \end{aligned}$$
where the last inequality follows from Lemma 6.4 and (2.16). As for the last addend, using (B.3):
$$\begin{aligned} \sum _{j=1}^N j^{2s}\lambda _j^2 {\mathbb {E}}_k \left[ \left| {\tilde{Q}}_j^N \right| \left| \xi _j\right| ^2\right] \lesssim&\, \frac{1}{N^{3/4}} \sum _{j=1}^N j^{2s}\lambda _j \left| x^{k,N}_j\right| {\mathbb {E}}_k\left| \xi _j^{k,N} \right| ^3\nonumber \\&+\frac{1}{N^{5/4}} \sum _{j=1}^N j^{2s}\lambda _j^3 \left| \big ({\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big )\big )_j\right| {\mathbb {E}}_k\left| \xi _j^{k,N} \right| ^3\nonumber \\&+ \frac{1}{N}\sum _{j=1}^Nj^{2s}\lambda _j^2{\mathbb {E}}_k \left| \xi _j^{k,N} \right| ^4 \nonumber \\ \lesssim&\, \frac{1+ \left| \left| x^{k,N}\right| \right| _{s}^2}{N^{3/4}}, \end{aligned}$$
where the last inequality follows from (2.25), (2.16), the boundedness of the sequence \(\{\lambda _j\}_{j\in {\mathbb {N}}}\) and by using Young's inequality (more precisely, the so-called Young's inequality "with \(\epsilon \)"), as follows:
$$\begin{aligned} \lambda _j \left| x^{k,N}_j\right| {\mathbb {E}}_k\left| \xi _j^{k,N} \right| ^3 \le \left| x^{k,N}_j\right| ^2 + \lambda _j^2 \left( {\mathbb {E}}_k\left| \xi _j^{k,N} \right| ^3\right) ^2. \end{aligned}$$
This concludes the analysis of the term (B.5). As for the term (B.4), by definition of \(\alpha _{\ell }\), Eq. (1.12),
$$\begin{aligned} 1\wedge e^{Q^{N}_{j,\perp }}-\alpha _{\ell }\big (S^{k,N}\big )&=\left( 1\wedge e^{Q^{N}_{j,\perp }}-1\wedge e^{I^N_1\big (x^{k,N},y^{k,N}\big )}\right) \\&\quad +\left( 1\wedge e^{I^N_1\big (x^{k,N},y^{k,N}\big )}-1 \wedge e^{\ell ^2\big (S^{k,N}-1\big )/2}\right) . \end{aligned}$$
Exploiting the fact that \(s\mapsto 1\wedge e^s\) is globally Lipschitz, using Lemma 6.4 and manipulations of the same type as in (B.7), it follows that
$$\begin{aligned} \left| 1\wedge e^{Q^{N}_{j,\perp }}-\alpha _{\ell }\big (S^{k,N}\big )\right| \lesssim \frac{1+S^{k,N}+\left| \left| x^{k,N}\right| \right| _{s}^2}{\sqrt{N}}. \end{aligned}$$
Putting (B.6)–(B.7) and the above together completes the proof. \(\square \)
Appendix: Uniform bounds on the moments of \(S^{k,N}\) and \(x^{k,N}\)
To prove both bounds, we use a strategy analogous to the one used in [18, Proof of Lemma 9]. Let \(\{A_k:k\in {\mathbb {N}}\}\) be any sequence of real numbers. Suppose that there exists a constant \(C\ge 0\) (independent of k) such that
$$\begin{aligned} A_{k+1}-A_k\le \frac{C}{\sqrt{N}}\left( 1+A_k\right) . \end{aligned}$$ (C.1)
We start by showing that if the above holds then \(A_k\le e^{CT}(A_0+CT)\), uniformly over \(k=0,\ldots ,[T\sqrt{N}]\). Indeed, from (C.1),
$$\begin{aligned} A_k\le \left( 1+\frac{C}{\sqrt{N}}\right) ^kA_0+\frac{C}{\sqrt{N}} \sum _{j=0}^{k-1}\left( 1+\frac{C}{\sqrt{N}}\right) ^j \le \left( 1+\frac{C}{\sqrt{N}}\right) ^k\left( A_0+k\frac{C}{\sqrt{N}}\right) . \end{aligned}$$
Thus, for all \(k=0,\ldots ,[T\sqrt{N}]\),
$$\begin{aligned} A_k\le \left( 1+\frac{C}{\sqrt{N}}\right) ^{[T\sqrt{N}]}\left( A_0+[T\sqrt{N}] \frac{C}{\sqrt{N}}\right) \le \left( 1+\frac{C}{\sqrt{N}}\right) ^{T\sqrt{N}}(A_0+CT). \end{aligned}$$
Since \(1+z\le e^z\) for any \(z\in {\mathbb {R}}\),
$$\begin{aligned} \left( 1+\frac{C}{\sqrt{N}}\right) ^{\sqrt{N}}\le \left( e^{C/\sqrt{N}}\right) ^{\sqrt{N}}=e^C. \end{aligned}$$
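This discrete Gronwall-type bound is easy to check numerically; a minimal Python sketch that iterates (C.1) with equality (the values of C, T, N are arbitrary):

```python
import numpy as np

C, T, N, A0 = 2.0, 1.0, 400, 0.0
A = A0
for _ in range(int(T * np.sqrt(N))):
    A += (C / np.sqrt(N)) * (1 + A)           # iterate (C.1) with equality (worst case)
print(A, "<=", np.exp(C * T) * (A0 + C * T))  # uniform bound e^{CT}(A_0 + CT)
```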
With this preliminary observation, we can now prove (6.6)–(6.7).
Proof of (6.6). To prove (6.6) we only need to show that (C.1) holds (for some constant \(C>0\) independent of N and k) for the sequence \(A_k={\mathbb {E}}_{x^0}{\big (S^{k,N}\big )^q}\). By the definition of \(S^{k,N}\), we have
$$\begin{aligned} S^{k+1,N} = S^{k,N}+\frac{\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2}{N} +\frac{2\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{{\mathcal {C}}_N}}{N}. \end{aligned}$$
$$\begin{aligned}&{\mathbb {E}}_{x^0}{(S^{k+1,N})^q}- {\mathbb {E}}_{x^0}\big (S^{k,N}\big )^q \nonumber \\&\quad =\sum _{\begin{array}{c} n+m+l=q \\ (n,m,l)\ne (q,0,0) \end{array}} \left( {\begin{array}{c}q\\ n,m,l\end{array}}\right) {\mathbb {E}}_{x^0}\left[ \big (S^{k,N}\big )^n\left( \frac{\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2}{N}\right) ^m\right. \nonumber \\&\qquad \left. \times \left( \frac{2\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{{\mathcal {C}}_N}}{N}\right) ^l\right] . \end{aligned}$$
Thus, to establish (C.1) it is enough to argue that each of the terms in the right-hand side of the above is bounded by \((C/\sqrt{N})(1+{\mathbb {E}}{\big (S^{k,N}\big )^q})\). To this end, set
$$\begin{aligned} J^{k,N}&:= {\mathbb {E}}_{x^0}\left[ {\big (S^{k,N}\big )^n\left( \frac{\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2}{N}\right) ^m \left( \frac{2\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{{\mathcal {C}}_N}}{N}\right) ^l} \right] \\&= {\mathbb {E}}_{x^0}{\mathbb {E}}_k \left[ {\big (S^{k,N}\big )^n\left( \frac{\left| \left| x^{k+1,N} -x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2}{N}\right) ^m\left( \frac{2\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{{\mathcal {C}}_N}}{N}\right) ^l} \right] . \end{aligned}$$
By the Cauchy–Schwarz inequality for the scalar product \(\left\langle \cdot ,\cdot \right\rangle _{{\mathcal {C}}_N}\),
$$\begin{aligned} \frac{\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{{\mathcal {C}}_N}^l}{N^l}&\le \frac{\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^l\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^l}{N^l}\\&=\big (S^{k,N}\big )^{l/2}\frac{\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^l}{N^{l/2}}, \end{aligned}$$
which gives
$$\begin{aligned} J^{k,N}\lesssim {\mathbb {E}}_{x^0}\left[ \big (S^{k,N}\big )^{n+l/2}\frac{{\mathbb {E}}_k \left| \left| x^{k+1,N} -x^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2m+l}}{N^{m+l/2}}\right] . \end{aligned}$$
Using the bound (6.4) of Lemma 6.1, we also have
$$\begin{aligned} {\mathbb {E}}_k\frac{{\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{{\mathcal {C}}_N}^{2m+l}}}{N^{m+l/2}}\lesssim \frac{\big (S^{k,N}\big )^{m+l/2}}{N^{m+l/2}}+\frac{1}{N^{(m+l/2)/2}}. \end{aligned}$$
Putting all of the above together (and using Young's inequality) we obtain
$$\begin{aligned} J^{k,N} \lesssim \frac{{\mathbb {E}}_{x^0}[\big (S^{k,N}\big )^q]}{N^{m+l/2}}+ \frac{1}{N^{(m+l/2)/2}}. \end{aligned}$$
Now observe that \((m+l/2)/2\ge 1/2\) except when \((n,m,l)=(q,0,0)\) or \((n,m,l)=(q-1,0,1)\). Therefore we have shown the desired bound for all the terms in the expansion (C.2), except the one with \((n,m,l)=(q-1,0,1)\). To study the latter term, we recall that \(\gamma ^{k,N}\in \{0,1\}\), and use the definition of the chain [Eqs. (2.8) and (2.12)] to obtain
$$\begin{aligned} \left| \left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{{\mathcal {C}}_N}\right| \lesssim&\, \delta \left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2+ \delta \left| \left\langle {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big ),x^{k,N} \right\rangle _{{\mathcal {C}}_N}\right| \\&+ \sqrt{\delta }\left| \left\langle x^{k,N},({\mathcal {C}}_N)^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}\right| . \end{aligned}$$
Combining (2.26) with the Cauchy–Schwarz inequality we have
$$\begin{aligned} \delta \left| \left\langle {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big ),x^{k,N} \right\rangle _{{\mathcal {C}}_N}\right| \lesssim N^{-1/2}\left( 1+\left| \left| x^{k,N}\right| \right| _{{\mathcal {C}}_N}^2\right) \lesssim N^{-1/2}+N^{1/2}S^{k,N}, \end{aligned}$$
where in the last inequality we used the following observation
$$\begin{aligned} \left| \left| x^{k,N}\right| \right| _{s}^2=\sum _{j=1}^\infty \big (x^{k,N}\big )_j^2j^{2s}=\sum _{j=1}^\infty \frac{\big (x^{k,N}\big )_j^2}{\lambda _j^2}(\lambda _j^2j^{2s})\lesssim \sum _{j=1}^\infty \frac{\big (x^{k,N}\big )_j^2}{\lambda _j^2}=NS^{k,N}. \end{aligned}$$
Recalling that \(\left\langle x^{k,N},({\mathcal {C}}_N)^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}\), conditioned on \(x^{k,N}\), is a linear combination of zero-mean Gaussian random variables, we have
$$\begin{aligned} {\mathbb {E}}_k \sqrt{\delta }\left| \left\langle x^{k,N},({\mathcal {C}}_N)^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}\right|&\lesssim 1+N^{-1/2}{\mathbb {E}}_k \left| \left\langle x^{k,N},({\mathcal {C}}_N)^{1/2}\xi ^{k,N} \right\rangle _{{\mathcal {C}}_N}\right| ^2\\&\lesssim 1+\sqrt{N}S^{k,N}. \end{aligned}$$
Putting the above together and taking expectations we can then conclude
$$\begin{aligned} {\mathbb {E}}\left[ \frac{\big (S^{k,N}\big )^{q-1}\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{{\mathcal {C}}_N}}{N} \right]&\lesssim \frac{{\mathbb {E}}\left[ \big (S^{k,N}\big )^{q-1} \right] }{N}+\frac{{\mathbb {E}}\left[ \big (S^{k,N}\big )^q \right] }{\sqrt{N}}\\&\lesssim (1/\sqrt{N})\big (1+{\mathbb {E}}\left[ \big (S^{k,N}\big )^q \right] \big ), \end{aligned}$$
and (6.6) follows.
Proof of (6.7). This is very similar to the proof of (6.6), so we only sketch it. Just as before, it is enough to establish the following bound
$$\begin{aligned}&{\mathbb {E}}\left[ \left| \left| x^{k,N}\right| \right| _{s}^{2n}\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^{2m}\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{s}^{l} \right] \\&\quad \lesssim \frac{1}{\sqrt{N}}\left( 1+{\mathbb {E}}\left[ \left| \left| x^{k,N}\right| \right| _{s}^{2q} \right] \right) \end{aligned}$$
for each (n, m, l) such that \(n+m+l=q\), with the exception of the triple \((n,m,l)=(q,0,0)\). Applying the Cauchy–Schwarz inequality for \(\left\langle \cdot ,\cdot \right\rangle _{s}\) we have
$$\begin{aligned} \left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{s}^{l}\le \left| \left| x^{k,N}\right| \right| _{s}^l\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^l. \end{aligned}$$
Thus, Lemma 6.1 implies
$$\begin{aligned}&{\mathbb {E}}_k \left| \left| x^{k,N}\right| \right| _{s}^{2n}\left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^{2m}\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{s}^{l}\\&\quad \le \left| \left| x^{k,N}\right| \right| _{s}^{2n+l}{\mathbb {E}}_k \left| \left| x^{k+1,N}-x^{k,N}\right| \right| _{s}^{2m+l} \\&\quad \lesssim \frac{\left| \left| x^{k,N}\right| \right| _{s}^{2n+l}(1+\left| \left| x^{k,N}\right| \right| _{s}^{2m+l})}{N^{(m+l/2)/2}}. \end{aligned}$$
The above gives us the desired bound for all (n, m, l) except for \((n,m,l)=(q-1,0,1)\). As before, to study the latter case we observe
$$\begin{aligned}&\left\langle x^{k+1,N}-x^{k,N},x^{k,N} \right\rangle _{s}\\&\quad =\,\gamma ^{k,N}\left( -\frac{\ell }{\sqrt{N}} \left( \left| \left| x^{k,N}\right| \right| _{s}^2+\left\langle {\mathcal {C}}_N\nabla \varPsi ^N\big (x^{k,N}\big ),x^{k,N} \right\rangle _{s}\right) \right. \\&\qquad \left. +\frac{\sqrt{2\ell }}{N^{1/4}}\left\langle ({\mathcal {C}}_N)^{1/2}\xi ^{k,N},x^{k,N} \right\rangle _{s}\right) \\&\quad \lesssim \frac{1}{\sqrt{N}}\left( 1+\left| \left| x^{k,N}\right| \right| _{s}^2\right) +\frac{1}{N^{1/4}}\gamma ^{k,N}\left\langle (C_N)^{1/2}\xi ^{k,N},x^{k,N} \right\rangle _{s}\\&\quad \lesssim \frac{1}{\sqrt{N}}\left( 1+\left| \left| x^{k,N}\right| \right| _{s}^2\right) , \end{aligned}$$
where the penultimate inequality follows from the Cauchy–Schwarz inequality, (2.25), and the fact that \(\gamma ^{k,N}\in \{0,1\}\), and the last inequality follows from Lemma 6.5. This concludes the proof. \(\square \)
Remark C.1
In [17] the authors derived the diffusion limit for the chain under weaker assumptions on the potential \(\varPsi \) than those we use in this paper. Essentially, they assume that \(\varPsi \) is quadratically bounded, while we assume that it is linearly bounded. If \(\varPsi \) were quadratically bounded, the proof of Lemma 6.5 would become considerably more involved. We observe explicitly that the statement of Lemma 6.5 is of paramount importance in order to establish the uniform bound on the moments of the chain \(x^{k,N}\) contained in Lemma 6.2. In [17] obtaining such bounds is not an issue, since the authors study the chain in its stationary regime. In other words, in [17] the law of \(x^{k,N}\) is independent of k, and thus the uniform bounds on the moments of \(x^{k,N}\) and \(S^{k,N}\) are automatically true for target measures of the form considered there (see also the first bullet point of Remark 4.1). \(\square \)
Kuntz, J., Ottobre, M. & Stuart, A.M. Non-stationary phase of the MALA algorithm. Stoch PDE: Anal Comp 6, 446–499 (2018). https://doi.org/10.1007/s40072-018-0113-1
Issue Date: September 2018
Metropolis-Adjusted Langevin Algorithm
Diffusion limit
Optimal scaling
Mathematics Subject Classification
Primary 60J22
Secondary 60J20 | CommonCrawl |
Study of Models Using One or Two Exponentials to Simulate the Characteristic Current-voltage of Silicon Solar Cells
Inchirah Sari-Ali* | Boumédiène Benyoucef | Bachir Chikh-Bled | Younes Menni | Ali J. Chamkha | Giulio Lorenzini
Unit of Research on Materials and Renewable Energies, Department of Physics, Faculty of Sciences, Abou Bekr Belkaid University, B.P. 119, 13000, Tlemcen, Algeria
Mechanical Engineering Department, Prince Sultan Endowment for Energy and Environment, Prince Mohammad Bin Fahd University, Al-Khobar 31952, Saudi Arabia
Department of Engineering and Architecture, University of Parma, Parco Area delle Scienze, 181/A, Parma 43124, Italy
[email protected]
The production of electricity by converting sunlight with crystalline-silicon photocells is the most widely used approach on the technological and industrial level. Consequently, the development of energy-production applications requires cells with high efficiency and low cost. We propose two models, using either one or two exponentials, to simulate the current-voltage characteristic of a solar cell. The goal of our work is to present a comparative study between the theoretical and experimental models aimed at improving the solar cell efficiency. We also determine the parameters of the solar cell from the current-voltage curve. Additionally, we provide a justification for using the two-exponential model to improve the cell efficiency. The two-exponential model allows the phenomena of recombination in the diffusion and space-charge zones of the quasi-neutral regions of the emitter and the base to be investigated. This study underlines the insufficiency of the commonly used single exponential model by showing that it leads to the design of ideal solar cells, whereas structures characterized by a quality factor greater than one result in high efficiency.
solar cells with high efficiency and low cost, solar cell efficiency, characteristic current-voltage of solar cell, production of electricity, silicon
A solar cell is an electronic device that produces electricity when light falls on it. The light is absorbed and the cell produces a dc voltage and/or current. The device has a positive and a negative contact between which the voltage is generated and through which the current can flow. Solar cells have no moving parts. Effectively, they take light energy and convert it into electrical energy in an electrical circuit (Figure 1).
Figure 1. The photovoltaic effect in a solar cell
Under illumination, a photocell behaves as a power generator; its I-V characteristic, for a given illumination and temperature, is described by the implicit equation [1]:
I=h(I,V) (1)
$I={{I}_{ph}}-\frac{V+{{R}_{s}}I}{{{R}_{sh}}}-{{I}_{01}}\left[ \exp \left( \frac{q\left( V+{{R}_{s}}I \right)}{KT} \right)-1 \right]\,\,-{{I}_{02}}\left[ \exp \left( \frac{q\left( V+{{R}_{s}}I \right)}{nKT} \right)-1 \right]\ \ \ $ (2)
This equation is an implicit expression for the I-V characteristic according to the double exponential model for solar cells.
Here Iph is the photo-generated current, I01 and I02 are the two diode saturation currents, Rs is the series resistance, and Rsh is the shunt resistance.
A- The single exponential model (SEM):
Experimentally, this model is corrected by introducing the quality factor n; the current-voltage characteristic of the solar cell is then described by the following equation, with I02 = 0:
$I={{I}_{ph}}-\frac{V+{{R}_{s}}I}{{{R}_{sh}}}-{{I}_{01}}\left[ \exp \left( \frac{q\left( V+{{R}_{s}}I \right)}{nKT} \right)-1 \right]$ (3)
B- Double exponential model (DEM):
In this model, the two exponentials separately represent the Shockley diffusion current I01 and the current I02 due to recombination at trap centers in the space-charge zone. The current-voltage characteristic of a solar cell for this model is described by the following equation [2]:
$I={{I}_{ph}}-\frac{V+{{R}_{s}}I}{{{R}_{sh}}}-{{I}_{01}}\left[ \exp \left( \frac{q\left( V+{{R}_{s}}I \right)}{KT} \right)-1 \right]\,-{{I}_{02}}\left[ \exp \left( \frac{q\left( V+{{R}_{s}}I \right)}{nKT} \right)-1 \right]\ \ $ (4)
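Since Eqs. (2)-(4) are implicit in I, tracing a theoretical I-V curve requires a numerical root solve at each voltage point. The following is a minimal Python sketch of that procedure for the double exponential model of Eq. (4); all parameter values shown are hypothetical placeholders, not the measured values of this work. Setting I01 = 0 and passing the single saturation current through the I02 slot recovers the single exponential model of Eq. (3), since that term carries the quality factor n.

```python
import numpy as np
from scipy.optimize import brentq

K = 1.380649e-23     # Boltzmann constant (J/K)
Q = 1.602176634e-19  # elementary charge (C)

def cell_current(V, Iph, I01, I02, Rs, Rsh, n, T=300.0):
    """Solve the implicit Eq. (4) for the current I at terminal voltage V."""
    Vt = K * T / Q  # thermal voltage kT/q
    def residual(I):
        Vd = V + Rs * I  # voltage across the junction
        return (Iph - Vd / Rsh
                - I01 * (np.exp(Vd / Vt) - 1.0)
                - I02 * (np.exp(Vd / (n * Vt)) - 1.0) - I)
    # residual is strictly decreasing in I, so this bracket holds one root
    return brentq(residual, -2.0 * Iph - 1.0, 2.0 * Iph + 1.0)

# Hypothetical parameters for illustration only (not measured values)
Iph, I01, I02 = 3.0e-2, 1.0e-10, 1.0e-5  # A
Rs, Rsh, n = 0.05, 1.0e3, 2.0            # ohm, ohm, dimensionless
V = np.linspace(0.0, 0.6, 61)
I = np.array([cell_current(v, Iph, I01, I02, Rs, Rsh, n) for v in V])
```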
Many investigations have been conducted to predict the impact of photovoltaic cell parameters on its performance. Lakeh et al. [3] conducted an exhaustive parametric research on a novel integrated thermoelectric-PV cell. They mathematically modeled and simulated the device, considering ambient conditions, cold side temperature, load resistance of TEG and some other important factors. Jeong et al. [4] integrated a realistic self-power circuit which is composed of a few nm-thin MoTe2 FET and perovskite PV cells. Antonacci and Scognamiglio [5] aimed to describe the enormous potential arising from the combination of photosynthetic elements and nanomaterials towards the design of hybrid nanostructures, and reported the recent advances in the realisation of smart biosensors and photovoltaic cells. Gruber et al. [6] investigated the influence of unstructured charge-extraction/transport layers on solar cell photocurrent measurements. Katayama et al. [7] applied several types of failure and degradation in photovoltaic cells, such as mechanical stress, interconnect ribbon disconnection, and PID, to photovoltaic modules. Hernández-Callejo et al. [8] reviewed the general operation and the operation of hybrid systems, as well as the power quality. Quan et al. [9] proposed a noncontact, fast and effective defect detection method for photovoltaic cells that uses a combination of image processing and compressive sensing techniques. Alcañiz et al. [10] reported a heterojunction MoOx/Ge PV cell that effectively demonstrates the possibility of creating hole selective contacts in n-type c-Ge. Shittu et al. [11] presented the concepts of photovoltaics and thermoelectric energy conversion, research focus areas in the hybrid systems, applications of such systems, discussion of the most recent research accomplishments and recommendations for future research. Salem et al. [12] experimentally investigated the performance of a PV module cooling effect using a compound enhancement technique. Torabi et al. [13] reviewed the development of single junction perovskite solar cells with a focus on the material structure, bandgap engineering and crystallization strategies. Vargas-Estevez et al. [14] reported the fabrication of a simple photovoltaic microcell array (PVMA) using a CMOS-compatible microfabrication technology. Mathews et al. [15] reported some non-technical barriers to commercialization of IPV technologies including a requirement for a greater understanding of the costs when manufacturing low volumes of small IPV modules, toxicity concerns, and the stability of materials. Shaygan et al. [16] presented the energy, exergy, advanced energy and economic analysis of hybrid system consisting of photovoltaic cells, electrolyzer and polymer electrolyte membrane fuel cell to provide a clean power to run an electrolyzer for hydrogen production. Falama et al. [17] studied the implication of impact ionization due to a high electric field on the improvement of the photovoltaic solar cell efficiency. Ansari et al. [18] undertook a computational study on the photovoltaic performance and electrical characteristics of graphene/gallium arsenide Schottky junction solar cell with structure graphene/SiO2/GaAs/Au. Cotfas et al. [19] described and used the SDA algorithm to calculate the five important parameters of the photovoltaic cells and panels. Aguiar et al. 
[20] designed, synthesized, characterized and applied a series of highly fluorinated BODIPY dyes with different styryl aromatic donor groups such as phenyl, naphthyl and anthracyl as electron-donor materials in organic photovoltaic cells (OPV). Manfredi et al. [21] presented the data on the effect of peripheral functionalization of a series of triphenylamine based di-branched dyes used as sensitizers in dye-sensitized solar cells. Other studies can be found in [22, 23].
As reported above, the production of electricity by converting sunlight with crystalline-silicon photocells is the most widely used approach on the technological and industrial level. Consequently, the development of energy-production applications requires cells with high efficiency and low cost. In this paper, we propose two models, using either one or two exponentials, to simulate the current-voltage characteristic of a solar cell. The goal of our work is to present a comparative study between the theoretical and experimental models aimed at improving the solar cell efficiency; we also determine the parameters of the solar cell from the current-voltage curve. Additionally, we provide a justification for using the two-exponential model to improve the cell efficiency.
The remainder of this paper is organized as follows: Section 2 presents the experimental study, Section 3 lists the materials used for the experimental part, Section 4 describes the method of parameter extraction, Section 5 presents a comparative study between the theoretical and experimental models aimed at improving the solar cell efficiency, and Section 6 gives the conclusions, recommendations, and suggestions for future study.
2. Experimental Study
In this study, we take measurements on a silicon solar cell and then compare these results with the theoretical results of the one- and two-exponential models. The photovoltaic cell is exposed to light and a load R is connected across the cell (Figure 2). We measure the current and the voltage delivered by the cell as a function of the light intensity.
To follow the behaviour of the characteristic voltage as a function of illumination, we use the following setup [22]:
Figure 2. Diagram of the solar cell
The material used for the experimental part is:
1. A universal Prado-type simulator using a halogen lamp giving a spectrum almost identical to that of the sun.
2. An optical bench on which the cell is placed, allowing the flux to be varied.
3. A rheostat allowing the current and the voltage of the cell to be varied.
4. An ammeter used to measure the current of the cell.
5. A voltmeter used to measure the voltage of the cell.
6. A Sharp Solar sunshine recorder used to measure the flux.
7. A silicon solar cell.
4. Method of Parameter Extraction
Different techniques have been presented to extract the parameters of the two diode model without using least mean squares fitting [2].
When the model equation and its derivative are expressed at open circuit and at short circuit, four independent equations are obtained. When normal cell parameters are taken into account, these can be rewritten as in the following equations [1]:
${{I}_{ph}}={{I}_{sc}}$ (5)
${{R}_{sh}}=\frac{1}{{{\left( \frac{\partial I}{\partial V} \right)}_{V=0}}}$ (6)
${{R}_{s}}=-\frac{1}{{{\left( \frac{\partial I}{\partial V} \right)}_{V={{V}_{OC}}}}}-\frac{1}{\frac{{{I}_{01}}}{{{V}_{t}}}\exp \left( \frac{{{V}_{OC}}}{{{V}_{t}}} \right)+\frac{{{I}_{02}}}{2{{V}_{t}}}\exp \left( \frac{{{V}_{OC}}}{2{{V}_{t}}} \right)+\frac{1}{{{R}_{sh}}}}$ (7)
${{I}_{ph}}={{I}_{01}}\left( \exp \left( \frac{{{V}_{OC}}}{{{V}_{t}}} \right)-1 \right)+{{I}_{02}}\left( \exp \left( \frac{{{V}_{OC}}}{2{{V}_{t}}} \right)-1 \right)+\frac{{{V}_{oc}}}{{{R}_{sh}}}$ (8)
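As a hedged illustration of Eqs. (5)-(7), the Python sketch below estimates Iph, Rsh, and Rs from a measured I-V curve by numerical differentiation. The function and array names are ours, the data are assumed to be sampled monotonically in V, and the saturation currents I01 and I02 must be supplied externally (for instance from the extraction described next).

```python
import numpy as np

def extract_params(V, I, I01, I02, Vt):
    """Estimate Iph, Rsh, Rs from measured arrays V, I (sorted by V); Vt = kT/q."""
    dIdV = np.gradient(I, V)          # numerical slope of the I-V curve
    i0 = np.argmin(np.abs(V))         # sample closest to V = 0
    Iph = I[i0]                       # Eq. (5): Iph = Isc
    Rsh = 1.0 / abs(dIdV[i0])         # Eq. (6): inverse slope at short circuit
    ioc = np.argmin(np.abs(I))        # sample closest to I = 0, i.e. V = Voc
    Voc = V[ioc]
    Rs = (-1.0 / dIdV[ioc]            # Eq. (7): inverse slope at open circuit,
          - 1.0 / (I01 / Vt * np.exp(Voc / Vt)                    # corrected by
                   + I02 / (2.0 * Vt) * np.exp(Voc / (2.0 * Vt))  # the diode terms
                   + 1.0 / Rsh))
    return Iph, Rsh, Rs, Voc
```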
We developed software for determining the parameter I01 defined in the characteristic equation of a solar cell.
Experimental determination of I01:
For various values N of the flux, we determine I01 using the following relations:
- For the single exponential model, the expression takes the following form [22]:
${{I}_{01}}=\frac{1}{N}\sum\limits_{N}{\left[ \frac{{{I}_{cc}}N}{\left( \exp \left( w{{V}_{co}}N \right)-1 \right)} \right]}$ (9)
- For the two-exponential model, we fix the recombination current I02, which allows the saturation current I01 to be calculated from the following expression [23]:
${{I}_{01}}=\frac{{{I}_{cc}}N}{\left( \exp \left( w{{V}_{co}}N \right)-1 \right)}-\frac{{{I}_{02}}\left( \exp \left( w{{V}_{co}}_{N}/n \right)-1 \right)}{\left( \exp \left( w{{V}_{co}}_{N} \right)-1 \right)}-\frac{{{V}_{coN}}/{{R}_{sh}}}{\left( \exp \left( w{{V}_{co}}_{N} \right)-1 \right)}$ (10)
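A direct transcription of Eqs. (9) and (10) might look as follows in Python; here w = q/kT, Icc and Vco are arrays of the short-circuit currents and open-circuit voltages measured at the N flux levels, and the averaging over flux levels in Eq. (10) mirrors that of Eq. (9). Names are illustrative.

```python
import numpy as np

def i01_single_exp(Icc, Vco, w):
    """Eq. (9): I01 averaged over the N flux levels (single exponential model)."""
    return np.mean(Icc / (np.exp(w * Vco) - 1.0))

def i01_double_exp(Icc, Vco, w, n, I02, Rsh):
    """Eq. (10): I01 with the recombination current I02 held fixed."""
    denom = np.exp(w * Vco) - 1.0
    return np.mean((Icc - I02 * (np.exp(w * Vco / n) - 1.0) - Vco / Rsh) / denom)
```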
5. Results and Discussions
Our simulation program for the I-V characteristic allowed us to obtain the variation of the solar cell parameters for various fluxes.
Table 1. Variation of the parameters of the I = f(V) characteristic as a function of flux (columns: f (W/m2), η (%), FFexp, ηexp (%))
Indeed, we notice that the fill factor and the efficiency increase with the incident flux. In order to compare the two types of model, we simulated the characteristic of the solar cell for the flux value f = 1000 W/m2; the results of our simulation are given in the following figures.
Figure 3. Current-voltage characteristic of the single exponential model for various values of n at f = 1000 W/m2
Figure 4. Variation of the efficiency as a function of solar flux (I01 = 10-10 A)
5.1 Single exponential model
Experimental measurements are represented graphically by (*). According to Figure 3, we notice that for a quality factor equal to unity a high efficiency is obtained, but its value is not compatible with the experimental one, whereas for n = 1.8 the theoretical curve almost coincides with the experimental results.
Figure 4 shows the variation of the efficiency as a function of flux with a diffusion current I01 = 10-10 A; better results were obtained, but with a quality factor higher than one. Therefore, the search for new structures must focus on improving the quality factor and the diffusion current. This shows the interest of using the two-exponential model, which makes it possible to distinguish the two phenomena, diffusion and recombination in the space-charge zone, when simulating the operation of the solar cell as a function of the parameters of the characteristic.
5.2 Double exponential model
Figures 5, 6 and 7 show the impact of the parameters of the two-exponential model on the current-voltage characteristic of a cell. We note that for a significant value of the recombination current, high efficiencies can be obtained; the value closest to experiment lies between 10-5 and 1.5×10-5 A/cm2. We also note that for a given quality factor n, the influence of the recombination current I02 dominates over that of the diffusion current I01. Moreover, for a large value of the diode quality factor (n = 2.2), the theoretical curve is similar to the experimental one for a cell with a significant recombination current and a weak diffusion current of about 10-10 A/cm2.
Figure 5. Influence of the quality factor on the two-exponential model
Figure 6. Influence of the diffusion current on the two-exponential model
Figure 7. Influence of I02 on the two-exponential model
Figure 8. Variation of the efficiency as a function of solar flux (n = 2.2)
From Figure 8 we deduce that, for a recombination current I02 = 10-5 A/cm2 and a quality factor higher than 2, the theoretical and experimental yields are close. This result informs us about the importance of the type and position of the trap levels in determining the efficiency η, since the maximum of the recombination current in the space-charge zone arises for trap levels located in the middle of the forbidden band (n = 2).
6. Conclusions
Our study concerned the analysis of the two models allowing the determination of the parameters of a photovoltaic cell. The two-exponential model allows the phenomena of recombination in the diffusion and space-charge zones of the quasi-neutral regions of the emitter and the base to be investigated.
Our experimental results were compared to the theory for the two models describing the operation of the silicon solar cell.
This study underlines the insufficiency of the commonly used single exponential model by showing that it leads to the design of ideal solar cells, whereas structures characterized by a quality factor greater than one result in high efficiency.
As a perspective of this work, we plan to carry out additional experimental studies on photovoltaic modules based on polycrystalline and amorphous silicon, and to develop the model with resolution methods to determine the different specific parameters of the current-voltage characteristic.
[1] Zerga, A. (1998). Optimisation du rendement d'une cellule solaire à base de silicium monocristallin de type n+p, Thèse de Magister, Université de Tlemcen.
[2] Enebish, N., Agachbayar, D., Dorjkhand, S., Baatar, D., Ulemj, I. (1993). Numerical analysis of solar cells current-voltage characteristic. Solar Energy Materials and Solar Cells, 29: 201-208. https://doi.org/10.1016/0927-0248(93)90035-2
[3] Lakeh, H.K., Kaatuzian, H., Hosseini, R. (2019). A parametrical study on photo-electro-thermal performance of an integrated thermoelectric-photovoltaic cell. Renewable Energy, 138: 542-550. https://doi.org/10.1016/j.renene.2019.01.094
[4] Jeong, Y., Shin, D., Park, J.H., Park, J., Yi, Y., Im, S. (2019). Integrated advantages from perovskite photovoltaic cell and 2D MoTe2 transistor towards self-power energy harvesting and photosensing. Nano Energy, 63: 103833. https://doi.org/10.1016/j.nanoen.2019.06.029
[5] Antonacci, A., Scognamiglio, V. (2019). Photosynthesis-based hybrid nanostructures: electrochemical sensors and photovoltaic cells as case studies. TrAC Trends in Analytical Chemistry, 115: 100-109. https://doi.org/10.1016/j.trac.2019.04.001
[6] Gruber, M., Jovanov, V., Wagner, V. (2019). Modeling of photoactive area spreading in unstructured photovoltaic cells. Solar Energy Materials and Solar Cells, 200: 110011. https://doi.org/10.1016/j.solmat.2019.110011
[7] Katayama, N., Osawa, S., Matsumoto, S., Nakano, T., Sugiyama, M. (2019). Degradation and fault diagnosis of photovoltaic cells using impedance spectroscopy. Solar Energy Materials and Solar Cells, 194: 130-136. https://doi.org/10.1016/j.solmat.2019.01.040
[8] Hernández-Callejo, L., Gallardo-Saavedra, S., Alonso-Gómez, V. (2019). A review of photovoltaic systems: Design, operation and maintenance. Solar Energy, 188: 426-440. https://doi.org/10.1016/j.solener.2019.06.017
[9] Quan, L., Xie, K., Liu, Y., Zhang, H. (2019). Camera enhanced compressive light beam induced current sensing for efficient defect detection in photovoltaic cells. Solar Energy, 183: 212-217. https://doi.org/10.1016/j.solener.2019.02.055
[10] Alcañiz, A., López, G., Martín, I., Jiménez, A., Datas, A., Calle, E., Rosa, E., Gerling, L.G., Voz, C., del Cañizo, C., Alcubilla, R. (2019). Germanium photovoltaic cells with MoOx hole-selective contacts. Solar Energy, 181: 357-360. https://doi.org/10.1016/j.solener.2019.02.009
[11] Shittu, S., Li, G., Akhlaghi, Y.G., Ma, X., Zhao, X., Ayodele, E. (2019). Advancements in thermoelectric generators for enhanced hybrid photovoltaic system performance. Renewable and Sustainable Energy Reviews, 109: 24-54. https://doi.org/10.1016/j.rser.2019.04.023
[12] Salem, M.R., Elsayed, M.M., Abd-Elaziz, A.A., Elshazly, K.M. (2019). Performance enhancement of the photovoltaic cells using Al2O3/PCM mixture and/or water cooling-techniques. Renewable Energy, 138: 876-890. https://doi.org/10.1016/j.renene.2019.02.032
[13] Torabi, N., Behjat, A., Zhou, Y., Docampo, P., Stoddard, R.J., Hillhouse, H.W., Ameri, T. (2019). Progress and challenges in perovskite photovoltaics from single- to multi-junction cells. Materials Today Energy, 12: 70-94. https://doi.org/10.1016/j.mtener.2018.12.009
[14] Vargas-Estevez, C., Blanquer, A., Murillo, G., Duque, M., Barrios, L., Nogués, C., Ibañez, E., Esteve, J. (2018). Electrical stimulation of cells through photovoltaic microcell arrays. Nano Energy, 51: 571-578. https://doi.org/10.1016/j.nanoen.2018.07.012
[15] Mathews, I., Kantareddy, S.N., Buonassisi, T., Peters, I.M. (2019). Technology and market perspective for indoor photovoltaic cells. Joule, 3(6): 1415-1426. https://doi.org/10.1016/j.joule.2019.03.026
[16] Shaygan, M., Ehyaei, M.A., Ahmadi, A., El Haj Assad, M., Silveira, J.L. (2019). Energy, exergy, advanced exergy and economic analyses of hybrid polymer electrolyte membrane (PEM) fuel cell and photovoltaic cells to produce hydrogen and electricity. Journal of Cleaner Production, 234: 1082-1093. https://doi.org/10.1016/j.jclepro.2019.06.298
[17] Falama, R.Z., Hidayatullah, Doka, S.Y. (2019). A promising concept to push efficiency of pn-junction photovoltaic solar cell beyond Shockley and Queisser limit based on impact ionization due to high electric field. Optik, 187: 39-48. https://doi.org/10.1016/j.ijleo.2019.04.136
[18] Ansari, Z.A., Singh, T.J., Islam, S.M., Singh, S., Mahala, P., Khan, A., Singh, K.J. (2019). Photovoltaic solar cells based on Graphene/Gallium arsenide schottky junction. Optik, 182: 500-506. https://doi.org/10.1016/j.ijleo.2019.01.078
[19] Cotfas, D.T., Deaconu, A.M., Cotfas, P.A. (2019). Application of successive discretization algorithm for determining photovoltaic cells parameters. Energy Conversion and Management, 196: 545-556. https://doi.org/10.1016/j.enconman.2019.06.037
[20] Aguiar, A., Farinhas, J, da Silva, W., Ghica, M.E., Brett, C.M.A., Morgado, J., Sobral, A.J.F.N. (2019). Synthesis, characterization and application of meso-substituted fluorinated boron dipyrromethenes (BODIPYs) with different styryl groups in organic photovoltaic cells. Dyes and Pigments, 168: 103-110. https://doi.org/10.1016/j.dyepig.2019.04.031
[21] Manfredi, N., Trifiletti, V., Melchiorre, F., Giannotta, G., Biagini, P., Abbotto, A. (2019). Photovoltaic characterization of di-branched organic sensitizers for DSSCs. Data in Brief, 2019. https://doi.org/10.1016/j.dib.2019.104167
[22] Charles, J.P., Mekkaoui, I., Bordure, G. (1985). A critical study of the effectiveness of the single and double exponential models for I-V characterization of solar cell. Solid-State Electronics, 28: 807-820. https://doi.org/10.1016/0038-1101(85)90068-1
[23] Sari-Ali, I. (2003). Contribution à l'etude de la caractéristique courant-tension des cellules solaires fonctionnant sous eclairement et a l'obscurité. Thèse de Magister, Université de Tlemcen.
Optimizing strategies for population-based chlamydia infection screening among young women: an age-structured system dynamics approach
Yu Teng, Nan Kong & Wanzhu Tu
Chlamydia infection (CT) is one of the most commonly reported sexually transmitted diseases. It is often referred to as a "silent" disease, with the majority of infected people having no symptoms. Without early detection, it can progress to serious reproductive and other health problems. Economical identification of the asymptomatically infected is a key public health challenge. Increasing evidence suggests that CT infection risk varies over the range of adolescence. Hence, age-dependent screening strategies with more frequent testing for certain age groups at higher risk may be cost-saving in controlling the disease.
We study the optimization of age-dependent screening strategies for population-based chlamydia infection screening among young women. We develop an age-structured compartment model for the natural progression of CT, screening, and treatment. We apply parameter optimization to the resultant PDE-based system dynamical models with the objective of minimizing the total care spending (including screening and treatment costs during the program period and the anticipated costs of treating the sequelae afterwards). For ease of practical implementation, we also search for the best screening initiation age for strategies with a constant screening frequency.
The optimal age-dependent strategies identified outperform the current CDC recommendations both in terms of total care spending and disease prevalence at the termination of the program. For example, the age-dependent strategy that allows monthly screening rate changes can save about 5 % of the total spending. Our results suggest early initiation of CT screening is likely beneficial to the cost saving and prevalence reduction. Finally, our results imply that the strategy design may not be sensitive to accurate quantification of the age-specific CT infection risk if screening initiation age and screening rate are the only decisions to make.
Our research demonstrates the potential economic benefit of age-dependent screening strategy design for population-based screening programs. It also showcases the applicability of age-structured system dynamical modeling to infectious disease control with increasing evidence on the age differences in infection risk. The research can be further improved with consideration of the difference between first-time infection and reinfection, as well as population heterogeneity in sexual partnership.
Sexually transmitted infections with Chlamydia trachomatis (CT) are among the most commonly reported infectious diseases in the United States [10] and many other developed countries [38]. The infection is caused by bacterium C. trachomatis [7]. It is estimated that about 1 million individuals in the U.S. are infected with CT. Due to lack of specific symptoms in many CT infection cases [22], the infection may lead to major long-term morbidities such as pelvic inflammatory disease, ectopic pregnancy, and infertility [9, 36]. Together with other STDs, CT infection inflicts significant human and economic costs [26].
At present, CT infection can be accurately detected and easily treated with early detection. Thus, CT screening has emerged as a key public health intervention [6] and the disease control relies primarily on the cost and effectiveness of the screening. Several economic studies found CT screening to be cost-effective, and even cost-saving (e.g., [17–19, 21, 35]). For literature reviews on the economic studies, we refer to Low et al. [23, 24]; Roberts et al. [28]. However, most of the existing economic studies assumed a constant CT infection rate over the studied age range, which typically spans adolescence and early adulthood. Increasing evidence suggests that the CT infection risk decreases with age (e.g., [3, 13, 31]), mainly due to more stabilized sexual partnership and possibly also due to increased immunological response to CT over age. Hence, one would expect that a screening strategy with age-dependent screening rate, i.e., treating screening proportion in the population as a function of age, would be more cost-saving than the strategies assuming a constant rate. In this paper, we incorporate the age dependency of the infection risk into an economic study of CT screening with nucleic acid amplification testing [33]. We optimize age-dependent screening strategies for a population-based screening program, which offers tests systematically to all individuals in the target group within a framework of agreed policy, protocols, quality management, monitoring and evaluation [16].
To the best of our knowledge, only a few simulation-based economic studies have taken the age-dependency into account. For example, Hu et al. [18, 19], basing their studies on an earlier observational study in the Netherlands [8, 15], assumed that the probability of acquiring CT is constant for women from early ages and decreases at a constant annual rate thereafter. While simulation-based analyses have compared tailored screening strategies that recommend different screening rates to different population subgroups based on some risk measure (e.g., [18, 19, 21]), we have not witnessed any optimization work on identifying age-dependent CT screening strategies, which are, in some sense, a subset of risk-based strategies.
In this paper, we model the population dynamics, related to CT transmission, screening, and treatment, with a set of partial differential equations (PDE) that incorporate age-dependency on the CT infection risk. We formulate a parameter optimization problem subject to the PDE model to identify the screening rates at different age points over a range (i.e., an age-dependent parameter profile) such that some per-capita cumulative cost is minimized. To summarize our contribution, we are among the first that conduct economic analyses of population-based CT screening programs through age-structured systems modeling and optimization.
In this paper, we also reasonably specify the studied cohort so that we can reduce the PDE model to a set of ordinary differential equations (ODEs) for simplifying the numerical optimization. We next focus on the optimization over a set of more implementable strategies. In anticipation that the optimal age-specific screening strategy may be difficult to implement as optimal screening rates obtained from the above model may vary significantly between consecutive age points, we consider cases where a constant screening rate is applied to a truncated age range. Specifically, we consider optimizing the screening start age. Finally, we make a simplifying assumption on the age-specific infection risk, with which we remodel the system dynamics and explore the benefit in the numerical optimization. Through this simplification, we also check how robust the optimal strategy with a constant screening rate is to the estimate of the age-specific CT infection risk profile. After presenting the research methodology, we report our numerical studies and discuss their policy implications. At the end of the paper, we draw conclusions and outline future research.
Differential equation based system dynamics modeling has been widely used in infectious disease control. For a general introduction, we refer to Keeling and Rohani [20]. For studies on CT transmission dynamics, we refer to Martin et al. [25]; Sharomi and Gumel [29]. Meanwhile, ODE-based models have been applied to economic studies of screening programs. For example, Althaus et al. [1] applied an SEIRS (susceptible-exposed-infected-recovered-susceptible) model, which is widely used in the infectious disease modeling literature (e.g., [2, 15]), to assess the impact of screening programs on CT prevalence reduction. Regan et al. [27] extended the SEIRS model to incorporate the additional state of receiving treatment. Note that the two studies above did not consider the cost or cost-effectiveness of the screening programs. Our work differs from previous studies in that we apply nonlinear optimization to design optimal strategies.
Optimization of age-dependent screening strategies
An age-structured SEIRS model
We adapt a widely used SEIRS compartment model [1] to illustrate the system dynamics associated with CT transmission, screening, and treatment. We then capture the system dynamics with a multi-compartment model and mathematically formalize the age-structured population heterogeneity with a set of PDEs.
Compartment modeling has been widely used in modeling infectious disease transmission [2, 4, 20, 34]. In recent years, it has been used to model various specific screening, vaccination, pharmaceutical, and therapeutic interventions for dealing with relevant public health problems (e.g., [12]). For many infectious diseases, age has a strong influence on the rate of disease spread in a population, especially through the contact rate [2, 20]. For sexually transmitted diseases, the contact rate is affected by sexual behavior, which is often age dependent.
Figure 1 presents the age-structured compartment model. In the figure, the solid lines indicate transitions following the natural history and standard pharmaceutical/therapeutic intervention of the disease. The dashed lines indicate additional transitions due to screening. The system dynamics is explained as follows. Let t and τ be the time and age indices, respectively. At any time t ∈ [0, T], the population of age τ ∈ [0, A] is divided into five subgroups as follows. The susceptible subgroup, denoted by S(t,τ), is infected by the entire infected population at an age-dependent rate β(τ) > 0. Infected individuals then experience an incubation period at rate γ > 0, during which they are denoted by E(t,τ). After the incubation period, infection symptoms become onset in a fraction of the infected population, denoted by I_s(t,τ), whereas the other infected people, denoted by I_a(t,τ), do not show any symptom. We denote by f ∈ [0, 1] the probability that an infected individual remains asymptomatic. In the absence of screening, symptomatically infected people clear their infections at a rate r_s > 0, which can be interpreted as treating the infection by a general practitioner upon symptom onset and subsequently curing the disease. We assume that treatment is sought immediately after symptom onset. Asymptomatically infected people may develop acute pelvic inflammatory disease (PID), immediately seek inpatient treatment, and subsequently cure the disease at a rate r_PID > 0. Alternatively, they may recover through natural clearance at a rate r_a > 0. We denote such recovered people by R(t,τ) and denote by μ > 0 the rate at which their temporary immunity wanes before they become susceptible to reinfection. With screening, the entire population is screened at an age-specific rate λ(τ) (i.e., on average each individual is screened within 1/λ(τ) years from age point τ). We assume the screening test is 100 % accurate and treatment is sought immediately after an infection is detected. We further assume that screening is independent of the processes of infection clearance among both asymptomatically and symptomatically infected people. Hence, with screening, the overall infection clearance rates are r_s + λ(τ) and r_PID + λ(τ) for the symptomatically and asymptomatically infected, respectively.
An age-structured SEIRS model for CT transmission and screening. Each box (compartment) represents a particular state into which the total population is stratified. For instance, S stands for the susceptible population subgroup. The solid lines indicate transitions due to natural disease progression and standard therapeutic intervention, and the dashed lines indicate additional transitions due to screening. With the system dynamics, each subpopulation size may fluctuate over time. Note that this is an age-structured model, which implies that the fluctuation of each subpopulation size is also age dependent, i.e., many transition rates are age-dependent, such as β
The notation used in the model is summarized in Table 1. The system dynamics is described with the following PDEs. In mathematics, a PDE is a differential equation that contains unknown multivariate functions and their partial derivatives. This is in contrast to ordinary differential equations (ODEs), which deal with functions of a single variable and their derivatives. PDEs are used to formulate problems involving functions of multiple variables and can describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. For a general introduction to PDEs, we refer to [14].
Table 1 Notation in the age-structured compartment model and corresponding PDEs
\( \begin{array}{l}\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial \tau}\right)S\left(t,\tau \right)=-\beta \left(\tau \right)S\left(t,\tau \right){\displaystyle \underset{0}{\overset{A}{\int }}\left({I}_a\left(t,{\tau}^{\hbox{'}}\right)+{I}_s\left(t,{\tau}^{\hbox{'}}\right)\right)}d{\tau}^{\hbox{'}}+\left({r}_{PID}+\lambda \left(\tau \right)\right){I}_a\left(t,\tau \right)+\left({r}_s+\lambda \left(\tau \right)\right){I}_s\left(t,\tau \right)+\mu R\left(t,\tau \right);\hfill \\ {}\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial \tau}\right)E\left(t,\tau \right)=\beta \left(\tau \right)S\left(t,\tau \right){\displaystyle \underset{0}{\overset{A}{\int }}\left({I}_a\left(t,{\tau}^{\hbox{'}}\right)+{I}_s\left(t,{\tau}^{\hbox{'}}\right)\right)}d{\tau}^{\hbox{'}}-\gamma E\left(t,\tau \right);\hfill \\ {}\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial \tau}\right){I}_a\left(t,\tau \right)=f\gamma E\left(t,\tau \right)-\left({r}_a+{r}_{PID}+\lambda \left(\tau \right)\right){I}_a\left(t,\tau \right);\hfill \\ {}\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial \tau}\right){I}_s\left(t,\tau \right)=\left(1-f\right)\gamma E\left(t,\tau \right)-\left({r}_s+\lambda \left(\tau \right)\right){I}_s\left(t,\tau \right);\hfill \\ {}\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial \tau}\right)R\left(t,\tau \right)={r}_a{I}_a\left(t,\tau \right)-\mu R\left(t,\tau \right).\hfill \end{array} \)

Typically, a screening program estimates in advance the size of the cohort it can deal with based on its capacity and keeps its size relatively constant by synchronizing the recruitment and exit processes. Without loss of generality, we set the cohort size to be 1 at any time point, i.e., \( {\displaystyle \underset{0}{\overset{A}{\int }}\left(S\left(t,\tau \right)+E\left(t,\tau \right)+{I}_a\left(t,\tau \right)+{I}_s\left(t,\tau \right)+R\left(t,\tau \right)\right)}d\tau =1,\forall t. \)
Once the screening rate profile, as well as the boundary and initial conditions, are given, the state of the system can be determined for any given time point with the above PDEs. The screening and treatment costs are cumulated accordingly over the program duration. An optimal screening rate profile can then be identified to minimize the per-capita cumulative cost. We next present a parameter optimization problem subject to the PDE constraints.
A parameter optimization problem
We present an optimal screening strategy design problem for the generic screening program. In the objective function, we denote by c_s the unit-time cost of screening an individual for CT, by c_t the unit-time cost of treating an individual for CT with antibiotics, by c_PID the unit-time cost of treating an individual for acute PID, and by c_end the expected cost of treating an individual for possible future PID sequelae when she leaves the cohort at age A with undiagnosed asymptomatic CT. The expectation accounts for the probabilities of developing three major PID sequelae (i.e., chronic pelvic pain, ectopic pregnancy, and infertility) and their associated treatment costs. Given a screening rate profile λ(τ), we model four types of cumulative cost over a screening period of T as follows.
CT screening cost: \( {C}_s\left(\lambda \left(\tau \right)\right)={c}_sT{\displaystyle \underset{0}{\overset{A}{\int }}\lambda \left(\tau \right)}, \)
CT Treatment cost: \( {C}_t\left(\lambda \left(\tau \right)\right)={\displaystyle \underset{0}{\overset{T}{\int }}{\displaystyle \underset{0}{\overset{A}{\int }}{c}_t\left[\left({r}_s+\lambda \left(\tau \right)\right){I}_s\left(t,\tau \right)+\lambda \left(\tau \right){I}_a\left(t,\tau \right)\right]}} d\tau dt, \)
Acute PID treatment cost: \( {C}_{PID}\left(\lambda \left(\tau \right)\right)={\displaystyle \underset{0}{\overset{T}{\int }}{\displaystyle \underset{0}{\overset{A}{\int }}{c}_{PID}}}{r}_{PID}{I}_a\left(t,\tau \right) d\tau dt, \)
PID sequelae treatment cost: \( {C}_{end}\left(\lambda \left(\tau \right)\right)={\displaystyle \underset{0}{\overset{T}{\int }}{c}_{end}{I}_a\left(t,A\right)dt}. \)
Note that the screening cost applies to the entire cohort, whose size is assumed to be 1. We define the cumulative cost as C_total(λ(τ)) = C_s(λ(τ)) + C_t(λ(τ)) + C_PID(λ(τ)) + C_end(λ(τ)). The optimization problem is then formulated as \( \underset{\lambda \left(\tau \right)}{ \min }{C}_{total}\left(\lambda \left(\tau \right)\right) \) subject to the PDEs introduced above and the boundary and initial conditions. While attempting to minimize the per-capita cumulative cost, we also compare different strategies in terms of the terminal CT prevalence at time T, defined as \( {\displaystyle \underset{0}{\overset{A}{\int }}\left({I}_a\left(T,\tau \right)+{I}_s\left(T,\tau \right)\right)d\tau } \).
To solve this parameter optimization problem, we discretize it into a finite-dimensional nonlinear programming problem. Note that the discretization does not significantly affect the solution quality given that 1) many model parameters have only age-dependent point estimates; and 2) it is not feasible to modify the screening intensity in a continuous fashion. We divide the time interval [0, T) into N_t subintervals of equal step size h_t, i.e., N_t h_t = T. The end points of the subintervals are then \( {t}_0=0,{t}_1={h}_t,\dots, {t}_{N_t}=T \). We divide the age interval [0, A) into N_τ subintervals of equal step size h_τ, with end points \( {\tau}_0=0,{\tau}_1={h}_{\tau },\dots, {\tau}_{N_{\tau }}=A \). We use i and j to denote the indices for time and age, respectively, and use β^j and λ^j to denote the discretized values of β(τ) and λ(τ) at τ = jh_τ. The PDEs, for each i = 0, …, N_t - 1 and j = 0, …, N_τ - 1, are then discretized as follows.
$$ \begin{array}{l}\frac{S^{i+1,j}-{S}^{i,j}}{h_t}+\frac{S^{i,j+1}-{S}^{i,j}}{h_{\tau }}=-{\beta}^j{S}^{i,j}{\displaystyle \sum_{k=0}^{N_{\tau }-1}\left({I}_a^{i,k}+{I}_s^{i,k}\right)}+\left({r}_{PID}+{\lambda}^j\right){I}_a^{i,j}+\left({r}_s+{\lambda}^j\right){I}_s^{i,j}+\mu {R}^{i,j};\\ {}\frac{E^{i+1,j}-{E}^{i,j}}{h_t}+\frac{E^{i,j+1}-{E}^{i,j}}{h_{\tau }}={\beta}^j{S}^{i,j}{\displaystyle \sum_{k=0}^{N\tau -1}\left({I}_a^{i,k}+{I}_s^{i,k}\right)}-\gamma {E}^{i,j};\\ {}\frac{I_a^{i+1,j}-{I}_a^{i,j}}{h_t}+\frac{I_a^{i,j+1}-{I}_a^{i,j}}{h_{\tau }}=f\gamma {E}^{i,j}-\left({r}_a+{r}_{PID}+{\lambda}^j\right){I}_a^{i,j};\\ {}\frac{I_s^{i+1,j}-{I}_s^{i,j}}{h_t}+\frac{I_s^{i,j+1}-{I}_s^{i,j}}{h_{\tau }}=\left(1-f\right)\gamma {E}^{i,j}-\left({r}_s+{\lambda}^j\right){I}_s^{i,j};\\ {}\frac{R^{i+1,j}-{R}^{i,j}}{h_t}+\frac{R^{i,j+1}-{R}^{i,j}}{h_{\tau }}={r}_a{I}_a^{i,j}-\mu {R}^{i,j}.\end{array} $$
The objective function is discretized as: \( {C}_{total}\left({\lambda}^0,{\lambda}^1,\dots, {\lambda}^{N_{\tau }-1}\right)={c}_sT{\displaystyle \sum_{j=0}^{N_{\tau }-1}{\lambda}^j}+{\displaystyle \sum_{i=0}^{N_t-1}{\displaystyle \sum_{j=0}^{N_{\tau }-1}{c}_t\left[\left({r}_s+{\lambda}^{j}\right){I}_s^{i,j}+{\lambda}^{j}{I}_a^{i,j}\right]}+}{\displaystyle \sum_{i=0}^{N_t-1}{\displaystyle \sum_{j=0}^{N_{\tau }-1}{c}_{PID}{r}_{PID}{I}_a^{i,j}}+}{\displaystyle \sum_{i=0}^{N_t-1}{c}_{end}{I}_a^{i,{N}_{\tau }}}. \)
Given the two subinterval counts (N t and N τ ), the boundary and initial conditions, and the estimated CT infection risk for j = 1,…, N τ , we obtain a nonlinear optimization model with finitely many decision variables, linear objective function, and quadratic constraints. We use standard constrained nonlinear optimization solvers (e.g., active-set and interior point) available in the MATLAB Optimization Toolbox [4].
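The paper's computations use MATLAB's Optimization Toolbox; as an equivalent hedged sketch, the Python code below forward-simulates the discretized dynamics with an upwind (backward-in-age) step along the aging characteristic, a stable variant of the finite differences above that is exact for the transport term when h_t = h_τ, and accumulates C_total for a given profile λ^j, which a generic bound-constrained solver can then minimize. All rate, cost, and β(τ) values are placeholders, not the calibrated values of Table 2.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder parameters (the calibrated values live in Table 2 / [1, 18])
A, T = 11.0, 5.0                         # age span (14-25 mapped to [0, 11]) and horizon
Ntau, Nt = 132, 60
htau, ht = A / Ntau, T / Nt              # chosen so ht = htau: the upwind step
                                         # moves exactly one age cell per time step
gamma, f = 1.0 / 0.075, 0.8              # incubation rate, asymptomatic fraction
r_s, r_a, r_pid, mu = 12.0, 1.0, 0.1, 2.0
c_s, c_t, c_pid, c_end = 30.0, 50.0, 2000.0, 1500.0
tau = np.linspace(0.0, A, Ntau + 1)
beta = 2.0 * np.exp(-0.1 * tau)          # hypothetical age-specific risk profile

def step(X, rhs):
    """Upwind update of (d/dt + d/dtau)X = rhs; X[0] is reset by the caller."""
    Xn = np.empty_like(X)
    Xn[1:] = X[1:] - (ht / htau) * (X[1:] - X[:-1]) + ht * rhs[1:]
    return Xn

def total_cost(lam):
    S = np.full(Ntau + 1, 1.0 / A)       # initially uninfected, uniform in age
    E = np.zeros(Ntau + 1); Ia = np.zeros(Ntau + 1)
    Is = np.zeros(Ntau + 1); R = np.zeros(Ntau + 1)
    cost = c_s * T * np.trapz(lam, dx=htau)             # screening cost C_s
    for _ in range(Nt):
        tot = np.trapz(Ia + Is, dx=htau)                # total infected population
        rS = -beta * S * tot + (r_pid + lam) * Ia + (r_s + lam) * Is + mu * R
        rE = beta * S * tot - gamma * E
        rIa = f * gamma * E - (r_a + r_pid + lam) * Ia
        rIs = (1.0 - f) * gamma * E - (r_s + lam) * Is
        rR = r_a * Ia - mu * R
        S, E, Ia, Is, R = (step(S, rS), step(E, rE), step(Ia, rIa),
                           step(Is, rIs), step(R, rR))
        S[0] = 1.0 / A                                  # recruits enter susceptible
        E[0] = Ia[0] = Is[0] = R[0] = 0.0
        cost += ht * np.trapz(c_t * ((r_s + lam) * Is + lam * Ia)
                              + c_pid * r_pid * Ia, dx=htau)   # C_t + C_PID
        cost += ht * c_end * Ia[-1]                     # C_end: exits with CT
    return cost

res = minimize(total_cost, x0=np.ones(Ntau + 1), method="L-BFGS-B",
               bounds=[(0.0, 12.0)] * (Ntau + 1))  # optimal lambda profile in res.x
```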
A special case for cohorts with uniform age distribution
In this section, we consider a special case of the above PDE model that is better suited to the real practice of a screening program. In practice, a screening program often only recruits those of age 0 (i.e., the smallest age of concern for CT infection) and terminates CT screening for those who reach A (i.e., the largest age of concern for CT infection). A general belief is that the number of infected individuals at age 0 is negligible. That is, for any t, we have S(t,0) = p, where p denotes the rate at which new participants enter the cohort, and E(t,0) = I_a(t,0) = I_s(t,0) = R(t,0) = 0. We further assume that the age of the studied open cohort follows a uniform distribution and term such a cohort a uniformly aged cohort. That is, for any t, we have S(t,τ) + E(t,τ) + I_a(t,τ) + I_s(t,τ) + R(t,τ) = p = 1/A for τ ∈ (0, A]. Hence, we can align the age domain with the time domain and thus reduce the age-structured PDE model to a time-invariant ODE model with age-specific CT infection risks. We term this model ODE_1. Since the screening strategy design is only considered up to age A, β(τ) for τ ≥ A can be arbitrarily specified. To solve the parameter optimization problem for ODE_1, we again resort to discretization. In the following, we further study this special case with a smaller set of age-independent screening strategies, which are more implementable in practice.
Optimization of age-independent screening policies
Our study in this section is inspired by the current CT screening recommendations. The CDC guideline recommends annual CT screening for women under age 25 but does not specify the initial screening age [34]. We consider policies structurally similar to the current CDC recommendations. The considered policies recommend starting CT screening for women at some age between 0 and A and continuing the screening until A with a constant frequency. Hence, the optimization problem is intended to determine an optimal screening initiation age and an optimal screening rate. Note that Teng et al. [12] studied the problem with a fixed screening initiation age and only optimized the screening rate over a fixed age range. Their problem is a parameter optimization problem with only one decision variable and assumes a constant infection risk. For each screening initiation age \( \widehat{\tau} \), we have a similar parameter optimization problem, but with age-dependent infection risk. We use a standard derivative-free line search algorithm in MATLAB to solve the inner problem for each given screening initiation age. We apply one-dimensional explicit enumeration to select the optimal screening initiation age.
We further our study of this set of age-independent screening policies by considering a simplified case where the CT infection risk is assumed to be constant within the interval before screening initiation and within the interval after it, respectively. With this simplification, the CT dynamical system is approximated with a two-part, age-independent, time-invariant coupled system. We expect that solving the optimization problem on this two-part coupled system decreases the computational time while only slightly reducing solution quality. Figure 2 illustrates a 10-compartment model for the two-part system. Given screening initiation age \( \widehat{\tau} \), we divide the interval [0, A) into two subintervals [0, \( \widehat{\tau} \)) and [\( \widehat{\tau} \), A). We use S0, E0, Ia0, Is0, and R0 to denote the compartments for age range [0, \( \widehat{\tau} \)), and use S, E, I_a, I_s, and R to denote the compartments for age range [\( \widehat{\tau} \), A). The disease transmission occurs in both age ranges, while screening is administered only to [\( \widehat{\tau} \), A). With the assumption of two constant CT infection risks, we use β0 and β to denote the risks in the two age ranges, respectively. All cost and other transition parameters remain the same as introduced earlier.
An age-independent SEIRS model with two constant CT infection rates over the periods before and after screening initiation. This is a 10-compartment model with two portions. The upper portion captures the disease progression without screening from age 0 to the age determined to start screening. The lower portion captures the disease progression with screening from that age to age A. The solid and dashed lines are used in the same way as in Fig. 1 to indicate the dynamics. The dotted lines indicate the necessary vital dynamics with population aging
To formulate the optimization problem, we denote by M0 and M the total populations in the two age ranges. With a uniformly aged cohort, we have M0/M = \( \widehat{\tau} \)/(A - \( \widehat{\tau} \)) and M0 + M = 1 for any given \( \widehat{\tau} \), so we can uniquely determine the values of M0 and M. With the above notation, we introduce model ODE_2 as follows. For [0, \( \widehat{\tau} \)), the system dynamics is governed by
$$ \begin{array}{l}\frac{d{S}_0}{d\tau }=-{\beta}_0{S}_0\frac{\left({I}_{a0}+{I}_{s0}\right)}{M_0}+{r}_{PID}{I}_{a0}+{r}_s{I}_{s0}+\mu {R}_0+p-\frac{S_0}{M_0}p;\\ {}\frac{d{E}_0}{d\tau }={\beta}_0{S}_0\frac{\left({I}_{a0}+{I}_{s0}\right)}{M_0}-\gamma {E}_0-\frac{E_0}{M_0}p;\\ {}\frac{d{I}_{a0}}{d\tau }=f\gamma {E}_0-\left({r}_{PID}+{r}_a\right){I}_{a0}-\frac{I_{a0}}{M_0}p;\\ {}\frac{d{I}_{s0}}{d\tau }=\left(1-f\right)\gamma {E}_0-{r}_s{I}_{s0}-\frac{I_{s0}}{M_0}p;\\ {}\frac{d{R}_0}{d\tau }={r}_a{I}_{a0}-\mu {R}_0-\frac{R_0}{M_0}p.\end{array} $$
For age range [\( \widehat{\tau} \), A), the system dynamics is governed by
$$ \begin{array}{l}\frac{dS}{d\tau }=-\beta S\frac{\left({I}_a+{I}_s\right)}{M}+\left({r}_{PID}+\lambda \right){I}_a+\left({r}_s+\lambda \right){I}_s+\mu R+\frac{S_0}{M_0}p-\frac{S}{M}p;\\ {}\frac{dE}{d\tau }=\beta S\frac{\left({I}_a+{I}_s\right)}{M}-\gamma E+\frac{E_0}{M_0}p-\frac{E}{M}p;\\ {}\frac{d{I}_a}{d\tau }=f\gamma E-\left({r}_{PID}+{r}_a+\lambda \right){I}_a+\frac{I_{a0}}{M_0}p-\frac{I_a}{M}p;\\ {}\frac{d{I}_s}{d\tau }=\left(1-f\right)\gamma E-\left({r}_s+\lambda \right){I}_s+\frac{I_{s0}}{M_0}p-\frac{I_s}{M}p;\\ {}\frac{dR}{d\tau }={r}_a{I}_a-\mu R+\frac{R_0}{M_0}p-\frac{R}{M}p.\end{array} $$
We present the objective function with respect to the screening initiation age \( \widehat{\tau} \) and constant screening rate λ.
CT screening cost: \( {C}_s\left(\widehat{\tau},\lambda \right)={c}_s\lambda M\left(A-\widehat{\tau}\right), \)
CT treatment cost: \( {C}_t\left(\widehat{\tau},\lambda \right)={\displaystyle \underset{0}{\overset{A}{\int }}{c}_t\left[{r}_s\left({I}_{s0}+{I}_s\right)+\lambda \left({I}_s+{I}_a\right)\right]d\tau }, \)
Acute PID treatment cost: \( {C}_{PID}\left(\widehat{\tau},\lambda \right)={\displaystyle \underset{0}{\overset{A}{\int }}{c}_{PID}{r}_{PID}\left({I}_{a0}+{I}_a\right)d\tau }, \)
PID sequelae treatment cost: \( {C}_{end}\left(\widehat{\tau},\lambda \right)={\displaystyle \underset{0}{\overset{A}{\int }}{c}_{end}p\frac{I_a}{M}d\tau }, \)
Per-capita cumulative cost: \( {C}_{total}\left(\widehat{\tau},\lambda \right)={C}_s\left(\widehat{\tau},\lambda \right)+{C}_t\left(\widehat{\tau},\lambda \right)+{C}_{PID}\left(\widehat{\tau},\lambda \right)+{C}_{end}\left(\widehat{\tau},\lambda \right). \)
The optimization problem is thus presented as \( \underset{\widehat{\tau},\lambda }{ \min }{C}_{total}\left(\widehat{\tau},\lambda \right) \) subject to ODE_2.
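To make the search concrete, the Python sketch below integrates ODE_2 with SciPy and returns C_total(τ̂, λ); the paper's own implementation is in MATLAB, and all numerical parameter values here are hypothetical placeholders (the calibrated values are in Table 2). A running cost state is appended to the ten compartments so that the integrals C_t, C_PID, and C_end accumulate alongside the dynamics; the sketch assumes 0 < τ̂ < A so that M0 > 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 11.0; p = 1.0 / A                     # age span; recruitment rate (cohort size 1)
gamma, f = 1.0 / 0.075, 0.8               # placeholder rates, as in Table 2
r_s, r_a, r_pid, mu = 12.0, 1.0, 0.1, 2.0
c_s, c_t, c_pid, c_end = 30.0, 50.0, 2000.0, 1500.0
beta0, beta = 2.0, 1.2                    # constant risks before/after tau_hat

def total_cost(tau_hat, lam):
    """C_total(tau_hat, lam) from ODE_2; requires 0 < tau_hat < A."""
    M0 = tau_hat / A; M = 1.0 - M0
    def rhs(t, y):
        S0, E0, Ia0, Is0, R0, S, E, Ia, Is, R, _ = y
        n0 = beta0 * S0 * (Ia0 + Is0) / M0   # new infections, unscreened range
        n1 = beta * S * (Ia + Is) / M        # new infections, screened range
        return [
            -n0 + r_pid*Ia0 + r_s*Is0 + mu*R0 + p - S0/M0*p,
            n0 - gamma*E0 - E0/M0*p,
            f*gamma*E0 - (r_pid + r_a)*Ia0 - Ia0/M0*p,
            (1-f)*gamma*E0 - r_s*Is0 - Is0/M0*p,
            r_a*Ia0 - mu*R0 - R0/M0*p,
            -n1 + (r_pid+lam)*Ia + (r_s+lam)*Is + mu*R + S0/M0*p - S/M*p,
            n1 - gamma*E + E0/M0*p - E/M*p,
            f*gamma*E - (r_pid + r_a + lam)*Ia + Ia0/M0*p - Ia/M*p,
            (1-f)*gamma*E - (r_s+lam)*Is + Is0/M0*p - Is/M*p,
            r_a*Ia - mu*R + R0/M0*p - R/M*p,
            # running cost: integrands of C_t, C_PID, and C_end
            c_t*(r_s*(Is0 + Is) + lam*(Is + Ia))
            + c_pid*r_pid*(Ia0 + Ia) + c_end*p*Ia/M,
        ]
    y0 = [M0, 0, 0, 0, 0, M, 0, 0, 0, 0, 0.0]   # everyone susceptible initially
    sol = solve_ivp(rhs, (0.0, A), y0, rtol=1e-8, atol=1e-10)
    return c_s * lam * M * (A - tau_hat) + sol.y[-1, -1]
```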
With any given screening initiation age \( \widehat{\tau}\in \left[0,A\right) \), β0 and β become known. Hence, we can uniquely set the initial condition on S(\( \widehat{\tau} \)), E(\( \widehat{\tau} \)), I a (\( \widehat{\tau} \)), I s (\( \widehat{\tau} \)), and R(\( \widehat{\tau} \)). We also determine the cost accumulated from 0 to \( \widehat{\tau} \). Then we can reduce the optimization problem to a parameter optimization problem based on the 5-compartment ODE model for \( \tau \in \left[\widehat{\tau},A\right) \), for which we can adapt the optimization method proposed in Teng et al. [12]. That is, for any \( \widehat{\tau} \), the gradient of the objective function, i.e., \( \frac{d{C}_{total}\left(\widehat{\tau},\lambda \right)}{d\lambda } \), can be derived with a cubic interpolation method. We apply a standard linear search algorithm with derivatives in MATLAB to solve the inner problem given each screening initiation age. We then apply one-dimensional explicit enumeration to select the optimal screening initiation age.
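The two-level search described above then reduces to a few lines: a bounded one-dimensional minimization over λ for each candidate initiation age on a monthly grid, followed by enumeration over the ages. This sketch reuses the hypothetical total_cost and A defined above, starting at one month to keep M0 > 0; the upper bound on λ is likewise an assumption.

```python
from scipy.optimize import minimize_scalar

def optimize_policy(months_per_year=12, lam_max=12.0):
    best = (float("inf"), None, None)            # (cost, tau_hat, lam)
    for m in range(1, int(A * months_per_year)): # monthly candidate initiation ages
        tau_hat = m / months_per_year
        res = minimize_scalar(lambda lam: total_cost(tau_hat, lam),
                              bounds=(0.0, lam_max), method="bounded")
        if res.fun < best[0]:
            best = (res.fun, tau_hat, res.x)
    return best  # minimal per-capita cost, tau_hat*, lambda*
```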
We focus on the special case of the uniformly aged cohort for our proof-of-concept numerical studies. We acquired model parameters from [1, 18] (Table 2). We estimated the age-dependent infection risk β(τ) based on a longitudinal study of CT infection among recruited inner-city young women in a Midwest U.S. city [14] (Fig. 3). We set the screening initiation and termination ages to be 14 and 25, respectively, largely according to the CDC recommendation on universal screening. Note that some of the work in the existing literature has conducted economic studies on annual and biannual universal screening beyond age 25. It is clear that we can extend the upper bound of the integrations (i.e., increase A) to accommodate this change. We leave it to our future study.
Table 2 Parameters pertaining to costs and disease transition rates
Initial condition for model ODE_1
We study the three parameter optimization problems presented earlier. In summary, the first problem aims to identify an optimal age-dependent screening strategy based on the time-invariant ODE model with an age-specific CT infection risk profile (i.e., ODE_1). The second problem aims to identify the screening initiation age and constant screening rate thereafter, again based on ODE_1. The third problem aims to make the same set of decisions as the second problem but the problem is based on the two-part time-invariant ODE model with a constant CT infection risk over each of the two age ranges (i.e., ODE_2). We term the optimal screening strategies identified in the three optimization problems S1, S2, and S3 in that order. We compare the three optimal strategies both in per-capita cumulative cost and terminal CT prevalence. We also report a comparative study with no screening and with the current recommendations.
For S1, the screening rate profile is represented as a multi-step function with identical step size depending on the maximal allowable frequency of strategy update. We chose to update the screening strategy either yearly or monthly. We report the optimal strategies in Fig. 4.
Optimal age-dependent screening strategy (S1)
For S2, we present the optimal screening rate with all possible screening initiation ages (every month between 0 and A), as well as the associated per-capita cumulative cost and terminal prevalence in Fig. 5. The smallest unit for the screening initiation age is one month. The strategy with the minimum cost is the one that starts the screening for every individual when she reaches the 6th month after the 14th birthday. The screening rate is 1.511 times per year, which implies that an individual should test for CT roughly every 8 months.
Screening rate, per-capita cumulative cost, and terminal prevalence of strategy S2 for each possible screening initiation age
For S3, we present the optimal screening rates with all possible screening initiation ages, as well as the associated per-capita cumulative cost and terminal prevalence in Fig. 6. The strategy with the minimum cost is the one that starts the screening for every individual when she reaches the 4th month after the 14th birthday. The screening rate is 1.499 times per year.
In Table 3, we compare the three strategies. First, the three studied strategies all outperform the strategy of no screening and the current CDC recommendations in both per-capita cumulative cost and terminal CT prevalence. Second, the comparison indicates the superiority of the age-dependent CT screening strategy (S1 vs. S2) and quantifies its potential impact on screening practice. Finally, the comparison shows comparable solution quality between S2 and S3, suggesting the strategy design may not be sensitive to the quantification of age-dependent CT infection risks. In terms of computation time, on a PC with a 2.33GHz Intel Core 2 Duo Processor and 2GB RAM, the computation time is about 3.5 s for identifying S3, compared to 13 s for S2. This is mainly due to the fact that the gradient is available to the one-dimensional line search for S3 but not for S2.
Table 3 Comparison between the screening strategies
Overall, our numerical studies suggest that considering age-dependency in the screening strategy design is more cost-saving than the currently recommended strategies. Our results further offer insights into various aspects of the design. With the study on S1, the results suggest that the age-dependency of the screening rate in an optimal screening policy roughly coincides with the age-dependency of the CT infection risk. That is, the screening rate should be intensified around ages 16-18, the age range where the infection risk is highest. Compared to the current recommendations, biannual screening or screening every 8 months is more likely to be optimal from the societal cost-saving viewpoint. With the study on S2, the results suggest that it may be beneficial to initiate screening earlier, at least for the tested inner-city cohort, which has relatively high CT prevalence. This also suggests that it is important to consider the potential costs incurred by the PID sequelae. Thus, it is important to provide accurate estimates of the probabilities of developing the sequelae in any strategy design activity.
Comparing S2 to S1 suggests that constant rate screening is likely to be acceptable given the small increase in both outcomes. Comparing S3 to S2 suggests that accurate quantification of age-specific CT infection risks may not be essential to the design of strategies with constant screening rate. Note that almost all the existing work largely relies on relatively crude estimates due to data scarcity and ethical concerns [31]. Finally, the fast computations suggest that it may be appealing to expand our models to incorporate high-level population heterogeneities.
In this research, we present a series of parameter optimization models to investigate age-dependent screening strategies for controlling chlamydia infection among young women. Through our modeling research, we attempt to inform the design of optimal population-based CT screening strategies from a societal cost-saving perspective while ensuring a sufficient level of practicality. For the analysis, we extend a widely used SEIRS model to incorporate an age-dependent screening rate profile and apply a gradient-based line search algorithm for ease of numerical optimization.
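For orientation, the sketch below shows the general shape of a constant-rate-screening SEIRS system of the kind our models extend; it is a minimal illustration only, and every parameter name and value in it is a hypothetical placeholder rather than one of this study's calibrated inputs.

import numpy as np
from scipy.integrate import solve_ivp

def seirs(t, y, beta, sigma, gamma, omega, screen_rate):
    # S: susceptible, E: exposed, I: infectious, R: recovered (proportions)
    S, E, I, R = y
    infection = beta * S * I                      # mass-action transmission
    dS = -infection + omega * R                   # waning immunity returns R to S
    dE = infection - sigma * E                    # latency
    dI = sigma * E - (gamma + screen_rate) * I    # screening clears infectious cases faster
    dR = (gamma + screen_rate) * I - omega * R
    return [dS, dE, dI, dR]

# Hypothetical parameters: transmission 0.15/day, 10-day latency, 300-day natural
# clearance, 180-day immunity, and a screening rate of 1.5 tests/person/year.
params = (0.15, 1 / 10, 1 / 300, 1 / 180, 1.5 / 365)
sol = solve_ivp(seirs, (0, 5 * 365), [0.90, 0.02, 0.08, 0.00], args=params)
print("terminal prevalence:", sol.y[2, -1])

An age-dependent strategy then amounts to letting screen_rate vary with age, which is the profile the parameter optimization searches over.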
Our future research will mainly focus on detailed model development. For example, it is evident that the risks of first-time infection and subsequent reinfection differ due to partial protective immunity against CT [11, 37]. We will formulate parameter optimization models that differentiate individuals with first-time infection from those with reinfection. We will also consider different patterns in ongoing sexual partnership. We plan to adapt the pair compartment model of Heijne et al. [30], which captures sexual partnership duration and reinfection. The investigation into sexual partnership and effective management of sex partners motivates us to explore the use of stochastic network models (e.g., [5, 32]), which provide added flexibility in modeling sexual partnership networks of complex structure. We will thereby develop optimization models based on stochastic network models for CT transmission among heterogeneous sex partners. In addition, we will model programmatic adherence and testing accuracy to make our strategy design more suitable for real-world CT infection control. Other future research directions include the design of more efficient parameter optimization solution methods, systematic literature review for model parameter estimation, and sensitivity analyses on the model parameters.
Althaus CL, Heijne JC, Roellin A, Low N. Transmission dynamics of chlamydia trachomatis affect the impact of screening programmes. Epidemics. 2010;2(3):123–31.
Anderson RM, May RM. Infectious disease of humans: dynamics and control. Oxford: Oxford University Press; 1991.
Arno JN, Katz BP, McBride R, Carty GA, Batteiger BE, Caine VA, et al. Age and clinical immunity to infections with chlamydia trachomatis. Sex Transm Dis. 1994;21(1):47–52.
Bailey NT. The mathematical theory of infectious diseases and its applications. 2nd ed. London: Hafner Press/MacMillian Pub; 1975.
Batteiger BE, Xu F, Johnson RE, Rekart ML. Protective immunity to chlamydia trachomatis genital infection: evidence from human studies. J Infect Dis. 2010;201 Suppl 2:S178–89.
Britton TF, Delisle S, Fine D. STDs and family planning clinics: a regional program for chlamydia control that works. Am J Gynecol Health. 1992;6(3):80–7.
Brunham RC, Rey-Ladino J. Immunology of chlamydia infection: implications for a chlamydia trachomatis vaccine. Nat Rev Immunol. 2005;5(2):149–61.
Buhang H, Skjeldestad FE, Halvorsen LE, Dalen A. Should asymptomatic patients be tested for chlamydia trachomatis in general practice? Br J Gen Pract. 1990;40(333):142–5.
Cates Jr W, Wasserheit JN. Genital chlamydial infections: epidemiology and reproductive sequelae. Am J Obstet Gynecol. 1991;164(6 pt 2):1771–81.
Chlamydia surveillance data [http://www.cdc.gov/std/stats10/chlamydia.htm]. 2010 Sexually transmitted disease surveillance, Centers for Disease Control and Prevention (CDC). Retrieved April 26, 2012.
Constrained optimization [http://www.mathworks.com/help/optim/constrained-optimization.html]. MathWorks documentation center, optimization toolbox, nonlinear optimization, constrained optimization website. Retrieved March 27, 2012.
d'Onofrio A, Manfredi P, Salinelli E. Vaccinating behaviour, information, and the dynamics of SIR vaccine preventable diseases. Theor Popul Biol. 2007;71(3):301–17.
Datta SD, Sternberg M, Johnson RE, Berman S, Papp JR, McQuillan G, et al. Gonorrhea and chlamydia in the United States among persons 14 to 38 years of age, 1999 to 2002. Ann Intern Med. 2007;147(2):89–96.
Evans LC. Partial differential equations. Providence: American Mathematical Society; 1998.
Halvorsen LE, Skjeldestad FE, Mecsei R, Dalen A. Chlamydia trachomatis i prover fra cervix uteri blant pasienter i allmennpraksis [Chlamydia trachomatis in cervix uteri among patients in general practice. English summary]. Tidsskr Nor Laegeforen [J Nor Med Assoc]. 1988;108(30):2706–8.
Holland WW, Steward S, editors. Screening in disease prevention: what works? Oxford: Radcliffe Publishing; 2005.
Howell MR, Quinn TC, Gaydos CA. Screening for chlamydia trachomatis in asymptomatic women attending family planning clinics: a cost-effectiveness analysis of three strategies. Ann Intern Med. 1998;128(4):277–84.
Hu D, Hook 3rd EW, Goldie SJ. Screening for chlamydia trachomatis in women 15 to 29 years of age: a cost-effectiveness analysis. Ann Intern Med. 2004;141(7):501–13.
Hu D, Hook 3rd EW, Goldie SJ. The impact of natural history parameters on the cost-effectiveness of chlamydia trachomatis screening strategies. Sex Transm Dis. 2006;33(7):428–36.
Keeling MJ, Rohani P. Modeling infectious diseases in humans and animals. Princeton: Princeton University Press; 2008.
Kretzschmar M, Welte R, van den Hoek A, Postma MJ. Comparative model-based analysis of screening programs for chlamydia trachomatis infections. Am J Epidemiol. 2001;153(1):90–101.
Lau CY, Qureshi AK. Azithromycin versus doxycycline for genital chlamydial infections: a meta-analysis of randomized clinical trials. Sex Transm Dis. 2002;29(9):497–502.
Low N, Bender N, Nartey L, Shang A, Stephenson JM. Effectiveness of chlamydia screening: systematic review. Int J Epidemiol. 2009;38(2):435–48.
Low N, McCarthy A, Macleod J, Salisbury C, Campbell R, Roberts TE, et al. Epidemiological, social, diagnostic and economic evaluation of population screening for genital chlamydial infection. Health Technol Assess. 2007;11(8):1–165. iii–iv, ix–xii.
Martin CF, Allen LJS, Stamp MS. An analysis of the transmission of chlamydia in a closed population. J Differ Equ Appl. 1996;2(1):1–29.
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention. Reported STDs in the United States. 2012 national data for chlamydia, gonorrhea, and syphilis. CDC fact sheet. Atlanta: Centers for Disease Control and Prevention; 2014. Retrieved from http://www.cdc.gov/nchhstp/newsroom/docs/STD-Trends-508.pdf.
Regan DG, Wilson DP, Hocking JS. Coverage is the key for effective screening of Chlamydia trachomatis in Australia. J Infect Dis. 2008;198(3):349–58.
Roberts TE, Robinson S, Barton P, Bryan S, Low N, Chlamydia Screening Studies (ClaSS) Group. Screening for chlamydia trachomatis: a systematic review of the economic evaluations and modeling. Sex Transm Infect. 2006;82(3):193–200.
Sharomi O, Gumel AB. Re-infection-induced backward bifurcation in the transmission dynamics of chlamydia trachomatis. J Math Anal Appl. 2009;356(1):99–118.
Teng Y, Han L, Tu W, Kong N. Optimizing coverage for a chlamydia trachomatis screening program. In: Hadjicostis C, editor. Proceedings of Institute of Electrical and Electronics Engineers (IEEE) 7th International Conference on Automation Science and Engineering. Trieste: IEEE Publishing; 2011.
Teng Y, Kong N, Tu W. Estimating age-dependent per-encounter chlamydia trachomatis acquisition risk via a Markov-based state-transition model. J Clin Bioinf. 2014;4:7.
Tu W, Batteiger BE, Wiehe S, Ofner S, Van Der Pol B, Katz BP, et al. Time from first intercourse to first sexually transmitted infection diagnosis among adolescent women. Arch Pediatr Adolesc Med. 2009;163(12):1106–11.
Van Der Pol B, Kraft CS, William JA. Use of an adaptation of a commercially available PCR assay aimed at diagnosis of chlamydia and gonorrhea to detect Trichomonas vaginalis in urogenital specimens. J Clin Microbiol. 2006;44(2):366–73.
Vynnycky E, White RG, editors. An introduction to infectious disease modelling. Oxford: Oxford University Press; 2010.
Welte R, Kretzschmar M, Leidl R, van den Hoek A, Jager JC, Postma MJ. Cost-effectiveness of screening programs for chlamydia trachomatis: a population-based dynamic approach. Sex Transm Dis. 2000;27(9):518–29.
Westrom L, Eschenbach D. Pelvic inflammatory disease. In: Holmes KK, Mardh PA, Sparling PF, editors. Sexually transmitted diseases. 3rd ed. New York: McGraw-Hill; 1999. p. 451–66.
Workowski KA, Berman S, Division of STD Prevention. Sexually transmitted disease treatment guidelines, 2010, MMWR recommendations and reports 2010, Vol. 59, No. RR-12. Atlanta: National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, Centers for Disease Control and Prevention; 2010.
World Health Organization. Global prevalence and incidence of selected curable sexually transmitted infections overview and estimates. Retrieved from http://www.who.int/hiv/pub/sti/who_hiv_aids_2001.02.pdf. Geneva: 2001.
The data for CT age-dependent risk estimation was originally collected through the Young Women project which was supported by grant R01 HD042404 from the US National Institutes of Health.
Futures Institute, 41-A New London Tpke, Glastonbury, Connecticut, 06033, USA
Yu Teng
Weldon School of Biomedical Engineering, Purdue University, 206 S. Martin Jischke Dr, West Lafayette, Indiana, 47907, USA
Nan Kong
Department of Biostatistics, Indiana University School of Medicine, 410 West 10th St, Suite 3000, Indianapolis, Indiana, 46202, USA
Wanzhu Tu
Correspondence to Nan Kong.
YT carried out the mathematical model development, the CT age-dependent risk estimation, and the simulation-based economic studies. NK supervised YT's PhD thesis research on CT infection screening economic study and took the main responsibility of drafting the manuscript. WT provided the initial data for CT age-dependent risk estimation, his expertise on mathematical model development, and his comments on the manuscript. All authors read and approved the final manuscript.
Teng, Y., Kong, N. & Tu, W. Optimizing strategies for population-based chlamydia infection screening among young women: an age-structured system dynamics approach. BMC Public Health 15, 639 (2015) doi:10.1186/s12889-015-1975-z
Age-structured system dynamics
Infectious disease modeling
Disease screening
Longest Prime Sums
There are special sets S of primes such that \$\sum\limits_{p\in S}\frac1{p-1}=1\$. In this challenge, your goal is to find the largest possible set of primes that satisfies this condition.
Input: None
Output: A set of primes which satisfies the condition above.
This challenge is a code-challenge, where your score is the size of the set you output.
Examples of Valid Outputs and their scores
{2} - 1
{3,5,7,13} - 4
{3,5,7,19,37} - 5
{3,5,7,29,31,71} - 6
Important: This challenge doesn't require any code, simply the set. However, it would be helpful to see how you generated your set, which probably involves some coding.
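For what it's worth, the small examples above can be recovered with a short brute-force search. This is only a sketch (it assumes sympy for the prime list), and it becomes hopeless long before the scores posted in the answers below:

from fractions import Fraction
from itertools import combinations
from sympy import primerange

primes = list(primerange(2, 80))
for r in range(1, 7):
    for c in combinations(primes, r):
        # exact rational arithmetic, so no floating-point issues
        if sum(Fraction(1, p - 1) for p in c) == 1:
            print(r, c)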
Edit: Found the math.stackexchange post that inspired this thanks to @Arnauld
math code-challenge number-theory primes
Don Thousand
\$\begingroup\$ This challenge doesn't require any code - would this be better asked in puzzling.SE? \$\endgroup\$ – Digital Trauma Dec 11 '19 at 23:09
\$\begingroup\$ @DonThousand Is it correct that you require the solution to be a set (e.g. all elements are distinct)? Or is a list of primes (with recurring values) also acceptable? I (possibly wrongly) assumed in my answer that repetitions were allowed. If not, I'll change my answer. \$\endgroup\$ – agtoever Dec 12 '19 at 11:30
\$\begingroup\$ @agtoever It's a set. It must be distinct. I see you've already made that change, so that's good. \$\endgroup\$ – Don Thousand Dec 12 '19 at 14:35
\$\begingroup\$ I agree with @DigitalTrauma that this should be on puzzling.SE. They specifically have a 'no-computers' tag which to me implies that puzzles requiring computers are acceptable there. \$\endgroup\$ – Sam Dean Dec 12 '19 at 15:55
\$\begingroup\$ @SamDean No, that's not true. Open ended challenges like this are strictly off topic there. Problems must have a single definite answer to be posted there. Please read their FAQ \$\endgroup\$ – Don Thousand Dec 12 '19 at 15:58
Score: 100 → 8605
I used an algorithm that starts with one solution and repeatedly tries to split a prime \$p\$ in the solution into two other primes \$q_1\$ and \$q_2\$ that satisfy \$\frac1{p-1} = \frac1{q_1-1}+\frac1{q_2-1}\$.
It is known (and can be quickly checked) that the positive integer solutions to \$\frac1n = \frac1x + \frac1y\$ are in one-to-one correspondence with factorizations \$n^2 = f_1 f_2\$, the correspondence being given by \$x = n + f_1\$, \$y = n + f_2\$. We can search through the factorizations of \$(p-1)^2\$ to see if any of them yields a solution where both new denominators \$x,y\$ are one less than a prime; if so, then \$p\$ can be replaced by \$q_1=x+1\$, \$q_2=y+1\$. (If there were multiple such factorizations, I used the one where the minimum of \$q_1\$ and \$q_2\$ was smallest.)
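For readers who prefer Python, here is a compact sketch of that split test (assuming sympy; the Mathematica code I actually ran follows below):

from sympy import divisors, isprime

def splits(p):
    # all (q1, q2) with q1 < q2 prime and 1/(p-1) == 1/(q1-1) + 1/(q2-1)
    n = p - 1
    out = []
    for f1 in divisors(n * n):
        f2 = (n * n) // f1
        if f1 >= f2:
            break  # f1 == f2 would force q1 == q2, which a set cannot use
        q1, q2 = n + f1 + 1, n + f2 + 1  # x = n + f1, y = n + f2, q = x + 1
        if isprime(q1) and isprime(q2):
            out.append((q1, q2))
    return out

print(splits(29))  # [(31, 421), (37, 127)]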
I started with this seed solution of length 44:
seed = {3, 7, 11, 23, 31, 43, 47, 67, 71, 79, 103, 131, 139, 191, 211, 239, 331, 419, 443, 463, 547, 571, 599, 647, 691, 859, 911, 967, 1103, 1327, 1483, 1871, 2003, 2311, 2347, 2731, 3191, 3307, 3911, 4003, 4931, 6007, 6091, 8779}
This seed was found using an Egyptian fraction solver that a former research student of mine, Yue Shi, coded in ChezScheme. (The primes \$p\$ involved all have the property that \$p-1\$ is the product of distinct primes less than 30, which increased the likelihood of Shi's program finding a solution.)
The following Mathematica code continually updates a current solution by looking at its primes one by one, trying to split them in the manner described above. (The number 1000 in the third line is an arbitrary stopping point; in principle one could let the algorithm run forever.)
solution = seed;
j = 1; (* j is the index of the element of the solution that we'll try to split *)
While[j <= 1000 && j <= Length[solution],
currentP = solution[[j]];
allDivisors = Divisors[(currentP - 1)^2];
allFactorizations = {#, (currentP - 1)^2/#} & /@
Take[allDivisors, Floor[Length[allDivisors]/2]];
allSplits = currentP + allFactorizations;
goodFactorizations = Select[allSplits,
And @@ PrimeQ[#] && Intersection[#, solution] == {} &];
If[goodFactorizations == {},
j++,
solution = Union[Complement[solution, {currentP}], First@goodFactorizations]
]
]
The code above yields a solution of length 4126, whose largest element is about \$8.7\times10^{20}\$; by the end, it was factoring integers \$(p-1)^2\$ of size about \$8.8\times10^{21}\$.
In practice, I ran the code several times, using the previous output as the next seed in each case and increasing the cutoff for j each time; this allowed for the recovery of some small prime splits that had become non-redundant thanks to previous splitting, which somewhat mitigated the size of the integers the algorithm factored.
The final solution, which took about an hour to obtain, is too long to fit in this answer but has been posted online. It has length 8605 and largest element about \$4.62\times10^{19}\$.
Various runs of this code consistently found that the length of the solution was about 3–4 times as long as the set of primes that had been examined for splitting. In other words, the solution was growing much faster than the code scanned through the initial elements. It seems likely that this behavior would continue for a long time, yielding some gargantuan solutions.
Greg Martin
\$\begingroup\$ Reading this discouraged me from competing, because you know your stuff so well \$\endgroup\$ – Mark Jeronimus Dec 12 '19 at 16:35
\$\begingroup\$ Oh I don't want anyone to be discouraged! Note that agtoever blew my original answer out of the water :) \$\endgroup\$ – Greg Martin Dec 12 '19 at 19:17
\$\begingroup\$ @GregMartin In my opinion, you deserve the accepted answer. Not only have you discovered a much larger set, but also an even more effective and efficient algorithm. Well done. Kudos. \$\endgroup\$ – agtoever Dec 12 '19 at 19:38
\$\begingroup\$ @agtoever Thanks for the kind words :) Your answer is very worthy and also was the first to have the code up; speed of posting is a consideration factor and I'm content for that to be the case. \$\endgroup\$ – Greg Martin Dec 12 '19 at 20:12
\$\begingroup\$ @JollyJoker I think it's an extremely interesting question whether a given seed can generate an infinite sequence of splits. My heuristics tell me that a randomly chosen large prime is unlikely to have such splits; however, primes such that p–1 is very composite have a better chance to have such splits, and it might be the case that this algorithm tends to produce primes of that form. \$\endgroup\$ – Greg Martin Dec 13 '19 at 19:51
Score: 263 → 385 → 425 → 426, with only primes < 1,000,000 (was: non-competing, now it is; the score can be increased by running the program longer)
I followed the same path as Wheat Wizard: iteratively search for primes in the solution that can be replaced with a longer list of primes with the same result. I wrote a Python program that does exactly this. It starts with the solution S = {2} and then iterates over all elements of that solution, trying to find a decomposition of each prime p for which 1/(p-1) = sum(1/(q-1)) over all q in the decomposition.
After I realized that S should be a set (and not a list), I altered the program to take this into account. I also added a ton of performance optimizations. The solution of 263 came up within 200 seconds or so (running under pypy3), but if you let it run, it steadily keeps producing additional (longer) solutions.
Current best solution (425 elements, with all primes < 1M, calculated in ~ 15 min.):
S = [3, 5, 13, 19, 29, 37, 103, 151, 241, 281, 409, 541, 577, 593, 661, 701, 751, 1297, 1327, 2017, 2161, 2251, 2293, 2341, 2393, 2521, 2593, 2689, 2731, 3061, 3079, 3329, 3361, 3457, 6301, 6553, 7057, 7177, 7481, 7561, 8737, 9001, 9241, 9341, 10501, 11617, 12097, 12547, 14281, 14449, 14561, 15121, 17761, 17851, 18217, 18481, 20593, 21313, 22441, 23189, 23761, 24571, 26041, 26881, 28351, 28513, 29641, 30241, 36529, 37441, 46993, 49921, 51169, 57331, 58109, 58313, 58369, 58831, 59659, 60737, 60757, 61001, 61381, 61441, 61561, 61609, 63067, 63601, 64513, 64901, 65053, 65089, 65701, 65881, 66301, 66931, 67049, 69389, 69941, 70181, 72161, 72481, 72577, 72661, 73061, 73699, 74521, 77521, 78241, 79693, 81181, 86951, 88741, 90631, 98011, 100297, 102181, 107641, 108991, 109201, 109537, 114913, 117841, 118429, 121993, 122761, 123001, 124561, 127601, 128629, 130073, 130969, 131561, 133387, 133813, 138181, 138403, 139501, 146077, 149521, 159457, 160081, 162289, 163543, 166601, 174241, 175891, 176401, 177913, 180181, 182711, 189421, 199921, 201781, 206641, 218527, 223441, 227089, 229739, 234961, 238081, 238141, 238897, 239851, 246241, 250057, 261577, 266401, 267961, 280321, 280837, 280897, 281233, 283501, 283861, 284161, 287233, 288049, 291721, 297601, 299053, 302221, 306853, 309629, 313153, 316681, 322057, 325921, 332489, 342211, 342241, 349981, 352273, 354961, 355321, 360977, 365473, 379177, 390097, 390961, 394717, 395627, 401057, 404251, 404489, 412127, 412651, 416881, 417649, 418027, 424117, 427681, 428221, 428401, 429409, 430921, 434521, 435481, 441937, 443873, 444641, 451441, 453601, 454609, 455149, 459649, 466201, 468001, 473617, 474241, 480737, 481693, 483883, 496471, 498301, 498961, 499141, 499591, 499969, 501601, 501841, 502633, 513067, 514513, 517609, 523261, 524521, 525313, 529381, 538721, 540541, 545161, 550117, 552553, 560561, 562633, 563501, 563851, 568177, 570781, 575723, 587497, 590669, 591193, 599281, 601801, 601903, 604001, 605551, 607993, 609589, 611389, 617401, 621007, 627301, 628561, 628993, 629281, 635449, 637201, 639211, 642529, 645751, 651361, 651857, 653761, 654853, 655453, 657091, 662941, 664633, 667801, 669121, 669901, 670177, 673201, 673921, 675109, 688561, 689921, 691363, 692641, 694033, 695641, 697681, 698293, 700591, 703081, 703561, 705169, 705181, 707071, 709921, 713627, 732829, 735373, 737413, 739861, 742369, 745543, 750121, 750721, 754771, 756961, 757063, 758753, 759001, 760321, 761671, 762721, 766361, 773501, 774181, 776557, 779101, 782461, 784081, 784981, 786241, 788317, 794641, 795601, 797273, 800089, 801469, 808081, 808177, 810151, 813121, 815671, 819017, 823481, 823621, 825553, 831811, 833281, 833449, 836161, 839161, 840911, 846217, 859657, 859861, 860609, 863017, 865801, 869251, 870241, 875521, 876929, 878011, 880993, 884269, 891893, 895681, 898921, 899263, 902401, 904861, 905761, 907369, 908129, 914861, 917281, 917317, 921601, 922321, 923833, 926377, 939061, 941641, 942401, 943009, 943273, 944161, 944821, 944833, 949621, 949961, 950041, 950401, 953437, 953443, 954001, 957349, 957529, 960121, 960961, 963901, 964783, 967261, 967627, 967751, 968137, 971281, 973561, 973591, 984127, 984341, 984913, 986437, 991381, 992941, 994561, 995347, 996001]
Proof that it satisfies the challenge:
Some of the decompositions used:
1/( 2-1) = sum(1/(p-1)) for p in: {(5, 7, 13, 3)}
1/( 3-1) = sum(1/(p-1)) for p in: {(7, 13, 5)}
1/( 5-1) = sum(1/(p-1)) for p in: {(13, 7)}
1/( 7-1) = sum(1/(p-1)) for p in: {(13, 19, 37)}
1/( 13-1) = sum(1/(p-1)) for p in: {(19, 37)}
1/( 19-1) = sum(1/(p-1)) for p in: {(29, 71, 181)}
1/( 29-1) = sum(1/(p-1)) for p in: {(37, 127)}
1/( 37-1) = sum(1/(p-1)) for p in: {(43, 281, 2521)}
1/( 43-1) = sum(1/(p-1)) for p in: {(53, 223, 13469)}
1/(8779-1) = sum(1/(p-1)) for p in: {(8969, 739861, 941641)}
1/(8821-1) = sum(1/(p-1)) for p in: {(8941, 657091)}
1/(9109-1) = sum(1/(p-1)) for p in: {(10891, 55661)}
1/(9341-1) = sum(1/(p-1)) for p in: {(15121, 25219, 784561)}
1/(10333-1) = sum(1/(p-1)) for p in: {(10501, 645751)}
1/(131561-1) = sum(1/(p-1)) for p in: {(237361, 295153)}
Python3 code:
import sympy
import cProfile
import functools
import itertools
import bisect
import operator
def sundaram3(max_n):
# Returns a list of all primes under max_n
numbers = list(range(3, max_n + 1, 2))
half = (max_n) // 2
initial = 4
for step in range(3, max_n + 1, 2):
for i in range(initial, half, step):
numbers[i - 1] = 0
initial += 2 * (step + 1)
if initial > half:
return [2] + list([_f for _f in numbers if _f])
# Precalculate all primes up to a million to speed things up
PRIMES_TO_1M = list(sundaram3(1000000))
def nextprime(number):
# partly precalculated fast version for calculating the
# first (i.e. smallest) prime that is larger than number
global PRIMES_TO_1M
if number <= PRIMES_TO_1M[-2]:
return PRIMES_TO_1M[bisect.bisect(PRIMES_TO_1M, number)]
return sympy.nextprime(number)
def isprime(number):
# partly precalculated fast version to determine whether number is prime
if number < 1000000:
return number in PRIMES_TO_1M
return sympy.isprime(number)
def upper_limit(prime, length=2):
# Returns the largest prime q in the decomposition of prime with the given
# length such that 1 / (prime - 1) = sum( 1 / (q - 1)) for q in
# set V, where V has the given length.
# ASSUMPTION: all q are unique; this assumption is not validated,
# but for this codegolf, the solution must be a set, so this is safe.
if length == 1:
return prime
nextp = nextprime(prime)
largestprime = (prime * nextp - 2 * prime + 1 ) // (nextp - prime)
if not isprime(largestprime):
largestprime = nextprime(largestprime)
return upper_limit(largestprime, length - 1)
def find_decomposition(prime, length=2):
# Returns a list of primes V = {q1, q2, q3, ...} for which holds that:
# 1 / (prime - 1) = sum(1 / (q - 1)) for q in V.
# Returns None if this decomposition is not found.
# Note that there may be more than one V of a given prime, but this
# function returns the first found V (implementation note: the shortest one)
print(f"Searching decomposition for prime {prime} with length {length} in range ({prime}, {upper_limit(prime, length)}]")
prime_range = PRIMES_TO_1M[bisect.bisect(PRIMES_TO_1M, prime) + 1:
bisect.bisect(PRIMES_TO_1M, upper_limit(prime, length) + 1)]
# we only search for combinations of length -1; the last factor is calculated
for combi in itertools.combinations(prime_range, length - 1):
# we find the common factor of prime and all primes in combi
# and use that to calculate the remaining prime. This is faster
# than trying all prime combinations.
factoritems = [-prime + 1] + [c - 1 for c in combi]
factor = -functools.reduce(operator.mul, factoritems)
remainder = - factor / sum(factor // p for p in factoritems) + 1
if remainder == int(remainder) and isprime(remainder) and remainder not in combi:
combi = combi + (int(remainder),)
print(f"Found decomposition: {combi}")
return combi
def find_solutions():
# Finds incrementally long solutions for the set of primes S, for which:
# sum(1/(p-1)) == 1 for p in S.
# We do this by incrementally searching for primes in S that can be
# replaced by longer subsets that have the same value of sum(1/p-1).
# These replacements are stored in a dictionary "decompositions".
# Starting with the base solution S = [2] and all decompositions,
# you can construct S.
decompositions = {} # prime: ([decomposition], max tried length)
S = [2]
old_solution_len = 0
# Keep looping until there are no decompositions that make S longer
while len(S) > old_solution_len:
# Loop over all primes in S to search for decompositions
for p in sorted(set(S)):
# If prime p is not in the decompositions dict, add it
if p not in decompositions:
decompositions[p] = (find_decomposition(p, 2), 2)
# If prime p is in the decompositions dict, but without solution,
# try to find a solution 1 number longer than the previous try
elif not decompositions[p][0]:
length = decompositions[p][1] + 1
decompositions[p] = (find_decomposition(p, length), length)
# If prime p is in decompositions and it has a combi
# and the combi is not already in S, replace p with the combi
elif all(p not in S for p in decompositions[p][0]):
old_solution_len = len(S)
print(f"Removing occurence of {p} with {decompositions[p][0]}")
S.remove(p)
S.extend(decompositions[p][0])
S = sorted(S)
break # break out of the for loop
print(f"Found S with length {len(S)}: S = {S}")
print(f"Decompositions: ")
for prime in sorted(decompositions.keys()):
print(f" 1/({prime:3}-1) = sum(1/(p-1)) for p in: \u007b{decompositions[prime][0]}\u007d")
cProfile.run("find_solutions()")
agtoever
\$\begingroup\$ Using your function, you could decompose the twenty 43s into twenty 47, 491, 33811, increasing the score by 40. Also, the sixteen 73s can go to sixteen times 79, 1093, 6553, adding another 32 to your score \$\endgroup\$ – Mathias711 Dec 12 '19 at 10:47
\$\begingroup\$ @LuisMendo altered the code, added solution. To all who downvoted: please let me know you find anything non-competing about this answer. I believe this is now a valid (and sound) answer. I'll edit some code comments later to explain the algorithms used. \$\endgroup\$ – agtoever Dec 12 '19 at 12:54
\$\begingroup\$ @Arnauld you are right, I think. Fixed that. \$\endgroup\$ – agtoever Dec 12 '19 at 12:55
\$\begingroup\$ Impressive score! \$\endgroup\$ – Luis Mendo Dec 12 '19 at 13:18
\$\begingroup\$ Woops, sorry we edited at the same time. Feel free to changes the removing/replacing again, my bad. I added Python highlighting to your answer so it's easier to read the code and comments of your program. \$\endgroup\$ – Kevin Cruijssen Dec 12 '19 at 13:38
Score: 32 → 34 → 36
{5, 7, 11, 13, 17, 23, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 101, 113, 131, 137, 151, 211, 229, 241, 281, 313, 379, 401, 433, 457, 491, 521, 571, 601, 25117, 293362609}
This is an improvement of Arnauld's answer. I just noticed that
\$ \dfrac{1}{19-1}=\dfrac{1}{73-1}+\dfrac{1}{61-1}+\dfrac{1}{41-1} \$
But 41 and 61 were already used in Arnauld's answer so I had to then figure out that
\$ \dfrac{1}{61-1} = \dfrac{1}{151-1} + \dfrac{1}{101-1} \$ and \$ \dfrac{1}{41-1} = \dfrac{1}{281-1}+\dfrac{1}{211-1}+\dfrac{1}{151-1}+\dfrac{1}{101-1} \$
But now I am using 151 and 101 twice. So I spent some time and discovered that
\$ \dfrac{1}{151-1} = \dfrac{1}{401-1} + \dfrac{1}{241-1} \$ and \$ \dfrac{1}{101-1} = \dfrac{1}{601-1} + \dfrac{1}{571-1} + \dfrac{1}{457-1} + \dfrac{1}{229-1} \$
So now I can just replace the 19 with 73, 101, 151, 211, 229, 241, 281, 401, 457, 571, 601 and the sequence will maintain its properties.
I also discovered that I can replace 19 with the sequence 73, 101, 151, 211, 241, 241, 281, 401, 433, 541, 601, but that has 241 twice.
After that improvement I also noticed that 79 could be replaced with 521, 313, 131, to increase the size by 2 more. And 73 can be replaced with 113, 379, 433 for another 2.
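A quick exact check of the 19-replacement with Python's fractions module (a sketch, using only the standard library):

from fractions import Fraction

replacement = [73, 101, 151, 211, 229, 241, 281, 401, 457, 571, 601]
lhs = Fraction(1, 19 - 1)
rhs = sum(Fraction(1, q - 1) for q in replacement)
print(lhs == rhs)  # True, so swapping 19 for these 11 primes preserves the sum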
Post Rock Garf HunterPost Rock Garf Hunter
Just to get the ball rolling.
{5,7,11,13,17,19,23,31,37,41,43,47,53,59,61,67,71,79,137,491,25117,293362609}
Compute the fraction
I suspect that the sequence can be made arbitrarily large, but my code is currently too messy and inefficient for anything significantly better than that.
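In the meantime, any candidate set in this thread can be sanity-checked exactly with a few lines of Python (a sketch using only the standard library):

from fractions import Fraction

S = [5, 7, 11, 13, 17, 19, 23, 31, 37, 41, 43, 47, 53, 59,
     61, 67, 71, 79, 137, 491, 25117, 293362609]
assert len(S) == len(set(S))               # it must be a set: no repeats
print(sum(Fraction(1, p - 1) for p in S))  # prints 1 for a valid set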
\$\begingroup\$ Most mathematicians suspect that as well, however, that claim is equivalent to a weak form of a long unsolved conjecture. \$\endgroup\$ – Don Thousand Dec 12 '19 at 2:36
\$\begingroup\$ @DonThousand Can you link to what is known/suspected mathematically? What is the conjecture you refer to? \$\endgroup\$ – Anush Dec 12 '19 at 7:59
\$\begingroup\$ @Anush I'll try to find the paper. It was quite a while ago. \$\endgroup\$ – Don Thousand Dec 12 '19 at 8:03
This non-answer expands on the modifications to @GregMartin's answer referenced in this comment there.
The allDivisors to goodFactorizations block can be sped up noticeably by not asking Divisors to factor the square of a number (and also a few other changes). @GregMartin's original code:
divMethod[p_] := Module[{allDivisors, allFactorizations, allSplits},
allDivisors = Divisors[(p - 1)^2];
allFactorizations = {#, (p - 1)^2/#} & /@
Take[allDivisors, Floor[Length[allDivisors]/2]];
allSplits = p + allFactorizations;
Select[allSplits, And @@ PrimeQ[#] &]
]
The intersection check at the end is removed, since it is common to both the above and my proposed replacement:
g[p_] := Module[
{pm1s, factorization, divisors},
pm1s = (p - 1)^2;
(* Since the original seed contains only odd
primes, we know one factor of p-1. *)
factorization = FactorInteger[(p - 1)/2];
(* For divisors d1, d2, such that d1 d2 = pm1s,
we require d1+p and d2+p are prime. p>2, so p
is odd and both d1+p and d2+p are odd, so d1
and d2 are necessarily both even. This means
we only consider splitting the powers of 2 so
that at least one falls in each divisor; we
want to know the power of 2 in pm1s, minus 1. *)
If[factorization[[1, 1]] != 2,
pow2m1 = 1,
pow2m1 = 2 factorization[[1, 2]] + 1
];
divisors = Outer[Times,
2^Range[1, pow2m1],
Sequence @@ (
Select[
#[[1]]^Range[0, 2 #[[2]]],
(# < p - 1 &)] & /@
Rest[factorization])];
(* The Join@@ is a hack to deal with Reap's
denormalized output on no-Sow runs. See
https://mathematica.stackexchange.com/questions/67625/what-is-shorthand-way-of-reap-list-that-may-be-empty-because-of-zero-sow *)
Join @@ Reap[
Map[
(If[#1 < p - 1 && PrimeQ[#2],
(If[PrimeQ[#2],
Sow[{#1, #2}]
] &)[#2, pm1s/#1 + p]
] &)[#, # + p] &,
divisors,
{-1}];
][[2]]
]
Note: the replacement makes no attempt to sort the collection of pairs. The following tests do not demonstrate the potential difference in order of results from divMethod and g.
divMethod[3]
g[3]
(* {} *)
divMethod[29]
g[29]
(* {{31, 421}, {37, 127}} *)
p = NextPrime[10^3]
RepeatedTiming[divMethod[p]]
RepeatedTiming[g[p]]
(* {0.00041, {{1201, 6301}}} *)
(* {0.000487, {{1201, 6301}}} *)
(* {0.000271, {}} *)
(* {0.0000619, {}} *)
Let's collect timing data for sets of 1000 consecutive primes starting at powers of 10 and see what trends we see.
divTiming = Table[{10^k, RepeatedTiming[
p = NextPrime[10^k];
For[count = 1, count <= 1000, count++,
divMethod[p];
p = NextPrime[p];
]][[1]]}, {k, 3, 22}];
fiTiming = Table[{10^k, RepeatedTiming[
p = NextPrime[10^k];
For[count = 1, count <= 1000, count++,
g[p];
p = NextPrime[p];
]][[1]]}, {k, 3, 22}];
ListLogLinearPlot[{divTiming, fiTiming},
PlotLegends -> {"Divisors", "FactorInteger"}]
ListLogLinearPlot[
Transpose[{
divTiming[[All, 1]],
(divTiming/fiTiming)[[All, 2]]}],
PlotLegends -> {"Divisors/FactorInteger"}]
So around 10^7 or 10^8, the Divisors method becomes noticeably slower than the FactorInteger method. The ratio of times rises to about 3 for primes around 10^15. There is a dip for slightly larger primes, but the trend suggests the speed-up will keep improving for primes larger than those tested.
Eric Towers
\$\begingroup\$ Wow, this is some really cool analysis. Is this all done in R? \$\endgroup\$ – Don Thousand Dec 17 '19 at 1:07
\$\begingroup\$ @DonThousand : This is Mathematica code, as is @GregMartin's. \$\endgroup\$ – Eric Towers Dec 17 '19 at 4:09
Ch. 4
Marci_Dupre
At what age do rituals and game playing emerge?
At about 8 to 9 months, infants develop __________ in interactions.
Phonetically consistent forms & Protowords
Consistent vocal patterns are called...
The first meaningful word occurs around...
Word/phrase understanding and production
Speech perception at 6 months is related to later...
By __________, children produce about 50 words and begin to combine words predictably.
150-300 words
By age 2, children have an expressive vocabulary of about...
Each child has a personal dictionary, or __________, that reflects his/her environment.
If a preschool-aged child says "doggies are yucky," kitties are yucky," etc. they are using...
Increased memory
Preschool-aged children can recount the past and remember short stories because of...
Caregivers repeat the child's utterances in mature form
What is reformulation?
2-4 turns
How long can preschool-aged children maintain a conversation?
Comprehension of words is more advanced than expression
In preschool-aged children,
Fast mapping
__________ is inferring meaning from context and using the word in a similar manner
About 90% of adult syntax is acquired by age...
Mean length of utterance
Language becomes more complex as it becomes longer, and can be calculated in...
Metalinguistic skills
__________ allow(s) the child to consider language in the abstract, make judgements about its correctness, and create verbal contexts.
Slows and begins to stabilize
For school-aged children, language development...
__________ is/are sayings that do not always mean what they seem to mean, as in idioms.
By high school, children understand approximately...
During adolescence
Multiple word meanings are acquired...
By age __________, children can use most verb tenses, possessive pronouns, and conjunctions.
Language impairment
Risk factors for __________ include being male, having ongoing hearing problems, and having a more reactive temperament.
Protective factors for __________ include having a more persistent and sociable temperament and higher levels of maternal well-being.
Still have a weakness in language-related skills in late adolescence
Children who are identified as late talkers at 24 - 31 months...
Severity of intellectual disability is usually based on...
When should intervention begin for those with intellectual disability?
Regular education with support
Self-contained, special classrooms
Institutions for profound ID
Educational options for those with intellectual disabilities include...
Moderate-severe language delays
Children with Down Syndrome and Fragile X both have...
__________ are a heterogeneous group of disorders that are manifested by significant difficulties in the development and use of listening, speaking, reading, writing, reasoning, or mathematical abilities.
Approximately __________ of all individuals have LD.
ADHD (Attention Deficit Hyperactivity Disorder)
__________ is an underlying neurological impairment in executive function that regulates behavior, causing impulsiveness.
Exhibit poor ability to attend selectively & concentrate on inappropriate or unimportant stimuli
Children with LD...
Which aspect(s) of language can be affected in children with LD?
Cluttering
__________ is overuse of fillers and circumlocutions associated with word-finding difficulties, rapid speech, and word and phrase repetitions, along with lack of awareness; can occur in children with LD.
What percentage of middle class U.S. children have delayed language development?
_________ is an active process that allows limited information to be held in a temporary accessible state while cognitive processing occurs.
Negative perception by peers
Poor social skills
Perceiving themselves negatively
Lifespan issues of children with SLI include...
Extracting regularities from language
Registering different contexts for language
Constructing word-referent associations for lexical growth
Children with SLI have difficulty...
Which of the following may children with SLI expressively do?
A. Speak more rapidly
B. Have fewer speech disruptions
C. Speak more rapidly & have fewer speech disruptions
Grammatical markers
Children with SLI have difficulty with __________, which indicates language processing deficits in phonological working memory, where words are held while processed.
Lexical competition
__________ refers to the difficulty children with SLI have in inhibiting activation of nontarget competing words.
Children with ASD are identified by the time they are...
The incidence of ASD among children is...
Approximately 25% of children with ASD exhibit...
Males & those who have a family history of Autism
Incidence of ASD is higher in...
Regular education
May hold competitive employment
Which are options for education and employment for those with ASD?
25% and 60%
Between __________ and __________ of individuals with ASD remain nonspeaking throughout their lives.
Echolalia, formuli
Some individuals with ASD may have immediate or delayed __________ or use entire verbal routines, called __________.
_________ is diffuse brain damage as a result of external force.
Approximately __________ children and adolescents in the U.S. have experienced a TBI.
Extent and site of lesion
Age at onset
Age of the injury
Variables related to TBI include...
Social disinhibition
Psychological maladjustment or acting-out behaviors called _________ may occur.
Deficits in __________ are most likely remain long after the injury in TBI.
_________ is relatively unaffected in TBI.
Word retrieval
Individuals who have sustained TBI have difficulty with...
Child-mother attachment
__________ is more significant in language development than maltreatment.
__________ infants have low birth weight and later demonstrate hyperactivity, motor problem, attention deficits, and cognitive disabilities.
Manner of ingestion
Age of the fetus
Effects of drugs on the fetus vary with...
Language delay
Echolalia
Poor comprehension
Language features of children with FAS include...
Are behind their peers
Academically, children with FAS...
In __________, children do not speak in specific situations, although they speak in others.
Determine the presence or absence of a language problem
Screening tests are used to...
Case history and interview
After a referral and screening, the following procedures may be part of assessment.
__________ refers to the movement between two languages.
Dynamic assessment
__________ is probing performance to identify possible intervention procedures.
Telepractice
__________ is the provision of language assessment and intervention via the Internet.
ASD/ID
Adults with _________ will most likely require continued intervention for language and communication deficits throughout the lifespan.
Hilbert-Schmidt and Sobol sensitivity indices for static and time series Wnt signaling measurements in colorectal cancer - part A
Shriprakash Sinha
Ever since the accidental discovery of Wingless [Sharma R.P., Drosophila information service, 1973, 50, p 134], research in the field of Wnt signaling pathway has taken significant strides in wet lab experiments and various cancer clinical trials, augmented by recent developments in advanced computational modeling of the pathway. Information rich gene expression profiles reveal various aspects of the signaling pathway and help in studying different issues simultaneously. Hitherto, not many computational studies exist which incorporate the simultaneous study of these issues.
This manuscript ∙ explores the strength of contributing factors in the signaling pathway, ∙ analyzes the existing causal relations among the inter/extracellular factors effecting the pathway based on prior biological knowledge and ∙ investigates the deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway. To achieve this goal, local and global sensitivity analysis is conducted on the (non)linear responses between the factors obtained from static and time series expression profiles using the density (Hilbert-Schmidt Information Criterion) and variance (Sobol) based sensitivity indices.
The results show the advantage of using density based indices over variance based indices, mainly due to the former's employment of distance measures & the kernel trick via a reproducing kernel Hilbert space (RKHS), which capture nonlinear relations among various intra/extracellular factors of the pathway in a higher dimensional space. In time series data, these indices make it possible to observe where in time which factors are influenced by & contribute to the pathway as the concentrations of the other factors change. This synergy of prior biological knowledge, sensitivity analysis & representations in higher dimensional spaces can facilitate time-based administration of targeted therapeutic drugs & reveal hidden biological information within colorectal cancer samples.
Recently observed psychophysical laws working downstream of the Wnt pathway rely on the ratio of deviations in input to the absolute value of input. These deviations are crucial for observation of a phenotypic behaviour during a time interval. This work explores the influences of fold changes and deviations in fold changes in time using density based sensitivity indices, which employ kernel methods to capture nonlinear relations among the involved intra/extracellular factors. On a static gene expression toy example in normal and tumor cases and on a time series dataset, they outperformed the variance based sensitivity indices. The synergy of prior biological knowledge, sensitivity analysis and representations in higher dimensional spaces facilitates the development of time based target specific interventions at the molecular level within the pathway.
I compartmentalize the manuscript into three different parts: ∙ a short review containing the systems wide analysis of the Wnt pathway, divided into the introduction, the problem statement and a solution to address the same via the latest sensitivity analysis methods; ∙ an extensive description of the methodology, the description of the dataset and the design of the experiments; and finally ∙ the biological findings from the system wide study of the Wnt pathway using sensitivity analysis.
A short review
Sharma's [1] accidental discovery of Wingless played a pioneering role in the emergence of the widely expanding research field of the Wnt signaling pathway. A majority of the work has focused on issues related to ∙ the discovery of genetic and epigenetic factors affecting the pathway ([2] & [3]), ∙ implications of mutations in the pathway and their dominant role in cancer and other diseases [4], ∙ investigation into the pathway's contribution towards embryo development [5], homeostasis ([6, 7]) and apoptosis [8] and ∙ safety and feasibility of drug design for the Wnt pathway ([9–13]). Approximately forty years after the discovery, important strides have been made through wet lab experiments and cancer clinical trials ([9, 13]), augmented by recent developments in advanced computational modeling techniques for the pathway. More recent informative reviews have touched on various issues related to the different types of Wnt signaling and have stressed not only the activation of the pathway via the Wnt proteins [14] but also the secretion mechanism that plays a major role in the initiation of Wnt activity as a prelude [15].
With the rapid development of biotechnological methods and the availability of vast amounts of molecular-level data, there has arisen a need to understand the mechanisms of these signaling pathways in greater depth. Systems biology is the field in which the deeper aspects of biology are studied via models that translate a biological problem into a computational/mathematical framework. The latest opinions on current trends in systems biology can be found in [16] and [17]. One of the earliest efforts to translate a biological problem regarding the Wnt pathway into a mathematical framework was made by [18]. That quantitative study analyzed interactions among the known components of the pathway using differential equations that incorporate information on kinetics, synthesis/degradation and phosphorylation/dephosphorylation. Further improvements of and analyses on such models followed in later years; for example, [19, 20]. Apart from these methods, Bayesian methods have also played a crucial role in understanding certain aspects of the pathway. Recent work on parameter-free methods by [21] employs Bayesian methods for parameter inference from data and then uses algebraic methods, such as matroid theory, to analyse the models that fit the data without depending on specific parameter values. A mixture-models approach has also been employed recently to understand aspects of the Wnt pathway via Bayesian parameter estimation [22]. Besides parameter estimation methods based on differential equations, Bayesian network analysis methods have been proposed to investigate cause-influence hypotheses among the various factors affecting the pathway, by integrating heterogeneous data and using the concepts of d-connectivity/separability [23]. The author had the chance to provide a pedagogical perspective on the in silico analysis of [23] in the form of computer code in [24]. Not only the signaling pathways themselves but also some of the phenomena prevalent within them have been studied using mathematical models. The prevalence of Weber's law (described later) in the pathway was shown in [25] using differential equation models and sensitivity analysis; although that work proposed the use of sensitivity analysis, it is not evident which approach was employed to conduct the study. Sensitivity analysis was also employed in [26] to investigate the pathway, there for the identification of parameters. In all of these cases, parameters are involved that must be studied in order to gain insight into some mechanism of the pathway. Recently, a system-wide investigation of time course data was conducted by [27] using correlational analysis, with the aim of understanding at a systems level how the components of the pathway behave in time. Even though that effort led to an understanding of a few areas of the pathway, it did not provide a coherent, simultaneous analysis of the influence of each component at different time points and during the intermittent time periods. In this manuscript, the author undertakes an extensive analysis of the pathway on the same time course measurements generated by [27], using the existing variance based sensitivity indices as well as the latest density based sensitivity indices.
The work exploits the deeper formulation of deviations in input within Bernoulli's logarithmic formulation, from which Weber's law is derived (explained later). This paper undertakes a system-wide study of the Wnt pathway via sensitivity analysis, using static [28] and time series [27] gene expression data retrieved from colorectal cancer samples.
Canonical Wnt signaling pathway
Before delving into the problem statement, a brief introduction to the Wnt pathway is given here. From the recent work of [23], the canonical Wnt signaling pathway is a transduction mechanism that contributes to embryo development and controls homeostatic self renewal in several tissues [4]. Somatic mutations in the pathway are known to be associated with cancer in different parts of the human body, prominent among them being the colorectal cancer case [29]. In a succinct overview, the Wnt signaling pathway works when the Wnt ligand gets attached to the Frizzled (FZD)/LRP coreceptor complex. FZD may interact with Dishevelled (DVL), causing phosphorylation. It is also thought that Wnts cause phosphorylation of the LRP via casein kinase 1 (CK1) and the kinase GSK3. These developments further lead to the attraction of Axin, which inhibits the formation of the degradation complex. The degradation complex consists of AXIN, the β-catenin transportation complex APC, CK1 and GSK3. When the pathway is active, the dissolution of the degradation complex leads to stabilization of the concentration of β-catenin in the cytoplasm. As β-catenin enters the nucleus, it displaces GROUCHO and binds with the transcription cell factor TCF, thus instigating transcription of Wnt target genes. GROUCHO acts as a lock on TCF and prevents the transcription of target genes which may induce cancer. In cases when the Wnt ligands are not captured by the coreceptor at the cell membrane, AXIN helps in the formation of the degradation complex. The degradation complex phosphorylates β-catenin, which is then recognized by the FBOX/WD repeat protein β-TRCP. β-TRCP is a component of the ubiquitin ligase complex that helps in the ubiquitination of β-catenin, thus marking it for degradation via the proteasome. Cartoons depicting the phenomena of Wnt being inactive and active are shown in Fig. 1 a and b, respectively.
A cartoon of Wnt signaling pathway. Part (a) represents the destruction of β-catenin leading to the inactivation of the Wnt target gene. Part (b) represents activation of Wnt target gene
Problem statement in short
Succinctly, the endeavour is to address the following issues: ∙ explore the strength of contributing factors in the signaling pathway, ∙ analyse the existing causal relations among the inter/extracellular factors effecting the pathway based on prior biological knowledge and ∙ investigate the significance of deviations in fold changes in the recently found prevalence of psychophysical laws working in the pathway in a multi-parameter setting. The issues related to ∙ inference of hidden biological relations among the factors that are yet to be discovered and ∙ discovery of new causal relations using hypothesis testing will be addressed in a subsequent manuscript. The current manuscript analyses the sensitivity indices for fold changes and deviations in fold changes in 17 different genes from the set of 74 genes presented by [27]. An immediate follow-up is the analysis of the remaining 57 genes, which forms part B of this manuscript.
A solution to the problem
In order to address the above issues, sensitivity analysis (SA) is performed either on the datasets or on results obtained from biologically inspired causal models. The reason for using these tools of sensitivity analysis is that they help in observing the behaviour of the output and the importance of the contributing input factors via a robust and easy mathematical framework. In this manuscript, both local and global SA methods are used. Where appropriate, a description of the biologically inspired causal models precedes the analysis of results from these models.
Seminal work by the Russian mathematician [30] led to the development as well as the employment of SA methods to study various complex systems where it was tough to measure the contribution of various input parameters to the behaviour of the output. A recent unpublished review on global SA methods by [31] categorically delineates these methods with the following functionality: ∙ screening for sorting influential measures ([32] method, group screening in [33, 34], iterated factorial design in [35], sequential bifurcation design in [36] and [37]), ∙ quantitative indices for measuring the importance of contributing input factors in linear models ([38–41]) and nonlinear models ([42–57] and [58]) and ∙ exploring the model behaviour over a range of input values ([59] and [60–62]). Iooss and Lemaître [31] also provide various criteria in a flowchart for adapting a method or a combination of the methods for sensitivity analysis. Figure 2 shows the general flow of the mathematical formulation for computing the indices in the variance based Sobol method. The general idea is as follows: a model can be represented as a mathematical function of a multidimensional input vector where each element of the vector is an input factor. This function needs to be defined on the unit hypercube. Based on the ANOVA decomposition, the function can then be broken down into f_0 and summands of different dimensions, where f_0 is a constant and the integral of each summand with respect to any of its own variables is 0. This implies orthogonality between two summands of different dimensions whenever at least one of the variables is not repeated. By applying these properties, it is possible to show that the function can be written as a unique expansion. Next, assuming that the function is square integrable, variances can be computed. The ratio of the variance of a group of input factors to the variance of the total set of input factors constitutes the sensitivity index of that particular group.
Computation of variance based Sobol sensitivity indices. For detailed notations, see Appendix
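As a concrete companion to the flow in Fig. 2, the following is a minimal Monte Carlo sketch of a standard pick-freeze estimator for the first order indices; the estimator form follows the commonly used Saltelli et al. scheme, and the sample size and seed are illustrative assumptions.

import numpy as np

def sobol_first_order(f, d, n=100000, seed=0):
    # First order indices S_i = Var(E[f|X_i]) / Var(f) on the unit cube,
    # estimated with two independent sample blocks A and B.
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # vary only X_i, freeze the rest
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy check: f(x) = x_0 + 0.5 x_1 gives S ~ (0.8, 0.2)
print(sobol_first_order(lambda X: X[:, 0] + 0.5 * X[:, 1], d=2))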
Besides the above variance based indices of [30], more recent developments regarding new indices based on density, derivatives and goal-oriented measures can be found in [63–65], respectively. In a latest development, [66] propose a new class of indices based on density ratio estimation [63] that are special cases of dependence measures. This in turn helps in exploiting measures like the distance correlation [67] and the Hilbert-Schmidt independence criterion [68] as new sensitivity indices. The framework of these indices is based on the use of the f-divergence [69], the concept of dissimilarity measures and the kernel trick [70]. Finally, [66] propose feature selection as an alternative to screening methods in sensitivity analysis. The main issue with the variance based indices [30] is that even though they capture important information regarding the contribution of the input factors, they ∙ do not handle multivariate random variables easily and ∙ are only invariant under linear transformations. In comparison to these variance methods, the newly proposed indices based on density estimation [63] and dependence measures are more robust. Figure 3 shows the general flow of the mathematical formulation for computing the indices in the density based HSIC method. The general idea is as follows: the sensitivity index is a distance correlation that incorporates the kernel based Hilbert-Schmidt Information Criterion between two input vectors in a higher dimensional space. The criterion is the Hilbert-Schmidt norm of the cross-covariance operator, which generalizes the covariance matrix by representing higher order correlations between the input vectors through nonlinear kernels. For every operator, and provided the sum converges, the Hilbert-Schmidt norm is the dot product of the orthonormal bases. For finite dimensional input vectors, the Hilbert-Schmidt Information Criterion estimator is the trace of the product of two kernel matrices (or Gram matrices) with a centering matrix, such that HSIC evaluates to a summation of different kernel values.
Computation of density based HSIC sensitivity indices. For detailed notations, see Appendix
It is this strength of kernel methods that enables HSIC to capture the deep nonlinearities in the biological data and provide reasonable information regarding the degree of influence of the involved factors within the pathway. Improvements in variance based methods also provide ways to cope with these nonlinearities but do not exploit the available strength of kernel methods. Results in the later sections provide experimental evidence for the same.
Application in systems biology
Recent efforts in systems biology to understand the importance of various factors apropos output behaviour have gained prominence. [71] compares the use of the variance based indices of [30] versus the screening method of [32], which uses a One-at-a-time (OAT) approach, to analyse the sensitivity of GSK3 dynamics to uncertainty in an insulin signaling model. Similar efforts, but on different pathways, can be found in [72] and [73].
SA provides a way of analyzing the various factors taking part in a biological phenomenon and deals with the effects of these factors on the output of the biological system under consideration. Usually, the model equations are differential in nature with a set of inputs and an associated set of parameters that guide the output. SA helps in observing how variance in these parameters and inputs leads to changes in the output behaviour. The goal of this manuscript is not to analyse differential equations and the parameters associated with them. Rather, the aim is to observe which input genotypic factors have greater contribution to the observed phenotypic behaviour, like a sample being normal or cancerous, in both static and time series data. In this process, the effect of fold changes and deviations in fold changes in time is also considered for analysis in the light of the recently observed psychophysical laws acting downstream of the Wnt pathway [25].
There are two approaches to sensitivity analysis. The first is local sensitivity analysis, in which, if there is a required solution, the sensitivity of a function apropos a set of variables is estimated via a partial derivative at a fixed point in the input space. In global sensitivity analysis, the input solution is not specified. This implies that the model function lies inside a cube and the sensitivity indices are regarded as tools for studying the model instead of the solution. The general form of the g-function (as the model or output variable) is used to test the sensitivity of each input factor (i.e. the expression profile of each gene). This is mainly due to its non-linearity and non-monotonicity as well as its capacity to produce analytical sensitivity indices. The g-function takes the form -
$$ f(x) = \prod_{i = 1}^{d} \frac{|4 x_{i} - 2| + a_{i}}{1 + a_{i}} $$
where d is the total number of dimensions and a i ≥0 are indicators of the importance of the input variable x i . Note that lower values of a i indicate higher importance of x i . In our formulation, we randomly assign values a i ∈[0,1]. For the static (time series) data, d=18(71) (factors affecting the pathway). Thus the expression profiles of the various genetic factors in the pathway are considered as input factors and the global analysis is conducted. Note that in the predefined dataset, the working of the signaling pathway is governed by a preselected set of genes that affect the pathway. For comparison purposes, the local sensitivity analysis method is also used to study how an individual factor behaves with respect to the remaining factors while the working of the pathway is observed in terms of expression profiles of the various factors.
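To make the role of the a i coefficients concrete, the following minimal sketch (Python/numpy; the seed and the drawn values of a i are illustrative, not the exact configuration used in this study) evaluates the g-function above together with its known closed-form first-order indices, for which the partial variances are V i = (1/3)/(1+a i ) 2 and the total variance is V = ∏(1+V i )−1:

```python
import numpy as np

# Sobol's g-function: f(x) = prod_i (|4*x_i - 2| + a_i) / (1 + a_i)
def g_function(X, a):
    return np.prod((np.abs(4.0 * X - 2.0) + a) / (1.0 + a), axis=1)

rng = np.random.default_rng(0)
d = 18                        # number of input factors (static data case)
a = rng.uniform(0.0, 1.0, d)  # a_i in [0, 1]; lower a_i => more important x_i

# Known analytical results for the g-function:
# partial variance V_i = (1/3)/(1 + a_i)^2, total variance V = prod(1 + V_i) - 1
V_i = (1.0 / 3.0) / (1.0 + a) ** 2
V = np.prod(1.0 + V_i) - 1.0
S_first = V_i / V             # analytical first-order indices

# Monte Carlo check of the total variance
X = rng.uniform(size=(100000, d))
print("analytical V:", V, " MC estimate:", g_function(X, a).var())
print("most important factor:", np.argmin(a), " with S_i =", S_first.max())
```

The factor with the smallest a i carries the largest first-order index, which is the behaviour the text above relies on when ranking genetic factors.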
Finally, in the context of [25]'s work regarding the recently observed working of Weber's law downstream of the pathway, it has been found that the law is governed by the ratio of the deviation in the input to the absolute input value. More importantly, it is these deviations in input that are of significance in studying such phenomena. The current manuscript explores the sensitivity of deviations in the fold changes between measurements at consecutive time points, to explore in what duration of time a particular factor affects the pathway in a major way. This has deeper implications in the fact that one is now able to observe when in time an intervention can be made or a gene be perturbed to study the behaviour of the pathway in tumorous cases. Thus sensitivity analysis of deviations in the mathematical formulation of the psychophysical law can lead to insights into the time period based influence of the involved factors in the pathway. Thus, both global and local analysis methods are employed to observe the entire behaviour of the pathway as well as the local behaviour of the input factors with respect to the other factors, respectively, via analysis of fold changes and deviations in fold changes, in time.
Given the range of estimators available for testing the sensitivity, it might be useful to list a few which are going to be employed in this research study. These have been described in the Appendix.
The logarithmic psychophysical law
In a recent development, [25] point to two findings, namely ∙ the fold changes of β-catenin are robust and ∙ the transcriptional machinery of the Wnt pathway depends on the fold changes in β-catenin instead of its absolute levels, suggesting that some gene transcription networks must respond to fold changes in signals according to Weber's law in sensory physiology. In an unpublished work by [74], preliminary analysis of results in [23] shows that the variation in predictive behaviour of the β-catenin based transcription complex conditional on gene evidences crudely follows power and logarithmic psychophysical laws, implying that deviations in output are proportional to an increasing function of deviations in input and show constancy for higher values of input. This relates to the work of [75] on power and logarithmic laws, albeit at a coarse level. A description of these laws ensues before the analysis of the results.
Masin et al. [76] state Weber's law as follows - Consider a sensation magnitude γ determined by a stimulus magnitude β. Fechner [77] (vol 2, p. 9) used the symbol Δ γ to denote a just noticeable sensation increment, from γ to γ + Δ γ, and the symbol Δ β to denote the corresponding stimulus increment, from β to β + Δ β. Fechner [77] (vol 1, p. 65) attributed to the German physiologist Ernst Heinrich Weber the empirical finding [78] that Δ γ remains constant when the relative stimulus increment \(\frac {\Delta \beta }{\beta }\) remains constant, and named this finding Weber's law. [77] (vol 2, p. 10) underlined that Weber's law was empirical. ■
It has been found that Bernoulli's principle [79] differs from Weber's law [78] in that it refers to Δ γ as any possible increment in γ, while Weber's law refers only to the just noticeable increment in γ. Masin et al. [76] show that Weber's law is a special case of Bernoulli's principle and can be derived as follows - Eq. 2 depicts Bernoulli's principle, where the increment in sensation, represented by Δ γ, is related to the change in stimulus, represented by Δ β.
$$ \gamma = b \times \log\frac{\beta}{\alpha} $$
where b is a constant and α is a threshold. To evaluate the increment, the following Eq. 3 and the ensuing simplification give -
$$\begin{array}{@{}rcl@{}} \Delta\gamma & = & b \times \log\frac{\beta + \Delta\beta}{\alpha} - b \times \log\frac{\beta}{\alpha} \\ & = & b \times \log(\frac{\beta + \Delta\beta}{\beta}) = b \times \log(1 + \frac{\Delta\beta}{\beta}) \end{array} $$
Since b is a constant, Eq. 3 reduces to \( \Delta \gamma \circ \frac {\Delta \beta }{\beta }\) where ∘ means "is constant when there is constancy of", from [76]. The final reduction is a formulation of Weber's law in words, and thus Bernoulli's principle implies Weber's law as a special case. Using [77]'s derivation, it is possible to show the relation between Bernoulli's principle and Weber's law. Starting from the last line of Eq. 3, the following steps yield the relation -
$$\begin{array}{@{}rcl@{}} e^{\Delta\gamma} & = & e^{b \times \log(1 + \frac{\Delta\beta}{\beta})} \implies \\ k_{p} & = & e^{\log(1 + \frac{\Delta\beta}{\beta})^{b}} \text{; where } k_{p} = e^{\Delta\gamma} \\ k_{p} & = & (1 + \frac{\Delta\beta}{\beta})^{b} \text{; since } e^{\log(x)} = x \\ \sqrt[b]{k_{p}} & = & 1 + \frac{\Delta\beta}{\beta} \implies k_{q} - 1 = \frac{\Delta\beta}{\beta} \text{; where } \sqrt[b]{k_{p}} = k_{q} \\ k_{r} & = & \frac{\Delta\beta}{\beta} \text{; Weber's law s.t. } k_{r} = \sqrt[b]{e^{\Delta\gamma}} - 1 \\ \end{array} $$
The reduction \(\Delta \gamma \circ \frac {\Delta \beta }{\beta }\) holds true given the last line of Eq. 4. By observation, it is important to note that the deviation Δ in the stimulus β plays a crucial role in the above formulations. In the current study, instead of computing the sensitivity of the laws for each involved factor, the sensitivity of the deviations in the fold changes of each factor is taken into account. This is done in order to study the effect of deviations in fold changes in time as the concentration of WNT3A changes at a constant rate. Without loss of generality, it was observed over time that most involved factors had sensitivity indices, or strengths of contribution, whose graphs in part or in whole follow a convex or a concave curvature, usually represented by an exponentially increasing or decreasing curve or other nonlinear curves. This points towards the fact that with increasing changes in the stimulated concentration of WNT3A the deviations in fold changes of an involved factor behave either in an increasing or decreasing fashion. Thus deviations in fold changes of the various involved factors do affect the working of the signaling pathway over time. Finally, these deviations approximately capture the difference in fold changes recorded between two time frames and are thus a measure of how much the involvement of a factor affects the pathway due to these differences. This measure of involvement is depicted via the estimated sensitivity indices. The study of deviations in fold changes might help in deciding when a therapeutic drug could be administered in time. Future wet lab tests can confirm the findings of the above solution.
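As a quick numeric sanity check of the reduction above, the following sketch (Python; b and α are arbitrary illustrative constants, not fitted to any data) confirms that Δ γ = b log(1 + Δ β/β) from Eq. 3 stays constant whenever the relative increment Δ β/β is held fixed, which is exactly the statement Δ γ ∘ Δ β/β:

```python
import numpy as np

# Bernoulli's principle: gamma = b * log(beta / alpha)
b, alpha = 2.5, 0.1   # illustrative constants

def delta_gamma(beta, delta_beta):
    return b * np.log((beta + delta_beta) / alpha) - b * np.log(beta / alpha)

# Keep the relative increment delta_beta / beta fixed at k_r = 0.2
betas = np.array([0.5, 1.0, 5.0, 50.0])
k_r = 0.2
print(delta_gamma(betas, k_r * betas))   # identical values for every beta
print(b * np.log(1.0 + k_r))             # the common constant b*log(1 + k_r)
```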
Variance based sensitivity indices
For the variance based indices, [30] proves a theorem that an integrable function can be decomposed into summands of different dimensions, and a Monte Carlo algorithm is used to estimate the sensitivity of the function apropos an arbitrary group of variables. It is assumed that a model denoted by the function u=f(x), x=(x 1,x 2,…,x n ), is defined in a unit n-dimensional cube \(\mathcal {K}^{n}\) with u as the scalar output. The requirement of the problem is to find the sensitivity of the function f(x) with respect to different variables. If u ∗=f(x ∗) is the required solution, then the sensitivity of u ∗ apropos x k is estimated via the partial derivative \(\phantom {\dot {i}\!}(\partial u/ \partial x_{k})_{x = x^{*}}\). This approach is local sensitivity analysis. In global sensitivity analysis, the input x=x ∗ is not specified. This implies that the model f(x) lies inside the cube and the sensitivity indices are regarded as tools for studying the model instead of the solution. Detailed technical aspects with examples can be found in [42] and [80].
Let a group of indices i 1,i 2,…,i s exist, where 1≤i 1<⋯<i s ≤n and 1≤s≤n. Then the notation for sum over all different groups of indices is -
$$ \widehat{\Sigma} T_{i_{1}, i_{2}, \dots, i_{s}} = \Sigma_{i = 1}^{n} T_{i} + \Sigma_{1 \leq i < j \leq n} T_{i,j} + \dots + T_{1, 2, \dots, n} $$
Then the representation of f(x) using Eq. 5 takes the form -
$$\begin{array}{@{}rcl@{}} f(x) & = & f_{0} + \widehat{\Sigma} f_{i_{1}, i_{2}, \ldots, i_{s}} \\ & = & f_{0} + \Sigma_{i} f_{i}(x_{i}) + \Sigma_{i < j} f_{i,j}(x_{i},x_{j}) + \ldots \\ && +\> f_{1, 2, \ldots, n}(x_{1}, x_{2}, \ldots, x_{n}) \end{array} $$
is called the ANOVA-decomposition from [55], or expansion into summands of different dimensions, if f 0 is a constant and the integrals of the summands \(f_{i_{1}, i_{2}, \ldots, i_{s}}\) with respect to their own variables are zero, i.e.,
$$ f_{0} = \int_{\mathcal{K}^{n}} f(x) dx $$
$$ \int_{0}^{1} f_{i_{1}, i_{2}, \ldots, i_{s}}(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{s}}) {dx}_{i_{k}} = 0, 1 \leq k \leq s $$
It follows from Eq. 9 that all summands on the right hand side are orthogonal, i.e. if at least one of the indices in i 1,i 2,…,i s and j 1,j 2,…,j l is not repeated, then -
$$ \int_{0}^{1} f_{i_{1}, i_{2}, \ldots, i_{s}}(x_{i_{1}}, \ldots, x_{i_{s}}) f_{j_{1}, \ldots, j_{l}}(x_{j_{1}}, x_{j_{2}}, \ldots, x_{j_{l}}) dx = 0 $$
[30] proves a theorem stating that a unique expansion of the form of Eq. 7 exists for any f(x) integrable in \(\mathcal {K}^{n}\). In brief, this implies that for each index as well as each group of indices, integrating Eq. 7 yields the following -
$$\begin{array}{@{}rcl@{}} \int_{0}^{1}.. \int_{0}^{1} f(x) dx/{dx}_{i} & = & f_{0} + f_{i}(x_{i}) \end{array} $$
$$\begin{array}{@{}rcl@{}} \int_{0}^{1}.. \int_{0}^{1} f(x) dx/{dx}_{i}{dx}_{j} & = & f_{0} + f_{i}(x_{i}) + f_{j}(x_{j}) \\ && +\> f_{i,j}(x_{i},x_{j}) \end{array} $$
where dx/dx i is \(\prod _{\forall k \in \{1,..,n\}; i \notin k} {dx}_{k}\) and dx/dx i dx j is \(\prod _{\forall k \in \{1,..,n\};i,j \notin k} {dx}_{k}\). For higher orders of grouped indices, similar computations follow. The computation of any summand \(f_{i_{1}, i_{2}, \ldots, i_{s}}(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{s}})\) is reduced to an integral in the cube \(\mathcal {K}^{n}\). The last summand f 1,2,…,n (x 1,x 2,…,x n ) is f(x)−f 0 from Eq. 7. Homma and Saltelli [42] stress that the use of Sobol sensitivity indices does not require evaluation of any \(f_{i_{1}, i_{2}, \ldots, i_{s}}(x_{i_{1}}, x_{i_{2}}, \ldots, x_{i_{s}})\) nor the knowledge of the form of f(x), which might well be represented by a computational model, i.e. a function whose value is only obtained as the output of a computer program.
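As an illustration of the decomposition just described, the following sketch (Python/numpy; the toy function f(x 1,x 2)=x 1 x 2 2 is chosen only because its conditional expectations have simple closed forms) builds the summands explicitly and checks the zero-mean, orthogonality and variance-decomposition properties by Monte Carlo:

```python
import numpy as np

# ANOVA decomposition of f(x1, x2) = x1 * x2^2 on the unit square:
# f0 by full integration, f_i by integrating out the other variable,
# and the last summand as the remainder.
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=(2, 200000))
f = x1 * x2**2

f0 = f.mean()                  # ~ 1/6
f1 = x1 / 3.0 - f0             # E[f | x1] - f0 (closed form for this toy f)
f2 = x2**2 / 2.0 - f0          # E[f | x2] - f0
f12 = f - f0 - f1 - f2         # interaction remainder

# Zero-mean and orthogonality checks, all ~ 0:
print(f1.mean(), f2.mean(), f12.mean())
print((f1 * f2).mean(), (f1 * f12).mean(), (f2 * f12).mean())
# Variance decomposition D ~ D1 + D2 + D12:
print(f.var(), f1.var() + f2.var() + f12.var())
```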
Finally, assuming that f(x) is square integrable, i.e. \(f(x) \in \mathcal {L}_{2}\), then all of \(f_{i_{1}, i_{2},..., i_{s}}(x_{i_{1}}, x_{i_{2}},..., x_{i_{s}}) \in \mathcal {L}_{2}\). Then the following constants
$$\begin{array}{@{}rcl@{}} \int_{\mathcal{K}^{n}} f^{2}(x) dx - f_{0}^{2} & = & D \end{array} $$
$$\begin{array}{@{}rcl@{}} \int_{0}^{1}.. \int_{0}^{1} f_{i_{1},.., i_{s}}^{2}(x_{i_{1}},.., x_{i_{s}}) {dx}_{i_{1}}..{dx}_{i_{s}} & = & D_{i_{1},.., i_{s}} \end{array} $$
are termed variances. Squaring Eq. 7, integrating over \(\mathcal {K}^{n}\) and using the orthogonality property in Eq. 10, D evaluates to -
$$ D = \widehat{\Sigma} D_{i_{1}, i_{2}, \ldots, i_{s}} $$
Then the global sensitivity estimates are defined as -
$$ S_{i_{1}, i_{2}, \ldots, i_{s}} = \frac{D_{i_{1}, i_{2}, \ldots, i_{s}}}{D} $$
It follows from Eqs. 15 and 16 that
$$ \widehat{\Sigma} S_{i_{1}, i_{2}, \ldots, i_{s}} = 1 $$
Clearly, all sensitivity indices are non-negative, i.e. an index \(S_{i_{1}, i_{2}, \ldots, i_{s}}\) = 0 if and only if \(f_{i_{1}, i_{2}, \ldots, i_{s}} \equiv \) 0. The true potential of Sobol indices is observed when the variables x 1,x 2,…,x n are divided into m different groups y 1,y 2,…,y m such that m<n. Then f(x)≡f(y 1,y 2,…,y m ). All properties remain the same for the computation of sensitivity indices, with the fact that integration with respect to y k means integration with respect to all the x i 's in y k . Details of these computations with examples can be found in [30]. Variations and improvements over Sobol indices have already been stated in the "Sensitivity analysis" section.
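For completeness, a schematic Monte Carlo estimator of the first-order and total indices is sketched below (Python/numpy). This is the generic pick-and-freeze scheme (a Saltelli-type estimator for S i and the Jansen estimator for the total index), offered as a hedged illustration of the kind of computation that routines such as sobol2002 and soboljansen, used later in this section, automate; it is not a reimplementation of any one of them:

```python
import numpy as np

def sobol_indices_mc(f, d, n=100000, seed=0):
    """First-order and total Sobol indices by the pick-and-freeze scheme
    (Saltelli-type estimator for S_i, Jansen estimator for ST_i)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    V = np.concatenate([fA, fB]).var()
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                         # resample only coordinate i
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / V        # first-order index
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V  # total index
    return S, ST

# Example with the g-function and coefficients a from the earlier sketch:
# S, ST = sobol_indices_mc(lambda X: g_function(X, a), d=18)
```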
Density based sensitivity indices
As discussed before, the issue with variance based methods is the high computational cost incurred due to the number of interactions among the variables. This further requires the use of screening methods to filter out redundant or unwanted factors that might not have significant impact on the output. Recent work by [66] proposes a new class of sensitivity indices which are a special case of density based indices [63]. These indices can handle multivariate variables easily and rely on density ratio estimation. Key points from [66] are mentioned below.
Considering notation similar to the previous section, \(f : \mathcal {R}^{n} \to \mathcal {R}\) (u=f(x)) is assumed to be continuous. It is also assumed that the X k have known distributions and are independent. Baucells and Borgonovo [81] state that a function which measures the similarity between the distribution of U and that of U|X k can define the impact of X k on U. Thus the impact is defined as -
$$ S_{X_{k}} = \mathcal{E}(d(U,U|X_{k})) $$
where d(·,·) is a dissimilarity measure between two random variables. Here d can take various forms as long as it satisfies the criteria of a dissimilarity measure. Csiszar [69]'s f-divergence between U and U|X k , when all input random variables are considered to be absolutely continuous with respect to the Lebesgue measure on \(\mathcal {R}\), is formulated as -
$$ d_{F}(U||U|X_{k}) = \int_{\mathcal{R}} F(\frac{p_{U}(u)}{p_{U|X_{k}}(u)}) p_{U|X_{k}}(u) du $$
where F is a convex function such that F(1)=0 and p U and \(p_{U|X_{k}}\) are the probability distribution functions of U and U|X k . Standard choices of F include the Kullback-Leibler divergence F(t)=− log e (t), Hellinger distance F(t)=\((\sqrt {t} - 1)^{2}\), Total variation distance F(t)=|t−1|, Pearson χ 2 divergence F(t)=t 2−1 and Neyman χ 2 divergence F(t)=(1−t 2)/t. Substituting Eq. 19 in Eq. 18 gives the following sensitivity index -
$$\begin{array}{@{}rcl@{}} S^{F}_{X_{k}} & = & \int_{\mathcal{R}} d_{F}(U||U|X_{k}) p_{X_{k}}(x) dx \\ & = & \int_{\mathcal{R}} \int_{\mathcal{R}} F(\frac{p_{U}(u)}{p_{U|X_{k}}(u)}) p_{U|X_{k}}(u) p_{X_{k}}(x) dx du \\ & = & \int_{\mathcal{R}^{2}} F(\frac{p_{U}(u) p_{X_{k}}(x)}{p_{U|X_{k}}(u) p_{X_{k}}(x)}) p_{U|X_{k}}(u) p_{X_{k}}(x) dx du \\ & = & \int_{\mathcal{R}^{2}} F(\frac{p_{U}(u) p_{X_{k}}(x)}{p_{X_{k},U}(x,u)}) p_{X_{k},U}(x,u) dx du \end{array} $$
where \(p_{X_{k}}\) and \(p_{X_{k},U}\) are the probability distribution functions of X k and (X k ,U), respectively. Csiszar [69]'s f-divergences imply that these indices are positive and equate to 0 when U and X k are independent. Also, given the formulation of \(S_{X_{k}}^{F}\), it is invariant under any smooth and uniquely invertible transformation of the variables X k and U [82]. This is an advantage over the Sobol sensitivity indices, which are invariant only under linear transformations.
By substituting the different formulations of F in Eq. 20, [66]'s work claims to be the first in establishing the link that previously proposed sensitivity indices are actually special cases of more general indices defined through [69]'s f-divergence. Then Eq. 20 changes to the estimation of the ratio between the joint density of (X k ,U) and the marginals, i.e -
$$\begin{array}{@{}rcl@{}} S^{F}_{X_{k}} & = & \int_{\mathcal{R}^{2}} F(\frac{1}{r(x,u)}) p_{X_{k},U}(x,u) dx du \\ & = & \mathcal{E}_{(X_{k},U)} F(\frac{1}{r(X_{k},U)}) \end{array} $$
where r(x,u) = \((p_{X_{k},U}(x,u)) / (p_{U}(u) p_{X_{k}}(x))\). Multivariate extensions of the same are also possible under the same formulation.
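To ground Eqs. 19–21, the following rough sketch (Python/numpy; histogram binning is a crude stand-in for the density ratio estimation actually advocated in [66], and the bin counts are arbitrary) estimates the index for F(t)=|t−1| (total variation) by comparing the conditional distribution of U within bins of X k against its marginal:

```python
import numpy as np

def tv_density_index(x, u, n_xbins=10, n_ubins=30):
    """Rough histogram estimate of S^F_{X_k} with F(t) = |t - 1|,
    i.e. E_X [ integral of |p_U - p_{U|X}| du ]."""
    edges_u = np.quantile(u, np.linspace(0, 1, n_ubins + 1))
    p_u, _ = np.histogram(u, bins=edges_u)
    p_u = p_u / p_u.sum()                       # marginal of U
    xbins = np.quantile(x, np.linspace(0, 1, n_xbins + 1))
    idx = np.clip(np.digitize(x, xbins[1:-1]), 0, n_xbins - 1)
    s = 0.0
    for b in range(n_xbins):
        mask = idx == b
        if mask.sum() == 0:
            continue
        p_cond, _ = np.histogram(u[mask], bins=edges_u)
        p_cond = p_cond / p_cond.sum()          # conditional of U given bin b
        s += mask.mean() * np.abs(p_u - p_cond).sum()
    return s                                    # ~ 0 when U independent of X_k

rng = np.random.default_rng(2)
x = rng.uniform(size=50000)
print(tv_density_index(x, rng.uniform(size=50000)))               # ~ 0
print(tv_density_index(x, x**2 + 0.1 * rng.normal(size=50000)))   # >> 0
```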
Finally, given two random vectors \(X \in \mathcal {R}^{p}\) and \(Y \in \mathcal {R}^{q}\), a dependence measure quantifies the dependence between X and Y with the property that the measure equates to 0 if and only if X and Y are independent. These measures carry deep links [83] with distances between embeddings of distributions into reproducing kernel Hilbert spaces (RKHS), and here the related Hilbert-Schmidt independence criterion (HSIC by [68]) is explained.
In a very brief manner, from an extremely simple introduction by [84] - "We first defined a field, which is a space that supports the usual operations of addition, subtraction, multiplication and division. We imposed an ordering on the field and described what it means for a field to be complete. We then defined vector spaces over fields, which are spaces that interact in a friendly way with their associated fields. We defined complete vector spaces and extended them to Banach spaces by adding a norm. Banach spaces were then extended to Hilbert spaces with the addition of a dot product." Mathematically, a Hilbert space \(\mathcal {H}\) with elements \(r,s \in \mathcal {H}\) has a dot product \(\langle r,s \rangle _{\mathcal {H}}\), also written r·s. When \(\mathcal {H}\) is a vector space over a field \(\mathcal {F}\), the dot product is an element of \(\mathcal {F}\). The product \(\langle r,s \rangle _{\mathcal {H}}\) follows the below mentioned properties when \(r,s,t \in \mathcal {H}\) and for all \(a \in \mathcal {F}\) -
Associative : (ar)·s = a(r·s)
Commutative : r·s = s·r
Distributive : r·(s+t) = r·s+r·t
Given a complete vector space \(\mathcal {V}\) with a dot product 〈·,·〉, the norm on \(\mathcal {V}\) defined by \(||r||_{\mathcal {V}}\) = \(\sqrt {\langle r,r \rangle }\) makes this space into a Banach space and therefore into a Hilbert space.
A reproducing kernel Hilbert space (RKHS) builds on a Hilbert space \(\mathcal {H}\) and requires that all Dirac evaluation functionals in \(\mathcal {H}\) are bounded and continuous (one implies the other). Assume \(\mathcal {H}\) is the \(\mathcal {L}_{2}\) space of functions from X to \(\mathcal {R}\) for some measurable X. For an element x∈X, a Dirac evaluation functional at x is a functional \(\delta _{x} \in \mathcal {H}\) such that δ x (g)=g(x). For the case of real numbers, x is a vector and g a function which maps from this vector space to \(\mathcal {R}\). Then δ x is simply a function which maps g to the value g has at x. Thus, δ x is a function from (\(\mathcal {R}^{n} \mapsto \mathcal {R}\)) into \(\mathcal {R}\).
The requirement of Dirac evaluation functionals basically means (via the [85] representation theorem) that if ϕ is a bounded linear functional (conditions satisfied by the Dirac evaluation functionals) on a Hilbert space \(\mathcal {H}\), then there is a unique vector ℓ in \(\mathcal {H}\) such that ϕ g = \(\langle g,\ell \rangle _{\mathcal {H}}\) for all \(g \in \mathcal {H}\). Translating this theorem back into Dirac evaluation functionals, for each δ x there is a unique vector k x in \(\mathcal {H}\) such that δ x g = g(x) = \(\langle g,k_{x}\rangle _{\mathcal {H}}\). The reproducing kernel K for \(\mathcal {H}\) is then defined as : \(\phantom {\dot {i}\!} K(x,x') = \langle k_{x},k_{x'} \rangle \), where k x and \(\phantom {\dot {i}\!}k_{x'}\) are the unique representatives of δ x and \(\phantom {\dot {i}\!}\delta _{x'}\). The main property of interest is \(\langle g,K(x,x')\rangle _{\mathcal {H}}\) = g(x ′). Furthermore, k x is defined to be the function y↦K(x,y) and thus the reproducibility is given by \(\langle K(x,\cdot),K(y,\cdot) \rangle _{\mathcal {H}}\) = K(x,y).
Basically, the distance measure between two vectors represents the degree of closeness between them. This degree of closeness is computed on the basis of the discriminative patterns inherent in the vectors. Since these patterns are used implicitly in the distance metric, a question that arises is: how can these distance metrics be used for decoding purposes?
The kernel formulation, as proposed by [70], is a solution to the problem mentioned above. For simplicity, we consider the labels of examples as binary in nature. Let \(\mathbf {x}_{i} \in \mathcal {R}^{n}\) be the set of n feature values with corresponding category of the example label (y i ) in data set \(\mathcal {D}\). Then the data points can be mapped to a higher dimensional space \(\mathcal {H}\) by the transformation ϕ:
$$ \mathbf{\phi}: \mathbf{x}_{i} \in \mathcal{R}^{n} \mapsto \mathbf{\phi}(\mathbf{x}_{i}) \in \mathcal{H} $$
This \(\mathcal {H}\) is a Hilbert space, which is a strict inner product space with the properties of completeness and separability. The inner product formulation of a space helps in discriminating the location of a data point w.r.t a separating hyperplane in \(\mathcal {H}\). This is achieved by evaluating the inner product between the normal vector representing the hyperplane and the vectorial representation of a data point in \(\mathcal {H}\) (Fig. 4 represents the geometrical interpretation). Thus, the idea behind Eq. (22) is that even if the data points are nonlinearly clustered in the space \(\mathcal {R}^{n}\), the transformation spreads the data points into \(\mathcal {H}\), such that they can be linearly separated in its range in \(\mathcal {H}\).
A geometrical interpretation of mapping nonlinearly separable data into a higher dimensional space where it is assumed to be linearly separable, subject to the existence of a dot product
Often, the evaluation of the dot product in higher dimensional spaces is computationally expensive. To avoid incurring this cost, the concept of kernels is employed. The trick is to formulate kernel functions that depend on a pair of data points in the space \(\mathcal {R}^{n}\), under the assumption that their evaluation is equivalent to a dot product in the higher dimensional space. This is given as:
$$ \kappa(\mathbf{x}_{i}, \mathbf{x}_{j}) = <\mathbf{\phi}(\mathbf{x}_{i}), \mathbf{\phi}(\mathbf{x}_{j})> $$
Two advantages become immediately apparent. First, the evaluation of such kernel functions in the lower dimensional space is computationally less expensive than evaluating the dot product in the higher dimensional space. Second, it relieves the burden of searching for an appropriate transformation that may map the data points in \(\mathcal {R}^{n}\) to \(\mathcal {H}\). Instead, all computations regarding discrimination of the location of data points in the higher dimensional space involve evaluation of the kernel functions in the lower dimension. The matrix containing these kernel evaluations is referred to as the kernel matrix. With each cell of the kernel matrix containing a kernel evaluation between a pair of data points, the kernel matrix is square in nature.
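As a tiny numeric check of Eq. 23, the following sketch (Python/numpy) uses the homogeneous polynomial kernel of degree 2 on \(\mathcal {R}^{2}\), whose feature map is known explicitly, and verifies that the kernel evaluated in the input space equals the dot product in the feature space:

```python
import numpy as np

# Kernel trick check: kappa(x, z) = (x . z)^2 equals <phi(x), phi(z)>
# with the explicit feature map phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2) in R^3.
def phi(x):
    return np.array([x[0]**2, np.sqrt(2.0) * x[0] * x[1], x[1]**2])

rng = np.random.default_rng(3)
x, z = rng.normal(size=2), rng.normal(size=2)
print(np.dot(x, z) ** 2)          # kernel evaluated in the input space
print(np.dot(phi(x), phi(z)))     # dot product in the feature space: identical
```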
As an example in practical applications, once the kernel has been computed, a pattern analysis algorithm uses the kernel function to evaluate and predict the nature of the new example using the general formula:
$$\begin{array}{@{}rcl@{}} f(\mathbf{z}) & = & <\mathbf{w}, \phi{(\mathbf{z}})> + b \\ & = & <\sum\limits_{i = 1}^{N} \alpha_{i} \times y_{i} \times \phi(\mathbf{x}_{i}), \phi{(\mathbf{z}})> + b \\ & = & \sum\limits_{i = 1}^{N} \alpha_{i} \times y_{i} \times <\phi(\mathbf{x}_{i}), \phi{(\mathbf{z}})> + b \\ & = & \sum\limits_{i = 1}^{N} \alpha_{i} \times y_{i} \times \kappa(\mathbf{x}_{i}, \mathbf{z}) + b \\ \end{array} $$
where w defines the hyperplane as some linear combination of training basis vectors, z is the test data point, y i the class label for training point x i , and α i and b are constants. Various transformations of the kernel function can be employed, based on the properties a kernel must satisfy. Interested readers are referred to [86] for a detailed description of these properties.
The Hilbert-Schmidt independence criterion (HSIC) proposed by [68] is based on a kernel approach for finding dependences and on cross-covariance operators in RKHS. Let \(X \in \mathcal {X}\) have a distribution P X and consider a RKHS \(\mathcal {A}\) of functions \(\mathcal {X} \rightarrow \mathcal {R}\) with kernel \(k_{\mathcal {X}}\) and dot product \(\langle \cdot,\cdot \rangle _{\mathcal {A}}\). Similarly, let \(U \in \mathcal {U}\) have a distribution P U and consider a RKHS \(\mathcal {B}\) of functions \(\mathcal {U} \rightarrow \mathcal {R}\) with kernel \(k_{\mathcal {U}}\) and dot product \(\langle \cdot,\cdot \rangle _{\mathcal {B}}\). Then the cross-covariance operator C XU associated with the joint distribution P XU of (X,U) is the linear operator \(\mathcal {B} \rightarrow \mathcal {A}\) defined for every \(a \in \mathcal {A}\) and \(b \in \mathcal {B}\) as -
$$ \langle a,C_{XU} b\rangle_{\mathcal{A}} = \mathcal{E}_{XU}[a(X) b(U)] - \mathcal{E}_{X}a(X) \mathcal{E}_{U}b(U) $$
The cross-covariance operator generalizes the covariance matrix by representing higher order correlations between X and U through nonlinear kernels. For every linear operator \(C:\mathcal {B} \rightarrow \mathcal {A}\) and provided the sum converges, the squared Hilbert-Schmidt norm of C is given by -
$$ ||C||^{2}_{HS} = \Sigma_{k,l} \langle a_{k}, {Cb}_{l} \rangle^{2}_{\mathcal{A}} $$
where a k and b l are orthonormal bases of \(\mathcal {A}\) and \(\mathcal {B}\), respectively. The HSIC criterion is then defined as the squared Hilbert-Schmidt norm of the cross-covariance operator -
$$\begin{array}{@{}rcl@{}} {}HSIC(X,U)_{\mathcal{A},\mathcal{B}} \,=\, \left\{\!\! \begin{array}{l} ||C_{XU}||^{2}_{HS} = \\ \mathcal{E}_{X,X',U,U'}k_{\mathcal{X}}(X,X')k_{\mathcal{U}}(U,U') +\\ \mathcal{E}_{X,X'}k_{\mathcal{X}}(X,X')\mathcal{E}_{U,U'}k_{\mathcal{U}}(U,U') - \\ 2\mathcal{E}_{X,U}\left[\mathcal{E}_{X'}k_{\mathcal{X}}(X,X')\mathcal{E}_{U'}k_{\mathcal{U}}(U,U')\right] \end{array} \right. \end{array} $$
where the equality in terms of kernels is proved in [68]. Finally, assuming (X i ,U i ) (i=1,2,...,n) is a sample of the random vector (X,U), let \(K_{\mathcal {X}}\) and \(K_{\mathcal {U}}\) denote the Gram matrices with entries \(K_{\mathcal {X}}(i,j)\) = \(k_{\mathcal {X}}(X_{i},X_{j})\) and \(K_{\mathcal {U}}(i,j)\) = \(k_{\mathcal {U}}(U_{i},U_{j})\). [68] proposes the following estimator for \({HSIC}_{n}(X,U)_{\mathcal {A},\mathcal {B}}\) -
$$ {HSIC}_{n}(X,U)_{\mathcal{A},\mathcal{B}} = \frac{1}{n^{2}}Tr(K_{\mathcal{X}}{HK}_{\mathcal{U}}H) $$
where H is the centering matrix such that H(i,j) = \(\delta _{i,j} - \frac {1}{n}\). Then \({HSIC}_{n}(X,U)_{\mathcal {A},\mathcal {B}}\) can be expressed as -
$$\begin{array}{@{}rcl@{}} HSIC(X,U)_{\mathcal{A},\mathcal{B}} = \left\{ \begin{array}{l} \frac{1}{n^{2}} \Sigma_{i,j = 1}^{n} k_{\mathcal{X}}(X_{i},X_{j})k_{\mathcal{U}}(U_{i},U_{j}) \\ + \frac{1}{n^{2}} \Sigma_{i,j = 1}^{n} k_{\mathcal{X}}(X_{i},X_{j}) \times \\ \frac{1}{n^{2}} \Sigma_{i,j = 1}^{n} k_{\mathcal{U}}(U_{i},U_{j}) \\ - \frac{2}{n} \Sigma_{i = 1}^{n} [\frac{1}{n} \Sigma_{j = 1}^{n} k_{\mathcal{X}}(X_{i},X_{j}) \times \\ \frac{1}{n} \Sigma_{j = 1}^{n} k_{\mathcal{U}}(U_{i},U_{j})] \end{array} \right. \end{array} $$
Finally, [66] proposes the sensitivity index based on distance correlation as -
$$ S^{HSIC_{\mathcal{A},\mathcal{B}}}_{X_{k}} = R(X_{k},U)_{\mathcal{A},\mathcal{B}} $$
where the kernel based distance correlation is given by -
$$ R^{2}(X,U)_{\mathcal{A},\mathcal{B}} = \frac{HSIC(X,U)_{\mathcal{A},\mathcal{B}}}{\sqrt{HSIC(X,X)_{\mathcal{A},\mathcal{A}} HSIC(U,U)_{\mathcal{B},\mathcal{B}}}} $$
where the kernels inducing \(\mathcal {A}\) and \(\mathcal {B}\) are to be chosen within a universal class of kernels. Similar multivariate formulations of Eq. 28 are possible.
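Eqs. 28, 30 and 31 translate directly into a few lines of code. The sketch below (Python/numpy; the Gaussian rbf kernel and its bandwidth are illustrative choices, and this biased estimator is only a schematic counterpart of what sensiHSIC computes) builds the Gram matrices, the centering matrix H, the HSIC estimator and the kernel distance correlation index:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gram matrix K(i,j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(K, L):
    """Biased HSIC estimator (Eq. 28): (1/n^2) Tr(K H L H)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / n**2

def hsic_index(X, U, sigma=1.0):
    """Kernel distance-correlation sensitivity index R(X_k, U), Eqs. 30-31."""
    K, L = rbf_gram(X, sigma), rbf_gram(U, sigma)
    r2 = hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))
    return np.sqrt(r2)

rng = np.random.default_rng(4)
x = rng.normal(size=(500, 1))
print(hsic_index(x, rng.normal(size=(500, 1))))   # ~ 0: independent inputs
print(hsic_index(x, np.sin(3 * x) + 0.1 * rng.normal(size=(500, 1))))
# clearly > 0: HSIC picks up the nonlinear dependence
```

The second call illustrates the point made earlier: a purely nonlinear relation that a linear correlation would miss still yields a clearly positive index.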
Description of the dataset & design of experiments
STATIC DATA - A simple static dataset containing expression values measured for a few genes known to have an important role in human colorectal cancer cases has been taken from [28]. Most of the expression values recorded are for genes that play a role in the Wnt signaling pathway at an extracellular level and are known to have an inhibitory effect on the Wnt pathway due to epigenetic factors. For each of the 24 normal mucosa and 24 human colorectal tumor cases, gene expression values were recorded for 14 genes belonging to the families of SFRP, DKK, WIF1 and DACT. Also, expression values of established Wnt pathway target genes like LEF1, MYC, CD44 and CCND1 were recorded per sample.
TIME SERIES DATA - Contrary to the static data described above, [27] presents a bigger set of 71 Wnt-related gene expression values at 6 different time points over a 24-hour period using qPCR. The changes represent the fold change in the expression levels of genes in 200 ng/mL WNT3A-stimulated HEK 293 cells in time relative to their levels in unstimulated, serum-starved cells at 0 hours. The data are the means of three biological replicates. Only genes whose mean transcript levels changed by more than two-fold at one or more time points were considered significant. Positive (negative) numbers represent up (down) regulation.
Note that green (red) represents activation (repression) in the heat maps of data in [28] and [27].
GENERAL ISSUES - ∙ Here the input factors are the gene expression values for both normal and tumor cases in the static data. For the case of time series data, the input factors are the fold change (deviations in fold change) expression values of genes at different time points (periods). Also, for the time series data, in the first experiment the analysis of a pair of the fold changes recorded at two different consecutive time points, i.e. t i &t i+1, is done. In the second experiment, the analysis of a pair of deviations in fold changes recorded at t i &t i+1 and t i+1&t i+2 is done. In this work, in both the static and the time series datasets, the analysis is done to study the entire model/pathway rather than find a particular solution to the model/pathway. Thus global sensitivity analysis is employed. But the local sensitivity methods are used to observe and compare the effect of individual factors via 1st order analysis w.r.t total order analysis (i.e. global analysis). In such an experiment, the output is the sensitivity indices of the individual factors participating in the model. This is different from the general trend of observing the sensitivity of parameter values that affect the pathway based on differential equations that model a reaction. Thus the model/pathway is studied as a whole by observing the sensitivities of the individual factors.
∙ Static data - Note that the 24 normal and tumor cases are all different from each other. The 18 genes studied in [28] are the input factors and it is unlikely that there will be correlations between different patients. The phenotypic behaviour might be similar at a grander scale. Also, since the sampling number is very small for a network of this scale, large standard deviations can be observed in many results, especially when the Sobol method is used. But the sampling number is not the sole issue: for the same number of samplings, large deviations are not observed in the kernel based density methods. The deviations arise more from the fact that the nonlinearities are not captured in an efficient way in the variance based Sobol methods, due to which the resulting indices have high variance in numerical value, whereas the kernel based methods do not.
∙ Time series data - All the measurement data at each time point are generated by a normal distribution with a fixed standard deviation of 0.005 plus a noise term. One might enquire as to how this data generation matches the real experimental data. The kernel based density methods require a distribution of data. The original experimental data of fold change was taken from each of the genes per time point. Gujral and MacBeath [27] state that to determine the fold change in gene expression induced by stimulation with Wnt3a, the normalized expression of each gene in the Wnt3a-stimulated sample was divided by the normalized expression of the same gene in the unstimulated sample. The qPCR data presented are the mean of three biological replicates. By using a stringent margin of 0.005 and a noise term, the distribution of the data near the mean value is kept constricted. How much it deviates from reality beyond the errors of measurement is not known to the author! Finally, 74 gene expression values are taken as input per time point for evaluating the sensitivity of each of the genetic factors that affect the model/pathway. Again, one is not looking for a solution to the model in terms of good values for parameters but studying the degree of influence of each of the input factors that constitute the model/pathway (a sketch of this data generation step follows below).
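The generation step just described can be mimicked as follows (Python/numpy; R's jitter() adds small uniform noise, so a uniform term is used here as its stand-in, and the jitter amount is an assumption, not a value taken from the paper):

```python
import numpy as np

def perturbed_samples(fold_change, n=100, sd=0.005, jitter_amount=None, seed=0):
    """Generate n samples around one recorded fold change: a normal
    distribution (mean = recorded value, sd = 0.005) plus a small uniform
    jitter-style noise term (amount below is an assumption)."""
    rng = np.random.default_rng(seed)
    if jitter_amount is None:
        jitter_amount = sd / 5.0   # assumed default, not from the paper
    base = rng.normal(loc=fold_change, scale=sd, size=n)
    return base + rng.uniform(-jitter_amount, jitter_amount, size=n)

# e.g. samples around a recorded fold change of 2.3 at one time point:
print(perturbed_samples(2.3, n=5))
```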
DESIGN OF EXPERIMENTS - The reported results will be based on scaled as well as unscaled datasets. For the static data, only the scaled results are reported. This is mainly due to the fact that the measurements vary over a wide range, due to which there is often an error in the computed estimates of these indices. The data for the time series do not vary over a wide range and thus the results are reported for both the scaled and the non scaled versions. Total sensitivity indices and 1st order indices will be used for sensitivity analysis. For addressing a biological question with known prior knowledge, the order of indices might be increased. While studying the interaction among the various genetic factors using static data, tumor samples are considered separately from normal samples. Bootstrapping without repetition on a smaller sample number is employed to generate estimates of the indices, which are then averaged. This takes into account the variance in the data and generates confidence bands for the indices. For the case of time series data, interactions among the contributing factors are studied by comparing (1) pairs of fold changes at single time points and (2) pairs of deviations in fold changes between pairs of time points. Generation of distributions around measurements at single time points with added noise is done to estimate the indices.
To measure the strength of the contributing factors in the static dataset by [28], 1st order and total sensitivity indices were generated. For each of the expression values of the genes recorded in the normal and tumor cases, the computation of the indices was done using bootstrapped samples in three different experiments, each with a sample size of 8, 16 and 24, respectively. With only 24 samples in total, 20 bootstraps were generated for each set and the results were generated. From these replicates, the mean of the indices is reported along with the 95% confidence bands. Figure 5 represents a cartoon of the experimental setup followed to achieve the desired results; a compact sketch of the subsampling procedure follows below. Note that plots of sensitivity indices have been relegated to the Appendix.
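The subsample-and-average design just described can be written compactly (Python/numpy; bootstrap_indices, index_fn and normal_data are hypothetical names introduced only for this sketch, and index_fn stands for whichever index estimator is plugged in):

```python
import numpy as np

def bootstrap_indices(data, index_fn, sample_size, n_boot=20, seed=0):
    """Subsample without repetition, recompute the sensitivity indices each
    time, and report their mean and 95% confidence band per input factor."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        rows = rng.choice(len(data), size=sample_size, replace=False)
        estimates.append(index_fn(data[rows]))
    estimates = np.asarray(estimates)
    mean = estimates.mean(axis=0)
    lo, hi = np.percentile(estimates, [2.5, 97.5], axis=0)
    return mean, (lo, hi)

# e.g. with the 24 normal samples and an HSIC-based estimator `index_fn`:
# mean, band = bootstrap_indices(normal_data, index_fn, sample_size=16)
```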
A cartoon of experimental setup. Step - (1) Segregation of data into normal and tumor cases. (2) Further data division per case and bootstrap sampling with no repetitions for different iterations. (3) Assembling bootstrapped data and application of SA methods. (4) Generation of SI's for normal and tumor case per gene per iteration. (5) Generation of averaged SI and confidence bands per case per gene
Using sensiFdiv, all indices are computed as positive, and those nearing zero indicate that the contribution of a factor is independent of the behaviour under consideration. Here, while comparing the indices of the gene expression values for normal and tumor cases, it was found that most of the involved intra/extracellular factors had some degree of contribution in the normal case and almost negligible contribution in the tumor case (see Figs. 6, 7 and 8). Apart from the negative readings for the KL divergence (Fig. 9), the interpretations remain the same. This implies that the basic [69] f-divergence based indices might not capture the intrinsic genotypic effects for the normal and the tumorous cases. From the biological perspective, these graphs do not help in interpreting the strength of the contributions in normal and tumor cases. One might rank the indices for relative contributions, but this might not shed enough light on how the factors are behaving in normal and tumor cases.
sensiFdiv indices using Total Variation distance. Red - indices for normal. Blue - indices for tumor
sensiFdiv indices for Hellinger distance. Red - indices for normal. Blue - indices for tumor
sensiFdiv indices for Pearson χ 2 distance. Red - indices for normal. Blue - indices for tumor
sensiFdiv indices for Kullback-Leibler divergence. Red - indices for normal. Blue - indices for tumor
A more powerful way to analyse the contributions is via the newly proposed HSIC based measures by [66]. These distances use the kernel trick, which can capture intrinsic properties inherent in the recorded measurements by transforming the data into a higher dimensional space. Using these distances in sensiHSIC, it was found that the contributions of the various factors in the normal and the tumor cases vary drastically. This is shown in Figs. 10, 11 and 12. The laplace and the rbf kernels give more reliable sensitivity estimates for the involved factors than the linear kernel. Studying the results in figures 6 and 7 of [23], based on prior biological knowledge encoded in the Bayesian network models, along with the indices of the aforementioned figures, it can be found that the indices of the family of DACT−1/2/3 show higher (lower) sensitivity in the normal (tumor) case where the activation (repression) happens. Again, of the DACT family, DACT−1 has greater influence than DACT−3 (than DACT−2) based on the values of the sensitivity indices. These indices indicate the dependence of a factor on the output of the model characterized by the signaling being active (passive) in the normal (tumor) cases; 0 (1) means no (full) dependence of the output on the input factor. The laplace and the rbf kernels were found to give more consistent results than the linear kernel, and the following description discusses the results from these kernels. For the SFRP family, SFRP−1/2/5 show higher (lower) sensitivity in the normal (tumor) case where the activation (repression) happens (see Figs. 11 and 12). For SFRP−3/4 the influence is higher (lower) in the tumor (normal) case. In all the three types of kernels, WIF1, MYC and CCND1 show stronger (weaker) influence of repression (activation) in the normal (tumor) case (see Figs. 11 and 12). CD44 showed variable influence while observing the normal and tumor cases. [23] could not derive proper inferences for LEF1, but the sensitivity indices indicate that the influence of LEF1 in tumor samples is higher than in normal samples. This points to LEF1's major role in tumor cases. Finally, for the family of DKK, DKK1 and DKK3−2 show similar behaviour of expression (repression) in normal (tumor) cases (see [23]). For the former, the prominence of the influence is shown in the higher (lower) sensitivity for the tumor (normal) case. For the latter, higher (lower) sensitivity was recorded for the normal (tumor) case. This implies that the latter has a more influential role in the normal case while the former has a more influential role in the tumor case. DKK3−1 was found to be expressed (repressed) in normal (tumor) cases and its dominant role is prominent from the higher sensitivity bar for normal than for tumor. Similar behaviour of DKK2 was inferred by [23], but the sensitivity indices point to varied results and thus a conclusion cannot be drawn. Note that the greater the value of the sensitivity index, the greater an input factor's contribution to the model.
sensiHSIC indices for linear kernel. Red - indices for normal. Blue - indices for tumor
sensiHSIC indices for laplace kernel. Red - indices for normal. Blue - indices for tumor
sensiHSIC indices for rbf kernel. Red - indices for normal. Blue - indices for tumor
The first order indices generated by the sobol functions implemented in sobol2002 (Fig. 13), sobol2007 (Fig. 14), soboljansen (Fig. 15), sobolmartinez (Fig. 16) and sobol (Fig. 17) do not point to significant dependencies of the input factors. This can be attributed to the fact that there are fewer samples that help in the estimation of the sensitivity indices. Finally, the total order indices need to be investigated in the context of the first order indices. It can be observed that sobol2002 (Fig. 18) and sobol2007 (Fig. 21) give much better estimates than soboljansen (Fig. 19) and sobolmartinez (Fig. 20). Most importantly, it is the former two that closely match the sensitivity indices estimated using the HSIC distance measures. Interpretations from sobol2002 (Fig. 18) and sobol2007 (Fig. 21) are the same as those described above using the laplace and the rbf kernels from the density based HSIC measure.
Sobol 2002 first order indices. Red - indices for normal. Blue - indices for tumor
Sobol jansen first order indices. Red - indices for normal. Blue - indices for tumor
Sobol martinez first order indices. Red - indices for normal. Blue - indices for tumor
Sobol first order indices. Red - indices for normal. Blue - indices for tumor
Sobol 2002 total order indices. Red - indices for normal. Blue - indices for tumor
Sobol jansen total order indices. Red - indices for normal. Blue - indices for tumor
Sobol martinez total order indices. Red - indices for normal. Blue - indices for tumor
In summary, the sensitivity indices confirm the inferred results in [23] but do not help in inferring the causal relations using the static data. In combination with the results obtained from the Bayesian network models in [23], it is possible to study the effect of the input factors for the pathway in both normal and tumor cases. The results of the sensitivity indices indicate how much these factors influence the pathway in normal and tumor cases. Again, not all indices reveal important information. So users must be cautious with the results and see which measures reveal information that is close to already established or computationally estimated biological facts. Here the density based sensitivity indices captured information more precisely than the variance based indices (except for the total order indices from sobol2002/7, which gave similar results as sensiHSIC). This is attributed to the analytical strength provided by the distance measures using the kernel trick via RKHS, which capture nonlinear relations in a higher dimensional space more precisely. Finally, in a recent unpublished work by [87], it has been validated that the HSIC indices prove to be more sensitive to the global behaviour than the Sobol indices.
Time series data
Next, the analysis of the time series data is addressed using the sensitivity indices. THERE ARE TWO EXPERIMENTS THAT HAVE BEEN PERFORMED. THE FIRST IS RELATED TO THE ANALYSIS OF A PAIR OF THE FOLD CHANGES RECORDED AT TWO DIFFERENT CONSECUTIVE TIME POINTS, I.E. t i &t i+1. THE SECOND IS RELATED TO THE ANALYSIS OF A PAIR OF DEVIATIONS IN FOLD CHANGES RECORDED AT t i &t i+1 AND t i+1&t i+2. The former compares the measurements in time while the latter takes into account the deviations that happen in time. For each measurement at a time point a normal distribution was generated with the original recorded value as the mean, a standard deviation of 0.005 and an added noise in the form of jitter (see function jitter in the R language). For the time measurements of each of the genes recorded in [27], an analysis of the sensitivity indices for both the scaled and the non-scaled data was done. Here the analysis for the non-scaled data is presented. The reason for not presenting the scaled data is that the sample measurements did not vary drastically, as was found in the case of the static data, which had earlier caused trouble in the estimation of indices. Another reason for not reporting the results on the scaled data is that the non-scaled ones present raw sensitive information which might be lost in scaling via normalization. Note that [27] uses self organizing maps (SOM) to cluster data and uses correlational analysis to derive its conclusions. In this work, the idea of clustering is abandoned and sensitivity indices are estimated for the recorded factors participating in the pathway. Also the simple correlational analysis is dropped in favour of highly analytical kernel based distance measures which easily capture the nonlinearities inherent in the data. Figure 22 represents the experimental setup in a pictorial format.
A cartoon of experimental setup. Step - (1) Time recordings of different gene expression values after WNT3A stimulation at different hrs. (2) Generation of normal distribution for every FC & ΔFC for Gx at & between different time snapshots, respectively. mean - original Gx exp value; standard dev. - 0.005 + noise from jitter function in R. (3) Generation of data set for FC & ΔFC. (4) Generation of samples for SA. (5) Compute SI for FC & ΔFC
Also, in a recent development, [25] point to two findings, namely ∙ the fold changes of β-catenin are robust and ∙ the transcriptional machinery of the Wnt pathway depends on the fold changes in β-catenin instead of its absolute levels, suggesting that some gene transcription networks must respond to fold changes in signals according to Weber's law in sensory physiology. The second study also carries weight in the fact that, through the study of the deviations in the fold changes, it is possible to check whether the recently observed and reported natural psychophysical laws in the signaling pathway hold true or not. Finally, using the sensitivity indices an effort is made to confirm the existing biological causal relations that have been shown in [23].
Analysis of fold changes at different time points
Let's begin with the gene WNT3A, as changes in its concentration lead to the recording of the measurements of the different genes by [27]. Of the list of genes recorded, the indices of those which are influenced by the concentration of WNT3A are analysed. Next, based on these confirmations and patterns of indices over time, conclusions for the other enlisted genes are drawn. For the former list, the following genes FZD1, FZD2, LEF1, TCF7, TCF7L1, LRP6, DVL1, SFRP1, SFRP4, CTBP1, CTBP2, PORCN, GSK3β, MYC, APC and CTNNB1 are considered. Figures 23 and 24 represent the indices computed over time. Columns represent the different kinds of indices computed while the rows show the respective genes. Each graph contains the sensitivity index computed at a particular time point (represented by a coloured bar). It should be observed from the aforementioned figures that the variants of the Sobol first order (FO) and the total order (TO) indices computed under different formulations were not very informative. This can be seen in graphs where some indices are negative and at some places the behaviour across time and genes remains the same. In contrast to this, the indices generated via the original Sobol function (under the column Sobol-SBL) as well as the sensiHSIC were found to be more reliable. Again, the rbf and laplace kernels under the HSIC formulations showed similar behaviour in comparison to the use of the linear kernel.
Column wise - methods to estimate sensitivity indices. Row wise - sensitivity indicies for each gene. For each graph, the bars represent sensitivity indices computed at t1 (red), t2 (blue), t3 (green), t4 (gray) and t5 (yellow). Indices were computed using non scaled time series data. TO - total order; FO - first order; SBL - Sobol
Gujral and MacBeath [27] stimulate the serum starved HEK293 cells with 200 ng/mL of WNT3A at different lengths of time. After the first hour (t 1), (under HSIC-rbf/laplace) it was observed that the sensitivity of WNT3A was low (red bar). The maximum contribution of WNT3A can be recorded after the stimulation at the 12th hour. But due to increased stimulation by WNT3A later on, there is an increased sensitivity of FZD-1/2 as well as LRP6. The FZD or frizzled family of 7-transmembrane proteins [88] works in tandem with LRP-5/6 as binding partners for the Wnt ligands to initiate Wnt signaling. Consistent with the findings of [89] and [90], FZD1 was found to be expressed. But there is a fair decrease in its contribution in the next two time frames, i.e. after the 3rd and the 6th hour. The maximum contribution of FZD1 is found after the WNT3A stimulation at the 12th hour. This probably points to repetitive involvement of FZD1 after a certain period of time to initiate the working of the signaling pathway. FZD2 showed increasing significance in contribution after the first two time frames. The contribution drops significantly after the stimulation at the 3rd hour and gradually increases in the next two time frames. The repetitive behaviour is similar to FZD1, yet its role is not well studied, as it appears to bind to both WNT3A, which promotes Wnt/β-catenin signaling, and WNT5A, which inhibits it, as shown by [91].
Klapholz-Brown et al. [92] and [93] show that there is increased β-catenin due to WNT3A stimulation, which is depicted by the increased sensitivity of CTNNB1 expression in one of the above mentioned figures. MYC (i.e. c−MYC) is known to be over expressed in colorectal cancer cases, mainly due to the activation of the TCF−4 transcription factor via intra nuclear binding of β-catenin [94], either by APC mutations [95] or β-catenin mutations [96]. The sensitivity of MYC increased monotonically, but after the 6th hour it dropped significantly. Probably MYC does not play an important role at later stages. As found in [97] and [98], the DVL family interacts with the frizzled FZD members, leading to disassembly of the β-catenin destruction complex and subsequent translocation of β-catenin to the nucleus. Developments on the DVL family have been extensively recorded in [99] and [100], and the significance of DVL1 in Taiwanese colorectal cancer in [101]. DVL1 shows a marked increase in sensitivity as the concentration of WNT3A increases in time. This is supported by the fact that ligand binding at the membrane leads to the formation of a complex including DVL1, FZD and AXIN.
Negative regulators like SFRP4 were found to have lower sensitivity as WNT3A concentration increases, but remained constant for most of the period. Meanwhile the significance of the Wnt antagonist SFRP1 ([102], [103] and [104]) decreases over the period as the concentration of WNT3A increases. [105] reviews the co-repressor ability of the CTBP family, while [106] shows CTBP as a binding factor that interacts with APC, thus lowering the availability of free nuclear β-catenin. This interaction is further confirmed in the recent research work by [107]. As shown by [93], CTBP1 showed increased sensitivity with increased stimulation of WNT3A in the first hour. The latter stages show a decreased contribution of CTBP1 as the concentration of WNT3A was increased. This is in line with what [27] show in their manuscript and indicates the lowering of the co-repressor effect of CTBP at later stages. On the other hand, CTBP2 showed the reverse behaviour of sensitivity in comparison to CTBP1 across different time points. Increased significance of CTBP2 was observed in the first two time frames, i.e. after the 1st and 3rd hour of stimulation, followed by lower contribution to the pathway at the latter stages. In both cases, the diminishing co-repressive nature of CTBP in time is observed. Contrary to these findings, recent results in [108] suggest that both CTBP1 and CTBP2 are up-regulated in colon cancer stem cells.
PORCN showed less sensitivity in the initial stages than in the final stages, indicating its importance in the contribution to Wnt secretion, which is necessary for signaling [109]. The sensitivity of GSK3β and APC decreased in time, indicating the lowering of their significance in later stages due to no formation of the degradation complex. Activity of TCF gains greater prominence in the first and the second time frames after the initial WNT3A stimulation. This is in conjunction with the pattern shown by CTBP2. Regarding TCF7L2, the activity is observed to be maximum during the first time frame, with a decrease in contribution in the later time frames.
Indices for the remaining 57 genes as well as the analysis of the same will be presented in the following part B of this manuscript. Graphs for these 57 genes have been presented in Figures 27 and 28 in the Appendix.
Analysis of deviations in fold changes
In comparison to the contributions estimated via the sensitivity indices using fold changes at different time points separately, this section analyzes the contributions due to deviations in the fold changes recorded between two time points, i.e. t i & t i+1. These analyses are also a way to test the efficacy of deviations in fold changes versus the absolute levels that have been stressed upon in [25]. I PRESENT HERE A DETAILED ANALYSIS OF HOW SOME OF THE COMPONENTS OF THE PATHWAY INFLUENCE THE PATHWAY, AT WHICH TIME POINT AND IN WHICH TIME PERIOD. OF THE EXPRESSION PROFILES OF 71 GENES RECORDED DURING THE STIMULATION, I PRESENT ONLY A FEW OF THEM AS AN EXAMPLE OF SYSTEM WIDE ANALYSIS AT THE COMPUTATIONAL LEVEL. NOTE THAT IT IS THESE TIME PERIODS, IDENTIFIED BY OBSERVING THE NUMERICAL VALUES OF THE SENSITIVITY INDICES, IN WHICH THE INFLUENCE OF A PARTICULAR FACTOR/COMPONENT IS REPORTED. IDENTIFICATION OF EARLY TIME POINTS AND TIME PERIODS INDICATES HOW THE PATHWAY IS AFFECTED AT AN EARLY STAGE AND VICE VERSA. THUS, THE SYSTEM WIDE ANALYSIS CONDUCTED AT THE TIME COURSE LEVEL GIVES A DEEPER PICTURE OF THE INFLUENCES OF THE COMPONENTS IN THE WORKING OF THE PATHWAY. Some of these findings are in line with [27]. But some provide a deeper analysis where [27] fails to do so.
In such an already extensive computational study at the systems level, it is not possible to provide wet-lab tests to validate the above findings, and neither does the author currently have the resources to test the same in a wet-lab setting. Instead, such in silico findings can help biologists to verify and study the pathway more deeply in the wet lab. To the author's limited awareness, such a study has not been undertaken before. Finally, the following descriptions do not merely explain the graphs; rather, they point to take-home messages regarding the influence of factors/components at different time points and time periods at the in silico level.
As with the analysis of the fold changes at different time points, the estimates obtained using the rbf, linear and laplacian kernels in the HSIC-based sensitivity analysis have been used here. Of these, the rbf and laplacian kernels give very similar results. Plots of the time series expression profiles from [27] have been relegated to the Appendix and are shown in Figures 31 and 32.
■WNT3A - Figure 31 in the Appendix shows the profile of mRNA expression levels of WNT3A after external stimulation. There is a series of (+−++) deviations in the fold change recordings at different time points. A repetitive behaviour is observed in the contribution of the deviations in fold changes for WNT3A estimated via the sensitivity indices. For intervals in t 1, t 3 and t 6 there is an increase in the significance of the contribution of WNT3A in Fig. 25 (see the first two bars for <t 1,t 3> & <t 3,t 6>), even though in the first three time frames the levels of WNT3A are shown to be down-regulated (see Figure 31 in the Appendix). This behaviour is repeated in Fig. 25 for intervals t 6, t 12 and t 24 (see the next two bars for <t 6,t 12> & <t 12,t 24>). In both cases one finds an increase in the contribution of the deviation in the fold change. Comparing with the contribution of levels of fold changes in Fig. 23, where it was found that there is a dip in the contribution of WNT3A after t 3 and then a further increase at a later time frame, one finds that the deviations in fold changes involving <t 1,t 3> & <t 3,t 6> have higher significance than the deviations in fold changes involving <t 6,t 12> & <t 12,t 24>. It can be noted that even in the down-regulated state from <t 1, t 3 and t 6> the deviations are minimal and the contributions are significantly high. In the case of the regulated states from <t 6, t 12 and t 24>, the deviation is extremely high between the first two time frames and low in the next two. This results in a more significant contribution from the latter deviation than from the former. Thus, when deviations are low and the fold changes over time do not vary much, the contributions of the involved factor to the signaling pathway are expected to be high, and vice versa. This points to the fact that low variations in fold changes over time have a stabilizing influence on the role of WNT3A, whereas abrupt high variations in fold changes might not have the same influence. Thus measurements of deviations in fold changes provide greater support for studying the effect of WNT3A over time.
Column-wise - methods to estimate sensitivity indices. Row-wise - sensitivity indices for each gene on deviations in fold change. For each graph, the bars represent sensitivity indices computed at <t1,t2> (red), <t2,t3> (blue), <t3,t4> (green) and <t4,t5> (gray). Indices were computed using non-scaled time series data. TO - total order; FO - first order; SBL - Sobol
■CTNNB1 - Figure 31 in the Appendix shows the profile of mRNA expression levels of CTNNB1 after external stimulation. There is a series of (++−+) deviations in the fold change recordings at different time points. An initial increase in the influence of CTNNB1 is observed from <t 1,t 3> to <t 3,t 6> (first two bars in Fig. 25), followed by a gradual decrease of influence from <t 3,t 6> to <t 6,t 12> to <t 12,t 24> (last three bars in Fig. 25). This is observed even though there is an up-regulation in levels of CTNNB1, with a slight dip at t 12 (see Figure 31 in the Appendix). In comparison to the contribution of levels of fold changes in Fig. 23, where it was found that there is a gradual decrease in the influence of CTNNB1 till t 6 and then a further increase in the contribution at later time frames, one finds that the deviations in fold changes involving <t 3,t 6> have the highest significance, with an almost exponential decrease in the deviations in fold changes involving <t 6,t 12> & <t 12,t 24>. Thus, even though CTNNB1 is in an up-regulated state, the influence of deviations in the fold changes indicates an altogether different scenario in comparison to the influences of fold changes at distinct time frames. This might point to the fact that the effect of CTNNB1 is maximal during <t 3,t 6> in comparison to other stages, even after constantly increasing external stimulation with WNT3A at different time points. The exponential decrease in the influence of the deviations in later time frames points to the ineffectiveness of CTNNB1 in the pathway at later stages. Finally, in contrast to the behaviour of the influence of WNT3A in the foregoing paragraph, CTNNB1 showed higher (lower) influence for greater (lesser) deviations in fold changes.
■APC - Figure 31 in the Appendix shows the profile of mRNA expression levels of APC after external stimulation. The profile of the deviations of APC in a down-regulated state shows the following (−++−) pattern. While the CTNNB1 expression profile shows a non-monotonic increase in levels of fold changes in an up-regulated state, the APC expression profile shows nonlinear behaviour in levels of fold changes in a down-regulated state. The significance of deviations in fold changes for APC is maximal during <t 3,t 6>, when the down-regulation is weakened. Further weakening of the down-regulation during <t 6,t 12> does not have much significance. This attenuation in the significance of deviations in fold change might support the fact that APC's weakening in down-regulation amplifies the shutting down of the Wnt pathway after the initial strong down-regulation (where Wnt activity is high). This is corroborated by the finding of [27], which observes the initial (later) positive (negative) feedback that strengthens (weakens) the Wnt pathway activity. An initial increase in the influence of APC is observed from <t 1,t 3> to <t 3,t 6> (first two bars in Fig. 25), followed by a gradual decrease of influence from <t 3,t 6> to <t 6,t 12> to <t 12,t 24> (last three bars in Fig. 25). This is observed even though there is a down-regulation in levels of APC, with slight weakening at t 6, t 12 and t 24 in comparison to recordings at other time frames (see Figure 31 in the Appendix). In comparison to the contribution of levels of fold changes in Fig. 23, where it was found that there is a gradual decrease in the influence of APC till t 6 and then a further increase in the contribution at later time frames, one finds that the deviations in fold changes involving <t 3,t 6> have the highest significance, with an almost exponential decrease in the deviations in fold changes involving <t 6,t 12> & <t 12,t 24>.
■MYC - Figure 31 in the Appendix shows the profile of mRNA expression levels of MYC after external stimulation. The profile of the deviations in fold changes of MYC in an up-regulated state shows the following (−+++) pattern. After an initial dip in the up-regulation at t 6 there is an exponential increase in the fold changes of MYC as time progresses. While Fig. 23 shows an increasing sensitivity of MYC for the first three time frames, the later up-regulated states of MYC due to increasing WNT3A stimulation do not hold much significance. In contrast, it is not possible to observe a clear pattern in the sensitivity of deviations in fold changes for MYC, except for the fact that the maximum contribution of a deviation in fold change is observed for the period of <t 6,t 12>. This is the period in which MYC's significance in the pathway is maximal.
■GSK3B - Figure 31 in the Appendix shows the profile of mRNA expression levels of GSK3β after external stimulation. The profile of the deviations in fold changes of GSK3β in a state of varied regulation shows the following (−+++) pattern. After an initial up-regulation at t 3 there is down-regulation at t 6, after which up-regulation follows in the later stages. It is widely known that WNT stimulation leads to inhibition of GSK3β. In contrast to this, GSK3β shows up-regulated levels at t 3, t 12 and t 24. The author is currently unaware of why this contrasting behaviour is exhibited. The later up-regulation might point to the fact that the effectiveness of the Wnt stimulation has decreased, and that GSK3β plays the role of stabilizing and controlling the behaviour of the pathway by working against the Wnt stimulation and preventing further degradation. While the work by [27] does not shed light on this aspect, contrasting models of inhibition for GSK3 have recently been proposed in [110], which might support this behaviour. Figure 23 shows a decreasing sensitivity of GSK3β for the first two time frames, after which there is an increasing sensitivity for the next three time frames. Comparing this with the plots in Fig. 25, it is found that there is greater significance of deviations in fold changes of GSK3β during the later stages of <t 6,t 12> and <t 12,t 24>.
■PORCN - Figure 31 in the Appendix shows the profile of mRNA expression levels of PORCN after external stimulation. The profile of the deviations in fold changes of PORCN in an up-regulated state shows the following (+−−−) pattern. After an initial hike in up-regulation at t 3 there is a continuous decrease in the up-regulation. PORCN is known to help in the secretion of the Wnt ligands that later on help in the instigation of the signaling activity [111]. Sustained stimulation by WNT3A over a period of time might lead to a decrease in the up-regulation of PORCN, which helps in Wnt secretion. The graph for PORCN in Fig. 23 shows increasing significance of the influence of PORCN as time passes, even though there is lower regulation of the same at later stages (Figure 31 in the Appendix). The highly significant influence of lower regulation at later stages indicates the lessened effectiveness of PORCN due to sustained WNT3A stimulation, which might have suppressed the secretion functionality carried out via PORCN. Contrary to this, the influences of the deviations in the fold changes over time show the reverse behaviour. The maximum influence is during the first two time frames of <t 1,t 3>, and this influence of deviations decreases at later stages. This points to the fact that the deviations in the fold changes at the initial stage have greater significance in the pathway than the deviations at later stages. It follows that in the initial stages of Wnt stimulation the expression of PORCN has significant influence.
■CTBP2 - Figure 31 in the Appendix shows the profile of mRNA expression levels of CTBP2 after external stimulation. The profile of the deviations in fold changes of CTBP2 in an up-regulated state shows the following (−++−) pattern. It is known that CTBP2 shows a co-repressive nature [112], and the pattern of sensitivity indicates this heightened effect at <t 3,t 6> and <t 12,t 24>. In contrast to this, in Fig. 23 one finds that the significance of up-regulation at t 12 and t 24 is minimal, and yet the sensitivity for the deviations in fold changes for this period is second only to <t 3,t 6>. The probable explanation for this might be that for higher up-regulation (in terms of numeric representation), even small deviations might play a significant role, while the sensitivity at individual time frames remains low.
■CTBP1 - Figure 31 in the Appendix shows the profile of mRNA expression levels of CTBP1 after external stimulation. The profile of the deviations in fold changes of CTBP1 in an up-regulated state shows the following (+++−) pattern. As with the heightened sensitivity at time frame t 1, the sensitivity of deviations in the fold changes exhibits a heightened effect in the pathway at <t 1,t 3>. Further analysis might not be possible, as one finds lowered sensitivity even at heightened up-regulation, both for individual recordings and for deviations.
■SFRP4 - Figure 31 in the Appendix shows the profile of mRNA expression levels of SFRP4 after external stimulation. The profile of the deviations in fold changes of SFRP4 in a down-regulated state (except in the last time frame) shows the following (−+++) pattern. Known to be a negative regulator of the Wnt pathway, its sensitivity was found to be extremely high while it is down-regulated during the stimulation. This is depicted in the figures plotted in Fig. 23. This heightened sensitivity during most of the down-regulation points towards the significant role of hypermethylation, which leads to silencing of this gene. Contrary to this, there is a monotonically increasing sensitivity for deviations in the fold changes during down-regulation from t 1 to t 12. A dip in the sensitivity of the deviation for the final time frame <t 12,t 24> happens when up-regulation is recorded in the last time frame. Based on the maximum sensitivity for the deviation in fold change during <t 6,t 12>, up-regulation of SFRP4 in this period is expected to have the greatest reverse effect on the activation of the Wnt pathway. It appears that the hypermethylation that causes the silencing of SFRP4 is maximal during this stage, and this is thus a potential period for the pathway to be inhibited via reversal of silencing [103].
■SFRP1 - Figure 32 in the Appendix shows the profile of mRNA expression levels of SFRP1 after external stimulation. The profile of the deviations in fold changes of SFRP1 in an up-regulated state (except during the second time frame) shows the following (−+−−) pattern. It is widely known that SFRP1 is a Wnt antagonist and is known for inactivation in the canonical Wnt pathway due to hypermethylation, thus leading to up-regulation of the pathway [104]. [103] further indicates that SFRP1 is thought to silence ligand-dependent Wnt signaling by binding of the cysteine-rich domain (CRD) to Wnt proteins, thus preventing interaction with FZD receptors. Recent in silico results by [113] confirm hypermethylation of SFRP1 in colorectal cancers. Given the above profile, it is possible to see that there is a down-regulation at t 3, but the significance of its influence on the pathway is not great, as revealed in Fig. 24. Figure 24 shows a decreasing significance in the influence of SFRP1, with the maximum influence in the last stage of the WNT3A stimulation, where there is an up-regulation. In a similar way, in Fig. 26 there is significance of the sensitivity in the deviations in fold changes during the up-regulation of SFRP1 during <t 6,t 12> and <t 12,t 24>. The activation at later stages shows that SFRP1 has a greater antagonistic effect on the Wnt pathway. In comparison to the up-regulation, one finds that the down-regulation at t 3 does not play a significant role. These observations are in line with [27]'s claim regarding reversal of behaviour at different time stages.
■DVL1 - Figure 32 in the Appendix shows the profile of mRNA expression levels of DVL1 after external stimulation. The profile of the deviations in fold changes of DVL1 in an up-regulated state shows the following (−++−) pattern. DVL1 is an adaptor protein that helps in signal transmission leading to stabilization of cytosolic β-catenin for further processing. [101] report high expression of DVL1 in Taiwanese colorectal cancer patients with liver metastasis, and it has also been observed as a potential biomarker in CRC [114]. In Fig. 24, the time frame t 12 at which DVL1 shows maximum up-regulation is the most insignificant one, due to the lowest sensitivity, while moderate up-regulation during t 3 and t 6 shows high sensitivity. The same is true for the up-regulation at t 24. Comparing this with the deviations in the fold changes in Fig. 26, one finds that there is maximum sensitivity during the period of <t 3,t 6>, preceded by a lower sensitivity index for the period of <t 1,t 3>. At the other intervals there was decreased sensitivity, even though the deviations in the fold changes were very high. This indicates that high deviations might not influence the signaling activity significantly. Also, the best period of intervention is at <t 3,t 6>.
■LRP6 - Figure 32 in the Appendix shows the profile of mRNA expression levels of LRP6 after external stimulation. The profile of the deviations in fold changes of LRP6 in an up-regulated state (except at t 3) shows the following (−++−) pattern. In an extensive work on the molecular differences of LRPs, [115] investigate and show that LRP5/6, along with the frizzled family members, form a Wnt-inducible co-receptor complex that helps in signal transmission after LRP phosphorylation. Earlier wet-lab work by [116] and in silico findings by [117] have shown highly expressed participation of LRP5/6 in the Wnt signaling pathway. Recent work by [118] shows that KRAS signaling promotes canonical Wnt activity via LRP6. In Fig. 24, LRP6 shows significant influences during t 1, t 6 and t 12. The only period in which it is down-regulated, t 3, has little significance in comparison to the significance of the up-regulated states. It is not known why LRP6 shows down-regulation at this stage. Finally, for unknown reasons, the influence of LRP6 during t 12 was found to be the lowest. It was not possible to read much into the sensitivity of LRP6 for deviations in fold changes using the HSIC-based indices. The laplace kernel shows a pattern of increasing sensitivity, with the highest during <t 6,t 12>, but this is not so in the other two formulations. Thus wet-lab experiments might aid in confirming these results and shedding more light on the duration during which a drug could be administered.
■TCF7L1 - Figure 32 in the Appendix shows the profile of mRNA expression levels of TCF7L1 (also known as TCF3) after external stimulation. The profile of the deviations in fold changes of TCF7L1 in a down-regulated state (except at t 3) shows the following (+−+−) pattern. It is known that Wnt stimulation promotes the phosphorylation of the repressor-acting TCF7L1 by homeodomain-interacting protein kinase (HIPK2), which results in its dissociation from the WRE [119]. Gujral and MacBeath [27]'s results also indicate the same repression of TCF7L1 during WNT3A stimulation, as shown in Figure 32 in the Appendix. But in contradiction to this, recent findings of [120] show that TCF7L1 is expressed in the colon crypt and in colon cancer. Their results indicate that TCF7L1 may have an as yet unidentified role in the transmission of tumor-related β-catenin signals. Evidence of this up-regulation is found at the t 3 time period, as shown in Figure 32 in the Appendix. From Fig. 24 it can be seen that the sensitivity of TCF7L1 is maximal during the first time period. Later on the sensitivity subsides as time passes, until it shoots up in the last time frame t 24. In comparison, with respect to the deviations in fold changes over time in Fig. 26, <t 1,t 3> showed the maximum period of influence. Later on there is a drop in the sensitivity, which is followed by an approximately monotonic increase. It is the first transition from down-regulation to up-regulation, <t 1,t 3>, that might be the time for intervention. Also, the last stage might be of some value, but during down-regulation only.
■TCF7 - Figure 32 in the Appendix shows the profile of mRNA expression levels of TCF7 after external stimulation. The profile of the deviations in fold changes of TCF7 in an up-regulated state shows the following (−−++) pattern. TCF7 is found to be up-regulated upon Wnt stimulation, as it binds with LEFs to activate the transcription procedure after interacting with β-catenin [121]. In Fig. 24, the sensitivity of the activation of TCF7 decreases monotonically as time progresses. But this behaviour is not the same for deviations in the fold changes. The maximum influence is found for the duration of <t 6,t 12>. The next best consistent influence is in the duration <t 12,t 24>. These are the two time periods when the influence of the deviations in the fold changes is maximal and thus susceptible to therapeutic interference.
■LEF1 - Figure 32 in the Appendix shows the profile of mRNA expression levels of LEF1 after external stimulation. The profile of the deviations in fold changes of LEF1 in an up-regulated state (except at t 3) shows the following (−−++) pattern. Generally, LEF1 is found to be up-regulated upon Wnt stimulation, when it works in tandem with TCF7 [121]. Yet, in Fig. 24, the sensitivity of the activation of LEF1 is not similar to that of TCF7. In contradiction to what is expected, one finds a down-regulation during the time period t 3. More importantly, this is the period in which the most significant influence of the down-regulated LEF1 is observed. The initial down-regulation at this subinterval indicates that LEF1 is not facilitating the Wnt pathway positively. Conclusive results cannot be stated regarding the deviations in fold changes from Fig. 26.
■FZD2 - Figure 32 in the Appendix shows the profile of mRNA expression levels of FZD2 after external stimulation. The profile of the deviations in fold changes of FZD2 in an up-regulated state (except at t 3 and t 6) shows the following (−−++) pattern. The FZD or frizzled family of 7-transmembrane proteins [88] works in tandem with LRP-5/6 as binding partners for the Wnt ligands to initiate Wnt signaling. In comparison to the repetitive behaviour shown in Fig. 24, it is not possible to draw conclusions on the deviations in fold changes.
■FZD1 - Figure 32 in the Appendix shows the profile of mRNA expression levels of FZD1 after external stimulation. The profile of the deviations in fold changes of FZD1 in a down-regulated state (except at t 3) shows the following (+−−+) pattern. Consistent with the findings of [89] and [90], FZD1 was found to be expressed at t 3. In the rest of the time periods, it was down-regulated. But the significance of the influence shows a different pattern in Fig. 24, with the down-regulation at t 12 being the most influential. In contrast to this, while observing the deviations in the fold changes, it was found that the first two durations <t 1,t 3> and <t 3,t 6> showed consistently decreasing behaviour in terms of influence. It is during the first period that the deviations in fold changes are significant, and thus it is possible to intervene therapeutically during the activation stage. Indices for the remaining 57 genes, as well as the analysis of the same, will be presented in the following part B of this manuscript. Graphs for these 57 genes have been presented in Figures 29 and 30 in the Appendix.
COMPUTATIONAL SIGNIFICANCE
Local and global sensitivity analysis on static and time series measurements in the Wnt signaling pathway for colorectal cancer has been carried out. The density-based Hilbert-Schmidt Information Criterion indices outperformed the variance-based Sobol indices. This is attributed to the employment of distance measures and the kernel trick via a Reproducing Kernel Hilbert Space (RKHS), which captures nonlinear relations among the various intra/extracellular factors of the pathway in a higher-dimensional space. The gained advantage is confirmed by the inferred results obtained via a Bayesian network model based on prior biological knowledge and static gene expression data. In time series data, using these indices it is now possible to observe when, in which period of time, and to what degree a factor gets influenced and contributes to the pathway as changes in the concentration of another factor are made. This facilitates time-based administration of targeted therapeutic drugs and reveals hidden biological information within colorectal cancer samples.
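For reference, the dependence measure behind these indices can be stated explicitly. The following LaTeX rendering of HSIC follows [68], with the normalised index as used by De Lozzo and Marrel; the notation is added here for the reader and does not appear in this form in the text:

```latex
% HSIC between input X_i and output Y, with kernels k and l:
\mathrm{HSIC}(X_i, Y) =
    \mathbb{E}\left[k(X_i, X_i')\, l(Y, Y')\right]
  + \mathbb{E}\left[k(X_i, X_i')\right]\mathbb{E}\left[l(Y, Y')\right]
  - 2\,\mathbb{E}\left[\mathbb{E}_{X_i'}\!\left[k(X_i, X_i')\right]
                       \mathbb{E}_{Y'}\!\left[l(Y, Y')\right]\right]

% Normalised HSIC sensitivity index of input X_i:
S_i^{\mathrm{HSIC}} =
  \frac{\mathrm{HSIC}(X_i, Y)}
       {\sqrt{\mathrm{HSIC}(X_i, X_i)\,\mathrm{HSIC}(Y, Y)}}
```

With a linear kernel this reduces to covariance-like information, while the rbf and laplacian kernels capture the nonlinear relations mentioned above.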
DEVIATIONS IN FORMULATION OF PSYCHOPHYSICAL LAW
In the context of [25]'s work regarding the recent observation of Weber's law working downstream of the pathway, it has been found that the law is governed by the ratio of the deviation in the input to the absolute input value. More importantly, it is these deviations in input that are of significance in studying any phenomenon. The current manuscript explores the sensitivity of the deviations in fold changes between measurements at consecutive time points, in order to explore in which duration of time, i.e. <t i,t i+1>, a particular factor is affecting the pathway in a major way. This has deeper implications, in that one is now able to observe when in time an intervention can be made or a gene perturbed to study the behaviour of the pathway in tumorous cases. Thus sensitivity analysis of deviations in mathematical formulations of the psychophysical law can lead to insights into the time-period-based influence of the involved factors in the pathway. This will also shed light on the duration in which the psychophysical laws might be most prevalent.
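Schematically, the ratio that governs the law reads as follows (a minimal LaTeX rendering; the notation is added here and is not from [25]):

```latex
% Weber's law: a just-noticeable change in the input scales with the
% absolute input level, so the governing quantity is the ratio
\frac{\Delta I}{I} = \frac{I_{t_{i+1}} - I_{t_i}}{I_{t_i}} \approx k,
\qquad k \ \text{constant}
```

This is why the deviations over <t i,t i+1>, rather than the raw levels alone, are the natural objects of the sensitivity analysis conducted here.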
SPECIFIC EXAMPLES OF BIOLOGICAL INTERPRETATIONS
GSK3β - It is widely known that WNT stimulation leads to inhibition of GSK3β. In contrast to this, GSK3β shows up-regulated levels at t 3, t 12 and t 24. The author is currently unaware of why this contrasting behaviour is exhibited. The later up-regulation might point to the fact that the effectiveness of the Wnt stimulation has decreased, and that GSK3β plays the role of stabilizing and controlling the behaviour of the pathway by working against the Wnt stimulation and preventing further degradation. While the work by [27] does not shed light on this aspect, contrasting models of inhibition for GSK3 have recently been proposed in [110], which might support this behaviour. Considering the analysis of fold changes at different time points, decreasing sensitivity of GSK3β was observed for the first two time frames, after which there is an increasing sensitivity for the next three time frames. Comparing this with the plots of the analysis of deviations in fold changes, it is observed that there is greater significance of deviations in fold changes of GSK3β during the later stages of <t 6,t 12> and <t 12,t 24>. It is in these periods that one might be able to perturb and study significant effects on the pathway.
PORCN - PORCN is known to help in the secretion of the Wnt ligands that later on help in the instigation of the signaling activity. Sustained stimulation by WNT3A over a period of time might lead to a decrease in the up-regulation of PORCN, which helps in Wnt secretion. The graph for PORCN in the analysis of fold changes shows increasing significance of the influence of PORCN as time passes, even though there is lower regulation of the same at later stages. The highly significant influence of lower regulation at later stages indicates the lessened effectiveness of PORCN due to sustained WNT3A stimulation, which might have suppressed the secretion functionality carried out via PORCN. Contrary to this, the influences of the deviations in the fold changes over time show the reverse behaviour. The maximum influence is during the first two time frames of <t 1,t 3>, and this influence of deviations decreases at later stages. This points to the fact that the deviations in the fold changes at the initial stage have greater significance in the pathway than the deviations at later stages. It follows that in the initial stages of Wnt stimulation the expression of PORCN has significant influence.
Choice of sensitivity indices
The sensitivity package ([122] and [31]) in the R language provides a range of functions to compute the indices; the following ones are taken into account for addressing the questions posed in this manuscript (a usage sketch follows the list below).
sensiFdiv - conducts a density-based sensitivity analysis where the impact of an input variable is defined in terms of dissimilarity between the original output density function and the output density function when the input variable is fixed. The dissimilarity between density functions is measured with Csiszar f-divergences. Estimation is performed through kernel density estimation and the function kde of the package ks [63] and [66].
sensiHSIC - conducts a sensitivity analysis where the impact of an input variable is defined in terms of the distance between the input/output joint probability distribution and the product of their marginals when they are embedded in a Reproducing Kernel Hilbert Space (RKHS). This distance corresponds to HSIC proposed by [68] and serves as a dependence measure between random variables.
soboljansen - implements the Monte Carlo estimation of the Sobol indices for both first-order and total indices at the same time (all together 2p indices), at a total cost of (p+2) × n model evaluations. These are called the Jansen estimators [58] and [50].
sobol2002 - implements the Monte Carlo estimation of the Sobol indices for both first-order and total indices at the same time (all together 2p indices), at a total cost of (p + 2) × n model evaluations. These are called the Saltelli estimators. This estimator suffers from a conditioning problem when estimating the variances behind the indices computations. This can seriously affect the Sobol indices estimates in case of largely non-centered output. To avoid this effect, you have to center the model output before applying "sobol2002". Functions "soboljansen" and "sobolmartinez" do not suffer from this problem [44].
sobol2007 - implements the Monte Carlo estimation of the Sobol indices for both first-order and total indices at the same time (all together 2p indices), at a total cost of (p+2) × n model evaluations. These are called the Mauntz estimators [57].
sobolmartinez - implements the Monte Carlo estimation of the Sobol indices for both first-order and total indices using correlation coefficients-based formulas, at a total cost of (p + 2) × n model evaluations. These are called the Martinez estimators.
sobol - implements the Monte Carlo estimation of the Sobol sensitivity indices. Allows the estimation of the indices of the variance decomposition up to a given order, at a total cost of (N + 1) × n where N is the number of indices to estimate [30].
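As a usage illustration for the functions listed above, the following minimal R sketch runs one variance-based and one density-based analysis on a toy model. The model f, the sample sizes and the input distributions are assumptions made purely for illustration, and the sensiHSIC arguments follow the package versions contemporary with this work, so names and defaults may differ in newer releases.

```r
library(sensitivity)

# Toy stand-in for the pathway response; not the model used in the paper.
f <- function(X) X[, 1] + 2 * X[, 2] * X[, 3]

n <- 1000; p <- 3
X1 <- data.frame(matrix(runif(n * p), nrow = n))
X2 <- data.frame(matrix(runif(n * p), nrow = n))

# Jansen estimators: first-order and total-order Sobol indices together.
sj <- soboljansen(model = f, X1 = X1, X2 = X2, nboot = 100)
print(sj)

# HSIC-based indices with the rbf kernel ("laplace" and "linear" being
# the alternatives used in the text); decoupled model evaluation via tell().
X  <- data.frame(matrix(runif(n * p), nrow = n))
hs <- sensiHSIC(model = NULL, X = X, kernelX = "rbf", kernelY = "rbf")
hs <- tell(hs, f(X))
print(hs)
```

The same decoupled tell() pattern applies when the model is an external simulation whose outputs are computed outside R.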
mRNA expression levels of genes at 1st, 3rd, 6th, 12th and 24th hour from [27]
AES:
amino-terminal enhancer of split
ANOVA:
Analysis of variance
APC:
Adenomatous polyposis coli or Wnt pathway regulator
AXIN:
AXIN or Protein phosphatase 1 regulatory subunits
β-TRCP:
β-Transducin repeat containing E3 Ubiquitin protein ligase
BCL9:
B-cell CLL/lymphoma 9
BRTC:
beta-transducin repeat containing E3 ubiquitin protein ligase
CCND1:
Cyclin D1
CCND2/3:
Cyclin D2/3
CD44:
CD44 antigen (homing function and Indian blood group system)
CK1:
Casein kinase 1 or serine/threonine-selective enzymes
CSNK1D:
casein kinase 1 delta
CSNK1G:
casein kinase 1 gamma
CSNK1A1:
casein kinase 1 alpha 1
CTBP:
C-terminal binding proteins
CTNNB1:
Catenin beta 1
CTNNBIP1:
Catenin beta interacting protein 1
CXXC4:
CXXC finger protein 4
DAAM1:
dishevelled associated activator of morphogenesis 1
DIXDC1:
DIX domain containing 1
DVL:
Dishevelled segment polarity proteins
DKK:
Dickkopf WNT signaling pathway inhibitor
DACT:
Dishevelled binding antagonist of beta catenin
EP300:
E1A binding protein p300
FBXW:
F-box and WD repeat domain containing
FGF:
Fibroblast growth factor
FOSL1:
FOS like 1, AP-1 transcription factor subunit
FOXN1:
forkhead box N1
FRAT1:
FRAT1, WNT signaling pathway regulator
FRZB:
frizzled-related protein
FSHB:
follicle stimulating hormone beta subunit
FZD:
Frizzled receptors or G protein-coupled receptor proteins
FBOX/WD:
F: box and WD repeat domain
GSK3:
Glycogen synthase kinase 3 or serine/threonine protein kinase
GROUCHO:
Groucho, a transcription-inhibiting factor
HEK 293:
Human embryonic kidney cells 293
HSIC:
Hilbert-Schmidt Information Criterion
JUN:
Jun proto-oncogene, AP-1 transcription factor subunit
KL-divergence:
Kullback–Leibler divergence
KREMEN1:
kringle containing transmembrane protein 1
LRP:
Low density lipoprotein receptor-related proteins
LEF1:
Lymphoid enhancer binding factor 1
MYC:
MYC proto-oncogene, bHLH transcription factor
NKD1:
naked cuticle homolog 1
NLK:
nemo like kinase
OAT:
One-at-a-time
PITX2:
paired like homeodomain 2
PORCN:
Porcupine homolog (Drosophila)
PPP2CA:
protein phosphatase 2 catalytic subunit alpha
PPP2R1A:
protein phosphatase 2 scaffold subunit Aalpha
PYGO1:
pygopus family PHD finger 1
qPCR:
Quantitative polymerase chain reaction
RHOU:
ras homolog family member U
RKHS:
Reproducing kernel Hilbert space
SENP2:
SUMO1/sentrin/SMT3 specific peptidase 2
SLC9A3R1:
SLC9A3 regulator 1
SFRP:
Secreted frizzled-related protein
SOM:
Self organizing maps
T:
T brachyury transcription factor
TCF:
T-cell factor
TLE:
transducin like enhancer of split
TCF7L2:
Transcription factor 7-like 2 (T-cell specific, HMG-box)
WNT:
Wingless-type Mouse Mammary Tumor Virus (MMTV) integration site family
WIF1:
WNT inhibitory factor 1
WNT-3A:
Wnt family member 3A
Sharma R. Wingless a new mutant in drosophila melanogaster. Drosophila Inf Serv. 1973; 50:134–4.
Thorstensen L, Lind GE, Løvig T, Diep CB, Meling GI, Rognum TO, Lothe RA. Genetic and epigenetic changes of components affecting the wnt pathway in colorectal carcinomas stratified by microsatellite instability. Neoplasia. 2005; 7(2):99–108.
Baron R, Kneissel M. Wnt signaling in bone homeostasis and disease: from human mutations to treatments. Nat Med. 2013; 19(2):179–92.
Clevers H. Wnt/β-catenin signaling in development and disease. Cell. 2006; 127(3):469–80.
Sokol S. Wnt Signaling in Embryonic Development, vol 17: Elsevier; 2011.
Pinto D, Gregorieff A, Begthel H, Clevers H. Canonical wnt signals are essential for homeostasis of the intestinal epithelium. Gene Dev. 2003; 17(14):1709–13.
Zhong Z, Ethen NJ, Williams BO. Wnt signaling in bone development and homeostasis. Wiley Interdiscip Rev Dev Biol. 2014; 3(6):489–500.
Pećina-Šlaus N. Wnt signal transduction pathway and apoptosis: a review. Cancer Cell Int. 2010; 10(1):1–5.
Kahn M. Can we safely target the wnt pathway? Nat Rev Drug Discov. 2014; 13(7):513–32.
Garber K. Drugging the wnt pathway: problems and progress. J Natl Cancer Inst. 2009; 101(8):548–50.
Voronkov A, Krauss S. Wnt/beta-catenin signaling and small molecule inhibitors. Curr Pharm Des. 2012; 19(4):634.
Blagodatski A, Poteryaev D, Katanaev V. Targeting the wnt pathways for therapies. Mol Cell Ther. 2014; 2:28.
Curtin JC, Lorenzi MV. Drug discovery approaches to target wnt signaling in cancer stem cells. Oncotarget. 2010; 1(7):552.
Rao TP, Kühl M. An updated overview on wnt signaling pathways a prelude for more. Circ Res. 2010; 106(12):1798–1806.
Yu J, Virshup DM. Updating the wnt pathways. Biosci Rep. 2014; 34(5):593–607.
Antebi YE, Nandagopal N, Elowitz MB. An operational view of intercellular signaling pathways. Curr Opin Syst Biol. 2017; 1:16–24.
Goentoro L. Cross-hierarchy systems principles. Curr Opin Syst Biol. 2016; 1:80–83.
Lee E, Salic A, Kruger R, Heinrich R, Kirschner MW. The roles of apc and axin derived from experimental and theoretical analysis of the wnt pathway. PLoS Biol. 2004; 2(3):405–6.
Kogan Y, Halevi-Tobias KE, Hochman G, Baczmanska AK, Leyns L, Agur Z. A new validated mathematical model of the wnt signalling pathway predicts effective combinational therapy by sfrp and dkk. Biochem J. 2012; 444(1):115–25.
Lee M, Chen GT, Puttock E, Wang K, Edwards RA, Waterman ML, Lowengrub J. Mathematical modeling links wnt signaling to emergent patterns of metabolism in colon cancer. Mol Syst Biol. 2017; 13(2):912.
MacLean AL, Rosen Z, Byrne HM, Harrington HA. Parameter-free methods distinguish wnt pathway models and guide design of experiments. Proc Natl Acad Sci. 2015; 112(9):2652–7.
Koutroumpas K, Ballarini P, Votsi I, Cournède PH. Bayesian parameter estimation for the wnt pathway: an infinite mixture models approach. Bioinformatics. 2016; 32(17):781–9.
Sinha S. Integration of prior biological knowledge and epigenetic information enhances the prediction accuracy of the bayesian wnt pathway. Integr Biol. 2014; 6:1034–48. doi:10.1039/c4ib00124a.
Sinha S. A pedagogical walkthrough of computational modeling and simulation of wnt signaling pathway using static causal models in matlab. EURASIP J Bioinforma Syst Biol. 2016; 2017(1):1.
Goentoro L, Kirschner MW. Evidence that fold-change, and not absolute level, of β-catenin dictates wnt signaling. Mol Cell. 2009; 36:872–84.
Azam M, Bhatti A, Arshad A, Babar M. Sensitivity analysis of wnt signaling pathway. In: Applied Sciences and Technology (IBCAST), 2013 10th International Bhurban Conference On. IEEE: 2013. p. 122–7.
Gujral TS, MacBeath G. A system-wide investigation of the dynamics of wnt signaling reveals novel phases of transcriptional regulation. PloS ONE. 2010; 5(4):10024.
Jiang X, Tan J, Li J, Kivimäe S, Yang X, Zhuang L, Lee PL, Chan MT, Stanton LW, Liu ET, et al. Dact3 is an epigenetic regulator of wnt/ β-catenin signaling in colorectal cancer and is a therapeutic target of histone modifications. Cancer Cell. 2008; 13(6):529–41.
Gregorieff A, Clevers H. Wnt signaling in the intestinal epithelium: from endoderm to cancer. Gene Dev. 2005; 19(8):877–90.
Sobol' IM. On sensitivity estimation for nonlinear mathematical models. Matematicheskoe Modelirovanie. 1990; 2(1):112–8.
Iooss B, Lemaître P. A review on global sensitivity analysis methods. 2014. arXiv preprint arXiv:1404.2405.
Morris MD. Factorial sampling plans for preliminary computational experiments. Technometrics. 1991; 33(2):161–74.
Moon H, Dean AM, Santner TJ. Two-stage sensitivity-based group screening in computer experiments. Technometrics. 2012; 54(4):376–87.
Dean A, Lewis S. Screening: Methods for Experimentation in Industry, Drug Discovery, and Genetics: Springer; 2006.
Andres TH, Hajas WC. Using iterated fractional factorial design to screen parameters in sensitivity analysis of a probabilistic risk assessment model. 1993.
Bettonvil B, Kleijnen JP. Searching for important factors in simulation models with many factors: Sequential bifurcation. Eur J Oper Res. 1997; 96(1):180–94.
Cotter SC. A screening design for factorial experiments with interactions. Biometrika. 1979; 66(2):317–20.
Christensen R. Linear Models for Multivariate, Time Series, and Spatial Data: Springer; 1991.
Saltelli A, Chan K, Scott E. Sensitivity Analysis. Wiley Series in Probability and Statistics: Wiley; 2000.
Helton JC, Davis FJ. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliab Eng Syst Saf. 2003; 81(1):23–69.
McKay MD, Beckman RJ, Conover WJ. Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics. 1979; 21(2):239–45.
Homma T, Saltelli A. Importance measures in global sensitivity analysis of nonlinear models. Reliab Eng Syst Saf. 1996; 52(1):1–17.
Sobol IM. Global sensitivity indices for nonlinear mathematical models and their monte carlo estimates. Math Comput Simul. 2001; 55(1):271–80.
Saltelli A. Making best use of model evaluations to compute sensitivity indices. Comput Phys Commun. 2002; 145(2):280–97.
Saltelli A, Ratto M, Tarantola S, Campolongo F. Sensitivity analysis for chemical models. Chem Rev. 2005; 105(7):2811–28.
Saltelli A, Ratto M, Andres T, Campolongo F, Cariboni J, Gatelli D, Saisana M, Tarantola S. Global Sensitivity Analysis: the Primer: Wiley; 2008.
Cukier R, Fortuin C, Shuler KE, Petschek A, Schaibly J. Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. i theory. J Chem Phys. 1973; 59(8):3873–8.
Saltelli A, Tarantola S, Chan KS. A quantitative model-independent method for global sensitivity analysis of model output. Technometrics. 1999; 41(1):39–56.
Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Saf. 2006; 91(6):717–27.
Saltelli A, Annoni P, Azzini I, Campolongo F, Ratto M, Tarantola S. Variance based sensitivity analysis of model output. design and estimator for the total sensitivity index. Comput Phys Commun. 2010; 181(2):259–70.
Janon A, Klein T, Lagnoux A, Nodet M, Prieur C. Asymptotic normality and efficiency of two sobol index estimators. ESAIM Probab Stat. 2014; 18:342–64.
Owen AB. Better estimation of small sobol'sensitivity indices. ACM Trans Model Comput Simul (TOMACS). 2013; 23(2):11.
Tissot JY, Prieur C. Bias correction for the estimation of sensitivity indices based on random balance designs. Reliab Eng Syst Saf. 2012; 107:205–13.
Da Veiga S, Gamboa F. Efficient estimation of sensitivity indices. J Nonparametric Stat. 2013; 25(3):573–95.
Archer G, Saltelli A, Sobol I. Sensitivity measures, anova-like techniques and the use of bootstrap. J Stat Comput Simul. 1997; 58(2):99–120.
Tarantola S, Gatelli D, Kucherenko S, Mauntz W, et al. Estimating the approximation error when fixing unessential factors in global sensitivity analysis. Reliab Eng Syst Saf. 2007; 92(7):957–60.
Saltelli A, Annoni P. How to avoid a perfunctory sensitivity analysis. Environ Model Softw. 2010; 25(12):1508–17.
Jansen MJ. Analysis of variance designs for model output. Comput Phys Commun. 1999; 117(1):35–43.
Storlie CB, Helton JC. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques. Reliab Eng Syst Saf. 2008; 93(1):28–54.
Da Veiga S, Wahl F, Gamboa F. Local polynomial estimation for sensitivity analysis on models with correlated inputs. Technometrics. 2009; 51(4):452–63.
Li G, Rosenthal C, Rabitz H. High dimensional model representations. J Phys Chem A. 2001; 105(33):7765–77.
Hajikolaei KH, Wang GG. High dimensional model representation with principal component analysis. J Mech Des. 2014; 136(1):011003.
Borgonovo E. A new uncertainty importance measure. Reliab Eng Syst Saf. 2007; 92(6):771–84.
Sobol IM, Kucherenko S. Derivative based global sensitivity measures and their link with global sensitivity indices. Math Comput Simul. 2009; 79(10):3009–17.
Fort JC, Klein T, Rachdi N. New sensitivity analysis subordinated to a contrast. 2013. arXiv preprint arXiv:1305.2329.
Da Veiga S. Global sensitivity analysis with dependence measures. J Stat Comput Simul. 2015; 85(7):1283–305.
Székely GJ, Rizzo ML, Bakirov NK, et al. Measuring and testing dependence by correlation of distances. Ann Stat. 2007; 35(6):2769–794.
Gretton A, Bousquet O, Smola A, Schölkopf B. Measuring statistical dependence with hilbert-schmidt norms. In: Algorithmic Learning Theory. Springer: 2005. p. 63–77.
Csiszar I, et al. Information-type measures of difference of probability distributions and indirect observations. Studia Sci Math Hungar. 1967; 2:299–318.
Aizerman M, Braverman E, Rozonoer L. Theoretical foundations of the potential function method in pattern recognition learning. Autom Remote Control. 1964; 25:821–37.
Sumner T, Shephard E, Bogle I. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling. J R Soc Interface. 2012; 9(74):2156–66.
Zheng Y, Rundell A. Comparative study of parameter sensitivity analyses of the tcr-activated erk-mapk signalling pathway. IEE Proc-Syst Biol. 2006; 153(4):201–11.
Marino S, Hogue IB, Ray CJ, Kirschner DE. A methodology for performing global uncertainty and sensitivity analysis in systems biology. J Theor Biol. 2008; 254(1):178–96.
Sinha S. Sensitivity analysis of wnt β-catenin based transcription complex might bolster power-logarithmic psychophysical law and reveal preserved gene gene interactions. 2015. bioRxiv, 015834. doi:10.1101/015834.
Adler M, Mayo A, Alon U. Logarithmic and power law input-output relations in sensory systems with fold-change detection. PLoS Comput Biol. 2014; 10(8):1003781.
Masin SC, Zudini V, Antonelli M. Early alternative derivations of fechner's law. J Hist Behav Sci. 2009; 45:56–65. doi:10.1002/jhbs.20349.
Fechner GT. Elemente der Psychophysik (2 Vols): Breitkopf and Hartel; 1860.
Weber EH. De Pulsu Resorptione, Auditu et Tactu: Annotationes anatomicae et physiologicae; 1834.
Bernoulli D. Specimen theoriae novae de mensura sortis. Commentarii Acad Sci Imperialis Petropolitanae. 1738; 5:175–92.
Sobol IM, Kucherenko S. Global sensitivity indices for nonlinear mathematical models. Review. Wilmott Magazine, 2–7.
Baucells M, Borgonovo E. Invariant probabilistic sensitivity analysis. Manag Sci. 2013; 59(11):2536–49.
Kraskov A, Stögbauer H, Grassberger P. Estimating mutual information. Phys Rev E. 2004; 69(6):066138.
Sejdinovic D, Sriperumbudur B, Gretton A, Fukumizu K, et al. Equivalence of distance-based and rkhs-based statistics in hypothesis testing. Ann Stat. 2013; 41(5):2263–91.
Daumé III H. From zero to reproducing kernel hilbert spaces in twelve pages or less. 2004.
Riesz F. Sur une espèce de géométrie analytique des systèmes de fonctions sommables. CR Acad Sci Paris. 1907; 144:1409–11.
Taylor JS, Cristianini N. Properties of Kernels: Cambridge University Press; 2004. Chap. 3.
De Lozzo M, Marrel A. New improvements in the use of dependence measures for sensitivity analysis and screening. 2014. arXiv preprint arXiv:1412.1414.
Ueno K, Hirata H, Hinoda Y, Dahiya R. Frizzled homolog proteins, micrornas and wnt signaling in cancer. Int J Cancer. 2013; 132(8):1731–40.
Holcombe R, Marsh J, Waterman M, Lin F, Milovanovic T, Truong T. Expression of wnt ligands and frizzled receptors in colonic mucosa and in colon carcinoma. Mol Pathol. 2002; 55(4):220.
Planutis K, Planutiene M, Nguyen AV, Moyer MP, Holcombe RF. Invasive colon cancer, but not non-invasive adenomas induce a gradient effect of wnt pathway receptor frizzled 1 (fz1) expression in the tumor microenvironment. J Transl Med. 2013; 11(50):10–1186.
Sato A, Yamamoto H, Sakane H, Koyama H, Kikuchi A. Wnt5a regulates distinct signalling pathways by binding to frizzled2. EMBO J. 2010; 29(1):41–54.
Klapholz-Brown Z, Walmsley GG, Nusse YM, Nusse R, Brown PO. Transcriptional program induced by wnt protein in human fibroblasts suggests mechanisms for cell cooperativity in defining tissue microenvironments. PloS ONE. 2007; 2(9):945.
Yokoyama N, Yin D, Malbon CC. Abundance, complexation, and trafficking of wnt/ β-catenin signaling elements in response to wnt3a. J Mol Signal. 2007; 2(1):11.
He TC, Sparks AB, Rago C, Hermeking H, Zawel L, da Costa LT, Morin PJ, Vogelstein B, Kinzler KW. Identification of c-myc as a target of the apc pathway. Science. 1998; 281(5382):1509–12.
Korinek V, Barker N, Morin PJ, van Wichen D, de Weger R, Kinzler KW, Vogelstein B, Clevers H. Constitutive transcriptional activation by a β-catenin-tcf complex in apc-/- colon carcinoma. Science. 1997; 275(5307):1784–7.
Morin PJ, Sparks AB, Korinek V, Barker N, Clevers H, Vogelstein B, Kinzler KW. Activation of β-catenin-tcf signaling in colon cancer by mutations in β-catenin or apc. Science. 1997; 275(5307):1787–90.
Hino SI, Michiue T, Asashima M, Kikuchi A. Casein kinase i ε enhances the binding of dvl-1 to frat-1 and is essential for wnt-3a-induced accumulation of β-catenin. J Biol Macromol. 2003; 278(16):14066–73.
You XJ, Bryant PJ, Jurnak F, Holcombe RF. Expression of wnt pathway components frizzled and disheveled in colon cancer arising in patients with inflammatory bowel disease. Oncol Rep. 2007; 18(3):691–4.
González-Sancho JM, Brennan KR, Castelo-Soccio LA, Brown AM. Wnt proteins induce dishevelled phosphorylation via an lrp5/6-independent mechanism, irrespective of their ability to stabilize β-catenin. Mol Cell Biol. 2004; 24(11):4757–68.
Gao C, Chen YG. Dishevelled: The hub of wnt signaling. Cell Signal. 2010; 22(5):717–27.
Huang MY, Yen LC, Liu HC, Liu PP, Chung FY, Wang TN, Wang JY, Lin SR. Significant overexpression of dvl1 in taiwanese colorectal cancer patients with liver metastasis. Int J Mol Sci. 2013; 14(10):20492–507.
Galli LM, Barnes T, Cheng T, Acosta L, Anglade A, Willert K, Nusse R, Burrus LW. Differential inhibition of wnt-3a by sfrp-1, sfrp-2, and sfrp-3. Dev Dyn. 2006; 235(3):681–90.
Suzuki H, Watkins DN, Jair KW, Schuebel KE, Markowitz SD, Chen WD, Pretlow TP, Yang B, Akiyama Y, van Engeland M, et al. Epigenetic inactivation of sfrp genes allows constitutive wnt signaling in colorectal cancer. Nat Genet. 2004; 36(4):417–22.
Caldwell GM, Jones C, Gensberg K, Jan S, Hardy RG, Byrd P, Chughtai S, Wallis Y, Matthews GM, Morton DG. The wnt antagonist sfrp1 in colorectal tumorigenesis. Cancer Res. 2004; 64(3):883–8.
Chinnadurai G. Ctbp, an unconventional transcriptional corepressor in development and oncogenesis. Mol Cell. 2002; 9(2):213–24.
Hamada F, Bienz M. The apc tumor suppressor binds to c-terminal binding protein to divert nuclear β-catenin from tcf. Dev Cell. 2004; 7(5):677–85.
Schneikert J, Brauburger K, Behrens J. Apc mutations in colorectal tumours from fap patients are selected for ctbp-mediated oligomerization of truncated apc. Hum Mol Genet. 2011; 20(18):3554–64.
Patel J, Baranwal S, Love IM, Patel NJ, Grossman SR, Patel BB. Inhibition of c-terminal binding protein attenuates transcription factor 4 signaling to selectively target colon cancer stem cells. Cell Cycle. 2014; 13(22):3506–18.
Willert K, Nusse R. Wnt proteins. Cold Spring Harb Perspect Biol. 2012; 4(9):007864.
Metcalfe C, Bienz M. Inhibition of gsk3 by wnt signalling–two contrasting models. J Cell Sci. 2011; 124(21):3537–44.
Lum L, Clevers H. The unusual case of porcupine. Science. 2012; 337(6097):922–3.
Chinnadurai G. Ctbp family proteins: more than transcriptional corepressors. Bioessays. 2003; 25(1):9–12.
Kim J, Kim S. In silico identification of sfrp1 as a hypermethylated gene in colorectal cancers. Genomics Inf. 2014; 12(4):171–80.
Wu CH, Chung FY, Chang JY, Wang JY. Rapid detection of gene expression by a colorectal cancer enzymatic gene chip detection kit. Biomark Genomic Med. 2013; 5(3):87–91.
MacDonald BT, Semenov MV, Huang H, He X. Dissecting molecular differences between wnt coreceptors lrp5 and lrp6. PLoS ONE. 2011; 6(8):23537.
Liu G, Bafico A, Harris VK, Aaronson SA. A novel mechanism for wnt activation of canonical signaling through the lrp6 receptor. Mol Cell Biol. 2003; 23(16):5825–35.
Watanabe T, Kobunai T, Toda E, Kanazawa T, Kazama Y, Tanaka J, Tanaka T, Yamamoto Y, Hata K, Kojima T, et al. Gene expression signature and the prediction of ulcerative colitis–associated colorectal cancer by dna microarray. Clin Cancer Res. 2007; 13(2):415–20.
Lemieux E, Cagnol S, Beaudry K, Carrier J, Rivard N. Oncogenic kras signalling promotes the wnt/ β-catenin pathway through lrp6 in colorectal cancer. Oncogene. 2014; 34:4914–27.
Hikasa H, Sokol SY. Phosphorylation of tcf proteins by homeodomain-interacting protein kinase 2. J Biol Chem. 2011; 286(14):12093–100.
Leushacke M, Spörle R, Bernemann C, Brouwer-Lehmitz A, Fritzmann J, Theis M, Buchholz F, Herrmann BG, Morkel M. An rna interference phenotypic screen identifies a role for fgf signals in colon cancer progression. PLoS ONE. 2011; 6(8):23381.
Cadigan KM, Waterman ML. Tcf/lefs and wnt signaling in the nucleus. Cold Spring Harb Perspect Biol. 2012; 4(11):007906.
Faivre R, Iooss B, Mahévas S, Makowski D, Monod H. Analyse de Sensibilité et Exploration de Modèles: Application aux Sciences de la Nature et de L'environnement: Editions Quae; 2013.
Sincere thanks to the anonymous reviewers who have provided input to refine this manuscript. Part of this work has been accepted for poster presentation at the International Conference on Systems Biology of Human Disease 2016. The author thanks the Harvard Program in Therapeutics Sciences for granting a registration fee scholarship for this work after evaluation of the poster abstract. The author also thanks The Royal Society of Chemistry (RSC) for giving permission to reproduce parts of the material in reference [23].
The author thanks Mrs. Rita Sinha and Mr. Prabhat Sinha for supporting him financially on this project for the period of 2014-16.
Faculty of Maths & IT, Royal Thimphu College, Nagbiphu, Thimphu, 1122, Bhutan
Shriprakash Sinha
SS conceived and designed the experiments; performed the experiments; analyzed the data; wrote the paper.
Correspondence to Shriprakash Sinha.
The datasets from the articles [27] and [28] were used in the computational experiments. [27] states the following -
N/A. An ethics statement is not required for this work. An informed consent from participants involved is also not applicable for this work.
This has been made available on the PLOS journal at http://dx.doi.org/10.1371/journal.pone.0010024. [28] states the following -
Human tissue samples were obtained from Singapore Tissue Network using protocols approved by institutional Review Board of National University of Singapore; informed consent was obtained from each individual who provided the tissues. The colorectal cancer cell lines and non-transformed cell lines used in this study were purchased from the American Type Culture Collection (Manassas, VA).
[28] states the following -
Under the License number 4085451080321, the author of this article is using Table S1 of [28] containing static expression profiles. For [27] the following statement holds -
Copyright: ©2010 Gujral, MacBeath. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The dynamic data from [27] that is used in this manuscript is available and can be downloaded from Table S3 of PLOS journal website.
Code has been made available on Google drive at https://drive.google.com/folderview?id=0B7Kkv8wlhPU-Q2NBZGt1ZERrSVE&usp=sharing Audio file along with the poster presented at SBHD 2016 has been made available on Google drive at https://drive.google.com/drive/folders/0B7Kkv8wlhPU-aVR0eFJqTkNUOFE The datasets generated and/or analysed during the current study are available in the GEO public database (accession number GSE10972) repository, https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE10972 for [28]. The datasets generated and/or analysed during the current study are available in Table S3, http://dx.doi.org/10.1371/journal.pone.0010024 for [27].
Authors' information
Shriprakash Sinha holds an MSc in Applied Computing Science from Utrecht University (The Netherlands) and an MS in Computer Science from Oregon State University (USA). He has worked as an intern at Siemens Information Systems Limited, Computer Aided Diagnosis Research Center (Bangalore, India), as a project associate at the prestigious Indian Institute of Science (IISc, Bangalore, India), and as a scientific researcher/fMRI data analyst at the Neuroimaging Center (NiC) of the UMCG hospital (Groningen, The Netherlands). He also had the opportunity to work at TU Delft and Philips (The Netherlands) before returning temporarily to India and working independently as a researcher. As an independent researcher he has been interested in working on the systems biology of the Wnt pathway using sensitivity analysis and machine learning methods, as well as on the development of a search engine to prioritize extra/intracellular nth-order interactions that affect the pathway. It is hoped that these prioritizations, in the form of rankings, will help reduce the number of wet-lab experiments needed to test crucial interactions in the pathway and thus save significant costs. As a further development, observing the changing rankings in time will lead to time-based intervention in the pathway and the development of target-based drugs for cancer. He currently works as a Senior Lecturer in the Faculty of Maths and IT at Royal Thimphu College, Bhutan, and teaches data mining and problem-solving skills. Apart from scientific pursuits, Sinha is also engaged in conducting 15-day intensive meditation programs based on the Heart Sutra.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Sinha, S. Hilbert-Schmidt and Sobol sensitivity indices for static and time series Wnt signaling measurements in colorectal cancer - part A. BMC Syst Biol 11, 120 (2017) doi:10.1186/s12918-017-0488-z
Wnt pathway
Psychophysical law | CommonCrawl |
The diffusion of goods with multiple characteristics and price premiums: an agent-based model
Pedro Lopez-Merino (ORCID: orcid.org/0000-0003-0881-2161)1,2,3 &
Juliette Rouchier1
According to innovation diffusion theories, the adoption of a new product is the result of a dynamic process whereby individuals become likelier to adopt as others do. Agent-based modelling has emerged as a useful technique to model and study processes of innovation diffusion within artificial societies, as it makes it easy to programme and simulate the interaction of multiple agents among themselves and with their environment. Despite a large body of literature dealing with the diffusion of innovations, including through agent-based modelling, there has been little to no consideration of two elements that are important features of consumption: the presence of multiple characteristics of goods, and that of price premiums on the presence of added characteristics. We propose an agent-based model of the diffusion of such goods, and study its emerging properties when compared to standard ones. Our goal is to try to understand how social interaction affects the consumption of goods that are complex rather than uni-dimensional, and whose prices depend on the number of dimensions (characteristics) that are present. Testing the model for different parameters shows that as goods become more complex, social interaction becomes an increasingly important explanatory variable for purchases. This opens up interesting avenues of discussion for those seeking to bring together innovation diffusion theories and goods' complexity, and can be linked with a number of issues in the social and sustainability sciences.
Introduction and state of the art
The academic study of innovation diffusion, traditionally considered as beginning with Rogers's 1962 Diffusion of Innovations, has long been used to analyse and describe the market penetration of new product releases, most notably that of new technologies. It builds on the simple observation that individuals are influenced by their peers in the decisions they make, and that successful products often evolve from being the fad of a few early adopters to reaching an important proportion of the population. For decades now, scholars have built models that seek to reflect the theoretical and empirical findings of the innovation diffusion literature, starting with analytical works by Bass (1969) and Granovetter (1978). In the past 20 years, agent-based modelling has emerged as a useful technique that can help overcome certain limitations of aggregate models (Kaufmann et al. 2009), deemed too analytical and unable to capture heterogeneity and the complex dynamics of the social processes that shape diffusion (Kiesling et al. 2012). Modellers have produced a variety of models that seek to recreate the main observed properties of innovation diffusion processes in order to study how they respond to different variations in their conditions.
Two issues have, to the best of our knowledge, been absent from the literature on modelling innovation diffusions: the characterisation of goods as being multi-dimensional (Lancaster 1966; Rosen 1974),Footnote 1 and price differences based on the presence or absence of these different dimensions.Footnote 2 This lack of consideration of multidimensionality and price premiums is particularly striking when compared to the study of how network structure affects diffusion, which has received the lion's share of scholars' attention.Footnote 3 Our model is an effort to redress this imbalance, particularly since multi-characteristic consumption is an already well-established feature of economic theories (Lancaster 1966; Rosen 1974). Moreover, the present study was conceived within the context of a larger project on food sustainability issues, where the complexity of goods and price premiums are well-established features (Aschemann-Witzel and Zielke 2017; Jackson 2005).
With these elements in mind, we built a model that conceives consumption of each of these dimensions as being part of a dynamic process of social diffusion. The model belongs to the class of network threshold models (Watts and Dodds 2009), as first conceptualised by Granovetter (1978) and Granovetter and Soong (1983, 1986). In these, the action of an individual (in Granovetter's early example, deciding to join a riot) is binary, and depends on whether the proportion of others who act has reached or not a given threshold. Earlier versions of threshold models have been expanded in order to account for the heterogeneity of individuals and different network topologies (Delre et al. 2007). These have been used to recreate and study the diffusion of innovations (Pegoretti et al. 2012; Young 2009), analysing how cascade-like phenomena of adoption happen within societies. We extend them to include several characteristics of goods, whereby diffusion happens with regards to each one of them. As our model is inspired by issues of sustainable consumption, one can think of consumers adopting low plastic packaging, locally-sourced, organic, fairly traded, or any such dimensions of sustainability, all of which come with a higher price-tag attached. The intention to adopt a characteristic depends therefore on the proportion of others within a consumer's network that have previously adopted.
Our model—whose description and implementation are given on "Model description" section—is an extension of one previously presented (López-Merino and Rouchier 2021). The main concepts behind it are (i) that adoption of a given characteristic is the combined result of a consumer's intention to buy it and his or her budget ability to do so, and (ii) that intention on the said characteristic is formed through the observation of the level of adoption in the consumer's more or less immediate network.
We expand the aforementioned model and analysis in four main ways. First, we include a formalised and standardised description of it, in order to ensure transparency, ease of understanding and replicability. For this, we use the "Overview, Design concepts and Details" (ODD) protocol (Grimm et al. 2006, 2010; Müller et al. 2013). Second, we remove the focus on the intention-behaviour gap on which the previous work was centred, and work purely on the dynamics of adoption and diffusion, to shed light on how the consumption of multiple dimensions is increasingly dependent on social interaction. Third, and in order to add robustness to our analysis, we include an econometric regression and a graphical presentation of results. Fourth, we include an analytical exploration of our model's equations and results.
Our overarching interest is to study how the addition of extra dimensions to diffusion models, and the explicit inclusion of price premiums on them, can produce new results worthy of further exploration. In particular, we look at whether the influence of social dynamics on the purchase of combined characteristics is a function of the number of characteristics considered. We show that the importance of the influence of others' adoption on an agent's purchases increases as the number of characteristics is expanded. We evaluate this further by changing certain parameters within the model, which provides an additional confirmation of our results. This is arguably a novel result that could be of interest for analysts of social dynamics.
We have structured the remainder of the article in the following way: We first introduce the model using the ODD approach (Grimm et al. 2006, 2010).Footnote 4 The results emerging from our model are later shown graphically and by means of econometric analyses, as well as by an analytical exploration of the model's equations. We then finish with conclusions and work ahead.
Model description
General purpose, entities, state variables and process.
This model was conceived as a theoretical abstraction (Boero and Squazzoni 2005) in order to explore questions related to the diffusion of purchases of multi-dimensional goods within a human network. It uses and expands the innovation diffusion framework. Theoretical, empirical and modelling work has been done over the past few decades on how innovations become adopted in societies (or fail to do so), and has consistently described S-shaped curves as characterising the process of diffusion—the result of network economies and social influence.
Our model belongs to the class of network threshold models, whereby an agent whose network (immediate or otherwise) reaches a given proportion of adoptees automatically adopts. Although an obvious simplification of reality, these models have a number of advantages in terms of fitting theoretical and empirical findings relating to the diffusion of innovations (Watts and Dodds 2009).
We seek to study an emergent property that comes out of this, and how it can inform theoretical and empirical work. Namely, we test how purchases become dependent on social interaction as more dimensions of characteristics are included. As the complexity of goods increases, it is natural to wonder what factors can move consumers towards adopting a multiple array of characteristics, and we posit that social interaction is an increasing determinant of this.
Entities, state variables and scales
There are three types of entities in the model: consumers, goods and links. The model is run on a number of parameters that are set before starting a simulation. Tables 1, 2, 3 and 4 describe the entities with their main related variables as well as the parameters.
Table 1 Consumer variables, their description and type
Table 2 Goods' variables, description and type
Table 3 Types of links
Table 4 Model's parameters and their description
Spatial considerations are not explicitly considered in the model.
Our model being largely a theoretical abstraction, we choose parameters' values so as to produce diffusion curves that stabilise within a relatively short time span (\(t^{max}=50\)). We do not strictly define what a time-step represents. Since the model is inspired by the notion of sustainable food purchases, however, a step of time and its corresponding purchase can be imagined as a weekly basket of items that cannot be avoided.
Consumers face \(2^{n_d}\) types of goods, as each good can have or lack any of the \(n_d\) characteristics available (for \(n_d=1\), there are two goods: the one that has the existing characteristic and the one that doesn't). There is no difficulty in identifying a good or in purchasing it other than that created by \(\pi\). Consumers have to purchase a unit of a good at each time-step, represented in the model through the creation of one \(L_b\) between the consumer and a good. The chosen good will depend on the consumer's own \(\mathbf {I_i}\) and \(w_i\), as well as on an element of randomness for any characteristic a where \(I_{i,a}=0\). The consumers' individual algorithm at each time-step (intention formation and purchase) can be described as follows (the original article presents it as a flowchart):
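As an illustration of this sequence, the sketch below renders the per-step logic in Python for a single consumer. This is a sketch only: the model itself was implemented in NetLogo, and the price values and the \(\tau\) and \(\kappa\) defaults shown here are assumptions made for readability, not the paper's calibrated parameters (the underlying equations are given in the "Submodels" section).

```python
import random

P0, PI = 1.0, 0.1                  # base price and premium (assumed baseline values)

def price(g):
    """Price of a good carrying g characteristics."""
    return P0 + PI * g

def step_consumer(I, w, adoption_share, tau=0.5, kappa=0.5):
    """One time-step for a single consumer: intention formation, then purchase."""
    n_d = len(I)
    # 1. Intention formation: threshold reached -> intention with probability kappa
    for a in range(n_d):
        if I[a] == 0 and adoption_share[a] > tau and random.random() < kappa:
            I[a] = 1
    # 2. Purchase: start from the intended characteristics and, if the basket is
    #    unaffordable, drop randomly chosen ones (random arbitration under budget)
    basket = [a for a in range(n_d) if I[a] == 1]
    while basket and price(len(basket)) > w:
        basket.remove(random.choice(basket))
    # 3. Leftover budget: non-intended characteristics may still be bought, w.p. 0.5
    for a in random.sample(range(n_d), n_d):
        no_intention = I[a] == 0 and a not in basket
        if no_intention and price(len(basket) + 1) <= w and random.random() < 0.5:
            basket.append(a)
    return I, basket
```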
It should be noticed that the good containing no characteristics (\(C_g=\mathbf {0}\)) can be purchased by all consumers, and so the algorithm always comes to an end. A consumer purchasing a good costing less than the consumer's \(w_i\) does not save any money, and \(w_i\) is reset to the same value at each time-step. Borrowing to purchase an expensive good is not allowed either. Note also that characteristics are independent of one another from a consumer's point of view, and so having intention on one of them does not imply having it on the others.
The equations underlying the algorithm are shown on section "Submodels".
Basic principles, individual decision-making, learning, collectives, heterogeneity, stochasticity and observation & emergence.
We use a basic threshold model of innovation diffusion, and expand it to multiple characteristics. Despite its simplicity, the diffusion of purchases in our model is less straightforward than in traditional threshold models: a consumer will not automatically adopt once a threshold is reached on a dimension a, but will develop an intention (\(I_{i,a}\)) to do so. \(I_{i,a}=1\) with a corresponding \(w_i\) availability will thus translate as \(A_{i,a}=1\). A purchase can happen without a corresponding adoption, as a consumer with enough budget may purchase a good containing a characteristic for which he or she is not necessarily interested in.
Although characteristics are not interdependent per se (neither in the case of goods nor in that of consumers' intentions), they are subject to a common \(w_i\) constraint, and may therefore have to be arbitrated between by consumers with a limited \(w_i\) and more than one \(I_{i,a}=1\). In the example of sustainable food consumption, this can be pictured as a person wanting to purchase plastic-free, locally-sourced, organic and fair-trade, and yet being unable to satisfy all four due to budget issues (the arbitration in our model is done randomly, which precludes the possibility of a consumer having a higher preference for one or another of the dimensions).
Individual decision-making
The decision-making process of individuals is exceedingly simple. They choose a good at each time-step, related to their intention and budget as has already been described. There is no particular rationality other than the fact that they have to purchase a unit of a good (in economic terms, the demand for a unit of a good per time-step is perfectly inelastic). Decisions are chiefly the outcome of a process of social influence, as the intention to adopt a given dimension is related to the proportion of consumers who have adopted it in the consumer's network of influence. In this, adoption can be seen as a cultural phenomenon, which also has a counterpart in the consumption of food.
Learning, sensing and prediction
A consumer i with a sufficient level of \(w_i\) can randomly purchase a good with a characteristic a while \(I_{i,a}=0\) (much like a person in the supermarket may purchase an eco-labelled coffee without caring for it), although such a consumer is not considered to have changed his or her adoption. Only consumers with a formed intention can thus be considered able to sense a dimension; this is the only learning element we can identify in the model.
No element of prediction is included in the model.
All consumers belong to an interconnected network created using the algorithm proposed by Watts and Strogatz (1998), which includes a parameter for the number of random links (\(\rho\) in our model) that determines the clustering coefficient of the network. We tested our model on more or less clustered networks. We present results for a perfectly regular lattice (\(\rho =0\), clustering coefficient of 0.5) and a Small World one (\(\rho =1\), with a mean observed clustering coefficient of 0.646).
\(L_c\) links do not change during the course of a simulation.
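For readers wishing to reproduce comparable topologies, the snippet below uses Python's networkx library. One caveat: the paper adds \(\rho\) random links, whereas networkx's stock generators either rewire links (Watts-Strogatz) or add shortcuts with a given probability (the Newman-Watts variant, used here as the closest approximation).

```python
import networkx as nx

n_c, k = 100, 4   # 100 consumers, each tied to its 4 nearest neighbours on the ring

regular = nx.watts_strogatz_graph(n_c, k, p=0.0)   # no rewiring: regular lattice
print(nx.average_clustering(regular))              # 0.5, as reported in the text

# Adding random shortcuts (Newman-Watts) approximates the rho > 0 setup:
small_world = nx.newman_watts_strogatz_graph(n_c, k, p=0.25)
print(nx.average_clustering(small_world))
```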
For purposes of illustration, Fig. 1 shows the two different network possibilities in our lattice, at \(t=0\) and \(t=1\).
\(t=0\) (left) and \(t=1\) (right) for a network lattice with \(\rho =0\) (above, clustering coefficient \(=0.5\)) and \(\rho =1\) (below, mean observed clustering coefficient \(=0.646\)). \(n_d=2\), green links represent \(L_c\) (fixed) and red ones \(L_b\) (evolving). The green boxes represent each one of the \(2^{n_d}\) goods
Heterogeneity
Consumers are heterogeneous in their budgets and intentions, the latter of which are randomly set to 1 for a proportion \(\iota\) of consumers. They also belong to different networks of influence. Decision-making and the remaining aspects of the model are common to all consumers.
Goods are heterogeneous, in that no good fully resembles another. There are \(2^{n_d}\) of them, each either carrying or lacking each of the \(n_d\) dimensions considered. \(\mathbf {C_g}\) and \(\pi\) determine each good's \(p_g\), with supply for each being perfectly elastic.
Stochasticity
Stochasticity is included in that \(w_i\) is allocated randomly (uniformly distributed across consumers), that only a given proportion is randomly preset with intention at \(t=0\), and that a consumer i for whom the proportion of adoptees in \(\mathbf {J_i}\) reaches \(\tau\) in a characteristic a will develop \(I_{i,a}=1\) with a fixed probability \(\kappa\). \(B_{i,a}\) is a variable subject to an element of stochasticity also, as a consumer with a sufficiently high \(w_i\) may well buy a good with a characteristic it has no intention of purchasing.
Observation and emergence
We follow the evolution of two related types of indicators. The first is how intention diffuses throughout the network on each of the dimensions of characteristics. The second is how purchases of characteristics evolve on each of the dimensions.
We look at how these diffusions take place for single dimensions as well as for combined ones (the proportion of consumers with intention or purchasing more than one dimension of characteristics). It is to be expected that the diffusion curves on more than one dimension are shifted to the bottom-right with regards to single-dimension ones: as \(w_i\) constraints and \(\mathbf {I_i}\) limits are added up, diffusion on combined characteristics should increasingly be lower than for single ones.
We term our two sets of indicators as intention1D, intention2D, intention3D... and purchases1D, purchases2D, purchases3D..., where 1D, 2D, 3D, etc. represent the number of combined characteristics on which influence and adoption are being measured. Note that 1D can include any single one of the characteristics in \(C_n\), and thus for simulations run on \(n_d>1\) it means that any of them is present in the \(\mathbf {B_i}\) or \(\mathbf {I_i}\) sets of consumers (conversely, 2D when \(n_d>2\) includes combinations of characteristics, and so on). A consumer moving from purchasing one dimension to purchasing two will be counted both in purchases1D and purchases2D.
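In code, these cumulative counters are straightforward to compute from a 0/1 matrix of purchases; the snippet below is a minimal sketch on hypothetical data (the real matrices come from the NetLogo runs).

```python
import numpy as np

rng = np.random.default_rng(0)
n_c, n_d = 100, 4
B = rng.integers(0, 2, size=(n_c, n_d))   # hypothetical 0/1 purchase matrix

bought = B.sum(axis=1)                    # characteristics bought per consumer
for k in range(1, n_d + 1):
    share = np.mean(bought >= k)          # a consumer counts in every kD up to its total
    print(f"purchases{k}D = {share:.2f}")
```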
The question that opens up relates to the relationship between the evolution of the intention and purchase curves for different dimensions. Our interest is to study how overall adoption determines purchases, and whether its importance indeed increases as dimensions are combined. In a nutshell, can the purchase of combined characteristics be expected to be more dependent on the diffusion process than that of single ones? More formally, is the impact of the evolution of adoption and intention on purchases higher as more dimensions are considered? This would be an interesting result stemming from already existing models and theories of diffusion that would validate our view on the adoption of sustainable behaviours: that these are socially constructed processes that chiefly depend on interaction, and that the more complex they become, the stronger this dependence will be.
Description of the implementation details, initialisation and mathematical submodels
The model was implemented in NetLogo 6.2,Footnote 5 drawing from its model library in order to adapt the Watts and Strogatz (1998) network. The results were analysed using R (R Core Team 2020).
Initialisation and input data
We work on a baseline setup that we test for 4 values of \(n_d\), as well as on modified setups obtained by changing three parameters: \(\rho\), d and \(\pi\). This permits us to explore how our model responds to a greater or lesser level of network activity (through added links and a higher degree of influence), and to compare this with a corresponding change in prices. In terms of actual consumption, this amounts to comparing the effect of price reductions on the consumption of goods with several dimensions against that of a higher level of social exchange.
The baseline values were chosen arbitrarily in order to produce stylised S-shaped curves, and do not correspond to precise data, which is otherwise unavailable for the type of dimensions conceived in the model (there is no exhaustive database of price differences between goods containing or not a variety of possible dimensions of characteristics). Moreover, we do not attempt parameter calibration at this stage, and so the values chosen should not necessarily be taken as having a one-to-one correspondence with the reality actual consumers face. The different parameter values chosen are listed in Table 5.
Table 5 Baseline and modified values of parameters used in the simulations
\(n_d\) is set for both baseline and modified values at \(n_d=1,2,3\) and 4.
Budgets and prices are configured as follows:
\(w_i\) \(\sim U(\$ 1,\$2)\). Each consumer is randomly endowed with a budget that can go from one to two dollars.
\(p_0\) \(=\$1\). The price of a good containing no characteristics is of 1 dollar, and is therefore accessible to all consumers. Taking the baseline value of \(\pi\) into account, this means that a good containing 1 extra characteristic will be priced at \(\$1.1\) (1.05 for the modified values), a good containing two of them will be priced at \(\$1.2\) (1.1) and so on.
No input data is used to feed the model.
There are two main submodels present, pertaining to how consumers' intention and purchases evolve, as described in the algorithms in "Process" section. For any given characteristic, the submodels can be written as:
$$Pr(I_{i}^{t}=1 \mid I_{i}^{t-1}=0) = \begin{cases} \kappa & \text{if } \sum _{j_{1}}^{j_m} A_{j}^{t-1}/M_i > \tau \\ 0 & \text{otherwise} \end{cases} \quad (1)$$
$$A_{i}^{t} = \begin{cases} 1 & \text{if } I_{i}^{t}=1 \wedge w_i > p_1 \\ 0 & \text{otherwise} \end{cases} \quad (2)$$
$$Pr(B_{i}^{t}=1) = \begin{cases} 1 & \text{if } A_{i}^{t} = 1 \\ 0.5 & \text{if } A_{i}^{t} = 0 \wedge w_i > p_1 \\ 0 & \text{otherwise} \end{cases} \quad (3)$$
where \(p_1\) is the price of a good containing one extra characteristic.
Given our model description and the uniform distribution of \(w_i\), the \(Pr(w_i > p_1)\) implicit in Eqs. 2 and 3 can be generalised to be written as
$$Pr(w_i > p_g) = (1 - g \times \pi ) \quad (4)$$
Where \(p_g\) is the price of a good containing g of the \(n_d\) characteristics.Footnote 6
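Equation 4 is easy to check numerically. Under the baseline values assumed in the sketch below (\(w_i \sim U(1,2)\), \(p_0=1\), \(\pi=0.1\)), the empirical frequencies match \(1 - g\pi\):

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.uniform(1.0, 2.0, size=1_000_000)   # budgets, as in the baseline setup
pi = 0.1                                    # baseline price premium

for g in range(1, 5):
    p_g = 1.0 + g * pi                      # price of a good with g characteristics
    print(g, (w > p_g).mean(), 1 - g * pi)  # empirical share vs. Eq. 4
```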
In the following section, we analyse our results.
We run the model 50 times over 50 time-steps for each of the configurations proposed in Table 5 (1600 simulations and a total of 80,000 time-step observations). After this, we further checked the model by running it on the same configurations but with \(n_c=50, 200, 300\), so as to verify the extent to which it exhibits finite-size effects (Toral and Tessone 2007).
We propose two different approaches to study how intention and purchases evolve, in particular with respect to different values of \(n_d\). The question we keep in mind is the one raised in "Observation and emergence" section, as to whether purchases depend increasingly on intention as the number of dimensions considered increases. We first use the global results of our simulations to find trends that can further inform our discussion, both graphically and by means of linear regressions. Then, we look at how variations in our parameters change the evolution of each of our indicators, to further explore the effect of social interaction on them. Lastly, we sketch out an analytical study of our model to shed light on how the model's conception relates to the results we find.
Figure 2 below shows the S-shaped curves for our two indicators, obtained for the different values of \(n_d\) on the baseline setup, and for single (1D) or combined (2D, 3D, 4D) characteristics. The smoothed curves have been obtained using the loess method (Cleveland and Devlin 1988), and the grey area shows their confidence interval at 95%.
What can be seen from the figure is that, for \(n_d>1\), intention curves grow progressively more slowly as dimensions are combined, something that does not seem to occur with regard to purchases. This already gives a visual hint of the hypothesis presented above, as the relationship between intention and purchase is stronger for a higher number of dimensions.
Evolution of intention and purchase (baseline setup, \(n_d=1, 2, 3, 4\)), as a percentage of \(n_c\). Curves are S-shaped. (The colour legend distinguishing the four indicator curves is rendered graphically in the original figure.) The grey shade represents the 95% confidence interval for each curve.
Global results
Figure 3 below shows the result of plotting all observations of purchases against intention (baseline and modified values), separating them on 1D, 2D, 3D and 4D. Our hypothesis seems again to be corroborated, since the slope of the best-fit curve (loess method) gets steeper as the number of dimensions is increased.
Scattered plots showing the level of \(A_i\) and \(B_i\) for 1, 2, 3 and 4 dimensions of characteristics. Steeper curves observed for higher values of \(n_d\) (baseline and modified values considered)
To verify these results, we test them using least squares linear regressions of intention on purchases. Table 6 shows the estimates for the y-intercept and \(\beta\) for each of the four regressions run. This confirms that the slope of the curves becomes steeper, with \(\beta\) going above 0.888 when four dimensions are considered.
Table 6 Least squares linear regressions run for all values of \(I_i\) on \(B_i\), baseline and modified values considered. \(\beta\) increases as more dimensions are considered
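A regression of this kind takes one line with numpy; the sketch below runs it on synthetic stand-in data (the 0.9 slope and noise level are arbitrary), since the actual observations come from the simulation output:

```python
import numpy as np

rng = np.random.default_rng(1)
intention = rng.uniform(0, 1, size=500)                      # synthetic stand-in data
purchases = 0.9 * intention + rng.normal(0, 0.05, size=500)

beta, intercept = np.polyfit(intention, purchases, 1)        # least squares fit
print(f"beta = {beta:.3f}, intercept = {intercept:.3f}")
```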
Figure 3 and Table 6 thus appear to confirm our hypothesis: as the number of dimensions increases, overall intention becomes an increasing determinant of purchases. The reader may notice from Fig. 3 that the correlation between purchases and adoption is not perfect. This happens because of the random component of purchases described earlier, whereby consumers with \(I_{i,a}=0\) but whose \(w_i>p_g\) may unknowingly buy a good containing characteristic a. This observed variability decreases as dimensions are increased, because the random component is a decreasing function of the number of characteristics considered: the likelihood of purchasing without intention goes down as dimensions are combined.
As mentioned above, the literature has found finite-size effects in social computational models, whereby the number of agents in a simulation run strongly affects its outcome. In order to quickly assess this in our model, we checked the \(\beta\) for purchases4D \(\sim\) intention4D under \(n_c=50, 200\) and 300. We then compared these \(\beta\)s with that of \(n_c=100\), using an ANOVA test to assess the null hypothesis of different values of \(\beta\) for different \(n_c\). Table 7 below shows the result of the exercise and indicates that no finite-size effects are present, at least within the range of values explored.
Table 7 Comparison of y-intercepts and \(\beta\)s for the regression of purchases4D \(\sim\) intention4D for several values of \(n_c\). The p-values of an ANOVA test comparing the \(\beta\)s for \(n_c=50, 200\) and 300 against that of \(n_c=100\) are shown. The hypothesis of different values of \(\beta\) can be rejected with 95% confidence
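One way to run such a comparison is to compare nested OLS models with and without an interaction between intention and \(n_c\). The sketch below uses synthetic data with a common slope, since the paper does not publish its test script:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
rows = []
for n_c in (50, 100, 200, 300):
    x = rng.uniform(0, 1, 200)
    y = 0.89 * x + rng.normal(0, 0.03, 200)   # same slope for every n_c
    rows.append(pd.DataFrame({"intention4D": x, "purchases4D": y, "n_c": n_c}))
df = pd.concat(rows, ignore_index=True)

m0 = smf.ols("purchases4D ~ intention4D", data=df).fit()
m1 = smf.ols("purchases4D ~ intention4D * C(n_c)", data=df).fit()
print(anova_lm(m0, m1))   # a high p-value: no evidence the slope depends on n_c
```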
Parameter results
To better grasp this, and in order to understand its practical implications, we have chosen to modify three of the parameters. Two of them (d and \(\rho\)) are network-related: the first changes the average number of nodes in an agent's network of influence, and the second affects the topology of the network by introducing a small world element to it, through the inclusion of random links between consumers (see Fig. 1). The last parameter (\(\pi\)) is modified so as to compare against a non-network one.
Our framework and the results above imply that increasing the activity of the network should have a stronger effect on purchases as a higher number of dimensions is considered, when compared to a non-network intervention such as a price reduction.
Figure 4 and Table 8 below provide confirmation of this. When d or \(\rho\) are modified (to 2 and 1, respectively), the effect on purchases is stronger as more dimensions are considered, which is not strictly the case for intention. Conversely, when \(\pi\) is halved (to 0.05), the results on the two variables do not appear as strongly dependent on the number of dimensions considered.
To fully understand this, it is useful to keep in mind the maximum theoretical percentage of purchases for \(n_d\) combined characteristics, which can be deduced from Eq. 4 to be \((1-n_d \times \pi ) \times 100\%\). Under the baseline setup, a maximum of 90% of consumers should be able to purchase one dimension of characteristics, 80% two dimensions, and so forth. What happens when the network-relevant parameters are modified (which corresponds to a higher level of social interaction) is that purchases come much closer to these levels than when \(\pi\) is modified. The impact of the latter is stronger in our simulations for 1 and 2 dimensions of characteristics, but lower for 3 and 4.
Evolution of intention and purchase, for baseline and modified parameters (\(n_d=4\)). Continuous lines represent the baseline setup; dashed ones represent parameters modified individually. The impact of network-related parameters appears higher for combined dimensions. (The colour legend distinguishing the curves is rendered graphically in the original figure.) The grey shade represents the 95% confidence interval for each curve.
Table 8 Observed indicators at \(t=50\) for baseline and modified setups, parameters modified individually. In brackets, the difference with baseline values (as indicators are normalised to represent percentages of the population, this difference represents the increase in the respective proportions). The effects of intention gains on purchases when network-related parameters are modified are strictly higher as more dimensions are considered. Conversely, the effects when the \(\pi\) parameter is modified do not appear to be dependent on the number of dimensions considered
As stated in the introduction, agent-based modelling has helped overcome some of the limitations of aggregate and analytical models, as sources of heterogeneity and randomness can be added without losing tractability. Nonetheless, once the above results have been obtained, it is worthwhile to explore what can be deduced from the model's formalisation in terms of theoretical and analytical conclusions. This subsection is an initial effort in that direction.
At any time-step, the total overall number of purchases of goods containing g characteristics can be deduced from Eqs. 2 and 3 to be:
$$\begin{aligned} \begin{aligned} \mathbf {B^{g,t}}&= n_c \times [Pr(\mathbf {B^{g,t}_i} | \mathbf {I^{g,t}_i} = \mathbf {1}) \times Pr(w_i> p_{g}) + Pr(\mathbf {B^{g,t}_i} | \mathbf {I^{g,t}_i} \ne \mathbf {1}) \times Pr( w_i> p_{g})] \\&= n_c \times Pr( w_i > p_{g}) \times [Pr(\mathbf {B^{g,t}_i} | \mathbf {I^{g,t}_i} = \mathbf {1}) + Pr(\mathbf {B^{g,t}_i} | \mathbf {I^{g,t}_i} \ne \mathbf {1})] \end{aligned} \end{aligned}$$
Where the vectorial notations for \(\mathbf {B^g}\) and \(\mathbf {I^g}\) indicate the collection of g characteristics. As characteristics are independent, the probability of purchasing g of them is equal to the probability of purchasing a single one to the power of g, and so the above can be rewritten to:
$$\begin{aligned} \begin{aligned} \mathbf {B^{g,t}} = n_c \times Pr( w_i > p_{g}) \times \sum _{k=0}^{k=g} \left( {\begin{array}{c}g\\ k\end{array}}\right) Pr(B_i^{t} | I_i^{t} = 1)^k \times Pr(B_i^{t} | I_i^{t} \ne 1)^{g-k} \end{aligned} \end{aligned}$$
With i representing an average agent. Using Eq. 4, we can rewrite the above to:
$$\begin{aligned} \mathbf {B^{g,t}}\, = n_c \times (1-g \times \pi ) \times \sum _{k=0}^{k=g} \left( {\begin{array}{c}g\\ k\end{array}}\right) Pr(I^{t}_i=1)^{k} \times [0.5 (1 - Pr(I^{t}_i=1))]^{g-k} \end{aligned}$$
Which, using the binomial theorem, can be rewritten to:
$$\begin{aligned} \begin{aligned} \mathbf {B^{g,t}} \,&= n_c \times (1-g \times \pi ) \times [Pr(I^{t}_i=1) + 0.5 \times (1 - Pr(I^{t}_i=1))]^{g} \end{aligned} \end{aligned}$$
Or, equally:
$$\begin{aligned} \mathbf {B^{g,t}} = n_c \times (1-g \times \pi ) \times 0.5^{g} \times (1 + Pr(I^{t}_i=1))^{g} \end{aligned} \quad (5)$$
Equation 5 shows that the overall level of purchases for g characteristics at time-step t is dependent on the value of \(\pi\) as well as on the probability of any consumer reaching the state of \(I_i=1\). What we are interested in studying is how \(\mathbf {B}^{g,t}\) responds to changes in g, and how this is in turn affected by higher or lower intention to adopt. In other words:
$$\frac{\delta \left( \Delta \% \mathbf {B}^{g,t}/\Delta g \right)}{\delta Pr(I^{t}_i=1)}$$
Our results in the previous subsections imply that there are values of the arguments in Eq. 5 for which the above is strictly positive.
We first study \(\Delta \% \mathbf {B}^{g,t}/\Delta g\), which we define as:Footnote 7
$$\begin{aligned} \Delta \% \mathbf {B}^{g,t}/\Delta g = \frac{\mathbf {B}^{(g+1),t} - \mathbf {B}^{g,t}}{\mathbf {B}^{g,t}} \end{aligned}$$
Replacing from Eq. 5, this is equal to
$$\Delta \% \mathbf {B}^{g,t}/\Delta g = \frac{(1-(g+1) \pi ) \times [0.5 \times (1+ Pr(I^{t}_i=1))]^{g+1} - (1-g \pi ) \times [0.5 \times (1 + Pr(I^{t}_i=1))]^{g}}{(1-g \pi ) \times [0.5 \times (1 + Pr(I^{t}_i=1))]^{g}}$$
(the common factor \(n_c\) cancels out)
Which equals
$$\Delta \% \mathbf {B}^{g,t}/\Delta g = 0.5 \times \frac{(Pr(I^{t}_i=1) + 1) \times (g \pi + \pi -1)}{g \pi - 1} - 1 \quad (6)$$
We now need to study whether \(\Delta \% \mathbf {B}^{g,t}/\Delta g\) above is strictly a growing function of the probability of a consumer having \(I^t_i=1\), which is itself a function of the overall proportion of adoptees in the network. For this, Eq. 6 can be differentiated with respect to \(Pr(I^{t}_i=1)\), which gives
$$\frac{\delta \left( \Delta \% \mathbf {B}^{g,t}/\Delta g \right)}{\delta Pr(I^{t}_i=1)} = 0.5 \times \frac{1 - g \pi - \pi }{1 - g \pi }$$
Our hypothesis holds true whenever
$$\frac{\delta \left( \Delta \% \mathbf {B}^{g,t}/\Delta g \right)}{\delta Pr(I^{t}_i=1)} \ge 0$$
We know from above that in our model \(g \times \pi < 1\), and so the above can be rearranged to:
$$\begin{aligned} 1 - g \times \pi - \pi \ge 0 \end{aligned}$$
$$\begin{aligned} \pi \le \frac{1}{1 + g} \end{aligned}$$
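The algebra above can also be verified symbolically; the following sympy sketch reproduces the derivative and the critical value of \(\pi\) (output forms may differ from the text up to algebraic rearrangement):

```python
import sympy as sp

g, pi, P, nc = sp.symbols('g pi P n_c', positive=True)

# Eq. 5: overall purchases of goods carrying g characteristics
B = nc * (1 - g * pi) * (sp.Rational(1, 2) * (1 + P))**g

# Eq. 6: percentage change in B when one more characteristic is added
pct_change = sp.simplify(B.subs(g, g + 1) / B - 1)

# Its derivative with respect to the probability of intention
deriv = sp.simplify(sp.diff(pct_change, P))
print(deriv)                                     # equals 0.5*(1 - g*pi - pi)/(1 - g*pi)

# Non-negativity (given g*pi < 1) turns exactly at pi = 1/(1 + g):
print(sp.solve(sp.Eq(1 - g * pi - pi, 0), pi))   # [1/(g + 1)]
```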
This means that, under current model specifications, the main result that we find is dependent on the relationship between \(\pi\) and g. The price premium needs to be sufficiently low with regards to the number of characteristics considered for purchases to be increasingly dependent on intention.
Discussion and concluding remarks
Social influence is one of the central determinants of people's action, and as such has long been recognised by scholars in marketing studies (Bass 1969) and, more recently, economics (Campbell 2013; Jackson 2014). Agent-based modelling has been used to describe processes of social diffusion of innovations (Kiesling et al. 2012), including that of green products (Janssen and Jager 2002) and adoption of sustainable diets (Ploll et al. 2020).
Consumption is a complex issue. Two sources of this complexity are related to the fact that goods are multidimensional (Lancaster 1966), and that extra characteristics often have a higher price-tag attached (Aschemann-Witzel et al. 2019). Our interest is to explore how the innovation diffusion framework plays out when these two elements are taken into account, therefore contributing to understanding the adoption of multi-dimensional consumption practices. For this, we have conceived a model in which consumers' adoption of each of the characteristics is subject to a process of social diffusion, and in which intention to adopt is the result of the consumer's related nodes having reached a certain threshold of adoption.
We differentiate between purchase, intention and adoption, as a consumer may end up buying a good with characteristics he or she is not necessarily interested in adopting, as long as his or her budget allows for this possibility. The probability of such unknowing purchases is naturally lower when several characteristics are taken into account. This opens the way for social influence being a greater determinant of purchases of multidimensional goods, which we put to the test using our model. We use overall intention as a proxy for social influence, as the possibility of a consumer developing the intention to buy a set of characteristics depends on the proportion of others that have already adopted the said set. In this way, we study in different ways the relationship between overall purchases and intention.
The simulation results show that purchases are increasingly dependent on adoption as more dimensions are considered. Raising the level of social interaction in the network (by doubling the distance of influence, and by increasing the number of random links and thus lowering the clustering coefficient) confirms this property by creating an effect that is stronger on purchases than on adoption as increasing dimensions are looked at. Conversely, the reduction of another parameter that is external to the network (the halving of price premiums) has an effect that does not substantially change for a higher number of characteristics. In the analytical study of our model, we put the mentioned result forward as a hypothesis we seek to validate. We show that it is indeed valid as long as the price premium paid is sufficiently low as a function of the number of additional characteristics considered. A longer discussion of this goes beyond the scope of the present paper, although one element of response is that too high a price premium prevents a high enough proportion of consumers from adopting once they develop intention, and thus stifles the process of diffusion on combined characteristics.
These results could have important real-life implications, as they indicate that social interaction and influence are particularly important in the development of behaviours that are attentive to multi-dimensional consumption (which, one can argue, is a central feature of sustainable consumption). One can thus see social interaction as helping develop a culture of consumption that is much too complex for isolated individuals to apprehend, and where price premiums limit the possibilities of individuals to spontaneously develop attentive behaviours. Although the results of our model cannot be used to give precise policy recommendations or advice on actions to follow, it is interesting to highlight that the reduction of price premiums (which can be interpreted as subsidies on goods) has a lesser effect on several dimensions of characteristics than increasing the level of social interaction does. In the context of sustainability, examples of interventions to favour this could be the organisation of local forums and activities on the topic, as well as more focus on having the voice of early adopters heard. Since interventions of this type are arguably less costly than mass publicity campaigns or large-scale production subsidies, their potential should not be neglected.
Our work opens a number of avenues for future work. In terms of modelling, there are a number of assumptions we have made that could be relaxed, as mentioned in the "Model description" section. Among these are the inclusion of interdependencies among characteristics (as individual preferences for consumption arguably play out across multiple dimensions), and the possibility of individuals being more or less capable of influencing others, in both the positive and negative sense. Issues of interdependency and heterogeneity of influence have been explored within the field of opinion diffusion (Deffuant et al. 2002, 2005; Rouchier and Tanimura 2015; Huet et al. 2019; Ye et al. 2018), and are reasonable additions to a work on the diffusion of consumption. Results from this literature have studied phenomena of polarisation, divergence of opinions and influence towards non-adoption. These are issues that need to be considered in the study of the adoption of sustainable behaviours (Xu et al. 2018), where we have argued that the issues of multidimensionality and price premiums are present.
We tested our model using one single network setup. Further work could explore the results on different ones, such as preferential attachment (Barabási and Bonabeau 2003). Although current knowledge makes it impossible to know with complete precision what a real human network of influence looks like (Manzo and van de Rijt 2020), it is a worthy effort to test the stability and sensitivity of results to different configurations (Thiriot 2010).
Our model was built as a theoretical effort in order to study the emerging properties of an extension of existing threshold models of innovation diffusion to multiple dimensions and price premiums. In this, the parameters we have used (both in the baseline and modified setups) have no exact correspondence with reality, other than the fact that extra characteristics in goods tend to make them more expensive. We thus do not make any claims as to the quantitative values of our results, but rather as to the qualitative implications they bring about. Nonetheless, our analytical exploration shows that the results can be generalised to different parameter values, as long as a certain relationship between them is respected. As in the case of different network configurations, further work on parameter manipulation and the study of its implications for our results would be welcome. As an example, simulations that take into account the critical point for \(\pi\) found in the "Analytical results" section could offer interesting avenues for exploration.
From an economic point of view, our work could benefit from the inclusion of production-side effects and, more largely, the issue of economies of scale. We have assumed a perfectly elastic supply for any number of characteristics, which is hardly a realistic assumption. As demand for certain products increases, it is natural to expect that they become cheaper and more accessible to consumers. On the consumers' side, although one can argue that perfect inelasticity of demand is somewhat realistic for an essential good such as food, it is less easy to justify a similar level of importance for each of the characteristics consumers have intentions on, and thus the assumption that, when consumers have to drop one, they do so randomly. This is not necessarily the case in real life, as people may be more or less attentive to each of the dimensions they seek out.
All of these assumptions, which arguably reduce the descriptive power of our model, have been made so as to increase its simplicity (Le Page 2017). Modifications building on this can be tested so as to see how they affect the results we have found here.
Outside of the realm of modelling, we have found it difficult to come across data on consumption that is attentive to the multiple dimensions it encompasses, and how social influence is a determinant of it. This makes contrasting the results with actual data (most notably quantitative) difficult. Surveys, experimental and field work (particularly using participatory methods) should more directly tackle this issue, which would be an important addition to current data, and more largely to our understanding of multidimensional consumption, in particular with regards to sustainability issues.
The code is accessible on the CoMSES Net/OpenABM (Janssen et al., 2008) model library https://www.comses.net/codebases/9c1c2e83-86e2-4cad-8482-20529f9a9b84/.
The following query on the Web of Science performed in September 2021 produces only three results, of which none is pertinent for consumption: TS=((sustainab*) AND ("innovation diffusion" OR "innovations diffusion" OR "diffusion of innovation") AND (simulation OR agent-based OR modelling OR modeling) AND (multidimension* OR multi-dimension* OR "multiple dimensions")).
An equivalent search where the last item is replaced by ("price premiums" OR "price differenc*") does not yield any results at all.
Conversely, in the field of opinion diffusion, where issues of convergence, divergence and polarisation are studied, multidimensionality has been more widely explored, starting from Axelrod's (1997) seminal work on the development of "culture", to more recent work on influence and learning (Rouchier and Tanimura 2012), worldviews (Huet et al. 2019) and interdependent topics (Ye et al. 2018).
We draw also from Müller et al.'s (2013) extension for agent-based models with human decisions (ODD + D), although the simplicity of our decision-making mechanism makes it unnecessary to use all of its items.
http://ccl.northwestern.edu/netlogo/.
Note that current specifications make any number of characteristics for which \(n \times \pi > 1\) impossible to purchase. This, which under our baseline setup corresponds to \(n_d=10\), is an assumption that could be relaxed by using a \(\pi\) function that is asymptotic on 1.
It is also possible to study the non-percentual change of \(\mathbf {B}^{g,t}\), although it requires considerably longer space. Nonetheless, the critical points for relevant variables' values are the same.
Overview, Design concepts and Details (Grimm et al. 2006, 2010; Müller et al. 2013). A standardized protocol to describe a computer model to ensure transparency and replicability
Aschemann-Witzel J, Varela P, Peschel AO (2019) Consumers' categorization of food ingredients: do consumers perceive them as 'clean label' producers expect? An exploration with projective mapping. Food Qual Prefer 71:117–128
Aschemann-Witzel J, Zielke S (2017) Can't buy me green? A review of consumer perceptions of and behavior toward the price of organic food. J Consum Aff 51(1):211–251
Axelrod R (1997) The dissemination of culture: a model with local convergence and global polarization. J Conflict Resolut 41(2):203–226
Barabási A-L, Bonabeau E (2003) Scale-free networks. Sci Am 288(5):60–69
Bass FM (1969) A new product growth for model consumer durables. Manag Sci 15(5):215–227
Boero R, Squazzoni F (2005) Does empirical embeddedness matter? J Artif Soc Soc Simul 8(4):31
Campbell A (2013) Word-of-mouth communication and percolation in social networks. Am Econ Rev 103(6):2466–2498
Cleveland WS, Devlin SJ (1988) Locally weighted regression: an approach to regression analysis by local fitting. J Am Stat Assoc 83(403):596–610
Deffuant G, Amblard F, Weisbuch G, Faure T (2002) How can extremism prevail? A study based on the relative agreement interaction model. J Artif Soc Soc Simul 5(4)
Deffuant G, Huet S, Amblard F (2005) An individual-based model of innovation diffusion mixing social value and individual benefit. Am J Sociol 110(4):1041–1069
Delre SA, Jager W, Janssen MA (2007) Diffusion dynamics in small-world networks with heterogeneous consumers. Comput Math Organ Theory 13(2):185–202
Granovetter M (1978) Threshold models of collective behavior. Am J Sociol 83(6):1420–1443
Granovetter M, Soong R (1983) Threshold models of diffusion and collective behavior. J Math Sociol 9(3):165–179
Granovetter M, Soong R (1986) Threshold models of interpersonal effects in consumer demand. J Econ Behav Org 7(1):83–99
Grimm V, Berger U, Bastiansen F, Eliassen S, Ginot V, Giske J, Goss-Custard J, Grand T, Heinz SK, Huse G, Huth A, Jepsen JU, Jørgensen C, Mooij WM, Müller B, Pe'er G, Piou C, Railsback SF, Robbins AM, Robbins MM, Rossmanith E, Rüger N, Strand E, Souissi S, Stillman RA, Vabø R, Visser U, DeAngelis DL (2006) A standard protocol for describing individual-based and agent-based models. Ecol Model 198(1–2):115–126
Grimm V, Berger U, DeAngelis DL, Polhill JG, Giske J, Railsback SF (2010) The ODD protocol: a review and first update. Ecol Model 221(23):2760–2768
Huet S, Deffuant G, Nugier A, Streith M, Guimond S (2019) Resisting hostility generated by terror: an agent-based study. PLoS ONE 14(1):e0209907
Jackson MO (2014) Networks in the understanding of economic behaviors. J Econ Perspect 28(4):3–22
Jackson T (2005) Live better by consuming less?: is there a "double dividend'' in sustainable consumption? J Ind Ecol 9(1–2):19–36. https://doi.org/10.1162/1088198054084734
Janssen MA, Alessa LN, Barton M, Bergin S, Lee A (2008) Towards a community framework for agent-based modelling. J Artif Soc Soc Simul 11(2):6
Janssen MA, Jager W (2002) Stimulating diffusion of green products. J Evol Econ 12(3):283–306
Kaufmann P, Stagl S, Franks DW (2009) Simulating the diffusion of organic farming practices in two New EU Member States. Ecol Econ 68(10):2580–2593
Kiesling E, Günther M, Stummer C, Wakolbinger LM (2012) Agent-based simulation of innovation diffusion: a review. CEJOR 20(2):183–230
Lancaster KJ (1966) A new approach to consumer theory. J Polit Econ 74(2):132–157
Le Page C (2017) Simulation multi-agent interactive: engager des populations locales dans la modélisation des socio-écosystèmes pour stimuler l'apprentissage social. Sorbonne Universités, HDR, Paris
López-Merino P, Rouchier J (2021) An agent-based model of (food) consumption: accounting for the intention-behaviour-gap on three dimensions of characteristics with limited knowledge. EasyChair Preprint, (5440). Number: 5440 Publisher: EasyChair
Manzo G, van de Rijt A (2020) Halting SARS-CoV-2 by targeting high-contact individuals. J Artif Soc Soc Simul 23(4):10
Müller B, Bohn F, Dreßler G, Groeneveld J, Klassert C, Martin R, Schlüter M, Schulze J, Weise H, Schwarz N (2013) Describing human decisions in agent-based models - ODD + D, an extension of the ODD protocol. Environ Model Softw 48:37–48
Pegoretti G, Rentocchini F, Vittucci Marzetti G (2012) An agent-based model of innovation diffusion: network structure and coexistence under different information regimes. J Econ Interac Coord 7(2):145–165
Ploll U, Petritz H, Stern T (2020) A social innovation perspective on dietary transitions: diffusion of vegetarianism and veganism in Austria. Environ Innov Soc Trans 36:164–176
R Core Team (2020) R: a Language and environment for statistical computing
Rosen S (1974) Hedonic prices and implicit markets: product differentiation in pure competition. J Polit Econ 82(1):34–55
Rouchier J, Tanimura E (2012) When overconfident agents slow down collective learning. SIMULATION 88(1):33–49
Rouchier J, Tanimura E (2015) Influence with over-confident agents
Thiriot S (2010) Small world is not enough: criteria for network choice and conclusiveness of simulations. J Artif Soc Soc Simul (submitted, never published)
Toral R, Tessone CJ (2007) Finite size effects in the dynamics of opinion formation. Commun Comput Phys 2(2):177–195
Watts DJ, Dodds PS (2009) Threshold models of social influence. In: The oxford handbook of analytical sociology. Oxford University Press, pp. 475–497
Watts DJ, Strogatz SH (1998) Collective dynamics of 'small-world' networks. Nature 393:4
Xu Q, Huet S, Poix C, Boisdon I, Deffuant G (2018) Why do farmers not convert to organic farming? Modeling conversion to organic farming as a major change. Nat Resour Model 31(3):e12171
Ye M, Liu J, Wang L, Anderson BDO, Cao M (2018) Consensus and disagreement of heterogeneous belief systems in influence networks. arXiv:1812.05138 [cs, math]
Young HP (2009) Innovation diffusion in heterogeneous populations: contagion, social influence, and social learning. Am Econ Rev 99(5):1899–1924
The authors wish to thank Victorien Barbet, Claire Lamine (INRAE), Thibaud Trolliet and four anonymous reviewers for helpful comments.
Pedro Lopez-Merino's work is funded by a grant given by ADEME and by the European Union's H2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101007755.
Lamsade/Université Paris Dauphine-PSL, Place du Maréchal de Lattre de Tassigny, 75775, Paris Cedex 16, France
Pedro Lopez-Merino & Juliette Rouchier
Écodéveloppement/INRAE, Domaine Saint-Paul, Site Agroparc, 84914, Avignon Cedex 9, France
Pedro Lopez-Merino
Agence de la transition écologique (ADEME), 20 Avenue du Grésillé, 49004, Angers Cedex 01, France
Juliette Rouchier
Pedro Lopez-Merino and Juliette Rouchier contributed equally to the conception of the model, theoretical framework, literature review and conclusions. Pedro Lopez-Merino worked on the coding and description of the model, statistical and graphical analysis, and putting together the first draft of the article.
Correspondence to Pedro Lopez-Merino.
Lopez-Merino, P., Rouchier, J. The diffusion of goods with multiple characteristics and price premiums: an agent-based model. Appl Netw Sci 7, 11 (2022). https://doi.org/10.1007/s41109-022-00447-1
Agent-based modelling
Innovation diffusion
Lancaster goods
Threshold models
Price premiums
Special Issue of the French Regional Conference on Complex Systems
Branch Current Analysis
Electrical Circuit Analysis > Methods of Analysis
Why do we need to apply Branch current analysis?
Before examining the details of the first important method of analysis, let us examine the network in Fig.no.1, to be sure that you understand the need for these special methods.
Fig.no.1: Demonstrating the need for an approach such as branch-current analysis.
Initially, it may appear that we can use the reduce and return approach to work our way back to the source E1 and calculate the source current $I_{s1}$. Unfortunately, however, the series elements $R_3$ and $E_2$ cannot be combined because they are different types of elements. A further examination of the network reveals that there are no two like elements that are in series or parallel. No combination of elements can be performed, and it is clear that another approach must be defined.
It should be noted that the network of Fig.no.1 can be solved if we convert each voltage source to a current source and then combine parallel current sources. However, if a specific quantity of the original network is required, it would require working back using the information determined from the source conversion.
Further, there will be complex networks for which source conversions will not permit a solution, so it is important to understand the methods to be described in this chapter. The first approach to be introduced is called branch-current analysis because we will define and solve for the currents of each branch of the network.
In this method, we assume directions of currents in a network, then write equations describing their relationships to each other through Kirchhoff's and Ohm's Laws.
At this point it is important that we are able to identify the branch currents of the network. In general,
a branch is a series connection of elements in the network that has the same current.
In Fig.no.1, the source E1 and the resistor R1 are in series and have the same current, so the two elements define a branch of the network. It is the same for the series combination of the source E2 and resistor R3. The branch with the resistor R2 has a current different from the other two and, therefore, defines a third branch. The result is three distinct branch currents in the network of Fig.no.1 that need to be determined.
Experience shows that the best way to introduce the branch-current method is to take the series of steps listed here.
Branch-Current Analysis Procedure
Assign a distinct current of arbitrary direction to each branch of the network.
Indicate the polarities for each resistor as determined by the assumed current direction.
Apply Kirchhoff's voltage law around each closed, independent loop of the network.
The best way to determine how many times Kirchhoff's voltage law has to be applied is to determine the number of "windows" in the network. For networks with three windows, as shown in Fig.no.2, three applications of Kirchhoff's voltage law are required, and so on.
Fig.no.2: Determining the number of independent closed loops.
Apply Kirchhoff's current law at the minimum number of nodes that will include all the branch currents of the network.
The minimum number is one less than the number of independent nodes of the network. For the purposes of this analysis, a node is a junction of two or more branches, where a branch is any combination of series elements. Fig.no.3 defines the number of applications of Kirchhoff's current law for each configuration in Fig.no.2.
Fig.no.3: Determining the number of applications of Kirchhoff's current law required.
Solve the resulting simultaneous linear equations for the assumed branch currents (a symbolic sketch of this step follows below).
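Before working the numbers by hand, note that the whole procedure maps directly onto a computer-algebra solve. The following Python/sympy sketch (not part of the original tutorial) sets up the two KVL equations and one KCL equation for a two-loop network with the topology of Fig.no.1, using symbolic sources and resistances:

```python
import sympy as sp

I1, I2, I3 = sp.symbols('I1 I2 I3')
E1, E2, R1, R2, R3 = sp.symbols('E1 E2 R1 R2 R3', positive=True)

equations = [
    sp.Eq(E1 - R1 * I1 - R3 * I3, 0),   # KVL around loop 1
    sp.Eq(R3 * I3 + R2 * I2 - E2, 0),   # KVL around loop 2
    sp.Eq(I1 + I2, I3),                 # KCL at the top node
]
solution = sp.solve(equations, [I1, I2, I3])
print(solution[I1])   # branch current I1 in terms of E1, E2 and the resistances
```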
Example 1: Apply the branch-current method to the network in Fig.no.4.
Fig.no.4
Step 1: Since there are three distinct branches (cda, cba, ca), three currents of arbitrary directions (I1, I2, I3) are chosen, as indicated in Fig.no.4. The current directions for I1 and I2 were chosen to match the "pressure" applied by sources E1 and E2, respectively. Since both I1 and I2 enter node a, I3 is leaving.
Step 2: Polarities for each resistor are drawn to agree with assumed current directions, as indicated in Fig.no.5.
Fig.no.5: Inserting the polarities across the resistive elements as defined by the chosen branch currents.
Step 3: Kirchhoff's voltage law is applied around each closed loop (1 and 2) in the clockwise direction:
$$ \text{loop 1:} \quad \sum {V} = +E_1 - V_{R1} - V_{R3} = 0$$
$$ \text{loop 2:} \quad \sum {V} = +V_{R3} + V_{R2} - E_2 = 0$$
and, substituting values,
$$ \text{loop 1:} \quad \sum {V} = +2\,\text{V} - (2\,Ω)I_1 - (4\,Ω)I_3 = 0$$
$$ \text{loop 2:} \quad \sum {V} = (4\,Ω)I_3 + (1\,Ω)I_2 - 6\,\text{V} = 0$$
Step 4: Applying Kirchhoff's current law at node a (in a two-node network, the law is applied at only one node) gives $$I_1 + I_2 = I_3$$ Step 5: There are three equations and three unknowns (units removed for clarity): $$ 2 - 2I_1 - 4I_3 = 0$$ $$4I_3 + 1I_2 - 6 = 0$$ $$I_1 + I_2 = I_3$$
Rewritten: $$2I_1 + 0 + 4I_3 = 2$$ $$ 0 + I_2 + 4I_3 = 6$$ $$I_1 + I_2 - I_3 = 0$$ Using third-order determinants, we have the matrix $$ A = \begin{bmatrix} 2 & 0 & 4 \\ 0 & 1 & 4 \\ 1 & 1 & -1 \end{bmatrix}$$ $$ Det(A) = D = \begin{vmatrix} 2 & 0 & 4 \\ 0 & 1 & 4 \\ 1 & 1 & -1 \end{vmatrix} = -14$$ $$ I_1 = \frac{\begin{vmatrix} 2 & 0 & 4 \\ 6 & 1 & 4 \\ 0 & 1 & -1 \end{vmatrix}}{D} = -1\,\text{A}$$ $$ I_2 = \frac{\begin{vmatrix} 2 & 2 & 4 \\ 0 & 6 & 4 \\ 1 & 0 & -1 \end{vmatrix}}{D} = 2\,\text{A}$$ $$ I_3 = \frac{\begin{vmatrix} 2 & 0 & 2 \\ 0 & 1 & 6 \\ 1 & 1 & 0 \end{vmatrix}}{D} = 1\,\text{A}$$
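As a quick numerical cross-check of Example 1 (not part of the original tutorial), the same system can be handed to numpy, which reproduces both the determinant and the branch currents:

```python
import numpy as np

# Coefficient matrix and constants from Example 1 (units removed)
A = np.array([[2.0, 0.0, 4.0],
              [0.0, 1.0, 4.0],
              [1.0, 1.0, -1.0]])
b = np.array([2.0, 6.0, 0.0])

print(np.linalg.det(A))            # -14.0, matching D above
I1, I2, I3 = np.linalg.solve(A, b)
print(I1, I2, I3)                  # -1.0, 2.0, 1.0 amperes
```

A negative \(I_1\) simply means the actual current flows opposite to the direction assumed in Step 1.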
On Furstenberg systems of aperiodic multiplicative functions of Matomäki, Radziwiłł, and Tao
Alexander Gomilko 1, Mariusz Lemańczyk 1, and Thierry de la Rue 2
1 Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Chopin street 12/18, 87-100 Toruń, Poland
2 Laboratoire de Mathématiques Raphaël Salem, Université de Rouen Normandie, CNRS, Avenue de l'Université, 76801 Saint-Étienne-du-Rouvray, France
Received: August 25, 2020; Revised: August 24, 2021; Published: November 2021
It is shown that in a class of counterexamples to Elliott's conjecture by Matomäki, Radziwiłł, and Tao [23] the Chowla conjecture holds along a subsequence.
Keywords: Aperiodic multiplicative function, Chowla conjecture, Archimedean characters, Furstenberg system of an arithmetic function, transformations with quasi-discrete spectrum, strongly stationary processes.
Mathematics Subject Classification: Primary: 11N37, 37A44; Secondary: 37A50.
Citation: Alexander Gomilko, Mariusz Lemańczyk, Thierry de la Rue. On Furstenberg systems of aperiodic multiplicative functions of Matomäki, Radziwiłł, and Tao. Journal of Modern Dynamics, 2021, 17: 529-555. doi: 10.3934/jmd.2021018
References
[1] H. El Abdalaoui, M. Lemańczyk and T. de la Rue, A dynamical point of view on the set of $\mathcal{B}$-free integers, Int. Math. Res. Not. IMRN, (2015), 7258–7286. doi: 10.1093/imrn/rnu164.
[2] L. M. Abramov, Metric automorphisms with quasi-discrete spectrum, Izv. Akad. Nauk SSSR Ser. Mat., 26 (1962), 513–530.
[3] American Institute of Mathematics, workshop, Sarnak's Conjecture, December 2018. Available from: http://aimpl.org/sarnakconjecture/3/.
[4] V. Bergelson, J. Kulaga-Przymus, M. Lemańczyk and F. K. Richter, Rationally almost periodic sequences, polynomial multiple recurrence and symbolic dynamics, Ergodic Theory Dynam. Systems, 39 (2019), 2332–2383. doi: 10.1017/etds.2017.130.
[5] S. Chowla, The Riemann Hypothesis and Hilbert's Tenth Problem, Mathematics and Its Applications 4, Gordon and Breach Science Publishers, New York-London-Paris, 1965.
[6] M. Denker, C. Grillenberger and K. Sigmund, Ergodic Theory on Compact Spaces, Lecture Notes in Mathematics, Vol. 527, Springer-Verlag, Berlin-New York, 1976.
[7] P. D. T. A. Elliott, Multiplicative functions $|g| \leq 1$ and their convolutions: An overview, Séminaire de Théorie des Nombres, Paris 1987–88, Progr. Math., 81 (1990), 63–75.
[8] P. D. T. A. Elliott, On the correlation of multiplicative functions, Notas Soc. Mat. Chile, 11 (1992), 1–11.
[9] P. D. T. A. Elliott, On the correlation of multiplicative and the sum of additive arithmetic functions, Mem. Amer. Math. Soc., 112 (1994). doi: 10.1090/memo/0538.
[10] L. Flaminio, Mixing k-fold independent processes of zero entropy, Proc. Amer. Math. Soc., 118 (1993), 1263–1269. doi: 10.2307/2160087.
[11] N. Frantzikinakis, Ergodicity of the Liouville system implies the Chowla conjecture, Discrete Anal., (2017), 19, 41 pp. doi: 10.19086/da.2733.
[12] N. Frantzikinakis, An averaged Chowla and Elliott conjecture along independent polynomials, Int. Math. Res. Not. IMRN, (2018), 3721–3743. doi: 10.1093/imrn/rnx002.
[13] N. Frantzikinakis and B. Host, Asymptotics for multilinear averages of multiplicative functions, Math. Proc. Cambridge Philos. Soc., 161 (2016), 87–101. doi: 10.1017/S0305004116000116.
[14] N. Frantzikinakis and B. Host, The logarithmic Sarnak conjecture for ergodic weights, Ann. of Math. (2), 187 (2018), 869–931. doi: 10.4007/annals.2018.187.3.6.
[15] N. Frantzikinakis and B. Host, Furstenberg systems of bounded multiplicative functions and applications, Int. Math. Res. Not. IMRN, (2021), 6077–6107. doi: 10.1093/imrn/rnz037.
[16] H. Furstenberg, Strict ergodicity and transformation of the torus, Amer. J. Math., 83 (1961), 573–601. doi: 10.2307/2372899.
[17] E. Glasner, Ergodic Theory via Joinings, Mathematical Surveys and Monographs, 101, American Mathematical Society, Providence, RI, 2003. doi: 10.1090/surv/101.
[18] F. Hahn and W. Parry, Minimal dynamical systems with quasi-discrete spectrum, J. London Math. Soc., 40 (1965), 309–323. doi: 10.1112/jlms/s1-40.1.309.
[19] E. Jenvey, Strong stationarity and de Finetti's theorem, J. Anal. Math., 73 (1997), 1–18. doi: 10.1007/BF02788136.
[20] O. Klurman, Correlations of multiplicative functions and applications, Compos. Math., 153 (2017), 1622–1657. doi: 10.1112/S0010437X17007163.
[21] L. Matthiesen, Linear correlations of multiplicative functions, Proc. Lond. Math. Soc., 121 (2020), 372–425. doi: 10.1112/plms.12309.
[22] K. Matomäki and M. Radziwiłł, Multiplicative functions in short intervals, Ann. of Math., 183 (2016), 1015–1056. doi: 10.4007/annals.2016.183.3.6.
[23] K. Matomäki, M. Radziwiłł and T. Tao, An averaged form of Chowla's conjecture, Algebra Number Theory, 9 (2015), 2167–2196. doi: 10.2140/ant.2015.9.2167.
[24] J. Rivat, Bases of analytic number theory, in Ergodic Theory and Dynamical Systems in their Interactions with Arithmetics and Combinatorics, 1–113, Lecture Notes in Math., 2213, Springer, Cham, 2018. doi: 10.1007/978-3-319-74908-2.
[25] P. Sarnak, Three Lectures on the Möbius Function, Randomness and Dynamics. Available from: http://publications.ias.edu/sarnak/.
[26] T. Tao, The logarithmically averaged Chowla and Elliott conjectures for two-point correlations, Forum Math. Pi, 4 (2016), 36 pp. doi: 10.1017/fmp.2016.6.
[27] T. Tao and J. Teräväinen, Odd order cases of the logarithmically averaged Chowla conjecture, J. Théor. Nombres Bordeaux, 30 (2018), 997–1015. doi: 10.5802/jtnb.1062.
[28] T. Tao and J. Teräväinen, The structure of logarithmically averaged correlations of multiplicative functions, with applications to the Chowla and Elliott conjectures, Duke Math. J., 168 (2019), 1977–2027. doi: 10.1215/00127094-2019-0002.
[29] T. Tao and J. Teräväinen, The structure of correlations of multiplicative functions at almost all scales, with applications to the Chowla and Elliott conjectures, Algebra Number Theory, 13 (2019), 2103–2150. doi: 10.2140/ant.2019.13.2103.
[30] P. Walters, An Introduction to Ergodic Theory, Graduate Texts in Mathematics, 79, Springer-Verlag, New York-Berlin, 1982.
Acoustic DOA estimation using space alternating sparse Bayesian learning
Zonglong Bai 1,2 (ORCID: 0000-0003-4465-5466), Liming Shi 2, Jesper Rindom Jensen 2, Jinwei Sun 1, and Mads Græsbøll Christensen 2
EURASIP Journal on Audio, Speech, and Music Processing, volume 2021, Article number: 14 (2021)
Estimating the direction-of-arrival (DOA) of multiple acoustic sources is one of the key technologies for humanoid robots and drones. However, it is a challenging problem due to a number of factors, including the platform size, which constrains the array aperture. To overcome this problem, a high-resolution DOA estimation algorithm based on sparse Bayesian learning is proposed in this paper. A group sparse prior based hierarchical Bayesian model is introduced to encourage spatial sparsity of acoustic sources. To obtain approximate posteriors of the hidden variables, a variational Bayesian approach is proposed. Moreover, to reduce the computational complexity, the space alternating approach is applied to push the variational Bayesian inference to the scalar level. Furthermore, an acoustic DOA estimator is proposed to jointly utilize the estimated source signals from all frequency bins. Compared to state-of-the-art approaches, the high-resolution performance of the proposed approach is demonstrated in experiments with both synthetic and real data. The experiments show that the proposed approach achieves lower root mean square error (RMSE), false alarm (FA), and miss-detection (MD) rates than other methods. Therefore, the proposed approach can be applied in applications such as humanoid robots and drones to improve the resolution of acoustic DOA estimation, especially when the platform constrains the array aperture and prevents traditional methods from resolving multiple sources.
Acoustic direction-of-arrival (DOA) estimation is a key technology in audio signal processing where it enables source localization for humanoid robots [1, 2], drones [3, 4], teleconferencing [5, 6], and hearing aids [7]. The goal of acoustic DOA estimation is to obtain the bearing angle of acoustic waves generated by sound sources using a microphone array. According to the Rayleigh criterion [8], the resolution of traditional DOA estimation approaches (e.g., the classical beamforming (CBF)-based approach and the steered-response power phase transform (SRP-PHAT) method [9]) is limited by the array aperture. Therefore, for some applications like humanoid robots and drones with a small platform size, the traditional approaches suffer in scenarios with multiple sources simultaneously present. Although methods such as the minimum variance distortionless response (MVDR) [8, 10], multiple signal classification (MUSIC) [11], and estimation of signal parameters via the rotational invariance technique (ESPRIT) [12] can offer a high-resolution performance, they are sensitive to calibration errors and errors in the assumed or estimated signal statistics [13, 14]. The robustness of the MVDR and MUSIC methods has been studied in the presence of array errors [15–17]. However, these studies rely on asymptotic properties, i.e., high signal-to-noise ratio (SNR) scenarios and large numbers of snapshots. Thus, these studies do not apply when only a small number of snapshots is available.
Sparse signal recovery-based DOA estimation methods have enjoyed much success in recent decades by exploiting the sparsity of sources in the spatial domain [18, 19]. These approaches are attractive because (1) they offer robustness against noise and limitations in data quality [18], (2) they have a good performance with a small number of snapshots [20], (3) they offer a higher resolution performance than MVDR and MUSIC methods [21, 22], and (4) they have the capability to resolve coherent sources [23]. In [18], the source localization problem was first formulated as an over-complete basis representation problem. To estimate the source amplitudes, an l1-norm based singular value decomposition (SVD) method was proposed. In [24], a complex least absolute shrinkage and selection operator (cLASSO) method was proposed for DOA estimation. In [25], a re-weighted regularized sparse recovery method was proposed for DOA estimation with unknown mutual coupling. All these methods are based on convex optimization theory, that is, the signals are recovered by solving a regularized optimization problem. They have a good performance with a properly chosen regularization factor, but the regularization factor needs to be determined empirically [26].
Because of its self-regularizing nature and its ability to quantify uncertainty, the sparse Bayesian learning (SBL)-based methods have attracted a lot of attention in sparse signal recovery and compressed sensing. The SBL principle was originally proposed in [27] for obtaining sparse solutions to regression and classification tasks. The SBL algorithm was applied to compressed sensing in [28], and an SBL-based Bayesian compressed sensing method using Laplace priors was proposed in [29]. More recently, a scalable mean-field SBL was proposed in [30]. In [31], an SBL-based DOA estimation method with predefined grids was proposed. In that paper, the DOA estimation is formulated as a sparse signal recovery and compressed sensing problem. To obtain refined estimates of the DOA, an off-grid DOA estimation method was proposed in [32]. In [21], a multi-snapshot SBL (MSBL) method was proposed for the multi-snapshot DOA estimation problem. The method was further applied to sound source localization and speech enhancement in [22]. To reduce the computational complexity of the wide-band approach, a computationally efficient DOA estimation method was proposed in [33] based on a sparse Bayesian framework. Additionally, some of our previous works are related to this paper. In [34], we proposed an SBL method with compressed data for sound source localization. The results show that the SBL method offers an excellent estimation accuracy for sound source localization even with low data quality. In [35], we proposed an SBL-based acoustic reflector localization method, which models the acoustic reflector localization problem as a sparse signal recovery problem. It shows that the SBL-based method offers more robust performance under basis mismatch compared to the state-of-the-art methods. However, a common drawback is that traditional SBL-based approaches are computationally complex due to the matrix inversion operation required for updating the covariance matrix of the source signals.
Computationally efficient SBL algorithms have also been proposed in various applications. For example, in [36], a basis adding/deleting scheme based on the marginal distribution was proposed. In [37], an inverse free SBL method was proposed by relaxing the evidence lower bound. In [38], a space alternating variational estimation (SAVE) algorithm was proposed to push the variational Bayesian inference (VBI) based SBL to a scalar level. The experimental results show that the SAVE approach has a faster convergence and a lower minimum mean square error (MMSE) performance than other fast SBL algorithms.
Based on this, we propose a space alternating SBL-based acoustic DOA estimation method for high-resolution estimation in this paper. A hierarchical Bayesian framework with group sparse priors is built to model multiple measurement vector (multi-snapshot) signals. As direct calculation of the posterior distribution is not possible, variational Bayesian inference is applied to infer all hidden variables in the proposed model. Furthermore, we extend the SAVE method [38] to the multiple measurement vector (MMV) case to reduce the computational complexity of the algorithm. The proposed algorithm can be applied to each frequency bin independently. To jointly utilize the recovered signals from all frequency bins, a complex Gaussian mixture model (CGMM) based expectation–maximization (EM) algorithm is proposed. We refer to the proposed method as the SAVE-MSBL-EM method.
The rest of this paper is organized as follows: In Section 2, we pose the narrow-band acoustic DOA estimation problem as a sparse signal recovery problem with an over-complete dictionary. Moreover, under the assumption that the DOAs of all sources do not change in a frame, a hierarchical Bayesian framework is built by exploiting the group sparsity of the MMV source signals. In Section 3, the SAVE-MSBL algorithm is proposed to infer all the hidden variables in the hierarchical Bayesian model for one frequency bin. Then, the CGMM-based EM algorithm is formulated to deal with the wide-band acoustic DOA estimation. In Section 4, we evaluate the performance of the proposed algorithm using both synthetic data and real data. Finally, we provide our conclusions in Section 5.
Note that vectors and matrices are represented using bold lowercase and uppercase letters, respectively. The superscripts (·)T and (·)H denote the transpose and conjugate transpose operators, respectively. Moreover, the L×L identity matrix is denoted as IL. The lp norm and the Frobenius norm are represented using ∥·∥p and ∥·∥F, respectively.
Signal model
The problem considered in this paper can be stated as follows. We consider a scenario in which P sound sources exist in the far-field of an arbitrary microphone array with M microphones that are used to record the signals. The center point of the microphone array is denoted as O. All the microphones are assumed to be omnidirectional and synchronized. As shown in [18, 22, 33], the DOA estimation problem can be formulated as a sparse signal recovery problem using an over-complete dictionary with basis vectors containing the DOA information. Let θ=[θ1,θ2,⋯,θK]T denote a set of candidate DOAs, where K denotes the total number of candidate DOAs. The signal model for the fth (1≤f≤F) frequency bin of one frame can be expressed as
$$ \boldsymbol{X}_{f}=\boldsymbol{A}_{f}\boldsymbol{S}_{f}+\boldsymbol{N}_{f}, $$
$$\begin{array}{*{20}l} \boldsymbol{X}_{f}&=\left[\boldsymbol{x}_{f,1},\boldsymbol{x}_{f,2},\cdots,\boldsymbol{x}_{f,L}\right],\\ \boldsymbol{x}_{f,l}&=\left[x_{f,l,1},x_{f,l,2},\cdots,x_{f,l,M}\right]^{\mathrm{T}},\\ \boldsymbol{A}_{f}&=\left[\boldsymbol{a}_{f,1},\boldsymbol{a}_{f,2},\cdots,\boldsymbol{a}_{f,K}\right],\\ \boldsymbol{a}_{f,k}&=\left[1,e^{-j\omega_{f}\tau_{k2}},\cdots,e^{-j\omega_{f}\tau_{kM}}\right]^{\mathrm{T}},\\ \boldsymbol{S}_{f}&=\left[\boldsymbol{s}_{f,1},\boldsymbol{s}_{f,2},\cdots,\boldsymbol{s}_{f,K}\right]^{\mathrm{T}},\\ \boldsymbol{s}_{f,k}&=\left[s_{f,k,1},s_{f,k,2},\cdots,s_{f,k,L}\right]^{\mathrm{T}}, \end{array} $$
F is the total number of frequency bins, Xf\(\in \mathbb {C}^{M\times L}\) is a collection of signal snapshots in the frequency-domain with xf,l,m being the signal at the fth frequency bin, lth snapshot, and mth microphone. We refer to the matrix Xf as one frame and xf,l\(\in \mathbb {C}^{M}\) as one snapshot, where l∈[1,2,⋯,L] is the index of the snapshots. The matrix Af\(\in \mathbb {C}^{M\times K}\) is the dictionary for the fth frequency bin with the basis vector af,k\(\in \mathbb {C}^{M}\) representing the array response for the direction θk,ωf is the fth angular frequency, and τkm is the relative time delay of source k between microphone m and the array center point O. Moreover, Sf\(\in \mathbb {C}^{K\times L}\) is a collection of the source signals with sf,k being the kth row. The noise matrix Nf\(\in \mathbb {C}^{M\times L}\) is defined similarly to Sf. Assuming that several sound sources are active in one frame, let θs (θs⊂θ) denote the true DOA set and ks(ks⊂[1,2,⋯,K]) denote the true index set. Based on the above definition and the signal model in (1), Sf is an all-zero matrix except for the elements of the rows within the ground truth index set ks. An example is given in Fig. 1, which uses a uniform linear array (ULA). In this example, the target space is sampled uniformly with an interval of 10∘. Two sources are located at −30∘ and 40∘, respectively. Thus, when the two sources are active simultaneously, only the elements in the two rows of Sf corresponding to the bearing angles −30∘ and 40∘ are non-zero.
The candidate DOAs in the target space. The red circles denote the microphones. The blue circles denote the positions of the sound sources
Based on (1), to obtain the DOA estimator, we can first recover the source signal, Sf, given the MMV, Xf, and the predefined dictionary, Af, using MMV sparse signal recovery methods, and then find the row index set of the non-zero elements, which indicates the acoustic DOAs. We assume that the sound sources are static or move slowly such that the direction of the sound sources do not change within the snapshots in a frame. We further assume that the number of active sound sources P is very small compared to the number of candidate DOAs K, i.e., P≪K. As a result, the sound source signal, Sf, is a signal matrix with group sparsity and the algorithms for sparse signal recovery can be applied [18, 19]. In this paper, we propose a space alternating MSBL method to improve the estimation performance by exploiting the group sparsity of Sf.
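To make the over-complete dictionary concrete, the sketch below builds the dictionary Af for a ULA at a single frequency bin. The array geometry, grid, and function names are illustrative assumptions rather than part of the original text:

```python
import numpy as np

def ula_dictionary(f_hz, n_mics=15, spacing_m=0.05,
                   grid_deg=np.arange(-60, 61, 3), c=343.0):
    """Steering-vector dictionary A_f (M x K) for a uniform linear array.

    Each column a_{f,k} collects the relative phase shifts
    exp(-j * w_f * tau_{km}) of a far-field plane wave from direction
    theta_k, with delays tau_{km} referred to the array center point O.
    """
    w_f = 2 * np.pi * f_hz
    mic_pos = spacing_m * (np.arange(n_mics) - (n_mics - 1) / 2)  # centered ULA
    theta = np.deg2rad(grid_deg)
    # Far-field plane-wave delays: tau_{km} = -x_m * sin(theta_k) / c
    tau = -np.outer(mic_pos, np.sin(theta)) / c                   # (M, K)
    return np.exp(-1j * w_f * tau)

A_f = ula_dictionary(1000.0)   # dictionary for the 1 kHz bin
print(A_f.shape)               # -> (15, 41)
```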
Probabilistic models
The SBL method is a widely used sparse signal reconstruction method. It is a probabilistic parameter estimation approach based on a hierarchical Bayesian framework. It learns the sparse signal from the over-complete observation model, resulting in a robust maximum likelihood estimation method [27, 39]. Like other Bayesian algorithms, SBL estimates model parameters by maximizing the posterior with a sparse prior. However, instead of adding a specialized model prior, SBL encourages sparsity by using a hierarchical framework that controls the scaling of Gaussian priors through updating individual parameters of each model [27, 40].
Sparse signal model
Following the SBL method proposed in [27], a hierarchical Bayesian framework is used to model the signal matrix, Sf. For the sake of brevity, we omit the dependency of random variables on the subscript, f, where appropriate. First, we assume that the candidate sources are independent to each other. Then, a multivariate complex Gaussian distribution is used to describe the kth candidate source signal sk with zero mean and a covariance matrix \(\lambda _{k}^{-1}\mathbf {I}_{L}\), i.e.,
$$ p(\boldsymbol{S}|\boldsymbol{\lambda})=\prod\limits_{k=1}^{K}\mathcal{CN}\left({\boldsymbol{s}}_{k}|\boldsymbol{0},\lambda_{k}^{-1}\mathbf{I}_{L}\right), $$
where λ=[λ1,λ2,⋯,λK]T is the hyper-parameter vector, λk is the hyper-parameter related to the amplitude of the kth candidate source signal sk, e.g., the amplitude of sk is 0 when λk→∞. Moreover, IL is the L×L identity matrix, \(\mathcal {CN}(\cdot)\) denotes the complex Gaussian distribution and λk is the precision of sk. Note that, for each candidate DOA (e.g., the kth DOA), an individual precision λk is used, but the precision λk is set to the same for the signal in different snapshots, thereby encouraging group sparsity [41].
The motivation is that the DOAs of the sound sources, as well as the set of active sources, are assumed to not change within a frame. For different candidate DOAs, different precisions are used to encourage the sparsity (see [18, 19] for further details).
In the second layer of the hierarchy, we assume that the precision variables are independent and follow gamma distributions, i.e.,
$$ p(\boldsymbol{\lambda}|\boldsymbol{\gamma})=\prod\limits_{k=1}^{K}\mathcal{G}\left(\lambda_{k}|1,\gamma_{k}\right), $$
where \(\mathcal {G}(a,b)\) denotes the gamma distribution with the shape parameter a and the rate parameter b. There are two reasons for this particular choice of prior distribution: (1) the gamma distribution is a conjugate prior for the variable λk in the complex Gaussian distribution, leading to a tractable posterior, and (2) the marginal distribution \(\int p(\boldsymbol {S}|\boldsymbol {\lambda }) p(\boldsymbol {\lambda }|\boldsymbol {\gamma }) d\boldsymbol {\lambda }\) is a Student's t distribution encouraging sparsity [27].
To facilitate the inference of γ, we further assume that the variables in γ=[γ1,⋯,γk,⋯,γK]T follow i.i.d. gamma distributions, i.e.,
$$ p(\boldsymbol{\gamma})=\prod\limits_{k=1}^{K}\mathcal{G}(\gamma_{k}|a,b), $$
where a and b are model parameters that will be treated as hyper-parameters.
Likelihood function and noise model
Under the assumption of circular symmetric complex Gaussian noises, the likelihood function can be written as
$$ p(\boldsymbol{X}|\boldsymbol{S},\rho)=\prod\limits_{l=1}^{L}\mathcal{CN}\left(\boldsymbol{x}_{l}|\boldsymbol{A}\boldsymbol{s}_{l},\rho^{-1}\mathbf{I}_{M}\right), $$
where ρ denotes the noise precision.
For tractability, we assume that ρ follows a gamma distribution as follows
$$ p(\rho)=\mathcal{G}(\rho|c,d), $$
where c and d are modeling parameters.
The hierarchical Bayesian model is built using (2), (3), (4), (5) and (6), and the graphical model is shown in Fig. 2.
Probabilistic graphical model
Bayesian inference using space alternating variational estimation
Variational Bayesian inference
Let Θ={S,λ,γ,ρ} denote the set of hidden variables. Based on the graphical model shown in Fig. 2, the joint pdf can be written as
$$\begin{array}{*{20}l} p(\boldsymbol{X},\boldsymbol{\Theta})=&p(\boldsymbol{X}|\boldsymbol{S},\rho)p(\boldsymbol{S}|\boldsymbol{\lambda})p(\boldsymbol{\lambda}|\boldsymbol{\gamma})p(\boldsymbol{\gamma})p(\rho). \end{array} $$
A closed-form expression of the full posterior p(Θ|X) requires computation of the marginal pdf p(X), which is intractable. In this paper, VBI is therefore applied to obtain an approximation of the true posterior using a factorized distribution [42, 43]
$$ q(\boldsymbol{\Theta})=q(\rho)\left(\prod_{k=1}^{K}q({\boldsymbol{s}}_{k})q({\lambda}_{k})q({\gamma}_{k})\right), $$
where q(Θ) is an approximation of the full posterior p(Θ|X). For notational simplicity, the dependency of the approximated posterior on the observed signal X is omitted. Note that, instead of pursuing the full posterior q(S) of the source signals, a factorial form of the posterior \(\prod _{k=1}^{K}q({\boldsymbol {s}}_{k})\) is used to reduce the computational complexity. This is an extension of the SAVE method proposed for the single measurement vector (SMV) scenario [38]. When L=1, the proposed approximation model (8) reduces to the model in SAVE. We also assume that the approximate posteriors have the same functional forms as the priors for all the hidden variables. For example, both the prior p(sk|λk) and posterior q(sk) are complex Gaussian. The VBI approach minimizes the Kullback–Leibler (KL) divergence between p(Θ|X) and q(Θ) by maximizing the following variational objective:
$$ \mathcal{L}=\mathrm{E}_{q(\boldsymbol{\Theta})}\left[\ln p(\boldsymbol{X},\boldsymbol{\Theta})\right]-\mathrm{E}_{q(\boldsymbol{\Theta})}\left[\ln q(\boldsymbol{\Theta})\right], $$
where Eq[·] denotes the expectation operator over the distribution q, i.e., \(\mathrm {E}_{q(x)}[p(x)]=\int q(x)p(x)\mathrm {d}x\).
Since the prior and likelihood of all nodes of the model shown in Fig. 2 fall within the conjugate exponential family, the VBI can be written as [42, 43]
$$ \ln q(\boldsymbol{\Theta}_{i})=\mathrm{E}_{q(\boldsymbol{\Theta}_{\bar{i}})}\left[\ln p(\boldsymbol{X},\boldsymbol{\Theta})\right]+\mathrm{C}, $$
where C is a constant and Θi denotes one of the variables in the factorized distribution (8), such as sk. The notation \(\boldsymbol {\Theta }_{\bar {i}}\) denotes the hidden variable set Θ excluding Θi.
The logarithm of the joint distribution
As shown in (9), the logarithmic form of the joint distribution is required for VBI. Substituting (2), (3), (4), (5), and (6) into (7), we have
$$\begin{array}{*{20}l} &\ln p(\boldsymbol{X},\boldsymbol{\Theta})= ML\ln\rho-\rho\|\boldsymbol{X}-\boldsymbol{A}\boldsymbol{S}\|_{F}^{2}+\\ &L\sum\limits_{k=1}^{K}\ln \lambda_{k}- \sum\limits_{k=1}^{K}\lambda_{k}{\boldsymbol{s}}_{k}^{\mathrm{H}}{\boldsymbol{s}}_{k}+\sum\limits_{k=1}^{K}\ln \gamma_{k}-\\ &\sum\limits_{k=1}^{K}\gamma_{k}\lambda_{k}+(a-1)\sum\limits_{k=1}^{K}\ln\gamma_{k}-\\ &b\sum\limits_{k=1}^{K}\gamma_{k}+(c-1)\ln\rho-d\rho+\mathrm{C}, \end{array} $$
where ∥·∥F denotes the Frobenius norm. Next, we present the approximate posterior by substituting (10) into (9).
Update of s k
The approximate posterior of sk can be written as
$$\begin{array}{*{20}l} \!\!\!\!\!\!\! \!\! \ln q({{\boldsymbol{s}}_{k}})=&-\text{tr}\left[{\boldsymbol{s}}_{k}^{\mathrm{H}}\left(\left<\rho\right>\boldsymbol{a}_{k}^{\mathrm{H}}\boldsymbol{a}_{k}+\left<\lambda_{k}\right>\right){\boldsymbol{s}}_{k}-\right.\\ &\left.\left<\rho\right>{\boldsymbol{s}}_{k}^{\mathrm{*}}\boldsymbol{a}_{k}^{\mathrm{H}}\left(\boldsymbol{X}-\boldsymbol{A}_{{\bar{k}}}\left<\boldsymbol{S}_{\bar{k}}\right>\right)-\right.\\ &\left.\left<\rho\right>\left(\boldsymbol{X}-\boldsymbol{A}_{{\bar{k}}}\left<\boldsymbol{S}_{\bar{k}}\right>\right)^{\mathrm{H}}\boldsymbol{a}_{k}{\boldsymbol{s}}_{k}^{\mathrm{T}}\right]+\mathrm{C}, \end{array} $$
$$\begin{array}{*{20}l} \left<\boldsymbol{S}_{\bar{k}}\right>=&\mathrm{E}_{q(\boldsymbol{S}_{\bar{k}})}\left[\boldsymbol{S}_{\bar{k}}\right] \\ =&\left[\boldsymbol{\mu}_{1},\cdots,\boldsymbol{\mu}_{k-1},\boldsymbol{\mu}_{k+1}\cdots,\boldsymbol{\mu}_{K}\right]^{T},\\ \left<\rho\right>=&\mathrm{E}_{q(\rho)}[\rho],\; \left<\lambda_{k}\right>=\mathrm{E}_{q(\lambda_{k})}[\lambda_{k}], \end{array} $$
and <·> is the shorthand of the expectation operator Eq[·]. Moreover, tr[·] denotes the trace operator, ak denotes the kth column of \(\boldsymbol {A}, \boldsymbol {A}_{{\bar {k}}}\) is the matrix A with the kth column ak being removed, and \(\boldsymbol {S}_{\bar {k}}\) is the matrix S with the kth row \({\boldsymbol {s}}_{k}^{\mathrm {T}}\) being removed. From (11), it can be shown that \(q({\boldsymbol {s}}_{k})=\mathcal {CN}\left ({\boldsymbol {s}}_{k}|\boldsymbol {\mu }_{k},\sigma ^{2}_{k}\mathbf {I}\right)\), where
$$\begin{array}{*{20}l} \!\!\!\!\!\!\! \!\! \sigma^{2}_{k}=&\left(M\left<\rho\right>+\left<\lambda_{k}\right>\right)^{-1}, \end{array} $$
$$\begin{array}{*{20}l} \!\!\!\!\!\!\! \!\! \boldsymbol{\mu}_{k}=&\sigma^{2}_{k}\left<\rho\right>\left(\boldsymbol{X}-\boldsymbol{A}_{\bar{k}}\left<\boldsymbol{S}_{\bar{k}}\right>\right)^{\mathrm{T}}\boldsymbol{a}_{k}^{\mathrm{*}}, \end{array} $$
where the property \(\boldsymbol {a}_{k}^{\mathrm {H}}\boldsymbol {a}_{k}=M\) is used. Note that the mean {μk} is updated based on the space alternating approach [38, 44], where the newest estimates are always used.
Update of λ,γ and ρ
The approximate posteriors for λ,γ and ρ can be derived in a similar way as sk, and we only give the results here.
Update q(λk): \(q(\lambda _{k})=\mathcal {G}({\alpha _{\lambda _{k}}},\beta _{\lambda _{k}})\), where
$$\begin{array}{*{20}l} \alpha_{\lambda_{k}}&=1+L,\; \beta_{\lambda_{k}} = \boldsymbol{\mu}_{k}^{H} \boldsymbol{\mu}_{k}+L\sigma_{k}^{2}+\left<\gamma_{k}\right>, \\ \left<\lambda_{k}\right>&=\frac{\alpha_{\lambda_{k}}}{\beta_{\lambda_{k}}}. \end{array} $$
Update q(γk): \(q(\gamma _{k})=\mathcal {G}\left ({\alpha _{\gamma _{k}}},\beta _{\gamma _{k}}\right)\), where
$$\begin{array}{*{20}l} \alpha_{\gamma_{k}}=1+a, \ \beta_{\gamma_{k}}= \left<\lambda_{k}\right>+b, \ \left<\gamma_{k}\right>=\frac{\alpha_{\gamma_{k}}}{\beta_{\gamma_{k}}}. \end{array} $$
Update q(ρ): \(q(\rho)=\mathcal {G}({\alpha _{\rho }},\beta _{\rho })\), where
$$\begin{array}{*{20}l} \alpha_{\rho}&=ML+c,\\ \beta_{\rho}&= \|\boldsymbol{X}-\mathbf{A}\left<\boldsymbol{S}\right>\|_{F}^{2}+L\text{tr}\left[\boldsymbol{\Sigma}\boldsymbol{A}^{\mathrm{H}}\boldsymbol{A}\right]+d, \\ &= \|\boldsymbol{X}-\mathbf{A}\left<\boldsymbol{S}\right>\|_{F}^{2}+ML\sum_{k=1}^{K} \sigma_{k}^{2}+d, \\ \left<\rho\right>&=\frac{\alpha_{\rho}}{\beta_{\rho}}, \end{array} $$
where \(\boldsymbol {\Sigma }=\text {diag}[\sigma ^{2}_{1},\cdots,\sigma ^{2}_{2},\cdots,\sigma ^{2}_{K}]\) and diag[·] denotes a diagonal matrix.
We refer to the proposed algorithm as SAVE-MSBL. By using the space alternating approach, the computationally complex matrix inversion operation of the traditional MSBL [19] can be avoided. Moreover, instead of using the above formulas directly, we can further reduce the computational complexity by introducing a temporary matrix \(\widehat {\boldsymbol {X}}\), which can be seen as an approximation of X. By removing or adding the rank-one terms \(\boldsymbol {a}_{k}{\boldsymbol {\mu }}_{k}^{T}\), the two terms \(\boldsymbol {A}_{\bar {k}}\left <\boldsymbol {S}_{\bar {k}}\right >\) and A<S> in (13) and (16) can be updated efficiently, resulting in a computationally efficient implementation. The pseudocode for the proposed method is shown in Algorithm 1. Note that the proposed SAVE-MSBL algorithm can be applied to each frequency bin independently.
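Since the pseudocode of Algorithm 1 is not reproduced here, the following is a minimal sketch of the per-frequency SAVE-MSBL iteration implementing the updates (12)–(16); the initialization values, iteration count, and variable names are our own assumptions:

```python
import numpy as np

def save_msbl(X, A, a=1e-3, b=1e-3, c=1e-3, d=1e-3, n_iter=100):
    """Space alternating variational SBL for one frequency bin.

    X : (M, L) observed snapshots; A : (M, K) dictionary.
    Returns the posterior means mu (K, L) and variances sigma2 (K,).
    """
    M, L = X.shape
    K = A.shape[1]
    mu = np.zeros((K, L), dtype=complex)
    sigma2 = np.ones(K)
    lam = np.ones(K)        # <lambda_k>, candidate-source precisions
    gam = np.ones(K)        # <gamma_k>, hyper-priors
    rho = 1.0               # <rho>, noise precision
    X_hat = A @ mu          # running approximation A<S>

    for _ in range(n_iter):
        for k in range(K):
            a_k = A[:, k]
            X_hat -= np.outer(a_k, mu[k])        # remove own contribution
            # Eqs. (12)-(13): scalar variance and mean updates
            sigma2[k] = 1.0 / (M * rho + lam[k])
            mu[k] = sigma2[k] * rho * (a_k.conj() @ (X - X_hat))
            X_hat += np.outer(a_k, mu[k])        # add updated contribution
            # Eq. (14): precision of the k-th candidate source
            lam[k] = (1 + L) / (np.sum(np.abs(mu[k])**2)
                                + L * sigma2[k] + gam[k])
            # Eq. (15): hyper-prior update
            gam[k] = (1 + a) / (lam[k] + b)
        # Eq. (16): noise precision update
        beta_rho = np.sum(np.abs(X - X_hat)**2) + M * L * np.sum(sigma2) + d
        rho = (M * L + c) / beta_rho
    return mu, sigma2
```

Note how the running product A<S> is maintained incrementally through rank-one updates, which is exactly what removes the matrix inversion of the traditional MSBL.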
CGMM-based acoustic DOA estimator
Up to this point, the posteriors of the source signals (i.e., {q(sf,k)}) from all the frequency bins are obtained independently. The source signals sf,k can be estimated using the MMSE estimator, i.e.,
$$\begin{array}{*{20}l} \widehat{\boldsymbol{s}}_{f,k}=\boldsymbol{\mu}_{f,k}, \end{array} $$
where \(\widehat {\boldsymbol {s}}_{f,k}\) denotes the estimate of the source signal. In this section, we propose an acoustic DOA estimator, jointly utilizing the estimated source signals from all the frequency bins, based on the CGMM model. By fitting the observations and estimates of the source signals to the CGMM model, the weighting parameters can be obtained using the EM algorithm. The weighting parameter of each mixture component in the CGMM can be seen as the probability that there is an active acoustic source at the corresponding candidate location. With a known number of sources, the DOA estimates for all the sources can be obtained using peak-picking on the weighting parameters.
Inspired by the Gaussian mixture model [45, 46] and the probabilistic steered-response power (SRP) model [47, 48], we assume that xf,l follows a CGMM distribution with estimated source signals sf,k, i.e.,
$$ p(\boldsymbol{x}_{f,l};\boldsymbol{w})=\sum\limits_{k=1}^{K} w_{k} \mathcal{CN}\left(\boldsymbol{x}_{f,l}|\boldsymbol{a}_{f,k}\mu_{f,k,l},\eta\right), $$
where η is an empirically chosen small value, and wk≥0 is the weighting parameter for the kth complex Gaussian component with the constraint \(\sum _{k=1}^{K} w_{k}=1\). Then, the distribution of the observation set for all frequency bins can be expressed as
$$ p(\boldsymbol{Y};\boldsymbol{w})=\prod\limits_{f=1}^{F} \sum\limits_{k=1}^{K} w_{k}\left[\prod\limits_{l=1}^{L} \mathcal{CN}\left(\boldsymbol{x}_{f,l}|\boldsymbol{a}_{k,f}\mu_{f,k,l},\eta\right)\right], $$
where \(\boldsymbol {Y}=\{\boldsymbol {X}_{f}\}_{f=1}^{F}\) is the observation set for all frequency bins. Once (18) is maximized, each weight wk represents the probability of an acoustic source being active in the direction θk. However, it is intractable to maximize the function in (18) due to its high dimensionality. Therefore, an EM procedure is applied to deal with this maximization problem. Following [42], we introduce a set of hidden variables \(\boldsymbol {z}=\{\boldsymbol {r}_{f}\}_{f=1}^{F}\). The rf contains binary random variables with only one particular element rf,k being 1 while the others are all zeros. The variable rf,k can be seen as an indicator associated with the acoustic source from the direction θk at the fth frequency bin. Assuming p(rf,k=1)=wk, we can write the joint distribution as follows:
$$ p(\boldsymbol{z};\boldsymbol{w})=\prod\limits_{f=1}^{F}\prod\limits_{k=1}^{K} w_{k}^{r_{f,k}}. $$
The conditional distribution of the observation set Y given z is
$$ p(\boldsymbol{Y}|\boldsymbol{z})=\prod\limits_{f=1}^{F}\prod\limits_{k=1}^{K} \left[\prod\limits_{l=1}^{L}\mathcal{CN}(\boldsymbol{x}_{f,l}|\boldsymbol{a}_{k,f}\mu_{f,k,l},\eta)\right]^{r_{f,k}}. $$
Then, the joint distribution can be derived from (19) and (20) using Bayes' rule, i.e.,
$$\begin{array}{*{20}l} &p(\boldsymbol{Y},\boldsymbol{z};\boldsymbol{w})=p(\boldsymbol{Y}|\boldsymbol{z})p(\boldsymbol{z};\boldsymbol{w})\\ &=\prod\limits_{f=1}^{F}\prod\limits_{k=1}^{K}\left[w_{k}\prod\limits_{l=1}^{L}\mathcal{CN}(\boldsymbol{x}_{f,l}|\boldsymbol{a}_{k,f}\mu_{f,k,l},\eta)\right]^{r_{f,k}}. \end{array} $$
E-step
In the E-step, we use the current parameter \(\hat {\boldsymbol {w}}^{\text {old}}\) to update the posterior mean of the hidden variable denoted as \(\mathrm {E}[r_{f,k}|\boldsymbol {Y};\hat {\boldsymbol {w}}^{\text {old}}]\). From (21), the E-step can be written as
$$\begin{array}{*{20}l} &Q(\boldsymbol{w};\hat{\boldsymbol{w}}^{\text{old}})=\mathrm{E}_{\boldsymbol{z}}\left[\ln p(\boldsymbol{Y},\boldsymbol{z};\boldsymbol{w})|\boldsymbol{Y};\hat{\boldsymbol{w}}^{\text{old}}\right]\\ &=\sum\limits_{f=1}^{F}\sum\limits_{k=1}^{K}\mathrm{E}\left[r_{f,k}|\boldsymbol{Y};\hat{\boldsymbol{w}}^{\text{old}}\right]\left[\ln w_{k}+\phi_{f,k}\right], \end{array} $$
$$\begin{array}{*{20}l} \phi_{f,k}&=\sum\limits_{l=1}^{L}\ln\mathcal{CN}\left(\boldsymbol{x}_{f,l}|\boldsymbol{a}_{f,k}\mu_{f,k,l},\eta\right)\\ &=\sum\limits_{l=1}^{L}\left[-M\ln\eta-\frac{1}{\eta}\left\|\boldsymbol{x}_{f,l}-\boldsymbol{a}_{f,k}\mu_{f,k,l}\right\|^{2}\right], \end{array} $$
where μf,k,l is obtained using Algorithm 1.
Therefore, the expected value \(\mathrm {E}[r_{f,k}|\boldsymbol {Y};\hat {\boldsymbol {w}}^{\text {old}}]\) is given by [42, 49]
$$ \mathrm{E}\left[r_{f,k}|\boldsymbol{Y};\hat{\boldsymbol{w}}^{\text{old}}\right]=\frac{\hat{w}_{k}^{\text{old}}{\exp{\left(\phi_{f,k}\right)}}}{{\sum_{\tilde{k}=1}^{K}\hat{w}_{\tilde{k}}^{\text{old}}\exp{\left(\phi_{f,\tilde{k}}\right)}}}=\left< r_{f,k}\right>. $$
M-step
In the M-step, the required parameter w is updated through a constrained maximization of (22), i.e.,
$$\begin{array}{*{20}l} \hat{\boldsymbol{w}}^{\text{new}}=&\arg \max\limits_{\boldsymbol{w}} Q\left(\boldsymbol{w};\boldsymbol{w}^{\text{old}}\right) \\ &s.t. \sum\limits_{k=1}^{K} w_{k}=1; 0< w_{k}<1. \end{array} $$
Therefore, the M-step can be stated as
$$ \hat{w}_{k}^{\text{new}}=\frac{\sum_{f=1}^{F}\left< r_{f,k}\right>}{\sum_{f=1}^{F}\sum_{\tilde{k}=1}^{K}\left< r_{f,\tilde{k}}\right>}=\frac{1}{F}\sum_{f=1}^{F}\left< r_{f,k}\right>. $$
Given an initial value for the parameter w, the EM algorithm iterates between the E-step in (23) and the M-step in (25) until convergence. The EM algorithm is summarized in Algorithm 2.
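A compact sketch of the EM iteration of Algorithm 2 follows; the log-domain normalization in the E-step is our own numerical-stability choice, and the interfaces and variable names are illustrative assumptions:

```python
import numpy as np

def cgmm_em_weights(X_all, A_all, mu_all, eta=0.1, err0=1e-3, max_iter=100):
    """EM update of the CGMM weights w via Eqs. (23) and (25).

    X_all  : (F, M, L) observations for all frequency bins,
    A_all  : (F, M, K) dictionaries,
    mu_all : (F, K, L) posterior means from SAVE-MSBL.
    Returns w (K,), the per-direction source-activity probabilities.
    """
    F, M, L = X_all.shape
    K = A_all.shape[2]

    # phi[f, k] = sum_l log CN(x_{f,l} | a_{f,k} mu_{f,k,l}, eta),
    # up to additive constants that cancel in the normalization
    phi = np.empty((F, K))
    for f in range(F):
        for k in range(K):
            resid = X_all[f] - np.outer(A_all[f, :, k], mu_all[f, k])  # (M, L)
            phi[f, k] = -np.sum(np.abs(resid)**2) / eta

    w = np.full(K, 1.0 / K)
    for _ in range(max_iter):
        # E-step, Eq. (23), computed in the log domain for stability
        log_r = np.log(w)[None, :] + phi
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step, Eq. (25)
        w_new = r.mean(axis=0)
        if np.linalg.norm(w_new - w) < err0:
            return w_new
        w = w_new
    return w
```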
In this section, we first investigate the computational complexity of the proposed SAVE-MSBL-EM method. Then, we test the performance of our proposed SAVE-MSBL-EM algorithm using both synthetic data and real data from the LOCATA dataset. The performance of the different methods is tested in three different scenarios. In the first scenario, we test the recovery accuracy and the resolution performance using narrow-band sources and a ULA. In the second, we consider a more complicated setup with closely spaced sources in a virtual room. Last, the proposed method is tested using real data.
Computational complexity analysis
We first analyze the computational complexity of the proposed SAVE-MSBL algorithm by counting the number of mathematical multiplication/division operations in each iteration. As can be seen from Algorithm 1, in each "for" loop, the complexity of the proposed algorithm mainly depends on the update of the temporary matrix \(\bar {\boldsymbol {X}}\) and μk, which is \(\mathcal {O}(ML)\). The computational complexity of updating <ρ> is \(\mathcal {O}(ML)\). Therefore, the computational complexity of the proposed algorithm for each iteration is \(\mathcal {O}(KML)\). If we consider the variational Bayesian inference without the space alternating approach, the computational complexity is \(\mathcal {O}(M^{3}L^{3})\). Thus, the space alternating approach leads to a significant reduction on the computational complexity. Moreover, the computational complexity of MSBL proposed in [19] is \(\mathcal {O}(KM^{2})\). Therefore, the proposed method is faster than the MSBL method when L<M. Since the SVD approach can be utilized for data reduction [18], the condition L<M is met in most cases. For the EM algorithm, the computational complexity is \(\mathcal {O}(KML)\) for one frequency bin. Thus, the computational complexity of the proposed SAVE-MSBL-EM method is \(\mathcal {O}(KML)\) for each frequency bin.
We further measure the computational complexity using the "cputime" function provided by MATLAB. The computer is equipped with an i7-8700 processor. The clock rate is 3.19 GHz. The operation system is Windows 10. The software is MATLAB 2019a. We test the computational complexity for one frequency bin. The number of iterations is fixed to 100, the number of candidate DOAs is set to 41, the number of microphones is set to 15, the number of snapshots is set to 10, and the number of Monte-Carlo experiments is set to 1000. For a single frequency bin, the time consumption of the proposed SAVE-MSBL-EM method and the MSBL proposed in [19] are 0.08 and 0.25 s, respectively, i.e., the proposed method is faster than the MSBL method by a factor of ∼3. Note that, in practice, the time consumption for the acoustic DOA estimation algorithm is proportional to the number of frequency bins.
The methods used for comparison in this section are summarized as follows: CBF refers to the classical beamforming based method, which is widely used in practice; SRP-PHAT is another widely used method for sound source localization, especially in reverberant environments [9]; and MVDR is a method offering high-resolution performance [10]. Note that the implementation of the MVDR method is based on the observed signal statistics. Moreover, MSBL refers to the multiple snapshots SBL method for narrowband signals proposed in [19]. MSBL-EM is an acoustic DOA estimator which combines the MSBL algorithm and the proposed EM algorithm. Furthermore, SAVE-MSBL is the proposed method for narrow-band signals and SAVE-MSBL-EM is the proposed method for acoustic DOA estimation. For the MSBL method, the threshold for stopping the iteration errmax is set to 1e−10. For the proposed SAVE-MSBL-EM method, the modeling parameters a, b, c, and d are all set to 1e−3, the parameter η is set to 0.1, the threshold for the SAVE-MSBL algorithm errmax is set to 1e−10, and the threshold for the EM algorithm err0 is set to 1e−3.
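For reference, the CBF and MVDR spatial spectra used for comparison can be computed from the sample covariance matrix of one frame; a minimal sketch (the diagonal loading is our own addition for numerical robustness, not part of the paper):

```python
import numpy as np

def spatial_spectra(X, A, loading=1e-3):
    """CBF and MVDR spatial spectra from the sample covariance of one frame.

    X : (M, L) snapshots; A : (M, K) dictionary of steering vectors.
    Returns (p_cbf, p_mvdr), each of length K; peaks indicate DOAs.
    """
    M, L = X.shape
    R = (X @ X.conj().T) / L + loading * np.eye(M)   # sample covariance
    R_inv = np.linalg.inv(R)
    # CBF:  P(theta_k) = a_k^H R a_k
    p_cbf = np.real(np.einsum('mk,mn,nk->k', A.conj(), R, A))
    # MVDR: P(theta_k) = 1 / (a_k^H R^{-1} a_k)
    p_mvdr = 1.0 / np.real(np.einsum('mk,mn,nk->k', A.conj(), R_inv, A))
    return p_cbf, p_mvdr
```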
Recovery performance analysis using a ULA
In this section, we test the recovery performance of the proposed SAVE-MSBL algorithm using four acoustic sources comprising pure sinusoidal signals. Two assumptions are made in this simulation: (1) all the acoustic sources are located in the far-field of the microphone array and (2) the powers of all the acoustic sources are equal. The frequencies of all the sources are set to 1 kHz. For each source, the initial phase is generated randomly. Assume that a ULA with 15 omni-directional microphones is used to receive the signals. The distance between adjacent microphones is set to 0.05 m in this simulation. The microphone array data are generated by assigning different time delays according to the true bearing angles of the sources. White Gaussian noise is added to the clean array data and the SNR is set to 10 dB. The sampling frequency is set to 16 kHz. The time-domain data are converted to the frequency-domain using the short-time Fourier transform (STFT). The temporal length of the snapshot is set to 1024. The length of the increment for the snapshots is set to 256, i.e., the overlap is 75%. The length of the FFT is set to 2048. The number of snapshots is set to 10. As the frequencies of all sources are 1 kHz, only the frequency bin whose center frequency is 1 kHz is used for the estimation. We define the fan-shaped horizontal plane in the range from −60∘ to 60∘ as the target space (see Fig. 1). The target space is uniformly separated with a grid interval of 3∘, i.e., the number of grid points is 41 and the array response matrix (dictionary) is pre-computed according to these grid points. Moreover, the bearing angles of the four pure sinusoidal sources are −33∘,−27∘,−12∘, and −3∘, respectively. Figure 3 shows the estimation results of the CBF, MVDR, SRP-PHAT, and SAVE-MSBL methods.
The resolution performance for different methods
It can be seen that the CBF and SRP-PHAT methods fail to separate the two sources located at −33∘ and −27∘, but the MVDR and proposed SAVE-MSBL methods still work in this case.
We now proceed to test the performance of the proposed method with respect to the number of snapshots. The number of Monte-Carlo runs is 1000. The recovery accuracy is measured by the root-mean-square-error (RMSE), defined as
$$ e_{rec}=\sqrt{\frac{1}{N_{MC}L}\sum\limits_{i=1}^{N_{MC}}\frac{\|\hat{\boldsymbol{S}}-\boldsymbol{S}\|_{F}^{2}}{\|\boldsymbol{S}\|_{F}^{2}}}, $$
where \(\hat {\boldsymbol {S}}\) is the recovered signal, S is the true signal, ∥·∥F denotes the Frobenius norm, L is the number of snapshots, and NMC is the number of Monte-Carlo experiments. We compare the proposed method with the CBF method in [6] and one of the widely used MSBL algorithms proposed in [19]. The results of the RMSEs of the recovered signals are illustrated in Fig. 4. It can be seen that the recovery performance of all the methods improves dramatically as the number of snapshots increases in the range from 1 to 3. Moreover, the simulation result shows that the proposed SAVE-MSBL method achieves better recovery accuracy compared with the CBF and MSBL methods.
Recovery accuracy with different numbers of snapshots
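The RMSE metric in (26) can be computed as follows; the function interface is an illustrative assumption:

```python
import numpy as np

def recovery_rmse(S_hat_list, S_list):
    """RMSE of Eq. (26) over a set of Monte-Carlo runs.

    S_hat_list / S_list : sequences of (K, L) recovered / true source matrices.
    """
    L = S_list[0].shape[1]
    ratios = [np.sum(np.abs(Sh - S)**2) / np.sum(np.abs(S)**2)
              for Sh, S in zip(S_hat_list, S_list)]
    return np.sqrt(np.mean(ratios) / L)
```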
Simulation with virtual room
In this part, we test the resolution performance of the proposed method with respect to different intervals of bearing angles between two sources. The synthetic array data are generated using the "signal-generator" with a virtual room. Note that the "signal-generator" is designed for the moving source scenario. The room setup is summarized in Table 1.
Table 1 Parameter setup
In this virtual room, a uniform circular array (UCA) with 32 omni-directional microphones is used to record the signals. The center position of the UCA is (5,3.5,3) m. The radius of the UCA is set to 0.25 m. Two acoustic sources are used. Both of them play uninterrupted harmonic signals. The fundamental frequencies of the two sources are 300 Hz and 350 Hz, respectively. The spectrograms of the two sound sources are shown in Fig. 5.
The spectrograms of the two sources. a The spectrogram of source 1. b The spectrogram of source 2
We assume the sound sources are moving on a horizontal plane in which the microphone array is located. The horizontal plane is separated into 73 grid points from 0∘ to 360∘ with an angular interval of 5∘, where 0∘ is in the positive direction of the x-axis and 90∘ is in the positive direction of the y-axis. For simulation 1, the trajectories of the two sources are illustrated in Fig. 6. The first source moves along the negative direction of the y-axis while the second source moves along the negative direction of the x-axis. The original positions of the first and second sound sources are (3.5,5,3) m and (6,5.5,3) m. The end positions are (3.5,3,3) m and (4,5.5,3) m, respectively. The true DOA trajectories of the two sources with respect to the microphone array are shown in Fig. 7(a).
Illustration of the first virtual room setup with two moving sources for the simulation 1
Estimation results for the first virtual room setup. For the CBF, MVDR,and SRP-PHAT methods, the estimation results are shown using the spatial spectrum of all frames. For the proposed SAVE-MSBL-EM method, the estimation results are shown using the weights w of all frames. a The trajectories of two source. b Estimation result of CBF method in free field. c Estimation result of CBF method in low reverberation. d Estimation result of SRP-PHAT method in free field. e Estimation result of SRP-PHAT method in low reverberation. f Estimation result of MVDR method in free field. g Estimation result of MVDR method in low reverberation. h Estimation result of the proposed SAVE-MSBL-EM method in free field. i Estimation result of the proposed SAVE-MSBL-EM method in low reverberation
According to the simulation setup, the time-domain array signals can be generated using the "signal-generator." Then, the received array signals are first segmented into a batch of snapshots with 87.5% overlap. By applying the fast Fourier transform (FFT) on each snapshot, the time-domain array signals are converted to the frequency-domain array data. Then, the frequency-domain array data is segmented into several frames with L consecutive snapshots grouped as one frame. In the first and second simulations, L is set to 15. The effect of L is discussed in the last part of this subsection. Note that the SVD approach is used for data reduction in this paper. After applying acoustic DOA estimation methods for each frame, we find the peaks for each frame and label these peaks according to the ground truth DOAs of the two sources. The error range is set to 15∘, i.e., if the minimum error between the estimated angle and all ground truth angles is larger than 15∘, we label the peak as a false estimate. In this paper, we use the black and red circles to denote estimates of the first source and the second source, respectively. Moreover, we use magenta triangles to denote false estimates.
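The SVD-based data reduction mentioned above can be sketched as follows, in the spirit of the reduction used in [18]; the interface and the choice of rank are illustrative:

```python
import numpy as np

def svd_reduce(X, rank):
    """Reduce an (M, L) frame to (M, rank) snapshots, as in the l1-SVD approach.

    Projecting the frame onto its top right-singular directions keeps the
    signal-subspace part of the snapshots, lowering both the dimension and
    the noise contribution before sparse recovery.
    """
    _, _, Vh = np.linalg.svd(X, full_matrices=False)
    return X @ Vh.conj().T[:, :rank]   # (M, rank)
```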
To quantitatively show the difference of the resolution performance between the proposed SAVE-MSBL-EM method and other methods, the RMSE, the false alarm (FA) rate, and the miss-detected (MD) rate are used to measure the recovery performance. The RMSE is defined as
$$ e=\sqrt{\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\left|\tilde{\theta}_{i}-\theta_{i}\right|^{2}}, $$
where Nc is the total number of correct estimates, \(\tilde {\theta }_{i}\) is the ith correct estimate, and θi is the ith true bearing angle. Following [50], the FA rate is defined as the percent of sources that are falsely estimated out of the total number of sources and the MD rate is defined as the percent of sources that are miss-detected out of the total number of sources, i.e.,
$$ FA = \frac{N_{F}}{N_{T}}\times 100\%,\;\; MD = \frac{N_{M1}+N_{M2}}{2N_{T}}\times 100\%, $$
where NF is the number of sources with false estimation, NT is the total number of sources for all frames, and NM1 and NM2 are the miss-detected number of the first source and the second source, respectively. Note that two continuous harmonic sound signals are used in this simulation. Thus, two active sources exist in each frame.
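The per-frame labeling into correct, false, and missed estimates described above can be expressed compactly; a sketch under the stated 15∘ error range (wrap-around of circular angles is omitted for brevity):

```python
import numpy as np

def label_frame(peaks_deg, truth_deg, err_max=15.0):
    """Match estimated peaks to ground-truth DOAs within err_max degrees.

    Returns (n_false, n_missed) for one frame: a peak farther than err_max
    from every true DOA counts as a false estimate, and every true DOA with
    no peak within err_max counts as missed.
    """
    peaks = np.asarray(peaks_deg, dtype=float)
    truth = np.asarray(truth_deg, dtype=float)
    if peaks.size == 0:
        return 0, truth.size
    dist = np.abs(peaks[:, None] - truth[None, :])   # pairwise angular errors
    n_false = int(np.sum(dist.min(axis=1) > err_max))
    n_missed = int(np.sum(dist.min(axis=0) > err_max))
    return n_false, n_missed
```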
We consider two reverberation conditions for all the methods: the free-field (no reverberation) and low-reverberation conditions (RT60 = 0.25 s). For the CBF, MVDR, and SRP-PHAT methods, the estimation results are shown using the spatial spectra of all frames. For the proposed SAVE-MSBL-EM method, the estimation results are shown using the weight, w, of all frames. For comparison, all the data are normalized frame by frame and displayed using color maps.
In simulation 1, the estimation results of the CBF method in free-field and low-reverberation environments are shown in Figs. 7b and c, respectively. The estimation results of the different methods in both the free field and low reverberation conditions are shown in Fig. 7b–i. The RMSE, FA, and MD are shown in Table 2. Note that "FF" refers to the free-field condition and "RB" refers to the reverberation environment. It can be seen that all the methods perform well under the free-field condition. In the presence of reverberation, the good accuracy performance of the CBF, SRP-PHAT, and proposed SAVE-MSBL-EM method are retained but the MVDR method degrades considerably.
Table 2 Performance of all the methods for simulation 1
To further verify the performance of the proposed SAVE-MSBL-EM method in terms of resolution, another scenario is considered. In this case, all of the setup remains the same except the trajectories of the two sources. We refer to this simulation as simulation 2. The original position of the first source is (2.5,5.5,3) m while the second is (7.5,5.5,3) m. The end positions are (4,7,3) m and (6,7,3) m, respectively. Figure 8 shows the trajectories of the two sources in the virtual room.
Illustration of the second virtual room setup with two moving sources for the simulation 2
The true bearing angles of the two sources with respect to the microphone array are illustrated in Fig. 9a. The estimation results of the CBF, SRP-PHAT, MVDR, and SAVE-MSBL-EM methods in the free-field environment are shown in Figs. 9b, d, f, and h, respectively, while the results for the low reverberation condition are shown in Figs. 9c, e, g, and i, respectively. The RMSE, FA, and MD are summarized in Table 3.
Estimation results for the second virtual room setup. For the CBF, MVDR and SRP-PHAT methods, the estimation results are shown using the spatial spectrum of all frames. For the proposed SAVE-MSBL-EM method, the estimation results are shown using the weights w of all frames. a The trajectories of two source. b Estimation result of CBF method in free field. c Estimation result of CBF method in low reverberation. d Estimation result of SRP-PHAT method in free field. e Estimation result of SRP-PHAT method in low reverberation. f Estimation result of MVDR method in free field. g Estimation result of MVDR method in low reverberation. h Estimation result of the proposed method in free field. i Estimation result of the proposed method in low reverberation
From Figs. 9b, c, d, e, f, and g, it can be seen that the performance of the CBF, SRP-PHAT and MVDR methods degrade dramatically as two sound sources move closer. However, the proposed SAVE-MSBL-EM method retains an accurate estimation performance for the acoustic DOA estimation. In this case, the proposed SAVE-MSBL-EM method offers higher resolution performance than other methods.
We then test the performance of the proposed method and MVDR method using static sources and the results are shown in Fig. 10.
Simulation results. a False alarm rate in low-reverberation environment. b Miss-detected rate in low-reverberation environment
The microphone array signals are generated using the "rir-generator"Footnote 6. The distance between the sound sources and the microphone array center is set to 3 m. We test the FA rate for different bearing intervals between the two sound sources in the low-reverberation condition (RT60 = 0.25 s). Figure 10a depicts the FA rates of the MVDR method and the proposed algorithm.
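For readers who want to reproduce this kind of setup, a sketch using the Python port of the RIR generator is given below. The geometry and the source signal are hypothetical placeholders; only RT60 = 0.25 s is taken from the text above:

```python
import numpy as np
import scipy.signal
import rir_generator as rir  # pip install rir-generator (Python port, assumed here)

fs = 16000
source_signal = np.random.randn(fs)        # 1 s of placeholder source signal

# Hypothetical geometry; only RT60 = 0.25 s comes from the experiment description.
h = rir.generate(
    c=343,                                 # speed of sound (m/s)
    fs=fs,                                 # sampling rate (Hz)
    r=[[5.0, 2.0, 1.5]],                   # microphone position(s), hypothetical
    s=[2.0, 5.0, 1.5],                     # source position, hypothetical
    L=[8.0, 6.0, 3.0],                     # room dimensions (m), hypothetical
    reverberation_time=0.25,               # RT60 of the low-reverberation case
    nsample=4096,                          # RIR length in samples
)
mic_signal = scipy.signal.fftconvolve(source_signal, h[:, 0])  # one microphone
```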
It can be seen that the proposed SAVE-MSBL-EM algorithm has a lower FA rate in the interval range from 15∘ to 40∘. Figure 10b shows the MD rates of the two algorithms. Compared with the MVDR method, the proposed method has a lower MD rate in the range from 15∘ to 40∘. From Figs. 7, 9 and 10, we can thus conclude that the proposed SAVE-MSBL-EM method provides better resolution performance than the CBF, SRP-PHAT, and MVDR methods in both free-field and low-reverberation conditions.
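For concreteness, FA and MD rates over a set of frames can be counted as in the sketch below; the nearest-match rule and the 5∘ tolerance are our assumptions, not the paper's exact scoring protocol:

```python
import numpy as np

def fa_md_rates(est_peaks, true_doas, tol_deg=5.0):
    """Count false-alarm (FA) and miss-detected (MD) rates over all frames.

    est_peaks / true_doas: per-frame sequences of estimated / true DOAs (deg).
    An estimate matching no true DOA within tol_deg counts as a false alarm;
    a true DOA matched by no estimate counts as a miss.
    """
    fa = md = n_est = n_true = 0
    for est, truth in zip(est_peaks, true_doas):
        est, truth = np.atleast_1d(est), np.atleast_1d(truth)
        n_est += est.size
        n_true += truth.size
        for e in est:
            if truth.size == 0 or np.min(np.abs(truth - e)) > tol_deg:
                fa += 1
        for t in truth:
            if est.size == 0 or np.min(np.abs(est - t)) > tol_deg:
                md += 1
    return fa / max(n_est, 1), md / max(n_true, 1)
```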
To test the effect of the frame (window) length L on the localization performance, we conduct a simulation for different numbers of snapshots L. The simulation setup is the same as that of simulation 2, that is, the trajectories of the two sources and their true bearing angles with respect to the microphone array are shown in Figs. 8 and 9a, respectively. The simulation is conducted in the reverberant environment (RT60 = 0.25 s). The results are illustrated in Fig. 11. The RMSE, FA, and MD are shown in Table 4.
The performance of the proposed method versus the number of snapshots L in a frame. a L=1. b L=5. c L=10. d L=15
Table 4 Performance of the proposed method versus L
It can be seen that the proposed method works for all snapshot numbers. However, the localization performance degrades if the number of snapshots is small; e.g., the FA and MD in Figs. 11a and b are higher than those in Figs. 11c and d.
Real data experiments
The LOCATA dataset provides a series of microphone array recordings made in the Computing Laboratory of the Department of Computer Science of Humboldt University Berlin [51]. The room size is 7.1×9.8×3 m, with a reverberation time of RT60 = 0.55 s. In this paper, we use the "benchmark2" microphone array data in task #6 to test the high-resolution performance of the proposed method. The "benchmark2" array has 12 microphones. Two speakers are moving while speaking continuously with short pauses. The spectrograms of the two sources recorded with one microphone are illustrated in Fig. 12.
In this experiment, we only consider the azimuth angle estimation, with the elevation angle fixed at 90∘. The target plane is uniformly divided into 73 grid points from −180∘ to 180∘ with an interval of 5∘. The true positions and source signals of the two sources are provided by the LOCATA dataset. We applied a voice activity detector [52] to these source signals to obtain ground-truth voice activity information for the two sound sources. Figure 13a shows the true trajectories of the two sources. We also applied the voice activity detector to the microphone array signals to obtain the voice activity information of each frame. As in the simulation part, we find two peaks for each voice-active frame and label these peaks according to the true source positions. Note that a threshold δ is set to judge the existence of peaks, i.e., if the amplitude of a peak is less than δ, the estimated peak is considered invalid. The black circles and red circles denote the true DOAs of the first and second sources, respectively. The magenta triangles denote the false estimates.
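A minimal sketch of this peak selection is given below; treating δ as a level relative to the frame maximum (in dB) is our assumption:

```python
import numpy as np
from scipy.signal import find_peaks

def select_peaks(spectrum_db, grid_deg, delta_db=-40.0, max_sources=2):
    """Pick at most max_sources peaks from one frame's spectrum (in dB).

    Peaks more than |delta_db| below the frame maximum are treated as
    invalid estimates (relative thresholding is our assumption)."""
    idx, _ = find_peaks(spectrum_db)
    idx = idx[spectrum_db[idx] >= spectrum_db.max() + delta_db]
    order = np.argsort(spectrum_db[idx])[::-1]
    return grid_deg[idx[order[:max_sources]]]
```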
Estimation results for the real data. For the CBF, MVDR and SRP-PHAT methods, the estimation results are shown using the spatial spectra of all frames. For the MSBL-EM and proposed SAVE-MSBL-EM methods, the estimation results are shown using the weights w of all frames. All the data are normalized frame by frame. a The true trajectories of the two sources. b The results of the CBF method. c The results of the MVDR method. d The results of the SRP-PHAT method. e The results of the MSBL-EM method. f The results of the proposed method
The estimation results of the CBF, MVDR, SRP-PHAT, and MSBL-EM methods are shown in Figs. 13b, c, d, and e, respectively. Moreover, the estimation results of the proposed SAVE-MSBL-EM method are shown in Fig. 13f. From Figs. 13b–d, it can be seen that the two sources can hardly be separated in the time range from 6 to 10 s using the CBF, SRP-PHAT, and MVDR methods. However, the proposed SAVE-MSBL-EM method separates the two sources successfully, indicating a higher resolution than the CBF, SRP-PHAT, and MVDR methods (see Fig. 13f). Comparing Fig. 13e and f, it can be seen that the proposed SAVE-MSBL-EM method achieves better recovery performance than the MSBL-EM method in the time range from 8 to 10 s. To evaluate the performance of all the methods, the MD rate versus FA rate is computed by varying the peak selection threshold (see Fig. 14). For all the curves in Fig. 14, the closer to the bottom left the better. It can be seen that the proposed SAVE-MSBL-EM method achieves better performance than the state-of-the-art methods.
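Sweeping the threshold to trace the MD-versus-FA curve of Fig. 14 could look like the sketch below, which reuses the hypothetical select_peaks() and fa_md_rates() helpers from the earlier sketches:

```python
import numpy as np

def md_fa_curve(frames_db, grid_deg, true_doas, thresholds_db):
    """One (FA, MD) point per threshold value, as in the Fig. 14 curves."""
    curve = []
    for delta in thresholds_db:
        est = [select_peaks(f, grid_deg, delta_db=delta) for f in frames_db]
        curve.append(fa_md_rates(est, true_doas))
    return np.asarray(curve)  # columns: FA rate, MD rate
```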
The MD rate versus FA rate by varying the peak selection threshold
We further report the estimation results for a fixed peak selection threshold δ=−40 dB (see Table 5). It can be seen that the proposed SAVE-MSBL-EM method outperforms the other methods, especially in terms of FA rate and RMSE. The reason is that the proposed method successfully resolves the two sources while the others fail in the range from 6 to 10 s. The results indicate that the proposed SAVE-MSBL-EM method also provides higher resolution than state-of-the-art methods in real conditions, where not all assumptions of the proposed method may hold.
Table 5 Results for the real data
In this paper, we propose a space alternating MSBL method for acoustic DOA estimation that offers high-resolution performance. First, we build a hierarchical Bayesian framework with a group sparse prior for the MMV signal model by exploiting the group sparsity of the candidate source amplitude matrix. Then, the computationally efficient SAVE-MSBL algorithm is proposed to infer all hidden variables in the Bayesian model. Moreover, an EM algorithm is proposed to deal with the acoustic DOA estimation problem. In the experimental part, the performance of the proposed method is investigated using both synthetic and real data. The results show that the proposed method has a lower RMSE and FA rate than state-of-the-art methods in both free-field and low-reverberation conditions. As a result, the proposed method can be applied in applications such as humanoid robots and drones to improve the resolution of acoustic DOA estimation.
Appendix A: Derivation of (11)
According to Eqs. (9) and (10), the signal sk can be updated using the space alternating approach as follows:
$$\begin{aligned} \ln q({\boldsymbol{s}_{k}})&=\mathrm{E}_{q(\boldsymbol{\Theta}/{\boldsymbol{s}}_{k})}\left[\ln p(\boldsymbol{X},\boldsymbol{\Theta})\right]\\ &=\mathrm{E}_{q(\boldsymbol{\Theta}/{\boldsymbol{s}}_{k})}\left[-\rho\|{\boldsymbol{X}}-{\boldsymbol{A}}{\boldsymbol{S}}\|_{F}^{2}-\sum\limits_{k=1}^{K}\lambda_{k}{\boldsymbol{s}}_{k}^{\mathrm{H}}{\boldsymbol{s}_{k}}\right]\\ &=-\mathrm{E}_{q(\boldsymbol{\Theta}/{\boldsymbol{s}}_{k})}\left[\text{tr}\left[\rho\left(\boldsymbol{X}-\boldsymbol{A}\boldsymbol{S}\right)^{\mathrm{H}}\left(\boldsymbol{X}-\boldsymbol{A}\boldsymbol{S}\right)+\lambda_{k}{\boldsymbol{s}}_{k}^{\mathrm{H}}{\boldsymbol{s}}_{k}\right]\right]+\mathrm{C}\\ &=-\mathrm{E}_{q(\boldsymbol{\Theta}/{\boldsymbol{s}}_{k})}\left[\text{tr}\left[\rho\left(\boldsymbol{X}-\boldsymbol{A}_{\bar{k}}\boldsymbol{S}_{\bar{k}}-\boldsymbol{a}_{k}{\boldsymbol{s}}_{k}^{\mathrm{T}}\right)^{\mathrm{H}}\left(\boldsymbol{X}-\boldsymbol{A}_{\bar{k}}\boldsymbol{S}_{\bar{k}}-\boldsymbol{a}_{k}{\boldsymbol{s}}_{k}^{\mathrm{T}}\right)+\lambda_{k}{\boldsymbol{s}}_{k}^{\mathrm{H}}{\boldsymbol{s}}_{k}\right]\right]+\mathrm{C}\\ &=-\mathrm{E}_{q(\boldsymbol{\Theta}/{\boldsymbol{s}}_{k})}\left[\text{tr}\left[{\boldsymbol{s}}_{k}^{\mathrm{H}}\left(\rho\boldsymbol{a}_{k}^{\mathrm{H}}\boldsymbol{a}_{k}+\lambda_{k}\right){\boldsymbol{s}}_{k}-\rho{\boldsymbol{s}}_{k}^{*}\boldsymbol{a}_{k}^{\mathrm{H}}\left(\boldsymbol{X}-\boldsymbol{A}_{\bar{k}}\boldsymbol{S}_{\bar{k}}\right)-\rho\left(\boldsymbol{X}-\boldsymbol{A}_{\bar{k}}\boldsymbol{S}_{\bar{k}}\right)^{\mathrm{H}}\boldsymbol{a}_{k}{\boldsymbol{s}}_{k}^{\mathrm{T}}\right]\right]+\mathrm{C}\\ &=-\text{tr}\left[{\boldsymbol{s}}_{k}^{\mathrm{H}}\left(\left<\rho\right>\boldsymbol{a}_{k}^{\mathrm{H}}\boldsymbol{a}_{k}+\left<\lambda_{k}\right>\right){\boldsymbol{s}}_{k}-\left<\rho\right>{\boldsymbol{s}}_{k}^{*}\boldsymbol{a}_{k}^{\mathrm{H}}\left(\boldsymbol{X}-\boldsymbol{A}_{\bar{k}}\left<\boldsymbol{S}_{\bar{k}}\right>\right)-\left<\rho\right>\left(\boldsymbol{X}-\boldsymbol{A}_{\bar{k}}\left<\boldsymbol{S}_{\bar{k}}\right>\right)^{\mathrm{H}}\boldsymbol{a}_{k}{\boldsymbol{s}}_{k}^{\mathrm{T}}\right]+\mathrm{C}, \end{aligned} $$
where Θ/sk denotes the set of variables with sk removed and C denotes a constant. Note that AS can be rewritten as \(\boldsymbol {A}_{{\bar {k}}}\boldsymbol {S}_{\bar {k}}+\boldsymbol {a}_{k}{\boldsymbol {s}}_{k}^{\mathrm {T}}\).
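Completing the square in the last line shows that q(sk) is a complex Gaussian whose entries share one variance. The sketch below is a minimal numerical rendering of the resulting space-alternating update; the array shapes, variable names, and the helper itself are ours, not the paper's code:

```python
import numpy as np

def save_update_sk(X, A, S_mean, k, rho_mean, lambda_k_mean):
    """One space-alternating update of q(s_k) implied by the derivation above.

    X: (M, L) array data; A: (M, K) steering dictionary; S_mean: (K, L)
    current posterior means. Returns (posterior mean row of s_k, variance).
    """
    a_k = A[:, k:k + 1]                                # (M, 1) steering vector
    R_k = X - A @ S_mean + a_k @ S_mean[k:k + 1, :]    # residual without source k
    sigma2_k = 1.0 / (rho_mean * np.vdot(a_k, a_k).real + lambda_k_mean)
    s_k = rho_mean * sigma2_k * (a_k.conj().T @ R_k)   # (1, L) posterior mean
    return s_k, sigma2_k
```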
The software for microphone array data generation is from the "International Audio Laboratories Erlangen" and is available online: https://www.audiolabs-erlangen.de/home. The LOCATA data originates from the "IEEE-AASP Challenge on Acoustic Source Localization and Tracking" and can be found under the following link: https://www.locata.lms.tf.fau.de/.
In this paper, the CBF is referred to as delay-and-sum beamforming.
Here, a snapshot refers to the array data in one observation window.
See Appendix A: Derivation of (11) for more derivation details.
The LOCATA dataset is publicly available at https://www.locata.lms.tf.fau.de/
The "signal-generator" for synthetic array data generation is online available: https://www.audiolabs-erlangen.de/fau/professor/habets/software.
The RIR generator is publicly available via the same software page: https://www.audiolabs-erlangen.de/fau/professor/habets/software.
DOA:
Direction-of-arrival
CBF:
Classical beamforming
SRP-PHAT:
Steered-response power phase transform
MVDR:
Minimum variance distortionless response
MUSIC:
Multiple signal classification
ESPRIT:
Estimation of signal parameters via rotational invariance technique
SNR:
Signal-to-noise ratio
SVD:
Singular value decomposition
cLASSO:
Complex least absolute shrinkage and selection operator
SBL:
Sparse Bayesian learning
MSBL:
Multi-snapshot sparse Bayesian learning
SAVE:
Space alternating variational estimation
VBI:
Variational Bayesian inference
MMSE:
Minimum mean square error
RMSE:
Root-mean-square-error
MMV:
Multiple measurement vector
CGMM:
Complex Gaussian mixture model
EM:
Expectation–maximization
ULA:
Uniform linear array
UCA:
Uniform circular array
KL:
Kullback-Leibler
FFT:
Fast Fourier transform
FA:
False alarm
MD:
Miss-detected
J. Hornstein, M. Lopes, J. Santos-Victor, F. Lacerda, in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. Sound localization for humanoid robots - building audio-motor maps based on the HRTF (IEEE, Beijing, 2006), pp. 1170–1176.
C. Rascon, I. Meza, Localization of sound sources in robotics: a review. Robot. Auton. Syst. 96, 184–210 (2017).
M. Strauss, P. Mordel, V. Miguet, A. Deleforge, in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). DREGON: dataset and methods for UAV-embedded sound source localization (IEEE, Madrid, 2018).
A. Deleforge, D. D. Carlo, M. Strauss, R. Serizel, L. Marcenaro, Audio-based search and rescue with a drone: highlights from the IEEE signal processing cup 2019 student competition. IEEE Signal Proc. Mag.36(5), 138–144 (2019).
J. M. Valin, F. Michaud, J. Rouat, in 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings. Robust 3D localization and tracking of sound sources using beamforming and particle filtering (IEEE, Toulouse, 2006), pp. 841–844.
C. Zhang, D. Florencio, D. E. Ba, Z. Zhang, Maximum likelihood sound source localization and beamforming for directional microphone arrays in distributed meetings. IEEE Trans. Multimed.10(3), 538–548 (2008).
M. Farmani, M. S. Pedersen, Z. -H. Tan, J. Jensen, Informed sound source localization using relative transfer functions for hearing aid applications. IEEE/ACM Trans. Audio Speech Lang. Process.25(3), 611–623 (2017).
H. L. Van Trees, in Part IV of Detection, Estimation, and Modulation Theory. One. Optimum array processing (John Wiley and Sons, New York, 2004), pp. 21–53.
J. H. DiBiase, H. F. Silverman, M. S. Brandstein, in Microphone arrays. Robust localization in reverberant rooms (Springer, Berlin/Heidelberg, 2001), pp. 164–180.
V. Krishnaveni, T. Kesavamurthy, A. B, Beamforming for direction-of-arrival (DOA) estimation-a survey. Int. J. Comput. Appl.61(11), 4–11 (2013).
R. Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag.34(3), 276–280 (1986).
R. Roy, T. Kailath, ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoustics Speech Sig. Process.37(7), 984–995 (1989).
H. Cox, R. Zeskind, M. Owen, Robust adaptive beamforming. IEEE Trans Acoustics Speech Sig. Process.35(10), 1365–1376 (1987).
D. D. Feldman, L. J. Griffiths, A projection approach for robust adaptive beamforming. IEEE Trans Sig. Process.42(4), 867–876 (1994).
M. Pardini, F. Lombardini, F. Gini, The hybrid Cramér–Rao bound on broadside DOA estimation of extended sources in presence of array errors. IEEE Trans Sig. Process.56(4), 1726–1730 (2008).
A. Khabbazibasmenj, S. A. Vorobyov, A. Hassanien, Robust adaptive beamforming based on steering vector estimation with as little as possible prior information. IEEE Trans Sig. Process.60(6), 2974–2987 (2012).
A. L. Kintz, I. J. Gupta, A modified MUSIC algorithm for direction of arrival estimation in the presence of antenna array manifold mismatch. IEEE Trans. Antennas Propag.64(11), 4836–4847 (2016).
D. Malioutov, M. Cetin, A. S. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Sig. Process.53(8), 3010–3022 (2005).
D. P. Wipf, B. D. Rao, An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Sig. Process.55(7), 3704–3716 (2007).
S. Fortunati, R. Grasso, F. Gini, M. S. Greco, K. LePage, Single-snapshot DOA estimation by using compressed sensing. EURASIP J. Adv. Sig. Process.2014(1), 1–17 (2014).
P. Gerstoft, C. F. Mecklenbrauker, A. Xenaki, S. Nannuru, Multisnapshot sparse Bayesian learning for DOA. IEEE Sig. Process. Lett.23(10), 1469–1473 (2016).
A. Xenaki, J. B. Boldt, M. G. Christensen, Sound source localization and speech enhancement with sparse Bayesian learning beamforming. J. Acoust. Soc. Am. 143(6), 3912–3921 (2018).
A. Xenaki, P. Gerstoft, K. Mosegaard, Compressive beamforming. J. Acoust. Soc. Am.136(1), 260–271 (2014).
C. F. Mecklenbräuker, P. Gerstoft, E. Zöchmann, c–LASSO and its dual for sparse signal estimation from array data. Sig. Process.130:, 204–216 (2017).
X. Wang, D. Meng, M. Huang, L. Wan, Reweighted regularized sparse recovery for DOA estimation with unknown mutual coupling. IEEE Commun. Lett.23(2), 290–293 (2019).
Z. Yang, J. Li, P. Stoica, L. Xie, C. Rama, T. Sergios, in Academic Press Library in Signal Processing. One, 7. Sparse methods for direction-of-arrival estimation (New York, 2018), pp. 509–581.
M. E. Tipping, A. Smola, Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res.59(1), 211–244 (2001).
S. Ji, Y. Xue, L. Carin, Bayesian compressive sensing. IEEE Trans. Sig. Process.56(6), 2346–2356 (2008).
S. D. Babacan, R. Molina, A. K. Katsaggelos, Bayesian compressive sensing using laplace priors. IEEE Trans. Image Process.19(1), 53–63 (2010).
B. Worley, Scalable mean-field sparse bayesian learning. IEEE Trans. Sig. Process.67(24), 6314–6326 (2019).
D. Wipf, S. Nagarajan, in Proceedings of the 24th International Conference on Machine Learning - ICML 07. Beamforming using the relevance vector machine (ACM Press, New York, 2007), pp. 1–8.
Z. Yang, L. Xie, C. Zhang, Off-grid direction of arrival estimation using sparse Bayesian inference. IEEE Trans. Sig. Process.61(1), 38–43 (2013).
L. Zhao, X. Li, L. Wang, G. Bi, Computationally efficient wide-band DOA estimation methods based on sparse Bayesian framework. IEEE Trans. Veh. Technol.66(12), 11108–11121 (2017).
Z. Bai, J. Sun, J. R. Jensen, M. G. Christensen, in 2019 27th European Signal Processing Conference (EUSIPCO). Indoor sound source localization based on sparse Bayesian learning and compressed data (IEEE, A Coruna, Spain, 2019), pp. 1–5.
Z. Bai, J. R. Jensen, J. Sun, M. G. Christensen, in 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). A sparse Bayesian learning based RIR reconstruction method for acoustic TOA and DOA estimation (IEEE, New York, 2019), pp. 1–5.
M. E. Tipping, A. Faul, in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. Fast marginal likelihood maximisation for sparse Bayesian models (JMLR, Key West, 2003), pp. 3–6.
H. Duan, L. Yang, J. Fang, H. Li, Fast inverse-free sparse Bayesian learning via relaxed evidence lower bound maximization. IEEE Sig. Process. Lett.24(6), 774–778 (2017).
C. K. Thomas, D. Slock, in 2018 26th European Signal Processing Conference (EUSIPCO). Space alternating variational Bayesian learning for LMMSE filtering (IEEE, Rome, 2018), pp. 1–5.
D. P. Wipf, B. D. Rao, Sparse Bayesian learning for basis selection. IEEE Trans. Sig. Process.52(8), 2153–2164 (2004).
Z. Zhang, B. D. Rao, Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning. IEEE J. Sel. Top. Sig. Process.5(5), 912–926 (2011).
J. Huang, T. Zhang, The benefit of group sparsity. Ann. Stat.38(4), 1978–2004 (2010). https://doi.org/10.1214/09-aos778.
C. M. Bishop, in Pattern recognition and machine learning. Approximate inference (Springer, New York, 2006), pp. 472–485.
D. G. Tzikas, A. C. Likas, N. P. Galatsanos, The variational approximation for Bayesian inference. IEEE Sig. Process. Mag.25(6), 131–146 (2008).
J. A. Fessler, A. O. Hero, Space-alternating generalized expectation-maximization algorithm. IEEE Trans. Sig. Process.42(10), 2664–2677 (1994).
Y. Dorfan, S. Gannot, Tree-based recursive expectation-maximization algorithm for localization of acoustic sources. IEEE/ACM Trans. Audio Speech Lang. Process.23(10), 1692–1703 (2015).
X. Li, Y. Ban, L. Girin, A. P. Xavier, R. Horaud, Online localization and tracking of multiple moving speakers in reverberant environments. IEEE J. Sel. Top. Sig. Process.13(1), 88–103 (2019).
S. T. Birchfield, D. K. Gillmor, in IEEE International Conference on Acoustics Speech and Signal Processing. Fast Bayesian acoustic localization (IEEE, Palo Alto, 2002), pp. 1–4.
J. Traa, D. Wingate, N. D. Stein, P. Smaragdis, Robust source localization and enhancement with a probabilistic steered response power model. IEEE/ACM Trans. Audio Speech. Lang. Process.24(3), 493–503 (2016).
R. D. Nowak, Distributed EM algorithms for density estimation and clustering in sensor networks. IEEE Trans. Sig. Process.51(8), 2245–2253 (2003).
Y. Dorfan, G. Hazan, S. Gannot, in 2014 4th Joint Workshop on Hands-free Speech Communication and Microphone Arrays (HSCMA). Multiple acoustic sources localization using distributed expectation-maximization algorithm (IEEE, Villers-les-Nancy, France, 2014), pp. 1–5.
H. W. Lollmann, C. Evers, A. Schmidt, H. Mellmann, H. Barfuss, P. A. Naylor, W. Kellermann, in 2018 IEEE 10th Sensor Array and Multichannel Signal Processing Workshop (SAM). The LOCATA challenge data corpus for acoustic source localization and tracking (IEEE, Sheffield, 2018), pp. 410–414.
J. Sohn, N. S. Kim, W. Sung, A statistical model-based voice activity detection. IEEE Sig. Process. Lett.6(1), 1–3 (1999).
The authors would like to thank Zhilin Zhang for providing the source code of the MSBL approach.
This work was supported by the China Scholarship Council, grant ID.201806120176.
School of Instrument Science and Engineering, Harbin Institute of Technology, Xidazhi Street 92, Harbin, 150006, China
Zonglong Bai & Jinwei Sun
CREATE, Aalborg University, Rendsburggade 14, Aalborg, 9000, Denmark
Zonglong Bai, Liming Shi, Jesper Rindom Jensen & Mads Græsbøll Christensen
Zonglong Bai
Liming Shi
Jesper Rindom Jensen
Jinwei Sun
Mads Græsbøll Christensen
Z. Bai and L. Shi conceptualized the study and ran the experiments. M. G. Christensen, J. R. Jensen, and J. Sun edited the manuscript. All authors read and approved the final manuscript.
Correspondence to Zonglong Bai.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Bai, Z., Shi, L., Jensen, J.R. et al. Acoustic DOA estimation using space alternating sparse Bayesian learning. J AUDIO SPEECH MUSIC PROC. 2021, 14 (2021). https://doi.org/10.1186/s13636-021-00200-z
Received: 14 September 2020
Acoustic DOA estimation
Sound source localization | CommonCrawl |
Monday, June 17, 2013
All proofs in natural and social sciences ultimately depend on probabilities
Mel B. has sent me a link pointing to a rather incredible attack by an economics professor on the statistical methods in science that was published in the Financial Post:
Junk Science Week: Unsignificant statistics
Stephen Ziliak doesn't want to believe in the existence of the Higgs boson – or any other "proof" in science that is based on the notion of statistical significance. In fact, we learn – in big fonts – that
Statistical significance is junk science, and its big piles of nonsense are spoiling the research of more than particle physicists.
Wow. It's remarkable because with this deep misunderstanding of the very key part of any rational thinking, this Gentleman can't possibly understand anything about the proper verification of theories in economics, his field, either. I would argue that because of this lethal flaw in the author's approach to rational reasoning, it is guaranteed at 5 sigma that your humble correspondent and many other physicists and scientists simply have to be better economists than Mr Ziliak, too. He just can't have a clue about the scientific approach to anything.
Statistical significance is absolutely paramount in the verification of hypotheses in all natural sciences as well as all social sciences that more or less successfully try to emulate the scientific character and success of the natural sciences.
Only in mathematics can we construct rigorous proofs that don't need to mention any probabilities because in principle, the probability that a mathematical proof is right may be verified to be 100 percent. There's no noise and no uncertainty in a rigorous mathematical proof.
However, this "optimistic observation" has two major limitations. One of them is that mathematics doesn't directly apply to the real world. As long as mathematical concepts, theorems, and their proofs are considered rigorous, they can't be reliably and accurately identified with anything in the real world. So they tell us nothing about Nature, humans, or the society. Claims about Nature, humans, or the society simply don't belong to mathematics. They can't be absolutely certain. They can't be rigorous in the truly mathematical sense.
The second limitation is that people aren't infallible so for various reasons, even a mathematical proof has a nonzero probability to be wrong. Even if a proof is carefully verified etc., there's always a nonzero probability that the brain or the computer performed an invalid operation that led to the confirmation of a proof that is actually erroneous. The embedding of mathematicians' brains in Nature guarantees that these brains can't quite share the perfectly clean, infallible features of the idealized world of mathematics.
In natural sciences, the verification and falsification of hypotheses – and falsification in particular is the basic methodology that makes observations relevant (and observations have to be relevant for anything that we call science) – always involves measurements that have some uncertainty, a nonzero error margin, or a risk that a phenomenon is caused by different causes than those we want to search for. This is a fact: the world is simply messy and complicated. It is partly unpredictable. It is not a clean and transparent celestial sphere with perfectly spherical angels.
We may develop mathematical models and theories that are meant to match the observations and they may be free of any remarks about error margins, backgrounds, or false positives. But as soon as we do anything that remotely involves the theories' verification – and in sciences, the verification ultimately boils down to empirical verification – we simply have to acknowledge that each measured quantity has a nonzero error margin because it can't be measured quite accurately. We must acknowledge that an event that looks like a proof of some new phenomenon predicted by a theory was actually caused by a more mundane – while perhaps more rare and less likely – effect that combines the known mechanisms.
We must not only acknowledge it but we must also quantify all these things. We must know whether the error margin of a measurement is small enough so that the measurement is useful and trustworthy concerning the validity of a proposition. In the same way, we must know whether it's conceivable that the event apparently proving a new effect is actually caused by a combination of an older, less extraordinary theory combined with some reasonable amount of good luck.
For all these things, we have to quantify the probabilities.
The Higgs boson was officially discovered once the pairs of photons or Z-bosons with the right energies – events that really look like they come from a new, 125-126 GeV heavy particle – were so numerous that such a spike in the number of these events was very unlikely to appear without a new particle. By "very unlikely", particle physicists mean the chance "1 in 3 million", also known as "5 sigma", that the excess was a fluke that appeared in a world without a new particle.
Some disciplines of science try to be as hard and reliable as particle physics so they adopted the same 5-sigma (1 in 3 million) standard for discovery; most other disciplines, especially soft sciences such as medical research, climate science, psychology, and others, are often satisfied with 3-sigma (1 in 300) or even 2-sigma (1 in 20) evidence.
The number of sigmas determines the deviation from the null hypothesis. A null hypothesis is some simple enough explanation "without new players" that admits some controllable noise according to some calculable statistical treatment. If it predicts that a quantity \(X\) has the value \(X_0\pm \Delta X\) where the distribution is normal (and it is very often almost exactly normal, and even if it is not normal, we usually know what it looks like and we can calculate the probabilities for other distributions as well), i.e. \(C\times \exp[-(X-X_0)^2/2\Delta X^2]\) where \(C\) is chosen so that the "total probability of any possibility" equals one, then it is possible to calculate that the probability that \(X\) doesn't belong to the interval \((X_0-5\Delta X,X_0+5\Delta X)\) is approximately 1 over 3 million which is so tiny that physicists are willing to take the risk and announce the discovery.
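For the record, these tail probabilities are one-line computations. Note that the "1 in 20" and "1 in 300" figures quoted here are two-sided, while the "1 in 3 million" for 5 sigma follows the particle physicists' one-sided convention; a quick Python sketch using scipy makes both explicit:

```python
from scipy.stats import norm

for n in (2, 3, 5):
    one_sided = norm.sf(n)        # P(X > X0 + n*sigma)
    two_sided = 2 * norm.sf(n)    # P(|X - X0| > n*sigma)
    print(f"{n} sigma: two-sided 1 in {1 / two_sided:,.0f}, "
          f"one-sided 1 in {1 / one_sided:,.0f}")
# 2 sigma: ~1 in 22 (two-sided); 3 sigma: ~1 in 370 (two-sided);
# 5 sigma: ~1 in 3.5 million (one-sided)
```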
The total significance of the deviation from the Higgs-less null hypothesis is now around 10 sigma or so which makes us really sure that the Higgs-like excess isn't just a fluke. The probability that the excess is just a fluke – a collection of coincidences – is much smaller than 1 in a quadrillion. These numbers get so extreme because the tail of \(\exp(-x^2/2)\) decreases really quickly with \(x\), more quickly than exponentially, in fact.
When the discrepancy between a theory and the observation becomes this high, we may eliminate the null hypothesis (in this case, a crippled Standard Model where the Higgs is amputated). This is the process of falsification and it's the key empirically rooted procedure by which any science makes some progress in its ability to distinguish viable hypotheses from the disproved ones. To disprove a (null) hypothesis is this straightforward. On the other hand, we can never "quite prove" any detailed theory because there's always a possibility (and, with an exception of the truly final theory, pretty much certainty) that more extensive and accurate experiments in the future will falsify the latest best theory, too. Equivalently, the absence of a statistically significant (e.g. 2-sigma or 5-sigma) deviation in the latest data doesn't mean that the null hypothesis is right and will be right forever. It just means that the deviations as displayed in the performed experiments are smaller than a certain bound which implies that the current theory is "practically" correct. In the future, a discrepancy may be found in more accurate, refined, or extensive experiments that may see tinier or subtler effects than what we can see today.
One simply can't ever deduce any conclusions from the empirical data with absolute certainty. It's always important to acknowledge that an uncertainty is there. And because such an uncertainty may compromise the conclusions, it's always important (sometimes more important, sometimes less important, but never quite forgettable) to quantify the uncertainty, i.e. to know how large it is. The most invariant way of quantification is ultimately one in terms of the probability that a conclusion is invalid because an anomalous observation or a "smoking gun" wasn't really caused by the new effect whose existence we wanted to prove but rather by some good luck (or bad luck) – an amount of luck that can't be quite small (because, as we assume, the observation doesn't look like the most typical prediction of the null hypothesis) but it can't be too large (because it may still realistically happen).
All this methodology is absolutely essential for any controlled, reliable enough empirical tests of any theory or any hypothesis in any natural or social science. We may only discuss how high our certainty should be for us to authoritatively claim that our experiments or observations have established something (the requirements may depend on the context a little bit). 5-sigma is the usual standard of the hardest sciences (led by particle physics) for a discovery. It wouldn't hurt if other sciences adopted the same standards. When a dataset produces 2-sigma excesses, which still carry a substantial, "1 in 20" risk of a false positive, you only need a 2.5² = 6.25 times larger dataset to achieve a 5-sigma excess where the risk of a false positive is just "1 in 3 million". I am confident that science would be much clearer if surveys with mere 2-sigma excesses were summarized as inconclusive ones. Lots of bad and questionable results in soft sciences are caused by their low standards on how many sigmas we need. These bad apples have far-reaching consequences because many other papers try to build on them, and so on.
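The 6.25 factor follows from the fact that the statistical significance of a fixed relative excess grows like the square root of the dataset size; a trivial sketch:

```python
import math

def dataset_scale(sigma_now, sigma_target):
    """Significance ~ sqrt(N) for a fixed relative excess, so a higher
    significance requires the squared ratio more data."""
    return (sigma_target / sigma_now) ** 2

print(dataset_scale(2.0, 5.0))   # 6.25
print(2.0 * math.sqrt(6.25))     # back-check: 2 sigma scales up to 5 sigma
```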
But if someone wants to abandon the null hypothesis testing and the notion of statistical significance in general, he is surely throwing out the baby with the bath water. He can't possibly understand how proper science is done; he couldn't have possibly done any empirical research that could be uncontroversially considered scientific. In fact, as we have often emphasized on this blog, all predictions of fundamental theories of physics ultimately have to be probabilistic (even if you remove all the technological limitations of measurement devices etc.) because quantum mechanical postulates have to be universally valid in the whole Universe and every small or large corner of it.
Mr Ziliak tries to excuse his silly remarks by some confusing assertions about the nature of particle physicists' claims about the Higgs boson. The 5-sigma excess doesn't prove the Higgs boson, he says: it could be a Prometheus particle, too. But if he's serious, he misunderstands what terminology means in physics – and science. You are free to use the name "Prometheus" for the Higgs boson; after all, many of us use many other names at various points, such as the God particle or the BEH boson (only Peter Higgs really noticed the extra bosonic excitation named after him). But while the people are free to choose their language and terminology, physics isn't about terminology. Physics is about the observable phenomena. So even if the source of the bump were Prometheus according to your terminology and your belief system, it's still empirically demonstrated that this Prometheus behaves as the Higgs boson. If it looks like a God particle, walks like a God particle, and barks like a Dog particle, then it is a God particle (if you change one Dog to God). It doesn't matter whether someone says it's a Prometheus, too.
At the beginning, the new particle was given uncertain names and it was Higgs-like because there was clearly a new particle-like effect and its properties were compatible with the properties of a Higgs boson. Later, as we were more certain and knew more accurate values of the properties, we became able to falsify the theory that the bump is caused by something that differs too much from the Standard Model Higgs boson. At this point, we have everything we need to call it the Standard Model Higgs boson. By this claim, we don't mean that the Standard Model will forever be the right and complete theory for all observations. It almost certainly won't be. But the observed properties of the Higgs boson falsify so many competing hypotheses and are so nontrivially close to the predictions of the Standard Model Higgs boson that there's no reason not to use this name for the object. So the new particle may be a Prometheus but according to the physical definition of "being a Higgs boson", it is clearly a Higgs boson, too. Physics determines whether something is a Higgs boson by its decays, rates of production, mass, and other interactions, and if those things agree with the Higgs boson's property, then the particle – whether it is God or Prometheus or anyone else – simply is a Higgs boson and attempts to claim otherwise are just artifacts of a distorted terminology, mistakes, and demagogy.
When I talked about the certainty that the LHC has observed a new particle; a new Higgs-like particle; or a Standard-Model-like Higgs boson (these phrases are increasingly accurate and increasingly strong), I only took the (almost) purely experimental data into account. Aside from these nearly direct observations, we have nearly rock-solid theoretical arguments – that I won't offer to Mr Ziliak because he isn't smart enough to understand them as even the very rudimentary concept of statistical significance is already too hard and abstract for him – that there has to be a Higgs boson with the mass or other properties that can't differ from the observed ones by more than a relatively small amount. The Standard Model (or any theory with particles including the W- and Z-bosons and others we have known for 30 years) would simply produce inconsistent predictions (such as probabilities of some high-energy collisions exceeding 100 percent) if the Higgs boson weren't there. While an experimenter may view all these arguments as biases and he should perhaps only build on what he has seen with his own eyes, other physicists are more than free (in fact, nearly obliged) to use all the available evidence to decide about the existence of the Higgs boson (as well as any other scientific question). With this additional, mathematically sophisticated evidence added to the mix, there's really no doubt that Nature contains a Standard-Model-like Higgs boson. There's no sensible doubt about millions of other scientific claims, either. But the probability that these insights are right is never quite 100 percent although it has gotten insanely close to 100 percent in very many cases.
Other texts on similar topics: markets, philosophy of science
Without a thorough understanding of statistics it would be impossible to develop new pharmaceuticals. Without pharmaceuticals I would be dead. He denies statistical significance! My God!
Jun 17, 2013, 5:27:00 PM ...
reader lukelea said...
Since economists rarely include error bars isn't this evidence that it is not much of a science? I've been mightily impressed by Morgenstern's book on the much neglected measurement problem in economics. Even something as simple as "the" price of an article is difficult to determine and may not even exist. Which is why I think the employment of calculus in economics is spurious: there are no functions, let alone continuous ones, let alone continuous ones which we can measure. I would go further and say that mathematical equations (using the = sign) also have no place in economics except as a heuristic device (Eg. quantity theory of money). The only thing you are left with is the law of diminishing returns in its various manifestations, which is about the shape (convexity, concavity) of certain curves -- oops, not curves, there are no lines, only fuzzy lines whose fuzziness is not based on a normal distribution.
Does this mean that economics is useless? Not at all. You can squeeze a lot out of that little that you have. Adam Smith showed the tendency towards general equilibrium in a free market economy, later refined by the marginal revolution. Also you can make certain predictions involving these two signs: < and >. Just not =
At least this is my considered view of the subject, which I happen to love.
reader Orpheus said...
What about prior probabilities? By Bayes' Theorem, if someone assigns a low enough prior to e.g. the existence of the Higgs boson, that person may still obtain a less than 50% probability that it exists even after a successful 5 sigma test.
One might of course argue that it's implausible to arbitrarily assign extremely low probabilities to scientific hypotheses, but "intuitive plausibility" does not seem to be a very rigorous framework to estimate prior probabilities. Is there some way around this problem or am I seeing things incorrectly?
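A toy Bayes update makes the point concrete; the numbers here are arbitrary, and treating the p-value as P(data|null) with P(data|signal) ≈ 1 is a deliberately crude assumption that only bounds the Bayes factor from above:

```python
def toy_posterior(prior, p_value):
    """Crude Bayes update: p_value plays P(data | null), and P(data | signal)
    is optimistically set to 1, so this only bounds the posterior from above."""
    posterior_odds = (prior / (1.0 - prior)) / p_value
    return posterior_odds / (1.0 + posterior_odds)

# An absurdly low prior of 1e-9 still leaves a sub-percent posterior
# even after a "5 sigma" p-value of ~2.9e-7:
print(toy_posterior(1e-9, 2.9e-7))   # ~0.003
```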
reader Norpag said...
Lubos, you say
" In fact, as we have often emphasized on this blog, all predictions of fundamental theories of physics ultimately have to be probabilistic (even if you remove all the technological limitations of measurement devices etc.) because quantum mechanical postulates have to be universally valid in the whole Universe and every small or large corner of it." and "Statistical significance is absolutely paramount in the verification of hypotheses in all natural sciences as well as all social sciences that more or less successfully try to emulate the scientific character and success of the natural sciences."
I think science should abandon the idea of laws being "valid", which is a human construct that gives an illusion of certainty which most people need. The important thing about laws is simply: are they useful? This is the root cause of the division between classical physics at one end and quantum mechanics at the other. It's really a question of complexity. Classical physics works by simplifying, idealizing and isolating systems – thus Newton–Einstein gravity works well enough with small masses, e.g. the solar system, but does very poorly at the galactic scale. Similarly, a statistical, stochastic approach works well for particle physics and quantum mechanical processes at the other end. In between, at an intermediate level of complexity, neither approach works too well. When studying systems, e.g. climate science and cosmology, which consist of multiple resonating, oscillatory, interacting variables that probably have a secular evolution, a different approach is required. Such systems are usually inherently untestable, and outcomes can only be forecast for relatively small periods of future time by the recognition of patterns (wavelet analysis) that repeat for some periods of time on a scale of interest to humanity. In other words, the "validity" of "laws" in this area is inherently unknowable and is a meaningless concept. This is why Einstein couldn't come up with a UFT – nature isn't designed in such a way that such a concept is meaningful.
reader SteveBrooklineMA said...
Thanks Lubos, I have been hoping you would comment on this since reading WM Briggs' praise for Ziliak. I think there is plenty of bad science backed up with bad statistics, but these guys are not helping.
http://wmbriggs.com/blog/?p=8295
If the chance of a two sigma event is 1 out of 20 and a three sigma event is 1 out of 300, how does a 5 sigma event get all the way up to 1 out of 3,000,000?
To betray my ignorance, in IQ studies the average is 100 and the standard deviation is 15. Roughly one out of seven people have an IQ of 115 or higher, and one out of forty-nine have 130 or higher. Continuing that trend, I was under the impression that you keep multiplying by 1/7 to get the chances for each higher sigma: approximately one out of 350 for 3 sigma, one out of 2500 for 4 sigma, and one out of 17,500 for 5 sigma, corresponding to an IQ of 175, which is already pretty meaningless, or so I gather from something you once said, because the tests aren't really that good.
I feel really dumb asking this question but would like to clear up my confusion.
Dear Luke, I wrote the explanation, but I understand that in order to explain such things pedagogically, one would have to spend about 30 times longer.
The odds become so extreme with the number of sigmas so quickly because the probability is the tail integral of exp(-x^2/2), which falls faster than exponentially.
Please think about it or try to study some standardized page/introduction to it, like
http://en.wikipedia.org/wiki/Normal_distribution#Standard_deviation_and_tolerance_intervals
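A quick numerical check of why the factor per sigma is not a constant 1/7: the successive one-sided tail ratios grow, roughly 7, 17, 43, 110 (a few lines of Python):

```python
from scipy.stats import norm

tails = [norm.sf(n) for n in range(1, 6)]        # one-sided tails, 1..5 sigma
ratios = [tails[i] / tails[i + 1] for i in range(4)]
print([round(r) for r in ratios])                # approximately [7, 17, 43, 110]
```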
Jun 17, 2013, 10:19:00 PM ...
Thanks. I'll study up.
Jun 18, 2013, 4:23:00 AM ...
reader Peter Golian said...
There is always " some " noisy factor in nature and therefore is statistics important. As I say there are no exactly two same things in universe. According to my lecture from basic statistics but other area as quantum physics, match cutting machine ,I know that there are no same matches with 30 mm exactly as set up of machine, there is always some difference due to machine. But if there is 28- 32 mm it is OK due to quality (match box properties, burning time). If I interprete it to quantum physics this noise could be due to test equipment performance ... ... But question is how many sigma are adequate to proof theory? This I think in future math should be deal with 8)
reader BobSykes said...
Briggs and many other Bayesians reject the frequentist approach in its entirety and regard confidence intervals and p values as patent absurdities. It's not clear how they would apply Bayes' theorem to the LHC results. It's also not clear whether Bayesians do not become effective frequentists once the results are in. They seem to be pure Bayesians only for the prior probability.
This justification sounds really crazy because in the normal formulation of the "Bayesian vs frequentist controversy", it's the Bayesians who should accept the notion of probability in a wider, more inclusive set of situations, in particular, they include various really subjective notions of probability.
I can't imagine how someone could reject the frequentist interpretation of probabilities, because it's the more indisputable one.
reader AngularMan said...
In reality even mathematical proofs are not noise-free, since they depend on the verification by humans or computers, and there is always a (very tiny) chance of error in those verifications. They just appear rigorous because the significance is so high.
Exactly but I actually wrote this thing in the text above, too. ;-D
I was eager to express that thought, and so I wrote the comment before reading the rest of the article.
Sorry :D
reader Albert Zotkin said...
How often does this universe repeat? What is the probability that a universe like this one exists? We can define the properties of this universe as the physical laws and constants that define its symmetries, but we can't measure the number of occurrences of this universe relative to the whole ensemble of equivalent possible universes. Therefore, string cosmology or string inflationary models are not statistically testable, so they must be tagged as pseudo-science ;-)
Stable isotopes to study sulfur amino acid utilization in broilers
R. M. Suzuki, L. G. Pacheco, J. C. P. Dorigam, J. C. Denadai, G. S. Viana, H. R. Varella, C. C. N. Nascimento, J. Van Milgen, N. K. Sakomura
Journal: animal / Volume 14 / Issue S2 / August 2020
Published online by Cambridge University Press: 10 June 2020, pp. s286-s293
Nutritionists have been discussing whether the dietary supplementation of cyst(e)ine is required as a part of the dietary methionine (Met) in the total sulfur amino acid (TSAA) requirement to achieve optimum performance in broilers. Part of Met is converted to cysteine (Cys) to meet the Cys requirement, especially for feather growth. The TSAA requirement has been determined by using graded levels of free Met in the diet, without supplementation of free cyst(e)ine. It has also been argued that the Met to Cys ratio (Met : Cys) changes with age and even with different Met sources. The objective of this study was to evaluate the two sources of Met, while determining the proportion of Met and Cys in total dietary TSAA that optimize the performance of broilers. A performance assay was carried out in a factorial arrangement (5 × 2) using 1080 broilers from 42 to 56 days of age fed diets having different dietary proportions of Met and Cys (44 : 56, 46 : 54, 48 : 52, 50 : 50 or 52 : 48) while maintaining the same dietary TSAA in the diets. Two synthetic Met sources (dl-Met or l-Met) were used for each of the diets with different dietary Met : Cys ratios. Twenty-one broilers of the same age were fed the diets 44 : 56, 48 : 52 and 52 : 48 by supplementing the diet with L-(15N) Met or L-(15N2) Cystine to study the metabolism of TSAA. No differences were observed between Met sources for feed intake, BW gain and feed conversion ratio (FCR; P > 0.05); however, FCR was numerically improved at 50 : 50 Met : Cys. Regarding TSAA utilization, the conversion of Met to Cys increased with increase in Met : Cys ratios, but the concentration of Met intermediates decreased. Broiler chickens responded to different dietary proportions of sulfur amino acids by altering their sulfur amino acid metabolism, and diets containing 50 : 50 Met : Cys is recommended for broilers of age 42 to 56 days.
Alcohol dependence ambulatory clinic in Hospital de São João
V. Teixeira Sousa, A. Costa, C. Costa, S. Fonseca, M. Mota, R. Grangeia, A. Pacheco Palha
Journal: European Psychiatry / Volume 22 / Issue S1 / March 2007
Published online by Cambridge University Press: 16 April 2020, p. S201
The importance of a consultation focused on the treatment of the most frequent substance dependence in our country – culturally "intoxicated" by the myths and traditions about alcohol intake – is unquestionable.
In alcohol-dependent patients that have no severe signs of withdrawal, detoxication can be safely and effectively undertaken in an ambulatory setting.
In this study, the authors intended to evaluate the socio-demographic and clinical characteristics of 115 individuals followed in Alcohol Dependence Clinic, in the past four years. Data were collected from their clinical registries.
Most patients (81,4%) were referred to this consultation exclusively for the alcohol detoxication program.
Most remarkable characteristics that define a socio-demographic profile of the studied population are: masculine gender (80,9%), mean age of 46,15 ± 10,6, without permanent occupation (57,7%) and from low socio-economical level (Classes III and IV of Graffar modified Score: 93,5%). Alcohol consumption pattern was most frequently the Cloninger's type II (53,2%), the most consumed beverage was wine (85,0%), with 52,1% of patients having the first consumptions during adolescence. In 69,2%, there was a positive familiar history of alcohol dependence.
On the topic of psychopharmacological treatment, there was the expected use of benzodiazepines, with tiapride being the second most prescribed medicine (71,7%).
After a six-month follow-up, most patients presented a reduction in alcohol consumption (54,7%).
This investigation may contribute to a qualitative improvement of care for alcohol-dependent patients who seek treatment and, eventually, to the future design of guidelines for the referral and management of these individuals.
Therapeutic Approach to Complicated Grief–An Example of Group Psychotherapy in Psychiatric Patients
J. Soares, S.L. Azevedo Pinto, A.C. Pinheiro, S. Pacheco, R. Curral
Journal: European Psychiatry / Volume 41 / Issue S1 / April 2017
Published online by Cambridge University Press: 23 March 2020, pp. s772-s773
Complicated Grief (CG) affects 7–10% of the grieving individuals in the general population. However, the incidence is much higher in psychiatric patients, reaching 70% in most samples. These individuals present many risk factors for such condition, demanding a particular attention and treatment approach. Most studies have shown that pharmacological treatment may help relieving depressive and anxiety symptoms, although they do not promote a consistent improvement of the grieving scenario. Several meta-analyses have recognized different psychological interventions as effective in dealing with the loss, decreasing psychological suffering and promoting adaptation. It is accepted that the benefits of the intervention overcome any possible harm.
To evaluate the impact of a group intervention (12 sessions) in pharmacologically stabilized psychiatric patients presenting with CG.
Patient selection was performed through a clinical interview and the fulfilment of the following psychometric tests: Complicated Grief Inventory; the Impact of Events Scale; Beck Depression Inventory; Social Support Scale. These assessment tools were also used to evaluate the impact of the intervention performed.
After the psychotherapeutic intervention, there were significant differences in the levels of depressive and post-traumatic stress symptoms.
Group intervention in CG has proven effective in this population, specially regarding depression and post-traumatic stress levels.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Milk production responses and rumen fermentation of dairy cows supplemented with summer brassicas
M. Castillo-Umaña, O. Balocchi, R. Pulido, P. Sepúlveda-Varas, D. Pacheco, S. Muetzel, R. Berthiaume, J. P. Keim
Journal: animal / Volume 14 / Issue 8 / August 2020
Forage brassicas, such as summer turnip (ST; Brassica rapa) and forage rape (FR; Brassica napus), are used as supplementary crops during summer. However, studies with lactating dairy cows fed these forages are limited and report inconsistent productive responses. The aim of this study was to determine dry matter intake, rumen fermentation and milk production responses of dairy cows in mid-lactation supplemented with and without summer ('ST' or 'FR') brassicas. Twelve multiparous lactating dairy cows were randomly allocated to three dietary treatments in a replicated 3 × 3 Latin square design balanced for residual effects over three 21-day periods. The control diet consisted of 16.2 kg DM of grass silage, 2.25 kg DM of commercial concentrate and 2.25 kg DM solvent-extracted soybean meal. For the other two dietary treatments, 25% of the amounts of silage and concentrates were replaced with FR or ST. The inclusion of forage brassicas had no effects on milk production (24.2 kg cow/day average) and composition (average milk fat and protein 43.2 and 33.6 g/l, respectively). Dry matter intake was 0.98 kg and 1.12 kg lower for cows supplemented with FR and ST, respectively, resulting in a greater feed conversion efficiency (1.35 kg milk/kg DM for ST and FR v. 1.27 kg milk/kg DM for the control diet). Intraruminal pH was lower for cows supplemented with ST compared to the control diet; however, it did not decrease below pH 5.8 at any time of the day. After feeding, the concentrations of total short-chain fatty acids (SCFAs) in rumen contents increased with ST supplementation compared to the control diet. Inclusion of FR in the diet increased the molar proportion of acetate (68.5 mmol/100 mmol) in total SCFA at the expense of propionate, measured 6 h after feeding of the forage. The molar proportion of butyric acid was greater with ST and FR supplementation (13.1 and 12 mmol/100 mmol, respectively) than in control cows. The estimated microbial nitrogen (N) flow was 89.1 g/day greater when supplementing FR compared to the control diet. Based on the haematological measures, the inclusion of summer brassica forages did not affect the health status of the animals. These results indicate that mid-lactation dairy cows fed brassicas are able to maintain production despite the reduced intake, probably due to improved rumen fermentation and therefore nutrient utilization.
Temporal fermentation and microbial community dynamics in rumens of sheep grazing a ryegrass-based pasture offered either in the morning or in the afternoon
R. E. Vibart, S. Ganesh, M. R. Kirk, S. Kittelmann, S. C. Leahy, P. H. Janssen, D. Pacheco
Journal: animal / Volume 13 / Issue 10 / October 2019
Eight ruminally-fistulated wethers were used to examine the temporal effects of afternoon (PM; 1600h) v. morning (AM; 0800 h) allocation of fresh spring herbage from a perennial ryegrass (Lolium perenne L.)-based pasture on fermentation and microbial community dynamics. Herbage chemical composition was minimally affected by time of allocation, but daily mean ammonia concentrations were greater for the PM group. The 24-h pattern of ruminal fermentation (i.e. time of sampling relative to time of allocation), however, varied considerably for all fermentation variables (P⩽0.001). Most notably amongst ruminal fermentation characteristics, ammonia concentrations showed a substantial temporal variation; concentrations of ammonia were 1.7-, 2.0- and 2.2-fold greater in rumens of PM wethers at 4, 6 and 8h after allocation, respectively, compared with AM wethers. The relative abundances of archaeal and ciliate protozoal taxa were similar across allocation groups. In contrast, the relative abundances of members of the rumen bacterial community, like Prevotella 1 (P=0.04), Bacteroidales RF16 group (P=0.005) and Fibrobacter spp. (P=0.008) were greater for the AM group, whereas the relative abundance of Kandleria spp. was greater (P=0.04) for the PM group. Of these taxa, only Prevotella 1 (P=0.04) and Kandleria (P<0.001) showed a significant interaction between time of allocation and time of sampling relative to feed allocation. Relative abundances of Prevotella 1 were greater at 2h (P=0.05), 4h (P=0.003) and 6h (P=0.01) after AM allocation of new herbage, whereas relative abundances of Kandleria were greater at 2h (P=0.003) and 4h (P<0.001) after PM allocation. The early post-allocation rise in ammonia concentrations in PM rumens occurred simultaneously with sharp increases in the relative abundance of Kandleria spp. and with a decline in the relative abundance of Prevotella. All measures of fermentation and most microbial community composition data showed highly dynamic changes in concentrations and genus abundances, respectively, with substantial temporal changes occurring within the first 8h of allocating a new strip of herbage. The dynamic changes in the relative abundances of certain bacterial groups, in synchrony with a substantial diurnal variation in ammonia concentrations, has potential effects on the efficiency by which N is utilised by the grazing ruminant.
Growth performance and carcass traits of steers finished on three different systems including legume–grass pasture and grain diets
A. P. B. Fruet, F. S. Stefanello, F. Trombetta, A. N. M. De Souza, A. G. Rosado Júnior, C. J. Tonetto, J. L. C. Flores, R. B. Scheibler, R. M. Bianchi, P. S. Pacheco, A. De Mello, J. L. Nörnberg
Journal: animal / Volume 13 / Issue 7 / July 2019
Inclusion of legumes in grass pastures optimizes the protein value of the forage and promotes improved digestibility. Therefore, we hypothesized that finishing steers on a novel combination of legumes and grass pasture would produce carcasses with acceptable traits when compared to carcasses from steers finished in feedlot systems. In this study, we evaluated the effects of finishing steers on three systems: grazing legume–grass pasture containing oats, ryegrass, white and red clover (PAST); grazing PAST plus supplementation with whole corn grain (14 g/kg BW; SUPP); and a feedlot-confined system with concentrate only (28 g/kg BW, consisting of 850 g/kg of whole corn grain and 150 g/kg of protein–mineral–vitamin supplement; GRAIN) on growth performance of steers, carcass traits and digestive disorders. Eighteen steers were randomly assigned to one of the three dietary treatments and finished for 91 days. Data regarding pasture and growth performance were collected during three different periods (0 to 28, 29 to 56 and 57 to 91 days). Subsequently, steers were harvested to evaluate carcass traits and the presence of rumenitis, abomasitis and liver abscesses. The legume–grass pasture provided more than 19% protein on a dry matter basis. In addition, pasture in paddocks where steers were assigned to the SUPP and PAST treatments showed similar nutritional quality. When compared to PAST, finishing on SUPP increased total weight gain per hectare, stocking rate, and daily and total weight gains. The increase in weight gain was greater for GRAIN than for SUPP and PAST. Steers finished on GRAIN had greater hot carcass weight, fat thickness and marbling score when compared to PAST. However, these attributes did not differ between GRAIN and SUPP. Abomasum lesions were more prevalent in steers finished on GRAIN than in those finished on PAST. The results of this research showed that it is possible to produce carcasses with desirable market weight and fat thickness by finishing steers on legume–grass pasture containing oats, ryegrass, white and red clover. Moreover, supplementing steers with corn while grazing legume–grass pasture produced carcass traits similar to those of steers fed corn only.
Indium and selenium distribution in the Neves-Corvo deposit, Iberian Pyrite Belt, Portugal
J. R. S. Carvalho, J. M. R. S. Relvas, A. M. M. Pinto, M. Frenzel, J. Krause, J. Gutzmer, N. Pacheco, R. Fonseca, S. Santos, P. Caetano, T. Reis, M. Gonçalves
Journal: Mineralogical Magazine / Volume 82 / Issue S1 / May 2018
Published online by Cambridge University Press: 28 February 2018, pp. S5-S41
High concentrations of indium (In) and selenium (Se) have been reported in the Neves-Corvo volcanic-hosted massive sulfide deposit, Portugal. The distribution of these ore metals in the deposit is complex as a result of the combined effects of early ore-forming processes and late tectonometamorphic remobilization. The In and Se contents are higher in Cu-rich ore types and lower in Zn-rich ore types. At the deposit scale, both In and Se correlate positively with Cu, whereas their correlations with Zn are close to zero. This argues for a genetic connection between Cu, In and Se in terms of metal sourcing and precipitation. However, re-distribution and re-concentration of In and Se associated with tectonometamorphic deformation are also processes of major importance for the present-day distribution of these metals throughout the whole deposit. Although minor roquesite and other In-bearing phases were recognized, it is clear that most In within the deposit is incorporated within sphalerite and chalcopyrite. When chalcopyrite and sphalerite coexist, the In content in sphalerite (avg. 1400 ppm) is, on average, 2–3 times higher than in chalcopyrite (avg. 660 ppm). The In content in stannite (avg. 1.3 wt.%) is even higher than in sphalerite, but the overall abundance of stannite is subordinate to either sphalerite or chalcopyrite. Selenium is dispersed widely between many different ore minerals, but galena is the main Se-carrier. On average, the Se content in galena is ~50 times greater than in either chalcopyrite (avg. 610 ppm) or sphalerite (avg. 590 ppm). The copper concentrate produced at Neves-Corvo contains a very significant In (+Se) content, well above economic values, should the copper smelters recover it. Moreover, the high In content of sphalerite from some Cu-Zn ores, or associated with shear structures, could possibly justify, in the future, a selective exploitation strategy for the production of an In-rich zinc concentrate.
The AusBeef model for beef production: II. sensitivity analysis
H. C. DOUGHERTY, E. KEBREAB, M. EVERED, B. A. LITTLE, A. B. INGHAM, J. V. NOLAN, R. S. HEGARTY, D. PACHECO, M. J. MCPHEE
Journal: The Journal of Agricultural Science / Volume 155 / Issue 9 / November 2017
Published online by Cambridge University Press: 03 August 2017, pp. 1459-1474
The present study evaluated the behaviour of the AusBeef model for beef production as part of a 2 × 2 study simulating performance on forage-based and concentrate-based diets from Oceania and North America for four methane (CH4)-relevant outputs of interest. Three sensitivity analysis methods, one local and two global, were conducted. Different patterns of sensitivity were observed between forage-based and concentrate-based diets, but patterns were consistent within diet types. For the local analysis, 36, 196, 47 and 8 out of 305 model parameters had normalized sensitivities of 0, >0, >0·01 and >0·1 across all diets and outputs, respectively. No parameters had a normalized local sensitivity >1 across all diets and outputs. However, daily CH4 production had the greatest number of parameters with normalized local sensitivities >1 for each individual diet. Parameters that were highly sensitive for global and local analyses across the range of diets and outputs examined included terms involved in microbial growth, volatile fatty acid (VFA) yields, maximum absorption rates and their inhibition due to pH effects and particle exit rates. Global sensitivity analysis I showed the high sensitivity of forage-based diets to lipid entering the rumen, which may be a result of the use of a feedlot-optimized model to represent high-forage diets and warrants further investigation. Global sensitivity analysis II showed that when all parameter values were simultaneously varied within ±10% of initial value, >96% of output values were within ±20% of the baseline, which decreased to >50% when parameter value boundaries were expanded to ±25% of their original values, giving a range for robustness of model outputs with regards to potential different 'true' parameter values. There were output-specific differences in sensitivity, where outputs that had greater maximum local sensitivities displayed greater degrees of non-linear interaction in global sensitivity analysis I and less variance in output values for global sensitivity analysis II. For outputs with less interaction, such as the acetate : propionate ratio and microbial protein production, the single most sensitive term in global sensitivity analysis I contributed more to the overall total-order sensitivity than for outputs with more interaction, with an average of 49, 33, 15 and 14% of total-order sensitivity for microbial protein production, acetate : propionate ratio, CH4 production and energy from absorbed VFAs, respectively. Future studies should include data collection for highly sensitive parameters reported in the present study to improve overall model accuracy.
The AusBeef model for beef production: I. Description and evaluation
H. C. DOUGHERTY, E. KEBREAB, M. EVERED, B. A. LITTLE, A. B. INGHAM, R. S. HEGARTY, D. PACHECO, M. J. MCPHEE
As demand for animal products, such as meat and milk, increases, and concern over environmental impact grows, mechanistic models can be useful tools to better represent and understand ruminant systems and evaluate mitigation options to reduce greenhouse gas emissions without compromising productivity. The objectives of the present study were to describe the representation of processes for growth and enteric methane (CH4) production in AusBeef, a whole-animal, dynamic, mechanistic model for beef production; to evaluate AusBeef for its ability to predict daily methane production (DMP, g/day), gross energy intake (GEI, MJ/day) and methane yield (MY, MJ CH4/MJ GEI) using an independent data set; and to compare AusBeef estimates to those from the empirical equations featured in the current National Academies of Sciences, Engineering and Medicine (NASEM, 2016) beef cattle requirements for growth and the Ruminant Nutrition System (RNS), a dynamic, mechanistic model of Tedeschi & Fox (2016). AusBeef incorporates a unique fermentation stoichiometry that represents four microbial groups: protozoa, amylolytic bacteria, cellulolytic bacteria and lactate-utilizing bacteria. AusBeef also accounts for the effects of ruminal pH on microbial degradation of feed particles. Methane emissions are calculated from net ruminal hydrogen balance, which is defined as the difference between inputs from fermentation and outputs due to microbial use and biohydrogenation. AusBeef performed similarly to the NASEM empirical model in terms of prediction accuracy and error decomposition, and with lower root mean square prediction error (RMSPE) than the RNS mechanistic model when expressed as a percentage of the observed mean (RMSPE, %), and the majority of error was non-systematic. For DMP, RMSPE for AusBeef, NASEM and RNS were 24·0, 19·8 and 50·0 g/day for the full data set (n = 35); 25·6, 18·2 and 56·2 g/day for forage diets (n = 19); and 21·8, 21·5 and 41·5 g/day for mixed diets (n = 16), respectively. Concordance correlation coefficients (CCC) were highest for GEI, with all models having CCC > 0·66, and higher CCC for forage diets than for mixed diets, while CCC were lowest for MY, particularly for forage diets. Systematic error increased for all models on forage diets, largely due to an increase in error due to mean bias, and while all models performed well for mixed diets, further refinements are required to improve the prediction of CH4 on forage diets.
By Krista Adamek, Ana Luisa K. Albernaz, J. Marcio Ayres†, Andrew J. Baker, Karen L. Bales, Adrian A. Barnett, Christopher Barton, John M. Bates, Jennie Becker, Bruna M. Bezerra, Júlio César Bicca-Marques, Richard Bodmer, Jean P. Boubli, Mark Bowler, Sarah A. Boyle, Christini Barbosa Caselli, Janice Chism, Elena P. Cunningham, José Maria C. da Silva, Lesa C. Davies, Nayara de Alcântara Cardoso, Manuella A. de Souza, Stella de la Torre, Ana Gabriela de Luna, Thomas R. Defler, Anthony Di Fiore, Eduardo Fernandez-Duque, Stephen F. Ferrari, Wilsea M.B. Figueiredo-Ready, Tracy Frampton, Paul A. Garber, Brian W. Grafton, L. Tremaine Gregory, Maria L. Harada, Amy Harrison-Levine, Walter C. Hartwig, Stefanie Heiduck, Eckhard W. Heymann, André Hirsch, Leandro Jerusalinsky, Gareth Jones, Richard F. Kay, Martin M. Kowalewski, Shawn M. Lehman, Laura Marsh, Jesús Martinez, William A. Mason, Hope Matthews, Wynlyn McBride, Shona McCann-Wood, W. Scott McGraw, D. Jeffrey Meldrum, Sally P. Mendoza, Nohelia Mercado, Russell A. Mittermeier, Mirjam N. Nadjafzadeh, Marilyn A. Norconk, Robert Gary Norman, Marcela Oliveira, Marcelo M. Oliveira, Maria Juliana Ospina Rodríguez, Erwin Palacios, Suzanne Palminteri, Liliam P. Pinto, Marcio Port-Carvalho, Leila Porter, Carlos Portillo-Quintero, George Powell, Ghillean T. Prance, Rodrigo C. Printes, Pablo Puertas, P. Kirsten Pullen, Helder L. Queiroz, Luis Reginaldo R. Rodrigues, Adriana Rodríguez, Alfred L. Rosenberger, Anthony B. Rylands, Ricardo R. Santos, Horacio Schneider, Eleonore Z.F. Setz, Suleima S.B. Silva, José S. Silva Júnior, Andrew T. Smith, Marcelo C. Sousa, Antonio S. Souto, Wilson R. Spironello, Masanaru Takai, Marcelo F. Tejedor, Cynthia L. Thompson, Diego G. Tirira, Raul Tupayachi, Bernardo Urbani, Liza M. Veiga, Marianela Velilla, João Valsecchi, Jean-Christophe Vié, Tatiana M. Vieira, Suzanne E. Walker-Pacheco, Rob Wallace, Patricia C. Wright, Charles E. Zartman
Edited by Liza M. Veiga, Universidade Federal do Pará, Brazil, Adrian A. Barnett, Roehampton University, London, Stephen F. Ferrari, Universidade Federal de Sergipe, Brazil, Marilyn A. Norconk, Kent State University, Ohio
Book: Evolutionary Biology and Conservation of Titis, Sakis and Uacaris
Published online: 05 April 2013
Print publication: 11 April 2013, pp xii-xv
Discrete Gauge Fields for Graphene Membranes under Mechanical Strain
James V. Sloan, Alejandro A. Pacheco Sanjuan, Zhengfei Wang, Cedric M. Horvath, Salvador Barraza-Lopez
Published online by Cambridge University Press: 02 September 2013, pp. 31-34
Mechanical strain creates strong gauge fields in graphene, offering the possibility of controlling its electronic properties. We developed a gauge field theory on a honeycomb lattice valid beyond first-order continuum elasticity. Along the way, we resolve a recent controversy on the theory of strain engineering in graphene: there are no K-point dependent gauge fields.
Polymyxin B Consumption and Incidence of Gram-Negative Bacteria Intrinsically Resistant to Polymyxins
Diego R. Falci, Liliane S. Pacheco, Luciana S. Puga, Renato C. F. da Silva, Anelise P. Alves, Paulo R. P. Behar, Alexandre P. Zavascki
Journal: Infection Control & Hospital Epidemiology / Volume 33 / Issue 5 / May 2012
Multilocus sequence types of invasive Corynebacterium diphtheriae isolated in the Rio de Janeiro urban area, Brazil
S. Z. VIGUETTI, L. G. C. PACHECO, L. S. SANTOS, S. C. SOARES, F. BOLT, A. BALDWIN, C. G. DOWSON, M. L. ROSSO, N. GUISO, A. MIYOSHI, R. HIRATA, A. L. MATTOS-GUARALDI, V. AZEVEDO
Invasive infections caused by Corynebacterium diphtheriae in vaccinated and non-vaccinated individuals have been reported increasingly. In this study we used multilocus sequence typing (MLST) to study genetic relationships between six invasive strains of this bacterium isolated solely in the urban area of Rio de Janeiro, Brazil, during a 10-year period. Of note, all the strains rendered negative results in PCR reactions for the tox gene, and four strains presented an atypical sucrose-fermenting ability. Five strains represented new sequence types. MLST results did not support the hypothesis that invasive (sucrose-positive) strains of C. diphtheriae are part of a single clonal complex. Instead, one of the main findings of the study was that such strains can be normally found in clonal complexes with strains related to non-invasive disease. Comparative analyses with C. diphtheriae isolated in different countries provided further information on the geographical circulation of some sequence types.
Instrumentation for a plasma needle applied to E. coli bacteria elimination
R. Peña-Eguiluz, J. A. Pérez-Martínez, J. Solís-Pacheco, B. Aguilar-Uscanga, R. López-Callejas, A. Mercado-Cabrera, R. Valencia-Alvarado, A. E. Muñoz-Castro, S. R. Barocio, A. de la Piedad Beneitez
Journal: The European Physical Journal - Applied Physics / Volume 49 / Issue 1 / January 2010
Published online by Cambridge University Press: 26 November 2009, 13109
Microplasmas are nowadays a powerful tool with multiple practical applications. We report the performance of specific instrumentation for a plasma needle capable of producing non-thermal plasmas and for a DBD reactor able to produce atmospheric-pressure plasmas, both designed and already constructed. These devices operate at 13.56 MHz and are driven by a specifically built radio frequency (RF) resonant converter. The reactors, which operate at atmospheric pressure in a He-air gas mixture at a 1.5 SLPM flow, have been successfully applied to eliminate E. coli bacteria. In the needle case, bacterial samples were typically submitted to a 500 V peak voltage plasma discharge for 120 s. In the DBD treatment, the samples were processed with typical 750 V peak voltage plasma discharges for 80 s. The sample pH was used as a criterion to measure the effectiveness of the plasma treatment, such that a return to the basal pH value after treatment can be taken as validation of complete bacterial elimination.
Effects of rotation and sloping terrain on the fronts of density currents
J. C. R. HUNT, J. R. PACHECO, A. MAHALOV, H. J. S. FERNANDO
Journal: Journal of Fluid Mechanics / Volume 537 / 25 August 2005
Print publication: 25 August 2005
The initial stage of the adjustment of a gravity current to the effects of rotation with angular velocity $f/2$ is analysed using a short-time analysis where Coriolis forces are initiated in an inviscid von Kármán–Benjamin gravity current front at $t_F = 0$. It is shown how, on a time-scale of order $1/f$, as a result of ageostrophic dynamics, the slope and front speed $U_F$ are much reduced from their initial values, while the transverse anticyclonic velocity parallel to the front increases from zero to $O(N H_0)$, where $N = \sqrt{g'/H_0}$ is the buoyancy frequency and $g' = g \Delta\rho/\rho_0$ is the reduced acceleration due to gravity. Here $\rho_0$ is the density and $\Delta\rho$ and $H_0$ are the density difference and initial height of the current. Extending the steady-state theory to account for the effect of the slope $\sigma$ on the bottom boundary shows that, without rotation, $U_F$ has a maximum value for $\sigma = \pi/6$, while with rotation, $U_F$ tends to zero on any slope. For the asymptotic stage when $f t_F \gg 1$, the theory of unsteady waves on the current is reviewed using nonlinear shallow-water equations and the van der Pol averaging method. Their motions naturally split into a 'balanced' component satisfying the Margules geostrophic relation and an equally large 'unbalanced' component, in which there is horizontal divergence and ageostrophic vorticity. The latter is responsible for nonlinear oscillations in the current on a time scale $f^{-1}$, which have been observed in the atmosphere and field experiments. Their magnitude is mainly determined by the initial potential energy in relation to that of the current and is proportional to the ratio $\sqrt{Bu} = L_R/R_0$, where $L_R = N H_0/f$ is the Rossby deformation radius and $R_0$ is the initial radius. The effect of slope friction also prevents the formation of a steady front. From the analysis it is concluded that a weak mean radial flow must be driven by the ageostrophic oscillations, preventing the mean front speed $U_F$ from halting sharply at $f t_F \sim 1$. Depending on the initial value of $L_R/R_0$, physical arguments show that $U_F$ decreases slowly in proportion to $(f t_F)^{-1/2}$, i.e. $U_F/U_{F_0} = F(f t_F, Bu)$. Thus the front only tends to the geostrophic asymptotic state of zero radial velocity very slowly (i.e. as $f t_F \rightarrow \infty$) for finite values of $L_R/R_0$. However, as $L_R/R_0 \rightarrow 0$, it reaches this state when $f t_F \sim 1$. This analysis of the overall nonlinear behaviour of the gravity current is consistent with two Eulerian numerical simulations of the varying form of the rotating gravity current: a two-dimensional non-hydrostatic (Navier–Stokes) one and an axisymmetric hydrostatic (shallow-water) one. When the effect of surface friction is considered, it is found that the mean movement of the front is significantly slowed. Furthermore, the oscillations with angular frequency $f$ and the slow growth of the radius, when $f t_F \ge 1$, are consistent with recent experiments.
Correlation Between the AlN Buffer Layer Thickness and the GaN Polarity in GaN/AlN/Si(111) Grown by MBE
A. M. Sanchez, P. Ruterana, P. Vennegues, F. Semond, F. J. Pacheco, S. I. Molina, R. Garcia, M. A. Sanchez-Garcia, E. Calleja
Journal: MRS Online Proceedings Library Archive / Volume 743 / 2002
Published online by Cambridge University Press: 11 February 2011, L3.25
In this work it is shown that thin AlN buffer layers lead to N-polarity GaN epilayers with a high density of inversion domains. When the AlN thickness increases, the polarity of the epilayer changes to Ga. The use of a low-temperature AlN nucleation layer leads to a flat AlN/Si(111) interface. This contributes to decreasing the inversion domain density in the overgrown GaN epilayer, which has Ga polarity.
Crystalline Structure Determination of Anisotropic Dimethyl Terephthalate Crystallites by Micro-Raman Spectroscopy
R. Rodríguez, S. Pacheco, S. Vargas, S. Jiménez, V. M. Castaño
Journal: Journal of Materials Research / Volume 15 / Issue 6 / June 2000
A novel approach to determine the molecular orientation of dimethyl terephthalate molecules with respect to the direction of the crystal axis is reported. This determination was achieved by changing the crystal orientation with respect to the incident laser light of a micro-Raman spectrometer. Raman spectra were obtained at different incidence angles of the laser beam with respect to the crystal symmetry axis. The intensities of some specific bands were analyzed as a function of the tilting angle. With this information the molecular orientation with respect to the crystal axis was determined making use of a simple mechanical model.
Stellar and Circumstellar Activity in the Be Star EW Lac from the Multi-site 1993 Campaign
A.M. Hubert, M. Floquet, R. Hirata, D. McDavid, J. Zorec, D. Gies, M. Hahula, E. Janot-Pacheco, E. Kambe, N.V. Leister, S. Stefl, A. Tarasov
Journal: International Astronomical Union Colloquium / Volume 175 / 2000
The Be shell star EW Lac was observed in September 1993 during a multi-site campaign. Results from visual spectroscopy and polarimetry are summarized here. He I 6678 profiles have been compared to previous observations made in 1989 and show an additional complex and highly variable circumstellar component, which may be due to material expelled from the star just prior to these observations. Two groups of frequencies are found again in the 1993 observations compared with the 1989 ones. In the framework of nrp, they could be associated with low-degree g-modes.
Progression of the Surface Roughness of N+ Silicon Epitaxial Films as Analyzed by AFM
S. John, E. J. Quinones, B. Ferguson, K. Pacheco, C. B. Mullins, S. K. Banerjee
Published online by Cambridge University Press: 21 February 2011, 123
We report on the morphology of heavily phosphorus-doped silicon films grown by ultra-high-vacuum chemical vapor deposition at temperatures of ∼550 °C. The effects of PH3 on epitaxial films have been examined for silicon deposited using SiH4 and Si2H6. It is found that films grown using silane experience an increase in surface roughness with increasing phosphine partial pressure. AFM and RHEED studies indicate 3-D growth. As epitaxy progresses, it is believed that phosphorus segregation on the growing film surface greatly diminishes the adsorption and surface mobility of the silicon-bearing species. Initial Si deposition results in a pitted surface, but as growth advances and the phosphorus coverage increases, growth within the pits decreases the surface roughness. In contrast to SiH4, it is found that Si2H6 provides excellent-quality, smooth films even at high PH3 partial pressures.
Naturally acquired infections with Leishmania enriettii Muniz and Medina 1948 in guinea-pigs from São Paulo, Brazil
M. I. Machado, R. V. Milder, R. S. Pacheco, M. Silva, R. R. Braga, R. Lainson
Journal: Parasitology / Volume 109 / Issue 2 / August 1994
Two domestic guinea-pigs (Cavia porcellus), bought in Pinheiros, São Paulo State, Brazil, were taken by their owners to a farm in the rural district of Capão Bonito, close to the Atlantic Forest, São Paulo, where they both developed tumour-like and ulcerating lesions on the ears. The causative agent was identified as Leishmania (L.) enriettii, based on biological characters and isoenzyme profiles. Sources of the parasite in wild mammals and the possible sandfly vector species are discussed.
An Electrical Characterisation Methodology for Benchmarking Memristive Device Technologies
Spyros Stathopoulos, Loukas Michalas, Ali Khiat, Alexantrou Serb & Themis Prodromakis
The emergence of memristor technologies brings new prospects for modern electronics via enabling novel in-memory computing solutions and energy-efficient and scalable reconfigurable hardware implementations. Several competing memristor technologies have been presented, each bearing distinct performance metrics across multi-bit memory capacity, low-power operation, endurance, retention and stability. Application needs, however, are constantly driving the push towards higher performance, which necessitates the introduction of a standard benchmarking procedure for fair evaluation across distinct key metrics. Here we present an electrical characterisation methodology that amalgamates several testing protocols in an appropriate sequence adapted for memristor benchmarking needs, in a technology-agnostic manner. Our approach is designed to extract information on all aspects of device behaviour, ranging from deciphering underlying physical mechanisms to assessing different aspects of electrical performance and even generating data-driven device-specific models. Importantly, it relies solely on standard electrical characterisation instrumentation that is accessible in most electronics laboratories and can thus serve as an independent tool for understanding and designing new memristive device technologies.
Emerging memory-resistive devices, also known as memristors1, have exhibited an unmatched potential for a broad range of applications ranging from non-volatile memories2 to neuromorphic computing3,4 and reconfigurable circuits5,6. As the scope of these resistive memories expands, there is a growing interest in identifying all appropriate techniques for evaluating the different attributes of electrical performance7 and the physical aspects8 of Resistive Random Access Memory (RRAM) devices. While these techniques do offer valuable insights into the operation and underpinning physical aspects of devices, they are limited to individual performance metrics. In order to establish a workflow that evaluates devices in a consistent manner and can be transferred across different laboratories, a more unified testing framework is needed.
Our methodology presents a characterisation suite that allows us to fully evaluate RRAM devices in a consistent and repeatable manner. Due to its all-electrical nature, it does not require expensive equipment, and its modular structure gives access to insights on the underlying mechanisms without resorting to complex and highly specialised equipment. Having multiple steps for testing a device in sequence empowers the user to cross-validate experimental observations as they occur, through the complementarity of the individual modules. We endeavour not only to cover benchmarking performance aspects of the device but also to capture signatures related to the underpinning switching mechanism, providing useful insights on device operation without the need for bespoke and expensive physicochemical validation tools. Our overarching aim is for this methodology to serve as an independent tool for pushing the development frontiers of novel memristive device technologies and their translation into emerging applications. Having a standardised methodology will also help with validating published data by making it possible to repeat, without any ambiguity, memristors' testing procedures.
The characterisation protocol, the overview of which can be seen in Fig. 1, consists of a series of consecutive modules, each geared towards a specific performance target. Initially we deal with the functionality, if any, of the device itself. In effect we query the capacity of any two-terminal device to act as a tuneable resistive element and determine its switching threshold and polarity dependency, forming a module we herein call Switching Dynamics. Next, we evaluate the stability of the Device Under Test (DUT) in its given resistive state, thus evaluating the existence of volatile (metastable) dynamics. This is accomplished through a series of pulse stimuli, with voltage amplitudes below the switching threshold (sub-threshold) determined from the previous step. Our functionality testing is concluded with temperature-dependent voltage cycling that can provide insights into the conduction mechanisms governing the switching in the DUT. This can be performed by considering the switching (supra-threshold) and the non-switching (sub-threshold) regimes of operation9.
An overview of the proposed characterisation procedure introduced in this paper. Testing is split into four modules depending on their particular scope. Functionality testing establishes the switching capacity of the memristive cell as well as the dominant conduction mechanisms, and Benchmarking evaluates the actual performance of the device under test. Although Functionality and Benchmarking are to be understood as a sequence, the operating range of the device can be tuned using the Electroforming procedure. Modelling is finally used to extract a behavioural model from fully characterised devices.
Once a tuneable resistive functionality has been established, a series of benchmarking routines can be used to examine the actual performance metrics of the DUT. Given that a key feature of such elements is their history-dependence, we first evaluate the DUT characteristics with protocols that are least likely to lead to an irreversible change in the device, and then progressively increase the applied stimuli. Initially, we employ a bespoke programming protocol to determine the ultimate memory capacity of the device in terms of the number of non-volatile resistive states, as a means to determine the potential of the device to operate in an analogue fashion, a key aspect of reconfigurable electronics. After the resistive states of the device have been identified, several retention steps across a series of these states can inform us of their stability as well as the DUT's ability to retain the observed memory window that defines the dynamic range of switching. Moreover, another key metric is the number of cycles that a DUT can undergo before failing. Endurance testing can be performed in a bespoke manner, based on the intended use of operation, for example either between consecutive memory states or between the extremes defining the DUT OFF/ON ratio.
We note that all of the above are dependent on the DUT preconditioning, often described as electroforming, a process that allows setting devices into distinct operating resistive "bands". As electroforming fundamentally affects the physical characteristics of the DUT, both on a structural and an interfacial level, this evaluation process should be repeated post-electroforming, or performed directly (without electroforming) for technologies that are electroforming-free.
Finally, to properly integrate a device into a circuit design workflow it is also important to have accurate behavioural models. The proposed memristor testing methodology thus culminates in the production of a phenomenological model that is driven by data and can closely match the response of the DUT for a specific stimulus within the operating range that has been established throughout testing. Overall, the introduced methodology offers a holistic, yet versatile, characterisation routine that: (i) incorporates traditional techniques in standard use, (ii) introduces advanced techniques to capture finer effects and (iii) refines specialist techniques oriented towards understanding the underlying physical mechanisms.
Functionality Testing
Switching dynamics
The first step towards characterising a new DUT should be the determination of its actual switching capability. This can be done by assessing its switching dynamics with a two-stage characterisation algorithm10. Initially, pulses of alternating polarity are applied to the DUT to determine the direction of change in the resistance of the device for a given stimulus polarity. Then, voltage ramps are used to determine the actual change in the resistance. As a typical example of this routine, Fig. 2 shows the relative resistive response of two different devices (Pt/TiO2/Pt and Pt/TiO2/Au at the ~30 kΩ and ~5 kΩ operating range) after the application of a series of 100 μs wide pulses of gradually increasing amplitude. From these experimental data one can establish the switching regime for the two DUTs. In these examples, the first shows a clear bipolar response to pulses of different polarity, which is the typical behaviour expected for a bistable memory device. However, the second device exhibits a hybrid bipolar/unipolar behaviour, where the applied bias can result in both an increase and a decrease of the device resistance depending on the stimulus amplitude: when the applied bias surpasses a threshold, the behaviour of the device changes to unipolar. Although in the first case the operating boundaries of the device are clear, in the second case this analysis allows us to establish an operating voltage range where the device remains strictly within the bipolar switching regime, depending on one's application needs. This tool is essential for identifying appropriate biases for any DUT and for establishing its switching threshold for a pre-defined pulse width.
Switching dynamics. Relative resistance change in response to a given programming voltage in relation to its initial resistance (R0). Typical response of a Pt/TiO2/Pt device exhibiting bipolar behaviour (left) and a Au/TiO2/Pt exhibiting a hybrid unipolar/bipolar behaviour (right). The switching thresholds for an 1% change are highlighted. In the case of the second device the areas of unipolar operation are additionally highlighted. Programming pulse width is fixed at 1 μs.
It is important to mention that capacitance (either of the measuring system or the device itself) may affect the quality of the applied pulses leading to bandwidth limitations. This could have a detrimental effect on the outcome. In our approach we do not aim to differentiate between devices with different high-frequency characteristics but eliminate such differences by necessitating pulses with fast rise and fall times (or respectively longer pulse widths) in order to impose a quasi-DC operating regime.
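To make the two-stage routine concrete, the sketch below emulates its first stage in Python. The `ToyDevice` class, its threshold behaviour and the `apply_pulse`/`read_resistance` names are purely illustrative stand-ins for real hardware and its driver, not the API of any specific instrument; only the pulse-sequencing logic reflects the procedure described above.

```python
# Minimal sketch of stage one of the switching-dynamics routine: alternating
# polarity pulses of stepped amplitude, recording the relative resistance
# change after each stimulus. ToyDevice is an illustrative stand-in for real
# hardware; replace it with your instrument driver.

class ToyDevice:
    """Crude bipolar memristor emulator with a ~1 V switching threshold."""
    def __init__(self, r0=30e3):
        self.r = r0

    def apply_pulse(self, v, width_s):
        if v > 1.0:                 # positive pulses above threshold: SET
            self.r *= 0.97
        elif v < -1.0:              # negative pulses above threshold: RESET
            self.r *= 1.03

    def read_resistance(self, v_read=0.2):
        return self.r               # sub-threshold read leaves state intact

def switching_dynamics(dev, amplitudes, width_s=100e-6):
    r0 = dev.read_resistance()
    trace = []
    for amp in amplitudes:
        for sign in (+1, -1):       # probe both polarities at every level
            dev.apply_pulse(sign * amp, width_s)
            trace.append((sign * amp, (dev.read_resistance() - r0) / r0))
    return trace

for v, dr in switching_dynamics(ToyDevice(), [0.5, 1.0, 1.5, 2.0]):
    print(f"{v:+.1f} V -> dR/R0 = {dr:+.3f}")
```

Plotting dR/R0 against the applied voltage then reads off the switching polarity and threshold, in the spirit of Fig. 2.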
Switching metastability evaluation
Once a DUT's functionality is established, we proceed with assessing its metastability or volatile characteristics, i.e. the attribute of several memristive technologies to revert back to their initial resistive state. This is accomplished through the use of non-switching pulses that allow reading the DUT resistive state without affecting it directly, i.e. without switching the DUT any further. Such variations in the resistance are correlated with the inherent instability of the device itself, for example due to re-oxidation11 and/or other mechanisms12, depending on the device structure. For DUTs that exhibit this kind of volatility, it is crucial to determine the timescale of said variations in order to account for them in experiment planning and application design.
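A volatility check of this kind reduces to sampling the state with read-only pulses over time. The sketch below keeps the same hypothetical driver convention as before (a `read_resistance` callable is passed in); the sampling window and interval are arbitrary example values.

```python
import time

def assess_volatility(read_resistance, duration_s=60.0, interval_s=1.0):
    """Sample the resistive state with non-switching read pulses only and
    report the net relative drift. A systematic decay back towards the
    pre-programmed state signals volatile (metastable) behaviour, and the
    timescale over which the drift saturates is the quantity of interest."""
    t0 = time.monotonic()
    samples = [(0.0, read_resistance())]
    while time.monotonic() - t0 < duration_s:
        time.sleep(interval_s)
        samples.append((time.monotonic() - t0, read_resistance()))
    r_first, r_last = samples[0][1], samples[-1][1]
    return samples, (r_last - r_first) / r_first   # net relative drift
```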
Temperature dependent I-V characterisation
Perhaps the most celebrated testing procedure for memristive technologies is capturing their current-voltage characteristics (I–Vs). Recording the DUT's I–V signature provides: (i) a straightforward test of the device operation, (ii) indications of any type (volatile or stable) of switching capability and (iii) a deeper insight into the device physics, i.e. the mechanism underneath the electrical response. Recording the DUT's I–V characteristic signature provides an initial indication of its resistive switching character, i.e. bipolar or unipolar, whilst helping to qualitatively identify the SET/RESET behaviour. As one of the prime characteristics of memristive devices is their history dependence, a bias ramp that constitutes a succession of pulse stimuli with a step-increased amplitude will in principle have a cumulative effect, and thus any observable switching at a given stimulus amplitude cannot be expected to be repeatable with a single pulse of the same amplitude. The extracted information can also be used in subsequent characterisation steps, such as the endurance and/or multistate memory evaluation routines, described in more detail in the following sections. The identified operating switching characteristics (Fig. 3a) can also be used to complement and corroborate those attributes identified through the switching dynamics evaluation routine.
Physical mechanisms through I-V characteristics. (a) Regular gradual switching I-V of a bipolar device (Pt/TiO2/Pt – electroformed device). (b) Hysteresis loop and drift of the I–V arising from the multiple acquisition iterations. This behaviour denotes a strong ionic character in the core metal oxide thin film (Pt/TiO2/Pt – pristine device). (c) Typical asymmetric (red) and symmetric (yellow) I–Vs obtained on a Pt (TE)/TiO2/Au (BE) and on an Al (TE)/TiO2/Au (BE) stack, respectively. This is an initial indication of interface- and material-controlled transport. (d) Temperature analysis for pristine Metal-Oxide-Metal stacks depicting the evolution of I–V characteristics and (e) temperature-dependent signature plots derived from them following the analysis presented in16. This analysis allows for identifying the dominant transport mechanism. This particular case corresponds to Schottky emission13,16.
Besides providing valuable switching performance characteristics, the I–V evaluation is also capable of revealing the nature of the underlying switching mechanism. An unstable/non-reproducible I–V (Fig. 3b) could be associated with metastability induced by movable ions, related to oxygen vacancies or even interstitials. Movable ions, however, can also elicit stable (non-volatile) transitions that result in stable I–Vs presenting notable hysteresis loops over a full acquisition cycle. The ions' contribution to the DUT's conduction could be direct or indirect, for example through modifying the charge close to the interface and in consequence the potential barrier, overall resulting in a resistance modulation. An asymmetric I-V response with respect to the bias polarity is an indication of transport determined by the interface barriers, whilst a symmetric curve should be associated with core-material-controlled transport (Fig. 3c).
The in-depth clarification of the transport properties requires assessing the temperature dependence of the DUT's I–V. This is because the various conduction mechanisms exhibit both field and temperature dependencies13, and thus the recording of the I–V curves at different temperatures (Fig. 3d) allows the extraction of relevant characteristic signature plots. These could take the form of simple Arrhenius plots or more sophisticated versions when the dominant mechanism involves processes such as hopping, Frenkel-Poole emission14 or Schottky emission over an interface barrier15,16 (as depicted in Fig. 3e). Further assessment of such signature plots allows for quantitatively extracting parameters related to the DUT's interface barrier height or the activation energy of the involved defects, in the cases of interface- and material-controlled transport, respectively. It is worth mentioning the case where clear indications of filamentary switching are present. In this case temperature analysis should be carefully applied, as the local heating effect of the constricted conductive path may cause the actual localised temperature to differ from what is measured macroscopically. In this particular case more sophisticated techniques may be required to decouple localised from macroscopic heating17.
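Where Schottky emission is suspected, the signature plot reduces to checking whether ln(I/T²) is linear in 1/T at fixed bias. The helper below assumes current readings at a single read voltage across several chuck temperatures; the toy data and the 0.5 eV barrier are illustrative values, not measurements.

```python
import numpy as np

def schottky_signature(currents_a, temps_k):
    """Return the ln(I/T^2) vs 1000/T signature and a linear fit.

    For thermionic (Schottky) emission I ~ T^2 exp(-q*phi_eff/(k*T)), so a
    linear plot supports the mechanism; in these units the slope equals
    -q*phi_eff/(1000*k), where phi_eff includes the field-dependent
    image-force barrier lowering."""
    t = np.asarray(temps_k, dtype=float)
    x = 1000.0 / t
    y = np.log(np.asarray(currents_a, dtype=float) / t**2)
    slope, intercept = np.polyfit(x, y, 1)
    return x, y, slope, intercept

# toy data consistent with a ~0.5 eV effective barrier at a fixed read bias
k_b, q = 1.380649e-23, 1.602176634e-19
temps = np.array([300.0, 325.0, 350.0, 375.0, 400.0])
cur = 1e-2 * temps**2 * np.exp(-q * 0.5 / (k_b * temps))
_, _, slope, _ = schottky_signature(cur, temps)
print("recovered barrier (eV):", -slope * 1000.0 * k_b / q)  # ~0.5
```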
Electroforming
RRAM technologies can be operated across a variety of resistive regimes and, considering also the application needs, pristine devices might require an electroforming step to bring them into a desirable resistive range. Electroforming was originally mentioned by Hickmott18,19, who described it as an irreversible change of the electrical properties of the material caused by applying a voltage greater than a minimum forming voltage. The electroforming process can represent either the formation of a conductive filament due to structure-altering effects or oxygen deprivation20, the diffusion of metal into the metal oxide layer, or the modification of the interfacial barrier between the electrodes and the metal oxide active layer21,22. At this point the core material and the interfaces of the resistive memory stack have been completely reformed with respect to their prior state. It is therefore essential to re-evaluate the functionality characteristics of the DUT. In addition, this allows for correlating the pre- and post-formed characteristics, enabling new opportunities for tuning the performance of the device by means of optimising the applied pulsing scheme.
Regardless of the actual forming mechanism, during this process the resistance of the device is lowered to ranges that are typically relevant to applications. It is well-known18,19 that electroforming is not possible before a device-dependent voltage threshold is reached; the way to cross this threshold is, however, not immediately obvious. One way to form a device is through an I–V cycle, as shown in Fig. 4a. The DUT is biased with increasingly higher voltage steps until a significant change in the DUT's resistance is observed. We note that further steps might be required to bring the device into a non-volatile switching regime. As observed in Fig. 2b, the conductance of the DUT is still lower than the one achieved in the first step. In this case, a further forming cycle is required (shown in Fig. 4b) before reaching a final state (Fig. 4c). Further increasing the voltage amplitude leads to a partial dielectric breakdown23 of the active layer, and compliance is an issue that must be dealt with during electroforming in order to prevent irreversible switching degeneration of the device. Two distinct forms of compliance can be identified: current compliance and time compliance (i.e. short controllable pulses). The key issue with the current compliance mode is that there is a distinct delay before reaching the current cut-off threshold, leading to current overshoot, partly due to a delayed response of the compliance system as well as to residual parasitic capacitance24. As such, short sequential pulses can be a better alternative for a controllable forming procedure. Instead of continuously biasing the device, sequential voltage pulses of continuously increasing duration and amplitude are applied to the device. This approach offers a less invasive procedure towards attaining a desired formed state. In the example shown in Fig. 4, a series of programming pulses from 3 to 10 V is applied to the device. Within each step the pulse duration is modulated from the low-μs up to the ms range.
Electroforming schemes. (a–c) Two-step electroforming using a staircase I-V of a Pt/TiO2/Pt device. The apparent electroforming voltage from the GΩ range is about −6.5 V (left). The device undergoes a further electroforming step bringing it down to the 30 kΩ range (middle), where a steady state is established (right). (d,e) Typical two-step electroforming process using a pulse sequence. This device is formed with pulses of increasing amplitude and pulse width. The target resistance threshold has been set to 10 kΩ. The device initially drops to the MΩ range before attaining its final value. Highlighted in blue is the biasing region, followed by a series of READ pulses (highlighted in yellow). Readout pulse width during programming ranges from 20 ms in the GΩ range down to 1 ms in the MΩ-and-below range.
In addition, Fig. 4 further illustrates the common pattern in the electroforming process, where there is an initial increase in the electrical conductivity of the DUT, marking the onset of the electroforming mechanism. By continuing the application of the programming pulses the device is finally driven to its target resistive state. While after the first forming the device can be tuned to a multitude of resistive states, the initial increase in the conductance of the device is irreversible, as it is associated with morphological alterations of the active layer25. This behaviour is also evident in the electroforming procedure using staircase I-V curves, as depicted in Fig. 4a–c.
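The pulsed (time-compliance) electroforming scheme described above maps onto a simple escalation loop. In the sketch below, `apply_pulse` and `read_resistance` again stand in for the instrument driver, and the amplitude/width envelopes are only the example values from Fig. 4, not universal settings.

```python
def electroform(apply_pulse, read_resistance, target_ohm=10e3,
                amplitudes=range(3, 11),             # 3 V .. 10 V steps
                widths_s=(1e-6, 10e-6, 100e-6, 1e-3)):
    """Pulsed electroforming: escalate the pulse width within each amplitude
    step and stop at the first read-out at or below the target resistance,
    keeping the stimulus as gentle as possible (time compliance rather than
    current compliance)."""
    for amp in amplitudes:
        for width in widths_s:
            apply_pulse(float(amp), width)
            r = read_resistance()
            if r <= target_ohm:
                return r             # formed: stop escalating immediately
    return read_resistance()         # envelope exhausted without forming
```

A series resistor (as used in the Methods section of this paper) is still advisable in practice, since even short pulses can overshoot on a fast-forming device.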
Memory capacity – multibit evaluation
All of the previous steps mostly focus on the evaluation of the switching behaviour of the device. Broaching the area of applications, of particular importance is the ability to identify the memory capacity of the device. Although the bistable aspect of device operation is considered straightforward, driving a DUT to intermediate states within these two extreme boundaries requires a more targeted characterisation approach. The need for multibit memory capacity in conventional RRAM cells is nowadays gaining more importance thanks to a series of emerging applications of memristor technologies in reconfigurable circuits and neuromorphic computing.
Along this line, a comprehensive characterisation routine can be used, as described in a previous publication26. A succession of fixed-pulse-width pulse trains, each containing an increasing number of programming pulses, is applied to the DUT, followed by a short retention test to assess the stability of the current resistance. If during this test the resistance of the DUT remains within a pre-defined tolerance band, then a new state is registered. Otherwise, the amplitude of the bias is increased up to a specified limit and a new succession of pulse trains is applied. In Fig. 5 one can observe a typical multi-level memory characterisation output for a Pt/TiO2/Pt RRAM cell. In this case, the read voltage is kept at 0.2 V and the characterisation protocol is applied for up to 10 pulses of 1 μs wide programming pulses ranging in amplitude from 1.6 V to 2.1 V with a confidence interval of 2σ. In this example, 5 bits of information (31 states) can be extracted from the DUT within just a 4 kΩ resistance span (8 to 12 kΩ). Depending on the target application, the confidence intervals can be accordingly adjusted to allow for either a high number of states or a larger interval in terms of resistance between the states.
Multibit evaluation. Memory capacity of a Pt/TiO2/Pt RRAM cell. In the top figure stable resistive states have been marked with red crosses. The corresponding programming protocol is shown in the bottom figure. Read pulses are applied continuously throughout and are set to 0.2 V.
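The multibit discovery loop lends itself to a compact sketch. The tolerance-band bookkeeping below is a simplification of the three-phase algorithm of ref. 26, which this text only summarises; the pulse counts, amplitudes and 2σ band mirror the example in Fig. 5, and the driver callables are hypothetical as before.

```python
import statistics

def find_states(apply_pulse, read_resistance,
                amplitudes=(1.6, 1.7, 1.8, 1.9, 2.0, 2.1),
                max_pulses=10, width_s=1e-6, n_reads=20, n_sigma=2.0):
    """Simplified multibit discovery: write with growing pulse trains, then
    register a state only if its read-back distribution does not overlap the
    n-sigma confidence band of any previously registered state."""
    states = []                      # list of (mean, stdev) per stable state
    for amp in amplitudes:
        for n in range(1, max_pulses + 1):
            for _ in range(n):       # pulse train of n programming pulses
                apply_pulse(amp, width_s)
            reads = [read_resistance() for _ in range(n_reads)]
            mu = statistics.mean(reads)
            sd = statistics.stdev(reads)
            if all(abs(mu - m) > n_sigma * (sd + s) for m, s in states):
                states.append((mu, sd))
    return states
```

Widening `n_sigma` trades the number of registered states for a larger resistance margin between them, as discussed above.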
Base performance evaluation (Endurance and Retention)
Although devices can alternate between neighbouring or extreme states, the retention of such memory windows is not certain. To ascertain the stability of this window, which defines the dynamic range of switching for a DUT, long retention tests are required, where the DUT is programmed to different resistive levels within the operating range of the device and read continuously for prolonged periods of time. A typical output of such an experiment can be seen in Fig. 6a. In this case, the DUT is continuously read for a period of up to three hours for two different states. The data are extrapolated over a period of 10 years (~10⁶ minutes) so that the stability of the memory window defined by the resistive states can be established over that period. A different example is shown in Fig. 6b, where the switching window is considerably degraded.
Performance metrics. (a,b) Retention measurements (room temperature) for the two different resistive states defined in Fig. 4a,b. Read voltage set at 0.2 V. By extrapolating the retention sampling data, the device on the left retains its memory window despite the drift in resistance, whereas the device on the right has seen a deterioration of its window over the same period. (c–e) Endurance characterisation of two different devices. On the left (subfigures c and d) a Pt (TE)/AlxOy/TiO2/Pt (BE) device is subjected to 5000 pulses of 2 V with alternating polarity for two different resistive ranges. In both cases the resistive window is maintained throughout. On the right (e) a Pt (TE)/TiO2/Pt (BE) device exhibits a deteriorating memory window, which is completely eliminated after 1000 pulses, thus quickly failing the endurance test. Resistive lifetimes and windows are technology specific.
In order to determine how any DUT behaves under repeated stress, an endurance test is required. For a typical bipolar DUT, a series of alternating-polarity pulses is applied to the device, switching it between neighbouring resistive states such as those defined in the retention test previously applied. A typical output can be seen in Fig. 6c,d, where 2 V, 100 μs wide pulses of alternating polarity are applied to a Pt (TE)/AlxOy/TiO2/Pt (BE) device for two different resistive ranges. The ideal technology would retain its memory window for as long as possible, subject to biasing, while a failing device (as in the case depicted in Fig. 6e) will have its memory window quickly deteriorate to the point that it is completely eliminated.
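An endurance run of the kind shown in Fig. 6c–e is a tight loop over SET/RESET pairs with a window check per cycle. The failure criterion below (a minimum OFF/ON ratio) is one reasonable choice on our part, not the only one; the pulse parameters echo the example above.

```python
def endurance(apply_pulse, read_resistance, n_cycles=5000,
              v_set=2.0, v_reset=-2.0, width_s=100e-6, min_ratio=1.5):
    """Alternate SET/RESET pulses and log the OFF/ON window each cycle.

    Returns the per-cycle window ratios plus the cycle index at which the
    window collapsed below `min_ratio`, or None if it never did."""
    windows = []
    for cycle in range(n_cycles):
        apply_pulse(v_set, width_s)     # SET: drive to the low-R state
        r_on = read_resistance()
        apply_pulse(v_reset, width_s)   # RESET: drive to the high-R state
        r_off = read_resistance()
        windows.append(r_off / r_on)
        if windows[-1] < min_ratio:
            return windows, cycle       # memory window effectively gone
    return windows, None
```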
Base performance evaluation is a valuable tool, as it can be used for the derivation of further performance metrics, such as power dissipation, an upper bound of which can be estimated by time-integrating the product of the measured current and the applied voltage.
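For the power-dissipation bound just mentioned, a numerical integration over the sampled waveform suffices. A minimal sketch, assuming uniformly sampled current and voltage traces:

```python
import numpy as np

def energy_upper_bound(t_s, i_a, v_v):
    """Upper bound on dissipated energy: trapezoidal integral of |i*v| dt."""
    t = np.asarray(t_s, dtype=float)
    p = np.abs(np.asarray(i_a, dtype=float) * np.asarray(v_v, dtype=float))
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

# e.g. a 100 us, 2 V programming pulse into a 10 kOhm state dissipates
# roughly V^2/R * t = 4e-4 W * 1e-4 s = 40 nJ
t = np.linspace(0.0, 100e-6, 101)
v = np.full_like(t, 2.0)
i = v / 10e3
print(energy_upper_bound(t, i, v))   # ~4.0e-08 J
```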
From devices to Applications – Phenomenological Modelling
In order to properly integrate memristive technologies into an integrated circuit workflow, it is essential to have realistic, accurate and computationally efficient behavioural models extracted from readily available data. The proposed methodology aids in that direction by providing a standardised approach. Our methodology makes it possible to gather enough data to instantiate a phenomenological model that describes the DUT, where the rate of change of the resistance is modelled as a function of the resistance itself (R) and the applied bias (v). Such a model is described in a previous publication27.
$$\frac{dR}{dt}=s(v)\times f(R,v)$$
where s(v) is the switching sensitivity and f(R, v) the window function.
$$s(v)=\begin{cases}A_{p}\,(-1+\exp(|v|/t_{p})), & v>0\\ A_{n}\,(-1+\exp(|v|/t_{n})), & v<0\end{cases}\quad\text{and}\quad f(R,v)=\begin{cases}(a_{0,p}+a_{1,p}v-R)^{2}, & v>0\\ (R-a_{0,n}+a_{1,n}v)^{2}, & v<0\end{cases}$$
The rest of the parameters are free fitting variables for the positive and negative branches of the bias. Data can be obtained as part of this characterisation procedure by applying a fixed number of programming pulses while alternating the polarities, and fitting the above equations in a least-squares fashion. An example is showcased in Fig. 7, where a bipolar Pt/TiO2/Pt RRAM cell is biased with alternating programming pulses of increasing amplitude, ranging from 1.5 V to 1.9 V and from −1.7 V to −2.1 V. The fitting parameters for this DUT are summarised in Table 1.
Behavioural model fitting. The analytical model (solid blue line) extracted from the resistive response of a Pt/TiO2/Pt RRAM cell (red dots) using 500 pulse batches of alternating polarities. The amplitude of each of the pulse trains applied is indicated on the bottom of the graph. The initial resistance of the device is 18.3 kΩ.
Table 1 Fitting parameters for the phenomenological model used in this example.
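A data-driven fit of the model above can be sketched as follows. The Euler time-stepping and the SciPy least-squares call are implementation choices on our part rather than the authors' published code; the branch definitions follow the equations quoted above, and the parameter names map one-to-one onto Table 1.

```python
import numpy as np
from scipy.optimize import least_squares

def dR_dt(R, v, p):
    """dR/dt = s(v) * f(R, v), with the positive/negative branches above."""
    Ap, tp, An, tn, a0p, a1p, a0n, a1n = p
    if v > 0:
        return Ap * (np.exp(abs(v) / tp) - 1.0) * (a0p + a1p * v - R) ** 2
    return An * (np.exp(abs(v) / tn) - 1.0) * (R - a0n + a1n * v) ** 2

def simulate(p, r0, pulse_trains, width_s=1e-6):
    """Euler-step the model through (voltage, pulse-count) trains, returning
    the predicted resistance after every individual pulse."""
    r, out = r0, []
    for v, n in pulse_trains:
        for _ in range(n):
            r = r + dR_dt(r, v, p) * width_s
            out.append(r)
    return np.asarray(out)

def fit_model(measured_r, r0, pulse_trains, p0):
    """Least-squares fit of the eight free parameters to measured read-outs."""
    residual = lambda p: simulate(p, r0, pulse_trains) - measured_r
    return least_squares(residual, p0).x
```

Feeding in the alternating-polarity pulse batches of Fig. 7 as `pulse_trains` yields a parameter vector directly comparable to Table 1.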
In summary, in this paper we presented a complete testing methodology that can be used for benchmarking the performance of emerging memristive device technologies. The methodology covers all technological facets required to integrate a new resistive memory technology into any workflow, from fundamental physical aspects to performance characteristics, memory capacity and behavioural modelling.
Device fabrication
Devices used in this paper have been fabricated on 6-inch oxidised silicon wafers (200 nm of thermal SiO2). Bottom electrodes were fabricated using photolithography and electron beam evaporation of titanium and platinum (20 nm) or gold (10 nm), followed by a lift-off process in N-Methyl-2-pyrrolidone (NMP). Then 25 nm of TiO2 was deposited using magnetron sputtering. For the bi-layer devices an additional 4 nm of AlxOy was deposited using the same process. Top electrodes were again fabricated with photolithography, electron beam evaporation or sputtering of platinum (10–20 nm), and lift-off in NMP. Devices with active areas of 20 × 20 μm² and 30 × 30 μm² were used for the purposes of this paper. A discussion of the morphology of the films used, as well as optical images of the devices, can be found in a previous publication16.
Electrical characterisation
All characterisation of the devices has been done with our characterisation platform ArC ONE28, although the methodology itself is instrumentation-independent. Read pulses are set to up to 50 ms in duration (depending on the actual resistive state) and 0.2 V in amplitude. The nominal line resistance for the devices is estimated to be 50–250 Ω depending on electrode material and length. Depending on the stack, a pulsing-based electroforming routine has been used, with consecutive 1 μs to 1 ms pulses ranging from ±6 to ±12 V in amplitude and a 1 kΩ resistor for current limiting. Endurance testing was performed with single alternating-polarity pulses as described in the article, and retention measurements were performed over a period of 3 hrs using the readout scheme described above. For the I–V curves, 2–50 ms pulses were applied at the specific amplitude while measuring the resistance. The inter-pulse interval between consecutive pulses is either 0 ms (staircase mode) or 1 ms (pulsed mode). The multi-bit capability of the devices was assessed using a custom three-phase algorithm outlined in a previous publication26. For experiments where temperature control was required, we used an ESPEC ETC-200L temperature controller with an applicable temperature range of 0–200 °C. The thermal chuck was set to a specific temperature and the wafer was allowed to thermally stabilise so as to obtain the I-V curve at thermal equilibrium.
The data that accompany this study are available from the University of Southampton institutional repository at https://doi.org/10.5258/SOTON/D1153.
Chua, L. Memristor-The missing circuit element. IEEE Transactions on Circuit Theory 18, 507–519 (1971).
Yoshida, C., Tsunoda, K., Noshiro, H. & Sugiyama, Y. High speed resistive switching in Pt∕TiO2∕TiN film for nonvolatile memory application. Applied Physics Letters 91, 223510 (2007).
Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).
Serb, A. et al. Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses. Nature Communications, 7, 12611 (2016).
Borghetti, J. et al. 'Memristive' switches enable 'stateful' logic operations via material implication. Nature 464, 873–876 (2010).
Serb, A., Khiat, A. & Prodromakis, T. Seamlessly fused digital-analogue reconfigurable computing using memristors. Nature Communications 9, 2170 (2018).
Yang, Y. & Huang, R. Probing memristive switching in nanoionic devices. Nature Electronics 1, 274–287 (2018).
Lanza, M. et al. Recommended Methods to Study Resistive Switching Devices. Advanced Electronic Materials 5, 1800143 (2018).
Michalas, L., Stathopoulos, S., Khiat, A. & Prodromakis, T. An electrical characterisation methodology for identifying the switching mechanism in TiO2 memristive stacks. Scientific Reports 9, 8168 (2019).
Serb, A., Khiat, A. & Prodromakis, T. An RRAM Biasing Parameter Optimizer. IEEE Transactions on Electron Devices 62, 3685–3691 (2015).
Wedig, A. et al. Nanoscale cation motion in TaOx, HfOx and TiOx memristive systems. Nature Nanotechnology 11, 67–74 (2016).
Waser, R. & Aono, M. Nanoionics-based resistive switching memories. Nature Materials 6, 833–840 (2007).
Sze, S. M. & Ng, K. K. Physics of semiconductor devices. (Wiley-Interscience, 2006).
Michalas, L. et al. Electrical characterization of undoped diamond films for RF MEMS application. In 2013 IEEE International Reliability Physics Symposium (IRPS) 6B.3.1–6B.3.7, https://doi.org/10.1109/IRPS.2013.6532049 (2013).
Michalas, L. et al. Interface Asymmetry Induced by Symmetric Electrodes on Metal-Al:TiOx-Metal Structures. IEEE Transactions on Nanotechnology 17, 867–872 (2017).
Michalas, L., Khiat, A., Stathopoulos, S. & Prodromakis, T. Electrical characteristics of interfacial barriers at metal—TiO2 contacts. Journal of Physics D: Applied Physics 51, 425101 (2018).
Yalon, E. et al. Thermometry of Filamentary RRAM Devices. IEEE Transactions on Electron Devices 62, 2972–2977 (2015).
Hickmott, T. W. Low-Frequency Negative Resistance in Thin Anodic Oxide Films. Journal of Applied Physics 33, 2669–2682 (1962).
Hickmott, T. W. Impurity Conduction and Negative Resistance in Thin Oxide Films. Journal of Applied Physics 35, 2118–2122 (1964).
Dearnaley, G., Stoneham, A. M. & Morgan, D. V. Electrical phenomena in amorphous oxide films. Rep. Prog. Phys. 33, 1129 (1970).
Michalas, L., Stathopoulos, S., Khiat, A. & Prodromakis, T. Conduction mechanisms at distinct resistive levels of Pt/TiO2-x/Pt memristors. Applied Physics Letters 113, 143503 (2018).
Simmons, J. G. & Verderber, R. R. New thin-film resistive memory. Radio and Electronic Engineer 34, 81–89 (1967).
Trapatseli, M. et al. Conductive Atomic Force Microscopy Investigation of Switching Thresholds in Titanium Dioxide Thin Films. Journal of Physical Chemistry C 119, 11958–11964 (2015).
Kalantarian, A. et al. Controlling uniformity of RRAM characteristics through the forming process. In 2012 IEEE International Reliability Physics Symposium (IRPS) 6C.4.1–6C.4.5, https://doi.org/10.1109/IRPS.2012.6241874 (2012).
Münstermann, R. et al. Morphological and electrical changes in TiO2 memristive devices induced by electroforming and switching. physica status solidi (RRL) - Rapid Research Letters 4, 16–18 (2010).
Stathopoulos, S. et al. Multibit memory operation of metal-oxide bi-layer memristors. Scientific Reports 7, 17532 (2017).
Messaris, I. et al. A Data-Driven Verilog-A ReRAM Model. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 37(12), 3151–3162 (2018).
Berdan, R. et al. A μ-Controller-Based System for Interfacing Selectorless RRAM Crossbar Arrays. IEEE Transactions on Electron Devices 62, 2190–2196 (2015).
The authors would like to acknowledge financial support from the Engineering and Physical Sciences Research Council programmes EP/K017829/1 and EP/R024642/1.
Electronic Materials & Devices Research Group Zepler Institute for Photonics and Nanoelectronics University of Southampton, SO17 1BJ, Southampton, UK
Spyros Stathopoulos, Loukas Michalas, Ali Khiat, Alexantrou Serb & Themis Prodromakis
S.S. and L.M. contributed equally to this work. T.P., L.M., A.S., and S.S. conceived the experiments and A.K. fabricated the devices. S.S. and L.M. performed the electrical characterisation and optimised the characterisation process. S.S. and L.M. wrote the manuscript and all authors contributed to the writing by providing feedback and corrections.
Correspondence to Themis Prodromakis.
Stathopoulos, S., Michalas, L., Khiat, A. et al. An Electrical Characterisation Methodology for Benchmarking Memristive Device Technologies. Sci Rep 9, 19412 (2019) doi:10.1038/s41598-019-55322-4
The binomial distribution is a probability distribution that describes the likelihood of a given number of successes in a fixed number of trials. For example, in the case of flipping fair coins, the binomial distribution can be used to calculate the probability of getting a certain number of heads in a certain number of coin flips.
Properties of Binomial Distribution:
The binomial distribution is a discrete probability distribution that is defined by two parameters: the number of trials and the probability of success on each trial. It has several important properties, including:
Discrete values: The binomial distribution only takes on integer values. This means that the number of successes in a given number of trials can only be an integer, such as 0, 1, 2, etc.
Two possible outcomes: The binomial distribution assumes that each trial has only two possible outcomes: success or failure. The probability of success on each trial is constant across all trials.
Fixed number of trials: The binomial distribution assumes a fixed number of independent trials.
Independence of trials: The trials are independent; the outcome of one trial does not affect the outcome of other trials. This means that the probability of success is the same for each trial.
Probability of a given number of successes: The binomial distribution allows you to calculate the probability of a given number of successes in a given number of trials. For example, you can use the binomial distribution to calculate the probability of getting 5 heads in 10 coin flips.
Mean and variance: The binomial distribution has a mean equal to the number of trials multiplied by the probability of success on each trial, \(n \cdot p\), and a variance equal to the number of trials multiplied by the probability of success on each trial multiplied by the probability of failure on each trial, \(n \cdot p \cdot (1 - p)\).
These are some of the most important properties of the binomial distribution.
Probability Mass Function (PMF) - Binomial Distribution
This is the formula for the probability mass function of a binomial distribution. It gives the probability of x successes in n trials, where the probability of success on each trial is p. The formula is derived from the probability of success and failure on each trial, and it considers that the outcomes of each trial are independent.
$$P(x) = \frac{n!}{x! \cdot (n - x)!} \cdot p^x \cdot (1 - p)^{(n - x)}$$
\(P(x)\) is the probability of x successes in n trials
\(n\) is the total number of trials
\(x\) is the number of successes
\(p\) is the probability of success on each trial
\(!\) is the factorial symbol, which denotes the product of all positive integers less than or equal to the number. For example, \(4! = 4 \cdot 3 \cdot 2 \cdot 1 = 24\).
The formula uses factorials (represented by the exclamation mark) to calculate the number of possible combinations of successes and failures in the experiment. The formula is often simplified to make it easier to work with, but the basic idea is to calculate the probability of a given number of successes in the experiment.
The formula for the binomial distribution can be rewritten using combinations (represented by the notation nCx) to make it easier to understand and work with. The rewritten formula is as follows:
Binomial Distribution Formula with Combinations:
$$P(x) = nCx \cdot p^x \cdot (1 - p)^{(n - x)}$$
\(nCx\) is the binomial coefficient, which is defined as \(nCx = \frac{n!}{x! \cdot (n - x)!}\). It gives the number of ways to choose x successes from n trials.
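As a quick sanity check, the PMF can be computed directly in Python; this sketch simply restates the formula above, using the standard-library math.comb for the binomial coefficient:

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) for X ~ Binomial(n, p): nCx * p**x * (1-p)**(n-x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(binom_pmf(5, 10, 0.5))   # 0.24609375: chance of exactly 5 heads in 10 flips
```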
Mean, Standard Deviation and Variance of Binomial Distribution
The mean, variance, and standard deviation are important statistical measures that describe the characteristics of a probability distribution. In the case of the binomial distribution, the mean, variance, and standard deviation can be calculated using the following formulas:
Mean of a Binomial Distribution:
Mean = \(n * p\)
n is the number of trials
p is the probability of success in each trial
Standard Deviation of a Binomial Distribution:
Standard Deviation = \(\sqrt{n * p * (1 - p)}\)
(1 - p) is the probability of failure in each trial
Variance of a Binomial Distribution:
Variance = \(n * p * (1 - p)\)
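These three formulas are easy to verify in code; the following short sketch just restates them:

```python
from math import sqrt

def binom_stats(n, p):
    """Return (mean, variance, standard deviation) of Binomial(n, p)."""
    mean = n * p
    variance = n * p * (1 - p)
    return mean, variance, sqrt(variance)

print(binom_stats(10, 0.5))    # (5.0, 2.5, 1.5811...)
```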
To calculate the probability of a binomial event using Microsoft Excel, you can use the BINOM.DIST function, which calculates the probability of a specified number of successes in a fixed number of Bernoulli trials.
BINOM.DIST
To use the BINOM.DIST(number_s, trials, probability_s, cumulative) function, you need to provide the following input arguments:
number_s: The number of successes you want to calculate the probability for.
trials: The total number of Bernoulli trials.
probability_s: The probability of success on each trial.
cumulative: A logical value that specifies whether to return the probability of the specified number of successes (FALSE) or the probability of the specified number of successes or fewer (TRUE).
For example, to calculate the probability of getting exactly 3 heads in 5 coin flips with a probability of heads on each flip of 0.5, you would use the following formula:
=BINOM.DIST(3, 5, 0.5, FALSE) = 0.3125
This formula uses the BINOM.DIST function to calculate the probability of getting exactly 3 heads in 5 flips, with a probability of heads on each flip of 0.5.
If you want to determine the probability of 3 or fewer heads in 5 coin flips, you will use cumulative as TRUE.
=BINOM.DIST(3, 5, 0.5, TRUE) = 0.8125
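For those checking these numbers outside Excel, the equivalent SciPy calls (assuming scipy is installed) reproduce both results:

```python
from scipy.stats import binom

print(binom.pmf(3, 5, 0.5))    # 0.3125  -- matches BINOM.DIST(3, 5, 0.5, FALSE)
print(binom.cdf(3, 5, 0.5))    # 0.8125  -- matches BINOM.DIST(3, 5, 0.5, TRUE)
```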
Leila Schneps on Grothendieck
Published August 20, 2022 by lievenlb
If you have neither the time nor the energy to watch more than one interview or talk about Grothendieck's life and mathematics, may I suggest you reserve that privilege for Leila Schneps' talk on 'Le génie de Grothendieck' in the 'Thé & Sciences' series at the Salon Nun in Paris.
I was going to add some 'relevant' time slots after the embedded YouTube-clip below, but I really think it is better to watch Leila's interview in its entirety. Enjoy!
Cartan meets Lacan
In the Grothendieck meets Lacan-post we did mention that Alain Connes wrote a book together with Patrick Gauthier-Lafaye "A l'ombre de Grothendieck et de Lacan, un topos sur l'inconscient", on the potential use of Grothendieck's toposes for the theory of unconsciousness, proposed by the French psychoanalyst Jacques Lacan.
A bit more on that book you can read in the topos of unconsciousness. For another take on this you can visit the blog of l'homme quantique – Sur les traces de Lévi-Strauss, Lacan et Foucault, filant comme le sable au vent marin…. There is a series of posts dedicated to the reading of 'A l'ombre de Grothendieck et de Lacan':
1. Initiation au topos
2. Rencontre d'une évidence
3. Métapsychologie du topos
4. Psychanalyse et mathématiques
5. Temps et instant
6. Mythes, fantasmes et topos classifiant
Alain Connes isn't the first (former) Bourbaki-member to write a book together with a Lacan-disciple.
In 1984, Henri Cartan (one of the founding fathers of Bourbaki) teamed up with the French psychoanalyst (and student of Lacan) Jean-Francois Chabaud for "Le Nœud dit du fantasme – Topologie de Jacques Lacan".
(Chabaud on the left, Cartan on the right, Cartan's wife Nicole in the middle)
"Dans cet ouvrage Jean François Chabaud, psychanalyste, effectue la monstration de l'interchangeabilité des consistances de la chaîne de Whitehead (communément nommée « Noeud dit du fantasme » ou du « Non rapport sexuel » dans l'aire analytique), et peut ainsi se risquer à proposer, en s'appuyant sur les remarques essentielles de Jacques Lacan, une écriture du virage, autre nom de la passe. Henri Cartan (1904-2008), l'un des Membres-fondateur de N. Bourbaki, a contribué à ce travail avec deux réflexions : la première, considère cette monstration et l'augmente d'une présentation ; la seconde, traite tout particulièrement de l'orientation des consistances. Une suite de traces d'une séquence de la chaîne précède ce cahier qui s'achève par : « L'en-plus-de-trait », une contribution à l'écriture nodale."
Lacan was not only fascinated by the topology of surfaces such as the crosscap (see the topos of unconsciousness), but also by the theory of knots and links.
The Borromean link figures in Lacan's world for the Real, the Imaginary and the Symbolic. The Whitehead link (that is, two unknots linked together) is thought to be the knot (sic) of phantasy.
In 1986, there was the exposition "La Chaine de J.H.C. Whitehead" in the Palais de la découverte in Paris (from which also the Chabaud-Cartan picture above is taken), where la Salle de Mathématiques was filled with different models of the Whitehead link.
In 1988, the exposition was held in the Deutches Museum in Munich and was called "Wandlung – Darstellung der topologischen Transformationen der Whitehead-Kette"
The set-up in Munich was mathematically more interesting as one could see the link-projection on the floor, and use it to compute the link-number. It might have been even more interesting if the difference in these projections between two subsequent models was exactly one Reidemeister move…
You can view more pictures of these and subsequent expositions on the page dedicated to the work of Jean-Francois Chabaud: La Chaîne de Whitehead ou Le Nœud dit du fantasme Livre et Expositions 1980/1997.
Part of the first picture featured also in the Hommage to Henri Cartan (1904-2008) by Michele Audin in the Notices of the AMS. She writes (about the 1986 exposition):
"At the time, Henri Cartan was 82 years old and retired, but he continued to be interested in mathematics and, as one sees, its popularization."
Bourbaki, Brassens, Hula Hoops and Coconuts
More than ten years ago, when I ran a series of posts on pre-WW2 Bourbaki congresses, I knew most of the existing B-literature. I'm afraid I forgot most of it, thereby missing opportunities to spice up a dull post (such as yesterday's).
Right now, I need facts about the infamous ACNB and its former connection to Nancy, so I reread Liliane Beaulieu's Bourbaki a Nancy:
(page 38) : "Like a theatrical canvas, "La Tribu" often carries as its header a subtitle, the product of its editor's imagination, which brings out the theme of the congress, if necessary. There is thus a 'De Nicolaïdes' congress in Nancy, a 'Du banc public' congress (a reference to Brassens), and that of the 'Universités cogérées' (in October '68, at the time of co-management)."
The first La Ciotat congress (February 27 to March 6, 1955) was called 'the congress of the public bench' ('banc public' in French) where Serre and Cartan tried to press Bourbaki to opt for the by now standard approach to varieties (see yesterday), and the following Chicago-congress retaliated by saying that there were also public benches nearby, but of little use.
What I missed was the reference to French singer-songwriter George Brassens. In 1953, he wrote, composed and performed Bancs Public (later called 'Les Amoureux des bancs publics').
If you need further evidence (me, I'll take Liliane's word on anything B-related), here's the refrain of the song:
"Les amoureux qui s'bécotent sur les bancs publics,
Bancs publics, bancs publics,
En s'foutant pas mal du regard oblique
Des passants honnêtes,
En s'disant des "Je t'aime'" pathétiques,
Ont des p'tits gueules bien sympathiques!"
(G-translated as:
'Lovers who smooch on public benches,
Public benches, public benches,
Not giving a damn about the sideways gaze
Of honest passers-by,
Saying pathetic "I love you"s to each other,
Have very nice little faces!')
Compare this to page 3 of the corresponding "La Tribu":
"Geometrie Algebrique : elle a une guele bien sympathique."
(Algebraic Geometry : she has a very nice face)
More Bourbaki congresses got their names rather timely.
In the summer of 1959 (from June 25th – July 8th) there was a congress in Pelvout-le-Poet called 'Congres du cerceau'.
'Cerceau' is French for Hula Hoop, whose new plastic version was popularized in 1958 by the Wham-O toy company and became a fad.
(Girl twirling Hula Hoop in 1958 – Wikipedia)
The next summer it was the thing to carry along for children on vacation. From the corresponding "La Tribu" (page 2):
"Le congres fut marque par la presence de nombreux enfants. Les distractions s'en ressentirent : baby-foot, biberon de l'adjudant (tres concurrence par le pastis), jeu de binette et du cerceau (ou faut-il dire 'binette se jouant du cerceau'?) ; un bal mythique a Vallouise faillit faire passer la mesure."
(try to G-translate it yourself…)
Here's another example.
The spring 1949 congress (from April 13th-25th) was held at the Abbey of Royaumont and was called 'le congres du cocotier' (the coconut-tree congress).
From the corresponding "La Tribu 18":
"Having absorbed a tough guinea pig, Bourbaki climbed to the top of the Royaumont coconut tree, and declared, to unanimous applause, that he would only rectify rectifiable curves, that he would treat rational mechanics over the field $\mathbb{Q}$, and, that with a little bit of vaseline and a lot of patience he would end up writing the book on algebraic topology."
The guinea pig that congress was none other than Jean-Pierre Serre.
A year later (from April 5th-17th 1950) there was another Royaumont-congress called 'le congres de la revanche du cocotier' (the congress of the revenge of the coconut-tree).
"The founding members had decided to take a dazzling revenge on the indiscipline young people; mobilising all the magical secrets unveiled to them by the master, they struck down the young people with various ailments; rare were those strong enough to jump over the streams of Royaumont."
Here's what Maurice Mashaal says about this in 'Bourbaki – a secret society of mathematicians' (page 113):
"Another prank among the members was called 'le cocotier' (the coconut tree). According to Liliane Beaulieu, this was inspired by a Polynesian custom where an old man climbs a palm tree and holds on tightly while someone shakes the trunk. If he manages to hold on, he remains accepted in the social group. Bourbaki translated this custom as the following: some members would set a mathematical trap for the others. If someone fell for it, they would yell out 'cocotier'."
May I be so bold as to suggest that perhaps this sudden interest in Polynesian habits was inspired by the recent release of L'ile aux cocotiers (1949), the French translation of Robert Gibbings' book Coconut Island?
Le Guide Bourbaki : La Ciotat (2)
Published August 4, 2022 by lievenlb
Rereading the Grothendieck-Serre correspondence I found a letter from Serre to Grothendieck, dated October 22nd 1958, which forces me to retract some claims from the previous La Ciotat post.
Serre writes this ten days after the second La Ciotat-congress (La Tribu 46), held from October 5th-12th 1958:
"The Bourbaki meeting was very pleasant; we all stayed in the home of a man called Guérin (a friend of Schwartz's – a political one, I think); Guérin himself was in Paris and we had the whole house to ourselves. We worked outside most of the time, the weather was beautiful, we went swimming almost every day; in short, it was one of the best meetings I have ever been to."
So far so good, we did indeed find Guérin's property 'Maison Rustique Olivette' as the location of Bourbaki's La Ciotat-congresses. But, Serre was present at both meetings (the earlier one, La Tribu 35, was held from February 27th – March 6th, 1955), so wouldn't he have mentioned that they returned to that home when both meetings took place there?
From La Tribu 35:
"The Congress was held "chez Patrice", in La Ciotat, from February 27 to March 6, 1955. Present: Cartan, Dixmier, Koszul, Samuel, Serre, le Tableau (property, fortunately divisible, of Bourbaki)."
In the previous post I mentioned that there was indeed a Hotel-Restaurant "Chez Patrice" in La Ciotat, but mistakenly assumed both meetings took place at Guérin's property.
Can we locate this place?
On the backside of this old photograph
we read:
"Chez Patrice"
seul au bord de la mer
Hotel Restaurant tout confort
Spécialités Provençales
Plage privée Parc auto
Sur la route de La Ciota-Bandol
La Ciota (B.-d.-R.)
So it must be on the scenic coastal road from La Ciotat to Bandol. My best guess is that "Chez Patrice" is today the one-Michelin-star restaurant "La Table de Nans", located at 126 Cor du Liouquet, in La Ciotat.
Their website has just this to say about the history of the place:
"Located in an exceptional setting between La Ciotat and Saint Cyr, the building of "l'auberge du Revestel" was restored in 2016."
And a comment on a website dedicated to the nearby Restaurant Roche Belle confirms that "Chez Patrice", "l'auberge du Revestel" and "table de Nans" were all at the same place:
"Nous sommes locaux et avons découverts ce restaurant seulement le mois dernier (suite infos copains) alors que j'ai passé une partie de mon enfance et adolescence "chez Patrice" (Revestel puis chez Nans)!!!"
I hope to have it right this time: the first Bourbaki La Ciotat-meeting in 1955 took place "Chez Patrice" whereas the second 1958-congress was held at 'Maison Rustique Olivette', the property of Schwartz's friend Daniel Guérin.
Still, if you compare Serre's letter to this paragraph from Schwartz's autobiography, there's something odd:
"I knew Daniel Guérin very well until his death. Anarchist, close to Trotskyism, he later joined Marceau Prevert's PSOP. He had the kindness, after the war, to welcome in his property near La Ciotat one of the congresses of the Bourbaki group. He shared, in complete camaraderie, our life and our meals for two weeks. I even went on a moth hunt at his house and caught a death's-head hawk-moth (Acherontia atropos)."
Schwartz was not present at the second La Ciotat-meeting, and he claims Guérin shared meals with the Bourbakis whereas Serre says he was in Paris and they had the whole house to themselves.
Moral of the story: accounts right after the event (Serre's letter) are more trustworthy than later recollections (Schwartz's autobiography).
Dear Collaborators of Nicolas Bourbaki, please make all Bourbaki material (Diktat, La Tribu, versions) publicly available, certainly those documents older than 50 years.
Perhaps you can start by adding the missing numbers 36 and 49 to your La Tribu: 1940-1960 list.
Le Guide Bourbaki : Celles-sur-Plaine
Published July 30, 2022 by lievenlb
Bourbaki held His Spring-Congresses between 1952 and 1954 in Celles-sur-Plaine in the Vosges department.
La Tribu 27, 'Congres croupion des Vosges' (March 8th-16th, 1952)
La Tribu 30, 'Congres nilpotent' (March 1st-8th, 1953)
La Tribu 33, 'Congres de la tangente' (March 28th-April 3rd, 1954)
As we can consult the Bourbaki Diktat of the first two meetings, there is no mystery as to their place of venue. From Diktat 27:
"The Congress of March 1952 will be held as planned in Celles-sur-Plaine (Vosges) at the Hotel de la Gare, from Sunday March 9 at 2 p.m. to Sunday March 16 in the evening. A train leaves Nancy on Sunday morning at 8:17 a.m., direction Raon-l'Etappe, where we arrive at 9:53 a.m.; from there a bus leaves for Celles-sur-Plaine (11 km away) at 10 am. Please bring big shoes for the walks (there will probably be a lot of snow on the heights)."
Even though few French villages have a train station, most have a 'Place de la Gare', indicating the spot where the busses arrive and leave. Celles-sur-Plaine is no exception, and one shouldn't look any further to find the 'Hotel de la Gare'.
This Hotel still exists today, but is now called 'Hotel des Lacs'.
At the 1952 meeting, Grothendieck is listed as a 'visitor' (he was a guinea-pig earlier and would only become a Bourbaki-member in 1955). He was invited to settle disputes over the texts on EVTs (Topological Vector Spaces). In the quote below from La Tribu 27 'barrel' refers of course to barreled space:
"But above all a drama was born from the laborious delivery of the EVTs. Eager to overcome the reluctance of the opposition, the High Commissioner attempted a blackmail tactic: he summoned Grothendieck! He hoped to frighten the Congress members to such an extent that they would be ready to swallow barrel after barrel for fear of undergoing a Grothendieckian redaction. But the logicians were watching: they told Grothendieck that, if all the empty sets are equal, some at least are more equal than others; the poor man went berserk, and returned to Nancy by the first train."
The 1953 meeting also had a surprise guest, no doubt on Weil's invitation, Frank Smithies, who we remember from the Bourbaki wedding joke.
Frank Smithies seated in the middle, in between Ralph Boas (left) and Andre Weil (right) at the Red Lion, Grantchester in 1939.
At the 1954 meeting we see a trace of Bourbaki's efforts to get a position for Chevalley at the Sorbonne.
"Made sullen by the incessant rain, and exhausted by the electoral campaigns of La Sorbonne and the Consultative Committee, the faithful poured out their indecisive bile on the few drafts presented to them, and hardly took any serious decisions."
Charlie Hebdo on Grothendieck
Charlie Hebdo, the French satirical weekly newspaper, victim of a terroristic raid in 2015, celebrates the 30th anniversary of its restart in 1992 (it appeared earlier from 1969 till 1981).
Charlie's collaborators have looked at figures who embody, against all odds, freedom, and one of the persons they selected is Alexandre Grothendieck, 'Alexandre Grothendieck – l'équation libertaire'. Here's why
"A Fields Medal winner, ecology pioneer and hermit, he threw honours, money and his career away to defend his ideas."
If you want to learn something about Grothendieck's life and work, you'd better read the Wikipedia entry than this article.
Some of the later paragraphs are even debatable:
"But at the end of his life, total derailment, he gets lost in the meanders of madness. Is it the effect of desperation? of too much freedom? or the abuse of logic (madness is not uncommon among mathematicians, from Kürt Godel to Grigori Perelman…)? The rebel genius withdraws to a village in the Pyrenees and refuses all contact with the outside world."
"However, he silently continues to do math. Upon his death in 2014, thousands of pages will be discovered, of which the mathematician Michel Demazure estimates that "it will take fifty years to transform [them] into accessible mathematics"."
If you want to read more on these 'Grothendieck gribouillis', see here, here, here, here, here, and here.
Le Guide Bourbaki : La Ciotat
Two Bourbaki-congresses were organised at the Côte d'Azur, in La Ciotat, claiming to have one of the most beautiful bays in the world.
La Tribu 35, 'Congres du banc public' (February 27th – March 6th, 1955)
La Tribu 46, 'Congres du banquet auxiliaire' (October 5th-12th, 1958)
As is the case for all Bourbaki-congresses after 1953, we do not have access to the corresponding Diktat, making it hard to find the exact location.
The hints given in La Tribu are also minimal. In La Tribu 34 there is no mention of a next conferences in La Ciotat, in La Tribu 45 we read on page 11:
"October Congress: It will take place in La Ciotat, and will be a rump congress ('congres-croupion'). On the program: Flat modules, Fiber carpets, Schwartz' course in Bogota, Chapter II and I of Algebra, Reeditions of Top. Gen. III and I, Primary decomposition, theorem of Cohen and consorts, Local categories, Theorems of Ad(o), and (ritually!) abelian varieties."
La Tribu 35 itself reads:
"The Congress was held "chez Patrice", in La Ciotat, from February 27 to March 6, 1955.
Presents: Cartan, Dixmier, Koszul, Samuel, Serre, le Tableau (property, fortunately divisible, of Bourbaki).
The absence, for twenty-four hours, of any founding member, created a euphoric climate, consolidated by the aioli, non-cats, and sunbathing by the sea. We will ask Picasso for a painting on the theme 'Bourbaki soothing the elements'. However, some explorations were disturbed by barbed wire, wardens, various fences, and Samuel, blind with anger, declared that he could not find 'la patrice de massage'."
The last sentence seems to indicate that the clue "chez Patrice" is a red herring. There was, however, a Hotel-Restaurant Chez Patrice in La Ciotat.
But, we will find out that the congress-location was elsewhere. (Edit August 4th: wrong, see the post La Ciotat (2).)
As to that location, La Tribu 46 has this to say:
"The Congress was held in a comfortable villa, equipped with a pick-up, rare editions, tasty cuisine, and a view of the Mediterranean. In the deliberation room, Chevalley claimed to see 47 fish (not counting an object, in the general shape of a sea serpent which served as an ashtray); this prompted him to bathe; but, indisposed by a night of contemplation in front of Brandt's groupoid, he pretended to slip all his limbs into the same hole in Bruhat's bathing suit."
Present in 1958 were: Bruhat, Cartan, Chevalley, Dixmier, Godement, Malgrange and Serre.
So far, we have not much to go on. Luckily, there are these couple of sentences in Laurent Schwartz' autobiography Un mathématicien aux prises avec le siècle:

"I knew Daniel Guérin very well until his death. Anarchist, close to Trotskyism, he later joined Marceau Prevert's PSOP. He had the kindness, after the war, to welcome in his property near La Ciotat one of the congresses of the Bourbaki group. He shared, in complete camaraderie, our life and our meals for two weeks. I even went on a moth hunt at his house and caught a death's-head hawk-moth (Acherontia atropos)."
Daniel Guérin is known for his opposition to Nazism, fascism, capitalism, imperialism and colonialism. His revolutionary defense of free love and homosexuality influenced the development of queer anarchism.
Now we're getting somewhere.
But there are some odd things in Schwartz' sentences. He speaks of 'two weeks' whereas both La Ciotat-meetings only lasted one week. Presumably, he takes the two together, so both meetings were held at Guérin's property.
Stranger seems to be that Schwartz was not present at either congress (see above list of participants). Or was he? Yes, he was present at the first 1955 meeting, masquerading as 'le Tableau'. On Bourbaki photos, Schwartz is often seen in front of their portable blackboard, as we've seen in the Pelvoux-post. Here's another picture from that 1951-conference with Weil and Schwartz discussing before 'le tableau'. (Edit August 12th : wrong, La Tribu 37 lists both Schwartz and 'Le Tableau' among those present).
Presumably, Bourbaki got invited to La Ciotat via Schwartz' connection with Guérin in 1955, and there was a repeat-visit three years later.
But, where is that property of Daniel Guérin?
I would love to claim that it is La Villa Deroze (sometimes called the small Medici villa in La Ciotat), named after Gilbert Deroze. From the website:
"Gilbert Deroze's commitment to La Ciotat (he will be deputy mayor in 1947) is accompanied by a remarkable cultural openness. The house therefore becomes a place of hospitality and artistic and intellectual convergence. For example, it is the privileged place of reception for Daniel Guérin, French revolutionary writer, anti-colonialist, activist for homosexual emancipation, theoretician of libertarian communism, historian and art critic. But it also receives guests from the place that the latter had created nearby, the Maison Rustique Olivette, a real center of artistic residence which has benefited in particular from the presence of Chester Himes, Paul Célan, the "beat" poet Brion Gysin, or again of the young André Schwarz-Bart."
Even though the Villa Deroze sometimes received guests of Guérin, this was not the case for Bourbaki as Schwartz emphasises that the congress took place in Guérin's property near La Ciotat, which we now have identified as 'Maison (or Villa) Rustique Olivette'.
From the French wikipedia entry on La Ciotat:
"In 1953 the writer Daniel Guérin created on the heights of La Ciotat, Traverses de la Haute Bertrandière, an artists' residence in his property Rustique Olivette. In the 1950s, he notably received Chester Himes, André Schwartz-Bart, in 1957, who worked there on his book The Last of the Righteous, Paul Celan, Brion Gysin. Chester Himes returned there in 1966 and began writing his autobiography there."
Okay, now we're down from a village (La Ciotat) to a street (Traverses de la Haute Bertrandière), but which of these fabulous villas is 'Maison Rustique Olivette'?
I found one link to a firm claiming to be located at the Villa Rustique Olivette, and giving as its address: 130, Traverses de la Haute Bertrandière.
If this information is correct, we have now identified the location of the last two Bourbaki congresses in La Ciotat as 'Maison Rustique Olivette', with coordinates 43.171122, 5.597150.
Grothendieck's haircut
Browsing through La Tribu (the internal report of Bourbaki-congresses), sometimes you'll find an answer to a question you'd never ask?
Such as: "When did Grothendieck decide to change his looks?"
Photo on the left is from 1951 taken by Paulo Ribenboim, on a cycling tour to Pont-a-Mousson (between Nancy and Metz). The photo on the right is from 1965 taken by Karin Tate.
From La Tribu 43, the second Bourbaki-congress in Marlotte from October 6th-11th 1957:
"The congress gave an enthusiastic welcome to Yul Grothendieck, who arrived in his Khrushchev haircut, in order to enjoy more comfortably the shadow of the sputniks. Seized with jealousy, Dixmier and Samuel rushed to the local hairdresser, who was, alas, quite unable to imitate this masterpiece."
This Marlotte-meeting was called 'Congres de la deuxieme lune', because at their first congress in Marlotte, the hotel-owner thought this group of scientists was preparing for a journey to the moon. Bourbaki was saddened to find out that ownership of the 'Hotel de la mare aux fées' changed over the two years between meetings, for He hoped to surprise her with a return visit just at the time the first Sputnik was launched (October 4th, 1957).
Given the fact that the 1957-summer Bourbaki-congress lasted until July 7th, and that most of the B's may have bumped into G over the summer, I'd wager that the answer to this most important of questions is: late summer 1957.
Le Guide Bourbaki : Royaumont
At least six Bourbaki-congresses were held in 'Royaumont':
La Tribu 18 : 'Congres oecumenique du cocotier', April 13th-25th 1949
La Tribu 22 : 'Congres de la revanche du cocotier', April 5th-17th 1950
La Tribu without number : 'Congres de l'horizon', October 8th-15th 1950
La Tribu 26 : 'Congres croupion', October 1st-9th 1951
La Tribu 31 : 'Congres de la revelation du reglement', June 6th-19th 1953
La Tribu 32 : 'Congres du coryza', October 2nd-9th 1953
All meetings were pre-1954, so the ACNB generously grants us all access to the corresponding Bourbaki Diktats. From Diktat 31:
"The next congress will be held at the Abbey of Royaumont, from Saturday June 6th (not from June 5th as planned) to Saturday June 20th.
We meet at 10 a.m., June 6 at the Gare du Nord before the ticket-check. Train to Viarmes (change at Monsoult at 10.35 a.m.). Do not bring a ticket: one car can transport 4 delegates.
Bring the Bible according to the following distribution:
Cartan: livre IV. Dixmier: Alg. 3, livre VI. Godement: Alg.4-5, Top. 1-2. Koszul: Top. 5-6-7-8-9. Schwartz: Top. 10, Alg. 1-2. Serre: Top. 3-4, livre V. Weil: Alg. 6-7, Ens. R."
Royaumont Abbey is a former Cistercian abbey, located near Asnières-sur-Oise in Val-d'Oise, approximately 30 km north of Paris, France.
How did Bourbaki end up in an abbey? From fr.wikipedia Abbaye de Royaumont:
In 1947, under the direction of Gilbert Gadoffre, Royaumont Abbey became the "International Cultural Center of Royaumont", an alternative place to traditional French university institutions. During the 1950s and 1960s, the former abbey became a meeting place for intellectual and artistic circles on an international scale, with numerous seminars, symposiums and conferences under the name "Cercle culturel de Royaumont". Among its illustrious visitors came Nathalie Sarraute, Eugène Ionesco, Alain Robbe-Grillet, Vladimir Jankélévitch, Mircea Eliade, Witold Gombrowicz, Francis Poulenc and Roger Caillois.
And… less illustrious, at least according to the French edition of Wikipedia, the Bourbaki-gang.
Le Guide Bourbaki : Murol(s)
The preparations for the unique Bourbaki-congress in Murols, start already in La Tribu 32 (fall 1953). On page 3:
"Summer 54: To suit Phileas Chevalley, Sammy Fogg and eventual Mexicans and Colombians, this Congress will be held from August 17 to 30. Samuel will look for a hotel in Auvergne, but everyone is asked to also prospect the hotels in his region."
One should recall that the ICM 1954 was held in Amsterdam from September 2nd-9th. It was convenient for Chevalley and Eilenberg (who were in the US) and for possible more foreigners to have Bourbaki's summer congress just before the ICM.
(Of course, Phileas Fogg is the main character in Jules Verne's Around the World in 80 days.)
A lot of people attended the Murols-meeting (La Tribu 34, 'Congres super-oecumenique du frigidaire et des revetements troues').
Apart from the regular crowd (Cartan, Chevalley, Delsarte, Dieudonne, Dixmier, Godement, Koszul, Sammy (=Eilenberg), Samuel, Schwartz, Serre and Weil), there was a guinea-pig (Serge Lang), an 'efficiency expert' (Saunders MacLane), two 'foreign visitors' (Hochschild and John Tate) and two 'honorable foreign visitors' (Iyanaga and Kosaku Yosida).
Probably because of this, extremely detailed travel instructions were given in La Tribu 33 (page 2):
"Next congress: will be held at the Hotel des Pins, in Murols (Puy-de-Dome) from August 17 to 30.
There is at least one night train departing from Paris, going to Clermont or Issoire, followed by a bus-ride to Murols; details will be given as soon as we know the summer schedules.
For motorized people not coming from the South by the N.9, nor from the West by the N.89: go to Clermont-Ferrand, leave it by the N.9 (route d'Issoire), turn right about 17 kms further (after the village of Veyre) to take the N.678 towards Champeix; in Champeix take (on the right) the N.496 (direction of St-Nectaire, Murols and Mont-Dore).
For those coming from the South by the N.9: turn left at Issoire to take the N.496 towards Campais, St-Nectaire, Murols. For those coming from the West by the N.89: leave it a little before Lequeille to take (on the right) the N.122, turn left 2 km further to take the N.496 towards Mont-Dore, the Chambon lake and Murols (road continuing towards Champeix and Issoire)."
If you follow this route on the map, you'll know that the congress was not held in Murols (departement de l'Aveyron), but in Murol (departement du Puy-de-Dome).
This time we do not have to search long for the place of venue as Hotel des Pins a Murol is still in operation.
Note the terras on the first floor, and the impressive line of trees in front of the hotel.
At first I felt frustrated as I couldn't figure out where this well-known photograph of the Murol-meeting was taken.
From left to right: Godement, Dieudonne, Weil, MacLane, and a smug-looking Serre (he knew he would be awarded a Fields medal in a few days' time).
Today it is impossible to have this view from the hotel-terras because of the trees in front. Still, the picture was taken from the terras, and the imposing building in the background is the late Turing Hotel in Murol.
Here's a picture of it with the Hotel des Pins in the background.
We've encountered the Murol-congress before on this blog when trying to piece together the history of the Yoneda lemma (Iyanaga was Yoneda's Ph.D. advisor, and probably on his advice MacLane met with Yoneda in the Gare du Nord to hear about his lemma).
On MacLane's role as 'efficiency expert' we have this in La Tribu 34:
"Frightened by the disorder of the discussions, some members had brought a world-renowned efficiency expert from Chicago. This one, armed with a hammer, tried hard and with good humor, but without much result. He quickly realized that it was useless, and turned, successfully this time, to photography."
As we've seen in Amboise and Pelvoux, Bourbaki likes to have His summer venues close to places of great sentimental value.
Murol is very close to Besse-et-Saint Anastaise, the place of the very first Bourbaki-meeting in 1935.
As always, this asks for a little pilgrimage. From La Tribu 34 (page 2):
"Despite the incessant rain, Bourbaki was attracted by the waters, and went to explore lots of Auvergnian lakes. Besse and its Lac Pavin were naturally entitled to a pilgrimage. Courageous founding-fathers and lower-members, braving the rain and fog, rushed across to the lake of Guery where their dripping pants aroused the suspicions of a bar maid, and beat the motorized elements there by several lengths. Others swam and rowed. Even the Japanese were entitled to their lake." | CommonCrawl |
How did Napier come to invent logarithms?
What was Napier's original logic, leading to his invention of logarithms?
In other words, how did Napier, using the mathematics that was available at that time, derive them?
AbdElWadoud
$\begingroup$ You can see at least John Napier; still useful is E. W. Hobson, John Napier and the invention of logarithms, 1614: A lecture (1914) $\endgroup$
– Mauro ALLEGRANZA
$\begingroup$ There is some relevant information in this math.se post on How was $e$ first calculated? $\endgroup$
– Mark Dominus
The idea was to simplify multiplicaton of numbers. If you ever tried to multiply $10$-digit numbers by hand you will see what I am talking about.
The idea is this. We have $a^{m+n}=a^m a^n.$ On the right hand side we have a product of $a^m$ and $a^n$, while on the left hand side a sum $m+n$. So if you write two progressions, one arithmetic and one geometric, on parallel lines:
$1,2,3,4,...$
$a,a^2,a^3,a^4...$
it is very easy to multiply two numbers in the second row: you find the corresponding numbers in the first row, add them and look at the answer which stands below your sum.
The next idea was to choose the arithmetic progression with very small increment, so that the second row will become "dense" and you can approximate any number by some number of the second row. This is the idea.
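Here is a toy Python sketch of that two-row idea. The base and the table size are arbitrary choices of mine, and the "backwards lookup" into the table is simulated with math.log; a real table user would scan the printed row instead:

```python
import math

# Two parallel progressions: arithmetic k and geometric base**k.
base = 1.0001                       # small increment makes the second row "dense"
table = {k: base**k for k in range(200_000)}

def table_multiply(u, v):
    """Multiply u*v with one addition of indices, as with a log table."""
    ku = round(math.log(u, base))   # stand-in for a backwards table lookup
    kv = round(math.log(v, base))
    return table[ku + kv]           # read the product off at the summed index

print(table_multiply(123.456, 789.012), 123.456 * 789.012)
```

Because each factor is rounded to the nearest table entry, the result agrees with the exact product to roughly four significant digits, which is exactly the trade-off of a finite table.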
Then one had to compute the table.
You may ask why people cared about multiplication of large numbers (with many digits). The reason is astronomy. As Kepler said, when he learned of the logarithms: this invention extended the life span of an astronomer many times.
Before Napier, there was another method called prosthaphaeresis (see Wikipedia). Instead of the simple formula $a^{m+n}=a^ma^n$ it used the more complicated formulas $$\cos m\cos n=(\cos(m+n)+\cos(m-n))/2$$ and the analogous formula for the $\sin m\sin n$. To find the product of two numbers $a=\cos m$ and $b=\cos n$ one used the tables of cosines (backwards) to find $m,n$ then compute $m+n$ and $m-n$, then use the tables of cosines again and then perform one more addition and divide by $2$. Division by 2 is easy.
This involves more additions/subtractions than the use of the logarithm tables.
But it was used because the tables of trigonometric functions existed long before Napier computed the tables of logarithms.
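A minimal sketch of prosthaphaeresis, with math.acos standing in for reading the cosine table backwards (the sample values are mine, chosen near cos 70° and cos 40°):

```python
import math

def prosthaphaeresis_product(a, b):
    """Multiply two numbers in (0, 1] via cos(m)cos(n) = (cos(m+n)+cos(m-n))/2."""
    m = math.acos(a)                 # backwards cosine-table lookup
    n = math.acos(b)
    return (math.cos(m + n) + math.cos(m - n)) / 2

print(prosthaphaeresis_product(0.342, 0.766), 0.342 * 0.766)  # both 0.261972
```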
Alexandre Eremenko
$\begingroup$ It can be interesting to note that one of the "possible inventors" of prosthaphaeresis, the German astronomer Paul Wittich, a collaborator of Tycho Brahe, taught the Scottish physician and astronomer John Craig in Frankfurt in 1576; Craig in turn was linked with John Napier. $\endgroup$
Multiplication is a lot of work; in numerical computing it is considered evil and many tricks are used to avoid it. So Napier created the table of logarithms. (Briggs worked with Napier to make the table more useful.) IIRC, Napier's logarithms were to base 0.9999999.
However, keep in mind that not all multiplication is evil. Particularly multiplication by 1+10^-n and 1-10^-n, which is merely a shift and add-- much less work than a full blown multiplication. Multiplication by 0.9999999 would be a shift and subtract. This is one of the tricks that Napier used. Similar trickery was used by Robert Flower in 1771, by Euler (year unknown), and in the CORDIC algorithm of Jack Volder (1959, eventually incorporated into most handheld calculators).
Robert Flower claimed that he could compute a logarithm to 20 decimal places in 7 or 8 minutes, with nothing more than a pencil and paper. * Back in the 1960's, before we had calculators, I would use an abacus. I wondered if there could be a way to compute logarithms on an abacus. I later found out that it is not possible. You need two of them. What about a recursive method that converges quadratically? You could get ten digit logs from your calculator, do one iteration, and have them to twenty digits. Yep, there is such an algorithm. *
Math teachers in elementary school dare not offer this knowledge to their students!
*** Left as an exercise for the student, of course!
richard1941
John Napier (1550-1617) published his table of logarithms Mirifici Logarithmorum Canonis Descriptio in 1614 after some twenty years of work and described his method of construction in Mirifici Logarithmorum Canonis Constructio, published posthumously in 1619 (Edinburgh) by his son Robert, with appendices by Napier and Henry Briggs (1561-1630). Briggs worked with Napier on improving the methods of calculation in the summers following the publication of the table until Napier's death and published his own method in 1624 (Vlacq's 1628 edition). The following is based on the Constructio, Macdonald's translation, and Goldstine's summary in A History of Numerical Analysis from the 16th through the 19th Century (1977), 2-13.
Properties of geometric and arithmetic progressions were well-known by Napier's time, and the connection between a sequence of powers and its corresponding sequence of exponents that we call the law of exponents has roots in ancient mathematics (cf. Euclid IX.11 and Archimedes, The Sand Reckoner). I do not know what enabled Napier to make the key connection that led to logarithms (or on the other hand, what prevented the discovery earlier). My take on Napier is that calculating was thought primarily in terms of integers or ratios of integers (or fractions), and Napier is at pains to make his calculation of logarithms in terms of integers accurate. His break-through idea is quite ingenious and surprising (to me, at least). It is to consider two points moving continuously, one (representing the logarithm) increasing "arithmetically" while the other (representing what we would now call the argument) decreasing "geometrically." Here arithmetic motion is the same as uniform motion while for geometric motion, Napier proves that the point moving geometrically toward a fixed point "has its velocities proportional to its distances from the fixed one." (Prop. 25) The notion is in modern terms that of the continuous interpolation of arithmetic and geometric sequences.
Like many first tries, this led to some inconveniences. By the time his table was published, Napier had in mind some improvements, which are included in an appendix to the Constructio. One of them is that the logarithm of 1 should be 0 and the logarithm of 10 should be some nice number, like $10^{10}$. (It is large, it appears, so that one can express logarithms accurately in terms of integers.) Briggs too saw some room for improvement, and this led him to seek out Napier. Briggs published a table in Logarithmorum Chilias Prima (1617), the first published table of base-10 logarithms, and contributed some Notes in another appendix to the Constructio.
Napier deals with velocities and distances as easily as we deal with derivatives and integrals. In fact, from a modern point of view, there is no real difference. In our modern terms, we can represent Napier's definition of a logarithm (Prop. 26):
The "artificial number" (logarithm) of a given sine is that which has increased Arithmetically always with the same velocity as the total sine [began] to decrease Geometrically and in the same time as the total sine decreased to that given sine.
Throughout the Constructio, Napier calls the logarithm numerus artificialis and the given number either sinus or numerus naturalis. The "total sine" is a fixed length through which the geometrically moving point can move. It can be translated as "radius" and a given sine will be less than or equal to this; however, trigonometry does not figure in the construction, so the names are somewhat meaningless. So a point begins moving along the total sine $r = TS$ from $T$ toward $S$. Let $x$ be the distance to $S$, so that $r-x$ is the distance moved. Then by Prop. 25, we have $${d \over dt} (r-x) = x$$ (where we can pick the unit of time so that the constant of proportionality is $1$). If we let $y$ denote the distance the arithmetically moving point has covered, we have $dy/dt = r$. Thus Napier's definition in modern terms is equivalent to the initial value problem $${d \over dt} (r-x) = x,\ {d \over dt} y = r,\ x(0) = r,\ y(0)=0$$ since at $x = r$ the two velocities are equal. (The notation here is the same as Goldstine's.) Napier takes $r=10^7$, and it follows that Napier's logarithm is $$y = r \log(r/x) = 10^7 \log (10^7/x)\,.$$ To calculate his logarithm $y$, he uses his definition to prove the inequalities $$r-x<y<\frac{r (r-x)}{x}, \quad \frac{r (x_1-x_2)}{x_1}<y_2-y_1<\frac{r (x_1-x_2)}{x_2}$$ where $x_1>x_2$ and $y_1,y_2$ are the corresponding logarithms. It is easy to prove these inequalities from properties of the integral, but Napier did it in terms of velocities and distances.
Napier's first step in constructing his table is to approximate the logarithm of $x=9\,999\,999$, one less than the total sine $10^7$. From the first inequality, he has $$1 < y < 1.00000010000001$$ He splits the difference and takes $y = 1.00000005$ (which represents an error of less than $10^{-14}$). With this he can fill out the logarithms of the geometric sequence with $r$ and this $x$ as the first terms. He does this down to $9999900.0004950$. Then he starts a new geometric sequence with $r$ and $x=9999900$ down to nearly $9995001$. He then constructs a more extensive table with 69 columns so that the columns and rows were each in geometric progressions, starting with $r=10^7$ in one corner and ending with $x=4998609.4034$, or roughly $r/2$, in the opposite corner. With these, he constructed his final table, the "Canon of Logarithms."
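That first step is easy to check numerically. This small sketch (mine, not Napier's) evaluates $y = r\log(r/x)$, the two bounds, and the split-the-difference value for $x = 9\,999\,999$:

```python
import math

r = 10**7

def napier_log(x):
    return r * math.log(r / x)

x = 9_999_999
lower, upper = r - x, r * (r - x) / x
print(lower, napier_log(x), upper)   # 1 < 1.00000005... < 1.00000010000001
print((lower + upper) / 2)           # 1.00000005..., Napier's tabulated value
```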
Note that if $y=l(x)$ is Napier's logarithm, then $$l(ab) = l(a)+l(b)-l(1)$$ From this he realized that if $l(1)=0$, logarithms would be easier to use. But he had already constructed his table. Some ask about the "base" of Napier's logarithm. Given that it is not a true modern logarithm, that's a bit difficult to answer. If you think that $10^7 \log x$ is the basic function underlying $l(x)$, you can think of it either as a scaled natural logarithm or a logarithm with the odd base of $\exp(10^{-7})$. Wikipedia's Napierian logarithm effectively has the base as $10^7/(10^7 - 1)$; but that would imply the logarithm of $x=9\,999\,999$ is exactly $y=1$, not $y=1.00000005$ as Napier calculated. Briggs' base-10 logarithm mentioned above is really $10^9 \log_{10} x$.
Michael E2
A trigonometric equation $\left(\sin{\frac{\pi}{7}}\right)^x+\left(\cos{\frac{\pi}{7}}\right)^x=1$
What's the way to solve $$\left(\sin{\frac{\pi}{7}}\right)^x+\left(\cos{\frac{\pi}{7}}\right)^x=1$$ I am looking for an analytic solution for this equation. With numerical solving I can find the solution(s), but if possible guide me to solve it like a real man!
remark: I know $x=2$ works here.
algebra-precalculus trigonometry
Khosrotash
Notice that $0 < \sin{\frac{\pi}{7}} , \ \cos{\frac{\pi}{7}} < 1$;
note that for every $2 < x$ we can conclude that:
$$ (\sin{\frac{\pi}{7}})^x+(\cos{\frac{\pi}{7}})^x < (\sin{\frac{\pi}{7}})^2+(\cos{\frac{\pi}{7}})^2 =1 \Longrightarrow \\ (\sin{\frac{\pi}{7}})^x+(\cos{\frac{\pi}{7}})^x < 1 \ \ \ \ \ \ \ \ \text{for every} \ \ \ \ 2 < x \in \mathbb{R} ; $$
also for every $x < 2$ we can conclude that:
$$ (\sin{\frac{\pi}{7}})^x+(\cos{\frac{\pi}{7}})^x > (\sin{\frac{\pi}{7}})^2+(\cos{\frac{\pi}{7}})^2 =1 \Longrightarrow \\ (\sin{\frac{\pi}{7}})^x+(\cos{\frac{\pi}{7}})^x > 1 \ \ \ \ \ \ \ \ \text{for every} \ \ \ \ 2 > x \in \mathbb{R} ; $$
so there is no other solution than $x=2$.
Davood Khajehpour
It has one solution: $x=2$.
Besides, $\left(\sin{\frac\pi7}\right)^x$ and $\left(\cos{\frac\pi7}\right)^x$ are strictly decreasing functions and therefore so is their sum. So, $2$ is the only solution.
José Carlos Santos
The only solution is $x=2$ which comes from the fundamental identity $\sin^2\alpha+\cos^2\alpha=1,\;\forall\alpha\in\mathbb{R}$
To show that there are no other solutions I computed the first derivative of
$f(x)=\sin ^x\left(\frac{\pi }{7}\right)+\cos ^x\left(\frac{\pi }{7}\right)-1$
$f'(x)=\sin ^x\left(\frac{\pi }{7}\right) \log \left(\sin \left(\frac{\pi }{7}\right)\right)+\cos ^x\left(\frac{\pi }{7}\right) \log \left(\cos \left(\frac{\pi }{7}\right)\right)$
as both logs are negative (because the sine and cosine are less than $1$), we can conclude that $f(x)$ is decreasing on $\mathbb{R}$; therefore $x=2$ is the only solution
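A quick numerical check of that monotonicity argument (a sketch, not part of the original answer):

```python
import math

s, c = math.sin(math.pi / 7), math.cos(math.pi / 7)
f = lambda x: s**x + c**x - 1

for x in (0, 1, 2, 3, 4):
    print(x, f(x))   # f > 0 for x < 2, f(2) = 0 up to rounding, f < 0 for x > 2
```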
Raffaele
May 2015, 35(5): 1801-1816. doi: 10.3934/dcds.2015.35.1801
The magnetic ray transform on Anosov surfaces
Gareth Ainsworth
Department of Pure Mathematics & Mathematical Statistics, University of Cambridge, CB3 0WB, United Kingdom
Received: May 2013. Revised: October 2013. Published: December 2014.
Assume (M,g,$\Omega$) is a closed, oriented Riemannian surface equipped with an Anosov magnetic flow. We establish certain results on the surjectivity of the adjoint of the magnetic ray transform, and use these to prove the injectivity of the magnetic ray transform on sums of tensors of degree at most two. In the final section of the paper we give an application to the entropy production of magnetic flows perturbed by symmetric 2-tensors.
Keywords: X-ray transform, Anosov surface, geodesics.
Mathematics Subject Classification: 53C25, 53C21, 53C2.
Citation: Gareth Ainsworth. The magnetic ray transform on Anosov surfaces. Discrete & Continuous Dynamical Systems, 2015, 35 (5) : 1801-1816. doi: 10.3934/dcds.2015.35.1801
Optimal profile limits for maternal mortality rate (MMR) in South Sudan
Gabriel Makuei, Mali Abdollahian & Kaye Marion
Reducing the Maternal Mortality Rate (MMR) is considered by the international community to be one of the eight Millennium Development Goals. Based on previous studies, the Skilled Assistant at Birth (SAB), the General Fertility Rate (GFR) and the Gross Domestic Product (GDP) have been identified as the most significant predictors of MMR in South Sudan.
This paper aims for the first time to develop profile limits for the MMR in terms of significant predictors SAB, GFR, and GDP. The paper provides the optimal values of SAB and GFR for a given MMR level.
A logarithmic multi-regression model is used to model MMR in terms of SAB, GFR and GDP. Data from 1986 to 2015 collected from Juba Teaching Hospital was used to develop the model for predicting MMR. Optimization procedures are deployed to attain the optimal levels of SAB and GFR for a given MMR level.
MATLAB was used to conduct the optimization procedures. The optimized values were then used to develop lower and upper profile limits for yearly MMR, SAB and GFR.
The statistical analysis shows that increasing SAB by 1.22% per year would decrease MMR by 1.4% (95% CI (0.4–5%)), while decreasing GFR by 1.22% per year would decrease MMR by 1.8% (95% CI (0.5–6.26%)).
The results also indicate that to achieve the UN recommended MMR levels of a minimum of 70 and a maximum of 140 by 2030, the government should simultaneously reduce GFR from the current value of 175 to 97 and 75, respectively, and increase SAB from the current value of 19 to 50 and 76, respectively.
This study for the first time has deployed optimization procedures to develop lower and upper yearly profile limits for the maternal mortality rate, targeting the UN recommended lower and upper MMR levels by 2030. The MMR profile limits have been accompanied by profile limits for the optimal yearly SAB and GFR levels. Having the optimal levels of the predictors that significantly influence the maternal mortality rate can effectively aid the government and international organizations to make informed, evidence-based decisions on resource allocation and intervention plans to reduce the risk of maternal death.
Improving maternal health and reducing related mortality has been a key concern of the international community as one of the Eight Millennium Development Goals (MDG) [1]. Maternal health is a major global development challenge, particularly in Africa, which accounts for about half of the world's maternal deaths, with little or no progress towards reduction of maternal mortality. Factors associated with maternal mortality in sub-Saharan Africa (SSA) include prenatal care coverage and skilled attendance at delivery. Kruk et al. (2010) investigated the impact of community perceptions of the quality of care provided by the local health system on pregnant women's decisions to deliver in a clinic. They suggest that improving the quality of care at first-level clinics may assist the efforts to increase facility delivery in sub-Saharan Africa [2]. However, there are other contributing factors, including socio-economic factors, macro-economic factors and physiological factors. South Sudan has about 1147 health care facilities that serve a population of around 13 million. Of these facilities, only 37 are hospitals. Ill-equipped buildings with poor hygiene are a common feature of these primary health care units. The chronic shortage of health care professionals at all levels is demonstrated by 1.5 doctors and two nurses per 100,000 people (National Bureau of Statistics Report [3], 2013–2015; World Health Organization, 2014 [4]). All of the above factors affect the total health care system and in particular the high maternal mortality rate.
The maternal mortality rate (MMR) in South Sudan is one of the highest in the world [3, 5, 6]. The risk of a pregnant woman dying is as high as one in seven. In Africa overall the risk of a pregnant woman dying is one in 16, in contrast with Asia (1 in 105), Europe (1 in 1895) and North America (1 in 3750) [4, 7,8,9]. Thus, there is an urgent need for evidence-based intervention to significantly reduce the maternal mortality rate in South Sudan.
The data released by WHO, UNICEF, UNFPA, the World Bank and the United Nations Population Division [4] show that even though the maternal mortality rate in South Sudan decreased from 1000 to 730 per 100,000 live births between 2005 and 2015, it is still one of the highest in the world. Combining this with the high fertility rates in the country gives a probability of 14.3% that an average reproductive South Sudanese woman (12–49 years of age) dies during pregnancy [10]. Therefore, understanding the trends in MMR and the factors significant to MMR is vital for developing health care systems that can ensure safe delivery.
The World Health Organization and USAID (2015) have recommended that all countries with high MMR levels should aim to reduce MMR to a minimum of 70 and a maximum of 140 per 100,000 live births by 2030 [7].
This paper investigates only the impact of the most influential economic factors on the non-HIV MMR in South Sudan [11]. The authors have previously identified the most influential economic factors for MMR in South Sudan [10]: the Skilled Attendants at Births (SAB), the General Fertility Rate (GFR) and the Gross Domestic Product (GDP). The MMR is expressed as the annual number of maternal deaths per 100,000 live births. The SAB is the annual percentage of Skilled Attendants at Birth; the GFR is expressed as the annual number of live births per 1000 women of reproductive age (12 to 49 years) in a population. The GDP is expressed in US Dollars (USD). The relevant data for this research was collected for the period from January 1986 to October 2015 from Juba Teaching Hospital (JTH), which is one of the major health care referral centres in South Sudan. The hospital is a 580-bed facility located in the capital city. Additional data was collected from other reliable sources such as the Reproductive Health Department of the Ministry of Health (MoH), the National Bureau of Statistics (NBS) Report [12], the South Sudan 2009 National Baseline Household Survey Report, the South Sudan Household Health Survey [13], the Census of Population and Housing [14], and United Nations organizations and their partners (e.g. WHO, UNAIDS, UNICEF, UNDP).
The findings show that the General Fertility Rate (GFR) is the most influential factor in increasing maternal deaths (MDs), followed by the Skilled Attendant at Birth (SAB) and the Gross Domestic Product (GDP). MMR decreases when SAB increases and GFR decreases.
In their comprehensive discussion of strategies to reduce MMR, Campbell, Graham, and the Lancet Maternal Survival Series Steering Group [15] stressed intra-partum care, which includes different types of skilled assistants at birth. Use of contraception to reduce the fertility rate is also mentioned, though considered less important. However, the viability of both methods is positive for South Sudan.
According to the data from Juba Teaching Hospital (JTH), any effort to increase SAB by increasing the number of trained attendants, either directly or indirectly, would affect the GFR and the GDP. According to the current statistics, only 20% of births are attended by skilled assistants [3]. This indicates the need to increase the SAB level and decrease the GFR level. For a more rapid impact on MMR, however, the programme of training birth attendants at village and community levels needs to be accelerated, and this involves considerable funding.
Currently, the GFR in South Sudan is very high at around five children per mother (6.9 in 2014, as cited by the Population Reference Bureau, 2016) [16], which is double the global average of 2.5. A decrease in GFR has a much longer-term effect in reducing MMR; however, contraceptive use has only reached 4% of the population, compared to 62% globally [16]. Consequently, there is scope for improvement in this respect, yet acceleration in reducing GFR involves substantial funding to provide improved and affordable health care access to pregnant women. Although an increase in GDP would increase MMR, because of the cultural practice of polygamy, the country needs economic growth as a fundamental requirement to provide improved and affordable health care access to pregnant women.
The only likely strategy in the current environment is to increase SAB while decreasing GFR in order to reduce MMR. The importance of attending to socio-economic factors to reduce MMR was highlighted in recent Indian research by Rai and Tulchinsky (2015) [17]. A decreased fertility rate will decrease the number of births and thus reduce the number of skilled birth attendants required. Consequently, large-scale efforts to use birth control methods and to increase trained birth attendants at village and community levels need to be implemented.
In this paper we use the logarithmic multi-regression model suggested by Makuei et al. [10] to estimate MMR based on SAB, GFR and GDP. Mathematical optimization is then used to obtain the optimal values of SAB and GFR (when GDP is kept constant) for a given level of MMR. This study aims for the first time to develop yearly lower and upper profile limits for the MMR expressed in terms of the significant predictors SAB, GFR and GDP, using real data collected from the resources outlined above. The MMR profile limits are then accompanied by yearly optimal profile limits of SAB and GFR to achieve the recommended UN MMR levels by 2030. The optimal profile limits provide a quantitative guideline for the government and partners in terms of yearly SAB and GFR targets in order to reduce MMR to the level recommended by the UN agencies [4].
Based on the previous study, log-linear regression can model maternal deaths more accurately than Poisson regression [10]. Thus, a log-linear regression model has been developed and is used for optimization purposes. A reliable model that can estimate maternal deaths, together with the optimized values of its corresponding predictors, will assist the government in making informed decisions on resource allocation in order to reduce the domestic MMR.
The results of our analysis show that increasing SAB rapidly to the highest level is possible. Under this condition, a slow increase in GFR will not increase MMR too much above the UN maximum recommended level of 140. On the other hand, considering that the GFR in South Sudan is already at almost the highest level, it is more likely that it will decrease with increased education, income levels and awareness.
This section outlines the data collection tasks, prediction model for MMR, mathematical optimization and development of linear profile limits for MMR in terms of its significant predictors.
The research has used 30 years of data (1986–2015) to carry out the statistical analysis. The data was collected from the Department of Statistics at the Juba Teaching Hospital (JTH), the Reproductive Health Department of the Ministry of Health (MoH) [3], the National Bureau of Statistics (NBS) Report [12], the South Sudan 2009 National Baseline Household Survey Report, the South Sudan Household Health Survey [13], the Census of Population and Housing [14], and United Nations organizations and their partners (e.g. WHO, UNAIDS, UNICEF, UNDP).
The yearly data included the number of non-HIV+/AIDS maternal deaths and the yearly SAB, GDP and GFR. The data on GDP was mainly obtained from the National Bureau of Statistics (NBS) yearly reports, the World Health Organization (WHO), UNICEF, the World Bank and the United Nations Population Division [12].
Prediction model for MMR
Several models for predicting MMR based on different predictors were developed by Makuei et al. [10]. We used a randomly selected two-thirds of the yearly data to build the models. The models were then used to predict the remaining ten years of data. The mean errors and the standard error of the mean (SE Mean) were used to compare the efficacy of the models. The analysis was carried out using Microsoft Excel, R and Minitab version 17 statistical software.
The following two models were the predictive models best describing MMR (based on their respective mean error and SE Mean):
Log Regression Equation, R² = 77.11%:
$$ \mathrm{Log}\,(\text{Non-HIV/AIDS}) = -20.8 - 8.30\,\mathrm{Log}(\mathrm{SAB}) + 8.10\,\mathrm{Log}(\mathrm{GFR}) + 5.12\,\mathrm{Log}(\mathrm{GDP}) $$
Poisson Regression Equation, R² = 79.75%:
Non-HIV+/AIDS MDs rate per 1000 = exp(Y′), where
$$ Y' = 4.227 - 0.3819\,\mathrm{SAB} + 0.03237\,\mathrm{GFR} + 0.002902\,\mathrm{GDP} $$
Since the mean error and SE Mean for the log-linear regression are much smaller than those for the Poisson regression, we conclude that log-linear regression outperforms Poisson regression in predicting the MMR for South Sudan.
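For readers who want to replay this comparison, a minimal sketch follows (this is not the authors' code). It fits a log-linear OLS model and a Poisson GLM on a train/test split and compares hold-out mean errors. Because the JTH records are not publicly available, the data here are synthetic, generated from the log-linear coefficients reported later in this paper, and all variable ranges are illustrative assumptions.

```python
# Illustrative model comparison on synthetic data (the real JTH data are
# not public): log-linear OLS vs Poisson GLM, as described in the text.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30
sab = rng.uniform(10, 25, n)        # assumed range for SAB (%)
gfr = rng.uniform(160, 200, n)      # assumed range for GFR
gdp = rng.uniform(1500, 2100, n)    # assumed range for GDP (USD)
# Synthetic MMR generated from the paper's fitted log-linear coefficients.
mmr = np.exp(-10 - 1.73 * np.log(sab) + 2.83 * np.log(gfr)
             + 0.943 * np.log(gdp) + rng.normal(0, 0.1, n))

X_log = sm.add_constant(np.column_stack([np.log(sab), np.log(gfr), np.log(gdp)]))
X_raw = sm.add_constant(np.column_stack([sab, gfr, gdp]))
train, test = slice(0, 20), slice(20, 30)

log_fit = sm.OLS(np.log(mmr[train]), X_log[train]).fit()
pois_fit = sm.GLM(mmr[train], X_raw[train], family=sm.families.Poisson()).fit()

print("mean error, log-linear:", (np.exp(log_fit.predict(X_log[test])) - mmr[test]).mean())
print("mean error, Poisson:   ", (pois_fit.predict(X_raw[test]) - mmr[test]).mean())
```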
In this paper we deployed Ln-regression and used 30 years of data to develop the following optimal prediction model for MMR.
$$ \mathrm{Ln}(\mathrm{MMR}) = -10 - 1.73\,\mathrm{Ln}(\mathrm{SAB}) + 2.83\,\mathrm{Ln}(\mathrm{GFR}) + 0.943\,\mathrm{Ln}(\mathrm{GDP}) $$
Equation (3) indicates that one unit change in Ln (SAB) will decrease Ln (MMR) by 1.73 units while one unit change in Ln (GFR) and Ln (GDP) will increase Ln (MMR) by 2.83 and 0.943 respectively. As the relationships are logarithmic, the effect on actual values of MMR, in terms of maternal death per 100,000 live births, will be several times higher.
Compared to the decrease in MMR which can be brought about by increasing Ln (SAB), the increasing effect of Ln (GFR) on Ln (MMR) is much higher (1.64 times) than that of Ln (SAB). Meanwhile, the effect that one unit change in Ln (GDP) has on Ln (MMR) is (0.55 times) less than that of Ln (SAB).
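As a quick plausibility check, eq. (3) can be evaluated directly. The short sketch below (again, not the authors' code) back-transforms the fitted model; plugging in the 2015 values quoted elsewhere in the paper (SAB = 19, GFR = 175) with GDP held at its 1986–2015 average of 1772 returns an MMR of roughly 720, close to the reported figure of about 730.

```python
# Evaluating the fitted log-linear model of eq. (3) directly.
import math

def predict_mmr(sab, gfr, gdp=1772.0):
    """MMR (deaths per 100,000 live births) from eq. (3).

    Coefficients are those reported in the paper; the default GDP is the
    1986-2015 average used there (Ln(GDP) = 7.480).
    """
    ln_mmr = (-10.0 - 1.73 * math.log(sab)
              + 2.83 * math.log(gfr) + 0.943 * math.log(gdp))
    return math.exp(ln_mmr)

print(round(predict_mmr(sab=19.0, gfr=175.0)))  # ~719
```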
This result on GDP is aligned with the finding of the authors in [32], who investigated the relationship between MMR and GDP in 79 developing countries and concluded that per capita GDP was one of the most significant predictors (− 0.83) of MMR. Similarly, in China, Feng XL et al. observed that log (GDP) per capita was a determinant of crude MMR, with an adjusted rate ratio of 0.85–0.86 compared to a crude ratio of 0.66 [18]. In their study, Du et al. noted that the reduction of MMR over the period 1996–2009 in Guizhou province of China was negatively related to GDP [19]. In our study, due to the lack of yearly GDP data for South Sudan, the GDP value was held constant at the average GDP over the period 1986–2015 (Ln (GDP) = 7.480, or GDP = 1772).
Optimization is often used to find optimal values and to observe patterns and structures in data over time. In this analysis we used MATLAB, Excel Solver, R, and Minitab 17 to conduct the optimization procedures and obtain optimal values of the predictors SAB and GFR for a given value of MMR. The optimized values were then used to develop and plot lower and upper profile limits for MMR, SAB and GFR in order to achieve the UN recommended lower and upper MMR limits of 70 and 140.
Mathematical optimization
In mathematics, computer science, and operations research, mathematical optimization is the selection of a best value from some set of available alternatives. An optimization problem consists of minimizing or maximizing a real function systematically by choosing input values for its variables from within an allowed domain. In addition, optimization includes finding the best available values of an objective function over a defined domain or input, possibly subject to a variety of constraints.
Optimization using solver package and MATLAB
Solver is part of a collection of commands used to determine the minimum or maximum value of one cell by changing the values of other cells. With Solver, an optimal (minimum or maximum) value can be found for the objective cell, subject to constraints or limits on the values of the predictors. In this study, Ln (SAB) and Ln (GFR) were optimized for given values of Ln (MMR) while keeping Ln (GDP) constant at 7.480 (GDP = 1772), which is the average of Ln (GDP) over the period for which data were collected. The results from Solver were then confirmed against the results of the algorithm developed in MATLAB. The optimization procedure for attaining the optimal maximum Ln (SAB) and minimum Ln (GFR) values for a given Ln (MMR) level is outlined in the algorithm presented in Fig. 1 below, and a code sketch of the same idea follows the figure caption.
Calculation optimal max Ln (SAB) and min Ln (GFR) values for a given MMR level
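A generic sketch of this constrained search is given below. The authors used MATLAB and Excel Solver; this Python/SciPy version is only illustrative. For a target Ln (MMR) it looks for a pair (Ln (SAB), Ln (GFR)) satisfying eq. (3) while pushing SAB up and GFR down. The equal weighting of the two goals and the upper bound SAB ≤ 100% are assumptions not stated in the paper, so the output will not reproduce Table 2 exactly.

```python
# Illustrative version of the Fig. 1 search: maximize Ln(SAB) and minimize
# Ln(GFR) subject to the fitted model of eq. (3) and the stated bounds.
import numpy as np
from scipy.optimize import minimize

LN_GDP = 7.480  # average Ln(GDP) over 1986-2015, held constant

def model_gap(x, ln_mmr_target):
    # Residual of eq. (3) at (Ln(SAB), Ln(GFR)); zero means the pair
    # produces exactly the target Ln(MMR).
    ln_sab, ln_gfr = x
    return -10.0 - 1.73 * ln_sab + 2.83 * ln_gfr + 0.943 * LN_GDP - ln_mmr_target

def optimal_pair(ln_mmr_target):
    objective = lambda x: x[1] - x[0]          # small Ln(GFR), large Ln(SAB)
    cons = {"type": "eq", "fun": model_gap, "args": (ln_mmr_target,)}
    bounds = [(3.178, np.log(100.0)),          # Ln(SAB): above current max, SAB <= 100%
              (0.0, 5.024)]                    # Ln(GFR): below current min
    res = minimize(objective, x0=[3.2, 5.0], bounds=bounds, constraints=[cons])
    return np.exp(res.x)                       # back-transform to (SAB, GFR)

print(optimal_pair(np.log(140.0)))             # UN upper target for 2030
```

With these assumptions the search settles where the Ln (GFR) bound binds; tightening the bounds year by year, as Fig. 1 does, traces out a profile.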
Linear profile limits
Profile monitoring systems help to identify factors related to an observed phenomenon, assess the effect of changing any factor(s) on the event, and predict the behaviour of the phenomenon under different situations. In many situations, the quality and performance of a process may be better characterized and summarized by the relationship between the response (dependent) variable and one or more explanatory (independent) variables, referred to as a profile [20].
The general parametric linear profile model relating the explanatory variables $X_{1i}, X_{2i}, \dots, X_{pi}$ to the response $Y_{ij}$ is given by
$$ Y_{ij} = A_{0j} + A_{1j}X_{1i} + \dots + A_{pj}X_{pi} + \varepsilon_{ij}, \qquad i = 1,2,\dots,n,\quad j = 1,2,\dots,k $$
where $A_{lj}$ $(l = 0,1,2,\dots,p)$ is a regression coefficient. The pair of observations $(X_{li}, Y_{ij})$ is obtained in the $j$th random sample, where $X_{li}$ is the $i$th design point $(i = 1,2,\dots,n)$ for the $l$th explanatory variable $(l = 1,2,\dots,p)$. It is assumed that the errors $\varepsilon_{ij}$ are independent, identically distributed (i.i.d.) variables with mean zero and variance $\sigma_j^2$ when the process is in control.
Profile monitoring is used to understand and to check the stability of this relationship over time [21].
Recently, many practitioners and researchers have used profile monitoring as a new sub-area of statistical process control, exploring its application in different disciplines and in real life [22,23,24]. The application of profile monitoring is often focussed on processes with multiple quality characteristics, and it has also been extended to detect clusters of disease incidence and used in public health surveillance [25,26,27,28,29,30,31,32].
In this study, profile monitoring will be used to monitor maternal mortality rate (MMR) in South Sudan and assess its variation influenced by SAB and GFR.
Development of profile limits
In this paper, MATLAB, Minitab, R and Excel Solver are used to obtain optimal values of Ln (SAB) and Ln (GFR) for a given value of Ln (MMR) while keeping Ln (GDP) constant at 7.480 (GDP = 1772), which is the average of Ln (GDP) over the period for which the data were collected.
Furthermore, to generate the lower and upper profile control limits for Ln (MMR), the predictive model presented in eq. (3) and the target minimum and maximum MMR levels proposed by the UN agencies (MMR = 70 and MMR = 140, recommended to be achieved by 2030) have been used. The current MMR in South Sudan is about 730 deaths per 100,000.
The statistical analysis shows that increasing SAB by 1.22% per year would decrease MMR by 1.4% (95% CI (0.4–5%)) while decreasing GFR by 1.22% per year would decrease MMR by 1.8% (95% CI (0.5–6.26%)).
The following steps were taken to generate the lower and upper profile limits for yearly target values of SAB and GFR in order to reduce MMR to the target minimum and maximum levels recommended by UN agencies.
Step 1 To achieve MMR = 140 (the maximum recommended by the UN) from the current value of 730 by 2030, the government should reduce MMR by approximately 39 deaths per year (or Ln (MMR) by 0.11 per year). Therefore, the optimization program was deployed to obtain the optimal sets of Ln (SAB) and Ln (GFR) for a given Ln (MMR), with the starting value of Ln (730). The Ln (MMR) was then reduced by 0.11, year by year. The results in terms of the Ln function and numerical values are presented in Tables 1 and 2. The profile limits are presented in Figs. 1, 2, 3 and 4. It should be noted that the constraint on Ln (SAB) is that it should be greater than the existing maximum (Ln (SAB) = 3.178), as our aim is to increase SAB year by year; the constraint on Ln (GFR) is that it should be smaller than the existing minimum value of 5.024, so that its value is further decreased. The results presented in the first three columns of Table 2 show that to decrease MMR from 730 to 140 by the year 2030, the government should increase SAB from the current value of 19 to 50, while the value of GFR should be decreased from the current value of 175 to 97. The five-year milestone values are highlighted in Table 2. Thus, for the year 2020, South Sudan should aim to decrease MMR from the current value of 730 to 421 by simultaneously increasing SAB from 19 to 26 and decreasing GFR from 175 to 144. By the year 2025, the country should aim to have MMR decline from the present value of 730 to 243 by simultaneously increasing SAB from 19 to 36 and decreasing GFR from 175 to 118. Moreover, by the year 2030, the government and stakeholders should aim to decrease MMR from the current value of 730 to 140 by increasing SAB from 19 to 50 and decreasing GFR from 175 to 97.
Step 2 To attain MMR = 70 (the minimum recommended by the UN) from the current value of 730 by 2030, Step 1 was followed, except that the target value was changed from 140 to 70 and the decrease in Ln (MMR) was 0.156 per year. The optimization results in terms of the Ln function and numerical values are presented in Tables 1 and 2. The last three columns of Table 2 show that to achieve an MMR of 70 by the year 2030, the authorities in South Sudan should reduce GFR from 175 to 75 while increasing SAB from the current value of 19 to 76. The target statistics for 2020 would be MMR = 334, with SAB increased to 30 and GFR reduced to 133. By the year 2025, the government and partners should aim to decrease MMR from the present value of 730 to 153 by simultaneously increasing SAB from 19 to 47 and decreasing GFR from 175 to 101. Therefore, developing health policies that target the MMR, SAB and GFR profile limits outlined in Table 2 would ensure the successful accomplishment of the UN's target maternal mortality rates.
Table 1 Error analysis for the independent variables SAB, GFR and GDP
Table 2 Optimal values of Ln (SAB) and Ln (GFR) for given Ln (MMR)
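The yearly MMR column of Table 2 can be reconstructed by interpolating Ln (MMR) linearly between the 2015 value of 730 and each UN target, as the sketch below shows; the intermediate values it prints (421 and 243 for the upper target, 334 and 153 for the lower target) match the milestones quoted in Steps 1 and 2.

```python
# Reconstructing the yearly Ln(MMR) profile of Steps 1-2.
import numpy as np

years = np.arange(2015, 2031)
for target, label in [(140.0, "upper target"), (70.0, "lower target")]:
    ln_path = np.linspace(np.log(730.0), np.log(target), len(years))
    # Yearly decrement in Ln(MMR): ~0.110 toward 140, ~0.156 toward 70.
    milestones = dict(zip(years[::5], np.round(np.exp(ln_path[::5])).astype(int)))
    print(label, milestones)   # 2015, 2020, 2025, 2030 values
```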
Fig. 2 Lower and upper profile limits for Ln (MMR), Ln (SAB) and Ln (GFR). a Profile limits when the target MMR for 2030 is 140. b Profile limits when the target MMR for 2030 is 70
Fig. 3 Lower and upper profile limits for the numerical values of MMR, SAB and GFR. a Profile limits when the target MMR for 2030 is 140. b Profile limits when the target MMR for 2030 is 70
Fig. 4 Three-dimensional surface plots of MMR versus SAB and GFR. a Target MMR 140. b Target MMR 70
The optimization results presented in Table 2 and Fig. 2 show that to reduce MMR to 140 by 2030, we should decrease Ln (MMR) by 0.11 units annually (MMR by 39 units). This will be achieved by simultaneously increasing the value of Ln (SAB) by 0.06 units and reducing Ln (GFR) by 0.04 units per year. In contrast, to achieve an MMR of 70, we should decrease Ln (MMR) by 0.156 per year. This requires concurrently increasing Ln (SAB) by 0.09 and decreasing Ln (GFR) by 0.06 per year.
The optimization results presented in Table 3 and Fig. 3 show that to achieve the UN upper target of 140 by 2030, we should simultaneously increase the value of SAB to 50 and reduce the value of GFR to 97, while to achieve an MMR of 70, we should simultaneously increase SAB to 76 and decrease GFR to 75.
Table 3 Optimal values of SAB and GFR for a given MMR when GDP is constant at 1772
Figure 4a and b visually show that to achieve the minimum MMR we must reduce GFR to its minimum value while increasing SAB to its maximum value. Figure 4b shows that a sharper reduction in MMR requires a steeper increase in SAB and a steeper decrease in GFR.
Reducing the maternal mortality rate has been a main concern of the global health agenda over the last two decades. It has been documented that 74–98% of maternal deaths can be averted even in the lowest-income nations [33,34,35,36,37,38]. According to Chou et al. and Yi et al., improving maternal health and reducing related mortality have been a key concern of the international community as one of the eight Millennium Development Goals (MDG 5) [32, 39]. Maternal mortality is a complex problem requiring complex intervention. It challenges the key performers in resource-limited countries to acknowledge the problem and scale up the means and advocated measures to address it. The maternal mortality rate in South Sudan is one of the highest in the world. Based on previous studies, the authors have identified the Skilled Assistant at Birth (SAB), the General Fertility Rate (GFR) and the per capita Gross Domestic Product (GDP) as the most significant predictors of MMR [10].
The study has deployed mathematical optimization methods and for the first time has developed the optimal lower and upper profile limits for MMR, SAB and GFR. Table 3 provides the yearly optimal values of these variables and can effectively aid the government to achieve the target maternal mortality rate recommended by UN agencies by 2030. The following discussion compares the findings from this research to other similar research, and also seeks to identify the best options for South Sudan to reduce MMR.
WHO has defined the term skilled assistant and listed the essential and additional skills required for the expected duties. Although skilled assistance can be beneficial at the individual level, at the population level the evidence is weak. The authors proposed a model which can reduce MMR by 16–33% through primary and secondary prevention of four major complications of pregnancy and delivery. A skilled assistant can reduce the mortality rate as well as morbidities. Timely access to quality care and professional staff are also important in the implementation of skilled assistance. In Pakistan, traditional birth attendants assist deliveries, but MMR remains high; training the traditional attendants, deploying them for assisted deliveries and increasing access to emergency care were found to be very effective [40]. Improved quality of primary care through professional midwifery, combined with referral care at hospitals, takes less time to reduce MMR significantly. As exemplified by the experiences of some developing countries, the time taken to reduce MMR from 400 to 200 is around 9–12 years, from 200 to 100 about 7–9 years, and from 100 to 50 about 4–8 years [41]. Graham et al. [42] point out that increasing skilled assistance can take time and significant resources and funds. Politics plays a critical role in agenda setting in health affairs; therefore, understanding the priorities of the political agenda in health is vital [43]. Any improvement in SAB can be influenced by political will, and the ingredients of methods implemented to increase SAB in other countries are useful for adaptation in South Sudan. The authors have also identified SAB as one of the most influential factors for MMR in South Sudan [10]. Our analysis shows that GFR is the most influential factor in increasing MMR, followed by SAB and GDP. Current South Sudan statistics show that only 20% of births are attended by skilled assistants [3].
In a review of international data, Stover and Ross (2010) [44] found that the use of contraceptives and family planning reduced maternal deaths by about one million between 1990 and 2005. This was due to the effect of contraceptives on fertility rates, especially in developing countries. Transition from low to high levels of contraceptive use can reduce a country's MMR by 450 per 100,000 live births. These observations justify the use of contraceptives to reduce GFR in South Sudan. Reduction of unsafe abortion led to a significant reduction in MMR in Uruguay between 2001 and 2015, according to the findings of Briozzo (2016) [45]; abortion as a fertility reduction method among pregnant women is risky. Our study also shows that GFR is the most influential factor in increasing MMR.
Other methods to reduce MMR
Mbaruku and Bergström [46] have reported a variety of strategies that reduced MMR from 933 to 186 per 100,000 live births during 1984–91 in Kigoma, Tanzania. Low-cost strategies combined with local problem-solving methods show that it is possible for developing countries, like South Sudan, to achieve a significant reduction in the MMR level.
Lassi and Bhutta (2015), in a review, stressed community-based integrated packages of care for both mothers and new-born babies to help reduce MMR along with neonatal mortality [47].
Clear evidence of the reduction of MMR through the use of contraceptives was provided in the review of data on 172 countries by Ahmed et al. [48]. Using predictive models, they estimated that 342,203 women died of maternal causes in 2008 and that contraceptive use prevented another 272,040 (44%) deaths. If the unmet need for contraceptive use were satisfied, another 104,000 maternal deaths could be prevented every year. The authors' basic model for estimating MMR is
$$ \log(\mathrm{PMDF}_i) = \beta_0 + \beta_1 \log(\mathrm{GDP}_i) + \beta_2 \log(\mathrm{GFR}_i) + \beta_3\,\mathrm{SAB}_i + \alpha^{c}_{j[i]} + \alpha^{R}_{k[i]} + \log(1 - a_i) + \varepsilon_i $$
This is similar to the model used in this paper.
In the above equation, PMDF is the proportion of maternal deaths among all deaths of reproductive-age women (15–49 years) in year i, country j, and geographical region k, while $\alpha^{c}$ and $\alpha^{R}$ are random intercepts for country j and geographical region k, respectively; $a_i$ is the proportion of deaths due to AIDS among women of reproductive age, and $\varepsilon_i$ is the error term. The GDP was adjusted according to purchasing power parity in 2005, and the authors modified the equation to predict MMR under contraceptive use.
In conclusion, the research outlined from other countries shows that an increase in SAB and a decrease in GFR can reduce MMR. These findings are consistent with those of this paper.
Based on the above discussion, it appears that initiatives to increase SAB and reduce GFR are vitally important for reducing MMR. However, in the short run, reducing GFR is relatively easier than increasing SAB.
Factors impacting MMR
This paper investigates the impact of SAB, GFR and GDP on the maternal mortality rate in South Sudan. However, as highlighted in the introduction, factors impacting the maternal mortality rate include socio-economic, macro-economic and physiological factors [1]. Lack of access to health care facilities is also a major factor, due to the lack of roads and transportation systems [7, 10]. More than 50% of the population walks three miles or more to the nearest primary health care unit. All of these factors affect the total health care system and in particular the high maternal mortality rate. Kruk et al. have investigated the impact of community perceptions of the quality of care provided by the local health system on pregnant women's decisions to deliver in a clinic. They have suggested that improving the quality of care at first-level clinics may assist the efforts to increase facility delivery in sub-Saharan Africa [2].
The proposed Ln-linear regression model deployed in this paper, and the constraints on SAB and GFR used to develop the profiles, are based on 30 years of manually recorded data obtained from the sources listed under "data collection". The authors acknowledge that some of these data may be estimates rather than true values. Although this may result in underestimation or overestimation of the profiles, it is unlikely to affect the validity of the analyses. Furthermore, our results on the impact of GFR, SAB and GDP on MMR in South Sudan are aligned with those of other researchers (as highlighted in the discussion section).
This study for the first time has deployed optimisation procedures to develop yearly lower and upper profile limits for the maternal mortality rate (MMR), targeting the UN recommended lower and upper MMR levels by 2030. The MMR profile limits have been accompanied by the profile limits for the optimal yearly values of SAB and GFR. Studies of the predictors in the logarithmic multi-regression model provide distinct evidence that increasing the Skilled Attendant at Birth (SAB) and decreasing the General Fertility Rate (GFR), while leaving the Gross Domestic Product (GDP) constant at 1772, can reduce the maternal mortality rate in South Sudan by 2030 to the limits proposed by the UN agencies (WHO, USAID, UNICEF & World Bank, 2015) and beyond. The statistical analysis shows that increasing SAB by 1.22% per year would decrease MMR by 1.4% (95% CI (0.4–5%)), while decreasing GFR by 1.22% per year would decrease MMR by 1.8% (95% CI (0.5–6.26%)), when the GDP is held constant.
The comparison of the findings of this study with other similar studies suggests that reducing GFR is more effective and achievable than increasing SAB when aiming to reduce MMR.
The optimal profile limits provide a quantitative guideline for the government and partners in terms of yearly SAB and GFR targets in order to reduce MMR to the level recommended by the UN. The outcomes of this study can effectively aid authorities to make informed, evidence-based intervention decisions on resource allocation to reduce the MMR.
AIDs: Acquired Immune Deficiency Syndrome
GDP: Gross Domestic Product
GFR: General Fertility Rate
HIV+: Human Immunodeficiency Virus positive
Inc.: increment (see Fig. 1)
Ln: natural logarithm
MATLAB: Matrix Laboratory, a multi-paradigm numerical computing environment and fourth-generation programming language
MDs: Maternal Deaths
Minitab: a statistical software package
MMR: Maternal Mortality Rate
Red: reduction (see Fig. 1)
SAB: Skilled Attendance at Births
SSA: Sub-Saharan Africa (a region in Africa)
UNICEF: United Nations International Children's Emergency Fund
USAID: United States Agency for International Development
WHO: World Health Organisation
Alkema L, Chou D, Hogan D, Zhang S, Moller AB, Gemmill A, Fat DM, Boerma T, Temmerman M, Mathers C, et al. Global, regional, and national levels and trends in maternal mortality between 1990 and 2015, with scenario-based projections to 2030: a systematic analysis by the UN maternal mortality estimation inter-agency group. Lancet. 2016;387(10017):462–74.
Kruk ME, Rockers PC, Mbaruku G, Paczkowski MM, Galea S. Community and health system factors associated with facility delivery in rural Tanzania: a multilevel analysis. Health Policy. 2010;97(2–3):209–16.
Ministry of Health and National Bureau of Statistics (NBS). The Republic of South Sudan: The Sudan Household Health Survey 2010. Juba, South Sudan; 2014.
Trends in Maternal Mortality: 1990 to 2015: estimates by WHO, UNICEF, UNFPA, World Bank Group and the United Nations Division.
Rau A. Reducing maternity mortality rate in South Sudan. In: Borgen; 2015.
South Sudan: Maternal and Child Health in South Sudan.
Ending Preventable Maternal Mortality: USAID Maternal Health Vision for Action Evidence for Strategic Approches.
WHO, UNICEF, UNFPA, World Bank Group and the United Nations Population Division. Measuring Maternal Mortality, key facts. In: Geneva Foundation for Medical Education and Research, Fact sheet No 348, vol. 20. Geneva 27, Switzerland: World Health Organization (WHO); 2016.
Risk - Lifetime Risk of Death in Childbearing.
Makuei G, Abdollahian M, Marion K. Modeling maternal mortality rate in South Sudan. Int'l Conf Information and Knowledge Engineering. 2016;107-112:6.
WHO. Fact Sheet. Geneva: World Health Organization (WHO); 2008.
National Bureau of Statistics (NBS). South Sudan Statistical Year Book 2015. Juba; 2015.
National Bureau of Statistics NBS. Census of Population and Housing (the NBS, 2008), Southern Sudan. Juba, South Sudan: National Bureau of Statistic Office; 2008.
National Bureau of Statistics SS. The South Sudan National Baseline Household Survey' 2009 Report. Juba Southern Sudan: Statistics NBo. National Bureau of Statistics' Office; 2009.
Campbell OM, Graham WJ, Lancet Maternal Survival Series steering group. Strategies for reducing maternal mortality: getting on with what works. Lancet. 2006;368(9543):1284–99.
World Highlights.
Rai RK, Tulchinsky TH. Addressing the sluggish progress in reducing maternal mortality in India. Asia Pac J Public Health. 2015;27(2):NP1161–9.
Feng XL, Zhu J, Zhang L, Song L, Hipgrave D, Guo S, Ronsmans C, Guo Y, Yang Q. Socio-economic disparities in maternal mortality in China between 1996 and 2006. BJOG. 2010;117(12):1527–36.
Du Q, Lian W, Naess O, Bjertness E, Kumar BN, Shi SH. The trends in maternal mortality between 1996 and 2009 in Guizhou, China: ethnic differences and associated factors. J Huazhong Univ Sci Technolog Med Sci. 2015;35(1):140–6.
Yin H, Zhao Y, Zhang Y, Zhang H, Xu L, Zou Z, Yang W, Cheng J, Zhou Y. Genome-wide analysis of the expression profile of Saccharomyces cerevisiae in response to treatment with the plant isoflavone, wighteone, as a potential antifungal agent. Biotechnol Lett. 2006;28(2):99–105.
Kang S, Ren D, Xiao G, Daris K, Buck L, Enyenihi AA, Zubarev R, Bondarenko PV, Deshpande R. Cell line profiling to improve monoclonal antibody production. Biotechnol Bioeng. 2014;111(4):748–60.
Gupta SK, Bansal D, Malhi P, Das R. Developmental profile in children with iron deficiency anemia and its changes after therapeutic iron supplementation. Indian J Pediatr. 2010;77(4):375–9.
Gupta S. Profile monitoring - control chart schemes for monitoring linear and low order polynomial profiles. A dissertation presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Arizona: Arizona State University; 2010.
Hosseinifard SZ. Monitoring and performance analysis of regression profiles. This thesis is submitted in total fulfillment of the requirements for the degree of Doctor of Philosophy. Melbourne, Victoria, Australia: RMIT University; 2012.
Woodall WH. Current research on profile monitoring. SciELO Brasil. 2007;17(3):420–5.
Grigg OA, Farewell VT. A risk-adjusted sets method for monitoring adverse medical outcomes. Stat Med. 2004;23(10):1593–602.
Grigg OA, Farewell VT, Spiegelhalter DJ. Use of risk-adjusted CUSUM and RSPRT charts for monitoring in medical contexts. Stat Methods Med Res. 2003;12(2):147–70.
Chopra M, Daviaud E, Pattinson R, Fonn S, Lawn JE. Saving the lives of South Africa's mothers, babies, and children: can the health system deliver? Lancet. 2009;374(9692):835–46.
Woodall J, Dixey R, South J. Control and choice in English prisons: developing health-promoting prisons. Health Promot Int. 2014;29(3):474–82.
Montgomery AL, Fadel S, Kumar R, Bondy S, Moineddin R, Jha P. The effect of health-facility admission and skilled birth attendant coverage on maternal survival in India: a case-control analysis. PLoS One. 2014;9(6):e95696.
Montgomery AL, Ram U, Kumar R, Jha P, Million Death Study C. Maternal mortality in India: causes and healthcare service use based on a nationally representative survey. PLoS One. 2014;9(1):e83331.
Chou D, Tuncalp O, Firoz T, Barreix M, Filippi V, von Dadelszen P, van den Broek N, Cecatti JG, Say L, Maternal Morbidity Working G. Constructing maternal morbidity - towards a standard tool to measure and monitor maternal health beyond mortality. BMC pregnancy and childbirth. 2016;16:45.
Bale JR, Stoll BJ, Lucas AO. Improving birth outcomes: meeting the challenge in the developing world. Washington: The National Academies Press; 2003.
Prual A. Reducing maternal mortality in developing countries: theory and practice. Medecine tropicale : revue du Corps de sante colonial. 2004;64(6):569–75.
Prual A, Bouvier-Colle MH, de Bernis L, Breart G. Severe maternal morbidity from direct obstetric causes in West Africa: incidence and case fatality rates. Bull World Health Organ. 2000;78(5):593–602.
Prual A, De Bernis L, El Joud DO. Potential role of prenatal care in reducing maternal and perinatal mortality in sub-Saharan Africa. Journal de gynecologie, obstetrique et biologie de la reproduction. 2002;31(1):90–9.
Prual A, Gamatie Y, Djakounda M, Huguet D. Traditional uvulectomy in Niger: a public health problem? Soc Sci Med. 1994;39(8):1077–82.
Prual A, Huguet D, Garbin O, Rabe G. Severe obstetric morbidity of the third trimester, delivery and early puerperium in Niamey (Niger). Afr J Reprod Health. 1998;2(1):10–9.
Yi S, Tuot S, Chhoun P, Pal K, Ngin C, Choub SC, Brody C. Improving prevention and care for HIV and sexually transmitted infections among men who have sex with men in Cambodia: the sustainable action against HIV and AIDS in communities (SAHACOM). BMC Health Serv Res. 2016;16(1):599.
Jokhio AH, Winter HR, Cheng KK. An intervention involving traditional birth attendants and perinatal and maternal mortality in Pakistan. N Engl J Med. 2005;352(20):2091–9.
Van Lerberghe W, De Brouwere V. Reducing maternal mortality in a context of poverty. Safe motherhood strategies: a review of the evidence. 2001;17:1–5.
Graham WJ, Bell JS, Bullough CH. Can skilled attendance at delivery reduce maternal mortality in developing countries. Safe motherhood strategies: a review of the evidence. 2001;17:97–130.
Jat TR, Deo PR, Goicolea I, Hurtig AK, San Sebastian M. The emergence of maternal health as a political priority in Madhya Pradesh, India: a qualitative study. BMC pregnancy and childbirth. 2013;13:181.
Stover J, Ross J. How increased contraceptive use has reduced maternal mortality. Matern Child Health J. 2010;14(5):687–95.
Briozzo L. From risk and harm reduction to decriminalizing abortion: the Uruguayan model for women's rights. Int J Gynaecol Obstet. 2016;134(Suppl 1):S3–6.
Mbaruku G, Bergstrom S. Reducing maternal mortality in Kigoma, Tanzania. Health Policy Plan. 1995;10(1):71–8.
Lassi ZS, Bhutta ZA. Community-based intervention packages for reducing maternal and neonatal morbidity and mortality and improving neonatal outcomes. Cochrane Database Syst Rev. 2015;(3) CD007754
Ahmed S, Li Q, Liu L, Tsui AO. Maternal deaths averted by contraceptive use: an analysis of 172 countries. Lancet. 2012;380(9837):111–25.
The authors would like to thank the WHO (World Health Organisation) and UNICEF (United Nations International Children's Emergency Fund) South Sudan Country Offices, the South Sudan National Bureau of Statistics (NBS) and Juba Teaching Hospital for providing the dataset. The authors thank the Australian Government for financially supporting this study through an Australian Government Research Training Program Scholarship. We also express our gratitude to Higher Degree by Research (HDR), RMIT University, Melbourne, Australia, for financial support. Furthermore, our appreciation and thanks go to the RMIT University Learning Centre, in particular Dr. Ken Manson, for their language advisory review. The contents are solely the responsibility of the authors and don't necessarily represent the official views of the supporting offices.
The research was funded by the Federal Government of Australia, through an Australian Government Research Training Program Scholarship, and by RMIT University Higher Degree Research (HDR). The funder didn't play a significant role in the planning of the study; the collection, analysis and interpretation of data; or the writing of the manuscript.
The datasets generated and analysed during the current study are not publicly available due to the ethical approval obtained (the authors are not allowed to release the data to the public domain), but are available from the corresponding author (Gabriel Makuei) on reasonable request. The contact details of Gabriel Makuei are: Email address: [email protected]; Mobile phone: (+ 61)470393936.
School of Science (Mathematical and Geospatial Sciences), College of Science, Engineering, and Health, RMIT University, GPO BOX 2476, Melbourne, VIC, 3001, Australia
Gabriel Makuei, Mali Abdollahian & Kaye Marion
Gabriel Makuei
Mali Abdollahian
Kaye Marion
GM designed the study and conducted the data collection, literature review, data analysis, interpretation of data and drafting of the manuscript. MA and KM were involved in the research design, literature review, data analysis, and proofreading and modification of the whole paper. All authors (GM, MA and KM) reviewed and approved the final version of the paper.
Correspondence to Gabriel Makuei.
G. Makuei is a PhD candidate at RMIT University, Melbourne, Australia, and was a lecturer in statistics and operations research courses at Dr John Garang University of Science and Technology, Bor, South Sudan, from 2010 to 2012. Mali Abdollahian and Kaye Marion are senior lecturer and lecturer, respectively, at RMIT University, School of Science, Melbourne, Australia.
To conduct this research, ethical approval was obtained from the South Sudan Government through the Ministry of Health (MoH) and the Ethical Review Committee of the National Medical Body, Juba, South Sudan. Informed consent was obtained from the Head of the Department of Health Policy, Planning, Budgeting and Research, together with the Department of Reproductive Health, Ministry of Health (MoH), South Sudan, before the research commenced. The research was also approved and registered by the authority concerned at RMIT University, under registration number ASEHAPP 97–16, Melbourne, Australia.
Makuei, G., Abdollahian, M. & Marion, K. Optimal profile limits for maternal mortality rate (MMR) in South Sudan. BMC Pregnancy Childbirth 18, 278 (2018). https://doi.org/10.1186/s12884-018-1892-0
Profile limits
Ln multi-regression
Clutters I
Posted on July 2, 2015 by Dillon Mayhew
I have been trying to firm up my feeling for the theory of clutters. To that end, I have been working through proofs of some elementary lemmas. For my future use, as much as anything else, I will post some of that material here.
A clutter is a pair $H=(S,\mathcal{A})$, where $S$ is a finite set, and $\mathcal{A}$ is a collection of subsets of $S$ satisfying the constraint that $A,A'\in\mathcal{A}$ implies $A\not\subset A'$. In other words, a clutter is a hypergraph satisfying the constraint that no edge is properly contained in another. For this reason we will say that the members of $\mathcal{A}$ are edges of $H$. Clutters are also known as Sperner families, because of Sperner's result establishing that if $|S|=n$, then
\[|\mathcal{A}|\leq \binom{n}{\lfloor n/2\rfloor}.\]
Clutters abound in 'nature': the circuits, bases, or hyperplanes in a matroid; the edge-sets of Hamilton cycles, spanning trees, or $s$-$t$ paths in a graph. Even a simple (loopless with no parallel edges) graph may be considered as a clutter: just consider each edge of the graph to be a set of two vertices, and in this way an edge of the clutter. There is one example that is particularly important for this audience: let $M$ be a matroid on the ground set $E$ with $\mathcal{C}(M)$ as its family of circuits, and let $e$ be an element of $E$. We define $\operatorname{Port}(M,e)$ to be the clutter
\[(E-e,\{C-e\colon e\in C\in \mathcal{C}(M)\})\]
and such a clutter is said to be a matroid port.
If $H=(S,\mathcal{A})$ is a clutter, then we define the blocker of $H$ (denoted by $b(H)$) as follows: $b(H)$ is a clutter on the set $S$, and the edges of $b(H)$ are the minimal members of the collection $\{X\subseteq S\colon |X\cap A|\geq 1,\ \forall A\in\mathcal{A}\}$. Thus a subset of $S$ is an edge of $b(H)$ if and only if it is a minimal subset that has non-empty intersection with every edge of $H$. Note that if $\mathcal{A}=\{\}$, then vacuously, $|X\cap A|\geq 1$ for all $A\in \mathcal{A}$, no matter what $X$ is. The minimal $X\subseteq S$ is the empty set, so $b((S,\{\}))$ should be $(S,\{\emptyset\})$. Similarly, if $\mathcal{A}=\{\emptyset\}$, then the collection $\{X\subseteq S\colon |X\cap A|\geq 1,\ \forall A\in\mathcal{A}\}$ is empty, so $b((S,\{\emptyset\}))$ should be $(S,\{\})$. The clutter with no edges and the clutter with only the empty edge are known as trivial clutters.
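Since the blocker is just the collection of minimal transversals of $\mathcal{A}$, it can be computed by brute force for small ground sets. Here is a short Python sketch (the function name and the example are mine, not from any clutter library). The example takes $H$ to be the edge set of a triangle, whose blocker consists of the minimal vertex covers, and then checks that $b(b(H))=H$, in line with the first lemma below.

```python
# Brute-force blocker: b(H) is the set of minimal subsets of S meeting
# every edge of H.  Iterating subsets in order of size guarantees that
# any non-minimal hitting set contains one already recorded.
from itertools import combinations

def blocker(S, edges):
    S = list(S)
    edges = [frozenset(A) for A in edges]
    if not edges:                 # b((S, {})) = (S, {emptyset})
        return [frozenset()]
    if frozenset() in edges:      # b((S, {emptyset})) = (S, {})
        return []
    hitting = []
    for k in range(len(S) + 1):
        for X in map(frozenset, combinations(S, k)):
            if all(X & A for A in edges) and not any(Y < X for Y in hitting):
                hitting.append(X)
    return hitting

H = [{1, 2}, {2, 3}, {1, 3}]                        # triangle edges
B = blocker({1, 2, 3}, H)
print(sorted(map(sorted, B)))                        # [[1, 2], [1, 3], [2, 3]]
print(sorted(map(sorted, blocker({1, 2, 3}, B))))    # b(b(H)) = H
```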
Our first lemma was noted by Edmonds and Fulkerson in 1970.
Lemma. Let $H=(S,\mathcal{A})$ be a clutter. Then $b(b(H))=H$.
Proof. If $H$ is trivial, the result follows by the discussion above. Therefore we will assume that $H$ has at least one edge and that the empty set is not an edge. This implies that $b(H)$ and $b(b(H))$ are also non-trivial. Let $A$ be an edge of $H$. Now every edge of $b(H)$ has non-empty intersection with $A$, by the definition of $b(H)$. Since $A$ is a set intersecting every edge of $b(H)$, it contains a minimal such set. Thus $A$ contains an edge of $b(b(H))$.
Now let $A'$ be an edge of $b(b(H))$. Assume that $A'$ contains no edge of $H$: in other words, assume that every edge of $H$ has non-empty intersection with $S-A'$. Then $S-A'$ contains a minimal subset that has non-empty intersection with every edge of $H$; that is, $S-A'$ contains an edge of $b(H)$. This edge contains no element in common with $A'$. As $A'$ is an edge of $b(b(H))$, this contradicts the definition of a blocker. Hence $A'$ contains an edge of $H$.
Let $A$ be an edge of $H$. By the previous paragraphs, $A$ contains $A'$, an edge of $b(b(H))$, and $A'$ contains $A^{\prime\prime}$, an edge of $H$. Now $A^{\prime\prime}\subseteq A'\subseteq A$ implies $A^{\prime\prime}=A$, and hence $A=A'$. Thus $A$ is also an edge of $b(b(H))$. Similarly, if $A'$ is an edge of $b(b(H))$, then $A^{\prime\prime}\subseteq A\subseteq A'$, where $A'$ and $A^{\prime\prime}$ are edges of $b(b(H))$, and $A$ is an edge of $H$. This implies $A'=A^{\prime\prime}=A$, so $A'$ is an edge of $H$. As $H$ and $b(b(H))$ have identical edges, they are the same clutter. $\square$
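To see the involution in action, here is a quick check on a small example, using the blocker sketch above.

S = {1, 2, 3}
A = [{1, 2}, {2, 3}]
B = blocker(S, A)
print(B)  # [{2}, {1, 3}]: the minimal sets meeting both edges
assert sorted(map(sorted, blocker(S, B))) == sorted(map(sorted, A))  # b(b(H)) = H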
If $H=(S,\mathcal{A})$ is a simple graph (so that each edge has cardinality two), then the edges of $b(H)$ are the minimal vertex covers. In the case of matroid ports, the blocker operation behaves exactly as we would expect an involution to do$\ldots$
Lemma. Let $M$ be a matroid and let $e$ be an element of $E(M)$. Then \[b(\operatorname{Port}(M,e))=\operatorname{Port}(M^{*},e).\]
Proof. Note that if $e$ is a coloop of $M$, then $\operatorname{Port}(M,e)$ has no edges, and if $e$ is a loop, then $\operatorname{Port}(M,e)$ contains only the empty edge. In these cases, the result follows from earlier discussion. Now we can assume that $e$ is neither a loop nor a coloop of $M$. Let $A$ be an edge in $\operatorname{Port}(M^{*},e)$, so that $A\cup e$ is a cocircuit of $M$. Since a circuit and a cocircuit cannot meet in the set $\{e\}$, it follows that $A$ has non-empty intersection with every circuit of $M$ that contains $e$, and hence with every edge of $\operatorname{Port}(M,e)$. Now $A$ contains a minimal set with this property, so $A$ contains an edge of $b(\operatorname{Port}(M,e))$.
Conversely, let $A'$ be an edge of $b(\operatorname{Port}(M,e))$. Assume that $e$ is not in the coclosure of $A'$. By a standard matroid exercise this means that $e$ is in the closure of $E(M)-(A'\cup e)$. Let $C$ be a circuit contained in $E(M)-A'$ that contains $e$. Then $C-e$ is an edge of $\operatorname{Port}(M,e)$ that is disjoint from $A'$. This contradicts the fact that $A'$ is an edge of the blocker. Therefore $e$ is in the coclosure of $A'$, so there is a cocircuit $C^{*}$ contained in $A'\cup e$ that contains $e$. Therefore $A'$ contains the edge, $C^{*}-e$, of $\operatorname{Port}(M^{*},e)$.
In exactly the same way as the previous proof, we can demonstrate that $b(\operatorname{Port}(M,e))$ and $\operatorname{Port}(M^{*},e)$ have identical edges. $\square$
This last fact should be attractive to matroid theorists: clutters have a notion of duality that coincides with matroid duality. There is also a notion of minors. Let $H=(S,\mathcal{A})$ be a clutter and let $s$ be an element of $S$. Define $H\backslash s$, known as $H$ delete $s$, to be
\[(S-s,\{A\colon A\in \mathcal{A},\ s\notin A\})\]
and define $H/s$, called $H$ contract $s$, to be
\[(S-s,\{A-s\colon A\in \mathcal{A},\ A'\in \mathcal{A}\Rightarrow A'-s\not\subset A-s\}).\]
It is very clear that $H\backslash s$ and $H/s$ are indeed clutters. Any clutter produced from $H$ by a (possibly empty) sequence of deletions and contractions is a minor of $H$.
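Deletion and contraction translate directly into code. Continuing the brute-force sketch above (names again my own), note that contraction must discard duplicated and non-minimal shrunken edges so that the result is still a clutter.

def delete(S, A, s):
    # H \ s: keep the edges that avoid s.
    return (S - {s}, [X for X in A if s not in X])

def contract(S, A, s):
    # H / s: remove s from every edge, keeping only the minimal
    # results (and no duplicates).
    shrunk = []
    for X in A:
        Y = X - {s}
        if Y not in shrunk:
            shrunk.append(Y)
    return (S - {s}, [Y for Y in shrunk if not any(Z < Y for Z in shrunk)])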
We will finish with one more elementary lemma.
Lemma. Let $H=(S,\mathcal{A})$ be a clutter, and let $s$ be an element in $S$. Then
$b(H\backslash s) = b(H)/s$, and
$b(H/s) = b(H)\backslash s$.
Proof. We note that it suffices to prove the first statement: imagine that the first statement holds. Then
\[b(b(H)\backslash s)=b(b(H))/s=H/s\]
which implies that
\[b(H)\backslash s=b(b(b(H)\backslash s))=b(H/s)\]
and that therefore the second statement holds.
If $H$ has no edge, then neither does $H\backslash s$, so $b(H\backslash s)$ has only the empty edge. Also, $b(H)$ and $b(H)/s$ have only the empty edge, so the result holds. Now assume $H$ has only the empty edge. Then $H\backslash s$ has only the empty edge, so $b(H\backslash s)$ has no edges. Also, $b(H)$ and $b(H)/s$ have no edges. Hence we can assume that $H$ is nontrivial, and therefore so is $b(H)$.
If $s$ is in every edge of $H$, then $H\backslash s$ has no edges, so $b(H\backslash s)$ has only the empty edge. Also, $\{s\}$ is an edge of $b(H)$, so $b(H)/s$ has only the empty edge. Therefore we can now assume that some edge of $H$ does not contain $s$, and that therefore $H\backslash s$ is non-trivial and $\{s\}$ is not an edge of $b(H)$.
As $b(H)$ has at least one edge we can let $A$ be an arbitrary edge of $b(H)/s$, and as $\{s\}$ is not an edge of $b(H)$, it follows that $A$ is non-empty. Since $s$ is not in every edge of $H$, we can let $A'$ be an arbitrary edge of $H\backslash s$. Hence $A'$ is an edge of $H$. As $H$ is non-trivial, $A'$ is non-empty. If $A$ is an edge of $b(H)$, then certainly $A$ and $A'$ have non-empty intersection. Otherwise, $A\cup s$ is an edge of $b(H)$, so $A\cup s$ and $A'$ have non-empty intersection. As $A'$ does not contain $s$, it follows that $A$ and $A'$ have non-empty intersection in any case. This shows that every edge of $b(H)/s$ intersects every edge of $H\backslash s$, and thus every edge of $b(H)/s$ contains an edge of $b(H\backslash s)$.
As $H\backslash s$ is non-trivial, so is $b(H\backslash s)$. We let $A'$ be an arbitrary edge of $b(H\backslash s)$ and note that $A'$ is non-empty. Let $A$ be an arbitrary edge of $H$, so that $A$ is non-empty. If $s\notin A$, then $A$ is an edge of $H\backslash s$, so $A'\cap A\ne\emptyset$. If $s$ is in $A$, then $(A'\cup s)\cap A\ne\emptyset$. This means that $A'\cup s$ intersects every edge of $H$, so it contains an edge of $b(H)$ and therefore $A'=(A'\cup s)-s$ contains an edge of $b(H)/s$. We have shown that every edge of $b(H\backslash s)$ contains an edge of $b(H)/s$ and now the rest is easy. $\square$
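As a sanity check, the sketches above confirm the first statement of the lemma on a small example.

S = {1, 2, 3, 4}
A = [{1, 2}, {2, 3}, {3, 4}]
lhs = blocker(*delete(S, A, 1))          # b(H \ 1)
rhs = contract(S, blocker(S, A), 1)[1]   # b(H) / 1
assert sorted(map(sorted, lhs)) == sorted(map(sorted, rhs))  # both equal [{3}, {2, 4}]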
This entry was posted in Matroids and tagged clutters by Dillon Mayhew. Bookmark the permalink.
2 thoughts on "Clutters I"
SK on July 31, 2015 at 11:49 am said:
Hi Dillon, This is an interesting topic. Perhaps the connections to the standard graph theory topic of Matchings and Covering could be explored further. Maybe it is possible to develop for matroids some sort of analog of independent sets of vertices and independent sets of edges (as these terms are defined in graph theory). – SK
Dillon Mayhew on August 3, 2015 at 4:56 am said:
Hi Sandra, I wonder what an independent set of vertices would be in a matroid. Vertices are sometimes seen as analogous to cocircuits. Perhaps a disjoint family of cocircuits is the matroid equivalent?
What is the number of elements in the Fano matroid? Type a number.
Simplify 5/10
(D) 1/2
Divide both 5 and 10 by 5: 5/10 = 1/2.
A room is 24 feet long, 18 feet wide, and 9 feet high. How many square yards of wallpaper are needed to paper the four walls of the room?
If x is chosen at random from the set {1,2,3,4} and y is chosen at random from the set {5,6,7,8}, what is the probability x*y is divisible by 4?
A total of N children are taking a field trip. The number of girls on the trip (G) exceeds the number of boys on the trip (B) by 17.
Which equation can you use to determine the number of girls taking the field trip?
In which of the following forms can the quadratic expression \(x^{2}-2 x-3\) be written?
What is the probability that a pregnant woman will give birth to a girl?
Find the missing number: \(5a + 10b = \ldots \left(\frac{5}{3} a+\frac{10}{3} b\right)\)
What will be the value of K in the equation \(x^{3}-K x^{2}+2 x-4=0\) if '2' is the root of this equation?
Which of the following is equivalent to \(\left(3^{-3}\right)^{2} / 3^{0}\) ?
Which of the following is the LCM of \(a^{3}, a^{4}, a^{6}\)?
What was Edmundo's mean score for a round of golf in August if his scores for each round were 78, 86, 82, 81, 82, and 77?
Jimmy mows yards for a lawn service company. He gets 50% of what each of his customers is charged. If the customer leaves a tip, Jimmy gets 100% of the tip. How much does Jimmy earn on a \(\$32\) mow if the client tips him \(\$4\)?
Dave made a 90-mile sales trip. For the first half of the distance, he averaged 45 miles per hour. For the last half, he drove 60 miles per hour. What was his total time, in hours, for the trip?
The table above shows the average price of gasoline among 50 gasoline-service stations on three different days, along with the number of stations charging more and less than the average on each day.
Based on the information in the table, which of the following statements about the price of gasoline among all 50 stations is NOT true?
The graph shows what happens to each \(\$100\) taken in by a small business. How many dollars out of each \(\$100\) taken in represent profit?
A pair of shoes is sold at a discounted price of 39.49 dollars from its original price of 54.99 dollars. Solve for the percent of decrease.
A plastic pipe, 5 feet 9 inches long, is cut into three equal pieces. Assuming no waste when the cuts are made, what is the length of each piece?
If Stan packs the cans in rows, and is careful to arrange the cans so that he fits in the maximum number possible, how many cans will fit in the box?
A shipping container is 80 ft long and 2.5 ft wide. The amount of space inside is approximately 2,000 cu ft. How many feet high is the box?
The dimensions of Box B, shown below, are twice the length of the corresponding dimensions on Box A (not shown).
BMC Bioinformatics
Methodology article
An efficient scRNA-seq dropout imputation method using graph attention network
Chenyang Xu1,
Lei Cai1 &
Jingyang Gao ORCID: orcid.org/0000-0003-1270-62571
BMC Bioinformatics volume 22, Article number: 582 (2021) Cite this article
Single-cell sequencing technology can process large amounts of single-cell library data in parallel and display the heterogeneity of different cells. However, analyzing single-cell data is a computationally challenging problem. Because many genes are expressed at low counts, non-zero expression values have a high chance of being recorded as zero; these events are called dropout events. At present, the mainstream dropout imputation methods, such as DCA, MAGIC, scVI, scImpute and SAVER, cannot effectively recover the true expression of cells from dropout noise.
In this paper, we propose an autoencoder structure network, named GNNImpute. GNNImpute uses graph attention convolution to aggregate multi-level information from similar cells and implements convolution operations on the non-Euclidean space of scRNA-seq data. Distinct from current imputation tools, GNNImpute can accurately and effectively impute dropout values and reduce dropout noise. We use mean square error (MSE), mean absolute error (MAE), Pearson correlation coefficient (PCC) and Cosine similarity (CS) to compare the performance of GNNImpute with the other methods. We analyze four real datasets, and our results show that GNNImpute achieves 3.0130 MSE, 0.6781 MAE, 0.9073 PCC and 0.9134 CS. Furthermore, we use Adjusted rand index (ARI) and Normalized mutual information (NMI) to measure the clustering effect; GNNImpute achieves 0.8199 (ARI) and 0.8368 (NMI), respectively.
In this investigation, we propose a single-cell dropout imputation method (GNNImpute), which effectively utilizes shared information for imputing the dropout of scRNA-seq data. We test it with different real datasets and evaluate its effectiveness in MSE, MAE, PCC and CS. The results show that graph attention convolution and autoencoder structure have great potential in single-cell dropout imputation.
With the development of single-cell RNA sequencing (scRNA-seq) technology, it provides an easy way to process tens of thousands of single cells in parallel while providing gene expression data with single-cell-level resolution [1,2,3]. The traditional RNA-seq technology cannot address complex tissues or organs at the cellular level because it measures the average expression of thousands of cells at the same time. Different from the traditional RNA-seq technology, scRNA-seq is widely used to study cell analysis, including cell heterogeneity [4], cell subgroups clustering [5, 6] and cell development trajectories [7]. Meanwhile, scRNA-seq technology can enhance the clinical diagnosis of the patient's disease, and help doctors further customize treatment plans [5, 6, 8].
The scRNA-seq technology can produce single-cell-level resolution data. As a result of defects such as low capture rate and low sequencing depth, the sequencing library data contains a lot of noise [9, 10].
Compared with the next-generation sequencing data, scRNA-seq data usually contains a lot of zero expressions. These zero expressions can arise in two ways: One is that the genes are not expressed in the corresponding cells and the other is that some genes with low expression cannot be detected due to technical limitations. These events are called dropout events [11]. There are some reasons for dropout, including nonlinear amplification of mRNA, transcription efficiency when reverse transcription of mRNA to cDNA and low sequencing read depth [11,12,13].
In the downstream analysis of scRNA-seq data, dimensionality reduction and unsupervised clustering are routinely used to infer cell development trajectories and identify rare cell clusters [14, 15]. However, dropout events seriously affect the calculation of distances between expression profiles, which distorts downstream results [16].
Recently, many methods have been developed to impute dropout in scRNA-seq data. For example, MAGIC [17] is based on a Markov affinity-based graph, which uses information from similar cells and genes to impute missing values. However, this method lacks robustness and cannot adapt to nonlinear relationships between genes. Furthermore, DCA [18] is a neural network-based method that uses deep autoencoding networks for unsupervised learning. It performs zero-inflated negative binomial modeling on scRNA-seq data to address noise and count distribution. In addition, there are further imputation methods based on deep learning or statistics, such as scVI [19], scImpute [20], and SAVER [21]. All of these methods operate only on Euclidean data, such as the expression matrix, and cannot directly deal with non-Euclidean data like cell graphs [22,23,24,25].
Therefore, we propose a novel structure neural network named GNNImpute, which is an autoencoder structure network that uses graph attention convolution. By building a graph from the scRNA-seq data, GNNImpute uses graph attention convolutional layers to make a targeted selection of similar neighboring nodes. Then, it aggregates these similar neighboring nodes. The nodes in the graph can continuously transmit messages along the edge direction until stability is reached. In this way, GNNImpute enables the expression of the cells in the same tissue area to be embedded in low-dimensional vectors through the autoencoder structure. GNNImpute can not only capture the co-expression patterns between similar cells but also remove technical sequencing noise when imputing dropouts, which improves the downstream analysis of scRNA-seq data.
The high-level approach
In scRNA-seq data, each cell has its own expression profile, and the expression profile of each cell is different and unique. But cells from the same tissue or with the same function usually have similar features. Therefore, when a dropout event occurs in any cell, it can be recovered by the gene expression profile of similar cells.
GNNImpute is a deep learning method based on a graph attention neural network. Different from MAGIC, GNNImpute introduces the attention mechanism that can assign weights to different similar cells according to attention coefficients. It can establish nonlinear relationships between genes by learning low-dimensional embedding of expressions through the autoencoder structure network. Compared with DCA, GNNImpute can learn the gene co-expression patterns of similar cells by aggregating information from multi-level neighbors. The co-expression patterns can help recover low-expressed genes. GNNImpute reduces the dropout noise and improves the gene expression profile of cells.
The overall structure of GNNImpute is shown in Fig. 1a. It is composed of an encoder and a decoder. Figure 1b shows that the encoder of GNNImpute has two graph attention convolutional layers, which are used to transmit the information of neighbor nodes. And the decoder consists of two linear layers. GNNImpute uses the masked expression matrix as the model input. The output of the model is used to calculate the loss value. And the parameters of the model are optimized by this value.
The structure of GNNImpute. a Shows the overall framework of the GNNImpute uses the network structure of the encoder and decoder. b Shows the encoder composed of two layers of graph attention convolutional layers
GNNImpute uses the expression matrix of scRNA-seq data as input. As shown in Table 1, the expression matrix has an \(nCells \times nGenes\) scale. The rows represent different cells, and the columns represent different gene sites. Each value in the matrix indicates the expression intensity of a gene in a cell. Due to the sparseness of scRNA-seq data, the expression matrix contains a very large number of zeros. When the expression values in a row or column are all zero, the corresponding cell or gene has no expression at all. We filtered these non-expressed cells and genes from the matrix, because they only add interference and carry no information. Similarly, we filtered overexpressed cells from the matrix, whose counts may be caused by incorrect counting or cell rupture after death.
In the data preprocessing, we use SCANPY [26] to filter the original matrix. We address the data in four steps. In the first step, cells expressing fewer than 200 genes and genes detected in fewer than 3 cells are filtered. Second, we filter the cells with overexpression of mitochondrial genes [27], as shown in Fig. 2a. Third, cells with an abnormally high total expression count (Fig. 2b) are also filtered. Finally, we normalize the filtered expression matrix so that each row (cell) has the same total expression count, equal to the median total count over all cells before normalization.
Table 1 Expression matrix of PBMC dataset (intercepted)
a shows the mitochondrial gene counts and total expression counts of the PBMC dataset. b shows the gene counts and total expression counts of the PBMC dataset. And the red boxes indicate outliers
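A minimal sketch of these four preprocessing steps with SCANPY is shown below. The thresholds of 200 genes per cell and 3 cells per gene follow the text; the input path, the mitochondrial cutoff of 5% and the total-count cutoff are assumptions made for illustration.

import scanpy as sc

adata = sc.read_10x_mtx('pbmc/')  # hypothetical path to the raw matrix

# Step 1: filter barely expressed cells and genes.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Steps 2-3: drop cells with excessive mitochondrial expression or total counts.
adata.var['mt'] = adata.var_names.str.startswith('MT-')
sc.pp.calculate_qc_metrics(adata, qc_vars=['mt'], percent_top=None, log1p=False, inplace=True)
adata = adata[adata.obs['pct_counts_mt'] < 5, :]      # 5% cutoff is an assumption
adata = adata[adata.obs['total_counts'] < 10000, :]   # outlier cutoff is an assumption

# Step 4: normalize each cell to the median total count.
sc.pp.normalize_total(adata)  # default target_sum=None uses the median count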
Build connection graph
GNNImpute is a dropout imputation method based on a graph attention network. It is used to obtain gene expression from similar cells to recover dropout events. In order to aggregate the cells with similar expressions, it is necessary to define a connection graph between the cells. In this graph, we use nodes to represent cells and edges to represent the similarity between cells, and cells can transmit information to adjacent cells. As shown in Fig. 3, the construction of such a graph is divided into three steps. The first step is to reduce the dimensionality of the expression matrix. Figure 3a shows the result of scRNA-seq data dimensionality reduction by Principal Component Analysis (PCA). After PCA, we can see that the cells are clustered according to similar expressions (Fig. 3a). We select the first 50 principal components as the GNNImpute input. The second step is to calculate the Euclidean distance between every two cells in the expression matrix. As shown in Fig. 3b, we get a heat map of \(nCells*nCells\) scale. Heat map rows and columns represent different cells, and the heat map color represents the distance between cells: the deeper the color, the closer the distance. In the third step, we select the K closest cells for each cell to construct graph edges. These K edges encode the similarity relation between cells. Through the above steps, we construct a cell-to-cell connection graph (K-nearest neighbor graph). We set \(K=5\) (K can be customized).
After constructing the graph, every cell in the graph has the K cells with the most similar expression as its "first-level" neighbors; these K cells are adjacent to the origin cell. Similarly, each "first-level" neighbor has K neighbors of its own, which we call the "second-level" neighbors. There are no edges between an origin cell and its "second-level" neighbors, but similarity can still propagate through the intermediate nodes. Figure 3c shows the origin cell and its neighbors. We use a two-layer graph convolution structure to transfer information within the range of "second-level" neighbors. This structure can not only maximize the aggregation of similar node information but also avoid over-smoothing node features.
There are three steps to construct a connection graph. First, a shows the result of visualization after dimensionality reduction by PCA. Second, b is the distance matrix represented by a heat map. Third, c represents the K-nearest neighbor graph after selecting K neighbors
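A minimal sketch of the three construction steps, using scikit-learn for the PCA and nearest-neighbor search (the function name and return format are our own; K and the number of components follow the text):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def build_cell_graph(X, n_pcs=50, k=5):
    # Step 1: reduce the nCells x nGenes matrix to 50 principal components.
    Z = PCA(n_components=n_pcs).fit_transform(X)
    # Steps 2-3: Euclidean distances, then the K closest cells per cell.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)  # +1: each cell is its own nearest neighbor
    _, idx = nn.kneighbors(Z)
    edges = [(i, j) for i, row in enumerate(idx) for j in row[1:]]
    return np.array(edges).T  # 2 x nEdges edge index, PyTorch Geometric style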
Multi-head graph attention convolutional layer
In order to aggregate the information of cells, we need a connection graph and graph attention convolutional layers. The essence of the graph convolutional layer is not to aggregate information around the original nodes, but aggregate nodes connected by edges. The calculation process of graph convolutional layers is as follows:
$$\begin{aligned} H^{(k+1)} = f(H^{(k)},A) = \sigma ({\hat{A}} H^{(k)} W^{(k)}) \end{aligned}$$
where k is the number of layers of graph convolution and W is the trainable weight. \({\hat{A}} = {\tilde{D}}^{-\frac{1}{2}} {\tilde{A}} {\tilde{D}}^{-\frac{1}{2}}\), where \({\tilde{A}}\) and \({\tilde{D}}\) are the adjacency matrix and degree matrix of the cell-to-cell connection graph, respectively. The adjacency matrix \({\tilde{A}} = A + I\), where I is the identity matrix, adds self-connections to the adjacency matrix. The degree matrix is \({\tilde{D}} _{ii} = {\textstyle \sum _{j}^{}} {\tilde{A}}_{ij}\). \(\sigma\) is the activation function; we use ReLU. \(H^{(k)}\) is the input matrix of the k-th graph convolutional layer. When \(k = 0\), \(H^{(0)} = X\), which in GNNImpute is the expression matrix of the scRNA-seq data.
Through the superposition of multiple graph convolutional layers, the information aggregation of multi-order neighbors can be achieved. We use two-layer graph convolution in the encoder, and the output of the encoder is as follows:
$$\begin{aligned} H^{(2)} = f(f(X,A),A)=ReLU({\hat{A}}ReLU({\hat{A}} XW^{(0)})W^{(1)}). \end{aligned}$$
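For reference, the normalized adjacency \({\hat{A}}\) in the propagation rule can be computed directly from a dense 0/1 adjacency matrix; a minimal NumPy sketch:

import numpy as np

def normalized_adjacency(A):
    # A_hat = D^{-1/2} (A + I) D^{-1/2} for a dense 0/1 adjacency matrix A.
    A_tilde = A + np.eye(A.shape[0])  # add self-connections
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt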
To aggregate the information of neighbors more efficiently, we propose an attention model for neighbor nodes. By adding attention to neighbor nodes in the form of weights, this attention model achieves targeted aggregation of neighbor nodes. Specifically, the more similar the neighbor node is to the target node, the greater the attention coefficient obtained by the neighbor node. In this way, different weights are applied to different neighbors. The calculation of the attention coefficient is as follows:
$$\begin{aligned} e_{ij} = a(\overrightarrow{h_i} ,\overrightarrow{h_j}) = W\overrightarrow{h_i} \cdot W\overrightarrow{h_j} \end{aligned}$$
where \(\overrightarrow{h_i}\) and \(\overrightarrow{h_j}\) represent the features of node i and node j, which are the gene expression profiles of the cells. And \(e_{ij}\) represents the attention coefficient of cell j to cell i. a() is the attention calculation formula, used to calculate the similarity of every two nodes; we use the dot product as this formula. W is a shared weight matrix. It transforms the input features into higher-level features, so that each node obtains sufficient expressive ability. We only calculate the attention coefficient of cell i and cell j where \(j \in N_i\), and \(N_i\) is the set of first-order neighbors of cell i in the cell-to-cell connection graph. Computing multiple independent attention coefficients extends this to the multi-head attention mechanism, in which the attention coefficients are combined by averaging to stabilize the learning process. The formula is as follows:
$$\begin{aligned} {\overrightarrow{h_i}}' = \sigma \left( \frac{1}{K} \sum _{k=1}^{K} \sum _{j \in N_i}^{} a_{ij}^kW^k\overrightarrow{h_j}\right) . \end{aligned}$$
In order to compare attention coefficients between different nodes, it is necessary to add the softmax function to standardize it:
$$\begin{aligned} a_{ij} = softmax_j(e_{ij}) = \frac{exp(e_{ij})}{{\textstyle \sum _{k \in N_i}^{}}exp(e_{ik})}. \end{aligned}$$
Combining formulas (1)–(5) above, the final attention weight is as follows:
$$\begin{aligned} a_{(i,j)} = \frac{exp(LeakyReLU(a(\overrightarrow{h_i},\overrightarrow{h_j})))}{ {\textstyle \sum _{k \in N_i}^{}exp(LeakyReLU(a(\overrightarrow{h_i},\overrightarrow{h_k})))}}. \end{aligned}$$
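The encoder described above maps naturally onto GATConv from PyTorch Geometric, whose concat=False option averages the attention heads exactly as in the multi-head formula. Below is a minimal sketch of the full autoencoder; the 512-unit first layer follows the text, while the bottleneck width, the number of heads and the dropout probability are assumptions.

import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GNNImputeSketch(nn.Module):
    # Two-layer graph attention encoder plus a two-layer linear decoder.
    def __init__(self, n_genes, hidden=512, latent=256, heads=3):
        super().__init__()
        self.gat1 = GATConv(n_genes, hidden, heads=heads, concat=False)
        self.gat2 = GATConv(hidden, latent, heads=heads, concat=False)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, n_genes),
        )
        self.drop = nn.Dropout(0.3)  # dropout layers against over-fitting

    def forward(self, x, edge_index):
        h = torch.relu(self.gat1(self.drop(x), edge_index))
        h = torch.relu(self.gat2(self.drop(h), edge_index))
        return self.decoder(h)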
Architecture and training
GNNImpute builds the model with an autoencoder (encoder and decoder). The input layer and output layer of the model have the same number of nodes, while the hidden layer has far fewer nodes than either. Different from the traditional autoencoder structure, we improve the GNNImpute encoder by using graph attention networks instead of linear networks. Figure 4a shows the autoencoder structure used by GNNImpute. In the encoder, the input size of the first layer is the number of gene features of the cell, and the first layer output is 512. The input size of the second layer is equal to the first layer output (512). Each GNNImpute decoder block is composed of three parts: a linear layer, a batch normalization layer and ReLU.
Further, GNNImpute adds dropout layers to combat the over-fitting problem of the model. GNNImpute introduces a multi-head attention mechanism to achieve targeted selection when aggregating neighbor nodes. This multi-head attention mechanism can stabilize the learning process and provide robustness for the model. It is noted that GNNImpute uses a semi-supervised learning method to recover from dropout events. The advantage of semi-supervised learning is that some labeled cells can provide soft labels for many other unlabeled cells, and it can help the model recover from dropout events more accurately.
The model can learn the potential features by minimizing the error between the reconstructed expression matrix and the original expression matrix. Meanwhile, the hidden layer can capture the distribution of the matrix and ignore invalid changes in the low-dimensional environment.
Because dropout events are random, there are few dropout benchmarks. Therefore, we used a fair measurement method [28, 29], which constructs a dropout benchmark by randomly masking the expression matrix and allows the corresponding metrics to be computed for every method. First, we process the expression matrix of the real scRNA-seq data to obtain the filtered matrix as the ground truth. Then, we randomly mask non-zero entries based on a predetermined dropout rate. After these two steps, we obtain the masked and unmasked expression matrices, which we use to train the GNNImpute model and validate its imputation effectiveness.
In the model training phase, we divide the PBMC dataset according to the ratio 6:2:2. There are 1706 cells in the training set, 569 cells in the validation set, and 568 cells in the test set. The training set is used to fit the model, the validation set is used to monitor the trained model, and the test set is used to evaluate the final model. The total number of parameters of GNNImpute is adjusted according to the size of the dataset; on the PBMC dataset, the model has 26.75 M parameters. The loss function of the model is the mean square error. The optimizer is Adam with a learning rate of 0.0001. The maximum number of iterations is set to 3000, and if the validation loss does not decrease for 200 consecutive iterations, training is stopped early. The training processes of the model are shown in Fig. 4b–d.
a is the model structure of GNNImpute. b is the loss curve of GNNImpute training and validation. c is the PCC curve of GNNImpute training and validation. And d is the CS curve of GNNImpute training and validation
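A sketch of the masking benchmark and training loop described above. The masking function is our own illustration; X (the filtered ground-truth matrix), build_cell_graph, GNNImputeSketch and train_idx are assumed from the earlier sketches, and early stopping is omitted for brevity.

import numpy as np
import torch

def mask_matrix(X, dropout_rate=0.4, seed=0):
    # Randomly zero a fraction of the non-zero entries to build the
    # dropout benchmark; returns the masked copy and masked positions.
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(X)
    pick = rng.choice(len(rows), int(dropout_rate * len(rows)), replace=False)
    X_masked = X.copy()
    X_masked[rows[pick], cols[pick]] = 0.0
    return X_masked, (rows[pick], cols[pick])

X_masked, masked_pos = mask_matrix(X)
edge_index = torch.tensor(build_cell_graph(X_masked), dtype=torch.long)

model = GNNImputeSketch(n_genes=X.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()
x_in = torch.tensor(X_masked, dtype=torch.float)
target = torch.tensor(X, dtype=torch.float)

for epoch in range(3000):  # early stopping on validation loss omitted
    opt.zero_grad()
    loss = loss_fn(model(x_in, edge_index)[train_idx], target[train_idx])
    loss.backward()
    opt.step()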
Evaluation metrics
In the experiment, we use four metrics to measure the imputation ability of GNNImpute against the other methods: mean square error (MSE), mean absolute error (MAE), Pearson correlation coefficient (PCC) and Cosine similarity (CS). MSE and MAE show how closely the imputed gene expression values match the true values. PCC and CS measure whether the expression trend of the imputed matrix is consistent with the raw matrix. In downstream data analysis, we employ the Adjusted rand index (ARI) and Normalized mutual information (NMI) to measure the clustering results.
MSE:
$$\begin{aligned} MSE = \frac{1}{N}\sum _{i=1}^{N}(x_i-y_i)^2 \end{aligned}$$
where \(x_i\) represents the imputed gene expression value, and \(y_i\) represents the real gene expression value.
MAE:
$$\begin{aligned} MAE = \frac{1}{N}\sum _{i=1}^{N}\left| x_i-y_i \right| \end{aligned}$$
PCC:
$$\begin{aligned} r = \frac{{\textstyle \sum _{i=1}^{N}}(x_i - {\overline{x}})(y_i - {\overline{y}})}{\sqrt{ {\textstyle \sum _{i=1}^{N}(x-{\overline{x}})^2}{\textstyle \sum _{i=1}^{N}(y-{\overline{y}})^2} } } \end{aligned}$$
where \(x_i\) represents the imputed gene expression value, \({\overline{x}}\) represents the average gene expression after imputation, \(y_i\) represents the real gene expression value, and \({\overline{y}}\) represents the real average gene expression.
CS:
$$\begin{aligned} cos(\theta ) = \frac{A\cdot B}{\left\| A \right\| \left\| B \right\| } = \frac{ {\textstyle \sum _{i=1}^{n}} A_i \times B_i}{\sqrt{ {\textstyle \sum _{i=1}^{n}(A_i)^2}} \times \sqrt{ {\textstyle \sum _{i=1}^{n}}(B_i)^2 }} \end{aligned}$$
where A and B represent the gene expression profile of the cell after imputation and the real gene expression profile of the cell respectively. And they are represented in the form of vectors. \(A_i\) represents the expression value of the ith gene of the cell after imputation, and \(B_i\) represents the real expression value of the ith gene of the cell.
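All four metrics are straightforward to compute with NumPy over the evaluated (masked) positions; a minimal sketch:

import numpy as np

def metrics(x, y):
    # x: imputed values, y: true values, both flattened 1-D arrays.
    mse = np.mean((x - y) ** 2)
    mae = np.mean(np.abs(x - y))
    pcc = np.corrcoef(x, y)[0, 1]
    cs = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return mse, mae, pcc, cs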
We use four different real datasets in experiments. The real datasets list as following:
Human Frozen Peripheral Blood Mononuclear Cells (PBMCs), which are from 10X GENOMICS. It contains 2900 cells and 32,738 genes.
Mouse Brain cells published by Campbell (GSE93374). It uses Drop-seq technology to perform single-cell analysis on brain cells of adult mice, which contains 21,086 cells and 26,774 genes.
Mouse Brain cells published by Chen (GSE87544). It is the diversity analysis of mouse hypothalamic cells, which contains 14,437 cells and 23,284 genes.
Mouse embryo cell analysis published by Klein (GSE65525). It contains 2717 cells and 24,021 genes.
In order to validate the dropout imputation performance of different methods, we compare GNNImpute with five other methods: DCA, MAGIC, scVI, scImpute and SAVER. DCA uses a zero-inflated negative binomial distribution model to denoise with an autoencoder network; this denoising network can handle the count distribution, over-dispersion and sparsity of the data. MAGIC is an imputation method based on a Markov affinity-based graph, which imputes dropout values by sharing information among similar cells. scVI can capture the basic low-dimensional structure in the scRNA-seq data by introducing a robust latent variable model, which can eliminate the noise in the data. scImpute is a statistical method that can automatically identify likely dropout events and recover them; it can also exclude outliers without introducing new bias. SAVER uses regularized regression prediction and empirical Bayesian methods to recover the gene expression profile in noisy and sparse data. We conduct experiments on four single-cell sequencing datasets of humans and mice. To illustrate the imputation performance of the methods, we evaluate the results in terms of dropout recovery metrics, clustering performance and robustness.
Imputation evaluation
By randomly masking the expression matrix on the four real datasets, we can obtain positive and negative training data. We compare the imputation performance of GNNImpute with the other five imputation methods using four real datasets. Figure 5 shows the performance of GNNImpute with the other five methods.
Overall, in Fig. 5a, b we can see that the average MSE and MAE of GNNImpute reach 3.0130 and 0.6781. These results are better than DCA (3.0130 vs. 5.1888, and 0.6781 vs. 0.9036). The reason is that GNNImpute uses semi-supervised learning, which can learn from the labeled data to recover dropout events. Since scVI is also a neural network method based on autoencoders, its performance is second only to DCA. For scImpute, the MSE and MAE on the four datasets are the worst, because scImpute imputes with an overall bias. Figure 5c, d shows the PCC and CS of GNNImpute and the other five methods on the four datasets. In PCC and CS, GNNImpute reaches the best results of 0.9073 and 0.9134 among all six methods, which are 8.69% and 8.71% better than the second-place DCA (0.9073 vs. 0.8347, 0.9134 vs. 0.8402). This is because GNNImpute uses graph attention convolutional layers to aggregate information from similar cells. The performance of MAGIC, scImpute and SAVER on the four datasets is not stable. The average PCC and CS of MAGIC and SAVER on the small datasets (PBMC, Klein) are 0.8226 and 0.5715, respectively, but only 0.3146 and 0.2188 on the larger datasets (Chen, Campbell), which indicates that they cannot perform effective imputation on large datasets. That scImpute has the worst MSE and MAE while its PCC and CS are better than those of MAGIC and SAVER further suggests that it imputes with an overall bias.
a shows the MSE between the gene expression value after imputation and the real gene expression value. b shows the MAE between the gene expression value after imputation and the real gene expression value. c represents the PCC between the gene expression value after imputation and the real gene expression value. d represents the CS between the gene expression value after imputation and the real gene expression value
Heat map and clustering evaluation
The purpose of imputation is to improve the downstream analysis of scRNA-seq data. Therefore, we use clustering results to evaluate the downstream analysis. We use two metrics (ARI, NMI) to measure the performance of cell clustering after imputation.
In the clustering analysis, we used the data published by Klein, who analyzed mouse embryonic stem cells, revealing in detail the population structure and the heterogeneous onset of differentiation after leukemia inhibitory factor (LIF) withdrawal. The cluster labels are determined by the intervals of LIF withdrawal (0, 2, 4, 7 days). The t-distributed stochastic neighbor embedding (t-SNE) algorithm is used to reduce the dimension of the expression matrix and enable visual analysis of the clustering. In Fig. 6a–c, we show the raw matrix, the noised matrix and the denoised matrix after GNNImpute imputation, all reduced by t-SNE for visualization. The visual analysis of the noised expression matrix in Fig. 6b shows that the four cell clusters mix to different degrees, with no obvious dividing line, whereas the expression matrix imputed by GNNImpute separates the clusters, as shown in Fig. 6c. After imputing the matrices with the different methods, we use the k-means algorithm to cluster each matrix, and then use ARI and NMI to measure the clustering results. GNNImpute reaches 0.8368 (ARI) and 0.8199 (NMI), which are at least 1.82% and 1.21% better than the other methods (shown in Fig. 6d, e).
By calculating the gene heat map of all cells in the imputed expression matrix, the results can also be visualized to determine which methods improve the downstream analysis of scRNA-seq data. Because the PBMC dataset does not have real cluster labels, we use the Leiden algorithm to calculate pseudo-labels. Then we find highly differentiated marker genes in each cluster by t-test based on the pseudo-labels. Finally, we select the 50 marker genes most relevant to cluster classification in the PBMC dataset to measure the performance of GNNImpute and the other five methods. Figure 6f, g shows the heat maps of the raw matrix and noised matrix. GNNImpute can recover the dropout events that occurred in different clusters, especially rare cell clusters (No. 7, No. 8 and No. 9 clusters, shown in Fig. 6h). The expression matrix imputed by MAGIC shows that the large cell clusters are almost the same; the recovery of dropout events is too smooth (such as LTB, RPS5 and CD74, shown in Fig. 6j), so it loses the unique heterogeneity of scRNA-seq data. For scVI, it can only impute limited dropout values; the reason may be that low-expressed genes are mistaken for noise and ignored, such as RPL31 and RPS6 shown in Fig. 6k. scImpute can impute the dropout genes, but it changes the expression intensity of most genes, which further illustrates that scImpute imputes dropout values with a certain overall bias. The genes marked in Fig. 6l show that the expression intensity of these genes has been changed. SAVER does not perform obvious imputation; it may be unable to handle data with a high dropout rate.
a–c show the visualizations of the raw matrix, noised matrix and denoised matrix after GNNImpute. d, e show the ARI and NMI of different methods. f, g show the heat maps of the raw expression matrix and noised matrix. h– m show the heat maps of the expression matrix imputed by GNNImpute and the other five methods
Robustness analysis under different dropout rates
Next, we evaluate the robustness of the imputation methods for scRNA-seq data under different dropout rates. The dropout rates are 10%, 20%, 30%, 40%, 50% and 60%, using the PBMC dataset with randomly masked expression matrices.
Figure 7a–d shows the performance of the six scRNA-seq dropout imputation methods under different dropout rates. From Fig. 7 we can see that GNNImpute is not sensitive to the dropout rate: it can recover the most dropout events even at a high dropout rate (60%), where its MSE and MAE are 3.4783 and 0.8141, and its PCC and CS are 0.9353 and 0.9438, respectively. After GNNImpute come DCA and MAGIC. Under different dropout rates, the MSE and MAE of DCA hardly change, but its PCC and CS decrease by 1.7% and 1.4%. The MSE and MAE of MAGIC increase by 27.3% and 5.3%, and its PCC and CS decrease by 8.1% and 5.7%. The performance of scImpute and SAVER is in the middle: as the dropout rate increases, their MSE and MAE show a clear increasing trend, while their PCC and CS decrease slowly. scVI is the method most sensitive to the dropout rate; its MSE and MAE increase significantly (6.9536 to 17.0714 and 1.3264 to 1.8961), and both its PCC and CS decrease (0.8767 to 0.6332 and 0.8979 to 0.7017).
a shows the MSE of six methods at different dropout rates. b shows the MAE of six methods at different dropout rates. c shows the PCC of six methods at different dropout rates. d shows the CS of six methods at different dropout rates
Analysis of different training sets for semi-supervised learning
GNNImpute uses a semi-supervised learning method to train the model and learn to recover dropouts. The advantage of semi-supervised learning is that it can use only a small amount of labeled data together with a large amount of unlabeled data for training, which greatly reduces the requirement for manually labeled data. In this experiment, we use from 80% down to 10% of the labeled data for model training. Even when only 10% of the labeled data is used, the model still shows great imputation performance, as shown in Fig. 8: the MSE and MAE are 3.4685 and 0.8147, and the PCC and CS are 0.9351 and 0.9436, respectively.
a shows the MSE and MAE of GNNImpute in different scales of training set. b shows the PCC and CS of GNNImpute in different scales of training set
Imputation performance in simulated data
To further evaluate the performance of GNNImpute, we evaluate it on simulated data. Following previous work, we used the Splatter package [31] to generate two simulation datasets: the first dataset has 2 groups, and the second has 6 groups. Both simulation datasets contain expression matrices of 4000 cells and 20,000 genes. Table 2 shows the MSE, MAE, PCC and CS of GNNImpute and the other five methods on the simulated dataset (2 groups). GNNImpute is better than the other methods on MAE, PCC and CS. Only on MSE is our method slightly inferior to DCA (21.2961 vs. 19.9282), but compared with the four remaining methods, our method still shows a significant improvement. When we evaluate on the simulated dataset (6 groups), we get similar results, as shown in Table 3. The reason for this behavior may be that the simulated data generated by the Splatter package are quite different from real data. After calculation, we can confirm that the sparsity of the simulated data is much lower than that of the real data (simulated data: 0.47, PBMC: 0.94, Campbell: 0.89, Chen: 0.92, Klein: 0.66).
Table 2 Imputation performance in simulated data (2 groups)
Table 3 Imputation performance in simulated data (6 groups)
Analysis of attention mechanism of GNNImpute
In order to verify the effectiveness of the attention mechanism, we added experiments comparing our method with a GCN architecture model (without attention). As shown in Tables 4 and 5, we evaluated the performance of five models on the PBMC dataset and the Klein dataset: GNNImpute (GCN architecture without attention), GNNImpute (with 1 attention head), GNNImpute (with 3 attention heads), GNNImpute (with 5 attention heads) and GNNImpute (with 8 attention heads). Each reported value is the average of five independent repeated experiments. The results on the PBMC dataset in Table 4 show that the models using the attention mechanism all outperform the GCN model (without attention). On MSE and MAE, the models with attention outperform the GCN model (without attention) by at least 12.9% (3.3047 vs. 2.8800) and 3.6% (0.8022 vs. 0.7736), and they are also better on PCC and CS (0.9436 vs. 0.9478, 0.9510 vs. 0.9547). Table 5 shows the improvement of the clustering effect from the attention mechanism on the Klein dataset: the attention mechanism improves ARI by about 2% (0.7998 vs. 0.8155) and NMI by 1.6% (0.8204 vs. 0.8049). We also observed that the performance of GNNImpute (with 1 attention head) is not stable, so we recommend the multi-head attention mechanism to stabilize the model.
Table 4 Imputation performance of GNNImpute and GCN architecture model (PBMC dataset)
Table 5 Imputation performance of GNNImpute and GCN architecture model (Klein dataset)
On the high level, imputing the dropout of scRNA-seq data is a process of denoising the expression matrix data of the raw scRNA-seq library. Many existing approaches show that deep learning methods, especially autoencoders, can effectively denoise data.
Our GNNImpute method extends this high-level approach to the case of non-Euclidean spatial data like cell graphs. By reconstructing the expression matrix of scRNA-seq data, GNNImpute establishes a learning mechanism between the input and output of the model, which allows it to capture non-linear relationships between genes and make better use of the data.
Usually, the dropout imputation performance is affected by the sparseness of scRNA-seq data. For noise data, it is usually more difficult to impute dropout values. However, GNNImpute shows excellent performance compared with other methods. GNNImpute compensates for the lack of low expression intensity of some genes by aggregating the features information of similar cells. Meanwhile, it can recover the dropout events in the scRNA-seq data and remain the specificity between cells to avoid excessive smoothing of expression.
Compared with other dropout imputation methods, GNNImpute has great adaptability for addressing the different sizes of datasets (especially large datasets). The difficulty in processing large datasets is that there are many cell types, and each cell contains a lot of gene information. Since GNNImpute uses a neural network model, it can capture important features from all information, and then reduce the dimensionality to ignore unimportant features.
Moreover, GNNImpute is a semi-supervised learning model, which does not require large amounts of manually labeled data and can be trained with few labeled cells. Even when only 10% of the dataset is used for training, the model still achieves great results.
In this paper, a novel imputation method based on graph attention convolution is proposed, which is a semi-supervised learning method using an autoencoder structure network. GNNImpute focuses on determining the similarity between cells and constructs a connection graph to capture the features of similar cells. The method also introduces an attention mechanism that weights neighbor nodes to select the cell nodes with the most useful feature information. In the experiments on four datasets, the performance of GNNImpute is better than other existing methods on the four metrics of MSE, MAE, PCC and CS. When we explored the limits of GNNImpute, we found that it cannot provide interpretability for cell clusters; future investigations will focus on making GNNImpute more explainable.
The datasets used in this study are publicly available. Single-cell library data and raw count expression matrices of PBMCs are downloaded from 10X GENOMICS (https://www.10xgenomics.com/resources/datasets/frozen-pbm-cs-donor-a-1-standard-1-1-0). The mouse brain cell data released by Campbell is available at Gene Expression Omnibus (GEO) under accession code GSE93374. The single-cell data and expression matrix data of mouse brain cells published by Chen are available in GEO under accession code GSE87544. The mouse embryo single-cell data published by Klein was downloaded from GEO, and the accession code is GSE65525. The source code in this paper is available at https://github.com/Lav-i/GNNImpute.
scRNA-seq:
Single-cell RNA sequencing
PCA:
Principal component analysis
ReLU:
Rectified linear unit
ARI:
Adjusted rand index
NMI:
Normalized mutual information
t-SNE:
t-distributed stochastic neighbor embedding
Zeisel A, Muñoz-Manchado AB, Codeluppi S, Lönnerberg P, La Manno G, Juréus A, Marques S, Munguba H, He L, Betsholtz C, et al. Cell types in the mouse cortex and hippocampus revealed by single-cell rna-seq. Science. 2015;347(6226):1138–42.
Villani A-C, Satija R, Reynolds G, Sarkizova S, Shekhar K, Fletcher J, Griesbeck M, Butler A, Zheng S, Lazo S, et al. Single-cell RNA-Seq reveals new types of human blood dendritic cells, monocytes, and progenitors. Science. 2017;356(6335).
Chen G, Ning B, Shi T. Single-cell RNA-Seq technologies and related computational data analysis. Front Genet. 2019;10:317.
Cao J, Packer JS, Ramani V, Cusanovich DA, Huynh C, Daza R, Qiu X, Lee C, Furlan SN, Steemers FJ, et al. Comprehensive single-cell transcriptional profiling of a multicellular organism. Science. 2017;357(6352):661–7.
Stephenson W, Donlin LT, Butler A, Rozo C, Bracken B, Rashidfarrokhi A, Goodman SM, Ivashkiv LB, Bykerk VP, Orange DE, et al. Single-cell rna-seq of rheumatoid arthritis synovial tissue using low-cost microfluidic instrumentation. Nat Commun. 2018;9(1):1–10.
Keren-Shaul H, Spinrad A, Weiner A, Matcovitch-Natan O, Dvir-Szternfeld R, Ulland TK, David E, Baruch K, Lara-Astaiso D, Toth B, et al. A unique microglia type associated with restricting development of Alzheimer's disease. Cell. 2017;169(7):1276–90.
Moignard V, Woodhouse S, Haghverdi L, Lilly AJ, Tanaka Y, Wilkinson AC, Buettner F, Macaulay IC, Jawaid W, Diamanti E, et al. Decoding the regulatory network of early blood development from single-cell gene expression measurements. Nat Biotechnol. 2015;33(3):269–76.
Potter SS. Single-cell RNA sequencing for the study of development, physiology and disease. Nat Rev Nephrol. 2018;14(8):479–92.
Li G, Yang Y, Van Buren E, Li Y. Dropout imputation and batch effect correction for single-cell RNA sequencing data. J Bio-X Res. 2019;2(4):169–77.
Luecken MD, Theis FJ. Current best practices in single-cell rna-seq analysis: a tutorial. Mol Syst Biol. 2019;15(6):8746.
Kharchenko PV, Silberstein L, Scadden DT. Bayesian approach to single-cell differential expression analysis. Nat Methods. 2014;11(7):740–2.
Lun AT, Bach K, Marioni JC. Pooling across cells to normalize single-cell rna sequencing data with many zero counts. Genome Biol. 2016;17(1):1–14.
Vallejos CA, Risso D, Scialdone A, Dudoit S, Marioni JC. Normalizing single-cell rna sequencing data: challenges and opportunities. Nat Methods. 2017;14(6):565.
Sun S, Zhu J, Ma Y, Zhou X. Accuracy, robustness and scalability of dimensionality reduction methods for single-cell rna-seq analysis. Genome Biol. 2019;20(1):1–21.
Long J, Xia Y. Cluster analysis of high-dimensional scRNA sequencing data. arXiv preprint arXiv:1912.08400 (2019).
Hicks SC, Townes FW, Teng M, Irizarry RA. Missing data and technical variability in single-cell rna-sequencing experiments. Biostatistics. 2018;19(4):562–78.
Van Dijk D, Sharma R, Nainys J, Yim K, Kathail P, Carr AJ, Burdziak C, Moon KR, Chaffer CL, Pattabiraman D, et al. Recovering gene interactions from single-cell data using data diffusion. Cell. 2018;174(3):716–29.
Eraslan G, Simon LM, Mircea M, Mueller NS, Theis FJ. Single-cell rna-seq denoising using a deep count autoencoder. Nat Commun. 2019;10(1):1–14.
Ding J, Condon A, Shah SP. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models. Nat Commun. 2018;9(1):1–13.
Li WV, Li JJ. An accurate and robust imputation method scimpute for single-cell rna-seq data. Nat Commun. 2018;9(1):1–9.
Huang M, Wang J, Torre E, Dueck H, Shaffer S, Bonasio R, Murray JI, Raj A, Li M, Zhang NR. Saver: gene expression recovery for single-cell rna sequencing. Nat Methods. 2018;15(7):539–42.
Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
Veličković P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y. Graph attention networks. arXiv preprint arXiv:1710.10903 (2017).
Ravindra N, Sehanobish A, Pappalardo JL, Hafler DA, van Dijk D. Disease state prediction from single-cell data using graph attention networks. In: Proceedings of the ACM conference on health, inference, and learning, p. 121–30 (2020).
Shao X, Yang H, Zhuang X, Liao J, Yang Y, Yang P, Cheng J, Lu X, Chen H, Fan X. Reference-free cell-type annotation for single-cell transcriptomics using deep learning with a weighted graph neural network. bioRxiv (2020)
Wolf FA, Angerer P, Theis FJ. Scanpy: large-scale single-cell gene expression data analysis. Genome Biol. 2018;19(1):1–5.
Lun AT, McCarthy DJ, Marioni JC. A step-by-step workflow for low-level analysis of single-cell rna-seq data with bioconductor. F1000Research. 2016;5.
Leote AC, Wu X, Beyer A. Network-based imputation of dropouts in single-cell rna sequencing data. bioRxiv: 611517 (2019).
Arisdakessian C, Poirion O, Yunits B, Zhu X, Garmire LX. Deepimpute: an accurate, fast, and scalable deep neural network method to impute single-cell rna-seq data. Genome Biol. 2019;20(1):1–14.
Fey M, Lenssen JE. Fast graph representation learning with PyTorch geometric. In: ICLR workshop on representation learning on graphs and manifolds (2019)
Zappia L, Phipson B, Oshlack A. Splatter: simulation of single-cell rna sequencing data. Genome Biol. 2017;18(1):1–15.
Project supported by Beijing Natural Science Foundation (5182018).
College of Information Science and Technology, Beijing University of Chemical Technology, Beijing, People's Republic of China
Chenyang Xu, Lei Cai & Jingyang Gao
Chenyang Xu
Lei Cai
Jingyang Gao
Conceived and designed the experiments: CX, JG. Performed the experiments: CX, LC. Analyzed the data: CX, LC. All authors read and approved the final manuscript.
Correspondence to Jingyang Gao.
GNNImpute is implemented in Python 3 using the deep learning framework PyTorch Geometric [30]. Training on CPU or GPU is supported using PyTorch and PyTorch Geometric.
Simulated scRNA-seq data
We used the Splatter [31] package to generate simulation datasets. The parameters used in the generated two sets of simulation data are as follows. For two group simulation: nGroup = 2, dropout.mid = 5, dropout.shape = − 1, dropout.type = "experiment", de.facScale = 0.25, nGenes = 20000, batchCells = 4000. For six group simulation: nGroup = 6, dropout.mid = 5, dropout.shape = − 1, dropout.type = "experiment", de.facScale = 0.25, nGenes = 20000, batchCells = 4000.
Xu, C., Cai, L. & Gao, J. An efficient scRNA-seq dropout imputation method using graph attention network. BMC Bioinformatics 22, 582 (2021). https://doi.org/10.1186/s12859-021-04493-x
scRNA-seq
Dropout imputation
Graph attention convolution