Why isn't general relativity the obvious thing to try after special relativity?
To preface my question, I ask this as a mathematics student, so I don't have a very good sense of how physicists think.
Here is the historical context I'm imagining (in particular taking into account the development of differential geometry in the 19th century):
Classical mechanics is about Lagrangians of matter fields on $\mathbb{R}^3$ (with the flat metric)
Sometime in the 1820s Gauss speculated about replacing the flat metric on $\mathbb{R}^3$ by one with non-vanishing curvature
Special relativity is about Lagrangians of matter fields on $\mathbb{R}^{3,1}$ (with the flat metric). I'm taking this jump for granted since as I understand it, it was arrived at from experimental results on electromagnetism at the end of the 19th century.
Now in line with Gauss, it seems like it would be extremely natural to speculate about replacing the flat metric on $\mathbb{R}^{3,1}$ by one with non-vanishing curvature (and in the same spirit also considering more exotic topologies for the underlying manifold).
Given that then we would have to ask exactly which metric we are looking for, it seems natural to say that there should be a Lagrangian term corresponding to the metric. The Einstein-Hilbert functional is probably the simplest one to try. And so we get the Einstein equations.
Alternatively (which, as I heard from someone, is closer to the actual history) you could observe that the energy-momentum tensor is of course a 2-tensor, and so for an Euler-Lagrange equation the most natural metric-dependent expression would be $\operatorname{Ric}=T$. Since $T$ is always divergence-free, it would be natural to replace $\operatorname{Ric}$ by $\operatorname{Ric}-\frac{1}{2}Rg$ just from taking the contracted second Bianchi identity into account. And so again we get the Einstein equations.
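To spell out the Bianchi-identity step (a sketch, suppressing the coupling constant and sign conventions): the contracted second Bianchi identity
$$\nabla^a\Big(R_{ab}-\tfrac{1}{2}R\,g_{ab}\Big)=0$$
holds identically, whereas $\operatorname{Ric}=T$ together with $\nabla^a T_{ab}=0$ would force
$$\nabla^a R_{ab}=\tfrac{1}{2}\nabla_b R=0,$$
i.e. constant scalar curvature, which is far too restrictive. Setting $R_{ab}-\tfrac{1}{2}R\,g_{ab}=T_{ab}$ instead makes divergence-freeness automatic.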
I've often heard it said that if Einstein had not come up with special relativity, someone else probably would have in the next five or ten years. However if he had not come up with general relativity, it would have taken much longer to discover. Why is this? I feel that I must be missing something here.
general-relativity special-relativity differential-geometry history
youler
$\begingroup$ " However if he had not come up with general relativity, it would have taken much longer to discover." Hilbert came up with it at basically the exact same time. There was also Nordstrom Gravity, which worked off of the trace rather than the full tensor. It really wouldn't have taken nearly as long as people think; the ideas were ripe for the taking at the time they were found $\endgroup$ – Robert Mastragostino Mar 20 '14 at 1:24
$\begingroup$ The curved spacetime approach only works because of the equivalence principle. The deep insight required to be able to come up with general relativity was seeing that the equivalence principle was the main content of Newtonian gravity, and that this led naturally to a metric theory. Only once you have that can you start thinking about field equations and manifold theory. $\endgroup$ – Jerry Schirmer Mar 20 '14 at 2:00
$\begingroup$ @RobertMastragostino: they certainly knew about each others' work, but they were definitely working independently. A bit of a trivia point is that Hilbert published his paper first, but admitted that Einstein deserved the credit for general relativity. $\endgroup$ – Jerry Schirmer Mar 20 '14 at 2:00
$\begingroup$ @youler: it was considered to be a coincidence of the form of the Newtonian force law for a long time. The insight was figuring out that it was the essential feature of gravity from which other things flow. $\endgroup$ – Jerry Schirmer Mar 20 '14 at 2:27
$\begingroup$ I should also add that a modern development of special relativity is very much informed by the subsequent development of general relativity. While the concept of hyperbolic rotation was certainly present in 1905 with Minkowski, it certainly wasn't as developed as it is now, nor was the real significance of the negative signature metric known. $\endgroup$ – Jerry Schirmer Mar 20 '14 at 2:59
I do think Jerry Schirmer answered the question in the comments, but I'll try to expand just to make clear how he explained everything.
Let us take it as given that physics is correctly described by special relativity in Minkowski spacetime. Then we can ask ourselves how to include gravity without violating causality, which is mandated by the finite velocity of light.
The idea is to consider Einstein's elevator: there is no local experiment that can distinguish bodies in free fall in a constant gravitational field from the same bodies uniformly accelerated. That's because gravity affects everything the same way. A formalization of this is called Einstein's equivalence principle (in contrast with Galileo's, which concerns coordinate transformations by constant velocities).
Note first that this is not the case for electromagnetism. One can always use test charges to determine the electromagnetic fields, and it is impossible to do away with them using accelerated frames. Also, the equivalence principle is strictly local: if you look at extended regions, gravity will appear through tidal forces.
So, if you think of special relativity as a particular case of general relativity (because it's just the same without gravity), the question is: what looks locally like special relativity but not globally? The answer is curved Lorentzian manifolds, which are locally Minkowski.
But, as Jerry stressed, if you think of curved manifolds as generalizations of flat ones, that does not, in principle, say anything about gravity. Only by noticing that gravity is a force unlike any other, and formalizing it through the equivalence principle, can one justify the physics behind the use of curved manifolds. For instance, you suggest it is natural to generalize the situation by allowing curved spaces, but from the mathematical point of view one could just as well argue for other forms of generalization, e.g. we could instead try to projectify Minkowski space. This is indeed useful in other contexts, but it has nothing to do with gravity. So for a physicist it is important to have "conceptual insights" to guide the process of "generalization for comprehension"; in other words, we need principles with physical content.
I'm really unsure about what Gauss could have been thinking regarding the metric. He did try to formulate classical mechanics in a differential-geometric way (Lanczos' "The Variational Principles of Mechanics" discusses it), but if that's what you're referring to, then it had nothing to do specifically with gravity.
EDIT: Oh boy, that last sentence is very misleading, I'm sorry. I had a look at Lanczos' book and realized that while Gauss pushed for a different formulation of classical mechanics (it's called the Principle of Least Constraint, page 106 in Lanczos), it was only after some time that Hertz gave the principle its geometrical interpretation. So it is really not relevant to your question. I won't erase the paragraph though, in case anyone is interested.
Also, the equivalence principle argument says nothing about the field equations, and would be true even if the correct equations were different. As a matter of fact, a lot of general relativity is independent of the Einstein field equations, like the causal structure and (to some extent) the singularity theorems. This is why the equivalence principle was formulated as early as 1907 while the field equations came only in 1915.
I'm not a big fan of "what if" questions in history, mainly because they don't seem to have answers, but while Poincaré had the Lorentz transformations and a great deal of understanding of special relativity, I have never heard of anyone who anticipated the equivalence principle. So I hope this makes it plausible that while others could have done SR, it did not seem likely that GR was coming, because it was first necessary to understand what gravity is. Nordström's theory is an extension of ideas from electromagnetism and was bound to fail. Hilbert indeed got the field equations right on his own, but would not have gotten there without the motivation of curved spacetimes.
cesaruliana
I'm not sure if your main interest lies in the question in the title of this thread or in the question you pose near the end of your text. I'll try to answer both, despite being an expert neither on GR nor on physics education.
Why is it, if it is true, that GR would have taken many more years to discover, had Einstein not discovered it?
I agree with you and the comments to your question in saying that GR would most likely have emerged in the following years (though "how many years?" is a question which I don't think anybody can answer) as a consequence of the equations given by Hilbert in the paper he published at almost the same time. A detailed account of this subject can be found in the following Physics SE question: Did Hilbert publish GR before Einstein? I can also quote Pais' biography of Einstein on this subject (for a beautiful account of this particular episode of the development of GR, check chapters 11-14 of "Subtle is the Lord"):
Let us come back to Einstein's paper of November 18. It was written at a time in which (by his own admission) he was beside himself about his perihelion discovery (formally announced that same day), very tired, unwell, and still at work on the November 25 paper [the paper called "The Field Equations of Gravitation"]. It seems most implausible to me that he would have been in a frame of mind to absorb the content of the technically difficult paper Hilbert had sent him on November 18. More than a year later, Felix Klein wrote that he found the equations in that paper so complicated that he had not checked them. [...]
I rather subscribe to Klein's opinion that the two men 'talked past each other, which is not rare among simultaneously productive mathematicians'[...] I again agree with Klein 'that there can be no question of priority, since both authors pursued entirely different trains of thought to such an extent that the compatibility of the results did not at once seem assured'. I do believe that Einstein was the sole creator of the physical theory of general relativity and that both he and Hilbert should be credited for the discovery of the fundamental equation.
I am not sure that the two protagonists would have agreed [included as a funny note].
Subtle is the Lord, Chapter 14, page 260. I think this supports my previous statement quite well.
As for the question in the title: I think it can only be answered if we are talking about the teaching of the subject to undergrads in college. If we were talking about someone teaching him-/herself GR after learning SR, I believe that the answer would be "well, mostly because one doesn't want to/couldn't do it; in fact, one could have done it before", or something along those lines, which doesn't quite help.
When talking about teaching SR to undergrads, you've got to understand this: there isn't a standard for teaching the theories of relativity to undergrads (written as a current undergrad). We are mostly taught versions of the theory without much emphasis on the mathematical structures underlying the subject, and are mostly taught to do calculations and think about the dynamics. So we don't always get to know the concepts of the metric, of flat spacetime (which we actually use, but almost never think about during the course), of curved spacetime, among others.
Add to that the fact that, in many universities (such as mine and those of many of my acquaintances), we don't get to learn much math (besides calculus, some linear algebra and complex variables) before plunging into the vast world of physics subjects, so we are not in a position to say "oh, of course, this concept that I saw when learning SR can, by analogy with these things which I haven't learned, be extended to a more general (and complicated) one, which links to some variational principle giving new and complicated tensor equations, which we can't solve".
So, if you see it from a mathematician's point of view (yours), the next step from SR is quite obvious because of the mathematics you know; but if you are a physics undergrad, who has (most likely) not had intense training in mathematics, you are certainly not going to be able to take that step. I may be generalizing horribly, but all that I've said is true in my experience.
Drarp
Homotopy Theory of the Suspensions of the Projective Plane
Jie Wu : National University of Singapore, Singapore, Singapore
Electronic ISBN: 978-1-4704-0367-6
Product Code: MEMO/162/769.E
130 pp
MAA Member Price: $55.80
AMS Member Price: $37.20
Memoirs of the American Mathematical Society
Volume: 162; 2003
MSC: Primary 55; Secondary 20; 57;
The homotopy theory of the suspensions of the real projective plane is investigated in detail. The homotopy groups are computed up to a certain range. The decompositions of the self smashes and of the loop spaces are studied, with some applications to the Stiefel manifolds.
Graduate students and research mathematicians interested in algebraic topology.
2. Preliminary and the classical homotopy theory
3. Decompositions of self smash products
4. Decompositions of the loop spaces
5. The homotopy groups $\pi _{n+r} (\Sigma ^n \mathbb {R}\mathrm {P}^2)$ for $n \geq 2$ and $r \leq 8$
6. The homotopy theory of $\Sigma \mathbb {R}\mathrm {P}^2$
The table of the homotopy groups of $\Sigma ^n\mathbb {R}\mathrm {P}^2$
Ways to prove an inequality in groups
Suppose we have a (possibly infinite) group given by generators and relations. One way to prove an inequality (that is, that two words represent distinct elements of the group) is to construct a representation of the group and verify the inequality in the representation. Are there other methods?
gr.group-theory
The most general way is to use complete presentations. See the book by Sims, for example. The idea is this. Find a presentation of your group as a monoid, that is, include inverses of generators in the generating set, and include all relations $aa^{-1}=1$. Then use the completion algorithm (called the Knuth-Bendix procedure) to produce a complete monoid presentation of your group. If you are lucky, the complete presentation will be finite or at least recursive. It will consist of relations $u=v$. Now if you want to check whether a word $w$ is 1 or not, you just apply relations $u=v$ from your complete presentation to $w$ (i.e. replace a subword of $w$ equal to $u$ by $v$) until you get a word to which you cannot apply any relation. If that word is empty, $w=1$; if not, $w\ne 1$. Complete recursive presentations exist surprisingly often. For example, many Coxeter groups have them (see Sims' book mentioned above). Of course, Coxeter groups are residually finite and even linear, so there are other procedures to solve the word problem. But even complicated, non-residually finite groups, like the R. Thompson group $F$, have nice recursive complete presentations.
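To illustrate the rewriting step (not the Knuth-Bendix completion itself), here is a minimal Python sketch. The rules below are a toy complete system for free reduction on one generator, not a completed presentation of any particular group; for a real group one would first run the completion procedure.

```python
def normal_form(word, rules):
    """Rewrite `word` with rules u -> v until no rule applies.

    Termination and uniqueness of the result are guaranteed only if
    `rules` form a complete (terminating, confluent) rewriting system.
    """
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            if u in word:
                word = word.replace(u, v, 1)  # rewrite the leftmost occurrence
                changed = True
                break
    return word

# Free group on one generator a, writing A for a^{-1}:
# the two free-reduction rules already form a complete system here.
rules = [("aA", ""), ("Aa", "")]

print(normal_form("aaAAaA", rules) == "")  # True: the word represents 1
print(normal_form("aaA", rules))           # "a": a nontrivial element
```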
user6976
Following on from Mark's answer, finding an automatic structure for the group is an alternative computational approach to solving this problem, which can work successfully for many groups that do not have finite complete presentations. It is true that all automatic groups have recursive complete presentations, but in general it is very difficult to find such presentations even when they exist, and completeness has been proved to be an undecidable property of a general recursive presentation. The algorithms for computing automatic structures have been implemented, and versions are available in both GAP (as an external package) and Magma. The book "Word Processing in Groups" by David Epstein et al (5 sub-authors) is devoted to the general theory of automatic groups.
Although not all of our favourite groups are automatic (for example ${\rm SL}_n({\mathbb Z})$ is automatic only when $n=2$, and I believe that it is unknown whether Thompson's group $F$ is automatic), many of them are, including all Coxeter groups, braid groups, and many non-positively curved groups. If an automatic structure is successfully computed, then the word problem can be solved in at worst quadratic time, finiteness of the group and the orders of elements in the group can be determined, and the growth function of the group can be computed.
The programs have been used to help settle genuine open problems about specific finitely presented groups. For example, they were recently used to show that the group $\langle x,y \mid x^3,y^5,(xy)^7,[x,y]^2 \rangle$ is infinite. (There now remain only three quadruples of exponents for which the finiteness of groups with presentations of that type is unknown.)
Added later - reply to Mark's query about automaticity implying existence of recursive complete presentation. Thinking about it, I am not certain it is correct without further assumptions on the automatic structure. Let us assume:
Uniqueness. That is, there is a unique word in the language of the word acceptor representing each group element.
The language of the word acceptor is closed under subwords.
Any automatic structure can be replaced by one with uniqueness, but it is unknown whether you can always (simultaneously) achieve subword closure. But all known automatic groups have a structure with these properties, and they hold for so-called shortlex/lenlex structures, in which the word-acceptor language consists of the least representatives of the group elements under a shortlex ordering.
Assuming those two properties, let $A$ be the monoid generating set and $M_a$ the multiplier automaton for $a \in A$. Then
$\cup_{a \in A} \{ (w_1a, w_2) \mid (w_1,w_2) \in L(M_a) \}$
is a complete regular (and hence recursive) rewriting system for the group, because we can use these rules to replace any word by its unique representative in the word-acceptor language.
It is not generally a minimal complete rewriting system, because it may contain redundant rules. For shortlex automatic structures there is a program in the package that computes the unique minimal system.
Derek Holt
$\begingroup$ @Derek: I did not know that automatic groups have recursive complete presentations. Where is that proved? When Rips conjectured that $F$ is automatic, there was virtually no supporting evidence. Now we know that $F$ has a complete recursive presentation and a quadratic isoperimetric function. Still it is not known whether $F$ is automatic. $\endgroup$
$\begingroup$ @Derek: could you please tell what is the argument for $SL(n,\mathbb{Z})$ not to be automatic? Does it have a chance to be Cayley graph automatic? $\endgroup$
– Al Tal
$\begingroup$ It is proved using higher dimensional isoperimetric inequaltities in Section 10,4 of "Word Processing in Groups" that ${\rm SL}(n,{\mathbb Z})$ is not automatic for $n \ge 3$. I have no idea whether they could be Cayley graph automatic - you should probably write and ask one of the people who work in that area, such as Alexei Miasnikov. $\endgroup$
– Derek Holt
What you are really interested in is the word problem for groups. There are many different algorithms for solving it besides representation theory (Dehn's algorithm, say, is one of the oldest). See http://en.wikipedia.org/wiki/Word_problem_for_groups to get an idea.
Misha
Induced vergence-accommodation conflict reduces cognitive performance in the Stroop test
François Daniel1 & Zoï Kapoula1
Scientific Reports volume 9, Article number: 1247 (2019)
Saccades
Interaction mechanisms between cognition and binocular motor control in reading saccades remain unclear. In this study we objectively examine saccade and fixation parameters during the Stroop test, which involves three different levels of cognitive demand (reading, color denomination and interference). In addition, we experimentally induce accommodation and vergence conflicts during the different tasks. Twenty-one visually normal subjects (age 20.9 ± 1.45) performed the Stroop test in three different randomized conditions: a control normal viewing condition, a 16Δ base-out prism condition, and a −2.50D spherical lens condition. Prisms and spherical lenses induced a vergence-accommodation conflict. Eye movements were recorded with the EyeSeeCam video-oculography device. The results show (1) longer fixation durations in the interference task than in the denomination task, and shorter fixation durations in the reading task; (2) a higher interference effect in the conflict-induced conditions compared to the control condition; (3) a lower tolerance to the prism-induced conflict, with a higher destabilization of the binocular motor control of saccades and fixations. This suggests an interplay between vergence-accommodation conflict and cognitive load: tolerance to the conflict seems to be lower in the more cognitively demanding interference Stroop task. The results consolidate the link between cognition and high-quality single binocular vision.
Cognitive executive functions, such as action planning, cognitive flexibility or decision making, represent high-level processes responsible for the cognitive control of behavior and are known to be related to academic achievement, especially working memory and inhibitory control, which are essential for learning and maintaining attention1,2,3,4. Inhibition is an important dimension of cognition and refers to the capacity to inhibit dominant or automatic responses when necessary1,5. This executive function is closely linked to the capacity to control and focus attention. A gold-standard test for studying cognitive executive functions such as inhibition and attention is the Stroop test6,7. Made up of different tasks such as reading and color denomination, the Stroop test relies heavily on visual input, as the stimuli are exclusively visual. In this test, especially in the interference task, there is a succession of words designating different colors but printed in an incongruent color (for example the word "red" printed in green). The subjects must inhibit the automatic reading response in favor of a less obvious one: naming the color of the ink. The Stroop interference is used in neuroscience, in developmental studies including dyslexia, and in aging and neurodegenerative diseases to evaluate cognitive performance, particularly cognitive executive functions. It also shares common processes with reading (Protopapas et al.8,9).
In 201610, we introduced a study of the link between cognitive processes and the accommodative and vergence evaluation. We demonstrated that asymptomatic subjects with no accommodative or binocular dysfunctions showed higher inhibition performances evaluated with the Stroop test compared to symptomatic subjects with convergence insufficiency. These results suggest that visual processes responsible for clear and single binocular vision could interfere with cognitive processes and attention deployment. However, the mechanisms involved remained unclear.
Recent neuroimaging studies identified a cortical network activated when inhibition is required that involves the prefrontal, parietal, temporal and cingulate areas11,12. These areas are similar to those implicated in top-down attentional control, such as the anterior cingulate cortex (ACC), the intraparietal sulcus (IPS) and the dorsolateral prefrontal cortex13,14,15. The cerebellum appears to be involved in attentional control and cognition as well12,16,17. Concerning the Stroop test, several neuroimaging studies showed an increase of activation in cortical areas such as the DLPFC, ACC and posterior parietal cortex (PPC)18,19,20, which supports the previous statement concerning the link between the Stroop test, inhibition and attentional control. Note that the neural circuits that process vergence disparity, accommodation signals and saccadic eye movements implicate similar areas and networks (visual cortex, parietal and frontal lobes, cerebellum)21,22,23,24,25,26. Thus, it is physiologically plausible to expect some competition between cognitive executive functions and the processes that yield clear and single binocular vision in sharing common resources.
From an experimental point of view, several optometric studies were designed to assess the impact of an induced accommodation-vergence conflict on reading and cognitive performance. Narayanasamy et al.27 showed that a bilateral lens-induced hyperopia of 2.50D had a significant impact on academic-related performance in children. Indeed, subjects showed lower performance or took longer to accomplish the reading test. Moreover, this destabilization was exacerbated after 20 minutes of sustained near work, suggesting a negative impact of an induced higher accommodative demand on cognitive performance. These results are in line with those of Garzia et al.28 in students: an induced accommodative stress of −2.00D on each eye increased the time to accomplish a reading task. Similar results were also found in the study of Poltavski et al.29, showing that the same amount of stress induced in adults impacted not only the amount of the accommodative lag30, but also the performance in a neuropsychological task of sustained attention, as the reaction time was greater in the stress-induced condition compared to control. Ludlam and Ludlam31 used base-in prisms on students and demonstrated poorer performance in reading comprehension in the stress-induced condition compared to control. These results suggest that an experimentally induced accommodation/vergence conflict could be responsible for lower performance in cognitive processes linked to attention and inhibition. However, eye movements such as reading saccades and vergence were not investigated in these studies, and the neuropsychological tests used differed across study designs.
Goals of the Study
The first goal of this study is to measure eye movements objectively with video-oculography during the different tasks of the Stroop test (reading, color denomination, interference). As the saccades are similar in the three tasks but the cognitive load differs, one could expect changes in the pattern of exploration, in the binocular motor control such as saccade disconjugacy, or in fixation stability (disconjugacy during the fixation).
The second goal of this study is to observe the potential impact of an induced vergence-accommodation conflict on performance during the Stroop tasks. We expect that disturbing the usual balance between vergence and accommodation, using prisms or spherical lenses, will force the visual system to adapt its responses to keep vision single and clear. This study will provide evidence to determine whether such forcing impacts the cognitive performance, the binocular motor control of the saccades and fixations, or both interacting with each other. Moreover, comparing the effects of a lens-induced and a prism-induced conflict will finally make it possible to establish whether blur or double vision is more disruptive.
A total of 24 voluntary students aged from 19 to 23 years (mean age 20.9 ± 1.45 years, 11 males), who were studying optics at the Lycée d'Optique Fresnel in Paris, participated in this study. Three subjects reported constant diplopia when wearing the prisms and were therefore excluded from the sample. They underwent an optometric screening using the same method as Daniel and Kapoula10. All selected subjects presented no binocular and/or accommodative dysfunctions according to the norms established by Scheiman and Wick32 and considering the number of signs used in the studies of Porcar and Martinez-Palomera33 and Shin et al.34. Subjects wore their habitual refractive correction (when necessary) to yield normal vision. Refractive errors (spherical equivalent) ranged from −3.50D to +1.50D, and 11 subjects were wearing a correction before the experiment. The other inclusion criteria were: a minimum of 20/20 visual acuity for each eye, no signs of amblyopia or strabismus, stereoacuity of 40 arcsec or better (evaluated with the Wirt Rings Stereo Test, Stereo Optical Company) and no neurological findings. In addition, subjects were excluded from the study if the following criteria were observed: vertical phoria >1 prism diopter (Δ); an antecedent of eye pathology or surgery that could affect visual acuity or motility; presence of a central suppression or a fixation disparity (checked with the Mallett Fixation Disparity Test Unit and the Mallett Near Vision Unit NV5); signs of color vision defects (checked with an Ishihara plate test), which would affect the ability to perform the Stroop test normally; constant double and/or blurry vision reported during the testing.
The investigation adhered to the tenets of the Declaration of Helsinki and was approved by the local human experimentation committee, the "Comité de Protection des Personnes" (CPP) Ile de France VI (No: 07035), Necker Hospital, in Paris. Written informed consent was obtained from all subjects after the nature of the procedure was explained.
Procedure and testing
Every subject was seated in front of a computer screen, at 50 cm distance, and was asked to accomplish each task of the Stroop test in 3 different conditions. They were wearing a video-oculography EyeSeeCam system (University of Munich Hospital, Clinical Neuroscience, Munich, Germany, see http://eyeseecam.com/) and a trial frame (Oculus Adult UB3, Zeiss), with their usual correction if needed. Each task was preceded by a five-point calibration.
Stroop test
The version we used was made up of 3 different tasks: in the "reading" task, the subject has to read aloud a succession of words designating colors (red, blue, green or yellow), written in black; in the "denomination" task, the subject has to name a succession of colored dots (red, green, blue or yellow); in the "interference" task, the subject has to name the color of the print of a word printed in an incongruent color (red, green, blue or yellow), for example the word "red" printed in yellow. Each trial contains eighty items (a 10-line by 8-column matrix) spaced 3.7 cm from each other (center to center, 4.2° at 50 cm distance) and placed randomly. Each letter of the words was about 0.4° of angular size. The diameter of the color dots was about 0.4° of angular size as well. Subjects were instructed to finish as quickly as possible without making mistakes or omissions, and tasks were randomized from one condition to the other. To minimize a potential training effect, a task could not be followed by the same one, even if the condition also changed. Between two different trials, subjects were asked to close their eyes for one minute.
Subjects accomplished the Stroop test in 3 different conditions: (1) a Control condition, wearing their usual correction; (2) a Prism condition, wearing their usual correction and an 8Δ base-out prism placed on the trial frame in front of each eye (about 2.5 Meter Angles, MA); (3) a Lens condition, wearing their usual correction and a −2.50D lens placed on the trial frame in front of each eye. We chose these amounts of spherical or prism power to induce a similar amount of conflict on the accommodative demand or the convergence demand. To minimize potential training or fatigue effects, conditions were randomized differently for every subject.
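For reference, the rough numerical equivalence of the two induced demands can be checked as follows (a sketch assuming an interpupillary distance of about 6.4 cm, a value not stated in the text):

```latex
% Prism condition: 8^{\Delta} base-out per eye, i.e. 16^{\Delta} in total.
% Converting prism dioptres to meter angles with PD \approx 6.4\,\mathrm{cm}:
\frac{16^{\Delta}}{6.4\ \mathrm{cm}} \approx 2.5\ \mathrm{MA}
\quad\text{(extra convergence demand)},
\qquad
-2.50\ \mathrm{D}\ \text{per eye} \;\Rightarrow\; +2.50\ \mathrm{D}
\quad\text{(extra accommodative demand)},
% both on top of the 2 MA / 2 D baseline at the 50 cm viewing distance.
```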
Assessment of visual and binocular functions
Optometric screening was done for all the subjects prior to the experiment, on a separate day. We used methods and materials similar to those of Daniel and Kapoula10 to evaluate the visual functions in different areas: symptomatology (using the CISS35), visual acuity, binocular vision (stereo acuity, central suppression and fixation disparity), vergence (NPC, fusional ranges at far and near distance, vergence facility), accommodation (Binocular Fused Cross Cylinder, Negative and Positive Relative Accommodation, monocular and binocular Near Point of Accommodation, binocular and monocular accommodative facilities), phorias and AC/A ratio.
Eye movements recording
The subject was asked to accomplish the different tasks, and eye movements were recorded binocularly with a video-oculography EyeSeeCam system (University of Munich Hospital, Clinical Neuroscience, Munich, Germany, see http://eyeseecam.com/). At the beginning of each task, a 5-point calibration sequence was run using a matrix of laser dots: a central dot and four peripheral dots displayed at 8.5° rightward, leftward, downward and upward. Subjects fixated each dot in turn, four times, and the total calibration task lasted about 20 seconds.
Eye movement analysis
Calibration factors for each eye were extracted from the saccades recorded in the calibration task. We used the software Analyze32 to extract and analyze the data. From the individual calibrated eye position signals, we derived the horizontal conjugate signal by calculating the mean of the two horizontal eye positions, i.e. (left eye + right eye)/2, and the horizontal disconjugate signal by calculating the position difference between the eyes, i.e. left eye − right eye. The velocities of the horizontal conjugate and disconjugate signals were computed using a symmetrical two-point differentiator combined with low-pass filtering with a Gaussian FIR filter (cut-off frequency 33 Hz).
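A minimal sketch of this signal derivation in NumPy/SciPy follows; the sampling rate below is an assumed placeholder (only the 33 Hz cut-off comes from the text), and the sigma-to-cutoff conversion is a rough approximation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

FS = 220.0  # sampling rate in Hz (assumed placeholder; device dependent)

def conjugate_disconjugate(left_eye, right_eye):
    """Conjugate = (L + R) / 2, disconjugate = L - R (horizontal positions, deg)."""
    conj = (left_eye + right_eye) / 2.0
    disconj = left_eye - right_eye
    return conj, disconj

def velocity(position, fs=FS, cutoff_hz=33.0):
    """Velocity via Gaussian low-pass filtering followed by a symmetric
    two-point (central) difference, as described in the text."""
    # Rough sigma giving a Gaussian response that falls off near cutoff_hz
    sigma = fs / (2.0 * np.pi * cutoff_hz)
    smoothed = gaussian_filter1d(position, sigma)
    return np.gradient(smoothed) * fs  # deg/s
```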
Horizontal eye movements were defined using the velocity of the signal: conjugate velocity for saccades and disconjugate velocity for vergence. The onset and offset were marked as the times when the velocity signal exceeded, or dropped below, 10% of the maximum velocity, respectively. Similar criteria have been used in several other studies (Bucci et al.36, Yang and Kapoula37, Vernet et al.38): i for the onset and p for the offset of each eye movement (see Fig. 1). The automatic position of the markers was carefully verified by visual inspection of the individual eye movement traces. From these markers, we measured the amplitude of the movement (between p and i).
Analysis and marking of the reading saccades: determination of the saccade and of the fixation duration. 'i' and 'p' indicate respectively the beginning and the end of each saccade. We studied the post saccadic drift 80 ms and 160 ms after the end of the saccade, 'x' and 'y' indicate respectively these two periods of fixation. Lower blue trace: horizontal conjugate position. Upper orange trace: horizontal disconjugate position.
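The 10%-of-peak-velocity rule can be sketched as follows (a hypothetical helper operating on a single movement's velocity trace):

```python
import numpy as np

def mark_onset_offset(vel):
    """Return the indices (i, p) of movement onset and offset: the first
    and last samples whose speed exceeds 10% of the peak velocity."""
    speed = np.abs(vel)
    threshold = 0.10 * speed.max()
    above = np.flatnonzero(speed > threshold)
    return above[0], above[-1]
```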
Saccades analysis
We focused our analysis on progressive saccades, i.e. from left to right in the time course of the experiment; regressive or corrective saccades in the opposite direction were scarce (<5%). A few markers were added to the saccade analysis: x and y, respectively 80 ms and 160 ms after p, as post-saccadic fixation marks (see Fig. 1). From these markers, we measured the amplitude of the saccades, the saccade disconjugacy, and the amplitude of the post-saccadic drifts during the first 80 ms (between x and p) and the first 160 ms (between y and p) of fixation, using the conjugate signal and the disconjugate signal.
Additionally, we evaluated fixation disparity at the beginning of each fixation, considering 3 critical points: the end of the saccade ("p"), and 80 ms ("x") and 160 ms ("y") after the end of the saccade. These time points correspond to the two time constants of fixation drift (see Kapoula et al.39). We measured the fixation disparity at these three points by subtracting the vergence angle expected at 50 cm from the measured vergence angle: a positive value indicates a higher vergence angle than expected (esodisparity), and a negative value a smaller vergence angle than expected (exodisparity). Using these three values, we calculated the mean fixation disparity for each fixation. Because of the lack of monocular calibration, the fixation disparity values are not accurate in absolute terms40. Nevertheless, we decided to use the individual mean values of the standard deviation as an indicator of the variability of the calculated fixation disparity, which allows comparisons with earlier studies in this field41,42.
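A sketch of this sign convention follows (the interpupillary distance is an assumed value; as noted above, absolute values are only indicative without monocular calibration):

```python
import numpy as np

IPD_CM = 6.4        # assumed interpupillary distance (not given in the text)
DISTANCE_CM = 50.0  # viewing distance used in the experiment

def expected_vergence_deg(ipd_cm=IPD_CM, d_cm=DISTANCE_CM):
    """Geometric vergence angle required to binocularly fixate at distance d."""
    return np.degrees(2.0 * np.arctan((ipd_cm / 2.0) / d_cm))

def fixation_disparity(measured_vergence_deg):
    """Measured minus expected vergence angle:
    positive = esodisparity (over-convergence), negative = exodisparity."""
    return measured_vergence_deg - expected_vergence_deg()
```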
We were interested in evaluating how saccade motor control and related post saccadic drift influence fixation disparity in 3 different conditions. Indeed, Vernet et al.38 showed that the post saccadic disconjugate drift may act to reduce the saccade disconjugacy very early during the fixation (48 ms after the end of the saccade during reading). For all these reasons, we evaluated the fixation disparity for the same period, i.e. the beginning of the fixation, regardless of the total duration of the fixation that can be lengthened according to the task specificity.
We also calculated the fixation duration (between p and the i for the next saccade, see Fig. 2).
Evolution of the conjugate signal of the same subject (s. 17) when accomplishing the reading task (blue trace), the denomination task (green trace) and the interference task (red trace). Fixation durations are indicated between the end of the saccade ('p') and the beginning of the next one (next 'i').
For 20 subjects, 90 to 95% of trials were used for statistical analysis; 5 to 10% were rejected, essentially due to blinks or partially lost signal during the recording, especially concerning the fixation-duration parameter. For one subject, in one denomination trial and one interference trial, 30% of the saccades were rejected due to a loss of signal from one eye during the recording. It is important to note that two subjects reported prismatic distortion during the Prism condition but did not experience double vision, and two other subjects reported headaches after the Lens condition but did not report blurred vision.
Stroop tests results
We used similar methods to those used by Daniel and Kapoula10. Time, corrected errors (when the subject made a mistake but corrected it immediately after) and uncorrected errors were measured for each task and for each subject. To evaluate the flexibility between tasks in Stroop and the impact of an induced vergence/accommodation conflict, we also calculated the time differences in each condition using the global time duration and the mean values in fixation duration. According to MacLeod7 and Jensen et al.43, time differences are believed to be more appropriate to evaluate Stroop interference. As in the study of Stuss et al.44, we opted for the following formulas, using the global time duration and the mean values of the fixation duration:
$$\text{Interference Effect (IE)} = \text{Interference} - \text{Denomination}$$
The Error Rate (ER) gives information on the capacity of the subject to not make mistakes during the different tasks, especially the interference task. The higher it is, the more the subject made errors during the task, which gives information on the distractibility during the test. We calculated it with this formula:
$$\text{ER} = \text{number of corrected errors} + (\text{number of uncorrected errors} \times 2)$$
A corrected error (when the subject made a mistake but corrected it immediately after) had to represent a lower importance than an uncorrected one. Such weighting is usually applied to clinical use of the test (Victoria test adapted for French), as uncorrected errors may represent higher loss of attention.
Eye movement parameters and the Stroop test
This study provides for the first time a recording of eye movements during the different tasks of the Stroop test. As the targets were equally spaced and the eye-movement demand was similar, but each task (reading, denomination and interference) implicates a different level of cognitive demand, we expected to find different binocular coordination behaviors depending on the task. We first focused on the Control condition results, searching for an effect of the cognitive task (reading, denomination and interference) on the motor (amplitude, saccade disconjugacy, post-saccadic drifts) and temporal (fixation duration) parameters of the saccades and fixations, using the individual data. We used a non-parametric Friedman ANOVA as the number of subjects was limited. When a significant main effect was found, post-hoc non-parametric Wilcoxon tests were performed to compare the tasks' results two by two. As the saccade disconjugacy can be corrected by the post-saccadic disconjugate drift (see Vernet et al.38), we also searched for a possible correlation between the amplitude of the saccade disconjugacy and the amplitude of the post-saccadic disconjugate drift, using a Spearman correlation analysis on the individual mean values.
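A minimal sketch of this statistical pipeline with scipy.stats follows; the data below are random placeholders, not the study's measurements:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon, spearmanr

rng = np.random.default_rng(0)
# Placeholder data: 21 subjects x 3 tasks (reading, denomination, interference)
reading, denomination, interference = rng.normal(
    loc=[330.0, 430.0, 540.0], scale=60.0, size=(21, 3)).T

# Repeated-measures main effect of task (non-parametric Friedman ANOVA)
chi2, p = friedmanchisquare(reading, denomination, interference)
print(f"Friedman: chi2={chi2:.2f}, p={p:.4f}")

# Post-hoc pairwise Wilcoxon signed-rank tests if the main effect is significant
if p < 0.05:
    pairs = [("reading vs interference", reading, interference),
             ("denomination vs interference", denomination, interference),
             ("reading vs denomination", reading, denomination)]
    for label, a, b in pairs:
        stat, p_pair = wilcoxon(a, b)
        print(f"{label}: p={p_pair:.4f}")

# Spearman correlation, e.g. saccade disconjugacy vs post-saccadic drift
rs, p_rs = spearmanr(rng.normal(size=21), rng.normal(size=21))
```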
Induced accommodation/vergence conflict
We investigated the effect of the induced vergence/accommodation conflict on the saccade and fixation parameters. As the Stroop test is made of three different tasks, we searched for an effect of a lens-induced (Lens condition) or prism-induced (Prism condition) conflict on the motor parameters (amplitude, speed, saccade disconjugacy, post-saccadic drifts) and the temporal parameter (fixation duration) of the saccades in the reading, denomination and interference tasks separately, using the non-parametric Friedman ANOVA on individual data. When a significant main effect was found, post-hoc non-parametric Wilcoxon tests were performed to compare trials two by two.
We calculated separately the Interference Effect (IE), using the individual data of the global time to accomplish the entire tasks and the individual mean values of the fixation duration in each task, and the Error Rate (ER), in each condition. We applied the same analysis as described above.
Control condition in the Stroop test: cognitive demand and eye movement parameters
Amplitudes of the saccades
The non-parametric Friedman ANOVA revealed a significant effect of the task on the amplitude of the saccades (X²(21, 2) = 8.67, p = 0.013). A post-hoc using non-parametric Wilcoxon tests showed significant differences between Reading and Interference (3.69 ± 0.29° vs 3.51 ± 0.29°; Z = 2.59, p = 0.0096) and between Interference and Denomination (3.51 ± 0.29° vs 3.65 ± 0.29°; Z = 2.41, p = 0.016). No significant difference was found between Reading and Denomination (3.69 ± 0.29° vs 3.65 ± 0.29°; Z = 0.99, p = 0.32).
Fixation duration following the saccades
The non-parametric Friedman ANOVA revealed a significant effect of the task on the length of the fixation duration (X²(21, 2) = 42, p < 0.0001). A post-hoc using non-parametric Wilcoxon tests showed significant differences between Interference and Reading (543.45 ± 90.04 ms vs 327.59 ± 87.58 ms; Z = 4.01, p = 0.00006), between Interference and Denomination (543.45 ± 90.04 ms vs 432.01 ± 61.85 ms; Z = 4.01, p = 0.00006) and between Reading and Denomination (327.59 ± 87.58 ms vs 432.01 ± 61.85 ms; Z = 4.01, p = 0.00006).
Saccade disconjugacy
The non-parametric Friedman ANOVA revealed no significant effect of the task, either on the algebraic value of the disconjugacy of the saccades (X²(21, 2) = 0.095, p = 0.95) or on the absolute value (X²(21, 2) = 0.286, p = 0.87).
Fixation disparity
The non-parametric Friedman ANOVA revealed no significant effect of the task, either on the algebraic value of the amplitude of the fixation disparity (X²(21, 2) = 4.095, p = 0.13) or on the mean values of the standard deviation (X²(21, 2) = 2.95, p = 0.23).
Saccade disconjugacy and the following post-saccadic drift
Significant correlations (p < 0.01) were found for each task and in each condition when analyzing the linear regression of the amplitude of the post-saccadic disconjugate drift after 80 ms as a function of the amplitude of the saccade disconjugacy, using the mean values, as shown in Fig. 3. These results indicate that the post-saccadic drift may act to reduce the misalignment of the eyes at the end of the saccade (see Vernet et al.38), and the quality of this relation appears to be similar across the different tasks of the Stroop test. A higher correlation coefficient is found for Interference (rs = −0.90) compared to Reading (rs = −0.83) and Denomination (rs = −0.89, see Fig. 3).
Linear regression plot of the amplitude of the following post-saccadic disconjugacy in degrees (mean values, °) measured 80 ms after the end of each progressive reading saccade as a function of the amplitude of the saccade disconjugacy in degrees (mean values, °), concerning the reading task (blue diamonds), the denomination task (green triangles) and the interference task (red dots) in the Control condition. Spearman Rs correlation coefficient are indicated.
To determine if the correlation coefficients were significantly different, we applied a comparison of nonoverlapping correlations based on dependent groups two by two. The Silver, Hittner, and May's modification of Dunn and Clark's z using a back transformed average Fisher's Z procedure did not reveal a significant difference between the Reading task and the Denomination task (z = 1.3339, p = 0.1822), between Reading and Interference (z = 1.3709, p = 0.1704), nor between Denomination and Interference (z = 0.2859, p-value = 0.7750).
To summarize, the results show that the cognitive load has a major impact on fixation duration, but not on the binocular motor control of saccade disconjugacy or on fixation disparity. However, small saccades were more frequent during the interference task, which could explain the significantly smaller amplitude of the saccades and suggests a different exploration strategy for this task compared to the others. The binocular coordination of saccades and the stability of fixation appear not to have been altered by the higher cognitive demand of the interference task (see Table 1).
Table 1 Group mean values (bold type) and SD of the results concerning saccade and fixation parameters in the control condition during the different tasks of the Stroop test (Reading, Denomination and Interference). Significant differences depending on the task (p < 0.05) are indicated with an asterisk.
Temporal analysis and associated Stroop test performances
Global time duration and fixation duration
As reading saccades are small and fast to execute, fixations represent the major part of the time during which the executive processes involved in the Stroop tasks take place. We performed a Spearman correlation analysis between the mean values of fixation duration and the global time. As expected, the two measures were significantly and positively correlated for each Stroop task, for each condition and for all subjects (see Fig. 4).
Linear regression plot of the global time to accomplish the task in seconds (s) as a function of the mean values of fixation duration in milliseconds (ms); each point is an individual value in the Control condition, the Prism condition and the Lens condition. Values concerning Reading (blue diamonds), Denomination (green triangles) and Interference (red dots) are reported. Spearman Rs correlation coefficient and p values are indicated for each task in the same color.
Thus, the mean value of fixation duration reflects the processes involved in each task of the Stroop test. Note that we measured fixation durations following the progressive saccades, excluding fixations following regressive saccades due to errors or hesitation, to obtain a more precise measure of cognitive processing. We therefore decided to use the mean values of fixation duration to also calculate the usual Stroop interference effect (see Tables 2 and 3).
Table 2 Mean values concerning the global time to accomplish the different Stroop tasks in seconds, depending on the condition.
Table 3 Mean values concerning fixation duration in the different Stroop tasks (ms), depending on the condition.
Inspection of Tables 2 and 3 shows that for both total time and fixation duration, a statistically significant effect of the conditions occurs for the interference effect, particularly when fixation duration is used: the interference effect is much higher in the Prism and Lens conditions relative to the Control condition.
Error rate
Concerning the Interference task, the non-parametric Friedman ANOVA revealed a significant effect of the conditions on the ER (X²(21, 2) = 7.65, p < 0.022). A post-hoc using non-parametric Wilcoxon tests showed a significant difference between the Control condition and the Prism condition (1.71 ± 1.42 vs 3.57 ± 3.20; Z = 2.11, p = 0.035). The difference between the Lens condition and the Prism condition showed a similar trend (1.76 ± 1.79 vs 3.57 ± 3.20; Z = 1.89, p = 0.059) but did not reach significance.
To summarize, the results show that Stroop performance was more stable in the Control condition than in the induced vergence-accommodation conflict conditions. In particular, when calculating the interference effect, which reflects the time consumed in inhibiting the reading response during the interference task, subjects showed more difficulty accomplishing the test in the induced-conflict conditions, especially the Prism condition. The prism-induced conflict appears to have a greater impact on Stroop performance, as the error rate was also higher in this condition.
Impact of an induced accommodation/vergence conflict on saccade and fixation parameters
Mean values and standard deviations of the saccade and fixation parameters measured during the different tasks and conditions are shown in Tables 4–6.
Table 4 Mean values and standard deviation concerning saccades and fixation parameters during the Reading task of the Stroop test. Friedman's ANOVA and post-hoc Wilcoxon tests results are indicated.
Table 5 Mean values and standard deviation concerning saccades and fixation parameters during the Denomination task of the Stroop test.
Table 6 Mean values and standard deviation concerning saccades and fixation parameters during the Interference task of the Stroop test. Friedman's ANOVA and post-hoc Wilcoxon tests results are indicated.
Inspection of Tables 4–6 shows that the prism-induced vergence-accommodation conflict appears to have a greater impact on the binocular motor control of the saccades in most of the Stroop tasks, as the saccade disconjugacy and the disconjugate drift values change in the Prism condition compared to the Control and Lens conditions. The mean values of the standard deviation of fixation disparity, reflecting the stability of fixation, were also statistically higher in the Prism condition, which could also reflect the deterioration of the correlation between the amplitude of the saccade disconjugacy and the associated post-saccadic disconjugate drift, especially when the cognitive demand is high (interference task). The lens-induced conflict showed minor effects on the disconjugate drift; however, the results differ as a function of the task studied, suggesting a lower impact on the binocular motor control of the saccades and fixations.
Saccade disconjugacy and the following post saccadic disconjugacy drift
Significant correlations (p < 0.01) were found for each task and in each condition when analyzing the linear regression of the amplitude of the post-saccadic disconjugate drift after 80 ms as a function of the amplitude of the saccade disconjugacy, using the individual mean values, as shown in Fig. 5.
Linear regression plot of the amplitude of the post-saccadic disconjugacy drift in degrees (°) measured 80 ms after the end of each progressive reading saccade as a function of the amplitude of the intra-saccadic disconjugacy in degrees (°) concerning Reading, Denomination and Interference. Mean values concerning the Control condition (blue squares), the Minus lenses condition (green dots) and the Prisms condition (orange triangles) are reported for each task. Spearman Rs correlation coefficient are indicated in bold type.
To determine if the correlation coefficients were significantly different, we applied a comparison of nonoverlapping correlations based on dependent groups two by two. The Silver, Hittner, and May's modification of Dunn and Clark's z using a back transformed average Fisher's Z procedure results are shown on Table 7.
Table 7 Statistical comparisons of the correlations coefficient two by two for each task.
The Stroop test and properties of eye movements
To our knowledge, the properties of eye movements during the different Stroop tasks had never been studied before. The Stroop test is a gold-standard neuropsychological test used in neuroscience, in developmental studies including dyslexia45, and in aging and neurodegenerative diseases to evaluate cognitive executive functions46,47. Indeed, the Stroop test is believed to stimulate attentional and inhibitory mechanisms, particularly the interference task, in which reading must be inhibited to name the color of the written words. This task also bears the greatest cognitive load when compared to the reading and denomination tasks. The first question we asked was: what is the effect of interference or of the color effect on eye movement properties? It is known from clinical studies that the response time is increased during the color denomination and interference tasks relative to the simple reading task, but this increase of time could be related to several potential factors. As such, this study sought to establish which component exactly is related to this increase of time. We have shown that the lengthening of time while naming the color dots or doing the interference task is mainly due to an increase of fixation duration. Fixation duration is the time during which the central nervous system processes visual information to name the color of the ink in which the word is written. It is possible that the cognitive task of interference is not entirely executed during the period of fixation, and that this process partially continues and is achieved at the beginning of the next saccade; still, the results clearly indicate that fixation duration is the time during which this interference process primarily occurs. So, the interference task involves longer fixation durations, and this result is in line with literature from other fields: e.g. fixation durations during reading are believed to be related to cognitive processing48,49,50.
At the motor level, it is important to note the increase in the frequency of small saccades (<1.5°), which leads to the overall decrease of the mean amplitude of the saccades during the interference test, and the increase of regressive saccades. This behavior is, to our knowledge, reported for the first time. It is evocative of a strategy consisting of scrutinizing every item carefully during the interference task. It is possible that microsaccades, which reflect focal scanning51, are more frequent in the interference task, but this needs further investigation. However, the results showed that the properties of the saccades themselves remained stable whatever the cognitive task: the coordination of the saccades, the amplitude and variability of the fixation disparity, along with the correlation coefficients between the amplitude of the saccade disconjugacy and the associated post-saccadic drift, remained similar in the different tasks of the Stroop test (see Table 1 and Fig. 4).
In conclusion, we demonstrate here for the first time a strong modulation of fixation duration by the cognitive demand of the Stroop test: the measure of fixation duration reflects, in an incremental way, the degree of difficulty of the test. Naming the color during the denomination task requires longer fixation because it is less automatic than reading. The interference task requires inhibition of the reading response in addition to the color naming response, which involves higher cognitive processes.
Interplay between cognition and vergence/accommodation conflict
Inducing an accommodation/vergence conflict by adding prisms or spherical lenses involves shifting the accommodation or the convergence demand in order to maintain single and clear vision. The conflict creates a stress on the visual system, as the accommodative response no longer corresponds to the vergence response. Even if the visual system can partially compensate for this mismatch (as shown in adults52), the compensation effort itself could require visual attention resources and impact cognitive processes. It is important to note that our selected subjects had to wear their usual correction. However, even though the refraction was checked beforehand, objectively and subjectively using the monocular fogging method to a standard endpoint of maximum plus, a small amount of residual hyperopia could remain, as refraction was not determined under cycloplegic conditions. Therefore, it is important to acknowledge that the Lens condition could have produced more difficulties for some of the subjects.
In the present study, induction of an accommodation or vergence conflict with prisms or with spherical lenses significantly increases the interference effect during the Stroop tasks, and the increase is more significant for the prism-induced conflict than for the lens-induced conflict. This result sheds new light on the interplay between vergence or accommodation conflict, cognition and eye movement control. The interference effect, measured by the difference in fixation duration between the interference and color denomination tasks, is on average 42 milliseconds with the prisms on and 37 milliseconds with the spherical lenses on; this is indicative of the importance of high-quality visual input when high cognitive executive functions are engaged. Note that the neural circuits that process vergence disparity and accommodation signals (visual cortex, parietal and frontal lobes)21,22,23,24,25,26 are partially the same as those that control cognition (e.g. frontal and parietal lobes)14,15,53,54,55,56,57,58. Indeed, this increase could be explained by the fact that attentional resources are required to treat the disparity induced by the prisms or the blur induced by the lenses, thereby diminishing the availability of such resources for the cognitive task (i.e. inhibition of reading and naming of the color). In other words, tolerance to the vergence/accommodation conflict seems to be lower in the case of the interference task, as reflected by the substantial increase in fixation duration. The visual stress induced by the conflict conditions forced the subjects to immediately redistribute their cognitive resources to maintain single and clear vision. This redistribution involves sharing of the cognitive attention resources required by the Stroop test, thereby decreasing their performance.
Another important point of our results is that the prism-induced conflict appears to be more disturbing than the lens-induced conflict. Minus lenses induce blur and force the subject to accommodate to recover clear vision, shifting the accommodative response positively while the convergence demand stays the same; the prisms induce a disparity error, cause an increase in convergence, and conflict with the accommodation demand. The conflict between accommodation and vergence can thus have a greater impact on motor parameters in the Prisms condition than in the Lens condition. As shown by Bharadwaj et al.52, the visual system usually shows a larger tolerance in the accommodative response to a Lenses condition than in the vergence response to a Prisms condition. This could also be attributable to the fact that the size of Panum's area is smaller than the depth-of-focus, which allows a larger tolerance in the accommodative response during the lens-induced condition. Thus, avoiding double vision through prisms requires the visual motor system to produce an adequate, precise and stable vergence response, compared to the lens-induced blur on the accommodative response. Also, it is possible that the neural circuits dealing with blur and accommodative cues interfere less with cognition than the circuits controlling disparity eye movements. Vergence and accommodative adaptation involve the cerebellum for such responses22,59. In other words, disparity of images interferes more with cognition than blurred images, and this is quite plausible physiologically, as disparity calls for immediate adjustment of vergence to avoid diplopia while blurred images can be tolerated60. The higher interference with cognition was evident both in terms of increased fixation duration and in terms of significantly higher error rates in the interference task in the Prisms condition compared to the Lenses or the Control condition. The higher error rate reinforces the conclusion that the Prisms condition has a higher impact on cognitive resources. Thus, we hypothesize that this involves redistribution of these resources in the extended neural network (visual-frontal-parietal-cerebellum) that subtends cognitive and visual aspects.
Nevertheless, the Stroop interference is linked to two different parameters: the speed of word reading and the efficiency of the inhibitory mechanism that must block the reading response9. As reading was not lengthened in the Lenses condition but the interference effect was higher than in the Control condition, we argue that the vergence/accommodation conflict impacted mainly the inhibitory mechanism. As the reading task was slightly lengthened in the Prisms condition compared to the other conditions, reading processes could also have been destabilized, which could be responsible for a higher increase of the interference effect with the prisms on. Yet, the higher error rate in the interference task more likely reflects a lower efficiency of inhibition. Based on these observations, we argue that the accommodation/vergence conflict interferes mainly with the inhibition process involved in the Stroop test.
Accommodation/vergence conflict modulates saccade disconjugacy and fixation disparity
In line with our general theoretical concept, according to which binocular coordination of saccades depends on the quality of vergence and its synergy with accommodation10, we expected the induced conflict to modify the disconjugacy of the saccades. The parameters that changed first were the quality of binocular coordination of the saccades and the quality of the link between binocular coordination of the saccade and binocular coordination of the drift of the eyes during the fixation period. The absolute value of saccade disconjugacy did not change significantly; yet, the algebraic value of the intra-saccadic disconjugacy became more negative, which means that in the presence of the prisms only, and in almost all three tasks, saccade disconjugacy was more frequently divergent. Thus, the prisms act on the intra-saccadic disconjugacy. Another important aspect is the weakening of the correlation between intra-saccadic disconjugacy and the disconjugacy during the following fixation. Previous studies showed a significant correlation between these two parameters38,61; such correlation was associated with a better capacity for reducing the disparity during fixation that results from the intra-saccadic disconjugacy. In the present study, the prisms altered this correlation, which became weaker than in the Lenses and Control conditions, particularly for the interference Stroop task (see Fig. 5). Thus, prisms caused a deregulation of the capacity of the central nervous system to control the sequence of intra-saccadic and post-saccadic disconjugacy, as the standard deviation of the fixation disparity also appeared to be highly impacted in the Prisms condition compared to the others, and this for most of the Stroop tasks. We argue that this is disruptive and could also interfere with cognitive executive functions.
However, although the coordination of the saccades was not disrupted in the Lenses condition, the amplitude and variability of the fixation disparity values showed a destabilization: maintaining the appropriate vergence angle during fixation was more difficult in both the Prisms and the Lenses conditions. This critical phase of fixation is essential for single vision, as it permits reading and cognitive processes to occur. A higher disparity at the beginning of the fixation must therefore be reduced to keep the vergence angle stable. In addition to poorer saccade coordination, we make the hypothesis that a higher and more variable fixation disparity can interfere with the efficiency of the inhibition processes as tested in the Stroop test.
In conclusion, this study demonstrates that vergence or accommodation conflicts, particularly those induced by prisms, interfere with the cognitive executive functions stimulated by the Stroop test. Cognitive interference is reflected mainly by longer fixations and higher error rates. The mismatch induced by prisms also alters the disconjugacy of saccades, which becomes more divergent, and the correlation between intra-saccadic disconjugacy and the post-saccadic disconjugacy drift weakens. Therefore, residual disparities during fixation occur, and these could interfere with cognition. The study also suggests a better tolerance to the mismatch due to blur induced by spherical lenses. This study has both theoretical and clinical implications: (1) at the theoretical level, the interplay between vergence/accommodation conflict and cognition is of interest, as visuo-motor and cognitive processes rely on the same parietal/frontal cortical structures; (2) at the clinical level, it is important to consider that tolerance to vergence/accommodation mismatch depends upon the difficulty of the cognitive task: tolerance is lower when higher executive functions, such as those of the interference task, are in progress.
Miyake, A. et al. The unity and diversity of executive functions and their contributions to complex "Frontal Lobe" tasks: a latent variable analysis. Cogn. Psychol. 41, 49–100 (2000).
Redick, T. S. & Engle, R. W. Working memory capacity and attention network test performance. Appl. Cogn. Psychol. 20, 713–721 (2006).
Martinussen, R. & Major, A. Working memory weaknesses in students with ADHD: implications for instruction. Theory Pract. 50, 68–75 (2011).
Haapala, E. A. et al. Associations of motor and cardiovascular performance with academic skills in children. Med. Sci. Sports Exerc. 46, 1016–1024 (2014).
Logan, G. D. On the ability to inhibit thought and action: A user's guide to the stop signal paradigm. In D. Dagenbach & T. H. Carr (Eds), Inhibitory processes in attention, memory, and language. San Diego, CA: Academic Press, pp 189–239 (1994).
Stroop, J. R. Studies of interference in serial verbal reactions. J Exp Psychol. 18, 643–662 (1935).
MacLeod, C. M. Half a century of research on the Stroop effect: an integrative review. Psychol Bull. 109, 163–203 (1991).
Protopapas, A., Archonti, A. & Skaloumbakas, C. Reading ability is negatively related to Stroop interference. Cogn Psychol. 54, 251–82 (2007).
Protopapas, A., Vlahou, E. L., Moirou, D. & Ziaka, L. Word reading practice reduces Stroop interference in children. Acta Psychologica. 148, 204–208 (2014).
Daniel, F. & Kapoula, Z. Binocular Vision and the Stroop Test. Optom Vis Sci. 93, 194–208 (2016).
Collette, F., Hogge, M., Salmon, E. & Van der Linden, M. Exploration of the neural substrates of executive functioning by functional neuroimaging. Neuroscience. 139, 209–221 (2006).
Bush, G. Cingulate, Frontal and Parietal Cortical Dysfunction in Attention-Deficit/Hyperactivity Disorder. Biological Psychiatry 69, 1160–1167 (2011).
Mansouri, F. A., Tanaka, K. & Buckley, M. J. Conflict-induced behavioural adjustment: a clue to the executive functions of the prefrontal cortex. Nat Rev Neurosci. 10(2), 141–152 (2009).
Wang, L. et al. Effective connectivity of the fronto-parietal network during attentional control. J Cogn Neurosci. 22, 543–53 (2010).
Aron, A. R. The neural basis of inhibition in cognitive control. Neuroscientist. 13, 214–28 (2007).
Buckner, R. L. The cerebellum and cognitive function: 25 years of insight from anatomy and neuroimaging. Neuron. 80, 807–815 (2013).
Esterman, M. et al. Network-targeted cerebellar transcranial magnetic stimulation improves attentional control. Neuroimage. 156, 190–198 (2017).
Chen, Z., Lei, X., Ding, C., Li, H. & Chen, A. The neural mechanisms of semantic and response conflicts: an fMRI study of practice-related effects in the Stroop task. Neuroimage. 66, 577–584 (2013).
Pardo, J. V., Pardo, P. J., Janer, K. W. & Raichle, M. E. The anterior cingulate cortex mediates processing selection in the Stroop attentional conflict paradigm. Proc Natl Acad Sci USA 87, 256–259 (1990).
Bush, G. et al. The counting Stroop: An interference task specialized for functional neuroimaging. Validation study with functional MRI. Human Brain Mapping. 6, 270–282 (1998).
Judge, S. J. & Cumming, B. G. Neurons In the monkey midbrain with activity related to vergence eye movement and accommodation. J Neurophysiol. 55, 915–30 (1986).
Leigh, R. J. & Zee, D. S. The Neurology of Eye Movements. Fifth Edition. New York: Oxford University Press (2016).
Mays, L. E. & Gamlin, P. D. Neuronal circuitry controlling the near response. Curr Opin Neurobiol. 5, 763–768 (1995).
Mays, L. E. Neural control of vergence eye movements: convergence and divergence neurons in midbrain. J Neurophysiol. 51, 1091–1108 (1984).
Gamlin, P. D. Subcortical neural circuits for ocular accommodation and vergence in primates. Ophthalmic Physiol Opt. 19, 81–89 (1999).
Zhang, Y., Mays, L. E. & Gamlin, P. D. R. Characteristics of near response cells projecting to the oculomotor nucleus. J. Neurophysiol. 67, 944–960 (1992).
Narayanasamy, S., Vincent, S. J., Sampson, G. P. & Wood, J. M. Impact of simulated hyperopia on academic-related performance in children. Optom Vis Sci. 92, 227–236 (2015).
Garzia, R. P. et al. Effects of nearpoint visual stress on psycholinguistic processing in reading. J Am Optom Assoc. 60, 38–44 (1989).
Poltavski, D. V., Biberdorf, D. & Petros, T. V. Accommodative response and cortical activity during sustained attention. Vision Res. 63, 1–8 (2012).
Ciuffreda, K. J. The scientific basis for an efficacy of optometric vision therapy in nonstrabismic accommodative and vergence disorders. Optometry 73, 735–762 (2002).
Ludlam, W. M. & Ludlam, D. E. Effects of prism-induced, accommodative convergence stress on reading comprehension test scores. J Am Optom Assoc. 59, 440–445 (1988).
Scheiman, M. & Wick, B. Clinical Management of Binocular Vision: Heterophoric, Accommodative and Eye Movement Disorders. Philadelphia: Lippincott Williams and Wilkins (2013).
Porcar, E. & Martinez-Palomera, A. Prevalence of general binocular dysfunctions in a population of university students. Optom Vis Sci. 74, 111–113 (1997).
Shin, H. S., Park, S. C. & Park, C. M. Relationship between accommodative and vergence dysfunctions and academic achievement for primary school children. Ophthalmic Physiol Opt. 29, 615–624 (2009).
Rouse, M. et al. Validity of the convergence insufficiency symptom survey: a confirmatory study. Optom Vis Sci. 86, 357–363 (2009).
Bucci, M. P., Gomes, M., Paris, S. & Kapoula, Z. Disconjugate oculomotor learning caused by feeble image-size inequality: Differences between secondary and tertiary positions. Vision Research. 41, 625–637 (2001).
Yang, Q. & Kapoula, Z. Binocular coordination of saccades at far and at near in children and in adults. Journal of Vision. 3, 554–61 (2003).
Vernet, M. & Kapoula, Z. Binocular motor coordination during saccades and fixations while reading: a magnitude and time analysis. Journal of Vision. 9, 2 (2009).
Kapoula, Z. A., Robinson, D. A. & Hain, T. C. Motion of the eye immediately after a saccade. Exp Brain Res. 61, 386–394 (1986).
Švede, A., Treija, E., Jaschinski, W. & Krūmiņa, G. Monocular versus binocular calibrations in evaluating fixation disparity with a video-based eye-tracker. Perception 44(8–9), 1110–1128 (2015).
Jaschinski, W. Fixation disparity and accommodation as a function of viewing distance and prism load. Ophthalmic Physiol Opt. 17, 324–339 (1997).
Jainta, S. & Kapoula, Z. Dyslexic children are confronted with unstable binocular fixation while reading. PLoS One. 6, e18694 (2011).
Jensen, A. R. & Rohwer, W. D. Jr. The Stroop color-word test: a review. Acta Psychol. 25, 36–93 (1966).
Stuss, D. T., Floden, D., Alexander, M. P., Levine, B. & Katz, D. Stroop performance in focal lesion patients: dissociation of processes and frontal lobe lesion location. Neuropsychologia. 39, 771–786 (2001).
Kapoula, Z. et al. Poor Stroop performance in 15-years-old dyslexic teenagers. Exp Brain Res. 203, 419–425 (2010).
Fliss, R., Lemerre, M. & Mollard, A. Performances on a cognitive theory of mind task: specific decline or general cognitive deficits? Evidence from normal aging. Geriatr Psychol Neuropsychiatr Vieil. 14, 221–230 (2016).
Sobol, N. A. et al. Associations between physical function, dual-task performance and cognition in patients with mild Alzheimer's disease. Aging Ment Health. 10, 1–8 (2015).
Rayner, K. Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 124, 372–422 (1998).
Rayner, K. The 35th Sir Frederick Bartlett Lecture: eye movements and attention in reading, scene perception, and visual search. Q. J. Exp. Psychol. 62, 1457–1506 (2009).
Henderson, J. M., Choi, W., Luke, S. G. & Desai, R. H. Neural correlates of fixation duration in natural reading: Evidence from fixation-related fMRI. Neuroimage. 119, 390–397 (2015).
Martinez-Conde, S., Macknik, S. L., Troncoso, X. G. & Hubel, D. H. Microsaccades: a neurophysiological analysis. Trends Neurosci. 32, 463–75 (2009).
Bharadwaj, S. R. & Candy, T. R. Accommodative and vergence responses to conflicting blur and disparity stimuli during development. Journal of Vision. 9(4), 1–18 (2009).
Milham, M. P., Banich, M. T., Claus, E. D. & Cohen, N. J. Practice-related effects demonstrate complementary roles of anterior cingulate and prefrontal cortices in attentional control. Neuroimage. 18, 483–493 (2003).
Miller, E. K. & Cohen, J. D. An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 24, 167–202 (2001).
Brass, M., Derrfuss, J., Forstmann, B. & Cramon, D. The role of the inferior frontal junction area in cognitive control. Trends Cogn Sci. 9, 314–316 (2005).
Nachev, P., Kennard, C. & Husain, M. Functional role of the supplementary and pre-supplementary motor areas. Nat Rev Neurosci. 9, 856–869 (2008).
Banich, M. T. et al. fMRI studies of Stroop tasks reveal unique roles of anterior and posterior brain systems in attentional selection. J Cogn Neurosci. 12, 988–1000 (2000).
Barch, D. M. et al. Anterior cingulate cortex and response conflict: effects of response modality and processing domain. Cereb Cortex. 11, 837–848 (2001).
Takagi, M., Tamargo, R. & Zee, D. S. Effects of lesions of the cerebellar oculomotor vermis on eye movements in primate: binocular control. Prog Brain Res. 142, 19–33 (2003).
Horwood, A. M. & Riddell, P. M. Developmental changes in the balance of disparity, blur, and looming/proximity cues to drive ocular alignment and focus. Perception. 42, 693–715 (2013).
Daniel, F., Morize, A., Brémond-Gignac, D. & Kapoula, Z. Benefits from Vergence Rehabilitation: Evidence for Improvement of Reading Saccades and Fixations. Frontiers in Integrative Neuroscience. 10, 33 (2016).
IRIS Group, Physiopathologie de la Vision et Motricité Binoculaire CNRS FR3636, Université Paris Descartes, Paris, France
François Daniel & Zoï Kapoula
François Daniel
Zoï Kapoula
F. Daniel and Z. Kapoula designed the experiment. F. Daniel realized the measurements and the statistical analysis. F. Daniel and Z. Kapoula wrote the main manuscript text. F. Daniel prepared all the figures and tables. F. Daniel and Z. Kapoula reviewed the manuscript.
Correspondence to François Daniel or Zoï Kapoula.
Daniel, F., Kapoula, Z. Induced vergence-accommodation conflict reduces cognitive performance in the Stroop test. Sci Rep 9, 1247 (2019). https://doi.org/10.1038/s41598-018-37778-y
Series and Transformations
Laplace Transform
The contents of these lecture notes are yet to be edited by mentors and verified by a professor.
There are two types of Laplace transformations.
The unilateral Laplace transform takes a function f(t), defined for $t\ge0$, and gives as output a function with a different domain. The transform is given by,
\[\ell \{f(t)\}=\int _{ 0 }^{ \infty }{ { e }^{ -st }f(t)dt }\]
where $s$ is referred to as frequency.
The bilateral Laplace transform is an extension of the former, which also takes the negative half of the real axis into consideration. Hence it is given by,
\[\ell \{f(t)\}=\int _{ -\infty }^{ \infty }{ { e }^{ -st }f(t)dt }\]
The Laplace transformation of a function, $f(t)$ is denoted as $F(s)$ i.e.
\[\ell \{f(t)\}=F(s)\]
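As a quick computational sketch (assuming Python with the SymPy library, which is not part of these notes), we can compute unilateral transforms symbolically; SymPy also reports the half-plane of convergence of the integral:

```python
from sympy import symbols, exp, sin, laplace_transform

t = symbols('t', positive=True)
s = symbols('s')

# L{e^(-2t)} = 1/(s + 2); SymPy also returns the abscissa of convergence,
# i.e. the transform exists for Re(s) > -2.
F, a, cond = laplace_transform(exp(-2*t), t, s)
print(F, a)  # 1/(s + 2)   -2

# L{sin(t)} = 1/(s^2 + 1)
print(laplace_transform(sin(t), t, s, noconds=True))
```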
We have already seen a method to extract the constituent frequencies of a sound, i.e., the frequency distribution of the different constituent pure tones of a given sound (refer to the Fourier transform).
While drawing out the frequency distribution from a time-domain function, we know how to use complex exponentials. Is there anything to generalize in that particular equation (the Fourier transform equation)?
Let us look at another perspective.
Consider some complex differential equations, say differential equations dealing with the flow of heat through an insulated conductor (w.r.t. time). Since it is always difficult to solve such equations directly, we can try to convert them into algebraic equations that are easy to handle (easy to simplify and solve). So is there any tool to solve such differential equations by making them analogous to algebraic equations?
In these sets of sections, we check how "Laplace Transformations" can be such a useful tool. We also check out the wide range of applications it has over many fields.
Laplace transformations are a tool used in many fields. They take in a function and convert it to an alternative form.
To be precise, they change time-domain functions into frequency-domain functions. Hence,
\[\ell \{f(t)\}=F(s)\]
where $\ell$ denotes the Laplace transformation.
Hence we also define the inverse Laplace transformation, where we pass in a frequency-domain function and get back a time-domain function.
\[\ell^{-1}\{F(s)\}=f(t)\]
Video 1 : Basic Laplace transform intuition
Context of the Definition
By simply using the term Laplace transform, in most cases we mean the unilateral Laplace transform (unless stated otherwise).
Since the transformation is defined by an integral, we say the Laplace transform exists for a function $f(t)$ if
\[\lim _{ t\longrightarrow \infty }{ \int _{ 0 }^{ t }{ { e }^{ -st }f(t)dt }}\]
exists. If the integral does not exist, we say the function fails to have a Laplace transform.
In all cases, $s$ represents the frequency, which has both a magnitude and a phase (simply put, $s$ is a complex number).
NOTE: The region in $s$ for which the integral exists is referred to as Region of Convergence.
Apart from changing the domain, these transformations also help in simplifying complex differential equations!
Video 2 : Basic intuition of the Laplace transform in differential equations
Therefore we generally tabulate the standard Laplace transforms (and their inverses), which can be used to convert differential equations into simple algebraic problems.
NOTE: Here remember that \[\ell \{ c_{ 1 }f(t)+c_{ 2 }g(t)\} =c_{ 1 }\ell \{ f(t)\} +c_{ 2 }\ell \{ g(t)\}\] \[and\] \[\ell \{ f'(t)\} =s\ell \{ f(t)\} -f(0)\]
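As a minimal sketch (again assuming Python with SymPy), the derivative rule turns the initial value problem $y'+3y=0$, $y(0)=2$ into the algebraic equation $sY(s)-2+3Y(s)=0$, which we can solve and invert:

```python
from sympy import symbols, solve, inverse_laplace_transform

t = symbols('t', positive=True)
s, Y = symbols('s Y')

# y' + 3y = 0 with y(0) = 2 becomes (s*Y - 2) + 3*Y = 0 in the s-domain
Ys = solve(s*Y - 2 + 3*Y, Y)[0]
print(Ys)  # 2/(s + 3)

# Inverting recovers y(t) = 2*exp(-3*t)
# (possibly times Heaviside(t), which equals 1 for t > 0)
print(inverse_laplace_transform(Ys, s, t))
```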
There is also a way to compute the inverse Laplace transformation (as for the forward one), which is referred to as convolution, though we are not going to dive into it in this lecture note.
While dealing with the Laplace transform, we come across two functions, the unit step function and the Dirac delta function, which have a very wide range of uses.
Unit Step Function
This function has a unit jump in its graph. It is given by
\[\mu _{ c }(t)=\begin{cases} 0 & t<c \\ 1 & t\ge c \end{cases}\]
Video 3 : Intuition of a Unit Step function
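A small SymPy check of the standard result $\ell \{ \mu _{ c }(t)\} = \frac{e^{-cs}}{s}$ (for $c>0$) is sketched below; the symbols and the library are assumptions, not part of these notes:

```python
from sympy import symbols, Heaviside, laplace_transform

t, c = symbols('t c', positive=True)
s = symbols('s')

# SymPy's Heaviside plays the role of the unit step function mu_c(t)
print(laplace_transform(Heaviside(t - c), t, s, noconds=True))  # exp(-c*s)/s
```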
Dirac Delta Function
Consider an impulse applied to an object. We do get some differential equations. But what would be the Laplace transformation of the applied impulse, given that its value tends to $\infty$ at the instant the impulse is applied and remains $0$ for the rest?
Hence this function is used to depict a sudden jump in value at a particular instant, while being $0$ everywhere else. It is given by
\[\delta (x)=\begin{cases} \infty & x=0 \\ 0 & x\neq 0 \end{cases}\]
Video 4 : Standard Dirac Delta function
Here the area bounded under this curve is equal to $1$:
\[\int _{ -\infty }^{ \infty }{ \delta (x)dx } = 1\]
Now is it actually possible to construct such a function? Let's consider the following:
Video 5 : Formation of Dirac Delta Function
These functions yield standard results under the Laplace transformation, which makes them unique, and they are widely used in physics.
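For instance, the standard result $\ell \{ \delta (t-a)\} = e^{-as}$ (and, in the limit $a\to 0$, $\ell \{ \delta (t)\} = 1$) can be checked with a short SymPy sketch (assumed tooling, as before):

```python
from sympy import symbols, DiracDelta, laplace_transform

t, a = symbols('t a', positive=True)
s = symbols('s')

# The delayed impulse delta(t - a) transforms to a pure exponential in s
print(laplace_transform(DiracDelta(t - a), t, s, noconds=True))  # exp(-a*s)
```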
They are extensively used in solving complex differential equations as simple polynomial equations. Hence they play a major role in simplifying calculations in system modeling and in solving digital signal processing problems.
In pure physics, Laplace transformations are used to change a function from the time domain to the frequency domain.
Laplace transforms are used to transform signals (which can be made analogous to equations!) when sending them over any communication medium, such as FM radio or cellular phones.
They are also used in many fields of physics and digital electronics.
They make it possible to study the analytic part of nuclear physics and are also used in solving radioactive decay equations.
They are also used to solve the equations obtained in control systems, which regulate the behavior of other systems, such as a home heating controller.
Though these transformations received a formal method of solution in the work of Laplace, "Théorie analytique des probabilités" (1812), they were already used extensively by Leonhard Euler in the 18th century. He mainly worked on integral transforms, considering them as inverse Laplace transforms in solving linear ordinary differential equations, and Laplace gave him credit for this in his work.
Later, in the 19th century, it was Spitzer who attached the name of Laplace to the equation
\[y=\int _{ a }^{ b }{ { e }^{ sx } } f(s) ds\]
employed by Euler.
In the 19th century, the idea was extended to all its possible complex forms and was eventually extended to two variables. In no time, these transforms were also applied to the equations of radioactive decay, thereby simplifying many complex equations.
In 1920, Bernstein used the expression (in his work on theta functions)
\[f(s)=\int _{ 0 }^{ \infty }{ { e }^{ -su }\phi (u) du } \]
and called it a Laplace transformation.
The Laplace Transform: Theory and Applications by Joel L. Schiff
https://www.khanacademy.org/math/differential-equations/laplace-transform
Further Readings
Schaum's Outline of Laplace Transforms
Brian Davies. Integral Transforms and their Applications
L. Debnath, D. Bhatta, Integral Transforms and Their Applications
The following notes and their corresponding animations were created by the above-mentioned contributor and are freely available under a CC (BY-SA) licence. The source code for the said animations is available on GitHub and is licensed under the MIT licence.
The work on this website is licensed under a Creative Commons Attribution-Share Alike 4.0 International License (CC BY-SA).
The FOSSEE project is funded by the National Mission on Education through ICT, MoE (Ministry of Education), Government of India. | CommonCrawl |
Kyoto Journal of Mathematics
Kyoto J. Math.
Volume 59, Number 1 (2019), 195-235.
Brane involutions on irreducible holomorphic symplectic manifolds
Emilio Franco, Marcos Jardim, and Grégoire Menet
Article info and citation
In the context of irreducible holomorphic symplectic manifolds, we say that (anti)holomorphic (anti)symplectic involutions are brane involutions since their fixed point locus is a brane in the physicists' language, that is, a submanifold which is either a complex or Lagrangian submanifold with respect to each of the three Kähler structures of the associated hyper-Kähler structure. Starting from a brane involution on a K3 or Abelian surface, one can construct a natural brane involution on its moduli space of sheaves. We study these natural involutions and their relation with the Fourier–Mukai transform. Later, we recall the lattice-theoretical approach to mirror symmetry. We provide two ways of obtaining a brane involution on the mirror, and we study the behavior of the brane involutions under both mirror transformations, giving examples in the case of a K3 surface and $K3^{[2]}$-type manifolds.
Received: 5 July 2016
First available in Project Euclid: 8 January 2019
Permanent link to this document
https://projecteuclid.org/euclid.kjm/1546916422
Digital Object Identifier
doi:10.1215/21562261-2018-0009
Primary: 14J28: $K3$ surfaces and Enriques surfaces
Secondary: 14J33: Mirror symmetry [See also 11G42, 53D37] 14J50: Automorphisms of surfaces and higher-dimensional varieties
K3 surfaces involutions mirror symmetry
Franco, Emilio; Jardim, Marcos; Menet, Grégoire. Brane involutions on irreducible holomorphic symplectic manifolds. Kyoto J. Math. 59 (2019), no. 1, 195--235. doi:10.1215/21562261-2018-0009. https://projecteuclid.org/euclid.kjm/1546916422
BioData Mining
LAF: Logic Alignment Free and its application to bacterial genomes classification
Emanuel Weitschek1,3,
Fabio Cunial2 &
Giovanni Felici3
BioData Mining volume 8, Article number: 39 (2015) Cite this article
Alignment-free algorithms can be used to estimate the similarity of biological sequences and hence are often applied to the phylogenetic reconstruction of genomes. Most of these algorithms rely on comparing the frequency of all the distinct substrings of fixed length (k-mers) that occur in the analyzed sequences.
In this paper, we present Logic Alignment Free (LAF), a method that combines alignment-free techniques and rule-based classification algorithms in order to assign biological samples to their taxa. This method searches for a minimal subset of k-mers whose relative frequencies are used to build classification models as disjunctive-normal-form logic formulas (if-then rules).
We apply LAF successfully to the classification of bacterial genomes to their corresponding taxonomy. In particular, we succeed in obtaining reliable classification at different taxonomic levels by extracting a handful of rules, each one based on the frequency of just few k-mers.
State of the art methods to adjust the frequency of k-mers to the character distribution of the underlying genomes have negligible impact on classification performance, suggesting that the signal of each class is strong and that LAF is effective in identifying it.
The field of biological sequence analysis relies on mathematical, statistical, and computer science methods for discovering similarities among different organisms, understanding their features and their structure, detecting ancestry, relatedness, evolution, and common functions.
Several well-established sequence comparison algorithms are based on sequence alignment: they compute sequence similarity by aligning portions of sequences (e.g., subsequences) that have common nucleotide assignments. The alignments of two or more sequences are scored according to the number of common nucleotides. Such methods can be exact or heuristic. Among exact methods, Smith-Waterman [1] and Needleman-Wunsch [2] use dynamic programming techniques. The first performs local sequence alignment: it detects the common regions between two sequences by comparing segments of all possible lengths. The second is a global alignment algorithm, designed to align entire sequences. In order to reduce the computational burden of exact methods, several heuristic algorithms have been designed, the most renowned being FASTA [3] and BLAST [4]. For the comparison of more than two sequences, there are ad-hoc algorithms like Muscle [5], ClustalW [6], Motalign [7], and Mafft [8]. Alignment-based sequence analysis algorithms have a very high computational cost, especially when applied to a large set of sequences [9]. Other problems may also be encountered when performing alignment on genome sequences, related to the presence of non-coding subsequences, or simply to the computational burden associated with the alignment of whole genomes [10].
In order to address these issues, alignment-free sequence analysis methods can be considered. Such algorithms are mainly classified into two groups: methods based on sequence compression and methods that rely on the frequencies of the subsequences (oligomers) [9].
The first class of methods computes a model that succinctly describes the sequence, and assesses the similarity of the sequences by analyzing their compressed representations, e.g., Kolmogorov complexity [11] or Universal Sequence Maps [12].
In this work we focus on the second class of methods, alignment-free algorithms that rely on oligomer frequencies and map two strings X and Y onto corresponding multidimensional vectors X and Y; these vectors are indexed by a number of substrings in the given alphabet (a typical case is when all possible substrings of a predefined length k are used). X[W] and Y[W] – the element of X and Y associated with substring W – contain the number of occurrences of W in X and Y respectively. Often the number of occurrences is normalized and converted into a measure of statistical surprise using the length and distribution of characters in each string. Standard distance functions on vectors are then applied to X and Y, allowing the original strings to be compared by classical distance-based algorithms.
Alignment-free algorithms are currently the most scalable class of methods for reconstructing phylogenetic trees from thousands of large, distantly-related genomes and proteomes [13, 14].
The success of alignment-free methods rests on extensive information on the substring composition of genomes and on codon-usage biases, accumulated over approximately fifty years, with particular emphasis on prokaryotes: from the first studies of GC content [15], to the first detection of biases in the composition of pairs and quadruples of adjacent nucleotides [15–21], to the discovery of species-specific frequencies of 4-mers and 8-mers preserved in DNA fragments ranging from 40 kilobases to 400 bases [22–26], to more recent, unsupervised classifications [27–29] and more complex protein motifs [30].
Since the very beginning, most such studies have relied on some form of noise filtration, either assuming an independent and identically distributed source or a Markov source of low order (i.e., normalizing the raw frequencies using their expectation and/or variance according to the specified sources). Markov chains inferred from genomes have indeed been shown to reproduce large fractions of the frequency distribution of k-mers in the original genomes [23, 31, 32].
So far, classification has always relied on the frequency of all k-mers [27, 33], and minimality in phylogenetic signal has been investigated with respect to the length of the strings from which k-mers are extracted, rather than to the space of features used for classification. This trend continues in modern applications of k-mer composition to annotating and binning metagenomic reads [34]: increasingly more sophisticated heuristics have made it possible to reliably classify reads ranging from one kilobase to 75 bases, under a variety of species abundance scenarios [35–40]. However, fundamental questions on the distribution and concentration of phylogenetic signal in the space of all k-mers are still open and scarcely investigated. Among the few attempts in this direction, we mention the use of singular value decomposition (SVD [41, 42]) and of irredundant shared substrings [43] in phylogeny reconstruction, the use of few selected k-mers in barcoding genes [44], and early attempts at classifying protein families using the frequency of a small set of dipeptides [45].
In this work, we search for a minimal set of k-mers whose frequency is sufficient to classify entire genomes. Specifically, we focus on logic formulas (if-then rules) whose attributes W are k-mers, and whose values $f_X(W)$ are relative frequencies in a genome X, possibly corrected by expected counts. An example of such a formula could be:
$$\text{if}\,\,(f(\texttt{ACGT})>0.15) \wedge (f(\texttt{GGCT})<0.6)\,\,\text{then}\,\,X \in \mathcal{T} $$
where \(\mathcal {T}\) is a taxonomic unit (for example, E. coli) at a given taxonomic rank (for example, at the species level). Similar to recent DNA barcoding efforts, such formulas approximate a unique signature of set \(\mathcal {T}\), but they work on entire genomes rather than on few specific genes, and they do not require \(\mathcal {T}\) to be at the species level [44, 46]. Contrary to markers [47–49], the k-mers in such formulas need not to be genes, they need not to be rare in the genomes they characterize, they need not to be absent from the genomes they do not characterize. Contrary to discriminating substrings (see e.g. [50] and references therein), formulas can use multiple substrings to classify, and they can link frequencies with conjunctions and disjunctions.
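Such a formula is inexpensive to evaluate: given the relative k-mer frequencies of a genome, classification reduces to a boolean test. The following Python sketch is purely illustrative (the function name and the frequency dictionary are ours, and the thresholds are those of the example formula above):

```python
def belongs_to_taxon(freq):
    """Evaluate the example formula: f(ACGT) > 0.15 AND f(GGCT) < 0.6."""
    return freq.get("ACGT", 0.0) > 0.15 and freq.get("GGCT", 0.0) < 0.6

print(belongs_to_taxon({"ACGT": 0.20, "GGCT": 0.10}))  # True
```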
In this paper, we experiment with four rule-based algorithms [51] that extract classification models in the form of logic formulas and we compare them with other state-of-the-art classifiers, such as Support Vector Machines [58, 69] and Nearest Neighbor [70]. Surprisingly, it turns out that we can reliably classify genomes at multiple taxonomic levels using a limited number of formulas, each involving few, short k-mers. Moreover, standard noise filtration methods have minimum impact on classification performance, suggesting that noise is automatically dampened by the formula-extraction algorithms.
In this section, we present the Logic Alignment Free (LAF) technique and software package. The aim of LAF is to classify biological sequences and assign them to their taxonomic unit with the aid of a supervised machine learning paradigm [51] (see subsection Supervised machine learning and rule-based classification algorithms for more details). LAF uses a feature vector representation of the biological sequences, and gives them as input to rule-based classification algorithms (for a detailed analysis of rule-based classification methods, see [52]).
In [53], LAF has already been successfully applied to the classification of selectively constrained DNA elements, which are not alignable and do not come from the same gene regions.
Here, conversely, we present the method in detail, provide the scripts and the software, and describe its application to bacterial genomes. In the following subsections, we illustrate the feature vector representation technique, the rule-based classification algorithms, and their integration in the LAF framework.
Representing the sequences as feature vectors with alignment-free methods
The most widespread alignment-free methods compute the frequencies of the substrings in the biological sequences, called k-mers (where k is the length of the substring). For each sequence, the substring frequencies are then represented in a vector, called frequency vector [12, 54–57]. Each element of this vector expresses the frequency of a given k-mer, computed by scanning a sliding window of length k over the sequence.
More formally [9], let S be a sequence of n characters over an alphabet $\Sigma$, e.g. $\Sigma=\{A,C,G,T\}$, and let $k\in[1\ldots n]$. If K is a generic substring of S of length k, K is called a k-mer. Let the set $V=\{K_1,K_2,\ldots,K_m\}$ be all possible k-mers over $\Sigma$, and define $m=|\Sigma|^k$ to be the size of set V. The k-mers are computed by counting the occurrences of the substrings in S with a sliding window of length k over S, starting at position 1 and ending at position $n-k+1$. A vector F contains the corresponding count for each k-mer, $F=(c_1,c_2,\ldots,c_m)$. The frequencies are then computed accordingly and stored in a vector $F'=(f_1,f_2,\ldots,f_m)$; for a k-mer $K_i$, the frequency is defined as $f_i=\frac{c_i}{n-k+1}$.
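The following Python sketch illustrates this computation (the function is our illustration; the actual LAF pipeline uses the Jellyfish software for counting, as described below):

```python
from itertools import product

def kmer_frequencies(seq, k):
    """Count all |Sigma|^k k-mers with a sliding window of length k and
    return the relative frequencies f_i = c_i / (n - k + 1)."""
    counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
    windows = len(seq) - k + 1
    for i in range(windows):
        kmer = seq[i:i + k]
        if kmer in counts:  # windows containing ambiguous symbols are skipped
            counts[kmer] += 1
    return {kmer: c / windows for kmer, c in counts.items()}

f = kmer_frequencies("ACGTACGTTT", 3)
print(f["ACG"])  # 2 occurrences / 8 windows = 0.25
```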
These numerical representations of the sequences allow the use of statistical and mathematical techniques; indeed, the most common approach for sequence comparison with alignment-free vector representations is the use of distance measures, such as the Euclidean distance and the d2 distance [9]. While the authors of [56] use the feature vector representation in combination with supervised machine learning methods, specifically Support Vector Machines [69], for biological and text sequences, here we propose to analyze the frequency vectors with rule-based supervised machine learning algorithms. The effectiveness of this technique is investigated and tested on bacterial sequences.
Supervised machine learning and rule-based classification algorithms
The aim of this step is to classify the biological sequences into their taxonomic unit. Once the sequences are represented in a vector space, it is possible to analyze them by adopting a supervised machine learning approach, sketched in the following.
Given a set B of biological sequences, each assigned to a taxon (training set), a classifier is trained with these sequences in order to compute a classification model that predicts the taxon of each sequence from the values of its vector space representation. An additional set of sequences with known taxa is used to evaluate whether the model computed on the training set is able to predict the taxa correctly (the latter is called the test set). For assessing the performance of the classifier we adopt the accuracy measure (A), also called the correct rate, $A=\frac{c}{t}$, where c is the number of correctly classified sequences in the test set and t is the total number of sequences in the test set.
We focus on a particular type of classification methods - rule-based classifiers - which express the classification model in propositional logic form (e.g., if-then rules). Rule-based classifiers have the main advantage of being able to control their dimension (in this case, the number of k-mers used); they are easily interpretable, and can straightforwardly be integrated with other contextual knowledge. Several rule-based classification methods are proposed in the literature; in LAF we adopt the following ones: Data Mining Big (DMB) [59, 60], RIDOR [61], PART [62], and RIPPER [63]. All these methods use distinct rule extraction approaches but, as we will see later, perform very well on the analyzed data sets of bacterial sequences. We report a brief description of these methods in the following.
Data Mining Big (DMB) [60, 64, 65] is a rule-based classification software designed for biomedical data. It adopts optimization models that are formulated and solved in order to deal with the different steps of the data mining process. Five main steps are performed by DMB:
discretization: conversion of numeric attributes into nominal (discrete);
discrete cluster analysis: samples that are similar in the discretized space are clustered and dimension-reduced accordingly;
feature selection: the most relevant attributes for classification purpose are selected;
rule extraction: small and effective rules are extracted from training data and verified on test data;
classification: the extracted rules are used to classify new samples.
RIDOR [61] performs rule extraction directly from the training data set. The first step is the computation of a default rule for the most frequent class (e.g., "all sequences are E. coli"). Then, it computes exception rules that represent the other classes (e.g., "except if freq(ACGT)<0.45 then the sequences are S. aureus").
PART [62] performs rule extraction with an indirect method. It uses the C4.5 decision-tree-based classification algorithm [66], which computes a pruned decision tree for a given number of iterations. The best performing tree in terms of classification performance is chosen by PART and converted to rules for every species.
RIPPER [63] is a direct rule extraction method based on a pruning procedure, whose aim is to minimize the error on the training set; it performs the following steps: i) growth of the rules; ii) pruning of the rules; iii) optimization of the model; iv) selection of the model. In the first step, by means of a greedy procedure, RIPPER extracts many classification rules. Then, the rules are simplified and optimized in steps two and three, respectively. Finally, the best model (i.e., set of rules) is selected.
Logic Alignment Free (LAF) method
Rule-based classifiers have been successfully used in the analysis of aligned sequences, e.g., in [59] and [60], where the classification of biological sequences into their species is performed by considering only sequences from the same gene region. In this case the rule-extraction procedure identifies exact gene regions and nucleotide assignments that are specific to a species; an example of such a rule could be "if pos354 = T of gene 16S then the sequence belongs to E. coli".
Here we test a method for classifying biological sequences without the strict requirement of overlapping gene regions and without calculating an alignment, referred to as Logic Alignment Free (LAF). It is based on the frequency vector representation of the sequences. The method makes it possible to classify non-coding DNA that is not alignable [53], as well as whole genomes, whose alignments are computationally very demanding. LAF adopts a supervised machine learning procedure, where a labeled training set of whole genomes is considered (labels in this case are associated with the taxon). LAF then operates with the following steps for every genome g of the input data set (steps 1 and 2 are sketched in code after the list):
The genome g is reverse complemented, and the k-mers with k∈[3…6] are counted and stored in a frequency vector F′;
A matrix that contains all frequency vectors is created; the rows of the matrix are associated to the k-mers and the columns to the sequences (an example is given in Table 1);
Table 1 Example of frequencies vectors matrix extracted by LAF and provided as input to rule-based classifiers
The frequencies are discretized with the MDL procedure [67] before applying RIDOR, PART and RIPPER, while DMB provides its own built-in discretization method;
A set of four rule-based classifiers (DMB, RIDOR, PART, and RIPPER) takes the matrix as input and extracts the classification models and specimen-to-taxonomic-unit assignments;
The above is repeated for different combinations of training / test sets.
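As anticipated above, steps 1 and 2 can be sketched as follows. This is a toy transcription under simple assumptions (plain ACGT strings, both strands counted via the reverse complement); the actual LAF pipeline delegates k-mer counting to the Jellyfish software.

```python
from itertools import product

COMP = str.maketrans("ACGT", "TGCA")

def kmer_frequencies(genome, ks=range(3, 7)):
    """Step 1: count k-mers for k in [3..6] on both strands -> frequency vector F'."""
    seq = genome.upper()
    both = seq + "#" + seq.translate(COMP)[::-1]  # '#' blocks k-mers spanning the join
    freqs = {}
    for k in ks:
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        total = 0
        for i in range(len(both) - k + 1):
            w = both[i:i + k]
            if w in counts:       # skips windows containing '#' or ambiguous bases
                counts[w] += 1
                total += 1
        for w, c in counts.items():
            freqs[w] = c / total if total else 0.0
    return freqs

def frequency_matrix(genomes):
    """Step 2: rows = k-mers, columns = sequences, as in Table 1."""
    vectors = [kmer_frequencies(g) for g in genomes]
    kmers = sorted(vectors[0])
    return kmers, [[v[w] for v in vectors] for w in kmers]
```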
For a compact overview of the method the reader may refer to the LAF flow chart drawn in Fig. 1. To compute k-mer counts, we adopt the Jellyfish software [68]. Data discretization is performed using MDL [67] or the DMB internal procedure. As rule-based classifiers implementations we employ the Weka [67] and the DMB packages. The LAF method is deployed in a software package available at dmb.iasi.cnr.it/laf.php.
Fig. 1 Flow chart of the LAF method
Data sets of bacterial genomes
In order to prove the validity of the LAF technique, we chose to test the method for the classification of biological sequences belonging to the bacteria domain. We downloaded 1964 bacterial genomes from the NCBI genomes database (www.ncbi.nlm.nih.gov/genome/browse/). For every downloaded sequence, we query the NCBI taxonomy service (scripts are available at dmb.iasi.cnr.it/laf.php) to retrieve the full lineage, i.e., species, genus, order, class, phylum. In order to perform an effective classification, we do not take into consideration under-represented species and therefore filter out species with fewer than nine specimens (a sketch of this filtering step follows). This step is necessary to perform a proper training of the classifiers. The final filtered data set is composed of 413 sequences with 25 species, 21 genera, 14 orders, 9 classes, and 6 phyla. Additionally, we also report the performances on the original data set (1964 bacterial genomes, 1157 species, 590 genera, 120 orders, 57 classes, and 36 phyla).
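The filtering step can be expressed, for instance, as a small pandas operation. This is a hypothetical sketch: the column names species and accession are illustrative assumptions, not the actual layout used by our scripts.

```python
import pandas as pd

def filter_underrepresented(metadata: pd.DataFrame, min_specimens: int = 9) -> pd.DataFrame:
    """Keep only species that have at least `min_specimens` sequenced specimens."""
    counts = metadata.groupby("species")["accession"].transform("count")
    return metadata[counts >= min_specimens]
```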
We apply LAF to the previously described filtered data set of bacterial genomes, setting k∈[3…6] and using the four already mentioned rule-based classification algorithms with a 10-fold cross validation sampling scheme. We also show the results on the original data set composed of 1964 sequences. Additionally, we compare the results of LAF with the Support Vector Machine (SVM) classifier [69] and with a Nearest Neighbor approach [70].
First, we test LAF on the filtered raw sequences without any preprocessing, obtaining very good classification performance. The accuracy of the classification algorithms for k=4 and multiple taxonomic levels is summarized in Table 2. We focus on k=4 here since it is the smallest value that achieves good classification performance: increasing k slightly improves classification performance, but also increases complexity and computational time. We justify the choice of k=4 with experimental evidence in Table 3, focusing on the order level since similar performance is obtained at the other levels. We can see that the classification accuracy increases only slightly with the value of k, but complexity and computational time increase significantly. We also provide an example in Fig. 2 that shows the accuracy and computational time of RIPPER with respect to increasing values of k. The k-mer extraction is linear in the size of the input, but it is worth noting that for greater values of k the required IO bandwidth and the size of the data matrices increase exponentially [68], slowing down the k-mer extraction and classification processes. Additionally, the value of k=4 is consistent with a number of previous studies [71–73].
Fig. 2 Accuracy and computational times of RIPPER with respect to increasing values of k on the original data set
Table 2 Percent accuracy of the rule-based classifiers for each taxonomic unit (10-fold cross validation) on the filtered data set
Table 3 Accuracy (ACC) [%] and computational times (T) [sec] on the order level with different values of K
In Table 2, we report the average accuracy of all classification algorithms on the filtered data set. We note that the best results (98 % accuracy) are obtained at the phylum level – the highest in the taxonomy. Accuracy remains greater than 96 % at the lower levels as well. According to the average over all taxonomic levels, RIDOR exhibits the best performance.
Moreover, we compare LAF with the Support Vector Machine (SVM) classifier. We adopt the Weka implementation of SVM (called SMO) with a linear kernel and a soft margin. We obtain an accuracy of 99 % on the filtered data sets with a 10-fold cross validation sampling scheme, which slightly outperforms LAF. However, SVM outputs just a single classification model that cannot be easily interpreted by human experts.
Finally, we also evaluate the performance of the Nearest Neighbour (NN) classifier by using the Weka implementation of NN (called IBk), setting the number of neighbours to 1 and the NN search algorithm to linear, and adopting the Euclidean distance. Also in this case we obtain an accuracy of 99 % on all filtered data sets with a 10-fold cross validation sampling scheme, but no human-readable classification model.
Conversely to NN and SVM, the rule-based classification methods adopted by LAF provide sets of similar rules that can be analyzed, compared, and evaluated by the user. Here we consider as a sample the rules at the species level extracted by DMB, reported in Table 4. A representative example of this family of rules is the one for Helicobacter pylori: "if 5.56≤f(GTAC)<42.82 then the sample is Helicobacter pylori". Here f(K) is the frequency of substring K (for readability, the frequency values are multiplied by \(10^{5}\)).
Table 4 A sample of classification rules at the species level extracted by the DMB software. f(W) represents the relative frequency of substring W in a genome, multiplied by \(10^{5}\) for readability
We observe that the same 4-mer is able to distinguish 3 and 2 bacterial species with different frequency values, respectively, and that twenty 4-mers suffice to separate all 25 species. The classification rules are also very concise, since most of them are composed only of a conjunction of conditions on two 4-mers (in logic jargon, such rules are conjunctive clauses composed of two literals). In general, the rules computed for distinct species do not seem to use disjoint, species-specific sets of k-mers, suggesting that discrimination critically depends on the frequency of a k-mer rather than on its simple presence or absence in a species. Additional considerations derive from the granularity of the adopted discretization. The method allows the user to specify up-front the number of intervals used to discretize the frequency values of each k-mer, and then searches for an optimal discretization under this condition. From the experimental results we conclude that the number of intervals in which frequencies are discretized has minimal effects on classification quality, provided that at least 3 intervals are used (results not reported).
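To make the use of such rules concrete, the sketch below applies a tiny DMB-style rule list to a frequency vector. The Helicobacter pylori interval is quoted from Table 4; the second rule and the single-condition form are simplifications for illustration (most actual rules conjoin conditions on two 4-mers).

```python
def classify_by_rules(freqs, rules, scale=1e5):
    """freqs maps a k-mer to its relative frequency; rule thresholds refer to f(W) * 10^5."""
    for species, kmer, lo, hi in rules:
        if lo <= freqs.get(kmer, 0.0) * scale < hi:
            return species
    return "unclassified"

RULES = [
    ("Helicobacter pylori", "GTAC", 5.56, 42.82),  # interval from Table 4
    ("Escherichia coli", "GTAC", 42.82, 120.0),    # hypothetical, for illustration only
]

print(classify_by_rules({"GTAC": 20.0 / 1e5}, RULES))  # -> Helicobacter pylori
```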
Moreover, we show the results of all rule-based algorithms on the original data set and compare them with SVM and NN in Table 5. It is worth noting that the methods are not able to classify the bacterial genomes at the species level, because of under-representation (i.e., there are many species with just one or two sequences). At higher taxonomic levels (class and phylum) we obtain more reliable results. We highlight that SVM and NN perform best, but they do not provide a human-readable classification model as rule-based classifiers do, which makes it possible to identify the taxon-specific k-mers.
Table 5 Percent accuracy of the classifiers for each taxonomic unit (10-fold cross validation) on the original data set
In order to test their effect on the classification performance, we applied to the filtered data set different types of preprocessing suggested in previous works [74–77] on the phylogenetic reconstruction of genomes with alignment-free algorithms (the second type is sketched in code after this list):
The first type consists of excluding all high-frequency and low-complexity substrings [74] of a genome from its k-mer counts, using the DUST software implementation provided by NCBI [78];
A second type of preprocessing consists of replacing the frequency \(f_{T}(W)\) of a k-mer W in a string T with a measure of the statistical significance of the event that W has \(f_{T}(W)\) occurrences in T. Specifically, we assigned to a k-mer W the score \(z_{T}(W) = \left(p_{T}(W)-\tilde{p}_{T}(W)\right)/\tilde{p}_{T}(W)\), where \(p_{T}(W)=f_{T}(W)/(|T|-k+1)\), and where \(\tilde{p}_{T}(W) = p_{T}(W[1..k-1]) \cdot p_{T}(W[2..k]) / p_{T}(W[2..k-1])\) is the expected value of \(p_{T}(W)\) under the assumption that T was generated by a Markov process of order k−2 or smaller. This score has been shown to be critical in building accurate phylogenies of distantly-related prokaryotes [75];
We experimented with the estimator \(\tilde{p}_{T}(W) = \left(f_{T}(W[1]) \cdot f_{T}(W[2..k]) + f_{T}(W[1..k-1]) \cdot f_{T}(W[k]) \right)/2\), derived under the assumption that W[2..k−1], W[1] and W[k] occur independently in T [76];
We also adopted an even simpler estimator, based on single-nucleotide frequencies (see [9, 77] and references therein for alternative ways to compute \(\tilde {p}_{T}(W)\)).
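A direct transcription of the second preprocessing type (the Markov-based score \(z_{T}(W)\)) is sketched below; it is a naive reference implementation for clarity, not the optimized code of the LAF package.

```python
def occurrences(T, W):
    """Number of (possibly overlapping) occurrences of W in T."""
    return sum(1 for i in range(len(T) - len(W) + 1) if T[i:i + len(W)] == W)

def p(T, W):
    """Empirical probability p_T(W) = f_T(W) / (|T| - |W| + 1)."""
    return occurrences(T, W) / (len(T) - len(W) + 1)

def z_score(T, W):
    """z_T(W) = (p_T(W) - p~_T(W)) / p~_T(W), expectation from a Markov process of order k-2."""
    expected = p(T, W[:-1]) * p(T, W[1:]) / p(T, W[1:-1])  # assumes W[2..k-1] occurs in T
    return (p(T, W) - expected) / expected
```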
In our experiments, none of these preprocessing methods yielded a visible improvement in classification quality, suggesting that noise is automatically dampened by the formula-extraction algorithms run on raw frequencies. Nonetheless, we include in our LAF package an implementation of all such filters, since they could be useful on other data sets.
Conclusions and future work
The LAF method combines k-mer composition vectors and rule-based classification algorithms to classify biological sequences. Such sequences do not need to be aligned or to belong to the same gene. The method was applied to bacterial whole genomes, where it achieved accurate classification results and identified common subsequences (k-mers) in each taxon (class) of the data set.
We compared our method with other state-of-the-art classification methods and provided experimental results that show the promising performance of LAF, in particular in classification model extraction (i.e., the identification of specific k-mers for each taxon).
Several directions for future research stem from the results obtained in this paper: further reducing the size of the classification models; analyzing more deeply the k-mers selected by our models; and measuring how classification performance degrades when moving from whole genomes to short fragments.
Another possible way to further reduce the size of our models consists of building hierarchical classification rules by extracting logic formulas that best discriminate between elements in a taxonomic unit \(\mathcal {T}\) and elements in \(\text {\texttt {parent}}(\mathcal {T}) \backslash \mathcal {T}\), where \(\text {\texttt {parent}}(\mathcal {T})\) is the parent of \(\mathcal {T}\) in the taxonomic tree. Such a result would look very similar to a decision tree, and the corresponding k-mers could be related to the notion of crowns (see [79]).
Analyzing the actual k-mers selected by our models is another obvious open direction, for example in terms of syntactic similarity and positional correlations between the k-mers that appear in the same formula, or in terms of enrichment of such k-mers in regulatory regions or in gene families devoted to specific cellular processes.
It is also of interest to understand how classification performance degrades when moving from whole genomes to short fragments, for example by determining how small a fragment can be classified correctly using the formulas learned from entire genomes, or using new formulas learned from fragments. Abundance estimation in metagenomic samples is also a natural application for the strong biases in the relative frequency of k-mers that we report here: given a set of observed k-mer frequencies in a sample, and a set of logic rules in sequenced genomes, the problem would then amount to computing the most probable abundance of known species in the sample.
Pearson WR. Searching protein sequence libraries: comparison of the sensitivity and selectivity of the smith-waterman and fasta algorithms. Genomics. 1991; 11(3):635–50.
Needleman SB, Wunsch CD. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J Mol Biol. 1970; 48(3):443–53.
Pearson WR. Rapid and sensitive sequence comparison with fastp and fasta. Methods Enzymol. 1990; 183:63–98.
Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, et al. Gapped blast and psi-blast: a new generation of protein database search programs. Nucleic Acids Res. 1997; 25(17):3389–402.
Edgar RC. Muscle: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res. 2004; 32(5):1792–7.
Thompson JD, Gibson T, Higgins DG. Multiple sequence alignment using clustalw and clustalx. Curr Protocol Bioinformatics. 2002; 00:2.3:2.3.1–2.3.22.
Mokaddem A, Elloumi M. Motalign: A multiple sequence alignment algorithm based on a new distance and a new score function. In: DEXA Workshops. Los Alamitos, CA, USA: IEEE Computer Society: 2013. p. 81–4.
Katoh K, Misawa K, Kuma K-i, Miyata T. Mafft: a novel method for rapid multiple sequence alignment based on fast fourier transform. Nucleic Acids Res. 2002; 30(14):3059–66.
Vinga S, Almeida J. Alignment-free sequence comparison-a review. Bioinformatics. 2003; 19(4):513–23.
Delcher AL, Kasif S, Fleischmann RD, Peterson J, White O, Salzberg SL. Alignment of whole genomes. Nucleic Acids Res. 1999; 27(11):2369–76.
Li M, Vitányi PMB. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed. New York, USA: Springer; 2008.
Almeida JS, Vinga S. Universal sequence map (usm) of arbitrary discrete sequences. BMC Bioinformatics. 2002; 3:6.
Vinga S. Biological sequence analysis by vector-valued functions: revisiting alignment-free methodologies for DNA and protein classification In: Pham TD, Yan H, Crane DI, editors. Advanced Computational Methods for Biocomputing and Bioimaging. New York: Nova Science Publishers: 2007.
Vinga S, Almeida J. Alignment-free sequence comparison – a review. Bioinformatics. 2003; 19(4):513–23.
Bentley SD, Parkhill J. Comparative genomic structure of prokaryotes. Annu Rev Genet. 2004; 38:771–91.
Josse J, Kaiser A, Kornberg A. Enzymatic synthesis of deoxyribonucleic acid. J Biol Chem. 1961; 236:864–75.
Trautner T, Swartz M, Kornberg A. Enzymatic synthesis of deoxyribonucleic acid, x. influence of bromouracil substitutions on replication. Proc Natl Acad Sci U S A. 1962; 48(3):449.
Russell G, Walker P, Elton R, Subak-Sharpe J. Doublet frequency analysis of fractionated vertebrate nuclear DNA. J Mol Biol. 1976; 108(1):1–20.
Russell G, Subak-Sharpe J. Similarity of the general designs of protochordates and invertebrates. Nature. 1977; 266(5602):533–6.
Karlin S, Burge C. Dinucleotide relative abundance extremes: a genomic signature. Trends Genet. 1995; 11(7):283–90.
Karlin S, Mrázek J. Compositional differences within and between eukaryotic genomes. Proc Natl Acad Sci. 1997; 94(19):10227–32.
Teeling H, Meyerdierks A, Bauer M, Amann R, Glöckner FO. Application of tetranucleotide frequencies for the assignment of genomic fragments. Environ Microbiol. 2004; 6(9):938–47.
Zhou F, Olman V, Xu Y. Barcodes for genomes and applications. BMC Bioinformatics. 2008; 9(1):546.
Deschavanne PJ, Giron A, Vilain J, Fagot G, Fertil B. Genomic signature: characterization and classification of species assessed by chaos game representation of sequences. Mol Biol Evol. 1999; 16(10):1391–9.
Sandberg R, Winberg G, Bränden CI, Kaske A, Ernberg I, Cöster J. Capturing whole-genome characteristics in short sequences using a naive bayesian classifier. Genome Res. 2001; 11(8):1404–9.
Pride DT, Meinersmann RJ, Wassenaar TM, Blaser MJ. Evolutionary implications of microbial genome tetranucleotide frequency biases. Genome Res. 2003; 13(2):145–58.
Gatherer D. Genome signatures, self-organizing maps and higher order phylogenies: A parametric analysis. Evol Bioinformatics Online. 2007; 3:211.
Takahashi M, Kryukov K, Saitou N. Estimation of bacterial species phylogeny through oligonucleotide frequency distances. Genomics. 2009; 93(6):525–33.
Teeling H, Waldmann J, Lombardot T, Bauer M, Glockner FO. Tetra: a web-service and a stand-alone program for the analysis and comparison of tetranucleotide usage patterns in dna sequences. BMC Bioinformatics. 2004; 5(1):163.
Rigoutsos I, Floratos A, Ouzounis C, Gao Y, Parida L. Dictionary building via unsupervised hierarchical motif discovery in the sequence space of natural proteins. Proteins. 1999; 37(2):264–77.
Chor B, Horn D, Goldman N, Levy Y, Massingham T. Genomic DNA k-mer spectra: models and modalities. Genome Biol. 2009; 10(10):108.
Oğul H, Mumcuoğlu EÜ. Svm-based detection of distant protein structural relationships using pairwise probabilistic suffix trees. Comput Biol Chem. 2006; 30(4):292–9.
Karlin S, Mrazek J, Campbell AM. Compositional biases of bacterial genomes and evolutionary implications. J Bacteriol. 1997; 179(12):3899–913.
Foerstner KU, von Mering C, Hooper SD, Bork P. Environments shape the nucleotide composition of genomes. EMBO Rep. 2005; 6(12):1208–13.
McHardy AC, Martín HG, Tsirigos A, Hugenholtz P, Rigoutsos I. Accurate phylogenetic classification of variable-length DNA fragments. Nat Methods. 2007; 4(1):63–72.
Chatterji S, Yamazaki I, Bai Z, Eisen JA. Compostbin: A dna composition-based algorithm for binning environmental shotgun reads. In: Research in Computational Molecular Biology. Berlin: Springer: 2008. p. 17–28.
Leung HC, Yiu S, Yang B, Peng Y, Wang Y, Liu Z, et al. A robust and accurate binning algorithm for metagenomic sequences with arbitrary species abundance ratio. Bioinformatics. 2011; 27(11):1489–95.
Wang Y, Leung HC, Yiu S, Chin FY. Metacluster 4.0: a novel binning algorithm for ngs reads and huge number of species. J Comput Biol. 2012; 19(2):241–9.
Tanaseichuk O, Borneman J, Jiang T. Separating metagenomic short reads into genomes via clustering. In: Algorithms in Bioinformatics. New York, NY, USA: Springer: 2011. p. 298–313.
Song K, Ren J, Zhai Z, Liu X, Deng M, Sun F. Alignment-free sequence comparison based on next generation sequencing reads. In: Research in Computational Molecular Biology. Berlin: Springer: 2012. p. 272–85.
Stuart GW, Moffett K, Baker S. Integrated gene and species phylogenies from unaligned whole genome protein sequences. Bioinformatics. 2002; 18(1):100–8.
Stuart GW, Moffett K, Leader JJ. A comprehensive vertebrate phylogeny using vector representations of protein sequences from whole genomes. Mol Biol Evol. 2002; 19(4):554–62.
Comin M, Verzotto D. Whole-genome phylogeny by virtue of unic subwords. In: Database and Expert Systems Applications (DEXA), 2012 23rd International Workshop On. Los Alamitos, CA, USA: IEEE Computer Society: 2012. p. 190–4.
Kuksa P, Pavlovic V. Efficient alignment-free DNA barcode analytics. BMC Bioinformatics. 2009; 10(Suppl. 14):9.
Solovyev VV, Makarova KS. A novel method of protein sequence classification based on oligopeptide frequency analysis and its application to search for functional sites and to domain localization. Comput Appl Biosci: CABIOS. 1993; 9(1):17–24.
Ratnasingham S, Hebert PDN. BOLD: the barcode of life data system. Mol Ecol Notes. 2007; 7:355–64.
Liu B, Gibbons T, Ghodsi M, Treangen T, Pop M. Accurate and fast estimation of taxonomic profiles from metagenomic shotgun sequences. BMC Genomics. 2011; 12(Suppl 2):4.
Segata N, Waldron L, Ballarini A, Narasimhan V, Jousson O, Huttenhower C. Metagenomic microbial community profiling using unique clade-specific marker genes. Nat Methods. 2012; 9(8):811–4.
Edwards RA, Olson R, Disz T, Pusch GD, Vonstein V, Stevens R, et al. Real time metagenomics: Using k-mers to annotate metagenomes. Bioinformatics. 2012; 28(24):3316–17.
Seth S, Välimäki N, Kaski S, Honkela A. Exploration and retrieval of whole-metagenome sequencing samples. Bioinformatics. 2014; 30(17):2471–9.
Weitschek E, Fiscon G, Felici G. Supervised dna barcodes species classification: analysis, comparisons and results. BioData Mining. 2014; 7:4.
Lehr T, Yuan J, Zeumer D, Jayadev S, Ritchie M. Rule based classifier for the analysis of gene-gene and gene-environment interactions in genetic association studies. BioData Mining. 2011; 4(1):4. doi:http://dx.doi.org/10.1186/1756-0381-4-4.
Polychronopoulos D, Weitschek E, Dimitrieva S, Bucher P, Felici G, Almirantis Y. Classification of selectively constrained dna elements using feature vectors and rule-based classifiers. Genomics. 2014; 104(2):79–86.
Kudenko D, Hirsh H. Feature generation for sequence categorization. In: AAAI/IAAI. Cambridge, USA: The MIT Press: 1998. p. 733–8.
Ben-Hur A, Brutlag D. Remote homology detection: a motif based approach. Bioinformatics. 2003; 19(suppl 1):26–33.
Xing Z, Pei J, Keogh E. A brief survey on sequence classification. ACM SIGKDD Explorations Newslett. 2010; 12(1):40–8.
Kuksa P, Pavlovic V. Efficient alignment-free dna barcode analytics. BMC Bioinformatics. 2009; 10 Suppl 14:9. doi:http://dx.doi.org/10.1186/1471-2105-10-S14-S9.
Vapnik VN, Vapnik V. Statistical Learning Theory. New York, NY, USA: Wiley; 1998.
Bertolazzi P, Felici G, Weitschek E. Learning to classify species with barcodes. BMC Bioinformatics. 2009; 10(S-14):7.
Weitschek E, Lo Presti A, Drovandi G, Felici G, Ciccozzi M, Ciotti M, et al. Human polyomaviruses identification by logic mining techniques. BMC Virol J. 2012; 58(9):1–6.
Gaines BR, Compton P. Induction of ripple-down rules applied to modeling large databases. J Intell Inf Syst. 1995; 5(3):211–28.
Frank E, Witten IH. Generating accurate rule sets without global optimization. In: Proc. of the 15th Int. Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann: 1998.
Cohen WW. Fast effective rule induction. In: Proceedings of the Twelfth International Conference on Machine Learning. San Francisco, CA, USA: Morgan Kaufmann: 1995. p. 115–23.
Felici G, Truemper K. A minsat approach for learning in logic domains. INFORMS J Comput. 2002; 13(3):1–17.
Bertolazzi P, Felici G, Weitschek E. Learning to classify species with barcodes. BMC Bioinformatics. 2009; 10(S14):7.
Quinlan JR. Improved use of continuous attributes in C4.5. J Artif Intell Res. 1996; 4:77–90.
Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH. The weka data mining software: an update. SIGKDD Explor Newsl. 2009; 11(1):10–18. doi:http://dx.doi.org/10.1145/1656274.1656278.
Marcais G, Kingsford C. A fast, lock-free approach for efficient parallel counting of occurrences of k-mers. Bioinformatics. 2011; 27(6):764–70. doi:http://dx.doi.org/10.1093/bioinformatics/btr011.
Cristianini N, Shawe-Taylor J. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge, UK: Cambridge University Press; 2000.
Dasarathy BV. Nearest Neighbor NN Norms: NN Pattern Classification Techniques. Los Alamitos, CA, USA: IEEE Computer Society Press; 1991.
Teeling H, Meyerdierks A, Bauer M, Glockner FO. Application of tetranucleotide frequencies for the assignment of genomic fragments. Environ Microbiol. 2004; 6(9):938–47.
Pride DT, Meinersmann RJ, Wassenaar TM, Blaser MJ. Evolutionary implications of microbial genome tetranucleotide frequency biases. Genome Res. 2003; 13:145–58.
Teeling H, Waldmann J, Lombardot T, Bauer M, Glockner FO. Tetra: a web-service and a stand-alone program for the analysis and comparison of tetranucleotide usage patterns in dna sequences. BMC Bioinformatics. 2004; 5:163.
Chan RH, Chan TH, Yeung HM, Wang RW. Composition vector method based on maximum entropy principle for sequence comparison. Comput Biol Bioinform IEEE/ACM Trans. 2012; 9(1):79–87.
Qi J, Wang B, Hao BI. Whole proteome prokaryote phylogeny without sequence alignment: a k-string composition approach. J Mol Evol. 2004; 58(1):1–11.
Yu ZG, Zhou LQ, Anh VV, Chu KH, Long SC, Deng JQ. Phylogeny of prokaryotes and chloroplasts revealed by a simple composition approach on all protein sequences from complete genomes without sequence alignment. J Mol Evol. 2005; 60(4):538–45.
Song K, Ren J, Reinert G, Deng M, Waterman MS, Sun F. New developments of alignment-free sequence comparison: measures, statistics and next-generation sequencing. Brief Bioinform. 2014; 15(3):343–53.
Blast Package Version 2.2.25-7. http://packages.ubuntu.com/precise/ncbi-blast+. Accessed Dec 2015.
Huang K, Brady A, Mahurkar A, White O, Gevers D, Huttenhower C, et al.Metaref: a pan-genomic database for comparative and community microbial genomics. Nucleic Acids Res. 2014; 42:617–24.
The authors are grateful to the organizing committee of the 5th Biological Discovery Workshop (Biokdd 2014) for inviting them to write and publish the manuscript in BioData Mining. The authors would like to thank Giulia Fiscon for her valuable advice and for revising the paper, and Prof. Paola Bertolazzi for providing a stimulating research environment and fruitful scientific discussions. This paper is dedicated to Prof. Alberto Apostolico. The authors have been supported by the Italian PRIN "GenData 2020" (2010RTFWBH), the FLAGSHIP "InterOmics" project (PB.P05), and by the Academy of Finland under grant 250345 (Center of Excellence in Cancer Genetics Research).
Department of Engineering, Uninettuno International University, Corso Vittorio Emanuele II, 39, Rome, 00186, Italy
Emanuel Weitschek
Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki, P.O. Box 68 (Gustaf Hällströmin katu 2b), Helsinki, FI-00014, Finland
Fabio Cunial
Institute of Systems Analysis and Computer Science "A. Ruberti", National Research Council, Via dei Taurini 19, Rome, 00185, Italy
Emanuel Weitschek & Giovanni Felici
Giovanni Felici
Correspondence to Emanuel Weitschek.
EW designed and implemented the method, planned and executed the experiments, and wrote the paper. FC inspired the research, contributed to the design of the method, suggested the statistical corrections, and wrote the paper. GF directed research, contributed to the design of the experiments, and wrote the paper. All authors read and approved the final manuscript.
Weitschek, E., Cunial, F. & Felici, G. LAF: Logic Alignment Free and its application to bacterial genomes classification. BioData Mining 8, 39 (2015). https://doi.org/10.1186/s13040-015-0073-1
Supervised classification
Alignment-free sequence comparison
Bacterial taxonomy
Tantalum capacitor
Tantalum capacitors in different styles: axial, radial and SMD-chip versions (size comparison with a match)
10 µF 30 VDC-rated tantalum capacitors, solid electrolyte epoxy-dipped style.
A tantalum electrolytic capacitor is an electrolytic capacitor, a passive component of electronic circuits. It consists of a pellet of tantalum metal as an anode, covered by an insulating oxide layer that forms the dielectric, surrounded by liquid or solid electrolyte as a cathode. Because of its very thin and relatively high permittivity dielectric layer, the tantalum capacitor distinguishes itself from other conventional and electrolytic capacitors in having high capacitance per volume (high volumetric efficiency) and lower weight.
Tantalum is a conflict mineral. Tantalum electrolytic capacitors are considerably more expensive than comparable aluminum electrolytic capacitors.
Tantalum capacitors are inherently polarized components. Reverse voltage can destroy the capacitor. Non-polar or bipolar tantalum capacitors are made by effectively connecting two polarized capacitors in series, with the anodes oriented in opposite directions.
Basic information
Basic principle
Basic principle of the anodic oxidation, in which, by applying a voltage with a current source, an oxide layer is formed on a metallic anode
Electrolytic capacitors use a chemical feature of some special metals, historically called valve metals, which can form an insulating oxide layer. Applying a positive voltage to the tantalum anode material in an electrolytic bath forms an oxide barrier layer with a thickness proportional to the applied voltage. This oxide layer serves as the dielectric in an electrolytic capacitor. The properties of this tantalum oxide layer, compared with those of the niobium oxide layer, are given in the following table:
Characteristics of the different oxide layers in tantalum and niobium electrolytic capacitors[1]

Anode material | Dielectric | Relative permittivity | Oxide structure | Breakdown voltage (V/µm) | Dielectric layer thickness (nm/V)
Tantalum | Tantalum pentoxide, Ta2O5 | 27 | Amorphous | 625 | 1.7
Niobium or niobium oxide | Niobium pentoxide, Nb2O5 | 41 | Amorphous | 400 | 2.5
After forming a dielectric oxide on the rough anode structures, a cathode is needed. An electrolyte acts as the cathode of electrolytic capacitors. There are many different electrolytes in use. Generally, electrolytes can be divided into two types: non-solid and solid. Non-solid electrolytes are a liquid medium whose conductivity is ionic. Solid electrolytes have electron conductivity, and thus solid electrolytic capacitors are more sensitive to voltage spikes or current surges[citation needed]. The oxide layer may be destroyed if the polarity of the applied voltage is reversed.
A dielectric material is placed between two conducting plates (electrodes), each of area A and with a separation of d.
Every electrolytic capacitor in principle forms a plate capacitor, whose capacitance is greater the larger the electrode area A and the permittivity ε are, and the thinner the dielectric thickness d is.
\(C = \varepsilon \cdot \frac{A}{d}\)
The dielectric thickness of electrolytic capacitors is very thin, in the range of nanometers per volt. Despite this, the dielectric strengths of these oxide layers are quite high. Thus, tantalum capacitors can achieve a high volumetric capacitance compared to other capacitor types.
All etched or sintered anodes have a much larger total surface area compared to a smooth surface of the same overall dimensions. This surface area increase boosts the capacitance value by a factor of up to 200 (depending on the rated voltage) for solid tantalum electrolytic capacitors.[2]
The volume of an electrolytic capacitor is defined by the product of capacitance and voltage, the so-called CV-volume. However, in comparing the permittivities of different oxide materials, it is seen that tantalum pentoxide has an approximately 3 times higher permittivity than aluminum oxide. Tantalum electrolytic capacitors of a given CV value can therefore be smaller than aluminum electrolytic capacitors.
Basic construction of solid tantalum electrolytic capacitors
Construction of a solid tantalum chip capacitor with manganese dioxide electrolyte
The capacitor cell of a tantalum electrolytic capacitor consists of sintered tantalum powder
Schematic representation of the structure of a sintered tantalum electrolytic capacitor with solid electrolyte and the cathode contacting layers
Construction of a typical SMD tantalum electrolytic chip capacitor with solid electrolyte
A typical tantalum capacitor is a chip capacitor and consists of tantalum powder pressed and sintered into a pellet as the anode of the capacitor, with the oxide layer of tantalum pentoxide as a dielectric, and a solid manganese dioxide electrolyte as the cathode.
Materials, production and styles
Anode
Figure 1: Tantalum powder CV/g.
Tantalum capacitors are manufactured from a powder of relatively pure elemental tantalum metal.[3][4][5] A common figure of merit for comparing volumetric efficiency of powders is expressed in capacitance (C, usually in μF) times volts (V) per gram (g). Since the mid-1980s, manufactured tantalum powders have exhibited around a ten-fold improvement in CV/g values (from approximately 20k to 200k).[2] The typical particle size is between 2 and 10 μm. Figure 1 shows powders of successively finer grain, resulting in greater surface area per unit volume. Note the very great difference in particle size between the powders.
Figure 2: Sintered anode.
The powder is compressed around a tantalum wire (known as the riser wire) to form a "pellet".[6] The riser wire ultimately becomes the anode connection to the capacitor. This pellet/wire combination is subsequently vacuum sintered at high temperature (typically 1200 to 1800 °C) which produces a mechanically strong pellet and drives off many impurities within the powder. During sintering, the powder takes on a sponge-like structure, with all the particles interconnected into a monolithic spatial lattice. This structure is of predictable mechanical strength and density, but is also highly porous, producing a large internal surface area (see Figure 2).
Larger surface areas produce higher capacitance; thus high CV/g powders, which have lower average particle sizes, are used for low voltage, high capacitance parts. By choosing the correct powder type and sintering temperature, a specific capacitance or voltage rating can be achieved. For example, a 220 μF 6 V capacitor will have a surface area close to 346 cm2, or 80% of the size of a sheet of paper (US Letter, 8.5×11 inch paper has area ~413 cm2), although the total volume of the pellet is only about 0.0016 cm3.
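The quoted surface area can be cross-checked with the plate-capacitor formula from the basic-principle section. The sketch below is a back-of-envelope estimate only; the 4× forming-voltage safety margin (discussed under Dielectric) and the 1.7 nm/V growth rate from the oxide table are assumed inputs.

```python
EPS0 = 8.854e-12        # vacuum permittivity, F/m
EPS_R = 27              # relative permittivity of Ta2O5 (see oxide table)
GROWTH = 1.7e-9         # dielectric growth rate, m/V (see oxide table)

C = 220e-6              # capacitance, F
V_FORM = 4 * 6          # assumed forming voltage: safety margin 4 on the 6 V rating

d = GROWTH * V_FORM                 # oxide thickness, ~41 nm
A = C * d / (EPS_R * EPS0)          # required plate area from C = eps * A / d
print(f"A = {A * 1e4:.0f} cm^2")    # ~375 cm^2, same order as the quoted 346 cm^2
```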
Dielectric
Figure 3: Dielectric layer.
The dielectric is then formed over all the tantalum particle surfaces by the electrochemical process of anodization. To achieve this, the "pellet" is submerged into a very weak solution of acid and DC voltage is applied. The total dielectric thickness is determined by the final voltage applied during the forming process. Initially the power supply is kept in a constant current mode until the correct voltage (i.e. dielectric thickness) has been reached; it then holds this voltage and the current decays to close to zero to provide a uniform thickness throughout the device and production lot. The chemical equations describing the dielectric formation process at the anode are as follows:[5]
2 Ta → 2 Ta5+ + 10 e−
2 Ta5+ + 10 OH− → Ta2O5 + 5 H2O
The oxide forms on the surface of the tantalum, but it also grows into the material. For each unit thickness of oxide growth, one third grows out and two thirds grows in. Due to the limits of oxide growth, there is a limit on the maximum voltage rating of tantalum oxide for each of the presently available tantalum powders (see Figure 3).
The dielectric layer thickness generated by the forming voltage is directly proportional to the voltage proof of electrolytic capacitors.[7] Electrolytic capacitors are manufactured with a safety margin in oxide layer thickness, which is the ratio between voltage used for electrolytical creation of dielectric and rated voltage of the capacitor, to ensure reliable functionality.
The safety margin for solid tantalum capacitors with manganese dioxide electrolyte is typically between 2 and 4. That means that for a 25 V tantalum capacitor with a safety margin of 4, the dielectric can withstand 100 V, providing a more robust dielectric.[8] This very high safety factor is substantiated by the failure mechanism of solid tantalum capacitors, "field crystallization".[9][10][11][12][13] For tantalum capacitors with solid polymer electrolyte the safety margin is much lower, typically around 2.[12][14]
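Combining the growth rate from the oxide table with the safety margins above gives a quick feel for the numbers involved. A simple sketch under these stated assumptions, not design data:

```python
def oxide_design(rated_voltage, safety_margin, nm_per_volt=1.7):
    """Forming (proof) voltage and resulting oxide thickness for a tantalum anode."""
    proof_voltage = rated_voltage * safety_margin
    thickness_nm = proof_voltage * nm_per_volt
    return proof_voltage, thickness_nm

# 25 V MnO2 part with margin 4 vs. 25 V polymer part with margin 2:
print(oxide_design(25, 4))  # -> (100, 170.0): dielectric formed to 100 V, ~170 nm
print(oxide_design(25, 2))  # -> (50, 85.0)
```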
Cathode
Figure 4: Manganese dioxide layer
The next stage for solid tantalum capacitors is the application of the cathode plate (wet tantalum capacitors use a liquid electrolyte as a cathode in conjunction with their casing). This is achieved by pyrolysis of manganese nitrate into manganese dioxide. The "pellet" is dipped into an aqueous solution of nitrate and then baked in an oven at approximately 250 °C to produce the dioxide coat. The chemical equation is:[5]
Mn(NO3)2 → MnO2 + 2 NO2
This process is repeated several times through varying specific gravities of nitrate solution, to build up a thick coat over all internal and external surfaces of the "pellet", as shown in Figure 4.
In traditional construction, the "pellet" is successively dipped into graphite and then silver to provide a good connection from the manganese dioxide cathode plate to the external cathode termination (see Figure 5).
Figure 5: Solid tantalum cathode cross section.
Production flow
The picture below shows the production flow of tantalum electrolytic chip capacitors with sintered anode and solid manganese dioxide electrolyte.
Representation of the production flow of tantalum electrolytic capacitors with sintered anode and solid manganese dioxide electrolyte
Styles of tantalum capacitors
Tantalum electrolytic capacitors are made in three different styles:[5]
Tantalum chip capacitors: SMD style for surface mounting, 80% of all tantalum capacitors are SMDs
Tantalum "pearls", resin-dipped, single-ended style for PCB mounting
Axial-leaded tantalum capacitors, with solid and non-solid electrolyte, mostly used for military, medical and space applications.
Different styles of tantalum capacitors
Tantalum chip capacitors
Tantalum "pearls" for PCB mounting
Axial tantalum capacitors
Chip capacitors (case size)
More than 90% of all tantalum electrolytic capacitors are manufactured in SMD style as tantalum chip capacitors. These have contact surfaces on the end faces of the case and are manufactured in different sizes, typically following the EIA-535-BAAC standard. The different sizes can also be identified by case code letters. For some case sizes (A to E), which have been manufactured for many decades, the dimensions and case coding are still largely the same across all manufacturers. However, new developments in tantalum electrolytic capacitors, such as the multi-anode technique to reduce the ESR or the "face-down" technique to reduce the inductance, have led to a much wider range of chip sizes and case codes. These departures from EIA standards mean devices from different manufacturers are no longer always uniform.
An overview of the dimensions of conventional tantalum rectangular chip capacitors and their coding is shown in the following table:[15]
Dimensioning of a tantalum chip capacitor
Standard dimensions for surface-mount (SMD) tantalum chip capacitors
EIA Code (metric) | L ±0.2 (mm) | W ±0.2 (mm) | H max (mm) | EIA Code (inch) | Case code (AVX) | Case code (Kemet) | Case code (Vishay)
EIA 1608-08 | 1.6 | 0.8 | 0.8 | 0603 | — | — | —
EIA 1608-10 | 1.6 | 0.85 | 1.05 | 0603 | L | — | M, M0
EIA 2012-12 | 2.05 | 1.35 | 1.2 | 0805 | R | R | W
EIA 2012-15 | 2.05 | 1.35 | 1.5 | 0805 | P | — | R
EIA 3216-10 | 3.2 | 1.6 | 1.0 | 1206 | K | I | Q, A0
EIA 3216-12 | 3.2 | 1.6 | 1.2 | 1206 | S | S | —
EIA 3216-18 | 3.2 | 1.6 | 1.8 | 1206 | A | A | A
EIA 3528-12 | 3.5 | 2.8 | 1.2 | 1210 | T | T | N
EIA 3528-15 | 3.5 | 2.8 | 1.5 | 1210 | H | M | T
EIA 3528-21 | 3.5 | 2.8 | 2.1 | 1210 | B | B | B
EIA 6032-15 | 6.0 | 3.2 | 1.5 | 2312 | W | U | —
EIA 6032-20 | 6.0 | 3.2 | 2.0 | 2312 | F | L | —
EIA 6032-28 | 6.0 | 3.2 | 2.8 | 2312 | C | C | C
EIA 7343-15 | 7.3 | 4.3 | 1.5 | 2917 | X | W | —
EIA 7343-20 | 7.3 | 4.3 | 2.0 | 2917 | Y | V | V
EIA 7343-30 | 7.3 | 4.3 | 3.0 | 2917 | N | — | —
EIA 7343-31 | 7.3 | 4.3 | 3.1 | 2917 | D | D | D
EIA 7343-40 | 7.3 | 4.3 | 4.0 | 2917 | — | Y | —
EIA 7343-43 | 7.3 | 4.3 | 4.3 | 2917 | E | X | E
EIA 7360-38 | 7.3 | 6.0 | 3.8 | 2623 | — | E | W
EIA 7361-38 | 7.3 | 6.1 | 3.8 | 2924 | V | — | —
EIA 7361-43 | 7.3 | 6.1 | 4.3 | 2924 | U | — | —
Note: EIA 3528 metric is also known as EIA 1411 imperial (inches).
Wet tantalum capacitors
Cross section of a non-solid all tantalum electrolytic capacitor, hermetic sealed
The main feature of modern non-solid (wet) tantalum electrolytic capacitors is their high energy density compared with that of solid tantalum and wet aluminum electrolytic capacitors within the same temperature range. Due to their self-healing properties (the non-solid electrolyte can deliver oxygen to form a new oxide layer in weak areas of the dielectric), the dielectric can be formed with much lower safety margins and consequently a much thinner dielectric than for solid types, resulting in a higher CV value per unit volume. Additionally, wet tantalum capacitors are able to operate at voltages in excess of 100 V up to 630 V, have a relatively low ESR, and have the lowest leakage current of all electrolytic capacitors.
The original wet tantalum capacitors developed in the 1930s were axial capacitors, having a wound cell consisting of a tantalum anode and foil cathode separated by a paper stripe soaked with an electrolyte, mounted in a silver case and non-hermetic elastomer sealed.[16] Because of the inertness and stability of the tantalum dielectric oxide layer against strong acids, the wet tantalum capacitors could use sulfuric acid as an electrolyte, thus providing them with a relatively low ESR.
Because silver casings had problems in the past with silver migration and whiskers, which led to increased leakage currents and short circuits, new styles of wet tantalum capacitors use a sintered tantalum pellet cell and a gelled sulfuric acid electrolyte mounted in a pure tantalum case.
Due to their relatively high price, wet tantalum electrolytic capacitors have few consumer applications. They are used in ruggedized industrial applications, such as in probes for oil exploration. Types with military approvals can provide the extended capacitance and voltage ratings, along with the high quality levels required for avionics, military, and space applications.
History
Main article: Electrolytic capacitor § History
The group of "valve metals" capable of forming an insulating oxide film was discovered in 1875. In 1896 Karol Pollak patented a capacitor using aluminum electrodes and a liquid electrolyte. Aluminum electrolytic capacitors were commercially manufactured in the 1930s.
The first tantalum electrolytic capacitors with wound tantalum foils and non-solid electrolyte were developed in 1930 by Tansitor Electronic Inc. (US), and were used for military purposes.[16]
Solid electrolyte tantalum capacitors were invented by Bell Laboratories in the early 1950s as a miniaturized and more reliable low-voltage support capacitor to complement their newly invented transistor. The solution that R. L. Taylor and H. E. Haring of Bell Labs found for the new miniaturized capacitor in early 1950 was based on experience with ceramics. They ground metallic tantalum to a powder, pressed this powder into a cylindrical form, and then sintered the powder particles at high temperature, between 1,500 and 2,000 °C (2,730 and 3,630 °F), under vacuum conditions into a pellet ("slug").[17][18]
These first sintered tantalum capacitors used a liquid electrolyte. In 1952 Bell Labs researchers discovered the use of manganese dioxide as a solid electrolyte for a sintered tantalum capacitor.[19]
Although the fundamental inventions came from Bell Labs, the innovations needed to manufacture commercially viable tantalum electrolytic capacitors came from researchers at the Sprague Electric Company. Preston Robinson, Sprague's Director of Research, is considered to be the actual inventor of tantalum capacitors in 1954.[20][21] His invention was supported by R. J. Millard, who introduced the "reform" step in 1955,[22][23] a significant improvement in which the dielectric of the capacitor was repaired after each dip-and-convert cycle of MnO2 deposition. This dramatically reduced the leakage current of the finished capacitors.
This first solid electrolyte, manganese dioxide, had 10 times better conductivity than the non-solid electrolytes of all other capacitor types. In the style of tantalum pearls, these capacitors soon found wide use in radio and new television devices.
Conductivity of non-solid and solid electrolytes
In 1971, Intel launched its first microcomputer (the MCS 4) and in 1972 Hewlett-Packard launched one of the first pocket calculators (the HP 35).[24][25] The requirements for capacitors increased, especially the demand for lower losses. The equivalent series resistance (ESR) of standard electrolytic capacitors used for bypassing and decoupling needed to be decreased.[26]
Although solid tantalum capacitors offered lower ESR and leakage current values than the aluminum electrolytics, a 1980 price shock for tantalum dramatically reduced the usability of tantalum capacitors, especially in the entertainment industry.[27][28] In search of cheaper alternatives, the industry switched back to using aluminum electrolytic capacitors.
The development of conducting polymers by Alan J. Heeger, Alan MacDiarmid and Hideki Shirakawa in 1975 was a breakthrough with regard to lower ESR.[29] The conductivity of conductive polymers such as polypyrrole (PPy)[30] or PEDOT[31] is better by a factor of 1000 than that of manganese dioxide, and is close to the conductivity of metals. In 1993 NEC introduced their SMD polymer tantalum electrolytic capacitors, called "NeoCap". In 1997 Sanyo followed with their "POSCAP" polymer tantalum chips.
A new conductive polymer for tantalum polymer capacitors was presented by Kemet at the "1999 Carts" conference.[32] This capacitor used the newly developed organic conductive polymer PEDT, poly(3,4-ethylenedioxythiophene), also known as PEDOT (trade name Baytron).[33]
This development of low-ESR capacitors with high CV volumes in chip style for the rapidly growing SMD technology of the 1990s increased the demand for tantalum chips dramatically. However, another price explosion for tantalum in 2000/2001 forced the development of niobium electrolytic capacitors with manganese dioxide electrolyte, which have been available since 2002.[34][35] The materials and processes used to produce niobium-dielectric capacitors are essentially the same as for existing tantalum-dielectric capacitors. The characteristics of niobium electrolytic capacitors and tantalum electrolytic capacitors are roughly comparable.[36]
Electrical characteristics
Series-equivalent circuit
Series-equivalent circuit model of a tantalum capacitor
Tantalum electrolytic capacitors as discrete components are not ideal capacitors, as they have losses and parasitic inductive parts. All properties can be defined and specified by a series equivalent circuit composed of an idealized capacitance and additional electrical components which model all losses and inductive parameters of a capacitor. In this series-equivalent circuit the electrical characteristics are defined by:
C, the capacitance of the capacitor
Rleak, the resistance representing the leakage current of the capacitor
RESR, the equivalent series resistance which summarizes all ohmic losses of the capacitor, usually abbreviated as "ESR"
LESL, the equivalent series inductance which is the effective self-inductance of the capacitor, usually abbreviated as "ESL".
Using a series equivalent circuit rather than a parallel equivalent circuit is specified by IEC/EN 60384-1.
Capacitance standard values and tolerances
The electrical characteristics of tantalum electrolytic capacitors depend on the structure of the anode and the electrolyte used. This influences the capacitance value of tantalum capacitors, which depends on operating frequency and temperature. The basic unit of electrolytic capacitor capacitance is the microfarad (μF).
The capacitance value specified in the data sheets of the manufacturers is called rated capacitance CR or nominal capacitance CN and is the value for which the capacitor has been designed. The standardized measuring condition for electrolytic capacitors is an AC measuring method with a frequency of 100 to 120 Hz. Electrolytic capacitors differ in this respect from other capacitor types, whose capacitances are typically measured at 1 kHz or higher. For tantalum capacitors a DC bias voltage of 1.1 to 1.5 V for types with a rated voltage of ≤2.5 V, or 2.1 to 2.5 V for types with a rated voltage of >2.5 V, may be applied during the measurement to avoid reverse voltage.
The percentage of allowed deviation of the measured capacitance from the rated value is called capacitance tolerance. Electrolytic capacitors are available in different tolerance series classifications, whose values are specified in the E series specified in IEC 60063. For abbreviated marking in tight spaces, a letter code for each tolerance is specified in IEC 60062.
rated capacitance, E3 series, tolerance ±20%, letter code "M"
rated capacitance, E12 series, tolerance ±10%, letter code "K"
The required capacitance tolerance is determined by the particular application. Electrolytic capacitors, which are often used for filtering and bypassing, do not need narrow tolerances because they are mostly not used for accurate frequency applications such as oscillators.
Rated and category voltage
Relation between rated and category voltage and rated and category temperature
Referring to IEC/EN 60384-1 standard the allowed operating voltage for tantalum capacitors is called "rated voltage UR " or "nominal voltage UN". The rated voltage UR is the maximum DC voltage or peak pulse voltage that may be applied continuously at any temperature within the rated temperature range TR (IEC/EN 60384-1).
The voltage rating of electrolytic capacitors decreases with increasing temperature. For some applications it is important to use a higher temperature range. Lowering the voltage applied at a higher temperature maintains safety margins. For some capacitor types the IEC standard therefore specifies a "temperature derated voltage" for a higher temperature, the "category voltage UC". The category voltage is the maximum DC voltage or peak pulse voltage that may be applied continuously to a capacitor at any temperature within the category temperature range TC. The relation between the two voltages and temperatures is given in the picture on the right.
Applying a lower voltage can have positive effects for tantalum electrolytic capacitors: it increases the reliability and reduces the expected failure rate.[37]
Applying a higher voltage than specified may destroy tantalum electrolytic capacitors.
Surge voltage
The surge voltage indicates the maximum peak voltage value that may be applied to electrolytic capacitors during their application for a limited number of cycles. The surge voltage is standardized in IEC/EN 60384-1. For tantalum electrolytic capacitors the surge voltage shall be 1.3 times the rated voltage, rounded off to the nearest volt. The surge voltage applied to tantalum capacitors may influence the capacitor's failure rate.[38][39]
Transient voltage
Transient voltage or a current spike applied to tantalum electrolytic capacitors with solid manganese dioxide electrolyte can cause some tantalum capacitors to fail and may directly lead to a short.[38][40]
Reverse voltage
Tantalum electrolytic capacitors are polarized and generally require the anode electrode voltage to be positive relative to the cathode voltage.
With a reverse voltage applied, a reverse leakage current flows in very small areas of microcracks or other defects across the dielectric layer to the anode of the electrolytic capacitor. Although the current may only be a few microamps, it represents a very high localized current density which can cause a tiny hot-spot. This can cause some conversion of amorphous tantalum pentoxide to the more conductive crystalline form. When a high current is available, this effect can avalanche and the capacitor may become a total short.
Nevertheless, tantalum electrolytic capacitors can withstand a reverse voltage for short instants and for a limited number of cycles. The most common guidelines for tantalum reverse voltage are:
10% of rated voltage to a maximum of 1 V at 25 °C,
3% of rated voltage to a maximum of 0.5 V at 85 °C,
1% of rated voltage to a maximum of 0.1 V at 125 °C.
These guidelines apply for short excursions and should never be used to determine the maximum reverse voltage under which a capacitor can be used permanently.[41][42]
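For illustration, the three guideline tiers can be encoded as a small helper. This is only a sketch of the rules listed above; behaviour between the listed temperatures is an assumption (nearest tier), and real limits must come from the manufacturer's datasheet.

```python
def max_reverse_voltage(rated_voltage, temp_c):
    """Short-excursion reverse-voltage guideline: (max temp, fraction of V_R, absolute cap)."""
    tiers = [(25, 0.10, 1.0), (85, 0.03, 0.5), (125, 0.01, 0.1)]
    for t_max, fraction, cap in tiers:
        if temp_c <= t_max:
            return min(fraction * rated_voltage, cap)
    raise ValueError("temperature above the guideline range")

print(max_reverse_voltage(16, 25))  # 10% of 16 V, capped at 1 V -> 1.0
```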
Impedance
Simplified series-equivalent circuit of a capacitor for higher frequencies (above); vector diagram with electrical reactances XESL and XC and resistance ESR and for illustration the impedance Z and dissipation factor tan δ
Tantalum electrolytic capacitors, as well as other conventional capacitors, have two electrical functions. For timers or similar applications, capacitors are seen as a storage component to store electrical energy. But for smoothing, bypassing, or decoupling applications, as in power supplies, the capacitors additionally work as AC resistors to filter undesired AC components from voltage rails. For this (biased) AC function the frequency-dependent AC resistance (impedance "Z") is as important as the capacitance value.
Typical impedance curves for different capacitance values over frequency. The higher the capacitance, the lower the resonance frequency.
The impedance is the complex ratio of the voltage to the current, with both magnitude and phase, at a particular frequency in an AC circuit. In this sense impedance is a measure of the ability of the capacitor to attenuate alternating currents and can be used like Ohm's law:
\(Z = \frac{\hat{u}}{\hat{\imath}} = \frac{U_{\mathrm{eff}}}{I_{\mathrm{eff}}}\)
The impedance is a frequency-dependent AC resistance and possesses both magnitude and phase at a particular frequency. In the data sheets of electrolytic capacitors, only the impedance magnitude |Z| is specified, simply written as "Z". According to the IEC/EN 60384-1 standard, the impedance values of tantalum electrolytic capacitors are measured and specified at 10 kHz or 100 kHz, depending on the capacitance and voltage of the capacitor.
Besides measurement, the impedance can also be calculated using the idealized components of a capacitor's series-equivalent circuit: an ideal capacitor C, a resistor ESR, and an inductance ESL. In this case the impedance at the angular frequency ω is given by the geometric (complex) addition of the ESR, a capacitive reactance XC
$$ X_C=-\frac{1}{\omega C} $$
and the inductive reactance XL (inductance):
$$ X_L=\omega L_{\mathrm{ESL}}. $$
Then Z is given by
$$ Z=\sqrt{ESR^2+\left(X_C+X_L\right)^2}. $$
In the special case of resonance, in which the two reactances XC and XL have the same magnitude (|XC| = XL), their sum is zero and the impedance is determined only by ESR. At frequencies above resonance the impedance increases again due to the ESL of the capacitor; the capacitor then begins to behave primarily as an inductance.
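A minimal numeric sketch of this series-equivalent model is given below; the component values (C, ESR, ESL) are illustrative assumptions, not values from any datasheet.

```python
import numpy as np

# Illustrative component values (assumptions, not datasheet figures)
C = 100e-6    # capacitance in farads (100 uF)
ESR = 0.1     # equivalent series resistance in ohms
ESL = 2e-9    # equivalent series inductance in henries

f = np.logspace(2, 8, 601)              # 100 Hz .. 100 MHz
w = 2 * np.pi * f                       # angular frequency
X_C = -1.0 / (w * C)                    # capacitive reactance (negative)
X_L = w * ESL                           # inductive reactance
Z = np.sqrt(ESR**2 + (X_C + X_L)**2)    # impedance magnitude |Z|

# At resonance X_C + X_L = 0 and |Z| = ESR
f_res = 1.0 / (2 * np.pi * np.sqrt(ESL * C))
print(f"resonance at {f_res/1e3:.0f} kHz, |Z| there = {ESR} ohm")
```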
ESR and dissipation factor tan δ
Typical impedance and ESR curves as a function of frequency and temperature
Typical impedance and ESR as a function of frequency
Typical impedance and ESR curves over frequency for different electrolytic capacitor styles compared with MLCC
The equivalent series resistance (ESR) summarizes all resistive losses of the capacitor. These are the terminal resistances, the contact resistance of the electrode contact, the line resistance of the electrodes, the electrolyte resistance, and the dielectric losses in the dielectric oxide layer.[43]
ESR influences the remaining superimposed AC ripple behind the smoothing and may influence the circuit functionality. The ESR is also accountable for internal heat generation when a ripple current flows through the capacitor. This internal heat may influence the reliability of tantalum electrolytic capacitors.
Generally, the ESR decreases with increasing frequency and temperature.[44]
Discussions of electrolytic capacitors historically sometimes refer to the dissipation factor, tan δ, in the relevant data sheets instead of ESR. The dissipation factor is the tangent of the phase angle between the difference of the reactances XC and XL and the ESR. If the capacitor's inductance ESL is small, the dissipation factor can be approximated as:
$$ \tan \delta =\mathrm{ESR}\cdot \omega C $$
The dissipation factor tan δ is used for capacitors with very low losses in frequency-determining circuits or resonant circuits, where the reciprocal of the dissipation factor is called the quality factor (Q), which characterizes a resonator's bandwidth.
Ripple current
The superimposed (DC-biased) AC ripple current flow across the smoothing capacitor C1 of a power supply causes internal heat generation corresponding to the capacitor's ESR
A "ripple current" is the RMS value of a superimposed AC current of any frequency upon a DC current. It arises mainly in power supplies (including switched-mode power supplies) after rectifying an AC voltage and flows as charge and discharge current through the decoupling or smoothing capacitor.
Ripple currents generate heat inside the capacitor body. The dissipated power loss PL is determined by the ESR and the square of the effective (RMS) ripple current IR:
$$ P_L=I_R^2\cdot ESR $$
This internally generated heat, in addition to the ambient temperature and possibly other external heat sources, raises the capacitor body temperature by a difference ΔT against the ambient. This heat has to be distributed as thermal losses Pth over the capacitor's surface A and the thermal resistance β to the ambient:
$$ P_{th}=\Delta T\cdot A\cdot \beta $$
The internally generated heat has to be distributed to the ambient by thermal radiation, convection, and thermal conduction. The temperature of the capacitor, which is established by the balance between heat produced and heat distributed, should not exceed the capacitor's maximum specified temperature.
The ripple current is specified as an effective (RMS) value at 100 or 120 Hz or at 10 kHz at upper category temperature. Non-sinusoidal ripple currents have to be analyzed and separated into their component sinusoidal frequencies by means of Fourier analysis and the equivalent ripple current calculated as the square root of the sum of the squares of the individual currents.[45]
$$ I_R=\sqrt{i_1^2+i_2^2+i_3^2+\cdots +i_n^2} $$
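A short numeric sketch of these three equations follows; all component and thermal values are illustrative assumptions.

```python
import numpy as np

# RMS values of the sinusoidal ripple-current components from the Fourier
# analysis (illustrative assumptions), in amperes
i_components = np.array([0.30, 0.12, 0.05])

I_R = np.sqrt(np.sum(i_components**2))   # equivalent ripple current (RSS)

ESR = 0.1                                # ohms (assumed)
P_L = I_R**2 * ESR                       # internal dissipation, P_L = I_R^2 * ESR

A = 1e-4                                 # capacitor surface area in m^2 (assumed)
beta = 10.0                              # heat transfer to ambient in W/(m^2*K) (assumed)
dT = P_L / (A * beta)                    # temperature rise over ambient, from P_th = dT*A*beta

print(f"I_R = {I_R:.3f} A, P_L = {P_L*1e3:.1f} mW, dT = {dT:.1f} K")
```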
In solid tantalum electrolytic capacitors the heat generated by the ripple current influences the reliability of the capacitors.[46][47][48] Exceeding the limit tends to result in catastrophic failures with shorts and burning components.
Current surge, peak or pulse current
Solid tantalum electrolytic capacitors can be damaged by surge, peak, or pulse currents.[38][39] Tantalum capacitors that are exposed to surge, peak, or pulse currents should be used with a voltage derating of up to 70% in highly inductive circuits. If possible, the voltage profile should be a ramp turn-on, as this reduces the peak current seen by the capacitor.
Leakage current
General leakage behavior of electrolytic capacitors: leakage current I_leak as a function of time t, depending on the kind of electrolyte (non-solid with high water content; non-solid, organic; solid, polymer)
The DC leakage current is a special characteristic of electrolytic capacitors that other conventional capacitors do not have. This current is represented by the resistor Rleak in parallel with the capacitor in the series-equivalent circuit of electrolytic capacitors. The main causes of leakage current in solid tantalum capacitors are electrical breakdown of the dielectric, conductive paths due to impurities or poor anodization, and bypassing of the dielectric due to excess manganese dioxide, moisture paths, or cathode conductors (carbon, silver).[49] This leakage current in solid electrolyte capacitors cannot be reduced by "healing" in the sense of generating new oxide, because under normal conditions solid electrolytes are unable to deliver oxygen for forming processes. This statement should not be confused with the self-healing process during field crystallization, as described in Reliability (failure rate).
In datasheets, the leakage current is often specified as the product of the rated capacitance value CR and the rated voltage UR, together with an addendum figure, measured after a measuring time of 2 or 5 minutes, for example:
$$ I_{\mathrm{Leak}}=0.01\,\frac{\mathrm{A}}{\mathrm{V\cdot F}}\cdot U_{\mathrm{R}}\cdot C_{\mathrm{R}}+3\,\mathrm{\mu A} $$
The value of the leakage current depends on the voltage applied, on the temperature of the capacitor, on the measuring time, and on the influence of moisture as determined by the case sealing conditions. Tantalum capacitors normally have a very low leakage current, usually much lower than the specified worst case.
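As a worked example of the specification formula above (the part values are hypothetical):

```python
def leakage_spec_uA(C_R_uF: float, U_R_V: float) -> float:
    # I_Leak = 0.01 A/(V*F) * U_R * C_R + 3 uA; with C_R entered in uF the
    # factor 0.01 A/(V*F) yields microamperes directly
    return 0.01 * U_R_V * C_R_uF + 3.0

# A hypothetical 100 uF / 25 V part: 0.01 * 25 * 100 + 3 = 28 uA worst case
print(leakage_spec_uA(100, 25))  # 28.0
```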
Dielectric absorption (soakage)
Main article: Dielectric absorption
Dielectric absorption occurs when a capacitor that has remained charged for a long time retains some charge when briefly discharged. Although an ideal capacitor would reach zero volts after discharge, real capacitors develop a small voltage from time-delayed dipole discharging, a phenomenon that is also called dielectric relaxation, "soakage" or "battery action".
Values of dielectric absorption for tantalum capacitors:

Type of capacitor | Dielectric absorption
Tantalum electrolytic capacitors with solid electrolyte | 2 to 3%,[50] 10%[51]
Dielectric absorption can cause a problem in circuits where very small currents are used, such as long-time-constant integrators or sample-and-hold circuits.[52][53] However, in most applications where tantalum electrolytic capacitors are supporting power supply lines, dielectric absorption is not a problem.
Reliability and life time
Reliability (failure rate)
Bathtub curve with times of "early failures", "random failures", and "wear-out failures". The time of random failures is the time of constant failure rate.
The reliability of a component is a property that indicates how well a component performs its function in a time interval. It is subject to a stochastic process and can be described qualitatively and quantitatively; it is not directly measurable. The reliability of electrolytic capacitors is empirically determined by identifying the failure rate in production-accompanying endurance tests (see reliability testing).
The reliability normally is shown as a bathtub curve and is divided into three areas: early failures (infant mortality failures), constant random failures, and wear-out failures. Failure types included in the total failure rate are short circuit, open circuit, and degradation failures (exceeding electrical parameters).
The reliability prediction is generally expressed as a failure rate λ, abbreviated FIT (Failures In Time). This is the number of failures that can be expected in one billion (10^9) component-hours of operation (e.g., 1000 components for 1 million hours, or 1 million components for 1000 hours, which is 1 ppm/1000 hours) at fixed working conditions during the period of constant random failures. This failure rate model implicitly assumes "random failure": individual components fail at random times but at a predictable rate. The standard operating conditions for the failure rate FIT are 40 °C and 0.5 UR.
The reciprocal value of FIT is MTBF (Mean Time Between Failures).
For tantalum capacitors, the failure rate is often specified at 85 °C and rated voltage UR as reference conditions and expressed as percent failed components per thousand hours (n %/1000 h). That is "n" failed components per 10^5 component-hours, or, expressed in FIT, the ten-thousand-fold value per 10^9 hours.
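The two failure-rate notations convert into each other as sketched below (a small helper written for illustration, not taken from any standard document):

```python
def percent_per_1000h_to_fit(n_percent: float) -> float:
    # n %/1000 h = n/100 failures per component per 1000 h = n * 1e-5 per hour;
    # 1 FIT = 1 failure per 1e9 component-hours, hence the factor of 10,000
    return n_percent * 1e-5 * 1e9

def fit_to_mtbf_hours(fit: float) -> float:
    return 1e9 / fit                      # MTBF is the reciprocal of the rate

print(percent_per_1000h_to_fit(0.01))     # 100.0 FIT
print(fit_to_mtbf_hours(100.0))           # 1e7 hours
```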
For conditions other than the standard operating conditions of 40 °C and 0.5 UR, that is, for other temperatures and applied voltages, current load, capacitance value, circuit resistance, mechanical influences, and humidity, the FIT figure can be recalculated with acceleration factors standardized for industrial[54] or military[55] contexts. For example, higher temperature and applied voltage cause the failure rate to increase.
The most often cited source for recalculating the failure rate is MIL-HDBK-217F, the "bible" of failure rate calculations for electronic components. SQC Online, an online statistical calculator for acceptance sampling and quality control, provides a tool for quickly recalculating given failure rate values for specific application conditions.[56]
Some manufacturers of tantalum capacitors may have their own FIT calculation tables.[57][58]
Tantalum capacitors are reliable components. Continuous improvement in tantalum powder and capacitor technologies has resulted in a significant reduction in the amount of impurities present, which formerly caused most of the field crystallization failures. Commercially available tantalum capacitors now have reached, as standard products, the high MIL standard "C" level, which is 0.01%/1000 h at 85 °C and UR, or 1 failure per 10^7 hours at 85 °C and UR.[11] Recalculated with the acceleration factors from MIL-HDBK-217F to 40 °C and 0.5 UR, this corresponds to a failure rate of 0.02 FIT for a 100 µF/25 V tantalum chip capacitor operated with a series resistance of 0.1 Ω.
Life time
The life time, service life, load life or useful life of tantalum electrolytic capacitors depends entirely on the electrolyte used:
Those using liquid electrolytes do not have a life time specification when hermetically sealed.
Those using manganese dioxide electrolytes do not have a life time specification.
Those using polymer electrolytes do have a life time specification.
Polymer electrolytes exhibit a small deterioration of conductivity caused by a thermal degradation mechanism of the conductive polymer. The electrical conductivity decreases as a function of time, in agreement with a granular metal type structure, in which aging is due to the shrinking of the conductive polymer grains.[59] The life time of polymer electrolytic capacitors is specified in similar terms to the non-solid electrolytic caps, but its life time calculation follows other rules, which lead to much longer operational life times.[60][61][62]
Failure modes and self-healing mechanism
Tantalum capacitors show different electrical long-term behaviors depending on the electrolyte used. Application rules for types with an inherent failure mode are specified to ensure high reliability and long life.
Long-term electrical behavior, failure modes, self-healing mechanism, and application rules of the different types of tantalum electrolytic capacitors:

Tantalum e-caps with solid MnO2 electrolyte: long-term electrical behavior is stable; the failure mode is field crystallization;[11] the self-healing mechanism is thermally induced insulation of faults in the dielectric by reduction of the electrolyte MnO2 into insulating Mn2O3 if current availability is limited; the application rules are a voltage derating of 50% and a series resistance of 3 Ω/V.[63][64]

Tantalum e-caps with solid polymer electrolyte: long-term electrical behavior is a deterioration of conductivity (ESR increases); the failure mode is field crystallization;[6][11] the self-healing mechanism is insulation of faults in the dielectric by oxidation or evaporation of the polymer electrolyte; the application rule is a voltage derating of 20%.
Tantalum capacitors are reliable at the same very high level as other electronic components, with very low failure rates. However, they have a single unique failure mode called "field crystallization".[9] Field crystallization is the major reason for degradation and catastrophic failures of solid tantalum capacitors.[13] More than 90% of today's rare failures in tantalum solid-state electrolytic capacitors are caused by shorts or increased leakage current due to this failure mode.[65]
The extremely thin oxide film of a tantalum electrolytic capacitor, the dielectric layer, must be formed in an amorphous structure. Changing the amorphous structure into a crystallized structure is reported to increase the conductivity by 1000 times, combined with an enlargement of the oxide volume.[11] The field crystallization followed by a dielectric breakdown is characterized by a sudden rise in leakage current within a few milliseconds, from nanoamp magnitude to amp magnitude in low-impedance circuits. Increasing current flow can accelerate in an "avalanche effect" and rapidly spread through the metal/oxide. This can result in various degrees of destruction from rather small, burned areas on the oxide to zigzag burned streaks covering large areas of the pellet or complete oxidation of the metal.[6] If the current source is unlimited a field crystallization may cause a capacitor short circuit. In this circumstance, the failure can be catastrophic if there is nothing to limit the available current, as the series resistance of the capacitor can become very low.
If the current is limited in tantalum electrolytic capacitors with solid MnO2 electrolyte, a self-healing process can take place, reducing MnO2 into insulating Mn2O3
Impurities, tiny mechanical damages, or imperfections in the dielectric can affect the structure, changing it from amorphous to crystalline structure and thus lowering the dielectric strength. The purity of the tantalum powder is one of the most important parameters for defining its risk of crystallization. Since the mid-1980s, manufactured tantalum powders have exhibited an increase in purity.
Surge currents after soldering-induced stresses may start crystallization, leading to insulation breakdown.[66] The only way to avoid catastrophic failures is to limit the current which can flow from the source in order to reduce the breakdown to a limited area. Current flowing through the crystallized area causes heating in the manganese dioxide cathode near the fault. At increased temperatures a chemical reaction then reduces the surrounding conductive manganese dioxide to the insulating manganese(III) oxide (Mn2O3) and insulates the crystallized oxide in the tantalum oxide layer, stopping local current flow.[6][63]
Failure avoidance
Solid tantalum capacitors with crystallization are most likely to fail at power-on.[67] It is believed that the voltage across the dielectric layer is the trigger mechanism for the breakdown and that the switch-on current pushes the collapse to a catastrophic failure. To prevent such sudden failures, manufacturers recommend:[11][63][68]
50% application voltage derating against rated voltage
using a series resistance of 3 Ω/V or
using of circuits with slow power-up modes (soft-start circuits).
Additional information
Capacitor symbols
Electrolytic capacitor symbols (schematic symbols for polarized and bipolar electrolytic capacitors)
Parallel connection
Small or low-voltage electrolytic capacitors may be safely connected in parallel. Large capacitors, especially large sizes and high-voltage types, should be individually protected against sudden discharge of the whole bank due to a failed capacitor.
Series connection
Some applications like AC/AC converters with DC-link for frequency controls in three-phase grids need higher voltages than aluminum electrolytic capacitors usually offer. For such applications, electrolytic capacitors can be connected in series for increased voltage withstanding capability. During charging, the voltage across each of the capacitors connected in series is proportional to the inverse of the individual capacitor's leakage current. Since every capacitor differs a little in individual leakage current, the capacitors with a higher leakage current will get less voltage. The voltage balance over the series-connected capacitors is not symmetrical. Passive or active voltage balancing has to be provided in order to stabilize the voltage over each individual capacitor.[69]
Polarity marking
Polarity marking of tantalum electrolytic capacitors
All tantalum capacitors are polarized components, with distinctly marked positive or negative terminals. When subjected to reversed polarity (even briefly), the capacitor depolarizes and the dielectric oxide layer breaks down, which can cause it to fail even when later operated with correct polarity. If the failure is a short circuit (the most common occurrence), and current is not limited to a safe value, catastrophic thermal runaway may occur. This failure can even result in the capacitor forcefully ejecting its burning core.
Tantalum electrolytic capacitors with solid electrolyte are marked at their positive terminal with a bar or a "+". Tantalum electrolytic capacitors with non-solid electrolyte (axial leaded style) are marked on the negative terminal with a bar or a "−" (minus). The polarity can best be identified on the shaped side of the case, which has the positive terminal. The different marking styles can cause dangerous confusion. A particular cause of confusion is that on surface-mount tantalum capacitors the positive terminal is marked with a bar, whereas on aluminium surface-mount capacitors it is the negative terminal that is so marked.
Imprinted markings
Tantalum capacitors, like most other electronic components, have imprinted markings to indicate the manufacturer, type, electrical and thermal characteristics, and date of manufacture, if enough space is available. But most tantalum capacitors are chip types, so the reduced space limits the imprinted signs to capacitance, tolerance, voltage, and polarity.
Smaller capacitors use a shorthand notation. The most commonly used format is XYZ J/K/M "V", where XYZ represents the capacitance (calculated as XY × 10^Z pF), the letters J, K, or M indicate the tolerance (±5%, ±10%, and ±20%, respectively), and "V" represents the working voltage.
105K 330V implies a capacitance of 10 × 10^5 pF = 1 µF (K = ±10%) with a working voltage of 330 V.
476M 100V implies a capacitance of 47 × 10^6 pF = 47 µF (M = ±20%) with a working voltage of 100 V.
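A small decoder for this shorthand notation might look as follows (a sketch; the J code for ±5% is the usual letter-code value and is an addition here):

```python
def parse_marking(code: str):
    """Parse a shorthand marking such as '105K': XY significand digits,
    Z decade exponent (picofarads), trailing letter = tolerance."""
    tol = {"J": "+/-5%", "K": "+/-10%", "M": "+/-20%"}
    xy, z, letter = int(code[:2]), int(code[2]), code[3]
    picofarads = xy * 10**z
    return picofarads / 1e6, tol[letter]   # capacitance in uF, tolerance

print(parse_marking("105K"))  # (1.0, '+/-10%')  -> 10 x 10^5 pF = 1 uF
print(parse_marking("476M"))  # (47.0, '+/-20%') -> 47 x 10^6 pF = 47 uF
```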
Capacitance, tolerance and date of manufacture can be indicated with a short code specified in IEC/EN 60062. Examples of short-marking of the rated capacitance (microfarads): µ47 = 0.47 µF, 4µ7 = 4.7 µF, 47µ = 47 µF
The date of manufacture is often printed in accordance with international standards.
Version 1: coding with year/week numeral code, "1208" is "2012, week number 8".
Version 2: coding with year code/month code. The year codes are: "R" = 2003, "S"= 2004, "T" = 2005, "U" = 2006, "V" = 2007, "W" = 2008, "X" = 2009, "A" = 2010, "B" = 2011, "C" = 2012, "D" = 2013, "E" = 2014 etc. Month codes are: "1" to "9" = Jan. to Sept., "O" = October, "N" = November, "D" = December. "X5" is then "2009, May"
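The version-2 date code can be decoded mechanically, as in this sketch (the tables are taken directly from the list above):

```python
YEAR = {"R": 2003, "S": 2004, "T": 2005, "U": 2006, "V": 2007, "W": 2008,
        "X": 2009, "A": 2010, "B": 2011, "C": 2012, "D": 2013, "E": 2014}
MONTH = {**{str(d): d for d in range(1, 10)}, "O": 10, "N": 11, "D": 12}

def decode_date_code(code: str):
    """Decode a two-character year/month code, e.g. 'X5' -> (2009, 5)."""
    return YEAR[code[0]], MONTH[code[1]]

print(decode_date_code("X5"))  # (2009, 5), i.e. May 2009
```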
For very small capacitors no marking is possible. Here only the traceability of the manufacturers can ensure the identification of a type.
Standardization
The standardization for all electrical, electronic components and related technologies follows the rules given by the International Electrotechnical Commission (IEC),[70] a non-profit, non-governmental international standards organization.[71][72]
The definition of the characteristics and the procedure of the test methods for capacitors for use in electronic equipment are set out in the Generic specification:
IEC/EN 60384-1: Fixed capacitors for use in electronic equipment
The tests and requirements to be met by aluminum and tantalum electrolytic capacitors for use in electronic equipment for approval as standardized types are set out in the following sectional specifications:
IEC/EN 60384-3—Surface mount fixed tantalum electrolytic capacitors with manganese dioxide solid electrolyte
IEC/EN 60384-15—Fixed tantalum capacitors with non-solid and solid electrolyte
IEC/EN 60384-24—Surface mount fixed tantalum electrolytic capacitors with conductive polymer solid electrolyte
Tantalum ore
Tantalum capacitors are the main use of the element tantalum. Tantalum ore is one of the conflict minerals. Some non-governmental organizations are working together to raise awareness of the relationship between consumer electronic devices and conflict minerals.
Market
The market of tantalum electrolytic capacitors in 2008 was approximately US$2.2 billion, which was roughly 12% of the total capacitor market.[73]
Product programs of larger manufacturers of tantalum electrolytic capacitors:

Manufacturer | Ta-MnO2-SMD-Chips | Ta-Polymer-SMD-Chips | Ta-MnO2-radial | Axial-solid-MnO2 | Axial-Wet
AVX | X | X | X | — | X
Cornell Dubilier | X | — | — | — | —
Exxelia Group | X | — | X | X | X
Kemet | X | X | X | X | —
NCC-Matsuo | X | X | X | X | X
NEC/Tokin | X | X | — | — | —
NIC | X | X | — | — | —
ROHM | X | X | — | — | —
Samsung Electro-Mechanics | X | X | — | — | —
Vishay | X | X | X | X | X
Uses
The low leakage and high capacity of tantalum capacitors favor their use in sample and hold circuits to achieve long hold duration, and some long duration timing circuits where precise timing is not critical. They are also often used for power supply rail decoupling in parallel with film or ceramic capacitors which provide low ESR and low reactance at high frequency. Tantalum capacitors can replace aluminum electrolytic capacitors in situations where the external environment or dense component packing results in a sustained hot internal environment and where high reliability is important. Equipment such as medical electronics and space equipment that require high quality and reliability makes use of tantalum capacitors.
An especially common application for low-voltage tantalum capacitors is power supply filtering on computer motherboards and in peripherals, due to their small size and long-term reliability.[74][75]
See also
Coltan mining and ethics
Niobium capacitor
Polymer capacitor
Solid aluminum capacitor (SAL)
Surface-mount technology
Types of capacitor
^ Tomáš Kárník, AVX, Niobium Oxide for Capacitor Manufacturing, METAL 2008, 13.–15. 5. 2008, Hradec nad Moravicí, PDF
^ a b I. Horacek, T. Zednicek, S. Zednicek, T. Karnik, J. Petrzilek, P. Jacisko, P. Gregorova, AVX, "High CV Tantalum Capacitors: Challenges and Limitations" PDF
^ H.C. Starck GmbH, Product Information Tantalum capacitor powder
^ H. Haas, H. C. Starck GmbH, Magnesium Vapour Reduced Tantalum Powders with Very High Capacitances [1]
^ a b c d J. Gill, AVX, BASIC TANTALUM CAPACITOR TECHNOLOGY, PDF or [2]
^ a b c d VISHAY, DC LEAKAGE FAILURE MODE, PDF
^ K. H. Thiesbürger: Der Elektrolyt-Kondensator. 4th edition. Roederstein, Landshut 1991, OCLC 313492506
^ J. Qazi, Kemet, An Overview of Failure Analysis of Tantalum Capacitors
^ a b B. Goudswaard, F. J. J. Driesens, Failure Mechanism of Solid Tantalum Capacitors, Philips, Electrocomponent Science and Technology, 1976, Vol. 3. pp 171-179 [3]
^ H. W. Holland, Kemet, Solid Tantalum Capacitor Failure Mechanism and Determination of Failure Rates
^ a b c d e f T.Zednicek, AVX, A Study of Field Crystallization in Tantalum Capacitors and its effect on DCL and Reliability, [4]
^ a b P. Vasina, T. Zednicek , AVX, J. Sikula, J. Pavelka, AVX, Failure Modes of Tantalum Capacitors made by Different Technologies, CARTS USA 2001 [5]
^ a b Y. Pozdeev-Freeman, Vishay, How Far Can We Go with High CV Tantalum Capacitors, PCI, January/February 2005, page 6, PDF Archived 2016-01-24 at the Wayback Machine
^ R. Faltus, AVX Corp.EET Asia, Choosing the right capacitors to ensure long-term control-circuit stability [6]
^ Manufacturer's Cross Reference and Tantalum Chip Capacitor Part Numbering Systems; F3075D; Kemet; November 2004>PDF
^ a b D. F. Tailor, Tantalum and Tantalum Compounds, Fansteel Inc., Encyclopedia of Chemical Technology, Vol. 19, 2nd ed. 1969 John Wiley & sons, Inc.
^ R. L. Taylor and H. E. Haring, "A metal semi-conductor capacitor," J. Electrochem. Soc., vol. 103, p. 611, November, 1956.
^ E. K. Reed, Jet Propulsion Laboratory, Characterization of Tantalum Polymer Capacitors, NEPP Task 1.21.5, Phase 1, FY05] [7]
^ D. A. McLean, F. S. Power, Proc. Inst. Radio Engrs. 44 (1956) 872
^ Preston Robinson, Sprague, US Patent 3066247, 25 Aug. 1954 – 27 Nov. 1962
^ Sprague, Dr. Preston Robinson Granted 103rd Patent Since Joining Company In 1929 [8]
^ A. Fraioli, Recent Advances in the Solid-State Electrolytic Capacitor, IRE Transactions on Component Parts, June 1958
^ R. J. Millard, Sprague, US Patent 2936514, October 24, 1955 – May 17, 1960
^ Computerposter
^ K. Lischka, Spiegel 27.09.2007, 40 Jahre Elektro-Addierer: Der erste Taschenrechner wog 1,5 Kilo, [9]
^ Larry E. Mosley, Intel Corporation, Capacitor Impedance Needs For Future Microprocessors, CARTS USA 2006, [10] Archived 2014-12-14 at the Wayback Machine
^ W. Serjak, H. Seyeda, Ch. Cymorek, Tantalum Availability: 2000 and Beyond, PCI, March/April 2002 (PDF). Archived from the original (PDF) on 2014-08-08. Retrieved 2015-01-02.
^ The Tantalum Supply Chain: A Detailed Analysis, PCI, March/April 2002 Archived 2014-08-08 at the Wayback Machine
^ About the Nobel Prize in Chemistry 2000, Advanced Information, October 10, 2000,[11]
^ Y. K. ZHANG, J. LIN,Y. CHEN, Polymer Aluminum Electrolytic Capacitors with Chemically-Polymerized Polypyrrole (PPy) as Cathode Materials Part I. Effect of Monomer Concentration and Oxidant on Electrical Properties of the Capacitors, PDF Archived 2014-12-14 at the Wayback Machine
^ U. Merker, K. Wussow, W. Lövenich, H. C. Starck GmbH, New Conducting Polymer Dispersions for Solid Electrolyte Capacitors, PDF Archived 2016-03-04 at the Wayback Machine
^ John Prymak, Kemet, Replacing MnO2 with Polymers, 1999 CARTS
^ F. Jonas, H.C.Starck, Baytron, Basic chemical and physical properties, Präsentation 2003, [www.hcstarck.de]
^ Ch. Schnitter, A. Michaelis, U. Merker, H.C. Starck, Bayer, New Niobium Based Materials for Solid Electrolyte Capacitors, Carts 2002
^ T. Zednicek, W. A. Millman, Ch. Reynolds, AVX, Tantalum and Niobium Technology Roadmap PDF
^ Y. Pozdeev-Freeman, P. Maden, Vishay, Solid-Electrolyte Niobium Capacitors Exhibit Similar Performance to Tantalum, Feb 1, 2002, [12]
^ Ch. Reynolds, AVX, Technical Information, Reliability Management of Tantalum Capacitors, PDF
^ a b c J. Gill, AVX, Surge in Solid Tantalum Capacitors,PDF Archived 2015-01-09 at the Wayback Machine
^ a b A. Teverovsky, NASA, Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors PDF Archived 2014-12-14 at the Wayback Machine
^ A. Teverovsky, Perot Systems Code 562, NASA GSFCE, Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors PDF Archived 2014-12-14 at the Wayback Machine
^ I. Bishop, J. Gill, AVX Ltd., Reverse Voltage Behavior of Solid Tantalum Capacitors [13]
^ P. Vasina, T. Zednicek, Z. Sita, J. Sikula, J. Pavelka, AVX, Thermal and Electrical Breakdown Versus Reliability of Ta2O5 Under Both – Bipolar Biasing Conditions PDF
^ A. Berduque, Kemet, Low ESR Aluminium Electrolytic Capacitors for Medium to High Voltage Applications, PDF
^ Joelle Arnold, Uprating of Electrolytic Capacitors, DfR Solutions
^ Vishay BCcomponents, Introduction Aluminum Capacitors, Revision: 10-Sep-13 1 Document Number: 28356, PDF Archived 2016-01-26 at the Wayback Machine
^ I. Salisbury, AVX, Thermal Management of Surface Mounted Tantalum Capacitors PDF
^ R.W. Franklin, AVX , Ripple Rating of Tantalum Chip Capacitors PDF
^ KEMET, Ripple Current Capabilities, Technical Update 2004
^ R.W. Franklin, AVX, AN EXPLORATION OF LEAKAGE CURRENT
^ Kemet, Polymer Tantalum Chip Capacitors
^ R. W. Franklin, AVX, ANALYSIS OF SOLID TANTALUM CAPACITOR LEAKAGE CURRENT PDF
^ "Understand Capacitor Soakage to Optimize Analog Systems" by Bob Pease 1982 "Archived copy". Archived from the original on 2010-01-23. Retrieved 2010-01-26. CS1 maint: Archived copy as title (link)
^ * "Modeling Dielectric Absorption in Capacitors", by Ken Kundert
^ IEC/EN 61709, Electric components. Reliability. Reference conditions for failure rates and stress models for conversion
^ MIL-HDBK-217F Reliability Prediction of Electronic Equipment
^ SQC online table calculator, Capacitor Failure Rate Model, MIL-HDBK-217, Rev. F—Notice 2 [14]
^ Hitachi, Precautions in using Tantalum Capacitors, 4.2 Failure Rate Calculation Formula (PDF). Archived from the original (PDF) on 2014-12-14. Retrieved 2015-01-02.
^ KEMET FIT Calculator Software
^ E. Vitoratos, S. Sakkopoulos, E. Dalas, N. Paliatsas, D. Karageorgopoulos, F. Petraki, S. Kennou, S.A. Choulis, Thermal degradation mechanisms of PEDOT:PSS, Organic Electronics, Volume 10, Issue 1, February 2009, Pages 61–66, [15]
^ Nichicon, Technical Guide, Calculation Formula of Lifetime PDF
^ Estimating of Lifetime FUJITSU MEDIA DEVICES LIMITED PDF Archived 2013-12-24 at the Wayback Machine
^ NIC Technical Guide, Calculation Formula of Lifetime Archived 2013-09-15 at the Wayback Machine
^ a b c d J.Gill, T. Zednicek, AVX, VOLTAGE DERATING RULES FOR SOLID TANTALUM AND NIOBIUM CAPACITORS, PDF
^ a b R. Faltus, AVX, Advanced capacitors ensure long-term control-circuit stability, 7/2/2012, EDT [16]
^ Elna, Failure Rates of Tantalum Chip Capacitors
^ A. Teverovsky, DERATING OF SURGE CURRENTS FOR TANTALUM CAPACITORS, Dell Services Federal Government, Inc. NASA/GSFC Greenbelt, MD20771, USA [17]
^ D. Liu, MEI Technologies, Inc. NASA Goddard Space Flight Center, Failure Modes in Capacitors When Tested Under a Time-Varying Stress [18]
^ Jim Keith, What a cap-astrophe!,EDN, May 27, 2010
^ Epcos, Aluminum electrolytic capacitors, General technical informations PDF
^ IEC Homepage
^ IEC Webstore
^ IEC/EN/DIN Standards, Beuth-Verlag
^ Electronic Capacitors, SIC 3675, NAICS 334414: Electronic Capacitor Manufacturing, Industry report: [19]
^ Prymak, J.D. (1998). "New tantalum capacitors in power supply applications". Industry Applications Conference, 1998. 2: 1129–1137. doi:10.1109/IAS.1998.730289. ISBN 0-7803-4943-1.
^ Tamara Schmitz and Mike Wong Choosing and Using Bypass Capacitors
Media related to Tantalum capacitors at Wikimedia Commons
Design and implementation of AD9361-based software radio receiver
Feng Tian1,
Hanqing Li1 &
Liangchen Yuan1
EURASIP Journal on Wireless Communications and Networking, volume 2019, Article number: 95 (2019)
Existing software radio platforms constructed from discrete devices have many disadvantages, such as high power consumption, high cost, and poor portability. In this study, an AD9361-based software radio communication system was designed on the basis of the zero-IF bandpass sampling software radio structure to solve the poor universality and expansibility of traditional software radio receivers. In the AD9361-based software radio communication system, the influence of the channel on received signals and the inter-symbol interference caused by multipath propagation are offset and eliminated by the minimum mean-squared error (MMSE) equalization algorithm. Simulation analyses of the core functions of the designed receiver, including group detection, frame synchronization, channel estimation, and frequency-domain equalization, were performed in ModelSim. The receiving functions of the software radio were realized by the core radio frequency (RF) board of AD9361 and the digital baseband development board of ZC706. The signal frequency spectra received and sent by the designed receiver overlap on the basis of the joint debugging and testing of the RF and digital baseband modules. Test results demonstrate that the designed software radio receiver has a reasonable structural design and can meet the design requirements in terms of overall performance. Additionally, the repeated development process of traditional software radio receivers is simplified, and the integration level and expansibility of the system can be improved. The results can provide valuable references for the development of universal software radio receivers.
Software radio has overcome the disadvantage that previous communication platforms with different communication functions and frequency bands cannot communicate mutually. Existing software radio platforms are constructed by discrete devices, which are constrained by high power consumption and high cost. This situation not only requires technicians to be experienced in hardware design and radio frequency (RF) signal processing, but it also poses high access requirements for software. Given the coexistence of 3G and 4G communication standards, and even that of 5G, there are multiple frequency bands around the world. Traditional software radio designs require different hardware platforms to support varied communication protocols and frequency bands, and they require long development periods and high design costs.
Andrews et al. [1] suggested the use of a mixer as the first-level structure of a software radio to reduce noise at the cost of power, with the supply voltage of the baseband low-noise amplifier increased. Murphy et al. (2012) put forward the use of a noise-canceling technology as the software radio structure. Useful signals were enhanced by adding auxiliary channels and offsetting noise at the output terminal, thus improving the noise reduction capability and the overall performance of the software radio system [2]. Zhang et al. proposed a software radio technique that can eliminate the spatial incident angle interference and frequency interference in signal filtering [3]. Loubser and Swart [4] evaluated two existing CR-specific media access control protocols by using a CR-specific simulator. Kamaleldin and Ahmed proposed that software radio hardware platforms supporting many wireless standards could be realized by dynamic program reconfiguration [5]. Marwanto et al. [6] proposed ARDUINO UNO and X-Bee technologies for software radio systems to reduce the cost of spectrum exchange information based on OFDM. Sahoo et al. proposed a multichannel finite impulse response filter for software radio that can reduce power consumption effectively by means of look-up tables (LUTs) and can be applied efficiently to software radio systems with multichannel filters [7].
Tsinghua University (2014) completed a chip for software radio receivers in the working frequency band of 0.1–5 GHz. Qin, Wang et al. constructed a radio communication system platform with Matlab and a universal software radio peripheral to increase the utilization of spectrum resources. Spectrum sensing and estimation of the available spectral bandwidth of primary users' signals were obtained by performing an energy detection method. Thus, spectrum detection was realized, and a set of judgment criteria was provided for the spectral access of secondary users [8]. Xu and Yu designed and completed a fault prediction software platform for an airborne software radio system by analyzing its structure [9]. Cui [10] designed a communication terminal for the time-hopping spread spectrum of the TDMA system based on the software radio concept, realized the single-channel transmitting and multichannel reception of RFs, and accomplished the design of RF modules and their link terminals. Zhang et al. [11] designed a monitoring system for interference and multipath in the signal bands and adjacent frequencies of the current four navigation systems based on the software-defined radio concept. Yin and Cheng [12] built a new hardware design program for a software radio processing platform with high performance and low power consumption based on the requirements of special radio communication systems with low power consumption.
Extant studies have reported that software radio systems are generally limited by their structures. Studies based on AD9361 RF modules and SoC digital baseband processing modules remain underexplored. The working frequency range of AD9361 is 70 MHz to 6.0 GHz, covering most licensed and license-free bands, and the supported channel bandwidth ranges from less than 200 kHz to 56 MHz. With the RF front end and the flexible mixed-signal baseband integrated, AD9361 provides a configurable digital interface to the processor and integrates a frequency synthesizer, thus simplifying design-in; its high programmability allows low noise and high modulation accuracy. In this study, an AD9361-based software radio structure was constructed by using the broadband zero-IF bandpass sampling software radio structure. The corresponding software radio receiver was designed, realizing many core functions, such as group detection, frame synchronization, channel estimation, and frequency-domain equalization. The results can provide new universal platforms and methods for software radio receivers.
Structure of software radio system
The software radio system is mainly composed of an RF module and a digital baseband processing module. The RF module converts RF analog signals to baseband digital signals and vice versa, thereby facilitating RF receiving and transmitting. The communication protocol in the physical layer is realized by the digital baseband processing module, which ensures signal encoding/decoding and facilitates modulation and demodulation [13]. The structure of the software radio system is shown in Fig. 1.
The structure of the software radio system. The software radio system is mainly composed of an RF module and a digital baseband processing module
Implementation of software radio receiving terminal
The receiving terminal of the software radio performs group detection, frame synchronization, channel estimation, frequency-domain equalization, RS decoding, inverse mapping, and so on. In the main signal processing of the digital baseband receiving module, group detection is conducted first, followed by signal synchronization and channel estimation based on training sequences. Then, the signals are equalized on the basis of the channel estimates to compensate for the frequency-selective fading caused by multipath transmission. Finally, RS decoding and 16QAM demodulation are accomplished. The structure of the software radio receiver is shown in Fig. 2.
The structure of the software radio receiver. The receiving terminal performs group detection, frame synchronization, channel estimation, frequency-domain equalization, RS decoding, and inverse mapping
The structure of AD9361
AD9361 is composed of a 2 × 2 transceiver, a Serial Peripheral Interface (SPI) configuration interface, an AUXADC, internal PLLs, programmable GPOs, and a 2 × 12-bit data bus that can be configured as differential or single-ended. Each transmitting and receiving channel of AD9361 is independent. The two receiving channels are composed of a low-noise amplifier, mixer, amplifier, filter, and ADC. The two transmitting channels are composed of a DAC, filter, amplifier, low-noise amplifier, and mixer. The SPI configuration interface is compatible with the standard four-wire mode. The structure of AD9361 is shown in Fig. 3.
The structure of AD9361. The whole circuit of RF and IF signals is integrated into one chip by adopting zero-IF architecture, which is composed of the transceiver, the configuration interface of SPI, the AUXADC, the internal PLLs, the programmable GPO, and the data bus
Group detection
Group detection is performed by using the leading structure. The circuit used for group detection is shown in Fig. 4, which is designed and realized according to the circuit procedure [14].
Circuit procedure for group detection. Group detection is performed by using the leading structure. Cn is the mutual correlation between the currently received L data and the L data received before D. Pn is the value of received signal energy. mn is the decision variable of the delay correlation algorithm
The value of delay correlation Cn is:
$$ {C}_n=\sum \limits_{k=0}^{L-1}{r}_{n-k}{r^{\ast}}_{n-k-D}, $$
where rn is the received signal, and Cn is the mutual correlation between the currently received L data and the L data received before D.
The value of received signal energy Pn is:
$$ {P}_n=\sum \limits_{k=0}^{L-1}{r}_{n-k-D}{r^{\ast}}_{n-k-D}=\sum \limits_{k=0}^{L-1}{\left|{r}_{n-k-D}\right|}^2. $$
The decision variable mn of the delay correlation algorithm is:
$$ {m}_n=\frac{\left|{C}_n\right|}{P_n}=\frac{\left|\sum \limits_{k=0}^{L-1}{r}_{n-k}{r^{\ast}}_{n-k-D}\right|}{\sum \limits_{k=0}^{L-1}{\left|{r}_{n-k-D}\right|}^2}. $$
The value of mn for group detection can be determined by the leading structure when the signal-to-noise ratio (SNR) is 15 dB. A value of Cn approaching 0 indicates that effective data transmission has not been achieved and only noise exists. Parameter mn begins to increase with the occurrence of the secondary short-training symbol and begins to decrease when the ninth period is reached.
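For illustration, a minimal Python model of this delay-correlation detector is sketched below; D = L = 16 follows the short-training-symbol period used in the frame synchronization section, while the decision threshold is an assumed value, not a figure from the FPGA implementation.

```python
import numpy as np

def group_detect(r: np.ndarray, D: int = 16, L: int = 16,
                 threshold: float = 0.5) -> np.ndarray:
    """Compute m_n = |C_n| / P_n for each sample of r and compare it with a
    threshold; r is the complex baseband stream with a periodic preamble."""
    m = np.zeros(len(r))
    for n in range(D + L - 1, len(r)):
        window = r[n - L + 1 : n + 1]                 # current L samples
        delayed = r[n - D - L + 1 : n - D + 1]        # L samples, D earlier
        C = np.sum(window * np.conj(delayed))         # delay correlation C_n
        P = np.sum(np.abs(delayed) ** 2)              # received energy P_n
        m[n] = np.abs(C) / P if P > 0 else 0.0
    return m > threshold                              # per-sample detection flag
```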
A simulation on the basis of the ModelSim platform is conducted. The simulation results of group detection are shown in Figs. 5 and 6.
The simulation diagram 1 of group detection. The value of mn for group detection can be determined by the leading structure when the signal-to-noise ratio (SNR) is 15 dB
The simulation diagram 2 of group detection. The value of mn for group detection can be determined by the leading structure when the SNR is 15 dB
DataInRe and DataInIm are the real part and imaginary part of the current data, respectively. SumMagnitude is the sum of relevant window energies, and SumDelayCorrelation is the sum of correlation coefficients of relevant windows. BufferForDetection represents the initial judgment samples of 32 continuous groups and the judgment samples at the end of 48 continuous groups. As shown in Figs. 5 and 6, SumMagnitude and SumDelayCorrelation are immediately calculated after a system reset and when the effective signal of the group detection rises. When 32 samples are detected continuously, the data group of the judgment arrives. When the SumMagnitude of 48 continuous samples is smaller than the threshold, the data group is completed.
Frame synchronization
The frame synchronization of signals requires the calculation of the cross-correlation coefficient between the received data groups and the locally known short-training symbols [15]. Cross-correlation coefficient Ck can be expressed as:
$$ {C}_k=\sum \limits_{m=0}^{D-1}{r}_{k-m}\times {S}_m^{\ast }, $$
where the superscript ∗ is a conjugation, and D is the length of the cross-correlation coefficient, which is determined to be 16. The positions of the short-training symbols are judged according to the value of ∣Ck∣. The moment of the last peak of ∣Ck∣ is designated as the end point of the short-training symbols.
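A compact model of this cross-correlation peak search is sketched below; the peak picker is simplified, whereas the real design counts nine evenly spaced peaks.

```python
import numpy as np

def frame_sync(r: np.ndarray, s_short: np.ndarray, n_peaks: int = 9) -> int:
    """Cross-correlate r with the local short-training symbol s_short (D = 16)
    per Eq. (4) and return the sample index of the last correlation peak,
    taken as the end point of the short-training symbols."""
    D = len(s_short)
    C = np.array([np.sum(r[k - D + 1 : k + 1] * np.conj(s_short[::-1]))
                  for k in range(D - 1, len(r))])     # C_k = sum_m r[k-m] S*_m
    # simplified peak picking: take the n_peaks largest |C_k| values
    peaks = np.sort(np.argsort(np.abs(C))[-n_peaks:])
    return int(peaks[-1]) + D - 1                     # index of the last peak in r
```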
The simulation results of frame synchronization based on the ModelSim platform are shown in Fig. 7.
The simulation diagram of frame synchronization. The simulation results of frame synchronization. DataOutEn is the effective time for outputting one symbol, which calculates from the long sequence
DataInRe and DataInIm are the real part and imaginary part of input data, respectively. DataInEnable is the enable signal of input data, and PCouter is the number of detected peaks. First, quantization is implemented when the data to be synchronized arrives. Then, the correlation is calculated on the basis of the 16 local short-training symbols, and the moment at the ninth peak is viewed as the end point of the short-training symbols. Finally, long-training symbols and data symbols are designated with serial output according to the output format, with the cyclic prefix eliminated at the same time. As shown in Fig. 7, DataOutEn is the effective time for outputting one symbol; counting from the long sequence, the long-training symbols are denoted as 1 and 2 successively, and the data symbols start from 3. This scheme is viewed as one cycle of output data.
Channel estimation
Channel estimation is first performed to estimate the received signals from the time domain, according to which the estimation of frequency domain can be easily obtained [16]. Then, the received signals of the estimator can be expressed as:
$$ r(t)=s(t)\otimes h(t)+n(t), $$
where h(t) is the impulse response, r(t) represents the received signals, s(t) denotes the theoretically received signals, and n(t) is the signal noise. The estimated value of the input signal \( \widehat{s}(t) \) is produced by the convolution of inverse channel system \( \widehat{h}(t) \) that is composed of r(t) and h(t), where \( h(t)\otimes \widehat{h}(t)=\delta (t) \).
$$ {\displaystyle \begin{array}{c}\widehat{s}(t)=\left[s(t)\otimes h(t)+n(t)\right]\otimes \widehat{h}(t)\\ {}=s(t)\otimes h(t)\otimes \widehat{h}(t)+n(t)\otimes \widehat{h}(t)\\ {}=s(t)+\widehat{n}(t)\end{array}} $$
The estimation of the channel frequency response of the frequency domain can be deduced directly from the time domain. Channel frequency response H(jω) is estimated by using r(t), and the inverse channel system \( \widehat{H}\left( j\omega \right) \) is constructed by using H(jω). Therefore,
$$ {\displaystyle \begin{array}{c}\widehat{s}(t)={F}^{-1}\left\{\left[F\left\{S(t)\right\}\cdot H(jw)\right]\cdot \widehat{H}(jw)\right\}\\ {}={F}^{-1}\left\{F\;\left\{s(t)\right\}H(jw)\;\widehat{H}(jw)+N(jw)\cdot \widehat{H}(jw)\right\}\;\\ {}=s(t)+{F}^{-1}\left\{N(jw)\cdot \widehat{H}(jw)\right\}\end{array}}, $$
where \( \widehat{H}\left( j\omega \right)=\frac{1}{H\left( j\omega \right)} \).
Channel estimation is realized by the unique words (UW) inserted into the data sequence. If the UW is {xm} with a length of P, then the channel frequency response \( {\widehat{H}}_k \) can be estimated from the transmitted sequence {xm} and the received sequence {ym} after their FFT transformation to {Xk} and {Yk}.
$$ {\widehat{H}}_k=\frac{Y_k}{X_k},\qquad k=1,2,3,\ldots ,P-1 $$
The corresponding time-domain discrete signal {hm} can be initially obtained from the IFFT operation of \( {\widehat{H}}_k \) at the point P, where P denotes the length. Then, the zero-padding operation of {hm} is performed, thus obtaining the {hm} of the point M. Finally, the frequency response value \( {\widehat{H}}_k \) is acquired from the FFT operation of {hm} of the point M.
The frequency domain is calculated on the basis of the time domain as part of the channel estimation. The received signals can be expressed by the Y = XH + V matrix as follows:
$$ {J}_{LS}={\left(Y- XH\right)}^H\left(Y- XH\right), $$
where X = diag[X(0), X(1), … , X(Np − 1)], Y = [Y(0), Y(1), … , Y(Np − 1)]T, and H = [H(0), H(1), … , H(Np − 1)]T. Np is the number of UW samples. The first-order and second-order derivatives of JLS with respect to H are calculated as follows:
$$ \frac{\partial }{\partial H}{J}_{LS}=-2{X}^H\left(Y- XH\right), $$
$$ \frac{\partial }{\partial H}{\left(\frac{\partial }{\partial H}\;{J}_{LS}\right)}^H=2{X}^H\;X\ge 0. $$
The existence of a minimum follows from the non-negativity in Eq. (11). Setting Eq. (10) to 0 then yields:
$$ {\widehat{H}}_{LS}={\left({X}^HX\right)}^{-1}{X}^HY={X}^{-1}Y=\frac{Y}{X}. $$
Thus, the estimated value denoted by \( {\widehat{H}}_{LS} \) can be expressed as:
$$ {\widehat{H}}_{LS}={X}^{-1}\left( XH+V\right)=H+{X}^{-1}V=H+n, $$
where n is the estimation error, and n = X−1V. The simulation results of channel estimation based on the ModelSim platform are shown in Fig. 8.
The simulation diagram of channel estimation. The simulation results of channel estimation. The frequency domain is calculated on the basis of the time domain as part of the channel estimation
DataInRe and DataInIm are the real part and imaginary part of input data, respectively. DataInEnable is the enable signal of input data, and ChannelcoeEnable is the enable signal of output data. ChannelcoeIm and ChannelcoeRe are the real part and imaginary part of output channel estimation, respectively.
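The UW-based estimation described above reduces to a few FFT operations; a sketch is given below. It assumes the channel impulse response is shorter than the UW length P and that the UW has a flat, nonzero spectrum, as with the Chu sequence used later.

```python
import numpy as np

def ls_channel_estimate(y_uw: np.ndarray, x_uw: np.ndarray, M: int) -> np.ndarray:
    """LS estimate per Eq. (12): H_k = Y_k / X_k at the P UW points, then
    interpolation to M points by zero-padding the time-domain response."""
    P = len(x_uw)
    H_P = np.fft.fft(y_uw) / np.fft.fft(x_uw)    # P-point estimate of H
    h = np.fft.ifft(H_P)                         # time-domain {h_m}, length P
    h_M = np.concatenate([h, np.zeros(M - P, dtype=complex)])  # zero padding
    return np.fft.fft(h_M)                       # M-point frequency response
```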
Frequency-domain equalization module
Frequency-domain equalization is performed to offset the effects of channels on the received signals. Here, the minimum mean-squared error (MMSE) equalization algorithm is applied [17].
Suppose that the set of transmitted data is denoted by s (s = [s0, s1, … , sN − 1]T), N is the number of FFT points, h = [h0, … , hL − 1, 0, … , 0]T, and L is the length of the channel impulse response. Then, the received signal vector is r = [r0, r1, … , rN − 1]T. Accordingly,
$$ r=h\otimes s+v $$
where ⊗ is the cyclic convolution, and v = [v0, v1, … , vN − 1]T is the channel noise. On the basis of the FFT of Eq. (14),
$$ {R}_k={S}_k{H}_k+{V}_k, $$
where \( {H}_k=\sum \limits_{m=0}^{L-1}\left({h}_m{e}^{-j2\pi km/N}\right) \), and Rk, Sk, Hk, and Vk are the frequency-domain values of the received signal, the transmitted signal, the channel impulse response, and the additive white Gaussian noise, respectively.
If the equalization coefficient is Wk, then the frequency-domain output after equalization is:
$$ {\tilde{S}}_k={W}_k{H}_k{S}_k+{W}_k{V}_k. $$
According to its definition, the mean square error (MSE) after equalization can be deduced.
$$ {\displaystyle \begin{array}{c}\mathrm{MSE}=E\sum \limits_{k=0}^{N-1}{\left|{\tilde{S}}_k-{S}_k\right|}^2\\ {}=E\sum \limits_{k=0}^{N-1}{\left|\left[{W}_k{H}_k-1\right]{S}_k+{W}_k{V}_k\right|}^2\\ {}=\sum \limits_{k=0}^{N-1}E{\left|\left[{W}_k{H}_k-1\right]{S}_k+{W}_k{V}_k\right|}^2\\ {}=\sum \limits_{k=0}^{N-1}{F}_k\end{array}}. $$
Suppose that \( {\sigma}_N^2 \) is the noise power on the frequency domain and \( {\sigma}_S^2 \) is the signal power on the frequency domain, then:
$$ {\sigma}_N^2=E{\left|{V}_k\right|}^2, $$
$$ {\sigma}_S^2=E{\left|{S}_k\right|}^2. $$
On the basis of Eq. (18) and Eq. (19), the following can be derived:
$$ {F}_k={\left|{W}_k{H}_k-1\right|}^2{\sigma}_S^2+{\left|{W}_k\right|}^2{\sigma}_N^2. $$
Given that |z1 − z2| ≥ ||z1| − |z2|| for all z1, z2 ∈ ℂ, with equality when arg z1 = arg z2, it follows that:
$$ {F}_k\ge {\left(\left|{W}_k{H}_k\right|-1\right)}^2{\sigma}_S^2+{\left|{W}_k\right|}^2{\sigma}_N^2, $$
where Fk attains this lower bound under the condition arg WkHk = arg 1 = 0, that is,
$$ \arg {W}_k=-\arg {\mathrm{H}}_k $$
If the lower bound of Fk is denoted by y, then the minimum of y should be calculated, where:
$$ y={\left(\left|{W}_k{H}_k\right|-1\right)}^2{\sigma}_S^2+{\left|{W}_k\right|}^2{\sigma}_N^2=\left({\left|{H}_k\right|}^2{\sigma}_S^2+{\sigma}_N^2\right){\left|{W}_k\right|}^2-2\left|{H}_k\right|{\sigma}_S^2\left|{W}_k\right|+{\sigma}_S^2. $$
To derive the minimum of y:
$$ {W}_k=\frac{{H}_k^{\ast }{\sigma}_S^2}{{\left|{H}_k\right|}^2{\sigma}_S^2+{\sigma}_N^2}=\frac{{H}_k^{\ast }}{{\left|{H}_k\right|}^2+\frac{\sigma_N^2}{\sigma_S^2}}. $$
Given that \( \frac{\sigma_S^2}{\sigma_N^2}=\mathrm{SNR} \), the MMSE equalization coefficient can be expressed as \( {W}_k=\frac{H_k^{\ast }}{{\left|{H}_k\right|}^2+1/\mathrm{SNR}} \). In Eq. (24), 0 ≤ k ≤ N − 1, and SNR denotes the signal-to-noise ratio of the transmitting terminal.
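Equation (24) translates directly into a one-tap-per-subcarrier frequency-domain equalizer. A minimal Python sketch is given below; the function and variable names are ours, not from the FPGA implementation.

```python
import numpy as np

def mmse_equalize(r_block: np.ndarray, H: np.ndarray,
                  snr_linear: float) -> np.ndarray:
    """Frequency-domain MMSE equalization of one N-sample block:
    W_k = conj(H_k) / (|H_k|^2 + 1/SNR), S~ = W * R, then back to time domain."""
    R = np.fft.fft(r_block)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_linear)
    return np.fft.ifft(W * R)
```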
The simulation of signals based on the MMSE algorithm is conducted in Matlab. The following parameters are included:
Multipath channel: the corresponding power of the SUI.3 channel model is [0, − 5, − 10 dB];
Modulation mode: 16QAM;
SC-FDE system parameters: UW uses the Chu sequence, and the length is N = 64;
M = 256, and the MMSE equalization algorithm is used.
The hardware implementation block diagram of channel equalization module based on FPGA is shown in Fig. 9.
The hardware implementation block diagram of channel equalization module. The hardware implementation block diagram of channel equalization module. SC-FDE symbols in the time domain are read from "buffer of RX frame sample" by the channel equalization module and sent to the FFT module to calculate the frequency domain values of SC-FDE symbols. Frequency domain values are read from "buffer of CSI" by the CSI_ACQ module, which can complete the integration of corresponding samples meanwhile. With the complex multiplication of the corresponding sample points completed by the FDE_CORE module, the frequency domain equalization is achieved
SC-FDE symbols in the time domain are read from "buffer of RX frame sample" by the channel equalization module and sent to the FFT module to calculate the frequency domain values of SC-FDE symbols.
Frequency domain values are read from "buffer of CSI" by the CSI_ACQ module, which can complete the integration of corresponding samples meanwhile.
With the complex multiplication of the corresponding sample points completed by the FDE_CORE module, the frequency domain equalization is achieved.
Meanwhile, with the subsequent IFFT_256 module controlled by the FDE_CORE module, the sample points in the frequency domain after the equilibration are restored to the time domain and stored in the symbol buffer of SIG domain and the time buffer of DATA domain, respectively.
The simulation results of the channel equalization based on the ModelSim platform are shown in Fig. 10.
The simulation results of channel equalization. Frequency-domain equalization is performed to offset the effects of channels on the received signals, and the MMSE algorithm is applied
DataInRe and DataInIm are the real part and imaginary part of the input data, respectively. DataInEnable is the enable signal of input data. DataOutRe and DataOutIm are the real part and imaginary part of output data, respectively.
RS decoding
The design procedures of the RS decoder are as follows:
The adjoint (syndrome) polynomial s(x) of the RS code is calculated from the received code.
The error location polynomial δ(x) and the error value polynomial ω(x) are solved from the adjoint polynomial.
The error positions can be acquired by using the Chien search method to calculate the roots of the error location polynomial.
The error magnitude corresponding to each error location can be obtained from the error value polynomial by using the Forney algorithm, namely C(x) = R(x) − E(x).
After decoding, the adjoint formula of the codeword is calculated again, and the decoding result is verified by detecting whether the adjoint formula is zero or not.
According to the above procedures, the RS decoder should include four parts: the adjoint polynomial calculation module, the key equation solving module, the Chien search module, and the Forney algorithm module [18].
Specific procedures of RS decoding design are as follows:
1. Solving the adjoint polynomial of RS decoding. The parameters of RS (255,191) are as follows:
Encoding length: n = 255
Information bit length: k = 191
Parity bit length: 64
Error correcting capability: t = 32
Primitive polynomial:
$$ p(x)={x}^8+{x}^4+{x}^3+{x}^2+{x}^1+1. $$
The generating polynomial of RS (255,191) is as follows:
$$ g(x)=\left(x-{a}^0\right)\left(x-{a}^1\right)\left(x-{a}^2\right)\cdots \left(x-{a}^{63}\right). $$
The syndromes are computed by evaluating the received polynomial \( R(x)=r_0+r_1x+r_2x^2+\cdots +r_{n-1}x^{n-1} \) at a, a2, a3, … , a64. The 64 adjoint expressions of the RS (255,191) code are thereby acquired, namely s1, s2, s3, … , s64.
$$ s(x)=\sum \limits_{i=1}^{2t}{s}_i{x}^{i-1},\qquad {s}_i=R\left({a}^i\right),\quad 1\le i\le 2t. $$
2. Solving the error position polynomial. First, the error location polynomial δ(x) is obtained; then the error positions and the error value polynomial are derived from it. The error location polynomial δ(x) can be defined as:
$$ \delta (x)=\left(1-{\theta}_1x\right)\left(1-{\theta}_2x\right)\cdots \left(1-{\theta}_tx\right). $$
Here θ1, … , θt are the error locators. Expanding the right-hand side gives:
$$ \delta (x)=1+\left({\theta}_1+{\theta}_2+\cdots +{\theta}_t\right)x+\left({\theta}_1{\theta}_2+{\theta}_1{\theta}_3+\cdots +{\theta}_{t-1}{\theta}_t\right){x}^2+\cdots +\left({\theta}_1{\theta}_2\cdots {\theta}_t\right){x}^t. $$
This yields the error-locator polynomial:
$$ \delta (x)=1+{\delta}_1x+{\delta}_2{x}^2+\cdots +{\delta}_t{x}^t. $$
3. Solving the error positions. The error locations in the received polynomial R(x) = r0 + r1x + ⋯ + rn − 2xn − 2 + rn − 1xn − 1 are obtained from the roots of δ(x), found by the Chien search.
4. Ascertaining the error pattern E(x) and the corrected codeword C(x). The error-value polynomial is defined as ω(x) = s(x)δ(x) (mod x2t), which expands as:
$$ \omega (x)={s}_1x+\left({s}_2+{\delta}_1{s}_1\right){x}^2+\cdots +\left({s}_t+{\delta}_1{s}_{t-1}+\cdots +{\delta}_{t-1}{s}_1\right){x}^t+\left({s}_{t+1}+{\delta}_1{s}_t+\cdots +{\delta}_t{s}_1\right){x}^{t+1}+\cdots . $$
The following equations can be verified.
$$ {s}_{t+1}+{\delta}_1{s}_t+\cdots +{\delta}_t{s}_1=0 $$
$$ {s}_{t+2}+{\delta}_1{s}_{t+1}+\cdots +{\delta}_t{s}_2=0 $$
$$ \vdots $$
$$ {s}_{2t}+{\delta}_1{s}_{2t-1}+\cdots +{\delta}_t{s}_t=0 $$
The error-value polynomial ω(x) = ω1x + ω2x2 + ⋯ + ωtxt can be obtained with ω1 = s1, ω2 = s2 + δ1s1, … , ωt = st + δ1st − 1 + ⋯ + δt − 1s1. The error pattern \( E(x)=\sum \limits_j{Y}_j{x}^{e_j} \), where ej are the error positions, is obtained from the Forney formula \( {Y}_j=\frac{-{x}_j\omega \left({x}_j^{-1}\right)}{\delta^{\prime}\left({x}_j^{-1}\right)} \), in which xj is a root found by the Chien search and δ′ is the formal derivative of δ. The final corrected codeword C(x) is obtained by superposing the error pattern E(x) on the received codeword R(x).
5. Calculating the syndromes of the corrected codeword again. The decoding result is verified by checking whether all syndromes are zero.
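As referenced above, the following is a minimal Python sketch of the syndrome computation of step 1 over GF(2^8), using the primitive polynomial p(x) = x^8 + x^4 + x^3 + x^2 + 1 (0x11D). The table-based field arithmetic and the coefficient ordering (highest degree first) are our assumptions for illustration; the authors' FPGA design is not reproduced here.

```python
# Minimal sketch (assumed helper, not the authors' FPGA code): syndrome
# computation for RS(255,191) over GF(2^8) with primitive polynomial 0x11D.

PRIM_POLY = 0x11D
EXP = [0] * 512          # EXP[i] = a^i; doubled length avoids mod-255 reductions
LOG = [0] * 256

x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:        # reduce modulo the primitive polynomial
        x ^= PRIM_POLY
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, two_t=64):
    """Evaluate R(x) at a^1 .. a^(2t); all-zero syndromes mean no detected error."""
    synd = []
    for i in range(1, two_t + 1):
        s = 0
        # Horner's rule for R(a^i), with received[0] as the highest-degree coefficient
        for coeff in received:
            s = gf_mul(s, EXP[i]) ^ coeff
        synd.append(s)
    return synd
```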
The RS decoding is implemented on FPGA. The simulation results are shown in Figs. 11, 12, and 13.
Simulation diagram 1 of the RS decoder: the output data of the decoder, showing the decoding results
Simulation diagram 2 of the RS decoder: the output of the RS encoder is used as the input data of the RS decoder
Simulation diagram 3 of the RS decoder: the output data of the RS decoder is used as the input of the RS encoder, and the re-encoding is correct
As shown in Figs. 11 and 12, the output of the RS encoder is used as the input data of the RS decoder. The input is as follows: (1, 2, 3, …, 190, 191, 204, 5, 85, 10, 239, 109, 76, 117, 180, 235, 220, 44, 210, 158, 235, 68, 138, 211, 46, 185, 196, 249, 194, 92, 219, 237, 254, 229, 151, 239, 246, 19, 26, 219, 66, 100, 210, 157, 6, 208, 187, 169, 68, 168, 78, 28, 34, 163, 42, 134, 149, 43, 0, 88, 70, 90, 93, 129, 173, 131, 235, 192, 66, 34). The decoder output is the correct message, namely (1, 2, 3, …, 190, 191). As shown in Fig. 13, when the output of the RS decoder is fed back into the RS encoder, the original encoded data are reproduced, confirming that the decoding is correct.
16QAM demodulation module
The RS decoding is followed by the 16QAM demodulation. With the orthogonal coherent demodulation method applied, the signal is detected, decided, and converted between serial and parallel forms, producing the final output.
The expressions of demodulated I branch and Q branch are shown in Eq. (33).
$$ \left\{\begin{array}{c}{I}_{\mathrm{branch}}={S}_{\mathrm{QAM}}\cos \omega t=\frac{1}{2}{X}_k+\frac{1}{2}{X}_k\cos 2\omega t+\frac{1}{2}{Y}_k\sin 2\omega t\\ {}{Q}_{\mathrm{branch}}={S}_{\mathrm{QAM}}\sin \omega t=\frac{1}{2}{Y}_k-\frac{1}{2}{Y}_k\cos 2\omega t+\frac{1}{2}{X}_k\sin 2\omega t\end{array}\right. $$
With the double-frequency terms \( \frac{1}{2}{X}_k\cos 2\omega t+\frac{1}{2}{Y}_k\sin 2\omega t \) and \( \frac{1}{2}{X}_k\sin 2\omega t-\frac{1}{2}{Y}_k\cos 2\omega t \) removed by the low-pass filter, only \( \frac{1}{2}{X}_k \) and \( \frac{1}{2}{Y}_k \) remain, giving the output of the 16QAM demodulation:
$$ \left\{\begin{array}{c}{I}_{\mathrm{branch}}=\frac{1}{2}{X}_k\\ {}{Q}_{\mathrm{branch}}=\frac{1}{2}{Y}_k\end{array}\right. $$
The constellation of the 16QAM modulation is shown in Fig. 14; its mapping output value is d = (I + jQ) × KMOD, where \( {K}_{\mathrm{MOD}}=1/\sqrt{10} \).
The constellation of the 16QAM modulation. The I component and Q component correspond to b0b1 and b2b3 in the code element b0b1b2b3, respectively
The I component and Q component correspond to b0b1 and b2b3 in the code element b0b1b2b3, respectively. With the decision thresholds set at −2 × KMOD, 0, and 2 × KMOD, the I and Q branches can be demodulated, as in the sketch below.
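The sketch below illustrates this threshold-based hard decision. The Gray bit order assigned to the four amplitude levels is an assumption for illustration; the paper does not spell out the exact bit-to-level mapping.

```python
# Minimal sketch (assumed Gray mapping, not necessarily the authors' exact bit
# order) of 16QAM hard-decision demapping with thresholds at -2*K_MOD, 0 and
# +2*K_MOD applied independently to the I and Q branches.
import numpy as np

K_MOD = 1 / np.sqrt(10)

def demap_axis(v):
    """Map one branch value to two bits via the three decision thresholds."""
    if v < -2 * K_MOD:
        return (0, 0)    # level -3*K_MOD
    elif v < 0:
        return (0, 1)    # level -1*K_MOD
    elif v < 2 * K_MOD:
        return (1, 1)    # level +1*K_MOD
    else:
        return (1, 0)    # level +3*K_MOD

def demap_16qam(symbol):
    b0, b1 = demap_axis(symbol.real)   # I branch -> b0, b1
    b2, b3 = demap_axis(symbol.imag)   # Q branch -> b2, b3
    return (b0, b1, b2, b3)
```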
Experiment results and discussion
With the AD9361 used as the RF module and the ZC706, an ARM+FPGA SoC board, used as the digital baseband processing module, the hardware platform of the software radio system is built. The physical connection between the AD9361 and the ZC706 is shown in Fig. 15. The AD9361 board and the ZYNQ ZC706 development board are connected via FMC. A spectrum analyzer is used as the tester of the transmitting and receiving terminals during the system test, and the accuracy of the system is evaluated by observing the frequency spectra.
The physical connection between AD9361 and ZC706. The AD9361 board card and the ZYNQ ZC706 development board are connected by FMC
Joint testing is performed for the designed software radio receiver, based on the AD9361 hardware platform. The test framework of the receiving terminal is shown in Fig. 16. The transmitting central frequency, transmit gain, and bandwidth are 1.435 GHz, 15 dB, and 20 MHz, respectively. The AD9361 is connected to the ZYNQ via FMC, and the ZYNQ is connected to the spectrum analyzer via JTAG; the final results are displayed on the spectrum analyzer. The detailed procedure is as follows: signals received via the antenna are input to the AD9361, where they undergo amplification, mixing, filtering, and A/D conversion before being sent to the digital baseband processing module for processing.
The test framework of the receiving terminal. Joint testing is performed for the designed software radio receiver, based on the AD9361 hardware platform
The AD9361 is configured in the 1R1T, LVDS, and TDD working modes. The ADC frequency is set to 13 MHz, and the local frequency is set to 1.435 GHz; the result is displayed through the spectrum analyzer. The frequency spectra at the receiving terminal are shown in Fig. 17.
The frequency spectra at the receiving terminal. The AD9361 is configured in the 1R1T, LVDS, and TDD modes; the ADC frequency is set to 13 MHz and the local frequency to 1.435 GHz
Joint testing is also performed for the transmitting signals and the receiver, with the transmitting and receiving frequency spectra examined, which are shown in Fig. 18.
The transmitting and receiving frequency spectra. Joint testing is also performed for the transmitting signals and the receiver, with the transmitting and receiving frequency spectra examined
As shown in Fig. 18, the transmitting frequency spectrum is located in the upper position, while the receiving frequency spectrum is located in the lower position. The overlap of the frequency spectra of the transmitted and received signals indicates the consistency of parameters between the transmitting and receiving sides. Therefore, the transmitted signals are received accurately.
In this study, a joint experiment of the RF module and the digital baseband processing module was carried out, combining the two modules with a signal source and a spectrum analyzer. The system does not yet include a video display part or a signal compression part; these can be added to the system to broaden its applications in our future research.
Conclusions
An AD9361-based software radio system was designed, with the AD9361 serving as the RF hardware platform. The receiving function was realized, with the modules for group detection, frame synchronization, channel estimation, and frequency-domain equalization designed. Finally, the results of the software radio receiver were optimized and verified. The major conclusions from this study are as follows:
The single-carrier frequency-domain equalization is applied at the digital baseband physical layer and optimized on the basis of the SC-FDE communication protocol. The receiving function of the software radio receiver is then realized using the optimized communication protocol. The system can effectively resist the frequency-selective fading of channels, thus achieving high-rate and large-capacity communication transmission.
The peak-to-average power ratio and the sensitivity to phase noise are decreased by the MMSE equalization algorithm, thus enhancing the resistance against multipath interference.
The AD9361-based software radio receiver presented in this study has a reasonable structure, and its performance indexes are satisfactory. The proposed system can effectively increase the communication speed and capacity, remarkably ameliorate the reduction of signal quality caused by multipath fading, and overcome the poor universality and expansibility of traditional radio receivers. The results of this study can serve as a useful reference in the development of next-generation universal software radio receivers.
FFT:
Fast Fourier transform
FMC:
FPGA Mezzanine Card
FPGA:
Field-programmable gate array
IF:
Intermediate frequency
JTAG:
Joint Test Action Group
MMSE:
Minimum mean-squared error
MSE:
Mean square error
QAM:
Quadrature amplitude modulation
SC-FDE:
Single-carrier frequency domain equalization
SNR:
Signal-to-noise ratio
SPI:
Serial Peripheral Interface
TDMA:
Time-division multiple access
UW:
Unique word
C. Andrews, A passive mixer-first receiver with digitally controlled and widely tunable RF interface. IEEE J. Solid State Circuits 45(12), 2696–2708 (2010). https://doi.org/10.1109/JSSC.2010.2077151
H. Darabi, D. Murphy, M. Mikhemar, H. Wu, 2.1 A Highly Linear Inductor Less Wideband Receiver with Phase- and Thermal-Noise Cancellation (IEEE International Solid-State Circuits Conference, San Francisco, 2015), pp. 1–3. https://doi.org/10.1109/ISSCC.2015.7062850
L. Yang, Z. Zhang, W. Hou, B. Zhao, H. Zheng, Papyrus: a software platform for distributed dynamic spectrum sharing using SDRs. ACM SIGCOMM Computer Communication Review 41(1), 32–37 (2017). https://doi.org/10.1145/1925861.1925866
C.J. Loubser, T.G. Swart, Cognitive Radio DAB MAC Protocol Performance Using a CR Specific Simulator and Software Defined Radio (IEEE AFRICON, Cape Town, 2017), pp. 139–144. https://doi.org/10.1109/AFRCON.2017.8095470
A. Kamaleldin, S. Hosny, K. Mohamed, M. Gamal, A. Hussien, E. Elnader, A. Shalash, A.M. Obeid, Y. Ismail, H. Mostafa, A Reconfigurable Hardware Platform Implementation for Software Defined Radio Using Dynamic Partial Reconfiguration on Xilinx Zynq FPGA (2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, 2017), pp. 1540–1543. https://doi.org/10.1109/MWSCAS.2017.8053229
A. Marwanto, S.K.S. Yusof, M.H. Satria, Orthogonal frequency-division multiplexing-based cooperative spectrum sensing for cognitive radio networks. Telkomnika (Telecomm Comput Electron Contr) 12(1), 143–152 (2014). https://doi.org/10.12928/TELKOMNIKA.v12i1.310
R. Srinivasa, S. Kotha, S. Kumar, An approach for fixed coefficient RNS-based FIR filter. Int. J. Electron., 1–19 (2017). https://doi.org/10.1080/00207217.2017.1296593
X. Qin, L. Wang, H. Zhao, Y. Deng, Realization of spectrum sensing based on MATLAB and general software radio peripherals. Comput. Eng. 41(5), 106–110 (2015)
S. Xu, Z. Yu, K. Chang, Y. Wu, B. L, Design and implementation of fault prediction software for airborne software radio system. Meas Contr Technol 35(8), 111–114 (2016). https://doi.org/10.3969/j.issn.1000-8829.2016.08.028
Z. Cui, Design of frequency hopping communication terminal based on software radio principle. Modern Navigation 8(5), 365–371 (2017)
Y. Zhang, X. Cui, T. Da, G. Wang, Design of GNSS jamming and multipath monitoring system based on software radio. Telecommun. Eng. 57(3), 288–295 (2017)
Z. Yin, L. Cheng, Design of low power software radio platform based on ZYNQ7000. Wireless Internet Technol 37(3), 31–32 (2018)
M. Mishra, A. Potnis, P. Dwivedy, S.K. Meena, Software Defined Radio Based Receivers Using RTL — SDR: a Review (2017 International Conference on Recent Innovations in Signal processing and Embedded Systems (RISE), Bhopal, 2017), pp. 62–65. https://doi.org/10.1109/RISE.2017.8378125
Y. Qi, X. Chen, Z. Xie, S. Yuan, An Improved MMSE-RISIC Equalization Algorithm Based on STBC-SC-FDE (2018 IEEE 3rd International Conference on Signal and Image Processing (ICSIP), Shenzhen, 2018), pp. 310–314. https://doi.org/10.1109/SIPROCESS.2018.8600476
K. Kumar, S. Pillai, S. Sagar, S.K. Parambeth, N. Prem Krishnan, Implementation of Digital Pre-Distortion for Power Amplifier Linearisation in Software Defined Radio (Twenty-third National Conference on Communications (NCC), Chennai, 2017), pp. 1–6
J. David Cepeda, S.I. Rodríguez, M. Rico-Martínez, C. Daniel Muñoz, M. Varón, I.T. Monroy, Performance Evaluation of a Real Time OFDM Radio over Fiber System at 2.5 GHz Using Software Defined Radio SDR (SBMO. IEEE MTT-S International Microwave and Optoelectronics Conference (IMOC), 2017), pp. 1–5. https://doi.org/10.1109/IMOC.2017.8121094
E. Balevi, A.Ö. Yιlmaz, Analysis of frequency domain oversampled MMSE SC-FDE. IEEE Commun. Lett. 20(2), 232–235 (2016)
W. Lu, Y. Liang, Y. Liu, Z. Liang, W. Zhang, The design of an RS decoder based on the mCS-RiBM algorithm for 100 Gb/s optical communication systems. IEEE Trans Circuits Syst Express Briefs 66(1), 76–80 (2019)
This work was supported by the Project of Science and Technology of Shaanxi (No.2018GY-151).
The authors declare that all the data and materials in this manuscript are available.
College of Communication and Information Engineering, Xi'an University of Science and Technology, Xi'an, China
Feng Tian, Hanqing Li & Liangchen Yuan
The contributions of all authors are equal in this manuscript, and all authors read and approved the final manuscript.
Correspondence to Feng Tian.
Feng Tian is currently an associate professor of College of Communication and Information Engineering, Xi'an University of Science and Technology.
Hanqing Li is currently pursuing an MS degree at the College of Communication and Information Engineering, Xi'an University of Science and Technology.
Liangchen Yuan is currently pursuing an MS degree at the College of Communication and Information Engineering, Xi'an University of Science and Technology.
Tian, F., Li, H. & Yuan, L. Design and implementation of AD9361-based software radio receiver. J Wireless Com Network 2019, 95 (2019) doi:10.1186/s13638-019-1420-6
Frequency-domain equalization
Why does a negative reward for every step really encourage the agent to reach the goal as quickly as possible?
If we shift the rewards by any constant (which is a type of reward shaping), the optimal state-action value function (and so optimal policy) does not change. The proof of this fact can be found here.
If that's the case, then why does a negative reward for every step encourage the agent to quickly reach the goal (which is a specific type of behavior/policy), given that such a reward function has the same optimal policy as the shifted reward function where all rewards are positive (or non-negative)?
More precisely, let $s^*$ be the goal state, then consider the following reward function
$$ r_1(s, a)= \begin{cases} -1, & \text{ if } s \neq s^*\\ 0, & \text{ otherwise} \end{cases} $$
This reward function $r_1$ is supposed to encourage the agent to reach $s^*$ as quickly as possible, so as to avoid being penalized.
Let us now define a second reward function as follows
\begin{align} r_2(s, a) &\triangleq r_1(s, a) + 1\\ &= \begin{cases} 0, & \text{ if } s \neq s^*\\ 1, & \text{ otherwise} \end{cases} \end{align}
This reward function has the same optimal policy as $r_1$, but does not incentivize the agent to reach $s^*$ as quickly as possible, given that the agent does not get penalized for every step. So, in theory, $r_1$ and $r_2$ lead to the same behavior. If that's the case, then why do people say that $r_1$ encourages the agent to reach $s^*$ as quickly as possible? Is there a proof that shows that $r_1$ encourages a different type of behaviour than $r_2$ (and how is that even possible given what I have just said)?
reinforcement-learning proofs reward-shaping reward-functions
nbro ♦
$\begingroup$ I have just found that a related question had already been asked, but my question is different, in that I am looking for a proof (or, at least, a good logical argument) that shows that $r_1$ and $r_2$ lead to different behaviours, although they shouldn't. $\endgroup$ – nbro ♦
$\begingroup$ The linked proof is not valid for episodic problems. More concretely, it assumes that absorbing states are also modified, but that would break the episodic nature (because reward values other than zero would be processed after the end of an episode) $\endgroup$ – Neil Slater
$\begingroup$ @NeilSlater Why do you say that it assumes that "absorbing states" are modified? Do you mean that I assumed that the reward for the terminal/absorbing state can also be changed (i.e. be different than zero)? Maybe you should comment under the answer/proof to that linked post, and we can discuss there the limitations of that proof. $\endgroup$ – nbro ♦
$\begingroup$ I think here "quickly" means that the training itself will be faster. If I imagine a gridworld problem for a moment, both $r_1$ and $r_2$ will result in a policy that gives the shortest path to the goal cell. I would say negative rewards encourage exploration and "quickly" means that we converge to the optimal policy quicker. $\endgroup$ – Hai Nguyen
$\begingroup$ @HaiNguyen You may be right (i.e. the final policy may be the same, but the way you get to it may be different), but I would like to see a formal proof or, at least, a step-by-step example that shows that's the case. We probably can come up with such an example in a very simple environment with 2-3 states or in a simple grid world. $\endgroup$ – nbro ♦
Your examples are equivalent. But it is possible to find a constant yielding a different optimal policy.
Your examples are absolutely equivalent. The agent maximizes the reward, and the only way to do so is by reaching $s^*$.
Consider $r_3$ :
$$ r_3(s, a)= \begin{cases} 1, & \text{ if } s \neq s^*\\ 2, & \text{ otherwise} \end{cases} $$
With a sufficiently large $\gamma$, moving infinitely without reaching $s^*$ is now the optimal solution.
For the generic case
$$ r_4(s, a)= \begin{cases} \alpha, & \text{ if } s \neq s^*\\ \beta, & \text{ otherwise} \end{cases} $$
the threshold is found by comparing the discounted returns $\alpha + \gamma\alpha + \gamma^2\alpha + \dots + \gamma^{t_m - 1}\alpha$, where $t_m$ is the maximum episode length, and $\alpha + \gamma\alpha + \dots + \gamma^{t^* - 2}\alpha + \gamma^{t^* - 1}\beta$, where $t^*$ is the length of the episode following the fastest policy.
In the example of $r_3$, it is trivial to find examples where the fastest policy isn't optimal. Imagine a race: the agent starts on the left and gets either $\alpha$ or $\beta$ points, depending on where it is. With $\gamma = 0.9$ and no time limit (infinite episodes), the optimal policy is to move randomly, except that in the second-to-last house it should avoid the goal state. With $\gamma = 0.1$, the optimal policy is to move randomly (not really, there would probably be a slight advantage in moving right), except that in the second-to-last house it should enter the goal.
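Here is a small numerical check of that claim (my own sketch; the step counts and truncation horizon are arbitrary). It compares the discounted return of entering the goal on the third step against wandering forever, for both discount factors:

```python
# Compare discounted returns under r3 (alpha = 1 per step, beta = 2 at the goal)
# for an agent that enters the goal at step t_star vs. one that never enters it.

def return_reach_goal(alpha, beta, gamma, t_star):
    """alpha for the first t_star - 1 steps, then beta, then the episode ends."""
    return sum(alpha * gamma**k for k in range(t_star - 1)) + beta * gamma**(t_star - 1)

def return_wander(alpha, gamma, horizon=10_000):
    """Truncated approximation of the infinite sum alpha / (1 - gamma)."""
    return sum(alpha * gamma**k for k in range(horizon))

for gamma in (0.9, 0.1):
    reach = return_reach_goal(alpha=1, beta=2, gamma=gamma, t_star=3)
    wander = return_wander(alpha=1, gamma=gamma)
    print(f"gamma={gamma}: reach goal -> {reach:.3f}, wander forever -> {wander:.3f}")

# gamma=0.9: reach ~ 3.52, wander ~ 10.00 -> avoiding the goal is optimal
# gamma=0.1: reach ~ 1.12, wander ~ 1.11  -> entering the goal is (barely) better
```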
BlueMoon93
$\begingroup$ This answer is unclear (to me at least). You say "With a sufficiently large $\gamma$". Why can't that also be said for $r_1$ and $r_2$? $\gamma \in [0, 1)$: this is an assumption. I also don't get your point here "it is trivial to find examples where the fastest policy isn't optimal.". Note that we are talking about "optimal policies" associated with theoretically equivalent reward functions (i.e. theoretically, they have the same optimal policy or policies, which follows from the proof). $\endgroup$ – nbro ♦
$\begingroup$ I think you're assuming that you have some RL algorithm that, in a finite number of steps, does not reach the goal and prefers to wander around, given that seems to be the optimal policy. And, as someone had already stated in the comments, in practice, these reward functions may lead to different policies, in a finite number of steps, with some algorithm like Q-learning. However, your answer is not clear enough. $\endgroup$ – nbro ♦
$\begingroup$ What are these $a$ and $b$ letters in your diagram? It's not even clear what algorithm you're assuming (if any). It's not clear where this conclusion "the optimal policy is to move randomly, but in the second-to-last house, avoid the goal state" comes from too. As far as I am concerned, this answer is not clear. Maybe start from your definition of the optimal Bellman equation and express it in terms of rewards. If that's where you started from, then find the contradiction to (or mistake in) the proof. Or maybe start by stating your assumptions. $\endgroup$ – nbro ♦
Spatial registration of neuron morphologies based on maximization of volume overlap
Ajayrama Kumaraswamy ORCID: orcid.org/0000-0001-7928-877X1,
Kazuki Kai2,
Hiroyuki Ai2,
Hidetoshi Ikeno3 &
Thomas Wachtler1
Morphological features are widely used in the study of neuronal function and pathology. Invertebrate neurons are often structurally stereotypical, showing little variance in gross spatial features but larger variance in their fine features. Such variability can be quantified using detailed spatial analysis, which however requires the morphologies to be registered to a common frame of reference.
We outline here new algorithms — Reg-MaxS and Reg-MaxS-N — for co-registering pairs and groups of morphologies, respectively. Reg-MaxS applies a sequence of translation, rotation and scaling transformations, estimating at each step the transformation parameters that maximize spatial overlap between the volumes occupied by the morphologies. We test this algorithm with synthetic morphologies, showing that it can account for a wide range of transformation differences and is robust to noise. Reg-MaxS-N co-registers groups of more than two morphologies by iteratively calculating an average volume and registering all morphologies to this average using Reg-MaxS. We test Reg-MaxS-N using five groups of morphologies from the Drosophila melanogaster brain and identify the cases for which it outperforms existing algorithms and produces morphologies very similar to those obtained from registration to a standard brain atlas.
We have described and tested algorithms for co-registering pairs and groups of neuron morphologies. We have demonstrated their application to spatial comparison of stereotypic morphologies and calculation of dendritic density profiles, showing how our algorithms for registering neuron morphologies can enable new approaches in comparative morphological analyses and visualization.
Since Ramon y Cajal's 'Neuron Theory' [1], neuronal morphology has been a prominent field of study in Neuroscience. With early hand-drawn illustrations, later camera lucida tracings and more recent digital reconstructions [2], scientists have investigated the structure of individual nerve cells to better understand its role in neuronal function and pathology. Using modern imaging techniques and reconstruction algorithms, labs from around the world are producing huge numbers of detailed 3D morphologies [3, 4], and databases have been developed to collect and host such data [5].
A prominent application of neuron morphology is in comparative studies aiming to quantify the inter-group and intra-group variability of neurons. Neuronal shape and structure have been known to vary widely, even across specimens of a single species, making their characterization and classification a very difficult task [6]. Although long investigated [7, 8], the general principles underlying such diverse structures have largely been elusive, with a few widely applicable ones being uncovered only in the last decade [9–12]. Many different approaches with increasingly complex methods have therefore been used in the investigation of neuronal shape and structure.
A common approach has been to statistically test the variance of whole-cell scalar measures [13, 14] of neuronal morphologies within and between groups. Although these methods have been successful in some cases [15–17], they have proven unsuitable for quantifying finer changes in topology and morphology [15, 18].
The next finer level of quantification involves dividing each morphology into concentric disks or shells about pre-identified centering points and grouping topologically or morphologically equidistant regions from different individuals. For each such set of corresponding regions, the statistical variability of morphological and topological measures, like the number of dendrites [19–21], is computed within and across groups to characterize morphologies. Although this approach has been successfully used to quantify inter-group and intra-group variability in several studies of specific cell types [22–25], it has been found to be inadequate for morphologies that have similarly complex structures but differ in fine spatial distributions of morphological and topological features [15, 18]. For such cases, Mizrahi et al. [18] illustrated the use of Hausdorff-distance-based features by quantifying the overall spatial dissimilarity between morphologies at different spatial scales. More recently, Kanari et al. [26] proposed a novel feature based on radial distance and topological "persistence" of dendrites and showed that a distance measure based on it could distinguish groups of complex morphologies with fine differences. A shortcoming of these approaches is that regions that are morphologically or topologically equidistant are lumped together for analysis, which can lead to dilution or cancellation of differences. Another drawback is the requirement for identification of corresponding centering points across different specimens, especially for invertebrates, for which the somas are "segregated" [27] and variably located (for example, see Additional file 1, which visualizes classified groups of morphologies from Drosophila melanogaster).
For localization of inter-group and intra-group differences in morphological features, a spatial correspondence needs to be established between regions, in other words, the morphologies need to be co-aligned or co-registered. Several recent studies have proposed methods for such co-registration of morphologies and used them to compare morphologies.
Fiduciary markers can be used to register the original image data to a standard brain before extracting morphologies [28, 29]. Although this approach is very effective for brain regions with an existing standard brain [30–32], construction of a new standard brain is beyond the means of individual researchers as it requires a huge concerted effort. Furthermore, even for the cases where brain atlases are available, registration of individual morphologies can be ineffective due to lack of sufficient fiduciary markers in the brain region of interest. Hence methods that co-register morphologies without requiring external information are needed.
Other studies have presented co-registration methods that do not need fiduciary markers. Mizrahi et al. [18] implemented a method consisting of a translation for matching landmarks and rotation about one axis based on radii of ganglia. BlastNeuron [33] uses an affine registration method based on pointwise Euclidean distances and RANSAC sampling [34] as a preprocessing step for establishing detailed spatial and topological correspondence between morphologies. Several Iterative Closest Point (ICP) based methods from computer vision and biomedical imaging are also applicable, specifically the ones that can handle morphologies scaled differently along different axes [35, 36]. All these methods use measures of dissimilarity based on pointwise Euclidean distances for registration and hence seek a solution of point-to-point or surface-to-surface overlap, which can be hard to achieve for neuron morphologies, due to natural biological variation in their fine spatial structures. This has also been a major consideration in the construction and application of brain atlases [37]. Even neurons that have highly consistent global spatial features show considerable variation in their lower order branches [18, 37]. Moreover, the spatial region occupied by dendritic arbor has been shown to be important for the classification and synthesis of morphologies [15] and for investigating the role of single neuron morphology in the population [38]. This is consistent with dendrites and axons occupying specific spatial regions for making synaptic connections, while, within these regions, there can be variability in the exact arborization patterns at fine spatial scales [10]. Therefore, our approach aims to match the volume occupied by dendritic arbors at different spatial scales instead of seeking a point-to-point match between morphologies. Specifically, affine transformations are applied to blurred volume representations of morphologies at different spatial scales (Fig. 1) to maximize spatial overlap between volumes occupied by them. Using this approach, we present Reg-MaxS (Registration based on Maximization of Spatial overlap) and Reg-MaxS-N for co-registering pairs and groups of morphologies, respectively.
Volume representation of morphologies and spatial dissimilarity profiles at different voxel sizes illustrated using planar morphologies. Top row: Two example planar morphologies with volume representations at different voxel sizes. Circles visualize SWC points and lines their connectivity, with circle sizes indicating the diameter of the points. The two morphologies are identical but are rotated against each other about their centroids. Their discretized volume representations at corresponding voxel sizes are indicated by the filled squares. Bottom row: Variation of spatial dissimilarity between the morphologies at different voxel sizes as one of the morphologies (red morphology in top row) was rotated about its centroid. Dissimilarity was quantified using the spatial non-centric measure (see main text). The actual rotation difference between the morphologies is indicated by the red line. With decreasing voxel size, spatial dissimilarity profiles show increasing number of local minima (green arrows)
We describe here algorithms for co-registration of morphologies based on maximizing spatial overlap. Such an approach requires defining a measure of spatial dissimilarity between morphologies and a strategy for finding transformations that minimize this dissimilarity. We discuss these aspects in the following subsections.
Measures of spatial dissimilarity
Our algorithms approach spatial dissimilarity based on the overlap between volumes occupied by morphologies at different spatial scales. The following definition for volume occupied by morphologies is used.
Representing the volume of a morphology
A common way of representing a neuron's three dimensional structure is by using the SWC format [14, 39], which represents a binary tree embedded in three dimensional space. Each point or node has, apart from its three spatial coordinates, a radius associated with it. With these features, every parent-child pair of nodes can be used to construct a frustum, and consequently a set of connected frusta can be constructed from a tree structure, which then represents the neuronal morphology. In our algorithms, to extract a volume representation of a morphology described in the SWC format, the three dimensional space containing the morphology is discretized into a set of equally sized cubic voxels (Fig. 1 top row). The voxels are positioned so that there is a voxel with its center at the origin of the space, and the edge length of a voxel, which we term "voxel size", is the most important parameter of this volume discretization. Among these voxels, those that contain at least one point of the morphology are identified, and the resulting set of voxels is used to represent its volume.
Measures of spatial dissimilarity for two morphologies
Given two morphologies A and B, we define spatial dissimilarity (D) from their volume representations setA and setB as:
$$\begin{array}{*{20}l} D(setA,\ setB) &= \frac{n(setA\ -\ setB) + n(setB\ -\ setA)}{n(setA\ \cup \ setB)}\\ &= 1 - \frac{n(setA\ \cap \ setB)}{n(setA\ \cup \ setB)} \end{array} $$
where n() represents the number of elements in a set, and ∪ and ∩ represent the set union and set intersection operators, respectively. This measure essentially quantifies the spatial overlap between two morphologies normalized by their total volume.
Our algorithms use two measures of spatial dissimilarity, which we call "centric" and "non-centric" measures. The non-centric measure calculates the spatial dissimilarity between morphologies based on the values given, without applying any transformations. This measure is used when estimating translation and rotation differences between morphologies. The centric measure first translates one of the morphologies so that its centroid coincides with that of the other and calculates spatial dissimilarity using the volumes of the resulting morphologies. This measure is used when estimating scaling differences.
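For concreteness, the following is a minimal Python sketch of the volume representation and the non-centric measure (the helper names are ours and this is not the released Reg-MaxS code):

```python
# Minimal sketch: discretize SWC points into a set of voxel index triples and
# compute the overlap-based spatial dissimilarity D between two morphologies.
import numpy as np

def voxel_set(points, voxel_size):
    """points: (N, 3) array of SWC x, y, z coordinates. The grid has a voxel
    centered at the origin, so rounding to the nearest index is sufficient."""
    idx = np.round(np.asarray(points) / voxel_size).astype(int)
    return set(map(tuple, idx))

def dissimilarity(set_a, set_b):
    """D = 1 - |A intersect B| / |A union B| (the non-centric measure)."""
    return 1.0 - len(set_a & set_b) / len(set_a | set_b)
```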
Measures of spatial dissimilarity for a group of morphologies
We define a measure for more than two morphologies based on voxel occupancy in the following paragraphs.
Given a group of morphologies, occupancy of a voxel is defined as the total number of morphologies of the group that have at least one point belonging to the voxel. A histogram of voxel occupancy values is calculated using all voxels with non-zero occupancy. A weighted histogram is created by multiplying each count of the histogram by its voxel occupancy. A normalized histogram is created by normalizing the weighted histogram by its sum.
It is desirable that a perfectly co-registered group of morphologies, i.e., a group with each morphology occupying the same set of voxels, has a spatial dissimilarity of zero. The normalized histogram of such a group would have a value of one at voxel occupancy equal to the size of the group and zero for all other values of voxel occupancy. Larger deviation from such a normalized histogram indicates larger spatial dissimilarity. Therefore, we define spatial dissimilarity of a group of morphologies as the distance between its normalized histogram and the normalized histogram corresponding to perfect spatial overlap, quantified by Earth-Mover-Distance [40].
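A minimal sketch of this occupancy-based group measure follows, using SciPy's wasserstein_distance as the Earth Mover's Distance and the voxel sets produced by the sketch above (again, our helper names, not the released code):

```python
# Minimal sketch of the occupancy-based dissimilarity for a group of
# morphologies, each represented as a set of voxel index triples.
import numpy as np
from scipy.stats import wasserstein_distance

def group_dissimilarity(voxel_sets):
    n = len(voxel_sets)
    occupancy = {}
    for vs in voxel_sets:                        # count morphologies per voxel
        for v in vs:
            occupancy[v] = occupancy.get(v, 0) + 1
    counts = np.bincount(list(occupancy.values()), minlength=n + 1)[1:]
    weighted = counts * np.arange(1, n + 1)      # weight each count by its occupancy
    normalized = weighted / weighted.sum()
    perfect = np.zeros(n)                        # perfect overlap: all mass at occupancy n
    perfect[-1] = 1.0
    support = np.arange(1, n + 1)
    return wasserstein_distance(support, support, normalized, perfect)
```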
Estimating best transformations
In our approach, morphologies are co-registered by repeatedly removing rotation, scaling or translation differences. These differences are estimated using a multi-scale method based on exhaustive searches, which are described in the following paragraphs. Since the measures defined above show multiple local minima over the space of transformations, especially when working at low voxel sizes (Fig. 1), gradient based optimization techniques are not suitable.
Exhaustive search is a basic search algorithm where all candidates from the search space are sequentially generated and tested to find the solution which optimizes a certain criterion. To illustrate this with the example of estimating the rotational difference between two morphologies, exhaustive search can be formulated as sequentially generating all possible rotations, applying them to one of the morphologies, calculating spatial dissimilarity for each of them with the reference and choosing that rotation which leads to the least dissimilarity. However, the number of possible rotations is infinite. Therefore, an approximate estimate is obtained by generating a discrete set of equally spaced rotations from a plausible region of the rotation space and exhaustively searching among these rotations for the optimal rotation. This can be implemented by parametrizing rotation, sampling the plausible range of each parameter uniformly with a certain inter-sample-interval, and exhaustively searching all combinations of the resulting parameters (for implementation details see Additional file 2).
Multi-scale estimation
Using exhaustive search on a discretized search space imposes a trade-off between accuracy of the resulting estimate and the computational cost associated with its calculation. To reduce this computational cost, our algorithms use the strategy of hierarchical or multi-resolution matching [41, 42] which has been successfully used to speed up and reduce errors of 3D image registration methods. Starting at the largest voxel size, it runs an exhaustive search over an equally spaced discrete set of plausible parameters to find an estimate. The exhaustive search at the next lower voxel size is run over a smaller region around this estimate determined by its uncertainty (see Additional file 2 for more details). Thus estimates are progressively refined by running exhaustive searches over a sequence of discretized volumes generated using decreasing values of voxel size.
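A minimal sketch of this multi-scale exhaustive search follows, restricted for brevity to a single rotation axis and reusing voxel_set and dissimilarity from the sketch above; the voxel-size sequence, search window, and number of grid steps are illustrative assumptions:

```python
# Minimal sketch of multi-scale exhaustive search for a rotation difference
# about the z-axis: at each voxel size, test a grid of candidate angles and
# shrink the search window around the best candidate for the next finer scale.
import numpy as np

def rotate_z(points, angle):
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return np.asarray(points) @ R.T

def estimate_rotation(test_pts, ref_pts, voxel_sizes=(40, 20, 10),
                      window=np.pi / 6, steps=21):
    center = 0.0
    for vs in voxel_sizes:
        ref_set = voxel_set(ref_pts, vs)
        candidates = np.linspace(center - window, center + window, steps)
        scores = [dissimilarity(voxel_set(rotate_z(test_pts, a), vs), ref_set)
                  for a in candidates]
        center = candidates[int(np.argmin(scores))]
        window = candidates[1] - candidates[0]   # refine: window shrinks to one grid step
    return center
```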
Reg-MaxS
Using this multi-scale estimation method to determine transformation differences between morphologies, Reg-MaxS iteratively applies transformations to remove determined differences until no transformation reduces the spatial dissimilarity between the morphologies any further. It first translates one of the morphologies so that its center coincides with the other. It then applies a sequence of translation, rotation and scaling transforms to minimize the spatial dissimilarity between morphologies. The order in which the different transformations are applied is determined based on how the application of one transformation influences the subsequent estimation of another transformation difference.
Rotation and translation do not affect each other, i.e., if there are only rotation and translation differences between morphologies, it does not matter whether the rotation difference is removed first and then the translation difference or vice versa. However, scaling and rotation/translation affect each other, i.e., applying a scaling affects a subsequent estimation of a translation/rotation difference and vice versa. Hence, Reg-MaxS applies a sequence of translation/rotation transforms until no translation or rotation can reduce spatial dissimilarity further. Then it applies a scaling transform. This is followed again by a set of translation/rotation transforms which is then followed again by a scaling. This iteration of alternatively applying a set of translation/rotation and scaling is continued until none of the transforms can decrease the spatial dissimilarity between the morphologies any further. Finally, the iteration at which spatial dissimilarity was minimized is chosen as the final solution. (see Additional file 2 for actual algorithm). Note that Reg-MaxS does not handle reflections. Any reflections must be removed before the algorithm is applied.
Reg-MaxS-N
Reg-MaxS-N is an algorithm for co-registering multiple morphologies. It uses Reg-MaxS for co-registering pairs of morphologies and is based on "iterative averaging" [43], which has been successfully used to generate several standard brain atlases [43–45]. It is an iterative algorithm which, in each iteration, uses a reference volume and registers all morphologies to it. From the resulting registered morphologies, it generates an "average volume", which is then used as the reference in the following iteration. For the first iteration, the volume occupied by one of the morphologies to be registered is chosen as the initial reference. The iteration stops when all pairwise registrations of an iteration are rejected (see "Accepting a pairwise registration" section). Finally, the iteration at which the occupancy-based measure of the morphologies was minimized is chosen as the final solution (see Additional file 2 for the actual algorithm).
Computing the average volume
There are several ways of generating an average volume from a group of registered morphologies. In image stack registration paradigms, where voxel values are multi-valued and numerical (e.g., for grayscale image stacks), an average of a set of images is generated by averaging the value for each voxel across the set of images. In other problems where voxel values are non-numerical (string labels for example, as in [43]), a democratic policy is used, where the most frequently occurring value is chosen for each voxel. However, in our formulation each voxel takes one of two values, '1' or '0', indicating whether it contains at least one point of the morphology or not. Using a democratic policy would mean that the average retains only those voxels for which more morphologies have '1's than '0's. For those cases where some parts of the morphologies have not yet overlapped at the end of the first iteration, this policy would remove those parts from the average. Since the morphologies are registered to this average in the following iteration, those parts would no longer be taken into account for registration. Instead, we use a more conservative approach and assign a voxel in the average volume to be '1' if at least one of the morphologies being averaged has a value '1'. In other words, the average volume of a given set of morphologies is calculated as the union of the voxel sets of all the morphologies. This ensures that each morphology is completely represented in the average and thereby contributes equally in determining the final registration.
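Under the set representation used above, this conservative average is simply the union of the voxel sets, as in the following one-function sketch:

```python
# Minimal sketch: the "average volume" of a group of registered morphologies is
# the union of their voxel sets, so every morphology stays fully represented.
def average_volume(voxel_sets):
    avg = set()
    for vs in voxel_sets:
        avg |= vs
    return avg
```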
Initial approximate registration
For the first iteration, an initial approximate registration is performed by matching centroids. For all subsequent iterations, no initial registration is applied.
Restricting total scaling
In every iteration, Reg-MaxS-N uses Reg-MaxS for registering morphologies to an average volume. A parameter of Reg-MaxS is the range of values of scales over which Reg-MaxS searches to find the scale that, when applied to the test morphology, minimizes its spatial dissimilarity with the reference. However, if this range of possible scales is constant, and Reg-MaxS-N repeatedly aligns the morphologies to the average volume of the previous iteration, it would scale the morphologies larger and larger to stretch the dimensions which show high spatial dissimilarity. If such scaling is not constrained, the morphologies would become disproportionately and unrealistically large to achieve a high similarity value. Hence, Reg-MaxS-N constrains the total scaling that is applied to a morphology. It keeps track of the total scaling that has been already applied to a morphology at the end of each iteration and reduces the amount of scaling that can be applied to it in the next iteration. This prevents the total scaling from becoming unrealistic.
Normalizing final morphologies
As explained above, since Reg-MaxS-N repeatedly registers morphologies to the average of the previous iteration, the final morphologies would have translation, rotation and scaling differences with the initial reference morphology, i.e., the reference morphology of the first iteration. For further analysis on these final registered morphologies, it is convenient to transform them such that they are comparable to the original reference morphology. Thus, Reg-MaxS-N calculates the sum total of all translation, rotation and scaling transforms applied to the original reference morphology over all iterations and applies the inverse of this total transformation to all the final registered morphologies. This makes all of them comparable with the original reference morphology.
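A minimal sketch of this normalization, under the assumption that every transform applied to the reference is available as a 4×4 homogeneous matrix (the released implementation may store transforms differently):

```python
# Minimal sketch: compose the transforms applied to the original reference over
# all iterations and apply the inverse to bring registered morphologies back
# into the reference's original frame.
import numpy as np

def accumulate(transforms):
    """Compose a list of 4x4 affine matrices, applied in the given order."""
    total = np.eye(4)
    for T in transforms:
        total = T @ total
    return total

def normalize_points(points, transforms_applied_to_reference):
    inv = np.linalg.inv(accumulate(transforms_applied_to_reference))
    homo = np.hstack([np.asarray(points), np.ones((len(points), 1))])
    return (homo @ inv.T)[:, :3]
```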
Accepting a pairwise registration
At each step, Reg-MaxS uses the multi-scale method for determining transformation differences. In the multi-scale method, the final estimate is determined at the lowest voxel size of the algorithm. Thus, Reg-MaxS tries to minimize spatial dissimilarity between two morphologies at this lowest voxel size. Doing so could lead to an increase in spatial dissimilarity at a higher voxel size. This is acceptable, since we want an exact or a very large overlap between the volumes of the morphologies. However, when working iteratively with a group of morphologies, the reference corresponds to an actual morphology only for the first iteration. For all other iterations, it is a conservative "average" representing the union of the volumes of several morphologies, which does not represent any single morphology. Sacrificing spatial overlap at a higher voxel size for spatial overlap at a lower voxel size can cause over-fitting, in the sense that parts which do not necessarily correspond to each other would end up being randomly matched. Hence, a morphology registered to an average is accepted only if spatial dissimilarity at the highest voxel size has decreased. If the spatial dissimilarity at the highest voxel size has remained the same, then the spatial dissimilarity at the next highest voxel size is considered, and so on. When a registration is not accepted, the test morphology is itself designated as the registered morphology.
Testing the methods
To validate Reg-MaxS and Reg-MaxS-N, we tested them on several groups of morphologies. We defined measures for quantifying performance and calculated them for each of the test cases. Comparing these measures, we identified the cases where the algorithms performed poorly and investigated the reason behind them. In this section, we describe the morphologies and performance measures used for testing the algorithms.
Morphologies used for testing
Synthetic Morphologies used to test Reg-MaxS
To illustrate its working and explore its limitations, we applied Reg-MaxS to synthetic data generated from a morphology of a visual neuron from the blowfly [15] (Fig. 2b green) obtained from NeuroMorpho.org [2]. The morphology is nearly two dimensional and has a dense dendritic arbor with a thick axon which projects to a couple of nearby regions.
Examples of pairwise co-registration of morphologies using Reg-MaxS. Results of pairwise co-registration of a morphology (green) and three versions of it (blue, magenta and red) transformed by random translations, rotations and scaling. In each example, Reg-MaxS was applied to co-register a transformed morphology to the reference. a Distribution of corresponding point pairs distances between the resulting morphologies and the reference. Box plots extend between first and third quartiles with the median indicated by a black line while whiskers indicate the extrema. The red dashed line indicates the smallest voxel size used for the co-registrations. The Y-axis has been scaled to focus on distances in the range zero to the lowest voxel size, which indicate good registration performance. Asterisk indicates whether corresponding point pairs were significantly closer than the smallest voxel size used according to Signs test at 1% significance level. b The morphologies before and after co-registration. Reg-MaxS was successful in removing the transformation differences between the morphologies for Example1 and Example2 as shown by distribution of distances in (a) and close alignment in (b, "After"). For Example 3, which showed a high degree of anisotropic scaling (MAS=0.37), some scaling differences remained
We first created a set of 10 noisy morphologies by adding independent zero-mean Gaussian noise of standard deviations (std) 1, 3, 5,...,17, 19 μm to the points of the morphology. Next, 100 different random transformations were constructed by drawing translations from a uniform distribution over [-20, 20] μm, rotations from a uniform distribution over [-30, 30] degrees and scaling from a uniform distribution over [0.5, 1/0.5]. Each transformation was applied to the set of ten noisy morphologies to generate one hundred such sets. In addition, 1000 noiseless morphologies were generated by applying 1000 different random transformations constructed as above to the original noiseless morphology. To summarize, we used 2000 transformed morphologies: (1000 without noise) + (100 with noise of std 1 μm) + (100 with noise of std 3 μm) +.... + (100 with noise of std 19 μm).
Morphologies used to test Reg-MaxS and Reg-MaxS-N
Table 1 describes the five groups of neuron morphologies from Drosophila melanogaster used for testing Reg-MaxS and Reg-MaxS-N. Morphologies within a group have stereotypic structure but each group shows a different three dimensional dendritic arborization (see Additional file 1).
Table 1 Neurons from Drosophila melanogaster used for testing Reg-MaxS and Reg-MaxS-N
All the morphologies were generated from image stacks of the FlyCircuit Database [31]. The morphologies reconstructed without registering to any standard brain atlas ("non-standard" morphologies) were obtained from NeuroMorpho.org [2]. Morphologies which were reconstructed after registering to a Drosophila standard brain [30, 46] ("standardized" morphologies) were obtained from Dr. Gregory Jefferis.
Measures for quantifying performance of Reg-MaxS
Reg-MaxS was evaluated by applying it to register a test morphology to a reference and calculating residual errors based on the Euclidean distances of corresponding point pairs between result and reference morphologies. When synthetic morphologies were used, the test morphologies were randomly transformed versions of the reference and hence a pointwise correspondence was readily available. When real morphologies were used, test and reference morphologies were from the group 'LCInt' and no such correspondence was available. In this case, correspondences were defined by choosing the nearest neighbor among the test SWC points for every SWC point of the reference morphology.
Measures of performance:
The residual error above between result and reference morphologies was quantified as follows. Given a reference morphology P and a result morphology Q1, let {p1,p2,⋯,p m } be the SWC points of P and \(\{q_{p_{1}}, q_{p_{2}}, \cdots, q_{p_{m}}\}\) be their corresponding points in Q1. From these points, a set of Euclidean distances \(\left \{d_{1}^{Q_{1}}, d_{2}^{Q_{1}}, \cdots, d_{m}^{Q_{1}}\right \}\) were calculated as follows:
$$\begin{array}{*{20}l} d_{i}^{Q_{1}} = & \sqrt{\left(p_{i}^{x} - q_{p_{i}}^{x}\right)^{2} + \left(p_{i}^{y} - q_{p_{i}}^{y}\right)^{2} + \left(p_{i}^{z} - q_{p_{i}}^{z}\right)^{2}} \\ & \text{for} \ i \ \text{in} \ \{1, 2, 3, \cdots, m\} \end{array} $$
where the superscripts x, y and z indicate coordinates in space. We used multiple tests for validation and therefore given a set of tests {Q1,Q2,Q3...,Q n }, a set of Euclidean distances as shown below were calculated.
$$\begin{aligned} \left\{ d_{1}^{Q_{1}}, \quad d_{2}^{Q_{1}}, \quad \cdots, \quad d_{m}^{Q_{1}},\right.\\[-2pt] d_{1}^{Q_{2}}, \quad d_{2}^{Q_{2}}, \quad \cdots, \quad d_{m}^{Q_{2}},\\[-2pt] \cdots, \quad \cdots, \quad \cdots, \quad \cdots,\\[-2pt] \left. d_{1}^{Q_{n}}, \quad d_{2}^{Q_{n}}, \quad \cdots, \quad d_{m}^{Q_{n}} \right\} \end{aligned} $$
Since the finest spatial scale at which Reg-MaxS registers morphologies is the smallest voxel size used, distances less than the smallest voxel size indicate good registration. We regrouped these distances in two ways to quantify two kinds of performances:
Performance for every test across SWC points, using
\(\left \{\left \{d_{1}^{Q_{1}}, d_{2}^{Q_{1}}, \cdots, d_{m}^{Q_{1}}\right \}, \left \{d_{1}^{Q_{2}}, d_{2}^{Q_{2}}, \cdots, d_{m}^{Q_{2}}\right \}, \cdots, \left \{d_{1}^{Q_{n}}, d_{2}^{Q_{n}}, \cdots, d_{m}^{Q_{n}}\right \}\right \}\)
Performance for every SWC point of the reference morphology across tests, using,
\(\left \{\left \{d_{1}^{Q_{1}}, d_{1}^{Q_{2}}, \cdots, d_{1}^{Q_{n}}\right \},\left \{d_{2}^{Q_{1}}, d_{2}^{Q_{2}}, \cdots, d_{2}^{Q_{n}}\right \}, \cdots, \left \{d_{m}^{Q_{1}}, d_{m}^{Q_{2}}, \cdots, d_{m}^{Q_{n}}\right \}\right \}\)
These performance measures were calculated as the percentage of tests or SWC points for which distances were significantly smaller than the smallest voxel size used. Since only distance values smaller than the smallest voxel size were relevant, we used the one-tailed Wilcoxon test, also known as the Signs test with a significance level cutoff of one percent.
Measure of anisotropic scaling:
Some preliminary tests with Reg-MaxS indicated that performance of the algorithm was affected by different scaling along different axes of the morphologies relative to each other (see "Results" section). To quantify such differences in scaling along axes, we defined the following Measure of Anisotropic Scaling (MAS):
$$ \text{MAS} = 1 - \frac{1}{3}\left(\frac{s_{1}}{s_{2}}+\frac{s_{1}}{s_{3}}+\frac{s_{2}}{s_{3}}\right) $$
where s1,s2,s3 are the scaling differences along the axes arranged in ascending order. MAS has a value of zero when the scaling differences along all axes are equal, and increases gradually toward one as the scales become more and more different.
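The following is a direct transcription of MAS into Python; applied to the scaling differences reported for "Example3" below (1.12, 0.61, 1.27), it reproduces the stated value up to rounding of the printed scales:

```python
# Measure of Anisotropic Scaling (MAS): zero for isotropic scaling,
# approaching one as the per-axis scales diverge.
import numpy as np

def mas(scales):
    s1, s2, s3 = np.sort(scales)          # ascending: s1 <= s2 <= s3
    return 1.0 - (s1 / s2 + s1 / s3 + s2 / s3) / 3.0

print(round(mas([1.12, 0.61, 1.27]), 2))  # -> 0.36 (reported as MAS = 0.37)
```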
Comparing Reg-MaxS-N with other methods
We compared the performance of Reg-MaxS-N with Reg-MaxS and four other methods for co-registering morphologies from recent studies:
PCA: A method using Principal Component Analysis based on a similar method for image stacks [47].
PCA + RobartsICP: The PCA method above followed by Anisotropic-Scaled Iterative Closed Point [36].
BlastNeuron: The affine transformation step of BlastNeuron [33].
Standardized: A method using a standard brain [30].
Code for BlastNeuron and RobartsICP was obtained from the respective authors. Morphologies registered to a standard brain were provided by Dr. Gregory Jefferis. The PCA method was implemented as follows. Given a test and a reference morphologies, we assumed that they have similar dendritic density profiles and were oriented similarly in space. Based on this, the method assumes a correspondence between the first principal axes (principal axes corresponding to the largest principal factors), second principal axes and the third principal axes of the two morphologies. This method translates the test morphology so that its center coincides with that of the reference and rotates it so that their corresponding principal axes align. Scaling differences are determined based on the variances of the morphologies along the corresponding principal axes and the test morphology is appropriately scaled.
Each registration method was applied to each of the five groups of morphologies with the standardized version of one of the morphologies as the initial reference. Performance was quantified using the occupancy-based measure defined above. The results of PCA, PCA + RobartsICP, Reg-MaxS and Reg-MaxS-N were in the same frame of reference as the standardized morphologies allowing direct comparison. The results of BlastNeuron however were in a different frame of reference.
In addition, the above registration tests were repeated three times for each method and each group using different morphologies as initial references and performances were quantified in each case.
Computing density profiles from sets of registered morphologies for visualization
We visualized the results of PCA, BlastNeuron and Reg-MaxS-N along with the standardized morphologies by constructing density profiles from each of them and computing maximal projections of these density profiles along two orthogonal planes. These density profiles were generated using the method described in [30]. For each set of co-registered morphologies, a density profile was constructed on a grid with a voxel size of 0.25μm×0.25μm×0.25μm. Each morphology was resampled so that the distance between any pair of connected points was at most 0.1μm. Each voxel that contained at least one point of the morphology was assigned a value of 1 and all others were assigned 0. This binary density profile was smoothed using a unity-sum 3D discrete Gaussian kernel, whose standard deviation was chosen individually for each group of morphologies. The density profiles calculated for the individual morphologies were then averaged to obtain a density profile for the set of morphologies.
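The procedure reduces to a few lines (a sketch under the stated parameters; scipy.ndimage.gaussian_filter is our choice of smoothing routine, not necessarily the one used in the study):

import numpy as np
from scipy.ndimage import gaussian_filter

def density_profile(morphologies, voxel=0.25, sigma_um=1.25):
    """morphologies: list of (N, 3) arrays of co-registered, resampled
    SWC points. Returns the average smoothed occupancy grid."""
    pts_all = np.vstack(morphologies)
    lo = pts_all.min(axis=0)
    shape = np.ceil((pts_all.max(axis=0) - lo) / voxel).astype(int) + 1
    profiles = []
    for pts in morphologies:
        grid = np.zeros(shape)
        idx = np.floor((pts - lo) / voxel).astype(int)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0      # binary occupancy
        # normalized (unity-sum) Gaussian smoothing, sigma given in voxels
        profiles.append(gaussian_filter(grid, sigma=sigma_um / voxel))
    return np.mean(profiles, axis=0)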
Testing Reg-MaxS with synthetic morphologies
Testing Reg-MaxS with noiseless morphologies
We first used the synthetically generated noiseless morphologies for testing Reg-MaxS. In each of these test registrations, the respective original morphology was always used as the reference while a transformed version of the original morphology was used as the test. The smallest voxel size used was 10 μm for all the tests. When pointwise distance statistics were calculated for each test registration across SWC points, 675 of 1000 tests (67.5%) had final distances that were significantly smaller than the smallest voxel size (n =1290, Signs Test, 1% significance level). When pointwise distance statistics were calculated for each SWC point across test registrations, 1287 of 1290 SWC points (99.76%) had final distances that were significantly smaller than the smallest voxel size (n =1000, Signs Test, 1% significance level). Thus, although Reg-MaxS fails to register a significant number of SWC points in a third of the test registrations, the number of points for which it consistently fails across tests is small.
Three example tests are illustrated in Fig. 2. Reg-MaxS failed for the test morphology "Example3", especially in removing scaling differences. This was caused by the heavy anisotropic scaling in this morphology (scaling differences: 1.12 along X, 0.61 along Y and 1.27 along Z, MAS =0.37). We analyzed this further by separating morphologies based on their level of anisotropic scaling (see "Effect of anisotropic scaling" section below).
In these tests the morphologies used had nearly planar densities. However, Reg-MaxS also performed well on morphologies with 3D extent. This is demonstrated in the "Testing Reg-MaxS with real reconstructions" section using LCInt morphologies which have a non-planar dendritic density profile.
Effect of anisotropic scaling
To investigate the effect of the level of anisotropic scaling on the performance of Reg-MaxS, we calculated statistics only for the tests with low levels of anisotropic scaling, i.e., for cases where the Measure of Anisotropic Scaling (MAS) was less than 0.2. Across SWC points, 166 of 193 tests (86%) had significant numbers of final distances smaller than the smallest voxel size (n =1290, Signs Test, 1% significance level). Across test registrations, 1290 of 1290 SWC points (100%) had final distances less than the smallest voxel size (n =193, Signs Test, 1% significance level). This shows that Reg-MaxS performs considerably better for cases with low levels of anisotropic scaling.
Testing Reg-MaxS with noisy morphologies
Reg-MaxS was designed to co-register morphologies so that their spatial characteristics can be compared, assuming that the morphologies have very similar structure and belong to the same stereotypic neuron group but are obtained from different specimens. Even stereotypical neurons exhibit natural biological variability in the exact location of their dendrites from individual to individual, especially for higher order dendrites. Thus, in order to properly register such morphologies, Reg-MaxS must be able to tolerate such variability in dendritic position. We tested this by applying Reg-MaxS to morphologies where noise was added to each point of the morphology.
As described in the "Methods" section, we generated noisy synthetic morphologies by first adding independent Gaussian noise to each point of a reference morphology M (Fig. 3a) to generate a noisy morphology N(M), shown in Fig. 3b. Then we randomly transformed N(M) to obtain the morphology TN(M), shown in Fig. 3c together with the original morphology M. We then ran Reg-MaxS with M as reference and TN(M) as the test to produce the morphology RTN(M), shown in Fig. 3d. Since the best expected registration of TN(M) to M is N(M), we compared RTN(M) to N(M) and calculated point-wise distances and registration accuracy accordingly. This was done for ten different values of standard deviation and a hundred different transforms. Figure 3e shows the results of these tests. Reg-MaxS showed about 85% success for values of noise standard deviation less than the smallest voxel size.
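The generation of a single noisy test case follows this pattern (a sketch; the transform parameter ranges below are placeholders, not the ranges specified in "Methods"):

import numpy as np

def make_noisy_test(M, sigma, rng):
    """Return N(M) and a randomly transformed TN(M) from points M (N, 3)."""
    NM = M + rng.normal(0.0, sigma, M.shape)    # independent Gaussian noise
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:                    # ensure a proper rotation
        Q[:, 0] *= -1
    scale = rng.uniform(0.5, 2.0, size=3)       # placeholder range
    shift = rng.uniform(-100.0, 100.0, size=3)  # placeholder range, in um
    TNM = (NM * scale) @ Q.T + shift
    return NM, TNM

rng = np.random.default_rng(0)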
Testing Reg-MaxS with noisy morphologies. a The reference morphology M. b M (green) and the morphology N(M) (blue), which was obtained by adding independent Gaussian noise of standard deviation 7 μm to each point of M. c M (green) and the morphology TN(M) (red), which was obtained by applying random translation, scaling and rotation to N(M). d N(M) (blue) and RTN(M) (violet), which was obtained by registering TN(M) to M using Reg-MaxS. The process was repeated using multiple random transformations and different values of noise standard deviations (see "Methods" section). e Performance of Reg-MaxS as a function of noise standard deviation. Reg-MaxS performance was calculated as the percentage of tests for which the distribution of resulting pointwise distances was significantly smaller than the smallest voxel size (10 μm). Reg-MaxS showed high performance for noise with standard deviation below the smallest voxel size
Testing Reg-MaxS with real reconstructions
Reg-MaxS applies affine transforms for reducing spatial dissimilarity between morphologies. However, multiple morphologies of the same stereotypical neuron obtained from different specimens could show non-affine differences as well, if the brains of the specimens show non-affine differences. This is taken into account when constructing brain atlases, which use both affine and non-affine transforms (e.g., [43]). To test whether the limitation to affine transforms is a major drawback for Reg-MaxS, we registered non-standard versions of LCInt morphologies (see Additional file 1 for their 3D structure) to their corresponding standardized versions. Since a pointwise correspondence between the morphologies was not available in this case, we used distance statistics of nearest point pairs of the reference morphology and the registered morphology for quantifying algorithm performance. The algorithm performed well on all neurons, with a significant number of nearest point pairs closer than the smallest voxel size (117≤n≤276, Signs test, 1% significance level). However, these tests showed slightly larger final distances (5.51 ± 4.49 μm) compared to tests using noiseless synthetic morphologies with only affine transformation differences (3.08 ± 3.35 μm). The distributions of nearest point distances also showed more outliers compared to the noiseless synthetic tests because of non-rigid differences between the non-standard and standardized morphologies.
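The nearest-pair statistic itself is straightforward to compute (a sketch using a k-d tree; significance is then assessed with the one-sided Signs test as before):

import numpy as np
from scipy.spatial import cKDTree

def nearest_point_distances(ref_pts, reg_pts):
    """Distance from each reference SWC point to its nearest registered point."""
    d, _ = cKDTree(reg_pts).query(ref_pts)
    return d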
Testing Reg-MaxS-N with groups of morphologies
For evaluating Reg-MaxS-N, we compared its performance with that of five other methods (see "Methods" section). We applied the six methods to five groups of morphologies, repeating each case for four different initial references. Results of applying the methods are visualized in Fig. 4 using one sample morphology per group. Performance was quantified using occupancy-based dissimilarity (see "Methods" section) and averaged across initial references as shown in Fig. 5. Reg-MaxS-N outperformed PCA, BlastNeuron and PCA+RobartsICP for four of the five groups (LCInt, ALPN, OPInt and AA1). For AA2, a group of neurons with unusually high structural stereotypy, BlastNeuron and PCA+RobartsICP showed slightly higher performance than Reg-MaxS-N (see "Applicability" in the "Discussion" section for more details). The density profiles calculated from the result morphologies of Reg-MaxS-N were very similar to those obtained using methods relying on a standard brain (Fig. 6). Furthermore, the performance of Reg-MaxS-N across initial references was less variable than that of BlastNeuron, PCA+RobartsICP and Reg-MaxS for all groups, as seen from the error bars in Fig. 5 (also see Additional file 3). Although PCA showed lower variance across initial references for ALPN and AA1 morphologies, its median performance was lower. Thus Reg-MaxS-N showed higher average performance and lower sensitivity to initial reference than other existing methods in a large majority of our tests.
Comparative visualization of co-registration results of different methods using single morphologies. Co-registration results of PCA, PCA+RobartsICP, BlastNeuron, Reg-MaxS-N and standard brain based method visualized using a single morphology for LCInt (a), ALPN (b), AA1 (c) and AA2 (d). These visualizations illustrate some of the misalignments that can occur from the application of these methods
Performance comparison of registration methods. Performance of six registration methods for five different groups of morphologies. Error bars indicate 95% confidence intervals of median performance, calculated across values for four initial references. BlastNeuron performance for OPInt morphologies is not shown as the program provided by the authors stopped after a time limit of 30 min and produced no results. In most cases, Reg-MaxS and Reg-MaxS-N outperform the other methods
Comparison of dendritic density profiles generated using Reg-MaxS-N and the brain atlas based method. Two-dimensional projections of morphological densities after registration with Reg-MaxS-N (left columns) and standard brain based registration (right columns). Densities were calculated for voxels of size 0.25μm. Color indicates the fraction of morphologies that, after registration, had at least one dendrite in the particular voxel. For illustration purposes, densities have been smoothed by a 3D Gaussian kernel with standard deviation of 1.25μm. a Densities for eight LCInt morphologies. A1, A2 and A3 correspond to the principal axes of the standardized LCInt morphology used as initial reference. b Densities for fourteen ALPN morphologies. A4, A5 and A6 correspond to the principal axes of the standardized ALPN morphology used as initial reference. c Densities for twelve AA1 morphologies. A7, A8 and A9 correspond to the principal axes of the standardized AA1 morphology used as initial reference. d Densities for nine AA2 morphologies. A10, A11 and A12 correspond to the principal axes of the standardized AA2 morphology used as initial reference. In all cases, Reg-MaxS-N produced densities very similar to those produced by the standard brain based method
We have presented Reg-MaxS and Reg-MaxS-N, algorithms for co-registering pairs and groups of neuron morphologies, respectively, by maximizing spatial overlap. We have quantified the performance of Reg-MaxS using synthetic and real morphologies. We have tested Reg-MaxS-N on different groups of morphologies with different initial references and quantified its performance for each case.
Spatial registration is a global optimization problem whose cost landscape usually contains multiple local minima. Most registration algorithms therefore initialize using an approximate solution before minimizing dissimilarity. Several different strategies have been developed for the initialization of registration algorithms [48]. However, initialization is required only when the objects being registered are expected to have large transformation differences. Neuron morphologies of the same type obtained from different individuals do not usually have large transformation differences other than translations caused by the arbitrary choice of origin. Hence Reg-MaxS uses centroid alignment for initialization. Nonetheless, Reg-MaxS can easily be modified to include an appropriate initialization if an application demands it.
Reg-MaxS vs Reg-MaxS-N
Compared to Reg-MaxS, Reg-MaxS-N has two main additional components in its procedure: iterative registration and final normalization. While Reg-MaxS registers all morphologies once to the initial reference, Reg-MaxS-N applies multiple iterations of such registrations, calculating a new reference in each iteration. This iterative strategy reduces the effect of the choice of initial reference on algorithm performance. In our tests, Reg-MaxS-N performed better than Reg-MaxS in most cases and showed less variability across different initial references than Reg-MaxS (see Figs. A31 and A35 of Additional file 3), indicating better suitability for these cases.
For ALPN, OPInt and AA1 morphologies, the performance of Reg-MaxS-N was nearly the same as that of Reg-MaxS. In these cases, Reg-MaxS-N chose the morphologies at the end of its first iteration as the solution, i.e., the same solution as Reg-MaxS. However, the solution morphologies for Reg-MaxS-N were additionally normalized so that they were comparable to the initial reference and this caused the observed reduction in performance of Reg-MaxS-N compared to Reg-MaxS in some of these cases. The normalization was applied mainly for the purpose of visualization and comparison with other methods, and can therefore be excluded when analyzing single groups of morphologies.
Computational cost
Reg-MaxS applies a sequence of transformations for maximizing spatial overlap between two morphologies. It estimates transformation differences at each step using a measure of spatial overlap based on the set of voxels occupied by each morphology. However, the set of voxels occupied by a morphology can change with every rotation or scaling. This makes it hard to predict the computational cost of estimating transformation differences at each step and thus to estimate the total computational cost of Reg-MaxS. Furthermore, Reg-MaxS and Reg-MaxS-N are both iterative algorithms which stop only when spatial overlap between morphologies cannot be improved further. This further complicates the prediction of total number of iterations and total computational cost.
We compared the run times per morphology of Reg-MaxS, Reg-MaxS-N and other methods for co-registration of groups of neurons with different settings of initial reference (Table 2). Run times for Reg-MaxS-N were many times longer than those of the other methods. This is because Reg-MaxS-N iteratively registers morphologies many times, refining their spatial alignment and incorporating features of all morphologies. Therefore Reg-MaxS-N is expected to run longer than methods that register each morphology to the reference only once. A more suitable comparison is between Reg-MaxS and the other methods, since all of them register each morphology once. These run times were comparable, differing by factors between 0.25 and 5. The main reason for the variability seen both in the run times of each algorithm and in the relative run times between algorithms is that all the algorithms except PCA use iterative routines for finding optimal transformations, and their run times can vary substantially, and differently, depending on properties of the morphologies such as size, number of SWC nodes and spatial structure.
Table 2 Comparison of average runtimes per morphology for different registration algorithms
Choice of voxel sizes
The most important parameters of Reg-MaxS and Reg-MaxS-N are the set of voxel sizes over which transformation difference estimates are refined during co-registration of morphologies. The largest and the smallest voxel sizes define the coarsest and the finest spatial scales, respectively, at which the algorithms register morphologies. The algorithms consider a voxel to be occupied by a morphology if it contains one or more of its SWC points and align morphologies by applying transformations to match the sets of occupied voxels. Thus morphological features at scales finer than that defined by the smallest voxel size are ignored by the algorithms. Therefore, a good choice for the smallest voxel size is the spatial scale below which morphological features are not expected to match.
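In code, the occupancy set and a simple overlap measure at a given voxel size look as follows (a sketch; the Jaccard-style ratio is our illustrative choice and differs in detail from the dissimilarity measure used by the algorithms):

import numpy as np

def occupied_voxels(pts, voxel):
    """Set of voxel indices containing at least one SWC point."""
    return set(map(tuple, np.floor(pts / voxel).astype(int)))

def overlap(pts_a, pts_b, voxel):
    A, B = occupied_voxels(pts_a, voxel), occupied_voxels(pts_b, voxel)
    return len(A & B) / len(A | B)   # in [0, 1]; 1 means identical occupancy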
In our preliminary tests involving morphologies of different sizes and dendritic densities, we found a smallest voxel size of 10μm to be a good compromise and therefore used it for evaluating algorithm performances. To investigate the effect of reducing the smallest voxel size, we repeated the tests with the smallest voxel size set to 5μm. The results are summarized in Additional file 4. For pairwise co-registration of test morphologies that were larger and had fewer features at scales below 10μm than the other test morphologies, the performance of Reg-MaxS dropped from 67.5% at 10μm to 32.2% at 5μm. On the other hand, for pairwise co-registration of test morphologies that were smaller and had more features at scales below 10μm, the performance of Reg-MaxS showed only a minor improvement. Furthermore, the performance of Reg-MaxS and Reg-MaxS-N in co-registration of groups of morphologies did not show any substantial changes when the smallest voxel size was changed from 10μm to 5μm (Additional file 4, Fig. A41). Thus, the value of the smallest voxel size can influence the performance of our algorithms depending on the size and the sparsity of structural features of the morphologies being registered, and should be chosen accordingly.
Applicability
Reg-MaxS repeatedly applies a set of rotations/translations followed by a scaling to maximize spatial overlap between morphologies. Scales are estimated after aligning the centroids of the morphologies. In other words, Reg-MaxS seeks a solution with close centroid alignment. Therefore Reg-MaxS, and consequently Reg-MaxS-N, are best applicable to morphologies that are complete and have similarly situated centroids. Their application to partial morphologies or largely incomplete reconstructions is not straightforward and requires caution and consideration. For more efficient handling of such cases, the algorithms could be modified so that they do not depend heavily on centroid alignment.
Reg-MaxS-N was outperformed by PCA + RobartsICP and BlastNeuron for one of our five test groups of morphologies, AA2. Importantly, this was not due to poor performance of Reg-MaxS-N, but due to atypically good performance of BlastNeuron and PCA+RobartsICP. A reason for this could lie in the unusually high structural stereotypy of AA2 morphologies, which is also reflected by lower values of occupancy-based dissimilarity compared to other groups (Fig. 5, also see Fig. A35 of Additional file 3). This high structural stereotypy indicates the existence of a solution with very close point-to-point alignment, and hence BlastNeuron and PCA + RobartsICP, which are based on pointwise distance statistics, performed better. Under most realistic conditions, however, neuron morphologies will have non-negligible biological variability in their fine spatial features, and therefore we would expect Reg-MaxS-N to perform better than the other methods considered here, as was the case for the other four test groups. However, since our sample sizes were small (n =4) we could not establish statistical significance for the differences in performance.
Calculating dendritic density profiles using Reg-MaxS-N
Applying Reg-MaxS-N to three groups of stereotypic neuron morphologies from the Drosophila melanogaster brain, we have shown that Reg-MaxS-N can co-register groups of neuron morphologies. Without the need for an external reference like a standard brain atlas, the registration results were very similar to morphologies registered conventionally using such a reference. Dendritic density profiles can be calculated from groups of registered morphologies by spatial averaging (see "Methods" section). Thus Reg-MaxS-N can be used to calculate dendritic density profiles of stereotypic neurons (Fig. 6). Such density profiles are useful in analyzing spatial variances in different subregions of neurons and can provide insights about the brain regions surrounding neurons [11]. Furthermore, density profiles so calculated could be used in generative models of neuron morphology [10, 49, 50]. Such models usually assume simple density profiles, like a uniform density over the region of arborization. The availability of better spatial density profile estimates can improve such existing models and also enable the development of new models.
Possible improvements
Reg-MaxS applies a sequence of translation, rotation and scaling transformations to maximize the spatial overlap between morphologies. We tested Reg-MaxS with synthetic morphologies that had random translation, rotation and scaling differences and demonstrated its ability to revert these transformations. Other affine differences like shear would be expected to be compensated approximately by combinations of rotation and anisotropic scaling transformations. However, specifically including shear in the sequence of transformations applied could speed up the registration process and possibly result in better performance.
Topological features play an important role in determining neuronal function [51, 52] and hence are indispensable in the study of neuron morphology. Some recent studies [26, 33] have illustrated the effectiveness of the combined use of spatial and topological features for characterization and classification of morphology. Since Reg-MaxS-N can provide better spatial registration of morphologies than existing methods, it could be used as preprocessing to remove spatial differences for algorithms that subsequently estimate topological differences. Further, incorporating topological features into its formulation could lead to even more powerful methods for analyzing neuron morphologies.
We have addressed the problem of co-registering neuron morphologies, which is a crucial requirement for visualization and spatial analysis of stereotypical neurons, by formulating algorithms based on maximizing spatial overlap. Our tests using synthetic and real groups of morphologies have indicated that our algorithms can be used for registering stereotypic neuron morphologies that show considerable spatial variability in their fine structures as long as they are similarly scaled along different axes. The dendritic densities of stereotypic neurons calculated using our algorithms were very similar to those produced using a standard brain, demonstrating the potential of our algorithms in detailed spatial comparison of neuron morphologies.
López-Muñoz F, Boya J, Alamo C. Neuron theory, the cornerstone of neuroscience, on the centenary of the Nobel Prize award to Santiago Ramón y Cajal. Brain Res Bull. 2006; 70(4–6):391–405. Available from: https://doi.org/10.1016/j.brainresbull.2006.07.010.
Ascoli GA. Mobilizing the base of neuroscience data: the case of neuronal morphologies. Nat Rev Neurosci. 2006; 7(4):318–24. Available from: https://doi.org/10.1038/nrn1885.
Halavi M, Hamilton KA, Parekh R, Ascoli GA. Digital reconstructions of neuronal morphology: three decades of research trends. Front Neurosci. 2012; 6:49. Available from: https://doi.org/10.3389/fnins.2012.00049.
Peng H, Hawrylycz M, Roskams J, Hill S, Spruston N, Meijering E, et al. BigNeuron: Large-Scale 3D Neuron Reconstruction from Optical Microscopy Images. Neuron. 2015; 87(2):252–256. Available from: https://doi.org/10.1016/j.neuron.2015.06.036.
Parekh R, Ascoli GA. Neuronal morphology goes digital: a research hub for cellular and system neuroscience. Neuron. 2013; 77(6):1017–38. Available from: https://doi.org/10.1016/j.neuron.2013.03.008.
DeFelipe J, López-Cruz PL, Benavides-Piccione R, Bielza C, Larrañaga P, Anderson S, et al. New insights into the classification and nomenclature of cortical GABAergic interneurons. Nat Rev Neurosci. 2013; 14(3):202–216. Available from: https://doi.org/10.1038/nrn3444.
Smit GJ, Uylings HBM. The morphometry of the branching pattern in dendrites of the Visual Cortex Pyramidal cells. Brain Res. 1975; 87:41–53. Available from: https://doi.org/10.1016/0006-8993(75)90777-5.
Uylings HBM, Smit GJ. 3-dimensional branching structure of pyramidal cell dendrites. Brain Res. 1975; 87(1):55–60. Available from: https://doi.org/10.1016/0006-8993(75)90778-7.
Teeter CM, Stevens CF. A General Principle of Neural Arbor Branch Density. Curr Biol. 2011; 21(24):2105–2108. Available from: https://doi.org/10.1016/j.cub.2011.11.013.
Cuntz H, Forstner F, Borst A, Häusser M. One rule to grow them all: a general theory of neuronal branching and its practical application. PLoS Comput Biol. 2010;6(8). Available from: https://doi.org/10.1371/journal.pcbi.1000877.
Cuntz H. The dendritic density field of a cortical pyramidal cell. Front Neuroanat. 2012; 6:2. Available from: https://doi.org/10.3389/fnana.2012.00002.
Kubota Y, Karube F, Nomura M, Gulledge AT, Mochizuki A, Schertel A, et al. Conserved properties of dendritic trees in four cortical interneuron subtypes. Sci Rep. 2011; 1:89. Available from: https://doi.org/10.1038/srep00089.
Uylings HBM, van Pelt J. Measures for quantifying dendritic arborizations. Netw (Bristol, England). 2002; 13(3):397–414. Available from: https://doi.org/10.1088/0954-898X/13/3/309.
Ascoli GA, Alonso-Nanclares L, Anderson SA, Barrionuevo G, Benavides-Piccione R, Burkhalter A, et al. Petilla terminology: nomenclature of features of GABAergic interneurons of the cerebral cortex. Nat Rev Neurosci. 2008; 9(7):557–568. Available from: https://doi.org/10.1038/nrn2402.
Cuntz H, Forstner F, Haag J, Borst A. The morphological identity of insect dendrites. PLoS Comput Biol. 2008; 4(12):e1000251. Available from: https://doi.org/10.1371/journal.pcbi.1000251.
Rautenberg PL, Grothe B, Felmy F. Quantification of the three-dimensional morphology of coincidence detector neurons in the medial superior olive of gerbils during late postnatal development. J Comp Neurol. 2009; 517(3):385–396. Available from: https://doi.org/10.1002/cne.22166.
Conjeti S, Mesbah S, Negahdar M, Rautenberg PL, Zhang S, Navab N, et al. Neuron-Miner: An Advanced Tool for Morphological Search and Retrieval in Neuroscientific Image Databases. Neuroinformatics. 2016. Available from: https://doi.org/10.1007/s12021-016-9300-2.
Mizrahi A, Ben-Ner E, Katz MJ, Kedem K, Glusman JG, Libersat F. Comparative analysis of dendritic architecture of identified neurons using the Hausdorff distance metric. J Comp Neurol. 2000; 422(3):415–428. Available from: https://doi.org/10.1002/1096-9861(20000703)422:3%3C415::AID-CNE8%3E3.0.CO;2-T.
Sholl DA. Dendritic organization in the neurons of the visual and motor cortices of the cat. J Anat. 1953; 87(Pt 4):387–406.1. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1244622/.
Langhammer CG, Previtera ML, Sweet ES, Sran SS, Chen M, Firestein BL. Automated Sholl analysis of digitized neuronal morphology at multiple scales: Whole cell Sholl analysis versus Sholl analysis of arbor subregions. Cytom Part A. 2010; 77 A(12):1160–1168. Available from: https://doi.org/10.1002/cyto.a.20954.
Garcia-Segura LM, Perez-Marquez J. A new mathematical function to evaluate neuronal morphology using the Sholl analysis. J Neurosci Methods. 2014; 226:103–109. Available from: https://doi.org/10.1016/j.jneumeth.2014.01.016.
Billeci L, Pioggia G, Vaglini F, Ahluwalia A. Assessment and comparison of neural morphology through metrical feature extraction and analysis in neuron and neuron-glia cultures. J Biol Phys. 2009; 35(4):447–464. Available from: https://doi.org/10.1007/s10867-009-9150-3.
Coskren PJ, Luebke JI, Kabaso D, Wearne SL, Yadav A, Rumbell T, et al.Functional consequences of age-related morphologic changes to pyramidal neurons of the rhesus monkey prefrontal cortex. J Comput Neurosci. 2014;:263–283. Available from: https://doi.org/10.1007/s10827-014-0541-5.
Xu HP, Sun JH, Tian N. A general principle governs vision-dependent dendritic patterning of retinal ganglion cells. J Comp Neurol. 2014; 522(15):3403–3422. Available from: https://doi.org/10.1002/cne.23609.
O'Neill KM, Akum BF, Dhawan ST, Kwon M, Langhammer CG, Firestein BL. Assessing effects on dendritic arborization using novel Sholl analyses. Front Cell Neurosci. 2015; 9:285. Available from: https://doi.org/10.3389/fncel.2015.00285.
Kanari L, Dłotko P, Scolamiero M, Levi R, Shillcock J, Hess K, Markram H. A Topological Representation of Branching Neuronal Morphologies. Neuroinformatics. 2017. Available from: https://doi.org/10.1007/s12021-017-9341-1.
Rivera-alba M, Peng H, Polavieja GGD, Chklovskii DB. Wiring economy can account for cell body placement across species and brain areas. Curr Biol. 2014; 24(3):R109–R110. Available from: https://doi.org/10.1016/j.cub.2013.12.012.
Brandt R, Rohlfing T, Rybak J, Krofczik S, Maye A, Westerhoff M, et al.Three-dimensional average-shape atlas of the honeybee brain and its applications. J Comp Neurol. 2005; 492(1):1–19. Available from: https://doi.org/10.1002/cne.20644.
Ashburner J. Computational neuroanatomy. Nat Methods. 2000; 8(6):493–500. Available from: https://doi.org/10.1038/nmeth.1602.
Jefferis GSXE, Potter CJ, Chan AM, Marin EC, Rohlfing T, Maurer CR, et al. Comprehensive maps of Drosophila higher olfactory centers: spatially segregated fruit and pheromone representation. Cell. 2007; 128(6):1187–203. Available from: https://doi.org/10.1016/j.cell.2007.01.040.
Chiang AS, Lin CY, Chuang CC, Chang HM, Hsieh CH, Yeh CW, et al. Three-Dimensional Reconstruction of Brain-wide Wiring Networks in Drosophila at Single-Cell Resolution. Curr Biol. 2011; 21(1):1–11. Available from: https://doi.org/10.1016/j.cub.2010.11.056.
Kohl J, Ostrovsky AD, Frechter S, Jefferis GSXE. A Bidirectional Circuit Switch Reroutes Pheromone Signals in Male and Female Brains. Cell. 2013; 155(7):1610–1623. Available from: https://doi.org/10.1016/j.cell.2013.11.025.
Wan Y, Long F, Qu L, Xiao H, Hawrylycz M, Myers EW, et al. BlastNeuron for Automated Comparison, Retrieval and Clustering of 3D Neuron Morphologies. Neuroinformatics. 2015; 13(4):487–499. Available from: https://doi.org/10.1007/s12021-015-9272-7.
Schnabel R, Wahl R, Klein R. Efficient RANSAC for Point-Cloud Shape Detection. Comput Graph Forum. 2007; 26(2):214–226. Available from: https://doi.org/10.1111/j.1467-8659.2007.01016.x.
Maier-Hein L, Franz AM, Dos Santos TR, Schmidt M, Fangerau M, Meinzer HP, et al. Convergent iterative closest-point algorithm to accommodate anisotropic and inhomogenous localization error. IEEE Trans Pattern Anal Mach Intell. 2012; 34(8):1520–1532. Available from: https://doi.org/10.1109/TPAMI.2011.248.
Chen ECS, McLeod AJ, Baxter JSH, Peters TM. Registration of 3D shapes under anisotropic scaling. Int J CARS. 2015; 10(6):867–878. Available from: https://doi.org/10.1007/s11548-015-1199-9.
Livneh Y, Mizrahi A. A time for atlases and atlases for time. Front Syst Neurosci. 2010; 3(17). Available from: https://doi.org/10.3389/neuro.06.017.2009.
Schneider CJ, Cuntz H, Soltesz I. Linking Macroscopic with Microscopic Neuroanatomy Using Synthetic Neuronal Populations. PLoS Comput Biol. 2014; 10(10). Available from: https://doi.org/10.1371/journal.pcbi.1003921.
Cannon RC, Turner DA, Pyapali GK, Wheal HV. An on-line archive of reconstructed hippocampal neurons. J Neurosci Methods. 1998; 84(1-2):49–54. Available from: https://doi.org/10.1016/S0165-0270(98)00091-0.
Rubner Y, Tomasi C, Guibas LJ. Earth mover's distance as a metric for image retrieval. Int J Comput Vis. 2000; 40(2):99–121. Available from: https://doi.org/10.1023/A:1026543900054.
Thevenaz P, Ruttimann UE, Unser M. Iterative multi-scale registration without landmarks. Proc Int Conf Image Process. 1995; 3:228–231. Available from: https://doi.org/10.1109/ICIP.1995.537622.
Je C, Park HM. Optimized hierarchical block matching for fast and accurate image registration. Signal Process Image Commun. 2013. Available from: https://doi.org/10.1016/j.image.2013.04.002.
Rohlfing T, Brandt R, Maurer CR, Menzel R. Bee brains, B-splines and computational democracy: generating an average shape atlas. Proc IEEE Workshop Math Meth Biomed Image Anal (MMBIA 2001). 2001; 2001:187–194. Available from: https://doi.org/10.1109/MMBIA.2001.991733.
Egger R, Narayanan RT, Helmstaedter M, de Kock CPJ, Oberlaender M. 3D Reconstruction and Standardization of the Rat Vibrissal Cortex for Precise Registration of Single Neuron Morphology. PLOS Comput Biol. 2012; 8(12):1–18. Available from: https://doi.org/10.1371/journal.pcbi.1002837.
Ikeno H, Kazawa T, Namiki S, Miyamoto D, Sato Y, Haupt SS, et al. Development of a Scheme and Tools to Construct a Standard Moth Brain for Neural Network Simulations. Comput Intell Neurosci. 2012;2012. Available from: https://doi.org/10.1155/2012/795291.
Costa M, Manton JD, Ostrovsky AD, Prohaska S, Jefferis GSXE. NBLAST: Rapid, Sensitive Comparison of Neuronal Structure and Construction of Neuron Family Databases. Neuron. 2016. Available from: https://doi.org/10.1016/j.neuron.2016.06.012.
Alpert NM, Bradshaw JF, Kennedy DN, Correia JA. The Principal Axes Transformation - A method for Image Registration. J Nucl Med. 1990; 31:1717–1722. Available from: http://jnm.snmjournals.org/content/31/10/1717.long.
Tam GKL, Cheng ZQ, Lai YK, Langbein FC, Liu Y, Marshall D, et al. Registration of 3D Point Clouds and Meshes: A Survey From Rigid to Non-Rigid. IEEE Trans Vis Comput Graph. 2013; 19(7):1–20. Available from: https://doi.org/10.1017/CBO9781107415324.004.
Torben-Nielsen B, De Schutter E. Context-aware modeling of neuronal morphologies. Front Neuroanat. 2014; 8:92. Available from: https://doi.org/10.3389/fnana.2014.00092.
van Pelt J, van Ooyen A. Estimating neuronal connectivity from axonal and dendritic density fields. Front Comput Neurosci. 2013; 7:160. Available from: https://doi.org/10.3389/fncom.2013.00160.
Chen JY. A simulation study investigating the impact of dendritic morphology and synaptic topology on neuronal firing patterns. Neural Comput. 2010; 22(4):1086–111. Available from: https://doi.org/10.1162/neco.2009.11-08-913.
Ferrante M, Migliore M, Ascoli GA. Functional Impact of Dendritic Branch-Point Morphology. J Neurosci. 2013; 33(5):2156–2165. Available from: https://doi.org/10.1523/JNEUROSCI.3495-12.2013.
The authors would like to thank Gregory Jefferis for providing standardized neuron morphologies extracted from the Flycircuit database before they were published and also for insightful discussions. The authors would like to thank Philipp Rautenberg, Christian Kellner, Christian Garbers, Andreas Herz and Dinu Patirniche for constructive comments and fruitful discussions.
The authors would like to also thank the NeuroMorpho.org team for hosting neuronal morphologies and the FlyCircuit team for making the large collection of single neuron image stacks and their reconstructions available.
This study was supported by the German Federal Ministry of Education and Research (BMBF) (grants 01GQ1116 and 01GQ1302) and the Japan Science and Technology Agency (JST) through the "German - Japanese Collaborations in Computational Neuroscience".
Availability of data and materials
The morphologies reconstructed without registering to any standard brain atlas ("non-standard" morphologies) were obtained from NeuroMorpho.org (http://dx.doi.org/10.1038/nrn1885). The corresponding registered ("standardized") morphologies were part of NBLAST (http://dx.doi.org/10.1016/j.neuron.2016.06.012) and were obtained from Dr. Gregory Jefferis ([email protected]). The names of all the neurons used in this study are provided below.
Visual Neuron From Blow Fly: HSN-fluoro01;
LCInt: Gad1-F-000062, Cha-F-000012, Cha-F-300331, Gad1-F-600000, Cha-F-000018, Cha-F-300051, Cha-F-400051, Cha-F-200000;
ALPN: VGlut-F-700500, VGlut-F-700567, VGlut-F-500471, Cha-F-000353, VGlut-F-600253, VGlut-F-400434, VGlut-F-600379, VGlut-F-700558, VGlut-F-500183, VGlut-F-300628, VGlut-F-500085, VGlut-F-500031, VGlut-F-500852, VGlut-F-600366;
OPInt: Trh-F-000047, Trh-M-000143, Trh-F-000092, Trh-F-700009, Trh-M-000013, Trh-M-000146, Trh-M-100009, Trh-F-000019, Trh-M-000081, Trh-M-900003, Trh-F-200035, Trh-F-200015, Trh-M-000040, Trh-M-600023, Trh-M-100048, Trh-M-700019, Trh-F-100009, Trh-M-400000, Trh-M-000067, Trh-M-000114, Trh-M-100018, Trh-M-000141, Trh-M-900019, Trh-M-800002;
AA1: VGlut-F-300181.CNG, VGlut-F-400545.CNG, VGlut-F-500778.CNG, VGlut-F-300196.CNG, VGlut-F-300288.CNG, VGlut-F-600290.CNG, VGlut-F-600499.CNG, VGlut-F-400665.CNG, VGlut-F-300142.CNG, VGlut-F-500147.CNG, VGlut-F-600181.CNG, VGlut-F-700190.CNG
AA2: Trh-F-700063.CNG, Trh-F-500050.CNG, Trh-F-500106.CNG, Trh-M-500051.CNG, Trh-F-600071.CNG, Trh-F-500093.CNG, Trh-F-500148.CNG, Trh-F-500154.CNG, Trh-F-700018.CNG
Programs for applying Reg-MaxS and Reg-MaxS-N to SWC files are available at https://github.com/wachtlerlab/Reg-MaxS. doi: https://doi.org/10.12751/g-node.feee47.
Department of Biology II, Ludwig-Maximilians-Universität München, Grosshadernerstr, 2, Planegg-Martinsried, 82152, Germany
Ajayrama Kumaraswamy & Thomas Wachtler
Department of Earth System Science, Fukuoka University, 8-19-1 Nanakuma, Jonan-ku, Fukuoka-shi, Fukuoka, 814-0180, Japan
Kazuki Kai & Hiroyuki Ai
School of Human Science and Environment, University of Hyogo, 1-1-12 Shinazaike-Honcho, Himeji, 670-0092, Hyogo, Japan
Hidetoshi Ikeno
Ajayrama Kumaraswamy
Kazuki Kai
Hiroyuki Ai
Thomas Wachtler
All the authors contributed to the conceptualization of the methods. AK implemented the method and carried out the analysis. TW guided the study. HI and TW contributed to the improvement of the methods. AK and TW prepared the original draft. HA, HI and TW reviewed and edited the manuscript. All authors approved the final manuscript.
Correspondence to Ajayrama Kumaraswamy.
Images used in Additional file 1 have been reproduced from http://flybrain.mrc-lmb.cam.ac.uk with written permission from Dr. Gregory Jefferis.
Additional file 1
Figure showing the five groups of neuron morphologies used for evaluating Reg-MaxS-N registered to a standard brain (a) Interneurons in the Lobula complex (b) Antennal lobe projection neurons (c) Interneurons of ventrolateral protocerebum (d) Neuron of the antennal mechanosensory and motor center (e) Interneurons in the optic lobe. Figures from http://flybrain.mrc-lmb.cam.ac.uk/si/nblast/clusters/. (PNG 2101 kb)
Implementation details of Reg-MaxS and Reg-MaxS-N. (PDF 306 kb)
Performance comparison of Reg-MaxS-N and other methods for different initial references plotted separately for each group of morphologies. (PDF 593 kb)
Tests Results of Reg-MaxS and Reg-MaxS-N with smallest voxel size set to 5μm. (PDF 181 kb)
Kumaraswamy, A., Kai, K., Ai, H. et al. Spatial registration of neuron morphologies based on maximization of volume overlap. BMC Bioinformatics 19, 143 (2018). https://doi.org/10.1186/s12859-018-2136-z
Spatial registration
Neuron morphology
Imaging, image analysis and data visualization
Building a connection through obstruction; relating gauge gravity and string theory
Cartwright_mines_0052N_11060.pdf
Cartwright, Casey
Squier, Jeff A.
Flournoy, Alex
fiber bundles
Gauge theories of internal symmetries, e.g.\ the strong and electroweak forces of the Standard Model, have a geometric description in terms of standard fiber bundles. It is tempting then to ask if the remaining force, gravitation, has a description as a gauge theory. The answer is yes; however, unlike the internal symmetries of the Standard Model, the story is not so simple. There are dozens of renditions of gravitational gauge theory and no standard fiber bundle description. The main issue in the construction of gravitational gauge theory is the inclusion of translational symmetry. While the Lorentz group, like internal symmetries, acts only at each point, the translational symmetry shifts points in spacetime. For this reason a gauge theory of gravity requires a somewhat more sophisticated fiber bundle known as a composite fiber bundle.

When constructing gauge theories of internal symmetries it is easy to take certain topological conditions for granted, like orientability or the ability to define spinors. However, it is known that there exist spaces which do not have the properties required to define sensible field theories. Although we may take these topological properties for granted when constructing gauge theories of internal symmetries, we have not yet had evidence that we can do the same for gravitational gauge theory. By studying the geometry of the composite bundle formalism underlying viable gauge theories of gravity we have found previously unappreciated subbundles of the primary bundle. We were able to identify these subbundles as the spacetime bundles we would expect to be created by a gauge theory of gravity. Remarkably, the origin of these subbundles leads to the natural inclusion of expected, and unexpected, topological conditions.

While the overall bundle used for gravitation is $P(M,ISO(1,3))$, i.e.\ a principal Poincar\'e bundle over a space $M$, the Poincar\'e group ($ISO(1,3)$) can be viewed as a bundle in its own right, $ISO(\mathbb{R}^4,SO(1,3))$. Thinking of the fiber space itself as yet another bundle leads to consideration of two primary bundles $P(E,SO(1,3))$ and $E(M,\mathbb{R}^4,ISO(1,3),P(M,ISO(1,3)))$. The split of the total bundle $P(M,ISO(1,3))$ into the two bundles $E$ and $P(E,SO(1,3))$, however, requires the existence of a global section of the bundle $E$. Such a global section is guaranteed to exist by a theorem of Kobayashi and Nomizu. However, it is interesting to investigate the topology of the bundle space $E$ and hence of $P(E,SO(1,3))$. The requirement of the global section leads to the definition of a bundle $Q(M,SO(1,3))\subset P(M,ISO(1,3))$ which can be identified as the frame bundle of spacetime. Its associated bundle, $Q(M,SO(1,3))\times_{SO(1,3)} \mathbb{R}^4$, where $\times_{SO(1,3)}$ denotes a specific quotient of the product space $Q(M,SO(1,3))\times\mathbb{R}^4$ by the group $SO(1,3)$, can then be identified as the tangent bundle.

The existence of a global section of $E$ leads to topological conditions on the induced spacetime bundles. Using cohomology with compact support one can show that global sections of $E$ descend to global sections of $Q$ and force the Stiefel-Whitney, Euler and first fractional Pontryagin classes of the spacetime bundles to be trivial. Furthermore, the triviality of these characteristic classes is equivalent to the condition that the base space $M$ admit a string structure. Each characteristic class has an interpretation as an obstruction to the creation of a global structure or a topological attribute of the bundle.
For the composite bundle formulation the obstructions are to orientability, parallelizability, global sections, and conditions related to stable causality and string structures. Similar to the case of a supersymmetric point particle, where the parallelizability of the base manifold determines whether a global anomaly is encountered during quantization, whether a manifold admits a string structure determines whether a global anomaly is encountered in the quantization of extended degrees of freedom. This implies that the topological aspects of gravitational gauge theories automatically accommodate the consistent introduction of extended degrees of freedom. This path to structures associated with extended degrees of freedom is in contrast to the typical route, i.e.\ demanding a consistent quantum theory of gravitation. Here the need for such structures arises classically, from demanding that gravitation be realized from a geometrically supported gauge principle.
Simulation essentials with PLECS
By Stéphane Lovejoy | PN137 | March 21, 2021, updated on April 22, 2022
Fundamental concepts
Working principle of the main blocks
1) Configuration block (CONFIG)
2) Control Task Trigger
3) Analog-to-digital converter input (ADC)
4) Pulse-width modulators (xx-PWM)
Further readings
This note provides in-depth content for an accurate and efficient offline simulation of an imperix controller and the corresponding plant model using ACG SDK on PLECS.
A general overview of software-related notes is given on this page.
Installation guide for ACG SDK
Getting started with PLECS
Suggested further readings
PN139: Getting started with BB Control
The offline simulation is meant to reproduce as faithfully as possible the behavior of the overall system (controller and plant). To that end, the PLECS blockset (part of the ACG SDK) was designed with the following guidelines in mind:
The plant quantities are to be modeled with continuous signals
This requires a variable-step solver
The control algorithm is modeled with discrete signals, sampled at a rate equal to the interrupt frequency, and with a phase corresponding to the sampling phase.
The "Control Task Trigger" block used within an atomic control subsystem forces the discretization of the control algorithm to the interrupt frequency
This is modeled accurately with the variable-step solver, as it is forced to take a major step at least at each interrupt execution
The behavior of the real PWM generators is modeled, in particular:
The frequency and phase of the carrier
The instants when the duty-cycle and phase parameters are updated (at zero and/or max of the carrier)
The duration of the algorithm execution is modeled
This induces a delay between the start of the interruption and the availability of the new data
With all the phases and delays modeled accurately, the simulated controller has the same dynamics as the real controller, which allows tuning the control parameters during the offline simulation.
The imperix blockset is already implemented in a way that automatically follows these guidelines.
As such, no particular action is required from the user.
The four fundamental blocks are the 1) Config, 2) Control Task Trigger, 3) ADC, and 4) PWM blocks. Most applications can work with those four blocks only, as in the standard configuration shown below.
Typical content of the controller model
The CONFIG block configures the CLOCK_0 frequency and oversees the configuration of the main interrupt. CLOCK_0 serves as a time reference for different peripherals such as ADC or PWM blocks.
Four parameters are available to configure CLOCK_0 and the execution of the main interrupt:
The frequency of CLOCK_0
The phase of the main interrupt
The postscaler of the main interrupt
The "total cycle delay" (explained below)
The CONFIG block outputs three different configuration signals (brown dashed wires):
A main clock signal (CLOCK_0) with a period \(T_{clk0}=1/f_{clk0}\) and a phase of zero. This output can be connected to a PWM modulator block to define its switching frequency.
A sampling clock, generated from CLOCK_0 with the addition of the sampling phase and the postscaler parameters. This output can be connected to an ADC block to define its sampling instants. The sampling period is defined by the following expression (a worked numerical example follows this list):
$$T_s = \left\{\begin{array}{ll} T_{\text{clk0}} & \text{if postscaler = 0}\\ 2\cdot \text{postscaler}\cdot T_{\text{clk0}} & \text{otherwise} \end{array}\right.$$
A control task signal, generated from the sampling clock, with the addition of the "total cycle delay" parameter. This corresponds to an "end of interrupt" signal and must be connected to the "Control Task Trigger" block. The addition of a cycle delay allows the simulation to correctly model the effect of the interrupt execution time and the different read/write delays, as explained below.
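As a concrete check of the sampling-period rule (the numerical values are chosen for illustration only):

def sampling_period(T_clk0, postscaler):
    return T_clk0 if postscaler == 0 else 2 * postscaler * T_clk0

# with f_clk0 = 20 kHz, i.e. T_clk0 = 50 us:
print(sampling_period(50e-6, 0))   # 5e-05  -> sampling at the CLOCK_0 rate
print(sampling_period(50e-6, 2))   # 0.0002 -> one sample every four clock periods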
Total cycle delay
The total cycle delay is a parameter used in simulation only. It is introduced to simulate the delay from the moment the ADCs are sampled to the moment the calculated duty-cycle is propagated to the FPGA PWM management. It takes into account the ADC sampling time (2 µs on a B-Box RCP), the control algorithm execution time, and the FPGA read and write transfer time.
The total cycle delay is displayed in the "Target info" pane of Cockpit (formerly Timing Info tab in BB Control) when a code is running (requires software above ACG SDK v3.6). In previous versions, an estimation of the total cycle delay can be derived from the CPU load displayed by BB Control, by multiplying it by the interrupt period.
Display of the CPU load in imperix Cockpit
The PN142 provides more details on the various delays and their effect on the control dynamics.
The Control Task Trigger block is used to model the execution of the control algorithm within an interrupt routine. Therefore, the discretization of this subsystem and of all blocks contained within it is forced to the main interrupt frequency. The Forward Euler method is used to discretize continuous states within the subsystem (it forces the subsystem to be "atomic").
As an example, the following two systems have strictly identical behaviors when the sample time of the discrete integrator and the Control Task trigger are set to 1/f_clk0:
When continuous functions are used within an atomic subsystem containing a Control Task Trigger, its execution is discretized.
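The equivalence rests on Forward Euler: within the triggered subsystem, a continuous integrator behaves exactly like the discrete accumulator sketched below (with Ts = 1/f_clk0):

def forward_euler_integrator(u_samples, Ts):
    """y[k+1] = y[k] + Ts * u[k]: the Forward Euler equivalent of 1/s."""
    y, out = 0.0, []
    for u in u_samples:
        out.append(y)
        y += Ts * u
    return out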
The ADC block retrieves the sampled value of a given ADC channel and converts it to its value in physical units. The input signal is typically a continuous signal coming from the plant model and representing a measurement value.
The simulation model of the ADC block simply samples and holds the input signal on the rising edges of the ADC trigger input. The sampled signal is then scaled according to the sensor sensitivity parameter to convert it to its value in physical units.
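This amounts to an edge-triggered sample-and-hold followed by a gain (a sketch; in particular, dividing by the sensitivity to obtain physical units is our assumption about the scaling convention):

import numpy as np

def adc_model(t, u, trigger_times, sensitivity):
    """Sample-and-hold the continuous signal u(t) at the trigger instants
    and scale the result to physical units."""
    held = np.interp(trigger_times, t, u)   # value of u at each rising edge
    return held / sensitivity               # e.g. volts -> amperes for a current sensor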
Various types of pulse-width modulators exist within the imperix control library. Among them, the carrier-based PWM modulator (CB PWM block) is certainly the most commonly used.
The CB-PWM block configures the corresponding FPGA peripheral and generates the PWM signals according to the duty-cycle and carrier phase parameters.
In the simulation model, the clock signal (1) connected to the clock input is used as a time reference to generate the carrier signal (2). In parallel, the duty-cycle value (3) is sampled once or twice per switching period (depending on the update-rate parameter) and compared with the carrier to produce the output PWM signal. A dead-time is then applied to the complementary PWM signals (5). (A more complex model is used to generate a carrier with a variable phase, which is beyond the scope of this document.)
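The modeled behavior can be reproduced in a few lines (a deliberately simplified sketch: triangular carrier, one duty-cycle update per period, dead-time approximated on a uniform time grid, and no carrier-phase handling):

import numpy as np

def cb_pwm(t, f_sw, duty, dead_time):
    """Complementary PWM from a triangular carrier.
    duty: array of per-period duty-cycle values in [0, 1]."""
    phase = (t * f_sw) % 1.0
    carrier = 1.0 - np.abs(2.0 * phase - 1.0)       # triangle in [0, 1]
    k = np.minimum(np.floor(t * f_sw).astype(int), len(duty) - 1)
    d = duty[k]                                     # duty held for one period
    high = d > carrier
    low = ~high
    # crude dead-time: trim the first samples after each rising edge
    shift = int(round(dead_time / (t[1] - t[0])))
    if shift > 0:
        high = high & np.roll(high, shift)
        low = low & np.roll(low, shift)
    return high, low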
Getting started with BB Control (PN139)
Stéphane Lovejoy
Stéphane is a software development engineer at imperix. He authored and co-authored numerous software reference articles.
May 2013, 7(2): 445-470. doi: 10.3934/ipi.2013.7.445
Far field model for time reversal and application to selective focusing on small dielectric inhomogeneities
Corinna Burkard 1, Aurelia Minut 2 and Karim Ramdani 3
Inria (CORIDA Team), Villers-lès-Nancy, F-54600, France
Mathematics Department, US Naval Academy, 572C Holloway Road, Annapolis, MD 21402-5002, United States
Université de Lorraine, IECL, UMR 7502, Vandoeuvre-les-Nancy, F-54506, France
Received: November 2011. Revised: February 2013. Published: May 2013.
Based on the time-harmonic far field model for small dielectric inclusions in $3$D, we study the so-called DORT method (DORT is the French acronym for "Diagonalization of the Time Reversal Operator"). The main observation is to relate the eigenfunctions of the time-reversal operator to the locations of small scattering inclusions. For non penetrable sound-soft acoustic scatterers, this observation has been rigorously proved in $2$ and $3$ dimensions by Hazard and Ramdani in [21] for small scatterers. In this work, we consider the case of small dielectric inclusions with far field measurements. The main difference with the acoustic case is related to the magnetic permeability and the related polarization tensors. We show that in the regime $kd\rightarrow \infty$ ($k$ denotes here the wavenumber and $d$ the minimal distance between the scatterers), each inhomogeneity gives rise to at most 4 distinct eigenvalues (one due to the electric contrast and three to the magnetic one), while each corresponding eigenfunction generates an incident wave focusing selectively on one of the scatterers. The method has connections to the MUSIC algorithm known in Signal Processing and the Factorization Method of Kirsch.
Keywords: Time-reversal, scattering, far field operator, wave focusing, small inhomogeneities.
Mathematics Subject Classification: Primary: 74J20, 35P25; Secondary: 78M35, 35P15, 35B4.
Citation: Corinna Burkard, Aurelia Minut, Karim Ramdani. Far field model for time reversal and application to selective focusing on small dielectric inhomogeneities. Inverse Problems & Imaging, 2013, 7 (2) : 445-470. doi: 10.3934/ipi.2013.7.445
M. Abramowitz and I. A. Stegun, "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables," vol. 55 of National Bureau of Standards Applied Mathematics Series, (1964).
H. Ammari, E. Iakovleva, D. Lesselier and G. Perrusson, MUSIC-type electromagnetic imaging of a collection of small three-dimensional inclusions, SIAM J. Sci. Comput., 29 (2007), 674. doi: 10.1137/050640655.
H. Ammari, E. Iakovleva and S. Moskow, Recovery of small inhomogeneities from the scattering amplitude at a fixed frequency, SIAM J. Math. Anal., 34 (2003), 882. doi: 10.1137/S0036141001392785.
X. Antoine, B. Pinçon, K. Ramdani and B. Thierry, Far field modeling of electromagnetic time reversal and application to selective focusing on small scatterers, SIAM J. Appl. Math., 69 (2008), 830. doi: 10.1137/080715779.
T. Arens, A. Lechleiter and D. R. Luke, MUSIC for extended scatterers as an instance of the factorization method, SIAM J. Appl. Math., 70 (2009), 1283. doi: 10.1137/080737836.
C. Ben Amar, N. Gmati, C. Hazard and K. Ramdani, Numerical simulation of acoustic time reversal mirrors, SIAM J. Appl. Math., 67 (2007), 777. doi: 10.1137/060654542.
L. Borcea, G. Papanicolaou and F. G. Vasquez, Edge illumination and imaging of extended reflectors, SIAM J. Imaging Sciences, 1 (2008), 75. doi: 10.1137/07069290X.
D. H. Chambers and J. G. Berryman, Target characterization using decomposition of the time-reversal operator: Electromagnetic scattering from small ellipsoids, Inverse Problems, 22 (2006), 2145. doi: 10.1088/0266-5611/22/6/014.
Q. Chen, H. Haddar, A. Lechleiter and P. Monk, A sampling method for inverse scattering in the time domain, Inverse Problems, 26 (2010). doi: 10.1088/0266-5611/26/8/085001.
M. Cheney, The linear sampling method and the MUSIC algorithm, Inverse Problems, 17 (2001), 591. doi: 10.1088/0266-5611/17/4/301.
D. Colton, H. Haddar and M. Piana, The linear sampling method in inverse electromagnetic scattering theory, Inverse Problems, 19 (2003). doi: 10.1088/0266-5611/19/6/057.
D. Colton and A. Kirsch, A simple method for solving inverse scattering problems in the resonance region, Inverse Problems, 12 (1996), 383. doi: 10.1088/0266-5611/12/4/003.
D. Colton and R. Kress, "Inverse Acoustic and Electromagnetic Scattering Theory," Springer-Verlag, (1998).
A. Devaney, E. Marengo and F. Gruber, Time-reversal-based imaging and inverse scattering of multiply scattering point targets, J. Acoust. Soc. Amer., 118 (2005), 3129. doi: 10.1121/1.2042987.
A. Fannjiang, On time reversal mirrors, Inverse Problems, 25 (2009). doi: 10.1088/0266-5611/25/9/095010.
M. Fink, Acoustic time-reversal mirrors, in (2002), 17.
M. Fink, D. Cassereau, A. Derode, C. Prada, P. Roux, M. Tanter and J.-L. Thomas, Time-reversed acoustics, Rep. Prog. Phys., 63 (2000), 1933.
M. Fink and C. Prada, Acoustic time-reversal mirrors, Inverse Problems, 17 (2001), 1761.
N. A. Gumerov and R. Duraiswami, Computation of scattering from N spheres using multipole reexpansion, J. Acoust. Soc. Amer., 112 (2002), 2688. doi: 10.1121/1.1517253.
C. Hazard and K. Ramdani, Selective acoustic focusing using time-harmonic reversal mirrors, SIAM J. Appl. Math., 64 (2004), 1057. doi: 10.1137/S0036139903428732.
S. Hou, K. Solna and H. Zhao, Imaging of location and geometry for extended targets using the response matrix, J. Comput. Phys., 199 (2004), 317. doi: 10.1016/j.jcp.2004.02.010.
E. Iakovleva and D. Lesselier, Multistatic response matrix of spherical scatterers and the back-propagation of singular fields, IEEE Trans. Antenna. Prop., 56 (2008), 825. doi: 10.1109/TAP.2008.916913.
A. Kirsch, Characterization of the shape of a scattering obstacle using the spectral data of the far field operator, Inverse Problems, 14 (1998), 1489. doi: 10.1088/0266-5611/14/6/009.
A. Kirsch, Factorization of the far-field operator for the inhomogeneous medium case and an application in inverse scattering theory, Inverse Problems, 15 (1999), 413. doi: 10.1088/0266-5611/15/2/005.
A. Kirsch, New characterizations of solutions in inverse scattering theory,, Appl. Anal., 76 (2000), 319. doi: 10.1080/00036810008840888. Google Scholar
A. Kirsch, The MUSIC algorithm and the factorization method in inverse scattering theory for inhomogeneous media,, Inverse Problems, 18 (2002), 1025. doi: 10.1088/0266-5611/18/4/306. Google Scholar
R. Kress, Minimizing the condition number of boundary integral operators in acoustic and electromagnetic scattering,, Quart. J. Mech. Appl. Math., 38 (1985), 323. doi: 10.1093/qjmam/38.2.323. Google Scholar
G. Micolau, "Etude Théorique et Numérique de la Méthode de la Décomposition de L'opérateur de Retournement Temporel (D.O.R.T.) en Diffraction ÉlectromagnÉtique,", Ph.D thesis, (2001). Google Scholar
B. Pinçon and K. Ramdani, Selective focusing on small scatterers in acoustic waveguides using time reversal mirrors,, Inverse Problems, 23 (2007), 1. doi: 10.1088/0266-5611/23/1/001. Google Scholar
C. Prada, S. Manneville, D. Spoliansky and M. Fink, Decomposition of the time reversal operator: Detection and selective focusing on two scatterers,, J. Acoust. Soc. Am., 99 (1996), 2067. doi: 10.1121/1.415393. Google Scholar
E. Stein, "Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals,", Princeton University Press, (1993). Google Scholar
B. Thierry, "Analyse et Simulations Numériques du Retournement Temporel et de la Diffraction Multiple,", Ph.D thesis, (2011). Google Scholar
H. Tortel, G. Micolau and M. Saillard, Decomposition of the time reversal operator for electromagnetic scattering,, J. Electromagn. Waves Appl., 13 (1999), 687. doi: 10.1163/156939399X01113. Google Scholar
R. Wong, "Asymptotic Approximations of Integrals,", 34 of Classics in Applied Mathematics, 34 (2001). doi: 10.1137/1.9780898719260. Google Scholar
A. Zaanen, "Linear Analysis. Measure and Integral, Banach and Hilbert Space, Linear Integral Equations,", Interscience Publishers Inc., (1953). Google Scholar
Why are proteins always made in N to C direction?
Why are proteins always synthesized from the N-terminus to the C-terminus? Can there be any "reverse" peptide-bond formation to synthesize proteins in the C-terminal to the N-terminal direction?
biochemistry molecular-biology proteins
superbug
$\begingroup$ Are you talking about natural processes only, or also about chemical ones? Because chemistry really doesn't care (in either case actually) and can do both directions. Nature just stuck with the direction that was easier 'to implement'. $\endgroup$ – Nicolai Jun 2 '17 at 7:06
$\begingroup$ @nicolai I think the OP wants to know about natural processes (otherwise this question would be better suited for Chemistry.SE). I tried to make the concept as clear as possible in my answer, yet it basically comes down to what you say ;) $\endgroup$ – another 'Homo sapien' Jun 2 '17 at 7:52
$\begingroup$ Actually 3′→5′ synthesis is possible … synthetically: news-medical.net/news/20120830/… $\endgroup$ – Konrad Rudolph Jun 2 '17 at 12:25
$\begingroup$ I have edited your question to remove the irrelevant and incorrect preliminary statements. $\endgroup$ – David Jun 7 '17 at 21:18
Nice question! As you already know, DNA is always written in the 5'$\rightarrow$3' direction because it is always synthesized in that direction; the same logic applies to polypeptides, in which successive amino acids are joined by CO–NH peptide bonds. So a polypeptide looks like this (source):
In fact, if you just look at the polypeptide in the reverse direction, you could view it in C terminus to N terminus direction. But we don't do so because that is not the conventional direction of the polypeptide's biosynthesis.
Tracing the Roots: To know why there is no "reverse" peptide bond (NH–CO), we first need to know how peptide bonds are formed in polypeptides. Polypeptides are formed in the ribosome, and peptide-bond formation takes place in the peptidyl transferase center of the ribosome. Since the ribosome is a ribozyme, this reaction is catalyzed by the catalytic sites of the RNA (i.e. 2'-OH) rather than by proteins. See the image below for the mechanism (from Marina V. Rodina):
As visible from the diagram, the nitrogen (in -NH2) of the acceptor tRNA (A site) attacks the ester linkage of the peptidyl-tRNA (P site). The carboxylic carbon cannot attack the nitrogen (for "reverse" bond formation) because it is already in an ester linkage. One might then ask: "if the nitrogen were joined to the tRNA at the P site, the carboxylic carbon could have attacked the amino acid at the A site. Why is the amino acid not joined to the tRNA by its amine nitrogen?" To see why, let's go a step further and look at how tRNAs are charged, i.e. how aminoacyl-tRNA synthetase works. Aminoacyl-tRNA synthetase charges tRNA in a two-step reaction. To show the mechanism, I will take the example of histidyl-tRNA synthetase (diagrams from Proteopedia):
amino acid + ATP → aminoacyl-AMP + PPi
aminoacyl-AMP + tRNA → aminoacyl-tRNA + AMP
As is clearly visible now, whether amine nitrogen or carboxylic carbon will attach to phosphate of ATP is decided in the first step. See the image again:
You might ask: since the nitrogen (in -NH2) also has a lone pair of electrons, it could also make a nucleophilic attack on the phosphate group, similar to the oxygen (in -COOH). Why doesn't this happen? It cannot be explained away by saying that the amine nitrogen is far from the reaction site (look at the diagram again, and notice how close the carboxylic carbon and the amine nitrogen are; if the amino acid moved even a little, this reaction could easily occur). Here's the twist: the optimum pH of this reaction lies near pH 6, i.e. in the acidic range (Lui et al, 1978). As we know, under acidic pH, -NH2 exists in the protonated form, i.e. -NH3+. In this condition, the nitrogen lacks a lone pair and thus cannot attack the phosphate on ATP. It is therefore not possible to attach the amino acid to AMP via its -NH2 group and, in turn, to form a "reverse" peptide bond. I hope this depth of answer satisfies your curiosity :)
another 'Homo sapien'
$\begingroup$ Two comments. 1. You are explaining the directionality in terms of the way the activation of the amino acid occurs today. As long as you make that clear, that is fine. However the question of why it evolved that way is not answered. 2. Your arguments about ionization of amino groups at low pH do not hold water. The pH optimum of the charging reaction is irrelevant. What is relevant is the pH under which the reaction occurs, which is the cellular pH of, say 7.4–7.6. And if the amino group lacks a lone pair there, how does it manage to make a peptide bond, as per the first diagram? $\endgroup$ – David Jun 8 '17 at 20:24
$\begingroup$ @david 1. It evolved this way because evolving in the other direction was not possible (again, due to pH; I don't know if you understood what I wanted to convey through the pH factor) 2. I don't think you read Lui et al; the reaction does occur at a rather acidic pH (to make it faster). I don't know about the peptidyl transfer reaction; most probably it occurs at cellular pH (about pH 7.5), or because the reaction occurs in an almost isolated condition. Also, these are all suggested mechanisms; nobody has seen how the reaction actually occurs. $\endgroup$ – another 'Homo sapien' Jun 9 '17 at 5:26
$\begingroup$ All the stages of protein synthesis occur at the same pH in a cell. If a particular reaction requires an ionization state of a group in the substrate different from that in aqueous solution (e.g. an unionized amino group in the peptidyl synthetase reaction) that is provided by the enzyme environment. In evolutionary terms, when one assumes there was a non-enzyme catalysed reaction first, you still have a problem. And one unexplained assumption of your answer is that activation of amino acids for attachment to tRNA has to be via ATP (not acetyl-CoA or something else). $\endgroup$ – David Jun 9 '17 at 16:25
$\begingroup$ @david the truth is that we don't yet know the true reaction mechanism, I used histidyl-tRNA synthetase as an example. The main point is that the $\alpha$-amino nitrogen is in the protonated form; if the cell can't provide this proton, the enzyme must. I did not explain acetyl-coA or other residues since the major (and I think only) substrate for aminoacyl-tRNA synthetase is ATP (I don't know if you have studied about aminoacyl-tRNA synthetase, or maybe there are some other enzymes about which I don't know). $\endgroup$ – another 'Homo sapien' Jun 10 '17 at 4:09
This is just in addition to the other answers; some somewhat relevant thoughts I had.
Interestingly, even non-ribosomal peptide synthesis seems to proceed from the N- to the C-terminus. Given the modular nature of such enzymes, synthesis in the reverse direction doesn't seem to be outside the realm of possibility, though I haven't found any known examples.
This might be a stretch as a counterexample, but if you consider the biosynthesis of the tripeptide glutathione, a γ-peptide bond is first formed between cysteine and glutamate, after which glycine is attached to the carboxyl group (ie C-terminus) of cysteine with an α-peptide bond.
canadianer
A Higher Education Leadership Distance Ph.D. Program: An Assessment Using Blocher's Ecological Learning Theory
Linda Kuk, James H. Banning
Creative Education (CE), 2014, DOI: 10.4236/ce.2014.59082
This study utilized a case study approach to discuss the issues of distance education in the delivery of doctoral education. The case study provided a broad background to the issues of distance education graduate programs and an assessment of the case. Blocher's (1974) Ecological Learning Theory is applied to the case to provide for analytical generalization. The results support the need for distance delivery of academic programs to consider both the content and the program's delivery structure as important components in realizing student success.
Creative Expression: Effectiveness of a Weekly Craft Group with Women Who Have Experienced Trauma
Linda Garner
Open Journal of Nursing (OJN), 2015, DOI: 10.4236/ojn.2015.52011
Abstract: Creativity interventions have been shown to positively influence psychological and emotional health indicators. Nurses can play an important role in the development and implementation of interventions designed to counter the longer-term emotional and psychological consequences of trauma. The purpose of this study was to explore how participation in a nurse-facilitated weekly craft group may influence anxiety, depression, self-esteem, and self-confidence among women who have experienced emotional and physical trauma. A pre/post visual analog scale was used during a 7-week intervention to measure changes in anxiety, depression, stress, self-esteem and self-confidence among a convenience sample of adult female trauma survivors (n = 33). A paired sample t test was used to evaluate the intervention with significance set at p = 0.05. Participant observation and field notes were used for qualitative data generation. Significant reductions were noted in anxiety, depression, and stress along with significant increases in self-esteem and self-confidence. Cohen's d statistic indicated a large effect size for anxiety (0.72) and stress (0.69). Moderate effect size was determined for self-confidence (0.36), depression (0.41), and self-esteem (0.52). Emergent qualitative themes included: creative expression improved confidence to soothe the self, safe spaces fostered creativity, a sense of accomplishment was stimulated through creative activities, and creative expression groups provided opportunities for positive affirmation. Offered as a complementary intervention, nurse-facilitated creative expression groups can support continued healing long after traditional support services have been exhausted. It is important for nurses to pursue a greater understanding of the art of nursing and the important contribution of creativity when used as a nursing intervention with trauma survivors.
Residual Effects from Occupational Mercury Exposure Include a Proposed Mercury Tremor Biomarker or "Fingerprint"
Journal of Environmental Protection (JEP), 2017, DOI: 10.4236/jep.2017.810068
Abstract: The study investigated residual effects of high levels of occupational mercury exposure, 30 years after a cohort of women worked in public service dentistry. They had all used copper amalgam in a pellet form that required heating and handling, and silver amalgam before the encapsulated form was available. Mercury handling practices changed in the mid-1970s, when the workforce was urine tested and mercury poisoning became apparent. The aim was to compare control group and exposed group scores on tasks from a neurobehavioural test battery, plus survey results from a composite health, work history and environmental influences survey. The findings showed that the exposed and control groups were equivalent not only on those variables that one would want to be matched (age, alcohol consumption), but also on many of the cognitive and psychomotor test scores. The present paper focuses on psychomotor skill and tremor patterns. Tremor patterns were seen as generating new evidence of long-term effects of the historic mercury insult. The data also suggest that there may be a distinctive mercury "fingerprint" in samples of sinusoidal waveforms that may have potential as a non-invasive sub-clinical biomarker for adverse effects of mercury exposure, in screening or workplace monitoring.
Prevalence and characteristics of Postpartum Depression symptomatology among Canadian women: a cross-sectional study
Andrea Lanes, Jennifer L Kuk, Hala Tamim
BMC Public Health , 2011, DOI: 10.1186/1471-2458-11-302
Abstract: An analysis based on 6,421 Canadian women, who had a live birth between 2005 and 2006 and were part of the Maternity Experience Survey (MES), was performed. PPDS was measured based on the Edinburgh Postnatal Depression Scale. Various factors that assessed socio-economic status, demographic factors, and maternal characteristics were considered for the multinomial regression model.The national prevalence of minor/major and major PPDS was found to be 8.46% and 8.69% respectively. A mother's stress level during pregnancy, the availability of support after pregnancy, and a prior diagnosis of depression were the characteristics that had the strongest significant association with the development of PPDS.A significant number of Canadian women experience symptoms of postpartum depression. Findings from this study may be useful to increase both the attainment of treatment and the rate at which it can be obtained among new mothers. Interventions should target those with the greatest risk of experiencing PPDS, specifically immigrant and adolescent mothers.Postpartum depression (PPD) refers to a non-psychotic depressive episode that begins in or extends into the postpartum period [1,2]. PPD can evolve from a preexisting case of the baby blues, or can become apparent after the first weeks of giving birth and can last as long as 14 months [1,3-5]. Symptoms include anxiety, guilt, negative maternal attitudes, and poor parenting self-efficacy [3,6,7]. A multitude of treatment options for PPD exist, including interpersonal therapy and the most common treatment, pharmaceutical intervention [1,8].Mothers who suffer from PPD endure significant consequences, especially with their ability to cope with life events, including parenting tasks [9]. PPD elicits negative clinical implications for maternal-infant attachment; there is a withdrawn and disengaged behaviour in the mother and/or intrusive and hostile mother-infant communication [10-13]. Research has shown that experiencing symptoms o
Kuk, Salih; Yazar, Suleyman; Cetinkaya, Ulfet
Memórias do Instituto Oswaldo Cruz , 2012, DOI: 10.1590/S0074-02762012000800001
Abstract: Stool is chemically complex and the extraction of DNA from stool samples is extremely difficult. Haemoglobin breakdown products, such as bilirubin, bile acids and mineral ions, that are present in the stool samples can inhibit DNA amplification and cause molecular assays to produce false-negative results. Therefore, stool storage conditions are highly important for the diagnosis of intestinal parasites and other microorganisms through molecular approaches. In the current study, stool samples that were positive for Giardia intestinalis were collected from five different patients. Each sample was stored using one out of six different storage conditions [room temperature (RT), +4°C, -20°C, 70% alcohol, 10% formaldehyde or 2.5% potassium dichromate] for DNA extraction procedures at one, two, three and four weeks. A modified QIAamp Stool Mini Kit procedure was used to isolate the DNA from stored samples. After DNA isolation, polymerase chain reaction (PCR) amplification was performed using primers that target the β-giardin gene. A G. intestinalis-specific 384 bp band was obtained from all of the cyst-containing stool samples that were stored at RT, +4°C and -20°C and in 70% alcohol and 2.5% potassium dichromate; however, this band was not produced by samples that had been stored in 10% formaldehyde. Moreover, for the stool samples containing trophozoites, the same G. intestinalis-specific band was only obtained from the samples that were stored in 2.5% potassium dichromate for up to one month. As a result, it appears evident that the most suitable storage condition for stool samples to permit the isolation of G. intestinalis DNA is 2.5% potassium dichromate; under these conditions, stool samples may be stored for one month.
Jung Gap Kuk and Nam Ik Cho
EURASIP Journal on Image and Video Processing, 2013, DOI: 10.1186/1687-5281-2013-7
Abstract: This article explores an energy function and its minimization for the weighted gradient domain image processing, where variable weights are applied to the data term of conventional function for attaining better results in some applications. To be specific, larger weights are given to the regions where original pixel values need to be kept unchanged, like strong edge regions in the case of image sharpening application or high contrast regions when fusing multi-exposure images. In the literatures, it is shown that the solution to a constant weight problem can be efficiently obtained in the frequency domain without iterations, whereas the function with the varying weights can be minimized by solving a large sparse linear equation or by iterative methods such as conjugate gradient or preconditioned conjugate gradient (PCG) methods. In addition to introducing weighted gradient domain image processing problems, we also proposed a new approach to finding an efficient preconditioning matrix for this problem, which greatly reduces the condition number of the system matrix and thus reduces the number of iterations for the PCG process to reach the solution. We show that the system matrix for the constant weight problem is an appropriate preconditioner, in the sense that a sub-problem in the PCG is efficiently solved by the FFT and also it ensures the convergent splitting of the system matrix. For the simulation and experiments on some applications, it is shown that the proposed method requires less iteration, memory, and CPU time.
THE STRUCTURAL FABRIC AND SEISMOTECTONIC ACTIVITY OF NORTHERN VELEBIT: SOME NEW OBSERVATIONS
Eduard Prelogović, Vlado Kuk, Renato Buljan
Rudarsko-Geološko-Naftni Zbornik, 1998,
Abstract: A permanent seismotectonic activity is present in the region of northern Velebit. Through history, a number of earthquakes with intensities of VIII° and IX° MCS have been reported. In this century the most powerful earthquake, of magnitude 5.8, occurred in 1916. The key tectonic movements that cause these earthquakes are dislocations of the Adriatic platform towards the north and the resistance of the Dinarides to these movements. The tectonic dynamics of a structural arrangement depend on the relationship between stress and the deformation of structural units. According to the tectonic measurements performed on outcrops of major faults, the following orientation of stress was obtained: from 20/25° to 200/205° in the northern Velebit region and 340° to 160° in the hinterland. Within the structural arrangement, compression of space occurs, which is well pronounced in the Novi Vinodolski area. Possible deformation of structures is manifested in spatial diagonal reverse displacements accompanied by rotation of the structure. In the Velebit hinterland, right horizontal tectonic transport was determined. In the zone of the transcurrent fault Žuta Lokva–Otočac–Bunić, spatial opening leads to the formation of pull-apart structures. Seismotectonically active zones occur in the contact region between the Adriatic platform and the Dinarides. Spatially its position is inclined and bent.
The Display System of a Patient's History Using the RFID and Linux
Soo Young Ye, Heung-kuk Jo
International Journal of Distributed Sensor Networks, 2013, DOI: 10.1155/2013/314028
Seroprevalence of Toxocara antibodies in multiple sclerosis and ankylosing spondylitis
Kuk Salih, Ozgocmen Salih, Bulut Serpil
Indian Journal of Medical Sciences, 2006,
Saddlepoint methods for conditional expectations with applications to risk management
Sojung Kim, Kyoung-kuk Kim
Statistics, 2015,
Abstract: The paper derives saddlepoint expansions for conditional expectations in the form of $\mathsf{E}[\overline{X} | \overline{\mathbf Y} = {\mathbf a}]$ and $\mathsf{E}[\overline{X} | \overline{\mathbf Y} \geq {\mathbf a}]$ for the sample mean of a continuous random vector $(X, {\mathbf Y}^\top)$ whose joint moment generating function is available. These conditional expectations frequently appear in various applications, particularly in quantitative finance and risk management. Using the newly developed saddlepoint expansions, we propose fast and accurate methods to compute the sensitivities of risk measures such as value-at-risk and conditional value-at-risk, and the sensitivities of financial options with respect to a market parameter. Numerical studies are provided for the accuracy verification of the new approximations.
Robust Exponential Memory in Hopfield Networks
Christopher J. Hillar and Ngoc M. Tran
The Hopfield recurrent neural network is a classical auto-associative model of memory, in which collections of symmetrically coupled McCulloch–Pitts binary neurons interact to perform emergent computation. Although previous researchers have explored the potential of this network to solve combinatorial optimization problems or store reoccurring activity patterns as attractors of its deterministic dynamics, a basic open problem is to design a family of Hopfield networks with a number of noise-tolerant memories that grows exponentially with neural population size. Here, we discover such networks by minimizing probability flow, a recently proposed objective for estimating parameters in discrete maximum entropy models. By descending the gradient of the convex probability flow, our networks adapt synaptic weights to achieve robust exponential storage, even when presented with vanishingly small numbers of training patterns. In addition to providing a new set of low-density error-correcting codes that achieve Shannon's noisy channel bound, these networks also efficiently solve a variant of the hidden clique problem in computer science, opening new avenues for real-world applications of computational models originating from biology.
Discovered first by Pastur and Figotin [1] as a simplified spin glass [2] in statistical physics, the Hopfield model [3] is a recurrent network of n linear threshold McCulloch–Pitts [4] neurons that can store \(n/(4 \ln n)\) binary patterns [5] as distributed "memories" in the form of auto-associative fixed-point attractors. While several aspects of these networks appeared earlier (see, e.g., [6] for dynamics and learning), the approach nonetheless introduced ideas from physics into the theoretical study of neural computation. The Hopfield model and its variants have been studied intensely in theoretical neuroscience and statistical physics [7], but investigations into its utility for memory and coding have mainly focused on storing collections of patterns X using a "one-shot" outer-product rule (OPR) for learning, which essentially assigns abstract synaptic weights between neurons to be their correlation, an early idea in neuroscience [8, 9]. Independent of learning, at most 2n randomly generated dense patterns can be simultaneously stored in networks with n neurons [10].
Despite this restriction, super-linear capacity in Hopfield networks is possible for special pattern classes and connectivity structures. For instance, if patterns to memorize contain many zeros, it is possible to store nearly a quadratic number [11]. Other examples are random networks, which have \({\approx}1.22^{n}\) attractors asymptotically [12], and networks storing all permutations [13]. In both examples of exponential storage, however, memories have vanishingly small basins of attraction, making them ill-suited for noise-tolerant pattern storage. Interestingly, the situation is even worse for networks storing permutations: any Hopfield network storing permutations will not recover the derangements (more than a third of all permutations) from asymptotically vanishing noise (see Theorem 4, proved in Sect. 5).
In this note, we design a family of sparsely connected n-node Hopfield networks with (asymptotically, as \(n \to\infty\))
$$ {\sim} \frac{2^{\sqrt{2n} + \frac{1}{4}}}{n^{1/4} \sqrt{\pi}} $$
robustly stored fixed-point attractors by minimizing "probability flow" [14, 15]. To our knowledge, this is the first rigorous demonstration of super-polynomial noise-tolerant storage in recurrent networks of simple linear threshold elements. The approach also provides a normative, convex, biologically plausible learning mechanism for discovering these networks from small amounts of data and reveals new connections between binary McCulloch–Pitts neural networks, efficient error-correcting codes, and computational graph theory.
The underlying probabilistic model of data in the Hopfield network is the non-ferromagnetic Lenz–Ising model [16] from statistical physics, more generally called a Markov random field in the literature, and the model distribution in a fully observable Boltzmann machine [17] from artificial intelligence. The states of this discrete distribution are length n binary column vectors \({\mathbf {x}} = (x_{1},\ldots, x_{n}) \in\{0,1\}^{n}\) each having probability \(p_{{\mathbf {x}}} := \frac{1}{Z} \exp ( - E_{\mathbf{x}} )\), in which \(E_{\mathbf {x}} := -\frac{1}{2}\mathbf {x}^{\top} \mathbf {W} \mathbf {x} + \theta^{\top}\mathbf {x}\) is the energy of a state, W is an n-by-n real symmetric matrix with zero diagonal (the weight matrix), the vector \(\theta\in\mathbb {R}^{n}\) is a threshold term, and \(Z := \sum_{\mathbf{x}}\exp(-E_{\mathbf {x}})\) is the partition function, the normalizing factor ensuring that \(p_{\mathbf{x}}\) represents a probability distribution. In theoretical neuroscience, rows \(\mathbf{W}_{e}\) of the matrix W are interpreted as abstract "synaptic" weights \(W_{ef}\) connecting neuron e to other neurons f.
The pair \((\mathbf{W}, \theta)\) determines an asynchronous deterministic ("zero-temperature") dynamics on states x by replacing each \(x_{e}\) in x with the value:
$$ x_{e} = \textstyle\begin{cases} 1 & \text{if } \sum_{f \neq e} {W_{ef} x_{f}} > \theta_{e}, \\0 & \text{otherwise}, \end{cases} $$
in a (usually initialized randomly) fixed order through all neurons \(e = 1, \ldots, n\). The quantity \(I_{e} := \langle\mathbf{W}_{e}, \mathbf{x} \rangle\) in (2) is often called the feedforward input to neuron e and may be computed by linearly combining input signals from neurons with connections to e. Let \(\Delta E_{e}\) (resp. \(\Delta x_{e} = \pm1, 0\)) be the energy (resp. bit) change when applying (2) at neuron e. The relationship
$$ \Delta E_{e} = -\Delta x_{e} (I_{e} - \theta_{e}) $$
guarantees that network dynamics does not increase energy. Thus, each initial state x will converge in a finite number of steps to its attractor \(\mathbf {x}^{*}\) (also called in the literature fixed-point, memory, or metastable state); e.g., see Fig. 1. The biological plausibility and potential computational power [18] of the dynamics update (2) inspired both early computer [19] and neural network architectures [4, 20].
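To make the model concrete, here is a minimal NumPy sketch (our own notation; the function names `energy` and `converge` are not from the paper) of the energy function and the asynchronous dynamics (2):

```python
import numpy as np

def energy(W, theta, x):
    """Energy E_x = -(1/2) x^T W x + theta^T x of a binary state x."""
    return -0.5 * x @ W @ x + theta @ x

def converge(W, theta, x, max_sweeps=100):
    """Asynchronous dynamics (2): sweep through the neurons in a fixed
    order, setting each bit to 1 iff its feedforward input exceeds its
    threshold, until no bit changes; returns the fixed-point x*."""
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for e in range(len(x)):
            new_bit = 1 if W[e] @ x > theta[e] else 0  # W has zero diagonal
            if new_bit != x[e]:
                x[e], changed = new_bit, True
        if not changed:
            break
    return x
```

By (3), each update can only lower the energy, so `converge` terminates at an attractor.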
Energy landscape and discrete dynamics in a Hopfield network having robust storage of all 4-cliques in graphs on 8 vertices. The deterministic network dynamics sends three corrupted cliques to graphs with smaller energy, converging on the underlying 4-clique attractors
We next formalize the notion of robust fixed-point attractor storage for families of Hopfield networks. For \(p \in[0,\frac{1}{2}]\), the p-corruption of x is the random pattern \(\mathbf {x}_{p}\) obtained by replacing each \(x_{e}\) by \(1-x_{e}\) with probability p, independently. The p-corruption of a state differs from the original by pn bit flips on average so that for larger p it is more difficult to recover the original binary pattern; in particular, \(\mathbf{x}_{\frac{1}{2}}\) is the uniform distribution on \(\{0,1\}^{n}\) (and thus independent of x). Given a Hopfield network, the attractor \(\mathbf{x}^{\ast}\) has \((1-\varepsilon )\)-tolerance for a p-corruption if the dynamics can recover \(\mathbf{x}^{\ast}\) from \((\mathbf{x}^{\ast})_{p}\) with probability at least \(1-\varepsilon \). The α-robustness \(\alpha(X, \varepsilon )\) for a set of states X is the most p-corruption every state \((1-\varepsilon )\)-tolerates.
At last, we say that a sequence of Hopfield networks \(\mathcal{H}_{n}\) robustly stores states \(X_{n}\) with robustness index \(\alpha> 0\) if the following limit exists and equals the number α:
$$ \lim_{\varepsilon \to0^{+}} \lim_{n \to\infty} \inf \bigl\{ \alpha (X_{n},\varepsilon ), \alpha(X_{n+1},\varepsilon ), \ldots \bigr\} = \alpha. $$
If α is the robustness index of a family of networks, then the chance that dynamics does not recover an α-corrupted memory can be made as small as desired by devoting more neurons. (Note that by definition, we always have \(\alpha\leq1/2\).)
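The p-corruption is straightforward to sample; a sketch (the helper name `p_corrupt` is ours), reusing the NumPy import above:

```python
def p_corrupt(x, p, rng=None):
    """Return the p-corruption of x: each bit flips independently
    with probability p."""
    if rng is None:
        rng = np.random.default_rng()
    flips = rng.random(len(x)) < p
    return np.where(flips, 1 - x, x)
```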
To determine parameters \((\mathbf{W}, \theta)\) in our networks from a set of training patterns \(X \subseteq\{0,1\}^{n}\), we minimize the following probability flow objective function [14, 15]:
$$ \frac{1}{|X|} \sum_{\mathbf{x} \in X} \sum _{\mathbf{x}' \in\mathcal {N}(\mathbf{x})} \exp \biggl(\frac{E_{\mathbf{x}}-E_{\mathbf {x}'}}{2} \biggr), $$
in which \(\mathcal{N}(\mathbf{x})\) are those neighboring states \(\mathbf {x}'\) differing from x by a single flipped bit. It is elementary that a Hopfield network has attractors X if and only if the probability flow (5) can be arbitrarily close to zero, motivating the application of minimizing (5) to find such networks [15]. Importantly, the probability flow is a convex function of the parameters, consists of a number of terms linear in n and the size of X, and avoids the exponentially large partition function Z. We remark that the factor of \(\frac{1}{2}\) inside of the exponential in (5) will turn out to be unimportant for our analysis; however, we keep it to be consistent with the previous literature on interpreting (5) as a probability density estimation objective.
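Because the energy difference across a single bit flip is just $\pm(I_{e} - \theta_{e})$ by (3), the objective (5) can be evaluated without ever touching the partition function Z. A direct NumPy transcription (a sketch, vectorized over neurons):

```python
def probability_flow(W, theta, X):
    """Objective (5): mean over training states x of the sum over
    single-bit-flip neighbors x' of exp((E_x - E_x')/2)."""
    total = 0.0
    for x in X:
        I = W @ x                 # feedforward inputs I_e
        dx = 1 - 2 * x            # bit change if neuron e were flipped
        dE = -dx * (I - theta)    # E_x' - E_x, from (3)
        total += np.exp(-dE / 2).sum()
    return total / len(X)
```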
Let v be a positive integer and set \(n = \frac{v(v-1)}{2}\). A state x in a Hopfield network on n nodes represents a simple undirected graph G on v vertices by interpreting a binary entry \(x_{e}\) in x as indicating whether edge e is in G (\(x_{e} = 1\)) or not (\(x_{e} = 0\)). A k-clique x is one of the \({v \choose k} = \frac{v \cdot(v-1)\cdots(v-k+1)}{k \cdot(k-1)\cdots2 \cdot1}\) graphs consisting of k fully connected nodes and \(v-k\) other isolated nodes. Below, in Sect. 3, we will design Hopfield networks that have all k-cliques on 2k (or \(2k-2\)) vertices as robustly stored memories. For large n, the count \({2k \choose k}\) approaches (1) by Stirling's approximation. Figure 1 depicts a network with \(n = 28\) neurons storing 4-cliques in graphs on \(v = 8\) vertices.
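In code, the graph encoding amounts to fixing an ordering of the \({v \choose 2}\) possible edges; a sketch (helper name ours):

```python
from itertools import combinations

def clique_state(vertices, v):
    """Length-v(v-1)/2 binary state of the clique on `vertices`
    (all other vertices isolated)."""
    edges = list(combinations(range(v), 2))       # fixed edge ordering
    on = set(combinations(sorted(vertices), 2))
    return np.array([int(e in on) for e in edges])
```

For example, `clique_state([0, 1, 2, 3], 8)` is the 28-bit state of one of the 4-cliques from Fig. 1.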
Our first result is that numerical minimization of probability flow over a vanishingly small critical number of training cliques determines linear threshold networks with exponential attractor memory. We fit all-to-all connected networks on \(n = 3160, 2016, 1128\) neurons (\(v = 80, 64, 48\); \(k=40,32, 24\)) with increasing numbers of randomly generated k-cliques as training data X by minimizing (5) with the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm [21] (implemented in the Python package SciPy). In Fig. 2, we plot the percentage of 1000 random new k-cliques that are fixed-points in these networks after training as a function of the ratio of training set size to total number of k-cliques. Each triangle in the figure represents the average of this fraction over 50 networks, each given the same number of randomly generated (but different) training data. The finding is that a critical number of training samples allows for storage of all k-cliques. Moreover, this count is significantly smaller than the total number of patterns to be learned.
Learning critical networks with exponential memory by minimizing probability flow on few training patterns. For numbers of vertices \(v = 80, 64, 48\) (\(k = 40, 32, 24\)) with 50 trials each, the average percent of 1000 randomly drawn cliques that are memories vs. the fraction of training samples to total number of k-cliques. Inset displays enlarged version of the region demarcated by black square; filled regions indicate standard deviation errors over these 50 trials. Dotted lines are average percentage of correct bits after converging dynamics
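The fitting procedure can be sketched as follows (not the authors' exact code: the hypothetical `fit_mpf` relies on SciPy's finite-difference gradients, which is only practical for small n; a serious implementation would pass the analytic gradient of (5) to the optimizer):

```python
from scipy.optimize import minimize

def fit_mpf(X, n, seed=0):
    """Minimize probability flow (5) over symmetric zero-diagonal W
    and thresholds theta, packed into one flat parameter vector."""
    iu = np.triu_indices(n, k=1)
    m = len(iu[0])                  # number of upper-triangular weights

    def unpack(params):
        W = np.zeros((n, n))
        W[iu] = params[:m]
        return W + W.T, params[m:]

    def objective(params):
        W, theta = unpack(params)
        return probability_flow(W, theta, X)

    rng = np.random.default_rng(seed)
    p0 = 0.01 * rng.standard_normal(m + n)
    res = minimize(objective, p0, method="L-BFGS-B")
    return unpack(res.x)
```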
In Fig. 3(a), we display a portion of the weight matrix with minimum probability flow representing a \(v = 80\) network (4,994,380 weight and threshold parameters) given 100 (\({\approx}1\mathrm{e}{-}21\%\) of all 40-cliques), 1000 (\(1\mathrm{e}{-}20\%\)), or \(10\text{,}000\) (\(1\mathrm{e}{-}19\%\)) randomly generated 40-cliques as training data; these are the three special starred points in Fig. 2. In Fig. 3(b), we also plot histograms of learned parameters from networks trained on data with these three sample sizes. The finding is that weights and thresholds become highly peaked and symmetric about three limiting quantities as sample size increases.
Distribution of network parameters learned by minimizing probability flow (MPF) sharpens around three critical values. (a) Portion of network weights W after minimizing (5) given 100 (bottom), 1000 (middle), or 10,000 (top) random 40-cliques X (of about 1023 in total) on \(v = 80\) vertices. These networks represent the marked points in Fig. 2. (b) Histograms of weight and threshold parameters for networks in (a) (histogram of thresholds θ in inset). Network parameters are scaled so that thresholds have mean 1 (this does not affect the dynamics). Groups of similar network weights and thresholds are labeled with corresponding parameter x, y, z
We next analytically minimize probability flow to determine explicit networks achieving robust exponential storage. To simplify matters, we first observe by a symmetrizing argument (see Sect. 5) that there is a network storing all k-cliques if and only if there is one with constant threshold \(\theta = (z, \ldots, z) \in\mathbb{R}^{n}\) and satisfying, for each pair \(e \neq f\), either \(W_{ef} = x\) (whenever e and f share one vertex) or \(W_{ef} = y\) (when e and f are disjoint). Weight matrices approximating this symmetry can be seen in Fig. 3(a). (Note that this symmetry structure on the weights is independent of clique size k.) In this case, the energy of a graph G with \(\#E(G)\) edges is the following linear function of \((x,y,z) \in\mathbb {R}^{3}\):
$$ E_{G}(x,y,z) = - x \cdot S_{1}(G) - y \cdot S_{0}(G) + z \cdot\#E(G), $$
in which \(S_{1}(G)\) and \(S_{0}(G)\) are the number of edge pairs in the graph G with exactly one or zero shared vertices, respectively.
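Both the symmetric family and formula (6) are easy to instantiate; a sketch (function names ours, reusing `numpy` and `combinations` from above):

```python
def symmetric_network(v, x, y, z):
    """3-parameter network on n = v(v-1)/2 neurons: W_ef = x if the
    edges e and f share exactly one vertex, y if disjoint; threshold z."""
    edges = list(combinations(range(v), 2))
    n = len(edges)
    W = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            shared = len(set(edges[a]) & set(edges[b]))
            W[a, b] = W[b, a] = x if shared == 1 else y
    return W, z * np.ones(n)

def graph_energy(state, v, x, y, z):
    """Energy (6): E_G = -x*S1(G) - y*S0(G) + z*#E(G), computed from the
    pairs of edges of G sharing one or zero vertices."""
    G = [e for e, bit in zip(combinations(range(v), 2), state) if bit]
    S1 = sum(len(set(a) & set(b)) == 1 for a, b in combinations(G, 2))
    S0 = sum(len(set(a) & set(b)) == 0 for a, b in combinations(G, 2))
    return -x * S1 - y * S0 + z * len(G)
```

One can check numerically that `graph_energy(s, v, x, y, z)` agrees with `energy(W, theta, s)` for `W, theta = symmetric_network(v, x, y, z)`.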
Consider the minimization of (5) over a training set X consisting of all \({v \choose k}\) k-cliques on \(v = 2k-2\) vertices (this simplifies the mathematics), restricting networks to our 3-parameter family \((x,y,z)\). When \(y = 0\), these networks are sparsely connected, having a vanishing number of connections between neurons relative to total population size. Using single variable calculus and Eq. (6), one can check that, for any fixed positive threshold z, the minimum value of (5) is achieved uniquely at the parameter setting \((x,0,z)\), where
$$ x = \frac{2z}{3k - 5}. $$
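As a sanity check of (7), one can minimize the restricted flow numerically and compare with the closed form (our sketch, reusing the helpers above):

```python
from scipy.optimize import minimize_scalar

k = 5
v = 2 * k - 2                      # cliques on v = 2k - 2 vertices
cliques = [clique_state(c, v) for c in combinations(range(v), k)]

def flow_on_line(x):               # restrict to the family (x, 0, 1)
    W, theta = symmetric_network(v, x, 0.0, 1.0)
    return probability_flow(W, theta, cliques)

res = minimize_scalar(flow_on_line, bounds=(1e-3, 1.0), method="bounded")
print(res.x, 2 / (3 * k - 5))      # both are approximately 0.2
```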
This elementary calculation gives our first main theoretical contribution.
Theorem 1

McCulloch–Pitts attractor networks minimizing probability flow can achieve robust exponential pattern storage.
We prove Theorem 1 using the following large deviation theory argument; this approach also allows us to design networks achieving the optimal robustness index \(\alpha= 1/2\) (Theorem 2). Fix \(v = 2k\) (or \(v = 2k-2\)) and consider a p-corrupted clique. Using Bernstein's concentration inequality for sums of Bernoulli binary random variables [22] ("coin flips"), it can be shown that with high probability (i.e., approaching 1 as \(v \to\infty\)) an edge in the clique has, on average, at least 2k neighboring edges (see Corollary 1).
This gives the fixed-point requirement from (2):
$$ 2kx + o(x\sqrt{k}\ln k) > z. $$
On the other hand, a non-clique edge sharing a vertex with the clique has \(k(1+2p)\) neighbors at most, on average. Therefore, for a k-clique to be a robust fixed-point, this forces again from (2):
$$ k(1+2p)x + o(x\sqrt{k}\ln k) \leq z, $$
and any other edges will disappear when this holds. (\(o(\cdot)\) is "little-o" notation.)
It follows that the optimal setting (7) for x minimizing probability flow gives robust storage (with a single parallel dynamics update) of all k-cliques for \(p < 1/4\). This proves Theorem 1 (see Sect. 5 for the full mathematical details).
It is possible to do better than robustness index \(\alpha= 1/4\) by setting \(x = \frac{1}{2} [\frac{z}{2k} + \frac{z}{k(1+2p)} ] = \frac{z(3+2p)}{4k(1 + 2p)}\), which satisfies the above fixed-point requirements with probability approaching 1 for any fixed \(p < 1/2\) and increasing k. We have thus also demonstrated:
Theorem 2

There is a family of Hopfield networks on \(n = {2k \choose 2}\) nodes that robustly store \({2k \choose k} \sim\frac{2^{\sqrt{2n} + \frac {1}{4}}}{n^{1/4} \sqrt{\pi}}\) binary patterns with maximal robustness index \(\alpha= 1/2\).
In Fig. 4, we show robust storage of the (\({\approx}10^{37}\)) 64-cliques in graphs on 128 vertices using three \((x,y,z)\) parameter specializations designed here.
Robust exponential storage in networks of McCulloch–Pitts neurons. Error-correction performance of Hopfield networks storing all 64-cliques in \(v=128\) vertex graphs using a fully connected 8128-bit network minimizing probability flow (5) on \(50\text{,}000\) random 64-cliques (light gray line), a sparsely connected \((x, 0, 1)\) network with large deviation setting \(x = \frac{3+2p}{4k(1 + 2p)}\) and \(p=1/4\) (gray), or a sparsely connected MPF theoretical optimum (7) (black). Over 10 trials, 100 64-cliques chosen uniformly at random were p-corrupted for different p and then dynamics were converged initialized at noisy cliques. The plot shows the fraction of cliques completely recovered vs. pattern corruption p (standard deviation error bars). Dotted lines are average number of bits in a pattern retrieved correctly after converging network dynamics
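A small demonstration in the spirit of Fig. 4 (our sketch, with k far smaller than in the figure, so full recovery is merely typical rather than guaranteed):

```python
k, p, trials = 16, 0.1, 100
v = 2 * k
x = (3 + 2 * p) / (4 * k * (1 + 2 * p))   # large deviation setting of x
W, theta = symmetric_network(v, x, 0.0, 1.0)
clique = clique_state(range(k), v)
rng = np.random.default_rng(0)
hits = sum(
    np.array_equal(converge(W, theta, p_corrupt(clique, p, rng)), clique)
    for _ in range(trials)
)
print(hits / trials)   # fraction of corrupted cliques fully recovered
```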
A natural question is whether we can store a range of cliques using the same architecture. In fact, we show here that there is a network storing nearly all cliques.
Theorem 3

For large v, there is a Hopfield network on \(n = {v \choose 2}\) nodes that stores all \({\sim}2^{v}(1 - e^{-Cv})\) cliques of size k as fixed-points, where k is in the range:
$$m = \frac{1}{D} v \leq k \leq v = M, $$
for constants \(C \approx0.43\), \(D \approx13.93\). Moreover, this is the largest possible range of k for any such Hopfield network.
Our next result demonstrates that even robustness to vanishingly small amounts of noise is nontrivial (see Sect. 5.5 for the proof).
Theorem 4

Hopfield–Platt networks storing all permutations will not robustly store derangements (permutations without fixed-points).
As a final application to biologically plausible learning theory, we derive a synaptic update rule for adapting weights and thresholds in these networks. Given a training pattern x, the minimum probability flow (MPF) learning rule moves weights and thresholds in the direction of steepest descent of the probability flow objective function (5) evaluated at \(X = \{\mathbf{x}\}\). Specifically, for \(e \neq f\) the rule takes the form:
$$ \begin{aligned} \Delta W_{ef} & \propto- x_{f} \Delta x_{e} \exp(-\Delta E_{e}/2), \\ \Delta\theta_{e} & \propto\Delta x_{e} \exp(-\Delta E_{e}/2). \end{aligned} $$
After learning, the weights between neurons e and f are symmetrized to \(\frac{1}{2}(W_{ef} + W_{fe})\), which preserves the energy function and guarantees that dynamics terminates in fixed-point attractors. As update directions (8) descend the gradient of an infinitely differentiable convex function, learning rules based on them have good convergence rates [23].
Let us examine the (symmetrized) learning rule (8) more closely. Suppose first that \(x_{e} = 0\) so that \(\Delta x_{e} = 0\) or 1 (depending on the sign of \(I_{e} - \theta_{e}\)). When \(\Delta x_{e} = 0\), weight \(W_{ef}\) does not change; on the other hand, when \(\Delta x_{e} = 1\), the weight decreases if \(x_{f} = 1\) (and stays the same, otherwise). If instead \(x_{e} = 1\), then \(W_{ef}\) changes only if \(\Delta x_{e} = -1\) or \(\Delta x_{f} = -1\), in which case the update is positive when at least one of \(x_{e}\), \(x_{f}\) is 1 (and zero, otherwise). In particular, either (i) weights do not change (when the pattern is memorized or there is no neural activity) or (ii) when neurons e and f are both active in (8), weights increase, while when they are different, they decrease, consistent with Hebb's postulate [9], a basic hypothesis about neural synaptic plasticity. In fact, approximating the exponential function with unity in (8) gives a variant of classical outer-product rule (OPR) learning. Note also that adaptation (8) is local in that updating weights between 2 neurons only requires their current state/threshold and feedforward input from nearby active neurons.
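Read literally, rule (8) with the dynamics-induced \(\Delta x_{e}\) gives the following online update (a sketch; the learning rate and the per-step symmetrization schedule are our choices):

```python
def mpf_learning_step(W, theta, x, lr=0.1):
    """One MPF update (8) on a single training pattern x, followed by
    symmetrization of the weights (which preserves the energy)."""
    I = W @ x
    new_x = (I > theta).astype(int)   # one parallel application of (2)
    dx = new_x - x                    # Delta x_e in {-1, 0, +1}
    dE = -dx * (I - theta)            # Delta E_e, Eq. (3)
    g = dx * np.exp(-dE / 2)
    W = W - lr * np.outer(g, x)       # Delta W_ef ~ -x_f Dx_e exp(-DE_e/2)
    theta = theta + lr * g            # Delta theta_e ~ Dx_e exp(-DE_e/2)
    W = 0.5 * (W + W.T)
    np.fill_diagonal(W, 0.0)
    return W, theta
```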
The biologically inspired networks introduced in this work constitute a new nonlinear error-correcting scheme that is simple to implement, parallelizable, and achieves the best asymptotic error tolerance possible [24] for low-density codes over a binary symmetric channel (\(\alpha= 1/2\) in definition (4)). There have been several other approaches to optimal error-correcting codes derived from a statistical physics perspective; for a comprehensive account, we refer the reader to [25]. See also [26–29] for related work on neural architectures with large memory. Additionally, for a recent review of memory principles in computational neuroscience theory more broadly, we refer the reader to the extensive high level summary [30].
Although we have focused on minimizing probability flow to learn parameters in our discrete neural networks, several other strategies exist. For instance, one could maximize the (Bayesian) likelihood of cliques given network parameters, though any strategy involving a partition function over graphs might run into challenging algorithmic complexity issues [31]. Contrastive divergence [17] is another popular method to estimate parameters in discrete maximum entropy models. While this approach avoids the partition function, it requires a nontrivial sampling procedure that precludes exact determination of optimal parameters.
Early work in the theory of neural computation put forward a framework for neurally plausible computation of (combinatorial) optimization tasks [32]. Here, we add another task to this list by interpreting error-correction by a recurrent neural network in the language of computational graph theory. A basic challenge in this field is to design efficient algorithms that recover structures imperfectly hidden inside of others; in the case of finding fully connected subgraphs, this is called the "Hidden clique problem" [33]. The essential goal of this task is to find a single clique that has been planted in a graph by adding (or removing) edges at random.
Phrased in this language, we have discovered discrete recurrent neural networks that learn to use their cooperative McCulloch–Pitts dynamics to solve hidden clique problems efficiently. For example, in Fig. 5 we show the adjacency matrices of three corrupted 64-cliques on \(v=128\) vertices returning to their original configuration by one iteration of the network dynamics through all neurons. As a practical matter, it is possible to use networks robustly storing k-cliques for detecting highly connected subgraphs with about k neighbors in large graphs. In this case, error-correction serves as a synchrony finder with free parameter k, similar to how "K-means" is a standard unsupervised approach to decompose data into K clusters.
Learning to solve \({\approx}10^{37}\) "Hidden clique" problems. (Bottom) Adjacency matrices of three 64-cliques on \(v = 128\) vertices. (Top) Adjacency matrices of noisy versions of the cliques having, on average, 1219 bits corrupted out of \(n = 8128\) from the original. Converging dynamics of a symmetric 3-parameter network \((x, y, z) = (0.0107, 0, 1)\) with minimum probability flow initialized at these noisy cliques uncovers the originals
In the direction of applications to basic neuroscience, we comment that it has been proposed that co-activation of groups of neurons—that is, synchronizing them—is a design principle in the brain (see, e.g., [34–36]). If this were true, then perhaps the networks designed here can help discover this phenomenon from spike data. Moreover, our networks also then provide an abstract model for how such coordination might be implemented, sustained, and error-corrected in nervous tissue.
As a final technical remark about our networks, note that our synapses are actually discrete since the probability flow is minimized at a synaptic ratio equaling a rational number. Thus, our work adds to the literature on the capacity of neural networks with discrete synapses (see, e.g., [26, 37–40]), all of which build upon early classical work with associative memory systems (see, e.g., [20, 41]).
Mathematical Details
We provide the remaining details for the proofs of mathematical statements appearing earlier in the text.
Symmetric 3-Parameter \((x,y,z)\) Networks
The first step of our construction is to exploit symmetry in the following set of linear inequalities:
$$ E_{\mathbf{c}} - E_{\mathbf{c'}} < 0, $$
where c runs over k-cliques and \(\mathbf{c}'\) over vectors differing from c by a single bit flip. The space of solutions to (9) is the convex polyhedral cone of networks having each clique as a strict local minimum of the energy function, and thus a fixed-point of the dynamics.
The permutations \(P \in P_{V}\) of the vertices V act on a network by permuting the rows/columns of the weight matrix (\(\mathbf{W} \mapsto P \mathbf{W}P^{\top}\)) and thresholds (\(\theta\mapsto P \theta\)), and this action on a network satisfying property (9) preserves that property. Consider the average \((\mathbf{\overline{W}}, \bar{\theta})\) of a network over the group \(P_{V}\): \(\mathbf{\overline {W}} := \frac{1}{v!}\sum_{P \in P_{V}}P \mathbf{W} P^{\top}\), \(\bar{\theta } := \frac{1}{v!}\sum_{P \in P_{V}}P \theta\), and note that if \((\mathbf {W}, \theta)\) satisfies (9) then so does the highly symmetric object \((\mathbf{\overline{W}}, \bar{\theta})\). To characterize \((\mathbf{\overline{W}}, \bar{\theta})\), observe that \(P \mathbf{\overline{W}} P^{\top}= \mathbf{\overline{W}}\) and \(P \bar {\theta} = \bar{\theta}\) for all \(P \in P_{V}\).
These strong symmetries imply there are x, y, z such that \(\bar{\theta} = (z, \ldots, z) \in\mathbb{R}^{n}\) and for each pair \(e \neq f\) of all possible edges:
$$ \overline{W}_{ef} = \textstyle\begin{cases} x & \mbox{if } |e \cap f| = 1, \\ y & \mbox{if } |e \cap f| = 0, \end{cases} $$
where \(|e \cap f|\) is the number of vertices that e and f share.
Our next demonstration is an exact setting for weights in these Hopfield networks.
Exponential Storage
For an integer \(r \geq0\), we say that state \(\mathbf{x}^{\ast}\) is r-stable if it is an attractor for all states with Hamming distance at most r from \(\mathbf{x}^{\ast}\). Thus, if a state \(\mathbf{x}^{\ast}\) is r-stably stored, the network is guaranteed to converge to \(\mathbf {x}^{\ast}\) when exposed to any corrupted version not more than r bit flips away.
For positive integers k and r, is there a Hopfield network on \(n = \binom{2k}{2}\) nodes storing all k-cliques r-stably? We necessarily have \(r \leq\lfloor k/2 \rfloor\), since \(2(\lfloor k/2 \rfloor+1)\) is greater than or equal to the Hamming distance between two k-cliques that share a \((k-1)\)-subclique. In fact, for any \(k > 3\), this upper bound is achievable by a sparsely connected three-parameter network.
Lemma 1
There exists a family of three-parameter Hopfield networks with \(z = 1\), \(y = 0\) storing all k-cliques as \(\lfloor k/2 \rfloor\)-stable states.
The proof relies on the following lemma, which gives the precise condition for the three-parameter Hopfield network to store k-cliques as r-stable states for fixed r.
Lemma 2

Fix \(k > 3\) and \(0 \leq r < k\). The Hopfield network \((\mathbf{W}(x,y), \theta(z))\) stores all k-cliques as r-stable states if and only if the parameters \(x,y,z \in\mathbb {R}\) satisfy
$$ M \cdot \left [ \begin{matrix} x \\ y \end{matrix} \right ] < \left [ \begin{matrix} -2 \\ -2 \\ 2 \\ 2 \end{matrix} \right ] z, $$
$$M = \left [ \begin{matrix} 4(2-k)+2r & (2-k)(k-3) \\ 4(2-k) & (2-k)(k-3)-2r \\ 2(k-1)+2r & (k-1)(k-2) \\ 2(k-1) & (k-1)(k-2)-2r \end{matrix} \right ]. $$
Furthermore, a pattern within Hamming distance r of a k-clique converges after one iteration of the dynamics.
Proof of Lemma 2

For fixed r and k-clique x, there are \(2^{r}\) possible patterns within Hamming distance r of x. Each of these patterns defines a pair of linear inequalities on the parameters \(x,y,z\). However, only the inequalities from the following two extreme cases are active constraints. All the other inequalities are convex combinations of these.
1. r edges in the clique with a common node i are removed.
2. r edges are added to a node i not in the clique.
In the first case, there are two types of edges at risk of being mislabeled. The first are those of the form ij for all nodes j in the clique. Such an edge has \(2(k-2)-r\) neighbors and \({k-2 \choose 2}\) non-neighbors. Thus, each such edge will correctly be labeled 1 after one network update if and only if x, y, and z satisfy
$$ 2(2k-r-4)x + (k-2) (k-3)y > 2z. $$
The other type consists of edges of the form \(\bar{i}j\) for all nodes \(\bar{i} \neq i\) in the clique, and j not in the clique. Assuming \(r < k-1\), such an edge has at most \(k-1\) neighbors and \({k-1 \choose 2} - r\) non-neighbors. Thus, each such edge will be correctly labeled 0 if and only if
$$ 2(k-1)x + \bigl((k-1) (k-2)-2r\bigr)y < 2z. $$
Rearranging Eqs. (10) and (11) yields the first two rows of the matrix in the lemma. A similar argument applies to the second case, giving the last two inequalities.
From the derivation, it follows that if a pattern is within Hamming distance r of a k-clique, then all spurious edges are immediately deleted by case 1, all missing edges are immediately added by case 2, and thus the clique is recovered in precisely one iteration of the network dynamics. □
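The condition in Lemma 2 is a finite system of linear inequalities, so candidate parameters are easy to test numerically. A minimal sketch (assuming the matrix M and right-hand side exactly as stated above; the function name is ours):

```python
import numpy as np

def stores_cliques(x, y, z, k, r):
    """Check the Lemma 2 condition M @ (x, y) < (-2, -2, 2, 2) * z."""
    M = np.array([
        [4 * (2 - k) + 2 * r, (2 - k) * (k - 3)],
        [4 * (2 - k),         (2 - k) * (k - 3) - 2 * r],
        [2 * (k - 1) + 2 * r, (k - 1) * (k - 2)],
        [2 * (k - 1),         (k - 1) * (k - 2) - 2 * r],
    ])
    rhs = np.array([-2, -2, 2, 2]) * z
    return bool(np.all(M @ np.array([x, y]) < rhs))

# For k = 10, r = 2, y = 0, z = 1 the binding rows reduce to 1/14 < x < 1/11,
# so e.g. stores_cliques(0.08, 0.0, 1.0, 10, 2) returns True.
```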
Proof of Lemma 1
The matrix inequalities in Lemma 2 define a cone in \(\mathbb {R}^{3}\), and the cases \(z = 1\) or \(z = 0\) correspond to two separate components of this cone. For the proof of Theorem 1 in the main article, we use the cone with \(z = 1\). We further assume \(y = 0\) to achieve a sparsely connected matrix W. In this case, the second and fourth constraints are dominated by the first and third. Thus, we need x that solves
$$\frac{1}{2(k-1)-r} < x < \frac{1}{k-1+r}. $$
There exists such a solution if and only if
$$ 2(k-1)-r > k-1+r\quad \Leftrightarrow\quad k > 2r+1. $$
The above inequality is feasible if and only if \(r \leq\lfloor k/2 \rfloor\). □
Proofs of Theorems 1, 2
Fix \(y = 0\) and \(z = 1\). We now tune x such that asymptotically the α-robustness of our set of Hopfield networks storing k-cliques tends to \(1/2\) as \(n \to\infty\). By symmetry, it is sufficient to prove robustness for one fixed k-clique x; for instance, the one with vertices \(\{1, \ldots, k\}\). For \(0 < p < \frac{1}{2}\), let \(\mathbf{x}_{p}\) be the p-corruption of x. For each node \(i \in\{1, \ldots, 2k\}\), let \(i_{\mathrm{in}}, i_{\mathrm{out}}\) denote the number of edges from i to other clique and non-clique nodes, respectively. With an abuse of notation, we write \(i \in\mathbf{x}\) to mean a vertex i in the clique; that is, \(i \in\{1, \ldots, k\}\). We need the following inequality originally due to Bernstein from 1924.
Proposition 1 (Bernstein's inequality [22])
Let \(S_{i}\) be independent Bernoulli random variables taking values +1 and −1, each with probability \(1/2\). For any \(\varepsilon > 0\), the following holds:
$$\mathbb {P} \Biggl( \frac{1}{n} \sum_{i=1}^{n} S_{i}> \varepsilon \Biggr) \leq \exp \biggl( -\frac{n\varepsilon ^{2}}{2+2\varepsilon /3} \biggr). $$
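Since the bound holds for finite n, it can be checked directly by simulation; a quick sketch (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, trials = 1000, 0.1, 20000
S = rng.choice([-1, 1], size=(trials, n))     # i.i.d. signs, P(+1) = 1/2
empirical = np.mean(S.mean(axis=1) > eps)     # estimated tail probability
bound = np.exp(-n * eps**2 / (2 + 2 * eps / 3))
print(empirical, bound)                       # empirical ~8e-4 sits below ~7.9e-3
```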
The following fact is a fairly direct consequence of Proposition 1.
Lemma 3
Let Y be an \(n \times n\) symmetric matrix with zero diagonal, \(Y_{ij} \stackrel{\mathrm{i.i.d.}}{\sim} \operatorname{Bernoulli}(p)\). For each \(i = 1, \ldots, n\), let \(Y_{i} = \sum_{j}Y_{ij}\) be the ith row sum. Let \(M_{n} = \max_{1 \leq i \leq n}Y_{i}\) and \(m_{n} = \min_{1 \leq i \leq n} Y_{i}\). Then, for any constant \(c > 0\), as \(n \to\infty\), we have
$$\mathbb{P}\bigl( \vert m_{n} - np \vert > c\sqrt{n}\ln n\bigr) \to0 $$
and
$$\mathbb{P}\bigl( \vert M_{n} - np \vert > c\sqrt{n}\ln n\bigr) \to0. $$
In particular, \(|m_{n} - np|, |M_{n} - np| = o(\sqrt{n}\ln n)\).
Fix \(c > 0\). As a direct corollary of Bernstein's inequality, for each i and for any \(\varepsilon > 0\), we have
$$\mathbb {P}\bigl(Y_{i} - np > n\varepsilon - (p + \varepsilon )\bigr) \leq\exp \biggl( -\frac {(n-1)\varepsilon ^{2}}{2+2\varepsilon /3} \biggr). $$
It follows that
$$\mathbb {P}(Y_{i} - np > n\varepsilon ) \leq\exp \biggl( - \frac{n\varepsilon ^{2}}{4+4\varepsilon /3} \biggr), $$
and thus from a union bound with \(\varepsilon = \frac{c\ln n}{\sqrt{n}}\), we have
$$\begin{aligned} \mathbb {P}\Bigl(\max_{i}Y_{i} - np > c\sqrt{n}\ln n\Bigr) &\leq \exp \biggl( -\frac {n\varepsilon ^{2}}{4+4\varepsilon /3} + \ln n \biggr) \\ & \leq \exp \biggl(-\frac{c^{2} \ln^{2} n}{4 + 4c} + \ln n \biggr). \end{aligned}$$
Since this last bound converges to 0 with \(n \to\infty\), we have proved the claim for \(M_{n}\). Since \(Y_{i}\) is symmetric about np, a similar inequality holds for \(m_{n}\). □
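A direct simulation of the row-sum extremes illustrates the claim; in the sketch below (our own, with arbitrary n and p) the deviations \(|M_{n} - np|\) and \(|m_{n} - np|\) fall well inside the \(\sqrt{n}\ln n\) window:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 0.3
upper = np.triu(rng.random((n, n)) < p, 1).astype(int)
Y = upper + upper.T                     # symmetric Bernoulli(p), zero diagonal
rows = Y.sum(axis=1)
dev = max(abs(rows.max() - n * p), abs(rows.min() - n * p))
print(dev, np.sqrt(n) * np.log(n))      # e.g. dev ~ 25-30 vs window ~ 120
```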
Corollary 1
Let \(M_{\mathrm{in}} = \max_{i \in\mathbf{x}} i_{\mathrm{in}}\), \(m_{\mathrm{in}} = \min_{i \in \mathbf{x}} i_{\mathrm{in}}\), \(M_{\mathrm{out}} = \max_{i \notin\mathbf{x}} i_{\mathrm{out}}\), \(m_{\mathrm{out}} = \min_{i \notin\mathbf{x}} i_{\mathrm{out}}\), and \(M_{\mathrm{between}} = \max_{i \notin\mathbf{x}} i_{\mathrm{in}}\). Then \(M_{\mathrm{in}} - k(1-p)\), \(m_{\mathrm{in}} - k(1-p)\), \(M_{\mathrm{out}} - kp\), \(m_{\mathrm{out}} - kp\), and \(M_{\mathrm{between}} - kp\) are all of order \(o(\sqrt{k}\ln k)\) as \(k \to\infty\) almost surely.
Proofs of Theorems 1, 2 (robustness)
Let \(N(e)\) be the number of neighbors of edge e. For each e in the clique:
$$N(e) \geq2m_{\mathrm{in}} + 2m_{\mathrm{out}} \sim2k + o(\sqrt{k}\ln k),\quad \mbox{w.h.p. (with high probability)}. $$
To guarantee that all edges e in the clique are labeled 1 after one dynamics update, we need \(x > \frac{1}{N(e)}\); that is,
$$ x > \frac{1}{2k + o(\sqrt{k}\ln k)}. $$
If f is an edge with exactly one clique vertex, then we have
$$\begin{aligned} N(f) & \leq M_{\mathrm{in}} + M_{\mathrm{out}} +2M_{\mathrm{between}} \\ &\sim k(1+2p) + o(\sqrt{k}\ln k),\quad \mbox{w.h.p}. \end{aligned}$$
To guarantee that \(\mathbf{x}_{f} = 0\) for all such edges f after one iteration of the dynamics, we need \(x < \frac{1}{N(f)}\); that is,
$$ x < \frac{1}{k(1+2p) + o(\sqrt{k}\ln k)}. $$
In particular, if \(p = p(k) \sim\frac{1}{2} - k^{\delta-1/2}\) for some small \(\delta\in(0, 1/2)\), then taking \(x = x(k) = \frac{1}{2} [\frac{1}{2k} + \frac{1}{k(1+2p)} ]\) guarantees that for large k the two inequalities (13) and (14) are simultaneously satisfied. In this case, \(\lim_{k\to \infty}p(k) = 1/2\), and thus the family of two-parameter Hopfield networks with \(x(k)\), \(y = 0\), \(z = 1\) has robustness index \(\alpha= 1/2\). □
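Ignoring the \(o(\sqrt{k}\ln k)\) corrections, the window (13)-(14) and the midpoint choice of \(x(k)\) can be tabulated directly; a leading-order sketch (helper name ours):

```python
def x_window(k, delta=0.1):
    """Leading-order version of (13)-(14): 1/(2k) < x < 1/(k(1+2p))."""
    p = 0.5 - k ** (delta - 0.5)              # p(k) ~ 1/2 - k^(delta - 1/2)
    lo, hi = 1.0 / (2 * k), 1.0 / (k * (1 + 2 * p))
    x = 0.5 * (lo + hi)                       # the midpoint used in the proof
    return p, lo, x, hi

for k in (10, 100, 1000):
    print(k, x_window(k))                     # lo < x < hi for every k; p -> 1/2
```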
Clique Range Storage
In this section, we give precise conditions for the existence of a Hopfield network on \(\binom{v}{2}\) nodes that stores all k-cliques for k in an interval \([m,M]\), \(m \leq M \leq v\). We do not address the issue of robustness as the qualitative trade-off is clear: the more memories the network is required to store, the less robust it is. This trade-off can be analyzed by large deviation principles as in Theorem 2.
Lemma 4
Fix m such that \(3 \leq m < v\). For \(M \geq m\), there exists a Hopfield network on \(\binom{v}{2}\) nodes that stores all k-cliques in the range \([m,M]\) if and only if M satisfies the inequality \(x_{M} - x_{m} < 0\), where
$$\begin{aligned} x_{m} &= \frac{-(4m - \sqrt{12m^{2} - 52m + 57} - 7)}{2(m^{2} - m - 2)}, \\ x_{M} &= \frac{-(4M + \sqrt{12M^{2} - 52M + 57} - 7)}{2(M^{2}-M-2)}. \end{aligned}$$
Fix \(z = 1/2\) and \(r = 0\) in Lemma 2. (We do not impose the constraint \(y = 0\).) Then the cone defined by the inequalities in Lemma 2 is in bijection with the polyhedron \(\mathcal{I}_{k} \subseteq\mathbb{R}^{2}\) cut out by the inequalities:
$$\begin{aligned} 4(k-2)x + (k-2) (k-3)y - 1 &> 0, \\ 2(k-1)x + (k-1) (k-2)y - 1 &< 0. \end{aligned}$$
Let \(R_{k}\) be the line \(4(k-2)x + (k-2)(k-3)y - 1 = 0\), and \(B_{k}\) be the line \(2(k-1)x + (k-1)(k-2)y - 1 = 0\). By symmetry, there exists a Hopfield network that stores all k-cliques in the range \([m,M]\) if and only if \(\bigcap_{k=m}^{M}\mathcal{I}_{k} \neq\emptyset\). For a point \(P \in\mathbb{R}^{2}\), write \(x(P)\) for its x-coordinate. Note that, for \(k \geq3\), the points \(B_{k} \cap B_{k+1}\) lie on the following curve Q implicitly parametrized by k:
$$Q := \biggl\{ \biggl(\frac{1}{k-1}, \frac{-1}{(k-1)(k-2)} \biggr): k \geq3 \biggr\} . $$
When the polytope \(\bigcap_{k=m}^{M}\mathcal{I}_{k}\) is nonempty, its vertices are the following points: \(R_{M} \cap R_{m}\), \(R_{M} \cap B_{m}\), \(B_{k} \cap B_{k+1}\) for \(m \leq k \leq M-1\), and the points \(B_{M} \cap R_{m}\). This defines a nonempty convex polytope if and only if
$$x_{M} := x(Q \cap R_{M}) < x_{m} := x(Q \cap R_{m}). $$
Direct computation gives the formulas for \(x_{m}\), \(x_{M}\) in the lemma statement. See Fig. 6 for a visualization of the constraints of the feasible region.
Fig. 6 Feasible exponential storage. (a) The shaded region is the feasible polytope for network parameters giving clique storage for \(5 \leq k \leq15\). Black points are its vertices; the red \(R_{k}\) and blue \(B_{k}\) lines are linear constraints. (b) Lines \(R_{k}\) (red) and \(B_{k}\) (blue) for \(1000 \leq k \leq5500\). Note the appearance of the smooth curve Q enveloping the family \(B_{k}\) in the figure.
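The feasibility condition of Lemma 4 is explicit enough to scan numerically. A small sketch (helper names ours) computes \(x_{m}\), \(x_{M}\) from the formulas above and returns the largest feasible M for a given m:

```python
import math

def x_lo(m):   # x_m in Lemma 4
    return -(4 * m - math.sqrt(12 * m**2 - 52 * m + 57) - 7) / (2 * (m**2 - m - 2))

def x_hi(M):   # x_M in Lemma 4
    return -(4 * M + math.sqrt(12 * M**2 - 52 * M + 57) - 7) / (2 * (M**2 - M - 2))

def max_M(m):
    """Largest M >= m with x_M - x_m < 0, by direct scan."""
    M = m
    while x_hi(M + 1) - x_lo(m) < 0:
        M += 1
    return M

print(max_M(5))   # compare with the asymptotic ratio D = (2+sqrt(3))/(2-sqrt(3)) ~ 13.9
```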
Fixing the number of nodes and optimizing the range \(M - m\) in Lemma 4, we obtain Theorem 3 from Sect. 3.
Proof of Theorem 3
From Lemma 4, for large m, M, and v, we have the approximations \(x_{m} \approx\frac{\sqrt{12}-4}{2m}\), \(x_{M} \approx\frac {-\sqrt{12}-4}{2M}\). Hence \(x_{M} - x_{m} < 0\) when \(M \lesssim\frac {2+\sqrt{3}}{2-\sqrt{3}}m = Dm\). Asymptotically for large v, the most cliques are stored when \(M = Dm\) and \([m,M]\) contains \(v/2\). Consider \(m = \beta v\) so that \(v \geq M = D\beta v \geq v/2\), and thus \(1/D \geq\beta\geq1/(2D)\). Next, set \(u = v/2 - m = v(1/2-\beta)\) and \(w = M - v/2 = v(D\beta- 1/2)\) so that storing the most cliques becomes the problem of maximizing over admissible β the quantity:
$$\max\{u,w\} = \max\bigl\{ v(1/2-\beta),v(D\beta-1/2)\bigr\} . $$
One can now check that \(\beta= 1/D\) gives the best value, producing the range in the statement of the theorem.
Next, note that \(\binom{v}{k}2^{-v}\) is the fraction of k-cliques among all cliques on v vertices, which is also the probability that a \(\operatorname{Binom}(v, 1/2)\) variable equals k. For large v, approximating this variable with a normal distribution and then using Mills' ratio to bound its tail c.d.f. Φ, we see that the proportion of cliques storable tends to
$$1 - \varPhi \biggl(\frac{D-1}{D} \sqrt{v} \biggr) \approx1 - \exp(-Cv), $$
for some constant \(C \approx\frac{(D-1)^{2}}{2D^{2}} \approx0.43\). □
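The two constants are immediate to verify numerically:

```python
import math

D = (2 + math.sqrt(3)) / (2 - math.sqrt(3))   # D ~ 13.93
C = (D - 1) ** 2 / (2 * D ** 2)               # C ~ 0.43, as stated
print(D, C)
```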
Hopfield–Platt Networks
We prove the claim in the main text that Hopfield–Platt networks [13] storing all permutations on \(\{1,\ldots,k\}\) will not robustly store derangements (permutations without fixed-points). For large k, the fraction of permutations that are derangements is known to approach \(e^{-1} \approx 0.37\).
Fix a derangement σ on \(\{1,\ldots,k\}\), represented as a binary vector x in \(\{0,1\}^{n}\) for \(n = k(k-1)\). For each ordered pair \((i,j)\), \(i \neq j\), \(j \neq\sigma(i)\), we construct a pattern \(\mathbf{y}_{ij}\) that differs from x by exactly two bit flips:
Add the edge ij.
Remove the edge \(i\sigma(i)\).
There are \(k(k-2)\) such pairs \((i,j)\), and thus \(k(k-2)\) different patterns \(\mathbf{y}_{ij}\). For each such pattern, we flip two more bits to obtain a new permutation \(\mathbf{x}^{ij}\) as follows:
Remove the edge \(\sigma^{-1}(j)j\).
Add the edge \(\sigma^{-1}(j)\sigma(i)\).
It is easy to see that \(\mathbf{x}^{ij}\) is a permutation on k letters with exactly two cycles determined by \((i,j)\). Call the set of edges modified the critical edges of the pair \((i,j)\). Note that \(\mathbf{x}^{ij}\) are all distinct and have disjoint critical edges.
Each \(\mathbf{y}_{ij}\) is exactly two bit flips away from both x and \(\mathbf{x}^{ij}\), each a permutation on k letters. Hence, starting from \(\mathbf{y}_{ij}\), no binary Hopfield network storing all permutations can always correctly recover the original state. In other words, for a binary Hopfield network, \(\mathbf{y}_{ij}\) is an indistinguishable realization of a corrupted version of x and of \(\mathbf{x}^{ij}\).
We now prove that, for each derangement x, with probability at least \(1 - (1-4p^{2})^{n/2}\), its p-corruption \(\mathbf{x}_{p}\) is indistinguishable from the p-corruption of some other permutation. This implies the statement in the theorem.
For each pair \((i,j)\) as above, recall that \(\mathbf{x}_{p}\) and \(\mathbf {x}^{ij}_{p}\) are two random variables in \(\{0,1\}^{n}\) obtained by flipping each edge of x (resp. \(\mathbf{x}^{ij}\)) independently with probability p. We construct a coupling between them as follows. Define the random variable \(\mathbf{x}'_{p}\) via:
For each non-critical edge, flip this edge in x (forming \(\mathbf{x}'_{p}\)) and in \(\mathbf{x}^{ij}\) (forming \(\mathbf{x}^{ij}_{p}\)) using the same \(\operatorname{Bernoulli}(p)\) draw.
For each critical edge, flip it in the two patterns using independent \(\operatorname{Bernoulli}(p)\) draws.
Then \(\mathbf{x}'_{p} \stackrel{d}{=} \mathbf{x}_{p}\), and \(\mathbf{x}'_{p}\) and \(\mathbf{x}^{ij}_{p}\) differ in distribution only on the four critical edges. Their marginal distributions on these four edges are two discrete variables on \(2^{4}\) states, with total variation distance \(1 - 4(1-p)^{2}p^{2}\). Thus, there exists a random variable \(\mathbf{x}''_{p}\) such that \(\mathbf{x}''_{p} \stackrel{d}{=} \mathbf{x}'_{p} \stackrel{d}{=} \mathbf{x}_{p}\), and
$$\mathbb{P}\bigl(\mathbf{x}''_{p} = \mathbf{x}^{ij}_{p}\bigr) = 4(1-p)^{2}p^{2}. $$
In other words, given a realization of \(\mathbf{x}^{ij}_{p}\), with probability \(4(1-p)^{2}p^{2}\), this is equal to a realization from the distribution of \(\mathbf{x}_{p}\), and therefore no binary Hopfield network storing both \(\mathbf{x}^{ij}\) and x can correctly recover the original state from such an input. An indistinguishable realization occurs when two of the four critical edges are flipped in a certain combination. For fixed x, there are \(k(k-2)\) such \(\mathbf{x}^{ij}\) where the critical edges are disjoint. Thus, the probability of \(\mathbf{x}_{p}\) being an indistinguishable realization from a realization of one of the \(\mathbf{x}^{ij}\) is at least
$$1 - \bigl(1 - 4(1-p)^{2}p^{2}\bigr)^{k(k-2)} > 1 - \bigl(1-4p^{2}\bigr)^{n/2}, $$
completing the proof of Theorem 4. □
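The per-pair coupling probability and the final bound are elementary to evaluate; for instance (values illustrative only):

```python
p, k = 0.1, 20
n = k * (k - 1)
per_pair = 4 * (1 - p) ** 2 * p ** 2           # ~0.0324 per pair (i, j)
exact = 1 - (1 - per_pair) ** (k * (k - 2))    # ~0.999993
bound = 1 - (1 - 4 * p ** 2) ** (n / 2)        # ~0.99957
print(per_pair, exact, bound)                  # here the exact probability exceeds the bound
```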
Examples of Clique Storage
To illustrate the effect of two different noise levels on hidden clique finding performance of the networks from Fig. 4, we present examples in Fig. 7 of multiple networks acting with their dynamics on the same two noisy inputs. Notice that non-clique fixed-points appear, and it is natural to ask whether a complete characterization of the fixed-point landscape is possible. Intuitively, our network performs a local, weighted degree count at each edge of the underlying graph and attempts to remove edges with too few neighbors, while adding in edges that connect nodes with high degrees. Thus, resulting fixed-points (of the dynamics) end up being graphs such as cliques and stars. Beyond this intuition, however, we do not have a way to characterize all fixed-points of our network in general.
Fig. 7 Examples of robustness for networks in Fig. 4 of the main text with \(v = 128\), \(k = 64\), \(n = 8128\). Adjacency matrices of noisy cliques (in red) have 1219 (top) or 1625 (bottom) bits corrupted out of 8128 (\(p=0.15 / 0.2\)) from the original 64-clique (in green). Images show the result of the dynamics applied to these noisy patterns using networks with all-to-all MPF parameters after L-BFGS training on \(50\text{,}000\) 64-cliques (\({\approx}2\mathrm{e}{-}31\%\) of all 64-cliques), large deviation parameters \((x, y, z) = (0.0091, 0, 1)\), or MPF theory parameters \((x, y, z) = (0.0107, 0, 1)\) from Eq. (7) in the main text.
In fact, this is a very difficult problem in discrete geometry, and except for toy networks, we believe that this has never been done. Geometrically, the set of all states of a binary Hopfield network with n neurons is the n-hypercube \(\{0,1\}^{n}\). Being a fixed-point can be characterized by the energy function becoming larger when one bit is flipped. As the energy function is quadratic, for each of the n bits flipped, this creates a quadratic inequality. Thus, the set of all fixed-point attractors in a binary Hopfield network is the n-hypercube intersected with n quadratic inequalities in n variables. In theory, one could enumerate such sets for small n; however, characterizing them all is challenging, even for the highly symmetric family of weight matrices that we propose here.
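For very small n this enumeration is feasible. A brute-force sketch (our own; it assumes the standard threshold rule \(x_{e} = 1\) iff \(\sum_{f} W_{ef}x_{f} > \theta_{e}\), and the main text's exact tie-breaking convention may differ):

```python
import itertools
import numpy as np

def fixed_points(W, theta):
    """Enumerate all states of a small binary Hopfield network that are
    unchanged by every single-neuron update; exponential in n, so n <~ 20."""
    n = len(theta)
    stable = []
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        if np.all((W @ x > theta) == (x == 1)):
            stable.append(bits)
    return stable
```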
OPR: outer-product rule
MPF: minimum probability flow
Pastur L, Figotin A. Exactly soluble model of a spin glass. Sov J Low Temp Phys. 1977;3:378–83.
Edwards S, Anderson P. Theory of spin glasses. J Phys F, Met Phys. 1975;5(5):965.
Hopfield J. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA. 1982;79(8):2554.
McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biol. 1943;5(4):115–33.
McEliece R, Posner E, Rodemich E, Venkatesh S. The capacity of the Hopfield associative memory. IEEE Trans Inf Theory. 1987;33(4):461–82.
Amari S-I. Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Trans Comput. 1972;100(11):1197–206.
Talagrand M. Spin glasses: a challenge for mathematicians. vol. 46. Berlin: Springer; 2003.
Lorente de Nó R. Vestibulo-ocular reflex arc. Arch Neurol Psychiatry. 1933;30(2):245–91.
Hebb D. The organization of behavior. New York: Wiley; 1949.
Cover T. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans Comput. 1965;3:326–34.
Amari S-I. Characteristics of sparsely encoded associative memory. Neural Netw. 1989;2(6):451–7.
Tanaka F, Edwards S. Analytic theory of the ground state properties of a spin glass. I. Ising spin glass. J Phys F, Met Phys. 1980;10:2769.
Platt J, Hopfield J. Analog decoding using neural networks. In: Neural networks for computing. vol. 151. Melville: AIP Publishing; 1986. p. 364–9.
Sohl-Dickstein J, Battaglino P, DeWeese M. New method for parameter estimation in probabilistic models: minimum probability flow. Phys Rev Lett. 2011;107(22):220601.
Hillar C, Sohl-Dickstein J, Koepsell K. Efficient and optimal binary Hopfield associative memory storage using minimum probability flow. In: 4th neural information processing systems (NIPS) workshop on discrete optimization in machine learning (DISCML): structure and scalability. 2012. p. 1–6.
Ising E. Beitrag zur theorie des ferromagnetismus. Z Phys. 1925;31:253–8.
Ackley D, Hinton G, Sejnowski T. A learning algorithm for Boltzmann machines. Cogn Sci. 1985;9(1):147–69.
Turing A. On computable numbers, with an application to the Entscheidungsproblem. Proc Lond Math Soc. 1937;2(1):230–65.
Von Neumann J. First draft of a report on the EDVAC. IEEE Ann Hist Comput. 1993;15(4):27–75.
Rosenblatt F. Principles of neurodynamics: perceptrons and the theory of brain mechanisms. Washington, DC: Spartan Books; 1961.
Nocedal J. Updating quasi-Newton matrices with limited storage. Math Comput. 1980;35(151):773–82.
Bernstein S. On a modification of Chebyshev's inequality and of the error formula of Laplace. Ann Sci Inst Sav Ukr, Sect Math. 1924;1(4):38–49.
Hazan E, Agarwal A, Kale S. Logarithmic regret algorithms for online convex optimization. Mach Learn. 2007;69(2–3):169–92.
Shannon C. A mathematical theory of communication. Bell Syst Tech J. 1948;27:379–423.
Vicente R, Saad D, Kabashima Y. Low-density parity-check codes—a statistical physics perspective. Adv Imaging Electron Phys. 2002;125:232–355.
Gripon V, Berrou C. Sparse neural networks with large learning diversity. IEEE Trans Neural Netw. 2011;22(7):1087–96.
Kumar K, Salavati A, Shokrollahi A. Exponential pattern retrieval capacity with non-binary associative memory. In: Information theory workshop (ITW). New York: IEEE Press; 2011. p. 80–4.
Curto C, Itskov V, Morrison K, Roth Z, Walker J. Combinatorial neural codes from a mathematical coding theory perspective. Neural Comput. 2013;25(7):1891–925.
Karbasi A, Salavati A, Shokrollahi A, Varshney L. Noise facilitation in associative memories of exponential capacity. Neural Comput. 2014;26(11):2493–526.
Chaudhuri R, Fiete I. Computational principles of memory. Nat Neurosci. 2016;19(3):394–403.
Jerrum M, Sinclair A. Polynomial-time approximation algorithms for the Ising model. SIAM J Comput. 1993;22(5):1087–116.
Hopfield J, Tank D. Computing with neural circuits: a model. Science. 1986;233(4764):625–33.
Alon N, Krivelevich M, Sudakov B. Finding a large hidden clique in a random graph. Random Struct Algorithms. 1998;13(3–4):457–66.
Singer W. Synchronization of cortical activity and its putative role in information processing and learning. Annu Rev Physiol. 1993;55(1):349–74.
Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron. 1999;24(1):49–65.
Womelsdorf T, Schoffelen J-M, Oostenveld R, Singer W, Desimone R, Engel A, Fries P. Modulation of neuronal interactions through neuronal synchronization. Science. 2007;316(5831):1609–12.
Gutfreund H, Stein Y. Capacity of neural networks with discrete synaptic couplings. J Phys A. 1990;23(12):2613–30.
Kocher I, Monasson R. On the capacity of neural networks with binary weights. J Phys A. 1992;25(2):367–80.
Knoblauch A. Efficient associative computation with discrete synapses. Neural Comput. 2015;28(1):118–86.
Alemi A, Baldassi C, Brunel N, Zecchina R. A three-threshold learning rule approaches the maximal capacity of recurrent neural networks. PLoS Comput Biol. 2015;11(8):e1004439.
Willshaw D, Buneman O, Longuet-Higgins H. Non-holographic associative memory. Nature. 1969;222(5197):960–2.
We thank Kilian Koepsell and Sarah Marzen for helpful comments that enhanced the quality of this work.
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Support was provided, in part, by NSF grant IIS-0917342 (CH), an NSF All-Institutes Postdoctoral Fellowship administered by the Mathematical Sciences Research Institute through its core grant DMS-0441170 (CH), and DARPA Deep Learning Program FA8650-10-C-7020 (NT).
Redwood Center for Theoretical Neuroscience, Berkeley, CA, USA
Christopher J. Hillar
University of Texas, Austin, Austin, TX, USA
Ngoc M. Tran
CJH and NMT contributed equally. All authors read and approved the final manuscript.
Correspondence to Christopher J. Hillar.
Christopher J. Hillar and Ngoc M. Tran contributed equally to this work.
Hillar, C.J., Tran, N.M. Robust Exponential Memory in Hopfield Networks. J. Math. Neurosc. 8, 1 (2018). https://doi.org/10.1186/s13408-017-0056-2
Hopfield network
Recurrent dynamics
Exponential codes
Error-correcting
Shannon optimal
Hidden clique
Title: Helium-Abundance and Other Composition Effects on the Properties of Stellar Surface Convection in Solar-like Main-sequence Stars
Authors: Joel D. Tanner, Sarbani Basu, Pierre Demarque
(Submitted on 1 Oct 2013)
Abstract: We investigate the effect of helium abundance and $\alpha$-element enhancement on the properties of convection in envelopes of solar-like main-sequence stars using a grid of 3D radiation hydrodynamic simulations. Helium abundance increases the mean molecular weight of the gas, and alters opacity by displacing hydrogen. Since the scale of the effect of helium may depend on the metallicity, the grid consists of simulations with three helium abundances ($Y=0.1, 0.2, 0.3$), each with two metallicities ($Z=0.001, 0.020$). We find that changing the helium mass fraction generally affects structure and convective dynamics in a way opposite to that of metallicity. Furthermore, the effect is considerably smaller than that of metallicity. The signature of helium differs from that of metallicity in the manner in which the photospheric velocity distribution is affected. We also find that helium abundance and surface gravity behave largely in similar ways, but differ in the way they affect the mean molecular weight. A simple model for spectral line formation suggests that the bisectors and absolute Doppler shifts of spectral lines depend on the helium abundance. We look at the effect of $\alpha$-element enhancement and find that it has a considerably smaller effect on the convective dynamics in the SAL compared to that of helium abundance.
Comments: Accepted for publication in ApJ
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
DOI: 10.1088/0004-637X/778/2/117
From: Joel Tanner
[v1] Tue, 1 Oct 2013 20:03:39 UTC (277 KB)
OGLE-2017-BLG-0173Lb: Low-mass-ratio Planet in a "Hollywood" Microlensing Event
Mróz P; KMTNet Collaboration; OGLE Collaboration; UKIRT Microlensing Team
We present microlensing planet OGLE-2017-BLG-0173Lb, with planet-host mass ratio either $q\simeq 2.5\times 10^{-5}$ or $q\simeq 6.5\times 10^{-5}$, the lowest or among the lowest ever detected. The planetary perturbation is strongly detected, $\Delta\chi^2\sim 10,000$, because it arises from a bright (therefore, large) source passing over and enveloping the planetary caustic: a so-called "Hollywood" event. The factor $\sim 2.5$ offset in $q$ arises because of a previously unrecognized discrete degeneracy between Hollywood events in which the caustic is fully enveloped and those in which only one flank is enveloped, which we dub "Cannae" and "von Schlieffen", respectively. This degeneracy is "accidental" in that it arises from gaps in the data. Nevertheless, the fact that it appears in a $\Delta\chi^2=10,000$ planetary anomaly is striking. We present a simple formalism to estimate the sensitivity of other Hollywood events to planets and show that they can lead to detections close to, but perhaps not quite reaching, the Earth/Sun mass ratio of $3\times 10^{-6}$. This formalism also enables an analytic understanding of the factor $\sim 2.5$ offset in $q$ between the Cannae and von Schlieffen solutions. The Bayesian estimates for the host mass, system distance, and planet-host projected separation are $M=0.39^{+0.40}_{-0.24}\,M_\odot$, $D_L=4.8^{+1.5}_{-1.8}\,\mathrm{kpc}$, and $a_\perp=3.8\pm 1.6\,\mathrm{au}$. The two estimates of the planet mass are $m_p=3.3^{+3.8}_{-2.1}\,M_\oplus$ and $m_p=8^{+11}_{-6}\,M_\oplus$. The measured lens-source relative proper motion $\mu=6\,\mathrm{mas\,yr^{-1}}$ will permit imaging of the lens in about 15 years or at first light on adaptive-optics imagers on next-generation telescopes. These will allow the host mass to be measured but probably cannot resolve the planet-host mass-ratio degeneracy.
planetary systems
Antibacterial resistance in ophthalmic infections: a multi-centre analysis across UK care settings
Alice E. Lee1,
Kanchana Niruttan1,
Timothy M. Rawson1 &
Luke S. P. Moore ORCID: orcid.org/0000-0001-7095-79221,2,3,4
Bacterial ophthalmic infections are common. Empirical treatment with topical broad-spectrum antibiotics is recommended for severe cases. Antimicrobial resistance (AMR) to agents used for bacterial ophthalmic infections make it increasingly important to consider changing resistance patterns when prescribing, however UK data in this area are lacking. We evaluate the epidemiology and antimicrobial susceptibilities of ophthalmic pathogens across care settings and compare these with local and national antimicrobial prescribing guidelines.
A retrospective, multi-centre observational analysis was undertaken of ophthalmic microbiology isolates between 2009 and 2015 at a centralised North-West London laboratory (incorporating data from primary care and five London teaching hospitals). Data were analysed using descriptive statistics with respect to patient demographics, pathogen distribution (across age-groups and care setting), seasonality, and susceptibility to topical chloramphenicol, moxifloxacin, and fusidic acid.
Two thousand six hundred eighty-one isolates (n = 2168 patients) were identified. The commonest pathogen in adults was Staphylococcus spp. across primary, secondary, and tertiary care (51.7%; 43.4%; 33.6% respectively) and in children was Haemophilus spp. (34.6%;28.2%;36.6%). AMR was high and increased across care settings for chloramphenicol (11.8%;15.1%;33.8%); moxifloxacin (5.5%;7.6%;25.5%); and fusidic acid (49.6%;53.4%; 58.7%). Pseudomonas spp. was the commonest chloramphenicol-resistant pathogen across all care settings, whilst Haemophilus spp. was the commonest fusidic acid-resistant pathogen across primary and secondary care. More isolates were recorded in spring (31.6%) than any other season, mostly due to a significant rise in Haemophilus spp.
We find UK national and local antimicrobial prescribing policies for ophthalmic infections may not be concordant with the organisms and antimicrobial susceptibilities found in clinical samples. We also find variations in microbial incidence related to patient age, clinical setting, and season. Such variations may have further important implications for prescribing practices and modification of antimicrobial guidelines.
The severity of ophthalmic infections ranges widely from self-limiting bacterial conjunctivitis to potentially sight-threatening conditions such as keratitis and endophthalmitis [1]. One of the commonest ophthalmic infections in the UK is acute infective conjunctivitis, which accounts for approximately 2% of presentations to general practice and constitutes a significant healthcare burden [2].
Antimicrobial resistance (AMR) has been identified as a major healthcare threat worldwide [3] yet few observations have been made regarding the epidemiology and susceptibility patterns of ophthalmic pathogens in the UK [4]. Analysis of electronic data that is routinely collected in healthcare settings is an effective approach for monitoring of AMR [5] and may help to improve clinical management of ophthalmic infections by informing and enabling refinement of current antimicrobial policies.
Whilst antimicrobials are not necessarily required for self-limiting bacterial conjunctivitis, they have been shown to speed symptom resolution compared to placebo, thus reducing the healthcare and societal burden of this common condition [6]. Antimicrobials are also indicated for treatment of corneal abrasion, and bacterial keratitis [7]. Antimicrobial guidelines often recommend empirical antimicrobial treatment in severe infections, often before the results of antimicrobial susceptibility tests are known. However, in some cases empirical prescription of antimicrobials may be with an agent to which the infecting pathogen is resistant, putting the patient at a subsequent risk of treatment failure. This must be balanced against the not infrequent overprescribing of antimicrobials for conditions such as viral or mild bacterial conjunctivitis, for which they are not indicated [8]. Growing international concerns about AMR has led to an increased emphasis on antimicrobial stewardship, a strategy which promotes judicious use of antibiotics to preserve their future efficacy [9]. A key focus of such programmes is to prescribe antimicrobials in relation to local resistance patterns [10], however UK data in this area are lacking.
In this study, we analyse routine microbiological data collected across primary, secondary, and tertiary care settings in London to investigate ophthalmic pathogens and AMR, with reference to local and national antimicrobial policies.
Study setting and design
A retrospective multicentre observational analysis was undertaken to review all clinical microbiology samples sent from eyes/conjunctiva between 2009 and 2015 processed at a centralised North West London microbiology laboratory. This serves a population of 2.5 million people and incorporates data from primary care and five London teaching hospitals including a specialist tertiary ophthalmic hospital. Antimicrobial prescriptions for the pharmacological management of bacterial conjunctivitis are guided by national (for primary care) or local (for secondary and tertiary care) policies (Table 1).
Table 1 National and local secondary care antimicrobial prescribing guidelines for the management of bacterial conjunctivitis. Information on national guidelines were adapted from Public Health England [11] guidance. Local guidelines are derived from the Adult and Children Treatment of Eye Infections and Ophthalmology Handbook [7]. Both guidelines highlight the importance of self-care as the initial approach to management of non-severe conjunctivitis, only proceeding to pharmacotherapy where this fails or the infection is severe
Data on patient characteristics, location of patients' care (primary/secondary/tertiary), specimen type, and organism characteristics (identification, antimicrobial susceptibility pattern) were collected during the period March 2009 to February 2015 from electronic health records and the laboratory management system (LIMS; Sunquest™ Laboratory 7.3.1) containing linked microbiology data. For the tertiary care setting, data were collected during the period March 2012 to February 2015 only. Changes to the laboratory information management system post-2015 precluded data extraction and analysis. All patient data were anonymised. Isolate speciation was performed using API® (bioMérieux) from 2009 to 2011 and matrix assisted laser desorption/ionisation-time of flight (MALDI-TOF) spectroscopy (Biotyper®, Bruker) from 2011 onwards. Antimicrobial susceptibilities were determined by disc diffusion (BSAC criteria) [12]. The minimal level of organism identification was reported in line with the national UK Standards for Microbiology Investigations from Public Health England [13]. As such, pathogens of the order Enterobacteriales were not speciated beyond 'coliforms'. A minimal number of fungal isolates were identified so the main focus of the study was on bacterial pathogens (excluding mycobacteria).
The data were de-duplicated (for repeat isolates from the same patient within 14 days) and analysed to describe patient demographics including age and gender. Positive isolates were divided into those obtained from children (< 18 years old) and adults (≥18 years old). The distribution of pathogens was analysed by age-group and level of care (split into primary, secondary, and tertiary care). Pathogen seasonality across each cohort and for each of the most common species was examined. Resistance to chloramphenicol, fusidic acid, and moxifloxacin, three common empirical topical antimicrobials used in bacterial ophthalmic infections, were calculated for each cohort. Resistance was calculated as the proportion of organisms resistant to each agent, including both (i) pathogens identified as intrinsically resistant and (ii) those which were found to be resistant on susceptibility testing. Intrinsically resistant pathogens which also underwent susceptibility testing were removed to avoid data duplication. The following formula was used to calculate resistance rate:
$$ \frac{\mathrm{Intrinsically}\ \mathrm{resistant}\ \mathrm{isolates}+\mathrm{isolates}\ \mathrm{resistant}\ \mathrm{on}\ \mathrm{testing}}{\mathrm{Total}\ \mathrm{isolates}}\ \times 100. $$
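For example, for chloramphenicol in primary care the figures reported below (82 intrinsically resistant isolates and 91 resistant on testing, consistent with the 173/1467 in the Results) give a resistance rate of 11.8%; a minimal calculation:

```python
def resistance_rate(intrinsic, resistant_on_testing, total):
    """Percentage resistance as defined above."""
    return 100 * (intrinsic + resistant_on_testing) / total

print(round(resistance_rate(82, 91, 1467), 1))   # 11.8 (primary-care chloramphenicol)
```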
Descriptive statistics (Chi-square test and Yates correction) were applied where appropriate using SPSS® 24.0 (IBM®, Ca, USA) software.
Patient demographics
A total of 2681 ophthalmic isolates from 2168 patients were identified over 6-years. Table 2 details the number of patients and isolates for each care setting as well as patient age and gender distribution. In primary care, 29.5% (433/1467) of isolates were obtained from adults and 70.2% (1030/1467) from children. In secondary care, 39.8% of isolates (297/747) were obtained from adults and 60.2% (450/747) from children. In tertiary care 91.2% isolates (426/467) were obtained from adults and 8.8% (41/467) from children. The number of isolates obtained from adults was significantly higher in secondary and tertiary care than in primary care (p < 0.01), whilst the number of isolates obtained from children was significantly higher in primary care than either secondary or tertiary care (p < 0.01).
Table 2 Demographic details of patients with ophthalmic infections, London, 2009–2015. #age was not reported for four patients
Specimen and pathogen distribution
Eye cultures were obtained from various sources including eye swabs (2358/2681, 88.0%), conjunctival swabs (98/2681, 3.7%), contact lens swabs (54/2681, 2.0%), corneal scrapes (148/2681, 5.5%), and invasive samples (23/2681, 0.8%).
A wide range of pathogens were identified across all care settings. Figure 1 shows the distribution of pathogens across the cohort and also divided across the primary, secondary, and tertiary care settings.
Fig. 1 Ophthalmic infection causative organisms, London, 2009–15. Frequency of identification of organisms from clinical ophthalmic microbiology specimens sent to a centralised microbiology laboratory for primary, secondary and tertiary hospitals serving a population of 2.5 million in London. Sub-analysis by age (< 18 years and ≥ 18 years) and level of care (primary, secondary, tertiary) is depicted in the inset pie charts.
In adults, the most commonly isolated pathogens were Staphylococcus spp. across all levels of care (224/433, 51.7%; 129/297, 43.4%; 143/426, 33.6% respectively). Staphylococcus aureus was the most prevalent species of this genus (219/433, 50.6%; 72/297, 24.2%; 86/426, 20.2% respectively), and the contribution of coagulase negative staphylococci to anterior segment ophthalmic infections is unclear. In primary and secondary care, the second most frequent pathogens identified were coliforms (62/433, 14.3%; 48/297, 16.2%, respectively) but in tertiary care was Pseudomonas spp. (114/426, 26.8%). The reported prevalence of Pseudomonas spp. increased significantly from primary to tertiary care (p < 0.01).
In children, in contrast to adults, the most common pathogen across all levels of care was Haemophilus spp. (356/1030, 34.6%; 127/450, 28.2%; 15/41, 36.6%, respectively), among which H.influenzae predominated (328/356, 92.1%; 118/127, 92.9%; 15/15, 100%). The second most common pathogen isolated in both primary and tertiary care was Streptococcus spp. (222/1030, 21.6%; 13/41, 31.7%, respectively), with Staphylococcus spp. being the second most frequent pathogen in secondary care (117/450, 26.0%). Compared with adults, Pseudomonas spp. isolation in children was significantly less common across all levels of care (in children: 38/1030, 3.7%; 14/450, 3.1%; 1/41, 2.4%, respectively, p < 0.01 vs. adults). Conversely, Moraxella spp. was isolated more commonly from children (76/1030, 7.4%) than adults (11/433, 2.5%) in primary care (p < 0.01).
Seasonality
Across the cohort, the greatest number of isolates were recorded in spring (848/2681, 31.6%) followed by summer (666/2681, 24.8%), winter (640/2681, 23.9%), and autumn (527/2681, 19.7%)(Fig. 2a). The number of isolates recorded in spring was significantly higher than in other seasons (p < 0.001). Across both adult and children in Spring, Haemophilus spp. was the commonest organism (245/848, 28.9%; p < 0.001; Fig. 2b), and among children, Haemophilus spp. demonstrated an even greater Spring preponderance (197/469; 42.0%).
Fig. 2 Seasonal variation among ophthalmic infection pathogens, London, 2009–15. (a) Variation across organism groups; (b) variation in Haemophilus spp. ***p < 0.001, spring vs. non-spring seasons.
Rates of antimicrobial resistance to chloramphenicol, moxifloxacin, and fusidic acid increased across care settings (Table 3). The rate of overall chloramphenicol resistance increased significantly from primary to tertiary care (173/1467, 11.8%; 113/747, 15.1%; 158/467, 33.8%; p < 0.01). Across all levels of care, the commonest chloramphenicol-resistant pathogen was the intrinsically-resistant Pseudomonas spp. (Pseudomonas spp. as a percentage of all chloramphenicol-resistant isolates: 82/173, 47.4%; 56/113, 49.6%; 115/158, 75.2%). Among those tested for chloramphenicol susceptibility (i.e. excluding intrinsically-resistant pathogens), those found to be most frequently resistant across all levels of care were the coliforms (as a percentage of non-intrinsically resistant pathogens: 42/91; 46.2%; 36/55; 65.6%; 13/31; 41.9%). Across all levels of care, fewer isolates were tested for moxifloxacin susceptibility than for chloramphenicol or fusidic acid susceptibility. The commonest moxifloxacin-resistant pathogen was again the intrinsically-resistant Pseudomonas spp. across all levels of care (Pseudomonas spp. as a percentage of all moxifloxacin-resistant isolates: 81/81, 100%primary care; 57/57, 100% secondary care; 115/118, 97.4% tertiary care). For fusidic acid, Haemophilus spp. (intrinsically-resistant) was the commonest resistant pathogen in both primary and secondary care, whereas in tertiary care it was Pseudomonas spp. Among Staphylococcus spp., rates of fusidic acid resistance were similar between primary (38/429; 8.9%) and secondary care (28/249; 11.2%)(p = 0.4) but significantly increased in tertiary care (37/137; 27.0%)(p < 0.01).
Table 3 Antimicrobial resistance to commonly used topical agents among bacterial isolates from patients with ophthalmic infections, London, 2009–2015. #One isolate was 'intermediate'. Fusidic acid resistance increased significantly from secondary to tertiary care (p < 0.01). Chloramphenicol resistance increased significantly from primary to tertiary care (p < 0.01). ^ including Pseudomonas spp., Chryseobacterium spp. ~ including Pseudomonas spp. * including enterococci and all Gram-negative pathogens (except Neisseria spp. and Moraxella spp.) [14, 15]
The main findings of this study are (i) an age-dependent distribution of ophthalmic infection pathogens with Staphylococcus spp. most commonly isolated from adults and Haemophilus spp. most commonly isolated from children; (ii) high rates of AMR (intrinsic and acquired) to first line agents for treatment of ophthalmic infections; (iii) difference in infecting pathogens between patients attending primary, secondary and tertiary care, and (iv) a seasonal predilection for ophthalmic infections caused by Haemophilus spp. occurring in spring.
Other studies investigating the epidemiology of eye infections in the UK have done so mostly with a narrower focus than the present study, focussing on the microbiology of specific eye infections (i.e. endophthalmitis, keratitis) or rare presentations such as periorbital necrotising fasciitis and phaeohyphomycosis [16,17,18,19,20,21,22,23,24]. One previous study on adult bacterial conjunctivitis by Silvester et al. evaluated AMR, but focussed on a primary care setting only [4], finding a lower resistance rate to chloramphenicol (8%) than our study (12%) and Silvester et al. concluded chloramphenicol to be a suitable first-line treatment. One possible explanation for this discrepancy points to the lack of coliforms in the study by Silvester et al., which have been previously demonstrated to exhibit reduced susceptibility to chloramphenicol [25]. Conversely, in our study, coliforms represented the greatest proportion of isolates tested and found to be non-susceptible to chloramphenicol. Our study adds to the existing literature by examining AMR across levels of care. Our data reveal increasing AMR to chloramphenicol in the secondary (15%) and tertiary (34%) care settings. This is most likely due to the increasing proportion of intrinsically-resistant Pseudomonas spp., which represented up to one quarter of all isolates in the tertiary setting. This may be due to increased contact lens-associated infections seen in tertiary vs. primary care [18, 26]. Another possibility is that patients with Pseudomonas spp. infections may have been initially treated empirically in primary care (with non-antipseudomonal agents) and then referred for specialist assessment when the patients did not respond to first-line therapy (a form of ascertainment bias).
In the United States, use of chloramphenicol for bacterial conjunctivitis has been largely superseded by other antimicrobials (including aminoglycosides and fluoroquinolones) due to associations with aplastic anaemia [27]. A large US study investigated in vitro susceptibility to various commonly-prescribed antimicrobials for bacterial conjunctivitis, finding third generation fluoroquinolones (such as moxifloxacin) to be a preferred choice for empirical broad-spectrum coverage, based on resistance rates of both Gram-positive and Gram-negative organisms [28]. In our study, resistance rates to moxifloxacin were approximately half of the respective resistance rates for chloramphenicol in both primary (6% vs. 12%) and secondary (8% vs. 15%) care, but are difficult to interpret because fewer isolates were tested for moxifloxacin susceptibility compared to chloramphenicol or fusidic acid. Less widespread use of empiric topical moxifloxacin in the UK may reflect fears that increasing use of fluoroquinolones will speed up development of resistance, relegating these agents to be 'last resort' options [29]. However, previous studies have demonstrated that resistance to moxifloxacin develops slowly as it requires a dual-step mutation [29], suggesting that we should be testing for moxifloxacin susceptibility and possibly using it more often in the UK, as well as monitoring usage trends and resistance.
Another focus of the present study was to analyse the epidemiology of ophthalmic pathogens by patient age. The results show a marked age-dependent distribution of pathogens, with Staphylococcus spp. the most common pathogen in adults and Haemophilus spp. the most common in children, consistent across all care settings. This could again reflect increased contact lens use in adults [18]. Furthermore, higher prevalence of Haemophilus spp. in children may reflect the increased incidence of upper respiratory tract infection and poorer hand hygiene in this age-group.
This variation in causative organisms between age groups has further connotations for empiric antimicrobial prescribing policies. Fusidic acid is commonly used first line in children, yet resistance to fusidic acid was consistently high (50–59%) across all levels of care, driven in part by the preponderance of Haemophilus spp. This suggests that prescribers should consider alternative agents in children with better Haemophilus spp. cover, particularly in the spring months.
Considering the findings of our study in the context of local and national ophthalmic infection antimicrobial policies, several recommendations can be made. First, given the variation in causative organisms, development of age-dependent antimicrobial guidelines should be strongly considered. We would suggest that particularly among children, where there is a high proportion of Haemophilus spp., fusidic acid is unlikely to be a useful empiric agent. Secondly, the wider use of fusidic acid and chloramphenicol in primary and secondary care should be considered in light of local epidemiology. We would advocate not that these first line therapies necessarily be avoided, but rather that clear early signs of potential antimicrobial failure are highlighted to patients and prescribers, so that early moves to second line agents with lower rates of resistance can be made in a timely fashion. Third, given the variation in AMR, we advocate systems be developed to regularly review ophthalmic infection antimicrobial guidelines in light of local resistance patterns to ensure that first-line therapy is appropriate and effective. Fourth, given resistance rates to all three antimicrobials we examined were unacceptably high (26–59%) in tertiary care (most likely reflecting the high proportion of the intrinsically-resistant Pseudomonas spp.), thresholds for empiric use of anti-pseudomonal agents should be reviewed in specialist eye hospitals. We suggest topical gentamicin or ciprofloxacin may be useful agents, but susceptibility testing for these agents and monitoring of resistance trends must be coupled with any changes to prescribing policies. Finally, whilst prescribing patterns were not examined in this study, the high rates of resistance that we have reported reiterate the importance of restricting antimicrobial use to situations in which they are clinically indicated, in line with national recommendations. This is particularly pertinent given a recent study suggesting that approximately half of UK ophthalmologists may be prescribing antimicrobials for suspected viral conjunctivitis [8].
Our study is limited by its retrospective design and absence of patient level linked clinical data. Whilst samples sent to the microbiology laboratory were for clinical indications (as opposed to being screening or surveillance samples) it is our assumption that these were sent for clinical concerns of infection. We could not ascertain whether the infections (and isolates) were derived from the community or were healthcare associated, although we propose that by far the majority were community associated. Moreover our results will not reflect the sum of all ocular infections seen in primary, secondary or tertiary care as some patients will have been treated empirically and not have had samples sent for culture. This may have biased the results towards more resistant pathogens (either intrinsically or acquired) as it is possible that only those who had already failed first line therapy had samples sent. Therefore we suggest that inferring a need to alter antimicrobial prescribing policies based upon this data must be considered with caution. Whilst routinely collected data, such as that used in our analysis, provides insights into trends in causative organisms and resistance patterns, absolute prevalence can only be obtained by purposeful sampling. To discern the degree with which this limitation may have affected our work, in addition to advocating wider geographical analysis of routinely collected data, we also suggest sentinel surveillance be considered for ophthalmic infections to determine the true burden of causative organisms and resistance. This would further help design of relevant national antimicrobial prescribing guidelines for ophthalmic infections, and would be particularly important for primary and secondary care where there may be a high proportion of patients treated successfully who do not undergo microbiological sampling.
Our study has further limitations in the parsing of data. First, the non-speciation of Enterobacteriales beyond 'coliforms' may mask more subtle variations in causative pathogens between age groups and between patients presenting to different care areas. With the advent of low cost rapid speciation platforms, such as MALDI-ToF, we advocate identification of all bacteria from ophthalmic samples to the species level, and have recently adopted this into our operating procedures. Second, we have grouped together anterior segment cultures because of the noted lack of patient level linked clinical data. This precluded us being able to confidently differentiate samples sent for differing clinical diagnoses (conjunctivitis vs. corneal ulcer for example). In the tertiary level care setting having this level of discrimination for syndromic management has obvious benefits when considering construction of empiric antimicrobial policies. However we consider that in primary and secondary care where specialist ophthalmic opinions may not be readily available, empiric antimicrobial prescribing for ophthalmic infections may need to cater for a wider group of presentations. Finally, interpretation of the data was based in vitro susceptibility testing, which may not perfectly predict clinical outcomes. As linked prescribing and clinical outcome data were not available, the relationship between in vitro resistance and treatment failure cannot be determined.
In our urban UK study analysing the epidemiology of ophthalmic infections we find causative organisms and antimicrobial resistance vary significantly by patient age and care setting. We demonstrate an age-dependent distribution of pathogens and high resistance rates to commonly used antimicrobials, which increases across levels of care. Such findings highlight the need to reassess ophthalmic antimicrobial prescribing policies in the UK in accordance with local resistance patterns. This may include earlier adoption of alternative agents for bacterial conjunctivitis such as moxifloxacin, and tailoring prescribing policies by patient age and clinical setting.
The datasets analysed during the current study are available from the corresponding author on reasonable request, as long as this meets local ethics and research governance criteria.
AMS:
Antimicrobial stewardship
BSAC:
British society for antimicrobial chemotherapy
LIMS:
Laboratory information management system
MALDI-ToF:
Matrix assisted laser desorption/ionisation – time of flight
Snyder RW, Glasser DB. Antibiotic therapy for ocular infection. West J Med. 1994;161(6):579–84.
Everitt H, Little P. How do GPs diagnose and manage acute infective conjunctivitis? A GP survey. Fam Pract. 2002;19(6):658–60.
Antimicrobial resistance: global report on surveillance [Internet]. World Health Organisation. 2014 [cited 2018 Jan 17]. Available from: http://apps.who.int/iris/bitstream/10665/112642/1/9789241564748_eng.pdf?ua=1.
Silvester A, Neal T, Czanner G, Briggs M, Harding S, Kaye S. Adult bacterial conjunctivitis: resistance patterns over 12 years in patients attending a large primary eye care Centre in the UK. BMJ Open Ophthalmol. 2016;1:e000006.
Freeman R, Moore LSP, Charlett A, Donaldson H, Holmes AH. Exploring the epidemiology of carbapenem-resistant gram-negative bacteria in West London and the utility of routinely collected hospital microbiology data. J Antimicrob Chemother. 2014;70(4):1212–8.
Sheikh A, Hurwitz B, van Schayck CP, McLean S, Nurmatov U. Antibiotics versus placebo for acute bacterial conjunctivitis (review). Cochrane Database Syst Rev. 2012;9:CD001211.
Adult and Children Treatment of Eye Infections and Ophthalmology Handbook. Imperial College Healthcare NHS Trust. Last updated September 2017.
Fayers T, Loh GK, Cordeiro MF, Lee V, Jain R, Fayers PM. Overprescribing of antibiotics by UK ophthalmologists. Eye. 2017:1–3.
Doron S, Davidson LE. Antimicrobial stewardship. Mayo Clin Proc. 2011;86(11):1113–23.
Antimicrobial stewardship: systems and processes for effective antimicrobial medicine use [Internet]. National Institute for Health and Care Excellence. 2015 [cited 2018 Jan 17]. p. 1–34. Available from: https://www.nice.org.uk/guidance/ng15/resources/antimicrobial-stewardship-systems-and-processes-for-effective-antimicrobial-medicine-use-1837273110469
Public Health England. Summary of antimicrobial prescribing guidance – managing common infections. PHE publications gateway number: 2018511. Published August 2010. Last updated October 2018. Available from: https://www.gov.uk/government/publications/managing-common-infections-guidance-for-primary-care .
BSAC methods for antimicrobial susceptibility testing [internet]. Vol. version 12, BSAC. 2013 [cited 2018 Jan 17]. Available from: http://bsac.org.uk/wp-content/uploads/2012/02/Version-12-Apr-2013_final.pdf.
UK Standards for Microbiology Investigations: Investigation of Bacterial Eye Infections [Internet]. Public Health England. [cited 2017 Jan 17]. Available from: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/612529/B_2i6.1.pdf
Chen FL, Wang GC, Teng SO, Ou TY, Yu FL, Lee WS. Clinical and epidemiological features of Chryseobacterium indologenes infections: analysis of 215 cases. J Microbiol Immunol Infect. 2013;46(6):425–32.
Leclercq R, Cantón R, Brown DFJ, Giske CG, Heisig P. EUCAST expert rules in antimicrobial susceptibility testing. Clin Microbiol Infect. 2011;19(2):1–34.
Shalchi Z, Gurbaxani A, Baker M, Nash J. Antibiotic resistance in microbial keratitis: ten-year experience of corneal scrapes in the United Kingdom. Ophthalmology. 2011;118(11):2161–5.
Shankar J, Sueke H, Wiehlmann L, Horsburgh MJ, Tuft S, Neal TJ, et al. Genotypic analysis of UK keratitis-associated Pseudomonas aeruginosa suggests adaptation to environmental water as a key component in the development of eye infections. FEMS Microbiol Lett. 2012;334(2):79–86.
Eltis M. Contact-lens-related microbial keratitis: case report and review. J Opt. 2011;4:122–7.
Park JC, Ramasamy B, Shaw S, Ling RHL, Prasad S. A prospective and nationwide study investigating endophthalmitis following pars plana vitrectomy: clinical presentation, microbiology, management and outcome. Br J Ophthalmol. 2014;98(8):1080–6.
Gupta A, Orlans HO, Hornby SJ, Bowler ICJW. Microbiology and visual outcomes of culture-positive bacterial endophthalmitis in Oxford, UK. Graefes Arch Clin Exp Ophthalmol. 2014;252(11):1825–30.
Weston K, Nicholson R, Bunce C, Yang YF. An 8-year retrospective study of cataract surgery and postoperative endophthalmitis: injectable intraocular lenses may reduce the incidence of postoperative endophthalmitis. Br J Ophthalmol. 2015;99:1377–80.
Jackson TL, Eykyn SJ, Graham EM, Stanford MR. Endogenous bacterial endophthalmitis: a 17-year prospective series and review of 267 reported cases. Surv Ophthalmol. 2003;48:403–23.
Maudgil A, Johnson Z, Rogers N, Mudhar HS. Unusual ocular presentations of ocular phaeohyphomycosis. Eye. 2016;30(11):1517–9.
Flavahan PW, Cauchi P, Gregory ME, Foot B, Drummond SR. Incidence of periorbital necrotising fasciitis in the UK population: a BOSU study. Br J Ophthalmol. 2014;98(9):1177–80.
Livermore DM, Warner M, Mushtaq S, Doumith M, Zhang J, Woodford N. What remains against carbapenem-resistant Enterobacteriaceae? Evaluation of chloramphenicol, ciprofloxacin, colistin, fosfomycin, minocycline, nitrofurantoin, temocillin and tigecycline. Int J Antimicrob Agents. 2011;37(5):415–9.
Azari AA, Barney NP. Conjunctivitis: a systematic review of diagnosis and treatment. JAMA. 2013;310(16):1721–9.
Doona M, Walsh J. Topical chloramphenicol is an outmoded treatment. BMJ. 1998;316(7148):1903.
Adebayo A, Parikh JG, McCormick SA, Shah MK, Huerto RS, Yu G, et al. Shifting trends in in vitro antibiotic susceptibilities for common bacterial conjunctival isolates in the last decade at the new York eye and ear Infirmary. Graefes Arch Clin Exp Ophthalmol. 2011;249(1):111–9.
Benitez-Del-Castillo J, Verboven Y, Stroman D, Kodjikian L. The role of topical moxifloxacin, a new antibacterial in Europe, in the treatment of bacterial conjunctivitis. Clinical Drug Investigation. 2011;31:543–57.
Why do black holes need a quantum mechanical description?
I read about black holes, about the Schwarzschild metric, Einstein field equations and their solution in the vacuum for a spherical body.
I understood that black holes are objects whose gravity is enormous, and that there exists a so-called Schwarzschild radius, which is
$$R_s = \frac{2GM}{c^2}$$
which is the radius of a sphere such that, if all the mass of an object were to be compressed within that sphere, the escape velocity from the surface of the sphere would equal the speed of light.
So I understood of course why General Relativity is more than essential to describe those objects.
But now, if we take our Sun as an example (it won't ever become a black hole, but let's just play), we get
$$R_s \approx 2.95\cdot 10^3\ m$$
Now this sphere seems anything but infinitesimal to me, so my question is: why do black holes also need quantum mechanics to be completely understood? Or does this apply only to the "inside" of a black hole (namely the region beyond the event horizon)?
I'm sorry, this question may be unclear or naive, but please don't downvote it. I'm sure you have all had doubts like this in the past, and the most beautiful thing is someone explaining them to you.
gravity black-holes quantum-gravity
Les Adieux
$\begingroup$ It's not really macroscopic black holes that need a quantum mechanical description, it's strongly curved spacetime which one would expect near the classical singularity of a black hole. $\endgroup$ – CuriousOne Feb 25 '16 at 15:33
$\begingroup$ Duplicate of What are the reasons to expect that gravity should be quantized? $\endgroup$ – ACuriousMind♦ Feb 25 '16 at 16:06
$\begingroup$ Other possible duplicates: physics.stackexchange.com/q/52211/2451 and links therein. $\endgroup$ – Qmechanic♦ Feb 25 '16 at 16:36
There are lots of reasons why we expect that a quantum theory of gravity is necessary, and they are well summarised in the question that ACuriousMind linked: What are the reasons to expect that gravity should be quantized?.
But since you are specifically asking about black holes, let's just consider black holes, in which case the obvious reason we need a theory of quantum gravity is to describe Hawking radiation and black hole evaporation.
Hawking based his calculation on a technique called semiclassical gravity, which is a sort of halfway house between classical physics and a full theory of quantum gravity. Semiclassical gravity is only an approximation, and while we expect it to work pretty well for large black holes, it will get increasingly inaccurate as the black hole gets smaller. The only way we will ever understand what happens in the final stages of black hole evaporation is if we have a full theory of quantum gravity.
There's another aspect of black hole event horizons that is currently rather controversial, and that's the black hole firewall paradox. If the firewall idea is true then the inside of a black hole is a weirder place than anyone suspected, but again, without a proper theory of quantum gravity we'll never know.
These are a couple of examples of why we need a quantum gravity theory that are specific to black holes and which we may one day be able to actually measure. But I encourage you to read the question ACuriousMind linked for a deeper understanding of why a classical theory of gravity and a quantum theory of matter are never going to sit well together.
John Rennie
$\begingroup$ If the firewall idea is correct, the inside of a black hole is not "a weirder place than anyone suspected". It means that there is no inside to a black hole at all. Space and time simply end at the event horizon and black holes are revealed to be cutouts or cavities in the spacetime manifold. All of which fits quite nicely with the concept of metric stretching as described in General Relativity and exemplified in the radial component of the Schwarzschild metric. $\endgroup$ – dcgeorge Mar 6 '17 at 18:43
Links connected to this question have already been given, so I will not repeat them. Quantum mechanics has been necessary for the current standard cosmological model, the Big Bang model.
There, quantization of gravity is necessary for the model to fit the observations; but as far as I know, the general relativity version of black hole formation and description is adequate. Quantization of gravity is the holy grail of current theoretical trends, in order to be able to unify all forces at the high energies of the beginning of the Big Bang, and it is actively researched. At present only effective quantization models are used. String theories have quantized gravity, but there are many possibilities; no unique theoretical model has been proposed up to now that could contain the standard model of particle physics and give testable predictions.
anna v
How do I find an integer value for which an expression is non-prime?
I've just begun Robert S. Wolf's Proof, Logic and Conjecture. At the end of the first chapter there are some exercises to warm you up for the proof techniques he will eventually introduce. I mention this only so that you are aware that I have yet to encounter formal proof techniques.
The first part of the question simply asks you to substitute small values of $n$ into the expression $n^2-n+41$ and to test if these values are prime. I did this for $n=1$ to $12$, and all the values seemed prime. This leads me onto the second part of the question where I am stuck.
I'll paraphrase the question:
(1)(b) Find a positive integer value of $n$ for which the expression $n^2-n+41$ is not a prime number.
My attempt
I will decompose the expression $n^2-n+41$ into symbolic and numeric parts i.e. $n^2-n$ and $41$, in order to obtain a better understanding of it.
The symbolic part of the expression, $n^2-n$, can be factorised as $n(n-1)$. From this it is apparent that this portion of the expression will only ever return even values, because it is the product of two consecutive integers, one of which must be even. For example: for $n=5$ we have $5(5-1)=5(4)=20$; similarly for $n=4$ we have $4(4-1)=4(3)=12$.
The addition of an odd number and an even number returns an odd number; thus adding $41$ (an odd number) to $n^2-n$ (an expression which always returns even numbers) gives an odd number for all integer values of $n$.
As $n^2-n+41$ always returns odd numbers, it then stands to reason that if we are to find any non-prime value of this expression it will also be odd.
The only way I could think of doing this was by defining the odd numbers as $2n+1$ (where $2n$ is an even number and $n$ is an integer) and equating this to the expression, in the hope that the intersection would return values that are odd and non-prime; however, this equation did not give an integer value of $n$ for which the expression is not prime.
Where have I gone wrong?
algebra-precalculus elementary-number-theory prime-numbers
seeker
How about $n=41$?
In general, if you choose $n$ so that all of the terms in a sum are divisible by the same number, then the whole sum will be divisible by that number.
Edit: My understanding is that your approach was to set $n^2-n+41 = 2n+1$ and look for integer solutions. But this is quite a strong condition: you're saying not just that $n^2-n+41$ is odd, but that it's the particular odd number $2n+1$. This is a quadratic in $n$, so it has at most $2$ solutions - it's not particularly surprising that it doesn't have integer solutions.
But for any integer value of $n$, $n^2-n+41$ is odd. If you just want to express that $n^2-n+41$ is odd, the relevant equation is $n^2 -n +41 = 2k+1$. This equation has exactly one integer solution for every value of $n$: an example is $n = 41$, $k = 840$.
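To make the arithmetic above concrete, here is a quick check in Python (an illustrative sketch added here, not part of the original answer):

```python
# Verify the claim: n = 41 gives an odd, composite value of n^2 - n + 41,
# and the corresponding k in n^2 - n + 41 = 2k + 1 is 840.
n = 41
value = n**2 - n + 41
assert value == 1681 == 41 * 41   # composite: it is a perfect square
assert value % 2 == 1             # odd, as the parity argument predicts
assert (value - 1) // 2 == 840    # the k mentioned above
```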
Alex Kruckman
Though $n = 41$ is an obvious solution, your question 1(b) is really about being able to find for which $n$ the given expression is not prime. (Though it asks for 'an integer', we can try to generalize and find all such integers.)
One approach would be to write it as $n^2 - (n - 41)$. If we make $(n - 41)$ a square, then obviously the given expression factorizes.
So $\color{blue}{n = 41, 42, 45, 50, 57}$ and so on are values of $n$ for which the given expression is not prime. I think Alex has answered the rest.
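The trick is easy to check by machine; the following Python sketch (illustrative only, with a plain trial-division primality test) generates the family above and confirms the factorisation:

```python
# Make n - 41 a perfect square k^2, so that
# n^2 - n + 41 = n^2 - (n - 41) = n^2 - k^2 = (n - k)(n + k).
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

for k in range(5):
    n = 41 + k * k                     # n = 41, 42, 45, 50, 57
    value = n * n - n + 41
    assert value == (n - k) * (n + k)  # the explicit factorisation
    print(n, value, is_prime(value))   # is_prime is False for every one of these
```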
Shailesh
$\begingroup$ Though I would point out that this does not necessarily find all values of $n$ for which the expression is (edit: not) prime. $\endgroup$ – David Z Oct 20 '15 at 7:50
$\begingroup$ @Shailesh Where does it say we are asked to find all such values? That's an unreasonable demand. $\endgroup$ – Erick Wong Oct 20 '15 at 16:01
Here is another simple solution.
We want to find some $n$ for which $n^2 - n + 41$ is not prime. Obviously $n = 41$ is a trivial solution, as pointed out by Alex.
For $n = 41k$, where $k \in {\mathbb N}$, the given expression will have $41$ as a factor. This case is not included in my above answer, though it overlaps at some points (where for example $n = 41\cdot42$).
An answer is not the place to ask a question, but it would be interesting to find all $n$ for which $n^2 - n + 41$ (the Euler polynomial) is prime.
$\begingroup$ There isn't any quadratic polynomial of one variable which we know how to prove infinitely many primes for. At the same time, it's widely believed that $n^2-n+41$ is prime infinitely often. To find all $n$ would most likely require proving the infinitude of $n$, which is somewhere close to hopelessly difficult. However there are infinitely many $n$ for which $n^2-n+41$ has at most $2$ prime factors. $\endgroup$ – Erick Wong Oct 21 '15 at 6:42
$\begingroup$ @ErickWong. Thanks. I looked up a little bit on these polynomials myself. $\endgroup$ – Shailesh Oct 21 '15 at 6:55
Look for one.
If you want to find a number with a certain property, and the property is relatively easy to check, it's often a good idea to just try a bunch of numbers, systematically or at random. You might get lucky!
This approach sounds stupid, but it's one of the most powerful techniques in mathematics, and it's led to some really big results.
For example, as $p$ varies over the primes, the first few values of $2^p - 1$ are 3, 7, and 31—all primes. People wanted to know if there's a prime $p$ for which $2^p - 1$ is composite, so they looked for one. In 1536, someone named Hudalricus Regius (Ulrich Rieger, in German) succeeded with $p = 11$. Over the next few centuries, some of the greatest mathematicians on Earth—including Pierre de Fermat, Leonhard Euler, and François Lucas—joined a ridiculous-looking parade of people plugging in bigger and bigger primes $p$ and working out whether $2^p - 1$ was prime. Fermat's work, part of a larger quest to learn about perfect numbers, led to a major result: Fermat's little theorem. Lucas's work led to the less fundamental but still fascinating Lucas-Lehmer test, a primality test that works only on numbers of the form $2^p - 1$ (for $p$ prime).
Here's another example. The Riemann zeta function has a bunch of predictable zeros (the negative even numbers), and a bunch of wildly unpredictable ones. The first few unpredictable zeros are $$0.5 + i\;14.134\ldots,\quad 0.5 + i\;21.022\ldots,\quad 0.5 + i\;25.010\ldots.$$ People want to know whether there's a zero of the zeta function whose real part isn't 0.5, so they're looking for one. From the 1970s through the 1990s, a mathematician named Andrew Odlyzko used supercomputers to check trillions of zeros. He didn't find any with real parts other than 0.5, but he did find substantial evidence for a conjecture that suggests deep connections between the Riemann zeta function and the math of quantum mechanics.
Sophisticated reasoning is sometimes the best way to find a number, but you should never underestimate the power of good old-fashioned looking for it. In your case, a hand search is quite feasible. In a typical modern programming language, a computer search takes minutes to write, and milliseconds to execute. I'm pretty sure it was the first thing Wolf expected you to try.
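In that spirit, here is what such a search might look like in Python (a minimal sketch; the `is_prime` helper is plain trial division):

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

# Walk up from n = 1 until n^2 - n + 41 stops being prime.
n = 1
while is_prime(n * n - n + 41):
    n += 1
print(n, n * n - n + 41)   # prints: 41 1681 -- the first composite value
```

The whole search touches only forty candidates before it stops, which is why it finishes in well under a millisecond.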
Vectornaut
An Integrated Approach for Modeling Three-Phase Micro Hydropower Plants
Achour El Hamdaouy* | Issam Salhi | Said Doubabi | Najib Essounbouli | Mohammed Chennani
Superior Technology School, Ibn Tofail University, B.P. 242, Kenitra 14090, Morocco
Laboratory of Electric Systems and Telecommunications, Cadi Ayyad University, BP 549, Av Abdelkarim Elkhattabi, Gueliz, Marrakesh 40000, Morocco
CReSTIC, Reims University, 9, rue de Québec B.P. 396, F-10026, Toyes cedex 10026, France
Corresponding Author Email: [email protected]
https://doi.org/10.18280/ejee.210601
Micro hydropower plants (MHPPs) are often adopted to provide electricity to remote areas. To ensure the generation performance and power quality, the operators must have enough knowledge about the MHPP, especially its physical model. This paper proposes an integrated approach for modelling three-phase MHPPs in off-grid sites. The differential equations about the electrical operations and mechanical behaviors of the generator were fully integrated into our approach. The physical model thus established considers the behavior of different components of the target MHPP, such as the turbine and the generators, and helps to obtain the time-variation of all physical parameters of the MHPP. The validity of the integrated approach was verified through a simulation under three scenarios. Our research makes it possible to develop a physical model suitable for single and/or three-phase loads. The model can be easily adapted to any real configuration of hydropower plant.
renewable energy, micro hydropower plant, modelling, Pelton turbine, synchronous generator
Nowadays, electrical energy demand is growing continuously, driven by an unavoidable need for this vital energy, the serious depletion of conventional resources, and aberrant usage. Consequently, finding other sources of electrical energy has become an indisputable necessity [1-4]. For this reason, renewable energy has attracted worldwide attention, particularly hydro energy, which has become an outstanding source of electricity generation all over the world [5]. As a matter of fact, hydropower plants help limit toxic gas emissions while providing 19% of the world's electricity [4]. In addition, hydropower has other benefits: it does not entail population displacement or major environmental problems. In remote areas, hydropower plants are considered more efficient than other renewable energy sources such as wind or solar [6]. In fact, solar cells convert only about 10% to 12% of light energy directly into electric energy, whereas hydro plants have an efficiency in the 60% to 90% range [7].
At the national level, Morocco has significant hydraulic potential, mainly in the mountainous regions [8, 9]. A hydropower plant is considered a micro hydropower plant (MHPP) when its nominal generated power is less than 200 kW [10]. An MHPP can produce electricity from run-of-river flow to satisfy the locals' needs [11]. Hence the idea of installing MHPPs in different mountainous areas, especially in isolated sites where connection to the national grid is difficult. This electricity covers the local community's main requirements, such as agricultural land irrigation. Moreover, it enhances living standards by fostering education, hospital services, communications, conservation of agricultural products (fruits and vegetables), and heating during winter.
Since the development of MHPP prototypes is costly [12-21], it has become necessary to develop mathematical models and use computer simulations in order to study MHPP operation. The rationale is to develop control approaches ensuring the expected performances. Obtaining a model that reflects the real MHPP dynamics in the transient state as well as in the steady state is therefore a necessity.
In the literature, most of the proposed MHPP models are based on transfer functions which are valid only for a specific operating point. In other words, transfer-function-based models do not reflect the overall MHPP operation, especially its nonlinearities. For example, the MHPP model proposed in [22] gives only the frequency evolution around a specific operating point and uses linear transfer functions; developing a control system for a large range of operating points is not considered. A nonlinear MHPP model around several operating points was proposed in [23], but it is not based on differential equations describing the MHPP behavior, and it requires identification tests whenever a different MHPP is involved. Frequency regulation was studied in [24] directly on a prototype, without simulation, although simulation would give more opportunities to design suitable robust controllers. An MHPP mathematical model for a single-phase MHPP was presented in [16]; that model gives only the frequency evolution and takes only resistive loads into consideration. A sliding mode control scheme based on model order reduction for the load control problem of MHPPs was discussed in [25] and tested under isolated and grid-connected modes; the proposed schemes are based on transfer functions and address only the frequency evolution. In [26], a multi-source power system approach was suggested, using a teaching-learning optimization algorithm for the automatic generation controller of the power system with a 2-degree-of-freedom proportional-integral-derivative controller; the multi-source model combines several source models, such as thermal, hydro and gas power, all modeled by transfer functions, and the approach was compared with some published approaches. The software application platform developed in [27] contains a component library that is based on linear equations; these equations cannot correctly simulate the MHPP behavior under different conditions and so have limitations. Considering the previous works, no realistic model for three-phase MHPPs, reflecting all physical phenomena such as reactive power and converter impact, has been suggested.
The design of MHPP models must consider all kinds of loads. In most sites where MHPPs are likely to be installed, locals generally use single-phase loads of resistive and inductive type, such as lamps, televisions and small engines, except for those who use three-phase pumps for irrigation or various equipment for commercial activities. In the literature, models consider resistive loads, generally characterized by their nominal power; such models lack the flexibility to study the random effect of connected loads and all kinds of loads.
This paper presents a modeling approach that describes the overall operation of a three-phase MHPP over its whole operating range. It is based on the electrical and mechanical differential equations expressing the behavior of the MHPP components; these equations capture the MHPP's nonlinear behavior. The developed model could be used by researchers interested in MHPP control to develop new advanced controllers. It also offers the opportunity to improve power quality in remote sites, especially when the proposed three-phase MHPP model is used in closed loop. Besides, it can be useful for academic and educational purposes, so that students can easily understand real MHPP operation. Among the strengths of the model is that it provides the time evolution of all the MHPP's physical parameters (currents, voltages, frequency, powers, torques, etc.). In addition, different kinds of loads can be connected and tested with the developed model (three-phase, single-phase, resistive and/or inductive loads, etc.). The model is therefore implemented in Matlab/Simulink.
There are no specific international standards for isolated electrical systems; however, the power quality should be similar to that of the international standards [28], because consumers' equipment on both isolated and interconnected grids requires the same power quality to operate. In remote sites, loads are generally resistive with an inductive effect, such as three-phase pumps and small engines; these loads require reactive power for their operation. Improving and managing this reactive power in an efficient way (reactive power compensation) is therefore desirable. In this context, the developed MHPP model could be used in closed loop to develop advanced controllers for enhancing the power factor. In addition, the model makes it possible to adapt and propose frequency controllers for all MHPP control techniques, such as speed-flow, load-frequency and mixed control. Moreover, the proposed approach makes it simple and feasible to coordinate the hydropower plant with other sources, such as wind and solar; after coordination, researchers can regulate different parameters, since the integrated approach gives the time evolution of all physical parameters. Considering all these points, the developed MHPP model is an added value for renewable energy studies.
The most suitable turbine for such a configuration is the Pelton turbine, which is generally used for high waterfalls and can operate over a wide range of water flow levels [28]. The flow rate is adjustable by means of a movable needle, driven manually or by an electric servo-motor [17]. On the other hand, the three-phase synchronous generator is appropriate for several reasons: it presents a very satisfactory efficiency (close to 99%) and its size is relatively small compared to the generated power [5, 29].
2. Model of the Hydraulic Turbine
The MHPP operating principle consists of transforming hydraulic energy into electrical energy. Through a water tunnel, the water reaches a surge tank that supplies a penstock characterized by its height. At the penstock output, the water rotates a hydraulic turbine which drives a synchronous generator. The generator produces electricity feeding loads that can be three-phase and/or single-phase. Figure 1 shows the various components constituting a hydropower plant.
Figure 1. Hydropower plant components
The Pelton turbine consists of buckets set on the periphery of a circular disk and one or more nozzles whose water jets strike the buckets. The water flow can be adjusted by a mobile needle inside the nozzle, usually driven by an electric servo-motor which must be relatively slow to minimize water hammer effects [17].
The hydraulic power is given by the expression (1) [16]:
$P_{t}=\rho g Q_{t} H_{t}$ (1)
with: $\rho$ is the water's density (kg m-3), Qt is the water flow (m3 s-1), Ht is the effective waterfall height (m) and g is the gravity acceleration (m s-2).
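As a small illustration of Eq. (1), the following Python sketch (added here for illustration; the numerical example is arbitrary, not taken from the paper) computes the hydraulic power for a given flow and head:

```python
RHO = 1000.0   # water density (kg m-3)
G = 9.81       # gravity acceleration (m s-2)

def hydraulic_power(flow_m3_s, head_m):
    """Hydraulic power P_t = rho * g * Q_t * H_t, Eq. (1), in watts."""
    return RHO * G * flow_m3_s * head_m

# Example: 5 L/s over a 6 m effective head gives about 294 W.
print(hydraulic_power(0.005, 6.0))
```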
In the Pelton turbine case, the hydraulic power expression becomes [16]:
$P_{t}=\rho Q_{t} U\left(V_{1}-U\right)(1+m \cos \beta)$ (2)
with: U is the drive (bucket) speed of the turbine, V1 is the water speed at the point where the jet meets the buckets, V2 is the water speed leaving the buckets, m is the ratio of the speeds V2 and V1, and $\beta$ is the angle between $\overrightarrow{V_{1}}$ and $\overrightarrow{V_{2}}$.
To make the equations easy to interpret, reduced quantities "per unit system" (pu) have been adopted for the model, and the international system (IS) units have been adopted for the physical quantities.
Based on Eq. (2), the Pelton turbine torque is given by the expression (3):
$C_{t}=\rho Q_{t} \frac{U}{\Omega_{t}}\left(V_{1}-\frac{D_{t}}{2}\Omega_{t}\right)(1+m \cos \beta)$ (3)
with: $\Omega_{t}$ is the angular velocity of the generator (rad s-1) and Dt is the diameter of the turbine (m).
The jet velocity at the penstock exit is given by Eq. (4):
$V_{1}=\sqrt{2 g H_{t}}$ (4)
The jet velocity expression in per unit (pu) becomes:
$\gamma_{t}=\sqrt{h_{t}}$ (5)
with: ht is the waterfall height (pu).
The turbine torque expression using the reduced quantities becomes:
$c_{\mathrm{t}}=\frac{q_{\mathrm{t}}\left(\sqrt{\mathrm{h}_{\mathrm{t}}}-\mathrm{k}_{\mathrm{t}} \mathrm{n}_{\mathrm{t}}\right)}{\left(1-\mathrm{k}_{\mathrm{t}}\right)}$ (6)
$\mathrm{k}_{\mathrm{t}}=\frac{\mathrm{D}_{\mathrm{t}} \Omega_{\mathrm{tn}}}{2 \mathrm{V}_{\mathrm{1n}}}$ (7)
with: $q_t$ is the water flow (pu), $n_t$ is the turbine speed (pu), $c_t$ is the turbine torque (pu), $V_{1n}$ is the nominal speed of the jet (m s-1) and $\Omega_{tn}$ is the nominal speed of the turbine (rad s-1).
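For readers who want to experiment outside Simulink, Eqns. (5)-(7) translate directly into code. The sketch below is illustrative only: the function name and the sample value of $k_t$ are ours, not the paper's.

```python
import math

def pelton_torque_pu(q_t, h_t, n_t, k_t):
    """Per-unit Pelton torque, Eq. (6): c_t = q_t*(sqrt(h_t) - k_t*n_t)/(1 - k_t)."""
    return q_t * (math.sqrt(h_t) - k_t * n_t) / (1.0 - k_t)

# Sanity check: at rated conditions (q_t = h_t = n_t = 1 pu) the torque is
# 1 pu, whatever the value of k_t given by Eq. (7).
print(pelton_torque_pu(1.0, 1.0, 1.0, k_t=0.5))   # -> 1.0
```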
3. Model Representing the MHPP
Using a single-phase generator is more expensive per kilowatt (kW) than a three-phase generator [30]. For this reason, the three-phase generator is the one most recommended for MHPPs in isolated sites. The three-phase synchronous generator can be modeled by different methods, such as the linear state representation and the equivalent electrical circuit in the d-q reference frame. The linear state representation has a disadvantage: the machine speed appears in the state matrix [31, 32], which assumes that the speed remains steady throughout the simulation, whereas in the MHPP case this speed depends on the operating conditions. The equivalent electrical circuit, on the other hand, reflects the overall MHPP behavior. This paper therefore uses the differential equations describing the generator's electrical operation as well as those describing its mechanical behavior.
In the literature, some research works use the synchronous generator blocks provided by the «Sim Power System» toolbox of Matlab/Simulink [33]. However, these models are not suitable for tests such as overload/load discharge, short-circuit, etc.
3.1 The generator's electrical equations
The synchronous generator is a rotating electric machine directly coupled with the hydraulic turbine. It consists of a rotating part, the rotor, and a fixed part, the stator. The rotor carries a field winding (f) (the inductor) and receives the mechanical power. The stator carries three armature windings (a, b and c) and is connected in parallel with the mini-load grid presented in Figure 1. Figure 2 shows a view of the synchronous generator without dampers. A "receiver" convention is adopted for the rotor and a "generator" convention for the stator. The generator voltages can be described in the three-phase reference frame by Eqns. (8) [32]:
$\begin{aligned} v_a &= -r_s i_a + \frac{d\varphi_a}{dt} \\ v_b &= -r_s i_b + \frac{d\varphi_b}{dt} \\ v_c &= -r_s i_c + \frac{d\varphi_c}{dt} \\ v_f &= r_f i_f + \frac{d\varphi_f}{dt} \end{aligned}$ (8)
where, ia,b,c are the armature currents (A), $\varphi_{a,b,c}$ are the total armature flux linkages (Wb), $\varphi_{\mathrm{f}}$ is the total main field flux linkage (Wb), rf is the main field resistance (Ω), rs is the stator resistance (Ω), and vf, if are the main field excitation voltage (V) and the main field current (A), respectively.
Figure 2. Synchronous generator view without dampers
The stator and rotor flux terms depend on the rotor angle ($\theta$), which varies with time at the rotor rotation speed. This makes solving directly in phase quantities computationally awkward. The Park transformation is therefore used to shift from a fixed reference frame linked to the stator to a rotating reference frame linked to the rotor. The result of this transformation, called the d-q reference frame, is a mathematical tool for describing the generator behavior with fixed-coefficient equations; the generator's analytical study thus becomes relatively simple in the d-q reference frame [32]. Eq. (9) shows the Park transformation matrix ($P_{ark}$).
$\left[P_{a r k}\right]=\sqrt{\frac{2}{3}}\left[\begin{array}{ccc}{\cos \theta} & {\cos \left(\theta-\frac{2 \pi}{3}\right)} & {\cos \left(\theta+\frac{2 \pi}{3}\right)} \\ {-\sin \theta} & {-\sin \left(\theta-\frac{2 \pi}{3}\right)} & {-\sin \left(\theta+\frac{2 \pi}{3}\right)} \\ {\frac{1}{\sqrt{2}}} & {\frac{1}{\sqrt{2}}} & {\frac{1}{\sqrt{2}}}\end{array}\right]$ (9)
The stator transformed voltages, currents and flux are given by the Eqns (10):
$\begin{aligned}\left[\mathrm{V}_{\mathrm{qd} 0}\right] &=\left[\mathrm{P}_{\mathrm{ark}}\right]\left[\mathrm{V}_{\mathrm{abc}}\right] \\\left[\mathrm{i}_{\mathrm{qd} 0}\right] &=\left[\mathrm{P}_{\mathrm{ark}}\right]\left[\mathrm{i}_{\mathrm{abc}}\right] \\\left[\varphi_{\mathrm{qd} 0}\right] &=\left[\mathrm{P}_{\mathrm{ark}}\right]\left[\varphi_{\mathrm{abc}}\right] \end{aligned}$ (10)
where, Vd, Vq are the direct and the transverse voltages (V) respectively; id , iq are the direct and the transverse currents (A) respectively and $\varphi_{\mathrm{d}}$, $\varphi_{q}$ are the direct and the transverse total flux (wb) respectively.
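The transformation of Eqns. (9)-(10) can be sketched in a few lines of NumPy (illustrative code, ours, not the paper's implementation): a balanced three-phase set maps to constant rotating-frame components and a zero homopolar component.

```python
import numpy as np

def park_matrix(theta):
    """Power-invariant Park matrix of Eq. (9)."""
    a = 2.0 * np.pi / 3.0
    return np.sqrt(2.0 / 3.0) * np.array([
        [np.cos(theta),  np.cos(theta - a),  np.cos(theta + a)],
        [-np.sin(theta), -np.sin(theta - a), -np.sin(theta + a)],
        [1 / np.sqrt(2), 1 / np.sqrt(2),     1 / np.sqrt(2)],
    ])

# Balanced three-phase currents at an arbitrary angle theta:
theta = 0.7
i_abc = np.array([np.cos(theta - k * 2 * np.pi / 3) for k in range(3)])
print(park_matrix(theta) @ i_abc)   # ~ [1.2247, 0, 0]: constant components, zero homopolar
```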
The synchronous generator model is defined in the reference frame d-q. The main assumptions taken into consideration for modeling are:
The magnetic circuit saturation, hysteresis and eddy currents are assumed negligible. The capacitive coupling between windings is assumed negligible.
The stator and main field resistances of the generator are invariant with respect to changes in temperature.
The magnetomotive forces of the armature windings have a sinusoidal distribution.
The leakage inductances and the leakage flux are assumed negligible.
Finally, the stator voltages and flux linkages of the three-phase synchronous generator in the d-q reference frame are written as Eqns. (11) and (12), respectively.
$\begin{aligned} V_d &= -r_s i_d + \frac{d\varphi_d}{dt} - w_e \varphi_q \\ V_q &= -r_s i_q + \frac{d\varphi_q}{dt} + w_e \varphi_d \\ V_f &= r_f i_f + \frac{d\varphi_f}{dt} \end{aligned}$ (11)

$\begin{aligned} \varphi_d &= -L_d i_d + m_{sf} i_f \\ \varphi_q &= -L_q i_q \\ \varphi_f &= L_f i_f - m_{sf} i_d \end{aligned}$ (12)
with: we is the electrical speed (rad s-1), msf is the mutual inductance between the field winding and the d-axis stator winding (H), Ld, Lq are the inductances of the d-axis and q-axis stator windings (H), respectively, and Lf is the inductance of the main field winding (H).
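Eqns. (11)-(12) translate directly into code. The sketch below is ours (parameter names are hypothetical); it evaluates the flux linkages and the steady-state stator voltages, where the d(phi)/dt terms vanish:

```python
def dq_fluxes(i_d, i_q, i_f, L_d, L_q, L_f, m_sf):
    """Flux linkages of Eq. (12)."""
    phi_d = -L_d * i_d + m_sf * i_f
    phi_q = -L_q * i_q
    phi_f = L_f * i_f - m_sf * i_d
    return phi_d, phi_q, phi_f

def dq_voltages_steady_state(i_d, i_q, i_f, w_e, r_s, L_d, L_q, L_f, m_sf):
    """Stator voltages of Eq. (11) with d(phi)/dt = 0 (steady state)."""
    phi_d, phi_q, _ = dq_fluxes(i_d, i_q, i_f, L_d, L_q, L_f, m_sf)
    v_d = -r_s * i_d - w_e * phi_q
    v_q = -r_s * i_q + w_e * phi_d
    return v_d, v_q
```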
Using the stator voltages and flux equations, the generator equivalent electrical circuit can be drawn, as it's shown in Figure 3. This circuit can be used to simulate the electrical behavior of the generator.
Figure 3. Electrical behavior of the generator using the equivalent electrical circuit in the reference frame d-q
3.2 Mechanical behavior
The synchronous generator modeling would not be complete without the electromagnetic torque expression [34]. The electromagnetic torque is given by expression (13):
$T_{e l e}=P\left[\varphi_{d} i_{q}-\varphi_{q} i_{d}\right]$ (13)
with: P is the machine's poles number.
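A one-line helper (illustrative, ours) implements Eq. (13):

```python
def electromagnetic_torque(phi_d, phi_q, i_d, i_q, P):
    """Electromagnetic torque of Eq. (13): T_ele = P*(phi_d*i_q - phi_q*i_d)."""
    return P * (phi_d * i_q - phi_q * i_d)
```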
To obtain the rotation speed, the synchronous generator model must incorporate the fundamental principle of dynamics, linking the different torques to the mechanical rotor speed, written as Eq. (14):
$J_{\Delta} \frac{d}{d t}\left(w_{m}\right)+f_{v} w_{m}=T_{m e c h}-T_{s}-T_{e l e}$ (14)
with: $J_{\Delta}$ is the combined moment of inertia of the generator and the hydraulic turbine (kg m2), Wm is the mechanical rotor speed (rad s-1), Ts is the dry friction torque (N m), fv is the viscous friction coefficient (N m/rad s-1) and Tmech is the mechanical torque (N m).
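Combining Eq. (6) and Eq. (14) already reproduces, qualitatively, the open-loop frequency behavior studied later in Section 4. The per-unit sketch below is ours: the constants are assumed for illustration (friction neglected), not taken from the paper, so the settling value should be read as indicative only.

```python
import math

k_t, J = 0.75, 5.0                    # assumed per-unit turbine constant and inertia

def c_turbine(n, q=1.0, h=1.0):
    """Per-unit Pelton torque, Eq. (6)."""
    return q * (math.sqrt(h) - k_t * n) / (1.0 - k_t)

n, dt = 1.0, 1e-3                     # start at rated speed: 1 pu = 50 Hz
for step in range(30000):             # 30 s of simulated time, forward Euler
    c_load = 1.25 if step >= 5000 else 1.0   # +25% overload applied at t = 5 s
    n += dt * (c_turbine(n) - c_load) / J    # Eq. (14) with friction neglected
print(round(50.0 * n, 1))             # settles near 45.8 Hz with these constants
```

Note how the frequency settles at a lower value rather than collapsing: the Pelton torque of Eq. (6) rises as the speed drops, which is exactly the self-regulating effect the scenario tests probe.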
3.3 Development of the generator model
The proposed model generates the three-phase voltage and provides the three terminals va, vb and vc to which the three-phase and/or single-phase loads of the mini-load grid can be connected.
Figure 4. Schematic diagram of the three-phase generator
The generator model proposed in Figure 4 is characterized by two inputs: the mechanical torque (Tmech) and the excitation voltage (vf). The mechanical torque is the input of the mechanical equation, which generates the electrical speed and the mechanical speed used by the direct Park transformation block, the inverse Park transformation block and the equivalent electrical circuit. The excitation voltage generates the excitation current (if); static electromotive forces are thus created by the flux variations, together with electrodynamic forces based on the pulses obtained from the mechanical equation. The equivalent electrical circuit then provides the currents id and iq, which are the inputs of the inverse Park transformation block generating the three-phase currents (ia, ib and ic). Three "image" resistances equal to 10^6 Ω provide the three-phase voltages (va, vb and vc). Finally, the equivalent electrical circuit uses the voltages vd and vq, which are the generator output voltages, as feedback to correctly build the currents id and iq.
According to Figure 4, the proposed generator model is characterized by two inputs (mechanical torque and excitation voltage) and three outputs (the three voltage terminals va, vb and vc), ensuring the conversion of mechanical energy into electricity. The model is suitable for all kinds of single- and/or three-phase loads, which can be connected to the proposed model via these terminals.
3.4 Development of the MHPP model
To develop the global MHPP model shown in Figure 5, the previously developed models are incorporated together. The interactions between the different model blocks give the overall MHPP behavior and allow extracting the maximum information about MHPP operation under all working conditions.
Figure 5. Schematic diagram of the proposed MHPP model
4. Implementation and Simulation Results
Figure 6 shows the global Simulink diagram of the three-phase MHPP model. The model was implemented in Matlab/Simulink 2010a, with the Ode1 solver and a sample time equal to 10. The values of the electrical and mechanical parameters of the synchronous generator used are shown in Table 1.
Figure 6. Proposed Simulink model of a three-phase MHPP
Table 1. Three-phase generator parameters

Electrical parameters: $r_s$ (Ω), $r_f$ (Ω), $L_d$ (H), $L_q$ (H), $L_f$ (H), $m_{sf}$ (H)
Mechanical parameters: $C_s$ (N m), $f_v$ (N m/rad s-1), $J_{\Delta}$ (kg m2)
Nominal values: power $P_n$, frequency $f_n$, voltage $U_n$, speed $N_n$ = 1500 tr min-1, current $I_n$, power factor cos(φ)
4.1 Scenario 1: Load variations effects on the MHPP electrical quantities
The effect of load variations on the frequency is a classic validation test in the literature: a load is connected or disconnected once the MHPP reaches steady state. In the first scenario, the hydropower plant runs with a nominal balanced three-phase load (270 W) on the three phases; at t = 3 s, a three-phase resistive load amounting to 25% of the rated load is connected, and after 10 s the same load is disconnected. Figure 7 presents the three-phase line-to-line voltage waveforms ($U_{ab}$, $U_{bc}$ and $U_{ca}$) at the load terminals. Figure 8 shows the three-phase current waveforms absorbed by the load. Figures 9 and 10 show the evolution of the frequency and of the excitation current, respectively.
From Figures 7 and 8, it can be seen that when the MHPP is overloaded, the consumed current increases and the output voltage drops; when the same load is disconnected, the absorbed current decreases and the output voltage returns to its nominal value. According to Figure 9, the frequency drops to 45.9 Hz when the 25% overload is applied and returns to the nominal value (50 Hz) when the load is disconnected. The results in Figure 10 show a constant excitation current equal to its nominal value. These results show a behavior similar to that of a real MHPP [16, 23].
Figure 7. Simulation waveforms of the three-phase voltage line to line versus time. (a) Before and after applying a load. (b) Zoom on the rated steady-state. (c) Zoom on the three-phase voltage line to line after applying an overload
Figure 8. Simulation waveforms of the three-phase current absorbed by the load versus time. (a) Before and after applying a load. (b) Zoom on the rated steady-state. (c) Zoom on the three-phase current absorbed by loads after applying an overload
Figure 9. Simulation frequency variation versus time when applying an overload followed by a load discharge
Figure 10. Simulation of a load variation effect on the excitation current
Figures 7(b) and 8(b) show zooms on the three-phase line-to-line voltage waveforms and the three-phase current waveforms in the nominal steady state, respectively. These curves are sinusoidal with a period of 0.02 s, have a phase shift of $\frac{2 \pi}{3}$ between two successive phases, and have root-mean-square (RMS) values equal to the nominal values.
It should be noted that all these simulation tests were performed in open-loop (without using any controller).
4.2 Scenario 2: Highlighting the nonlinear behavior of the MHPP
At the beginning of this scenario, the MHPP is driven at its nominal speed and nominal excitation, with constant turbine flow rate and effective waterfall height. Different load variations were then applied (overloads and load discharges of different values). The results corresponding to 4 different overload values are illustrated in Figure 11; the loads were connected at t = 1 s. Similarly, Figure 12 illustrates the results for different load discharge values applied at t = 1 s. From Figures 11 and 12, taking one load variation (45%) as an example, it is clear that the hydropower plant is a nonlinear system: for the same load variation (∆P), the frequency variation (∆f) differs between load discharge and overload, ∆f = 10.59 Hz for the load discharge against ∆f = 6.73 Hz for the overload. Table 2 provides a qualitative assessment of the frequency variations resulting from the applied load variations.
The nonlinearity is also demonstrated by analyzing the frequency variations for different ∆P in the load discharge case. For example, ∆P = 5% gives ∆f = 0.91 Hz, while ∆P = 45% = 5%·9 gives ∆f = 10.59 Hz instead of 8.19 Hz = 0.91 Hz·9. This behavior is entirely consistent with recent research publications [16, 17, 23, 35]. The MHPP model with experimental validation developed in [16] shows strong nonlinearity, with the frequency varying significantly with load variations; that result was validated on an MHPP laboratory prototype consisting of a Pelton hydraulic turbine and a single-phase synchronous generator. According to [16] and the present approach, the frequency evolution versus time behaves similarly when applying different overloads and load discharges. In addition, the suggested MHPP model is a powerful tool, as the user can obtain the time evolution of all physical parameters, whereas most existing models in the literature give only a single parameter evolution: the frequency around a specific operating point. It is concluded that the developed model is faithful to the real MHPP behavior.
Figure 11. Simulation frequency evolution versus time when applying different overloads
Figure 12. Simulation frequency evolution versus time when applying different load discharges
Table 2. Qualitative assessment of the simulation frequency variations |∆f| after load discharge and after overload, for different values of ∆P
4.3 Scenario 3: Test of unbalance load between the generator's phases
In remote sites, the loads are generally single-phase. The third scenario thus consists of operating the MHPP, in steady state, with a nominal resistive three-phase load (270 W). Then, at t = 1 s, a single-phase load of 50 W is connected to phase A only, creating an unbalanced load at the three-phase generator terminals. The waveforms in Figure 13 show the generator's three-phase currents before and after the unbalance. Figure 14(a) shows the resulting frequency evolution. The results show the appearance of an unbalanced current between phases. This unbalance generates torque pulsation in the generator, which leads to mechanical vibration and acoustic noise [36]; the zoom on the rated steady state in Figure 14(b) confirms the presence of this torque pulsation. The proposed model therefore makes it possible to simulate MHPP operation even with an unbalanced load.
Figure 13. Simulation waveforms of the three-phase current versus time before and after the use of unbalanced load
Figure 14. Simulation frequency evolution versus time. (a) Before and after applying an overload to phase A. (b) Zoom on the rated steady-state
5. Conclusions

In this study, the modeling of a three-phase MHPP was presented. The model includes two sub-models: the Pelton turbine model and the three-phase synchronous generator model. Three test scenarios were performed using the MHPP model. These scenarios show the performance and flexibility of the proposed model in providing the time evolution of several MHPP parameters under all kinds of operating conditions. The first scenario illustrates the effect of load variations on the MHPP electrical quantities. The second scenario highlights the nonlinear character of the MHPP using load discharge and overload tests. The last scenario demonstrates the flexibility of the developed model to connect different kinds of loads, and shows the feasibility of simulating the MHPP even in the case of an unbalance. Besides, simple identification methods can be used to adapt this model to any real hydropower plant configuration. As prospects, the authors intend to improve the efficiency-cost trade-off of MHPP exploitation, to ensure electricity quality continuously so that the produced voltage characteristics (frequency and RMS value) remain constant, and to propose a control system maintaining these characteristics regardless of random load variations.
[1] Berrada, A., Bouhssine, Z., Arechkik, A. (2019). Optimisation and economic modeling of micro hydropower plant integrated in water distribution system. Journal of Cleaner Production, 232(2): 877-887. https://doi.org/10.1016/j.jclepro.2019.06.036
[2] Sahoo, S.K. (2016). Renewable and sustainable energy reviews solar photovoltaic energy progress in India: A review. Renewable and Sustainable Energy Reviews, 59: 927-939. https://doi.org/10.1016/j.rser.2016.01.049
[3] Canales, F.A., Beluco, A., Mendes, C.A.B. (2015). A comparative study of a wind hydro hybrid system with water storage capacity: conventional reservoir or pumped storage plant. Journal of Energy Storage, 4: 96-105. https://doi.org/10.1016/j.est.2015.09.007
[4] Salimi, A.A., Karimi, A., Noorizadeh, Y. (2019). Simultaneous operation of wind and pumped storage hydropower plants in a linearized security-constrained unit commitment model for high wind energy penetration. Journal of Energy Storage, 22: 318-330. https://doi.org/10.1016/j.est.2019.02.026
[5] Dursun, B., Gokcol, C. (2011). The role of hydroelectric power and contribution of small hydropower plants for sustainable development in Turkey. Renewable Energy, 36(4): 1227-1235. https://doi.org/10.1016/j.renene.2010.10.001
[6] Laghali, J.A., Mokhlis, H., Bakar, A.H.A., Hasmaini, M. (2013). A comprehensive overview of new designs in the hydraulic, electrical equipments and controllers of mini hydro power plants making it cost effective technology. Renewable Energy, 20: 279-293. https://doi.org/10.1016/j.rser.2012.12.002
[7] Khan, A.A., Khan, A.A., Zahid, M., Rizwan, R. (2013). Flow acceleration by converging nozzles for power generation in existing canal system. Renewable Energy, 60: 548-552. https://doi.org/10.1016/j.renene.2013.06.005
[8] Salhi, I., Chennani, M., Doubabi, S., Ezziani, N. (2008). Modeling and regulation of a micro hydroelectric power plant. IEEE International Symposium on Industrial Electronics, Cambridge, UK, pp. 1639-44. https://doi.org/10.1109/ISIE.2008.4677235
[9] Alhamwi, A., Kleinhans, D., Weitemeyer, S., Vogt, T. (2015). Moroccan National Energy Strategy reviewed from a meteorological perspective. Energy Strategy Reviews, 6: 39-47. https://doi.org/10.1016/j.esr.2015.02.002
[10] Ghadimi, A.A., Razavi, F., Mohammadian, B. (2011). Determining optimum location and capacity for micro hydropower plants in Lorestan province in Iran. Renewable and Sustainable Energy Reviews, 15(8): 4125-4131. https://doi.org/10.1016/j.rser.2011.07.003
[11] Ranjitkar, G., Huang, J., Tung, T. (2006). Application of micro-hydropower technology for remote regions. IEEE EIC climate change technology, Ottawa, Canada. https://doi.org/10.1109/EICCCC.2006.277207
[12] Márquez, J.L., Molina, M.G., Pacas, J.M. (2010). Dynamic modeling, simulation and control design of an advanced micro hydro power plant for distributed generation applications. International Journal of Hydrogen Energy, 35(11): 5772-5777. https://doi.org/10.1016/j.ijhydene.2010.02.100
[13] Guo, W., Yang, J., Chen, J., Yang, W., Teng, Y., Zeng, W. (2015). Time response of the frequency of hydroelectric generator unit with surge tank under isolated operation based on turbine regulating modes. Electric Power Components and Systems, 43(20): 2341-2355. https://doi.org/10.1080/15325008.2015.1082681
[14] Vallet, M., Munteanu, I., Bratcu, A.I., Bacha, S., Roye, D. (2012). Synchronized control of cross-flow-water-turbine-based twin towers. Renewable Energy, 48: 382-391. https://doi.org/10.1016/j.renene.2012.05.013
[15] Guo, W., Yang, J., Yang, W., Chen, J., Teng, Y. (2015). Regulation quality for frequency response of turbine regulating system of isolated hydroelectric power plant with surge tank. Electrical Power and Energy Systems, 73: 528-538. https://doi.org/10.1016/j.ijepes.2015.05.043
[16] Salhi, I., Doubabi, S., Essounbouli, N., Hamzaoui, A. (2014). Frequency regulation for large load variations on micro-hydro power plants with real-time implementation. Electrical Power and Energy Systems, 60: 6-13. https://doi.org/10.1016/j.ijepes.2014.02.029
[17] Raja Singh, R., Anil Kumar, B., Shruthi, D., Panda, R., Thanga Raj, C. (2018). Review and experimental illustrations of electronic load controller used in standalone Micro-Hydro generating plants. Engineering Science and Technology, an International Journal, 21(5): 886-900. https://doi.org/10.1016/j.jestch.2018.07.006
[18] Khan, M.R.B., Pasupuleti, J., Jidin, R. (2018). Load frequency control for mini-hydropower system: A new approach based on self-tuning fuzzy proportional-derivative scheme. Sustainable Energy Technologies and Assessment, 30: 253-262. https://doi.org/10.1016/j.seta.2018.10.013
[19] Saxena, S. (2019). Load frequency control strategy via fractional-order controller and reduced-order modeling. Electrical Power and Energy Systems, 104: 603-614. https://doi.org/10.1016/j.ijepes.2018.07.005
[20] Xu, Y.P. (2017). A study of hydropower generation process control based on fuzzy control theory. European Journal of Electrical Engineering, 19(3-4): 167-179. https://doi.org/10.3166/EJEE.19.167-179
[21] Guo, B.Q., Xu B.B., Chen D.Y., Ye W., Guo P.C., Luo X.Q. (2018). Dynamic modeling and energy distribution analysis in a hydroelectric generating system considering the stochastic turbine flow. Electrical Power and Energy Systems, 103: 611-621. https://doi.org/10.1016/j.ijepes.2018.06.032
[22] Hanmandlu, M., Goyal, H. (2008). Proposing a new advanced control technique for micro hydro power plants. Electrical Power and Energy Systems, 30(4): 272-282. https://doi.org/10.1016/j.ijepes.2007.07.010
[23] Salhi, I., Doubabi, S., Essounbouli, N., Hamzaoui, A. (2010). Application of multi-model control with fuzzy switching to a micro hydro-electrical power plant. Renewable Energy, 35(9): 2071-2079. https://doi.org/10.1016/j.renene.2010.02.008
[24] Şerban, I., Marinescu, C. (2011). Aggregate load-frequency control of a wind-hydro autonomous microgrid. Renewable Energy, 36(12): 3345-3354. https://doi.org/10.1016/j.renene.2011.05.012
[25] Qian, D.W., Tong, S.W., Liu, X.J. (2015). Load frequency control for micro hydro power plants by sliding mode and model order reduction. Automatika, 56(3): 318-330. https://doi.org/10.7305/automatika.2015.12.816
[26] Sahu, R.K., Panda, S., Rout, U.K., Sahoo, D.K. (2016). Teaching learning based optimization algorithm for automatic generation control of power system using 2-DOF PID controller. Electrical Power and Energy Systems, 77: 287-301. https://doi.org/10.1016/j.ijepes.2015.11.082
[27] Garrido, J., Zafra, A., Vázquez, F. (2009). Object oriented modelling and simulation of hydropower plants with run-of-river scheme: A new simulation tool. Simulation Modelling Pratice and Theory, 17: 1748-1767. https://doi.org/10.1016/j.simpat.2009.08.007
[28] Cobb, B.R., Sharp, K.V. (2013). Impulse (Turgo and pelton) turbine performance characteristics and their impact on pico-hydro installations. Renewable Energy, 50: 959-64. https://doi.org/10.1016/j.renene.2012.08.010
[29] Haidar, A.M.A., Senan, M.F.M., Noman, A., Radman T. (2012). Utilization of pico hydro generation in domestic and commercial loads. Renew Sustain Energy Rev, 16(1): 518-524. https://doi.org/10.1016/j.rser.2011.08.017
[30] Derakhshan, S., Nourbakhsh, A. (2008). Theoretical, numerical and experimental investigation of centrifugal pumps in inverse operation. Experimental Thermal and Fluid Science, 32(1): 620-1627. https://doi.org/10.1016/j.expthermflusci.2008.05.004
[31] Mouni, E., Tnani, S., Champenois, G. (2008). Synchronous generator modelling and parameters estimation using least squares method. Simulation Modelling Practice and Theory, 16(6): 678-689. https://doi.org/10.1016/j.simpat.2008.04.005
[32] Barakat, A., Tnani, S., Champenois, G., Mouni, E. (2010). Analysis of synchronous machine modeling for simulation and industrial applications. Simulation Modelling Practice and Theory, 19(9): 1382-1396. https://doi.org/10.1016/j.simpat.2010.05.019
[33] Mouni, E., Tnani, S., Champenois, G. (2006). Comparative study of three modeling methods of synchronous generator. Conference of the IEEE Industrial Electronics Society. Paris, France. https://doi.org/10.1109/IECON.2006.347987
[34] Ong, C.M. (1997). Dynamic Simulation of Electric Machinery using Matlab/Simulink. Prentice Hall.
[35] Ozdemir, M.T., Orhan, A. (2015). A new approach to the development of a nonlinear model for micro-Pelton turbines. Turkish Journal of Electrical Engineering & Computer Sciences, 23(5): 1272-1283. https://doi.org/10.3906/elk-1303-72
[36] Serban, I., Ion, C.P., Marinescu, C. (2008). Frequency control and unbalances compensation in stand-alone fixed-speed wind turbine systems. In: The 34th Annual Conference of the IEEE Industrial Electronics Society, Orlando, Florida, USA, pp. 2167-2172. https://doi.org/10.1109/IECON.2008.4758292
Estimation of Additive and Dominance Genetic Variances in Line Breeding Swine
Ishida, T.;Kuroki, T.;Harada, H.;Fukuhara, R.
https://doi.org/10.5713/ajas.2001.1
Additive and dominance genetic variances were estimated for purebred Landrace pigs selected with line breeding from 1989 to 1995 at the Miyazaki Livestock Experiment Station, Kawaminami Branch. Ten body measurements, two reproductive traits and fifteen carcass traits were analyzed with single-trait mixed model analysis. The estimates of narrow-sense heritabilities by the additive model were in the range of 0.07 to 0.46 for body measurements, 0.05 to 0.14 for reproductive traits, and 0.05 to 0.68 for carcass traits. The additive model tended to slightly overestimate the narrow-sense heritabilities compared to the additive and dominance model. The proportion of the dominance variance in total genetic variance ranged from 0.11 to 0.91 for body measurements, 0.00 to 0.65 for reproductive traits, and 0.00 to 0.86 for carcass traits. Large differences among traits were found in the ratio of dominance to total genetic variance. These results suggested that dominance effects would affect the expression of all ten body measurements, one reproductive trait, and nine carcass traits. It is therefore justified to consider dominance effects in the genetic evaluation of the selected lines for those traits.
Factors Affecting the Survival of Frozen Thawed Bovine In Vitro Produced Blastocysts
Gustafsson, H.;Larsson, B.;Shamsuddin, M.;Jaakma, U.;Emanuelson, U.
The effect of some factors on the post-thaw survival of a total of 240 in vitro produced bovine blastocysts was investigated using logistic regression analysis. The explanatory variables tested were: type of culture medium before freezing (TCM 199 supplemented with BSA, BSAITS (BSA+insulin+transferrin+selenium) or ECS (estrous cow serum), with or without BOEC (bovine oviductal epithelial cells)), age of the blastocyst (Day 7, Day 8+9), morphological appearance before freezing (distinct=Q1 or indistinct=Q2 inner cell mass) and type of cryoprotectant (glycerol, 1.0 M or ethylene glycol, 1.6 M). Survival after thawing was assessed on the basis of post-thaw quality and development after co-culture with BOEC for 24 and 48 hours. Day 7 blastocysts had an almost three times better chance of survival than Day 8+9 blastocysts. Q1 Day 8+9 blastocysts had higher odds of surviving after 48 hours in culture than Q2 blastocysts (p<0.05). Blastocysts produced in BSAITS medium had the best chances of survival; however, the odds were not always significant. Blastocysts frozen in glycerol had a better post-thaw quality rating than those frozen in ethylene glycol; however, the difference in post-thaw development at culture was not significant. The relationship between post-thaw quality and post-thaw development at culture was significant (p<0.05). The developmental stage and/or age of the embryo and the culture medium in which development up to the blastocyst stage takes place affect the post-thaw survival of bovine embryos.
Cloning and Expression of β1-Adrenergic Receptor Genes in Adipose Tissues from Korean Native Cattle (Hanwoo)
Ha, S.H.;Chung, M.I.;Baik, M.G.;Choi, Y.J.
https://doi.org/10.5713/ajas.2001.13
Bovine ${\beta}1$-adrenergic receptor (AR) cDNA was cloned using degenerate primers. Bovine ${\beta}1$-AR coded for 467 amino acids, and comparison of the deduced amino acid sequence with that of sheep showed 93.4% identity. Northern blot analysis indicated that the transcript size for the bovine ${\beta}1$-AR was 3.6 kb in adipose tissue. The expression levels of the three $\beta$-ARs (1, 2, and 3) in bovine abdominal, subcutaneous, and perirenal adipose tissues were examined using reverse transcription-polymerase chain reaction (RT-PCR), and the levels of ${\beta}1$- and ${\beta}3$-AR mRNA were found to be lower in the subcutaneous adipose tissue than in the abdominal and perirenal adipose tissues. These results suggest that the expression of $\beta$-AR mRNAs is differentially regulated among the adipose tissues.
Chicken Insulin-Like Growth Factor-I Stimulates Protein Synthesis of Chicken Embryo Myoblasts Cultured in Serum-Free Medium
Kita, K.;Okumura, J.
The effect of chicken IGF-I on protein synthesis of chicken embryo myoblasts cultured in serum-free medium was examined. When myoblasts had expanded to approximately 20-30% of the well, the medium was changed to serum-free medium including 0, 2, 20, 200 or 2000 ng/ml of recombinant chicken IGF-I. Culture medium including 10% fetal calf serum (FCS) was used as a positive control. After 1 day of incubation, protein synthesis was measured by the incorporation of [$^3H$]-L-leucine. The cells were then incubated for a further 18 hours, and the radioactivity in the protein was measured as an index of protein synthesis. The values for protein synthesis in the serum-free medium without chicken IGF-I or with 2000 ng/ml of chicken IGF-I were the lowest. Protein synthesis was elevated with increasing chicken IGF-I concentration from 0 to 20 ng/ml. The values for protein synthesis in the 20 ng/ml and 200 ng/ml IGF-I groups were about half of that of the FCS group. The present study revealed that the potency of chicken IGF-I at levels of 20 to 200 ng/ml to stimulate myoblast protein synthesis was about half that of 10% FCS.
Haematological and Immunological Response in Lambs Fed on Raw and Variously Processed Cottonseed Meal
Nagalakshmi, D.;Sastry, V.R.B.;Agrawal, D.K.;Katiyar, R.C.
An experiment was conducted with twenty crossbred male lambs to assess the effect of cotton (Gossypium) seed meal (CSM) on blood constituents and immunity. Lambs were randomly assigned to a reference diet (30% deoiled peanut meal, DPNM) and four test diets containing 40% of either raw, 45 minutes cooked, 1% $Ca(OH)_2$-treated or iron-treated (1 free gossypol, FG : 0.3 Fe) CSM (replacing approximately 50% of the reference concentrate mixture). These isonitrogenous and isocaloric concentrate mixtures were fed to meet 80% of protein requirements (NRC, 1985) along with ad lib maize hay for 180 days. Blood was collected at 60, 120 and 180 days post feeding. The lambs were sensitized with Brucella abortus S99 antigen after 140 days and were subjected to ELISA and delayed type hypersensitivity (DTH) tests. Blood haemoglobin, erythrocyte count, leucocyte count, total protein, total albumin, total globulin, urea, creatinine concentration and aspartate aminotransferase activity in lambs fed raw or processed CSM were comparable to the values of reference lambs. The higher (p<0.01) blood glucose levels observed in CSM-fed lambs at 60 days of feeding were later reduced to levels comparable with those on the reference diet at 120 and 180 days of feeding. The alanine aminotransferase activity was lower in lambs fed diets containing raw and cooked CSM at 120 and 180 days of feeding. A marginal increase in serum iron and alkaline phosphatase activity was observed in the iron-treated group and raw CSM fed lambs, respectively. The humoral immune response and DTH reactivity were lower (p<0.05) in lambs fed raw CSM (consuming 302.83 mg FG/day). Cooking, $Ca(OH)_2$ and iron treatment of raw CSM showed a positive response in alleviating the suppression of the immune response, owing to the reduced consumption of FG by 40.19%, 17.40% and 26.73%, respectively, on these diets. The present study thus indicated that although consumption of 40% raw CSM (302.83 mg FG/day) did not affect the majority of the haematological and blood biochemical parameters, it markedly suppressed the immune mechanism of lambs.
Caecal Fermentation, Blood Biochemical Profile and Histopathological Changes in Broiler Rabbits Fed Graded Levels of Neem (Azadirachta indica) Seed Kernel Cake
Vasanthakumar, P.;Sharma, K.;Sastry, V.R.B.;Sharma, A.K.
The expediency of promoting rabbit production on underutilized by-product feedstuffs prompted the current investigation of the caecal fermentation pattern, blood biochemical profile and histopathological changes of vital organs in 48 broiler rabbits fed diets containing 0, 5, 10 and 20% levels of neem seed kernel cake (NSKC) for 6 weeks and subsequently sacrificed. The NSKC incorporation in the diet did not exert any adverse effect on caecal fermentation, although the weight of the caecum and its contents was significantly (p<0.01; p<0.05) lower in rabbits fed the diet containing 20% NSKC. Except for blood glucose concentration, none of the blood biochemical constituents (serum urea nitrogen, creatinine, total protein, haemoglobin) and the activities of different enzymes (alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase) varied significantly due to NSKC feeding. A histopathological examination of the vital organs (liver, kidney, heart, spleen, lungs, intestine and stomach) revealed a variable degree of villus atrophy in the intestine and degenerative changes in the liver and tubular epithelium of the kidney in some rabbits when NSKC was fed at levels above 10%.
The Role of Neuropeptide Y in the Central Regulation of Grass Intake in Sheep
Sunagawa, K.;Weisiger, R.S.;McKinley, M.J.;Purcell, B.S.;Thomson, C.;Burns, P.L.
The physiological role of brain neuropeptide Y (NPY) in the central regulation of grass intake in sheep was investigated through a continuous intracerebroventricular (ICV) infusion of NPY at a dose of $5{\mu}g/0.2ml/hr$ for 98.5 hours from day 1 to day 5. Sheep (n=5) were fed for 2 hours once a day, and water and 0.5 M NaCl solution were given ad libitum. Feed intake during ICV NPY infusion increased significantly compared to that during ICV artificial cerebrospinal fluid (CSF) infusion. Water and NaCl intake during ICV NPY infusion remained unchanged. Mean arterial blood pressure (MAP) and plasma osmolality during ICV NPY infusion were not significantly different from those during ICV CSF infusion. On the other hand, plasma glucose concentration during ICV NPY infusion increased significantly compared to that during ICV CSF infusion. The results suggest that brain NPY acts as a hunger factor in brain mechanisms controlling feeding to increase grass intake in sheep.
In Sacco Ruminal Degradation Characteristics of Chemical Components in Fresh Zoysia japonica and Miscanthus sinensis Growing in Japanese Native Pasture
Ogura, S.;Kosako, T.;Hayashi, Y.;Dohi, H.
Ruminal degradation characteristics of dry matter (DM), neutral detergent fiber (NDF) and crude protein (CP) in fresh leaves of two Japanese native grasses (Zoysia japonica and Miscanthus sinensis) and one sown temperate grass (Dactylis glomerata) were investigated by an in sacco method in spring (mid-May), summer (mid-July) and autumn (mid-September). Japanese native grasses had higher NDF and lower CP concentrations than D. glomerata, and the CP concentration in native grasses decreased in autumn. Ruminal degradability of DM, NDF and CP was lower in native grasses than in D. glomerata (p<0.05) in all seasons. DM and NDF degradability decreased in summer for Z. japonica and D. glomerata, while it decreased in autumn for M. sinensis. CP degradability in Z. japonica was constant throughout the seasons, whereas that in M. sinensis greatly decreased in summer and autumn (p<0.05). It was concluded that Z. japonica could stably supply ruminally digestible nutrients for grazing animals in Japanese native pasture. However, the degradation characteristics of freshly chopped native grasses did not fit the exponential model of $D=a+b(1-e^{-ct})$ proposed by Ørskov and McDonald.
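As a point of reference for the model just mentioned, the Ørskov–McDonald equation expresses cumulative degradation at incubation time t as D(t) = a + b(1 − e^{−ct}). The short Python sketch below fits this curve to an in sacco time course; the data points and starting values are invented for illustration and are not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def orskov_mcdonald(t, a, b, c):
    # a: rapidly soluble fraction (%), b: potentially degradable fraction (%),
    # c: fractional degradation rate (per hour)
    return a + b * (1.0 - np.exp(-c * t))

# Invented example data: incubation time (h) vs. % dry matter disappearance
t_obs = np.array([0.0, 4.0, 8.0, 16.0, 24.0, 48.0, 72.0])
d_obs = np.array([18.0, 30.0, 38.0, 48.0, 54.0, 62.0, 64.0])

(a, b, c), _ = curve_fit(orskov_mcdonald, t_obs, d_obs, p0=(15.0, 50.0, 0.05))
print(f"a = {a:.1f}%, b = {b:.1f}%, c = {c:.3f} per hour")
```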
Effect of Sources and Levels of Carbohydrates on Fermentation Characteristics and Hydrogenation of Linoleic Acid by Rumen Bacteria In Vitro
Wang, J.H.;Song, M.K.
An in vitro study was conducted to examine the effect of sources and addition levels of carbohydrates on fermentation characteristics, bacterial growth, and hydrogenation of linoleic acid ($C_{18:2}$) by mixed ruminal bacteria. Starch and cellobiose were added to 200 ml of non-selective basal media at levels of 0.20 and 0.35% (w/v), respectively. Linoleic acid (66.8~79.6 mg), absorbed into pieces of nylon cloth, was also added to the media of the 5 treatments, including the control to which no carbohydrate was added. Three ml of rumen fluid strained through 12 layers of cheese cloth were added to each medium, and the media were incubated anaerobically in a shaking incubator at $39^{\circ}C$ for 24 hours. During the 24 h incubation the pH in the incubation media of all treatments was maintained above 6.6 by the addition of sodium bicarbonate. The pH and ammonia concentration of the incubation media were not clearly influenced by the sources and addition levels of carbohydrates, while additions of carbohydrates increased (p<0.0001) VFA concentration at 24 h of incubation. Molar proportions of acetate were reduced (p<0.0004) while those of propionate were increased (p<0.0006) by the addition of carbohydrates. However, the differences in concentration and molar proportions of the VFA were small between the sources or the addition levels. Bacterial growth was faster (p<0.0004) in the starch-added treatments than in the cellobiose-added ones and the control, but no differences were found between addition levels. Increased (p<0.0487) hydrogenation was observed in the starch-added treatments compared to the cellobiose-added ones, but there was no difference between addition levels.
Intestinal Structure and Function of Broiler Chickens on Wheat-Based Diets Supplemented With a Microbial Enzyme
Iji, P.A.;Hughes, R.J.;Choct, M.;Tivey, D.R.
A study was conducted to assess the nutritive value of two diets based on a low-energy variety of wheat, RAC C1, and their effects on intestinal mucosal structure and function in broiler chickens. The diets were fed with or without a microbial enzyme supplement to male and female broiler chickens. Digesta viscosity was reduced (p<0.001) through supplementation with the microbial enzyme in male and female chicks. Enzyme supplementation also improved the dietary apparent metabolizable energy content (p<0.001) and had slight but non-significant positive effects on chick growth and feed conversion ratio. Intestinal mucosal structure and enzyme function were not affected by the microbial enzyme supplement. Male chicks consumed more feed (p<0.001), attained higher final body weight (p<0.001) and were more efficient at feed utilization (p<0.01) than the female chicks. Except for duodenal villus surface area and ileal protein content, intestinal mucosal structure and enzyme activities were similar between the two sexes and dietary treatment groups. The study showed an improvement in the nutritive value of the diets in the presence of the microbial enzyme supplement.
Effects of Yucca Extracts and Protein Levels on Growth Performance and Nutrient Utilization in Growing Pigs
Min, T.S.;Kim, J.D.;Tian, J.Z.;Cho, W.T.;Hyun, Y.;Sohn, K.S.;Han, In K.
A total of 120 pigs were used to investigate the effects of yucca extracts on the growth performance, nutrient digestibility and excretion of growing pigs fed different levels of dietary protein. Pigs were allotted in a $2{\times}3$ factorial design by supplementation with yucca extract (YE; 0 and 120 mg/kg) and 3 levels of dietary protein (16, 18, 20%). During the whole experimental period (18 to 52 kg body weight), there were no significant differences in ADG, ADFI or F/G by YE addition or different protein levels among treatments (p>0.05). Overall, although addition of YE to the diet and elevation of protein level gave better ADG, there were no significant differences in growth performance among treatments. Pigs fed diets with YE showed significantly (p<0.05) higher dry matter (DM), crude ash (CA) and crude protein (CP) digestibility than the others during the growing period. Concerning the levels of dietary protein, only the CP digestibility was significantly higher in pigs fed the high protein diet. Pigs fed the low protein diet without YE showed a significantly lower CP digestibility (p<0.05). No significant differences were found in crude fat (CF), calcium (Ca) and phosphorus (P) digestibilities regardless of YE supplementation or dietary protein levels. Pigs fed YE supplemented diets showed significantly (p<0.05) higher amino acid digestibility. Also, high CP diets showed a higher amino acid digestibility than low CP diets (p<0.05). Although DM and N excretion did not show any significant differences among treatments, there was a slightly lower excretion with increasing dietary protein level. Supplementation with YE significantly decreased DM and N excretion. An interaction (YE$\times$protein) was found for P excretion. Pigs fed a medium protein diet without YE showed the lowest P excretion during the growing period. The NH3-N content in the feces tended to increase with increasing dietary protein levels and with YE supplementation. During the whole experimental period, the cost of YE supplementation was similar to the value of the performance improvements obtained. The cost of feeding the high protein diet was significantly higher than that of the medium protein diet by 10% and the low protein diet by 9% (p<0.05). It could be concluded that dietary protein level and yucca extract affect growth performance, nutrient digestibility and excretion in growing pigs, and might play some role in pollution control.
Effects of Isoleucine Supplementation of a Low Protein, Corn-Soybean Meal Diet on the Performance and Immune Function of Weanling Pigs
Zheng, C.T.;Li, D.F.;Qiao, S.Y.;Gong, L.M.;Zhang, D.F.;Thacker, P.;Han, In K.
This experiment was conducted to investigate the effects of crystalline isoleucine supplementation of a low protein, corn-soybean meal diet on the performance and immune function of weanling pigs. Forty-five crossbred ($Duroc{\times}Landrace{\times}Large\;White$) piglets, weighing an average of $11.00{\pm}0.07kg$, were assigned to either a control diet containing 20% crude protein (0.64% isoleucine), a 16% crude protein diet without isoleucine supplementation (0.41% isoleucine) or a 16% crude protein diet supplemented with isoleucine (0.64% isoleucine). Reducing the crude protein content of the diet from 20 to 16% significantly (p<0.05) reduced both average daily gain and feed intake. Feed conversion also tended (p=0.07) to be poorer for a low protein diet without isoleucine supplementation. Isoleucine supplementation of the 16% crude protein diet increased both gain and feed intake to a level similar to that obtained by pigs fed the 20% crude protein diet (p>0.05). Blood urea nitrogen, serum total protein and serum globulin were significantly (p<0.05) higher for pigs fed the unsupplemented 16% crude protein diet than for pigs fed the isoleucine-supplemented diet or the control. Egg albumin antibody titre decreased significantly (p<0.05) in pigs fed the diet with isoleucine supplementation, whereas the antibody titre of pigs fed the low protein and low isoleucine diet was similar to that of pigs fed the diet containing 20% crude protein and 0.64% isoleucine. It was suggested that crystalline isoleucine supplementation of a low protein and low isoleucine diet improved pig performance but suppressed humoral immune function.
Effect of Dietary Protein and Lysine Levels on Lactating Multiparous Sows and Litter Performance
Cheng, C.S.;Yen, H.T.;Roan, S.W.;Wu, J.F.;Hsu, J.C.
The effects of dietary protein and lysine levels on lactating multiparous sows and litter performance were studied. Sixty-two crossbred multiparous sows ($Landrace{\times}Yorkshire$) were used. Thirty-three and twenty-nine sows were studied in their second parity and third parity respectively. The three dietary treatments were: (1) the control diet containing 15% CP and 0.75% lysine, (2) a diet containing 13% CP and 0.75% lysine (0.60% natural+0.15% synthetic), and (3) a diet containing 13% CP and 0.60% lysine. They were fed twice daily and allowed ad libitum access to food and water throughout a 28 day lactation from parturition until weaning. The results of this experiment showed that body weight and backfat losses of the sows from farrowing to weaning were significantly affected (p<0.01) by reducing dietary protein. Neither average daily feed intake nor weaning to estrus interval of sows were significantly different among treatments. Supplementing lower dietary protein with synthetic lysine could mitigate backfat losses, but could not prevent body weight losses in lactating multiparous sows. A corn-soybean meal diet containing 13% crude protein and 0.60% lysine did not significantly affect litter size and survival rate of weanling piglets compared with the 15% crude protein diet. There was a tendency towards decreased piglet weight at weaning (p<0.10) and reduced daily gain of piglets (p<0.11) when the multiparous sows were fed the 13% protein diet during lactation. We found a severe loss of body weight and backfat when reducing dietary protein for lactating multiparous sows.
Effects of Replacing Fish Meal With Ensiled Shrimp By-Product on the Performance and Carcass Characteristics of Growing Pigs
Ngoan, Le Duc;Ogle, Brian;Lindberg, Jan Erik
A feeding trial was conducted to evaluate the effects of replacing fish meal (FM) with ensiled shrimp by-product (ESB) in a cassava root meal and rice bran-based diet on the performance and carcass characteristics of growing pigs. Thirty-six crossbred (Large White$\times$Mong Cai) pigs, with an average initial BW of 19.4 kg, were randomly allocated to one of three dietary treatments in which the crude protein of the FM was replaced with 0, 50 or 100% ESB. The animals were fed on a restricted basis and, at the end of the experiment at a BW of about 90 kg, 6 representative animals in each treatment group were slaughtered for carcass quality evaluation. Animal growth performance and daily feed intake were significantly reduced (p<0.05) by the inclusion of shrimp by-product in the diets, whereas feed conversion ratios and carcass measurements were not significantly affected (p>0.05). Daily weight gains of the pigs fed the 100% FM diet and the 50% ESB diet were significantly (p<0.05) higher than those of pigs fed the 100% ESB diet. In conclusion, from an economic as well as a performance point of view, ESB can replace 50% of the crude protein of FM in cassava root meal and rice bran-based diets for growing pigs with a low genetic growth potential.
Relationship Between the Incidence of Displaced Abomasum and Feeding of Cows in Tokachi District, Hokkaido, Northern Japan
Mori, F.;Sawada, K.;Watanabe, K.;Ducusin, R.J.T.;Kumase, N.;Tanabe, S.;Uzuka, Y.;Takahashi, J.;Sarashina, T.
In order to investigate the relationship between the incidence of displaced abomasum and feeding, the actual feeding practices and chemical composition of roughage were examined on two types of farms in the Tokachi district of Hokkaido. Examination of animal health records revealed that the annual milk yield per cow on high incidence (H-DA) farms was significantly higher than that on low incidence (L-DA) farms. The amount of concentrates fed on H-DA farms tended to be higher than that on L-DA farms during lactation. Compared to L-DA farms, the amounts of juicy roughage (corn silage and grass silage) and dry roughage (hay, hay cube and roll wrap silage) during lactation on H-DA farms tended to be higher and lower, respectively. Moreover, the amount of roughage and the ratio of roughage to concentrates on H-DA farms tended to be lower than on L-DA farms. The survey indicated that displaced abomasum was associated with insufficient feeding of dietary fiber and overfeeding of concentrates.
Quality and Amount of Morning and Evening Milk of the Bangladesh Baghabarighat Milk Shed Area Throughout the Year
Islam, K.M.S.
The Baghabarighat Milk Shed Area (BMSA) in Bangladesh, belonging to the Bangladesh Milk Producers Cooperative Union Limited (BMPCUL), was selected to study the qualitative and quantitative aspects of dairy milk and to establish a trend throughout the year (July 1995 to June 1996) for milk collected in the morning, in the evening, and daily. The average fat, solids-not-fat (SNF) and total solids (TS) contents of the morning milk were 4.58, 7.75 and 12.33%, respectively, while evening milk contained 5.41, 7.81 and 13.23%. The quality values varied (p<0.01) throughout the year without a specific trend. Higher milk collection occurred in the morning (52.77%) than in the evening (47.24%), but the total yield of fat from morning milk was lower (48.74%) than from evening milk (51.26%) due to its lower fat value (4.58%). Yield of SNF (52.57% morning, 47.37% evening) followed the amount of milk collected, due to the similar SNF values in the morning (7.75) and evening (7.81). Overall, TS yield was 51 and 49% from morning and evening milk, respectively. There was no significant (p>0.01) relationship between the fat and TS values of morning milk and those of evening milk, but there was a correlation (p<0.01) between morning milk yield and all other evening parameters. The solids-not-fat value of morning milk was related to the yield and SNF value of evening milk, but not to the fat and TS of evening milk. It may be concluded that evening milk contains higher amounts of fat, SNF and TS, but yields were higher in the morning, except for fat. Milk collected in the morning showed a relationship with all the parameters found in the evening.
Comparative Study on the Resistance of Three Commercial Strains and Balady (Local) Breed of Chickens to Infection With Salmonella gallinarum
Alshawabkeh, K.;Tabbaa, M.J.
Three commercial strains bred in the US (Hubbard), Holland (Lohman) and Canada (Shaver), and the Balady breed of chickens, were orally infected with $10^6$ cfu of Salmonella gallinarum per chick. Chicks were compared for weight gain, feed intake, feed conversion ratio, mortality rate and contamination of the liver, spleen and intestine with Salmonella gallinarum on the day of slaughter. The Balady chicks had significantly the lowest mortality rate (p<0.01) and weight gain (p<0.001) and the poorest feed conversion ratio (p<0.001). Salmonella gallinarum could be shed from the liver, spleen and intestine at varying rates in all strains. Lohman was the most resistant of the three commercial strains, while Hubbard had the highest susceptibility. The Balady chicks were the most resistant overall. However, they may serve as a possible reservoir of Salmonella gallinarum and may play a role in spreading the infection to commercial farms in Jordan.
A Study on the Scanning Electron Microscopy of the Buffalo Mammary Gland
Dang, A.K.;Ludri, R.S.
https://doi.org/10.5713/ajas.2001.101
Scanning electron microscopy of twelve lactating buffalo mammary glands was performed. The lactating mammary gland showed the alveolus, the arrangement of blood vessels and myoepithelial cells on the alveolus, the formation of lobules and the interlobular connective tissue. In the exposed alveolar lumen, fat globule formation can be seen, with globules still attached to the alveolar surface by microvilli. This technique should be further extended to study the alveolar structure in detail during different stages of mammary gland development in buffaloes.
Smallholder Dairy Production Systems in Developing Countries : Characteristics, Potential and Opportunities for Improvement - Review -
Devendra, C.
Smallholder dairy production systems in developing countries are discussed with reference to the types of systems, their characteristics, potential, and opportunities for improvement. Three types of dairy systems are identified and described: smallholder systems, smallholder cooperative dairy production systems, and intensive dairy production systems. The first two systems are by far the most important, and are associated with increasing intensification. Buffaloes are especially important in South Asia, but elsewhere dairy production mainly involves Holstein-Friesian cross-bred cattle. Dairy goats are important in some countries, but are generally neglected in development programmes. The expansion and intensification of smallholder dairy production is fueled by increased demand for milk, with associated problems of milk handling and distribution, hygiene and environmental pollution. The major constraints to production are, inter alia, the choice of species, breeds and availability of animals; feed resources and improved feeding systems; improved breeding, reproduction and animal health care; management of animal manure; and organised marketing and market outlets. These constraints provide major opportunities and challenges for research and development to increase dairy production, manage natural resources efficiently, and improve the livelihoods of poor farmers. Specific areas for research are identified, as is the need for a holistic focus involving interdisciplinary research and integrated natural resource management, in a shared partnership between farmers and scientists that can demonstrate increased productivity and sustainable production systems. Suggestions for performance indicators in smallholder dairy production systems are given.
The Genetic and Non-Genetic Aspects of Leg Weakness and Osteochondrosis in Pigs - Review -
Fukawa, K.;Kusuhara, S.
Leg weakness in pigs is one of the most serious problems in the pig industry. Leg weakness is responsible for an increase in the rate of culling of breeding pigs, which results in economic loss in the pig industry. Many researchers have investigated the cause of leg weakness, and that of osteochondrosis, in pigs, as well as in other mammals. For evaluating leg weakness, subjective scoring criteria have been applied, but the scoring systems varied widely. Subjective scoring systems have also been used in evaluating osteochondrosis lesions in leg joints. The lesions were scored optically, radiographically and histologically. Leg weakness and osteochondrosis are to some extent heritable traits. The heritability of leg weakness and joint lesion scores was found to range from 0.01 to 0.42. Leg weakness and osteochondrosis are associated with production traits (lean percentage and back fat depth). Nutritional aspects of leg weakness and osteochondrosis have also been extensively studied. Although the energy level of the diet may affect leg weakness and osteochondrosis, other nutritional factors seem to have very little effect. Confinement conditions also have an effect on leg weakness and osteochondrosis. Although studied extensively, the cause and etiology of leg weakness and osteochondrosis remain uncertain. It could be that there are multiple causes of leg weakness.
Growth Promoters and Their Effects on Beef Production - Review -
Song, M.K.;Choi, S.H.
Application of growth promoters by means of implantation or supplementation of diets has been routine in the beef cattle industry of many countries to improve growth performance and feed efficiency. Anabolic implants (zeranol, trenbolone acetate, and estradiol with testosterone or progesterone) have generated various positive effects. Zeranol implantation, in general, improved average daily gain (ADG), feed conversion (FC), dressing percentage (DP) and yield grade (YG) of cattle, and increased dry matter intake (DMI). Trenbolone acetate with or without estradiol also increased mean values of ADG and loin eye area (LEA) but reduced DMI and improved FC of cattle. Estradiol with testosterone or progesterone increased ADG and DMI. Anabolic implants, however, had minimal or negative effects on marbling or quality grade. The magnitude of the response to these anabolic implants in the performance of beef cattle has varied depending on the type of implant, the amount and duration of exposure, the age of the animals and the combination of implants. Administration of bovine somatotropin improved ADG and FC, and decreased fat deposition. Ionophores improved FC in cattle through reduced DMI without a great response in ADG. Supplementation of monensin and lasalocid reduced the molar proportion of propionate. Monensin and lysocellin increased apparent absorption and retention of some minerals in cattle. Despite the improved cattle performance in growth and FC, results on beef quality from the application of growth promoters appeared to vary or conflict under a variety of environmental conditions.
The Myers-Steenrod theorem for Finsler manifolds of low regularity
Authors: Vladimir S. Matveev and Marc Troyanov
Journal: Proc. Amer. Math. Soc. 145 (2017), 2699-2712
MSC (2010): Primary 53B40, 53C60, 35B65
DOI: https://doi.org/10.1090/proc/13407
Published electronically: February 10, 2017
Abstract: We prove a version of Myers-Steenrod's theorem for Finsler manifolds under the minimal regularity hypothesis. In particular we show that an isometry between $C^{k,\alpha }$-smooth (or partially smooth) Finsler metrics, with $k+\alpha >0$, $k\in \mathbb {N} \cup \{0\}$, and $0 \leq \alpha \leq 1$ is necessarily a diffeomorphism of class $C^{k+1,\alpha }$. A generalization of this result to the case of Finsler 1-quasiconformal mapping is given. The proofs are based on the reduction of the Finslerian problems to Riemannian ones with the help of the Binet-Legendre metric.
Vladimir S. Matveev
Affiliation: Institut für Mathematik, Friedrich-Schiller Universität Jena, 07737 Jena, Germany
MR Author ID: 609466
Email: [email protected]
Marc Troyanov
Affiliation: Section de Mathématiques, École Polytechnique Féderale de Lausanne, station 8, 1015 Lausanne, Switzerland
Email: [email protected]
Keywords: Finsler metric, isometries, Myers-Steenrod theorem, Binet-Legendre metric
Received by editor(s): May 12, 2016
Received by editor(s) in revised form: July 27, 2016
Additional Notes: The authors thank the Friedrich-Schiller-Universität Jena, EPFL and the Swiss National Science Foundation for their support.
Communicated by: Jeremy Tyson
Three-Dimensional Fermiology by Soft-X-Ray ARPES: Origin of Charge Density Waves in VSe2
@article{Strocov2012ThreeDimensionalFB,
  title={Three-Dimensional Fermiology by Soft-X-Ray ARPES: Origin of Charge Density Waves in VSe2},
  author={Vladimir N. Strocov and Ming Shi and Masaki Kobayashi and Claude Monney and Xiaoqiang Wang and Juraj Krempask{\'y} and Thorsten Schmitt and Luc Patthey and Helmuth Berger and Peter Blaha},
  journal={arXiv: Strongly Correlated Electrons},
  year={2012}
}
V. Strocov, M. Shi, +7 authors P. Blaha
Published 1 May 2012
arXiv: Strongly Correlated Electrons
The electronic structure of crystalline materials is a fundamental characteristic that underlies almost all of their physical and chemical properties. Angle-resolved photoemission spectroscopy (ARPES) is the main experimental tool to study all electronic structure aspects with resolution in k-space. However, its application to three-dimensional (3D) materials suffers from a fundamental problem of ill-defined surface-perpendicular wavevector kz. Here, we achieve sharp definition of kz to…
Probing bulk electronic structure with hard X-ray angle-resolved photoemission.
A. Gray, C. Papp, +11 authors C. Fadley
The effects of photon wave vector on wave vector conservation, and methods for the removal of phonon-associated smearing of features and photoelectron diffraction effects are assessed.
Soft x-ray angle-resolved photoemission spectroscopy on Ag(001): Band mapping, photon momentum effects, and circular dichroism
F. Venturini, J. Minár, J. Braun, H. Ebert, N. Brookes
The (001) surface of silver (Ag) has been chosen as a simple and well known system for investigating the band mapping potential of angle-resolved photoemission spectroscopy in the soft x-ray energy…
Three-dimensional band structure of layered TiTe2: Photoemission final-state effects
V. Strocov, E. Krasovskii, +4 authors R. Claessen
Three-dimensional band structure of unoccupied and occupied states of the prototype layered material TiTe2 is determined, focusing on the Γ–A line of the Brillouin zone. Dispersions and lifetimes of…
Fermi surface nesting in several transition metal dichalcogenides
D. Inosov, V. Zabolotnyy, +5 authors S. Borisenko
By means of high-resolution angle-resolved photoelectron spectroscopy (ARPES), we have studied the fermiology of 2H transition metal dichalcogenide polytypes TaSe2, NbSe2 and Cu0.2NbS2. The…
Fermi surface nesting and the origin of charge density waves in metals
M. Johannes, I. Mazin
The concept of a charge density wave (CDW), which is induced by Fermi-surface nesting, originated from the Peierls idea of electronic instabilities in purely one-dimensional metals and is now often…
Intrinsic accuracy in 3-dimensional photoemission band mapping
V. Strocov
Fundamental principles of mapping 3-dimensional quasiparticle dispersions in the valence band using angle-resolved photoemission spectroscopy are discussed. Such mapping is intrinsically…
Extended phonon collapse and the origin of the charge-density wave in 2H-NbSe2.
F. Weber, S. Rosenkranz, +7 authors D. Reznik
Inelastic x-ray scattering measurements of the temperature dependence of phonon dispersion in the prototypical charge-density-wave (CDW) compound 2H-NbSe2, together with ab initio calculations, show that it is the wave vector dependence of the electron-phonon coupling that drives the CDW formation in 2H-NbSe2 and determines its periodicity.
New Method for Absolute Band Structure Determination by Combining Photoemission with Very-Low-Energy Electron Diffraction: Application to Layered VSe2
V. Strocov, H. Starnberg, P. Nilsson, H. E. Brauer, L. J. Holleboom
We have combined photoelectron spectroscopy (PES) and very-low-energy electron diffraction (VLEED) to study the electronic band structure $E(\mathbf{k})$ of a material with complicated unoccupied…
Band structures and Fermi surfaces for 1T-TaS2, 1T-TaSe2 and 1T-VSe2
A. Woolley, G. Wexler
Band structures and Fermi surfaces are presented for three quasi-octahedral layer compounds. The dependence of both properties on the component kz of the wavevector normal to the layers is shown to…
Charge-density-wave transitions in 1T-VSe2
D. Eaglesham, R. Withers, D. Bird
An electron diffraction and imaging study of charge-density-wave phases in 1T-VSe2 is presented. Two phase transitions are found. At 110K a 3q-state is formed which at 80K transforms into an…
Developing a mathematical model for the evaluation of the potential impact of a partially efficacious vaccine on the transmission dynamics of Schistosoma mansoni in human communities
Andria Stylianou (ORCID: orcid.org/0000-0002-2245-4439)1,2,
Christoforos Hadjichrysanthou2,
James E. Truscott1,2 &
Roy M. Anderson1,2
Parasites & Vectors volume 10, Article number: 294 (2017)
There is currently no vaccine available to protect humans against infection with the schistosome digenean parasites, although candidate formulations for Schistosoma mansoni are under trial in animal models, including rodents and primates. Current strategies for the control of infection are based on mass drug administration (MDA) targeted at school-aged children of age 5 to 14 years. This approach is unlikely to eliminate exposure to infection except in settings with very low levels of transmission.
A deterministic mathematical model for the transmission dynamics of the parasite is described and employed to investigate community level outcomes. The model is defined to encompass two different delivery strategies for the vaccination of the population, namely, infant (cohort) and mass vaccination. However, in this paper the focus is on vaccination delivered in a cohort immunisation programme where infants are immunised within the first year of life before acquiring infection. An analysis of the parasite's transmission dynamics following the administration of a partially protective vaccine is presented. The vaccine acts on parasite mortality, fecundity and/or establishment.
A vaccine with an efficacy of over 60% can interrupt transmission in low and moderate transmission settings. In higher transmission intensity areas, greater efficacy or higher infant vaccination coverage is required. Candidate vaccines that act either on parasite mortality, fecundity or establishment within the human host can be similarly effective. In all cases, however, the duration of protection is important. The community level impact of vaccines with all modes of action declines if vaccine protection is of a very short duration. However, durations of protection of 5–10 years or more are sufficient, with high coverage and efficacy levels, to halt transmission. The time taken to break transmission may be 18 years or more after the start of the cohort vaccination, depending on the intensity of transmission in a defined location.
The analyses provide support for the proposition that even a partially efficacious vaccine could be of great value in reducing the burden of schistosome infections in endemic regions and hopefully could provide a template for the elimination of parasite transmission.
Schistosomiasis is a parasitic disease, which infects approximately 258 million people in 78 countries and is estimated to kill 280,000 people annually [1,2,3,4,5]. All age groups are infected with the schistosome parasites, with children being the most vulnerable age group. The disease induced by infection is typically chronic and debilitating, with the severity dependent on parasite burden [1, 6].
The primary form of public health control of schistosomiasis is mass drug administration (MDA) using praziquantel. However, MDA is inadequate to provide long term protection against the parasite, and repeated treatment must be administered in endemic regions at intervals of 1 to 2 years [7, 8]. Past infection does not protect against reinfection post-drug treatment. Immunological responses to parasite antigens can be detected in the human host, but they do not appear to generate protective levels of acquired immunity to prevent reinfection. MDA in endemic regions is having a beneficial impact on the burden of disease caused by the human schistosome parasites, but limited drug availability, as well as the logistical problems of getting treatment to remote communities, have prevented progress in controlling this infection in many regions of the world. Ideally, a vaccine is needed, and much effort has been directed towards improving understanding of the human immune responses to infection and why the parasite is able to successfully re-infect individuals after drug treatment [9, 10].
Currently, there is no prophylactic or therapeutic vaccine available for human use to protect against the schistosome parasites (or indeed other helminth infections). However, experiments in rodents (mice and hamsters) and baboons infected with S. mansoni have shown partial prophylactic and anti-fecundity efficacies using various candidate formulations, including recombinant Sm-p80 protein and DNA priming followed by boosting with various parasite proteins [6, 11,12,13,14,15]. Recombinant protein with the Sm-p80 antigen is the leading vaccine candidate at present [16, 17]. A human schistosome vaccine that provides some protection may be possible, although efficacy is likely to be partial, acting by reducing adult parasite survival, fecundity and establishment [6]. This may be because the candidate vaccines induce immunity that acts to reduce, but not eliminate, adult parasite establishment and growth within immunized hosts.
Published studies on schistosome transmission dynamics under the impact of various interventions have largely focused on the effect of MDA and snail control on the mean adult parasite burden, assuming a constant human population size with a fixed age structure [18,19,20,21,22,23]. Some mathematical model-based studies have looked at the potential impact of vaccines, but the models have been based on the dynamics of the human population described by a Susceptible-Vaccinated-Infected-Recovered (SVIR) framework [24,25,26]. This type of model construct may not be appropriate for examining the impact of a partial efficacious vaccine that acts differentially on parasite longevity, fecundity and establishment in the human host. The experimental studies referenced earlier in animal models of various candidate vaccines, suggest effects on all three population processes in the vaccinated mammalian host. In these circumstances, a macro-parasite model framework is ideally required, which could track changes in parasite burden in the human host. Past model development of this type is built on the early studies of Macdonald and Anderson & May, and has included full age structure of the human host population, sexual mating probabilities, density dependent fecundity, acquired immunity, and aggregated parasite distributions per human host (described by the negative binomial probability model) [27, 28]. Individual-based stochastic models have also been developed to describe the transmission dynamics of the human schistosome parasites and the impact of MDA [29, 30]. Mathematical model development for the study of the transmission dynamics of schistosome infections in humans has recently been reviewed by Anderson and colleagues [30].
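One of these classic ingredients can be made concrete. For dioecious, monogamous worms whose counts per host follow a negative binomial distribution with mean M and aggregation parameter k, the mating probability takes the integral form given by May (1977): Φ(M, k) = 1 − [(1 − α)^{1+k}/2π] ∫₀^{2π} (1 − cos θ)/(1 + α cos θ)^{1+k} dθ, with α = M/(M + k). The Python sketch below evaluates it numerically; the value k = 0.24 is only an illustrative level of aggregation, not a parameter taken from this paper.

```python
import numpy as np
from scipy.integrate import quad

def mating_probability(M, k):
    """May's (1977) mating probability for dioecious, monogamous worms
    distributed among hosts as a negative binomial with mean burden M
    and aggregation parameter k (smaller k = more aggregated)."""
    alpha = M / (M + k)
    integrand = lambda theta: (1.0 - np.cos(theta)) / (1.0 + alpha * np.cos(theta)) ** (1.0 + k)
    integral, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return 1.0 - (1.0 - alpha) ** (1.0 + k) * integral / (2.0 * np.pi)

# The probability collapses at low mean burden, which is what produces
# a transmission "breakpoint" in these models.
for M in [0.1, 1.0, 5.0, 20.0]:
    print(f"M = {M:5.1f}: phi = {mating_probability(M, k=0.24):.3f}")
```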
In this paper, based on recent pre-clinical studies in primates [6, 14, 15], we describe the development of a simple deterministic mathematical model which details the dynamics of the human host and adult parasite populations, to assess the impact of a potential vaccination programme that is applied to a community. The model has a general framework such that it can mirror different vaccine delivery strategies, namely infant and mass immunisation. If mass vaccination takes place across all age classes in an area of endemic infection, then uncertainties arise surrounding the effects of immunising already infected individuals and those with past experience of infection. In this paper, the generic model is presented but analyses are focused on the vaccination at birth strategy. The model is constructed to show the potential vaccine effect on either adult worm mortality, fecundity or establishment. In subsequent papers more complex models will be examined, including full age structure and individual based stochastic frameworks.
The classic macro-parasite transmission dynamics model of Anderson & May [27] is extended to investigate the potential effect of a partially efficacious vaccine based on the properties defined by early experimental studies in animal models which include reducing the parasite's life expectancy, fecundity and rate of establishment in the mammalian host [6, 11,12,13,14,15]. We incorporate into the model sexual mating probabilities, negative binomial distributions of adult worms per human host and density dependence in egg output. The main aim is to provide some information on the potential of a partially efficacious vaccine for S. mansoni to interrupt transmission community wide, prior to the conduct of expensive phase I, II and III trials in humans.
A deterministic mathematical model (including probability terms for the parasite distribution within the human host and the mating probability) is developed to facilitate investigation of the potential impact of different community based vaccination programmes using a vaccine candidate with defined properties at the individual, and concomitantly, the community level. The model includes representation of the dynamics of the human host population and can represent two different intervention delivery scenarios; infant (cohort) or mass vaccination. The candidate vaccine is assumed to act either on adult worm survival, fecundity or establishment in the human host.
Analytical studies of model properties are presented, where functions can be derived for key quantities such as the critical level of vaccine coverage required to interrupt transmission.
The effects of vaccination on the adult worm population dynamics
A human helminth vaccine would ideally contain antigens to stimulate host immunological responses that affect three factors. These are the per capita mortality rate of the adult worms, σ, the number of eggs produced per female worm per unit of time, λ, and the rate at which the cercarial larvae are able to infect and grow to sexual maturity within the human host on contact, β. The efficacies of a vaccine affecting the above three factors are denoted by v1 ∈ [0, 1], v2 ∈ [0, 1] and v3 ∈ [0, 1], respectively. Hence, v1 = v2 = v3 = 0 is the case where the vaccine has no efficacy, while v1 = v2 = v3 = 1 is the case where the vaccine is 100% effective. After vaccine administration, the rates σ, λ and β become σ′, λ′ and β′, respectively, where:
$$ {\sigma}^{\prime }=\left(\frac{1}{1-{v}_1}\right)\sigma,\ 0\le {v}_1<1 $$
$$ {\lambda}^{\prime }=\left(1-{v}_2\right)\lambda,\ 0\le {v}_2\le 1 $$
$$ {\beta}^{\prime }=\left(1-{v}_3\right)\beta,\ 0\le {v}_3\le 1 $$
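As a quick illustration of Eqs. (1)–(3), the transformed rates can be computed directly. This is a minimal sketch: the function and its argument names are ours, and the numerical values in the example call are placeholders rather than the calibrated values of Table 1.

```python
def vaccinated_rates(sigma, lam, beta, v1, v2, v3):
    """Apply the vaccine efficacies of Eqs. (1)-(3).

    sigma : per capita adult worm mortality rate
    lam   : eggs per female worm per unit time
    beta  : rate of cercarial establishment on contact
    v1, v2, v3 : efficacies on mortality, fecundity, establishment
    """
    assert 0.0 <= v1 < 1.0 and 0.0 <= v2 <= 1.0 and 0.0 <= v3 <= 1.0
    sigma_p = sigma / (1.0 - v1)   # Eq. (1): mortality increased
    lam_p   = (1.0 - v2) * lam     # Eq. (2): fecundity reduced
    beta_p  = (1.0 - v3) * beta    # Eq. (3): establishment reduced
    return sigma_p, lam_p, beta_p

# Example: a vaccine acting only on establishment with 80% efficacy
print(vaccinated_rates(sigma=0.25, lam=1.0, beta=0.5, v1=0, v2=0, v3=0.8))
```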
Host and parasite population dynamics
An individual in the host population is defined as belonging to one of two groups, either the vaccinated or the unvaccinated, denoted by Nv and Nu, respectively. In the case of a cohort immunisation programme, a proportion p of infants aged from 0 to 1 year old are chosen randomly to receive a single dose of vaccine each year. On the other hand, mass vaccination is applied to randomly chosen individuals at a per capita rate q per year. The selection procedure does not take into account the sex, age, socioeconomic status or other characteristics of the individuals; instead we assume a homogeneous population. Vaccine-induced immunity is lost at a rate ω, i.e. τ = 1/ω is the average duration of vaccine protection. When the vaccine protection wanes, vaccinated individuals move back to the unvaccinated group. The parasites have a different life-cycle depending on whether or not they grow within an immunised individual; thus, the parasite population has dynamics analogous to those of the host population. The variables Mu and Mv represent the average worm burden in unvaccinated and vaccinated hosts, respectively. The models of the human and the parasite populations are represented schematically in Fig. 1. Note that the two populations are dynamically connected via the contact of humans with the cercariae released by infected snails, represented by the transmission coefficients β or β′ (depending on the vaccination state of the host). The parameters influencing the host and the parasite populations are described in Table 1.
Schematic diagrams representing the host (a) and the parasite (b) population dynamics
Table 1 Parameter values for the host and parasite (S. mansoni) populations and key vaccination parameters, used in the evaluations of the models as summarised in [31]
As is customary with models of this form, the dynamics of the life-cycle stages outside the human host (miracidia, infected snail hosts and cercariae) are assumed to turn over on a very fast time scale (hours, days or weeks) compared with the adult parasite life cycle (4–6 years) within humans. Therefore, the dynamics of the infective stages are set to their steady-state values and we focus on the temporal dynamics of the mature worm and egg output [29].
Note that it is assumed that the vaccine has no impact on the host mortality rate, μ. Additional file 1: Figure S1 records the age and sex pyramids for Malawi in 2016; the graphs show an almost constant mortality rate by age with a mean life expectancy of approximately 50 years. We also assume that immunised individuals receive the vaccine-induced immunity benefits instantaneously; no time delays in the induction of immunity are taken into account. Throughout the paper, the human death rate, the rate of loss of vaccine-induced immunity and the continuous vaccination rate are assumed to be age- and time-independent.
Vaccination programmes will not only affect the mean intensity of infection within the vaccinated individuals but will also have an indirect impact on the unvaccinated host population (herd immunity effects), because changes in both egg output and worm burden in the vaccinated individuals reduce the overall transmission within the entire community. This is the reason the host population is split into two groups, vaccinated and unvaccinated people. The dynamics of the two vaccination groups are described by the following system of differential equations:
$$ \frac{d{N}_u}{ d t}=- q{N}_u+\omega {N}_v - \mu {N}_u, $$
$$ \frac{d{N}_v}{ d t}= q{N}_u-\omega {N}_v-\mu {N}_v. $$
It should be noted that the total population size is N(t) = Nu(t) + Nv(t), where N(t) = e−μt (the initial cohort size is normalised to 1). The equations shown above do not include a vaccination-at-birth term; this term enters through the initial conditions of new births entering the host population. If a fraction p is vaccinated at birth, then Nv(0) = p. A numerical sketch of this cohort system is given below.
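The host equations (4)–(5) are linear and straightforward to integrate numerically. The sketch below, in Python with SciPy (our choice of tooling, not the authors'), follows a single birth cohort under Model 1 (q = 0, a fraction p vaccinated at birth) and checks the stated identity N(t) = e^(−μt); all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, q, omega, p = 1/50, 0.0, 0.1, 0.9   # Model 1: vaccinate a fraction p at birth

def host_rhs(t, y):
    Nu, Nv = y
    dNu = -q*Nu + omega*Nv - mu*Nu      # Eq. (4)
    dNv =  q*Nu - omega*Nv - mu*Nv      # Eq. (5)
    return [dNu, dNv]

sol = solve_ivp(host_rhs, (0, 100), [1.0 - p, p],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0, 100, 11)
Nu, Nv = sol.sol(t)
# The total cohort size must decay as N(t) = exp(-mu*t), with N(0) normalised to 1
assert np.allclose(Nu + Nv, np.exp(-mu*t))
```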
The above generic model can be reduced to represent the following model framework:
Model 1: Vaccination of a proportion p of infants within the age range [0, 1), (q = 0).
Model 2: Vaccination of the general population at a per capita rate q per year (p = 0).
The dynamics of the worm burden within the unvaccinated and the vaccinated host population and the environmental reservoir, denoted by M u , M v and L, respectively, are described by the following system of equations:
$$ \frac{d{M}_u}{ d t}= L{\beta}_u-\left(\mu +\sigma \right){M}_u- q{M}_u+\omega {M}_v $$
$$ \frac{d{M}_v}{dt}= L{\beta}_v-\left(\mu +{\sigma}^{\prime}\right){M}_v+ q{M}_u-\omega {M}_v $$
$$ \frac{dL}{dt}=\psi \left(\lambda {M}_u+{\lambda}^{\prime }{M}_v\right) - {\mu}_2 L $$
where βu and βv are defined as:
$$ {\beta}_u=\frac{\beta \left(\mu +\omega - p\mu \right)}{\left(\omega + q+\mu \right)} $$
$$ {\beta}_v=\frac{\beta^{\prime}\left( q+ p\mu \right)}{\left(\omega + q+\mu \right)} $$
The parameter ψ characterises the flow of infectious material into the environment. The derivation of eqs. (9) and (10) can be found in the Additional file 2.
Note that the above equations do not yet take into account density dependence in egg output or the mating probability function; a numerical sketch of this linear system follows.
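For completeness, the linear system (6)–(8), before the mating probability and density dependence are added, can be sketched as follows. The dictionary-based parameterisation and the helper for Eqs. (9)–(10) are our own conventions, and the values are placeholders, not the Table 1 estimates.

```python
from scipy.integrate import solve_ivp

def transmission_coeffs(beta, beta_p, mu, omega, q, p):
    """Eqs. (9)-(10): group-specific contact rates."""
    beta_u = beta   * (mu + omega - p*mu) / (omega + q + mu)
    beta_v = beta_p * (q + p*mu)          / (omega + q + mu)
    return beta_u, beta_v

def worm_rhs(t, y, c):
    Mu, Mv, L = y
    dMu = L*c['bu'] - (c['mu'] + c['sigma'])*Mu   - c['q']*Mu + c['omega']*Mv  # Eq. (6)
    dMv = L*c['bv'] - (c['mu'] + c['sigma_p'])*Mv + c['q']*Mu - c['omega']*Mv  # Eq. (7)
    dL  = c['psi']*(c['lam']*Mu + c['lam_p']*Mv) - c['mu2']*L                  # Eq. (8)
    return [dMu, dMv, dL]

bu, bv = transmission_coeffs(beta=0.5, beta_p=0.1, mu=1/50, omega=0.1, q=0.0, p=0.9)
c = dict(bu=bu, bv=bv, mu=1/50, sigma=0.25, sigma_p=0.25,
         q=0.0, omega=0.1, psi=1.0, lam=1.0, lam_p=1.0, mu2=26.0)
sol = solve_ivp(worm_rhs, (0, 50), [100.0, 100.0, 1.0], args=(c,))
```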
The mean parasite load within a community can intuitively be defined as the weighted average of the worm burdens in the vaccinated and the unvaccinated individuals of a community, i.e.
$$ M=\left(1- p\right){M}_u+ p{M}_v $$
All the results in the next section include the mating probability and the density dependence function, and the equations are solved numerically. In this case, eq. (8) becomes:
$$ \frac{dL}{dt}=\psi \left(\lambda {M}_u F\left({M}_u\right)+{\lambda}^{\prime }{M}_v F\left({M}_v\right)\right)-{\mu}_2 L $$
The function F(M) is a product of the density dependence and mating probability functions times the normalised host population of each group. The exact definition is given by Anderson & May [27] and can be found in Additional file 2. In this study we assume that the negative-binomially distributed parasites are monogamous, with a fixed value of the aggregation parameter, k. Hedged sketches of the two component functions are given below.
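The exact F(M) used here is defined in Additional file 2; as a hedged sketch only, the two standard ingredients of the Anderson & May framework can be coded as follows. The density-dependence factor uses the familiar negative-binomial form with severity parameter γ, and the mating probability is May's (1977) integral for monogamous, dioecious worms; both should be checked against the Additional file before reuse, and the example arguments are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def fecundity_factor(M, k, gamma):
    """Density-dependent reduction in per-worm egg output for a negative
    binomial worm distribution: [1 + M(1 - z)/k]^-(k+1), with z = exp(-gamma)."""
    z = np.exp(-gamma)
    return (1.0 + M*(1.0 - z)/k) ** (-(k + 1.0))

def mating_probability(M, k):
    """May's (1977) mating probability for monogamous, dioecious worms
    aggregated with negative binomial parameter k."""
    if M <= 0.0:
        return 0.0
    a = M / (M + k)
    f = lambda th: (1.0 - np.cos(th)) / (1.0 + a*np.cos(th))**(1.0 + k)
    integral, _ = quad(f, 0.0, 2.0*np.pi)
    return 1.0 - (1.0 - a)**(1.0 + k) * integral / (2.0*np.pi)

print(fecundity_factor(100.0, 0.24, 0.0007), mating_probability(100.0, 0.24))
```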
Basic and effective reproductive numbers (R 0, R e )
The basic reproductive number, R 0, is defined as the average number of female offspring produced per female adult worm, that survive to reproductive maturity in the absence of density dependent constraints on parasite population growth [27]. This is a crucial quantity which determines whether the parasite will spread and persist within the host population. The parasite persistence criterion is R 0 > 1 in the absence of density dependent constraints, and if R 0 falls below unity the parasite population cannot maintain itself.
A definition for the effective reproductive number can be constructed using the master equation for the probabilities of being in the two states, vaccinated or unvaccinated, using the transition rates. The resulting equation is:
$$ {R}_e=-\frac{\psi}{\mu_2}{\boldsymbol{\varLambda}}^T{\boldsymbol{M}}^{-1}\boldsymbol{B}, $$
$$ \boldsymbol{\Lambda} =\left(\lambda, {\lambda}^{\prime}\right),\kern1em \boldsymbol{M}=\left(\begin{array}{cc}-{\mu}_u& \omega \\ {} q& -{\mu}_v\end{array}\right),\kern1em \boldsymbol{B}=\left(\begin{array}{c}{\beta}_u\\ {}{\beta}_v\end{array}\right), $$
with μ u = μ + σ + q and μ v = μ + σ ′ + ω. A detailed derivation of eq. (13) is provided in the Additional file 2.
In the relationship above, M is the matrix describing the transition rates.
Substituting the above vectors and the matrix in (13) we obtain:
$$ {R}_e=\frac{\psi}{\mu_2\left({\mu}_u{\mu}_v- q\omega \right)}\left[\lambda \left({\mu}_v{\beta}_u+\omega {\beta}_v\right)+{\lambda}^{\prime }\left( q{\beta}_u+{\mu}_u{\beta}_v\right)\right]. $$
In the absence of any interventions R e is reduced to R 0 given by:
$$ {R}_0=\frac{\psi \lambda \beta}{\mu_2\left(\mu +\sigma \right)}. $$
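Equation (13) is convenient to evaluate numerically. The sketch below builds the vectors and transition matrix and verifies that, with no intervention, Re collapses to R0 of Eq. (15); the parameter values are illustrative only, not the Table 1 estimates.

```python
import numpy as np

def effective_R(psi, mu2, lam, lam_p, beta_u, beta_v, mu_u, mu_v, q, omega):
    """Eq. (13): R_e = -(psi/mu2) * Lambda^T M^{-1} B."""
    Lam = np.array([lam, lam_p])
    M   = np.array([[-mu_u, omega],
                    [ q,   -mu_v]])
    B   = np.array([beta_u, beta_v])
    return -(psi/mu2) * Lam @ np.linalg.solve(M, B)

# No intervention: p = q = 0 and sigma' = sigma, so beta_u = beta and beta_v = 0
psi, mu2, lam, beta, mu, sigma, omega = 1.0, 26.0, 1.0, 0.5, 1/50, 0.25, 0.1
mu_u, mu_v = mu + sigma + 0.0, mu + sigma + omega
Re = effective_R(psi, mu2, lam, lam, beta, 0.0, mu_u, mu_v, 0.0, omega)
assert np.isclose(Re, psi*lam*beta/(mu2*(mu + sigma)))   # Eq. (15)
```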
Throughout the Results section, we explore predicted impacts for a range of R0 values. In broad terms, a low transmission setting has values in the range (1, 1.4], a medium transmission setting has values in the range [1.5, 2.5), and anything over 2.5 is a high transmission setting. These observations are based on the estimation of R0 values for S. mansoni from field studies in villages with low, medium and high transmission intensities that record a full cross-sectional age intensity profile (eggs per gram (epg) measures) [30, 31].
Let \( {R}_0^v \) be the effective reproductive number when the infant vaccination coverage is 100%, i.e. p = 1 and under the assumption that the vaccine is imperfect.
The effective reproductive number in an infant immunisation programme can be written as the sum of the effective reproductive number in the unvaccinated and fully vaccinated populations, weighted by the proportion of unvaccinated and vaccinated infants, respectively:
$$ {R}_e=\left(1- p\right){R}_0+ p{R}_0^v, $$
The derivation of eq. (16) is given in the Additional file 2.
Critical infant vaccination coverage, p c
The prime interest is in the proportion of people that have to be successfully vaccinated in order to interrupt transmission, i.e. in the critical vaccination level, pc, such that Re < 1. Rearranging eq. (16) for p under the restriction Re < 1, we have:
$$ {p}_c=\frac{1-\frac{1}{R_0}}{1-\frac{R_0^v}{R_0}}. $$
The numerator of equation (17) gives the critical vaccination level of a perfect vaccine, with 100% efficacy on mortality, fecundity and establishment (v 1 = v 2 = v 3 = 1), and a lifelong protection (ω = 0).
It should also be noted that the value calculated from equation (17) does not take into account the density-dependent effects on parasite fecundity and, most importantly, the mating probability at low parasite burdens, where both males and females must be in the same host to produce viable offspring to sustain transmission. The numerical calculations of the Results section include both the density dependence and the mating probability functions [23].
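Equation (17) translates directly into code. The sketch below is ours and, as the text notes, it ignores the mating probability and density dependence, so it should be read as an analytical approximation rather than the full numerical result.

```python
def critical_coverage(R0, R0v):
    """Eq. (17): infant coverage needed for R_e < 1.
    Meaningful only when R0 > 1 and R0v < 1; values above 1 mean
    vaccination alone cannot interrupt transmission."""
    return (1.0 - 1.0/R0) / (1.0 - R0v/R0)

# Example: R0 = 2.5 and a vaccine that brings the fully vaccinated
# reproductive number down to R0v = 0.5 gives pc = 0.6/0.8 = 0.75
print(critical_coverage(2.5, 0.5))
```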
Parameter uncertainty
There is some uncertainty surrounding key parasite population biology parameter assignments (as reviewed in [30]), especially the expected lifespan of the adult worms in the human host. Current estimates of this parameter, which are in the range of 3.5 to 8 years, are unlikely to be refined in the near future so we analyse the impact of a vaccine using a consensus set of parameter values [31]. Moreover, we recognise that results are sensitive to changes in this particular parameter. Other key parameters, such as the age group dependent infection rates, are derived using Markov Chain Monte Carlo (MCMC) methods by fitting models to observed age intensity and age prevalence profiles for S. mansoni [27, 30, 31].
In assigning parameter values for the vaccine efficacy, we consider a wide range of options for the effect on all three parasite population parameters (establishment in the host and growth to maturity, adult worm life expectancy and fecundity). Experiments in animal models of the candidate vaccine labelled Sm-p80 suggest that vaccination acts on all three worm population variables, but most effectively on the establishment of female and male worms [6, 14, 15]. Vaccination may also affect the lifespan of those parasites that do grow to maturity in the immunised host, but the effects are not quantified as yet. In the case of fecundity, again those worms that manage to establish in the vaccinated hosts do seem to have reduced fecundity, but quantitative measures are not available at present.
As far as the duration of protection is concerned, little is known of how long protection against worm establishment will last, although experiments in mice suggest that antibodies against vaccine antigens persist for long periods [6, 11]. Ideally, experiments need to be performed to look at this issue, but these need to be carried out over long time intervals, perhaps up to 10 to 20 years. This is an important aspect, and the implications are examined by making various assumptions about the average duration of protection, τ, under the assumption of a constant decay rate of immunity.
As noted earlier, the importance of the duration of protection will depend to a large extent on the prevailing background mortality in the human population. If, for example, the death rate over the infant and school aged child age classes is high, as it is in many poor regions of the world, many vaccinated individuals will die before moving into the teenage age classes in an infant cohort-based programme. As such, even with high vaccine coverage soon after birth, the overall proportion of immunised people in the total population may never reach a high level assuming both vaccinated and unvaccinated individuals have similar rates of death. For this reason, the importance of the duration of protection will depend on this early life background human mortality rate. Almost exponential decay distributions of numbers of people by age are observed in some poor regions, which is the assumption embedded in the model (a constant rate of mortality, μ, independent of age).
The analytical solutions of the system of equations (4)–(5) and (6)–(7) can be found in the Additional file 2.
Infant immunisation with lifelong protection for varying efficacies
In this sub-section, we make the optimistic assumption that the benefit of immunisation is lifelong, i.e. ω = 0.
Figure 2 shows the dependence of the critical infant vaccination level, p c , on the efficacy of the vaccine that reduces the adult worm life expectancy and on the transmission intensity in a defined location, R 0, such that the transmission of the parasite is interrupted, i.e. R e < 1.
The relationship between the critical infant vaccine coverage required to break transmission for different R 0 values, and the efficacy of a vaccine with an impact on increasing the mortality of the adult worm. The duration of vaccine protection is lifelong (ω = 0). The other vaccine impact parameters are set as v 2 = 0 , v 3 = 0, i.e. no effect on parasite fecundity and establishment. The parasite population parameters are defined in Table 1
The pattern portrayed in Fig. 2 shows that for moderate (and most common) transmission settings (R0 = 1.1–2.5), a vaccine conferring a 63% reduction in adult worm survival at an annual infant coverage of 100% will be able to interrupt transmission. For high transmission settings, for example R0 = 4, a vaccine with full (100%) efficacy and at least 75% infant coverage will be able to break transmission.
Note that for the parameter space where the surface reaches a plateau, all infants must be vaccinated in order for the disease to be eliminated. In some cases even this is not sufficient, and more frequent vaccinations or booster vaccine doses within a year will be required to reduce Re below 1. As we examine later in this section, the time taken to reach the point of transmission interruption will be long in a cohort immunisation programme, simply because it takes time to build up herd immunity.
The relationship between pc and the efficacy of a vaccine that impacts fecundity is displayed in Fig. 3 for different R0 values. The pattern is very similar to the one presented in Fig. 2. In medium transmission settings, stopping transmission by infant vaccination with a moderate vaccine efficacy of 60% reduction in worm fecundity requires 100% coverage. Again, for high transmission settings (R0 > 4), a vaccine with full efficacy will break transmission when the vaccination coverage is at least 75%.
The relationship between the critical infant vaccine coverage required to break transmission for different R 0 values, and the efficacy of a vaccine with an impact on reducing the per capita fecundity of adult worms. The duration of vaccine protection is lifelong (ω = 0). The other vaccine impact parameters are set as v 1 = 0 , v 3 = 0, i.e. no effect on parasite survival and establishment. The parasite population parameters are defined in Table 1
A vaccine that acts on adult worm establishment has impacts similar to those of vaccines that influence worm survival and fecundity, as the surface plotted in Fig. 4 shows. For moderate R0 values, a medium vaccine efficacy of 60% will halt transmission if all infants are immunised every year.
The relationship between the critical infant vaccine coverage required to break transmission for different R 0 values, and the efficacy of a vaccine with an impact on reducing the worm transmission. The duration of vaccine protection is lifelong (ω = 0). The other vaccine impact parameters are set as v 1 = 0 , v 2 = 0, i.e. no effect on parasite survival and fecundity. The parasite population parameters are defined in Table 1
The similarities in the predictions of the impact of the three modes of vaccine action (reducing parasite fecundity, life expectancy and establishment in the human host) in Figs. 2, 3 and 4, relates to the fact that all three parameters impact the magnitude of the effective reproductive number in a similar linear manner.
Duration of vaccine protection
In the previous subsections, the duration of protection was taken to be lifelong. Now we relax this assumption and the associated impact of vaccine duration on the critical vaccination coverage, p c , is explored.
Figure 5 shows that the duration of vaccine protection becomes critical beyond about 5 years in a moderate transmission environment. If a vaccine provides 80% efficacy in reducing worm establishment but protection lasts less than 5 years, little benefit arises from increasing the proportion of the population that is vaccinated each year. The predicted pattern is strongly influenced by the background mortality of the human host population. As described earlier, the assumption of a constant mortality rate, independent of age, implies continual loss of those vaccinated. This in turn creates difficulties in maintaining herd immunity.
The relationship between the proportion of the population that must be immunised per year to halt transmission, as a function of the transmission intensity in a defined area and the average duration of protection, τ = 1/ω, in years. Vaccine efficacies are set as: v3 = 0.80, v1 = v2 = 0
The critical vaccination level increases significantly for medium efficacies on worm establishment and R0 = 2.5 if a potential vaccine provides less than 10 years of protection, as illustrated in Fig. 6. In addition, a vaccine with 100% efficacy in reducing worm establishment that provides lifelong immunity (at least 50 years) requires 75% of newborns to be immunised to break the transmission of the parasite.
The relationship between the critical vaccination proportion of infants, with the vaccine efficacy on worm establishment and the average duration of protection. The other parameters are defined as: R 0 = 2.5 , v 1 = v 2 = 0
Table 2 records the sensitivity of the critical infant vaccination coverage to variation in the rate of loss of vaccine-induced immunity, ω, for Model 1. As mentioned before, the duration of protection has limited effect on pc as long as it is below 5 years (Fig. 5). On the other hand, a considerable reduction in the proportion of infants that have to be vaccinated to halt transmission may occur when the duration of protection is long (decades).
Table 2 Impact of intervention when vaccine protection duration varies for different R 0 values
Vaccine impact on the mean worm burden
In community-based vaccination programmes, vaccine efficacy and coverage may not reach high enough levels to halt transmission. Stopping transmission is of course not the only desired outcome, and lower efficacies can have a very substantial impact on the mean worm burden and hence morbidity.
Figure 7a, b shows the equilibrium worm burden for different transmission settings achieved by a vaccine that impacts only adult parasite establishment in the human host, with an efficacy of 80% and an infant coverage of 70% and 85%, respectively. Table 3 records the numerical mean worm burden values at equilibrium 50 years after vaccination for low, medium and high transmission settings. Note that worm elimination can be achieved even with R0 = 3.5 and 70% immunisation coverage of infants, but a high efficacy level (80%) against worm establishment is required.
The impact of the transmission setting, R 0 , for Model 1 on the temporal trend in the mean worm burden M, with 80% vaccine efficacy on parasite establishment only, i.e. v 1 = 0 , v 2 = 0 , v 3 = 0.80, for (a) p = 70% and (b) p = 85%. The starting value for the mean worm burden is M 0 = 100 per host and the vaccine protection is assumed to be lifelong, i.e. ω = 0
Table 3 The equilibrium worm burden, M ∗ for different R 0 values with the vaccine efficacy set as: v 1 = 0, v 2 = 0, v 3 = 0.80 and the vaccination coverage as p = 70% and p = 80%
Cohort immunisation takes time to build up herd immunity, even at high vaccination coverage levels. Figure 8 shows the temporal trends in the mean worm burden after the introduction of vaccination. For 50% coverage, assuming an 80% vaccine efficacy acting on parasite establishment in a moderate transmission setting (R0 = 2.5), it takes approximately 21 years to eliminate the worm population. In part this is related to the impact of human mortality on the build-up of herd immunity (the mortality rate in infancy is high, and hence many vaccinated infants die), and in part to the slow dynamic time scale of the system set by the adult worm life expectancy of 4 years. This is why we observe a small increase in the mean worm burden up to 1 year after the intervention is introduced. The numerical equilibrium values of worm burden for various proportions of infant vaccination are given in Table 4.
The impact of infant vaccine coverage, p, for Model 1 on the temporal trend in the mean worm burden, with 80% efficacy of a vaccine having an impact on parasite establishment, i.e. v 1 = 0 , v 2 = 0 , v 3 = 0.80, and R 0 = 2.5. The starting value for the mean worm burden used is M 0 = 100 per host
Table 4 The equilibrium worm burden, M ∗ for Model 1 at time t = 50 years for different vaccination levels, with the vaccine efficacy parameters set as: v 1 = 0, v 2 = 0, v 3 = 0.80 and the transmission intensity constant: R 0 = 2.5
A vaccine to protect against schistosome infection is ideally required to eliminate the parasite in endemic regions. MDA alone is effective in preventing morbidity in children, but breaking transmission will require high coverage levels in children and adults over many decades [30, 31]. This study describes a mathematical model for the potential effect of a schistosome vaccine if applied to the human host population, under two different vaccine delivery strategies, namely, infant and mass vaccination. The mathematical model explores the vaccination coverage required to achieve transmission elimination. In these calculations, the possible impact of a partially efficacious vaccine is considered. The vaccine acts to reduce the parasite's life expectancy, fecundity, and adult worm establishment within the human host.
An important factor that the developed model investigates is the duration of vaccine-induced immunity, which determines the time required to break the transmission of the parasite. Measuring the duration of protection requires the repeated challenge of immunised animals over, say, 1, 5 and 10 years. In practice, such experiments on the duration of protection may not be possible for justifiable animal welfare reasons, including regulations which restrict the period of time an infected animal can be left untreated. As such, duration of protection may need to be tested in humans, if progression through phase I and phase II clinical trials is successful. This would entail the careful design of phase IV trials to measure reinfection rates, as has been the case for the candidate malaria vaccine [32].
The period taken to break parasite transmission following an infant vaccination programme could possibly be greatly reduced by applying a mass vaccination strategy across all age classes. In this circumstance, the safety of vaccinating those already infected, if a potential mass immunisation programme is applied, is of obvious importance and needs to be established via clinical trials. Previous studies have shown that MDA can reduce the worm burden by up to 87% [33]. Thus, theoretically, it may be best to treat with MDA first across all age classes and then immunise. Other unknowns include the safety and impact of vaccination on uninfected people (post-treatment with praziquantel) who have had short, or long, past experience of infection, given the good immunological responses that can be generated in previously infected individuals.
In addition, the long average timescale set by adult worm life expectancy (3.5–8 years), which damps the influence of changes on an annual basis, and the imprecisely known nature of the density dependence effect are limitations of our model. Little improvement in the understanding of these processes is expected in the near future.
This paper has focused on the impact of infant vaccination. However, the mathematical model framework is general in form, and allows mass vaccination to be explored. Future publications will model combination of interventions (MDA and vaccination) and different delivery options in both an age structured hybrid model and a full individual based stochastic model.
The real challenge, though, is in finding a financial model that will pay for the development of such candidates to progress via costly trials in humans to test safety, efficacy and community-based impact, through to vaccine production and manufacture for servicing resource-poor settings. Despite this rather pessimistic note, what is encouraging is the fact that vaccine candidates of reasonable efficacy in primates can be developed which have the potential to interrupt transmission in endemic regions of the world.
A series of general conclusions emerge from the analyses. First, a vaccine with a moderate efficacy of 60% will, according to our mathematical model, interrupt transmission in communities with low and moderate transmission. For high transmission settings, higher vaccine efficacies are required to interrupt transmission, or multiple booster vaccine doses each year may be necessary. Second, the analysis suggests that a candidate vaccine that impacts worm establishment, worm fecundity or adult parasite survival in the human host is almost equally beneficial in each case. Experiments in rodents and baboons suggest that all three factors may be affected by the lead vaccine candidate (Sm-p80 protein), where reduced female worm growth in immunised hosts acts to decrease fecundity, and perhaps survival as well. The ongoing experiment in baboons will give more detail on the vaccine's impact on parasite establishment, growth and fecundity within the immunised animals. Third, with an infant-based vaccination programme, the benefits of immunisation will take some time to become apparent due to the slow build-up of herd immunity in a cohort vaccination programme. Our model shows that breaking transmission, even in low-intensity transmission areas, may take 18 years or more of medium to high infant coverage.
Fourth, the conclusions outlined above depend on the duration of protection generated by one or a short course of vaccinations. If the duration is less than a few years, repeated immunisation each year may be necessary to break transmission. The mathematical model we have developed suggests that the duration does not need to be lifelong to achieve elimination of the disease. An average duration of protection of the order of 5–10 years is adequate to ensure a good community-based impact at moderate to good efficacy and good coverage levels.
Epg:
Eggs per gram
MDA:
Mass drug administration
Gryseels B, Polman K, Clerinx J, Kestens L. Human schistosomiasis. Lancet. 2006;368(9541):1106–18.
Hotez PJ, Molyneux DH, Fenwick A, Kumaresan J, Sachs SE, Sachs JD, et al. Control of neglected tropical diseases. N Engl J Med. 2007;357(10):1018–27.
Steinmann P, Keiser J, Bos R, Tanner M, Utzinger J. Schistosomiasis and water resources development: systematic review, meta-analysis, and estimates of people at risk. Lancet Infect Dis. 2006;6(7):411–25.
Colley DG, Bustinduy AL, Secor WE, King CH. Human schistosomiasis. Lancet. 2014;383(9936):2253–64.
Van Der Werf MJ, De Vlas SJ, Brooker S, Looman CWN, Nagelkerke NJD, Habbema JDF, et al. Quantification of clinical morbidity associated with schistosome infection in sub-Saharan Africa. Acta Trop. 2003;86(2–3):125–39.
Karmakar S, Zhang W, Ahmad G, Torben W, Alam MU, Le L, et al. Use of an sm-p80-based therapeutic vaccine to kill established adult schistosome parasites in chronically infected baboons. J Infect Dis. 2014;209(12):1929–40.
Hotez PJ, Fenwick A. Schistosomiasis in Africa: an emerging tragedy in our new global health decade. PLoS Negl Trop Dis. 2009;3(9):9–11.
Utzinger J, N'Goran EK, Caffrey CR, Keiser J. From innovation to application: social-ecological context, diagnostics, drugs and integrated control of schistosomiasis. Acta Trop. 2011;120(Suppl. 1):S121–37.
Butterworth AE, Capron M, Cordingley JS, Dalton PR, Dunne DW, Kariuki HC, et al. Immunity after treatment of human schistosomiasis mansoni. II. Identification of resistant individuals, and analysis of their immune responses. Trans R Soc Trop Med Hyg. 1985;79:393–408.
Capron A, Dessaint JP, Capron M, Ouma JH, Butterworth AE. Immunity to schistosomes: progress toward vaccine. Science. 1987;238(4830):1065–72.
Ahmad G, Zhang W, Torben W, Ahrorov A, Damian RT, Wolf RF, et al. Preclinical prophylactic efficacy testing of sm-p80-based vaccine in a nonhuman primate model of Schistosoma mansoni infection and immunoglobulin G and e responses to sm-p80 in human serum samples from an area where schistosomiasis is endemic. J Infect Dis. 2011;204(9):1437–49.
Ahmad G, Zhang W, Torben W, Damian RT, Wolf RF, White GL, et al. Protective and antifecundity effects of sm-p80-based DNA vaccine formulation against Schistosoma mansoni in a nonhuman primate model. Vaccine. 2009;27(21):2830–7.
Zhang W, Ahmad G, Torben W, Noor Z, Le L, Damian RT, et al. Sm-p80-based DNA vaccine provides baboons with levels of protection against Schistosoma mansoni infection comparable to those achieved by the irradiated cercarial vaccine. J Infect Dis. 2010;201(7):1105–12.
Le L, Zhang W, Karmakar S, Ahmad G, Torben W, Siddiqui AA. Simultaneous priming with DNA encoding sm-p80 and boosting with sm-p80 protein confers protection against challenge infection with Schistosoma mansoni in mice. Parasitol Res. 2014;113(3):1195–200.
Zhang W, Ahmad G, Le L, Rojo JU, Karmakar S, Tillery KA, et al. Longevity of sm-p80-specific antibody responses following vaccination with sm-p80 vaccine in mice and baboons and transplacental transfer of sm-p80-specific antibodies in a baboon. Parasitol Res. 2014;113(6):2239–50.
Molehin AJ, Rojo JU, Siddiqui SZ, Gray SA, Carter D, Siddiqui AA. Development of a schistosomiasis vaccine. Expert Rev Vaccines. 2016;15(5):619–27.
Siddiqui AA, Siddiqui SZ. Sm-p80-based schistosomiasis vaccine: preparation for human clinical trials. Trends Parasitol. 2016;xx:1–8.
Anderson RM, Truscott JE, Hollingsworth TD. The coverage and frequency of mass drug administration required to eliminate persistent transmission of soil-transmitted helminths. Philos Trans R Soc Lond Ser B Biol Sci. 2014;369(1645):20130435.
Anderson RM, Truscott JE, Pullan RL, Brooker SJ, Hollingsworth TD. How effective is school-based deworming for the community-wide control of soil-transmitted helminths? PLoS Negl Trop Dis. 2013;7(2):e2027.
Chan MS, Guyatt HL, Bundy DAP, Medlay GF. The development and validation of an age-structured model for the evaluation of disease-control strategies for intestinal helminths. Parasitology. 1994;109:389–96.
Truscott J, Hollingsworth TD, Anderson R. Modeling the interruption of the transmission of soil-transmitted helminths by repeated mass chemotherapy of school-age children. PLoS Negl Trop Dis. 2014;8(12):e3323.
Truscott JE, Turner HC, Anderson RM. What impact will the achievement of the current World Health Organisation targets for anthelmintic treatment coverage in children have on the intensity of soil transmitted helminth infections? Parasit Vectors. 2015;8(1):551.
Anderson RM, May RM. Population dynamics of human helminth infections: control by chemotherapy. Nature. 1982;297(5867):557–63.
Liu X, Takeuchi Y, Iwami S. SVIR epidemic models with vaccination strategies. J Theor Biol. 2008;253(1):1–11.
Keeling M, Tildesley M, House T, Danon L. The mathematics of vaccination. Math Today. 2013;49:40–3.
Scherer A, McLean A. Mathematical models of vaccination. Brit Med Bull. 2002;62:187–99.
Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press; 1992. p. 757.
Macdonald G. The dynamics of helminth infections, with special reference to schistosomes. Trans R Soc Trop Med Hyg. 1965;59(5):489–506.
Anderson RM, May RM. Helminth infections of humans: mathematical models, population dynamics, and control. Adv Parasitol. 1985;24:1–101.
Anderson RM, Turner HC, Farrell SH, Truscott JE. Studies of the transmission dynamics, mathematical model development and the control of schistosome parasites by mass drug administration in human communities. In: Basáñez MG, Anderson RM, editors. Mathematical models for neglected tropical diseases: Essential tools for control and elimination, Part B. London: Academic; 2016. pp. 199–246.
Anderson R, Turner H, Farrell S, Yang J, Truscott J. What is required in terms of mass drug administration to interrupt the transmission of schistosome parasites in regions of endemic infection? Parasit Vectors. 2015;8(1):553.
RTS,S Clinical Trials Partnership. Efficacy and safety of RTS,S/AS01 malaria vaccine with or without a booster dose in infants and children in Africa: final results of a phase 3, individually randomised, controlled trial. Lancet. 2015;386(9988):31–45.
French MD, Churcher TS, Gambhir M, Fenwick A, Webster JP, Kabatereine NB, et al. Observed reductions in Schistosoma mansoni transmission from large-scale administration of praziquantel in Uganda: a mathematical modelling study. PLoS Negl Trop Dis. 2010;4(11):e897.
Chan MS, Guyatt HL, Bundy DA, Booth M, Fulford AJ, Medley GF. The development of an age structured model for schistosomiasis transmission dynamics and control and its validation for Schistosoma mansoni. Epidemiol Infect. 1995;115(2):325–44.
We would like to thank Afzal A. Siddiqui and his research group at Texas Tech University for many discussions about past experimental results and candidate vaccines, and access to unpublished data on experiments in mice and baboons.
AS, CH, JET and RMA gratefully acknowledge funding of the NTD Modelling Consortium by the Bill and Melinda Gates Foundation in partnership with the Task Force for Global Health. The views, opinions, assumptions, or any other information set out in this article should not be attributed to Bill & Melinda Gates Foundation and The Task Force for Global Health or any person connected with them.
AS: developed and analysed the mathematical models, derived the analytical and numerical results, wrote the paper. CH: contributed to the development and analysis of the mathematical models and the writing of the paper. JET: contributed to the development of the mathematical models. RMA: designed the study, contributed to the development of the mathematical models and the writing of the paper. All authors read and approved the final manuscript.
RMA is a Non-Executive Director of GlaxoSmithKline (GSK). GSK had no influence on the conduct of the research, its funding, or the writing of this paper. AS, CH and JET have no competing interests.
London Centre for Neglected Tropical Disease Research, London, UK: Andria Stylianou, James E. Truscott & Roy M. Anderson
Department of Infectious Disease Epidemiology, School of Public Health, Faculty of Medicine, Imperial College London, London, UK: Christoforos Hadjichrysanthou
Correspondence to Andria Stylianou.
Additional file 1: Figure S1.
Population pyramid by age and sex of Malawi from the US Bureau of Population and Census database (https://www.census.gov/population/international/data/idb/region.php?N=%20Results%20&T=12&A=separate&RT=0&Y=2016&R=−1&C=MI). The plots show exponential decay in population size by age, with a mean life expectancy (1/μ) for the total population of approximately 50 years. (TIFF 105 kb)
Analytical solutions of the host population dynamics. Analytical solutions of the parasite population dynamics. Derivation of equations of the transmission functions. Density dependence and mating probability functions. Derivation of the effective reproductive number, R e . Derivation of R e formula in terms of R 0 and \( {R}_0^v \). (DOCX 29 kb)
Stylianou, A., Hadjichrysanthou, C., Truscott, J.E. et al. Developing a mathematical model for the evaluation of the potential impact of a partially efficacious vaccine on the transmission dynamics of Schistosoma mansoni in human communities. Parasites Vectors 10, 294 (2017) doi:10.1186/s13071-017-2227-0
Vaccine trials
Schistosoma mansoni vaccine
Partially efficacious vaccine
The LCNTDR Collection: Advances in scientific research for NTD control | CommonCrawl |
A parametric study of the heat exchanger copper coils used in an indirect evaporative cooling system
Esmaeel Fatahian (ORCID: orcid.org/0000-0002-0845-4141)1, Hesamoddin Salarian1 & Hossein Fatahian1
SN Applied Sciences volume 2, Article number: 112 (2019)
Owing to its low energy consumption, evaporative cooling is the subject of numerous studies. Evaporative cooling is considered a green cooling technology: it requires no chemical reactions and depends on no hazardous materials. The present study is mainly focused on a parametric analysis of an indirect evaporative cooling system using the computational fluid dynamics (CFD) method. The numerical simulation of an indirect evaporative cooling system was carried out using ANSYS Fluent 18.2. The effects of the water inlet velocity, air inlet velocity, coil diameter (dc) and the number of heat exchanger copper coils were numerically investigated. The flow was considered three-dimensional, turbulent and incompressible. The results indicated that increasing the coil diameter and water inlet velocity had a positive effect on the performance of the indirect evaporative cooling system. The maximum water outlet temperature, 285.05 K, was obtained for a water inlet velocity of 0.5 m/s. Moreover, the saturation efficiency decreased with increasing air inlet velocity. On the other hand, saturation efficiency increased at all air inlet velocities with increasing coil diameter.
The evaporative cooling system needs less input energy than mechanical vapor compression systems [1]. It has been displacing conventional air-cooling systems because the cooling process depends on the evaporation of water, which is not harmful to the environment, and requires less energy. The mechanical cooling system requires higher installation costs and more energy, as the HVAC system converts 100% outside air into cool air. Evaporative cooling units can be used in residential areas, business buildings and data center cooling. Evaporative cooling comes in two distinct forms: the direct evaporative cooling (DEC) unit and the indirect evaporative cooling (IEC) system [1,2,3]. An indirect evaporative cooler (IEC) is an energy-efficient cooling device which has been widely used to cool air or other fluids [4,5,6,7]. Indirect evaporative cooling systems are air-to-air heat exchangers in which the primary air is cooled by the secondary air. The secondary air is either cooled by direct evaporative cooling before entering the air-to-air heat exchanger, or it is cooled within the heat exchanger by wetting the secondary air stream side [8, 9]. The indirect evaporative cooling unit is not as efficient as direct evaporative cooling, so in summer an evaporative cooler is often placed in series for higher efficiency. Indirect evaporative cooling overcomes the disadvantage of a direct evaporative cooling unit, namely adding humidity to the air. Currently, many researchers are working on indirect evaporative cooling systems [10,11,12,13], dealing with new thermodynamic cycles, heat exchanger materials and geometries, humidification systems, and the evaluation of energy savings compared with conventional devices.
Boxem et al. [14] introduced a model for an indirect evaporative cooler: a compact counter-flow heat exchanger with louver fins on both sides. They demonstrated that their calculations overestimated the cooler performance by 20% for inlet air temperatures below 24 °C and by 10% for higher inlet temperatures. Shariaty-Niassar and Gilani [15] studied the effects of air stream direction in the channels of the indirect evaporative cooler (IEC). They found that a higher performance was achieved by using the indirect evaporative cooler with a counter-current configuration. You et al. [16] presented a model of an indirect evaporative cooling heat exchanger using the CFD method. They investigated influences such as inlet air temperature, relative humidity, mass flow rate and exchanger channel height. They concluded that lower inlet temperature and relative humidity yield lower outlet temperature and moisture content. Wan et al. [17] developed a new approach to investigate the coupled heat and mass transfer characteristics in an indirect evaporative cooler with counter-flow configurations. In their study, the effects of different parameters on the average Nusselt and Sherwood numbers were examined in detail. Pakari and Ghani [8] numerically and experimentally investigated the performance of a counter-flow dew point evaporative cooling system. They found that the outlet temperatures predicted by their 1D and 3D models were in good agreement with the experimental results. De Antonellis et al. [18] examined an indirect evaporative cooler based on a cross-flow heat exchanger. In their study, a new indirect evaporative cooling system model was developed, including the effects of secondary air humidification and a surface wettability factor as a function of working conditions. Wan et al. [19] used a CFD model to consider counter-flow dew point evaporative coolers. The model indicated a maximum discrepancy of 6% against experimental data. The two dimensions analyzed in the model were the length and width of the cooling system's channels. Li et al. [20] presented a comparative study of a counter-cross-flow plate heat recovery exchanger operating as an indirect evaporative air cooler. Their results indicated that the outlet air temperature of the exchanger set vertically was lower than that of the exchanger set horizontally.
Despite the above-mentioned research, no numerical study has yet analyzed the effect of varying the copper coil diameter. The present study focuses on investigating the effect of coil diameter on the performance of an indirect evaporative cooling (IEC) system, as well as on obtaining favorable water and air inlet velocities simultaneously. Cooling coil efficiency depends not only on water and air temperature but also on coil diameter, number of coils, water inlet velocity and air inlet velocity. For this purpose, three different coil diameters (dc) of 0.0127, 0.01905 and 0.0254 m were used. In addition, wide ranges of air inlet velocities (0.1–0.6 m/s) and water inlet velocities (0.1–0.5 m/s) were considered for each coil diameter to identify the favorable parameters. The outcome of this work could help identify the best configuration of an indirect evaporative cooling system through a parametric CFD study for real-life implementation.
In the present study, 8-row chilled-water cooling coils with a length of 0.5 m and a diameter of 0.0127 m were used, with a distance of 0.05 m between the coils. The entire coil bank was placed in a rectangular computational domain 2 m in length, 0.75 m in width and 1 m in height for applying the boundary conditions. The geometry of the indirect evaporative cooling system is shown in Fig. 1 [21], and Fig. 2 illustrates the side view of its geometry.
The geometry of an Indirect Evaporative Cooling (IEC) system [21]
The side view of the geometry of an Indirect Evaporative Cooling (IEC) system [21]
Governing equations
In the present study, the flow was considered three-dimensional, turbulent and incompressible. The CFD code ANSYS Fluent solves the continuity, momentum and energy equations along with species transport. Equations (1) and (2) represent the continuity and momentum equations used in this case, respectively [22]; source terms due to droplet particles and viscous forces are included [22].
$$ \frac{\partial \rho }{\partial t}+\nabla \cdot \left(\rho \vec{v}\right)={S}_{DPM}+{S}_{other} $$
where ρ is the density of the fluid phase, v the velocity vector, SDPM the discrete phase model (DPM) mass source and Sother any additional mass source.
$$ \frac{\partial \left(\rho \vec{v}\right)}{\partial t}+\nabla \cdot \left(\rho \vec{v}\vec{v}\right)=-\nabla p+\nabla \cdot \tau +\rho \vec{g}+{F}_{DPM} $$
where p is the static pressure, τ the stress tensor, g the gravitational acceleration and FDPM the force exerted by the discrete phase.
Water vapor in the air is modeled using species transport in ANSYS Fluent 18.2. The species transport equation, which describes the convection and diffusion of water vapor in the air, is presented in Eq. (3).
$$ \frac{\partial \left(\rho {Y}_i\right)}{\partial t}+\nabla \cdot \left(\rho \vec{v}{Y}_i\right)=-\nabla \cdot {\vec{J}}_i+{S}_i $$
where Yi is the local mass fraction of each species, Ji the diffusive flux and Si the creation of species by the DPM.
Heat transfer is governed by the energy equation, which is presented in Eq. (4) [22].
$$ \frac{\partial \left(\rho E\right)}{\partial t}+\nabla \cdot \left(\vec{v}\left(\rho E+p\right)\right)=\nabla \cdot \left({k}_{eff}\nabla T-\sum_j{h}_j{\vec{J}}_j+{\overline{\overline{\tau}}}_{eff}\cdot \vec{v}\right) $$
where E is the total energy, keff the effective thermal conductivity and Jj the diffusive flux of species j.
The flow simulation was carried out using the above governing equations with the species transport equation enabled and the k−ε turbulence model. The equations of the k−ε turbulence model are as follows:
Turbulent kinetic energy (k) [23]:
$$ \rho \frac{\partial k}{\partial t}+\rho \overline{u_j}\frac{\partial k}{\partial {x}_j}={\tau}_{ij}\frac{\partial \overline{u_i}}{\partial {x}_j}-\rho \varepsilon +\frac{\partial }{\partial {x}_j}\left[\left(\mu +{\mu}_t/{\sigma}_k\right)\frac{\partial k}{\partial {x}_j}\right] $$
The rate of turbulence energy dissipation (ε) [22]:
$$ \rho \frac{\partial \varepsilon }{\partial t}+\rho \overline{u_j}\frac{\partial \varepsilon }{\partial {x}_j}={C}_{\varepsilon 1}\frac{\varepsilon }{k}{\tau}_{ij}\frac{\partial \overline{u_i}}{\partial {x}_j}-{C}_{\varepsilon 2}\rho \frac{\varepsilon^2}{k}+\frac{\partial }{\partial {x}_j}\left[\left(\mu +{\mu}_t/{\sigma}_{\varepsilon}\right)\frac{\partial \varepsilon }{\partial {x}_j}\right]. $$
Grid generation and numerical method
In the present study, tetrahedral meshes were used for the inlet and outlet regions of the computational domain. Moreover, fine meshes were applied in the more sensitive areas to ensure high accuracy in the computations. Figures 3 and 4 show the entire grid and its side view, respectively. At the inlet, a velocity-inlet boundary condition was set, and at the outlet a pressure-outlet condition. Furthermore, a no-slip boundary condition was set on the coils. Water with a density of 998.2 kg/m3 and a viscosity of 0.001003 kg/m s, and air with a density of 1.225 kg/m3 and a viscosity of 1.7894 × 10−5 kg/m s, were considered as the working fluids. The water inlet temperature was 275 K and the hot air inlet temperature was 300 K, with a relative humidity of 34.9%. For the inlet section of the coils, a velocity-inlet condition was applied and for the outlet section a pressure-outlet condition (Fig. 5). The Reynolds number ranged from 6300 to 12,640 for the different coil diameters at a water inlet velocity of 0.5 m/s, so the flow was assumed to be turbulent; a quick check of these values is sketched below.
The entire view of the grid
The side view of the grid
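The quoted Reynolds range follows directly from the stated water properties via Re = ρvd/μ. The three-line check below is ours, not part of the original workflow:

```python
rho, mu_w = 998.2, 0.001003          # water density (kg/m^3), viscosity (kg/m s)
v = 0.5                              # water inlet velocity (m/s)
for d in (0.0127, 0.01905, 0.0254):  # the three coil diameters (m)
    print(f"d = {d} m -> Re = {rho*v*d/mu_w:.0f}")   # ~6320, ~9479, ~12639
```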
The numerical analysis was carried out using ANSYS Fluent 18.2, based on CFD codes that are used to describe complex heat and mass transfer behavior [24,25,26] and are implemented in various problems [27,28,29]. The second-order upwind scheme was applied for the discretization of the continuity, momentum and energy equations. In addition, the SIMPLE algorithm was used for pressure–velocity coupling, and pressure interpolation was second-order [30]. The convergence criterion was set below 10−6 for all equations. Due to the high Reynolds number of the flow inside the coils, the governing equations are the Reynolds-averaged Navier–Stokes (RANS) equations. Turbulence inside the coil was modeled with the standard k − ε model, following the recommendations of previous studies for similar flows [31,32,33].
Grid independence study and validation
A grid independence study was conducted to verify the adequacy of the final grid, comparing grids with 125,000, 287,000, 525,000 and 794,000 cells on the calculated water outlet temperature. The difference between the results of the finest grid and the grid with 525,000 cells was negligible. Thus, to save computation time while retaining accuracy, the grid with 525,000 cells was used to obtain the results. Figure 6 shows the details of the grid independence study. Moreover, to validate the computational results, the water outlet temperature and air outlet temperature were compared with the experimental results of Zhou et al. [21] at a Reynolds number of 6320 (Table 1). The results obtained from the numerical simulation were in good agreement with the available experimental data [21].
The detail of grid independence
Table 1 Comparison of temperature variations of the CFD results with the experimental results [21]
The effect of coil diameter on the water outlet temperature
Figure 7 shows the changes in the water outlet temperature as the water inlet velocity varies between 0.1 and 0.5 m/s for the three coil diameters of 0.0127, 0.01905 and 0.0254 m. The coil diameter of 0.0127 m was selected as the base diameter, consistent with the coil diameter of the available experimental data [21]; the other diameters were 0.01905 m (1.5 dc) and 0.0254 m (2 dc).
Comparison of the changes in water outlet temperature in terms of the changes in water inlet velocity
The water outlet temperature increased with increasing water inlet velocity for all coil diameters. The water inlet temperature was 275 K and the hot air temperature was 300 K. After passing through the 8 rows of copper coils, the water temperature gradually increased as the hot airflow collided with the coils, and warmer water exited at the coil outlet. Likewise, the entering hot air lost its heat through collision with the coils, and cool air exited at the outlet. Clearly, increasing the coil diameter improves the performance of the indirect evaporative cooling (IEC) system: the water outlet temperature increased with increasing coil diameter. This positive effect is more evident at low water inlet velocities, especially at 0.1 m/s. The maximum water outlet temperature was 285.05 K, obtained at a water inlet velocity of 0.5 m/s for a coil diameter of 0.0254 m, about 3.65% higher than the water inlet temperature of 275 K.
The effect of coil diameter on the air outlet temperature
Figure 8 shows the changes in the air outlet temperature as the air inlet velocity varies between 0.1 and 0.6 m/s for the three coil diameters. The air outlet temperature increased with increasing air inlet velocity for all coil diameters. In contrast to the effect of water inlet velocity, lower values of air inlet velocity had a greater effect on the performance of the indirect evaporative cooling system. Moreover, a larger coil diameter had a greater effect on decreasing the air outlet temperature, so the best value was obtained at an air inlet velocity of 0.1 m/s and a coil diameter of 0.0254 m. At an air inlet velocity of 0.1 m/s, the maximum reductions of the air outlet temperature were about 5.3%, 5.1% and 4.5% for coil diameters of 0.0254, 0.01905 and 0.0127 m, respectively.
Comparison of the changes in air outlet temperature in terms of the changes in air inlet velocity
Saturation efficiency
Figure 9 depicts the saturation efficiency as the air inlet velocity varies between 0.1 and 0.6 m/s for the three coil diameters. The saturation efficiency is defined as follows:
$$\varepsilon_{e} = \frac{t_{1} - t_{2}}{t_{1} - t^{\prime}} \times 100\%$$
where \(\varepsilon_{e}\) is the saturation efficiency (%), \(t_{1}\) is the dry-bulb temperature of the entering air (K), \(t_{2}\) is the dry-bulb temperature of the exiting air (K), and \(t^{\prime}\) is the wet-bulb temperature of the entering air (K).
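The definition translates directly into code. A minimal sketch (ours, not from the paper; temperatures in kelvin, the function name is our own):

```python
def saturation_efficiency(t1, t2, t_wet):
    """Saturation efficiency (%) per the equation above.

    t1    -- dry-bulb temperature of entering air (K)
    t2    -- dry-bulb temperature of exiting air (K)
    t_wet -- wet-bulb temperature of entering air (K)
    """
    return 100.0 * (t1 - t2) / (t1 - t_wet)

# Example: air entering at 300 K dry-bulb / 290 K wet-bulb and leaving
# at 291.2 K gives saturation_efficiency(300.0, 291.2, 290.0) == 88.0 (%).
```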
Comparison of saturation efficiency in terms of the changes in air inlet velocity
A user-defined function (UDF) for computing the saturation efficiency was developed and added to FLUENT so that it could be incorporated into the conservation equations. First, the values of density, pressure, temperature, and mass fraction were read using existing macros in FLUENT. Then, the specific gas constants of water vapor and dry air were defined as constants, from which the gas constant of the mixture was calculated based on the components' mass fractions. A fixed mass fraction of H2O, calculated from the air temperature and relative humidity, was imposed at the inlet for the species transport equation. The dry- and wet-bulb temperatures of the entering air were given as flow conditions. The dry-bulb temperature of the exiting air was calculated after solving the governing equations for the flow simulation with the species transport equation enabled.
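The UDF itself is written in C against FLUENT's macro interface and is not reproduced in the paper; as a sketch of the mixture-property step it describes, the mass-fraction-weighted gas constant can be computed as follows (standard constant values; our illustration, not the authors' code):

```python
# Specific gas constants (J kg^-1 K^-1); standard textbook values.
R_WATER_VAPOR = 461.5
R_DRY_AIR = 287.05

def mixture_gas_constant(y_h2o):
    """Gas constant of a humid-air mixture from the H2O mass fraction,
    weighted by the mass fractions of the two components."""
    return y_h2o * R_WATER_VAPOR + (1.0 - y_h2o) * R_DRY_AIR
```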
The saturation efficiency decreased with increasing air inlet velocity. Conversely, at all air inlet velocities, the saturation efficiency increased with coil diameter. The maximum saturation efficiency of 87.83% was obtained at a coil diameter of 0.0254 m and an air inlet velocity of 0.1 m/s, which was about 4% and 14% higher than for coil diameters of 0.01905 m and 0.0127 m, respectively.
The effect of coil diameter on pressure drop
The pressure drop for all three coil diameters was calculated at an air inlet velocity of 0.5 m/s and a water inlet velocity of 0.6 m/s, as presented in Table 2. As can be seen, the pressure drop decreased with increasing coil diameter: for a coil diameter of 0.0254 m it was about 56% lower than for a coil diameter of 0.0127 m.
Table 2 Comparison of pressure drop (Pa) between three different coil diameters
The changes in Nusselt number against Reynolds number
Figure 10 shows the changes in the Nusselt number against the Reynolds number for three different coil diameters. The Nusselt number increased considerably with the Reynolds number for all coil diameters. Furthermore, increasing the coil diameter significantly increased the Nusselt number. The maximum Nusselt number was obtained for a coil diameter of 0.0254 m.
Comparison of Nusselt number against Reynolds number for three different coil diameters
The effect of the number of coils
Figures 11 and 12 illustrate the effect of the number of coils on the changes in water and air outlet temperatures for a coil diameter of 0.0127 m. For this purpose, three different numbers of copper coil rows (4, 6 and 8 rows) were considered. The number of coils has a significant effect on the performance of an indirect evaporative cooling (IEC) system. As the number of coils increased, the collision of the hot airflow with the coils increased, which enhanced the heat transfer rate and consequently reduced the air outlet temperature. On the other hand, a higher number of coils increased the water outlet temperature. The maximum water outlet temperature of 284.9 K was obtained for 8 rows of copper coils at a water inlet velocity of 0.5 m/s, an increase of about 1% and 0.42% compared to 4 and 6 rows, respectively. Furthermore, the maximum reductions of the air outlet temperature were about 2.7%, 2.2% and 1.8% for 8, 6 and 4 rows of copper coils at an air inlet velocity of 0.6 m/s, respectively.
Changes in water outlet temperature due to the number of coils with d = 0.0127 m
Changes in air outlet temperature due to the number of coils with d = 0.0127 m
The temperature and velocity distributions
The temperature distribution (K) of the indirect evaporative cooling (IEC) system was investigated for three different coil diameters at an air inlet velocity of 0.6 m/s and a water inlet velocity of 0.5 m/s, as illustrated in Figs. 13, 14, 15. As mentioned before, the hot airflow, after colliding with the coils, exited from the outlet side at a reduced temperature, while the water in the coils was heated and exited from the outlet side of the coils. Figures 16, 17, 18 show the velocity distribution (m/s) of the indirect evaporative cooling system for the same three coil diameters at an air inlet velocity of 0.6 m/s and a water inlet velocity of 0.5 m/s. The velocity increases as the flow passes through the coil arrangement, which increases convective heat transfer. The velocity field also indicates turbulent flow in the region upstream of the coil arrangement, a consequence of the incoming air not being evenly distributed between the coils.
Temperature distribution (K) for a coil diameter of 0.0127 m
Temperature distribution (K) for a coil diameter of 0.01905 m
Velocity distribution (m/s) for a coil diameter of 0.0127 m
Velocity distribution (m/s) for a coil diameter of 0.01905 m
In the present study, a parametric analysis of an indirect evaporative cooling (IEC) system was carried out using the CFD method. The effects of the water inlet velocity, air inlet velocity, coil diameter and number of heat exchanger copper coil rows were numerically analyzed. For this purpose, three different coil diameters of 0.0127, 0.01905 and 0.0254 m were used. The flow was considered three-dimensional, turbulent, and incompressible. The SIMPLE algorithm was used for pressure–velocity coupling, and the \(k-\varepsilon\) turbulence model was adopted to simulate the turbulent flow. For validation, the computational results were compared with the available experimental results of Zhou et al. [21] and good agreement was achieved. Based on the results, it was concluded that a larger coil diameter and a higher water inlet velocity improved the performance of the indirect evaporative cooling (IEC) system. The maximum saturation efficiency of 87.83% was obtained at a coil diameter of 0.0254 m and an air inlet velocity of 0.1 m/s, which was about 4% and 14% higher than for coil diameters of 0.01905 m and 0.0127 m, respectively. The maximum water outlet temperature of 285.05 K was obtained at a water inlet velocity of 0.5 m/s for a coil diameter of 0.0254 m, an increase of about 3.65% over the inlet temperature. On the other hand, lower air inlet velocities had a greater effect on the performance of the indirect evaporative cooling system. The maximum reductions of the air outlet temperature were about 5.3%, 5.1% and 4.5% at an air inlet velocity of 0.1 m/s for coil diameters of 0.0254, 0.01905 and 0.0127 m, respectively.
Heidarinejad G, Moshari S (2015) Novel modeling of an indirect evaporative cooling system with cross-flow configuration. Energy Build 92:351–362
Al-Abbasi O, Al-Alawi Y (2019) Modeling of indirect evaporative cooling and its performance analysis in harsh environments. Heat Mass Transf 55:1–14
Duan Z, Zhan C, Zhang X, Mustafa M, Zhao X, Alimohammadisagvand B, Hasan A (2012) Indirect evaporative cooling: past, present, and future potentials. Renew Sustain Energy Rev 16(9):6823–6850
Bravo G, González E (2013) Thermal comfort in naturally ventilated spaces and under indirect evaporative passive cooling conditions in hot–humid climate. Energy Build 63:79–86
Lin J, Bui DT, Wang R, Chua KJ (2018) On the fundamental heat and mass transfer analysis of the counter-flow dew point evaporative cooler. Appl Energy 217:126–142
Cruz EG, Krüger E (2015) Evaluating the potential of an indirect evaporative passive cooling system for Brazilian dwellings. Build Environ 87:265–273
Rogdakis ED, Koronaki IP, Tertipis DN (2014) Experimental and computational evaluation of a Maisotsenko evaporative cooler at Greek climate. Energy Build 70:497–506
Pakari A, Ghani S (2019) Comparison of 1D and 3D heat and mass transfer models of a counter flow dew point evaporative cooling system: numerical and experimental study. Int J Refrig 99:114–125
Heidarinejad G, Bozorgmehr M, Delfani S, Esmaeelian J (2009) Experimental investigation of two-stage indirect/direct evaporative cooling system in various climatic conditions. Build Environ 44:2073–2079
Sohani A, Sayyaadi H, Azimi M (2019) Employing static and dynamic optimization approaches on a desiccant-enhanced indirect evaporative cooling system. Energy Convers Manag 199:112017
Comino F, Milani S, De Antonellis S, Joppolo CM, de Adana MR (2018) Simplified performance correlation of an indirect evaporative cooling system: development and validation. Int J Refrig 88:307–317
Sohani A, Sayyaadi H, Zeraatpisheh M (2019) Optimization strategy by a general approach to enhance improving potential of dew-point evaporative coolers. Energy Convers Manag 188:177–213
Pandelidis D, Cichoń A, Pacak A, Anisimov S, Drąg P (2018) Counter-flow indirect evaporative cooler for heat recovery in the temperate climate. Energy 165:877–894
Boxem G, Boink S, Zeiler W (2007) Performance model for the small-scale indirect evaporative cooler. In: Proceedings of clima wellbeing indoors, REHVA World Congress, Helsinki, Finland, pp. 10–14
Shariaty-Niassar M, Gilani N (2009) An investigation of indirect evaporative coolers, IEC with respect to thermal comfort criteria. Iran J Chem Eng 6(2):15–25
You Y, Jiang H, Lv J (2019) Analysis of influence of IEC heat exchanger based on CFD method. Energy Procedia 158:5759–5764
Wan Y, Ren C, Xing L (2017) An approach to the analysis of heat and mass transfer characteristics in indirect evaporative cooling with counterflow configurations. Int J Heat Mass Transf 108:1750–1763
De Antonellis S, Joppolo CM, Liberati P, Milani S, Romano F (2017) Modeling and experimental study of an indirect evaporative cooler. Energy Build 142:147–157
Wan Y, Lin J, Chua KJ, Ren C (2018) Similarity analysis and comparative study on the performance of counter-flow dew point evaporative coolers with experimental validation. Energy Convers Manag 169:97–110
Li WY, Li YC, Zeng LY, Lu J (2018) Comparative study of vertical and horizontal indirect evaporative cooling heat recovery exchangers. Int J Heat Mass Transf 124:1245–1261
Zhou X, Braun JE (2004) Transient modeling of chilled water cooling coils. In: International refrigeration and air conditioning conference, paper 652
Fluent A, Ansys (2013) Release 15.0. Theory Guide, November
Versteeg HK, Malalasekera W (2007) An introduction to computational fluid dynamics: the finite volume method. Pearson Education, New York
Fatahian H, Salarian H, Nimvari ME, Fatahian E (2018) Numerical study of thermal characteristics of fuel oil-alumina and water-alumina nanofluids flow in a channel in the laminar flow. IIUM Eng J 19(1):251–269
Lima AAS, Ochoa AAV, Da Costa JAP, Henríquez JR (2019) CFD simulation of heat and mass transfer in an absorber that uses the pair ammonia/water as a working fluid. Int J Refrig 98:514–525
Qian Y, Han Z, Zhan JH, Liu X, Xu G (2018) Comparative evaluation of heat conduction and radiation models for CFD simulation of heat transfer in packed beds. Int J Heat Mass Transf 127:573–584
Fatahian E, Nichkoohi AL, Fatahian H (2019) Numerical study of the effect of suction at a compressible and high Reynolds number flow to control the flow separation over Naca 2415 airfoil. Prog Comput Fluid Dyn Int J 19(3):170–179
Yue C, Zhang Q, Zhai Z, Ling L (2018) CFD simulation on the heat transfer and flow characteristics of a microchannel separate heat pipe under different filling ratios. Appl Therm Eng 139:25–34
Fatahian H, Salarian H, Nimvari ME, Fatahian E (2018) Numerical study of suction and blowing approaches to control flow over a compressor cascade in turbulent flow regime. Int J Automot Mech Eng 15(2):5326–5346
Montazeri H, Blocken B, Hensen JLM (2015) Evaporative cooling by water spray systems: CFD simulation, experimental validation, and sensitivity analysis. Build Environ 83:129–141
Khalajzadeh V, Farmahini-Farahani M, Heidarinejad G (2012) A novel integrated system of ground heat exchanger and indirect evaporative cooler. Energy Build 49:604–610
Gebrehiwot B, Dhiman N, Rajagopalan K, Agonafer D, Kannan N, Hoverson J, Kaler M (2013) CFD modeling of indirect/direct evaporative cooling unit for modular data center applications. In: ASME International technical conference and exhibition on packaging and integration of electronic and photonic microsystems. American Society of Mechanical Engineers
Martín RH (2009) Numerical simulation of a semi-indirect evaporative cooler. Energy Build 41(11):1205–1214
Department of Mechanical Engineering, Nour Branch, Islamic Azad University, Nour, Iran
Esmaeel Fatahian, Hesamoddin Salarian & Hossein Fatahian
Correspondence to Esmaeel Fatahian.
The authors declare that they have no conflict of interest.
Fatahian, E., Salarian, H. & Fatahian, H. A parametric study of the heat exchanger copper coils used in an indirect evaporative cooling system. SN Appl. Sci. 2, 112 (2020). https://doi.org/10.1007/s42452-019-1915-0
Indirect evaporative cooling system
Engineering: Computational Fluid Dynamics (CFD) in Energy Related Problems
Enabling multiplexed testing of pooled donor cells through whole-genome sequencing
Yingleong Chan (ORCID: orcid.org/0000-0003-4474-6114)1,2,
Ying Kai Chan1,2,
Daniel B. Goodman1,2,3,
Xiaoge Guo1,2,
Alejandro Chavez4,
Elaine T. Lim1,2 &
George M. Church1,2
Genome Medicine volume 10, Article number: 31 (2018)
We describe a method that enables the multiplex screening of a pool of many different donor cell lines. Our method accurately predicts each donor proportion from the pool without requiring the use of unique DNA barcodes as markers of donor identity. Instead, we take advantage of common single nucleotide polymorphisms, whole-genome sequencing, and an algorithm to calculate the proportions from the sequencing data. By testing using simulated and real data, we showed that our method robustly predicts the individual proportions from a mixed-pool of numerous donors, thus enabling the multiplexed testing of diverse donor cells en masse.
More information is available at https://pgpresearch.med.harvard.edu/poolseq/
The screening of many cell lines for specific phenotypes is commonly performed to discover factors that confer donor cell specific effects. For example, several studies have employed the screening of multiple cancer cell lines to identify cell-type-specific essential genes [1,2,3]. Other studies have also used primary cells from different donors to identify genetic variants associated with various cellular phenotypes. In one study, the authors reported six loci associated with immune response to pathogens by measuring cytokine production in peripheral blood mononuclear cells from hundreds of different donors [4]. Other groups measured the transcriptional response to pathogenic stimulus in primary monocytes obtained from many African and European individuals [5, 6]. In these studies, the experiments were performed on cells from each individual donor separately. However, with increasing numbers of donors, generating data from more donors' cells would require more research effort and time. As such, it would be advantageous to multiplex these assays by performing a single experiment on a pool of all donor cells and simultaneously retrieving phenotypic data from each donor.
To achieve this, one would require a method to accurately estimate the individual proportion of each donor from a pool of cells containing multiple donors. With such a method, one can perform a selection assay or perform fluorescence-activated cell sorting (FACS) to sort the pool of cells based on criteria of interest (e.g. response to pathogen, drug resistance, protein expression) and identify the proportion of every individual donor within this new pool (case group). A similar experiment can be performed for the control group, to identify the donor proportion either at baseline or from cells sorted with different criteria. The phenotype for an individual donor is then measured by comparing the difference in proportion between case and control groups (Fig. 1). A recent study aimed at discovering genotype-specific effects in a mixture of cancer cells reported a method (PRISM) that achieved this [7]. Briefly, PRISM uses a unique 24-nt barcode that was integrated into each donor cell line by lentiviral delivery before pooling. To obtain individual donor proportions, the barcodes were amplified using polymerase chain reaction (PCR) and sequenced by next-generation sequencing. Each individual donor proportion is then estimated by calculating the proportion of their corresponding barcodes from the sequenced reads. However, the PRISM method requires the barcoding of individual donor cells using lentiviral delivery, which is a tedious process because each lentiviral barcode has to be generated, applied to the donor cells, and selected for separately. Furthermore, primary cells, non-dividing cells, and cells with limited ability to be passaged in vitro cannot be effectively barcoded in this manner. Here, we describe a method that can accurately estimate each donor proportion in a mixed pool without the use of exogenous barcodes or amplification of a specific locus using PCR.
Workflow of how our method is used for testing cells from multiple donors en masse. Using FACS or selection, one can obtain the case and control group of cells. The individual donor proportions for the case and control group can be obtained using our method and thus each individual donor can be assigned a phenotype value. The method does not require artificial barcodes or amplification of a specific locus
Our method harnesses the presence of millions of common single nucleotide polymorphisms (SNPs) within the human genome. These SNPs, which are usually bi-allelic, can be exploited as a natural barcode and are distributed throughout the entire genome. These SNPs are spaced relatively far apart, with approximately one common SNP for every 1000 base pairs in the human genome [8]. The genotypes of these SNPs for each donor are pre-determined before executing the method. These SNP genotypes can be easily acquired using whole-genome genotyping arrays or by performing whole-genome sequencing for each donor. While each individual SNP is not unique, the combination of SNPs throughout the genome is unique to each donor. However, PCR amplification and sequencing of any single genomic locus does not cover enough SNPs to uniquely identify an individual donor. Our method overcomes this problem by using all the SNPs distributed throughout the host genome. In the standard process of sequencing a human genome from a library of short DNA fragments, many of the short sequencing reads (200–300 bp) generated will cover a SNP in the human population [9]. Our method works by first extracting genomic DNA from the mixed pool of cells and sequencing it. The method then employs an expectation–maximization (EM) algorithm that takes the genotypes of all the donors as well as the sequencing reads from the mixed pool as input to calculate the individual donor proportions. Using an iterative process, the algorithm determines the donor proportions that best match the expected allelic fraction with the observed allelic fraction across all the SNPs analyzed.
In this study, we demonstrated the feasibility of our approach by designing simulation experiments to determine how well our method can predict donor proportions. In the simulation experiments, we tested a number of scenarios by varying the number of donors, the number of SNPs, and the sequencing read-depth per SNP. We found that in most cases, our method accurately predicts the donor proportions even at the lowest possible read-depth (1X) as long as a sufficient number of SNPs was analyzed (> 500,000 SNPs). Finally, we empirically tested our method by sequencing a mixed pool of human donor cells and demonstrated that our approach can accurately predict donor proportions within the mixed population.
EM algorithm for estimating proportion of individual donors within the pool
We first define θ as the probability or proportion of any individual donor, which is the probability that we are trying to estimate, i.e.
$$ \theta = \left(P_1, P_2, P_3, \dots, P_N\right), \qquad \theta_n = P_n $$
where \(P_n\) is the probability or proportion of donor \(n\) within the pool of \(N\) donors, the sum of which is 1.
Next, we assume that we only analyze sequenced reads from autosomes and only at SNP positions that are known to be bi-allelic, i.e. having only two alleles, Reference (R) or Alternate (A), although the algorithm can be amended to consider the X and/or Y chromosomes as well as to incorporate multiallelic polymorphisms. Given this, we define Reads as the number of sequence reads (read-depth) for each allele for each SNP, i.e.
$$ Reads_{m,R} = \text{number of observed reads with allele } R \text{ at SNP position } m $$
$$ Reads_{m,A} = \text{number of observed reads with allele } A \text{ at SNP position } m $$
where m is the index defining the SNP at that position.
Next, we assume that the genotypes of all bi-allelic SNPs analyzed for every donor are accurately known. As such, the genotype of each donor at each SNP can only be one of the following states: RR, RA, or AA, i.e.
\(SNP_{m,n}\) = genotype of donor \(n\) at SNP \(m\) (RR, RA, or AA).
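For implementation purposes, this genotype table can be stored as an M × N matrix of reference-allele dosages; the encoding below is a hypothetical choice of ours, and the later sketches assume it:

```python
import numpy as np

# Dosage of the reference allele R per genotype: RR -> 2, RA -> 1, AA -> 0.
DOSAGE = {"RR": 2, "RA": 1, "AA": 0}

def encode_genotypes(calls):
    """calls: length-M list of length-N lists of 'RR'/'RA'/'AA' strings.
    Returns an (M, N) integer dosage matrix used by the EM sketch below."""
    return np.array([[DOSAGE[g] for g in row] for row in calls])
```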
To estimate θ, we employ an EM algorithm and initialize the values of θ so that each donor has the same starting proportion or probability [10], i.e.
$$ \theta_n^0 = \frac{1}{N} $$
where \( \theta_n^0 \) is the proportion or probability estimate of individual \(n\) at iteration 0.
Next, we calculate the Total function for each SNP given θ, which is the expected number of R and A alleles given the current estimate of θ, i.e.
$$ Total_{m,R} = \sum_{n=1}^{N} \begin{cases} \theta_n^t & \text{if } SNP_{m,n} = RR \\ 0.5\,\theta_n^t & \text{if } SNP_{m,n} = RA \\ 0 & \text{if } SNP_{m,n} = AA \end{cases} $$
$$ Total_{m,A} = \sum_{n=1}^{N} \begin{cases} 0 & \text{if } SNP_{m,n} = RR \\ 0.5\,\theta_n^t & \text{if } SNP_{m,n} = RA \\ \theta_n^t & \text{if } SNP_{m,n} = AA \end{cases} $$
where m is the index for each SNP, R and A represent the respective alleles, and \( {\theta}_n^t \) represents the current estimate of θ for individual n at the current iteration t.
Next, we calculate the likelihood function L for each individual given the current estimate of θ by going through all the SNPs (M being the total number of SNPs), i.e.
$$ L_n = \sum_{m=1}^{M} \begin{cases} \dfrac{\theta_n^t}{Total_{m,R}} \cdot Reads_{m,R} & \text{if } SNP_{m,n} = RR \\[1ex] 0.5\left(\dfrac{\theta_n^t}{Total_{m,R}} \cdot Reads_{m,R} + \dfrac{\theta_n^t}{Total_{m,A}} \cdot Reads_{m,A}\right) & \text{if } SNP_{m,n} = RA \\[1ex] \dfrac{\theta_n^t}{Total_{m,A}} \cdot Reads_{m,A} & \text{if } SNP_{m,n} = AA \end{cases} $$
Finally, we re-estimate θ for each donor for the next iteration, i.e.
$$ \theta_n^{t+1} = \frac{L_n}{\sum_{n'=1}^{N} L_{n'}} $$
This procedure is repeated until θ converges to a stable estimate; we used t = 2000 as the final iteration. The final value of t can be adjusted depending on the number of donors and SNPs analyzed. For a sample size of ten donors, we used t = 500 as the last iteration. To help explain the algorithm, we provide a working example of estimating the proportions of a mixed pool of five donors (Additional file 1: Note S1). We also include a short description of how our method would be used in a real experimental setting by comparing it against the lentiviral barcoding method (PRISM) used in Yu et al. [7] (Additional file 1: Note S2).
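The update rules above translate directly into array operations. The following is a minimal NumPy sketch of the algorithm (our own illustration, not the authors' released software; the dosage encoding of genotypes and the zero-denominator guard are implementation choices of ours):

```python
import numpy as np

def estimate_proportions(genotypes, reads_ref, reads_alt, n_iter=2000):
    """EM estimate of donor proportions theta from pooled sequencing.

    genotypes -- (M, N) reference-allele dosage matrix (2=RR, 1=RA, 0=AA)
    reads_ref, reads_alt -- length-M arrays of observed R / A read counts
    Returns a length-N array of proportions summing to 1.
    """
    M, N = genotypes.shape
    p_ref = genotypes / 2.0          # per-donor chance a read carries R
    p_alt = 1.0 - p_ref
    theta = np.full(N, 1.0 / N)      # uniform initialization, theta^0

    for _ in range(n_iter):
        # Total_{m,R} and Total_{m,A}: expected allelic fractions under theta.
        total_ref = p_ref @ theta
        total_alt = p_alt @ theta
        # Guard against division by zero when an allele is absent from the
        # pool (such SNPs contribute nothing to the corresponding term).
        total_ref = np.where(total_ref > 0, total_ref, 1.0)
        total_alt = np.where(total_alt > 0, total_alt, 1.0)
        # L_n: read counts credited to donor n, summed over all SNPs; for a
        # heterozygote this reduces to the 0.5-weighted sum in the equation.
        L = ((p_ref * theta).T @ (reads_ref / total_ref)
             + (p_alt * theta).T @ (reads_alt / total_alt))
        theta = L / L.sum()          # theta^{t+1}
    return theta
```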
Simulating individual donors in a mixed pool and estimating their proportions using the EM algorithm
Individuals were simulated by first defining the value of several variables, namely,
N, the total number of individual donors;
M, the total number of SNPs;
X, the read-depth (coverage) for every SNP.
First, a total of M SNPs were simulated by randomly assigning a minor allele frequency (MAF) by drawing from a uniform distribution in the range of 5–50%.
$$ MAF_m = \text{random number between } 5\% \text{ and } 50\% $$
Next, genotypes for each SNP were randomly assigned according to their MAF to each of the N donors, i.e. for any donor at any SNP with a MAF of \(f\), the probability of having a genotype of RR, RA, and AA is \(f^2\), \(2f(1-f)\), and \((1-f)^2\), respectively.
Next, each individual was randomly assigned a copy-number count (\(Donor_n\)) by drawing from a uniform distribution in the range of 1–10,000 to represent the true number of copies of that donor.
$$ Donor_n = \text{random number between } 1 \text{ and } 10{,}000 $$
The true proportion for each donor (\(\theta_n\)) was then calculated by taking their copy-number count divided by the sum of the copy-numbers of all donors.
$$ \theta_n = \frac{Donor_n}{\sum_{n'=1}^{N} Donor_{n'}} $$
The sequencing reads were then simulated by randomly drawing \(X\) alleles from a binomial distribution, where the probability of drawing the R allele for that SNP (\(P_{m,R}\)) is the sum of the true proportion multiplied by the likelihood of drawing the R allele given the genotype for that individual, i.e.
$$ P_{m,R} = \sum_{n=1}^{N} \begin{cases} \theta_n & \text{if } SNP_{m,n} = RR \\ 0.5\,\theta_n & \text{if } SNP_{m,n} = RA \\ 0 & \text{if } SNP_{m,n} = AA \end{cases} $$
The simulation can also be done with regard to the A allele by changing the above equation or by subtracting the probability of drawing the R allele from 1.
$$ {P}_{m,A}=1-{P}_{m,R} $$
Nonetheless, if the random draw for a read fails to draw the R allele, it is assigned the A allele, and vice versa. The simulated alleles and the SNP genotypes for all N individuals are then used as inputs to the EM algorithm to estimate the individual donor proportions. The estimated proportions are then compared to the true proportions, and the accuracy of the prediction is evaluated using the Pearson correlation coefficient (represented as R).
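A minimal sketch of this simulation procedure, written to pair with the estimator sketched earlier (our own illustration; the random-number generator and seed are implementation choices of ours):

```python
import numpy as np

def simulate_pool(n_donors, n_snps, depth, seed=0):
    """Simulate genotypes, true proportions and pooled reads as above."""
    rng = np.random.default_rng(seed)
    # MAF_m: uniform in [0.05, 0.50]; genotype dosage ~ Binomial(2, f),
    # which reproduces the f^2 / 2f(1-f) / (1-f)^2 genotype probabilities.
    maf = rng.uniform(0.05, 0.50, size=n_snps)
    genotypes = rng.binomial(2, maf[:, None], size=(n_snps, n_donors))
    # Donor_n: copy numbers uniform in [1, 10000]; theta_n by normalization.
    copies = rng.integers(1, 10001, size=n_donors)
    theta_true = copies / copies.sum()
    # P_{m,R}: pool-level chance a read carries R; draw X reads per SNP.
    p_ref = (genotypes / 2.0) @ theta_true
    reads_ref = rng.binomial(depth, p_ref)
    reads_alt = depth - reads_ref
    return genotypes, reads_ref, reads_alt, theta_true

# Usage with the estimator above:
# G, r, a, theta = simulate_pool(n_donors=10, n_snps=500_000, depth=1)
# est = estimate_proportions(G, r, a, n_iter=500)
# print(np.corrcoef(theta, est)[0, 1])   # Pearson R, true vs. estimated
```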
Pooling B-lymphocytes from Personal Genome Project samples
B-lymphocytes from the Harvard Personal Genome Project (PGP) were obtained from the NIGMS Human Genetic Cell Repository at the Coriell Institute for Medical Research (https://www.coriell.org). To create the initial pool of donor cells, we used five distinct pools of B-lymphocytes previously mixed together at approximately equal numbers (Invitrogen Countess) and kept cryopreserved in liquid nitrogen. The five pools of frozen cells were resuscitated and grown overnight separately in upright T25 flasks in a standard incubator at 37 °C with 15 mL of growth media (Thermofisher, RPMI 1640 Medium, GlutaMAX™ Supplement, HEPES + 10% fetal bovine serum + 1% Penicillin-Streptomycin [10,000 U/mL]). The pools of cells were counted (Invitrogen Countess) and cells were taken from Pools 1–5 in increments of 100,000 cells, i.e. 100,000 cells were taken from Pool 1, 200,000 cells from Pool 2, … and 500,000 cells from Pool 5. The cells were mixed together to form the final pool. To create the subsequent (more accurate) pool of donor cells, a different set of 50 donor cell lines was resuscitated and cultured for five days separately in 24-well plates in a standard incubator at 37 °C with 0.5 mL of growth media (Thermofisher, RPMI 1640 Medium, GlutaMAX™ Supplement, HEPES + 10% fetal bovine serum + 1% Penicillin-Streptomycin [10,000 U/mL]). On the day of cell sorting, cells from each donor were collected in 1.5-mL micro-centrifuge tubes and re-suspended in 0.5 mL of Dulbecco's Phosphate Buffered Saline (DPBS) solution. The donor cells were then sorted into a single 15-mL conical centrifuge tube containing 5 mL of DPBS (Sony SH800S Cell Sorter). Ten different donors were selected for each of the five pools, and 10,000, 20,000, 30,000, 40,000, and 50,000 events were used to sort the donors representing Pools 1, 2, 3, 4, and 5, respectively.
DNA extraction, library preparation, and sequencing
Genomic DNA of the initial pool was extracted using the QIAamp DNA FFPE Tissue Kit (QIAGEN). Genomic DNA of the subsequent pool was extracted using the AccuPrep Genomic DNA extraction kit (BioNEER). The extracted genomic DNA of both pools was submitted to the Biopolymers facility at Harvard Medical School (https://genome.med.harvard.edu/) for genomic DNA library preparation (Genomic-Seq Wafergen) and subsequent next-generation sequencing using Illumina MiSeq. The DNA from the initial pool yielded 5,112,179 paired sequencing reads, while the DNA from the subsequent pool yielded 13,111,543 paired sequencing reads that mapped to the human genome. The reads were aligned to the human genome reference sequence (GRCh37/hg19) using bwa (version 0.7.8-r566) [11].
SNP identification
Whole-genome sequencing information was available for all 102 PGP samples (Complete Genomics) and the genotypes of all bi-allelic SNPs within the autosomes were recorded. We compared the sequencing reads with the recorded SNPs to determine the allele for each SNP sequenced. The final alignment of the sequencing reads for the initial pool resulted in the sequencing of 1,425,723 SNPs at 1.16X coverage while the subsequent pool resulted in the sequencing of 1,988,295 SNPs at 1.23X coverage.
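The paper does not provide code for this comparison step; the following is a minimal Python sketch, using the pysam library, of counting read support for each allele at known bi-allelic SNP positions from a sorted, indexed BAM file (the SNP-list input format here is a hypothetical choice of ours):

```python
import pysam  # assumes a coordinate-sorted, indexed BAM from the bwa step

def count_alleles(bam_path, snps):
    """Count reads supporting each allele at known bi-allelic SNPs.

    snps -- iterable of (chrom, pos0, ref, alt) with 0-based positions
    Returns parallel lists of R and A read counts per SNP.
    """
    reads_ref, reads_alt = [], []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for chrom, pos, ref, alt in snps:
            n_ref = n_alt = 0
            for col in bam.pileup(chrom, pos, pos + 1, truncate=True):
                for p in col.pileups:
                    if p.is_del or p.is_refskip:
                        continue  # no base aligned at this position
                    base = p.alignment.query_sequence[p.query_position]
                    if base == ref:
                        n_ref += 1
                    elif base == alt:
                        n_alt += 1
            reads_ref.append(n_ref)
            reads_alt.append(n_alt)
    return reads_ref, reads_alt
```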
An algorithm that accurately predicts the proportion of individuals within a simulated mixed pool
To test the efficacy of our algorithm, we designed and implemented a simulation program to generate simulated data for testing the robustness of the prediction given the number of donors, number of SNPs as well as sequencing read-depth. Taking these parameters as input, the program first randomly simulates the true proportion for each donor within the mixed pool. Next, it generates genotypes for all SNPs and donors by simulating SNPs with MAF randomly selected in the range of 5–50%. Finally, for each SNP, it stochastically samples the number of each of the alleles under a probabilistic model that reflects the true donor proportion according to the assigned read-depth. The program then applies our algorithm on the simulated data to determine how accurately it can predict the individual donor proportion (see "Methods").
Using our program, we first simulated two sets of ten diploid individuals with similar proportions: the first (set A) with genotypes from 500 SNPs and a sequencing read-depth (coverage) of 1000X, and the second (set B) with genotypes from 500,000 SNPs but a sequencing read-depth of only 1X (Additional file 2: Table S1). We ran the algorithm to estimate the individual proportions given the simulated sequencing reads and the genotypes of the individuals for both sets, and found that the prediction converges to a fixed estimate (Fig. 2a, b) and accurately recovers the true simulated proportions for both set A and set B (Fig. 2c, Additional file 2: Table S2). This result shows that the algorithm is as effective on high-coverage sequencing data across a small number of SNPs as on low-coverage sequencing data across a much larger number of SNPs.
Estimating the proportions of ten simulated donor individuals. Showing the results of simulating (a) deep-coverage sequencing (1000X) on a small number (500) of SNPs and (b) low-coverage sequencing (1X) on many (500,000) SNPs. Both graphs show the estimated proportion (y-axis) by the algorithm at every iteration (x-axis). c Bar plot comparing the estimated proportion against the true proportion for both set A and set B after 500 iterations. The black bars represent the true proportion for each simulated donor, while the red and blue bars represent the estimated proportions of set A and set B, respectively
Testing the algorithm on simulated mixed pools by varying the sample size, number of SNPs, and sequencing read-depth
To test how the number of SNPs and read-depth (coverage) would scale with increased sample size, we performed simulations on pools of 100, 500, and 1000 different donors, using 500,000 SNPs with 1X, 10X, and 30X coverage. For a pool of 100 donors, we obtained Pearson correlation coefficients of 0.956, 0.994, and 0.998 for 1X, 10X, and 30X coverage, respectively, demonstrating that under these circumstances, low-coverage sequencing data are sufficient to accurately predict individual donor proportions (Fig. 3a–c, Additional file 2: Table S3). With a pool of 500 donors, the algorithm produced Pearson correlation coefficients of 0.511, 0.877, and 0.947 for 1X, 10X, and 30X coverage, respectively, indicating a drop in prediction accuracy with increased sample size (Fig. 3d–f). Finally, when the number of donors was increased to 1000, the accuracy declined further for 1X, 10X, and 30X coverage (R = 0.25, 0.665, and 0.838, respectively) (Fig. 3g–i). These results show that by analyzing 500,000 SNP positions, the algorithm can accurately estimate pools of 100 different donors at any read-depth, but higher read-depths would be required to accurately estimate donor proportions for pools with substantially more donors.
Comparing the true proportions with the estimated proportions of varying number of simulated donor individuals by simulating 500,000 SNPs at varying coverage. The x-axis represents the true simulated proportion while the y-axis represents the estimated proportion by our algorithm (EM estimated proportion). a 100 donors at 1X coverage. b 100 donors at 10X coverage. c 100 donors at 30X coverage. d 500 donors at 1X coverage. e 500 donors at 10X coverage. f 500 donors at 30X coverage. g 1000 donors at 1X coverage. h 1000 donors at 10X coverage. i 1000 donors at 30X coverage. R represents the Pearson-correlation coefficient of comparing the true proportions with the estimated proportions
To determine if the accuracy of the algorithm increases with the use of more SNPs in the analysis, we repeated the simulation experiments using 1,000,000 SNPs. Indeed, when we doubled the number of SNPs, the accuracy of all the simulation experiments increased compared to their previous counterparts (Fig. 4, Additional file 2: Table S4). This suggests that even for a pool of > 100 donors, sequencing more SNPs in general increases the accuracy of the prediction. Based on these results, we tabulated the minimal read-depth required to obtain an accurate prediction with a Pearson correlation coefficient ≥ 0.9 (Table 1).
Comparing the true proportions with the estimated proportions of varying number of simulated donor individuals by simulating 1,000,000 SNPs at varying coverage. The x-axis represents the true simulated proportion while the y-axis represents the estimated proportion by our algorithm (EM estimated proportion). a 100 donors at 1X coverage. b 100 donors at 10X coverage. c 100 donors at 30X coverage. d 500 donors at 1X coverage. e 500 donors at 10X coverage. f 500 donors at 30X coverage. g 1000 donors at 1X coverage. h 1000 donors at 10X coverage. i 1000 donors at 30X coverage. R represents the Pearson-correlation coefficient of comparing the true proportions with the estimated proportions
Table 1 Minimal read-depth required for accurate prediction of donor proportion
The method accurately predicts the donor proportions of a mixed pool of actual human donor cells
To test if our method can accurately estimate the proportions of actual human donor samples, we set up a system using a pool of immortalized B-lymphocytes from the Harvard PGP [12,13,14]. We combined five pools of PGP B-lymphocytes with ten individuals per pool at 1X, 2X, 3X, 4X, and 5X concentration, respectively (see "Methods"). We extracted genomic DNA from the pool of B-lymphocytes and subjected the DNA to low-coverage whole-genome sequencing which resulted in the sequencing of 1,425,723 SNPs at 1.16X coverage. Using our method, we estimated the individual proportion of donors within the pool of 102 PGP individuals, including 52 donors that were not part of the combined pool and acted as negative controls. We found that the method predicted the proportion of the individuals within the pool (Fig. 5, Additional file 2: Table S5). The results showed that pool 0, which consists of the 52 individuals not part of the combined pool, had very low estimated proportions, with a mean proportion of 0.07% and none of the 52 samples had proportions > 0.18%. In contrast, pools 1–5 gave mean estimated proportions of 0.57%, 1.08%, 1.87%, 2.8%, and 3.35%, respectively, which accurately reflected the expected proportions (expected proportions being 0.67%, 1.33%, 2%, 2.67%, and 3.33%, respectively).
Estimating individual donor proportion of 102 PGP donors in a mixed pool. Individuals from Pool 0 are absent from the mixed pool while individuals from Pools 1, 2, 3, 4, and 5 are represented on average at 1X, 2X, 3X, 4X, and 5X, respectively (see "Methods"). a The estimated proportion (y-axis) at each iteration (x-axis) when running the algorithm. The different colors represent donors from different pools. b Box plot of the final estimate of the donor proportion for each pool. The blue dots indicate the estimate for each individual donor
The initial pools had undergone a few rounds of passaging and were created using a relatively inaccurate method for counting cells (Invitrogen Countess). Because of this, it was expected that the predicted individual proportions within each pool would vary greatly. We therefore repeated the experiment with a more accurate way of determining the actual donor proportion before sequencing. Instead of using the pre-pooled cells, we chose a different set of 50 donor cell lines to culture individually. We then sorted cells from each donor using a cell sorter, assigning a number of live-cell events (either 10,000, 20,000, 30,000, 40,000, or 50,000) to each donor to create the new pools (pools 1–5) (see "Methods"). Although there was a single outlier in pool 4 (hu52F345), we found that our method accurately predicted the proportions of the individuals within the pool (Fig. 6, Additional file 2: Table S6). The ranges of proportions for the different pools are as follows: pool 0 (0.00–0.16%); pool 1 (0.41–1%); pool 2 (1.11–1.4%); pool 3 (1.75–2.19%); pool 4 (2.41–3.99%); pool 5 (2.87–3.26%) (Additional file 2: Table S6). We observed that pools 0–5 gave mean estimated proportions of 0.03%, 0.6%, 1.29%, 1.99%, 2.84%, and 3.08%, respectively, which accurately reflected their actual proportions (expected proportions being 0%, 0.67%, 1.33%, 2%, 2.67%, and 3.33%). Taken together, our results demonstrate that our method can accurately predict the proportions of real samples where the donor genotypes are known through whole-genome sequencing or otherwise.
Estimating individual donor proportion after cell sorting each donor into a mixed pool. Similar to Fig. 5, but a cell sorter was used to accurately sort each individual donor cells into the mixed pool (see "Methods"). Note that the donors for each pool are different from those depicted in Fig. 5 (Additional file 2: Table S6). a The estimated proportion (y-axis) at each iteration (x-axis) when running the algorithm. The different colors represent donors from different pools. b Box plot of the final estimate of the donor proportion for each pool. The blue dots indicate the estimate for each individual donor
Various ways of pooling and sequencing DNA from multiple individuals in an effort to save costs in identifying genetic variants associated with disease status have been extensively investigated in genome-wide association studies [15]. Here, we propose a radically different use of whole-genome sequencing of pools of individuals: to enable the accurate prediction of individual donor proportion of a mixed pool of human tissue samples or cell lines. Human tissue samples and cell lines are the bedrock of biomedical research and their uses have been vital for many scientific discoveries. More recently, the development of induced pluripotent stem cells (iPSCs) derived from human tissue have allowed researchers to model a variety of cell types from any given patient [16,17,18]. Hence, technologies that improve our capability to perform high-throughput assays for phenotypes from cell lines will be increasingly more important, especially in the age of personalized medicine.
We described a method which can accurately predict the individual donor proportions of a mixed pool of samples from many different donors without the need for artificial barcodes or amplification of a specific locus. Depending on the host and cell type, introducing artificial barcodes into every donor cell may not be practical or feasible for large numbers of different donors. Also, PCR amplification of exogenous barcodes may potentially bias the results, as demonstrated by previous experiments performed on mixtures of template DNA [19,20,21,22]. Using our method avoids the need to barcode every donor cell or to amplify a specific locus by PCR.
As our method effectively uses the many SNPs present in the host genome as input to identify donor proportions, it is not suitable for applications where such SNPs are not present. For example, previous research reported the use of 20-nt barcodes to simultaneously create and tag a library of yeast deletion mutants using mitotic recombination for high-throughput multiplex assays [23, 24]. The library of deletion mutants was created from cells of a single donor, and our method would not be able to differentiate between the deletion mutants as their genome-wide SNP profiles are identical. On the other hand, when multiple donor cells are used, as in the study that interrogated multiple cancer cell lines from different donors [7], our method would be highly effective for identifying the proportions of the different donor cells without the need for DNA barcodes. Our method can also be adjusted for parallel model organism screens, i.e. pooling of cells from different organisms to be interrogated together. If the genomes of the organisms are different enough, the problem becomes trivial, as it is possible to determine the origin of each sequencing read by alignment to the correct host genome. However, if the genomes of the various model organisms are similar, the main genetic difference between them may not be SNPs but other polymorphisms such as insertion-deletion polymorphisms. We can incorporate these polymorphisms or other types of genetic variants into our method for such use.
Experimentally, all that is required is genomic DNA extraction and whole-genome sequencing of the extracted DNA. The prediction of individual donor proportions is then determined computationally. Our method enables the multiplexing of phenotypic assays on multiple different donor samples in a single experiment, which significantly reduces effort and time and facilitates discoveries. It can be used for high-throughput measurements of various cellular phenotypes for the purpose of discovering genetic alleles associated with cellular phenotypes, similar to the studies performed on human traits and diseases obtained from medical record data [25,26,27,28,29,30]. While there are substantially fewer such studies of cellular phenotypes, we predict that our method would greatly accelerate such discoveries by enabling researchers to perform multiplexed testing of diverse donor cells en masse. Whether the cells are sorted via FACS or selected for via different growth conditions, the resulting proportion of each donor within the sorted or selected pool can be accurately estimated using our method, resulting in the simultaneous testing of numerous different donor cells in a single experiment (Fig. 1). Current work in our laboratory is focused on utilizing this method to perform multiple phenotype characterizations on thousands of cell lines from PGP and other cohorts to uncover genetic alleles associated with these phenotypes. We have also made the software for estimating donor proportions as well as for performing the simulation experiments freely available (see "Availability of data and material") so that other groups can harness our method for their research experiments as well.
In summary, we have developed a method to accurately predict the individual proportion from a mixed pool of cells from different donors without artificial barcodes or amplification of a specific locus. The method enables the simultaneous testing of cells from a pool of different donors and is transformative for scaling up the number of donor samples used. Instead of performing lentiviral barcoding manually for each donor sample, our method relies on having whole-genome genotype information for each donor, which is now readily available for many samples. Our method lowers the costs and associated resources for performing such experiments and would help facilitate multiplexed experimentation on large cohorts of donor cells.
EM:
Expectation-maximization
FACS:
Fluorescence-activated cell sorting
MAF:
Minor allele frequency
PGP:
Personal Genome Project
SNP:
Single nucleotide polymorphism
Cheung HW, Cowley GS, Weir BA, Boehm JS, Rusin S, Scott JA, et al. Systematic investigation of genetic vulnerabilities across cancer cell lines reveals lineage-specific dependencies in ovarian cancer. Proc Natl Acad Sci. 2011;108:12372–7.
Marcotte R, Brown KR, Suarez F, Sayad A, Karamboulas K, Krzyzanowski PM, et al. Essential gene profiles in breast, pancreatic, and ovarian cancer cells. Cancer Discov. 2012;2:172–89.
Cowley GS, Weir BA, Vazquez F, Tamayo P, Scott JA, Rusin S, et al. Parallel genome-scale loss of function screens in 216 cancer cell lines for the identification of context-specific genetic dependencies. Sci Data. 2014;1:140035.
Li Y, Oosting M, Deelen P, Ricaño-Ponce I, Smeekens S, Jaeger M, et al. Inter-individual variability and genetic influences on cytokine responses to bacteria and fungi. Nat Med. 2016;22:952–60.
Quach H, Rotival M, Pothlichet J, Loh Y-HE, Dannemann M, Zidane N, et al. Genetic adaptation and neandertal admixture shaped the immune system of human populations. Cell. 2016;167:643–656.e17.
Nédélec Y, Sanz J, Baharian G, Szpiech ZA, Pacis A, Dumaine A, et al. Genetic ancestry and natural selection drive population differences in immune responses to pathogens. Cell. 2016;167:657–669.e21.
Yu C, Mannan AM, Yvone GM, Ross KN, Zhang Y-L, Marton MA, et al. High-throughput identification of genotype-specific cancer vulnerabilities in mixtures of barcoded tumor cell lines. Nat Biotechnol. 2016;34:419–23.
Abecasis GR, Auton A, Brooks LD, DePristo MA, Durbin RM, Handsaker RE, et al. An integrated map of genetic variation from 1,092 human genomes. Nature. 2012;491:56–65.
Bentley DR, Balasubramanian S, Swerdlow HP, Smith GP, Milton J, Brown CG, et al. Accurate whole human genome sequencing using reversible terminator chemistry. Nature. 2008;456:53–9.
Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B Methodol. 1977;39:1–38.
Li H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. 2013. http://arxiv.org/abs/1303.3997. ArXiv13033997 Q-Bio.
Ball MP, Thakuria JV, Zaranek AW, Clegg T, Rosenbaum AM, Wu X, et al. A public resource facilitating clinical use of genomes. Proc Natl Acad Sci. 2012;109:11920–7.
Ball MP, Bobe JR, Chou MF, Clegg T, Estep PW, Lunshof JE, et al. Harvard Personal Genome Project: lessons from participatory public research. Genome Med. 2014;6:10.
Chan Y, Tung M, Garruss AS, Zaranek SW, Chan YK, Lunshof JE, et al. An unbiased index to quantify participant's phenotypic contribution to an open-access cohort. Sci Rep. 2017;7:46148.
Schlötterer C, Tobler R, Kofler R, Nolte V. Sequencing pools of individuals — mining genome-wide polymorphism data without big funding. Nat Rev Genet. 2014;15:749.
Burkhardt MF, Martinez FJ, Wright S, Ramos C, Volfson D, Mason M, et al. A cellular model for sporadic ALS using patient-derived induced pluripotent stem cells. Mol Cell Neurosci. 2013;56:355–64.
Millman JR, Xie C, Dervort AV, Gürtler M, Pagliuca FW, Melton DA. Generation of stem cell-derived β-cells from patients with type 1 diabetes. Nat Commun. 2016;7:11463.
Middelkamp S, van Heesch S, Braat AK, de Ligt J, van Iterson M, Simonis M, et al. Molecular dissection of germline chromothripsis in a developmental context using patient-derived iPS cells. Genome Med. 2017;9:9.
Polz MF, Cavanaugh CM. Bias in template-to-product ratios in multitemplate PCR. Appl Environ Microbiol. 1998;64:3724–30.
Hansen MC, Tolker-Nielsen T, Givskov M, Molin S. Biased 16S rDNA PCR amplification caused by interference from DNA flanking the template region. FEMS Microbiol Ecol. 1998;26:141–9.
Schloss PD, Gevers D, Westcott SL. Reducing the effects of PCR amplification and sequencing artifacts on 16S rRNA-based studies. PLoS One. 2011;6:e27310.
Kalle E, Kubista M, Rensing C. Multi-template polymerase chain reaction. Biomol Detect Quantif. 2014;2:11–29.
Pierce SE, Davis RW, Nislow C, Giaever G. Genome-wide analysis of barcoded Saccharomyces cerevisiae gene-deletion mutants in pooled cultures. Nat Protoc. 2007;2:2958.
Smith AM, Heisler LE, Mellor J, Kaper F, Thompson MJ, Chee M, et al. Quantitative phenotyping via deep barcode sequencing. Genome Res. 2009;19:1836–42.
Wood AR, Esko T, Yang J, Vedantam S, Pers TH, Gustafsson S, et al. Defining the role of common variation in the genomic and biological architecture of adult human height. Nat Genet. 2014;46:1173–86.
Locke AE, Kahali B, Berndt SI, Justice AE, Pers TH, Day FR, et al. Genetic studies of body mass index yield new insights for obesity biology. Nature. 2015;518:197–206.
Chan Y, Salem RM, Hsu Y-HH, McMahon G, Pers TH, Vedantam S, et al. Genome-wide analysis of body proportion classifies height-associated variants by mechanism of action and implicates genes important for skeletal development. Am J Hum Genet. 2015;96:695–708.
Lim ET, Raychaudhuri S, Sanders SJ, Stevens C, Sabo A, MacArthur DG, et al. Rare complete knockouts in humans: population distribution and significant role in autism spectrum disorders. Neuron. 2013;77:235–42.
Lim ET, Liu YP, Chan Y, Tiinamaija T, Käräjämäki A, Madsen E, et al. A novel test for recessive contributions to complex diseases implicates Bardet-Biedl syndrome gene BBS10 in idiopathic type 2 diabetes and obesity. Am J Hum Genet. 2014;95:509–20.
Lim ET, Uddin M, De Rubeis S, Chan Y, Kamumbu AS, Zhang X, et al. Rates, distribution and implications of postzygotic mosaic mutations in autism spectrum disorder. Nat Neurosci. 2017;20:1217–24.
We thank all the participants of the Harvard PGP for contributing cell lines and whole-genome sequencing data. We thank all staff members of the PGP for their effort in creating and maintaining the PGP. We also like to thank the Wyss Institute for Biologically Inspired Engineering for providing the necessary resources for performing cell sorting as well as general laboratory supplies for carrying out the research.
AC was funded by the Burroughs Wellcome Fund Career Award for Medical Scientists. This work was funded by grant NIH RM1HG008525 from the National Human Genome Research Institute, NIH, and the Robert Wood Johnson Foundation (grant: 74178). The views expressed here do not necessarily reflect the views of the Foundation.
All data on PGP participants can be found at the PGP website at (https://my.pgp-hms.org). All software and datasets supporting the conclusions of this article are freely available at (https://pgpresearch.med.harvard.edu/poolseq).
Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA, 02115, USA
Yingleong Chan, Ying Kai Chan, Daniel B. Goodman, Xiaoge Guo, Elaine T. Lim & George M. Church
Department of Genetics, Harvard Medical School, Boston, MA, 02115, USA
Harvard-MIT Health Sciences and Technology, Cambridge, MA, 02139, USA
Daniel B. Goodman
Department of Pathology and Cell Biology, Columbia University College of Physicians and Surgeons, New York, NY, 10032, USA
Alejandro Chavez
YC, DBG, ETL, and GMC conceptualized the study. YC, YKC, DBG, XG, AC, and ETL performed the experiments. YC wrote the code for simulation and analysis of data. YC, YKC, and ETL wrote the initial manuscript. DBG, AC, and GMC provided critical feedback on the manuscript. GMC supervised the project. All authors edited and reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Yingleong Chan or George M. Church.
The protocols for recruitment of PGP participants for sequencing and creation of cell lines were approved by Harvard University's Human Research Protection Program (IRB 15461). The pooling of these cell lines and estimation of individual donor proportion was also determined to not be human subjects research (IRB18–0480). All PGP participants consented for the sequencing of their own genetic material as well as the creation of their cell lines [12].
GMC holds leadership positions in many companies related to DNA sequencing technologies. A full list of these companies is available at http://arep.med.harvard.edu/gmc/tech.html. The remaining authors declare that they have no competing interests.
Note S1–2. Working example of the method and comparison of the method against PRISM. (PDF 278 kb)
Table S1–6. All supplementary tables. (XLSX 638 kb)
Chan, Y., Chan, Y.K., Goodman, D.B. et al. Enabling multiplexed testing of pooled donor cells through whole-genome sequencing. Genome Med 10, 31 (2018). https://doi.org/10.1186/s13073-018-0541-6
Multiplexed testing
Barcode free method
Single nucleotide polymorphisms
Expectation maximization algorithm
Discrete & Continuous Dynamical Systems - B
June 2009 , Volume 11 , Issue 4
Global existence and internal stabilization for a reaction-diffusion system posed on non coincident spatial domains
Sebastian Aniţa, William Edward Fitzgibbon and Michel Langlais
2009, 11(4): 805-822. doi: 10.3934/dcdsb.2009.11.805
We consider a two-component Reaction-Diffusion system posed on non coincident spatial domains and featuring a reaction term involving an integral kernel. The question of global existence of componentwise nonnegative solutions is assessed. Then we investigate the stabilization of one of the solution components to zero via an internal control distributed on a small subdomain while preserving nonnegativity of both components. Our results apply to predator-prey systems.
Convergence of a finite volume scheme for nonlocal reaction-diffusion systems modelling an epidemic disease
Mostafa Bendahmane and Mauricio Sepúlveda
2009, 11(4): 823-853. doi: 10.3934/dcdsb.2009.11.823
A finite-volume scheme for a nonlocal three-component reaction-diffusion system modeling an epidemic disease with susceptible, infected, and recovered individuals is analyzed. For this SIR model, the existence of solutions to the finite volume scheme and its convergence to a weak solution of the PDE are established. The convergence proof is based on deriving a series of a priori estimates and on using a general $L^p$ compactness criterion. Finally, numerical simulations from the finite volume scheme are given.
Lifting in equation-free methods for molecular dynamics simulations of dense fluids
Yves Frederix, Giovanni Samaey, Christophe Vandekerckhove, Ting Li, Erik Nies and Dirk Roose
Within the context of multiscale computations, equation-free methods have been developed. In this approach, the evolution of a system is simulated on the macroscopic level while only a microscopic model is explicitly available. To this end, a coarse time stepper for the macroscopic variables can be constructed, based on appropriately initialized microscopic simulations. In this paper, we investigate the initialization of the microscopic simulator using the macroscopic variables only (called lifting in the equation-free framework) when the microscopic model is a molecular dynamics (MD) description of a mono-atomic dense fluid. We assume a macroscopic model to exist in terms of the lowest order velocity moments of the particle distribution (density, velocity and temperature). The major difficulty is to design a lifting operator that accurately reconstructs the physically correct state of the fluid (i.e., the higher order moments) at a reasonable computational cost. We construct a lifting operator, as well as a restriction operator for the reverse mapping. For a simple model problem, we perform a systematic numerical study to assess the time scales on which the lifting errors disappear after reinitialization (healing); we also examine the effects on the simulated macroscopic behavior. The results show that, although in some cases accurate initialization of the higher order moments is not crucial, in general a detailed study of the lifting operator is required.
Yves Frederix, Giovanni Samaey, Christophe Vandekerckhove, Ting Li, Erik Nies, Dirk Roose. Lifting in equation-free methods for molecular dynamics simulations of dense fluids. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 855-874. doi: 10.3934/dcdsb.2009.11.855.
Dynamic bifurcation of the complex Swift-Hohenberg equation
Jongmin Han and Masoud Yari
In this paper we are concerned with the dynamic bifurcation of the complex Swift-Hohenberg equation on a closed interval in $\mathbb R$. We consider the equations under the Dirichlet and the periodic boundary conditions. It is shown that the equation bifurcates from the trivial solution to an attractor when the control parameter crosses the critical value.
Jongmin Han, Masoud Yari. Dynamic bifurcation of the complex Swift-Hohenberg equation. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 875-891. doi: 10.3934/dcdsb.2009.11.875.
Relaxation oscillation profile of limit cycle in predator-prey system
Sze-Bi Hsu and Junping Shi
It is known that some predator-prey systems can possess a unique limit cycle which is globally asymptotically stable. For a prototypical predator-prey system, we show that the solution curve of the limit cycle exhibits temporal patterns of a relaxation oscillator, or a Heaviside function, when a certain parameter is small.
Sze-Bi Hsu, Junping Shi. Relaxation oscillation profile of limit cycle in predator-prey system. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 893-911. doi: 10.3934/dcdsb.2009.11.893.
Constrained stability and instability of polynomial difference equations with state-dependent noise
Cónall Kelly and Alexandra Rodkina
We examine the stability and instability of solutions of a polynomial difference equation with state-dependent Gaussian perturbations, and describe a phenomenon that can only occur in discrete time. For a particular set of initial values, we find that solutions approach equilibrium asymptotically in a highly regulated fashion: monotonically and bounded above by a deterministic sequence. We observe this behaviour with a probability that can be made arbitrarily high by choosing the initial value sufficiently small.
However, for any fixed initial value, the probability of instability is nonzero, and in fact we can show that as the magnitude of the initial value increases, the probability of instability approaches $1$.
Cónall Kelly, Alexandra Rodkina. Constrained stability and instability of polynomial difference equations with state-dependent noise. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 913-933. doi: 10.3934/dcdsb.2009.11.913.
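For intuition about the phenomenon described above (regulated decay with high probability for small initial values, instability probability approaching 1 for large ones), one can run a quick Monte Carlo experiment on a toy equation of this type. The cubic map, the state-dependent noise coefficient $\sigma x_n^2$ (chosen so that the noise vanishes at the equilibrium) and all parameter values below are hypothetical stand-ins, not the equation studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cubic difference equation with state-dependent Gaussian noise,
#   x_{n+1} = x_n - a x_n^3 + sigma x_n^2 xi_n,  xi_n ~ N(0, 1),
# chosen only to mimic the phenomenon in the abstract; the paper's
# equation and parameters may differ.
a, sigma, n_steps, n_paths = 1.0, 0.5, 2000, 5000

def prob_bounded(x0):
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        xi = rng.standard_normal(n_paths)
        step = x - a * x**3 + sigma * x**2 * xi   # noise vanishes at x = 0
        x = np.where(alive, step, 0.0)            # freeze escaped paths
        alive &= np.abs(x) < 10.0                 # flag escaped trajectories
    return alive.mean()

for x0 in [0.05, 0.2, 0.5, 1.0, 1.5]:
    print(f"x0 = {x0:4.2f}  P(bounded after {n_steps} steps) ~ {prob_bounded(x0):.3f}")
```

With these illustrative values, the estimated probability of remaining bounded is close to 1 for small initial values and drops toward 0 as the initial magnitude grows, mirroring the dichotomy stated in the abstract.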
On the behaviour at infinity of solutions to stationary convection-diffusion equation in a cylinder
Iryna Pankratova and Andrey Piatnitski
The work focuses on the behaviour at infinity of solutions to a second-order elliptic equation with first-order terms in a semi-infinite cylinder. A Neumann boundary condition is imposed on the lateral boundary of the cylinder and a Dirichlet condition on its base. Under the assumption that the coefficients stabilize to a periodic regime, we prove the existence of a bounded solution, its stabilization to a constant, and provide a necessary and sufficient condition for uniqueness.
Iryna Pankratova, Andrey Piatnitski. On the behaviour at infinity of solutions to stationary convection-diffusion equation in a cylinder. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 935-970. doi: 10.3934/dcdsb.2009.11.935.
Point-vortex interaction in an oscillatory deformation field: Hamiltonian dynamics, harmonic resonance and transition to chaos
Xavier Perrot and Xavier Carton
2009, 11(4): 971-995 doi: 10.3934/dcdsb.2009.11.971
We study the Hamiltonian system of two point vortices, embedded in external strain and rotation. This external deformation field mimics the influence of neighboring vortices or currents in complex flows. When the external field is stationary, the equilibria of the two vortices, symmetric with respect to the center of the plane, are determined. The stability analysis indicates that two saddle points lie at the crossing of separatrices, which bound streamfunction lobes having neutral centers.
When the external field varies periodically with time, resonance becomes possible between the forcing and the oscillation of vortices around the neutral centers. A multiple time-scale expansion provides the slow-time evolution equation for these vortices, which, for weak periodic deformation, oscillate within their original (steady) trajectory. These analytical results accurately compare with numerical integration of the complete equations of motion. As the periodic deformation field increases, this vortex oscillation migrates out of the original trajectories, towards the location of the separatrices. With a periodic external field, these separatrices have given way to heteroclinic trajectories with multiple self-intersections, as shown by the calculation of the Melnikov function.
Chaos appears in vortex trajectories as they enter the aperiodic domain around the heteroclinic curves. In fact, this chaotic domain progressively fills out the plane, replacing KAM tori and cantori, as the periodic deformation field reaches finite amplitude. The appearance of windows of periodicity is illustrated.
Xavier Perrot, Xavier Carton. Point-vortex interaction in an oscillatory deformation field: Hamiltonian dynamics, harmonic resonance and transition to chaos. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 971-995. doi: 10.3934/dcdsb.2009.11.971.
Linear stability analysis of systems with Preisach memory
Alexander Pimenov and Dmitrii I. Rachinskii
2009, 11(4): 997-1018 doi: 10.3934/dcdsb.2009.11.997
We consider differential equations coupled with the input-output memory relation defined by the Preisach operator. The differential equation relates an instant value of the rate of change of the output of the Preisach operator with an instant value of its input. We propose an algorithm for the linearisation of the evolution operator of the system and apply it to define the characteristic multiplier of periodic solutions, which determines their stability. Examples of the system considered include models of terrestrial hydrology and electronic oscillators with hysteresis.
Alexander Pimenov, Dmitrii I. Rachinskii. Linear stability analysis of systems with Preisach memory. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 997-1018. doi: 10.3934/dcdsb.2009.11.997.
Pseudospectral method using generalized Laguerre functions for singular problems on unbounded domains
Zhong-Qing Wang, Ben-Yu Guo and Yan-Na Wu
2009, 11(4): 1019-1038 doi: 10.3934/dcdsb.2009.11.1019
In this paper, we develop a pseudospectral method for differential equations defined on unbounded domains. We first introduce Gauss-type interpolations using a family of generalized Laguerre functions, and establish basic approximation results. Then we propose a pseudospectral method for differential equations on unbounded domains, whose coefficients may degenerate or grow at infinity. As examples, we consider two model problems. The proposed schemes match the underlying problems properly and exhibit spectral accuracy. Numerical results demonstrate the efficiency of this new approach.
Zhong-Qing Wang, Ben-Yu Guo, Yan-Na Wu. Pseudospectral method using generalized Laguerre functions for singular problems on unbounded domains. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 1019-1038. doi: 10.3934/dcdsb.2009.11.1019.
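The basic building block of such methods is Gauss-type quadrature on the half-line. A minimal sketch (using the standard Gauss-Laguerre nodes from NumPy rather than the generalized Laguerre functions of the paper) illustrates the spectral accuracy one expects:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Gauss-Laguerre quadrature on [0, inf): sum(w * f(x)) approximates the
# integral of exp(-x) f(x) dx. The exact value for f = sin is 1/2.
for n in [4, 8, 16, 32]:
    x, w = laggauss(n)
    err = abs(np.sum(w * np.sin(x)) - 0.5)
    print(f"n = {n:2d}  error = {err:.2e}")
```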
Abundance of heteroclinic and homoclinic orbits for the hyperchaotic Rössler system
Daniel Wilczak
The four dimensional Rössler system is investigated. For this system the Poincaré map exhibits chaotic dynamics with two expanding directions and one strongly contracting direction. It is shown that the 16th iterate of this Poincaré map has a nontrivial invariant set on which it is semiconjugated to the full shift on two symbols. Moreover, it is proven that there exist infinitely many homoclinic and heteroclinic solutions connecting periodic orbits of period two and four, respectively. The proof utilizes the method of covering relations with smooth tools (cone conditions).
The proof is computer-assisted: interval arithmetic is used to obtain bounds on the Poincaré map and its derivative.
Daniel Wilczak. Abundance of heteroclinic and homoclinic orbits for the hyperchaotic Rössler system. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 1039-1055. doi: 10.3934/dcdsb.2009.11.1039.
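The flavor of such computer-assisted proofs can be conveyed with a toy interval class: every operation returns an interval guaranteed to contain all pointwise results, so bounds on a map propagate rigorously. The sketch below deliberately omits directed (outward) rounding of the endpoints, which an actual proof must handle, for example via a validated library such as CAPD.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        c = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(c), max(c))

# Enclose the image of f(x) = x*x + x over [0.4, 0.6]: the result is
# guaranteed to contain f(x) for every x in the input interval.
x = Interval(0.4, 0.6)
print(x * x + x)   # ~ Interval(lo=0.56, hi=0.96)
```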
Error analysis of stabilized semi-implicit method of Allen-Cahn equation
Xiaofeng Yang
We consider in this paper the stabilized semi-implicit (in time) scheme and the splitting scheme for the Allen-Cahn equation $\phi_t-\Delta\phi+\varepsilon^{-2}f(\phi)=0$ arising from phase transitions in material science. For the stabilized first-order scheme, we show that it is unconditionally stable and that the error bound depends on $\varepsilon^{-1}$ in some lower polynomial order, using the spectrum estimate of [2, 10, 11]. In addition, first- and second-order operator splitting schemes are proposed, and their accuracy is tested and compared with the semi-implicit schemes numerically.
Xiaofeng Yang. Error analysis of stabilized semi-implicit method of Allen-Cahn equation. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 1057-1070. doi: 10.3934/dcdsb.2009.11.1057.
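To make the stabilized first-order scheme concrete: the Laplacian is treated implicitly, the nonlinearity $f(\phi)=\phi^3-\phi$ explicitly, and a stabilizing term $S(\phi^{n+1}-\phi^{n})/\varepsilon^{2}$ is added to damp the explicit part. Below is a minimal 1D periodic sketch with an FFT-based implicit solve; the stabilization constant and all parameters are illustrative choices, not those analyzed in the paper.

```python
import numpy as np

# Minimal 1D periodic sketch of the first-order stabilized semi-implicit
# scheme for phi_t - lap(phi) + eps^{-2} f(phi) = 0, f(phi) = phi^3 - phi:
# Laplacian implicit (via FFT), nonlinearity explicit, plus the stabilizing
# term S*(phi^{n+1} - phi^n)/eps^2. S and all parameters are illustrative.
N, L, eps, dt, S = 256, 2 * np.pi, 0.1, 1e-2, 2.0
k2 = (2 * np.pi * np.fft.fftfreq(N, d=L / N))**2   # squared wavenumbers

phi = 0.05 * np.random.default_rng(1).standard_normal(N)
for _ in range(2000):
    f = phi**3 - phi
    rhs = np.fft.fft((1 + dt * S / eps**2) * phi - (dt / eps**2) * f)
    phi = np.fft.ifft(rhs / (1 + dt * S / eps**2 + dt * k2)).real

print("phi range after relaxation:", phi.min(), phi.max())  # phases near +-1
```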
$\Gamma$-convergence with Dirichlet boundary condition and Landau-Lifshitz functional for thin film
Jian Zhai and Zhihui Cai
In this paper, we consider the $\Gamma$-convergence of the Landau-Lifshitz ferromagnetic model in the presence of a Bloch wall in the disk of $\mathbb{R}^2$ with Dirichlet boundary condition.
Jian Zhai, Zhihui Cai. $\Gamma$-convergence with Dirichlet boundary condition and Landau-Lifshitz functional for thin film. Discrete & Continuous Dynamical Systems - B, 2009, 11(4): 1071-1085. doi: 10.3934/dcdsb.2009.11.1071.
Where does the use of tensors to describe orientation dependence of physical phenomena arise from?
In the context of anisotropy, I have often read that the use of a rank 2 tensor is "a model". But what is the idea behind this choice? Can anyone describe in what sense the use of a tensor in this context is a "model"?
tensor-calculus stress-strain models linear-systems
Giulia
The tensor itself is not the model, but the tensor is used to model (one could also say describe or quantify) the anisotropy.
One example is an anisotropic electric conductor. The conductivity $\sigma$ describes the current that flows in response to an electric field: $\vec j = \sigma \vec E$. In isotropic materials (e.g. glass, microcrystalline metals when averaged), this quantity is a scalar, which means that the current points in the same direction as the electric field and that the magnitude of the current density generated by the electric field does not depend on the direction.
In general, however, the conductivity is a tensor of rank 2. For example, in a graphite monocrystal (which consists of loosely coupled layers) the conductivity in the layers is much higher than the conductivity perpendicular to the layers. So in a coordinate system where the layers are stacked along the $z$-direction we will have a conductivity tensor of the form (assuming conductivity is approximately isotropic within the layers): \begin{align*} \sigma &= \begin{pmatrix} \sigma_l & 0 & 0 \\ 0 & \sigma_l & 0 \\ 0 & 0 & \sigma_p \end{pmatrix} \end{align*} Here $\sigma_l \gg \sigma_p$. So we can compute the current that occurs if we apply an arbitrary electric field with this tensor, and the current will be larger along the layers than perpendicular to the layers. The nice thing about this relation is that $\vec j = \sigma \vec E$ holds in any coordinate system; we just have to transform the components of $\sigma$ accordingly. (We can even determine the orientation of the layers from a macroscopic measurement of the conductivity tensor by computing its principal axes.)
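A small numerical illustration of the graphite example (with made-up values for $\sigma_l$ and $\sigma_p$) shows both points: the current is deflected toward the layer plane, and the components of $\sigma$ transform as a rank-2 tensor under rotations so that $\vec j = \sigma \vec E$ holds in every frame.

```python
import numpy as np

# Anisotropic conductivity of a layered material (layers stacked along z).
# The numerical values are illustrative, with sigma_l >> sigma_p.
sigma_l, sigma_p = 1.0e5, 1.0e2                 # in-layer vs perpendicular
sigma = np.diag([sigma_l, sigma_l, sigma_p])

E = np.array([0.0, 1.0, 1.0])                   # field at 45 deg to the layers
j = sigma @ E
print(j)  # current is strongly deflected toward the layer plane

# Under a rotation R of the coordinate system, the components transform as a
# rank-2 tensor, sigma' = R sigma R^T, and j' = sigma' E' still holds.
theta = np.deg2rad(30)
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])
sigma_rot = R @ sigma @ R.T
assert np.allclose(R @ j, sigma_rot @ (R @ E))
```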
Note, that there are anisotropies that must be described by tensors of higher rank. For example, the mechanical stress in a material is a rank 2 tensor and the elasticity tensor (relating the stress to the strain) is a rank 4 tensor.
That tensors are the objects that occur here has two reasons:
The equations must be invariant under the choice of the coordinate systems and tensors are the natural objects when we seek equations invariant under rotations (or more general under arbitrary coordinate transformations).
We often work in linearized theories and the equation $A^{(n)} = B^{(n+m)} C^{(m)}$ (where the juxtaposition denotes contraction and the upper indices denote the tensor rank) is the most general linear relation between the tensors $A$ and $C$.
Sebastian Riese
I would expand upon Sebastian's nice answer to point out that any orientation-sensitive quantity $f(\hat v)$ may be expanded by spherical harmonics and the symmetric rank-2 tensor can be often used to represent the first nonzero term.
To understand this, start by noting that all of these spherical harmonics come with a polynomial structure factor. You may recognize them from atomic orbital theory (the standard table of orbitals on Wikipedia shows them).
In this case we are trying to represent the wavefunction $\psi(\vec r)$.
Notice that in the third row there are polynomials labeling the different orbitals. For a fixed angular momentum (l), a state with that momentum may be labeled by a degree l polynomial with complex coefficients, where we consider polynomials proportional to $x^2 + y^2 + z^2$ to be equivalent to zero (for $l = 2$ this trace part just contributes to the isotropic ($l = 0$) response).
In particular, for $l = 2$ (the quadrupole moment), we're talking about quadratic polynomials, and we can represent these as a traceless symmetric matrix $A$ by
$$v^T A v,$$
where $v$ is the vector $v^T = (x,y,z)$. This traceless symmetric matrix $A$, once we let it depend on spatial position, becomes our rank-2 symmetric tensor. For many applications, this is the first multipole moment that is nonzero (because generic potentials are quadratic near equilibrium). In those cases, to first order, this matrix is the object of interest.
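As a small concrete check (with arbitrary illustrative coefficients), one can split a symmetric matrix $A$ into its trace part and its traceless part and verify that only the trace part gives a direction-independent response:

```python
import numpy as np

# A quadratic orientation response q(v) = v^T A v encoded as a symmetric
# matrix; subtracting the trace part leaves the pure l = 2 (quadrupole)
# content. The coefficients are arbitrary illustrative numbers.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.5]])
A_traceless = A - np.trace(A) / 3 * np.eye(3)   # remove isotropic (l=0) part

v = np.array([1.0, 2.0, -1.0])
v = v / np.linalg.norm(v)                       # unit direction
print("full response   :", v @ A @ v)
print("quadrupole part :", v @ A_traceless @ v)
print("isotropic offset:", np.trace(A) / 3)     # direction-independent

# The 5 independent components of A_traceless correspond to the five
# l = 2 spherical harmonics.
```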
In general, however, the spherical harmonics are labelled by symmetric tensors of all rank, and this decomposition has to do with the representation theory of $SO(3)$. A more advanced model may have to include higher multipole moments to capture the orientation-dependence.
I think Sebastian's example of the conductivity matrix is actually a bit confusing, since there we're talking about a vector-valued quantity that also depends on orientation, namely $\vec j(\vec E)$. In this case, the conductivity matrix really comes from the $l = 1$ moment (it's rank (1,1), not rank (2,0)). The symmetry of this matrix is not guaranteed by representation theory but by Onsager reciprocity. Higher moments would come from nonlinear corrections to Ohm's law but this symmetry relation would still hold near equilibrium!
Similarly, the elastic tensor is a rank (2,2) tensor, symmetric in each factor ($l=2$) and actually also symmetric between the two, but the reason for this last symmetry is mysterious to me...
Ryan Thorngren
In the context of anisotropy, I have often read that the use of a rank 2 tensor is "a model". But what is the idea behind this choice? Can anyone describe in what sense the use of a tensor in this context is a "model"?
Without context, it's difficult to guess at what exactly someone might mean by referring to a second-order tensor as "a model", as it could be a reference to a bunch of different observations.
Personally, I think that I've complained most about the typical presumption of locality that goes into it. For a simple early example, there's the Cauchy stress tensor from fluid mechanics.

I can completely understand why folks would think that this is pretty general, since it seems to avoid assumptions about dynamics at any given point.

However, the tensor reflects the implicit assumption that mechanical interactions are local, i.e. that they're fully captured at any given point. And perhaps this seems like a defensible enough approximation for a crystalline solid, but even in that best-case scenario, it's merely an approximation.

Then on the opposite end of the spectrum, e.g. low-pressure gases, there are significant mean free paths, such that the mechanical interactions of a fluid are poorly described by assuming the local model implied by the tensor.
This isn't necessarily what a speaker means in any given context, though it's one possible type of modeling assumption represented by a tensor that someone might be referring to.
I found a Wikipedia article that discusses a relaxation of this assumption:
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory.
–"Finite strain theory", Wikipedia
Though in general, if you're working with second-order tensors, you're probably working with a model that's using those tensors to capture infinitely localized interactions, which is an extremely-convenient-but-obviously-flawed model in any real-world physics situation.
$\begingroup$ Modelling using tensors does not imply locality: You can have a tensorial integral kernel that is convoluted with the input quantity. This convolution decouples to a point-wise product in Fourier space and you arrive at $\vec j(\omega, \vec k) = \sigma(\omega, \vec k) \vec E(\omega, \vec k)$, so interpreted right, $j = \sigma E$ does cover non-local and time dependent linear response! $\endgroup$ – Sebastian Riese May 8 '18 at 20:54
Canard solutions in neural mass models: consequences on critical regimes
Elif Köksal Ersöz & Fabrice Wendling
The Journal of Mathematical Neuroscience, volume 11, Article number: 11 (2021)
Mathematical models at multiple temporal and spatial scales can unveil the fundamental mechanisms of critical transitions in brain activities. Neural mass models (NMMs) consider the average temporal dynamics of interconnected neuronal subpopulations without explicitly representing the underlying cellular activity. The mesoscopic level offered by the neural mass formulation has been used to model electroencephalographic (EEG) recordings and to investigate various cerebral mechanisms, such as the generation of physiological and pathological brain activities. In this work, we consider a NMM widely accepted in the context of epilepsy, which includes four interacting neuronal subpopulations with different synaptic kinetics. Due to the resulting three-time-scale structure, the model yields complex oscillations of relaxation and bursting types. By applying the principles of geometric singular perturbation theory, we unveil the existence of the canard solutions and detail how they organize the complex oscillations and excitability properties of the model. In particular, we show that boundaries between pathological epileptic discharges and physiological background activity are determined by the canard solutions. Finally we report the existence of canard-mediated small-amplitude frequency-specific oscillations in simulated local field potentials for decreased inhibition conditions. Interestingly, such oscillations are actually observed in intracerebral EEG signals recorded in epileptic patients during pre-ictal periods, close to seizure onsets.
Brain dynamics emerges from neural entities interacting at different levels, from single neurons to large-scale neural networks. At each level, transitions between different regimes, such as firing/resting states in single neurons and up/down states in neural networks, are associated with both physiological functions and pathological activity [1–3]. One of the features of the system that determines how these transitions would occur is excitability. The concept of neural excitability for single neurons was introduced initially by Louis Lapique in 1907 [4, 5]. Alan Hodgkin, who then re-introduced the concept [6], classified neural excitability with respect to the firing rate of neurons in response to injected steps of currents. Excitability properties of neural systems can vary with internal dynamics, leading to different physiological and pathological behavior [7–10]. At the cortical scale, for instance, variations in excitability [11] and loss of network resilience [12] are associated with epileptic seizures. Yet, what may be as important as a transition itself is the dynamics preceding the transition. In the context of epilepsy, for example, identification of the dynamic features along the path to a transition is crucial for intervention and prevention of seizures.
Mathematical models of brain activity range from microscopic level of single cell dynamics to macroscopic level of interactions between large scale neural systems. Neural mass models (NMMs) consider the average temporal dynamics of interconnected neural subpopulations without explicitly representing the underlying mechanisms at the level of single cells. The mesoscopic level offered by the neural mass formulation has been used to model brain signals, from local field potentials (LFPs) to global electroencephalographic (EEG) recordings, and to investigate various cerebral rhythms [13–15]. NMMs have also been used extensively to study pathological dynamics such as in epilepsy [16–19], Alzheimer's disease [20] and Parkinson's disease [21, 22].
Interactions between slow and fast components of neural systems, hence, of their mathematical models, result in multiple time-scale complex oscillations, such as relaxation, bursting and mixed-mode oscillations. Geometric singular perturbation theory (GSPT) is a key tool for understanding the interaction between the geometry of the system and the emerging multiple time-scale dynamics. In particular, canard solutions, which can exist in multiple time-scale systems with a folded geometry, appear as building blocks of complex oscillations in both phenomenological and neurophysiologically plausible models ranging from single cell [23–26] to neural networks [27, 28]. The canard phenomenon in such systems has been related to neural excitability [29], excitability thresholds [23, 30–34], and boundaries between different type of solutions, such as subthreshold oscillations and large amplitude spiking/bursting oscillations [24, 28, 35–43]. While such canard-organized fine structures have been shown in a wide range of two-time-scale models, recent studies started to explore canard-mediated processes in systems with three or more time-scales [44–46].
In this study we investigate critical regimes in the NMM initially presented in [16]. This physiologically-grounded model has been extensively used for modeling structural and functional changes leading to epileptic activity observed in intracranial (stereoelectroencephalography, SEEG) signals. The model includes four interacting neuronal subpopulations: two interconnected subpopulations of glutamatergic pyramidal neurons and GABAergic inhibitory interneurons (somatostatin-positive (SOM+) and parvalbumin-positive (PV+), also called dendrite-projecting slow and soma-projecting fast interneurons, respectively). Although the model was introduced for the CA1 region of the hippocampus, the implementation of these four subpopulations mediating glutamatergic and GABAergic signaling makes it generic enough to be considered for many other cortical regions [47]. The activity of each subpopulation is given by the corresponding average post-synaptic potential (PSP), which is determined by two functions: 1) a "wave to pulse" function, \(S(v)=5/(1+\exp (0.56(6-v)))\), transforming the incoming synaptic potentials into a firing rate; and 2) a "pulse to wave" function converting the input average firing rate into a mean PSP at the input of each subpopulation, that is, \(h(t)=W t /\tau _{w} \exp (-t/\tau _{w})\), where W represents the average synaptic gain and \(\tau _{w}\) is the average synaptic time constant mimicking the rise and decay of actual PSPs. The system schematized in Fig. 1a reads
$$\begin{aligned}& \dot{y}_{0} = y_{5}, \end{aligned}$$
(1a)
$$\begin{aligned}& \dot{y}_{5} = \frac{A}{\tau _{a}}S[y_{1} - y_{2} - y_{3}] - \frac{2}{\tau _{a}} y_{5} - \frac{1}{\tau _{a}^{2}}y_{0}, \end{aligned}$$
(1b)
$$\begin{aligned}& \dot{y}_{1} = y_{6}, \end{aligned}$$
(1c)
$$\begin{aligned}& \dot{y}_{6} = \frac{A}{\tau _{a}}\bigl\{ p(t) + C_{2} S[C_{1}y_{1}]\bigr\} - \frac{2}{\tau _{a}}y_{6} - \frac{1}{\tau _{a}^{2}}y_{1}, \end{aligned}$$
(1d)
$$\begin{aligned}& \dot{y}_{2} = y_{7}, \end{aligned}$$
(1e)
$$\begin{aligned}& \dot{y}_{7} = \frac{B}{\tau _{b}} C_{4} S[C_{3}y_{0}] - \frac{2}{\tau _{b}}y_{7} - \frac{1}{\tau _{b}^{2}}y_{2}, \end{aligned}$$
(1f)
$$\begin{aligned}& \dot{y}_{3} = y_{8}, \end{aligned}$$
(1g)
$$\begin{aligned}& \dot{y}_{8} = \frac{G}{\tau _{g}} C_{7} S \biggl[C_{5}y_{0} - \frac{C_{6}}{C_{4}}y_{4}\biggr] - \frac{2}{\tau _{g}}y_{8} - \frac{1}{\tau _{g}^{2}}y_{3}. \end{aligned}$$
(1h)
The variables \(y_{0,1}\) stand for the excitatory PSPs mediated by the two pyramidal neuron subpopulations, \(y_{2}\) and \(y_{3}\) are the inhibitory PSPs mediated by the SOM+ and PV+ interneuron subpopulations, respectively, and \(y_{4}\) denotes the slow inhibitory PSP received by the PV+ subpopulation, which is proportional to \(y_{2}\). Variables \(y_{j}\) (\(j\in \{ 5,6,7,8\}\)) are auxiliary variables introduced to convert the second-order differential equations describing the pulse-to-wave functions into first-order differential equations [13]. The parameters A, B, G are the synaptic gains, the \(C_{i}\) are the connectivity constants representing the average number of synaptic contacts, and \(p(t)\) is the external (noisy) cortical input (\(p(t)= p+\xi \), where p is the mean of the external input and ξ is a random variable following a normal distribution \(\mathcal{N}(0,\sigma ^{2})\)). The synaptic time constants are given by \(\tau _{a}\), \(\tau _{b}\), \(\tau _{g}\). The major contribution to LFPs (as recorded by intracranial electrodes in patients who are candidates for surgery) corresponds to the PSPs summated at the level of the pyramidal neurons, which are geometrically aligned "in palisades", i.e. one relative to the other and perpendicular to the plane of the cortical layers. In the model, the LFP is given by the sum of the excitatory PSP (EPSP) and inhibitory PSPs (IPSPs) received by the glutamatergic pyramidal cells, hence \(\text{LFP} = y_{1} - y_{2} - y_{3}\).
Model diagram and time series of a bursting solution. (a) Model diagram showing excitatory (red connections) and inhibitory (blue connections) interactions between subpopulations of pyramidal neurons (PYR and PYR') and inhibitory interneurons (PV+ and SOM+). The post-synaptic potential of a subpopulation, which is the output of \(h(t)\), is multiplied by a synaptic coefficient \(C_{i}\) before being received by another subpopulation. (b) Time series of a bursting solution for the parameter set in Table 1. The panels from the top to the bottom show the time courses of post-synaptic potentials of PV+ (\(v_{3}\)), PYR (\(v_{0}\)), SOM+ (\(v_{2}\)) and the local field potential (LFP), i.e. \(A \tau _{a} p + C_{2} \tau _{a} v_{1}- C_{4} \tau _{b} v_{2} - C_{7} \tau _{g} v_{3}\)
As introduced in [48], under the following variable conversion:
$$ \biggl(\frac{t}{\tau _{g}}, \frac{y_{0}}{\tau _{a}}, \frac{y_{1}}{\tau _{a}}, \frac{y_{2}}{\tau _{b}}, \frac{y_{3}}{\tau _{g}}, y_{5}, y_{6}, y_{7}, y_{8} \biggr) \implies (\tilde{t}, v_{0}, v_{1}, v_{2}, v_{3}, y_{5}, y_{6}, y_{7}, y_{8}), $$
with \(\delta = \tau _{g}/\tau _{a} \) and \(\varepsilon = \tau _{a}/\tau _{b}\), system (1a)–(1h) can be written in the following deterministic (\(\sigma = 0\)) slow–fast form:
$$\begin{aligned}& \frac{d v_{3}}{d\tilde{t}} = y_{8} := F_{3}(y_{8}), \end{aligned}$$
(2a)
$$\begin{aligned}& \frac{d y_{8}}{d\tilde{t}} = G S[C_{5} \tau _{a} v_{0} - C_{6} \tau _{b} v_{2} ] - v_{3} - 2 y_{8} := F_{8}(v_{0}, v_{2}, v_{3}, y_{8}), \end{aligned}$$
(2b)
$$\begin{aligned}& \frac{d v_{0}}{d\tilde{t}} = \delta y_{5} := \delta F_{0}(y_{5}), \end{aligned}$$
(2c)
$$\begin{aligned}& \begin{aligned}[t] \frac{d y_{5}}{d\tilde{t}} & = \delta \bigl(A S[A \tau _{a} p + C_{2} \tau _{a} v_{1}- C_{4} \tau _{b} v_{2} - C_{7} \tau _{g} v_{3}]-v_{0} - 2 y_{5}\bigr) \\ & := \delta F_{5}(v_{0}, v_{1}, v_{2}, v_{3}, y_{5}), \end{aligned} \end{aligned}$$
(2d)
$$\begin{aligned}& \frac{d v_{1}}{d\tilde{t}} = \delta y_{6} := \delta F_{1}(y_{6}), \end{aligned}$$
(2e)
$$\begin{aligned}& \frac{d y_{6}}{d\tilde{t}} = \delta \bigl(A S[C_{1} \tau _{a} v_{0}] - v_{1} - 2y_{6}\bigr) := \delta F_{6}(v_{0}, v_{1}, y_{6}), \end{aligned}$$
(2f)
$$\begin{aligned}& \frac{d v_{2}}{d\tilde{t}} = \delta \varepsilon y_{7} := \delta \varepsilon F_{2}(y_{7}), \end{aligned}$$
(2g)
$$\begin{aligned}& \frac{d y_{7}}{d\tilde{t}} = \delta \varepsilon \bigl(B S[C_{3} \tau _{a} v_{0}] - v_{2} - 2 y_{7}\bigr) := \delta \varepsilon F_{7}(v_{0}, v_{2}, y_{7}). \end{aligned}$$
(2h)
In this manuscript, we will consider system (2a)–(2h) for the slow–fast analysis and (1a)–(1h) for simulations under stochastic input. We will be using the parameter set given in Table 1, unless otherwise stated, for which \(\delta = 0.3\) and \(\varepsilon = 0.2\). Numerical bifurcation analysis is performed in the AUTO-07p software [49]. The stochastic differential equations were integrated using the Euler–Maruyama method with a step size \(dt = 10^{-4}\) s in the XPPAUT software [50].
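For readers who want to reproduce the qualitative behavior, here is a minimal Euler–Maruyama sketch of system (2a)–(2h) in Python. The time constants are chosen so that \(\delta = 0.3\) and \(\varepsilon = 0.2\) as stated above, and \(C_{3} = C_{5} = 80\), \(B = 5\) follow the regime of Figs. 5–6; the remaining gains, connectivity constants, input statistics and the per-step noise treatment are illustrative placeholders, not the values of Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama sketch of the slow-fast system (2a)-(2h). Time constants
# give delta = tau_g/tau_a = 0.3 and eps = tau_a/tau_b = 0.2 as in the text;
# C3 = C5 = 80 and B = 5 follow the regime of Figs. 5-6. The remaining
# constants and the noise treatment are illustrative placeholders (see
# Table 1 of the paper for the actual values).
tau_a, tau_b, tau_g = 0.01, 0.05, 0.003
delta, eps = tau_g / tau_a, tau_a / tau_b
A, B, G = 5.0, 5.0, 20.0
C1, C2, C3, C4, C5, C6, C7 = 135.0, 108.0, 80.0, 25.0, 80.0, 10.0, 80.0
p_mean, sigma = 2.0, 0.5

def S(v):                       # wave-to-pulse sigmoid from the text
    return 5.0 / (1.0 + np.exp(0.56 * (6.0 - v)))

def rhs(u, p):
    v3, y8, v0, y5, v1, y6, v2, y7 = u
    lfp = A * tau_a * p + C2 * tau_a * v1 - C4 * tau_b * v2 - C7 * tau_g * v3
    return np.array([
        y8,                                                       # (2a)
        G * S(C5 * tau_a * v0 - C6 * tau_b * v2) - v3 - 2 * y8,   # (2b)
        delta * y5,                                               # (2c)
        delta * (A * S(lfp) - v0 - 2 * y5),                       # (2d)
        delta * y6,                                               # (2e)
        delta * (A * S(C1 * tau_a * v0) - v1 - 2 * y6),           # (2f)
        delta * eps * y7,                                         # (2g)
        delta * eps * (B * S(C3 * tau_a * v0) - v2 - 2 * y7),     # (2h)
    ])

dt, n = 1e-3, 200_000           # fast-time step and horizon
u = np.zeros(8)
lfp = np.empty(n)
for i in range(n):
    # Simple parametric-noise treatment: p is redrawn each step; a true
    # white-noise input would instead scale the fluctuation with 1/sqrt(dt).
    p = p_mean + sigma * rng.standard_normal()
    u += dt * rhs(u, p)
    lfp[i] = A*tau_a*p_mean + C2*tau_a*u[4] - C4*tau_b*u[6] - C7*tau_g*u[0]
```

With these placeholder values the trace is not expected to reproduce Fig. 1b quantitatively; substituting the values of Table 1 should recover the bursting patterns discussed below.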
Table 1 Parameter values for the bursting-type discharges
As noticed in [48], system (2a)–(2h) is a three-time-scale system written in fast form with \((v_{3}, y_{8})\) being fast, \((v_{0}, y_{5}, v_{1}, y_{6})\) slow and \((v_{2}, y_{7})\) super-slow variables. Köksal Ersöz et al. [48] have focused on electrophysiological pre-ictal bursting patterns recorded in human patients just before the onset of seizure. Pre-ictal bursting patterns are characterized by fast oscillatory discharges (which will be referred to as spikes) followed by a slower oscillation (a simulated pattern with the parameter set in Table 1 is exemplified in Fig. 1b). The authors have reproduced pre-ictal bursting and unveiled the mechanism yielding these solutions by dissecting the three-time-scale structure of the model. They have discussed appropriate stimulation strategies for aborting the pre-ictal bursting and, hence, preventing a subsequent epileptic seizure. However, they did not focus on possible slow–fast transitions. Here we extend the slow–fast analysis initiated in [48] by investigating the role of slow manifolds in transitions to relaxation- and bursting-type solutions. We will focus on how canard trajectories shape the different routes from physiological to pathological brain activity. In what follows, we will go briefly through the multiple-time-scale analysis presented in [48], and then show the different canard structures present in the model and how they take part in critical transitions. Finally, we will see the system's response to stochastic inputs near critical regimes, and make a remark on the slow oscillations observed along the path to seizure in SEEG signals recorded during pre-surgical evaluation of two patients with drug-resistant epilepsy.
System (2a)–(2h) expressed in the fast time t̃ is called a fast system. The slow system is obtained by defining \(\tilde{t}_{s} = \delta \tilde{t}\),
$$\begin{aligned}& \delta \frac{d v_{3}}{d\tilde{t}_{s}} = F_{3}(y_{8}), \end{aligned}$$
(3a)
$$\begin{aligned}& \delta \frac{d y_{8}}{d\tilde{t}_{s}} = F_{8}(v_{0}, v_{2}, v_{3}, y_{8}), \end{aligned}$$
(3b)
$$\begin{aligned}& \frac{d v_{0}}{d\tilde{t}_{s}} = F_{0}(y_{5}), \end{aligned}$$
(3c)
$$\begin{aligned}& \frac{d y_{5}}{d\tilde{t}_{s}} = F_{5}(v_{0}, v_{1}, v_{2}, v_{3}, y_{5}), \end{aligned}$$
(3d)
$$\begin{aligned}& \frac{d v_{1}}{d\tilde{t}_{s}} = F_{1}(y_{6}), \end{aligned}$$
(3e)
$$\begin{aligned}& \frac{d y_{6}}{d\tilde{t}_{s}} = F_{6}(v_{0}, v_{1}, y_{6}), \end{aligned}$$
(3f)
$$\begin{aligned}& \frac{d v_{2}}{d\tilde{t}_{s}} = \varepsilon F_{2}(y_{7}), \end{aligned}$$
(3g)
$$\begin{aligned}& \frac{d y_{7}}{d\tilde{t}_{s}} = \varepsilon F_{7}(v_{0}, v_{2}, y_{7}), \end{aligned}$$
(3h)
where the functions \(F_{i}(\cdot)\) are as defined in (2a)–(2h). The super-slow system is obtained by defining \(\tilde{t}_{ss} = \varepsilon \tilde{t}_{s} = \varepsilon \delta \tilde{t}\):
$$\begin{aligned}& \delta \varepsilon \frac{d v_{3}}{d\tilde{t}_{ss}} = F_{3}(y_{8}), \end{aligned}$$
(4a)
$$\begin{aligned}& \delta \varepsilon \frac{d y_{8}}{d\tilde{t}_{ss}} = F_{8}(v_{0}, v_{2}, v_{3}, y_{8}), \end{aligned}$$
(4b)
$$\begin{aligned}& \varepsilon \frac{d v_{0}}{d\tilde{t}_{ss}} = F_{0}(y_{5}), \end{aligned}$$
(4c)
$$\begin{aligned}& \varepsilon \frac{d y_{5}}{d\tilde{t}_{ss}} = F_{5}(v_{0}, v_{1}, v_{2}, v_{3}, y_{5}), \end{aligned}$$
(4d)
$$\begin{aligned}& \varepsilon \frac{d v_{1}}{d\tilde{t}_{ss}} = F_{1}(y_{6}), \end{aligned}$$
(4e)
$$\begin{aligned}& \varepsilon \frac{d y_{6}}{d\tilde{t}_{ss}} = F_{6}(v_{0}, v_{1}, y_{6}), \end{aligned}$$
(4f)
$$\begin{aligned}& \frac{d v_{2}}{d\tilde{t}_{ss}} = F_{2}(y_{7}), \end{aligned}$$
(4g)
$$\begin{aligned}& \frac{d y_{7}}{d\tilde{t}_{ss}} = F_{7}(v_{0}, v_{2}, y_{7}). \end{aligned}$$
(4h)
Systems (2a)–(2h), (3a)–(3h) and (4a)–(4h) describe different dynamics in the singular limits \(\varepsilon \to 0\) and/or \(\delta \to 0\), although they are equivalent for \(\varepsilon \neq 0\) and \(\delta \neq 0\). Letting \(\delta \to 0\) in (2a)–(2h) yields the fast layer problem (2a)–(2b) which describes the dynamics of the fast variables \((v_{3}, y_{8})\) for fixed values of the slow (\(v_{0}\)) and super-slow (\(v_{2}\)) variables. The critical manifold is defined by the equilibrium points of the fast layer problem, that is,
$$ S^{0} = \bigl\{ (v_{3},y_{8},v_{0},y_{5},v_{1},y_{6}, v_{2},y_{7}) \in \mathbb{R}^{8} \mid G S[C_{5} \tau _{a} v_{0} - C_{6} \tau _{b} v_{2} ] - v_{3} =0\bigr\} , $$
(5)
which lies in the \((y_{8}=0)\)-space. Since the eigenvalues of the Jacobian matrix of the fast layer problem defined by (2a)–(2b) with respect to \((v_{3}, y_{8})\) are \(\lambda _{1,2} = -1\), the 6-dimensional critical manifold \(S^{0}\) is normally hyperbolic and stable; thus, it is perturbed to local slow manifolds for sufficiently small \(\delta >0\). Therefore, the fast dynamics can be approximated by slow dynamics as suggested by the Fenichel theorem [51].
Setting \(\delta \to 0\) in (3a)–(3h) gives an algebraic-differential slow reduced problem,
$$\begin{aligned}& 0 = F_{3}(y_{8}), \end{aligned}$$
(6a)
$$\begin{aligned}& 0 = F_{8}(v_{0}, v_{2}, v_{3}, y_{8}), \end{aligned}$$
(6b)
$$\begin{aligned}& \frac{d v_{0}}{d\tilde{t}_{s}} = F_{0}(y_{5}), \end{aligned}$$
(6c)
$$\begin{aligned}& \frac{d y_{5}}{d\tilde{t}_{s}} = F_{5}(v_{0}, v_{1}, v_{2}, v_{3}, y_{5}), \end{aligned}$$
(6d)
$$\begin{aligned}& \frac{d v_{1}}{d\tilde{t}_{s}} = F_{1}(y_{6}), \end{aligned}$$
(6e)
$$\begin{aligned}& \frac{d y_{6}}{d\tilde{t}_{s}} = F_{6}(v_{0}, v_{1}, y_{6}), \end{aligned}$$
(6f)
$$\begin{aligned}& \frac{d v_{2}}{d\tilde{t}_{s}} = \varepsilon F_{2}(y_{7}), \end{aligned}$$
(6g)
$$\begin{aligned}& \frac{d y_{7}}{d\tilde{t}_{s}} = \varepsilon F_{7}(v_{0}, v_{2}, y_{7}), \end{aligned}$$
(6h)
which describes the slow dynamics restricted to \(S^{0}\). System (6a)–(6h) is a two-time-scale system of 4 slow/2 super-slow variables with ε being the time-scaling parameter. The equilibria of the slow layer problem in the \(\varepsilon \to 0\) limit define the super-slow manifold \(L^{0}\), which is reduced to
$$\begin{aligned} L^{0} = & \bigl\{ (v_{3},y_{8},v_{0},y_{5},v_{1},y_{6}, v_{2},y_{7}) \in S^{0} \mid \\ & A S\bigl[ A \tau _{a} p+C_{2} \tau _{a} A S[C_{1} \tau _{a} v_{0} ]- C_{4} \tau _{b} v_{2}-C_{7} \tau _{g} v_{3} \bigr]-v_{0}=0\bigr\} , \end{aligned}$$
(7)
and restricted to \(S^{0}\) by the algebraic condition \(v_{3} = G S[C_{5} \tau _{a} v_{0} - C_{6} \tau _{b} v_{2}] = \mathcal{K}(v_{0}, v_{2})\) in (6a)–(6b). The super-slow dynamics restricted to the 2-dimensional manifold \(L^{0}\), hence to \(S^{0}\), are given by the super-slow reduced system in the \(\varepsilon \to 0\) limit of (4a)–(4h).
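The manifold \(L^{0}\) and its fold set can be traced numerically by solving the algebraic condition (7) for \(v_{2}\) at each \(v_{0}\), with \(v_{3}\) eliminated through \(\mathcal{K}\). The sketch below uses illustrative placeholder parameters (Table 1 of the paper gives the actual ones) and locates folds as sign changes of a finite-difference derivative of \(\mathcal{M}\).

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: trace the super-slow manifold L0 as v2 = M(v0) by solving the
# algebraic condition (7) for v2 at each v0, with v3 eliminated through
# v3 = K(v0, v2) on S0; folds are sign changes of dM/dv0. All parameter
# values are illustrative placeholders (Table 1 gives the actual ones).
tau_a, tau_b, tau_g = 0.01, 0.05, 0.003
A, G, p = 5.0, 20.0, 2.0
C1, C2, C4, C5, C6, C7 = 135.0, 108.0, 25.0, 80.0, 10.0, 80.0

def S(v):
    return 5.0 / (1.0 + np.exp(0.56 * (6.0 - v)))

def K(v0, v2):                 # slow manifold S0 gives v3 = K(v0, v2)
    return G * S(C5 * tau_a * v0 - C6 * tau_b * v2)

def L0(v2, v0):                # = 0 defines L0, cf. equation (7)
    arg = (A * tau_a * p + C2 * tau_a * A * S(C1 * tau_a * v0)
           - C4 * tau_b * v2 - C7 * tau_g * K(v0, v2))
    return A * S(arg) - v0

v0s = np.linspace(0.05, 12.0, 500)
M = np.full_like(v0s, np.nan)
grid = np.linspace(-20.0, 80.0, 800)
for i, v0 in enumerate(v0s):
    vals = L0(grid, v0)
    flips = np.nonzero(np.diff(np.sign(vals)))[0]
    if flips.size:             # first root; assumes the bracket is wide enough
        M[i] = brentq(L0, grid[flips[0]], grid[flips[0] + 1], args=(v0,))

dM = np.gradient(M, v0s)
folds = v0s[np.nonzero(np.diff(np.sign(dM)))[0]]
print("approximate fold locations (v0):", folds)
```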
In order to investigate the super-slow flow on \(L^{0}\), we consider the two-time-scale system (6a)–(6h) with the fast variable \(v_{3}\) on \(S^{0}\), i.e. \(v_{3} = \mathcal{K}(v_{0},v_{2})\), and rewrite the slow reduced system (6a)–(6h) as
$$\begin{aligned}& \frac{d v_{0}}{d\tilde{t}_{s}} = F_{0}(y_{5}), \end{aligned}$$
(8a)
$$\begin{aligned}& \frac{d y_{5}}{d\tilde{t}_{s}} = F_{5}\bigl(v_{0}, v_{1}, v_{2}, \mathcal{K}(v_{0},v_{2}), y_{5} \bigr), \end{aligned}$$
(8b)
$$\begin{aligned}& \frac{d v_{1}}{d\tilde{t}_{s}} = F_{1}(y_{6}), \end{aligned}$$
(8c)
$$\begin{aligned}& \frac{d y_{6}}{d\tilde{t}_{s}} = F_{6}(v_{0}, v_{1}, y_{6}), \end{aligned}$$
(8d)
$$\begin{aligned}& \frac{d v_{2}}{d\tilde{t}_{s}} = \varepsilon F_{2}(y_{7}), \end{aligned}$$
(8e)
$$\begin{aligned}& \frac{d y_{7}}{d\tilde{t}_{s}} = \varepsilon F_{7}(v_{0}, v_{2}, y_{7}). \end{aligned}$$
(8f)
Applying the time-scaling \(\tilde{\tau }_{s} = \varepsilon \tilde{t}_{s}\) and taking the singular limit \(\varepsilon \to 0\) give the algebraic-differential system
$$\begin{aligned}& 0 = F_{0}(y_{5}), \end{aligned}$$
(9a)
$$\begin{aligned}& 0 = F_{5}\bigl(v_{0}, v_{1}, v_{2}, \mathcal{K}(v_{0},v_{2}), y_{5}\bigr), \end{aligned}$$
(9b)
$$\begin{aligned}& 0 = F_{1}(y_{6}), \end{aligned}$$
(9c)
$$\begin{aligned}& 0 = F_{6}(v_{0}, v_{1}, y_{6}), \end{aligned}$$
(9d)
$$\begin{aligned}& \frac{d v_{2}}{d\tilde{\tau }_{s}} = F_{2}(y_{7}), \end{aligned}$$
(9e)
$$\begin{aligned}& \frac{d y_{7}}{d\tilde{\tau }_{s}} = F_{7}(v_{0}, v_{2}, y_{7}). \end{aligned}$$
(9f)
The algebraic conditions (9a)–(9d) define the 'critical manifold' of (8a)–(8f), which is equivalent to \(L^{0}\) given by (7). Notice that \(L^{0}\) is restricted to the zero plane of the \((y_{5}, y_{6})\)-space. Assuming that \(v_{2}\) is some function of \(v_{0}\) on \(L^{0}\), i.e. \(v_{2} = \mathcal{M}(v_{0})\), the fold points on \(L^{0}\) are defined by
$$ \mathcal{F} = \biggl\{ (v_{0}, v_{1}, v_{2}, v_{3}, y_{5}, y_{6}, y_{7}, y_{8}) \in L^{0} \Bigm| v_{2} = \mathcal{M}(v_{0}), \frac{\partial \mathcal{M}(v_{0})}{\partial v_{0}} = 0 \biggr\} . $$
(10)
Figure 2a shows \(S^{0}\) and \(L^{0}\) in the \((v_{0}, v_{2}, v_{3})\)-space, and Fig. 2b shows \(L^{0}\) in the \((y_{7}, v_{2}, v_{0})\)-space. The super-slow manifold \(L^{0}\) expands between the lower horizontal and vertical planes of \(S^{0}\). The part of the curve \(L^{0}\) on the lower horizontal plane of \(S^{0}\) is folded with respect to \(v_{2}\) along the fold curves \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) defined by (10), i.e. \(\mathcal{F} = \mathcal{F}_{1} \cup \mathcal{F}_{2}\). In this projection, the 1-D fold curves divide \(L^{0}\) into two stable (left-hand side \(L^{0}_{l}\) and right-hand side \(L^{0}_{r}\)) and one unstable (middle \(L^{0}_{m}\)) branches in the \((v_{0}, v_{2}, v_{3})\)-space. We also verify that the four eigenvalues of \(L^{0}\) (two real and two complex conjugate) have negative real parts along the stable parts of \(L^{0}\). One of the real eigenvalues changes sign along \(\mathcal{F}_{1,2}\), hence the unstable middle branch is of saddle type. Along the stable and unstable branches \(L^{0}\) is normally hyperbolic, so \(L^{0}\) is perturbed to local super-slow manifolds for small values of \(\varepsilon > 0\) within (6a)–(6h); see the extension of Fenichel theory for systems with more than two time-scales [52]. On the other hand, the dynamics near the non-hyperbolic fold curves \(\mathcal{F}_{1,2}\) should be investigated by using the elements of GSPT.
Critical manifold, slow manifold and folded singular points. (a) Critical manifold \(S^{0}\) (green surface), super-slow manifold \(L^{0}\) (red curve) and a bursting orbit in the \((v_{0}, v_{2}, v_{3})\)-space. The curve \(L^{0}\) is divided into three branches at \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) (red dots) where it changes stability. The middle branch of the \(L^{0}\) curve (\(L^{0}_{m}\)) between \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) is unstable (dashed). The stable left-hand side branch (\(L^{0}_{l}\)), \(\mathcal{F}_{1}\), \(L^{0}_{m}\) and \(\mathcal{F}_{2}\) are entirely on the almost horizontal part of \(S^{0}\) (approximately on the \((v_{3} \approx 0)\)-plane). The stable right-hand side branch of \(L^{0}\) (\(L^{0}_{r}\)) expands on both the horizontal and vertical parts of \(S^{0}\). Arrows indicate the corresponding time-scale (single-headed for super-slow, double-headed for slow dynamics). (b) Super-slow manifold \(L^{0}\) (red surface), fold curves \(\mathcal{F}_{1,2}\) (black lines) and folded singular points \(p_{1,2}\) (red dots) in the \((y_{7}, v_{2}, v_{0})\)-space. Arrows indicate the corresponding time-scale
Following the usual strategy, we consider the desingularized version of the super-slow dynamics on \(L^{0}\), given by the desingularized slow reduced system (DSRS), reading
$$\begin{aligned}& \frac{d v_{0}}{d\hat{\tau }_{s}} = -y_{7}, \end{aligned}$$
(11a)
$$\begin{aligned}& \frac{d y_{7}}{d\hat{\tau }_{s}} = - \frac{\partial \mathcal{M}(v_{0})}{\partial v_{0}} F_{7}\bigl(v_{0}, \mathcal{M}(v_{0}), y_{7}\bigr), \end{aligned}$$
(11b)
where \(\tilde{\tau }_{s} = \frac{\partial \mathcal{M}(v_{0})}{\partial v_{0}} \hat{\tau }_{s}\). The equilibria of (11a)–(11b) on the fold set \(\mathcal{F}\) are located at \((v_{0}^{p1}, y_{7}^{p1})=(1.2343,0)\) and \((v_{0}^{p2}, y_{7}^{p2})=(9.9976,0)\) for the parameter set given in Table 1. These equilibrium points, which are not generally the true equilibria of (8a)–(8f), are related to the folded singularities of (8a)–(8f), hence of (2a)–(2h). On the other hand, equilibrium points \((v_{0}^{F_{7}}, y_{7}^{F_{7}})\), i.e. \(F_{7}(v_{0}^{F_{7}}, y_{7}^{F_{7}}) = 0\), are ordinary singularities since they are also equilibria of (8a)–(8f), hence of (2a)–(2h). Figure 2b shows \(L^{0}\), fold curves \(\mathcal{F}_{1,2}\) and folded singular points \(p_{1,2}\) in the \((y_{7}, v_{2}, v_{0})\)-space.
Stability of the equilibrium points of the desingularized (slow) reduced system on the fold set determines the type of the folded singularities of the original system. Classification of these equilibrium points is based on the linear stability analysis. When the desingularized (slow) reduced system is planar, this analysis can be done using the trace and the determinant of the Jacobian matrix at the fold equilibrium. If both are different from zero, the fold equilibrium can be a folded saddle, a folded node or a folded focus. If the determinant equals zero but not the trace, then the desingularized flow has a degenerate equilibrium point, which is a folded saddle-node. A folded saddle-node is either related to a saddle-node bifurcation of the folded equilibria or a transcritical bifurcation of a folded equilibrium with an ordinary equilibrium. The latter case refers to the folded saddle-node type II (FSN II) singularity [53, 54], where a folded node becomes a folded saddle and a regular saddle becomes a regular node. The original system exhibits a singular Hopf bifurcation close to a FSN II singularity [55, 56].
The Jacobian matrix of (11a)–(11b) has the following general form:
$$\begin{aligned} J = \begin{bmatrix} 0 & -1 \\ -\frac{\partial ^{2} \mathcal{M}(v_{0}^{*})}{\partial v_{0}^{2}} F_{7}(v_{0}^{*}, y_{7}^{*}) - \frac{\partial \mathcal{M}(v_{0}^{*})}{\partial v_{0}} \frac{\partial {F_{7}(v_{0}^{*}, y_{7}^{*})}}{\partial v_{0}} & 2 \frac{\partial \mathcal{M}(v_{0}^{*})}{\partial v_{0}} \end{bmatrix}, \end{aligned}$$
(12)
where \((v_{0}^{*}, y_{7}^{*})\) stands for the equilibrium point of interest. Since on the folded equilibria \(2\frac{\partial \mathcal{M}(v_{0}^{p1, p2})}{\partial v_{0}}= 0\), the trace and determinant of (12) on the folded equilibria read
$$\begin{aligned} \textbf{tr}\bigl(J^{p1, p2}\bigr) = 0, \qquad \textbf{det} \bigl(J^{p1, p2}\bigr) = - \frac{\partial ^{2} \mathcal{M}(v_{0}^{p1, p2})}{\partial v_{0}^{2}} F_{7} \bigl(v_{0}^{p1, p2}, y_{7}^{p1, p2}\bigr). \end{aligned}$$
(13)
The trace and determinant of (12) on the regular equilibria read
$$\begin{aligned} \textbf{tr}\bigl(J^{F_{7}}\bigr) = 2 \frac{\partial \mathcal{M}(v_{0}^{F_{7}})}{\partial v_{0}}, \qquad \textbf{det}\bigl(J^{F_{7}}\bigr) = - \frac{\partial \mathcal{M}(v_{0}^{F_{7}})}{\partial v_{0}} \frac{\partial {F_{7}(v_{0}^{F_{7}}, y_{7}^{F_{7}})}}{\partial v_{0}}. \end{aligned}$$
(14)
Notice that the generic folded singularity condition is violated due to the fact that \(\frac{\partial (\mathcal{F}_{v_{2}}\dot{v}_{2} +\mathcal{F}_{y_{7}}\dot{y}_{7} )}{\partial v_{0}} = 0\) in (12), and \(\textbf{tr}(J^{p1, p2}) = 0\) in (13). Therefore, the folded singularities determined by (11a)–(11b) are not generic and a folded equilibrium is one of the following types: a saddle for \(\textbf{det}(J^{p1, p2})<0\), a center for \(\textbf{det}(J^{p1, p2})>0\), a nilpotent for \(\textbf{det}(J^{p1, p2}) = 0\). The latter degenerate type corresponds to a point in the parameter space at which a folded singularity and a regular singularity meet, i.e. \(\textbf{tr}(J^{F_{7}}) = 0\) and \(\textbf{det}(J^{F_{7}}) = 0\) in (14). Consequently, the equilibrium points of (11a)–(11b) related to the folded and regular singularities undergo degenerate transcritical bifurcations where (12) has two zero-eigenvalues.
Figure 3 shows the bifurcation diagram of (11a)–(11b) with respect to B in the region of interest. Two straight lines of the equilibria \((v_{0}^{p1}, y_{7}^{p1})\) and \((v_{0}^{p2}, y_{7}^{p2})\) intersect with the regular equilibria curve \(F_{7}(v_{0}, y_{7})\) at two bifurcation points, \(BP_{1}\) at \(B_{BP1} \approx 16.7817\) and \(BP_{2}\) at \(B_{BP2} \approx 5.4817\), which are degenerate transcritical bifurcations. For \(B< B_{BP1}\), the equilibrium \((v_{0}^{p1}, y_{7}^{p1})\) is a center with two complex conjugate eigenvalues. After the bifurcation at \(BP_{1}\), \((v_{0}^{p1}, y_{7}^{p1})\) becomes a saddle. Consequently, the system (8a)–(8f) (and (2a)–(2h)) has a folded-saddle singularity near \(p_{1}\) for \(B > B_{BP1}\). The equilibrium \((v_{0}^{p2},y_{7}^{p2})\) is of a saddle type for \(B < B_{BP2}\) and becomes a center with two complex conjugate eigenvalues at \(B = B_{BP2}\). Hence, the system (8a)–(8f) (and (2a)–(2h)) has a folded-saddle singularity near \(p_{2}\) for \(B < B_{BP2}\). Finally, in a neighborhood of \(BP_{1}\), the equilibrium points along the \(F_{7}(v_{0}, y_{7})\) curve are of saddle type for \(B < B_{BP1}\) and stable focus for \(B > B_{BP1}\). Similarly, in a neighborhood of \(BP_{2}\), the equilibrium points along the \(F_{7}(v_{0}, y_{7})\) curve are of stable focus type for \(B < B_{BP2}\) and of saddle type for \(B > B_{BP2}\).
Bifurcation diagram of (11a)–(11b) with respect to B. Equilibrium points \((v_{0}^{p1}, y_{7}^{p1})\) lie on the lower red horizontal line \((v_{0}^{p1}, y_{7})\), and \((v_{0}^{p2}, y_{7}^{p2})\) on the upper red horizontal line \((v_{0}^{p2}, y_{7})\). Dashed parts of the red lines represent saddle type, solid parts represent center type solutions. The true equilibrium points lie on the black curve, \(F_{7}\). The dashed part of the black curve represents saddle type, solid parts stable focus type solutions. The saddle type equilibrium points along \((v_{0}^{p1}, y_{7}^{p1})\) and \((v_{0}^{p2}, y_{7}^{p2})\) change to center at the intersections with \(F_{7}\) at \(BP_{1}\) and \(BP_{2}\), respectively
As mentioned above, a generic transcritical bifurcation of regular and folded singularities is related to a FSN II singularity. In our case, a folded saddle becomes a folded center and a stable focus becomes a saddle at the degenerate transcritical bifurcation points \(BP_{1}\) and \(BP_{2}\). Furthermore, system (2a)–(2h) can undergo (singular) Hopf bifurcations close to \({BP_{2}}\) and \({BP_{1}}\) in the parameter space (see Fig. 4d), as will be detailed in the following sections. Hence, the interaction of regular and non-generic folded singularities can be referred to as a degenerate FSN II singularity. A degenerate FSN II singularity in (2a)–(2h) stems from the structure of the NMM, which is defined as a second-order system that violates the generic folded singularity condition \(\textbf{tr}(J)\neq 0\).
Bifurcation diagrams of (2a)–(2h) as a function of \((B, C_{3})\). (a) Bifurcation diagram of (2a)–(2h) on the \((B, C_{3})\) plane. Curves are named, respectively, after the limit point (LP, black curves), Hopf (H, red curves) and homoclinic (HOM, blue curves) bifurcations in panels (b–f). Only the LP bifurcations interacting with canard solutions are plotted. Black squares indicate cusp (CP), red circles indicate Bogdanov–Takens (BT) and red squares indicate generalized Hopf (GH) bifurcations. The regions marked by black, green and purple boxes are zoomed in the black, green and purple framed insets. The region where the homoclinic curve tips to the \(LP_{1}\) is zoomed inside the green inset. (b–f) Bifurcation diagrams of (2a)–(2h) as a function of B for different values of \(C_{3}\). The limit point bifurcations of interest are marked by black squares, Hopf bifurcations by red circles, and homoclinic connections by blue stars. Stable and unstable solutions are represented by continuous and dashed curves, respectively. Along the curves of equilibrium points, (2a)–(2h) undergoes four Hopf bifurcations (\(H_{1,2,3,4}\)) for \(C_{3}=\{50,80,145\}\) (c, d, e) and three Hopf bifurcations (\(H_{1,2,3}\)) for \(C_{3}=15\) (b) and (\(H_{1,2,4}\)) for \(C_{3}=200\) (f)
Folded singularities can lead to canard solutions in the original system. In a planar slow–fast system, a singular Hopf bifurcation can occur near a folded singularity, which is then called a canard point. In such a case, the amplitude of the periodic orbits bifurcated at the singular Hopf point increases steeply in a narrow interval of the parameter (scaled by the time-scale separation parameter) that controls the transition from small amplitude to relaxation oscillations [57]. This phenomenon is known as canard explosion [26, 58]. A canard-explosive branch hosts small canards following the unstable branch of the critical manifold and one stable branch (so-called canard-without-head solutions), large canards following the unstable branch of the critical manifold and two stable branches (so-called canard-with-head solutions), and a maximal canard solution that follows the repelling branch the longest. In planar multiple time-scale systems, canard solutions are tightly connected to excitability and firing thresholds [30, 31]. In higher dimensional multiple time-scale systems with at least two slow variables, folded singularities are generic, hence robust to small parameter perturbations, and canard solutions associated with folded singularities connect stable and unstable branches of a folded critical manifold [36, 53, 59–61]. Canards of folded node and FSN II singularities support mixed mode oscillations [27, 36, 44]. FSN II singularities have been identified in neuronal models where the transition from an excitable to a relaxation oscillatory state is accompanied by subthreshold oscillations [24, 28, 42, 62]. Folded-saddle canards have been shown to sculpt firing threshold manifolds as well [33, 34, 63–65].
In our problem, the critical manifold \(S^{0}\) (5) is hyperbolic, whereas the super-slow manifold \(L^{0}\) (7) has a folded structure. Thus, the critical transitions occur mainly in the 6-dimensional reduced system given by (8a)–(8f). As the analysis above has shown, (8a)–(8f) has degenerate folded singularities along the fold curve at \(p_{1}\) and \(p_{2}\). Notice that, since the system has neither a folded node nor a FSN II, small amplitude oscillations do not exist near \(p_{1}\) or \(p_{2}\). But the folded saddle, degenerate FSN II and singular Hopf bifurcations can lead to canard solutions governing the critical transitions in (8a)–(8f) (hence in (2a)–(2h)). On the other hand, the bursting behavior cannot be captured by (8a)–(8f) because (8a)–(8f) is restricted to the critical manifold \(S^{0}\), whereas the fast oscillations of the bursting orbits leave \(S^{0}\). So the bursting solutions exist in the full system (2a)–(2h) (see [48] for a detailed analysis of the bursting solutions). As a result, our problem yields both three- and two-time-scale behaviors. In the next section, we investigate canard dynamics near \(p_{1}\) and \(p_{2}\).
Multiple time-scale oscillations and canard transitions
Transitions near the folded singularities of (2a)–(2h), which lead to canard solutions, depend on the system parameters. The reader may refer to Table 1 for the parameter values, unless otherwise stated. The connectivity strength from the pyramidal cell population onto the subpopulation of the SOM+ interneurons, \(C_{3}\), and their synaptic gain, B, appear as two crucial parameters controlling the transitions by affecting the curve \(F_{7}\) in (11a)–(11b) (see Fig. 3). Figure 4a shows a 2-parameter bifurcation diagram in the plane \((B, C_{3})\). Depending on \(C_{3}\), system (2a)–(2h) undergoes several Hopf bifurcations as a function of B. The first two Hopf bifurcations, \(H_{1}\) and \(H_{2}\), yield harmonic oscillations, whereas the periodic branches appearing at \(H_{3}\) and/or \(H_{4}\) connect to multiple time-scale oscillations. Under the variations in \((B, C_{3})\), \(H_{1}\) and \(H_{2}\) persist; and the emerging periodic orbits do not change qualitatively. On the other hand, \(H_{3}\) and \(H_{4}\) undergo Bogdanov–Takens (BT) bifurcations \(BT_{1,2}\) and the corresponding periodic branches vary qualitatively. The periodic orbits emerging at \(H_{3}\) and \(H_{4}\) can end on homoclinic connections, namely \(HOM_{1,2,3,4}\).
Figures 4b–f exemplify qualitative variations in (2a)–(2h) as a function of B for different values of \(C_{3}\). For \(C_{3} < C_{3, BT1}\approx 18.9\), the system undergoes three Hopf bifurcations, for instance in Fig. 4b for \(C_{3}=15\). The branch of periodic solutions starting at \(H_{3}\) terminates at a homoclinic connection, \(HOM_{1}\). As \(C_{3}\) increases, \(HOM_{1}\) and \(LP_{1}\) get closer while the amplitude and the number of spikes of the periodic orbits increase. The spike adding occurs as the \(HOM_{1}\) curve folds back and forth in the \((B, C_{3})\)-space (see the black framed inset in Fig. 4a for an example folding). At \(C_{3} = C_{3, BT1}\) another Hopf bifurcation, \(H_{4}\), appears yielding a new branch of periodic orbits making a second homoclinic connection, \(HOM_{2}\) (green framed inset in Fig. 4a). Consequently, \(HOM_{1}\) and \(HOM_{2}\) points coexist in a narrow range of \((B, C_{3})\). The \(HOM_{1}\) curve touches the \(LP_{1}\) curve at \((B, C_{3}) \approx (23.98, 22.43)\), folds back and continues in the parameter space, which we then name the \(HOM_{3}\) curve (dashed zone in the green framed inset in Fig. 4a). The curves \(HOM_{1}\) and \(HOM_{3}\) stay very close to each other in \((23.98< B<24.46, 21.94<C_{3}<22.43)\), before \(HOM_{3}\) bends in the \(C_{3}\) direction at \((B, C_{3}) \approx (24.46, 22.43)\). As it happens, the branch of periodic orbits curls below \(LP_{1}\) in the B-space and eventually connects to \(HOM_{3}\). With increasing \(C_{3}\), this branch of periodic orbits advances further towards the stable equilibrium points while introducing a region where nodes, saddles, unstable small oscillations and stable large-amplitude bursting oscillations coexist (see Fig. 4c for \(C_{3} = 50\); the dynamics will be detailed in Sect. 3.2). Concurrently, \(H_{4}\) moves away from \(LP_{1}\) and \(HOM_{2,3}\) approach \(LP_{2}\). In \((20.09< B<20.26, 54.08<C_{3}<54.43)\), the \(HOM_{2}\) and \(HOM_{3}\) curves are connected by a section that is parallel to the \(LP_{2}\) curve (purple framed inset in Fig. 4a). For \(C_{3}>54.43\), the branch of periodic orbits initiated at \(H_{4}\) connects to the branch of large amplitude multiple time-scale oscillations (Fig. 4d).
System (2a)–(2h) does not have any LPs between \(H_{3}\) and \(H_{4}\) for \(C_{3, CP1}\leq C_{3} \leq C_{3, CP2}\). At \(C_{3} = C_{3, CP2}\approx 141.4\), as the lower branch of equilibrium points curls below \(H_{3}\), the connection between the large amplitude orbits and \(H_{3}\) is broken at a saddle-saddle homoclinic bifurcation (the equilibrium points in a neighborhood of \(LP_{3}\) for \(B \geq B_{LP3}\) are saddles [66, 67]). As a consequence, the branch of periodic orbits starting from \(H_{3}\) terminates on a homoclinic connection, \(HOM_{4}\) (for \(C_{3}=145\) in Fig. 4e). This homoclinic connection remains until \(H_{3}\) disappears at \(C_{3} = C_{3, BT2} \approx 157\). For \(C_{3} > C_{3, BT2}\), the large amplitude bursting orbits introduced by \(H_{4}\) terminate on a saddle-node homoclinic connection (for instance at \(C_{3} =200\) in Fig. 4f).
The Hopf bifurcations \(H_{3}\) and \(H_{4}\) occur close to the folded singularities \(p_{2}\) and \(p_{1}\), respectively. System (2a)–(2h) can yield canard solutions close to these points in the parameter space of B, such as \(B\approx B_{H3}\) and \(B\approx B_{H4}\). In the following section, we will show the canard-mediated transition from sinusoidal oscillations initiated by \(H_{3}\) to large amplitude bursting/relaxation type solutions. Subsequently, Sect. 3.2 will detail the canard dynamics and related excitability near \(H_{4}\), in particular, the type-I excitability for \(C_{3} = 50\) and type-II excitability for \(C_{3} = 80\).
Canard-mediated transitions between sinusoidal and multiple time-scale oscillations
Köksal Ersöz et al. [48] showed that the number of spikes of a bursting solution of (2a)–(2h) depends on the amount of PSP received by the PV+ interneuron subpopulation, hence on the EPSP coming from the pyramidal cell subpopulation and on the IPSP from the SOM+ interneurons. For instance, increasing the IPSP on the PV+ interneurons by increasing B decreases the number of spikes while moving the solution from one peak of the periodic branch to the next in the parameter space (see Figs. 4c–f and 5). The connectivity constant from the pyramidal cell subpopulation to the PV+ interneuron subpopulation, \(C_{5}\), directly scales the EPSP on this subpopulation and therefore determines the maximum number of fast spikes of the bursting oscillations or, more generally, the type of the multiple time-scale oscillations.
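As a hedged illustration of how the spike count per burst can be extracted from a simulated time series (the quantity tracked across Figs. 4 and 5), the sketch below uses a Hindmarsh–Rose burster as a stand-in for (2a)–(2h); the peak threshold, the minimal peak distance and the burst-separating gap are ad hoc, illustrative choices.

```python
# Minimal sketch of counting fast spikes per burst from a simulated
# time series.  A Hindmarsh-Rose burster stands in for (2a)-(2h).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def hindmarsh_rose(t, s, I=2.0):
    x, y, z = s
    return [y - x**3 + 3 * x**2 - z + I,  # fast membrane variable
            1 - 5 * x**2 - y,             # fast recovery variable
            0.006 * (4 * (x + 1.6) - z)]  # slow adaptation variable

sol = solve_ivp(hindmarsh_rose, (0, 2000), [-1.6, -10.0, 2.0], max_step=0.05)
x, t = sol.y[0], sol.t
# `distance` is in samples; the grid is only quasi-uniform (max_step bound).
peaks, _ = find_peaks(x, height=0.0, distance=50)

# Group spikes into bursts: a gap much longer than the in-burst inter-spike
# interval separates two bursts (the 25-time-unit cutoff is illustrative).
burst_sizes, count = [], 1
for gap in np.diff(t[peaks]):
    if gap > 25.0:
        burst_sizes.append(count)
        count = 1
    else:
        count += 1
print("spikes per burst:", burst_sizes[:5])
```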
Variation of large amplitude solutions with respect to \((B, C_{5})\). (a) Bifurcation diagrams of (2a)–(2h) with respect to B for \(C_{3}=80\) and different values of \(C_{5}\). Curves and Hopf bifurcations (\(H_{1,2,3,4}\), dots) are colored with respect to the color codes of \(C_{5}\) values. Stable and unstable solutions are represented by continuous and dashed curves, respectively. For the sake of simplicity, the periodic solutions between \(H_{1}\) and \(H_{2}\) are not shown. (b) Zoom into the region of transitions between sinusoidal and large amplitude multiple-time-scale solutions in \(B\in [4.7,4.9]\)
Figure 5 exemplifies how \(C_{5}\) modulates the large amplitude oscillations between \(H_{3}\) and \(H_{4}\) on the bifurcation diagram of (2a)–(2h) for \(C_{3} = 80\). Increasing \(C_{5}\) decreases the amplitude of the oscillations, moves \(H_{1}\) and \(H_{2}\) slightly to the right, but does not affect considerably the locations \(H_{3}\) and \(H_{4}\) with respect to B (Fig. 5a). The supercritical Hopf bifurcation at \(H_{3}\) yields a branch of sinusoidal periodic oscillations (Fig. 5b) that folds back and forth as B varies and enters in a regime of multiple time-scale periodic oscillations. These oscillations are of relaxation type for small values of \(C_{5}\), and of bursting type for large values of \(C_{5}\). Furthermore, the stable sinusoidal and multiple time-scale periodic oscillations can coexist depending on the values of \(C_{5}\) (see Fig. 5b at \(B \approx 4.8\)).
The form of the branch of periodic solutions between \(H_{3}\) and \(H_{4}\) in Fig. 5a indicates the type of the multiple time-scale oscillations for a certain parameter combination. For \(C_{5} = 80\) (black diagram in Fig. 5) the smoothly decreasing amplitude of \(v_{0}\) with B indicates that the corresponding orbits are of relaxation type (exemplified in Fig. 6). The horizontal zigzags along the upper part of the periodic branches obtained for greater values of \(C_{5}\) indicate the presence of bursting solutions along these periodic branches and the number of their fast spikes. For instance, the 5 peaks that we count between \(H_{3}\) and \(H_{4}\) for \(C_{5}=350\) (blue diagram in Fig. 5) signify that the maximum number of fast spikes for \(C_{5}=350\) is 4. Such a bursting orbit is obtained for sufficiently small values of B (\(B=5\), for instance). Then as B increases, the bursting orbits lose their fast spikes one by one through the peaks of the horizontal branch. They become relaxation cycles (\(B=16\), for instance), before shrinking and disappearing via a subcritical Hopf bifurcation at \(H_{4}\).
Example canard orbits along the transition from sinusoidal oscillations to relaxation oscillations. (a) Zoom near the bifurcation diagram for \(C_{3}= 80\), \(C_{5}=80\) and \(B \in [4.7, 4.9]\) (see Fig. 5a for the whole diagram). Continuous and dashed curves represent stable and unstable solutions, respectively. The Hopf bifurcation (\(H_{3}\), red dot) is marked on the diagram. Numbered orbits 1–7 are given in panels (b–d). The orange curve traces the frequency of the oscillations emerging at \(H_{3}\). (b) Periodic orbits marked in panel (a), \(L^{0}\) (red curve), fold curves \(\mathcal{F}_{1,2}\) (red points) and the critical surface \(S^{0}\) (green surface) are projected on the \((v_{0},v_{2},v_{3})\)-space. Arrows indicate the corresponding time-scale (single-headed for super-slow, double-headed for slow dynamics). (c) Periodic orbits marked in panel (a), \(L^{0}\) (red surface), fold curves \(\mathcal{F}_{1,2}\) (black curves) and folded singular points \(p_{1,2}\) (red dots) are projected on the \((y_{7},v_{2},v_{0})\)-space. Arrows indicate the corresponding time-scale. (d) Time series of the periodic orbits in panels (b, c) with respective color codes. Period is normalized to 1 (\(\tilde{t}/ \tilde{T} =1\), where \(\tilde{T}\) represents the period of a cycle)
The periodic solutions connected to \(H_{1}\) and \(H_{2}\) do not interact with the singular fold points of \(L^{0}\), \(p_{1}\) and \(p_{2}\), but the ones near \(H_{3}\) and \(H_{4}\) do, because \(H_{3}\) and \(H_{4}\) take place close to the degenerate FSN II singularities on the fold curve \(\mathcal{F}\). As a consequence, the multiple time-scale orbits emanating at the singular Hopf bifurcations \(H_{3}\) and \(H_{4}\) can undergo a canard explosion, along which canard trajectories sculpt the periodic oscillations. Figure 6 shows example orbits along the periodic branch that follows from \(H_{3}\) for \(C_{3}=80\) and \(C_{5}=80\). As the periodic branch folds with respect to B and becomes unstable (Fig. 6a), the sinusoidal orbits of 4.5–6 Hz start to interact with \(p_{2}\). In particular, they move along the unstable branch of \(L^{0}\), \(L^{0}_{m}\), before jumping back to the stable branch \(L^{0}_{r}\). Hence, the periodic orbits become canard orbits (the first orbit). As B varies along the periodic branch in the parameter space, the canard orbits grow in amplitude along \(L^{0}_{m}\) (the second, third and fourth orbits) until they stretch out between \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) (the fifth orbit). The canard orbits that oscillate between \(L^{0}_{m}\) and \(L^{0}_{r}\) can be interpreted as canard-without-head orbits, and the fifth orbit as the maximal canard, since it has the largest period of the canard family of the periodic branch under consideration. Soon after the fifth orbit, the trajectories jump to the attracting branch \(L^{0}_{l}\), take the shape of canard-with-head solutions, and become stable (the sixth orbit). As B increases, relaxation cycles appear, with parts exclusively following the attracting branches of \(L^{0}\) and jumping close to the fold points.
As mentioned in the introduction, canard solutions play a fundamental role in separating different dynamical regimes. The unstable canard orbits in Fig. 6 (from the first to the fifth) are another example of this phenomenon, accompanying the transition from sinusoidal oscillations to relaxation oscillations. For instance, sinusoidal oscillations and large amplitude canard-with-head cycles coexist for \(B \in (4.75, 4.81)\), and the canard-without-head cycles form the boundary between them, as seen clearly in Fig. 6a. While increasing \(C_{5}\) introduces bursting-type solutions, it can preserve the bistability between the bursting and sinusoidal oscillations, for example for \(C_{5}=\{250, 350, 425\}\) in Fig. 5. Notice that with increasing \(C_{5}\), the initially smooth branch of periodic orbits becomes steeper, gains vertical zigzags that move to the right along the B-axis, and the region of bistability shrinks.
Figure 7 zooms into the region of canard orbits following the sinusoidal solutions of 4.8–6 Hz initiated at \(H_{3}\) for \(C_{5}=450\). As the stable sinusoidal oscillations grow in amplitude with increasing B, they start to interact with \(p_{2}\) and to follow parts of \(L^{0}_{m}\) (the first orbit). Soon after, the orbits undergo an LP bifurcation (where the branch of periodic orbits folds back at \(B \approx 4.821\)) and become unstable. As B varies in the parameter space along the periodic branch, the orbits moving along \(L^{0}_{m}\) in the super-slow time-scale grow in amplitude, and they start to interact with the vertical panel of \(S^{0}\) as they jump to \(L^{0}_{r}\) in the slow time-scale. Thus, the orbits become canard orbits.
Example canard orbits along the transition from sinusoidal oscillations to bursting oscillations. (a) Bifurcation diagram for \(C_{3} = 80\), \(C_{5}=450\) and \(B \in [4.7, 4.9]\) (see Fig. 5a for the whole diagram). Stable and unstable solutions are represented by continuous and dashed curves, respectively. The Hopf bifurcation (\(H_{3}\), red dot) is marked on the diagram. The rectangular region is zoomed in panel (b). Numbered orbits 1–7 are given in panels (c, d). The orange curve traces the frequency of the oscillations emerging at \(H_{3}\). (c) Periodic orbits marked in panels (a, b), \(L^{0}\) (red curve), fold curves \(\mathcal{F}_{1,2}\) (red points) and the critical surface \(S^{0}\) (green surface) are projected on the \((v_{0},v_{2},v_{3})\)-space. Arrows indicate the flow direction and its time-scale (single-headed for super-slow, double-headed for slow, triple-headed for fast). (d) Periodic orbits marked in panel (a), \(L^{0}\) (red surface), fold curves \(\mathcal{F}_{1,2}\) (black curves) and folded singular points \(p_{1,2}\) (red dots) are projected on the \((y_{7},v_{2},v_{0})\)-space. Arrows indicate the corresponding time-scale. (e) Time series of the periodic orbits in panels (c, d), shown with respective color codes. Period is normalized to 1 (\(\tilde{t}/ \tilde{T} =1\), where \(\tilde{T}\) represents the period of a cycle)
As the part of the trajectory along \(L^{0}_{m}\) grows in amplitude, the trajectory gets attracted by \(L^{0}_{r}\) along the vertical panel of \(S^{0}\) and spirals around \(L^{0}_{r}\) before landing on the horizontal plane of \(S^{0}\). This interaction with the vertical panel of \(S^{0}\) occurs in the fast time-scale and eventually yields fast spikes, i.e., bursting-type canard oscillations. For instance, the second orbit in Fig. 7c and 7d has one fast spike. The number of spikes increases as the trajectory stays longer and longer along \(L^{0}_{m}\) while B varies. More precisely, the number of spikes changes by one as we pass from one fold to another on the same side of the snaking periodic branch with respect to B (Fig. 7b). For instance, the second solution has 1 spike, the third solution, which is two folds below, has 3 spikes, and so on. The spike adding continues until the canard orbits (analogous to canard-without-head orbits) expand between the two folded singularities, hence until the occurrence of the maximal canard of the family (approximated by the sixth orbit). After the maximal canard, the canard cycles start to follow \(L^{0}_{l}\) in the super-slow time-scale (analogous to canard-with-head orbits) and become stable (the seventh orbit). The part of the trajectory along \(L^{0}_{m}\) decreases as B increases further.
For \(C_{3} = 80\), the system undergoes a complete canard explosion along the periodic branch following \(H_{3}\), since the branch visits the whole canard family from small to large cycles (Figs. 6 and 7). However, for small values of \(C_{3}\) we may observe an incomplete canard explosion terminating at a homoclinic connection. For instance, for \(C_{3} = 15\) (see Fig. 4b), the sinusoidal oscillations along the periodic branch initiated at \(H_{3}\) change qualitatively by interacting with \(L^{0}_{m}\) as B varies, and we observe homoclinic canard-without-head orbits at \(HOM_{1}\). The orbits terminating on \(HOM_{3}\) are homoclinic canard-with-head orbits surrounded by stable large amplitude oscillations. Increasing \(C_{3}\) completes the canard explosion and the system enters an excitable regime, which is detailed in the following section.
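The planar analogue of the complete explosion described above can be reproduced in a few lines. The sketch below sweeps the classical slow–fast van der Pol system through its canard explosion; eps and the parameter values are illustrative, and in the full model the role of the sweep parameter is played by B, with the explosion organized by a folded singularity rather than a planar fold.

```python
# Minimal sketch of a canard explosion in the slow-fast van der Pol system:
# the cycle amplitude jumps from small to O(1) across an exponentially thin
# parameter window near a = 1 - eps/8.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01

def vdp(t, s, a):
    x, y = s
    return [(y - x**3 / 3 + x) / eps,  # fast variable
            a - x]                     # slow variable

for a in [0.9995, 0.999, 0.9988, 0.9987, 0.998]:
    sol = solve_ivp(vdp, (0, 60), [a + 0.01, a**3 / 3 - a], args=(a,),
                    method="LSODA", rtol=1e-8, atol=1e-10)
    x = sol.y[0][sol.t > 30]           # discard the transient
    print(f"a = {a:.4f}: amplitude ~ {x.max() - x.min():.3f}")
```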
Canard-mediated transitions and excitability
According to Hodgkin's [6] classification of neural excitability, type-I excitable neurons have continuous frequency-injected current curves, whereas type-II excitable neurons have discontinuous frequency-injected current curves. Rinzel and Ermentrout [68, 69] linked type-I excitability to a SNIC bifurcation and type-II excitability to a Hopf bifurcation. De Maesschalck and Wechselberger [29] explained the transition between the two excitability types via an intermediate regime of type-I excitability associated with a codimension-2 Bogdanov–Takens (BT) bifurcation in a planar system. They showed the existence of incomplete canard transitions in this transitory regime. Later on, transitions between the neuronal excitability types were shown to be induced by inhibitory and excitatory autapses in the Morris–Lecar model [70]. Folded singularities and the corresponding canard solutions in higher dimensional systems have also been shown to shape systems' excitability properties [24, 28, 33, 34, 63–65].
System (2a)–(2h) can yield large amplitude oscillations in response to certain forms of stimulation (due to stochastic inputs, for instance) after being initialized at an equilibrium point for a B value close to \(H_{4}\), \(LP_{1}\) and \(LP_{2}\) in Figs. 4c–4f. Hence, system (2a)–(2h) is excitable in these regions, and the excitability properties of (2a)–(2h) are determined by the parameter \(C_{3}\) (see Fig. 4). Indeed, the local pictures in these regions are similar to the ones investigated in [29, 70]. In particular, system (2a)–(2h) is type-I excitable for \(C_{3} \in (22.43, 54.43)\), essentially between the homoclinic/saddle-saddle interactions near \(LP_{1}\) and \(LP_{2}\). In this parameter region, the large amplitude oscillations terminate on a homoclinic orbit, for which the firing frequency is zero. System (2a)–(2h) is type-II excitable for \(C_{3} > 54.43\), for which the termination occurs via a Hopf bifurcation. In both cases, canard solutions shape the resulting dynamics.
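The two excitability types can be told apart numerically from the frequency curve near oscillation onset or termination. The sketch below contrasts the canonical planar examples, the theta neuron (type I, frequency tending to zero at a SNIC) and FitzHugh–Nagumo (type II, onset at finite frequency via a Hopf bifurcation); neither is the paper's model, and the drive I stands in for the role played here by B.

```python
# Minimal sketch contrasting type-I and type-II excitability via the
# frequency curve near onset; all parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def fhn(t, s, I):
    v, w = s
    return [v - v**3 / 3 - w + I, 0.08 * (v + 0.7 - 0.8 * w)]

def fhn_frequency(I):
    sol = solve_ivp(fhn, (0, 800), [-1.0, -0.5], args=(I,), max_step=0.1)
    keep = sol.t > 300                      # discard the transient
    peaks, _ = find_peaks(sol.y[0][keep], height=0.5)
    t = sol.t[keep][peaks]
    return 0.0 if len(t) < 2 else 1.0 / np.mean(np.diff(t))

for I in [0.32, 0.34, 0.36, 0.40]:          # type II: f jumps to a finite value
    print(f"FHN (type II):   I = {I:.2f}  f = {fhn_frequency(I):.4f}")

# Theta neuron: theta' = 1 - cos(theta) + (1 + cos(theta)) * I has the known
# onset frequency f(I) = sqrt(I) / pi, continuous down to 0 (type I).
for I in [0.0001, 0.001, 0.01]:
    print(f"theta (type I):  I = {I:.4f}  f = {np.sqrt(I) / np.pi:.4f}")
```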
Figure 8 zooms in near the excitable region for \(C_{3} = 50\) (see Fig. 4c for the whole diagram). For \(B < B_{LP1}\), the only attractor is the large amplitude bursting oscillation (the first orbit). For \(B_{LP1} < B < B_{H4}\), further equilibrium points, including unstable ones, appear. The subcritical Hopf bifurcation at \(B = B_{H4}\) initiates a branch of periodic orbits that terminates on the homoclinic point \(HOM_{2}\), which bounds the canard explosion near \(H_{4}\). At \(B_{HOM2}\), a homoclinic canard-without-head orbit (the second orbit) coexists with a large stable bursting orbit of canard type (the third orbit). At \(B_{HOM3}\), a homoclinic canard-with-head orbit (the fifth orbit) appears together with an outer large amplitude canard cycle (the fourth orbit). The large amplitude canard cycle grows in amplitude and disappears at a saddle-node of periodic orbits (SNPO) at \(B = B_{SNPO}\) (the sixth orbit). We also notice that, as \(HOM_{3}\) gets closer to \(LP_{2}\) for \(C_{3} \approx 54.4\) and \(B\approx B_{LP2}\), the canard orbits on \(HOM_{3}\) become of without-head type.
Example canard orbits near the type-I excitable regime. (a) Bifurcation diagram for \(C_{3} = 50\), \(C_{5}=450\) and \(B\in [18, 22.5]\) (see Fig. 4c for the whole diagram). Stable and unstable solutions are represented by continuous and dashed curves, respectively. Limit point (\(LP_{1,2}\), black squares), Hopf (\(H_{4}\), red dot) and homoclinic (\(HOM_{2,3}\), blue stars) bifurcations and the saddle-node bifurcation of periodic orbits (SNPO, purple square) are marked on the diagram. Numbered solutions are presented in panels (b–d). The orange curves trace the frequency of the oscillations. (b) Periodic orbits marked in panel (a), \(L^{0}\) (red curve), fold curves \(\mathcal{F}_{1,2}\) (red points) and the critical surface \(S^{0}\) (green surface) are projected on the (\(v_{0}\), \(v_{2}\), \(v_{3}\))-space. Arrows indicate the flow direction and its time-scale (single-headed for super-slow, double-headed for slow dynamics). The homoclinic points \(HOM_{2}\) and \(HOM_{3}\) are marked by cyan and dark blue stars. (c) Periodic orbits marked in panel (a), \(L^{0}\) (red surface), fold curves \(\mathcal{F}_{1,2}\) (black curves) and folded singular points \(p_{1,2}\) (red dots) are projected on the \((y_{7},v_{2},v_{0})\)-space. Arrows indicate the corresponding time-scale. (d) Time series of the periodic orbits in panels (b, c) with respective color codes. Period is normalized to 1 (\(\tilde{t}/ \tilde{T} =1\), where \(\tilde{T}\) represents the period of a cycle)
For a parameter set ensuring type-II excitability (\(C_{3} = 80\), for instance), the fast spikes of the bursting oscillations disappear and the final oscillation is of relaxation type, running in the slow and super-slow time-scales. These relaxation oscillations terminate via a complete canard explosion near the singular Hopf bifurcation point \(H_{4}\). This happens in a similar manner for all \(C_{5}\) values under consideration. Figure 9 provides an example for \(C_{5}= 450\). As the large amplitude periodic solutions decrease in amplitude, they start to follow \(L_{m}^{0}\) and take the shape of canard-with-head solutions (the second and third orbits). The maximal canard of this canard family is the fourth orbit, which stays along the super-slow manifolds as long as possible. After the fourth orbit, we observe canard-without-head orbits (the fifth and sixth orbits) that shrink to \(p_{1}\). The frequency of the oscillations along the canard explosion ranges in \(1.8\text{--}3.5\text{ Hz}\). We also notice a region of bistability between large amplitude bursting oscillations and equilibrium points. Once again the canard solutions construct the boundary between them. For a parameter set giving relaxation oscillations in this region (e.g. \(C_{5} = \{80, 250, 350\}\) in Fig. 5), the relaxation oscillations shrink to \(H_{4}\) via a 'classical' canard explosion, similar to the one in the 2D van der Pol system, without having any fast component in \(v_{3}\).
Example canard orbits near the type-II excitable regime. (a) Bifurcation diagram for \(C_{3} = 80\), \(C_{5}=450\) and \(B \in [16, 18]\) (see Fig. 5a for the whole diagram). Stable and unstable solutions are represented by continuous and dashed curves, respectively. Hopf bifurcation (\(H_{4}\), red dot) is marked on the diagram. Numbered orbits on the lower branch of periodic solutions are presented in panels (c–d). The orange curve traces the frequency of the oscillations emerging at \(H_{4}\). (b) Periodic orbits marked in panel (a), \(L^{0}\) (red curve), fold curves \(\mathcal{F}_{1,2}\) (red points) and the critical surface \(S^{0}\) (green surface) are projected on the \((v_{0},v_{2},v_{3})\)-space. Arrows indicate the flow direction and its time-scale (single-headed for super-slow, double-headed for slow dynamics). (c) Periodic orbits marked in panel (a), \(L^{0}\) (red surface), fold curves \(\mathcal{F}_{1,2}\) (black curves) and folded singular points \(p_{1,2}\) (red dots) are projected on the \((y_{7},v_{2},v_{0})\)-space. Arrows indicate the corresponding time-scale. (d) Time series of the periodic orbits on panels (b, c) are shown with respective color codes. Period is normalized to 1 (\(\tilde{t}/ \tilde{T} =1\), where T̃ represents period of a cycle)
Local field potential in critical regimes
In the previous section we have shown two different regions in parameter space where canard solutions determine boundaries and organize transitions between different dynamical regimes. The narrow-band sinusoidal activity of 4.5–6 Hz emerging near \(H_{3}\) and of 1.8–3.5 Hz emerging near \(H_{4}\) is connected to large amplitude periodic multiple time-scale solutions through canard orbits. System (1a)–(1h) emits aperiodic large amplitude epileptic discharges under stochastic input (\(p(t) = p + \xi\), with \(\xi \sim \mathcal{N}(0, 2^{2})\)) when it is initialized near the critical regions of \(H_{3}\) and \(H_{4}\) (Figs. 10–11). A parameter setting ensuring type-I excitability without any canard solutions near \(H_{4}\) gives a broad-band activity between the large amplitude spikes (Fig. 10a1–a3). On the other hand, taking the system to type-II excitability near \(H_{4}\) introduces transient small amplitude oscillations of ≈ 3.5 Hz due to the presence of the canard cycles in this region (Fig. 10b1–b3). We observe transitions between large amplitude discharges and harmonic oscillations of ≈ 6 Hz when the system is initialized close to the Hopf bifurcation \(H_{3}\) (Fig. 10c1–c3). Simulated PSPs at the level of the pyramidal cell subpopulation are given in Fig. 11.
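A minimal sketch of this stochastic protocol, Euler–Maruyama integration of an excitable model driven by \(p + \xi\) followed by a normalized power spectral density, is given below; FitzHugh–Nagumo again stands in for (1a)–(1h), and p, sigma and dt are illustrative.

```python
# Minimal sketch of the stochastic protocol behind Figs. 10-11 on a
# stand-in excitable model; all numerical values are illustrative.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
dt, T = 0.01, 2000.0
n = int(T / dt)
p, sigma = 0.30, 0.05            # mean drive just below onset, plus noise
v, w = -1.0, -0.5
trace = np.empty(n)
for i in range(n):               # Euler-Maruyama step
    dW = np.sqrt(dt) * rng.standard_normal()
    v += (v - v**3 / 3 - w + p) * dt + sigma * dW
    w += 0.08 * (v + 0.7 - 0.8 * w) * dt
    trace[i] = v

f, Pxx = welch(trace, fs=1.0 / dt, nperseg=2**14)
Pxx /= Pxx.max()                 # normalized PSD, as in Fig. 10
print("dominant frequency:", f[np.argmax(Pxx)])
```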
LFP traces of system (1a)–(1h) near critical transitions under stochastic input. (a1) Transitions between multiple time-scale oscillations and background regime for a type-I setting at \(B=23\), \(C_{3} = 35\), \(C_{5}=200\). Panel (a2) zooms between two large amplitude discharges (blue) and panel (a3) shows the normalized power spectral density of the signal. (b1) Transitions between multiple time-scale oscillations and background regime with slow oscillations of \(\approx 3.5\text{ Hz}\) for a type-II setting at \(B=17.8\), \(C_{3} = 80\), \(C_{5}=200\). Panel (b2) zooms between two large amplitude discharges (red) and panel (b3) shows the normalized power spectral density of the signal. (c1) Transitions between multiple time-scale oscillations and sinusoidal oscillations for \(B=4.7\), \(C_{3} = 80\), \(C_{5}=200\). Panel (c2) zooms between two large amplitude discharges (cyan) and panel (c3) shows the normalized power spectral density of the signal
Corresponding PSPs of the LFPs given in Fig. 10. (a1)–(a3) The EPSP, slow IPSP (IPSPs) and fast IPSP (IPSPf) for the type-I setting given in Fig. 10a. (b1)–(b3) The EPSP, slow IPSP (IPSPs) and fast IPSP (IPSPf) for the type-II setting given in Fig. 10b. (c1)–(c3) The EPSP, slow IPSP (IPSPs) and fast IPSP (IPSPf) for the setting given in Fig. 10c
Figures 12 and 13 show LFPs recorded by SEEG electrodes in two different patients with drug-resistant focal epilepsy during presurgical evaluation (see Table 2 for details). Multiple-contact depth electrodes were implanted according to the SEEG technique as a standard clinical procedure in the care of patients who consented to the possible use of their data for research purposes. The positioning of the electrodes is determined in each patient from hypotheses about the localization of the epileptogenic areas. Implantation accuracy is verified peri-operatively by an X-ray CT scan. A post-operative CT scan without contrast product is then used to verify the precise 3D location of each electrode contact. After SEEG exploration, intracerebral electrodes are removed. An MRI is performed on which the trajectory of each electrode remains visible. Finally, a CT-scan/MRI data fusion is performed to anatomically locate each contact along each electrode trajectory. Both patients had electrodes implanted in the temporal region. For this study, signals were selected because they exhibited clear transitions in electrophysiological patterns. In particular, we selected pre-ictal events followed by a fast discharge typical of the seizure onset, which is one of the markers of the imbalanced relation between excitation and inhibition [16, 71] that involves excitability variations.
SEEG signals recorded in a patient with epilepsy during the inter-ictal to ictal transition. (a) Transition from the inter-ictal to the ictal period in the first patient. Background activity (BKG) observed further away from the seizure (panel b1) has a broad-band frequency distribution (normalized power density spectrum in panel b2). A sporadic spike is preceded by narrow-band low-amplitude activity resembling canard-mediated oscillations (CMO, marked in red, zoomed in panel (c1)) at \(\approx 3.5\text{ Hz}\) (normalized power density spectrum in panel (c2))
SEEG signals recorded in a patient with epilepsy during the inter-ictal to ictal transition. (a) Transition from the inter-ictal to the ictal period in the second patient. Background activity (BKG) observed further away from the seizure (panel b1) has a broad-band frequency distribution (normalized power density spectrum in panel b2). A sporadic spike is preceded by narrow-band low-amplitude activity resembling canard-mediated oscillations (CMO, marked in red, zoomed in panel (c1)) at \(\approx 7\text{ Hz}\) (normalized power density spectrum in panel (c2))
Table 2 Summary of patients' features
In Fig. 12, a narrow-band theta activity at 3.5 Hz is followed by a large amplitude epileptic discharge between two sporadic discharges as we advance towards sustained pre-ictal discharges. Such narrow-band activity may be a signature of canard-mediated regions, where slowly varying system parameters and/or remote interactions lead to transitions between small-amplitude low-frequency oscillations and large amplitude discharges. In Fig. 13, a narrow-band activity of about 7 Hz is followed by a large amplitude epileptogenic discharge. We also notice that the forms of the epileptic discharges in Fig. 12 and Fig. 13 are different, which may indicate that the underlying systems have different characteristics. Interestingly, we have identified parameter regions for the corresponding frequency bands in the model (see simulated LFPs and PSPs in Fig. 10 and Fig. 11, respectively). Hence, we think that the properties of transient narrow-band oscillations may be related to the excitability properties and the level of synaptic projections (scaled by the coupling coefficients in the model) of the epileptogenic zone.
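If such transient narrow-band episodes are to serve as biomarkers, they must be detectable automatically. A hedged sketch of one possible detector, the ratio of narrow-band to broad-band power in sliding windows, is given below; the band edges, window length and threshold are illustrative choices, not values used in this study.

```python
# Minimal sketch of flagging narrow-band epochs in a signal by the ratio of
# target-band to broad-band power in sliding windows; all values are ad hoc.
import numpy as np
from scipy.signal import welch

def narrowband_score(seg, fs, band=(3.0, 4.0), broad=(1.0, 40.0)):
    f, P = welch(seg, fs=fs, nperseg=min(len(seg), 1024))
    in_band = P[(f >= band[0]) & (f <= band[1])].sum()
    total = P[(f >= broad[0]) & (f <= broad[1])].sum()
    return in_band / total if total > 0 else 0.0

def flag_epochs(signal, fs, win_s=4.0, thresh=0.5):
    w = int(win_s * fs)
    scores = [narrowband_score(signal[i:i + w], fs)
              for i in range(0, len(signal) - w, w)]
    return [i for i, s in enumerate(scores) if s > thresh]

# Synthetic test: broad-band noise with an embedded 3.5 Hz episode.
fs = 256
rng = np.random.default_rng(1)
sig = rng.standard_normal(60 * fs)
t = np.arange(20 * fs) / fs
sig[20 * fs:40 * fs] += 3.0 * np.sin(2 * np.pi * 3.5 * t)
print("flagged 4-s epochs:", flag_epochs(sig, fs))
```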
In this article, we extended the multiple time-scale analyses previously initiated in [48]. Here, we both investigated the canard transitions present in a neurophysiologically relevant NMM and analyzed their consequences in terms of subsequent signatures in LFPs. In this three-time-scale model, the canard transitions occur in the 6-dimensional two-time-scale reduced system of slow and super-slow variables. They are associated with degenerate FSN II singularities and singular Hopf bifurcations. They organize the initiation of relaxation/bursting oscillations from harmonic oscillations of 4.5–6 Hz or from equilibrium points, and determine the boundaries between them. We showed that the system switches between type-I and type-II excitability near the transitions between the equilibrium points and relaxation/bursting oscillations. We further noticed that the canard regimes of type-II excitability (and partially of type-I) yield low-frequency (near 3.5 Hz) oscillations in the LFP under stochastic input.
These model predictions motivated a close analysis of SEEG recordings performed in epileptic patients. In this paper, results illustrative of both signatures are reported only in two patients. Interestingly, in brain structures clearly involved in the transition from interictal to ictal activity, we observed a narrow band activity between sporadic discharges before the seizure initiation, which strongly differed from the preceding background activity. Although the parameter set used in this paper was not aimed for modeling these recordings specifically, it is striking to see such a matching between the mathematical analysis and the actual recordings.
It has been evidenced that an impaired excitation–inhibition balance shapes the activity of neural networks and, therefore, causes the emergence of "pathological" electrophysiological patterns such as pre-ictal spikes and seizures in the context of epilepsy (see [72] for a review). Indeed, epileptogenic brain regions are a typical example of such excitation–inhibition imbalanced networks [73]. We showed that the level of EPSP on the subpopulation of SOM+ interneurons determines the type of excitability. In particular, the system is type-I excitable if the average number of synaptic contacts from the excitatory pyramidal cells to the GABAergic SOM+ interneurons is low, and type-II excitable if this number is high. It is then the decreasing GABAergic inhibition (modeled by a decreasing inhibitory drive from the subpopulation of SOM+ interneurons) that is responsible for transitions from background to epileptiform discharges. Interestingly, such model parameter variations are plausible and linked to the failure of inhibitory barrages observed in epileptic tissues [74] and to the generation of slow waves preceding the fast activity [75, 76]. Properties of emerging epileptic discharges (e.g. their shape and frequencies), and possible "silent" phases in between, are strongly connected to the type of excitability. In the context of epilepsy, transitory regimes between the background activity and epileptic discharges are crucial for understanding the underlying mechanisms [11, 77]. Epileptic biomarkers during such regimes, such as high-frequency oscillations [78], shape features of epileptic spikes [79] or possibly the frequency-specific oscillations reported here, are essential for the identification of epileptogenic networks and for the further development of therapeutic procedures. Verification of the presence of such oscillations across different patients and accurate modeling of the inter-ictal activity are needed, of course, before suggesting them as biomarkers. This is the topic of future investigations.
As epilepsy can be considered a dynamic disease [73, 80, 81], mathematical models at different cellular levels inherit multiple time-scale thinking [82–84]. We note a few studies on slow–fast transitions in NMMs. Desroches et al. [85] extended NMMs by considering the synaptic gain of SOM+ interneurons as a slowly changing variable. They showed that this configuration introduced regions of torus canards. Jafarian et al. [86] proposed a NMM which incorporates slow variations in ionic currents leading to spontaneous paroxysmal activity. Hebbink et al. [87] investigated the response of the NMM of Wendling et al. [16] to slowly varying inputs, under which the system yields bursting oscillations. Weigenand et al. [43] remarked on the role of canard solutions in fast transitions of K-complexes in a NMM of sleep–wake patterns. Our paper shows that canard-mediated solutions are naturally present in the NMM of Wendling et al. [16]. Importantly, as this model implements two main sub-types of interneurons (dendrite- and soma-projecting), it is generic and can be considered for studying the dynamics of regions other than the hippocampus, such as neocortical areas, and in different contexts, such as consciousness [47] and Alzheimer's disease [20]. Furthermore, the canard regimes reported in this study are governed by the interactions between the pyramidal cell and SOM+ interneuron subpopulations, which follow a two-time-scale structure. It would be natural to observe canard-mediated transitions in the other commonly used NMM of Jansen and Rit [13] for modeling brain activity. Hence, the canard-mediated fine structures we have demonstrated here could be relevant for a number of situations and lead to markers of subsequent critical transitions. The reported degenerate FSN II singularity leading to canard trajectories is due to the general structure of NMMs, which are defined via second-order differential equations. The dynamics associated with the degenerate FSN II singularity merits further investigation and will be considered in future work. Finally, the organisation of homoclinic canard orbits, possible codimension-two bifurcations and interactions with the fold points will be studied in forthcoming works.
The codes used for numerical analysis are available from the GitHub database (https://github.com/elifkoksal/NMM_BurstingDynamics).
NMM:
Neural Mass Model
EEG:
Electroencephalography
LFP:
Local Field Potential
GSPT:
Geometric Singular Perturbation Theory
SOM+:
Somatostatin positive
PV+:
Parvalbumin positive
CA1:
Cornu Ammonis 1
PSP:
Post-Synaptic Potential
EPSP:
Excitatory Post-Synaptic Potential
IPSP:
Inhibitory Post-Synaptic Potential
SEEG:
Stereoelectroencephalography
FSN II:
Folded Saddle-Node type II
DSRS:
Desingularized Slow Reduced System
LP:
Limit Point
H:
Hopf
HOM:
Homoclinic
CP:
Cusp Point
BT:
Bogdanov–Takens
GH:
Generalized Hopf
SNIC:
Saddle-Node on Invariant Circle
SNPO:
Saddle-Node of Periodic Orbits
Tukker JJ, Beed P, Schmitz D, Larkum ME, Sachdev R. Up and down states and memory consolidation across somatosensory, entorhinal, and hippocampal cortices. Front Syst Neurosci. 2020;14:22. https://doi.org/10.3389/fnsys.2020.00022.
Schulz DJ, Baines RA, Hempel CM, Li L, Liss B, Misonou H. Cellular excitability and the regulation of functional neuronal identity: from gene expression to neuromodulation. J Neurosci. 2006;26(41):10362–7. https://doi.org/10.1523/JNEUROSCI.3194-06.2006.
Ratté S, Hong S, De Schutter E, Prescott SA. Impact of neuronal properties on network coding: roles of spike initiation dynamics and robust synchrony transfer. Neuron. 2013;78(5):758–72. https://doi.org/10.1016/j.neuron.2013.05.030.
Brunel N, van Rossum MCW. Lapicque's 1907 paper: from frogs to integrate-and-fire. Biol Cybern. 2007;97(5):337–9. https://doi.org/10.1007/s00422-007-0190-0.
Lapicque L. Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J Physiol Pathol Gén. 1907;9:620–35.
Hodgkin AL. The local electric changes associated with repetitive action in a non-medullated axon. J Physiol. 1948;107(2):165–81. https://doi.org/10.1113/jphysiol.1948.sp004260.
Beraneck M, Idoux E. Reconsidering the role of neuronal intrinsic properties and neuromodulation in vestibular homeostasis. Front Neurol. 2012;3:25. https://doi.org/10.3389/fneur.2012.00025.
Sanabria ERG, Su H, Yaari Y. Initiation of network bursts by \(Ca^{2+}\)-dependent intrinsic bursting in the rat pilocarpine model of temporal lobe epilepsy. J Physiol. 2001;532(1):205–16. https://doi.org/10.1111/j.1469-7793.2001.0205g.x.
Jung S, Jones TD, Lugo JN, Sheerin AH, Miller JW, D'Ambrosio R, Anderson AE, Poolos NP. Progressive dendritic hcn channelopathy during epileptogenesis in the rat pilocarpine model of epilepsy. J Neurosci. 2007;27(47):13012–21. https://doi.org/10.1523/JNEUROSCI.3605-07.2007.
Shah MM, Anderson AE, Leung V, Lin X, Johnston D. Seizure-induced plasticity of h channels in entorhinal cortical layer III pyramidal neurons. Neuron. 2004;44(3):495–508. https://doi.org/10.1016/j.neuron.2004.10.011.
Freestone DR, Kuhlmann L, Grayden DB, Burkitt AN, Lai A, et al. Electrical probing of cortical excitability in patients with epilepsy. Epilepsy Behav. 2011;22:110–8.
Chang W-C, Kudlacek J, Hlinka J, Chvojka J, Hadrava M, Kumpost V, Powell AD, Janca R, Murana MI, Karoly PJ, Freestone DR, Cook MJ, Palus M, Otahal J, Jefferys JGR, Jiruska P. Loss of neuronal network resilience precedes seizures and determines the ictogenic nature of interictal synaptic perturbations. Nat Neurosci. 2018;21:1742–52. https://doi.org/10.1038/s41593-018-0278-y.
Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern. 1995;73:357–66.
David O, Friston KJ. A neural mass model for MEG/EEG: coupling and neuronal dynamics. NeuroImage. 2003;20(3):1743–55. https://doi.org/10.1016/j.neuroimage.2003.07.015.
Ursino M, Cona F, Zavaglia M. The generation of rhythms within a cortical region: analysis of a neural mass model. NeuroImage. 2010;52(3):1080–94. https://doi.org/10.1016/j.neuroimage.2009.12.084.
Wendling F, Bartolomei F, Bellanger JJ, Chauvel P. Epileptic fast activity can be explained by a model of impaired GABAergic dendritic inhibition. Eur J Neurosci. 2002;15(9):1499–508. https://doi.org/10.1046/j.1460-9568.2002.01985.x.
Suffczynski P, Kalitzin S, Lopes Da Silva FH. Dynamics of non-convulsive epileptic phenomena modeled by a bistable neuronal network. Neuroscience. 2004;126(2):467–84. https://doi.org/10.1016/j.neuroscience.2004.03.014.
Molaee-Ardekani B, Benquet P, Bartolomei F, Wendling F. Computational modeling of high-frequency oscillations at the onset of neocortical partial seizures: from altered structure to dysfunction. NeuroImage. 2010;52(3):1109–22. https://doi.org/10.1016/j.neuroimage.2009.12.049.
Wendling F, Benquet P, Bartolomei F, Jirsa V. Computational models of epileptiform activity. J Neurosci Methods. 2016;260:233–51. https://doi.org/10.1016/j.jneumeth.2015.03.027.
Bhattacharya BS, Coyle D, Maguire LP. A thalamo-cortico-thalamic neural mass model to study alpha rhythms in Alzheimer's disease. Neural Netw. 2011;24(6):631–45. https://doi.org/10.1016/j.neunet.2011.02.009.
Liu F, Wang J, Liu C, Li H, Deng B, Fietkiewicz C, Loparo KA. A neural mass model of basal ganglia nuclei simulates pathological beta rhythm in Parkinson's disease. Chaos, Interdiscip J Nonlinear Sci. 2016;26(12):123113. https://doi.org/10.1063/1.4972200.
Liu C, Zhu Y, Liu F, Wang J, Li H, Deng B, Fietkiewicz C, Loparo KA. Neural mass models describing possible origin of the excessive beta oscillations correlated with Parkinsonian state. Neural Netw. 2017;88:65–73. https://doi.org/10.1016/j.neunet.2017.01.011.
Moehlis J. Canards for a reduction of the Hodgkin–Huxley equations. J Math Biol. 2006;52(2):141–53. https://doi.org/10.1007/s00285-005-0347-1.
Rubin J, Wechselberger M. Giant squid-hidden canard: the 3D geometry of the Hodgkin–Huxley model. Biol Cybern. 2007;97(1):5–32. https://doi.org/10.1007/s00422-007-0153-5.
Hasan CR, Krauskopf B, Osinga HM. Saddle slow manifolds and canard orbits in \(R^{4}\) and application to the full Hodgkin–Huxley model. J Math Neurosci. 2018;8(1):5. https://doi.org/10.1186/s13408-018-0060-1.
Krupa M, Szmolyan P. Relaxation oscillation and canard explosion. J Differ Equ. 2001;174(2):312–68. https://doi.org/10.1006/jdeq.2000.3929.
Curtu R, Rubin J. Interaction of canard and singular Hopf mechanisms in a neural model. SIAM J Appl Dyn Syst. 2011;10(4):1443–79. https://doi.org/10.1137/110823171.
Köksal Ersöz E, Desroches M, Guillamon A, Rinzel J, Tabak J. Canard-induced complex oscillations in an excitatory network. J Math Biol. 2020. https://doi.org/10.1007/s00285-020-01490-1.
De Maesschalck P, Wechselberger M. Neural excitability and singular bifurcations. J Math Neurosci. 2015;5:16. https://doi.org/10.1186/s13408-015-0029-2.
De Maesschalck P, Desroches M. Numerical continuation techniques for planar slow–fast systems. SIAM J Appl Dyn Syst. 2013;12(3):1159–80. https://doi.org/10.1137/120877386.
Desroches M, Krupa M, Rodrigues S. Inflection, canards and excitability threshold in neuronal models. J Math Biol. 2013;67(4):989–1017. https://doi.org/10.1007/s00285-012-0576-z.
Desroches M, Freire E, Hogan SJ, Ponce E, Thota P. Canards in piecewise-linear systems: explosions and super-explosions. Proc R Soc A, Math Phys Eng Sci. 2013;469(2154):20120603. https://doi.org/10.1098/rspa.2012.0603.
Mitry J, McCarthy M, Kopell N, Wechselberger M. Excitable neurons, firing threshold manifolds and canards. J Math Neurosci. 2013;3(1):12. https://doi.org/10.1186/2190-8567-3-12.
Wechselberger M, Mitry J, Rinzel J. Canard theory and excitability. In: Kloeden PE, Pötzsche C, editors. Nonautonomous dynamical systems in the life sciences. vol. 2102. Cham: Springer; 2013. p. 89–132. https://doi.org/10.1007/978-3-319-03080-7_3.
Burke J, Desroches M, Barry AM, Kaper TJ, Kramer MA. A showcase of torus canards in neuronal bursters. J Math Neurosci. 2012;2(1):3. https://doi.org/10.1186/2190-8567-2-3.
Desroches M, Kaper TJ, Krupa M. Mixed-mode bursting oscillations: dynamics created by a slow passage through spike-adding canard explosion in a square-wave burster. Chaos, Interdiscip J Nonlinear Sci. 2013;23(4):046106. https://doi.org/10.1063/1.4827026.
Desroches M, Krupa M, Rodrigues S. Spike-adding in parabolic bursters: the role of folded-saddle canards. Phys D, Nonlinear Phenom. 2016;331:58–70. https://doi.org/10.1016/j.physd.2016.05.011.
Guckenheimer J, Kuehn C. Computing slow manifolds of saddle type. SIAM J Appl Dyn Syst. 2009;8(3):854–79. https://doi.org/10.1137/080741999.
Kramer MA, Traub RD, Kopell NJ. New dynamics in cerebellar Purkinje cells: torus canards. Phys Rev Lett. 2008;101(6):068103. https://doi.org/10.1103/PhysRevLett.101.068103.
Nowacki J, Osinga HM, Tsaneva-Atanasova K. Dynamical systems analysis of spike-adding mechanisms in transient bursts. J Math Neurosci. 2012;2(1):7. https://doi.org/10.1186/2190-8567-2-7.
Osinga HM, Tsaneva-Atanasova KT. Dynamics of plateau bursting depending on the location of its equilibrium. J Neuroendocrinol. 2010;22(12):1301–14. https://doi.org/10.1111/j.1365-2826.2010.02083.x.
Rubin J, Wechselberger M. The selection of mixed-mode oscillations in a Hodgkin–Huxley model with multiple timescales. Chaos, Interdiscip J Nonlinear Sci. 2008;18(1):015105. https://doi.org/10.1063/1.2789564.
Weigenand A, Costa MS, Ngo H-VV, Claussen JC, Martinetz T. Characterization of K-complexes and slow wave activity in a neural mass model. PLoS Comput Biol. 2014;10(11):1003923. https://doi.org/10.1371/journal.pcbi.1003923.
Krupa M, Vidal A, Desroches M, Clément F. Mixed-mode oscillations in a multiple time scale phantom bursting system. SIAM J Appl Dyn Syst. 2012;11:1458–98. https://doi.org/10.1137/110860136.
Vo T, Bertram R, Wechselberger M. Multiple geometric viewpoints of mixed mode dynamics associated with pseudo-plateau bursting. SIAM J Appl Dyn Syst. 2013;12(2):789–830. https://doi.org/10.1137/120892842.
Desroches M, Kirk V. Spike-adding in a canonical three-time-scale model: superslow explosion and folded-saddle canards. SIAM J Appl Dyn Syst. 2018;17(3):1989–2017. https://doi.org/10.1137/17M1143411.
Bensaid S, Modolo J, Merlet I, Wendling F, Benquet P. COALIA: a computational model of human EEG for consciousness research. Front Syst Neurosci. 2019;13:59. https://doi.org/10.3389/fnsys.2019.00059.
Köksal Ersöz E, Modolo J, Bartolomei F, Wendling F. Neural mass modeling of slow–fast dynamics of seizure initiation and abortion. PLoS Comput Biol. 2020;16(11):1008430. https://doi.org/10.1371/journal.pcbi.1008430.
Doedel EJ, Champneys A, Fairgrieve TF, Yu AB, Kuznetsov AP, Oldeman BE, Paffenroth RC, Sandstede B, Wang XJ, Zhang C. Auto-07p: continuation and bifurcation software for ordinary differential equations. 2007. http://cmvl.cs.concordia.ca/auto/.
Ermentrout B. Simulating, analyzing, and animating dynamical systems. Software, environments and tools. Philadelphia: SIAM; 2002. https://doi.org/10.1137/1.9780898718195.
Fenichel N. Geometric singular perturbation theory for ordinary differential equations. J Differ Equ. 1979;31(1):53–98. https://doi.org/10.1016/0022-0396(79)90152-9.
Cardin PT, Teixeira MA. Fenichel theory for multiple time scale singular perturbation problems. SIAM J Appl Dyn Syst. 2017;16(3):1425–52. https://doi.org/10.1137/16M1067202.
Szmolyan P, Wechselberger M. Canards in R3. J Differ Equ. 2001;177(2):419–53. https://doi.org/10.1006/jdeq.2001.4001.
Milik A, Szmolyan P. Multiple time scales and canards in a chemical oscillator. In: Jones CKRT, Khibnik AI, editors. Multiple-time-scale dynamical systems. The IMA volumes in mathematics and its applications. New York: Springer; 2001. p. 117–40. https://doi.org/10.1007/978-1-4613-0117-2_5.
Guckenheimer J. Return maps of folded nodes and folded saddle-nodes. Chaos, Interdiscip J Nonlinear Sci. 2008;18:015108. https://doi.org/10.1063/1.2790372.
Krupa M, Wechselberger M. Local analysis near a folded saddle-node singularity. J Differ Equ. 2010;248(12):2841–88. https://doi.org/10.1016/j.jde.2010.02.006.
Benoît E, Callot JF, Diener F, Diener M. Chasse au canard. Collect Math. 1981;31–32(1–3):37–119.
Dumortier F, Roussarie R. Canard cycles and center manifolds. Memoirs of the American Mathematical Society. vol. 121. Providence: Am. Math. Soc.; 1996.
Benoît E. Systèmes lents-rapides dans \(\mathbb{R}^{3}\) et leurs canards. Astérisque. 1983;2:109–10.
Wechselberger M. À propos de canards (Apropos canards). Trans Am Math Soc. 2012;364(6):3289–309. https://doi.org/10.1090/S0002-9947-2012-05575-9.
Wechselberger M. Existence and bifurcation of canards in \(\mathbb {R}^{3}\) in the case of a folded node. SIAM J Appl Dyn Syst. 2005;4(1):101–39. https://doi.org/10.1137/030601995.
Krupa M, Popović N, Kopell N. Mixed-mode oscillations in three time-scale systems: a prototypical example. SIAM J Appl Dyn Syst. 2008;7(2):361–420. https://doi.org/10.1137/070688912.
Mitry J, Wechselberger M. Folded saddles and faux canards. SIAM J Appl Dyn Syst. 2017;16(1):546–96. https://doi.org/10.1137/15M1045065.
Albizuri JU, Desroches M, Krupa M, Rodrigues S. Inflection, canards and folded singularities in excitable systems: application to a 3D FitzHugh–Nagumo model. J Nonlinear Sci. 2020;30(6):3265–91. https://doi.org/10.1007/s00332-020-09650-9.
Wieczorek S, Ashwin P, Luke CM, Cox PM. Excitability in ramped systems: the compost-bomb instability. Proc R Soc A, Math Phys Eng Sci. 2011;467(2129):1243–69. https://doi.org/10.1098/rspa.2010.0485.
Shil'nikov L. On a new type of bifurcation of multidimensional dynamical systems. Sov Math Dokl. 1969;10.
Kuznetsov YA. Elements of applied bifurcation theory. 2nd ed. New York: Springer; 1998.
Rinzel J. Excitation dynamics: insights from simplified membrane models. Fed Proc. 1985;44(15):2944–6.
Rinzel J, Ermentrout BG. Analysis of neural excitability and oscillations. In: Koch C, Segev I, editors. Methods in neuronal modeling. Cambridge: MIT Press; 1998.
Zhao Z, Huaguang G. Transitions between classes of neuronal excitability and bifurcations induced by autapse. Sci Rep. 2017;7:6760. https://doi.org/10.1038/s41598-017-07051-9.
Lagarde S, Buzori S, Trebuchon A, Carron R, Scavarda D, Milh M, et al. The repertoire of seizure onset patterns in human focal epilepsies: determinants and prognostic values. Epilepsia. 2019;60(1):85–95. https://doi.org/10.1111/epi.14604.
Lopatina OL, Malinovskaya NA, Komleva YK, Gorina YV, Shuvaev AN, Olovyannikova RY, Belozor OS, Belova OA, Higashida H, Salmina AB. Excitation/inhibition imbalance and impaired neurogenesis in neurodevelopmental and neurodegenerative disorders. Rev Neurosci. 2019;30:807–20. https://doi.org/10.1515/revneuro-2019-0014.
Lopes da Silva FH, Blanes W, Kalitzin SN, Parra J, Suffczynski P, Velis DN. Dynamical diseases of brain systems: different routes to epileptic seizures. IEEE Trans Biomed Eng. 2003;50:540–8. https://doi.org/10.1109/TBME.2003.810703.
Trevelyan AJ, Sussillo D, Watson BO, Yuste R. Modular propagation of epileptiform activity: evidence for an inhibitory veto in neocortex. J Neurosci. 2006;26:12447–55. https://doi.org/10.1523/JNEUROSCI.2787-06.2006.
Timofeev I, Grenier F, Steriade M. The role of chloride-dependent inhibition and the activity of fast-spiking neurons during cortical spike wave electrographic seizures. Neuroscience. 2002;114:1115–32.
Timofeev I, Steriade M. Neocortical seizures: initiation, development and cessation. Neuroscience. 2004;123:299–336. https://doi.org/10.1016/j.neuroscience.2003.08.051.
Meisel C, Loddenkemper T. Seizure prediction and intervention. Neuropharmacology. 2020;172:107898.
Shamas M, Benquet P, Merlet I, Khalil M, El Falou W, Nica A, Wendling F. On the origin of epileptic high frequency oscillations observed on clinical electrodes. Clin Neurophysiol. 2018;129(4):829–41. https://doi.org/10.1016/j.clinph.2018.01.062.
Huneau C, Benquet P, Dieuset G, Biraben A, Martin B, Wendling F. Shape features of epileptic spikes are a marker of epileptogenesis in mice. Epilepsia. 2013;54(12):2219–27. https://doi.org/10.1111/epi.12406.
Lopes da Silva FH, Blanes W, Kalitzin N, Parra J, Suffczynski P, Velis DN. Epilepsies as dynamical diseases of brain systems: basic models of the transition between normal and epileptic activity. Epilepsia. 2003;44(12):72–83. https://doi.org/10.1111/j.0013-9580.2003.12005.x.
Meisel C, Kuehn C. Scaling effects and spatio-temporal multilevel dynamics in epileptic seizures. PLoS ONE. 2012;7(2):e30371. https://doi.org/10.1371/journal.pone.0030371.
Chizhov AV, Zefirov AV, Amakhin DV, Smirnova EY, Zaitsev AV. Minimal model of interictal and ictal discharges "epileptor-2". PLoS Comput Biol. 2018;14(5):1006186. https://doi.org/10.1371/journal.pcbi.1006186.
Ullah G, Wei Y, Dahlem MA, Wechselberger M, Schiff SJ. The role of cell volume in the dynamics of seizure, spreading depression, and anoxic depolarization. PLoS Comput Biol. 2015;11(8):1004414. https://doi.org/10.1371/journal.pcbi.1004414.
Hübel N, Dahlem MA. Dynamics from seconds to hours in Hodgkin-Huxley model with time-dependent ion concentrations and buffer reservoirs. PLoS Comput Biol. 2014;10(12):1003941. https://doi.org/10.1371/journal.pcbi.1003941.
Desroches M, Faugeras O, Krupa M. Slow–fast transitions to seizure states in the Wendling-Chauvel neural mass model. Opera Med Physiol. 2015;2(3–4):228–34.
Jafarian A, Freestone DR, Nešić D, Grayden DB. Slow–fast Duffing neural mass model. In: 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC). 2019. p. 142–5.
Hebbink J, van Gils SA, Meijer HGE. On analysis of inputs triggering large nonlinear neural responses: slow–fast dynamics in the Wendling neural mass model. Commun Nonlinear Sci Numer Simul. 2020;83:105103. https://doi.org/10.1016/j.cnsns.2019.105103.
We would like to thank Prof. Fabrice Bartolomei (APHM, Timone Hospital, Clinical Neurophysiology, Marseille, France) for providing the clinical data, Mathieu Desroches (Inria Sophia Antipolis - Méditerranée, MathNeuro Team, Sophia Antipolis, France) for his helpful comments and referees for their useful suggestions.
Authors' information
Université de Rennes 1, INSERM, Laboratoire Traitement du Signal et de L'Image (LTSI) - U1099, Campus de Beaulieu - Batiment 22, 35042 Rennes, France.
EKE was supported by NIH (Application number: R01 NS092760-01A1; 18/02/2019 to 31/12/2020). She is member of the Galvani project (ERC-SyG 2019; grant agreement No 855109) since 01/01/2021.
Univ Rennes, INSERM, LTSI-U1099, Campus de Beaulieu, F - 35000, Rennes, France
Elif Köksal Ersöz & Fabrice Wendling
Elif Köksal Ersöz
Fabrice Wendling
EKE developed the theoretical framework, conducted the mathematical analysis, interpreted the results and was a major contributor in writing the manuscript. FW contributed to interpreting the results and writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Fabrice Wendling.
The SEEG recordings were carried out as part of normal clinical care of patients. Patients were informed that their data may be used for research purposes.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Köksal Ersöz, E., Wendling, F. Canard solutions in neural mass models: consequences on critical regimes. J. Math. Neurosc. 11, 11 (2021). https://doi.org/10.1186/s13408-021-00109-z
Multiple time-scale systems
Excitability
Fundamental Theories of Physics
The Cellular Automaton Interpretation of Quantum Mechanics
Authors: 't Hooft, Gerard
A radically different approach to the challenges of quantum theory
Authored by one of today's most creative theoretical physicists
Pedagogically structured to first introduce concepts and main arguments, with technical details presented separately
This book is open access: you can download it for free on link.springer.com.
This book presents the deterministic view of quantum mechanics developed by Nobel Laureate Gerard 't Hooft.
Dissatisfied with the uncomfortable gaps in the way conventional quantum mechanics meshes with the classical world, 't Hooft has revived the old hidden variable ideas, but now in a much more systematic way than usual. In this approach, quantum mechanics is viewed as a tool rather than a theory.
The author gives examples of models that are classical in essence, but can be analysed by the use of quantum techniques, and argues that even the Standard Model, together with gravitational interactions, might be viewed as a quantum mechanical approach to analysing a system that could be classical at its core. He shows how this approach, even though it is based on hidden variables, can be plausibly reconciled with Bell's theorem, and how the usual objections voiced against the idea of 'superdeterminism' can be overcome, at least in principle.
This framework elegantly explains - and automatically cures - the problems of the wave function collapse and the measurement problem. Even the existence of an "arrow of time" can perhaps be explained in a more elegant way than usual. As well as reviewing the author's earlier work in the field, the book also contains many new observations and calculations. It provides stimulating reading for all physicists working on the foundations of quantum theory.
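The flavor of the approach can be seen in the book's simplest example, the finite-state "cogwheel" (see the chapter list below): a deterministic cyclic update is a permutation matrix, hence unitary, and its eigenvectors already look like quantum energy eigenstates. A minimal numerical sketch, with N and the printed quantities purely illustrative:

```python
# Minimal sketch of an N-state "cogwheel": the deterministic shift
# k -> k+1 (mod N) is a permutation matrix, hence unitary, and its
# eigenvalues exp(2*pi*i*k/N) form a discrete, quantum-looking spectrum.
import numpy as np

N = 5
U = np.roll(np.eye(N), 1, axis=0)               # deterministic update as a unitary
print(np.sort(np.angle(np.linalg.eigvals(U))))  # phases 2*pi*k/N

state = np.zeros(N)
state[0] = 1.0                                  # start deterministically at site 0
print(np.linalg.matrix_power(U, 3) @ state)     # after 3 ticks: at site 3
```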
Gerard 't Hooft is professor of theoretical physics at Utrecht University, The Netherlands. He was awarded the Nobel Prize in 1999, together with his thesis advisor Martinus Veltman, for "elucidating the quantum structure of electroweak interactions". In his subsequent work he has made further major contributions in areas including black-hole physics, quantum gravity, the holographic principle, gauge theory, and the foundations of quantum mechanics.
Motivation for This Work
't Hooft, Gerard
Deterministic Models in Quantum Notation
Interpreting Quantum Mechanics
Deterministic Quantum Mechanics
Concise Description of the CA Interpretation
Information Loss
More Problems
Alleys to Be Further Investigated and Open Questions
Introduction to Part II
More on Cogwheels
The Continuum Limit of Cogwheels, Harmonic Rotators and Oscillators
Fermions
$\mathit{PQ}$ Theory
Models in Two Space–Time Dimensions Without Interactions
Symmetries
The Discretized Hamiltonian Formalism in $\mathit{PQ}$ Theory
The Cellular Automaton
The Problem of Quantum Locality
Conclusions of Part II
Gerard 't Hooft
XVIII, 298
2 b/w illustrations, 19 illustrations in colour
Distinguishability in Quantum Ensembles
Inspired by this question: Are these two quantum systems distinguishable? and discussion therein.
Given an ensemble of states, the randomness of a measurement outcome can be due to classical reasons (a classical probability distribution over the states in the ensemble) and quantum reasons (an individual state can be a superposition of states). Because a classical system cannot be in a superposition of states, and in principle its state can be directly measured, the probability distribution is directly measurable. So any two differing probability distributions are distinguishable. In quantum mechanics, however, an infinite number of different ensembles can have the same density matrix.
What assumptions are necessary to show that, if two ensembles initially have the same density matrix, there is no way to apply the same procedure to both ensembles and achieve different density matrices? (i.e. that the 'redundant' information about which part of Hilbert space is represented in the ensemble is never retrievable, even in principle)
To relate to the referenced question, for example if we could generate an interaction that evolved:
1) an ensemble of states $|0\rangle + e^{i\theta}|1\rangle$ with a uniform distribution in $\theta$
2) an ensemble of states $|0\rangle + e^{i\phi}|1\rangle$ with a non-uniform distribution in $\phi$
such a mapping of vectors in Hilbert space can be 1-to-1. But it does not appear that it can be realized by a linear operator.
So it hints that we can probably prove an answer to the question using only the assumption that states are vectors in a Hilbert space, and the evolution is a linear operator.
Can someone give a simple proof showing that two ensembles with initially the same density matrix can never evolve to two different density matrices? Please be explicit about what assumptions you make.
Update: I guess to prove they are indistinguishable, we'd also need to show that non-unitary evolution, like the projection from a measurement, can't eventually allow one to distinguish the underlying ensemble either. For instance, one might try using correlations between multiple measurements, or asking a question with more than two possible answers, so that the distribution of answers needs more than just the expectation value to characterize the results.
quantum-mechanics quantum-information
Ginsberg
$\begingroup$ Hah! I addressed your update in my answer before I even saw it. $\endgroup$
– Keenan Pepper
You only need to assume
the Schrödinger equation (yes, the same old linear Schrödinger equation, so the proof doesn't work for weird nonlinear quantum-mechanics-like theories)
the standard assumptions about projective measurements (i.e. the Born rule and the assumption that after you measure a system it gets projected into the eigenspace corresponding to the eigenvalue you measured)
Then it's easy to show that the evolution of a quantum system depends only on its density matrix, so "different" ensembles with the same density matrix are not actually distinguishable.
First, you can derive from the Schrödinger equation a time evolution equation for the density matrix. This shows that if two ensembles have the same density matrix and they're just evolving unitarily, not being measured, then they will continue to have the same density matrix at all future times. The equation is $$\frac{d\rho}{dt} = \frac{1}{i\hbar} \left[ H, \rho \right]$$
Second, when you perform a measurement on an ensemble, the probability distribution of the measurement results depends only on the density matrix, and the density matrix after the measurement (of the whole ensemble, or of any sub-ensemble for which the measurement result was some specific value) only depends on the density matrix before the measurement.
Specifically, consider a general observable (assumed to have discrete spectrum for simplicity) represented by a hermitian operator $A$. Let the diagonalization of $A$ be $$A = \sum_i a_i P_i$$ where $P_i$ is the projection operator onto the eigenspace corresponding to eigenvalue (measurement outcome) $a_i$. Then the probability that the measurement outcome is $a_i$ is $$p(a_i) = \operatorname{Tr}(\rho P_i)$$ This gives the complete probability distribution of $A$.
The density matrix of the full ensemble after the measurement is $$\rho' = \sum_i P_i \rho P_i$$ and the density matrix of the sub-ensemble for which the measurement value turned out to be $a_i$ is $$\rho'_i = \frac{P_i \rho P_i}{\operatorname{Tr}(\rho P_i)}$$
Since none of these equations depend on any property of the ensemble other than its density matrix (e.g. the pure states and probabilities of which the mixed state is "composed"), the density matrix is a full and complete description of the quantum state of the ensemble.
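As a sanity check, here is a minimal NumPy sketch of these formulas; the two ensembles and the measured observable are arbitrary choices for illustration, not part of the argument above:

```python
import numpy as np

# Two "different" ensembles with the same density matrix rho = I/2:
# (a) half |+x>, half |-x>;   (b) half |0>, half |1>.
plus_x = np.array([1.0, 1.0]) / np.sqrt(2)
minus_x = np.array([1.0, -1.0]) / np.sqrt(2)
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def density(states, probs):
    # rho = sum_k p_k |psi_k><psi_k|
    return sum(p * np.outer(s, s.conj()) for p, s in zip(probs, states))

rho_a = density([plus_x, minus_x], [0.5, 0.5])
rho_b = density([zero, one], [0.5, 0.5])
assert np.allclose(rho_a, rho_b)              # identical density matrices

# Projective measurement of sigma_z: P_0 = |0><0|, P_1 = |1><1|
projectors = [np.outer(zero, zero), np.outer(one, one)]
for rho in (rho_a, rho_b):
    born = [np.trace(rho @ P).real for P in projectors]  # p(a_i) = Tr(rho P_i)
    rho_after = sum(P @ rho @ P for P in projectors)     # rho' = sum_i P_i rho P_i
    print(born, rho_after)   # same outcome statistics and post-measurement state
```

Since the Born probabilities and the post-measurement state are computed from $\rho$ alone, both ensembles necessarily give identical output.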
Keenan Pepper
$\begingroup$ Oh, and for the case of an observable $A$ with a continuous spectrum, it works basically the same way. For mathematicians it might get more hairy, but as a physicist I have no problem just saying "replace all the summation signs with integrals". $\endgroup$
$\begingroup$ You don't even need to assume Schrödinger equation, but only the fact that the evolution of a quantum state is unitary. $\endgroup$
– Frédéric Grosshans
$\begingroup$ Can you please justify why the weighted sum of the density matrices of the sub-ensembles gives the density matrix of the whole ensemble? In other words, what is the difference between the density matrix of a single system and that of an ensemble? I understand the former quite well as the "state" of the system. $\endgroup$
– physicophilic
Density matrices are an alternative, complete description of quantum states. Consequently, if two ensembles have the same density matrix, they are not distinguishable.
As an example, consider the unpolarized spin-1/2 density matrix, which can be modeled as a system that is half pure states in the +x direction and half in the -x direction, or alternatively, as half pure states in the +z direction (i.e. spin up) and half in the -z direction (i.e. spin down):
$$\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix} = 0.5\rho_{+x}+0.5\rho_{-x} = 0.5\rho_{+z}+0.5\rho_{-z}$$ Now compute the average value of an operator $H$ with respect to these ensembles. Let $$H = \begin{pmatrix}h_{11}&h_{12}\\h_{21}&h_{22}\end{pmatrix}$$ then the averages for the four states involved are: $$\begin{array}{rcl} \langle H\rangle_{+x} &=& 0.5(h_{11}+h_{12}+h_{21}+h_{22})\\ \langle H\rangle_{-x} &=& 0.5(h_{11}-h_{12}-h_{21}+h_{22})\\ \langle H\rangle_{+z} &=& h_{11}\\ \langle H\rangle_{-z} &=& h_{22} \end{array}$$ From the above, it's clear that taking the average over $\pm x$ will give the same result as taking the average over $\pm z$, that is, in both cases the ensemble will give an average of $$\langle H\rangle = 0.5(h_{11}+h_{22})$$
Any preparation of the system amounts to an operator acting on the states and so $H$ can stand for a general operation. Therefore there is no way of distinguishing an unpolarized mixture of +- x from an unpolarized mixture of +-z.
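Here is a short NumPy sketch of this computation, using a randomly drawn Hermitian $H$; the seed and the particular matrix are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                       # a random Hermitian "operation"

plus_x, minus_x = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
plus_z, minus_z = np.array([1, 0]), np.array([0, 1])

def avg(H, state):
    return (state.conj() @ H @ state).real     # <psi|H|psi>

mix_x = 0.5 * avg(H, plus_x) + 0.5 * avg(H, minus_x)
mix_z = 0.5 * avg(H, plus_z) + 0.5 * avg(H, minus_z)
print(mix_x, mix_z, 0.5 * (H[0, 0] + H[1, 1]).real)   # all three agree
```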
The argument for general density matrices is similar, but I think this gets the point across.
Carl Brannen
$\begingroup$ Are you saying instead of representing a state as a vector in Hilbert space, it is sufficient to represent a state as a density matrix? It seems like this view would change the counting of physical states and would have an effect in statistical mechanics or thermodynamics of a system. It almost seems like you would be reducing the entropy by mixing two ensembles. $\endgroup$
– Ginsberg
$\begingroup$ Either way, the whole point of the question was to see a concrete mathematical proof. Instead of just saying it is so, can you please show how it is so, such that I can learn more? $\endgroup$
$\begingroup$ @Ginsberg; Yes, a density matrix is equivalent to a collection of pure states (presumably represented by state vectors) along with a probability density for the pure states. I've not found the reference I was looking for so I'll type up an outline of a proof and edit it in. $\endgroup$
– Carl Brannen
April 2014, 34(4): 1339-1353. doi: 10.3934/dcds.2014.34.1339
On the twist condition and $c$-monotone transport plans
Thierry Champion 1, and Luigi De Pascale 2,
Université de Toulon, IMATH, EA 2134, 83957 La Garde, France
Dipartimento di Matematica Applicata, Università di Pisa, Via Filippo Buonarroti 1/c, 56127 Pisa, Italy
Received: October 2012. Revised: March 2013. Published: October 2013.
A usual approach for proving the existence of an optimal transport map, be it in ${\mathbb R}^d$ or on more general manifolds, involves a regularity condition on the transport cost (the so-called Left Twist condition, i.e. the invertibility of the gradient in the first variable) as well as the fact that any optimal transport plan is supported on a cyclically-monotone set. Under the classical assumption that the initial measure does not give mass to sets with $\sigma$-finite $\mathcal{H}^{d-1}$ measure and a stronger regularity condition on the cost (the Strong Left Twist), we provide a short and self-contained proof of the fact that any feasible transport plan (optimal or not) satisfying a $c$-monotonicity assumption is induced by a transport map. We also show that the usual costs induced by Tonelli Lagrangians satisfy the Strong Left Twist condition we propose.
Keywords: cyclical monotonicity, optimal transport problem, Monge-Kantorovich problem, Tonelli Lagrangian.
Mathematics Subject Classification: Primary: 49Q20, 49K30; Secondary: 49J4.
Citation: Thierry Champion, Luigi De Pascale. On the twist condition and $c$-monotone transport plans. Discrete & Continuous Dynamical Systems - A, 2014, 34 (4) : 1339-1353. doi: 10.3934/dcds.2014.34.1339
A First Look at Quantum Probability, Part 2
Welcome back to our mini-series on quantum probability! Last time, we motivated the series by pondering over a thought from classical probability theory, namely that marginal probability doesn't have memory. That is, the process of summing over a variable in a joint probability distribution causes information about that variable to be lost. But as we saw then, there is a quantum version of marginal probability that behaves much like "marginal probability with memory." It remembers what's destroyed when computing marginals in the usual way. In today's post, I'll unveil the details. Along the way, we'll take an introductory look at the mathematics of quantum probability theory.
Let's begin with a brief recap of the ideas covered in Part 1: We began with a joint probability distribution on a product of finite sets $p\colon X\times Y\to [0,1]$ and realized it as a matrix $M$ by setting $M_{ij} = \sqrt{p(x_i,y_j)}$. We called elements of our set $X=\{0,1\}$ prefixes and the elements of our set $Y=\{00,11,01,10\}$ suffixes so that $X\times Y$ is the set of all bitstrings of length 3.
We then observed that the matrix $M^\top M$ contains the marginal probability distribution of $Y$ along its diagonal. Moreover its eigenvectors define conditional probability distributions on $Y$. Likewise, $MM^\top$ contains marginals on $X$ along its diagonal, and its eigenvectors define conditional probability distributions on $X$.
The information in the eigenvectors of $M^\top M$ and $MM^\top$ is precisely the information that's destroyed when computing marginal probability in the usual way. The big reveal last time was that the matrices $M^\top M$ and $MM^\top$ are the quantum versions of marginal probability distributions.
As we'll see today, the quantum version of a probability distribution is something called a density operator. The quantum version of marginalizing corresponds to "reducing" that operator to a subsystem. This reduction is a construction in linear algebra called the partial trace. I'll start off by explaining the partial trace. Then I'll introduce the basics of quantum probability theory. At the end, we'll tie it all back to our bitstring example.
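(A quick computational aside before we dive in: here is a small NumPy sketch of the Part 1 recap. The joint distribution below is a made-up example, not the one from Part 1.)

```python
import numpy as np

# A hypothetical joint distribution p(x, y) on X = {0, 1} (prefixes) and
# Y = {00, 11, 01, 10} (suffixes); the entries sum to 1.
p = np.array([[0.20, 0.10, 0.10, 0.10],
              [0.10, 0.10, 0.20, 0.10]])
M = np.sqrt(p)                      # M_ij = sqrt(p(x_i, y_j))

# The diagonals of M^T M and M M^T recover the ordinary marginals...
assert np.allclose(np.diag(M.T @ M), p.sum(axis=0))   # marginal on Y
assert np.allclose(np.diag(M @ M.T), p.sum(axis=1))   # marginal on X

# ...while the eigenvectors retain the information that summing destroys.
evals, evecs = np.linalg.eigh(M.T @ M)
print(evals)                        # spectrum of the "quantum marginal" on Y
```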
In this article and the next, I'd like to share some ideas from the world of quantum probability.* The word "quantum" is pretty loaded, but don't let that scare you. We'll take a first—not second or third—look at the subject, and the only prerequisites will be linear algebra and basic probability. In fact, I like to think of quantum probability as another name for "linear algebra + probability," so this mini-series will explore the mathematics, rather than the physics, of the subject.**
In today's post, we'll motivate the discussion by saying a few words about (classical) probability. In particular, let's spend a few moments thinking about the following:
What do I mean? We'll start with some basic definitions. Then I'll share an example that illustrates this idea.
A probability distribution (or simply, distribution) on a finite set $X$ is a function $p \colon X\to [0,1]$ satisfying $\sum_x p(x) = 1$. I'll use the term joint probability distribution to refer to a distribution on a Cartesian product of finite sets, i.e. a function $p\colon X\times Y\to [0,1]$ satisfying $\sum_{(x,y)}p(x,y)=1$. Every joint distribution defines a marginal probability distribution on one of the sets by summing probabilities over the other set. For instance, the marginal distribution $p_X\colon X\to [0,1]$ on $X$ is defined by $p_X(x)=\sum_yp(x,y)$, in which the variable $y$ is summed, or "integrated," out. It's this very process of summing or integrating out that causes information to be lost. In other words, marginalizing loses information. It doesn't remember what was summed away!
I'll illustrate this with a simple example. To do so, I need to give you some finite sets $X$ and $Y$ and a probability distribution on them.
November 2017, 16(6): 2227-2251. doi: 10.3934/cpaa.2017110
Essential spectral singularities and the spectral expansion for the Hill operator
O. A. Veliev
Depart. of Math., Dogus University, Acıbadem, 34722, Kadiköy, Istanbul, Turkey
Received: February 2017. Revised: May 2017. Published: July 2017.
In this paper we investigate the spectral expansion for the one-dimensional Schrödinger operator with a periodic complex-valued potential. For this we consider the spectral singularities in detail and introduce new concepts such as essential spectral singularities and singular quasimomenta.
Keywords: Schrödinger operator, spectral singularities, spectral expansion.
Mathematics Subject Classification: 47E05, 34L05.
Citation: O. A. Veliev. Essential spectral singularities and the spectral expansion for the Hill operator. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2227-2251. doi: 10.3934/cpaa.2017110
Low Level Radiation
Low level radiation is present as both naturally occurring and man-made radiation. The impact of naturally occurring radiation is hard to quantify; no studies have shown an increase in health problems in areas where natural radiation levels are higher. The impact of man-made radiation is likely to remain insignificant until it rises above the naturally occurring radiation in the environment.
Natural Radiation
Natural radiation exists in the environment and is due to several processes:
Cosmic rays are high energy photons or particles that enter the Earth's atmosphere. At sea level, the atmosphere has attenuated most of their energy, but they still contribute a significant portion of a person's yearly dose.
Cosmogenic radionuclides are produced through interactions of cosmic rays with atoms in the atmosphere or on the ground. These include 14C, 3H and 7Be.
Terrestrial Radiation occurs due to the decay of 238U, 235U and 232Th and their daughter products in the earth's crust.
Radon gas is usually considered as a separate entity as it makes up a large portion of the yearly dose.
Internal radiation typically arises from ingestion of 40K, a long lived isotope of potassium that decays via beta emission (and thus has minimal exposure from the crust).
Interestingly, Australia has a low level of radon gas in homes, meaning that the contribution to total dose is much less than in other parts of the world.
Man Made Radiation
About 18% of radiation exposure to the general population is due to man made sources:
15% from medical sources (11% x-ray and 4% nuclear medicine)
3% from consumer products (smoke alarms etc)
0.3% from occupational exposure (mining, energy)
Under 0.3% from fallout
Hazards of low level radiation
In general, low levels of radiation do not cause deterministic effects as they fall below the threshold dose required to induce these effects. The exception is growth retardation, microcephaly or mental retardation in persons exposed in utero to doses over 0.1 - 0.2 Sv. Aside from this, the main risk to the populace is stochastic effects, such as induction of cancer and hereditary effects.
The ICRP uses the term detriment to describe stochastic effects. The detriment for developing cancer is about $4.6 \times 10^{-2}$ per Sv. The risk of hereditary effects is much lower at $0.1 \times 10^{-2}$ per Sv. For the average radiation worker, the risk of developing an adverse effect is about $2 \times 10^{-4}$ per year.
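As a back-of-the-envelope check of that last figure, the annual risk is just the annual dose multiplied by the total detriment. In the sketch below, the 4 mSv per year occupational dose is an assumed figure for illustration; only the detriment coefficients come from the text above.

```python
# Rough stochastic-risk arithmetic using the ICRP-style detriment coefficients.
detriment_cancer = 4.6e-2      # per Sv (cancer)
detriment_hereditary = 0.1e-2  # per Sv (hereditary effects)
annual_dose_sv = 4e-3          # assumed average occupational dose (4 mSv)

annual_risk = annual_dose_sv * (detriment_cancer + detriment_hereditary)
print(f"annual stochastic risk ~ {annual_risk:.1e} per year")  # ~1.9e-4
```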
Tags: carcinogenesis, late_effect, phase1, protection, radiobiology
Sutton & Barto's notation $V_{t+n}$ in Chapter 7: $n$-step Bootstrapping
Until Chapter 6 of Sutton & Barto's book on Reinforcement Learning, the authors use $V$ for the current estimate of a state value. Equation (6.1), for example, shows:
$$ V(S_t) \leftarrow V(S_t) + \alpha[G_t - V(S_t)]\ \ \ \ \ \ (6.1)$$
However, on Chapter 7 they add a subscript to $V$. The first time this appears is on page 143 when they define the return from $t$ to $t+1$:
$$ G_{t:t+1} \doteq R_{t+1} + \gamma V_t(S_{t+1})$$
and say that $V_t : \mathcal{S} \rightarrow \mathbb{R}$ is "the estimate at time $t$ of $v_\pi$."
At first I thought I understood this as a natural consequence of considering $n$ steps ahead in the future and needing an extra index to go over the $n$ steps. But then this stopped making sense when I realized that an estimate for a state must be consolidated, no matter which of the $n$ steps it comes from. After all, a state $s$ has a single value to estimate, $v_\pi(s)$, and that does not depend on $t$.
Then I thought that they are just taking into account that there are many successive estimates of $V$ as the algorithm progresses, so $V_t$ is just the estimate after processing the $n$ steps starting at time $t$. In other words, the subscript would be a rigorous mathematical way of denoting the sequence of algorithmic updates. But this does not make sense either since even in Chapter 6 and before, the estimate is also successively updated. See Equation (6.1), for example. The $V$ on the left-hand side is a different variable from the one on the right-hand side (this is why they must use $\leftarrow$ indicating an assignment as opposed to a mathematical equality with $=$). It could have easily been written with an index as well.
So, what is the purpose of the new index for $V$ in Chapter 7, and why is it more important at this particular chapter?
Edit and elaboration: Going back to the text, it seems to me that the new subscript is indeed added as an attempt at greater clarity, even though the subscript-less notation $V$ from previous chapters might have been kept (and in fact it is still used in the pseudo-code on page 144).
It seems the authors wanted to stress that the update of $V$ happens not only for every trace of $n$ steps, but also at every one of those steps.
However, I think this introduced a technical error, because suppose we just learned from a 10-step episode ($T=10$), using $n = 3$. Then the latest estimate of $v_\pi$ is $V_{T-1} = V_{10 - 1} = V_{9}$. Then at the next episode, the first time $V_{t + n}$ is used to inform a target update, it will be for $\tau = 0$ (from the pseudo-code), which implies $t - n + 1 = 0$, so $t = n - 1$, that is, $V_{t+n}=V_{n-1+n}=V_{2n-1}=V_5$, which is not the most up-to-date estimate $V_9$ of $v_\pi$.
Of course the problem would be easily solved if we simply set the next used estimate $V_{2n - 1}$ to be equal to the last episode's $V_{T-1}$, but to avoid confusion this would have to be explicitly stated somewhere.
reinforcement-learning sutton-barto notation
$\begingroup$ Welcome to SE:AI! $\endgroup$
– DukeZhou ♦
$\begingroup$ Thank you, DukeZhou! :-) $\endgroup$
My guess would be that your intuition is correct, and that it's mostly introduced just to clarify exactly which "version" of our value function approximator is going to be used in any particular equation. In previous chapters, which discuss single-step update rules, I guess the authors assumed there was less potential for confusion, and therefore no need to clarify this. Without the clarification, some people might for instance wonder if we should use $V_t$ for our value estimates of an $n$-step return $G_{t+n}$, regardless of how large $n$ is.
Once we start considering a situation with more than a single episode, the $V_t$ notation becomes quite confusing. You should read $V_t$ as "the value function approximator that we have available at time $t$ of the current episode". So, if we were to use the symbol $V_0$ within the context of a second episode, that would be identical to what was referred to as $V_T$ in the context of the first episode. The $V_t$ notation can be convenient if we're thinking about our equations with our minds in "math-mode", but becomes highly confusing once we start thinking about practical implementations involving multiple episodes -- this is probably why they did not include it in the pseudocode.
If you really wanted to use the subscript-notation in the pseudocode, you'd have to add an extra term in the subscript that adds up all the durations of all previous episodes. If we then try to work out your example situation, we'd run into another problem though... we'd want to use $V_{t+n+T} = V_{2n-1+T} = V_{15}$ at the first iteration where $\tau = 0$ in the second episode. But, across the two episodes, only $13$ steps have passed, so this does not yet exist! You run into the same issue if you try to work out what happened when $\tau = 0$ in the first episode: applying exactly the same reasoning as in your quote, we would have wanted to use $V_5$ after only $3$ time steps passed in the first episode.
The problem here is that you're trying to use the variable named $t$ in the pseudocode as the subscript for $V$. To get a better idea of what's going on here, let's loop back to the previous page and examine the definition of the $n$-step return:
$$G_{t:t+n} \doteq R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^n V_{t+n-1} (S_{t+n}).$$
Ok, we've got that. Now, let's take another look at the update rule in which we use this quantity:
$$V_{t+n}(S_t) \doteq V_{t+n-1}(S_t) + \alpha \left[ G_{t:t+n} - V_{t+n-1} (S_t) \right].$$
Ok. So $V_{t+n-1}$ appears three times in the update rule. Two times explicitly, estimating the value of $S_t$, and once more "hidden" in the definition of $G_{t:t+n}$, where it is used to estimate the value of $S_{t+n}$. Note very carefully what it is that this update rule is doing; it's updating the state estimate of $S_t$. If you now look at the pseudocode again, you'll see a comment on the line where $\tau$ is computed: ($\tau$ is the time whose state's estimate is being updated)!
What this means, is that in the pseudocode, you should be using $\tau$ as the subscript for $V$! If you do that, it'll at least be correct for the first episode. In the pseudocode, the update rule looks like:
$$V(S_{\tau}) \gets V(S_{\tau}) + \dots$$
Plugging in the subscripts from the mathematical definition leads to:
$$V_{\tau + n}(S_{\tau}) \gets V_{\tau + n - 1}(S_{\tau}) + \dots$$
Since the pseudocode defines $\tau = t - n + 1$, we can substitute above:
$$ \begin{aligned} V_{t - n + 1 + n}(S_{t - n + 1}) &\gets V_{t - n + 1 + n - 1}(S_{t - n + 1}) + \dots\\ V_{t + 1}(S_{t - n + 1}) &\gets V_{t}(S_{t - n + 1}) + \dots \end{aligned}$$
and now it should make sense again from a practical point of view. At every time step $t$, where $t$ measures number of steps of experience that we have simulated, we simply use the latest value function $V_t$ we have available at that time for bootstrapping in the update rule. When $t + 1 < n$, $S_{t - n + 1}$ is undefined. In these cases, the above update rule doesn't work, which makes sense intuitively because we have not yet progressed far enough into the episode to be capable of compute $n$-step returns.
Dennis Soemers ♦
$\begingroup$ Yes, we are in agreement about this, it seems, thank you for your thoughtful answer! $\endgroup$
Hot mix asphalt (HMA) moisture susceptibility analysis: material loss to mechanical properties
Uma Maheswar Arepalli (ORCID: orcid.org/0000-0001-6474-4701)1,
Nivedya Madankara Kottayi1,
Paul Mathisen1,
Veeraragavan Amirthalingam2 &
Rajib B. Mallick1
SN Applied Sciences, volume 2, Article number: 64 (2019)
Numerous studies have been conducted to identify moisture sensitive mixes during mix design by simulating various mechanisms of moisture damage. These methods involve the determination of changes in strength or stiffness of asphalt mixes due to moisture conditioning. The objective of this study is to understand the coupled problem of moisture induced material loss and change in strength/stiffness of the mix. The Moisture Induced Stress Tester (MIST) was used for conditioning samples of a poor performing and a good performing mix. This test applies cyclic pressure to the asphalt mix samples through repeated pulses of water. The effluent, containing aggregates and binder dislodged from the samples during the moisture conditioning process, was collected for testing. Both coated and uncoated/fractured aggregates were found in the effluent. The results indicated that samples that lost more asphalt binder than the others during conditioning may exhibit higher tensile strengths, while those that lost finer material, which is indicative of aggregate breakdown, show lower tensile strength. Both seismic modulus and indirect tensile strength tests were found to be able to differentiate the poor and good performing mixes. For the mixes used in this study, the rate of change in indirect tensile strength during moisture conditioning was found to be strongly correlated to the pre-conditioning modulus of the mix, and a method is suggested for using threshold values of pre-conditioning mix properties for different durations of moisture conditioning during mix design to screen poor mixes in a fast and nondestructive manner.
Most of the common tests that are conducted for the evaluation of moisture susceptibility of hot mix asphalt (HMA) consist of a conditioning process and a mechanical test [1]. These tests predict moisture susceptibility in terms of loss in mechanical properties, or retained strength. However, moisture induced damage may also include loss of material (dislodgement of aggregates and loss of binder compounds), which could influence the loss of mechanical properties or the performance of the mixes. Although no comprehensive study has been reported in this regard, a previous study identified the physical and chemical changes of asphalt binder caused by moisture conditioning [2]. The present study was conducted to evaluate the use of the Moisture Induced Stress Tester (MIST) and to assess the loss of materials and change in mechanical properties as a potential approach for identifying moisture susceptible HMA during mix design. A combination of two approaches was adopted in this study: analyzing the effluent water collected from the conditioning process, and using nondestructive testing equipment.
MIST is a relatively new device that was developed to simulate moisture induced damage in the laboratory [3]. The equipment consists of a chamber with an air pressurized bladder and inlet from a water tank as shown in Fig. 1. The HMA sample is placed inside the enclosed chamber which is filled with water. When the machine is started, the bladder at the bottom of the chamber inflates and deflates repeatedly, and the sample is subjected to cycles of pressurized water. This process simulates the repeated action of traffic loading under moisture conditions in the wheel-path [4]. The MIST conditioning simulates the moisture induced damage in the field, and such conditioned samples are tested for their mechanical properties.
A typical picture of Moisture Induced Stress Tester (MIST): a MIST machine, b MIST chamber, c MIST bladder
Various studies have reported that MIST characterizes moisture susceptible mixes as well as, or even better than, the currently practiced method of AASHTO T 283 [5, 6]. The intensity of moisture induced damage simulated by MIST was found to be a function of the number of cycles and the duration of the pre-MIST dwell time (the time when samples are immersed in water prior to the conditioning cycles). Tarefder et al. [7] found significant damage when doubling the cycles from 3500 to 7000. Varveri et al. [8] reported a greater reduction of strength of the mix with a longer pre-MIST dwell period. The indirect tensile strength test (ITS), Superpave simple performance tests (SPT), Superpave indirect tensile test (IDT), dynamic modulus test and ultrasonic pulse velocity test are some of the mechanical tests that have the potential to be combined with the MIST conditioning process for the identification of moisture susceptible mixes [5, 9, 10]. Tarefder and Ahmad [9] studied the effect of pore structure on moisture damage using the MIST and found that permeable pores correlated well with the permeability of asphalt samples but did not contribute much to the moisture damage. Using the MIST, Shu et al. [10] found similar performance of HMA and WMA, and high resistance of mixes with RAP against moisture induced damage [11]. Recently, an ASTM standard, D7870-13 [12], has been developed for the MIST conditioning process.
A few studies have reported the observation of loss of materials during moisture conditioning in the laboratory. Varveri et al. [8] reported fine aggregates, and Zofka et al. [11] reported traces of asphalt binder, in the water collected from the conditioning process. Studies have also reported the leaching of Poly Aromatic Hydrocarbons (PAH) from asphalt mixes [13, 14] under the impact of water conditioning, though the amounts are very low. Song et al. [15] conducted a study to determine the chemical composition of water-soluble compounds in asphalt and listed various organic compounds and their respective proportions, which varied with the asphalt source and the mix aging methods that were used.
A number of studies have been conducted with the use of nondestructive tests (NDT) on HMA. As early as the 1960s, Stephenson [16] used the compression wave velocity technique to study the change in dynamic properties of HMA in the transitional temperature regime; the approach was later taken up by other researchers and came to be considered one of the most useful nondestructive techniques for determining the properties of HMA mixes. Celaya and Nazarian [17] and Rojas et al. [18] developed quality control guidelines for the construction of HMA layers with the use of NDT. Rojas et al. [18] evaluated HMA mixes in the laboratory using the ultrasonic pulse velocity (UPV) test and concluded that the seismic modulus increases with a decrease in the voids in total mix (VTM) and decreases with a decrease in the binder viscosity; however, the impact of the viscosity was found to become less pronounced as the VTM increased. Norambuena-Contreras et al. [19] examined two types of mixes (dense and porous) using the ultrasonic direct test to determine dynamic modulus, and identified a difference in transmission time between the two types of mixes due to the difference in porosity, which resulted in longer propagation times for the porous mix. Birgisson et al. [20] evaluated the ultrasonic pulse wave velocity test for monitoring moisture damage effects in asphalt mixtures and also studied the effects of saturation levels, aggregate structure and aggregate type on mixture conditioning. The results demonstrated the sensitivity of the seismic modulus to the effects of moisture damage, and a decrease in modulus was observed with an increase in the level of saturation. A visual investigation of failed specimens indicated cohesive and adhesive failures, as well as breakage of aggregates, for conditioned dense graded, granite aggregate mixes. Arabani et al. [21] evaluated the effect of various HMA mix parameters with the UPV, which was found to be sensitive to changes in the asphalt content, filler content, percent of fractured particles, gradation type and compaction method of the HMA. Though various researchers have studied the effect of moisture on asphalt mixes, a comprehensive study of the moisture susceptibility of hot mix asphalt (HMA) in relation to strength, stiffness and material loss had not been carried out.
The objective of this study was to understand the impact of moisture susceptibility of hot mix asphalt (HMA) mixes on strength, stiffness and material loss, and thereby deduce a method to utilize these properties to detect moisture susceptible HMA mixes.
Materials and mixes
Two different types of aggregates and one asphalt binder, procured from the Maine Department of Transportation (DOT), were used to prepare samples. All samples were compacted to 7 ± 1% voids in total mix (VTM) with a Superpave gyratory compactor, using the gradation and asphalt binder content used by Maine DOT for producing regular mixes. One mix, prepared with the "PI" aggregate, has been identified as a poor performing mix (showing evidence of moisture-induced loss of materials and deterioration of the layer) on the basis of field observations and tests with the Hamburg rut tester [22], whereas the other, "SM", mix has been identified as a good performing mix. Table 1 shows the relevant mix design information for the two mixes.
Table 1 Details of mix design
Apart from testing for bulk specific gravity and theoretical maximum gravity (for mixes) to calculate voids in total mix (VTM, %), the testing of mixes consisted of pre-MIST and post-MIST determination of Seismic Modulus (Es, MPa) and indirect tensile strength (ITS, kPa), post-MIST testing of loss of materials (LOM, gram), and dissolved organic carbon (DOC, ppm) analysis of the effluent water, as described below.
Moisture conditioning
In the MIST conditioning process, first the cylindrical samples compacted with a gyratory compactor are placed in a conditioning chamber in the MIST and the chamber is filled with water. The water can be maintained at a specific temperature, and the sample can be kept in the water for a specific period of "dwell" time. The moisture conditioning process used in this study consisted of 20 h of dwell time at 60 °C and 10,000 cycles at 207 kPa and 60 °C. Every 1000 cycles take approximately 1 h. At the end of the conditioning process, the samples are taken out and subjected to post-MIST condition testing such as Es, ITS, LOM and DOC. Note that since this is a simulative test, the number of cycles used in the MIST was selected so as to cause significant (detectable) damage in poor performing mixes, as found from other studies [7, 8, 22].
The Seismic Modulus testing was conducted using an ultrasonic pulse velocity (UPV, V-meter) tester with 150 kHz transducers (ASTM C597-16 [23]). The time for travel of the wave was noted from testing and then utilized in the following formulae to calculate Es.
$$V_{p} = \frac{H}{{t_{v} }} .$$
$$M_{V} = \rho \times V_{p}^{2} .$$
$$E_{s} = M_{V} \times \frac{{\left( {1 + \mu } \right) \times \left( {1 - 2\mu } \right)}}{1 - \mu } .$$
where, Vp = velocity of wave; tv = time of travel; ρ = density; µ = Poisson's ratio, considered to be 0.35.
Indirect tensile strength tests were carried out with a compression testing machine using a loading rate of 50 mm per minute, and the following formula was used for the calculation.
$${\text{Indirect tensile strength (ITS)}}, {\text{kPa}} = \frac{2P}{\pi \times t \times d} .$$
where, P = failure load in N; t = thickness, mm; d = diameter, mm.
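These calculations can be scripted directly. The following Python sketch uses assumed input values for illustration only (they are not measurements from this study); note that with P in N and t and d in mm, the ITS formula yields N/mm² (i.e., MPa), so a factor of 1000 is applied here to report kPa.

```python
import math

def seismic_modulus(height_m, travel_time_s, density_kg_m3, mu=0.35):
    """Seismic (Young's) modulus in Pa from UPV data, per the equations above."""
    v_p = height_m / travel_time_s            # wave velocity
    m_v = density_kg_m3 * v_p ** 2            # constrained modulus
    return m_v * (1 + mu) * (1 - 2 * mu) / (1 - mu)

def indirect_tensile_strength_kpa(load_n, thickness_mm, diameter_mm):
    """ITS per the formula above; N and mm give N/mm^2 (MPa), scaled to kPa."""
    return 2 * load_n / (math.pi * thickness_mm * diameter_mm) * 1000.0

# Assumed illustrative inputs (not measurements from this study):
print(seismic_modulus(0.095, 3.1e-5, 2400) / 1e6, "MPa")     # ~14,000 MPa
print(indirect_tensile_strength_kpa(15000, 95, 150), "kPa")  # ~670 kPa
```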
The effluent from the MIST conditioning process was collected at the completion of the 10,000 cycles. The aggregate particles found in the effluent were collected and checked for gradation by sieve analyses, from which the fineness modulus (ASTM C125-16, [24]) was estimated. Samples of the effluent water were subjected to dissolved organic carbon (DOC) analysis to detect traces of asphalt binder in the effluent and determine their content. The testing utilized a Shimadzu TOC-5000A analyzer, which uses combustion of carbon to CO2 and analysis with a non-dispersive infrared (NDIR) gas detector to quantify total carbon.
Table 2 shows the results of volumetric and mechanical properties, and loss of material from MIST conditioning. It can be seen from the VTM results that there was a slight increase in air voids due to moisture conditioning for both mixes PI and SM. DOC is observed to have a positive correlation with post-MIST ITS based on the PI and SM mix pooled data (Fig. 2). The correlation in Fig. 2 can be explained by the fact that a higher DOC indicates a higher loss of asphalt binder from the mix, and mixes with reduced asphalt content are expected to exhibit a higher tensile strength. This observation is important since in many cases designers rely only on the retained strength or the post-conditioning strength to evaluate the mix's resistance against moisture damage. While this is a reasonable approach, it should be used with caution, since a loss of binder, which is a precursor to more severe damage in the form of loss of aggregates and gradual loosening of the mix in the field, may falsely indicate a high resistance against moisture damage after the laboratory conditioning process.
Table 2 MIST conditioning results–volumetric and mechanical properties, and loss of material
Plot of DOC Vs. post-MIST indirect tensile strength (ITS)–pooled data
The fineness modulus (FM) of the aggregate material lost during the MIST conditioning shows a negative correlation with change in indirect tensile strength as a result of MIST conditioning (Fig. 3). This is because a lower FM indicates a finer gradation, and a finer gradation indicates more breakdown of larger aggregates, which would have a greater weakening effect on the strength of the mix. It is noted that a gradation that is finer than the original gradation means that fine aggregates have been generated during the MIST conditioning process. Since there was no other source of fine aggregates, these additional fine aggregates must have come from the breakdown of larger aggregates in the mix. A higher FM most likely means that larger size aggregates are displaced by moisture as whole particles, and there is relatively less aggregate breakdown in the mix. This is evident from a higher FM for the materials lost by the SM mix, as compared to that of the PI mix. Note that an outlier was removed from the dataset, which improved the correlation significantly.
Plot of fineness modulus versus change in indirect tensile strength (ITS)
The rate of change in ITS was seen to have a good correlation with the pre-MIST Es (Fig. 4). It is noted that the rate of change in ITS per hour was determined as a ratio of loss in ITS due to MIST conditioning and the time taken for MIST conditioning cycles. For example, a loss in ITS of 261 kPa for a conditioning time of 10 h (1000 cycles per hour and a total of 10,000 cycles) results in a rate of change in ITS of 26.1 kPa/h. This can be explained by the fact that mixes with higher stiffness experience lower strain under the applied stress in the MIST and are hence less susceptible to deterioration of the mix. Equation 5 shows the relation, developed from the pooled data from SM and PI mixes.
Plot of pre-MIST seismic modulus versus rate of change in indirect tensile strength (ITS)
$$\text{Rate of change in ITS, kPa per hour} = 219.21 - 0.0151 \times \left( \text{pre-MIST } E_{s} \right)$$
R² = 0.95; where Es = pre-MIST seismic modulus, MPa
This equation can be utilized to estimate the loss of ITS throughout the design life of the pavement, if the number of hours the pavement is subjected to moisture is known. The data can then be utilized to estimate the minimum initial Es required to ensure a minimum ITS of the mix throughout the design life. For example, after estimating the loss expected during the design life, one can choose a mix with a high enough ITS at construction that, even after loss due to moisture damage, the mix remains sufficiently strong for adequate performance. The relationship presented in Eq. 5 can be explained as follows. The change in tensile strength of mixes during the conditioning process is due to the growth of cracks formed by the repeated cycles of loading of the sample in water. The dependence of the crack growth rate (at a specific temperature) on the material is exhibited by the relationship between the rate of change in indirect tensile strength and the pre-conditioning seismic modulus value. The equation can also be utilized to estimate the expected change in ITS due to an expected variation in the pre-MIST seismic modulus. For the data used in this study, a mean of 13,660 MPa and a standard deviation of 681 MPa were observed for the pre-MIST seismic modulus.
Since variability in Es values can be expected in the field, due to variability in materials and construction, it is beneficial to evaluate the impact of that variability on the rate of change in ITS. The results will be more practical than a single point value, since a confidence interval for the resistance of the mix against moisture damage can be determined. To obtain such a confidence interval, a probabilistic method is generally used; in this case, Monte Carlo analysis was selected as the tool. This technique generates a value of the dependent variable based on a random selection of data from the input variables, continues for a specified number of simulations, and finally reports the expected distribution of the dependent variable. Utilizing the mean and standard deviation obtained in this study, a Monte Carlo simulation of the change in ITS was conducted, and the results are shown in Fig. 5. The 90% confidence interval for loss of ITS (per hour of moisture conditioning) is −4 to 30 kPa, with a mean of 13 kPa per hour.
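The simulation is straightforward to reproduce. The sketch below assumes a normal distribution for the pre-MIST Es (the distributional form is an assumption on our part; only the mean and standard deviation are reported) and recovers figures close to those quoted above:

    # Sketch: Monte Carlo simulation of the rate of change in ITS via Eq. 5.
    # Assumes E_s ~ Normal(13660, 681); only mean and SD are reported in the study.
    import numpy as np

    rng = np.random.default_rng(1)
    e_s = rng.normal(13660.0, 681.0, 100_000)   # pre-MIST seismic modulus, MPa
    rate = 219.21 - 0.0151 * e_s                # kPa lost per hour of conditioning
    lo, hi = np.percentile(rate, [5, 95])       # 90% confidence interval
    print(f"mean = {rate.mean():.0f} kPa/h, 90% CI = ({lo:.0f}, {hi:.0f}) kPa/h")
    # Expected output: mean ~13 kPa/h, CI ~(-4, 30) kPa/h, matching Fig. 5.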
Results of Monte Carlo analyses
Based on the information regarding the relatively poor performance of one of the aggregates (PI) and the better performance of the other (SM), the data were then separated to determine if there was any statistically significant difference between the test results of the two mixes. From the statistical analysis, it was found that there was a significant difference between pre-MIST and post-MIST ITS for the mixes with PI aggregates (poor performing) and no significant difference was found in the case of mixes with SM aggregates (better performing). This showed that mixes with SM aggregates were more resistant to damage caused by moisture conditioning. Similar results were also found for the Seismic Modulus (Table 3). Therefore, the seismic modulus can be suggested as a potential method of screening mixes for their moisture susceptibility—the advantage is that the method is fast and nondestructive. Changes in both ITS and the Seismic Modulus during the conditioning process in moisture susceptible mixes are due to the loss of integrity of the mix, as a result of loss in cohesion or adhesion or breakdown of material. Note that Seismic Modulus has been found to be sensitive to moisture effects [20, 25].
Table 3 Results of statistical analyses of seismic modulus and indirect tensile strength data (see Table 2 for the data)
One more point is that the variability of the seismic modulus is much lower than that of the ITS, as evident from its lower COV, specifically in the post-MIST condition: 1.95 (PI mix) and 2.95 (SM mix), compared to 10.8 (PI mix) and 13.3 (SM mix) for ITS.
Estimation of pre-MIST threshold values
Equation 5 can be utilized to estimate minimum (threshold) values of pre-MIST ITS and seismic modulus (Es) in order to ensure adequate performance throughout the pavement design life. In order to accomplish this, test and field performance (with respect to moisture damage) data from twenty-one mixes were collected from Maine DOT. It was found that the only two poor performing mixes have post-MIST ITS values at or below 500 kPa whereas the good performing ones have > 500 kPa post-MIST ITS values [22]. Hence 500 kPa can be taken as a minimum desirable ITS after the expected number of hours of moisture damage for adequate performance of a mix in the field. Therefore, knowing the number of hours of expected moisture damage conditioning (or exposure to moisture in the field), and taking the minimum value of 500 kPa, it is possible to estimate a threshold value of pre-MIST Es for different values of Pre-MIST ITS, as shown in Fig. 6.
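Rearranging Eq. 5 gives the threshold calculation behind Fig. 6 directly. The helper below is a hypothetical illustration (the function name and sample inputs are ours, not the paper's):

    # Sketch: minimum pre-MIST E_s so post-conditioning ITS stays >= 500 kPa.
    # Rearranged from Eq. 5; function name and sample inputs are illustrative.
    def min_seismic_modulus(pre_its_kpa, hours, min_its_kpa=500.0):
        allowed_rate = (pre_its_kpa - min_its_kpa) / hours  # affordable loss, kPa/h
        return (219.21 - allowed_rate) / 0.0151             # threshold E_s, MPa

    for pre_its in (700.0, 800.0, 900.0):
        print(f"pre-MIST ITS {pre_its:.0f} kPa -> "
              f"E_s >= {min_seismic_modulus(pre_its, hours=10.0):.0f} MPa")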
Plots of threshold values of pre-MIST seismic modulus versus duration of moisture conditioning for different pre-MIST ITS
The utility of the plots presented in Fig. 6 is as follows. It would be desirable for a mix designer to identify mixes that do not meet the required criteria even before tests on moisture-conditioned mixes are performed. Both MIST conditioning and indirect tensile strength tests are time-consuming, so a desirable option is to find out that a mix does not meet the minimum retained strength even before MIST conditioning. Instead of running the ITS and MIST tests first, the mix designer can assume a pre-MIST ITS on the basis of his/her experience with similar mixes, check the seismic modulus of the designed mix (which takes very little time and is nondestructive), and then use the chart to determine whether the seismic modulus meets the minimum value for the specific time of conditioning. The same samples could then be used for pre-MIST indirect tensile strength tests, or for post-MIST seismic modulus tests if the indirect tensile strength test is avoided altogether. If, after testing, the strengths are higher than what was assumed, the mix can be taken as adequately resistant, since the minimum required seismic modulus decreases with an increase in pre-MIST ITS for a specific duration of moisture conditioning. If, however, the strength is found to be lower than the assumed value, the designer can improve the mix design. This will help the agency reduce the chance of ending up with mixes that fail to meet the minimum post-conditioning ITS requirement, and reduce the time spent on actual MIST conditioning (Fig. 6).
Conclusions and recommendations
The study reported in this paper demonstrated the use of MIST, and the seismic modulus and indirect tensile strength tests as effective moisture conditioning and testing methods, respectively. The methods can be utilized for detecting moisture susceptible HMA mixes during the mix design process. To minimize the use of samples, the designer can also make predictions regarding the moisture susceptibility on the basis of seismic modulus results alone, since the seismic modulus values were found to be sensitive to moisture induced damage. Furthermore, for the expected number of moisture exposure hours, the designer can also estimate the desirable seismic modulus for a range of dry tensile strengths of the mixes. This process can also help avoid the use of MIST in the preliminary mix selection process, and select a few mixes for more rigorous testing. Figure 7 presents a proposed flowchart for screening moisture susceptible mixes. The following specific conclusions and recommendations are made on the basis of this study.
Samples undergoing a higher loss of asphalt binder during moisture conditioning may exhibit higher tensile strengths than other samples.
Samples with aggregate particle breakdown will exhibit lower tensile strengths compared to samples with little or no aggregate particle breakdown.
The rate of change in indirect tensile strength during moisture conditioning is strongly correlated to the pre-conditioning modulus of the mix, and a proposed equation can be utilized to estimate the loss of strength for a given mix.
Based on pre and post-conditioning tests, both seismic modulus and indirect tensile strength tests were able to differentiate between good and poor mixes.
Threshold values of seismic modulus of pre-conditioning mixes for different durations of moisture conditioning can be utilized during mix design to screen poor mixes in a fast and nondestructive manner.
Suggested flowchart for using MIST and seismic modulus (Es)
Solaimanian M, Harvey J, Tahmoressi M, Tandon V (2003) Test methods to predict moisture sensitivity of hot-mix asphalt pavements. In: Moisture sensitivity of asphalt pavements—a national seminar, San Diego, February 4–6, 2003
Ahmad M, Mannan UA, Islam MR, Tarefder RA (2017) Chemical and mechanical changes in asphalt binder due to moisture conditioning. Road Mater Pavement Des. https://doi.org/10.1080/14680629.2017.1299631
Buchanan MS, Moore V, Mallick R, O'Brien S, Regimand A (2004) Accelerated moisture susceptibility testing of hot mix asphalt (HMA) mixes. In: Paper presented at the 83rd annual meeting of the Transportation research board, Washington
Pinkham R, Cote SA, Mallick RB, Tao M, Bradbury RL, Regimand A (2013) Use of moisture induced stress testing to evaluate stripping potential of hot mix asphalt (HMA). In: Paper presented at the 92nd annual meeting of the Transportation Research Board, Washington
Chen X, Huang B (2008) Evaluation of moisture damage in hot mix asphalt using simple performance and superpave indirect tensile tests. Constr Build Mater 22(9):1950–1962
Mallick RB, Pelland R, Hugo F (2005) Use of accelerated loading equipment for determination of long term moisture susceptibility of hot mix asphalt. Int J Pavement Eng 6(2):125–136. https://doi.org/10.1080/10298430500158984
Tarefder RA, Weldegiorgis MT, Ahmad M (2014) Assessment of the effect of pore pressure cycles on moisture sensitivity of hot mix asphalt using MIST conditioning and dynamic modulus. J Test Eval 42(6):1–11. https://doi.org/10.1520/JTE20130095
Varveri A, Avgerinopoulos S, Scarpas A (2016) Experimental evaluation of long- and short-term moisture damage characteristics of asphalt mixtures. Road Mater Pavement Des 17(1):168–186. https://doi.org/10.1080/14680629.2015.1066705
Tarefder R, Ahmad M (2017) Evaluation of pore structure and its influence on permeability and moisture damage in asphalt concrete. Int J Pavement Eng 18(3):274–283
Shu X, Huang B, Shrum ED, Jia X (2012) Laboratory evaluation of moisture susceptibility of foamed warm mix asphalt containing high percentages of RAP. Constr Build Mater 35:125–130. https://doi.org/10.1016/j.conbuildmat.2012.02.095
Zofka A, Maliszewski M, Bernier A (2014) Alternative moisture sensitivity test. In: Paper presented at the 9th international conference environmental engineering, Vilnius, Lithuania
ASTM D7870 (2013) Standard practice for moisture conditioning compacted asphalt mixture specimens by using Hydrostatic Pore Pressure. American Society for Testing and Materials
Brantley AS, Townsend TG (1999) Leaching of pollutants from reclaimed asphalt pavement. Environ Eng Sci 16(2):105–116
Kriech A, Kurek J, Osborn L, Wissel H, Sweeney B (2002) Determination of polycyclic aromatic compounds in asphalt and in corresponding leachate water. Polycycl Aromat Compd 22(3–4):517–535
Song Y, Wang C, Zhang Y (2011) Determination of the chemical composition in asphalt aqueous solutions by SPE and GC/MS. Pet Sci Technol 29(15):1590–1595
Stephenson RW (1968) Temperature effects of the compressional wave velocities of asphalt-aggregate mixtures. Doctoral dissertation, Oklahoma State University, Oklahoma
Celaya M, Nazarian S (2008) Implementation of quality management of hot-mix asphalt with seismic methods. Transp Res Rec J Transp Res Board 2057:99–106
Rojas J, Nazarian S, Tandon V, Yuan D (1999) Quality management of asphalt-concrete layers using wave propagation techniques. University of Texas at El Paso, Texas
Norambuena-Contreras J, Castro-Fresno D, Vega-Zamanillo A, Celaya M, Lombillo-Vozmediano I (2010) Dynamic modulus of asphalt mixture by ultrasonic direct test. NDT and E Int 43(7):629–634
Birgisson B, Roque R, Page G (2003) Ultrasonic pulse wave velocity test for monitoring changes in hot-mix asphalt mixture integrity from exposure to moisture. Transp Res Rec J Transp Res Board 1832:173–181
Arabani M, Kheiry PT, Ferdosi B (2009) Laboratory evaluation of the effect of HMA mixture parameters on ultrasonic pulse wave velocities. Road Mater Pavement Des 10(1):223–232. https://doi.org/10.1080/14680629.2009.9690189
Arepalli UM (2017) A study of moisture induced material loss of hot mix asphalt (HMA). Doctoral dissertation, Worcester Polytechnic Institute, Worcester
ASTM C597 (2016) Standard test method for pulse velocity through concrete. American Society for Testing and Materials
ASTM C125 (2016) Standard terminology relating to concrete and concrete aggregates. American Society for Testing and Materials
Nazarian S, Yuan D, Tandon V, Arellano M (2002) Quality management of flexible pavement layers with seismic methods. The Center for Transportation Infrastructure Systems, The University of Texas at El Paso, Texas
Department of Civil and Environmental Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA: Uma Maheswar Arepalli, Nivedya Madankara Kottayi, Paul Mathisen & Rajib B. Mallick
Department of Civil Environmental Engineering, Indian Institute of Technology Madras, Chennai, Tamil Nadu, 600036, India: Veeraragavan Amirthalingam
Correspondence to Uma Maheswar Arepalli.
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Arepalli, U.M., Madankara Kottayi, N., Mathisen, P. et al. Hot mix asphalt (HMA) moisture susceptibility analysis: material loss to mechanical properties. SN Appl. Sci. 2, 64 (2020) doi:10.1007/s42452-019-1841-1
Keywords: HMA; seismic modulus; indirect tensile strength; dissolved organic carbon; moisture induced stress tester; loss of aggregates; loss of asphalt binder
Are endothermic bombs possible?
Explosives as we know them are exothermic: they produce heat and light, or, in other words, energy is expelled.
Let's consider endothermic bombs: weapons that, upon detonation, consume heat and/or light and/or nearby electricity.
Assuming there is sufficient funding to develop and manufacture this technology, and for whatever reason, there is a use for it, is it feasible to construct a detonatable weapon for this purpose?
If so, where can energy be drawn from, or what will it actually do? Will nearby electricity be cut out? Will the room be drained of all light? Will it suddenly get colder? etc.
Please, no handwavium. Also note that "economically viable" as part of "feasible" would be nice but isn't necessary for an accepted answer.
science-based reality-check weapons explosions
Zxyrra
Comments are not for extended discussion; this conversation has been moved to chat. – Tim B♦ Dec 1 '16 at 16:06
Gas expansion
This process is already around us on an everyday basis. It is the thing that makes it rain.
When you have a gas enclosed in some capsule with lower pressure (ideally vacuum) around it, releasing the gas makes it expand rapidly at the cost of internal energy, effectively cooling the gas and its surroundings. It does not work the same for every gas; for example, He and H have the effect reversed at room temperature (they do achieve cooling at much lower temperatures, however).
In the real world, it works in clouds. When the Earth produces and releases hot air, it rises because it is lighter than cold air. The higher you are, the less pressure there is around, and the rising gas expands, which cools it. At a height of ~2 km, it hits the temperature needed for condensation of water vapour into water droplets, effectively making the "clouds" you can see. When it expands higher and higher (it has to be fed from the bottom, because heavier and therefore colder gas tends to drop back down as rain), it can even make storm clouds (Cumulonimbus), which are really tall. At their top parts, it effectively expands into near-vacuum, thus cooling itself A LOT. At that point, a hailstorm may form, which is just extremely cold precipitation.
For a bomb utilising such an expansion, you don't need any reaction, nor ignition, nor any kind of magic. You only need a capsule that can store A LOT of gas for release. Upon release, the gas will expand rapidly (thus making the effect of an "explosion", pressure wave etc.), and it will cool down in the process. If it is released fast enough, it will consume its internal energy for expansion, thus cooling surrounding objects.
The problem with making such a bomb is that you need to start with room-temperature, highly pressurised gas, which is hard to produce. If you compress a gas, it tends to heat up, so you need to do it in steps. Also, the capsule must be strong enough to hold such a pressure. That might also be a problem: the gas cylinders used nowadays are not really robust enough, and hold too little gas, for your bomb purposes.
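For a rough sense of scale, an ideal reversible adiabatic (isentropic) expansion obeys T2 = T1 (P2/P1)^((γ−1)/γ); a real burst into open air is far less efficient (closer to a free, Joule–Thomson-like expansion), so the sketch below — with assumed cylinder figures — is only an upper bound on the cooling:

    # Sketch: upper-bound cooling from isentropic expansion of a compressed gas.
    # Cylinder pressure and starting temperature are assumed illustrative values.
    T1 = 293.0     # starting temperature, K
    P1 = 200e5     # cylinder pressure, Pa (200 bar)
    P2 = 1e5       # ambient pressure, Pa
    gamma = 1.4    # heat capacity ratio for a diatomic gas such as N2

    T2 = T1 * (P2 / P1) ** ((gamma - 1.0) / gamma)
    print(f"T2 = {T2:.0f} K")   # ~64 K for these inputs -- an idealised limit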
YoYoYonnY
Antoine Hejlík
Wouldn't friction and compression of the stuff around the bomb create more heat than the expansion would absorb? – Annonymus Nov 30 '16 at 13:54
@Annonymus Preparing the bomb will be a really hot job; compressing stuff can heat it a lot. Yes, more heat than it eats later, but you can deal with that in the factory. Probably you will end up with the gas in liquid form, which will presumably be stable if not exposed to higher temperatures and if the containment is strong enough. It will want to explode and expand. The change from liquid to gas on expanding is another heat-eater, however. I would look at the behaviour of gas cylinders and their safety. It will be really costly ammo compared to conventional explosives (and sensitive to temperatures) – Antoine Hejlík Nov 30 '16 at 14:14
I mean on explosion actually. The expansion of the gas will eat some heat, but the friction between the gas and anything around the bomb, as well as the compression of the stuff around the bomb (likely air), will also produce heat. I'm not actually sure how much energy is added/subtracted by either process, but at a rough estimate it seems to me like more heat would be generated than absorbed – Annonymus Nov 30 '16 at 14:18
Well done sir, have a +1! Just two points: 1) If you compress a gas so much its atoms collapse into neutrons, you get the effect even better! 2) Don't use hydrogen for this, as it heats when expanded. That would be better for a real bomb instead. :) – RudolfJelin Nov 30 '16 at 15:59
@RudolfL.Jelínek No, if you compress the gas so much it is neutrons, the expansion results in unstable free neutrons, which rapidly decay over a few minutes, converting a non-trivial percentage of their mass to energy. I suspect that this will overwhelm the decompression energy costs. – Yakk Nov 30 '16 at 19:01
This could be a bit of an underwhelming answer, but it is the best I can do while sticking to realism.
For absorbing heat
One simple solution is to use a device that causes rapid evaporation/sublimation of a liquid or solid. We already have something similar — a bottle full of liquid nitrogen. To make the device better and easier to store, you can pressurize it to prevent premature evaporation (so that your character can say things like "hey look, this centuries-abandoned arsenal seems to contain some endothermic bombs, we can use that to...")
For absorbing light
The ozone layer is absorbing ultraviolet light right now — at least on the sunlit side of this planet, hopefully. However, gases like ozone are frequency-specific in their absorption of light, meaning that light of a different colour/frequency cannot be absorbed by the same gas, so a mixture of many gases is needed to absorb a broad range of light (you also have to be aware of re-emission, where the energized gas releases that energy at a lower frequency).
This will be boring, but if the purpose is to block light rather than remove the photonic energy, just use a smoke bomb.
For electricity
I can only imagine this being useful when the target is some kind of machinery that runs on electricity. Traditional EMPs will probably do the job, but if you want to physically reduce the amount of electricity running in the system, try a graphite bomb, which will cause short-circuiting.
Some physics/chemistry
An endothermic chain reaction is not possible because of this equation: $$ΔG = ΔH - T\times ΔS$$ Where ΔG is the change in Gibbs free energy, which must be a negative number for spontaneous reactions (those that can go on without people helping along)
ΔH is the change in enthalpy; in endothermic reactions, it is positive
T is temperature; it will decrease in the case of endothermic reactions, and it is always positive because it is measured in kelvins
ΔS is the change in entropy; cooling generally means that this value is positive, but it can be negative in the case of evaporation
These being said, as the endothermic reaction carries on, T will decrease, so no matter the positivity/negativity of your ΔS, the reaction will always start becoming non-spontaneous, because the effect of the TΔS term is shrinking while ΔH is positive and constant, causing ΔG to become positive.
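To make the sign argument concrete, here's a toy calculation with made-up values for an endothermic, entropy-driven reaction (ΔH = +50 kJ/mol, ΔS = +200 J/mol·K — both invented for illustration):

    # Sketch: how an endothermic reaction self-quenches as it cools.
    # Thermodynamic values below are made up purely for illustration.
    dH = 50_000.0   # J/mol, positive: endothermic
    dS = 200.0      # J/(mol K), positive: entropy-driven

    for T in (300.0, 275.0, 250.0, 225.0):
        dG = dH - T * dS
        print(f"T = {T:.0f} K  ->  dG = {dG / 1000:+.1f} kJ/mol")
    # dG crosses zero at T = dH/dS = 250 K; below that, the reaction
    # is no longer spontaneous and simply stops.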
JDługosz
Luna
A weapon that consumes electricity could have a major advantage over an EMP when you want to deny an opponent the use of their electronics, yet leave the device intact. – a CVn♦ Nov 30 '16 at 8:48
Don't know how scientific this is, but if the device could absorb all the electricity in an area wouldn't this make a very good clean bomb? It would also disrupt the electrical impulses in the human brain, causing death while not destroying any infrastructure. – Snowlockk Nov 30 '16 at 9:54
@Snowlockk I am not sure what you mean by "absorbing electricity", since electricity is really just moving charged particles. You can slow them down perhaps, if you know how exactly they are moving and use a magnetic field, but for small and complex circuitry, like the brain or a computer, you might as well just look for the plug. – Luna Nov 30 '16 at 18:29
It should be noted that a simple foil chaff bomb will absorb radio waves. Usually used for simply disrupting radar, but an appropriate design could "kill" most radio communications over a limited area. – Hot Licks Dec 1 '16 at 22:23
Surprisingly (at least to me, at first) there really are chemical reactions that both go forward usefully quickly and are strongly endothermic. Nor are exotic chemicals necessarily required:
A classic example is mixing ice and salt to get a lower-than-freezing temperature -- as was done to make ice cream, before refrigeration was available.
http://physicsbuzz.physicscentral.com/2014/06/the-science-of-ice-cream.html
Another example — one that works fine starting from strictly room-temperature reactants — is the "instant ice pack" used in first aid (great for sprains); see https://en.wikipedia.org/wiki/Ice_pack#Instant_ice_packs
The mechanism for what appears to be a violation of the first law of Thermodynamics is that the second Law gets involved, via the Gibbs Free energy: https://en.wikipedia.org/wiki/Gibbs_free_energy
I don't know of any such reaction that would be much use as a weapon, except against a poor goldfish in a bowl :-(
Catalyst
If you created something like a fuel air bomb with these you might, but the effect would still be localized and minor; once it cools the surroundings enough there is not enough ambient heat to sustain the reaction. They are self-limiting. You have made a few square blocks cold and... that's it. – John Nov 30 '16 at 14:21
In the ice cream maker case, the salt just lowers the freezing point of water, making liquid possible at lower temperatures. The ice was already below freezing, and the salt dissolving into the water doesn't take any energy out of the system. – mskfisher Nov 30 '16 at 14:40
@mskfisher: mixing ice and salt actually lowers the temperature, below that of the ice itself! The entropic term (T times delta S) overpowers the enthalpy term. Strange, but it works; try an old-fashioned ice cream freezer and see! (Experiments with tasty results are the best kind ;-) – Catalyst Nov 30 '16 at 16:49
The endothermic bomb would be a heat sink. The problem for a point sink is that you can only go down to ~0 kelvin, which is not that impressive compared to the at least ~500 kelvins available to an exothermic reaction. The point sink simply has no "suction" comparable to the "expansion" from a heat source. Thus there is no "explosion" from a point sink.
The bomb would therefore need to spread its substance over a big volume. One mechanism could be that the molecules are pressurized before release, and that under pressure the molecules do not react, but once released the gas spreads and then undergoes the reaction. From that, a rough approximation can be made of the power the sink would "suck"; inversely, you could then try to find a reaction that produces the required effect.
Another option is for the energy driving the reactions to be sucked directly from the target. It would not be an explosion, but more like a gas attack.
EDIT: One thing to notice is that unlike an exothermic explosion, where the frontier pushes itself outward, in an endothermic one the frontier would be "sucked" back as soon as the reaction starts. Thus the reaction needs to be slower. The effective mechanism could be the expansion of stabilizing pressure after the sink has "sucked" energy. In an exothermic explosion the effective mechanism is the expansion at the beginning, with some mild suction afterwards as the pressure stabilizes. Exothermic reactions are simply better because they can expand, and they have no thermal limit like the endothermic kind.
$\begingroup$ "Impressive" may depend on how you phrase it. You can heat the environment to 500 Kelvins and you've only doubled the energy. But reduce the environment to 2 Kelvins, and it's been reduced to less than a hundredth of its original energy. $\endgroup$ – James Beninger Dec 1 '16 at 5:01
$\begingroup$ Yes, "impressive" is relative, but it's about the difference. Just lighting a match changes temperature more than what can be achieved only in a cryonic lab. The potential of making a endothermic "explosion" is simply too inferior to exothermic. $\endgroup$ – user3644640 Dec 1 '16 at 7:31
$\begingroup$ In purely destructive terms, you're right. But a "freeze bomb" that sucks out every bit of energy from an area impresses me just as much, if not more. From a military perspective, it would be superior in many circumstances. It would be less damaging to infrastructure, and the people (prisoners) may be revivable since they were flash-frozen, giving their blood no chance to crystalize. $\endgroup$ – James Beninger Dec 1 '16 at 14:03
No handwavium, but not possible with our current technology: convert energy into matter. This requires a considerable amount of energy and will suck in the heat around it. To get 1 g of matter, you need $E = mc^2 \approx 9 \times 10^{13}$ joules of energy, which is quite a lot. If I did the math right, that is enough to cool roughly a hundred thousand tonnes of water by 190 degrees.
Cem Kalyoncu
I'd be interested in seeing the math you did to reach that answer. Also, how would one go about converting energy into matter? – automaton Nov 30 '16 at 17:40
No idea about how. That requires proper understanding of how the universe really works. But the math is simple: E = mc^2 (m is in kg and c is in meters/sec, E will be in joules), convert joules to calories, 1 cal = 1g of water heated by 1 degree. This will give you a rough estimate. – Cem Kalyoncu Nov 30 '16 at 21:29
Endothermic Explosive
Entropic explosives are driven by entropy rather than enthalpy.
Many modern explosives are actually endothermic. However the effect is nothing like what you are expecting. In short, for the reaction to be driven by entropy, it would basically need to generate a HUGE amount of gas in a short space of time.
The mechanical, concussive effect would make it almost indistinguishable from an exothermic explosive.
More information can be found on the Chemistry SE https://chemistry.stackexchange.com/questions/41979/are-non-exothermic-explosions-possible
Cold Bomb
However, what you described is not an endothermic explosive, but rather a cold bomb: a device that removes entropy from its surroundings instantly.
This breaks the second law of thermodynamics.
Considering the bomb and its surroundings as a closed system, the bomb would be reducing the entropy of a closed system. This is forbidden by the second law.
Any such device must sidestep this law, possibly by "opening up the system" — using a wormhole, say.
Aron
You need to include the idea that the factory making the bomb is also in that closed, extended system. In the factory you do the reverse process by preparing the bomb. For example, to make a pressurized-gas bomb, you will need a lot of work, and a lot of heat will be emitted. Then the whole system is thermodynamically okay, and locally decreasing entropy is okay. The whole-universe system does matter. – Antoine Hejlík Dec 1 '16 at 7:30
Won't work.
The second law of thermodynamics requires that entropy increases, and implies that to decrease entropy, you have to do work that adds more energy to the system than you took out.
So, for instance, if you have two objects at room temperature, you'll need to do work, i.e., transfer energy into the system, to move heat from one to the other. That means your bomb can't be at room temperature, since to remove Q Joules of heat from the room, you'll need to add W Joules of work done, so the total energy in the system is now Q+W Joules, and for any combination of Q and W, the total energy of the system is greater than Q alone.
"Bomb" implies a chemical reaction. To generate a highly endothermic reaction, you run into the same issue as above. The lowest energy state, i.e., what's left after every possible exothermic reaction possible, given the reagents present, is the most stable, just as a body at the bottom of a cliff is gravitationally more stable than at the top. In order to get your chemical mixture to move to a different combination, you'll need to supply energy, as described above. This allows you to move to a different chemical mixture that CAN potentially move to a mixture of lower energy state and WILL, unless there is an energy barrier in the way. Think of it as standing on a cliff with a high wall between you and the edge. The higher the wall, the less risk of falling off. On the flip side, if you're starting from the bottom of the cliff, the higher the wall, the more energy you need to expend initially climbing it, and the more energy that will be given back out when you climb down the other side.
In thermodynamic terms, you need a high initial concentration of energy to start the reaction, followed by part of it being given back. Now, bear in mind that the energy source is going to be air at room temperature. As I pointed out in the first paragraph, some 200 words ago, you can't get much energy out of a system at the same temperature, without doing work, so let's assume a fuse to start the process, give it a boost over the cliff.
The fuse pushes the first few molecules into the higher energy state, and they give back some energy, a fraction of what was supplied, which can push a fraction of the number of molecules that initially reacted. So, let's say 40% reacted in the second round. In the third, 40% of the second round number will react, and so on. The reaction cannot sustain itself; more energy is needed.
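In other words, with a constant propagation fraction of $40\%$ (a number picked purely for illustration), the total amount of material that ever reacts is bounded by a geometric series: $$N_{\text{total}} = N_0 \sum_{n=0}^{\infty} 0.4^n = \frac{N_0}{1-0.4} \approx 1.67\,N_0,$$ so the chain fizzles out after consuming only about two-thirds more material than the fuse itself triggered.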
Bottom line: from the above, the only energy absorbed is that generated by the fuse. On the other hand, part of the energy released as the products roll down the wall is lost to the surroundings. Net result: an energy gain by the environment.
TL:DR, endothermic bombs don't work, unless of course, you're using them to damp a runaway exothermic reaction.
nzaman
Don't paraphrase. The law includes an important stipulation in that it "requires that entropy increases" in a closed system. If entropy in the universe is increased during the creation and detonation of the "bomb" then the law is not violated. – Samuel Nov 30 '16 at 21:15
@Samuel: Unless it's a closed system, we may as well throw away the books and say, "anything goes". – nzaman Dec 1 '16 at 13:16
Also wrong. It's certainly not "anything goes" and it's not necessarily "nothing goes". If it is the latter case, your reasoning is still wrong. – Samuel Dec 1 '16 at 15:51
@Samuel: If the universe isn't a closed loop, then as you've stated, the laws of thermodynamics don't apply. Also, the laws of conservation become false and perpetual motion becomes possible. As far as I'm concerned, that's anything goes, magic, supernatural, whatever else you might decide to call it. And what's "nothing goes" and how does it affect my reasoning? – nzaman Dec 2 '16 at 5:56
Long Answer
There are 2 parts to your question.
The bomb part. I am assuming this means that there is a spontaneous reaction with an epicenter and a radially propagating explosion.
The endothermic part. This means that as the reaction progresses, it cools the surroundings as it passes.
Let us assume that such a reaction exists, which was spontaneous (had negative Gibbs energy) and endothermic:
$$ G(p,T) = U + pV - TS \\ G(p,T) = H - TS $$
This would mean that as the reaction proceeds, the reactants and products expand (given the assumptions surrounding the explosion). For an endothermic reaction, $\Delta H$ is positive: the internal energy $U$ rises from absorbing heat, and the volume $V$ of the fluid components of the reaction increases from spreading out due to the explosion.
$T$ is decreasing over time (as it's an endothermic reaction).
$S$ is dependent on the nature of the reaction. For the reaction to be spontaneous at the start (our assumption), and assuming the mechanics of the reaction stayed the same as it went along (not true — I will explain why later), the overall Gibbs energy would still tend towards a positive value over time.
When it reaches 0, the reaction stops being spontaneous (it will keep propagating only as long as it is kinetically permissible, i.e., activation constraints are satisfied).
Why I think $S$ will decrease over time
The reaction spreads radially so the reaction components themselves have to diffuse radially from the epicenter.
Considering the Boltzmann (stochastic, state-based) entropy as a measure of the system, in the system of the explosion the fluid components' internal energy only decreases. This means that the overall disorder of the system decreases — thereby decreasing the entropy. The gaseous components dominate the measurement of entropy; therefore, $S$ will decrease over time.
An answer but not to your question
Drop a balloon filled with liquid nitrogen and it will be close to what I think you're imagining.
This isn't really a 'reaction', but it will have the effect of an endothermic bomb (the shock blast will be from the expanding nitrogen), and the surroundings will be cooled due to the nitrogen absorbing latent heat.
How about a hypothetical endothermic self-replicating nanothermitic reaction?
The principle behind a self-replicating nanothermitic reaction is that the reaction components produce, from the surroundings, the feedstock required for the reaction to continue indefinitely.
The absorbed energy would provide access to the high-energy quantum states required to pass the activation energy barrier (of this hypothetical reaction pathway), and the solid products emitted would be left at such a low temperature that they supercool their surroundings as they pass through.
This is just food for thought, but I don't think that the reaction conditions on Earth can sustain such a reaction.
lohithbb
A short response not repeating what's been said in other answers, just adding conceptual clarification:
Assuming that you can indeed cause a large endothermic reaction (which wouldn't be an "explosion"), one thing to realize is that an exothermic reaction really just produces heat. Yes, light is a byproduct — when something heats up it emits a range of EM radiation, including visible light if hot enough — but in the end it's all just heat. Chemicals reacting endothermically simply get cold, and therefore won't directly "absorb" light, electricity, or any other form of energy. The net result of a large endothermic reaction is that the immediate area gets colder than it was before.
Still, what might rapid cooling do? Rapid contraction of some brittle solids makes them fracture/shatter (try dipping hot glass in ice water). Unprotected electronic systems will collect frost from water vapor in the air, potentially shorting them out when they thaw. Living creatures get frostbite. All in all, the effect is pretty mild if the temperature isn't sustained, and weaponizing it would be difficult without alien tech or other handwaving.
automaton
If your 'civilization' were in the ocean, or another body of water, there are several reactions that would quickly pull heat from the water; ammonium chloride, for example, when mixed with water reduces the temperature of the water around it (see YouTube for demonstrations).
Encase the ammonium chloride in a 'bomb' and detonate it in the water.
Review of Gradient and Original Functions
Notations for the derivative (investigation)
Rate of change functions
Deriving the Power Rule
Power Rule (x^n)
Derivative of a Sum (x^n)
Power rule (ax^n)
Derivative of a Sum (ax^n)
Power rule (x^n and ax^n)
Derivative of a Sum (x^n, ax^n)
Further derivatives using power rule
Derivative of a polynomial (expansion then power rule)
Evaluate Derivative at a point
Tangents (gradients and equations using power rule)
Simple applications (using power rule only)
Speed, displacement and velocity
Key Features of Graphs and the Derivative
Sketching functions using derivative information
Maximums and Minimums Using Calculus (power rule only)
Primitive Functions
Applications of primitive functions
Differentiation holds the key to a beautiful mathematical relationship between Speed and Distance, and Velocity and Displacement.
Distance - a scalar quantity that describes how far an object is from a fixed point. As this is a distance it is measured in units of length such as km, m, cm.
Speed - a scalar quantity that describes how fast an object is moving, measured using a distance/time unit such as km/h or m/s. We usually use $s(t)$ to denote a function describing speed over time.
Displacement - a vector quantity that describes the distance an object is from a fixed point together with its corresponding direction. We usually use $x(t)$ to denote a function describing displacement over time.
Velocity - a vector quantity that describes the speed of an object and its corresponding direction. We usually use $v(t)$ to denote a function describing velocity over time.
The derivative of a distance function gives you a function to determine the speed.
So if the distance of an object can be described using $d(t)=t^2+4t$, then the speed of the object can be described using $\frac{dd}{dt}=s(t)=2t+4$.
The derivative of a displacement function gives you a function to determine the velocity.
So if the displacement of an object can be described using $x(t)=t^3-2t^2+5$, then the velocity of the object can be described using $\frac{dx}{dt}=v(t)=3t^2-4t$.
You may have noticed how similar the above two statements are. In fact, the theory is identical; the only difference is that differentiating distance gives speed, while differentiating displacement gives velocity. There is also such a thing as the second derivative (and third, and fourth, etc.), and we will find out more about all of these later, but interestingly, if we take the derivative of $v(t)$ — which is itself the derivative of displacement — we get an acceleration function. We will look at these later too, but thought you'd like to know where we might be heading with all of this!
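If you'd like to check such derivatives with software, here is a minimal sketch (assuming the SymPy library is available):

    # Sketch: verifying a velocity function with SymPy.
    import sympy as sp

    t = sp.symbols('t')
    x = t**3 - 2*t**2 + 5      # displacement, metres
    v = sp.diff(x, t)          # velocity is the derivative of displacement
    print(v)                   # 3*t**2 - 4*t
    print(v.subs(t, 4))        # velocity at t = 4 seconds: 32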
The position (in metres) of an object along a straight line after $t$ seconds is modelled by $x\left(t\right)=6t^2$.
State the velocity $v\left(t\right)$ of the particle at time $t$.
Which of the following represent the velocity of the particle after $4$ seconds? Select all that apply.
$x'\left(4\right)$
$v'\left(4\right)$
$x\left(4\right)$
$v\left(4\right)$
Hence find the velocity of the particle after $4$ seconds.
The position (in metres) of an object along a straight line after $t$ seconds is modelled by $x\left(t\right)=18\sqrt{t}$.
Determine the function $v\left(t\right)$ for the velocity of the particle. Express $v\left(t\right)$ in surd form.
Hence, calculate the velocity of the object after $9$ seconds.
The displacement of a particle moving in rectilinear motion is given by $x\left(t\right)=-5t\left(t-4\right)$ where $x$ is the displacement in metres from the origin and $t$ is the time in seconds.
Calculate the initial displacement of the particle.
Solve for the time $t$ when the particle next returns to the origin.
Using graphical methods, calculate the distance traveled by the particle between leaving the origin and returning again.
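A sketch of the working (not an official solution): the initial displacement is $x(0)=0$; the particle returns to the origin when $-5t(t-4)=0$, i.e. at $t=4$ seconds; and since the graph is a parabola, the turning point sits midway at $t=2$ with $x(2)=20$ metres, so the total distance travelled out and back is $2\times20=40$ metres.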
Apply differentiation and anti-differentiation techniques to polynomials
Apply calculus methods in solving problems | CommonCrawl |
Desmos Art: The Definitive Guide to Computational Sketching
Functions & Operations, General Math, Geometry, Graphing, Math Tools
Functions & Operations
Navigational Menu
General Math
Foundation of Higher Math
Math Tools
Higher Math Exploration Series
Compendium of Math Symbols
Definitive Guide to Learning Higher Math
Linear Algebra eBook Series
The Definitive Glossary of Higher Mathematical Jargon
The Definitive, Non-Technical Introduction to LaTeX, Professional Typesetting and Scientific Publishing
The Definitive Higher Math Guide on Integer Long Division (and Its Variants)
If you're the type who enjoys playing around with a graphing calculator, then you might be interested in the so-called Desmos Art. These are basically pictures and animations created in Desmos primarily through the clever use of equations and inequalities — among other features such as tables, animating sliders and regression models. Indeed, if you go to this official staff pick page, you should see that a whole bunch of creative artworks have already been created — anything from cartoon characters and landscapes to logos and portraits.
But here's a problem: when you reach a Desmos artwork page, you get to see the end-result with all the equations and inequalities, without necessarily having any clue about how the sketching process comes about from the beginning to the end:
Why does this weird-looking equation pop out from nowhere?
How did the author know that $3.982$ is the leading coefficient to use?
And how did those numbers get so precise up to four decimals digits?
To remedy this situation, we came up with a laborious and interesting solution: we decided to go out there and create a new Desmos sketch from scratch, and work backward to comment a bit on the steps, procedures and reasoning involved in the sketching process — as we go through the different stages and portions of the drawing.
And with that, let's proceed straight into our Desmos Art guide in computational sketching, which — as you might have guessed — is going to be both fun and informative — even if you have no intention whatsoever of using any graphing calculator in the near future.
Step 1: Initial Setup — Source Picture
Step 2: Divide and Conquer
Step 3: Portion Crunching
Step 3a: Drawing Curves
Modeling Line Segments
Modeling Circular/Elliptical Segments
Modeling Other Curvy Segments
Subdividing Curve Segments
Curve Trimming
Step 3b: Coloring Regions
Portion Crunching in Action: Antenna & Hat
Step 4: More Portion Crunching
Redditlady's Face
Hand & Rubik's Cube
Skirt & Legs
Redditdog
Ears & Beyond
Redditlady's Armor
Closing Words
Unless you are some kind of sketching guru — which we certainly are not — it's generally better to base the sketching on some underlying model: some picture that you actually want to draw; some picture that you can rely upon in case the sketching derails by a huge margin.
And in case it's not clear enough, you should only choose a picture that you really, really enjoy sketching — something that you can derive a whole lot of satisfaction from. This is because, unlike ordinary pencil-and-eraser sketching, sketching in Desmos has to be done by filling the command lines with equations, tables and inequalities, so that if you can't fuel enough passion into a drawing, the time and effort invested into reproducing it is probably not going to be worth it.
Also, when choosing a picture, take good care to select one that is not too difficult to draw. Otherwise, you might have to regret your decision much later when you're already, say, 21 hours into the process. Of course, you don't want to make the picture too easy either, as it might not be much of a gratifying experience if you do it that way.
In our case, using the Reddit Gold membership recently awarded to us, we went out there and created our first very own signature Reddit alien avatar:
Guess who's in the house? The Math Vault Snoovatar! (alien avatar in Redditese)
For the lack of better terms, let's just call these lovely creatures Math Vault Redditlady and Redditdog, respectively. Now, does this logo make a good choice as a source image? We certainly think it does:
Passion Level: Since we only create avatars that we like, channeling energy into recreating the creatures invented in part by ourselves is definitely not an issue.
Complexity: Both the Redditlady and the Redditdog have very well-defined borders and colors, making it a formidable challenge that's not too intimidating.
And with both criteria passed, we proceed to happily import our avatar image into Desmos. By default, Desmos likes to set the origin $(0,0)$ as the center of the image, and that is a good choice for most purposes. However, depending on the size of the source image, sometimes it makes sense to consider rescaling it so that at the default zoom level, the image occupies around 70% of the graphing grid either in height or in width. If the image is too small, then it's hard to appreciate any fine detail, and if it's too big, then one might have a hard time focusing on the big picture — literally.
In our case, since the avatar is just about the right size to start with, we decided to keep the dimension of the image to the default $10 \times 10$ setting at the end. We also didn't alter the image's aspect ratio either, which in many cases aren't really necessary anyway.
But just to drive the message home a bit, setting up a source image correctly from the get-go is more important than it seems, since the last thing we want to be doing is to, say, rescale the image to an appropriate size after being already halfway through the sketching process.
How do you deal with a huge task that seems impossible to complete? You partition the task into smaller subtasks. That is, tasks that are so small that you actually enjoy doing them!
In particular, when it comes to sketching in Desmos, you want to section the picture into different portions, each of which can be finished in a reasonable amount of time, so that when it comes to the time of sketching, you focus on one single portion and one single portion only.
In our case, since we know that sketching the entire Redditlady and its companion pet can be quite a challenge, instead of resorting to despair, we divide the entire avatar into 6 portions — from the top to the bottom:
Hat & Antenna
Math Vault Redditbots cut into 6 pieces. Neat!
By inspection, while we see that most of these portions are relatively simple to implement, we still anticipate some challenges ahead of us when it comes to the Armor portion. In an attempt to make the Armor less intimidating, we proceed to subdivide it further into 7 subportions:
Left Shoulder
Right Shoulder
"Outer Lungs"
With the gist of our organisational structure now settled, we then proceed to create a Desmos folder for each of the general portions, all of the while making a mental note that every command line pertaining to a specific portion should be dragged into its associated folder — if it's not already in there.
And while Desmos doesn't really support folder nesting at this point, there's nothing preventing us from using comment lines to provide labels for the subportions. True, while this level of organization could be an overkill for a simple picture, as the complexity of the picture increases, structuring the command lines this way can make line identification and future reference substantially easier.
A list of command lines under the Armor folder. Note the use of comments to label the Armor subportions.
In our case for example, the need for quick reference became so urgent that towards the end of the sketching, we ended up creating far more subportions than we originally thought needed. Here's our updated organisational structure at the completion of the avatar:
Antenna & Hat
Collar & Arms
"Lungs" &"Outer Lungs"
Hands & Rubik's Cube
Body Gaps
Leg Gaps
(Yep. The Redditdog proved to be a bit more annoying than originally anticipated. Hence the saying "Don't judge a puppy by its cover.")
Learn to sketch a puppy computationally in Desmos through equations and inequalities! Click to Tweet
As mentioned a bit earlier, when working on a sketch, it's often less intimidating to just focus on a portion while ignoring the rest. We call this portion crunching, and is done in part to ensure that the work completed with minimal resistance and mental blocks. In Desmos, portion crunching usually boils down to the following two parts: curving drawing and region coloring.
In general, a self-contained portion of an image is composed of close figures, which in turn can be construed as a series of curve segments. However, since these curve segments tend to interact with each other, it would be hard for us to draw these segments sequentially — as would be done in ordinary pencil-and-eraser sketching. Instead, we prefer to resort to the strategy outlined below:
Model a curve segment using an appropriate class of equations, and leave its graph with the full domain intact.
Do the modeling for each of the curve segments in the close figure.
Cut off the excessive parts of each curve — by imposing restrictions on the associated equation.
In particular, we want the curve trimming to happen at the very end, because once the curve modeling is done for each curve segment in a close figure, Desmos will map out all the intersection points along with their coordinates — the latter of which provide key information as to where to cut off the curves, and might not be available had we decided to trim the curves off from the get-go.
In actuality though, we'll probably spend more time modeling the curve segments rather than trimming them — as there are simply too many curves in too many different shapes and forms. And while it might be hard to provide an exhaustive set of tips for modeling each curve segment we will ever encounter, the following procedures should be more than enough to take care of most of them.
Many close figures are composed of nothing but lines, and vertical/horizontal line segments are about as easy as one can get. For example, to create the line segment connecting $(1.2, 5)$ with $(1.2, 5.67)$, simply typing $x=1.2 \, \{5\le y \le 5.67 \}$ into a command line will do just fine.
In general, if you know the coordinates of the two endpoints of a line segment, then you can enter them into a Desmos table, and re-configure the points (via the Gear Icon) to have a line segment passing through it.
On the other hand, if you only have the coordinate of a single point on the line segment, say $(2.33,4.67)$, then you can still resort to the point-slope form, which in this case becomes $y=(x-2.33)+4.67$. In fact, by simply adjusting the slope one digit at a time, and observing the graph at an ever-increasing zoom level, it shouldn't take long to obtain a line that reasonably overlaps with the segment you are trying to model.
In some cases, the borders of objects (e.g., eyes, ring, hands) are either ellipses themselves, or can be modeled using some elliptical arcs. In which case, the following modeling procedure could be useful:
Procedure for Modeling Elliptical Segments
For a full circle or a full ellipse, use the general equation $\displaystyle (x-h)^2+(y-k)^2 = r^2$ (resp., $\displaystyle \frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2}=1$) in the command line. If the circular/elliptical segment is already part of the top-half, bottom-half, left-half or right-half of a circle/ellipse, then use the corresponding function instead — this is vital if the region is to be colored later on.
Don't worry about which numbers to use for the parameters $r$, $a$, $b$, $h$ and $k$ yet. Just put in some numbers into the command line so that the resulting graph is reasonably close to the segment being modeled.
Start by tweaking the radius $r$ (or in the case of ellipse, the radii $a$ and $b$) so that the resulting graph looks "parallel" to the segment being modeled. Do it one decimal at a time, before zooming in and moving onto the next decimal.
Once satisfied, adjust the center of the circle/ellipse $(h,k)$ to where you think it should be. Try to nail the accuracy one decimal at a time, and zoom in the graph repeatedly as you get a better and better fit.
Repeat Step 3 and Step 4 in that order — if needed.
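For reference (since Step 1 leans on them), the half-circle and half-ellipse functions can be recovered by solving the general equations for $y$: the top and bottom halves of a circle centered at $(h,k)$ with radius $r$ are given by $y = k + \sqrt{r^2-(x-h)^2}$ and $y = k - \sqrt{r^2-(x-h)^2}$, respectively, and the analogous halves of an ellipse by $y = k \pm b \sqrt{1 - \frac{(x-h)^2}{a^2}}$ (for the left/right halves, solve for $x$ instead).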
While elliptical segments tend to be curves with a rather pronounced curvature, there are other curve segments which — while still curvy in their own way — do not possess this distinct characteristic. In which case, what we've found is that by modeling them using "polynomials", it's possible to sneak in some reasonable fit most of the time.
For example, to model a concave-up, increasing curve segment with a vertex close to, say, $(1.56, 6.78)$, we can start with the vertex form $y = (x-1.56)^2 + 6.78$. And if the segment has a slower increase in the beginning but a faster increase in the end, the vertex form of a quartic function — say $y=(x-1.56)^4 + 6.78$ — can be used as a starting point instead.
In fact, by adjusting the "degree" of the "polynomial" to any other positive real number, we can provide a reasonable fit to a surprising number of curvy segments as well:
For a U-curve segment, degree $4$ or more usually works wonders.
For a concave up, line-like segment, a degree between $1$ and $2$ seems to be more appropriate.
For a concave-down, increasing curve segment, a degree between $0$ and $1$ can be used instead.
In fact, by applying horizontal or vertical reflections (e.g., $-(x-1.56)^4 + 6.78$, $[-(x-1.56)]^{1.5} + 6.78$) to a "polynomial" vertex form, we can usually bend the resulting graph to any direction we want. This flexibility would result in the following procedure for modeling non-elliptical curve segments in general:
Procedure for Modeling Other Curvy Segments
Type in the appropriate "polynomial" vertex form — up to the horizontal/vertical reflection — into the command line. The polynomial can be either a function of $x$ (which opens upwards or downwards), or a function of $y$ (which opens leftward or rightward).
Don't worry too much about which coordinates to use as the vertex at this point. As long as the resulting graph is reasonably close to the segment being modeled, it should be fine.
Determine the degree of the "polynomial" that produces a graph most similar to the curve segment — by tweaking the number one decimal at a time.
Determine the leading coefficient of the "polynomial" that makes the resulting graph most "parallel" to the segment being modeled — again by tweaking the number one decimal at a time.
Once satisfied, correct the coordinates of the vertex so that the resulting graph overlaps with the curve segment being modeled. If any minor discrepancy is found, repeat Step 3, Step 4 and Step 5 — in that order.
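As a hypothetical run-through of this procedure: starting from the guess $y=(x-2)^2+3$, we might find that degree $1.7$ hugs the segment better (giving $y=(x-2)^{1.7}+3$), then that a leading coefficient of $0.8$ makes the graph more "parallel" to the segment (giving $y=0.8(x-2)^{1.7}+3$), and finally that nudging the vertex to $(2.05, 2.96)$ seals the overlap, with a second pass through Steps 3 to 5 if any discrepancy remains.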
On some occasions, a curve segment can be convoluted enough that it's simply easier to subdivide it into multiple curve segments instead (as opposed to using higher-order polynomial regression). For example:
A round corner can be conceived as three curve segments, consisting of the two surrounding lines and a quarter-circle for the edge.
The top portion of a mustache can sometimes be broken down into two elliptical segments.
The tail of a pet can sometimes be made into a series of line/curve segments, and perhaps with a half-circle reserved for the tip of the tail.
In general, a curve segment can be cut by visualizing vertical lines in between. In which case, we can model the resulting subsegments individually as functions of $x$ — and, if needed, group these functions together again by defining a piecewise function in terms of $x$. Alternatively, a curve segment can also be cut by visualizing horizontal lines in between. In which case, the resulting subsegments can be modeled individually as functions of $y$ instead.
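For what it's worth, piecewise functions in Desmos are written with condition-value pairs inside curly braces. As a made-up illustration, if two subsegments are modeled by $f_1(x)$ and $f_2(x)$ and are meant to switch over at $x=2.5$, then the single definition $h(x) = \{x < 2.5: f_1(x), f_2(x)\}$, which evaluates to $f_1(x)$ when $x<2.5$ and to $f_2(x)$ otherwise, glues them back into one curve.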
(For more on defining piecewise functions in Desmos, see Desmos: A Definitive Guide on Graphing and Computing.)
Once we finish modeling all the curve segments in a closed figure, we can begin to trim off the excessive parts of each modeling curve by imposing restrictions on the associated equation. For most purposes, this is usually done by specifying the lower/upper bounds within which the curve is supposed to occur. For example:
If a half-circle is only to be graphed when $x$ is between $2.57$ and $13.534$, then adding the clause $\{ 2.57 \le x \le 13.534 \}$ to the end of its equation will achieve the desired effect.
If a U-curve is only to be displayed when $y$ is between $-2.3$ and $1.87$, then adding $\{ -2.3 \le y \le 1.87 \}$ to the end of its equation will take care of it.
As alluded to a bit earlier, the actual numbers for the upper/lower bounds are usually part of the coordinates of some intersection point, which can be displayed up to four decimals if you zoom in hard enough before clicking on them. In the case where an upper/lower bound doesn't occur at any of the key points, the bound will have to be obtained manually, by clicking around the region where the curve is supposed to be trimmed off.
On some occasions (such as the case with an obliquely-trimmed ellipse), a simple upper/lower bound simply won't be enough to cut the curve the right way. Instead, you will have to define the upper/lower bounds as actual functions (either in terms of $x$ or in terms of $y$), and use them accordingly in the restriction clauses. On other occasions, you might even have to stack up multiple restriction clauses in order to get the desired effect.
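As a fabricated example of that last scenario: to cut an ellipse along an oblique line through $(0, 1.2)$ with slope $0.5$, one could append the clause $\{ y \le 0.5x + 1.2 \}$ to the ellipse's equation, and if the cut also needs to respect a horizontal floor, stack a second clause such as $\{ y \ge -2 \}$ right next to it.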
After modeling and trimming all the curve segments in a closed figure, it's about time to start thinking about coloring the region inside. For implicit equations such as that of a full circle or an ellipse, this can usually be done by replacing the $=$ sign in the equation with the $\le$ sign. In all likelihood though, coloring a region is not going to be this easy, as many closed figures have an irregular area that is bounded by a series of different-looking curve segments instead.
For example, in the case where an irregular area occurs but has rather well-defined upper and lower bounds, the coloring is usually done using the procedure below:
Define the upper bound of the region as a function of $x$ (let's call it $g(x)$). If the upper bound consists of multiple curve segments, then the function will have to be defined piecewise in order to take into account all the segments involved.
In a similar manner, define the lower bound of the region as a function of $x$ as well (let's call it $f(x)$ for now).
Color the region by entering the inequality $f(x) \le y \le g(x)$ into a command line.
On some occasions, it might not be possible to color a region using $y$ as the bounded variable. If that's the case, then you might have to adjust the above procedure to use $x$ as the bounded variable instead (e.g., color the region via an inequality of the form $f(y) \le x \le g(y)$).
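To put the procedure in concrete (if entirely made-up) terms: with an upper bound $g(x) = \{x < 1: 2-x, x^2\}$ pieced together from two curve segments and a flat lower bound $f(x) = 0.5$, the single command line $0.5 \le y \le g(x)$ shades the region between them, and can be restricted further with a clause like $\{0 \le x \le 1.5\}$ if the region shouldn't extend indefinitely.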
Note how the depth of the coloring increases as we duplicate the inequalities more and more times.
In Desmos, if you color a region using a single inequality, the coloring will usually be a bit on the paler side. To increase the depth of the coloring, one thing you can do is to simply duplicate the same inequality three or more times in the command line. In fact, if you have some knowledge in color mixing, then you should play around with the different color combinations a bit and see what kind of end result you get!
So, how exactly does portion crunching work out in practice? To figure that out, we thought we would elaborate a bit on how we manage to crunch out the Antenna & Hat portion of our avatar.
To start, we subdivided the antenna into several sections and modeled them individually using a series of functions:
Antenna Ring: This is simply modeled using two full circles.
Antenna Cable (Upper Part): Being more like a round corner, the cable is segmented and modeled using a line, a half-circle and another line joining the ring to the hat.
Antenna Cable (Lower Part): This is simply modeled using two lines.
Redditlady's Antenna and Hat
As for the hat, here's what we did:
Hat Disks (Top and Bottom): Sorry. We cheated. Each of the disks is modeled after a rotated ellipse, which takes on a slightly more involved equation.
Hat Borders (Vertical): These are simply modeled after two oblique lines.
Hat Mid-lines (Horizontal Curves): These are modeled using quadratic functions in vertex form.
Equation we used to model the top disk of the hat.
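The exact numbers in that equation aren't really the point here, but for the record, an ellipse rotated by an angle $\theta$ about its center $(h,k)$ (the "slightly more involved equation" alluded to above) can generally be written as $\frac{[(x-h)\cos\theta+(y-k)\sin\theta]^2}{a^2}+\frac{[(y-k)\cos\theta-(x-h)\sin\theta]^2}{b^2}=1$.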
And with the modeling all done, we're left with a bunch of curves that intersect each other. Using the coordinates of these intersection points, we proceed to trim off the excessive parts of the curves one by one:
Outer Circle (Antenna Ring): This circle needs to have a gap on the upper-left end, so we trimmed off the excess by:
Defining a linear function connecting the endpoints of the gap (let's call it $f(x)$ for now).
Appending the restriction clause $\{ y \le f(x) \}$ to the end of the equation of the outer circle.
Antenna Cable: Each equation making up the antenna cable is trimmed off by imposing simple restriction clauses on $x$ (e.g., $\{0.6153 \le x \le 1.755 \}$), where the actual numbers for the upper/lower bounds are obtained through the coordinates of the intersection points.
Hat: the equations for the vertical hat borders and the horizontal hat mid-lines are trimmed off in a similar manner — again using the relevant intersection points.
Once the curves are well-trimmed, we then proceed to color the different parts of the hat as follows:
Top Disk: This is colored easily by changing the $=$ sign in the equation to $\le$.
Upper and Lower Strip: Each of the strips is colored using an inequality of the form $f(x) \le y \le g(x)$, where $f(x)$ and $g(x)$ are the lower and upper bound functions of the strip region, respectively, and both functions have to be defined piecewise (see the figure below for more).
Bottom Disk: Since the Lower Strip is red and this part needs to be in orange, we color it by appending the restriction clause $\{ y\le f(x) \}$ to the equation of the ellipse.
All right. So that pretty much concludes the very first portion of our avatar! By the way, here's a visual depiction of the upper/lower bounds for the red Lower Strip region — in case it still hasn't clicked yet:
A visual representation of the lower and upper bound functions used to color the Red Strip.
Once you run through all the hurdles of drawing a portion/subportion of your picture, it's time to move on to other portions and put your new-found skills to good use. How? By picking a closed figure, focusing on modeling the curves, trimming off the excess and coloring the figure. And before you know it, everything is set in front of your eyes!
(after 6 hours of undivided attention, for example)
However, there's a caveat: for many aspiring Desmos artists, this advice doesn't seem to be particularly actionable. If that's you, then you'll definitely find the following tips and strategies we used to sketch the other portions of the avatar a bit more informative.
As expected, we find the face to be one of the easiest portions to sketch, since most of the curve segments involved are either full circles themselves, or parts of a circle/ellipse:
The redditlady's face, or maybe just the redface for short. 🙂
Eyes: As round as they are, these eyes can be easily modeled using full circles. Indeed, with just a bit of parameter tweaking, it didn't take long before we figured out the radius and the center of each circle, which is then colored by changing the $=$ sign in its equation to $\le$. The resulting inequalities are then duplicated several times to increase the depth of the coloring.
Face Border & Smile: Being both elliptical segments in disguise, the face border and the smile are both modeled using full ellipses. The "smile" ellipse is trimmed off using the restriction clause $\{ y \le 0.202 \}$, while the face border had to be trimmed off using a custom-defined upper bound function on $y$ (the latter of which is defined as the line connecting the two points where the face and the hat intersect).
Left Ear: The two outer circular arcs are modeled using full circles, while the innermost curve — which resembles a line — is actually modeled using the vertex form of a "polynomial" of degree $0.7$ (i.e., $c(x-h)^{0.7}+k$). Using the coordinates of the relevant intersection points provided by Desmos, the three resulting curves are then trimmed off by appending simple upper/lower bounds on their respective equations.
Right Ear: The two outer arcs are now modeled after a full circle (inner arc) and a full ellipse (outermost arc), respectively. The two inner curves are not exactly elliptical in nature, so we modeled the top one using a quadratic vertex form ($0.24 (x-2.783)^2+2.188$ to be sure), and the bottom one using a "polynomial" vertex form with degree $2.7$ (after a bit of tweaking). Once there, by using the coordinates of the surrounding intersection points, we trimmed off the two outer arcs by imposing simple upper/lower bounds on both $x$ and $y$, and the two inner curves by appending simple restriction clauses on the value of $x$.
Being composed of simple geometrical shapes, we didn't anticipate the Hand & Rubik's Cube portion to be particularly hard either, and in retrospect, we were almost entirely right on that one:
Hand: These are basically just two standard full ellipses — with no trimming required.
Rubik's Cube (Left and Right Side): Since this part is composed entirely of lines, we basically went point-slope form all the way through. In addition, since the vertical-looking lines here are indeed vertical, it didn't take a lot of tweaking to finish this part.
Rubik's Cube (Top Side): Same drill. Except that all the lines are now oblique, hence literally point-slope form all the way through.
The upper and lower bound functions needed to color the cube.
So all seemed to go well, until it came time to color the cube, that is! As you can see in the left figure, the red portion is colored by defining $g_1(x)$ and $g_2(x)$ as the upper and lower bound functions, and by using the inequality $g_2(x) \le y \le g_1(x)$ — which is duplicated five times in the command line to provide that reddish tint.
(by the way, these are not the actual names we gave to those functions, but it helps to think of them that way.)
In a similar manner, the left side of the cube is colored by restricting the $y$-values to be between $f_2(x)$ and $f_1(x)$ — both of which had to be defined piecewise. For example, the lower bound function $f_2(x)$ had to be defined by conjoining the left vertical line segment, the top-half of the ellipse, and the right vertical line segment — in that order. This of course means that the original equation of the full ellipse had to be reduced to a function first — before we could use it to define a piecewise function.
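To spell out that reduction step with generic symbols: a full ellipse $\frac{(x-h)^2}{a^2}+\frac{(y-k)^2}{b^2}=1$ fails the vertical line test, so before it can appear inside a piecewise definition, it has to be solved for $y$ first, the top half being the function $y = k + b\sqrt{1-\frac{(x-h)^2}{a^2}}$. It's this function, rather than the original equation, that gets conjoined with the neighboring segments.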
So overall, while the coloring process did get a bit involved in the end, sketching the Rubik's cube in Desmos is kind of cool in itself, as it exemplifies how a 3D figure can be reproduced on a 2D interface — with equations and inequalities even!
While this portion might look simple at first sight, it is actually a bit sticky, as there are many different kinds of curve segments involved. But nevertheless, let's have the picture do the talking for us first:
Redditlady's skirt and legs, with most curve segments labelled.
As can be seen above, the skirt alone is surrounded by seven curve segments, where:
Curve 1 is modeled using the bottom-half of an ellipse.
Curve 2, 3 and 4 are modeled after quadratic vertex forms.
Curve 6 is just a horizontal line — nothing fancy here.
Curve 5 and 7 are both parts of the bottom half of the same ellipse.
With these seven curve segments modeled, we then use the coordinates of the resulting intersection points to trim the curves into the right length. After which, we proceed to define the upper bound function $g(x)$ by conjoining Curve 1,2,3 and 4, and the lower bound function $f(x)$ by conjoining Curve 5, 6 and 7. The skirt is then colored green using the inequality $f(x) \le y \le g(x)$, which is duplicated three times in the command line to make the skirt a bit greener.
As for the legs, we've decided to sketch them using the following modeling scheme instead:
Curve c and Curve 0 are both modeled after the top-half of an ellipse.
Curve a is modeled using a quadratic vertex form, while Curve b — which looks more like a "curvy line" — is modeled using a "polynomial" vertex form with degree $0.8$.
Curve 8 and 9 got a bit more subtle, so we chose to model them using "polynomial" vertex forms as well. After a bit of tweaking, we found that Curve 8 — which looks like an intermediate function between a line and a parabola — can be modeled fairly well with degree $1.35$. As for Curve 9, we found a match when the degree is $1.99$.
(Yep. Looks like we've "curved" it a little bit.)
Remember us complaining about the Redditdog? Well, it's about time to figure out just how annoying it was!
Redditdog's ears and beyond
The subportions we're looking at here are the tail and the ears. As surprising as it seems, we didn't model the tail using a bunch of lines and half-circles (which we probably should have). Instead, we decided to model the left and right sides of the tail using hyperbolas, and the tip of the tail using the top-half of a circle.
And yes, we had to do some serious zooming and number tweaking to make sure that the parameters were robust enough, and that we can actually glue the tail together without any human noticing. But that still doesn't change the fact that the tail is really three segments in one. Not good if you are this dog of course! 🙂
As for the ears, since all four outer borders are pretty curvy, we decided to model them after the top-half/bottom-half of an ellipse. On the other hand, because the inner borders are much less curvy than the outer ones, we finally decided to use "polynomial" vertex forms on them instead.
Redditdog's face (a.k.a., dogface)
All right. Let's take a look at how we dealt with this, hmm… dogface:
Face Border: This is just a plain-old full ellipse — with no trimming required.
Eyes: Unlike the Redditlady's, our Redditdog's eyes are not exactly made of full circles. As such, we had to take into account the line segments in the upper-extreme corners when modeling the borders. In order to color the eyes, we define the upper bound function $g(x)$ piecewise by conjoining the line segment with the corresponding quarter-circle. We also define the lower bound function $f(x)$ as the bottom-half of the relevant circle. After which, each eye is then shaded red by duplicating the inequality $f(x) \le y \le g(x)$ three times.
Nose: This is nothing more than a triangle, or more specifically, a horizontal line segment on the top followed by two oblique line segments at the bottom. Make sure you know by now how to sketch these segments and color the region within!
Whiskers: Due to their curvy nature, we decided to model the whiskers after the bottom-half of a circle. In fact, since the whiskers are symmetric about a vertical line, all we had to do was work out the function for the right whisker and apply a horizontal reflection to get the left one for free (see the note right below).
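In function terms (and with generic symbols, since the actual numbers live in the Desmos file): if the right whisker is modeled by some function $w(x)$ and the axis of symmetry is the vertical line $x=c$, then the left whisker comes for free as $w(2c-x)$.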
Redditdog's body and legs.
Apparently, someone on Reddit made sure that a lot of design goes into a dog's lower body. However, after a bit of trial and error, we found that we were still able to model each of the curve segments using either a full ellipse, a half-ellipse, or a "polynomial" vertex form. For example:
The two outermost curve segments on the left and right sides of the body can be modeled as full ellipses, which are then trimmed off by imposing simple restrictions on both $x$ and $y$.
The remaining curve segments forming the left and right "triangles" — which are not as curvy as the ones above — are modeled after "polynomial" vertex forms with varying degrees.
The left and right "horseshoes" can be easily modeled using a combination of full ellipses and horizontal lines, each of which subject to some simple restriction clause.
Similarly, the "trapezoidal" gap between the legs can also be modeled using a combination of half-ellipses and horizontal lines — which are again subject to some simple restriction clauses.
As one would have guessed, the armor piece is probably the toughest portion we have had to deal with. To ease the sketching process, we divide it further into four subportions: the shoulders, the arm, the chest, and the "abdomen".
First, here's a graphical depiction of the breakdown of the shoulders and the "collarbone":
Redditlady's shoulders and "collarbone".
As you can see, the curve segments here start to get a bit into the crazy zone. The left shoulder, for example, is colored by using Curve 1, 2, 3 jointly as the upper bound and Curve 4 as the lower bound. Curve 1 — being the round edge of the shoulder — is modeled after the top-half of a circle, while Curve 2 and Curve 4 are modeled using "polynomial" vertex forms (with degree $1.6$ and $4$, respectively). On the other hand, Curve 3, which is a part of the face border, had to be remodeled as the bottom-half of an ellipse — by recycling the equation of the full ellipse which was previously used to model the face border.
In a similar manner, we also colored the right shoulder grey by using Curve a, b, c jointly as the upper bound, and Curve 9, f, e, d as the lower bound, except that this time, the curve segments are modeled using a combination of lines (Curve f and e), half-circles/ellipses (Curve b, c and d) and "polynomial" vertex forms (Curve 9 and b). In fact, a similar modeling and coloring scheme applies to the "collarbone" area as well.
Redditlady's Arm.
Looking at the figure on the left, we see that the arm is a relatively easy portion to implement. For example:
The sleeve can be colored grey using an inequality of the form $f(x) \le y \le g(x)$, where $f(x)$ is the horizontal line used to model Curve 4, and $g(x)$ is the piecewise function formed by conjoining Curve 1, 2 and 3 — all of which can be modeled after "polynomial" vertex forms.
The two outer curve segments for the arm (i.e., Curve 5 and Curve a) are actually elliptical arcs, and thus are modeled after full ellipses.
On the other hand, Curve c and Curve 6 — which are a bit more linear in nature — had to be modeled using "polynomial" vertex forms of low degree instead.
The other remaining curves are just line segments, so it didn't take us a long time to reproduce them.
OK. Here comes the inevitable part: the chest. To begin, here's a graphical depiction of most of the curve segments involved:
The Redditlady's chest. A lot of stuff going on!
By inspection, the easiest part is probably the "tie" area in the middle, since it's composed of a "trapezoid" on the top, and a "rectangle" at the bottom. To be sure, we did realize that neither Curve 1 nor Curve 7 is actually a line, so we took good care to model them using "polynomial" vertex forms instead.
After that, we moved on to the second easiest parts, which we determined to be the tiny "trapezoid" on the upper-right, and the weird-looking region next to the hand. Why the second easiest? Because both regions can be colored using an inequality of the form $f(x) \le y \le g(x)$ — as long as we take good care in defining the upper and the lower bound function $f$ and $g$ to take into account all the relevant segments involved.
Now, here comes the sticky part: if you look at the region in the above figure that is enclosed in orange and blue, then it shouldn't take long to realize that it would be difficult to color this region using standard inequalities of the form $f(x) \le y \le g(x)$. In fact, if you decide to stubbornly stick with this route anyway, then you will end up "scratching" our Redditlady's armor with a few unsightly little patches!
Instead, what we want to do is to color the region using an inequality of the form $f(y) \le x \le g(y)$, where the lower bound function $f(y)$ is defined by conjoining the relevant part of Curve 7, Curve c and Curve b, and where the upper bound function $g(y)$ is defined by conjoining Curve d with the relevant part of Curve 2. Basically, $f(y)$ corresponds to the curve in the above figure that is outlined in orange, and $g(y)$ to the part in blue.
In fact, by using the same strategy, the two remaining areas in the chest region — the right "lung" and the left "outer lung" that is — can also be colored using a single inequality as well.
All right. We've come to the last part, which is not too bad either — especially after going through the nitty-gritty of that Chest portion.
The abdomen in numbers.
In a nutshell, we basically colored the abdomen in grey by using a standard inequality of the form $f(x) \le y \le g(x)$, where:
$g(x)$ is the function we used to model Curve 1 (a "polynomial" vertex form with degree $1.6$ to be exact — due to the linear nature of the curve).
$f(x)$ is the lower bound function obtained by conjoining — in that order — Curve 2, the relevant section of Curve 3, Curve 4, the relevant section of Curve 5, and finally, Curve 6. Interestingly, each of these curve segments can be modeled using "polynomial" vertex forms — in particular as quadratic functions with varying leading coefficients.
And guess what? We're done! All the 219 equations/inequalities that is! Here's a before-and-after picture, just to be sure:
Math Vault Redditlady and Redditdog reborn!
Granted, there are still a lot of subtleties and nuances going into how each of the equations and inequalities are constructed, and that is why we've decided to reserve the best for last. Namely, if you're still curious about all the technical details behind the sketch, then you should definitely go to our Redditlady's Desmos page and take a look at those command lines yourself!
See the Technical Nitty-Gritty of Redditlady and Redditdog on Desmos
Wow! That was a lot of work and fun for us, so hopefully the same is true for you too. If anything, this is definitely graphic design of a different caliber — and with a wicked geeky twist even!
Indeed, sketching in Desmos is like drawing with two hands (or maybe the legs as well) tied, even though a voice deep inside us is saying that this is only good for the cause of mathematics. This is because by the time you become a Desmos sketching master, you would have acquired a solid intuition about various geometrical shapes and the corresponding equations/inequalities you can use to model them. Who knows? Perhaps it might even improve your ability to evaluate a double/triple integral — if you ever decide to take the multivariate calculus route that is!
All right. Ready for the long-awaited interactive table? Because here it is!
Step 1: Source Image
Make sure to go through the following checklist for your source image — before you have to learn things the hard way:
Passion Level
Are you passionate enough about the picture you have chosen?
Are you OK with injecting hours of time and effort into your picture, in order to create something you can be potentially proud of?
Is the picture too easy to be deemed a fair challenge?
Is the picture too intricate to be worth a try?
Image Scaling at the Default Zoom Level
Does the image occupy about 70% of the graphing grid — either by width or by height?
Ease the sketching process by dividing the image into multiple portions — each of which is to be managed using a Desmos folder.
For each portion that is still too large to handle, divide it further into subportions that are reasonably manageable.
While Desmos does not support folder nesting at this point, the subportions can still be managed using comment lines in Desmos.
General Procedure
In Desmos, curves are better drawn in groups by:
Picking a closed figure (or an otherwise self-contained figure).
Modeling each of its curve segments using an appropriate class of equations.
Trimming off the excessive parts of each curve by imposing appropriate restriction clause(s) to the equation of the curve.
Vertical/horizontal line segments can be easily modeled using equations — along with some simple restriction clauses.
A line segment can also be drawn in Desmos using a table of points — provided that the two endpoints are known.
If only one point on the line segment is known, then the segment can still be modeled using point-slope form.
For full circles/ellipses, use the corresponding general equation.
For curve segments which are parts of a half-circle or a half-ellipse, use the associated functions instead.
Tweak the radius parameter(s) first to get a curve that is as "parallel" to the segment being modeled as possible, before adjusting the coordinates of the center. Repeat the process as many times as needed.
Use "polynomial" vertex form to model a segment with a less pronounced curvature.
The "degree" of the polynomial can be any positive real number — the closer the degree is to $1$, the straighter the polynomial.
Tweak the degree and the leading coefficient first to get a curve that is as "parallel" to the segment being modeled as possible, before adjusting the coordinate of the vertex. Repeat the process as many times as needed.
If a curve segment is too intricate, we can further subdivide it using:
Vertical lines (which produces subsegments that can be modeled using functions of $x$).
Horizontal lines (which produces subsegments that can be modeled using functions of $y$).
Once all the curve segments of a figure are modeled, the excessive parts of a curve can be trimmed off by appending restriction clause(s) to the equation of the curve:
Restriction on $x$ (e.g., $\{ 1.24 \le x \le 2.566 \}$)
Restriction on $y$ (e.g., $\{ -23.15 \le y \le -5.69 \}$)
In general, the actual numbers for the bounds can be obtained from the coordinates of the relevant intersection points. On other occasions, one might have to stack up multiple restriction clauses to get the desired effect.
Implicit Equations
For regions such as that of an ellipse, a circle or a plane, the coloring can sometimes be done by changing the $=$ sign in the equation into $\le$.
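(For instance, with made-up numbers, the inequality $\frac{(x-1)^2}{4}+\frac{(y-2)^2}{9} \le 1$ shades the full elliptical disk centered at $(1,2)$.)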
General Coloring
In most cases, a region can also be colored by:
Imposing bounds on the $y$-values (via an inequality of the form $f(x) \le y \le g(x)$).
Imposing bounds on the $x$-values (via an inequality of the form $f(y) \le x \le g(y)$).
where the functions $f$ and $g$ stand for the lower and the upper bound of the region, respectively, some of which might have to be defined piecewise — by conjoining all the relevant curve segments together.
Really, just grind through each of the portions using the procedures introduced in Step 3, and you should be good to go!
And there you have it. The end of yet another intensely graphical saga with Desmos! Now, back to you: what kind of picture are you looking to draw with equations and inequalities? Either way, make sure to make use of what you've just learned on curve modeling, so that when the time comes, you can draw like a real computational artist!
April 2018, 11(2): 193-212. doi: 10.3934/dcdss.2018012
Quasilinear elliptic equations with measures and multi-valued lower order terms
Siegfried Carl and Christoph Tietz
Institut für Mathematik, Martin-Luther-Universität Halle-Wittenberg, 06099 Halle, Germany
* Corresponding author: Christoph Tietz
Received November 2016 Revised April 2017 Published January 2018
Fund Project: The second author is supported by a doctoral studies grant of Saxony-Anhalt.
In this paper, we consider the existence and further qualitative properties of solutions of the Dirichlet problem to quasilinear multi-valued elliptic equations with measures of the form $Au + G(\cdot,u) \ni f$, where $A$ is a second order elliptic operator of Leray-Lions type and $f\in \mathcal M_b(\Omega)$ is a given Radon measure on a bounded domain $\Omega\subset \mathbb R^N$. The lower order term $s\mapsto G(\cdot,s)$ is assumed to be a multi-valued upper semicontinuous function, which includes Clarke's gradient $s\mapsto \partial j(\cdot,s)$ of some locally Lipschitz function $s\mapsto j(\cdot,s)$ as a special case. Our main goals and the novelties of this paper are as follows: First, we develop an existence theory for the above multi-valued elliptic problem with measure right-hand side. Second, we propose concepts of sub-supersolutions for this problem and establish an existence and comparison principle. Third, we topologically characterize the solution set enclosed by sub-supersolutions.
Keywords: Upper semicontinuous multi-valued operator, pseudomonotone multi-valued operator, Radon measure, sub-supersolutions, quasilinear elliptic operator.
Mathematics Subject Classification: Primary: 35R70, 35R06; Secondary: 35J62, 47H05.
Citation: Siegfried Carl, Christoph Tietz. Quasilinear elliptic equations with measures and multi-valued lower order terms. Discrete & Continuous Dynamical Systems - S, 2018, 11 (2) : 193-212. doi: 10.3934/dcdss.2018012
Diophantine approximations and directional discrepancy of rotated lattices
Authors: Dmitriy Bilyk, Xiaomin Ma, Jill Pipher and Craig Spencer
Journal: Trans. Amer. Math. Soc. 368 (2016), 3871-3897
MSC (2010): Primary 11K38, 11K60, 28A78, 52C05
Published electronically: September 9, 2015
In this paper we study the following question related to Diophantine approximations and geometric measure theory: for a given set $\Omega$ find $\alpha$ such that $\alpha - \theta$ has bad Diophantine properties simultaneously for all $\theta \in \Omega$. How do the arising Diophantine inequalities depend on the geometry of the set $\Omega$? We provide several methods which yield different answers in terms of the metric entropy of $\Omega$ and consider various examples.
Furthermore, we apply these results to explore the asymptotic behavior of the directional discrepancy, i.e., the discrepancy with respect to rectangles rotated in certain sets of directions. It is well known that the extremal cases of this problem (fixed direction vs. all possible rotations) yield completely different bounds. We use rotated lattices to obtain directional discrepancy estimates for general rotation sets and investigate the sharpness of these methods.
Dmitriy Bilyk
Affiliation: School of Mathematics, University of Minnesota, Minneapolis, Minnesota 55455
Email: [email protected]
Xiaomin Ma
Affiliation: Department of Mathematics, Brown University, Providence, Rhode Island 02912
Email: [email protected]
Jill Pipher
Affiliation: Department of Mathematics, Brown University, Providence, Rhode Island 02912
Email: jpipher@math.brown.edu
Craig Spencer
Affiliation: Department of Mathematics, Kansas State University, Manhattan, Kansas 66506
Email: cvs@math.ksu.edu
Received by editor(s): February 27, 2013
Received by editor(s) in revised form: March 24, 2014 | CommonCrawl |
May 2020, 14(2): 279-299. doi: 10.3934/amc.2020020
Multi-point codes from the GGS curves
Chuangqiang Hu 1 and Shudi Yang 2
Yau Mathematical Sciences Center, Tsinghua University, Peking, 100084, China
School of Mathematical Sciences, Qufu Normal University, Shandong, 273165, China
*Corresponding author: Shudi Yang
Received June 2018 Revised December 2018 Published May 2020 (early access September 2019)
Fund Project: This work is partially supported by the NSFC (11701317, 11531007, 11571380, 11701320, 61472457) and Tsinghua University startup fund. This work is also partially supported by China Postdoctoral Science Foundation Funded Project (2017M611801), Jiangsu Planned Projects for Postdoctoral Research Funds (1701104C), Guangzhou Science and Technology Program (201607010144) and the Natural Science Foundation of Shandong Province of China (ZR2016AM04)
This paper is concerned with the construction of algebraic-geometric (AG) codes defined from GGS curves. It is of significant use to describe bases for the Riemann-Roch spaces associated with some rational places, which enables us to study multi-point AG codes. Along this line, we characterize explicitly the Weierstrass semigroups and pure gaps by an exhaustive computation for the basis of Riemann-Roch spaces from GGS curves. In addition, we determine the floor of a certain type of divisor and investigate the properties of AG codes. Multi-point codes with excellent parameters are found, among which, a presented code with parameters $ [216,190,\geqslant 18] $ over $ \mathbb{F}_{64} $ yields a new record.
Keywords: Algebraic geometric code, GGS curve, Weierstrass semigroup, pure Weierstrass gap.
Mathematics Subject Classification: Primary: 14H55, 11R58, 11T71.
Citation: Chuangqiang Hu, Shudi Yang. Multi-point codes from the GGS curves. Advances in Mathematics of Communications, 2020, 14 (2) : 279-299. doi: 10.3934/amc.2020020
Title: The Optical/Near-Infrared Extinction Law In Highly Reddened Regions
Authors: M.W. Hosek Jr, J.R. Lu, J. Anderson, T. Do, E.F. Schlafly, A.M. Ghez, W.I. Clarkson, M.R. Morris, S.M. Albers
(Submitted on 25 Jan 2018)
Abstract: A precise extinction law is a critical input when interpreting observations of highly reddened sources such as young star clusters and the Galactic Center (GC). We use Hubble Space Telescope observations of a region of moderate extinction and a region of high extinction to measure the optical and near-infrared extinction law (0.8 $\mu$m -- 2.2 $\mu$m). The moderate extinction region is the young massive cluster Westerlund 1 (Wd1; A$_{Ks} \sim$ 0.6 mag), where 453 proper motion-selected main-sequence stars are used to measure the shape of the extinction law. To quantify the shape we define the parameter $\mathcal{S}_{1/\lambda}$, which behaves similarly to a color excess ratio but is continuous as a function of wavelength. The high extinction region is the GC (A$_{Ks} \sim$ 2.5 mag), where 819 red clump stars are used to determine the normalization of the law. The best-fit extinction law is able to reproduce the Wd1 main sequence colors, which previous laws misestimate by 10%-30%. The law is inconsistent with a single power law, even when only the near-infrared filters are considered, and has A$_{F125W}$/A$_{Ks}$ and A$_{F814W}$/A$_{Ks}$ values that are 18% and 24% larger than the commonly used Nishiyama et al. (2009) law, respectively. Using the law we recalculate the Wd1 distance to be 3896 $\pm$ 328 pc from published observations of eclipsing binary W13. This new extinction law should be used for highly reddened populations in the Milky Way, such as the Quintuplet cluster and Young Nuclear Cluster. A python code is provided to generate the law for future use.
Comments: 32 pages, 20 figures; submitted to ApJ
Subjects: Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR)
DOI: 10.3847/1538-4357/aaabbb
Cite as: arXiv:1801.08574 [astro-ph.GA]
(or arXiv:1801.08574v1 [astro-ph.GA] for this version)
From: Matthew Hosek Jr
[v1] Thu, 25 Jan 2018 19:24:40 GMT
Integration of relative metabolomics and transcriptomics time-course data in a metabolic model pinpoints effects of ribosome biogenesis defects on Arabidopsis thaliana metabolism
Christopher Pries, Zahra Razaghi-Moghadam, Joachim Kopka & Zoran Nikoloski
Scientific Reports volume 11, Article number: 4787 (2021)
Ribosome biogenesis is tightly associated with plant metabolism due to the usage of ribosomes in the synthesis of the proteins necessary to drive metabolic pathways. Given the central role of ribosome biogenesis in cell physiology, it is important to characterize the impact of different components involved in this process on plant metabolism. Double mutants of the Arabidopsis thaliana cytosolic 60S maturation factors REIL1 and REIL2 do not resume growth after shift to moderate 10 °C chilling conditions. To gain mechanistic insights into the effects of this ribosome biogenesis defect on metabolism, we developed TC-iReMet2, a constraint-based modelling approach that integrates relative metabolomics and transcriptomics time-course data to predict differential fluxes at the genome-scale level. We employed TC-iReMet2 with metabolomics and transcriptomics data from the Arabidopsis Columbia 0 wild type and the reil1-1 reil2-1 double mutant before and after cold shift. We identified reactions and pathways that are highly altered in the mutant relative to the wild type. These pathways include the Calvin–Benson cycle, photorespiration, gluconeogenesis, and glycolysis. Our findings also indicated differential NAD(P)/NAD(P)H ratios after cold shift. TC-iReMet2 allows for mechanistic hypothesis generation and interpretation of systems biology experiments related to metabolic fluxes at the genome-scale level.
Ribosomes are the workplaces of protein biosynthesis, and defects in the pathway of ribosome biogenesis affect many cellular processes, like metabolism, that critically depend on enzymatic proteins. While metabolism is known to affect ribosome function via the target of rapamycin (TOR) signalling pathway, little is known about how defects in ribosome biogenesis feed back on metabolism1. The Arabidopsis thaliana REIL proteins are involved in the late cytosolic steps of 60S ribosome subunit maturation and are required for growth under low temperature2. The reil1-1 reil2-1 double mutant is deficient for both REIL paralogs and, unlike the Arabidopsis Col-0 wild type, does not resume growth after cold shift, even under moderate 10 °C chilling conditions. This experimental system is ideally suited to investigate the cytosolic ribosome biogenesis defect at the metabolic level, since both wild type and mutant show growth arrest during the early hibernation phase (less than seven days after cold shift) followed by differential growth in the later stages. Therefore, mechanistic insights into the impact of the mutant's ribosome biogenesis defect on metabolism may become apparent early after cold shift, during the hibernation phase.
One possibility to investigate the feedback of ribosome biogenesis defects on metabolism is the characterization of reaction fluxes. Metabolic fluxes depend, in part, on the metabolite pools3. They also depend on the enzymatic setup of a cell, which is in turn governed by gene regulatory and signalling networks that affect protein activity4. However, determination of metabolic fluxes is a tedious and labour-intensive task5,6,7. A targeted analysis that predicts relevant fluxes for hypothesis generation based on integration of available high-throughput data sets from systems biology studies may streamline the planning of such time-consuming experimental flux studies.
In this regard, constraint-based approaches have proved to be valuable tools for hypothesis generation regarding flux distributions and their differential behaviour. For instance, the simplest of these approaches, flux balance analysis (FBA), can predict steady-state fluxes in bacteria at exponential growth8. In general, metabolic fluxes of a system are predicted under the assumption that the system operates at steady state and optimizes an objective (e.g. biomass yield). When feasible, the resulting mathematical problem often admits a non-unique flux distribution. To this end, constraints defined through integration of high-throughput data can reduce the solution space of feasible flux distributions9,10,11. Such approaches have been shown to result in more accurate predictions that are closer to the actual physiological state12. Despite the availability of methods that integrate high-throughput data, their full potential has yet to be realized13.
Of particular interest are approaches that allow integration of relative metabolite levels, since such data are easier to obtain than the absolute metabolite concentrations used in thermodynamic flux balance analysis14, as well as approaches that use time-series data (e.g. TREM-Flux15, uFBA16, and dFBA17). iReMet-flux18 is the only constraint-based approach to date that can integrate relative metabolite levels to investigate differential flux behaviour between two scenarios. It relies on a mass-action-like description of reaction rates (i.e. fluxes). In contrast to uFBA, iReMet-flux does not require absolute quantification of metabolite levels and therefore allows for a broader application due to the availability of relative metabolomics data. In contrast to TREM-Flux, it does not assume a linear scaling with the change of metabolite levels between two time points. In addition, iReMet-flux differs from a recent approach in which the relative metabolomics data are integrated on a qualitative level (i.e. increases or decreases)14. Similar to the objective on which MOMA is based19, iReMet-flux minimizes the flux differences between two scenarios, but does not rely on pre-calculated flux distributions for a reference scenario. Additionally, iReMet-flux allows for the integration of relative enzyme levels, either by direct usage of quantitative or qualitative proteomics data, or via gene expression ratios that can serve as a proxy10,20,21. However, when applied to time-series data, it does not account for the magnitude of possible flux changes between time steps. To address this problem, we extended iReMet-flux to account for temporal changes, while keeping the possibility of multi-level high-throughput data integration.
Here, we aimed to develop a novel constraint-based approach, termed TC-iReMet2, that facilitates the integration of relative metabolite and transcript levels while accounting for temporal change of physiological parameters. We used TC-iReMet2 to investigate differential flux behaviour of A. thaliana Col-0 wild type and reil1-1 reil2-1 double mutant plants before and after cold shift. Finally, we provided directly testable hypotheses about the impact of REIL-mediated deficiency in ribosome biogenesis on metabolism.
Formulation of TC-iReMet2
We propose Time Course Integration of Relative Metabolite and Transcript levels (TC-iReMet2), which estimates fluxes based on the integration of time-course data on relative metabolite and transcript levels. The key feature of TC-iReMet2 is that it accounts for the possible magnitude of flux changes between time points and thus could provide a more accurate explanation of flux rerouting over time. We show that TC-iReMet2 can be applied to study flux redistributions in pathways of a large-scale metabolic network of A. thaliana. In contrast to genome-scale metabolic networks22, we use the term large-scale models for those reconstructed following a bottom-up approach23.
Similar to other constraint-based approaches, TC-iReMet2 uses the stoichiometric matrix S of the considered metabolic model. The rows of the stoichiometric matrix correspond to metabolites, and its columns stand for reactions. The integer entries denote the molarity of a product (positive entry) or a substrate (negative entry) in a reaction, ensuring mass and charge conservation. In the following, we assume that the investigated metabolic network contains p reactions and n metabolites, and that its functioning is compared between two experimental scenarios, denoted by A and B (e.g. mutant and wild type), over two time points t and \({t+1}\). Furthermore, we denote by \(p_1\) the number of irreversible reactions and by \(p-p_1\) the number of reversible reactions.
Under mass action kinetics, the flux through an irreversible reaction i, \(1 \le i \le p_1\), can be formally described by \(v_i = k_i E_i \prod _{j=1}^n (x_j)^{|S_{ji}|}\), where \(x_j\) denotes the concentration of metabolite j, \(S_{ji}\) denotes the stoichiometric coefficient with which a metabolite j enters a reaction i as a substrate, \(E_i\) denotes the enzyme concentration and \(k_i\) denotes the reaction-specific rate constant. Note that this expression can be written equally for scenario A: \(v_i^A = k_i^A E_i^A \prod _{j=1}^n (x_j^A)^{|S_{ji}|}\) and scenario B: \(v_i^B = k_i^B E_i^B \prod _{j=1}^n (x_j^B)^{|S_{ji}|}\), where the rate constant \(k_i\) is the only unchanged parameter (\(k_i^A=k_i^B\)), as it summarizes the key property of the same enzyme. Therefore, the relationship of a single flux between two scenarios can be written as:
$$\begin{aligned} \frac{v_i^A}{v_i^B} = \frac{E_i^A}{E_i^B} \prod _{j=1}^n \frac{(x_j^A)^{|S_{ji}|}}{(x_j^B)^{|S_{ji}|}} \end{aligned}$$
To simplify the notation, we will refer to the ratio of metabolite levels of j as \(r_j = \frac{x_j^A}{x_j^B}\) and the ratio of enzyme levels catalyzing reaction i as \(q_i = \frac{E_i^A}{E_i^B}\). This allows us to rewrite the ratio of flux rates of reaction i as \(\frac{v_i^A}{v_i^B} = q_i \prod _{j=1}^n (r_j)^{|S_{ji}|}\) or equivalently \(v_i^A = [q_i \prod _{j=1}^n (r_j)^{|S_{ji}|}] v_i^B\).
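As a numeric illustration of this relation, the following minimal MATLAB sketch computes the implied scenario-A flux for one hypothetical irreversible reaction; the stoichiometric coefficients, ratios and flux value are invented for illustration only.

% Minimal numeric sketch of v_i^A = q_i * prod_j r_j^{|S_ji|} * v_i^B.
% All values are hypothetical and only illustrate the computation.
S_col = [-1; -2; 1];        % one reaction: two substrates, one product
r     = [0.8; 1.5; 1.0];    % metabolite ratios x^A / x^B (product ratio unused)
q     = 1.2;                % enzyme (transcript) ratio E^A / E^B
vB    = 2.0;                % flux of the reaction in scenario B (arbitrary units)

subs  = S_col < 0;                               % substrate positions F(i)
ratio = q * prod(r(subs) .^ abs(S_col(subs)));   % q_i * prod_j r_j^{|S_ji|}
vA    = ratio * vB;                              % implied flux in scenario A
fprintf('flux ratio = %.3f, v_A = %.3f\n', ratio, vA);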
Determining the entirety of metabolite and enzyme concentrations is not possible with existing technologies24,25: only a small portion of the metabolome, and hence of the metabolite ratios, can be quantified. To account for the case that a metabolite ratio cannot be measured, general upper and lower boundaries for metabolite ratios are introduced. If the ratio of metabolite j is experimentally quantified, this is indicated by \(\chi (r_j) = 1\) and otherwise by \(\chi (r_j) = 0\).
In the absence of enzyme ratios, we use the Gene Protein Reaction (GPR) rules of metabolic models to approximate enzyme ratios from transcriptomic data. The GPR rules are defined by a set of Boolean expressions that describe which genes encode an enzyme. For example, gene products corresponding to isoenzymes or isoforms are linked by an OR operator. Conversely, protein subunits that must be present simultaneously to form an active enzyme are linked by an AND operator. In the case of an enzyme encoded by one gene, the enzyme concentration is approximated by the gene's expression value. For each reaction that is catalyzed by a complex requiring multiple genes, the enzyme concentration is set to the minimum expression value of the gene products connected by the AND operator. For the OR operator, the sum of expression values for the respective genes is used. These rules were applied to each reaction in both scenarios, and the resulting values were divided and assigned as the corresponding enzyme ratio. Therefore, an enzyme ratio is represented by a ratio of gene expression levels following the GPR rules. Equivalently to metabolite ratios, if a GPR rule for reaction i is defined, this is indicated by \(H (q_i) = 1\), and for reactions without a defined GPR rule by \(H (q_i) = 0\).
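A minimal MATLAB sketch of this approximation for a single reaction with the hypothetical GPR rule (g1 AND g2) OR g3, i.e. a two-subunit complex or, alternatively, an isoenzyme; the gene names and expression values are invented.

% Enzyme-ratio proxy from transcript levels following GPR rules:
% AND -> minimum over subunit transcripts, OR -> sum over alternatives.
exprA = struct('g1', 4.0, 'g2', 6.0, 'g3', 1.0);   % scenario A (hypothetical)
exprB = struct('g1', 2.0, 'g2', 8.0, 'g3', 2.0);   % scenario B (hypothetical)

EA = min(exprA.g1, exprA.g2) + exprA.g3;   % proxy for E_i^A
EB = min(exprB.g1, exprB.g2) + exprB.g3;   % proxy for E_i^B
q  = EA / EB;                              % approximated enzyme ratio q_i
fprintf('q_i = %.3f\n', q);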
In this setup, we only consider constraints for irreversible reactions, since more than 80% of reactions that are assumed to follow mass–action-like kinetics (this excludes artificial and transport reactions) are irreversible in the analyzed model of A. thaliana. This has been verified by performing flux variability analysis at a fixed flux through the biomass reaction, to specify that 80% of reactions operate in only one direction18. A ratio constraint for reaction i is included if not only the enzyme ratio, but also at least one of the substrate ratios corresponding to that reaction is available. For metabolites or enzymes whose ratios could not be determined we use the extremal values found at that specific time point. Let F(i) denote the set of substrates of reaction i. Additionally, let the set of irreversible reactions with at least one experimentally quantified metabolite ratio and approximated enzyme ratio be denoted by \(\mathfrak {I}= \{ i | \sum _{j\in F(i)} \chi (r_j)> 0 \;\; \& \; H(q_i) > 0 \}\). A measured metabolite ratio for j and transcript ratio of i are indicated by \({{\hat{r}}_j^{min}} \le {{\hat{r}}_j} \le {{\hat{r}}_j^{max}}\) and \({{\hat{q}}_i^{min}} \le {{\hat{q}}_i} \le {{\hat{q}}_i^{max}}\), respectively. The bounds are defined as multiples of the standard deviation for the ratio. Cofactors were treated as unmeasured metabolites and for them the lower and upper bounds are \(min_{m:\; m \in \{ \ell | \chi (r_\ell )=1\}}\;{{\hat{r}}_m^{min}}\) and \(max_{m:\; m \in \{ \ell | \chi (r_\ell )=1\}}\;{{\hat{r}}_m^{max}}\), respectively. Equivalently we can write \(min_{m:\; m \in \{ \eta | H (q_\eta )=1\}}\;{{\hat{q}}_m^{min}}\) and \(max_{m:\; m \in \{ \eta | H (q_\eta )=1\}}\;{{\hat{q}}_m^{max}}\) for unmeasured transcript ratios. Furthermore, to account for enzymes that are substrate saturated and in turn would lead to infeasibilities due to metabolite ratio constraints, slack variables \(\varepsilon _i\) were introduced to relax the strict ratio constraints. To minimize these relaxations a weighting of the summed slack variables of \(\epsilon = 0.01\) was used. Hence, a ratio constraint was formulated as follows:
$$\begin{aligned} v_i^{B}q_i\prod _{j\in F(i)}(r_j^{min})^{|S_{ji}|}-\varepsilon _i \le v_i^{A} \le v_i^{B}q_i\prod _{j\in F(i)}(r_j^{max})^{|S_{ji}|}+\varepsilon _i. \end{aligned}$$
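The substitution of extremal bounds for unmeasured ratios described above amounts to a simple fallback rule; a minimal MATLAB sketch with hypothetical indicator values and bounds:

% Assemble per-metabolite ratio bounds: measured ratios keep their own bounds,
% unmeasured ones (including cofactors) fall back to the extremes among the
% measured set. The indicator chi and all bounds are hypothetical.
measured = logical([1 0 1 0]);               % chi(r_j) for four metabolites
rmin = [0.7 NaN 1.2 NaN];                    % measured lower bounds
rmax = [0.9 NaN 1.6 NaN];                    % measured upper bounds

rmin(~measured) = min(rmin(measured));       % fallback to smallest measured bound
rmax(~measured) = max(rmax(measured));       % fallback to largest measured bound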
Similarly, a ratio constraint for the biomass reaction can be formulated. To this end, a time-point-specific biomass fraction, denoted by \(\varkappa _{t+1}\), can be calculated. First, the maximum biomass yield, denoted by opt, is calculated for both scenarios via FBA. A biomass fraction \(\varkappa _{t+1}\) between both scenarios is then determined by using proxies for biomass (for a detailed description see Methods – Parameterizing the objective function of TC-iReMet2 and estimating fractions of biomass yield). We fix the biomass reaction of scenario B to its respective value derived from FBA. In contrast, the biomass flux in scenario A is fixed to a fraction \(\varkappa _{t+1}\) of its optimal yield. Lower and upper bounds are specified as deviations, denoted by \(\delta\), of the calculated fraction. Therefore, biomass fluxes for both scenarios can be constrained as follows:
$$\begin{aligned}&(\varkappa _{t+1}-\delta )\;opt^A \le v_{Biomass}^{A} \le (\varkappa _{t+1}+\delta )\;opt^A, \end{aligned}$$
$$\begin{aligned}&v_{biomass}^B = opt^B. \end{aligned}$$
Furthermore, we assume that: (i) the metabolic network operates in quasi-steady state at every time point; hence, \(Sv^A = Sv^B = 0\), where \(v^A\) and \(v^B\) denote the flux distributions of scenarios A and B, respectively. (ii) The biological system aims to maintain an optimal state given by the enzymatic setup. This assumption is captured by requiring that the flux distributions of the two scenarios at a given time point \(t+1\) are as close as possible, i.e. by minimizing \(||(v^{B}_{t+1}-v^{A}_{t+1})||_2^{2}\). (iii) The physiological state at time \(t+1\) depends on the physiological state at time t. We model this assumption by accounting for the magnitude of possible physiological changes, ensuring that the differences of flux distributions between time points are as small as possible, i.e. minimizing \(||(v^{A}_{t+1}-v^{A}_{t})||_2^{2}\) and \(||(v^{B}_{t+1}-v^{B}_{t})||_2^{2}\), respectively. This magnitude naturally depends on the interval between time points: the longer the interval, the larger the possible flux changes. To this end, we introduce weighting factors to minimize the difference of flux distributions between scenarios at the current time point, weighted by \(\alpha\), as well as the differences to the prior time point for scenario A, weighted by \(\beta\), and scenario B, weighted by \(\gamma\).
In summary, the TC-iReMet2 approach is cast as a quadratic program (QP) as follows:
$$\begin{aligned} \begin{array}{ll@{}ll} {\displaystyle \min _{v^{A},v^{B},\varepsilon } \alpha ||(v^{A}_{t+1}-v^{B}_{t+1})||_2^{2}+\beta ||(v^{A}_{t+1}-v^{A}_{t})||_2^{2}+\gamma ||(v^{B}_{t+1}-v^{B}_{t})||_2^{2} + \epsilon \; \Sigma _{i=1}^{p} \;\varepsilon _i} &{} &{} \\ \\ \text {s.t.}&{} &{} &{} \\ &{} &{} &{} \\ \\ {Sv_{t+1}^{A}=Sv_{t+1}^{B} = 0 ,} \\ \\ {v_{min}^{A} \le v^{A}_{t+1} \le v_{max}^{A} ,} \\ \\ {v_{min}^{B} \le v^{B}_{t+1} \le v_{max}^{B} ,} \\ \\ {(\varkappa _{t+1}-\delta )\;opt^A \le v_{Biomass}^{A} \le (\varkappa _{t+1}+\delta )\;opt^A ,} \\ \\ {v_{biomass}^B = opt^B ,}\\ \\ {\forall i \in \mathfrak {I}: v_i^{B}q_{i}\prod _{j\in F(i)}(r_j^{min})^{|S_{ji}|}-\varepsilon _i \le v_i^{A} \le v_i^{B}q_{i}\prod _{j\in F(i)}(r_j^{max})^{|S_{ji}|}+\varepsilon _i ,} \\ \\ {\forall j \in \{ \ell | \chi (r_\ell )=1\}:r^{min}_j=\hat{r}_j^{min}, r^{max}_j=\hat{r}_j^{max} ,} \\ \\ { \forall \ell \in \{ \ell | \chi (r_\ell )=0\}:r^{min}_\ell =\min _{m \in \{ \ell | \chi (r_\ell )=1\}}\;{{\hat{r}}_m^{min}}, \;\; r^{max}_\ell =\max _{m \in \{ \ell | \chi (r_\ell )=1\}}\;{{\hat{r}}_m^{max}} ,} \\ \\ {\forall i \in \{ \eta | H (q_\eta )=1\}:q^{min}_i=\hat{q}_i^{min}, q^{max}_i=\hat{q}_i^{max} ,} \\ \\ {\forall \eta \in \{ \eta | H (q_\eta )=0\}:q^{min}_\eta =\min _{m \in \{ \eta | H (q_\eta )=1\}}\;{{\hat{q}}_m^{min}}, \;\; q^{max}_\eta =\max _{m \in \{ \eta | H (q_\eta )=1\}}\;{{\hat{q}}_m^{max}} ,} \\ \\ {\forall i \in \mathfrak {I}: 0 \le \varepsilon _i}. \\ \\ \end{array} \end{aligned}$$
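The program above can be handed to any convex QP solver. The following MATLAB sketch casts a stripped-down single-time-step version (steady state, flux bounds, relaxed ratio constraints and the three-term objective; the biomass constraints are omitted for brevity) for quadprog from the Optimization Toolbox. It is a minimal sketch, not the published implementation (which uses the Tomlab environment); the toy network, weights, ratio bounds and previous-time-point fluxes are all hypothetical.

% Stripped-down TC-iReMet2 step as a QP in x = [vA; vB; s], solved with quadprog.
p = 4; S = [1 -1 0 0; 0 1 -1 -1];          % toy network: 2 metabolites, 4 reactions
lbv = zeros(p,1); ubv = 10*ones(p,1);      % all reactions irreversible here
vA0 = ones(p,1); vB0 = ones(p,1);          % flux distributions at the previous time point
alpha = 1; beta = 0.5; gamma = 0.5; eps_w = 0.01;

ratioRxn = [2 3];                          % reactions carrying ratio constraints
Rmin = [0.8; 0.9]; Rmax = [1.5; 1.1];      % assembled products q_i * prod_j r_j^{min/max}
m = numel(ratioRxn);

% Objective alpha*||vA-vB||^2 + beta*||vA-vA0||^2 + gamma*||vB-vB0||^2 + eps_w*sum(s),
% written as (1/2)*x'*H*x + f'*x for quadprog (constant terms dropped).
I = eye(p);
H = 2*[(alpha+beta)*I, -alpha*I,        zeros(p,m);
       -alpha*I,       (alpha+gamma)*I, zeros(p,m);
       zeros(m,2*p),                    zeros(m,m)];
f = [-2*beta*vA0; -2*gamma*vB0; eps_w*ones(m,1)];

% Steady state in both scenarios: S*vA = 0 and S*vB = 0.
nMet = size(S,1);
Aeq = [S, zeros(nMet,p), zeros(nMet,m);
       zeros(nMet,p), S, zeros(nMet,m)];
beq = zeros(2*nMet,1);

% Relaxed ratio constraints: Rmin_i*vB_i - s_i <= vA_i <= Rmax_i*vB_i + s_i.
Aineq = zeros(2*m, 2*p+m); bineq = zeros(2*m,1);
for k = 1:m
    i = ratioRxn(k);
    Aineq(k,   i) =  1; Aineq(k,   p+i) = -Rmax(k); Aineq(k,   2*p+k) = -1;
    Aineq(m+k, i) = -1; Aineq(m+k, p+i) =  Rmin(k); Aineq(m+k, 2*p+k) = -1;
end

x  = quadprog(H, f, Aineq, bineq, Aeq, beq, ...
              [lbv; lbv; zeros(m,1)], [ubv; ubv; inf(m,1)]);
vA = x(1:p); vB = x(p+1:2*p);              % predicted flux distributions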
Application of TC-iReMet2 to data from the reil1-1 reil2-1 A. thaliana mutant
We employed TC-iReMet2 to gain insights into the metabolic effects of the ribosome biogenesis defect that is caused by A. thaliana REIL deficiency. To this end, we compared predicted flux differences between the Col-0 wild type and the reil1-1 reil2-1 double mutant with deficiency in cytosolic 60S ribosome biogenesis. The REIL proteins are required for growth when plants are shifted to cold (< 10 °C) conditions, but not at optimal temperature (≃ 20 °C)2. The reil1-1 reil2-1 double mutant and wild type differ only slightly in size when grown at 20 °C. Young developing leaves of the mutants showed an acute tip and two basal serrations instead of the typical rounded leaves of the Col-0 wild type, and were similar to the pointed leaves phenotype of cytosolic ribosome mutants26,27,28. However, the pointed-leaf phenotype of the reil1-1 reil2-1 double mutant was no longer apparent after transfer to soil and at the developmental stages < 1.1029 that were analyzed in this study. When shifted to 10 °C (cold), both the mutant and the wild type stopped growing. Following seven days in the cold, the wild type resumed growth, while the mutant remained strongly growth-inhibited (Fig. 1, Supplementary Table S1). The mutant survived at least four weeks after cold shift and maintained cellular integrity, as determined by electrolyte leakage assays of rosette leaves30. Growth parameters of wild type and reil1-1 reil2-1 were determined as proxies of relative biomass accumulation at days 0, 1, 7 and 21 after cold shift using morphometric data (see Methods – Parameterizing the objective function). Along with the morphometric data, the relative changes of metabolite pools and transcripts were profiled30 (see "Methods" section for details).
Morphometric analyses of reil1-1 reil2-1 and wild type after shift from optimized (20 °C) to low temperature (10 °C). reil1-1 reil2-1 double mutants and A. thaliana Col-0 wild type plants were shifted at developmental stage 1.1029. Week-0 plants were grown at 20 °C and assayed before the temperature shift. Rosette diameter, (A); leaf area, (B) (mean ± standard deviation; n = 3–10 plants); for original data and definitions of morphometric parameters refer to Schmidt et al. 20132. The R coefficients represent the Pearson correlation between mutant and wild type with respect to the diameter (A) (P-value = \(2.91\times 10^{-11}\)) and leaf area (B) (P-value = \(1.53\times 10^{-5}\)).
The experimental setup and the availability of transcriptomics data and data on relative metabolite levels allowed the application of TC-iReMet2 to quantify the nominal and relative differences in the metabolic fluxes of the wild type and the mutant29,31 (Supplementary Fig. S1). We refer to nominal changes as the sum of predicted flux differences, defined as the absolute value of the difference between wild type and mutant flux, over all analyzed time points. The nominal changes may provide a skewed picture of the differences, particularly since the fluxes of different reactions in a given flux distribution differ by several orders of magnitude20. As a result, differences between fluxes that are small to begin with will be dominated by differences between fluxes that take larger values. To remedy this issue, we also calculated the relative changes, defined as the sum of normalized flux differences over all analyzed time points, where the flux differences between wild type and mutant were normalized to their respective absolute maximum value over all time points. To apply TC-iReMet2 we used a bottom-up assembled model of A. thaliana, ArabidopsisCore23. This model consists of 549 reactions, of which 229 are transport reactions and artificial reactions representing growth (biomass) and non-growth-associated maintenance functions (NGAM)22.
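The two summary measures can be computed directly from the predicted flux matrices; a minimal MATLAB sketch, where V_wt and V_mut are hypothetical reactions-by-time-points matrices:

% Nominal and relative flux changes across time points.
% V_wt, V_mut: p-by-T matrices of predicted fluxes (hypothetical example data).
V_wt  = [1 2 3 8; 0.5 0.5 0.6 0.7];
V_mut = [1 1 2 3; 0.4 0.5 0.5 0.5];

D        = abs(V_wt - V_mut);            % per-reaction, per-time-point differences
nominal  = sum(D, 2);                    % nominal change: plain sum over time points
relative = sum(D ./ max(D, [], 2), 2);   % relative change: each reaction normalized
                                         % to its own maximum difference first
% (a reaction with zero difference at all time points would need a 0/0 guard)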
Changes in predicted sums of fluxes. Shown are the optimal values of the Euclidean distance (displayed on the y-axis) at each corresponding time point or time step (displayed on the x-axis). Distances were visualized by plotting the Euclidean distance value above each bar. (A) Displayed are the sums of flux differences between wild type and mutant at each corresponding time point. (B) Displayed are the sums of flux changes of the wild type and the mutant fluxes between each two consecutive time points.
Sum of predicted flux differences
The overall flux distance of the wild type compared to the mutant across all predicted reactions already differed before cold shift, with the wild type having a higher overall flux (Fig. 2A). This prediction was consistent with the slight growth advantage of the wild type at the optimized growth temperature (Fig. 1). The difference of fluxes between consecutive time points remained approximately constant during the common hibernation phase, up to day 7. When the wild type resumed growth in the cold, the overall predicted flux differences increased approximately 3-fold. When considering the sum of flux changes per time step for the wild type (Supplementary Fig. S2) and the mutant (Supplementary Fig. S3), we find similar changes for both at the steps from day 0 to day 1 and from day 1 to day 7, with an increase in the change from day 7 to day 21 (Fig. 2B). However, we observe that the changes between days 7 and 21 are considerably larger in the wild type in comparison to the mutant, in line with the resumed growth of the former in the cold. In the following, we identify the reactions and pathways which contribute most to these observed differences.
Analysis of differential reactions
We next considered the flux differences for each reaction in the metabolic model. Additionally, we investigated reactions displaying large changes in flux differences at early time points, as these are the most informative for understanding the changes in metabolic network functionality in response to the cold shift.
K-means clustering of reaction behaviour
We focussed on the differential behaviour of all reactions between mutant and wild type, excluding transport reactions and artificial reactions to avoid bias due to the lack of gene association for these reactions. To this end, we applied K-means clustering to group reactions (Supplementary Table S2) with similar relative flux changes, where the number of clusters was determined by the silhouette index (Supplementary Fig. S4). As a result, we identified K = 7 clusters of reactions (Fig. 3), with a maximum silhouette index value of 0.78, based on the relative flux changes (Fig. 3A). For comparison, we also consider the K-means clustering of the nominal flux changes (Fig. 3B). To provide an intuitive description of clusters as well as of reaction behaviour over time, we introduce a three-character pattern consisting of Up (U), Down (D) and No change (N), depending on whether the respective relative flux differences increased, decreased or stayed the same between two time points. Using this classification we found 17 of the 27 possible patterns, displayed by 320 reactions. A total of 111 reactions were classified by the most common pattern 'UUU', making up roughly 35% of all observed patterns.
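A minimal MATLAB sketch of this pattern classification; the example trajectory is hypothetical, and the tolerance below which a step counts as 'N' is our assumption (the classification above does not state a numerical threshold):

% Classify a reaction's relative flux-difference trajectory over four time
% points into a three-character U/D/N pattern.
d   = [0.1 0.4 0.4 0.9];    % relative flux differences at days 0, 1, 7, 21 (hypothetical)
tol = 1e-3;                 % assumed tolerance for "no change"

steps   = diff(d);                        % changes between consecutive time points
pattern = repmat('N', 1, numel(steps));
pattern(steps >  tol) = 'U';
pattern(steps < -tol) = 'D';
disp(pattern)                             % prints 'UNU' for this example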
Overview of K-means clustering based on relative changes in reaction fluxes. K-means with Euclidean distance was used to identify seven clusters (C) of reactions (excluding transporters and artificial reactions). (A) Shows flux difference values normalized to the absolute maximum difference of each reaction for each time point. Corresponding nominal flux differences are shown in (B).
Overall, we mainly identified conserved flux differences in the first three time points with a shift in flux difference at day 21. This behaviour can be observed in the three biggest clusters. Cluster 5 consisted of 164 reactions, which exhibit an increase of relative flux changes (UUU). In contrast, cluster 2, consisting of 46 reactions, exhibited mainly decrease of relative flux changes (DDD). This inverse behaviour is best captured by the function of RuBisCO as it exhibits strong flux changes for its carboxylation function (cluster 5) and oxygenation function (cluster 2). Reactions in cluster 3 mainly exhibited no changes (NNN). Similarly, cluster 6 summarizes reactions that exhibit constant positive flux change over all time points. The remaining clusters 1, 4 and 7 group reactions that exhibit an inverse shift in behaviour at day 21.
If we consider the top 10 reactions (Supplementary Table S3) with respect to relative and nominal changes directly after cold shift, we find H-serine dehydrogenase (HSerDHNADP_h (UDU), HSerDHNAD_h(DUD)), isocitrate dehydrogenase (iCitDHNADP_m (DDD), iCitDHNAD_m(UUD)) as well as 6-phosphogluconic dehydrogenase (6PGDHNAD_h(DUD)), glutamate dehydrogenase (GluDH1NADP_m(DUD)) and glutamate synthetase (GluSNAD_h(UDD)) conserved among both measures. All these reactions are redox reactions. Additionally, 6-phosphogluconic dehydrogenase (6PGDHNADP_h(UDU)), glutamate dehydrogenase (GluDH2NAD_m(DUD)) and glutamate synthetase (GluSNAD_h(UDD)) can only be found in the top 10 reactions of nominal changes. Conversely, malate dehydrogenase (MalDH2NADP_c(UNN)), fructose-biphosphate aldolase (SBPA_h(UDD)) and sedoheptulose-biphosphatase (SBPase_h(UDD)) can only be found in the top 10 reactions of relative changes.
Pathways enriched in reactions with highly altered fluxes across time points
Metabolic reactions do not function in isolation, so analysis and interpretation of the predictions is best carried out in terms of pathways. To identify the pathways that are changed over time, we used the metabolic pathways as defined by the underlying A. thaliana model23 (for definitions of pathway membership refer to Arnold et al. 201423; Supplementary Table S4). We inspected and considered as relevant those pathways that were enriched with reactions displaying large predicted flux differences between wild type and mutant (Fig. 4). A reaction was defined to exhibit large changes if its absolute sum of flux changes across all time points was above the median of the considered reactions present in the model (excluding transport and artificial reactions, as specified above). To identify pathways enriched with such reactions we used Fisher's exact test with significance threshold P < 0.05, considering multiple-hypothesis correction following the Benjamini–Hochberg procedure (Supplementary Table S5). Considering nominal changes, we found five pathways to be enriched for reactions with large changes. These pathways, ordered by decreasing P-value, with \(p<\) 0.01, include: the Calvin–Benson cycle (CBC), photorespiration, gluconeogenesis, leucine synthesis, and, in addition with \(p<\) 0.05, glycolysis. Considering relative instead of nominal changes, we found pathways with \(p < 0.01\) to include the Calvin–Benson cycle, glycolysis, gluconeogenesis, and, in addition with \(p < 0.05\), photorespiration.
Pathways enriched in reactions with highly altered fluxes. Displayed are pathways significantly (\(P \le 0.05\)) enriched in regulated reactions based on (A) relative and (B) nominal differences. They are ordered in descending order of their respective P-values. The size of the dots corresponds to the count of reactions present in the pathway. Bar size represents the negative logarithm of the P-value (x-axis).
Flux sampling analysis with quadratic constraints
We examined how specific these findings are by sampling the solution space at the optimum for each considered time point. Since a sufficiently large sample provides information about the ranges of fluxes as well as their probabilities, it offers a means to explore alternative solutions and thus the uniqueness of the solution. Here, for each considered time point, the proposed approach (see Methods – Flux sampling for TC-iReMet2) did not find an alternative solution after 1000 trials. Therefore, this analysis indicates that the findings are specific, in the sense that alternative optima are unlikely, and significant, as no other flux distributions at the optimum were found.
Here, we proposed a computational approach, termed TC-iReMet2, and showed that it provides the means for time-resolved prediction of fluxes while keeping the simplicity of the constraint-based modelling framework and allowing for the integration of relative metabolomic, transcriptomic, and morphometric data. The findings of this study indicate that TC-iReMet2, a differential flux profiling method, can be used to identify differential fluxes between wild type and mutants over time. It is important to note that TC-iReMet2 uses the ratio of transcripts as a proxy for the ratio of enzyme abundances (following GPR rules). This is a strong assumption, knowing that post-translational modifications and translational efficiency have a large effect on both the abundances and ratios of proteins. However, since transcript ratios are used as one component of the constraints, such an approach provides a better coverage of metabolic networks than proteomics data14. With advances in proteomics profiling, TC-iReMet2 has the potential to provide further applications closer to the assumptions of the approach.
Moreover, the enzyme kinetics assumed in TC-iReMet2 consider neither saturation effects nor the presence of regulators (e.g. activators or inhibitors) of enzyme activity. Inclusion of a saturation effect, as in Michaelis–Menten kinetics, would not allow casting the problem with only linear constraints, rendering application to large-scale networks computationally challenging. A similar problem arises when considering the inclusion of regulation, which is additionally problematic due to the lack of information on how the effect of a regulator is captured in the enzyme kinetic form used, particularly for plants32,33. One possible approach to overcome these issues is to use a power-law formalism34, which would allow the constraints to remain linear, at the cost of making assumptions about which regulators affect a reaction rate and with what strength. For these reasons, TC-iReMet2 is formulated based on mass-action-like kinetics, while allowing for discrepancies to model possible effects of the mentioned saturation and regulation.
Applying TC-iReMet2 to the comparison of the growth-deficient reil1-1 reil2-1 double mutant to the A. thaliana Col-0 wild type before and after cold shift strongly supported the previously suggested hypothesis that REIL-mediated ribosome biogenesis deficiency feeds back into metabolism. Overall, we find that flux differences are more similar during the hibernation phase, with strong flux redistributions occurring at day 21. This is consistent with the data from the morphometric analysis (Fig. 1, Supplementary Table S6): mutant and wild type plants grow similarly but start to differ strongly between days 7 and 21 after the temperature shift.
More importantly, TC-iReMet2 enables a comparison of wild type and mutant differential fluxes prior to cold shift and in the early hibernation phase. Thus, it allows for the analysis of the mutant system relative to the wild type without being obscured by the effects of differential growth occurring between days 7 and 21 of the current experiments. When considering differential fluxes during the hibernation phase, we find that REIL-mediated ribosome biogenesis deficiency might feed back into metabolism by altering the RuBisCO carboxylase to oxygenase ratio (Supplementary Table S7). Additionally, mutant-associated deficiencies of the CBC and glycolysis fluxes combined with a mutant-specific increase of all fluxes in the photorespiratory pathway support this hypothesis (Fig. 4). Overall, the strongest mutant flux deficiencies appear to be in the RuBisCO (carboxylation), FNR, malate dehydrogenase and alanine transaminase reactions (Supplementary Table S5).
Predicted relative flux changes directly after cold shift appear to be small. However, one day after cold shift, the fluxes of reactions distributed across various pathways of central metabolism, including carbohydrate, organic acid and amino acid metabolism, differ between mutant and wild type. What is common to these reactions is that they all require NAD(P) as a cofactor. These changes may indicate either an altered redox state of these cofactors or, more likely, differential use of NAD and NADP after cold shift in the mutant. For example, when considering the predicted inverse flux changes of the mitochondrial iCit dehydrogenase isozyme reactions, iCitDHNAD_m and iCitDHNADP_m, we can deduce in agreement with our metabolic model that the mutant switches to preferential use of NAD rather than NADP for this reaction. Inversely, a preferential use of NADP is predicted for the 6-phosphogluconic dehydrogenase reactions, 6PGDHNADP_h and 6PGDHNAD_h, and for the H-serine dehydrogenase isozyme reactions, HserDHNADP_h and HserDHNAD_h. Taken together with the additionally indicated flux changes of glutamate synthetase (GluSNAD_h), and of the mitochondrial malate dehydrogenase (MalDHNAD_m) or glutamate dehydrogenases, GluDH1NADP_m and GluDH2NAD_m, we hypothesize that the reil1-1 reil2-1 double mutant defect is associated with NAD/NADP cofactor deregulation.
Generation of this hypothesis would not have been possible by analysis of the transcriptome data alone. When we compare the results of TC-iReMet2 with a differential analysis of the transcriptomics data reaction by reaction following GPR rules, only the increase of isocitrate dehydrogenase (iCitDHNAD_m) flux in the mutant overlaps with TC-iReMet2's predictions (Supplementary Table S8). This indicates that TC-iReMet2's integration of metabolomics and transcriptomics data provides added value compared to the sole analysis of either, thus providing new and additional support for hypothesis generation. Verification of these findings and hypothesis testing can be performed by subsequent studies and detailed quantification of NAD and NADP levels and their redox states under the same and extended experimental set-ups. Altogether, the predictions from TC-iReMet2 suggest that altered use of NAD and NADP, or of their redox states, is an important mechanism by which REIL-mediated ribosome biogenesis deficiency feeds back into metabolism early after cold shift.
The current formulation of TC-iReMet2 has the potential to be further optimized, since the weighting parameters of the objective function are chosen based on the assumption that the weighting decreases steadily as the interval between time points increases. Validation of predictions lent robustness to this assumption. Yet, the usage of different weights could be considered based on insights from independent physiological measurements. In addition, rather than using relative transcriptomics data, relative proteomics data9 or enzyme activity measurements35 could be integrated to provide more reliable predictions that are less influenced or obscured by post-transcriptional levels of regulation than transcriptome data. Therefore, TC-iReMet2 improves existing constraint-based approaches for differential flux prediction by accounting for possible temporal physiological change while also allowing for the integration of morphometric data.
Flux sampling for TC-iReMet2
Uniform flux sampling provides an unbiased characterization of the solution space. When enough flux distributions are sampled, they can be used to analyze their probability distributions or the range of specific fluxes. In this setup, flux sampling was performed using a random walk algorithm (Hit and Run).
A linear program with a single quadratic constraint is defined to find possible alternative solutions. For this, the solution space has to be defined. In addition to the defining constraints given by TC-iReMet2, a single quadratic constraint, owing to the quadratic objective function on which TC-iReMet2 is based, has to be introduced. It fixes the value of the objective function to its value at the optimum, forcing the optimization to find alternative optima. Here, z denotes the value of TC-iReMet2's objective function and \(z^*\) the value found at the optimum of the specific time point. Fluxes at the current time point are denoted by \(v_{t+1}\), whereas \(v_{t}\) denotes the flux distribution of the prior time point. We also allow for a small deviation, denoted by \(\zeta\), of the objective function at the optimum to counteract numerical problems. This way the solution space, containing all possible solutions at the time-point-specific optimum, is defined. To sample this space, the following steps are performed:
$$\begin{aligned} \begin{array}{ll@{}ll} {\displaystyle \max _{\lambda , v_{t+1}, z} \lambda } &{} &{} \\ \\ \text {s.t.}&{} &{} &{} \\ &{} &{} &{} \\ \\ {Sv_{t+1}^{A}=Sv_{t+1}^{B} = 0 ,} \\ \\ {v_{min}^{A} \le v^{A}_{t+1} \le v_{max}^{A},} \\ \\ {v_{min}^{B} \le v^{B}_{t+1} \le v_{max}^{B},} \\ \\ { (\varkappa _{t+1}-\delta )\;opt^A \le v_{Biomass}^{A} \le (\varkappa _{t+1}+\delta )\;opt^A ,} \\ \\ {v_{biomass}^B = opt^B , }\\ \\ {\forall i \in \mathfrak {I}: v_i^{B}q_{i}\prod _{j\in F(i)}(r_j^{min})^{|S_{ji}|}-\varepsilon _i \le v_i^{A} \le v_i^{B}q_{i}\prod _{j\in F(i)}(r_j^{max})^{|S_{ji}|}+\varepsilon _i,} \\ \\ {\forall j \in \{ \ell | \chi (r_\ell )=1\}:r^{min}_j=\hat{r}_j^{min}, r^{max}_j=\hat{r}_j^{max} ,} \\ \\ { \forall \ell \in \{ \ell | \chi (r_\ell )=0\}:r^{min}_\ell =\min _{m \in \{ \ell | \chi (r_\ell )=1\}}\;{{\hat{r}}_m^{min}}, \;\; r^{max}_\ell =\max _{m \in \{ \ell | \chi (r_\ell )=1\}}\;{{\hat{r}}_m^{max}} ,} \\ \\ {\forall i \in \{ \eta | H (q_\eta )=1\}:q^{min}_i=\hat{q}_i^{min}, q^{max}_i=\hat{q}_i^{max} ,} \\ \\ {\forall \eta \in \{ \eta | H (q_\eta )=0\}:q^{min}_\eta =\min _{m \in \{ \eta | H (q_\eta )=1\}}\;{{\hat{q}}_m^{min}}, \;\; q^{max}_\eta =\max _{m \in \{ \eta | H (q_\eta )=1\}}\;{{\hat{q}}_m^{max}} ,} \\ \\ {0 \le \lambda \le \infty ,} \\ \\ { (1-\zeta )z^*\le z \le (1+\zeta )z^* ,} \\ \\ { v_{t+1} = v_{0} + \lambda \;v_{direction} ,} \\ \\ { z = \alpha ||(v^{A}_{t+1}-v^{B}_{t+1})||_2^{2}+\beta ||(v^{A}_{t+1}-v^{A}_{t})||_2^{2}+\gamma ||(v^{B}_{t+1}-v^{B}_{t})||_2^{2}} \end{array} \end{aligned}$$
1. Select an initial point \(v_0\) in solution space (here, we used \(v_{t+1}\) in optimal solution, derived from the main optimization problem of TC-iReMet2 as \(v_0\), since it must lie in the solution space).
2. Select a random direction \(v_{direction}\) pointing in solution space.
3. Find the extreme point of the solution space along \(v_{t+1} = v_0 + \lambda \; v_{direction}\) by solving a linear program with a quadratic constraint (the quadratic constraint stems from the quadratic objective of the main problem); a solver-free ratio test for the polytope part is sketched after this list. If there is no solution to the optimization problem given above, \(v_{direction}\) does not point into the solution space. As a consequence, steps 2 and 3 are repeated until a solution is found.
4. If there is a solution for the optimization program at step 3, a new point at the edge of the solution space can be determined to form a line segment with the initial point.
5. Randomly choose a point on this line segment to create a sample, which is in turn updated as a new initial point \(v_0\).
6. Repeat steps 1. to 5. until the defined number of samples is collected.
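For the linear (polytope) part of the constraints, written generically as \(Ax \le b\), the maximal feasible step along a direction can be obtained without a solver by a ratio test. The following minimal MATLAB sketch shows one hit-and-run move on a hypothetical polytope and omits the quadratic optimum constraint, which the full procedure handles via the program above:

% One hit-and-run move inside a polytope {x : A*x <= b}; the quadratic optimum
% constraint is omitted here for brevity. A, b and x0 are hypothetical.
A  = [eye(2); -eye(2)]; b = [1; 1; 0; 0];     % unit box in 2D
x0 = [0.5; 0.5];                              % current interior point

d  = randn(2,1); d = d / norm(d);             % random direction
Ad = A*d; slack = b - A*x0;                   % movement rates and facet distances

% Ratio test (assumes a bounded polytope, so both index sets are non-empty):
lamMax =  min(slack(Ad > 0) ./  Ad(Ad > 0));  % furthest feasible step forward
lamMin = -min(slack(Ad < 0) ./ -Ad(Ad < 0));  % furthest feasible step backward
x1 = x0 + (lamMin + rand*(lamMax - lamMin))*d; % uniform sample on the segment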
Numerical stability of TC-iReMet2
The multiplication of relative metabolite levels, the substitution of unmeasured metabolite ratios with their respective minimum and maximum ratio values, together with approximated enzyme ratios, can lead to extremely large ratio constraints, which in turn could lead to numerical instabilities. Determining a maximum considered ratio constraint is therefore crucial to ensure numerical stability. To this end, we calculated the flux distributions allowing for a maximum ratio constraint ranging from 10\(^1\) to 10\(^{21}\) (which includes the maximum possible ratio constraint in this setup) 10 times. Since each of those repeated calculations resulted in the same flux distribution, we used Pearson correlation to measure the similarity of each flux distribution to its prior and successive ones. Overall, correlations between flux distributions were very high, all above 0.9. The highest correlated region, while having a feasible solution at each considered time point, was detected when allowing for a maximum ratio constraint of 10\(^8\). This leads to 252, 252, 253 and 189 ratio constraints at the respective time points. Hence, no ratio constraint exceeding 10\(^8\) was considered in this setup, to ensure numerical stability.
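A minimal MATLAB sketch of the cap and of the correlation check; the assembled ratio products and the two flux vectors are hypothetical:

% Cap assembled ratio-constraint coefficients before building constraints.
R = [0.5; 3.1e3; 2e9; 7e12];     % hypothetical products q_i * prod_j r_j^{|S_ji|}
Rcapped = min(R, 1e8);           % no coefficient exceeds the 1e8 cap

% Pearson correlation between flux distributions obtained under two caps.
v1 = [1.0; 2.0; 3.0]; v2 = [1.1; 2.0; 2.9];   % hypothetical flux vectors
c = corrcoef(v1, v2); rho = c(1,2);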
K-means clustering
We used the R statistical programming language's implementation of the K-means algorithm with seven assumed clusters (\(K=7\)) and the Euclidean distance as distance measure. The reason for selecting \(K=7\) is the following: we assumed there to be one cluster of unchanged flux differences, one cluster displaying stronger wild type flux with a rise at day 21 and, inversely, the same for the mutant; two clusters of inverse behaviour where the wild type or mutant flux is stronger at the first time points with a shift in sign at day 21; and, lastly, two clusters of consistent flux difference favouring wild type or mutant, conserved over all time points. This line of reasoning was supported by the silhouette index analysis, which specified that the number of clusters K = 7 maximizes the value of the index.
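Although the clustering was performed in R, a MATLAB sketch (Statistics and Machine Learning Toolbox) of the same cluster-number selection by mean silhouette value is given below, for consistency with the other sketches in this paper; the data matrix is a hypothetical stand-in for the reactions-by-time-points matrix of relative flux changes:

% Choose K by the mean silhouette value, then cluster with K-means (Euclidean).
rng(1);
X = [randn(30,4); randn(30,4) + 3];       % toy data: two well-separated groups

bestK = 2; bestSil = -inf;
for K = 2:9
    idx = kmeans(X, K, 'Replicates', 10); % squared Euclidean distance by default
    s   = mean(silhouette(X, idx));       % mean silhouette value for this K
    if s > bestSil, bestSil = s; bestK = K; end
end
idx = kmeans(X, bestK, 'Replicates', 10); % final cluster assignment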
Fisher's exact test for enrichment analysis
We used a right-tailed Fisher's exact test to determine the enrichment in regulated reactions of a pathway. To this end, we defined a reaction as regulated if its sum of relative or nominal differences across time points was above its corresponding median of all considered reactions, else we considered the reaction to be unregulated. Therefore, we tested the association between regulated reactions and pathways for both relative and nominal differences. This test was conducted with a significance level of 0.05 through Matlab's 'fishertest' function. The resulting P-values were corrected for multiple hypotheses testing following the Benjamini–Hochberg procedure.
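A minimal MATLAB sketch of the per-pathway test and the Benjamini–Hochberg correction; the contingency counts are hypothetical, and fishertest is the Statistics and Machine Learning Toolbox function named in the text:

% Right-tailed Fisher's exact test per pathway, then Benjamini-Hochberg correction.
% Rows of each 2x2 table: in pathway / not in pathway;
% columns: regulated / unregulated (hypothetical counts).
tables = {[8 2; 40 100], [3 7; 45 95], [10 1; 38 101]};

n = numel(tables); p = zeros(n,1);
for k = 1:n
    [~, p(k)] = fishertest(tables{k}, 'Tail', 'right');
end

% Benjamini-Hochberg adjusted P-values.
[ps, order] = sort(p);                 % ascending raw P-values
q = ps .* n ./ (1:n)';                 % raw BH quotients
q = flipud(cummin(flipud(q)));         % enforce monotonicity from the largest P down
padj = zeros(n,1); padj(order) = min(q, 1);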
Parameterizing the objective function of TC-iReMet2 and estimating fractions of biomass yield
The analyzed time points differed in scale and therefore constituted a good case to test the assumption that a subsequent flux distribution depends on the previous one, thereby allowing for different magnitudes of physiological change. To model the dependency of flux distributions between time points, we assumed a steady decrease in the dependence as the interval between the points increases. More specifically, the weights depicted in Table 1 were used at each time point. The weighting of the dependency of the flux distributions of scenarios A and B on the prior ones is denoted by \(\beta\) and \(\gamma\), respectively. The difference of flux distributions between scenarios A and B at the analyzed time point was weighted by \(\alpha\).
Table 1 Weight values of TC-iReMet2's objective function and biomass fraction values at each analyzed time point.
To model the fraction of the biomass yield in scenario A to the biomass yield in scenario B at each specific time point, \(\varkappa _{t+1}\), we used four biomass proxy parameters (Supplementary Table S1): two diameter measurements of the A. thaliana rosette (diameter 1 and 2), the apparent planar leaf area, and the leaf perimeter (i.e. the circumference). The morphometric parameters apparent planar leaf area and perimeter underestimate biomass accumulation, since rosette leaves can slightly overlap. We contrasted these estimates by using the sum of diameters 1 and 2 as proxies of biomass accumulation; the diameter may slightly overestimate biomass because only the longest leaves are considered. Accordingly, we integrated the four biomass proxy parameters by giving equal weight to each of them (Supplementary Table S1). In detail, the morphometric parameter measurements were averaged separately for the wild type and the mutant. Ratios of mutant to wild type were calculated per time point based on the averages of biomass proxies. Finally, the resulting ratios across the four biomass proxies were averaged to obtain time-point-specific biomass fractions \(\varkappa _{t+1}\). We allowed a deviation \(\delta\) of +/- 0.05 from the biomass fraction \(\varkappa _{t+1}\), as these calculations are estimates. In the absence of biomass estimates, e.g. at day 1, we assumed a steady decrease of the biomass fraction \(\varkappa _{t+1}\) between day 0 and day 7. Therefore, the ratio at day 1 is a seventh closer to the biomass fraction \(\varkappa _{t+1}\) of day 7 than that of day 0, resulting in a fraction of 0.77. All used biomass fractions \(\varkappa _{t+1}\) are given in Table 1.
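The estimate at one time point reduces to a few lines (a sketch; proxiesWT and proxiesMut are assumed 4-element vectors of the averaged proxy measurements, not names from the published code):

kappa = mean(proxiesMut ./ proxiesWT); % equal weight to the four proxy ratios
lb = kappa - 0.05; ub = kappa + 0.05;  % allowed deviation delta = +/- 0.05
% Day 1 has no measurements; interpolate a seventh of the way from day 0 to day 7:
% kappa_day1 = kappa_day0 + (1/7)*(kappa_day7 - kappa_day0);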
Transcriptomics and metabolomics data used
The transcriptomics data have been published previously and are available from the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) through accession number GSE101111. The metabolomics data are obtained from Supplemental Table S1 of Beine-Golovchuk et al., 2018 (ref. 30). The morphometric data are obtained from Schmidt et al., 2013 (ref. 2). All data are included in the provided GitHub repository (https://github.com/tciremet2/TC-iReMet2) as well as in the Supplementary Tables to ensure easy access and reuse of the provided implementation.
Implementation and tools
For the implementation of TC-iReMet2 we used "MATLAB 2017b, The MathWorks"36 in conjunction with the Tomlab optimization environment37. Statistical analyses and the creation of figures were done with the R programming language38, R's ggplot2 library39, and "MATLAB 2017b, The MathWorks"36. The implementation is available at https://github.com/tciremet2/TC-iReMet2.
Code availability
The transcriptome data are available from the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) through accession number GSE101111. The metabolome data are previously published among the supplemental data of Beine-Golovchuk and co-authors2,3.
Ren, M. et al. Target of rapamycin signaling regulates metabolism, growth, and life span in Arabidopsis. Plant Cell 24, 4850–4874 (2012).
Schmidt, S., Dethloff, F., Beine-Golovchuk, O. & Kopka, J. The reil1 and reil2 proteins of Arabidopsis thaliana are required for leaf growth in the cold. Plant Physiol. 163, 1623–1639 (2013).
Fernie, A. R., Geigenberger, P. & Stitt, M. Flux an important, but neglected, component of functional genomics. Curr. Opin. Plant Biol. 8, 174–182 (2005).
Desvergne, B., Michalik, L. & Wahli, W. Transcriptional regulation of metabolism. Physiol. Rev. 86, 465–514 (2006).
Dieuaide-Noubhani, M. & Alonso, A. P. Application of metabolic flux analysis to plants. 1–17 (2014).
Basler, G., Fernie, A. R. & Nikoloski, Z. Advances in metabolic flux analysis toward genome-scale profiling of higher organisms. Biosci. Rep. 38, BSR20170224 (2018).
Kruger, N. J. & Ratcliffe, R. G. Fluxes through plant metabolic networks: measurements, predictions, insights and challenges. Biochem. J. 465, 27–38 (2015).
Orth, J. D., Thiele, I. & Palsson, B. Ø. What is flux balance analysis?. Nat. Biotechnol. 28, 245 (2010).
Ebrahim, A. et al. Multi-omic data integration enables discovery of hidden biological regularities. Nat. Commun. 7, 13091 (2016).
Töpfer, N., Kleessen, S. & Nikoloski, Z. Integration of metabolomics data into metabolic networks. Front. Plant Sci. 6, 49 (2015).
Vivek-Ananth, R. & Samal, A. Advances in the integration of transcriptional regulatory information into genome-scale metabolic models. Biosystems 147, 1–10 (2016).
Yurkovich, J. T. et al. Quantitative time-course metabolomics in human red blood cells reveal the temperature dependence of human metabolic networks. J. Biol. Chem. 292, 19556–19564 (2017).
Noor, E., Cherkaoui, S. & Sauer, U. Biological insights through omics data integration. Curr. Opin. Syst. Biol. 15, 39–47 (2019).
Pandey, V., Hadadi, N. & Hatzimanikatis, V. Enhanced flux prediction by integrating relative expression and relative metabolite abundance into thermodynamically consistent metabolic models. PLoS Comput. Biol. 15, e1007036 (2019).
Kleessen, S., Irgang, S., Klie, S., Giavalisco, P. & Nikoloski, Z. Integration of transcriptomics and metabolomics data specifies the metabolic response of Chlamydomonas to rapamycin treatment. Plant J. 81, 822–835 (2015).
Bordbar, A. et al. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics. Sci. Rep. 7, 46249 (2017).
Mahadevan, R., Edwards, J. S. & Doyle, F. J. III. Dynamic flux balance analysis of diauxic growth in Escherichia coli. Biophys. J. 83, 1331–1340 (2002).
Sajitz-Hermstein, M., Töpfer, N., Kleessen, S., Fernie, A. R. & Nikoloski, Z. iReMet-flux: constraint-based approach for integrating relative metabolite levels into a stoichiometric metabolic model. Bioinformatics 32, i755–i762 (2016).
Segre, D., Vitkup, D. & Church, G. M. Analysis of optimality in natural and perturbed metabolic networks. Proc. Nat. Acad. Sci. 99, 15112–15117 (2002).
Robaina Estévez, S. & Nikoloski, Z. Context-specific metabolic model extraction based on regularized least squares optimization. PLoS ONE 10, e0131875 (2015).
Colijn, C. et al. Interpreting expression data with metabolic flux models: predicting Mycobacterium tuberculosis mycolic acid production. PLoS Comput. Biol. 5, e1000489 (2009).
Thiele, I. & Palsson, B. Ø. A protocol for generating a high-quality genome-scale metabolic reconstruction. Nat. Protoc. 5, 93 (2010).
Arnold, A. & Nikoloski, Z. Bottom-up metabolic reconstruction of Arabidopsis and its application to determining the metabolic costs of enzyme production. Plant Physiol. 165, 1380–1391 (2014).
Alseekh, S. & Fernie, A. R. Metabolomics 20 years on: what have we learned and what hurdles remain?. Plant J. 94, 933–942 (2018).
Vowinckel, J. et al. Cost-effective generation of precise label-free quantitative proteomes in high-throughput by microlc and data-independent acquisition. Sci. Rep. 8, 1–10 (2018).
Van Lijsebettens, M. et al. An s18 ribosomal protein gene copy at the Arabidopsis pfl locus affects plant development by its specific expression in meristems. EMBO J. 13, 3378–3388 (1994).
Berná, G., Robles, P. & Micol, J. L. A mutational analysis of leaf morphogenesis in Arabidopsis thaliana. Genetics 152, 729–742 (1999).
Horiguchi, G. et al. Differential contributions of ribosomal protein genes to Arabidopsis thaliana leaf development. Plant J. 65, 724–736 (2011).
Boyes, D. C. et al. Growth stage-based phenotypic analysis of Arabidopsis: a model for high throughput functional genomics in plants. Plant Cell 13, 1499–1510 (2001).
Beine-Golovchuk, O. et al. Plant temperature acclimation and growth rely on cytosolic ribosome biogenesis factor homologs. Plant Physiol. 176, 2251–2276 (2018).
Gray, G. R. & Heath, D. A global reorganization of the metabolome in Arabidopsis during cold acclimation is revealed by metabolic fingerprinting. Physiol. Plant 124, 236–248 (2005).
Reznik, E. et al. Genome-scale architecture of small molecule regulatory networks and the fundamental trade-off between regulation and enzymatic activity. Cell Rep. 20, 2666–2677 (2017).
Hackett, S. R. et al. Systems-level analysis of mechanisms regulating yeast metabolic flux. Science 354, 1–17 (2016).
Savageau, M. A. Introduction to S-systems and the underlying power-law formalism. Math. Comput. Modell. 11, 546–551 (1988).
Gibon, Y. et al. A robot-based platform to measure multiple enzyme activities in Arabidopsis using a set of cycling assays: comparison of changes of enzyme activities and transcript levels during diurnal cycles and in prolonged darkness. Plant Cell 16, 3304–3325 (2004).
MATLAB. R2017b (2017).
Holmström, K. & Edvall, M. M. The tomlab optimization environment. In Modeling Languages in Mathematical Optimization 369–376 (Springer, Berlin, 2004).
Ihaka, R. & Gentleman, R. R: a language for data analysis and graphics. J. Comput. Gr. Stat. 5, 299–314 (1996).
Wickham, H. ggplot2. Wiley Interdiscip. Rev. Comput. Stat. 3, 180–185 (2011).
We acknowledge the longstanding support by Dr. L. Willmitzer, Dr. M. Stitt, and Dr. R. Bock (Max-Planck-Institute of Molecular Plant Physiology) and funding by the Max-Planck Society.
Open Access funding enabled and organized by Projekt DEAL.
These authors contributed equally: Christopher Pries and Zahra Razaghi-Moghadam.
Metabolomics Infrastructure Group, Max-Planck Institute of Molecular Plant Physiology, 14476, Potsdam, Germany
Christopher Pries & Joachim Kopka
Bioinformatics, Institute of Biochemistry and Biology, University of Potsdam, 14476, Potsdam, Germany
Zahra Razaghi-Moghadam & Zoran Nikoloski
Systems Biology and Mathematical Modeling Group, Max-Planck Institute of Molecular Plant Physiology, 14476, Potsdam, Germany
Christopher Pries
Zahra Razaghi-Moghadam
Joachim Kopka
Z.N. conceived the modelling approach and J.K. the experiment(s); C.P. and Z.R.-M. conducted the data analyses. All authors analysed the results. Z.N. and C.P. wrote the manuscript with contributions from J.K. and Z.R.-M. All authors reviewed the manuscript.
Correspondence to Zoran Nikoloski.
Supplementary Information 1.
Pries, C., Razaghi-Moghadam, Z., Kopka, J. et al. Integration of relative metabolomics and transcriptomics time-course data in a metabolic model pinpoints effects of ribosome biogenesis defects on Arabidopsis thaliana metabolism. Sci Rep 11, 4787 (2021). https://doi.org/10.1038/s41598-021-84114-y
Department Colloquium - Carolyn Gordon (Dartmouth College)
Time: 2:30 p.m. Place: Jeffery Hall 234
Speaker: Carolyn Gordon (Dartmouth College)
Title: Decoding geometry and topology from the Steklov spectrum of orbisurfaces.
Abstract: The Dirichlet-to-Neumann or "voltage-to-current" operator of, say, a surface $M$ with boundary is a linear map $C^\infty(\partial M)\to C^\infty(\partial M)$ that maps the Dirichlet boundary values of each harmonic function $f$ on $M$ to the Neumann boundary values of $f$. The spectrum of this operator is discrete and is called the Steklov spectrum. The Dirichlet-to-Neumann operator also generalizes to the setting of orbifolds, e.g., cones. We will address the extent to which the Steklov spectrum encodes the geometry and topology of the surface or orbifold and, in particular, whether it recognizes the presence of orbifold singularities such as cone points.
This is joint work with Teresa Arias-Marco, Emily Dryden, Asma Hassannezhad, Elizabeth Stanhope and Allie Ray.
Prof. Gordon is an expert in spectral geometry. She obtained her PhD from Washington University in 1979, then went to the Technion Institute and held positions at Lehigh University and Washington University before moving to Dartmouth, where she is currently the Benjamin Cheney Professor of Mathematics.
Prof. Gordon was awarded an AMS Centennial Fellowship in 1990, the MAA Chauvenet prize in 2001 and was the 2010 Noether Lecturer. In 2012, she became a fellow of both the AMS and the American Association for the Advancement of Science. In 2017, she was selected to be a fellow of the AWM in the inaugural class.
How many gold coins can you extract from the billionaire?
An eccentric billionaire plays a game with you. She has an urn with 100 gold coins.
Each time, you can take any number of coins from the urn.
If you take n coins, she will flip a fair coin.
If head, she adds one more gold coin to the urn.
If tail, nothing happens.
She stops flipping at the nth tail. After her flipping stops, you can take the coins from the urn again. If the urn is empty after her flipping stops, the game ends.
What is the maximum and minimum expected number of coins you can extract from the game?
mathematics combinatorics probability
smci
$\begingroup$ The rules are easy to misinterpret. So it seems that she's going to flip the coin until she gets n tails, and then her turn ends. Not the game ends. $\endgroup$
– Tony Ennis
$\begingroup$ I think all answers below that say "infinite" were based upon a different version of the rules. $\endgroup$
$\begingroup$ As far as I can tell no one misinterpreted the rules - as you said, she flips until she gets $n$ tails and then her "turn" ends, after which you get to go again. How else would everyone arrive at an infinite expectation if they thought the game ended when she was done flipping? $\endgroup$
– SQLnoob
$\begingroup$ @SQLnoob Consider if n = 5. We pocket 5 coins. The only way she puts 5 back in (making our profit infinite) is if she flips 5 heads before she flips 5 tails, on average. That's what I'll check when I get home. $\endgroup$
$\begingroup$ @TonyEnnis on average that's exactly what she'll do, given by the negative binomial distribution. $\endgroup$
I agree with the answer some others have given but I don't think their analysis is correct.
First of all,
the maximum and minimum expectation are the same. Taking n coins produces the same distribution of coins afterwards as taking one coin n times; the only way there could be a difference would be if you emptied the urn part-way through the sequence, but that can't happen.
So, let the expected number of coins you get, when the urn starts off with N coins in it, be f(N). We want f(100).
On each turn, the billionaire chooses a number m from a geometric distribution with parameter 1/2 (note: there are two different things "geometric distribution" can mean; we want the one that counts failures rather than trials) and the number of coins in the urn increases by m-1.
Note that
the expected number of coins after your turn is still N, but if the actual number ever hits zero the game ends, so it's possible for the expected number of coins you take to be finite (this is why I don't think the answers that say "when you play a turn, the expected number of coins doesn't change, so obviously your expectation is infinite" are being careful enough). Here's an example where perhaps the situation is clearer. We start with an urn containing 100 coins. At each turn, you take a coin from the urn; the billionaire flips a fair coin, and depending on the result either empties the urn or fills it up to double the number of coins it originally contained. Here it's still true that each turn leaves the number of coins in the urn unchanged in expectation, but the expected number of coins you get is very finite; in fact it's 2. Now, there is one important respect in which this isn't analogous to the situation here: I'm only letting you take one coin at a time. If you're allowed to take as many coins as are in the urn, then with probability 1/2 you get 100 coins, with probability 1/4 you get 200, with probability 1/8 you get 400, etc., and in fact your expected number is infinite. Similarly, if what happens is that at each turn you take just one coin and the billionaire (with equal probability) either does nothing or replaces it with two, then again your expected number of coins gained is infinite. So it's possible that there's some nice general theorem saying that provided you're allowed to take as many coins as are in the urn at each step, and provided the expected overall change in urn-count is zero, your expectation must be infinite. But the existing answers don't appeal to anything so carefully stated as this (still less prove it).
Starting with N coins in the urn,
the number of coins after your turn is $N-1+m$ with probability $2^{-(m+1)}$. So we have $f(0) = 0$ and $f(N) = 1 + f(N-1)/2 + f(N)/4 + f(N+1)/8 + f(N+2)/16 + \ldots$ when $N > 0$. Note that this means that if any $f(N)$ is infinite then all are. Let's suppose they're finite, and seek either a solution or a contradiction.
substituting N+1 for N in the recurrence and dividing by 2 we get $f(N+1)/2 = 1/2 + f(N)/4 + f(N+1)/8 + f(N+2)/16 + \cdots$ whose RHS has the same "tail" as that of the original recurrence. So, subtracting, $f(N) - f(N+1)/2 = 1/2 + f(N-1)/2$ when $N>0$.
if we write $g(N)=f(N+1)-f(N)$ this becomes $g(N-1)-g(N)=1$ when $N>0$. This in turn means that $g(N)=a-N$ for some constant $a$, whence $f(N)=b+aN-N^2/2$ for constants $a,b$; and since $f(0)=0$ we must have $b=0$ so $f(N)=aN-N^2/2$. It's easy to verify that any such $f$ does in fact satisfy the recurrence relation.
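(To spell that check out: under $P(m)=2^{-(m+1)}$ we have $\sum_{m\ge0} m\,P(m)=1$ and $\sum_{m\ge0} m^2 P(m)=3$, so substituting $f(N)=aN-N^2/2$ into the right-hand side gives $1+\sum_{m\ge0}P(m)\left[a(N-1+m)-\tfrac12 (N-1+m)^2\right] = 1+aN-\tfrac12\left[(N-1)^2+2(N-1)+3\right] = aN - N^2/2$, as required.)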
no such $f$ can actually be correct, because for large $N$ the values are negative! So we have a contradiction, and the only dubious assumption we made was that the values of $f$ are finite. So in fact they are not all finite; as mentioned above, this means that they are all infinite; in particular $f(100)$ is infinite, and your expected number of coins is infinite whatever values of $n$ you pick.
justhalf
Gareth McCaughan♦
$\begingroup$ "has a very finite expectation, namely 2. This is why I don't think it's enough to say," Note that this argument is also not correct: If the bills triple or disappear, the expected end capital is still infinite, even if the expected number of turns is finite. $\endgroup$
– Retudin
$\begingroup$ @Retudin: What argument are you talking about? Isn't Gareth's answer just saying "I don't think it's obvious that OP's game has an infinite expected number of steps"? $\endgroup$
– tehtmi
$\begingroup$ @tehtmi The 'example used as argument'. It is incorrect in the sense that it is incomplete (since nothing is taken out at each step, and finite duration and finite expectation are different aspects of the puzzle). Not ideal if used as argument that other analysis is incomplete. $\endgroup$
$\begingroup$ It's trivial to modify it to have something closer to the form of the game here. I'll make that modification since the difference seems to be distracting. $\endgroup$
– Gareth McCaughan ♦
$\begingroup$ Right, perhaps a more interesting framing of the question would be something like, "What is the probability that you win at least 1,000,000 coins?" in which case the answer would be only like 5%. You will most likely end up with less - but mathematically because your profit will sometimes effectively grow forever your expectation is infinite. $\endgroup$
The maximum expected number of coins you can get is infinite.
How can you get that much money?
Take all the coins from the urn. The billionaire will toss the coin until the 100th tail. This will happen (on average) after 200 throws: on average 100 heads and 100 tails. Thus the billionaire will put (on average) 100 new coins in the urn. You can now again take 100 coins -- or however many coins are in the urn -- and the game never ends.
melfnt
$\begingroup$ The billionaire stops her tossing at the nth tail, not at the nth toss. $\endgroup$
– Eric
$\begingroup$ @Eric either case, if you take 100 coins there will be 50 replacements on average. Or am I missing something? $\endgroup$
– melfnt
$\begingroup$ @Eric ok, that's what I was missing: the billionaire can toss the coin more than 100 times, can't she? $\endgroup$
$\begingroup$ Yes, she can. There's no limit. $\endgroup$
$\begingroup$ @Eric thank you for pointing that out, I misunderstood the question. I corrected my answer accordingly, but I'm not sure about the correctness anymore $\endgroup$
Attempt at a bit more rigor in an answer.
Consider first the strategy of taking 1 coin each turn:
What is the expected number of turns until the jar ends up with one fewer coin (after refilling)? Let this number be denoted $E_1$ and let $E_n$ be the expected number of turns before the jar ends up with $n$ fewer coins. Then $E_n = n E_1$ and $E_1 = 1 + \frac{1}{2}\cdot0 + \frac{1}{4}\cdot E_1 + \frac{1}{8}\cdot E_2 + \dots = 1 + \frac{E_1}{2} \sum_{k=1}^\infty{\frac{k}{2^k}}=1 + E_1$. This is a contradiction if $E_1$ is finite, so $E_1$ is infinite and we expect an infinite number of turns and thus infinite earnings.
For other strategies:
Taking $n$ coins from the jar is equivalent to committing to taking one coin $n$ times in a row. Either way, the billionaire flips coins until she has gotten $n$ tails and refills the jar accordingly. So, there can be no other result.
We can think of this problem as a kind of one-dimensional random walk. Like a balanced random walk, the expected number of steps to reach a point is infinite. However, the probability of eventually reaching 0 coins in the jar is still 100% -- this can be thought of as a case of Gambler's ruin. (In the model, the gambler's bankroll is the jar, although in the actual game our profit is determined by the number of steps.) (This does not contradict the infinite expected number of steps.)
tehtmi
$\begingroup$ The question says "billionaire" but interestingly every answer seems to assume an "infinillionaire". $\endgroup$
– WhatsUp
$\begingroup$ @WhatsUp: Obviously if the billionaire can run out of coins, there is a finite result (which we could calculate given that number). But, given the "math" tag, I think the word "billionaire" is irrelevant: the question says she refills the jar if she flips heads, and that's that. $\endgroup$
The value of $n$ is important as follows:
It is completely irrelevant - every coin removed from the urn results in a series of tosses of the coin until a tail occurs, where each toss has 50% probability of adding a coin to the urn, and a 50% chance of moving on to the effect of the next coin removed.
Each coin removed has an expectation of adding $\frac12 + \frac14 + \frac18 + ... + \frac1{2^i} + ...$ additional coins, regardless of the value of $n$ chosen at each step.
The value of coins in the pot at the start of each turn will always be equal to
$100 + h - t$, where $h$ is the number of heads tossed so far, and $t$ is the number of tails, which simulates a simple random walk.
The game ends
the first time that $t = h + 100$. Although it could seem that the game could continue for ever, it will terminate with probability 1, and we could expect to calculate a finite expected value...
Currently this is a partial answer, as I don't currently have time to work out, and my google-fu failed to find the formula for
a simple random walk crossing a specified threshold.
Update: I later realised I abandoned a further edit and forgot to undelete this answer after fixing the misunderstanding that made me delete it initially... Gareth McCaughan♦ has since posted a more rigorous answer that demonstrates that
the expectation is in fact infinite.
Despite this,
although it could seem that the game could continue for ever, it will terminate with probability 1 - a random walk eventually visits all points an infinite number of times.
Also, for most games,
the actual result will be well within the billionaire's means - after about 10000 coin flips (not calculated nor correct formula determined - a rough approximation by "law of large numbers" where deviation is of the order of the square root of the number of samples), the number of coins in the pot would be expected to differ from the original number by around 100, and in cases where it differs in the negative direction, the game ends. The mathematical expectation is infinite because of a small minority of games that add arbitrarily large numbers of coins to the pot before eventually removing as many as they added (which will eventually occur with probability 1!).
For a slightly different analysis of why this is the expectation, consider what happens first between two events that must have equal probability:
The pot becomes empty for the first time
The pot doubles in value for the first time
The probability of
staying strictly between 0 and 200 coins forever is zero. On average we would expect to exceed those limits after something of the order of 10000 coins have been taken out of the pot (I'll call this $m$ later). Thus there is a 50:50 chance that we'll either double the pot or end the game.
It is just as though we started the game over, but with 200 coins in the pot, and the same analysis applies, but this time we expect to get 4 times as many coins out of the pot before it either doubles or becomes empty... each time we double the pot rather than ending the game, the next phase of the game will, on average, take 4 times as long, and get us 4 times the amount of money.
So our expected winnings by this analysis are:
$m + 4m(\frac12) + 16m(\frac14) + ... + 2^{2i}m\frac1{2^i} + ...$
which is the same as
$m(1 + 2 + 4 +... + 2^i + ...)$
which is infinite.
However, taking into account that we're playing against a billionaire
we should ignore all terms in that infinite series that exceed the billionaire's wealth. Even assuming a "gold coin billionaire" who has over a billion gold coins, they'll have to stop after adding something of the order of $km$, where k is of the order of 100000 or so. To get an easily calculable result, assume $m = 10000, k < 2^{17}$
so we sum only the terms up to the point we assume the billionaire will run out of money:
$m + 4m(\frac12) + 16m(\frac14) + ... + 2^{14}m\frac1{2^7}$
$m(1 + 2 + 4 + 8 + 16 + 32 + 64 + 128) = 255m$
As such, our expected winnings are around 2 or 3 million coins (although in practice, almost all games will end with winnings much less than 1 million coins).
$\begingroup$ So although the expectation is infinite, if you want to become a millionaire (take away a million coins) by playing this game, your chance is not that great. $\endgroup$
$\begingroup$ @Eric ... and it just occurs as you mention this that if one asks "what is a fair price to play the billionaire in order to play the game", it's a direct analogue of the game typically associated with the St. Petersburg paradox. $\endgroup$
$\begingroup$ @Eric I added some calculations using some gross simplifications... I'd be fascinated to see how close that is if someone has the time or inclination to do a mathematically more rigorous calculation, but not fascinated enough to do such myself! $\endgroup$
They are both infinite. The question seems to imply that different strategies would lead to different expected outcomes, but for any $n$ you take, the billionaire is expected to add $n$ coins back into the urn.
The expected number of coin flips until she reaches the $n$th tail is $n/p$, where $p$ is the probability of tails (in this case, 0.5). So for any $n$ you choose, she's expected to flip $n/0.5 = 2n$ times, which is expected to add $2n/2 = n$ coins back into the urn.
SQLnoob
Between 500K and 500M gold coins, depending on the exact net worth of the billionaire
As others have explained, on average one gold coin is replaced for every coin you take. So the expectation is that the game goes on as long as the billionaire can continue replacing the coins.
A one-ounce gold coin sells for around 2000 US dollars according to this website I found with a Google search. A "billionaire" is a person whose net worth is at least one billion US dollars, so our billionaire can afford at least 500K gold coins. We can feel confident that this billionaire's net worth is less than one trillion US dollars (else she would be a "trillionaire"), so she cannot afford 500M gold coins.
(The total amount of gold in the world is 244K metric tons, which is equivalent to 8.6B one-ounce gold coins. So playing this game to completion would not exhaust the world's gold supply, though it would doubtless cause prices to rise.)
The minimum number of coins you could extract would be zero, by choosing not to take any. If you do not take any coins, the game would not end; but presumably you can go on with your life while the game is running, and eventually either you or the billionaire will die, ending the game.
Dan B
$\begingroup$ I don't think this is how the game works. If you take n coins, only then will the millionairess flip coins, and even then she'll only pay off about 1/2 coin per flip. $\endgroup$
This answer draws freely from other answers, in particular, @Gareth's and @Steve's.
As pointed out by others our strategy
is irrelevant, the outcome depends solely on the billionaire's coin tosses.
Let's assume that the billionaire gets cold feet once there are a total of M (M>100) coins in the pot. We can give a closed form solution for the expected time $E_{100,M}$ until either this happens or the game ends regularly. This provides a lower bound for the original question, as we will walk away with at least an expected $\frac{E_{100,M}+100}{2}$ coins (if the game ended regularly, more if the billionaire chickened out), and this number can only increase if we revert to the original problem $E_{100,\infty}$ (infinite billionaire, no threshold M).
To map the game to a normal on-grid random walk we identify grid points with the number of coins in the pot plus the one we just took. (We always take a single coin.) Every heads toss is a hop to the right, every tails a hop to the left (by means of the coin we took to trigger the tosses). After the tails toss we transfer the coin we are holding to our purse and take a new one from the pot if possible.
Let $E_{k,M}$ be expected time to end from k coins in the jar. Then $E_{0,M}=E_{M,M}=0$ and $E_{k,M} = 1 + \frac 1 2 (E_{k-1,M}+E_{k+1,M})$. We can easily verify that this is solved by $E_{k,M} = k(M-k)$
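(Spelling the verification out: $1+\tfrac12\big[(k-1)(M-k+1)+(k+1)(M-k-1)\big]=1+\tfrac12\big[2k(M-k)-2\big]=k(M-k)$, and the boundary values $E_{0,M}=E_{M,M}=0$ hold trivially.)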
Remember that for any $M$ this is a lower bound of $E_{100,\infty}$. As $E_{100,M}$ is unbounded it follows that $E_{100,\infty}$ cannot be finite.
How much intuition can we draw from the closed formula? Obviously, it is unfortunate that it lumps together regular and billionaire-gets-nervous endings.
Let us at least determine their relative frequencies. This can be done similarly to the approach used above:
Let $p_{k,M}$ be the probability for an irregular end. Then, simply, $p_{k,M} = \frac 1 2 (p_{k-1,M}+p_{k+1,M})$. This is therefore just a linear ramp $p_{k,M} = \frac k M$ with endpoints $p_{0,M} = 0$ and $p_{M,M} = 1$.
loopy walt
$\begingroup$ Is our strategy completely irrelevant? I mean the number of coins you take each time doesn't effect the expectation, but does it effect the variance? If our goal is to get away with say 10000 coins, does the probability depend on the strategy? $\endgroup$
$\begingroup$ @Eric One can completely ignore what we are doing. Everything only depends on the billionaire's tosses. The game ends regularly the moment they have tossed 100 more tails than heads. What we earn is (total number of tosses + 100) / 2, also completely independent of what we are actually doing. $\endgroup$
– loopy walt
$\begingroup$ The expectation is infinite even if the billionaire starts with only 1 coin in the urn. So your probability of winning a certain amount of gold coins depends on the initial number of coins in the urn, not your strategy. $\endgroup$
Well, mean is infinite (easy enough to see):
From starting 100 you have the same probability to first reach 0 or 200. From 200, you have again the same probability to first reach 0 or 400. Etc. The amount of money you end up getting is the area under the random walk; you get a sequence like $100\,s_1 + 200\cdot 0.5\,s_2 + 400\cdot 0.25\,s_3$ etc., where $s_1,\ldots$ is the expected number of steps to first reach 0 from that value. No matter how many steps that is, it is obvious that $0 < s_1 \le s_2$, so even if we replace all steps with $s_1$ we get infinite money.
Median is finite though.
In code I slightly simplified the problem -
instead of waiting on n tails, I performed n coin flips and put 2 coins back for each head. On average this gives the same number of coins back in the urn; the main difference is that I can never get an odd number of coins. Code (Matlab) is below:
function [n,m,k] = so_coins(n)
% n = number of coins left in the urn.
m = 0; % number of coins taken so far
k = 1; % steps performed - end condition.
while (n > 0 && k < 1000000) % stop after 1m steps of taking coins out
    r = rand(n,1);      % one simplified flip per coin taken
    m = m + n;          % pocket all n coins currently in the urn
    k = k + 1;
    n = 2*sum(r > 0.5); % proclaim r>0.5 heads, r<0.5 tails; 2 coins back per head
end
end
I ran this 1k times and it worked well enough. You should hope you never get a very long run with huge n; the code is very slow.
Minimum number of coins was 840. Median was 22500. Mean was 10^7. Maximum was 2.6*10^9 (it stopped in all cases). I was unlucky and didn't hit the "infinite streak" that would push mean to infinity. Maybe better luck next time :D
Zizy Archer
The minimum number of coins is 100: Take all 100 coins from the urn, and the billionaire hits the completely improbable (given that it's a fair coin) string of no heads and 100 tails on flipping the coin. There is no maximum, because the billionaire can hit a completely improbable string of any number of heads before hitting that 100th tail. Regardless of how many coins are in the urn at the end of any turn, you take all of them.
Jeff Zeitlin
$\begingroup$ OP requests min/max expected coins, based on different choices of n, not bare minimums/maximums $\endgroup$
– George Menoutis
Fractals using just modulo operation
Let us calculate the remainder after division of 27 by 10.
27 \equiv 7 \pmod{10}
We have 7. So let's calculate the remainder after division of 27 by 7.
27 \equiv 6 \pmod{7}
Ok, so let us continue with 6 as the divisor…
27 \equiv 3 \pmod{6}
…and then with 3 as the divisor:
27 \equiv 0 \pmod{3}
Good! We have finally reached 0 which means we are unable to continue the procedure.
Let's make a function that counts the modulo operations we need to perform until we finally arrive at 0.
So we find some remainder r_{1} after division of some a by some b, then we find some remainder r_{2} after division of a by r_{1} and we repeat the procedure until we find such index i that r_{i} = 0.
Therefore, let M(a, b) = i-1
for a, b \in \mathbb{N}, b \neq 0
(I like to call it "modulity of a by b", thence M)
For our example: M(27, 10) = 3.
Notice that M(a, b) = 0 \Leftrightarrow b|a (this is why i-1 feels nicer to me than just i)
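For concreteness, here is a minimal Matlab sketch of this definition (my naming, nothing standard):

function m = modulity(a, b)
% M(a,b): repeatedly replace the divisor by the remainder of a divided by it,
% counting the steps until the remainder reaches 0; M(a,b) = 0 iff b divides a.
r = mod(a, b);
m = 0;
while r ~= 0
    m = m + 1;
    r = mod(a, r);
end
end

modulity(27, 10) returns 3, matching the example above.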
Recall what happens if we put a white pixel at such (x, y) that y|x:
This is also the plot of M(x, y) = 0.
(the image is reflected over x and y axes for aesthetic reasons. (0, 0) is exactly in the center)
What we see here is the common divisor plot that's already been studied extensively by prime number researchers.
Now here's where things start getting interesting:
What if we put a pixel at such (x, y) that M(x, y) = 1?
Looks almost like the divisor plot… but take a closer look at the rays. It's like copies of the divisor plot are growing on each of the original lines!
How about M(x, y) = 2?
Copies are growing on the copies!
Note that I do not overlay any of the images, I just follow this single equation.
Now here is my favorite.
Let us determine luminosity (0 – 255) of a pixel at (x, y) by the following equation:
\frac{255}{M(x,y) + 1}
(it is therefore full white whenever y divides x, half-white if M(x, y) = 1 and so on)
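Rendering that is just a double loop over the grid (a sketch; the resolution and orientation are my choices):

N = 512;
L = zeros(N, N);
for x = 1:N
    for y = 1:N
        L(y, x) = 255 / (modulity(x, y) + 1); % full white whenever y | x
    end
end
imagesc(L); colormap(gray); axis xy; axis image;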
The full resolution version is around ~35 mb so I couldn't upload it here (I totally recommend seeing this in 1:1):
https://drive.google.com/file/d/0B_gBQSJQBKcjakVSZG1KUVVoTmM/view?usp=sharing
What strikes me the most is that some black stripes appear in the gray area and they most often represent prime number locations.
The above plot with and without prime numbers marked with red stripes:
http://i.imgur.com/E9YIIbd.png
http://i.imgur.com/vDgkT8j.png
The above plot considering only prime x:
Formula: \frac{255}{M(p_{x},y)} (note I do not add 1 to the denominator because it would be full white only at y equal to 1 or the prime. Therefore, the pixel is fully white when p_{x} mod y = 1)
Full 1:1 resolution: https://drive.google.com/file/d/0B_gBQSJQBKcjTWMzc3ZHWmxERjA/view?usp=sharing
Interestingly, these modulities form a divisor plot of their own.
Notice that for M(a, b) = i-1, the remainder r_{i-1} is either 1 or a divisor of a other than 1 and a itself.
I put a white pixel at such (x, y) that for M(x, y) = i – 1, it is true that r_{i-1}\neq 1 \wedge r_{i-1} | x (the one before last iteration results in a remainder that divides x and is not 1 (the uninteresting case))
http://i.imgur.com/I85rlH5.png
It is worth noting that the growth of M(a, b) is rather slow, and so if we could discover a rule describing a suitable b that most often leads to encountering a proper factor of a, we would have a primality test that works really fast (it would be O(M(a, b)), because we would just need to calculate this r_{i-1}).
Think of M'(a, b) with and without primes marked:
http://i.imgur.com/gE0Bvwg.png
http://i.imgur.com/vb5YxVP.png
Plot of M(x, 11), enlarged 5 times vertically:
http://i.imgur.com/K2ghJqe.png
Can't notice any periodicity in the first 1920 values even though it's just 11.
For comparison, plot of x mod 11 (1:1 scale):
http://i.imgur.com/KM6SCF3.png
As has been pointed out in the comments, subsequent iterations of M(a, b) look very much like the Euclidean algorithm for finding the greatest common divisor using repeated modulo. A strikingly similar result can be obtained if for (x, y) we plot the number of steps of gcd(x, y):
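Counting those steps is the direct analogue of the modulity sketch above:

function k = gcd_steps(a, b)
% Number of iterations of the Euclidean algorithm for gcd(a, b).
k = 0;
while b ~= 0
    [a, b] = deal(b, mod(a, b));
    k = k + 1;
end
end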
I've also found a similar picture on Wikipedia:
This is basically the plot of algorithmic efficiency of gcd.
Somebody even drew a density plot here on stackexchange.
The primes, however, are not so clearly visible in GCD plots. Overall, they seem more orderly and stripes do not align vertically like they do when we use M(a, b) instead.
Here's a convenient comparative animation between GCD timer (complexity plot) and my Modulity function (M(x, y)). Best viewed in 1:1 zoom. M(x, y) appears to be different in nature from Euclid's GCD algorithm.
Where is M(a, b) used in mathematics?
Is it already named somehow?
How could one estimate the growth of M(a, b) in relation to both a and b, or with just a increasing?
What interesting properties could M(a, b) possibly have and could it be of any significance to number theory?
\newcommand{\Natural}{\mathbb{N}}
\newcommand{\Integer}{\mathbb{Z}}
\newcommand{\Rational}{\mathbb{Q}}
\newcommand{\Real}{\mathbb{R}}
\newcommand{\abs}[1]{\left\vert#1\right\vert}
\newcommand{\paren}[1]{\left(#1\right)}
\newcommand{\brac}[1]{\left[#1\right]}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\seq}[1]{\left<#1\right>}
\newcommand{\floor}[1]{\left\lfloor#1\right\rfloor}
\DeclareMathOperator{\GCD}{GCD}
\DeclareMathOperator{\TL}{TL}
Here are some rediscovered (but fairly old) connections between the analysis
of Euclid's GCD algorithm, the Farey series dissection of the continuum,
and continued fractions. Some of these topics are treated in chs. 3 and 10 of (1) by Hardy and Wright. A long time ago the author of this response asked this question in the newsgroup sci.math, and this is a collected summary with some new findings, after the main responder's analysis, that of Gerry Myerson. Additional contributions and thanks go to Dave L. Renfro, James Waldby, Paris Pamfilos, Robert Israel, Herman Rubin and Joe Riel. References may be a little mangled.
When studying the asymptotic density distribution of \phi(n)/n or of \phi(n) versus n, both graphs display certain trend lines around which many positive integers accumulate. On the \phi(n)/n graph they are horizontal, while on the \phi(n) versus n graph they have varying slopes r, with 0\le r \le 1. This density distribution has been studied extensively. Schoenberg, for example in [9, pp. 193-194] showed that \phi(n)/n has a continuous distribution function D(r) (also in [4, p.96]). Later in [10, p.237] he proved that under fairly general conditions D(r) exists for a multiplicative function, leading to necessary and sufficient conditions for the existence and continuity of such a D for an additive arithmetical function. Erdos showed ([3, p. 96]) that D(r) for \phi(n)/n is purely singular, hence trend lines rx exist for almost any r\in [0,1]. Weingartner in [11, p. 2680] and Erdos in [2, p. 527] derived explicit bounds for the number of integers such that \phi(n)/n\le r. Here we first briefly try to explain those trend lines and then we present a theorem which suggests that they follow certain fractal patterns related to the Farey series \mathfrak{F}_n of order n, which exist in a graph which times the asymptotic performance of the Euclidian GCD algorithm. Because the timing of the Euclidian GCD algorithm is involved, this theorem can ultimately be used to speed-up factorization by trial and error searches. Additionally, these trend lines are also connected with a certain function which plays a role in tetration.
To avoid complicated notation, we always notate primes with p, q and p_i, q_i, with i\in I=\set{1,2,\ldots,t}, t\in\Natural an appropriate finite index set of naturals. Naturals will be notated with m, n, k, etc. Reals will be notated with x, y, r, a, \epsilon, etc. \floor{x} denotes the familiar floor function. Functions will be notated with f, g, etc. The Greatest Common Divisor of m and n will be denoted with \GCD(m,n). When we talk about the \GCD algorithm, we explicitly mean the Euclidean algorithm for \GCD(m,n).
The Trend Lines in the Asymptotic Distribution of \phi(n)
For an introduction to Euler's \phi function and for some of its basic properties the reader may consult [5, p. 52], [6, p.~20] and [1, p.25]. Briefly, \phi(n) counts the number of positive integers k with k\le n and \GCD(n,k)=1. For calculations with \phi, the author used the Maple package (8), although for n\in\Natural one can use the well known identity,
\begin{equation}\label{eq31}
\phi(n)=n\cdot \prod_{p|n}\paren{1-\frac{1}{p}}
\end{equation}
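As a quick numerical check of this identity (a Matlab one-liner of my own, valid for n\ge 2; not from the Maple package (8) used here):

phi = @(n) round(n * prod(1 - 1./unique(factor(n))));
phi(100) % 40, since 100*(1/2)*(4/5) = 40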
The graph of \phi(n) as a function of n\in\set{1,\ldots,10000}, showing some of the trend lines, is shown next in Fig. 1.
Figure 1: \phi(n) as a function of n, for n\in\set{1,\ldots,10000}
Trying to fit the trend lines in figure 1 experimentally, it looks as though the lines might be related to specific functions.
For example the uppermost line looks like the line f(x)\sim x-1, since \phi(p)=p-1 for p prime. The second major trend looks like the line f(x)\sim x/2 although this is far from certain. The next major trends look like they are f(x)\sim x/3 and f(x)\sim 2x/3.
Although the uppermost line contains primes, it also contains other numbers, such as for example n=pq with p and q both large primes. In fact, if n is any number all of whose prime factors are large then \phi(n) will be relatively close to n. Let's see what happens exactly.
Theorem 3.1: The non-prime trend lines on the graph of \phi(n) versus n follow the functions f(r,s,x)=rx/s, with r=\prod_{p|n}(p-1)/(q-1), s=\prod_{p|n}p/q, where q is the largest prime q|n.
Proof: We first give some examples. For n=2^kq, q>2, \phi(n)=n(1-1/2)(1-1/q)=(n/2)(q-1)/q. For large q, t=(q-1)/q\sim 1, hence also for large n, \phi(n)\sim n/2, consequently these numbers follow approximately the trend f(1,2,x)=x/2.
For n=3^kq, q>3, \phi(n)=n(1-1/3)(1-1/q)=(2n/3)(q-1)/q. For large q, again t=(q-1)/q\sim 1, hence also for large n, \phi(n)\sim 2 n /3, consequently these numbers follow the trend f(2,3,x)=2x/3.
Generalizing, for n=p^kq, q>p, \phi(n)=n(1-1/p)(1-1/q)=n\frac{p-1}{p}\cdot\frac{q-1}{q}. For large q, again t=(q-1)/q\sim 1, hence also for large n, \phi(n)\sim (p-1) n /p; consequently these numbers follow the trend f(p-1,p,x)=(p-1)x/p.
For n=2^k3^lq, q>3, \phi(n)=n(1-1/2)(1-1/3)(1-1/q)=(n/2)(2/3)(q-1)/q=(n/3)(q-1)/q, hence again for large n, \phi(n) \sim n/3, consequently these numbers follow the trend f(1,3,x)=x/3.
For n=2^k5^lq, q>5, \phi(n)=n(1-1/2)(1-1/5)(1-1/q)=(2n/5)(q-1)/q, hence the trend is f(2,5,x)=2x/5.
For n=3^k5^lq, the trend will be f(8,15,x)=8x/15.
For n=2^k3^l5^mq, the trend will be f(4,15,x)=4x/15.
Generalizing, for n=\prod p_i^{k_i}q, q>p_i the trend will be:
f\paren{\prod_i (p_i-1),\prod_i p_i,x}=
\prod_i \paren{1-\frac{1}{p_i}}x
and the theorem follows.
In figure 2 we present the graph of \phi(n) along with some trend lines \TL:
\begin{equation*}
\begin{split}
\TL&=\set{x-1,x/2,2x/3,4x/5}\\
&\cup \set{x/3,6x/7,2x/5}\\
&\cup \set{3x/7,8x/15,4x/7,4x/15,24x/35}\\
&\cup\set{2x/7,12x/35,16x/35,8x/35}
\end{split}
\end{equation*}
Figure 2: \phi(n) combined with the trend lines f_k(x)\in \TL, k\in\set{1,\ldots,16}
The trend lines correspond to n having the following factorizations:
\begin{equation*}
\begin{split}
F &\in\set{q,2^kq,3^kq,5^kq}\\
&\cup\set{2^{k_1}3^{k_2}q,7^kq,2^{k_1}5^{k_2}q}\\
&\cup\set{2^{k_1}7^{k_2}q,3^{k_1}5^{k_2}q,3^{k_1}7^{k_2}q,2^{k_1}3^{k_2}5^{k_3}q,5^{k_1}7^{k_2}q}\\
&\cup\set{2^{k_1}3^{k_2}7^{k_3}q,2^{k_1}5^{k_2}7^{k_3}q,3^{k_1}5^{k_2}7^{k_3}q,2^{k_1}3^{k_2}5^{k_3}7^{k_4}q}
\end{split}
\end{equation*}
We now proceed to investigate the nature of these trend lines. The Farey series \mathfrak{F}_n of order n\ge 2 ([5, p.23]), is the ascending series of irreducible fractions between 0 and 1 whose denominators do not exceed n. Thus, h/k\in \mathfrak{F}_n, if 0\le h \le k\le n, \GCD(h,k)=1. Individual terms of a specific Farey series of order n\ge 2 are indexed by m\ge 1, with the first term being 0 and the last 1. Maple code for creating the Farey series of order n is given in the Appendix.
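In place of the appendix's Maple code, here is a Matlab stand-in of my own that builds \mathfrak{F}_n as the sorted irreducible fractions:

function F = farey(n)
% Farey series of order n: irreducible fractions h/k with 0 <= h <= k <= n.
F = [0 1];
for k = 2:n
    for h = 1:k-1
        if gcd(h, k) == 1
            F(end+1) = h/k; %#ok<AGROW>
        end
    end
end
F = sort(F);
end

numel(farey(5)) returns 11, consistent with Theorem 3.2 below: 1+\sum_{m=1}^{5}\phi(m)=11.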
Theorem 3.2: The Farey series \mathfrak{F}_n of order n satisfies the following identities:
\begin{equation*}
\begin{split}
\abs{\mathfrak{F}_n}&=\abs{\mathfrak{F}_{n-1}}+\phi(n)\\
\abs{\mathfrak{F}_n}&=1+\sum_{m=1}^n\phi(m)
\end{split}
\end{equation*}
Proof: By induction on n. \mathfrak{F}_2=\set{0,1/2,1}, hence \abs{\mathfrak{F}_2}=3, since there are 3 irreducible fractions of order n=2. Note that the irreducible fractions of order n are necessarily equal to the irreducible fractions of order n-1 plus \abs{\set{k/n\colon k\le n,\GCD(k,n)=1}}=\phi(n), and the first identity follows. The second identity follows as an immediate consequence of the first identity and induction, and the theorem follows.
In [5, p.23], we find the following theorem:
Theorem 3.3: If 0<\xi<1 is any real number, and n a positive integer, then there is an irreducible fraction h/k such that
0<k\le n, \qquad \abs{\xi-\frac{h}{k}}\le \frac{1}{k(n+1)}.
We can now reformulate Theorem 3.1, which follows as a consequence of Theorem 3.3.
Corollary 3.4: The trend lines on the graph of \phi(n) versus n follow the functions g(n,m,x)=\mathfrak{F}_{n,m}\cdot x.
Proof: Note that for large n=p^k, \phi(n)/n\to 1. For large n=\prod_i p_i^{k_i}, \phi(n)/n\to 1/\zeta(1)=0. Putting \xi= \phi(n)/n, Theorem 3.3 guarantees the existence of an irreducible fraction h/k, and some n, such that \phi(n)/n is close to a member h/k of \mathfrak{F}_n and the result follows.
The trend lines on the graph of \phi(n) versus n are completely (and uniquely) characterized by either description. For example, consider the factorizations 2^k3^l5^mq, with q>5 and k,l,m\ge0. Then if n=2^kq, \phi(n)/n\sim 1/2, if n=3^lq, \phi(n)/n\sim 2/3, if n=5^mq, \phi(n)/n \sim 4/5, if n=2^k3^lq, \phi(n)/n\sim 1/3, if n=2^k5^mq, \phi(n)/n\sim 2/5, if n=3^l5^mq, \phi(n)/n\sim 8/15\sim 2/3, if n=2^k3^l5^mq, \phi(n)/n\sim 4/15\sim 1/3, all of which are close to members of \mathfrak{F}_{5}=\set{0,1/5,1/4,1/3,2/5,1/2,3/5,2/3,3/4,4/5,1}.
In figure 3 we present the graph of \phi(n) along with g(10,m,x).
Figure 3: \phi(n) combined with the functions g(10,m,x)
The Asymptotic \epsilon-Density of the Trend Lines of \phi(n)
We will need the following counting theorem.
Theorem 4.1: If i\in\Natural, L=\set{a_1,a_2,\ldots,a_i} a set of distinct numbers a_i\in\Natural and N\in\Natural, then the number of numbers n\le N of the form n=\prod_i a_i^{k_i} for some k_i\in\Natural, is given by S(L,N), where,
S(L,N)=
\begin{cases}
\floor{\log_{a_{\abs{L}}}(N)} &\text{, if $\abs{L}=1$}\\
\sum\limits_{k=1}^{\floor{\log_{a_{\abs{L}}}(N)}}S\paren{L\setminus\set{a_{\abs{L}}},\floor{\frac{N}{a_{\abs{L}}^k}}}&\text{, otherwise}
\end{cases}
Proof: We use induction on i=\abs{L}. When i=1, the number of numbers n\le N of the form n=a^k is exactly \floor{\log_a(N)}. Now assume that the expression S(L,N) gives the number of n\le N, with n=\prod_i a_i^{k_i}, when i=\abs{L}>1. We are interested in counting the number of m\le N, with m=\prod_i a_i^{k_i}a_{i+1}^{k_{i+1}}. If we divide N by any power a_{i+1}^k, we get numbers of the form n, which we have already counted. The highest power of a_{i+1} we can divide N by, is a_{i+1}^k, with k=\floor{\log_{a_{i+1}}(N)}, hence the total number of such m is exactly,
\sum_{k=1}^{\floor{\log_{a_{i+1}}(N)}}S(L\setminus\set{a_{i+1}},\floor{N/a_{i+1}^k})=S(L\cup \set{a_{i+1}},N)
so the expression is valid for i+1=\abs{L\cup \set{a_{i+1}}} and the theorem follows.
In the Appendix we provide two Maple procedures which can count the number of such n. The first using brute force, the second using Theorem 4.1. The results are identical.
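For reference, a Matlab transcription of the Theorem 4.1 recursion (my sketch, not the appendix code; note that floor(log(N)/log(a)) can misround for exact powers in floating point):

function s = S(L, N)
% Count n <= N of the form prod_i L(i)^{k_i} with every k_i >= 1.
a = L(end);
kmax = floor(log(N)/log(a));
if numel(L) == 1
    s = kmax;
else
    s = 0;
    for k = 1:kmax
        s = s + S(L(1:end-1), floor(N/a^k));
    end
end
end

S([3], 10000) gives 8 and S([2 3 7], 10000) gives 43, matching the counts quoted later in the text.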
We define now the asymptotic \epsilon-density of a line f(x)=rx, with 0<r<1.
Definition 4.2: Given r\in\Real, 0<r<1 and N\in\Natural, then the asymptotic \epsilon-density of the line f(x)=rx at n is,
D_{\epsilon}(N,f(x))=\frac{\abs{\set{n\le N\colon \abs{1-\frac{\phi(n)}{f(n)}}\le\epsilon}}}{N}
Briefly, D_{\epsilon}(N,f(x)) counts the distribution of positive integers inside a strip of width 2\epsilon centered around the line f(x), from 1 to N. If one wishes, one could alternatively interpret \epsilon-density as the difference \abs{D(r_1)-D(r_2)}, where D(r) is Schoenberg's finite distribution function on \phi(n) versus n and \epsilon=\abs{r_1-r_2}/2. Following Definition 4.2, we now have,
Theorem 4.3: The trend lines of \phi are the lines of maximal \epsilon-density.
Proof: Let n\le N and \epsilon> 0 be given. Then, by the Fundamental theorem of Arithmetic (FTA) n has a unique factorization n=\prod_i p_i^{k_i}q^k, for some p_i and q, with q>p_i. Consider the set K_{f(x)}=\set{n\le N\colon \abs{1-\phi(n)/f(n)}\le\epsilon}. If 1/q\le\epsilon then, \abs{1-\phi(n)/f(\prod_i (p_i-1),\prod_i p_i,n)}=\abs{1-(1-1/q)}=1/q\le\epsilon, therefore n belongs to K_{f(r,s,x)}, with r=\prod_i (p_i-1) and s=\prod_i p_i. If 1/q>\epsilon then n belongs to K_{f(r(q-1),sq,x)}. Hence, for each n\le N, \phi(n) falls \epsilon-close to or on a unique trend line f(r,s,x) for appropriate r and s and the theorem follows.
Theorem 4.3 can be reformulated in terms of Farey series \mathfrak{F}_n. It is easy to see that the functions g(n,m,x) are exactly the lines of maximal \epsilon-density. This follows from the proof of theorem 3.3 ([5, p.31]): Because \phi(n)/n always falls in an interval bounded by two successive fractions of \mathfrak{F}_n, say h/k and h'/k', it follows that \phi(n)/n will always fall in one of the intervals
\paren{\frac{h}{k},\frac{h+h'}{k+k'}}, \paren{\frac{h+h'}{k+k'},\frac{h'}{k'}},
Hence, \phi(n)/n falls \epsilon-close to either g(n,m,x), or g(n,m+1,x), for sufficiently large n.
In figure 4 we present the 0.01-density counts of the trends f_k(x)\in \TL for the sample space \set{1,\ldots,10000}.
Figure 4: D_{0.01}(10000,f_k(x)) for f_k(x)\in \TL, k\in\set{1,\ldots,16}
\sum_{k=1}^{16}D_{0.01}(10000,f_k(x))\sim 0.5793, so for \epsilon=0.01 approximately half the sample space falls onto the trend lines \TL. D_{0.01}(10000,f_1(x))\sim 0.1205, while the Prime Number Theorem (PNT) gives P(n=prime)=1/\log(10000)\sim 0.10857.
We define now the asymptotic 0-density of a line f(x)=rx, with 0<r<1.
Definition 4.4: Given r\in\Real, 0<r<1 and N\in\Natural, then the 0-density of the line f(x)=rx is,
D_0\paren{N,f(x)}=\lim\limits_{\epsilon\to 0}D_{\epsilon}\paren{N,f(x)}
The reader is welcome to try to generate different graphs for different densities (including 0-densities) using the Maple code in the Appendix. The 0-densities for N=10000 are shown in figure 5.
Figure 5: D_0(10000,f_k(x)) for f_k(x)\in \TL, k\in\set{1,\ldots,16}
We observe that the 0-densities of the trend lines of m and n look like they are roughly inversely proportional to the products \prod_i p_i when m and n have the same number of prime divisors, although this appears to be false for at least one pair of trend lines (bins 3 and 12 on figure 5):
2\cdot 3\cdot 7>3, while \abs{\set{n\le 10000\colon n= 2^k3^l7^m}}=S(\set{2,3,7},10000)=43\gt\abs{\set{n\le 10000\colon n=3^k}}=S(\set{3},10000)=8
The trend line density is a rough indicator of the probability n has one of the mentioned factorizations in F. The calculated densities of figures 4 and 5 of course concern only the sample space \set{1,\ldots,N}, with N=10000 and the primes we are working with, \set{2,3,5,7}. If N (or the lower bound, 1) or the set of primes changes, these probabilities will have to be recalculated experimentally.
Then we have,
Theorem 4.5: Given N\in\Natural, r=\prod_i (p_i-1), s=\prod_i p_i, and L=\set{p_1,p_2,\ldots,p_i}, then
D_0\paren{N,f(r,s,x)}=\frac{S(L,N)}{N}
Proof: The \epsilon-density of the trend line f(r,s,x) is \abs{K}/N, with K being \set{n\le N\colon \abs{1-\phi(n)/f(r,s,n)}\le\epsilon}. As \epsilon\to 0, K will contain exactly only those n having the factorization n=\prod_i p_i^{k_i} and the theorem follows by applying Theorem 4.1 with a_i=p_i.
Remark: Note that the existence of Schoenberg's continuous distribution function D(r) together with theorem 4.5 automatically guarantee the following:
Corollary 4.6: Given r, s and L as in theorem 4.5 then
\lim_{N\to\infty}\lim_{\epsilon\to 0}D_{\epsilon}\paren{N,f(r,s,x)}=\lim_{N\to\infty}D_0\paren{N,f(r,s,x)}=\lim_{N\to\infty}\frac{S(L,N)}{N}<\infty
The Timing of the Euclidian GCD Algorithm
The Euclidean GCD algorithm has been analyzed extensively (see 6 for example). For two numbers with m and n digits respectively, it is known to be O((m+n)^2) in the worst case if one uses the crude algorithm. This can be shortened to O((m+n)\cdot \log(m+n)\cdot \log(\log(m+n))), and if one uses the procedure which takes the smallest absolute remainder, the length of the series is trivially logarithmic in m+n. So the worst time, using the crude algorithm, is O((m+n)^2\cdot \log(m+n)), with the corresponding bound for the asymptotically better cases. It has been proved by Gabriel Lamé that the worst case occurs when m and n are successive Fibonacci numbers.
Using the Maple code in the Appendix, in figure 6 we show the timing performance graph of the Euclidean GCD algorithm as a function of how many steps it takes to terminate for integers m and n, relative to the maximum number of steps. Darker lines correspond to faster calculations. The time performance of \GCD(m,n) is exactly equal to the time performance of \GCD(n,m), hence the graph of figure 6 is symmetric with respect to the line m=n.
Figure 6: Time of \GCD(m,n) for (m,n)\in\set{1,\ldots,200}\times\set{1,\ldots,200}
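Figure 6 can be reproduced directly from a step count (a sketch of my own; with the gray colormap, darker cells correspond to fewer steps):

N = 200;
T = zeros(N, N);
for m = 1:N
    for n = 1:N
        a = m; b = n; k = 0;
        while b ~= 0
            [a, b] = deal(b, mod(a, b)); % one Euclidean step
            k = k + 1;
        end
        T(n, m) = k; % steps of Euclid's algorithm for GCD(m,n)
    end
end
imagesc(T); colormap(gray); axis xy; axis image;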
A Probabilistic Theorem
If we denote by \mathfrak{A} the class of all \GCD algorithms, then for 1\le m,n\le N\in\Natural, we define the function S[G,N]\colon \mathfrak{A}\times\Natural\to\Natural to be the number of steps of the Euclidean algorithm for \GCD(m,n). If H denotes the density of the hues on figure 6, ranging from black (few steps) to white (many steps), then figure 6 suggests,
S\brac{\GCD(m,n),N}\sim H\brac{f(n,m,x)}\sim g(n,m,x)
Keeping in mind that S\brac{\GCD(m,n),N}=S\brac{\GCD(n,m),N} and interpreting grey-scale hue H as (black pixel) \epsilon-density (a probability) on figure 6, the relation above suggests,
Theorem 6.1: Given \epsilon>0, N\in\Natural, if m\le N and \min\set{\phi(m)\colon m\le N}\le n\le N, \phi's trend lines of highest \epsilon-density (as in figure 1) correspond to the lines of fastest \GCD(m,n) calculations (as in figure 6), or:
S\brac{\GCD(n,m),N}\sim D_{\epsilon}(N,f(n,m,x))\sim g(n,m,x)
Proof: First, we present figures 1 and 6 superimposed using Photoshop as figure 7. Next we note that on the sample space \set{1,\ldots,N}, both figures 1 and 6 share a common dominant feature: The emergence of trend lines g(n,m,x). As established by Theorem 4.3, on figure 1 these lines are the lines of highest asymptotic \epsilon-density, given by D_{\epsilon}(N,f(n,m,x)). On the other hand, on figure 6 note that n=\phi(m) by superposition of the two figures, hence using the fundamental identity for \phi, n=m\prod_i (p_i-1)/p_i\Rightarrow n/m\sim f(\prod_i (p_i-1),\prod_i p_i,x)\sim f(n,m,x)\sim g(n,m,x) therefore n/m\sim g(n,m,x). The trend lines g(n,m,x) are already established as the regions of highest \epsilon-density, because their locations are close to irreducible fractions n/m (for which \GCD(m,n)=1), which are fractions which minimize S\brac{\GCD(n,m),N}, therefore S\brac{\GCD(n,m),N} is maximized away from these trend lines and minimized close to them, and the theorem follows.
To demonstrate Theorem 6.1, we present an example. The \epsilon-densities of the trend lines of \phi on figure 4 for the space \set{1,2,\ldots,N}, N=10000 and for the primes we used, \set{2,3,5,7} are related to the speed of the GCD algorithm in our space. For example, the highest 0.01-density trend line in our space is the line corresponding to the factorization m=2^kq. For prime q>2, \phi(m)\sim m/2. From figure 6, \phi(m)=n, hence m/2=n. Thus the fastest GCD calculations in our space with these four primes will occur when n=m/2. This is validated on figure 6. The next highest 0.01-density trend lines correspond to the factorizations m=3^kq, m=2^k3^lq and m=5^kq. In these cases, for q>5, \phi(m)\sim m/3, \phi(m)\sim 2m/3 and \phi(m)\sim 4m/5 respectively. From figure 6 again, \phi(m)=n, hence the next fastest GCD calculations in our space will occur when n=m/3, n=2m/3 and n=4m/5. This is also validated on figure 6. The process continues in a similar spirit, until our 0.01-density plot is exhausted for our space and the primes we are working with.
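The ratios \phi(m)/m quoted in this example are easy to confirm numerically. A minimal sketch for the m=2^kq case, assuming Maple's numtheory[phi] (older releases; NumberTheory:-Totient in recent ones):
# For m = 2^k*q with q an odd prime, phi(m)/m -> 1/2 as q grows,
# which is the trend line n = m/2 discussed above.
with(numtheory):
seq(evalf(phi(2^5*q)/(2^5*q)), q in [3, 7, 31, 127, 8191]);
# approx. 0.3333, 0.4286, 0.4839, 0.4961, 0.4999 -> 1/2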
When we are working with all primes p_i\le N, Theorem 6.1 suggests that the fastest GCD calculations will occur when m=\prod_i p_i^{k_i}q, which correspond to the cases \phi(m)=n\Rightarrow n=m\prod_i(1-1/p_i)\Rightarrow n=m\prod_i (p_i-1)/\prod_i p_i. These lines will eventually fill all the black line positions on figure 6 above the line n=\min\set{\phi(m)\colon m\le N}, according to the grey hue gradation on that figure.
If one maps the vertical axis [0,N] of figure 6 onto the interval [0,1] and then the latter onto a circle of unit circumference, one gets a Farey dissection of the continuum, as in [5, p.29]. Hence, the vertical axis of figure 6 represents an alternate form of such a dissection. This dissection of figure 6 is a rough map of the nature of factorization of n. Specifically, the asymptotic distribution of \phi(n)/n in [0,1], indicates (in descending order) roughly whether n is a power of a (large) prime (\phi(n)/n\sim 1, top), a product of specific prime powers according to a corresponding Farey series (\phi(n)/n\sim \mathfrak{F}_n), or a product of many (large) prime powers (\phi(n)/n\sim 0, bottom).
The trend lines of \phi's asymptotic density correspond to the fastest GCD calculations, or, the totient is the discrete Fourier transform of the gcd, evaluated at 1 (GCDFFT).
Practical Considerations of Theorem 6.1
What is the practical use (if any) of Theorem 6.1? The first obvious use is that one can make a fairly accurate probabilistic statement about the speed of \GCD(m,n) for specific m and n, by `inspecting' the \epsilon-density of the line rx, where r=m/n (or 1/r=n/m). To demonstrate this, we use an example with two (relatively large) random numbers. Let:
m=63417416233261881847163666172162647118471531, and
n=84173615117261521938172635162731711117360011.
Their ratio is approximately equal to r=m/n\sim 0.7534120538, so it suffices to determine a measure of the \epsilon-density of the line rx on the graph of figure 6. To locate the line rx on the graph, we use Maple to construct a rectangle whose sides are exactly at a ratio of r. This rectangle is shown superimposed with figure 6, on figure 8. The \epsilon-density of the line rx is fairly high (because it falls close to a trend line of \phi), which suggests that the timing of \GCD(m,n) for those specific m and n will likely be "relatively fast", compared to the worst case of \GCD(m,n) for m and n in that range (0.1-0.9\cdot 10^{44}). Note that for k\ge 1, we have S[\GCD(m,n),N]=S[\GCD(km,kn),N], so we can determine the approximate speed of \GCD(m,n) by reducing m and n suitably. To an accuracy of 10 decimal places, for example, we can be certain that S[\GCD(m,n),N]\sim S[\GCD(7534120538,10^{10}),N], since 7534120538/10^{10}\sim m/n.
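That reduction step can be checked directly. A minimal sketch, reusing the hypothetical EuclidSteps helper from the earlier sketch; the exact step counts are not asserted here, the heuristic only predicts that they are comparable:
m := 63417416233261881847163666172162647118471531:
n := 84173615117261521938172635162731711117360011:
EuclidSteps(m, n);               # steps for the original pair
EuclidSteps(7534120538, 10^10);  # steps for the 10-digit proxy of m/n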
The real practical use of this theorem, however, lies not so much in determining the actual speed of a specific GCD calculation (which is pretty much impossible, given the probabilistic nature of the theorem), but rather in avoiding attempts to factorize a large number m when the ratio r=p/m for various known primes p determines lines rx of relatively low \epsilon-density on figure 6. The latter can effectively be used to speed up factorization searches by trial and error, acting as an additional sieve which avoids such timing-unfavorable primes and picks primes for which \GCD(m,p) runs to completion relatively fast.
Remark: Note that such timing-unfavorable primes can still be factors of m. The usefulness of such a heuristic filter lies in that it does not pick them first while searching, leaving them for last.
Speed of \GCD(m,n), with the given m and n of section 7.
Addendum #3 (for your last comment, re: similarity of the two algorithms)
Yes, you are right, because the algorithm you describe by your "modulity" function is not the one I thought you were using. The explanation is the same as the one I've given you before. Let me summarize: the GCD algorithm counter works as follows:
GCD := proc (m, n)
  local T, M, N, c;
  M := m/igcd(m, n); N := n/igcd(m, n); c := 0;  # reduce to a coprime pair, init step counter
  while 0 < N do T := N; N := M mod N; M := T; c := c+1 end do;
  c
end proc
Result, nice and smooth (modulo 1):
From your comment description, you seem to be asking about:
Mod := proc (m, n)
  local a, b, c, r;
  a := m/igcd(m, n); b := n/igcd(m, n); c := 0; r := b;
  while 1 < r do r := a mod r; c := c+1 end do;  # always reduces modulo r
  c
end proc
Result, nice, but not smooth:
And that's to be expected, as I said in my Addendum #2. Your "modulity" algorithm is NOT equivalent to the GCD timer, since you are always reducing modulo r. There are exactly \phi(a) integers less than a and relatively prime to a, so you are getting an additional dissection of the horizontal continuum, as per \phi(a), for 1\le a\le 200.
T.M. Apostol, Introduction to analytic number theory, Springer-Verlag, New York, Heidelberg, Berlin, 1976.
P. Erdős, Some remarks about additive and multiplicative functions, Bull. Amer. Math. Soc. 52 (1946), 527-537.
_, Problems and results on Diophantine approximations (II), Compositio Math. 16 (1964), 52-65.
_, On the distribution of numbers of the form \sigma(n)/n and on some related questions, Pacific J. Math. 52(1) (1974), 59-65.
G.H. Hardy and E.M. Wright, An introduction to the theory of numbers, Clarendon Press, Oxford, 1979.
K. Ireland and M. Rosen, A classical introduction to modern number theory, Springer-Verlag, New York, Heidelberg, Berlin, 1982.
D. Knuth, The art of computer programming, volume 2: Seminumerical algorithms, Addison-Wesley, 1997.
D. Redfern, The Maple handbook, Springer-Verlag, New York, 1996.
I.J. Schoenberg, Über die asymptotische Verteilung reeller Zahlen mod 1, Math. Z. 28 (1928), 171-199.
_, On asymptotic distributions of arithmetical functions, Trans. Amer. Math. Soc. 39 (1936), 315-330.
A. Weingartner, The distribution functions of \sigma(n)/n and n/\phi(n), Proc. Amer. Math. Soc. 135(9) (2007), 2677-2681.
Week of November 24
Women's Basketball vs. Arkansas
Sport - Intercollegiate - Basketball | November 24 | Haas Pavilion
Cal Women's Basketball hosts Arkansas at Haas Pavilion.
TDPS presents The Caucasian Chalk Circle by Bertolt Brecht
Performing Arts - Theater | November 15 – 24, 2019 every day with exceptions | Zellerbach Playhouse
Director: Christine Nicholson
Playwright: Bertolt Brecht
Translator: Alistair Beaton
Sponsor: Department of Theater, Dance, and Performance Studies
This captivating and comedic play within a play tells the corresponding stories of a post-war land dispute in the Caucasus Mountains and a young servant who risks her life to raise an abandoned child.
Tickets required: $13–20
Ticket info: Buy tickets online
NaNoWriMo 2019: Come Write In at Doe Library
Special Event | November 3 – December 1, 2019 every Sunday | 1-4 p.m. | Doe Library, Room 180 Doe
National Novel Writing Month – Celebrate 20 Years of Creativity at Doe Library!
Have you ever thought about writing a novel but just didn't think you had the time? Join fellow writers during the month of November at Doe Library and let your creativity shine!
Attendance restrictions: The Library attempts to offer programs in accessible, barrier-free settings. If you think you may require disability-related accommodations, please contact Shannon Monroe at least two weeks prior to the event at [email protected]
Black Life: Diamano Coura
Performing Arts - Dance | November 24 | 1:30 p.m. | Berkeley Art Museum and Pacific Film Archive
Diamano Coura West African Dance Company is a nonprofit cultural organization dedicated to the preservation, education, and appreciation of traditional West African music, dance, theater, and culture. Join them for this class and learn a traditional West African song and dance movements to live drumming. All ages welcome.
Since its inception in 1975, Diamano Coura, under Director Dr. Zak Diouf... More >
Tour/Open House | January 3 – December 29, 2019 every Sunday, Thursday, Friday & Saturday with exceptions | 1:30-2:45 p.m. | UC Botanical Garden
Join us for a free, docent-led tour of the Garden as we explore interesting plants from around the world, learn about the vast diversity in the collection, and see what is currently in bloom. Meet at the Entry Plaza.
Free with Garden admission. Advanced registration not required
Film - Feature | November 24 | 2 p.m. | Berkeley Art Museum and Pacific Film Archive
An endless array of bossy citizens bombard a poor traffic policeman with all the excuses and reasons why they (but no one else) should be allowed into a restricted area in Kiarostami's satiric document of humanity's seemingly endless capacity for lying through its teeth—that is, telling stories. Kiarostami uses a telephoto lens to eavesdrop on the action, fashioning out of one traffic panic an... More >
David Finckel, cello; Wu Han, piano
Performing Arts - Music | November 24 | 3-5 p.m. | Hertz Concert Hall
Speakers/Performers: David Finckel, Cal Performances; Wu Han, Cal Performances
Sponsor: Cal Performances
Beethoven/Cello Sonata in A major, Op. 69
Brahms/Cello Sonata No. 2 in F major, Op. 99
Debussy/Nocturne and Scherzo
Franck/Violin Sonata in A major (trans. cello)
Tickets required: $68 (prices subject to change)
Ticket info: Tickets go on sale August 6. Buy tickets online, by calling 510-642-9988, or by emailing [email protected]
Cellist David Finckel and pianist Wu Han perform Sunday, November 24, 2019 in Hertz Hall. (credit: Daniel Ashworth)
Film - Feature | November 24 | 4:30 p.m. | Berkeley Art Museum and Pacific Film Archive
In Traffic, Tati's Monsieur Hulot is a resourceful automobile salesman en route to an Amsterdam car show, where he will present an elaborate camper on behalf of a Paris firm. Getting there is half the agony in Tati's apocalyptic vision of mechanized modernity, in which humankind indulges in a perpetual love-hate relationship with its favorite pet, the automobile. Tati's rigorously composed... More >
Michael Caine makes a fabulous cad as the randy playboy Alfie, but it's the female leads who make a more lasting impression in this surprisingly candid study of British sexual and social mores just as the sixties began to really swing. Directly addressing the camera through pre- or post-coital confessionals, the casual misogynist Alfie serves up a lothario's list of do's and who's: married women... More >
EHS 403 RUA On-Boarding
Course | November 25 | 10:30-11:30 a.m. | 370 University Hall
Speaker/Performer: JS
Sponsor: Office of Environment, Health & Safety
Nordic Parental Leaves: Inspiration for California?
Panel Discussion | November 25 | 11 a.m.-1:30 p.m. | Haas School of Business, Spieker Forum, Chou Hall
Sponsor: Center for Responsible Business
Join us for "Nordic Parental Leaves: Inspiration for California?"
Registration: https://www.eventbrite.com/e/nordic-parental-leaves-inspiration-for-california-tickets-78529049501
"It's easy in Sweden to work and have kids." -- Making Motherhood Work
Georgian Polyphony Workshop with Carl Linich
Workshop | November 25 | 11 a.m.-1 p.m. | Hertz Concert Hall
Performer: Carl Linich
Sponsors: Institute of Slavic, East European, and Eurasian Studies (ISEEES), Near Eastern Studies, Department of Music
Participants will learn a polyphonic folk song from the Republic of Georgia, using only word sheets rather than scores. All you need are open ears and the willingness to give it a try. The atmosphere is very relaxed, and all levels of musicianship are welcome. The workshop will be led by Carl Linich, one of the foremost practitioners of Georgian vocal music in the United States.
Graduate Student Seminar
Seminar | November 25 | 11:10 a.m.-12:30 p.m. | 489 Minor Hall
Speakers/Performers: Avi Aizenman, Levi Lab; Norick Bowers, Roorda Lab
How to Write a Research Proposal Workshop
Workshop | November 25 | 12-1 p.m. | 9 Durant Hall
Speaker: Leah Carroll, Haas Scholars Program Manager/Advisor, Office of Undergraduate Research and Scholarships
Need to write a grant proposal? This workshop is for you! You'll get a head start on defining your research question, developing a lit review and project plan, presenting your qualifications, and creating a realistic budget.
Open to all UC Berkeley students.
The Working Women's Charter: Women's Rights between Socialist Internationalism and Neoliberalism in 1970s Europe
Lecture | November 25 | 12-1 p.m. | 201 Moses Hall
Speaker/Performer: Celia Donert, University of Cambridge
Sponsors: Institute of European Studies, Institute of Slavic, East European, and Eurasian Studies (ISEEES), Co-sponsored by the Working Group for German History and Culture: Der Kreis
This talk examines the transnational circulation of proposals for a Working Women's Charter in 1970s Europe, a decade that saw both the expansion of gender equality legislation in the expanding European Economic Community, and a renewed attempt by communist regimes in the Soviet bloc to internationalize a socialist vision of women's rights through international communist organisations such as the... More >
Trade Lunch: "The Unequal Effects of Trade and Automation on Local Labor Markets"
Seminar | November 25 | 12:05-1 p.m. | 639 Evans Hall
Speaker: Simon Galle, University of California, Berkeley
GSPP Research Seminar
Seminar | November 25 | 12:10-1:30 p.m. | Goldman School of Public Policy, Room 105 (in 2607 Hearst St)
Speaker: Jack Glaser, UC Berkeley
Sponsor: Goldman School of Public Policy
Goldman School of Public Policy Research Seminar
Mondays 12:10-1:30
Pizza Served
Combinatorics Seminar: The tropical Cayley-Menger variety
Speaker: Robert Krone, UC Davis
The Cayley-Menger variety is the Zariski closure of the set of vectors specifying the pairwise squared distances between n points in $R^d$. For a graph on n vertices, a coordinate projection of the Cayley-Menger variety gives the possible edge lengths of the embeddings of the graph into $R^d$. Tropicalization converts an algebraic set into a polyhedral complex, the "combinatorial shadow" of the... More >
Political Economy Seminar: The Rise and Fall of Local Elections in China: Evidence on the Autocrat's Trade-off∗
Seminar | November 25 | 12:30-2 p.m. | 223 Moses Hall
Speaker: Gerard Padro-i-Miquel, Professor, Yale University
Seminar 231, Public Finance: "Capital Gains Taxes and Real Corporate Investment"
Seminar | November 25 | 2-3:30 p.m. | 648 Evans Hall
Speaker: Terry Moon, University of British Columbia
War in Raqqa: Rhetoric vs. Reality
Special Event | October 22 – December 20, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 2-5 p.m. | 2224 Piedmont (Center for Digital Archaeology )
Sponsor: Human Rights Center
Experience photographs, videos, open source investigations, and 360° Virtual Reality that document the assault on Raqqa, Syria by coalition forces in 2017. The show draws on Amnesty International's investigations, supported by students in UC Berkeley's Human Rights Investigations Lab and the Digital Verification Corps worldwide. Immerse yourself in video, testimonials, satellite imagery and maps... More >
Differential Geometry Seminar: Positive currents on the boundary of the ample cone of K3 surfaces
Seminar | November 25 | 3-4 p.m. | 939 Evans Hall
Speaker: Simion Filip, IAS
A lot of attention has been given to the degenerations of Ricci-flat metrics on K3 surfaces as the Kahler class approaches a rational point on the boundary of the ample cone. This corresponds to Gromov–Hausdorff collapse of the K3 along an elliptic fibration. I will discuss how to use techniques from dynamics to understand the behavior of the Ricci-flat Kahler form as the class approaches an... More >
Probabilistic Operator Algebra Seminar: Noncommutative Choquet theory
Speaker: Matthew Kennedy, University of Waterloo
I will present a new framework for noncommutative convexity and noncommutative function theory, along with a corresponding noncommutative Choquet theory that generalizes much of classical Choquet theory. I will also introduce a notion of noncommutative Choquet simplex, which generalizes the classical notion of Choquet simplex and plays a similar role in noncommutative dynamics. I will discuss... More >
Haas Scholars Program Info Session: $13,800 to carry out a final project in *ANY* major
Information Session | November 25 | 3-4 p.m. | 9 Durant Hall
Speaker/Performer: Leah Carroll, Haas Scholars
Learn about how to apply to this research program for your last year!
The Haas Scholars Program supports twenty undergraduates with financial need in pursuing their interest in conducting research during their final year at UC Berkeley. Applicants are evaluated primarily on the merit and originality of their proposal for an independent research or creative project that will serve as the basis for a... More >
Arithmetic Geometry and Number Theory RTG Seminar: Semiorthogonal decompositions for projective plane
Seminar | November 25 | 3:10-5 p.m. | 740 Evans Hall
Speaker: Dmitrii Pirozhkov, Columbia
A semiorthogonal decomposition is a way to decompose a derived category into smaller components. We know many examples, but we do not really understand the constraints on the structure of an arbitrary decomposition. In this talk I will show that all semiorthogonal decompositions of the derived category of coherent sheaves on the projective plane $P^2$ arise from full exceptional collections,... More >
QP-Fest
Colloquium | November 25 | 3:10-5 p.m. | 370 Dwinelle Hall
Speakers/Performers: Schuyler Laparle; Martha Schwarz; Edwin Ko; Tyler Lemon
Sponsor: Department of Linguistics
Third-year Linguistics graduate student presenting their qualifying papers.
Opening Windows Into The Cell: Bringing Structure To Cell Biology Using Cryo-electron Tomography
Seminar | November 25 | 4-5 p.m. | *105* Stanley Hall | Note change in location
Speaker: Elizabeth Villa, University of California, San Diego
To perform their function, biological systems need to operate across multiple scales. Current techniques in structural and cellular biology lack either the resolution or the context to observe the structure of individual biomolecules in their natural environment, and are often hindered by artifacts. Our goal is to build tools that can reveal molecular structures in their native cellular... More >
Seminar 271, Development: No Seminar
Leading Change in Higher Education
Colloquium | November 25 | 4-5:30 p.m. | Berkeley Way West, Room 1102, Berkeley Way West (2121 Berkeley Way, Berkeley, CA 94720)
Speaker: Susan Singer, Vice President for Academic Affairs and Provost, Rollins College
Sponsor: Graduate School of Education
While research on undergraduate STEM education has yielded robust evidence on improving undergraduate science, technology, engineering, and mathematics (STEM) education, scaling evidence-based practice remains challenging.
Seminar 208, Microeconomic Theory: No Seminar
Seminar | November 25 | 4:10-5:30 p.m. | 639 Evans Hall
AI & Responsible Design: Perspectives from Practitioners
Panel Discussion | November 25 | 4:10-5:30 p.m. | 210 South Hall
Sponsor: Information, School of
How practitioners think about design for algorithmic systems and products.
Analysis and PDE Seminar: Dynamical zeta functions at zero on surfaces with boundary
Speaker: Charles Hadfield, Rigetti Quantum Computing
The Ruelle zeta function counts closed geodesics on a Riemannian manifold of negative curvature. Its zeroes are related to Pollicott-Ruelle resonances which have been heavily studied in the setting of Anosov dynamical systems. In 2016, Dyatlov-Zworski proved an unexpected result relating the structure of the zeta function near the origin to the topology of the manifold. This extended a formula... More >
Being Communist, Being Other
Lecture | November 25 | 5-7 p.m. | Wheeler Hall, Maude Fife Room, 315
Speaker/Performer: Etienne Balibar, Anniversary Chair Professor at the Center for Research in Modern European Philosophy (CRMEP), Kingston University and Visiting Professor, Department of French and Romance Philology, Columbia University
Sponsor: The Program in Critical Theory
Etienne Balibar will reflect on his relationship to reading Marx, starting with Reading Capital, his early work co-written with Louis Althusser. He will seek to reconstruct his relation to Marx's thought, communism, and engage the question of communism for the present and future. Details forthcoming.
Distinguished Lecture: Affective Justice: The Racialized Imaginaries of International Justice
Lecture | November 25 | 5-7 p.m. | Doe Library, Morrison Room
Speaker/Performer: Dr. Kamari Clarke, UCLA
Sponsor: Department of Anthropology
This talk contributes to contemporary debates in the anthropology of international justice by exploring how narratives about the international Criminal Court have been applied, understood, and contested.
Why "5-Year Plan" is Stupid: A Career Path in Drug Discovery and Commercialization: SLAM Seminar Series
Seminar | November 25 | 5:30-6:30 p.m. | 106 Stanley Hall
Speaker: L.-C. Campeau, Executive Director, Merck
We are often taught that the best plan wins. We see it in movies and sports all the time. From high school, career counselors help us plan to get into the best college, then best graduate school and ultimately through to our professional lives. L.-C. Campeau's experience is quite the opposite. This talk will meander through the peaks and valleys of his formal training and professional career. He... More >
Science Meets Science: After the Wildfires: Where Should We Live?
Panel Discussion | November 25 | 5:30-7:30 p.m. | Anthony Hall
Speakers/Performers: Scott Stephens, Professor, UC Berkeley Department of Environmental Science, Policy, and Management; Laurie Johnson, Urban Planning Consultant, Laurie Johnson Consulting; Louise Comfort, Visiting Scholar, Center for Information Technology Research in the Interest of Society (CITRIS)
Sponsor: Science Policy Group at Berkeley
Join the Science Policy Group at Berkeley for a moderated discussion on the scientific ethics surrounding housing and wildfires.
This panel will discuss the policy issues surrounding housing, wildfires, and where we should be building.
We hope to explore questions such as:
Should we be building in wildfire-prone areas?
Should we rebuild cities like Paradise?
Should we be giving out... More >
Screening: Tongues Untied
Lecture | November 25 | 6:30-8 p.m. | Berkeley Art Museum and Pacific Film Archive
Speakers/Performers: Darieck B. Scott; Leila Weefur; Ken Light
Sponsor: Arts + Design
Presented by the UC Berkeley Department of Art Practice, African American Studies, and the Graduate School of Journalism
Darieck B. Scott, Associate Professor, African American Studies
Leila Weefur, Artist, Writer, Curator; Teacher, Art Practice
Ken Light, Reva and David Logan Professor of Journalism
Marlon Riggs (1957-1994) was a graduate of and professor at UC Berkeley's Graduate School... More >
Port Triumph screening
Film - Documentary | November 25 | 7-9 p.m. | 160 Kroeber Hall
Speaker/Performer: Jeffrey Gould, History Department, Indiana University Bloomington
Sponsors: Department of History, Center for Latin American Studies
During the 1970s, El Salvador boasted a vast shrimp industry, and nearly all of the 3700 tons that it exported each year made its way to the United States. As shrimp was transitioning away from luxury status, few Americans were likely to give much thought to how the shrimp reached their plates. Fewer still would ever have heard of the story of Puerto el Triunfo—Port Triumph in English—and... More >
Chancellor's Awards for Public Service Nominations
Miscellaneous | November 26 – December 6, 2019 every day | Eshleman Hall, Online Nomination
Sponsor: Public Service Center
Each year, the Chancellor recognizes students, staff, faculty, and community partnerships that embody UC Berkeley's proud tradition of public service and commitment to improving our local and global community. Faculty, staff, undergraduate and graduate students, student organizations, and campus-community partnerships are all eligible. Award categories include: civic engagement, community-engaged... More >
BPM 201 Employee Engagement
Workshop | November 26 | 9 a.m.-12:30 p.m. | #24 University Hall
Sponsor: Human Resources
This workshop is for UC Berkeley Staff. The content covers an overview of employee engagement, new employee onboarding, the use of ongoing assessments of engagement, creation of an engagement action plan, and communicating the engagement strategy.
Computer Health Matters: User Friendly Workstations (BEUHS400)
Workshop | November 26 | 9-10 a.m. | Tang Center, University Health Services, Class of '42
Speaker: Greg Ryan, Be Well at Work - Ergonomics
Sponsor: Be Well at Work - Ergonomics
Learn how to set up a user-friendly workstation and practice stretches to help relieve computer-related aches and pains. This workshop is required to qualify for computer ergonomics matching funds.
Being Communist, Being Other - Seminar
Seminar | November 26 | 10 a.m.-12 p.m. | 3401 Dwinelle Hall
Advance registration for the seminar is required. To register, please contact [email protected].
Etienne Balibar, in discussion with Zeynep Gambetti (Boğazici University, Turkey) and Jacques Lezra (UC Riverside).
Balibar will reflect on his relationship to reading Marx, starting with Reading Capital, his early work co-written with Louis Althusser. He will seek to... More >
Keyboards and Mice: Ergonomic Alternatives (BEUHS401)
Workshop | November 26 | 10:10-11 a.m. | Tang Center, University Health Services, Class of '42
Speaker/Performer: Greg Ryan, Campus Ergonomist, Be well at Work - Ergonomics
Learn about the ergonomics of keyboards and pointing devices, including appropriate workstation set-up, postures, and techniques for using them. Find out about the keyboards and pointing devices covered by the Computer Ergonomics Matching Funds Program. Enroll online at the UC Learning Center.
Student Faculty Macro Lunch - NO MEETING
Presentation | November 26 | 12-1 p.m. | 597 Evans Hall | Canceled
Sponsor: Clausen Center
Mindfulness Meditation Group
Meeting | February 20, 2018 – January 5, 2021 every Tuesday | 12:15-1 p.m. | 3110 Tang Center, University Health Services
Sponsor: Tang Center (University Health Services)
The Mindfulness Meditation Group meets every Tuesday at 12:15-1:00 pm at 3110 Tang Center on campus. All campus-affiliated people are welcome to join us on a drop-in basis, no registration or meditation experience necessary. We start with a short reading on meditation practice, followed by 30 minutes of silent sitting, and end with a brief discussion period.
Space Physics Seminar
Seminar | September 17 – December 3, 2019 every Tuesday | 1-2 p.m. | 325 LeConte Hall
Sponsor: Space Sciences Laboratory (SSL)
Seminar 237, Macro: No Seminar
Seminar | November 26 | 2-3:30 p.m. | 597 Evans Hall | Canceled
Hai Wang — Multi-Objective Online Ride-Matching
Seminar | November 26 | 3:30-4:30 p.m. | 405 Soda Hall
Speaker/Performer: Hai Wang, Carnegie Mellon University
Sponsor: Industrial Engineering & Operations Research
Abstract: We propose a general framework to study the on-demand shared ride-sourcing transportation systems, and focus on the multi-objective matching between demand and supply. The platforms match passengers and drivers in real time without observing future information, considering multiple objectives such as pick-up time, platform revenue, and service quality. We develop an efficient online... More >
Harmonic Analysis and Differential Equations Student Seminar: Small data global regularity for simplified 3-D Ericksen-Leslie's compressible hyperbolic liquid crystal model
Speaker: Jiaxi Huang, USTC
In this talk, we will consider the Ericksen-Leslie hyperbolic system for the compressible liquid crystal model in three spatial dimensions. Global regularity for small and smooth initial data near equilibrium is proved for the case that the system is a nonlinear coupling of the compressible Navier-Stokes equations with a wave map to $\mathbb{S}^2$. Our argument is a combination of the vector field method... More >
Commutative Algebra and Algebraic Geometry: The Fellowship of the Ring: Finding local summands
Speaker: Mengyuan Zhang, UC Berkeley
The theory of basic elements developed by Eisenbud-Evans is concerned with finding local free summands of a module. A modification of the arguments by Bruns allows one to find local free summands up to a given codimension (or depth). In this expository talk, we discuss this problem in the graded case, where the degrees of the free local summands give extra structure not present in the affine... More >
Effect of surfaces and osmolytes in modulating peptide assembly
Seminar | November 26 | 4-5 p.m. | 120 Latimer Hall
Featured Speaker: Joan-Emma Shea, Department of Chemistry and Biochemistry, UC Santa Barbara
Intrinsically disordered peptides are a special class of proteins that do not fold to a unique three-dimensional shape. These proteins play important roles in the cell, from signaling to serving as structural scaffolds. Under pathological conditions, these proteins are capable of self-assembling into structures that are toxic to the cell, and a number of neurodegenerative diseases, such as... More >
Cloud Computing Meetup
Social Event | October 29, 2019 – December 29, 2020 the last Tuesday of the month every month | 4:30-7 p.m. | Berkeley Skydeck, Penthouse (PH) floor
Location: 2150 Shattuck Avenue, Berkeley, CA 94704
Sponsor: Division of Data Science, IST/OCIO, Research IT,
The UC Berkeley Cloud Community of Practice will be holding our monthly meetup in the Berkeley SkyDeck Penthouse. We have some excellent guests and topics and hope to see you there.
Everyone is welcome - this is a really inclusive group! You don't have to be an expert, or already know about cloud technologies; we get together so people can learn from each other. The Berkeley SkyDeck is at... More >
UC Berkeley Cloud Computing Meetup
Meeting | August 27, 2019 – August 25, 2020 the last Tuesday of the month every month | 4:30-7 p.m. | Berkeley SkyDeck Penthouse
Sponsor: Data Sciences
This meetup is aimed at bringing together students, researchers, faculty, IT staff and the wider local community to learn, share and discuss our work in designing and operating services built on cloud computing architectures.
We will feature speakers and engaging discussions around current issues in cloud computing often with an enterprise focus.
Commutative Algebra and Algebraic Geometry: The Fellowship of the Ring: Moduli Spaces of Toric Vector Bundles
Speaker: Lauren Heller, UC Berkeley
I will discuss Klyachko's classification of toric vector bundles with given equivariant Chern class, as well as the application of this classification to the construction of moduli spaces, as described by Sam Payne. As an example, I will illustrate the possibilities for rank 2 vector bundles on $\mathbb P^1 \times \mathbb P^1$.
No BioE Department Seminar due to Thanksgiving Day Holiday
Seminar | November 27 | Stanley Hall
Early Garden Closure 2pm
Miscellaneous | November 27 | UC Botanical Garden
Speaker/Performer: Marykaren Mrowka
The Garden will close at 2pm today.
No MVZ Lunch Seminar
Seminar | November 27 | 12-1 p.m. | Valley Life Sciences Building, 3101 VLSB, Grinnell-Miller Library
Weekly Coffee Break
Social Event | August 28 – December 11, 2019 every Wednesday | 3-4 p.m. | Windmill Coffee House
Location: 2131 Durant, Berkeley, CA 94704
Berkeley International Office's Coffee Break is a weekly opportunity for UC Berkeley global students to meet and engage with one another in a relaxing environment. It's a chance to enjoy free coffee, share stories about your experiences on campus, and get to know Berkeley International Office advisers. All UC Berkeley students are invited. Students are free to talk about anything, but we'll also... More >
Toastmasters on Campus Club: Learn public speaking
Meeting | January 14, 2015 – December 18, 2019 every Wednesday | 6:15-7:30 p.m. | 3111 Etcheverry Hall
Sponsor: Toastmasters on Campus
Toastmasters has been the world leader in teaching public speaking since 1924. Meetings are an enjoyable self-paced course designed to get you up and running as a speaker in only a few months.
Thanksgiving Day: Garden Closed
Holiday | November 28 | UC Botanical Garden
The Garden is closed today for the Thanksgiving holiday.
Seminar 251, Labor Seminar: Thanksgiving
Fall 2019 LEP Global Book Club: Barracoon: The Story of the Last "Black Cargo"
Miscellaneous | October 25 – December 6, 2019 every Friday | 1-2 p.m. | 110 César E. Chávez Student Center
Sponsor: SLC Language Exchange Program
Join us for the Fall 2019 LEP Global Book Club! This semester, as we join the campus in commemorating 400 Years of Resistance to Slavery and Injustice, we will be reading Barracoon: The Story of the Last "Black Cargo."
Written in 1931 and published just last year, Barracoon chronicles Hurston's conversations with Cudjo Kossula Lewis, one of the last survivors of the Atlantic Slave... More >
RSVP recommended
RSVP info: RSVP online
Fall 2019 LEP Global Book Club
M. Hulot's Holiday
In a cinematic postcard from a seaside summer resort, Tati observes the forced gaiety and gently absurd antics of the English and French on vacation. A stream of sight and sound gags, this is comedy as choreography, plotless and virtually without dialogue. Or, rather, with almost inaudible dialogue that lays waste the speaker's vanity even as he speaks. As in a dance, people are recurring motifs:... More >
Performing Arts - Other | November 29 | 2-4 p.m. | Zellerbach Hall
The perennially popular troupe of China's most gifted tumblers, contortionists, jugglers, and gymnasts returns to Cal Performances on Thanksgiving weekend with a performance steeped in 2,000 years of folk tradition. In vibrant costumes and accompanied by live musicians on an array of Chinese folk instruments, the Peking Acrobats perform signature routines including trick cycling, precision... More >
Tickets required: $30–$72 (prices subject to change)
Peking Acrobats perform Friday–Sunday, November 29–December 1, 2019 in Zellerbach Hall.
Sponsors: Berkeley Art Museum and Pacific Film Archive, Institute for South Asia Studies
Based on the romance between seventeenth-century Mughal ruler Shah Jahan and his queen, this epic silent film is the ravishing, romantic tale behind the creation of one of the world's most iconic structures, the Taj Mahal. Shot entirely on location in India, it features lavish costumes and gorgeous settings—all the more impressive in this restoration by the BFI National Archive, which features a... More >
Jim Allison: Breakthrough
Film - Documentary | November 29 | 6:30 p.m. | Berkeley Art Museum and Pacific Film Archive
This documentary tells the remarkable story of the perseverance of a biological scientist who found a cure for cancer through his trailblazing immunotherapy research. Jim Allison spent a significant part of his career as a professor of immunology and director of the Cancer Research Laboratory at UC Berkeley between 1985 and 2004, and concurrently at UC San Francisco from 1997. Among his many... More >
Women's Basketball vs. Long Beach State
Sport - Intercollegiate - Basketball | November 29 | 7 p.m. | Haas Pavilion
Cal Women's Basketball hosts Long Beach State at Haas Pavilion.
Women's Basketball Thanksgiving Tournament: North Carolina Central
Cal Women's Basketball hosts the Thanksgiving Tournament at Haas Pavilion. | CommonCrawl |
IMU 2019 Annual Meeting – Titles and Abstracts
Published on May 18, 2019 (updated June 12, 2019) by yairglasner
Plenary talks:
10:00-10:45 Plenary talk, Ron Peled (Tel-Aviv University)
Title: The proper way to color a grid
We consider the task of coloring the vertices of a large discrete box in the integer lattice Z^d with q colors so that no two adjacent vertices are colored the same. In how many ways can this be done? What does a typical coloring look like? What is the proportion of proper colorings in which two opposite corners of the box receive the same color? Is it about one in q?
We discuss these questions and the way their answers depend on the dimension d and the number of colors q, presenting recent results with Yinon Spinka and with Nishant Chandgotia, Scott Sheffield and Martin Tassy. Motivations are provided from statistical physics (anti-ferromagnetic materials, square ice), combinatorics (proper colorings, independent sets) and the study of random Lipschitz functions on a lattice.
12:00-12:45 Plenary talk, Karim Adiprasito (Hebrew University)
Title: Dots, diagrams and dictionaries
Discrete objects have recently come to enjoy the same structural theory that numbers have been enjoying for a long while now. But while numbers now are embedded within algebra and analysis, the more flexible combinatorial structures have not yet achieved a similar depth in theory.
I will propose a program that aims to give combinatorial structures a similarly wholesome embedding to the one that number theory now enjoys, and discuss how recent developments, such as the combinatorial uses and proofs of Lefschetz and Hodge type theorems, fit into this program.
Algebra Session (Organizers: Lior Bary-Soroker and Eitan Sayag)
14:20 – 14:50 Aner Shalev (Hebrew University)
Title: Girth, diameter and random Cayley graphs
We study the girth of Cayley graphs of finite classical groups G on random sets of generators. Our main tool is an essentially best possible bound we obtain on the probability that a given word w takes the value 1 when evaluated in G in terms of the length of w, which has additional applications.
We also study the girth of random directed Cayley graphs of symmetric groups, and the relation between the girth and the diameter of random Cayley graphs of finite simple groups.
15:00-15:30 Ofir Gorodetsky (Tel-Aviv University)
Title: Random matrix theory over finite fields
Draw a random matrix A from the unitary group U_n(C) according to the Haar measure. What is the distribution of the trace of A, and of powers of A, as n goes to infinity? Diaconis and Shahshahani have studied this question, showing that the traces, normalized appropriately, converge in distribution to i.i.d complex normal variables.
We consider this question in the setting of finite fields: Drawing a matrix from U_n(F_p) or other classical group over a finite field (e.g. SL_n(F_p)), what is the distribution of traces as n goes to infinity? We show that one obtains i.i.d uniform random variables taking values in F_p. We will explain our results, and compare with the characteristic-zero situation.
Joint work with Brad Rodgers.
16:20-16:50 Shifra Reif (Bar-Ilan University)
Title: The Grothendieck ring for finite-dimensional representations of Lie superalgebras.
A fundamental link between algebraic combinatorics and representation theory is the isomorphism between the ring of symmetric Laurent polynomials and the ring of finite-dimensional representations of the general linear Lie algebra. This ring is called the Grothendieck ring and has a similar description for all simple Lie algebras. The description of the Grothendieck ring for basic Lie superalgebras was given only in 2007 by Sergeev and Veselov.
In this talk, I will explain these isomorphisms and describe the Grothendieck ring for a Lie superalgebra which is not basic, namely the periplectic Lie superalgebra p(n). Joint work with M. S. Im and V. Serganova.
17:00-17:30 Gal Binyamini (Weizmann Institute of Science)
Title: Tame geometry and Diophantine geometry
About 12 years ago Pila and Wilkie proved a powerful theorem on counting rational points in transcendental sets, using a combination of methods from logic and diophantine approximation. Over the past decade a deep connection between tame geometry and diophantine geometry grew around this theorem, leading to proofs for various diophantine conjectures of Manin-Mumford, Andre-Oort, Masser-Zannier and others.
I will review the Pila-Wilkie counting theorem and the basic strategy for many of its applications. If time permits I will also discuss some directions of ongoing research.
17:40-18:10 Avner Segal (Bar-Ilan University)
Title: Jacquet modules and irreducibility of parabolic induction
Determining the irreducibility of parabolic induction is one of the main problems in the representation theory of algebraic groups. In this talk I will demonstrate, via a number of examples, a method of proving the irreducibility of parabolic induction using Jacquet modules. This method is more effective for groups of large rank. In particular, it is being used successfully in a joint project with Hezi Halawi to determine the reducibility of degenerate principal series of exceptional groups of type E_n.
If time permits, I will explain how this method helps to study the structure of reducible parabolic induction.
Analysis Session (Organizers: Gidi Amir and Nir Lev)
14:20-14:50 Benjamin Weiss (Hebrew University)
Title: Entire functions as a dynamical system
The complex plane acts on the space of entire functions by translation. With the natural topology of uniform convergence on compact subsets, this action is continuous.
I will survey some of the results that have been obtained on the invariant probability measures and on the compact minimal subsets under this action.
15:00-15:30 Boaz Slomka (Weizmann Institute of Science)
Title: On Hadwiger's covering problem
A long-standing open problem, known as Hadwiger's covering problem, asks what is the smallest natural number N(n) such that every convex body in {\mathbb R}^n can be covered by a union of the interiors of at most N(n) of its translates. Despite continuous efforts, the best general upper bound known for this number remains as it was more than sixty years ago, of the order of {2n \choose n} n\ln n.
In this talk, I will present a new result in which we improve this bound by a sub-exponential factor. Our approach combines ideas from previous work with tools from asymptotic geometric analysis. As a key step, we use thin-shell estimates for isotropic log-concave measures to prove a new lower bound for the maximum volume of the intersection of a convex body K with a translate of -K. We further show that the same bound holds for the volume of K\cap(-K) if the center of mass of K is at the origin.
If time permits we shall discuss some other methods and results concerning this problem and its relatives.
Joint work with H. Huang, B. Vritsiou, and T. Tkocz
16:20-16:50 Avner Kiro (Tel-Aviv University)
Title: Power substitution in quasianalytic Carleman classes
In this talk, I will consider power substitutions in quasianalytic Carleman classes, i.e. equations of the form f(x)=g(x^k), where k>1 is an integer and f is a given function in a quasianalytic Carleman class. I will show that if g happens to be a smooth function, then g belongs to a quasianalytic class completely characterized in terms of bounds on the derivatives of g.
The talk is based on joint work with L. Buhovski and S. Sodin.
17:00-17:30 Gady Kozma (Weizmann Institute of Science)
Title: Irreducibility of random polynomials
Take a polynomial whose coefficients are random, i.i.d. and integer. What is the probability that it is irreducible over the rationals? This question, despite much progress over the last two years, is still unresolved. We will survey recent results.
Joint work with Lior Bary-Soroker and Dimitris Koukoulopoulos.
Applied Mathematics Session (Organizers: Haggai Katriel and Raz Kupferman)
14:20-14:50 Baruch Barzel (Bar-Ilan University)
Title: The spatio-temporal propagation of signals in complex networks
A major achievement in the study of complex networks is the realization that diverse systems, from sub-cellular biology to social networks, exhibit universal topological characteristics. Yet, such universality does not naturally translate to the dynamics of these systems, as dynamic behaviour cannot be uniquely predicted from topology alone. Rather, it depends on the interplay of the network's topology with the dynamic mechanisms of interaction between the nodes. Hence, systems with similar structure may exhibit profoundly different dynamic behaviour. We therefore seek a general theoretical framework to help us systematically translate topological elements into their predicted dynamic outcome. We present such a translation in the context of signal propagation, linking the topology of a network to its observed spatiotemporal spread of perturbative signals, capturing the network's role in propagating local information. For a range of nonlinear dynamic models, we predict that the propagation rules condense into three highly distinctive universality classes, characterized by the interplay between network paths, degree distribution and the interaction dynamics. As a result, classifying a system's intrinsic interaction mechanisms into the relevant dynamic regime allows us to systematically translate topology into dynamic patterns of information propagation.
Nature Physics 15, 403 (2019)
Nature Communications 8, 2181 (2017) .
15:00-15:30 Elihu Olami (Hebrew University)
Title: Homogenization of edge dislocations via de-Rham currents
Edge dislocations are material defects: atomic discrepancies in a material lattice structure. In this talk, we present a geometric model for edge-dislocations using (layering) 1-forms and their singular counterparts, de-Rham currents. Isolated edge-dislocations are represented by 1-forms which are smooth and closed outside a singularity segment. A smooth distribution of dislocations is represented by a (globally) smooth non-closed 1-form. We prove a homogenization result for edge dislocations; every smooth distribution of dislocations is a limit (in the sense of currents) of arrays of isolated dislocations. We also define the notion of singular torsion and study its relation to the defect structure and homogenization process.
This is a joint work with Raz Kupferman.
16:20-16:50 Neta Rabin (Afeka Academic College of Engineering)
Title: Multi-scale approximation and extension of functions using Laplacian pyramids
Modeling and learning functions, which are defined on scattered, high-dimensional datasets, is a common task that arises in many applications. In this talk, we describe a multi-scale algorithm named Laplacian pyramids for approximating and extending such functions. The algorithm smoothes the target function by automatically selecting the optimal resolution (stopping scale) adapted to the data and its noise. In addition, we describe a recent extension of this approach for processing matrix-type data, where the Laplacian pyramids are applied in a two-directional manner and used for data imputation.
17:00-17:30 Tatiana Savin (Ohio University)
Title: Hele-Shaw two-phase flow driven by line distributions of sinks/sources and a uniform change of the cell's gap
A Hele-Shaw cell is a pair of parallel plates separated by a small gap. The motion of the fluids sandwiched between the plates can be driven by a pressure gradient, gravity, fluid injection, and external potential fields. This has been the subject of numerous investigations. The reason for the intense interest is the mathematical relation to the modeling of several applied problems in material science and fluid dynamics, as well as to the modeling of biological processes involving moving fronts of populations or tumors. These latter processes include cancer, biofilms, wound healing, granulomas, and atherosclerosis.
In this talk we consider two fluids with different viscosities in a Hele-Shaw cell. The evolution of the interface, separating the fluids, is driven by a uniform change in the gap width of the cell as well as by the presence of some special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected.
Using the Schwarz function approach and a two-phase mother body, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves.
17:40-18:10 Sagi Snir (University of Haifa)
Title: Using computational tools for piecing together small trees into the Large Tree of Life
The reconstruction of evolutionary trees (also known as "phylogenies") is central to many problems in Biology. With the explosion of molecular data being produced, a new program, "Assembling the Tree of Life", has set the goal of producing a highly accurate estimate of the evolutionary history of all life on earth. Key to this goal is the ability to estimate very accurate trees on different groups of taxa (organisms), and then combine these different trees into a tree on the full dataset. This task, combining small trees into a big tree, is the supertree task, and no really accurate supertree method yet exists.
Quartets, trees over four taxa, are the most basic phylogenetic informational unit, and therefore quartet-based supertree construction is the simplest supertree task. Nevertheless, despite its simplicity, the problem lies at the root of many long-open combinatorial phylogenetic problems, for which accurate answers seem very hard.
In a series of works we have developed graph theoretically based approaches for the quartet supertree and other related tasks. Our approach is based on a divide and conquer algorithm where our divide step uses a semi-definite programming (SDP) formulation of MaxCut in a graph representing relationships between the taxa. We also show theoretical guarantees as well as impossibility results for our approach over a large family of inputs where the best known result is a random tree.
Based on work with Raphy Yuster, Satish Rao, and Noga Alon. The talk is self contained and requires no prior knowledge in Biology.
Discrete Mathematics Session (Organizers: Martin Golumbic, Roy Meshulam and Shakhar Smorodinsky)
14:20-14:50 Chaya Keller (Technion)
Title: The structure of sets of vectors in the plane whose sums span a few directions
A classical theorem of Ungar (1982) asserts that any n non-collinear points in the plane determine at least 2[n/2] directions. That is, considering the points as vectors, at least 2[n/2] different directions are spanned by their differences. Jamison and Hill characterized the sets of points for which the minimum is attained.
In this talk we discuss the related problem of directions spanned by sums of n vectors in the plane, proposed by Balog and Roche-Newton (2015) in the context of sum-product theorems. Assuming that the vectors are pairwise independent, it is easy to show that the minimal number of directions is 2n-1 if all vectors are contained in the right half-plane, and 2n otherwise. We obtain a structural characterization of all sets of vectors for which the minimum is attained. In particular, we show that for any such set A, almost all elements of A must lie on the union of two quadrics.
Joint work with Rom Pinchasi.
15:00-15:30 Michael Simkin (Hebrew University)
Title: Combinatorial designs in random hypergraphs
A combinatorial design is a hypergraph exhibiting some form of regularity. Well-known examples include Steiner triple systems (triangle decompositions of $K_n$), Latin squares (triangle decompositions of $K_{n,n,n}$), regular graphs, and perfect matchings.
A classical theorem of Erdős and Rényi is that the threshold for the appearance of perfect matchings in $G(n;p)$ is the same as the threshold for the disappearance of isolated vertices. Viewing combinatorial designs as generalizations of perfect matchings, we ask: What is the threshold for the appearance of a given combinatorial design in a binomial random hypergraph? In particular, does it correspond to the disappearance of an obvious local obstruction? For example, does a random $3$-graph contain a (spanning) Steiner triple system as soon as every pair of vertices is contained in a triangle? Despite several recent breakthroughs, most of these questions remain open.
Focusing on the case of Latin squares, we will show sharp threshold results for the appearance of approximate Latin squares. We will also use Keevash's method of randomized algebraic construction to bound from above the threshold for the appearance of Latin squares in random hypergraphs.
Partly based on joint work with Zur Luria.
16:20-16:50 Frank Mousset (Tel-Aviv University)
Title: The minrank of random graphs over arbitrary fields
The minrank of a graph G on the vertex set {1,…,n} over a field F is the minimum possible rank of an n-by-n matrix M over F that has nonzero diagonal entries and satisfies M(i,j) = 0 whenever i and j are distinct nonadjacent vertices of G. This notion arises in several different contexts: the study of the Shannon capacity of a graph, the index coding problem, and the study of unit distance graphs. I will talk about the minrank of the Erdős-Rényi random graph G(n,p) and present a result stating that the minrank of G(n,p) over any field F is of order n log(1/p)/log n with high probability. For the case where F is the field of real numbers, this answers a question raised by Knuth in 1994.
This is joint work with Noga Alon, Igor Balla, Lior Gishboliner, and Adva Mond.
17:00-17:30 Ido Nachum (Technion)
Title: Learning symmetric functions with neural networks
Initializing the weights of neural networks close to zero is the common practice for training them. This practice fails if the network tries to learn the parity function. Contrastingly, using the right initialization, we show that the parity function and any other \emph{symmetric} function, i.e. a function that is invariant under permutations of its input (the corresponding group is $\mathbb{S}_n$), can be learned with neural networks. This demonstrates the importance of choosing an initialization (or an architecture) that accommodates the symmetries of a given learning problem. A natural question now follows, what other groups can be used to generate effective convergence guarantees and generalization bounds?
17:40-18:10 Shay Mozes (IDC Herzliya)
Title: Almost optimal distance oracles for planar graphs.
We present new tradeoffs between space and query-time for exact distance oracles in directed weighted planar graphs. These tradeoffs are almost optimal in the sense that they are within polylogarithmic, subpolynomial or arbitrarily small polynomial factors from the naive linear space, constant query-time lower bound. These tradeoffs include:
(i) an oracle with space $O(n^{1+\epsilon})$ and query-time $\tilde O(1)$ (the $\tilde O(\cdot)$ notation hides polylogarithmic factors) for any constant $\epsilon>0$,
(ii) an oracle with space $\tilde O(n)$ and query-time $O(n^{\epsilon})$ for any constant $\epsilon>0$, and (iii) an oracle with space $n^{1+o(1)}$ and query-time $n^{o(1)}$.
Based on joint work with Panagiotis Charalampopoulos, Pawel Gawrychowski and Oren Weimann (https://arxiv.org/abs/1811.01551) to appear in STOC 2019.
Geometry and Topology Session (Organizers: David Blanc and Ruth Lawrence-Neimark)
14:20-14:50 Lior Yanovski (Hebrew University)
Title: Higher semi-additivity in chromatic homotopy theory
In ordinary algebra, characteristic zero behaves differently from characteristic p>0, partially due to the possibility to symmetrize finite group actions. In particular, given a finite group G acting on a rational vector space V, the "norm map" from the co-invariants V_G to the invariants V^G is an isomorphism (in marked contrast to the positive characteristic case). In the chromatic world, the Morava K-theories provide an interpolation between the zero characteristic (represented by rational cohomology) and positive characteristic (represented by F_p cohomology). A classical result of Hovey-Sadofsky-Greenlees shows that the norm map is still an isomorphism in these "intermediate characteristics". A subsequent work of Hopkins and Lurie vastly generalises this result and puts it in the context of a new formalism of "higher semiadditivity" (a.k.a. "ambidexterity").
I will describe joint work with Tomer Schlank and Shachar Carmeli in which we generalize the results of Hopkins–Lurie, extending them among other things to the telescopic localizations, and draw some consequences (along the way, we obtain a new and more conceptual proof of their original result).
15:00-15:30 Surojit Ghosh (Haifa University)
Title: Equivariant cell complexes and freeness theorem in $RO(G)$-graded cohomology theory
We discuss analogues of the theorem "A CW complex with even dimensional cells has free homology" for G-spaces. For the cyclic group of order p, the "freeness theorem" was first proved by Lewis, and subsequently, the result was improved by Ferland–Lewis.
We generalize this result to other cyclic groups (in joint work with Samik Basu).
16:20-16:50 Chandrika Sadanand (Technion)
Title: You can "hear" the shape of a polygonal billiard table
Consider a polygon-shaped billiard table on which a ball can roll along straight lines and reflect off of edges infinitely. In work joint with Moon Duchin, Viveka Erlandsson and Chris Leininger, we have characterized the relationship between the shape of a polygonal billiard table and the set of possible infinite edge-itineraries of balls travelling on it.
In this talk, we will explore this relationship and the tools used in our characterization (notably a new rigidity result for flat cone metrics).
17:00-17:30 Yaniv Ganor (Tel-Aviv University)
Title: Rigidity and flexibility phenomena in Poisson bracket invariants
Symplectic manifolds are the natural setting for classical mechanics. The symplectic structure induces a Lie algebra structure on the space of smooth functions on a symplectic manifold, called the Poisson bracket.
An approach known as symplectic function theory suggests studying symplectic manifolds through the lens of the Poisson bracket and its interplay with the supremum norm of functions.
Poisson bracket invariants for triples and quadruples of sets were introduced by Buhovski, Entov and Polterovich (2012) as a means to study the C^0 symplectic topology of closed subsets of a symplectic manifold. They were found to have applications to the study of Hamiltonian chords (Entov-Polterovich, 2016) and were also applied to the study of a symplectic topological invariant of Lagrangian submanifolds (Entov-G-Membrez, 2016). Interestingly, they manifest various aspects of both rigid and flexible phenomena.
In this talk we will survey various rigidity and flexibility behaviors of Poisson bracket invariants of closed sets in symplectic manifolds, and present a recent result (G, 2018) answering the question "To what extent do they depend on the union of the four sets in the quadruple?", to which the answer is homotopically flavored: they depend on the union together with some homotopical data encoding the manner of decomposition into four sets, up to homotopy.
No symplectic background is assumed, all terms will be defined and explained during the talk.
17:40-18:10 Mikhail Katz (Bar-Ilan University)
Title: Systolically extremal nonpositively curved surfaces are flat with finitely many singularities
The regularity of systolically extremal surfaces is a notoriously difficult problem already discussed by M. Gromov in 1983, who proposed an argument toward the existence of L^2-extremizers exploiting the theory of r-regularity developed by P. A. White and others by the 1950s. We propose to study the problem of systolically extremal metrics in the context of generalized metrics of nonpositive curvature. A natural approach would be to work in the class of Alexandrov surfaces of finite total curvature, where one can exploit the tools of the completion provided in the context of Radon measures as studied by Reshetnyak and others. However, the generalized metrics in this sense still don't have enough regularity. Instead, we develop a more hands-on approach and show that, for each genus, every systolically extremal nonpositively curved surface is piecewise flat with finitely many conical singularities. This result exploits a decomposition of the surface into flat systolic bands and nonsystolic polygonal regions, as well as the combinatorial/topological estimates of Malestein-Rivin-Theran, Przytycki, Aougab-Biringer-Gaster and Greene on the number of curves meeting at most once, combined with a kite excision move. The move merges pairs of conical singularities on a surface of genus g and leads to an asymptotic upper bound g^{4+\epsilon} on the number of singularities.
See https://arxiv.org/abs/1904.00730
Mathematical Education Session (Organizers: Boris Koichu and Alon Pinto)
All talks in the math education session will be delivered in Hebrew.
14:20-14:50 N. Keller and E. Sheiner (Bar-Ilan University)
Title: Academic studies during high school: the Academic Program for Youth at Bar-Ilan University as an example
In Israel, several hundred students complete their BSc degree in Mathematics or in Computer Science by the age of 19 every year. Most of them study in special programs, in which they take the matriculation exam (i.e., Bagrut) in mathematics in the 10th grade and then study a large part of the first degree in parallel with their high-school studies. The number of students participating in these programs continues to grow rapidly.
The programs give rise to various challenges and interesting dilemmas, some of them related to the secondary-tertiary transition (STT), and especially to the STT of very talented but very young and immature students.
In this talk we shall present some of these dilemmas by examining a concrete example: the Academic Program for Youth at Bar-Ilan University, which has almost 20 years of experience in the field and is currently the largest such program in Israel, with more than 100 students every year.
15:00-15:30 Y. Godin (Hebrew University); correspondent: Tommy Dreyfus (Tel-Aviv University)
Title: Supporting first year infinitesimal calculus students
The transition from high school to university mathematics is problematic for the majority of Israeli students, and the problems are particularly acute in the students' first course in differential and integral calculus.
In this session, I will briefly sketch some of the main reasons underlying Israeli students' difficulties in this course; I will also describe in some detail and evaluate steps that have been implemented at the Einstein Institute of Mathematics to alleviate the problems while trying to preserve the academic level.
16:20-16:50 A. Berman (Technion)
Title: A problem-based course as a bridge between high school mathematics and university mathematics
Many newcomers to university suffer from first-year shock. For some, their background is not sufficient. For many, the need to prove (and the enjoyment of proving) is a surprise. Most of them solved exercises in high school, but not problems.
In my talk, I will discuss several ways to deal with this shock and in particular describe a problem-based course that may suggest some remedy.
17:00-17:30 A. Pinto (Weizmann Institute of Science)
Title: Fostering learning of informal mathematics in a formal course: what are the key challenges for instructors, and what can be done about them?
A recent international survey among nearly two hundred mathematicians indicates that many mathematicians consider student transition from school mathematics to university mathematics an issue of concern at their departments. Moreover, most of the mathematicians who participated in this survey, and in particular the Israeli participants, stated that challenges related to student transition at their departments have increased over the last decade. The understanding that there exists a divide between school and university mathematics is not new. Already in 1906, Felix Klein published the first volume of his "Elementarmathematik vom höheren Standpunkt aus" (Elementary Mathematics from a Higher Standpoint). In this book, Klein identified a discontinuity between mathematics as experienced at school and as practiced at university. One may wonder why, after more than a century, the gap between school and university mathematics not only still exists but even seems to be widening. Two widespread explanations for this phenomenon are that the mathematical preparation of students entering university is insufficient (and getting worse over time), and that doing mathematics at an advanced level is inherently difficult and only suitable for a relatively small proportion of the student population.
In my talk I will argue why these answers are not satisfactory, on the basis of selected findings from several case studies that investigated how university mathematics instructors address, explicitly or implicitly, student transition in their teaching. I will highlight key pedagogical challenges for instructors, and suggest steps that could be taken towards assuming greater responsibility for students' transition.
17:40-18:10 B. Koichu (Weizmann Institute of Science)
Title: Approaches to dealing with the secondary-tertiary transition: examples from an international survey and discussion
An international survey among mathematicians on the secondary-tertiary transition (STT), which is also mentioned in Alon Pinto's talk, contains the following question: "Can you describe in rough detail a realistic project that, given appropriate support, could make things better in terms of STT?" To date, 93 mathematicians have answered this question, with answers ranging from a single word ("no") to long paragraphs that outline key challenges and alternative solutions.
A sample of various answers to this question from different countries will be presented at the beginning of the session. The main part of the session will be devoted to an open discussion with the audience. The goal of the discussion is to lay foundations for further action in relation to STT in Israel.
Exercises - The Definition of the Derivative
Use the definition of the derivative to find $f\,'(x)$, when $\displaystyle{f\,(x) = \frac{1}{2-3x}}$
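A worked illustration (our computation, not part of the original exercise set) of how the limit definition applies to this first function:
$$f\,'(x)=\lim_{h\to 0}\frac{1}{h}\left(\frac{1}{2-3(x+h)}-\frac{1}{2-3x}\right)=\lim_{h\to 0}\frac{3h}{h\,(2-3x-3h)(2-3x)}=\frac{3}{(2-3x)^{2}}.$$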
Find $f\,'(x)$ using the definition of the derivative, where $\displaystyle{f\,(x)=\frac{1}{\sqrt{4-x}}}$, and then find the equation of the normal line at $x=3$
Find $f\,'(x)$ using the definition of the derivative, where $f\,(x)=x^2-4x+4$ and then determine where the tangent line to the graph of $y=f\,(x)$ is horizontal
Given $f\,(x) = 3x^2-2x+1$
Use the definition of the derivative to find $f\,'(x)$
Find the equation of the tangent line to the graph of $y=f\,(x)$ at $x=2$
Find the point on the graph of $y=f\,(x)$ where the tangent line is horizontal
Find the equation of the normal line to the graph of $y=f\,(x)$ at $x=-1$
A ball is thrown upward from the top of a building. The initial height is 640 ft, and the initial velocity (upward) is 64 ft/s. Its height above the ground is given by $h(t)=-16t^2+64t+640$. Use the definition of the derivative to answer the following questions:
What is the instantaneous velocity at $t=1$ second?
What is the height of the ball at $t=1$ second?
When will the ball reach its maximum height?
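A worked check for this projectile exercise (our computation, offered as a solution sketch): applying the definition to $h(t)=-16t^2+64t+640$ gives
$$h'(t)=\lim_{\Delta t\to 0}\frac{h(t+\Delta t)-h(t)}{\Delta t}=\lim_{\Delta t\to 0}\frac{64\,\Delta t-32t\,\Delta t-16\,\Delta t^{2}}{\Delta t}=64-32t,$$
so the instantaneous velocity at $t=1$ s is $h'(1)=32$ ft/s, the height at $t=1$ s is $h(1)=-16+64+640=688$ ft, and the maximum height occurs when $h'(t)=0$, i.e. at $t=2$ s.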
The cost (in dollars) to make $x$ new computers is given by $C(x) = x^2+196x+400$. The expected revenue is given by $R(x)=-3x^2+660x$. Use the definition of the derivative to answer the following questions:
How many units need to be made in order to maximize profits?
What is the maximum profit that can be made?
Use the definition of the derivative to find the derivative of each function given
$\displaystyle{f\,(x) = \frac{1}{\sqrt{1-2x}}}$
$\displaystyle{h(x)=6-\sqrt{x+4}}$
$\displaystyle{f(x)=\frac{1}{3}}$
$\displaystyle{f(x)=\frac{3}{2x+1}}$
$\displaystyle{g(x)=\frac{x}{x+1}}$
$\displaystyle{h(x)=1+\sqrt{x}}$
For each function given below, find $f\,'(x)$ using the definition of the derivative, then find the equation of the tangent line to the graph of $y=f\,(x)$ at the given $x$-value.
$\displaystyle{f\,(x) = \frac{1}{\sqrt{3x}}; \quad x=3}$
$\displaystyle{f\,(x) = \frac{1}{1-2x}; \quad x=1}$
$\displaystyle{f\,(x) = \frac{1}{x^2+1}; \quad x=-1}$
Given $\displaystyle{f\,(x) = \frac{2}{4-x}}$
Find $f\,'(x)$ using the definition of the derivative
Find the equation of the normal line to the graph of $y=f\,(x)$ at $x=2$.
Use the definition of the derivative to show that $f\,'(x) = \displaystyle{\frac{-1}{2x\sqrt{x}}}$ when $\displaystyle{f\,(x) = \frac{1}{\sqrt{x}}}$, and then find the equation of the normal line to the graph of the equation $y=f\,(x)$ at $x=4$
Given $\displaystyle{f\,(x) = \frac{1}{1+x}}$
Find the equation of the tangent line to the graph of $y=f(x)$ at $x=0$
Find the equation of the normal line to the graph of $y=f(x)$ at $x=-2$
Journal of the Institute of Mathematics of Jussieu
July 2019, pp. 673-706
ORBIFOLD POINTS ON PRYM–TEICHMÜLLER CURVES IN GENUS $4$
David Torres-Teigell (a1) and Jonathan Zachhuber (a2)
1Fachrichtung Mathematik, Universität des Saarlandes, Campus E24, 66123 Saarbrücken, Germany ([email protected])
2FB 12 – Institut für Mathematik, Johann Wolfgang Goethe-Universität, Robert-Mayer-Str. 6–8, D-60325 Frankfurt am Main, Germany ([email protected])
DOI: https://doi.org/10.1017/S1474748017000196
Published online by Cambridge University Press: 22 May 2017
For each discriminant $D>1$, McMullen constructed the Prym–Teichmüller curves $W_{D}(4)$ and $W_{D}(6)$ in ${\mathcal{M}}_{3}$ and ${\mathcal{M}}_{4}$, which constitute one of the few known infinite families of geometrically primitive Teichmüller curves. In the present paper, we determine for each $D$ the number and type of orbifold points on $W_{D}(6)$. These results, together with a previous result of the two authors in the genus $3$ case and with results of Lanneau–Nguyen and Möller, complete the topological characterisation of all Prym–Teichmüller curves and determine their genus. The study of orbifold points relies on the analysis of intersections of $W_{D}(6)$ with certain families of genus $4$ curves with extra automorphisms. As a side product of this study, we give an explicit construction of such families and describe their Prym–Torelli images, which turn out to be isomorphic to certain products of elliptic curves. We also give a geometric description of the flat surfaces associated to these families and describe the asymptotics of the genus of $W_{D}(6)$ for large $D$.
1. Bainbridge, M., Euler characteristics of Teichmüller curves in genus two, Geom. Topol. 11 (2007), 1887–2073.
2. Bainbridge, M., Chen, D., Gendron, Q., Grushevsky, S. and Möller, M., Strata of $k$-differentials, Preprint, 2016, arXiv:1610.09238 [math.AG].
3. Birkenhake, C. and Lange, H., Complex Abelian Varieties, Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen, (Springer, Berlin, 2004).
4. Bouw, I., Pseudo-elliptic bundles, deformation data, and the reduction of Galois covers, Habilitation, Universität Duisburg–Essen, 2005. URL: http://www.mathematik.uni-ulm.de/ReineMath/mitarbeiter/bouw/papers/crystal.ps.
5. Broughton, S. A., Classifying finite group actions on surfaces of low genus, J. Pure Appl. Algebra 69(3) (1991), 233–270.
6. Bujalance, E. and Conder, M., On cyclic groups of automorphisms of Riemann surfaces, J. Lond. Math. Soc. (2) 59(2) (1999), 573–584.
7. Cohen, H., A Course in Computational Algebraic Number Theory, Graduate Texts in Mathematics, Volume 138 (Springer, Berlin, 1993).
8. González-Diez, G. and Harvey, W. J., Moduli of Riemann surfaces with symmetry, in Discrete Groups and Geometry (Birmingham, 1991), (ed. Harvey, W. J. and Maclachlan, C.), London Mathematical Society Lecture Note Series, Volume 173, pp. 175–193 (Cambridge University Press, Cambridge, 1992).
9. Lanneau, E. and Nguyen, D.-M., Teichmüller curves generated by Weierstrass Prym eigenforms in genus 3 and genus 4, J. Topol. 7(2) (2014), 475–522.
10. Lanneau, E. and Nguyen, D.-M., Teichmüller curves and Weierstrass Prym eigenforms in genus four, in preparation, 2016.
11. Lehman, J. L., Levels of positive definite ternary quadratic forms, Math. Comp. 58(197) (1992), 399–417.
12. McMullen, C. T., Billiards and Teichmüller curves on Hilbert modular surfaces, J. Amer. Math. Soc. 16(4) (2003), 857–885.
13. McMullen, C. T., Teichmüller curves in genus two: discriminant and spin, Math. Ann. 333(1) (2005), 87–130.
14. McMullen, C. T., Prym varieties and Teichmüller curves, Duke Math. J. 133(3) (2006), 569–590.
15. McMullen, C. T., Mukamel, R. E. and Wright, A., Cubic curves and totally geodesic subvarieties of moduli space, Ann. of Math. (2) 185(3) (2017), 957–990.
16. Möller, M., Teichmüller Curves, Mainly from the Viewpoint of Algebraic Geometry, IAS/Park City Mathematics Series, (2011).
17. Möller, M., Prym covers, theta functions and Kobayashi geodesics in Hilbert modular surfaces, Amer. J. Math. 135 (2014), 995–1022.
18. Möller, M. and Zagier, D., Modular embeddings of Teichmüller curves, Compos. Math. 152 (2016), 2269–2349.
19. Mukamel, R. E., Orbifold points on Teichmüller curves and Jacobians with complex multiplication, Geom. Topol. 18(2) (2014), 779–829.
20. The PARI Group, PARI/GP version 2.3.5, Bordeaux, 2010. URL: http://pari.math.u-bordeaux.fr/.
21. Rohde, J. C., Cyclic Coverings, Calabi-Yau Manifolds and Complex Multiplication, Lecture Notes in Mathematics, Volume 1975 (Springer, Berlin, 2009).
22. Schiller, J., Moduli for special Riemann surfaces of genus 2, Trans. Amer. Math. Soc. 144 (1969), 95–113.
23. Shimura, G., On modular forms of half-integral weight, Ann. of Math. (2) 97(2) (1973), 440–481.
24. Singerman, D., Finitely maximal Fuchsian groups, J. Lond. Math. Soc. (2) 6 (1972), 29–38.
25. Torres-Teigell, D. and Zachhuber, J., Orbifold points on Prym–Teichmüller curves in genus three, Int. Math. Res. Not. IMRN (2016), doi:10.1093/imrn/rnw277, to appear.
26. Zachhuber, J., The Galois action and a spin invariant for Prym–Teichmüller curves in genus 3, Bull. Soc. Math. France (2016), to appear.
Keywords: flat surfaces, Teichmüller curves, orbifold points, cyclic covers
MSC classification
14H10: Families, moduli (algebraic)
14H40: Jacobians, Prym varieties
14H52: Elliptic curves
57R18: Topology and geometry of orbifolds
54F65: Topological characterizations of particular spaces
Adaptive Root $n$ Estimates of Integrated Squared Density Derivatives
Tiee-Jian Wu
Ann. Statist. 23(5): 1474-1495 (October, 1995). DOI: 10.1214/aos/1176324308
Based on a random sample of size $n$ from an unknown density $f$ on the real line, the nonparametric estimation of $\theta_k = \int\{f^{(k)}(x)\}^2 dx, k = 0, 1,\ldots$, is considered. These functionals are important in a number of contexts. The proposed estimate of $\theta_k$ is constructed in the frequency domain by using the sample characteristic function. It is known that the sample characteristic function at high frequency is dominated by sample variation and does not contain much information about $f$. Hence, the variation of the estimate can be reduced by modifying the sample characteristic function beyond some cutoff frequency. It is proposed to select the cutoff frequency adaptively by a generalization of the (smoothed) cross-validation. The exact convergence rate of the proposed estimate to $\theta_k$ is established. It depends solely on the smoothness of $f$. For sufficiently smooth $f$, it is shown that the proposed estimate is asymptotically normal, attains the optimal $O_p(n^{-1/2})$ rate and achieves the information bound. Finally, to improve the performance of the proposed estimate at small to moderately large $n$, two modifications are proposed. One modification is for estimating $\theta_0$; it reduces bias of the estimate. The other modification is for estimating $\theta_k, k \geq 1$; it reduces sample variation of the estimate. In simulation studies the superior performance of the proposed procedures is clearly demonstrated.
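To make the frequency-domain idea concrete, here is a minimal numerical sketch (ours, not the paper's adaptive procedure). By Parseval's identity, $\theta_k = \frac{1}{2\pi}\int t^{2k}\,|\varphi(t)|^2\,dt$, where $\varphi$ is the characteristic function, so a plug-in estimate replaces $\varphi$ by the empirical characteristic function, de-biases $|\varphi_n(t)|^2$, and truncates the integral at a cutoff frequency. The fixed cutoff below is a simplistic stand-in for the paper's cross-validated choice; the function name and defaults are ours.

```python
import numpy as np

def theta_k_hat(x, k=0, cutoff=8.0, grid=801):
    """Plug-in estimate of theta_k = int (f^(k))^2 dx from a sample x,
    using the empirical characteristic function truncated at `cutoff`."""
    x = np.asarray(x, dtype=float)
    n = x.size
    t = np.linspace(-cutoff, cutoff, grid)
    phi_n = np.exp(1j * np.outer(t, x)).mean(axis=1)   # empirical c.f.
    # Unbiased estimate of |phi(t)|^2, since E|phi_n|^2 = |phi|^2 + (1-|phi|^2)/n.
    mod2 = (n * np.abs(phi_n) ** 2 - 1.0) / (n - 1.0)
    integrand = t ** (2 * k) * mod2
    # Trapezoidal rule on the uniform grid.
    dt = t[1] - t[0]
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) / (2.0 * np.pi)

rng = np.random.default_rng(0)
sample = rng.standard_normal(1000)
# For the standard normal, theta_0 = 1/(2*sqrt(pi)) ~ 0.2821.
print(theta_k_hat(sample, k=0))
```

For $k \geq 1$ the factor $t^{2k}$ amplifies high-frequency sampling noise, which is exactly why the cutoff choice matters and why the paper selects it adaptively.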
Tiee-Jian Wu. "Adaptive Root $n$ Estimates of Integrated Squared Density Derivatives." Ann. Statist. 23 (5) 1474 - 1495, October, 1995. https://doi.org/10.1214/aos/1176324308
Published: October, 1995
First available in Project Euclid: 11 April 2007
Digital Object Identifier: 10.1214/aos/1176324308
Primary: 62G05
Secondary: 62G20
Keywords: Bandwidth selection, Characteristic function, convergence rate, Density derivative, kernel estimate, nonparametric information bound, smoothed cross-validation
Rights: Copyright © 1995 Institute of Mathematical Statistics
Bamboo lignocellulose degradation by gut symbiotic microbiota of the bamboo snout beetle Cyrtotrachelus buqueti
Chaobing Luo, Yuanqiu Li, Ying Chen, Chun Fu, Wencong Long, Ximeng Xiao, Hong Liao & Yaojun Yang
Biotechnology for Biofuels volume 12, Article number: 70 (2019)
Gut symbiotic microbiota plays a critical role in nutrient supply, digestion, and absorption. The bamboo snout beetle, Cyrtotrachelus buqueti, a common pest of several bamboo species, exhibits high lignocellulolytic enzyme activity and contains various CAZyme genes. However, to date, no studies have evaluated the role of gut symbiotic microbiota of the snout beetle on bamboo lignocellulose degradation. Therefore, the present study investigated the role of gut symbiotic microbiota of C. buqueti on bamboo lignocellulose degradation.
Gut symbiotic microbiota of female (CCJ), male (XCJ) and larval (YCJ) beetles was used to treat bamboo shoot particles (BSPs) in vitro for 6 days. Scanning electron microscopy (SEM) revealed significant destruction of the lignocellulose structure after treatment, which was consistent with the degradation efficiencies of CCJ, XCJ and YCJ for cellulose (21.11%, 17.58% and 18.74%, respectively); hemicellulose (22.22%, 27.18% and 34.20%, respectively); and lignin (19.83%, 24.30% and 32.97%, respectively). Gut symbiotic microbiota of adult and larval beetles was then identified using 16S rRNA sequencing, which revealed that four genera, Lactococcus, Serratia, Dysgonomonas and Enterococcus, comprise approximately 84% to 94% of the microbiota. Moreover, the genomes of 45 Lactococcus, 72 Serratia, 86 Enterococcus and 4 Dysgonomonas strains were used to analyse resident CAZyme genes. These results indicate that the gut symbiotic microbiota of adult and larval C. buqueti is involved in the lignocellulose degradation traits shown by the host.
This study shows that the gut symbiotic microbiota of C. buqueti participates in bamboo lignocellulose degradation, providing innovative findings for bamboo lignocellulose bioconversion. Furthermore, the results of this study will allow us to further isolate lignocellulose-degrading microbiota for use in bamboo lignocellulose bioconversion.
Insects are the largest group of organisms on earth, and most insect species feed on plants. Some insects, such as termites and beetles, consume plant cell walls and are involved in the degradation of lignocellulose and other types of biomass; thereby contributing to lignocellulose bioconversion and energy utilisation [1, 2]. To date, many insects, including termites, wood-feeding roaches, beetles, wood wasps, leaf-shredding aquatic insects, silverfish and leaf-cutting ants, have been shown to exhibit lignocellulose degradation ability [1].
In phytophagous insects, such as termites, beetles and wood bees, lignocellulose digestion requires cooperation between insects and symbiotic microorganisms in the insect gut, especially bacteria, protozoa, fungi, and yeasts [3]. In Costelytra zealandica (New Zealand grass grub), various hindgut bacteria participate in lignocellulose degradation [4] and some lignocellulose-degrading bacteria have been isolated from the larvae of the scarab beetle Pachnoda marginata [5]. Moreover, bacteria exhibiting cellulose hydrolysis activity have been isolated from other insects, such as Ips pini (North American pine engraver), Dendroctonus frontalis (Southern pine beetle), Saperda vestita (Linden borer) [6] and Tipula abdominalis (giant crane fly) [7]. Symbiotic gut microbes of the fungus-cultivating termites Macrotermes annandalei, Odontotermes yunnanensis and Macrotermes barneyi possess various lignocellulolytic enzyme genes [8]. A recent study identified 111 glycoside hydrolase (GH) family genes among the symbiotic microbes of Macrotermes natalensis [9]. Taken together, these studies show that there is significant involvement of insect symbiotic microbes in lignocellulose degradation.
16S rRNA sequencing has been widely applied to the detection and identification of gut microorganisms in insects. Gut bacteria in many phytophagous insects, including termites [10, 11], Rhynchophorus ferrugineus (palm weevil) [12] and Dendroctonus armandi (Chinese white pine beetle) [13] have been identified by 16S rRNA sequencing and show that the symbiotic bacteria primarily belong to Escherichia sp., Serratia sp., Pantoea sp., Acinetobacter sp., Salmonella sp., Pseudomonas sp., Shigella sp., Staphylococcus sp., Klebsiella sp., Enterobacter sp., Erwinia sp., Vibrio sp., Proteus sp. and Bacillus sp. Of these identified bacteria, Serratia sp. [14], Pseudomonas sp. [15] and Bacillus sp. [16] exhibited lignocellulose degradation activity.
The bamboo snout beetle, Cyrtotrachelus buqueti, is a serious pest of bamboo species, which include Phyllostachys pubescens, Neosinocalamus affinis, Bambusa textilis and Dendrocalamus farinosus [17]. Cyrtotrachelus buqueti also exhibits high lignocellulolytic enzyme activity [18]. However, to date, no studies have investigated the role of snout beetle gut microbiota in lignocellulose degradation, as has been done for termites [8] and other species of beetles [4,5,6,7]. The present study investigates the role of adult and larval C. buqueti gut bacteria using 16S rRNA sequencing. Adult and larval C. buqueti were shown to carry similar loadings of lignocellulose-degrading bacteria. Furthermore, the degradation of bamboo shoots (Bambusa emeiensis) by snout beetle gut microbiota was investigated in vitro, and the results revealed that the microbiota of both adult and larval C. buqueti has lignocellulose-degrading ability.
Gut structure in different age groups and microbial colonisation of the gut paunch
In insects, the gut is divided into foregut, midgut, and hindgut [19]. Although the foregut is mostly involved in mechanical treatment [20], delignification occurs in the midgut [21] and symbiotic microorganisms exist mainly in the hindgut, where the biomass is degraded [19].
In the present study, the intestinal tract of C. buqueti was analysed. Results showed that it comprises a foregut, a large midgut comprising an anterior and a posterior midgut, and a hindgut comprising a paunch, an ileum and a colon (Fig. 1a, b). The total length of the larval intestinal tract was 21.25 ± 0.55 cm, which corresponds to 3.8 times its body length. The foregut represented 3.33-3.7% of the total length, whereas the midgut represented 52-58%, and the hindgut represented 38-44%. On the other hand, the total length of the adult intestinal tract was 9.83 ± 0.84 cm, which corresponds to 2.4 times its body length. The foregut represented 13-20% of the total length, the midgut represented 28-32%, and the hindgut represented 51-55%.
Intestinal tract structure in adult and larvae of C. buqueti. Morphological characteristics of larvae (a) and adults (b). I: foregut; II: midgut; III: hindgut; oe: oesophagus; am: anterior midgut; pm: posterior midgut; il: ileum; Co: colon; rt: rectum; Scanning electron micrographs global (c) and detailed (d) views of the adult intestinal tract. Scanning electron micrographs global (e) and detailed (f) views of the larvae intestinal tract
The distribution of microbial cells colonising the gut paunch was analysed by SEM (Fig. 1c–f). In adults and larvae, the gut paunch was mostly inhabited by rods and cocci, indicating the existence of symbiotic microbiota in the gut of adult and larval C. buqueti.
SEM, BSPs components and culture reducing sugar content after treatment
In insects that digest lignocellulose, gut symbiotic microbes are ubiquitous and involved in lignocellulose degradation to various degrees [4]. To investigate the degradation efficiency of the gut symbiotic microbes in C. buqueti, the gut symbiotic microbiota of female (CCJ) and male (XCJ) adults and of larvae (YCJ) was extracted and used for in vitro degradation of bamboo shoot particles (BSPs) for 6 days.
Changes in the fibre microstructure of plant raw materials can be accurately observed using SEM. Following in vitro exposure to the intestinal tract microbiota of female, male, and larval beetles, the BSPs were observed using SEM. The results showed that the surface of lignocellulose from untreated, control BSPs was rough, showing gully-like surface features and a dense structure (Fig. 2a–f). However, no significant differences were observed between raw (Fig. 2a–c) and control (Fig. 2d–f) samples. On the other hand, after 72 h of treatment, the cell walls of BSPs became thinner with an enlarged cellulosic cavity (Fig. 2g, j, m) and many cracks appeared on the surface (Fig. 2h, k, n). Moreover, compared with the control surface (Fig. 2c), new dimples or holes were found on the surface of treated BSPs (Fig. 2i, l, o). These results are consistent with previous reports [22, 23]. A comparison between the pre- and post-treatment microstructure of BSPs revealed that the structure was significantly disrupted by treatment.
Scanning electron microscopy (SEM), bamboo shoot particles (BSPs) components and reducing sugar content in cultures after in vitro treatment. a–c SEM for raw BSPs; d–f SEM for BSPs in the control group; g–i SEM for BSPs after 72 h of treatment in CCJ; j–l SEM for BSPs after 72 h of treatment in XCJ; m–o SEM for BSPs after 72 h of treatment in YCJ; p amount of reducing sugar in the culture after 6 days of treatment; q cellulose, hemicellulose and lignin degradation efficiency of BSPs after 6 days of treatment. CCJ: gut symbiotic microbiota of female; XCJ: gut symbiotic microbiota of male; YCJ: gut symbiotic microbiota of larvae. Descriptive data are expressed as mean ± standard error of the mean; different lowercase letters indicate significant differences among developmental stages at the 0.05 level (n = 3); different capital letters indicate significant differences among components at the 0.05 level (n = 3)
Since the reducing sugars in the culture medium were mainly derived from the hydrolysis of cellulose and hemicellulose in BSPs, the reducing sugar content was determined to reflect the degree of conversion of lignocellulose. The results showed small content changes during the first day of digestion, which increased thereafter. The maximum value was reached on day 3 in adults and on day 4 in larvae, subsequently decreasing until day 6 (Fig. 2p). The increase in reducing sugars indicated the degradation of cellulose and hemicellulose [24].
We then determined the degradation efficiencies of lignocellulose from BSPs after 6 days of treatment. As shown in Fig. 2q, cellulose, lignin, and hemicellulose were partially removed by the gut symbiotic microbiota. The degradation efficiencies of the CCJ, XCJ, and YCJ digestions were: 21.11%, 17.58% and 18.74%, respectively, for cellulose; 22.22%, 27.18% and 34.20%, respectively, for hemicellulose; and 19.83%, 24.30% and 32.97%, respectively, for lignin (Fig. 2q). These results showed that gut symbiotic microbiota capable of degrading the cellulose, lignin, and hemicellulose in BSPs could be cultured in vitro and then applied to the pretreatment and hydrolysis of bamboo lignocellulose.
Determination of lignocellulolytic enzyme activity
To investigate the mechanism of the degradation of BSPs lignocellulose in vitro, the activities of the lignocellulolytic enzymes present in the gut symbiotic microbiota of adults and larvae, such as endoglucanase, β-glucosidase, xylanase, exoglucanase, laccase and lignin peroxidase, were determined. As presented in Fig. 3, the activity of cellulase, including endoglucanase, β-glucosidase and exoglucanase increased with treatment, whereas that of xylanase, laccase and lignin peroxidase first increased and subsequently decreased. To investigate whether the higher enzyme activity was caused by an increase of protein secretion, we determined the secreted protein content during the course of the treatment. The amount of secreted proteins was found to increase continuously (Fig. 3), indicating that the higher enzyme activity was partially caused by increased protein secretion.
Lignocellulolytic enzyme activities displayed by the gut symbiotic microbiota of C. buqueti at 1, 2, 3, 4, 5 and 6 days after being co-cultured with BSPs in vitro
Gas chromatography–mass spectrometry (GC–MS) of lignin degradation products of BSP in vitro
GC–MS has been shown to effectively detect lignin degradation products [24]. We used GC–MS to identify the low-molecular-weight products obtained from the cultures after 6 days of treatment. The total ion chromatograms (TIC) of compounds extracted from control and treated samples revealed that several small-molecule aromatic metabolites occurred only in treated samples. Such was the case of phenol (RT 7.22), phenylethyl alcohol (RT 10.71), 2-methoxyphenol (RT 9.85), 4-ethylphenol (RT 12.36) and p-cresol (RT 9.97), which are considered the basic units of the lignin polymer (Table 1; Additional file 1: Figure S1). This result indicates that the lignin in BSPs was degraded to produce phenolic compounds, with the guaiacyl structural unit as the main component. Furthermore, bond breakage between the methoxyl groups linked to the benzene ring of guaiacyl produces mono-substituted aromatics, such as phenol and dimethyl phenol.
Table 1 Identification of metabolites as trimethylchlorosilane (TMS) derivatives from BSP samples
Bamboo lignin contains guaiacyl (G), syringyl (S) and p-hydroxyphenyl (H) units, belonging to the G–S–H type [25]. It was previously reported that most bamboo cell walls are mainly composed of guaiacyl units at the early stage of lignification [26]. In the present study, BSPs were degraded to produce phenolic compounds; with guaiacyl structure unit as the main body, indicating that the G unit of BSPs was degraded by the beetles' gut symbiotic microbiota.
Structure of bacterial communities
To investigate the bacterial diversity in the larval and adult C. buqueti guts, a bacterial 16S rRNA gene clone library was constructed by PCR. Total DNA was extracted from the whole intestinal tracts of adults and larvae. The larval and adult guts were labelled 'YWG' and 'AWG', respectively. A total of 1,957,125 raw reads from 15 YWGs and 15 AWGs were characterised by Illumina high-throughput sequencing. After quality trimming, 1,889,645 high-quality clean tags were obtained and binned into 31,196 operational taxonomic units (OTUs) (Table 2).
Table 2 Estimated richness and diversity of bacterial communities in the gut of adult and larval Cyrtotrachelus buqueti obtained from pyrosequencing analysis
At the phylum level, a total of 14 prokaryotic phyla were identified, among which five were common to both groups (Fig. 4a, b). In both groups, Proteobacteria (38.6–73.0%) and Firmicutes (19.2–59.8%) were the most abundant microbial communities. The two most increased phyla in the AWG group were Proteobacteria and Tenericutes; which, compared with the YWG group, had a relative abundance increase of 89% and 3095%, respectively. On the contrary, two phyla, Firmicutes and Fibrobacteres, decreased by 63% and 68%, respectively (Additional file 2: Table S1). At the genus level, besides unclassified OTUs, 74 genera were detected, among which 29 were common to both groups (Fig. 4c, d). The abundance of all genera in each group is shown in Additional file 3: Table S2. Lactococcus (59.1–72.3%) was consistently abundant in both groups. Thirty-four genera were detected only in the AWG group, whereas 11 were detected only in the YWG group. Non-metric multi-dimensional scaling (NMDS) plots and similarity analysis (ANOSIM) (p < 0.05) (Additional file 4: Figure S2) revealed a divergence of the community structure in the AWG and YWG groups.
Bacterial operational taxonomic units (OTUs) composition in adult and larvae beetles (AWG and YWG). a Venn diagram showing the coincidence of phyla between the groups. b Phylum-level comparison of bacterial OTUs between the groups. c Venn diagram showing the overlap of genera between groups. d Genus-level comparison of bacterial OTUs between the groups
Diversity and richness of microbial communities
The YWG group had higher species richness (Chao 1, ACE and OTUs) (Table 2). A significant difference was observed between the groups in the diversity of microbial communities as determined by the Shannon index (p < 0.05), whereas the Simpson index revealed no significant difference (Table 2; Additional file 5: Figure S3). Differentially represented OTUs were analysed via linear discriminant analysis (LDA) effect size (LEfSe), a statistical measure used in metagenomic biomarker discovery (Fig. 5a, b). Genera such as Lactococcus, Enterococcus and Nitrospira and only one species, Bacillus firmus, increased in YWG, whereas one genus and two species increased in AWG (Fig. 5c–i).
LEfSe analysis of the two groups. a Cladogram of the two groups. b LDA score histogram. The relative abundance of c Bacillus firmus, d Enterococcus, e Lactococcus, f Lactococcus garvieae, g Nitrospira, h Citrobacter and i Entomoplasmatales. Solid and dashed lines indicate mean and median, respectively. The x-axis in c–i represented the samples: AWG1-15 and YWG1-15
UniFrac analysis was performed to compare the degree of phylogenetic overlap in the microbial communities of AWG and YWG (Fig. 6a). Moreover, a total of six OTUs with maximum abundance, including Lactococcus, Enterococcus, Bacillus, Citrobacter, Vagococcus and Serratia, were used for principal component analysis and clearly separated into two groups. Specifically, the difference of PC1 was caused mostly by Enterococcus and Serratia, which were positively correlated. Additionally, the Enterococcus was positively correlated with the other OTUs except Citrobacter, whereas the Serratia was positively correlated with all OTUs (Fig. 6b). Maximum likelihood (ML) analysis of the 50 detectable OTUs showed that the significantly increased OTUs in the YWG group belonged to the families: Enterobacteriaceae, Streptococcaceae, Enterococcaceae, and Bacillaceae subdivision 4 (Fig. 6c).
a Weighted UPGMA tree. b Beta diversity of the samples. c Maximum likelihood tree of 50 detectable OTUs (the relative abundance > 1% in the given sample). The complete 16S rRNA gene sequences of the corresponding species in the RDP database were used to construct the tree. Red indicates the OTUs in the AWG group, and green indicates the OTUs in the YWG group. Only the OTUs with significant differences (p < 0.05) in relative abundance are shown behind the branches. The size of the symbol indicates the relative abundance of OTUs
Functional analysis of microbial communities
In this study, PICRUSt was used to predict the microbial community metagenome [27]. At the KEGG L2 level, OTUs in AWG were mainly enriched in metabolic pathways, such as amino acid metabolism, lipid metabolism, xenobiotics biodegradation and metabolism and human diseases. In YWG, the OTUs were enriched in energy metabolism, environmental adaptation, nucleotide metabolism and molecular signalling and interaction (Fig. 7a). The functions of each bacterial community in AWG and YWG were significantly different for all pathways (Fig. 7b).
KEGG pathways enriched in adults and larvae of insects (AWG and YWG). a Cluster of pathways in AWG and YWG. b The relative pathways abundance was compared between AWG and YWG
Moreover, PICRUSt predicted an enrichment of the relative abundance of genes relevant to lignocellulose degradation: genes encoding the carbohydrate-active enzymes (CAZymes), including glycoside hydrolases (GHs), glycosyltransferases (GTs), polysaccharide lyases (PLs), carbohydrate esterases (CEs) and auxiliary activities (AAs) (Additional file 6: Figure S4). Among the CAZyme family genes included in the prediction, many were closely related to lignocellulose degradation.
Composition and lignocellulose degradation potential of the core degradative microbiota
16S rRNA sequencing showed that Lactococcus, Serratia, Dysgonomonas, and Enterococcus represented about 84–94% of the microbiota, comprising the core digestive microbiota of AWG and YWG (Additional file 7: Figure S5). To further explore the potential role of Lactococcus, Serratia, Dysgonomonas, and Enterococcus in lignocellulose degradation, we compared their CAZymes content with that of Cytophaga hutchinsonii ATCC 33406, a model bacterium for lignocellulose bioconversion [28]. This comparison was based on the public CAZy databases (Table 3; Additional file 8: Table S3). Since no Dysgonomonas genome was available in the CAZy database, we manually annotated the CAZyme-coding sequences in the genomes of the genus available to date in the NCBI Genome, i.e., those of Dysgonomonas mossii DSM 22836, Dysgonomonas capnocytophagoides DSM 22835, Dysgonomonas macrotermitis and Dysgonomonas gadei ATCC BAA-286. Compared with the genome of C. hutchinsonii ATCC 33406, the genomes of Dysgonomonas contained more GHs and CEs (Table 3; Additional file 8: Table S3). To evaluate the ability of Lactococcus to degrade lignocellulose in the intestine, we examined the CAZyme content of the 45 Lactococcus genomes currently available in the CAZy database (Additional file 8: Table S3). The 45 genomes present lower numbers of CAZymes than the C. hutchinsonii genome (Table 3; Additional file 8: Table S3). However, the examined Lactococcus genomes were mostly enriched in GH1 and GH13, which putatively exhibit (among others) β-glucosidase and β-galactosidase activity [29] and α-amylase activity [30], respectively (Additional file 8: Table S3). For Serratia, 72 genomes were examined, which showed a higher capacity to degrade carbohydrates, including lignocellulose, than C. hutchinsonii, due to a higher number of CAZyme families and genes in the genome (Table 3; Additional file 8: Table S3). Moreover, the GHs of Serratia were enriched in GH1, GH2, GH3, GH4, GH13, GH18 and GH23, and more AA genes were present. Eighty-six Enterococcus genomes were compared with the C. hutchinsonii genome and exhibited more CAZyme families and genes, especially GHs (Table 3; Additional file 8: Table S3). The examined Enterococcus genomes were mostly enriched in GH1, GH13 and GH73, which putatively exhibit (among others) β-glucosidase and β-galactosidase activity [29], α-amylase [30] and lysozyme [31], respectively (Additional file 8: Table S3).
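The family-level comparison described above amounts to tallying annotated CAZyme genes per genome and family. A minimal sketch of that bookkeeping follows (the annotation table is made up for illustration; real annotations would come from the CAZy database or a tool such as dbCAN):

```python
import pandas as pd

# Hypothetical per-gene annotations: one row per predicted CAZyme gene.
annotations = pd.DataFrame({
    "genome": ["Serratia_sp1", "Serratia_sp1", "Enterococcus_sp1",
               "Enterococcus_sp1", "Lactococcus_sp1"],
    "family": ["GH1", "GH13", "GH1", "GH73", "GH13"],
})

# Genomes x families count matrix, the shape of comparison behind Table 3.
counts = (annotations.groupby(["genome", "family"])
          .size().unstack(fill_value=0))
print(counts)
print(counts.sum(axis=1))  # total putative CAZyme genes per genome
```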
Table 3 Total number of putative CAZy genes, including glycoside hydrolase (GH), carbohydrate esterase (CE), auxiliary activity (AA) and polysaccharide lyase (PL) genes in selected genomes
Microbiota phenotype prediction using BugBase
BugBase (https://bugbase.cs.umn.edu/) is an algorithm that predicts the organism-level coverage of functional pathways, as well as biologically interpretable phenotypes, such as oxygen tolerance, Gram staining and pathogenic potential, within complex microbiota using either whole-genome shotgun or marker-gene sequencing data [32]. However, no significant differences were observed in the relative abundance of any classification (Fig. 8; Additional file 9: Table S4), indicating similarities in the bacterial phenotypes of AWG and YWG. To date, few reports have focused on degradation by microbes such as Coriolus versicolor [33] and Galactomyces sp. CCZU11-1 [34]. These predicted results will therefore expand the current knowledge on the gut symbiotic microbiota of the beetle and support the culture of lignocellulose-degrading bacteria in vitro.
BugBase analyses, based on the NGS dataset. The outcome is grouped according to the modules AWG and YWG (X-axis). The relative abundance is given on the Y-axis. 'Mobile elements' refers to bacteria, most probably carrying mobile elements
The symbiotic microbiota of insects includes bacteria, protozoa, fungi and archaea, among which bacteria are the most representative. Symbiotic bacteria exist in Coleoptera, Blattaria, Isoptera, Diptera, Hemiptera, Mallophaga and Anoplura [35], and the symbiotic bacteria of insects primarily belong to the phyla Proteobacteria, Bacteroidetes, Firmicutes, Actinomycetes, Spirochetes and Verrucomicrobia [36].
Gut symbiotic microbes play an important role in nutrient supply, digestion and absorption. Bashir et al. [37] isolated 42 cellulose-degrading bacteria from the gut of termites, pill bugs, and stem borers. Dantur et al. [38] isolated 118 cellulose-degrading bacteria from the larval intestine of Diatraea saccharalis, and Manfredi et al. [39] isolated 233 cellulose-degrading bacteria from Spodoptera frugiperda and Diatraea saccharalis, among which Bacillus and Paenibacillus were the most prevalent. Peterson et al. [40] used antibiotics to clarify the role of symbiotic bacteria on cellulose degradation by Reticulitermes flavipes. Furthermore, Shotorkhoft et al. [41] isolated three bacteria exhibiting ligninase activity from Microcerotermes diversus and discussed their ability to degrade wheat straw.
In the present study, we used 16S rRNA sequencing to identify gut symbiotic bacteria in C. buqueti. Results showed that Lactococcus, Serratia, Dysgonomonas, and Enterococcus represented approximately 84–94% of the microbiota. Robert et al. [42] found that the most common cellulolytic strains isolated corresponded to the Ruminococcus and Enterococcus species found in the human colon. Shil et al. [43] showed that Enterococcus species participate in lignocellulose degradation by insects. In the present study, BSPs were treated with gut symbiotic microbiota, which revealed that the degradation efficiency of CCJ, XCJ and YCJ was 21.11%, 17.58% and 18.74%, respectively, for cellulose; 22.22%, 27.18% and 34.20%, respectively, for hemicellulose; and 19.83%, 24.30% and 32.97%, respectively, for lignin, indicating that the symbiotic microbiota degrades the cellulose, hemicellulose and lignin of BSPs. Moreover, gut symbiotic microbiota has the potential to be cultured in vitro and at large scale, and then applied to the pretreatment and hydrolysis of bamboo lignocellulose. BugBase (https://bugbase.cs.umn.edu/) was used to predict microbiota phenotypes, which allowed the expansion of current knowledge regarding the gut symbiotic microbiota of C. buqueti, with the potential to support in vitro cultures of lignocellulose-degrading bacteria.
The present study investigated the bamboo lignocellulose-degrading ability of the gut symbiotic microbiota of C. buqueti. The in vitro hydrolysis assay revealed bamboo lignocellulose-degrading efficiencies for CCJ, XCJ and YCJ of 21.11%, 17.58% and 18.74% for cellulose; 22.22%, 27.18% and 34.20% for hemicellulose; and 19.83%, 24.30% and 32.97% for lignin. Results from 16S rRNA sequencing identified four genera, namely Lactococcus, Serratia, Dysgonomonas and Enterococcus, representing approximately 84–94% of the microbiota in this system. These microorganisms exhibited abundant CAZyme genes and lignocellulose-degrading ability. Finally, the predicted results of BugBase support the in vitro culturing of lignocellulose-degrading gut bacteria from insects such as C. buqueti. This study has enriched our knowledge of bamboo lignocellulose-degrading microbiota, which can be applied to bamboo lignocellulose bioconversion.
Collection of insect samples
Adult and larval C. buqueti were collected in August 2018 in Muchuan City, Sichuan Province, China (28.96°N, 103.98°E). All adults were collected on the 3rd day after emergence [44]. Intestines were extracted from 15 individual adult (male and female) and 15 individual larval beetles, and then immediately stored in liquid nitrogen to ensure gut microbiome stabilisation until DNA extraction and 16S rRNA sequencing.
In vitro assay of bamboo shoot particles (BSPs) degradation by gut symbiotic microbiota
In the present study, BSPs at the early stage of lignification were used. BSPs were prepared by drying to a constant weight at 65 °C, crushing to particles using a pulverising machine, and straining through a 40-mesh sieve. Gut symbiotic microbiota of adult and larval beetles was prepared by grinding and mixing, respectively. The mixed suspension was then cultured for 15 days in liquid medium 1 (pH 7.2), composed of 0.04 g yeast extract, 0.1 g malt extract, 2 g CaCO3 and 10 g BSPs. Two millilitres of the culture was added to a second liquid medium (pH 7.2) composed of 0.5 g yeast extract, 0.5 g malt extract, 0.5 g tryptone, 0.5 g NaCl, 0.2 g KH2PO4, 0.13 g MgSO4·7H2O and 0.5 g CaCl2. The assay was then performed following the steps listed in Table 4.
Table 4 Design and determination methods of gut symbiotic microbiome degradation of bamboo shoot particles (BSPs) in vitro
As shown in Table 4, the samples were placed into 250-mL conical flasks and incubated in a constant-temperature shaker at 37 °C and 150 rpm for 6 days. The reaction products were inactivated at 100 °C for 30 min and centrifuged at 13,000 rpm for 10 min, after which the supernatant and the solid deposit were collected; the deposit was dried at 65 °C to a constant weight. The dried deposit was weighed, and its levels of cellulose, hemicellulose and lignin were determined and used for SEM. The supernatant was used to determine reducing sugar and low-molecular-weight products.
Determination of reducing sugar
Reducing sugar was quantified using the 3,5-dinitrosalicylic acid method [45].
Determination of cellulose, hemicellulose and lignin
The cellulose, hemicellulose, and lignin contents of the dried BSPs were determined using the Van Soest method [46] and the following formulas:
$$\text{Hemicellulose content} = \text{neutral detergent fibre (NDF)} - \text{acid detergent fibre (ADF)}$$
$$\text{Cellulose content} = \text{ADF} - \text{acid detergent lignin (ADL)}$$
$$\text{Lignin content} = \text{ADL} - \text{ash content}$$
Cellulose, hemicellulose and lignin degradation efficiencies
$$\text{Cellulose degradation efficiency} = \left(1 - \frac{\text{mass of cellulose in deposit}}{\text{mass of cellulose in raw material}}\right) \times 100\%$$
$$\text{Hemicellulose degradation efficiency} = \left(1 - \frac{\text{mass of hemicellulose in deposit}}{\text{mass of hemicellulose in raw material}}\right) \times 100\%$$
$$\text{Lignin degradation efficiency} = \left(1 - \frac{\text{mass of lignin in deposit}}{\text{mass of lignin in raw material}}\right) \times 100\%$$
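The three efficiency formulas above reduce to one calculation; a minimal Python helper follows (the component masses below are placeholders for illustration, not measured values from this study):

```python
def degradation_efficiency(mass_raw, mass_deposit):
    """Percentage of a component removed during treatment,
    per the formulas above."""
    return (1.0 - mass_deposit / mass_raw) * 100.0

# Placeholder component masses (g) in raw BSPs and in the dried deposit.
raw = {"cellulose": 4.50, "hemicellulose": 2.80, "lignin": 2.30}
deposit = {"cellulose": 3.55, "hemicellulose": 2.18, "lignin": 1.84}
for component in raw:
    eff = degradation_efficiency(raw[component], deposit[component])
    print(f"{component}: {eff:.2f}% degraded")
```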
Scanning electron microscopy (SEM)
BSPs collected 72 h after treatment were analysed using SEM (Hitachi 3400N, Japan) to observe the samples' surface morphology. Prior to imaging, the samples were sputter-coated with gold to a thickness of ~10 nm using an E-1010 sputter coater (Japan). The SEM operating current and voltage were 81 mA and 10 kV, respectively.
Determination of lignocellulolytic enzyme activities
From the co-cultures, 2-mL samples were withdrawn at 1, 2, 3, 4, 5 and 6 days for assays of lignocellulolytic enzyme activity. Endoglucanase (EC 3.2.1.4), exoglucanase (EC 3.2.1.91), β-glucosidase (EC 3.2.1.21), lignin peroxidase (LiP)-like, laccase-like and xylanase activities were analysed as previously described by Luo et al. [18]. Briefly, carboxymethyl cellulose (CMC), microcrystalline cellulose (MCC), salicin, veratryl alcohol (VA), 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and xylan were used as substrates to determine endoglucanase, exoglucanase, β-glucosidase, LiP-like, laccase-like and xylanase activities, respectively. All assays were performed five times.
Gas chromatography–mass spectrometry (GC–MS)
On day 6, control and treatment cultures were collected and centrifuged at 8000 rpm for 20 min to remove biomass. The supernatants were collected and treated using the method described by Raj et al. [47]. Briefly, the supernatants were extracted three times using equal volumes of dichloromethane; the extracts were collected, dewatered with anhydrous Na2SO4, filtered and concentrated to ~1 mL. Next, 100-µL dioxane and 10-µL pyridine were added to the sample, followed by silylation with 50-µL trimethylsilyl reagent (N,O-bis[trimethylsilyl]trifluoroacetamide [BSTFA]/trimethylchlorosilane [TMS] = 99/1 [v/v]). Gas chromatography–mass spectrometry (GC–MS) was performed according to the procedure described by Chen et al. [48]. The TMS derivatives were identified by comparing their mass spectra with the NIST library.
DNA extraction, amplification and sequencing of 16S rRNA encoding gene sequences
DNA from different samples was extracted using the E.Z.N.A. ®Stool DNA Kit (D4015, Omega, Inc., USA) according to the manufacturer's instructions. PCR amplification was performed in 25 μL of the following reaction mixture: 50-ng DNA, 12.5-μL Phusion Hot start flex 2X Master Mix (NEB, M0536L), 2.5-μL forward primer 338F 5′-ACTCCTACGGGAGGCAGCAG-3′ and 2.5-μL reverse primer 806R 5′-GGACTACHVGGGTWTCTAAT-3′ [49]. The PCR was performed at 98 °C for 30 s, 35 cycles of 98 °C for 10 s, 54 °C for 30 s, 72 °C for 45 s and 72 °C for 10 min. The amplicon pools were prepared for sequencing. The libraries were sequenced on Illumina MiSeq Instrument (Illumina Inc., San Diego, CA, USA) using the 300 bp paired-end protocol.
QIIME v1.9.1 [50] was used to further analyse the sequences, using scripts available on the QIIME website (http://qiime.org/). First, potential chimeras were identified in valid sequences using de novo UCHIME (usearch v9.0.2132_i86linux32) [51] and removed with QIIME built-in Python scripts. The resulting sequences were defined as good sequences and clustered into OTUs at 97% similarity using the de novo UCLUST algorithm [52]. Taxonomic assignment was then performed using the GreenGenes database (version 13.8) [53], after which an OTU abundance table was constructed using the QIIME Python scripts. Multiple sequence alignment was conducted using PyNAST (v1.2.2) [54], and a phylogenetic tree was constructed using FastTree (v2.1.9) [55] to investigate the phylogenetic relationships of different OTUs. To reduce the noise of the amplicon dataset and make the data more credible, OTUs whose abundance was lower than 0.001% of the total were removed, resulting in a modified OTU abundance table [56]. Using the phylogenetic tree and the modified OTU abundance table, alpha diversity (ACE, Chao1, Shannon, Simpson, observed OTUs, Goods coverage, PD whole tree) was calculated using QIIME scripts, and beta diversity (Bray–Curtis, weighted and unweighted UniFrac) was estimated with the phyloseq package (v1.20.0) [57]. Both were visualised using R (v3.4.1).
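The noise-reduction step described above (removing OTUs below 0.001% of total abundance) is simple to reproduce; here is a minimal pandas sketch with toy counts (ours, not the study's data):

```python
import pandas as pd

def filter_rare_otus(otu_table, threshold=1e-5):
    """Drop OTUs whose total count is below `threshold` (0.001%)
    of all reads; otu_table has OTUs as rows, samples as columns."""
    totals = otu_table.sum(axis=1)
    return otu_table.loc[totals / totals.sum() >= threshold]

table = pd.DataFrame(
    {"AWG1": [60000, 40000, 1], "YWG1": [50000, 35000, 0]},
    index=["OTU_1", "OTU_2", "OTU_3"],
)
print(filter_rare_otus(table))  # OTU_3 falls below the threshold and is removed
```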
Differentially abundant OTUs and microbes
DESeq 2 (v1.16.1) was used to detect differentially abundant OTUs among groups [58]. Padj < 0.01 was considered as statistically significant. DESeq2 tests were performed using the Qiime python script. Differences among microbes regarding abundance of different taxonomic ranks were determined using a Metastats analysis, which is based on a non-parametric t-test [59]. The relative abundance of a microbe in a sample was calculated by determining the read count normalised by the total reads in that sample. Microbes with a relative abundance lower than 1% in all samples were classified into 'others'. A level of p < 0.05 was considered as statistically significant. Metastats tests were performed using the EDDA R package (v1.10.0) [60].
LEfSe analysis
The LEfSe algorithm was used to identify different abundance biomarkers between the two groups [61]. LEfSe couples robust tests such as Kruskal–Wallis test with quantitative tests such as Wilcoxon rank-sum test. After LDA, the features are ranked by effect size. An effect size threshold above 3 (on a log10 scale) was used for all biomarkers discussed in this study.
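The following sketch illustrates the filtering logic only, i.e. a Kruskal–Wallis screening followed by the effect-size threshold; the LDA scores themselves are assumed to have been produced by the LEfSe tool, so this is not a reimplementation of the algorithm:

```python
from scipy import stats

def lefse_style_screen(features_a, features_b, lda_scores,
                       alpha=0.05, lda_cut=3.0):
    """Keep feature indices that pass a Kruskal-Wallis test between the
    two groups and whose (precomputed, log10-scale) LDA effect size
    exceeds the threshold of 3 used in this study."""
    selected = []
    for i, (a, b) in enumerate(zip(features_a, features_b)):
        _, p = stats.kruskal(a, b)
        if p < alpha and abs(lda_scores[i]) > lda_cut:
            selected.append(i)
    return selected
```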
Inferred metagenomics and phenotypes
The PICRUSt (v1.1.2) (http://picrust.github.io) was used to predict microbial community metagenome [27]. First, a collection of closed-reference OTUs was obtained from the filtered reads of QIIME v 1.9.1 and by querying the data against the GreenGenes database (http://greengenes.lbl.gov). The OTUs were assigned at 97% identity. PICRUSt was used to predict and derive relative KEGG pathway abundance. Microbial phenotypes were predicted using BugBase (https://bugbase.cs.umn.edu/) [32], a software that relies on the tools PICRUSt, IMG, KEGG and PATRIC.
Statistical analyses were performed using SPSS 19.0 (IBM SPSS, Armonk, NY, USA). Descriptive data were expressed as mean ± standard error of mean (SEM). A Student t-test was used to compare the means from two groups. Comparisons of more than two groups were performed using analysis of variance. A level of p < 0.05 indicated a statistically significant difference.
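For reference, the equivalent tests in Python (scipy) on hypothetical replicate data would read as follows; the actual analyses were run in SPSS as stated above:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements for two and three groups.
rng = np.random.default_rng(0)
group_a, group_b, group_c = (rng.normal(m, 1.0, size=6) for m in (0.0, 0.5, 1.2))

t_stat, p_two = stats.ttest_ind(group_a, group_b)           # two groups: t-test
f_stat, p_many = stats.f_oneway(group_a, group_b, group_c)  # >2 groups: ANOVA
print(f"t-test p = {p_two:.3f}, ANOVA p = {p_many:.3f}; significant if p < 0.05")
```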
C. buqueti :
Cyrtotrachelus buqueti
KEGG:
Kyoto Encyclopedia of Genes and Genomes
OTU:
operational taxonomic units
LDA:
linear discriminant analysis
GHs:
glycoside hydrolases
GTs:
glycosyltransferases
CEs:
carbohydrate esterases
CBMs:
carbohydrate-binding modules
PLs:
polysaccharide lyases
AAs:
auxiliary activities
CAZyme:
carbohydrate-active enzymes
NCBI:
The National Center for Biotechnology Information
ABTS:
[2,2′-Azino-bis (3-ethylbenzothiazoline-6-sulfonic acid)]
BSPs:
bamboo shoot particles
CCJ:
gut symbiotic microbiota of female beetles
XCJ:
gut symbiotic microbiota of male beetles
YCJ:
gut symbiotic microbiota of larvae
LEfSe:
Linear Discriminant Analysis Effect Size
TIC:
total ion chromatograms
Sun JZ, Scharf ME. Exploring and integrating cellulolytic systems of insects to advance biofuel technology. Insect Sci. 2010;17:163–5.
Rizzi A, Crotti E, Borruso L, Jucker C, Lupi D, Colombo M, et al. Characterization of the bacterial community associated with larvae and adults of Anoplophora chinensis collected in Italy by culture and culture-independent methods. Biomed Res Int. 2013;2013:420287.
Oppert C, Klingeman WE, Willis JD, Oppert B, Jurat-Fuentes JL. Prospecting for cellulolytic activity in insect digestive fluids. Comp Biochem Physiol Part B. 2010;155:145–54.
Zhang HY, Jackson TA. Autochthonous bacterial flora indicated by PCR-DGGE of 16S rRNA gene fragments from the alimentary tract of Costelytra zealandica (Coleoptera: Scarabaeidae). J Appl Microbiol. 2008;105:1277–85.
Cazemier AE, Verdoes JC, Reubsaet FAG, Hackstein JH, Van der Drift C, Opden Camp HJM. Promicromonospora pachnodae sp. nov., a member of the (hemi) cellulolytic hindgut flora of larvae of the scarab beetle Pachnoda marginata. Antonie van Leeuwenhoek. 2003;83:135–48.
Delalibera I, Handelsman JJ, Kenneth FR. Contrasts in cellulolytic activities of gut microorganisms between the wood borer, Saperda vestita (Coleoptera: Cerambycidae), and the bark beetles, Ips pini and Dendroctonus frontalis (Coleoptera: Curculionidae). Environ Entomol. 2005;34:541–7.
Rogers TE, Peterson JB. Analysis of cellulolytic and hemicellulolytic enzyme activity within the Tipula abdominalis (Diptera; Tipulidae) larval gut and characterization of Crocebacterium ilecola gen. nov., sp. nov., isolated from the Tipula abdominalis larval hindut. Insect Sci. 2010;17:291–302.
Ni JF, Tokuda G. Lignocellulose-degrading enzymes from termites and their symbiotic microbiota. Biotechnol Adv. 2013;31:838–50.
Poulsen M, Hu HF, Li C, Chen ZS, Xu LH, Otani S, et al. Complementary symbiont contributions to plant decomposition in a fungus-farming termite. Proc Natl Acad Sci USA. 2014;111:14500–5.
Hongoh Y, Deevong P, Hattori S, Inoue T, Noda S, Noparatnaraporn N, et al. Phylogenetic diversity, localization, and cell morphologies of members of the candidate phylum TG3 and a subphylum in the phylum Fibrobacteres, recently discovered bacterial groups dominant in termite guts. Appl Environ Microbiol. 2006;72:6780–8.
Warnecke F, Luginbühl P, Ivanova N, Ghassemian M, Richardson TH, Stege JT, et al. Metagenomic and functional analysis of hindgut microbiota of a wood-feeding higher termite. Nature. 2007;450:560–5.
Abrar M, Ya F, Youming H, Zhanghong S. The gut entomotype of red palm weevil Rhynchophorus ferrugineus olivier (Coleoptera: Dryophthoridae) and their effect on host nutrition metabolism. Front Microbiol. 2017;8:2291.
Hu X, Wang C, Chen H, Ma J. Differences in the structure of the gut bacteria communities in development stages of the Chinese white pine beetle (Dendroctonus armandi). Int J Mol Sci. 2013;14:21006–20.
Anand AAP, Vennison SJ, Sankar SG, Prabhu DIG, Vasan PT, Raghuraman T, et al. Isolation and characterization of bacteria from the gut of Bombyx mori that degrade cellulose, xylan, pectin and starch and their impact on digestion. J Insect Sci. 2010;10:107.
Sindhu SS, Dadarwal KR. Chitinolytic and cellulolytic Pseudomonas sp. antagonistic to fungal pathogens enhances nodulation by Mesorhizobium sp. Cicer in chickpea. Microbiol Res. 2001;156:353–8.
Trivedi N, Gupta V, Kumar M, Kumari P, Reddy CRK, Jha B. An alkali-halotolerant cellulase from Bacillus flexus isolated from green seaweed Ulva lactuca. Carbohydr Polym. 2011;83:891–7.
Yang YJ, Wang SF, Gong JW, Liu C, Mu C, Qin H. Relationships among Cyrtotrachelus buqueti larval density and wormhole number and bamboo shoot damage degree. J Appl Ecol. 2009;20:1980.
Luo CB, Li YQ, Liao H, Yang YJ. De novo transcriptome assembly of the bamboo snout beetle Cyrtotrachelus buqueti reveals ability to degrade lignocellulose of bamboo feedstock. Biotechnol Biofuels. 2018;11:292.
Watanabe H, Tokuda G. Cellulolytic systems in insects. Annu Rev Entomol. 2010;55:609–32.
Bayané A, Guiot SR. Animal digestive strategies versus anaerobic digestion bioprocesses for biogas production from lignocellulosic biomass. Rev Environ Sci Biotechnol. 2011;10:43–62.
Monlau F, Kaparaju P, Trably E, Steyer JP, Carrere H. Alkaline pretreatment to enhance one-stage CH4 and two-stage H2/CH4 production from sunflower stalks: mass, energy and economical balances. Chem Eng J. 2015;260:377–85.
Yang Z, Zhang M, Xin D, Wang J, Zhang J. Evaluation of aqueous ammonia pretreatment for enzymatic hydrolysis of different fractions of bamboo shoot and mature bamboo. Bioresour Technol. 2014;173:198–206.
Li KN, Wang X, Wang JF, Zhang JH. Benefits from additives and xylanase during enzymatic hydrolysis of bamboo shoot and mature bamboo. Bioresour Technol. 2015;192:424–31.
Potumarthi R, Baadhe RR, Nayak P, Jetty A. Simultaneous pretreatment and sacchariffication of rice husk by Phanerochete chrysosporium for improved production of reducing sugars. Bioresour Technol. 2013;128:113–7.
Parameswaran N, Liese W. Ultrastructural aspects of bamboo cells. Cellul Chem Technol. 1980;14:587–609.
Lin J, He X, Hu Y, Kuang T, Ceulemans R. Lignification and lignin heterogeneity for various age classes of bamboo (Phyllostachys pubescens) stems. Physiol Plant. 2010;114:296–302.
Langille MG, Zaneveld J, Caporaso JG, McDonald D, Knights D, Reyes JA, et al. Predictive functional profiling of microbial communities using 16S rRNA marker gene sequences. Nat Biotechnol. 2013;31:1–10.
Zhu Y, Li H, Zhou H, Chen G, Liu W. Cellulose and cellodextrin utilization by the cellulolytic bacterium Cytophaga hutchisonii. Bioresour Technol. 2010;101:6432–7.
McKenna DD, Scully ED, Pauchet Y, Hoover K, Kirsch R, Geib SM, et al. Genome of the Asian longhorned beetle (Anoplophora glabripennis), a globally significant invasive species, reveals key functional and evolutionary innovations at the beetle–plant interface. Genome Biol. 2016;17:227.
Manners DJ. Enzymic synthesis and degradation of starch and glycogen. Adv Carbohydr Chem. 1962;17:371–430.
Blade CCF, Johnson LN, Mair GA, North ACT, Phillips DC, Sarma VR. Crystallographic studies of the activity of hen egg-white lysozyme. Proc R Soc Lond B Biol Sci. 1967;167:378–88.
Ward T, Larson J, Meulemans J, Hillmann B, Lynch J, Sidiropoulos D, et al. BugBase predicts organism level microbiota phenotypes. BioRxiv. 2017;1–19.
Zhang X, Xu C, Wang H. Pretreatment of bamboo residues with Coriolus versicolor for enzymatic hydrolysis. J Biosci Bioeng. 2007;104:149–51.
Dai Y, Zhang HS, Huan B, He Y. Enhancing the enzymatic saccharification of bamboo shoot shell by sequential biological pretreatment with Galactomyces sp. cczu11-1 and deep eutectic solvent extraction. Bioprocess Biosyst Eng. 2017;40:1–10.
Rosdocimi EM, Mapelli F, Gonella E, Borin S, Crotti E. Microbial ecology-based methods to characterize the bacterial communities of non-model insects. J Microbiol Methods. 2015;119:110–25.
Colman DR, Toolson EC, Takacs-Vesbach CD. Do diet and taxonomy influence insect gut bacterial communities? Mol Ecol. 2012;21:5124–37.
Bashir Z, Kondapalli VK, Adlakha N, Sharma A, Bhatnagar RK, Chandel G, Yazdani SS. Diversity and functional significance of cellulolytic microbes living in termite, pill-bug and stem-borer guts. Sci Rep. 2013;3:2558.
Dantur KI, Enrique R, Welin B, Castagnaro AP. Isolation of cellulolytic bacteria from the intestine of Diatraea saccharalis larvae and evaluation of their capacity to degrade sugarcane biomass. AMB Express. 2015;5:1–11.
Manfredi AP, Perotti NI, Martínez MA. Cellulose degrading bacteria isolated from industrial samples and the gut of native insects from Northwest of Argentina. J Basic Microbiol. 2015;55:1384–93.
Peterson BF, Stewart HL, Scharf ME. Quantification of symbiotic contributions to lower termite lignocellulose digestion using antimicrobial treatments. Insect Biochem Mol Biol. 2015;59:80–8.
Shotorkhoft A, Mohammadabadi T, Motamedi H, Chaji M, Fazaeli H. Isolation and identification of termite gut symbiotic bacteria with lignocellulose-degrading potential, and their effects on the nutritive value for ruminants of some by-products. Anim Feed Sci Technol. 2016;221:234–42.
Robert C, Bernalier-Donadille A. The cellulolytic microflora of the human colon: evidence of microcrystalline cellulose-degrading bacteria in methane-excreting subjects. FEMS Microbiol Ecol. 2003;46:81–9.
Shil RK, Mojumder S, Sadida FF, Uddin M, Sikdar D. Isolation and identification of cellulolytic bacteria from the gut of three phytophagous insect species. Braz Arch Biol Technol. 2014;57:927–32.
Yang H, Su T, Yang W, Yang CP, Lu L, Chen ZM. The developmental transcriptome of the bamboo snout beetle Cyrtotrachelus buqueti and insights into candidate pheromone-binding proteins. PLoS ONE. 2017;12:e0179807.
Miller GL. Use of dinitrosalicylic acid reagent for the determination of reducing sugar. Anal Chem. 1959;31:426–8.
Van Soest PJ, Robertson JB, Lewis BA. Methods for dietary fiber, neutral detergent fiber, and nonstarch polysaccharides in relation to animal nutrition. J Dairy Sci. 1991;74:3583–97.
Raj A, Reddy MMK, Chandra R. Identification of low molecular weight aromatic compounds by gas chromatography-mass spectrometry (GC–MS) from kraft lignin degradation by three Bacillus sp. Int Biodeterior Biodegrad. 2007;59:292–6.
Chen YH, Chai LY, Zhu YH, Yang ZH, Zheng Y, Zhang H. Biodegradation of kraft lignin by a bacterial strain Comamonas sp. B-9 isolated from eroded bamboo slips. J Appl Microbiol. 2012;112:900–6.
Ren D, Gong S, Shu J, Zhu J, Rong F, Zhang Z, et al. Mixed Lactobacillus plantarum strains inhibit Staphylococcus aureus induced inflammation and ameliorate intestinal microflora in mice. BioMed Res Int. 2017;7476467.
Caporaso JG, Kuczynski J, Stombaugh J, Bittinger K, Bushman FD, Costello EK, et al. QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010;7:335–6.
Edgar RC, Haas BJ, Clemente JC, Quince C, Knight R. UCHIME improves sensitivity and speed of chimera detection. Bioinformatics. 2011;27:2194–200.
Edgar RC. Search and clustering orders of magnitude faster than BLAST. Bioinformatics. 2010;26:2460–1.
DeSantis TZ, Hugenholtz P, Larsen N, Rojas M, Brodie EL, Keller K, et al. Greengenes, a chimera-checked 16S rRNA gene database and workbench compatible with ARB. Appl Environ Microbiol. 2006;72:5069–72.
Caporaso JG, Bittinger K, Bushman FD, DeSantis TZ, Andersen GL, Knight R. PyNAST: a flexible tool for aligning sequences to a template alignment. Bioinformatics. 2009;26:266–7.
Price MN, Dehal PS, Arkin AP. FastTree: computing large minimum evolution trees with profiles instead of a distance matrix. Mol Biol Evol. 2009;26:1641–50.
Bokulich NA, Subramanian S, Faith JJ, Gevers D, Gordon JI, Knight R, et al. Quality-filtering vastly improves diversity estimates from Illumina amplicon sequencing. Nat Methods. 2013;10:57–9.
McMurdie PJ, Holmes S. Phyloseq: an R package for reproducible interactive analysis and graphics of microbiome census data. PLoS ONE. 2013;8:e61217.
Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014;15:550.
White JR, Nagarajan N, Pop M. Statistical methods for detecting differentially abundant features in clinical metagenomic samples. PLoS Comput Biol. 2009;5:e1000352.
Juntao L, Huaien L, Burton CKH, Nagarajan N. EDDA: experimental design in differential abundance analysis. Genome Biol. 2014;15:527.
Segata N, Izard J, Waldron L, Gevers D, Miropolsky L, Garrett WS, et al. Metagenomic biomarker discovery and explanation. Genome Biol. 2011;12:R60.
CBL, YQL and YJY designed and performed the experiments; CBL and YJY wrote the manuscript; CBL, YQL, YC, CF, WCL, XMX and HL analysed the data. All authors read and approved the final manuscript.
We would like to thank Chengdu Basebiotech Co., Ltd for its assistance in original data processing and related bioinformatics analysis. We also thank other members of the laboratory for suggestions and discussion regarding this work and revision of the manuscript.
The Illumina high-throughput sequencing data are available in the NCBI Sequence Read Archive (SRA) under BioProject PRJNA503626. The raw sequencing data of the bacteria can be accessed through accession numbers SRR8148938–SRR8148952 and SRR8148220–SRR8148234 in the NCBI database.
The authors have consented for publication.
Ethical approval and consent to participate
This work was supported by Sichuan science and technology programme (2019YFG0139) and the National Natural Science Foundation of China (31470655).
Chaobing Luo and Yuanqiu Li contributed equally to this work
Bamboo Diseases and Pests Control and Resources Development Key Laboratory of Sichuan Province, Leshan Normal University, No. 778, Riverside Road, Central District, Leshan, 614000, Sichuan, China
Chaobing Luo, Yuanqiu Li, Ying Chen, Chun Fu, Wencong Long, Ximeng Xiao, Hong Liao & Yaojun Yang
College of Food and Biological Engineering, Xihua University, Chengdu, 610039, Sichuan, China
Yuanqiu Li, Ying Chen & Ximeng Xiao
Correspondence to Yaojun Yang.
Additional file 1: Figure S1.
TIC of dichloromethane extract analysed as TMS derivative from control (A) and treatment CCJ (B), XCJ (C) and YCJ (D) in vitro. CCJ: gut symbiotic microbiota of female beetle; XCJ: gut symbiotic microbiota of male beetle; YCJ: gut symbiotic microbiota of beetle larvae.
Additional file 2: Table S1.
Microbial composition of the AWG and YWG groups at the phyla level.
Microbial composition of the AWG and YWG groups at the genus level.
Non-metric multidimensional scaling (NMDS) analysis of the Bray–Curtis similarity coefficients based on the relative abundance of OTUs in the given sample.
Boxplot analysis comparing the bacterial OTUs between the two groups.
Relative abundance of PICRUSt-predicted CAZyme genes relevant to lignocellulose degradation. (A) Glycoside hydrolases (GHs). (B) Glycosyl transferases (GTs). (C) Carbohydrate esterases (CEs). (D) Carbohydrate-binding modules (CBMs). (E) Auxiliary activities (AAs).
Composition of the digestive core microbiota at the genus level.
Number of annotated genes for each GH, CE, PL and AA family in all Lactococcus, Serratia, Enterococcus and Dysgonomonas genomes available at http://www.cazy.org/and in Cytophaga hutchinsonii ATCC 33406. GH, glycoside hydrolase; CE, carbohydrate esterase; AA, auxiliary activity; PL, polysaccharide lyase; CAZyme, carbohydrate-active enzyme.
Relative abundance of the predicted classifications in each sample by BugBase.
Luo, C., Li, Y., Chen, Y. et al. Bamboo lignocellulose degradation by gut symbiotic microbiota of the bamboo snout beetle Cyrtotrachelus buqueti. Biotechnol Biofuels 12, 70 (2019). https://doi.org/10.1186/s13068-019-1411-1
Lignocellulose degradation
Gut symbiotic microbiota
16S rRNA sequencing
A framework for continuous target tracking during MR-guided high intensity focused ultrasound thermal ablations in the abdomen
Cornel Zachiu (ORCID: orcid.org/0000-0001-9755-4584)1,
Baudouin Denis de Senneville1,2,
Ivan D. Dmitriev1,
Chrit T. W. Moonen1 &
Mario Ries1
During lengthy magnetic resonance-guided high intensity focused ultrasound (MRg-HIFU) thermal ablations in abdominal organs, the therapeutic work-flow is frequently hampered by various types of physiological motion occurring at different time-scales. If left unaddressed, this can lead to an incomplete therapy and/or to tissue damage of organs-at-risk. While previous studies focus on correction schemes for displacements occurring at a particular time-scale within the work-flow of an MRg-HIFU therapy, in the current work we propose a motion correction strategy encompassing the entire work-flow.
The proposed motion compensation framework consists of several linked components, each being adapted to motion occurring at a particular time-scale. While respiration was addressed through a fast correction scheme, long term organ drifts were compensated using a strategy operating on time-scales of several minutes. The framework relies on a periodic examination of the treated area via MR scans which are then registered to a reference scan acquired at the beginning of the therapy. The resulting displacements were used for both on-the-fly re-optimization of the interventional plan and to ensure the spatial fidelity between the different steps of the therapeutic work-flow. The approach was validated in three complementary studies: an experiment conducted on a phantom undergoing a known motion pattern, a study performed on the abdomen of 10 healthy volunteers and during 3 in-vivo MRg-HIFU ablations on porcine liver.
Results have shown that, during lengthy MRg-HIFU thermal therapies, the human liver and kidney can manifest displacements that exceed acceptable therapeutic margins. Also, it was demonstrated that the proposed framework is capable of providing motion estimates with sub-voxel precision and accuracy. Finally, the 3 successful animal studies demonstrate the compatibility of the proposed approach with the work-flow of an MRg-HIFU intervention under clinical conditions.
In the current study we proposed an image-based motion compensation framework dedicated to MRg-HIFU thermal ablations in the abdomen, providing the possibility to re-optimize the therapy plan on-the-fly with the patient on the interventional table. Moreover, we have demonstrated that even under clinical conditions, the proposed approach is fully capable of continuously ensuring the spatial fidelity between the different phases of the therapeutic work-flow.
Percutaneous thermal ablation of tumors has emerged as an alternate treatment option for patient groups affected by unresectable pathologies and/or who are not eligible for surgical interventions [1]. Such therapies rely on locally increasing the temperature of the pathological tissue to an extent that induces irreversible cell injury and eventually apoptosis and/or coagulative necrosis [2]. In particular, high intensity focused ultrasound (HIFU) [3–5] is currently the only percutaneous thermal ablation modality capable of non-invasive treatment delivery [1, 6]. It has already met with success in treating several medical conditions such as: palliation of painful bone metastases, uterine fibroids, prostate malignancies, liver tumors and several neurological diseases such as tremor-dominant Parkinson's or neuropathic pain [7]. However, HIFU thermal ablations still remain challenging when the target pathology is situated in the abdomen or lower thorax. The challenge mainly stems from the fact that therapy delivery in such areas is hampered by various types of physiological motion, occurring at different time scales [8–10]:
Respiratory motion, for example, leads to a rapid quasi-periodic displacement of the organs in the upper abdomen and thoracic cage, with a typical frequency of 0.2 - 0.3 Hz (3 - 5 s per respiratory cycle)[11]. Previous studies have addressed this type of motion through different compensation techniques such as breath-holding, gating and/or beam-steering, with each approach involving their own set of advantages and drawbacks [12].
Digestive activity, metabolic processes and muscle relaxation have been identified to lead to significant displacements of abdominal organs on time scales of several minutes [13–15]. For this reason, in the scope of this paper, such motion will be referred to as slow physiological/long term drifts. The term "drift" was chosen due to the fact that contrary to respiration, these types of motion are generally of progressive nature and irreversible. The problem of digestive and peristaltic activity, in particular, can be alleviated by adjusting the patient's diet prior to the intervention [16] or by the administration of drugs such as butylscopolamine and/or glucagon acting as digestive motility inhibitors [17, 18]. Long term drifts originating from other physiological sources such as bladder filling are usually addressed by the use of Foley catheters [19, 20].
Finally, spontaneous motion due to, for example, muscle spasms is fast and infrequent, making it difficult to predict and to compensate for. It becomes particularly problematic for long interventions in absence of sedation or anesthesia, requiring the patient to lie in an uncomfortable position for lengthy periods of time. This is usually addressed by using molds or casts or by putting the patient under sedation [21, 22].
If left unaddressed, motion can lead to the therapeutic energy being diverted from the anatomy due for ablation. This not only increases the risk of under-treating the pathology but can also lead to unnecessary damage to otherwise healthy tissue [9, 23, 24]. Thus, a motion compensation strategy dedicated to HIFU thermal ablations in the abdomen can be beneficial for reducing the probability of such developments.
Regardless of the approach used for thermal ablation, there are multiple imaging modalities that can be used for therapy guidance [12, 25–28], with magnetic resonance imaging (MRI) possibly being the most versatile. Besides allowing precise delineation and identification of the pathology due to its superior soft tissue contrast [29], it also provides a means to non-invasively monitor in real-time the temperature of the treated anatomy and its surroundings through a technique called MR-thermometry [30]. The MR temperature measurements are typically the main observable during MR-guided HIFU (MRg-HIFU) thermal ablations. By computing the time integral of a non-linear temperature dependent term at a particular anatomical location, thermal dose measurements can be obtained [31], providing a means to quantify thermal damage. The unit of measurement for thermal dose is equivalent minutes at 43°C (CEM43), with an anatomy being regarded as necrotic once it exceeds 240 CEM43 [31, 32]. However, motion-induced spatial misalignments between the MR temperature maps will most likely lead to miscalculations of the delivered thermal dose, since the associated time integral at a point in space actually includes temperature measurements from different anatomical locations. Thus, a motion compensation strategy which ensures the spatial alignment between the MR temperature maps is expected to improve the monitoring of therapy progress and effectiveness. Moreover, MR-thermometry acquisition sequences that are optimized for acquisition speed, which is usually the case for MRg-HIFU therapies, are often subject to geometric distortions [33]. This leads to a spatial inconsistency between the apparent location of a voxel in the acquired image and its true position in the imaged anatomy. Thus, a spatial misalignment between the temperature maps and the true underlying anatomy may occur. Therefore, it would be preferable that the geometric distortions, which can potentially affect the MR-thermometry images, are addressed prior to the calculation of the thermal dose.
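For concreteness, the standard Sapareto–Dean formulation of the thermal dose, CEM43 = Σ R^(43−T_i)·Δt_i with R = 0.5 for T ≥ 43 °C and R = 0.25 otherwise, can be sketched as below; this is an illustrative implementation, not the code of any particular clinical system:

```python
import numpy as np

def cem43(temps_celsius, dt_seconds):
    """Cumulative equivalent minutes at 43 C for a per-voxel temperature
    time series (time on axis 0), sampled every dt_seconds."""
    T = np.asarray(temps_celsius, dtype=float)
    R = np.where(T >= 43.0, 0.5, 0.25)
    return np.sum(R ** (43.0 - T), axis=0) * dt_seconds / 60.0

# Example: one voxel held at 45 C for 60 s accrues 2^2 = 4 CEM43;
# 240 CEM43 is the necrosis threshold quoted above.
print(cem43(np.full((60, 1), 45.0), 1.0))  # -> [4.]
```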
While previous studies concerned with motion compensation during MRg-HIFU thermal therapies focused on displacements occurring at a particular time-scale [12, 34], in the current study we propose a motion correction framework that encompasses the entire intervention. Our solution consists of several linked motion compensation modules, each addressing a particular type of displacement/deformation, including:
A correction scheme for slow physiological motion, allowing on-the-fly adaptation of the interventional plan according to the displacements exhibited by the target anatomy.
A respiratory motion compensation scheme operating during HIFU energy deliveries, which ensures the spatial alignment between temperature measurements. This, in turn, is expected to lead to a more accurate evaluation of the delivered thermal dose.
A feature which allows therapy progress to be evaluated on-the-fly, in a spatially consistent way, on the interventional planning image(s) acquired at the beginning of the therapy.
A method for correcting the geometric distortions that frequently affect the MR-thermometry images, ensuring a spatial consistency between the estimated thermal dose maps and the true underlying anatomy.
The above features were integrated in the work-flow of an MRg-HIFU thermal ablation, ensuring continuous availability of the target position over the duration of the intervention. For the remainder of the manuscript percutaneous thermal ablations by the means of HIFU will be simply referred to as HIFU therapies/interventions/ablations.
General strategy
Figure 1 illustrates schematically the typical work-flow of an MRg-HIFU therapy, together with the proposed motion correction framework. The work-flow of an MRg-HIFU thermal therapy is in practice frequently episodic: bursts of energy deliveries (called sonications) are interleaved with periods of inactivity, during which the tissues in the near- and far-field are allowed to cool down. This allowed various MR-scans to be integrated at different stages of the therapy, which had the purpose of sampling the position of the treated anatomy and its surroundings over the duration of the intervention. The scans include 3D acquisitions several minutes apart (green boxes in Fig. 1), for sampling long-term drifts, while motion during sonications was sampled using more rapidly acquired 2D MR images, primarily used for thermometry (orange box in Fig. 1). Motion estimation was achieved by comparing the acquired MR-images, using image registration [35], to a reference image acquired at the beginning of the therapy. In order to relate all the acquired MR images to a single reference, several registration steps were employed: 1) A 3D-to-3D registration step (RS #1), for estimating long term drifts; 2) A 2D-to-2D registration step (RS #2), for estimating motion during sonications; 3) A 2D-to-3D registration step (RS #3), for estimating any residual displacement between a 2D reference image (pink box in Fig. 1) and its preceding 3D volume. All of these steps and the manner in which they link to each other will be described in detail during later sections.
Proposed motion estimation framework for continuous target tracking during MRg-HIFU interventions
The design of the framework includes images of two different MR contrasts: one for the 3D and one for the 2D images. For this reason, image registration was performed using two classes of algorithms: mono- and multi-modal [36]. In the scope of this study, a mono-modal registration method, namely the optical flow algorithm [37], was employed when the compared images had the same contrast weighting (more specifically during (RS #1) and (RS #2)). For images acquired with different contrasts, a multi-modal method based on the modality independent neighborhood descriptors (MIND) [38] was used (more specifically during (RS #3)). An important feature of both the optical flow and the MIND algorithm is their capability of providing dense and elastic deformations. This is particularly beneficial for estimating the complex deformations underwent by abdominal and thoracic organs.
The resulting estimated displacements/deformations provided by the framework were used for two purposes:
A "down-stream" propagation of the planned sonication positions such that their updated location match the initial anatomy due for ablation.
An "up-stream" propagation of the thermal dose delivered by each individual sonication such that therapy progress evaluation can be made in a common reference-frame.
Both the MR-acquisitions and the energy deliveries were performed using respiratory gating via a pencil-beam navigator placed on the diaphragm [39]. This implied that images were acquired and/or the HIFU beam was turned on only when the diaphragm was close to a predefined range of locations, referred to as the gating window.
Estimation of the 3D slow physiological drifts
In order to measure the long term drifts of the target area and its surroundings, a T1-weighted 3D scan is acquired after each sonication (green boxes in Fig. 1), during the cool-down intervals. The drifts are then estimated by registering the 3D images, via the optical flow algorithm, to a reference scan of the same size and contrast acquired at the beginning of the therapy (RS #1).
The 3D scans employed the following MR acquisition protocol: TE = 2ms, TR = 4.3ms, image matrix 192×192×75, 10° flip angle, with an isotropic voxel size of 2×2×2 mm3, resulting in an acquisition time of 60 − 90 s, depending on the frequency and reproducibility of the subject's breathing cycle. For the remainder of the manuscript, this type of images will be referred to as 3D anatomical anchors.
Real-time compensation of respiratory motion
Respiratory gating was used as a first-order method for respiratory motion compensation and is expected to considerably reduce the side-effects of respiration during individual sonications, since energy deposition becomes more localized. However, due to the finite size of the gating window, residual respiratory motion might still be present. This can lead to spatial misalignments between the MR temperature maps and in turn to miscalculations of the thermal dose.
In order to estimate the residual displacements between the temperature maps, the magnitude images provided by MR-thermometry during a particular sonication (orange box in Fig. 1) were registered in real-time (RS #2), using the optical flow algorithm, to a common reference scan (pink box in Fig. 1). The resulting motion estimates were then used to spatially align the temperature measurements, thus more accurate thermal dose estimates are expected.
MR-thermometry was performed using the proton resonance frequency shift (PRFS) [30], with the acquisition sequence employing the following parameters: each scan was a single shot gradient-recalled echo, TE = 15 ms, TR = 72.5 ms, 20° flip angle, image size 160×160, voxel size 2.5×2.5×7 mm3. Concerning spatial coverage, each dynamic consisted of one coronal and one sagittal slice intersecting in the focal point. All the scans in the MR-thermometry series were acquired with a 5 mm gating window, except the 2D reference scan used in the registration process (pink box in Fig. 1). For the latter, the gating window was reduced to 2 mm for increased precision, which will be explained later in the manuscript.
Registration of the thermal dose measurements to a common reference
Correcting the temperature maps with respect to respiratory motion should provide improved thermal dose measurements for a particular sonication. At this point, however, each MR-thermometry series is registered to their own dedicated reference image. This leads to the resulting individual thermal dose maps being represented in their own frame-of-reference. It is, however, preferable to have all thermal dose measurements mapped into the reference frame of the planning image. In this manner, therapy progress can be monitored relative to the interventional plan, in a spatially consistent way. This mapping of the thermal dose measurements was achieved in two steps. First, the 2D reference scan was registered to its preceding 3D anatomical anchor (RS #3), in order to account for any potential residual displacements. This was followed by a mapping of the thermal dose with respect to both these residual displacements and the 3D drifts estimated on that particular anatomical anchor, thus projecting the dose into the reference space of the planning image.
The 2D reference scan and the 3D anatomical anchor have different dimensionality, geometry and MR-contrast weighting. In effect, their registration was achieved via a two-step process. Initially, the geometry and dimensionality issue was addressed by re-formatting the 3D anatomical anchor into the coordinate system of the 2D reference scan. The re-formatting was performed by relying on geometrical information extracted from the imaging parameters (field-of-view position, orientation, size, etc.). Once the 3D anchor was re-formatted, the planes covered by the 2D reference scan were extracted from the 3D re-formatted image and the alignment was further refined via the MIND multi-modal registration algorithm [38].
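The "up-stream" propagation of a dose map by a dense displacement field amounts to a per-pixel resampling; a minimal sketch follows, assuming the residual 2D displacement and the 3D drift have already been composed into a single field and using linear interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_2d(image, disp):
    """Resample a 2D map (e.g. an individual thermal dose map) according
    to a dense displacement field disp of shape (2, H, W), expressed in
    pixels and pointing from each reference location into the moving
    image; linear interpolation, edge values replicated."""
    grid = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    return map_coordinates(image, grid + disp, order=1, mode="nearest")
```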
Extensions to the proposed motion correction framework
Two additional components were included in the proposed framework; these do not directly contribute towards motion compensation, serving instead validation purposes. The extended framework is displayed in Fig. 2. Note that it includes two additional scans: one called a non-distorted EPI (purple box) and the other a sparse anatomical anchor (brown box). The non-distorted EPI scan is used in the process of estimating the geometric distortions that frequently hamper fast MR-thermometry acquisitions, while the purpose of the sparse anatomical anchor is the validation of the MIND multi-modal registration algorithm. The two added components are described in more detail within the following two paragraphs.
Extended motion estimation framework. In addition to the original, the extended version also includes a distortion correction scheme and a validation component for the multi-modal registration algorithm
Estimation of geometric distortions
In the proposed protocol, MR-thermometry is based on images acquired using an echo-planar imaging (EPI) readout train. In order to have fast temperature updates, the sequence was optimized for acquisition speed, which leads to the images containing geometric distortions [33, 40]. The latter manifest themselves as a mismatch between the apparent position of the anatomy in the acquired images and its true position. Thus, when registering the 2D reference scans to their preceding 3D anatomical anchors (RS #3 in Fig. 1), the estimated displacements will also include the geometric distortions.
In order to differentiate motion from geometric distortions, we propose acquiring an additional image, called a non-distorted EPI scan (purple box in Fig. 2), having the same contrast and geometry as the 2D reference scans, but re-optimized such that distortions are minimized. An estimate of the geometric distortions is then obtained by registering the 2D reference scan to the non-distorted EPI via the optical flow algorithm (RS #4 in Fig. 2). The actual residual motion between the 2D reference scans and their preceding 3D anatomical anchor is now estimated using the non-distorted EPI (RS #3 in Fig. 2). The non-distorted EPI was acquired using a 2 mm gating window, immediately prior to the 2D reference scan. This also justifies why the latter was acquired using a 2 mm gating window as well. Since the goal is to estimate solely geometric distortions, the potential residual motion between the 2D reference and the non-distorted EPI scans was minimized by using a narrow gating window.
Validation of the MIND multi-modal registration algorithm
Multi-modal algorithms tend to be more complex and error prone than mono-modal methods. Therefore, an independent validation procedure is proposed in the current work for the MIND algorithm. This procedure consists of the acquisition of an additional image immediately prior to the non-distorted EPI scan, in the same geometry, but with the contrast of the 3D anatomical anchors. The newly acquired image, called a sparse anatomical anchor (brown box in Fig. 2), is then registered to the 3D anatomical anchor via the optical flow algorithm (RS #5 in Fig. 2). This is performed, however, after re-formatting the 3D anatomical anchor into the coordinate system of the sparse anatomical anchor, and the selection of the corresponding planes. By design, the motion fields obtained during RS #3 and RS #5 should be identical. The deformations provided by the optical flow algorithm during RS #5 were established as a silver standard, with the errors associated to the MIND algorithm during RS #3 being quantified in terms of the endpoint error (EE):
$$ \text{EE}(\vec{r}) = \|\mathbf{u}_{\text{OF}}(\vec{r}) - \mathbf{u}_{\text{MIND}}(\vec{r})\|_{2} $$
where \(\vec {r}\) is the pixel position, ∥·∥2 is the Euclidean distance and u OF and u MIND are the motion vectors estimated by the optical flow and MIND algorithms, respectively.
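Computing the EE map and the boxplot statistics used throughout the Results amounts to a per-pixel vector norm; a small illustrative sketch on hypothetical displacement fields (the 160×160 grid mirrors the thermometry matrix size) could read:

```python
import numpy as np

def endpoint_error(u_a, u_b):
    """Pixel-wise endpoint error between two displacement fields stored
    as (..., 2) arrays holding the components of each motion vector."""
    return np.linalg.norm(u_a - u_b, axis=-1)

# Hypothetical fields: a silver standard and a perturbed estimate.
rng = np.random.default_rng(1)
u_of = rng.normal(0.0, 1.0, (160, 160, 2))
u_mind = u_of + rng.normal(0.0, 0.2, (160, 160, 2))
ee = endpoint_error(u_of, u_mind)
print(np.percentile(ee, [5, 50, 95]))  # whisker/median values for the boxplots
```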
Registration algorithms
As previously mentioned, depending on whether the images being registered were acquired with the same MR-contrast weighting or not, one of two registration algorithms were employed: the optical flow [37] or the MIND [38] algorithm. The methods were chosen due to their fast numerical schemes, low number of input parameters and their capability to provide deformations on a voxel-by-voxel/pixel-by-pixel basis, characteristics which make them particularly attractive for medical image registration.
The optical flow algorithm followed the implementation described by Zachiu et al. [37]. The approach provides the deformation between two images I and J, as the minimizer of the following functional:
$$ E_{\text{OF}}(\mathbf{u}) = \sum_{\vec{r}\in\Omega}{\left(|I(\vec{r}) - J(\vec{r} + \mathbf{u}(\vec{r}))| + \alpha\|\vec{\nabla}\mathbf{u}(\vec{r})\|_{2}^{2}\right)} $$
where u is the 2D or 3D displacement, depending on the image dimensionality, Ω is the image domain, \(\vec {r}\) is a pixel/voxel spatial location, \(\vec {\nabla }\) is the gradient operator, ∥·∥2 is the Euclidean norm and α is a parameter linking the two terms of the functional. The optimization scheme and method validation are discussed at large in the original paper [37].
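For intuition, the classic quadratic (Horn–Schunck) variant of this variational problem admits a simple fixed-point iteration; the sketch below illustrates that scheme only, since the framework itself minimises the L1 data-term functional above, for which the reader is referred to [37]:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I, J, alpha=1.0, n_iter=200):
    """Classic Horn-Schunck optical flow between 2D images I and J
    (quadratic data term); illustrative only."""
    I, J = I.astype(float), J.astype(float)
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.full((2, 2), 0.25)
    # Spatio-temporal derivatives averaged over both images.
    Ix = convolve(I, kx) + convolve(J, kx)
    Iy = convolve(I, ky) + convolve(J, ky)
    It = convolve(J, kt) - convolve(I, kt)
    avg = np.array([[1.0, 2.0, 1.0], [2.0, 0.0, 2.0], [1.0, 2.0, 1.0]]) / 12.0
    u, v = np.zeros_like(I), np.zeros_like(I)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        coef = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = u_bar - Ix * coef, v_bar - Iy * coef
    return u, v
```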
The MIND algorithm, initially proposed by Heinrich et al. [38], is a deformable multi-modal/cross-contrast registration algorithm relying on the concept of self-similarity introduced by Buades et al. [41]. The method associates to each pixel/voxel of an image I, a descriptor based on local similarities defined by:
$$ \text{MIND}(I,\vec{r},\gamma) = \frac{1}{Z}\text{exp}\left(-\frac{D_{p}(I, \vec{r}, \vec{r} + \gamma)}{V(I, \vec{r})}\right) $$
where Z is a normalization constant, D p is the Euclidean distance between the local neighborhoods of size p around the pixels/voxels at positions \(\vec {r}\) and \(\vec {r}+\gamma \), γ is an offset within the search region Γ defining the pixels/voxels included in the descriptor and \(V(I, \vec {r})\) is a local variance estimation accounting for noise perturbations. In effect, a MIND descriptor associates to each pixel/voxel of the image I a vector of size |Γ|. According to the MIND algorithm, the deformations between two images are found as the minimizer of the following functional:
$$ E_{\text{MIND}}(\mathbf{u}) = \sum_{\vec{r}\in\Omega}{\left(S\left(I(\vec{r}), J(\vec{r}+\mathbf{u}(\vec{r}))\right)^{2} + \alpha\|\vec{\nabla}\mathbf{u}(\vec{r})\|_{2}^{2}\right)} $$
$$ S(I, J) = \frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}{|\text{MIND}(I, \vec{r}, \gamma) - \text{MIND}(J, \vec{r}, \gamma)|} $$
where I and J are the images to be registered. The optimization scheme together with implementation and validation details can be found in the original paper [38].
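A self-similarity descriptor in the spirit of MIND can be assembled from shifted-image patch distances; the sketch below uses a 4-neighbourhood search region and a box-filter patch distance, and is an illustration rather than the reference implementation of [38]:

```python
import numpy as np
from scipy.ndimage import uniform_filter

OFFSETS = ((1, 0), (-1, 0), (0, 1), (0, -1))  # 4-neighbourhood search region

def mind_descriptor(I, p=3, eps=1e-6):
    """Per-pixel self-similarity descriptor: patch distance D_p to each
    shifted copy of the image, normalised by a local variance estimate V
    and mapped through an exponential (cf. the MIND definition above)."""
    I = I.astype(float)
    D = np.stack([uniform_filter((I - np.roll(I, g, axis=(0, 1))) ** 2, size=p)
                  for g in OFFSETS])
    V = D.mean(axis=0) + eps          # noise-robust local variance estimate
    M = np.exp(-D / V)
    return M / (M.max(axis=0) + eps)  # normalisation (the constant Z)

def mind_dissimilarity(I, J):
    """S(I, J): mean absolute difference between the two descriptors."""
    return np.abs(mind_descriptor(I) - mind_descriptor(J)).mean(axis=0)
```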
Validation of the proposed motion estimation framework
The proposed motion estimation framework was validated in three complementary studies:
An experiment carried-out on a phantom undergoing a known motion pattern.
A study conducted on the abdomen of 10 healthy volunteers.
An in-vivo study involving MRg-HIFU thermal ablations on porcine liver, which included a total of 3 animal experiments.
Phantom experiment
The motion estimates provided by the proposed framework were initially validated on a dataset acquired during a phantom study. Motion estimation algorithms typically rely on structural information in order to estimate displacements, having to interpolate/extrapolate/infer motion in more homogenous regions. For this reason, the phantom consisted of a set of two grid-like structures (positioned in the coronal and the sagittal plane) placed inside a cylindrical plastic casing. The casing itself was then filled with a polymer gel and one of the bases was provided with an acoustic-transparent mylar membrane. Figure 3(a) and (b) illustrate a coronal and a sagittal slice of a T1-weighted 3D MR image acquired on the phantom, showcasing the two integrated structures surrounded by the polymer gel. The 3D rendering in Fig. 3(c) displays the shape and the manner in which the two structures are positioned with respect to one-another.
The custom-built phantom. a: A coronal and b: a sagittal slice of a T1-weighted 3D MR image acquired on the phantom. The two structures integrated in the phantom have a dark appearance in the images, while the polymer gel appears as bright. c: A 3D computer-generated rendering of the two structures inside the phantom
Known displacements were induced to the phantom and used as gold-standard during the validation process. The known displacements were applied to the phantom via a motorized platform linked to an in-house developed interface which allowed the injection of custom-designed motion patterns. For the purpose of making the experiment more realistic, a pre-recorded free-breathing pattern was induced to the phantom. The pattern was recorded during a separate experiment and consisted of the average head-foot liver displacement of a healthy volunteer.
The phantom, moving according to the pre-recorded breathing pattern, underwent the extended MR image acquisition protocol illustrated in Fig. 2. A total of 9 3D anatomical anchors (including the reference) were acquired on the phantom, with all the other scans in between, over a duration of ∼1 h. During the experiment, two 5 mm drifts were induced in the breathing pattern after the acquisition of the 3rd and respectively 6th anatomical anchor, in order to simulate the effect of slow physiological motion. Note that the FOV of the 3D anatomical anchors was fixed in such a way that the phantom did not leave it following the two induced drifts.
Besides validating the motion estimates provided by the optical flow and the MIND algorithms, the phantom experiment also allowed evaluating the extent of the geometric distortions present in the 2D thermometry images and the performance of the proposed distortion correction scheme. For this purpose, the positions of the grid points contained by the two structures placed inside the phantom were manually identified in the non-distorted EPI scans and compared, in terms of the Euclidean distance, to their positions in the succeeding 2D reference scans, before and after correction. The distances before and after correction were each pooled into a separate set, and the statistical distributions of the resulting two sets were compared in order to determine the extent to which the geometric distortions were corrected by the proposed scheme.
Healthy volunteer study
A study satisfying the required standards and in conformity with regulatory requirements was carried out on 10 healthy volunteers. The main purpose of this experiment was to evaluate the displacements undergone by the human liver and kidneys over the typical duration of an MRg-HIFU intervention. Five of the volunteers were subjected to the original MR-protocol illustrated in Fig. 1, while the other five were put through the extended protocol displayed in Fig. 2. The volunteers were placed in the MR-scanner in a prone-head-first position and were instructed not to move over the duration of the study. Each experiment lasted for a maximum of 60 min, with a minimum duration imposed by the volunteer. During this time interval, the original or the extended (depending on the volunteer) MR-acquisition protocol was run continuously, with a time gap of 7-8 min between the 3D anatomical anchors.
The 3D slow physiological drifts and the residual respiratory motion present within the MR-thermometry series were quantified for the liver and kidneys of each individual volunteer. The quantification was performed in terms of the spatial and/or statistical distribution of the following set:
$$ M = \{\|\mathbf{u}(\vec{r})\|_{2}\, | \, \vec{r}\in\text{ROI}\} $$
where u are the estimated displacements, \(\vec {r}\) is the spatial position, ∥·∥2 is the Euclidean norm and ROI is a region encompassing an organ of interest. Basically, M is a set containing the magnitude of the motion vectors estimated in all pixels/voxels belonging to the organs of interest. Such a metric is meant to provide the extent of the deformations undergone by the organs. Additionally, by using the metric in Eq. 1, validation of the motion estimates provided by the MIND cross-contrast registration algorithm was performed for the volunteers that underwent the extended MR-protocol.
In order to define the ROI in Eq. 6 encompassing the organs of interest, an active contour-based segmentation procedure was employed, which was subsequently manually refined. The segmentation was performed using ITK-Snap v3.0 [42].
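Given a displacement field and a binary organ mask, assembling the set M and the boxplot statistics is straightforward; a minimal sketch, with the field and mask assumed to come from the registration and segmentation steps above:

```python
import numpy as np

def motion_magnitude_set(u, roi_mask):
    """Set M defined above: magnitudes of the motion vectors at all
    voxels inside the segmented organ mask; u has shape (..., 3) for a
    3D displacement field (or (..., 2) in 2D)."""
    return np.linalg.norm(u, axis=-1)[roi_mask]

# Summary statistics as displayed in the boxplots, e.g.:
# M = motion_magnitude_set(u, liver_mask)
# np.percentile(M, [5, 25, 50, 75, 95]), M.mean()
```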
In-vivo experiments
The current animal study was performed in agreement with the European law on animal experimentation and in compliance with the institution's rules for animal care and use.
In order to evaluate the compatibility of the proposed motion estimation framework with the work-flow of an MRg-HIFU intervention under clinical conditions, 3 animal experiments were conducted, consisting of in-vivo ablations on porcine liver. Similar to the study performed, for example, by Wijlemans et al., the MRg-HIFU ablations were carried out on female Dalland pigs of 60 – 70 kg. The animals were anesthetized by an initial intramuscular injection containing ketamine (13 mg/kg), midazolam (0.7 mg/kg), atropine (0.02 mg/kg) and meloxicam (0.4 mg/kg) and placed under mechanical ventilation. Subsequently, general anesthesia was maintained by continuous intravenous administration of sufentanil (11.3 μg/kg/h), midazolam (1 mg/kg/h) and cisatracurium (0.09 mg/kg/h). A total number of 6 sonications were performed on one of the animals and 9 on the other two. The sonication cells had a 4 mm diameter, and each sonication consisted of delivering 450 W of acoustic power over a duration of ∼30 s, resulting in ∼13.5 kJ of energy per shot. The sonications were carried out at a frequency of 1.2 MHz and a depth of ∼10 cm, using a modified Philips Sonalleve ablation system (Philips Healthcare, Vantaa, Finland). Time-wise, the experiments extended over a duration of ∼1-2 h each.
Since waiting for naturally occurring slow physiological drifts to become significant would have introduced rather large delays in the experiments, artificial motion was instead induced in the abdominal area of the animals. This was achieved by varying the volume of water within a cooling cushion placed between the mylar membrane of the Philips Sonalleve system and the skin of the animals. Technical details related to the cushion can be found in [43] and Chapter 6 of [44]. Throughout each experiment, the volume of water inside the cooling cushion was varied twice, reducing the cushion's height by ∼5 mm each time.
During animal experiment #3, immediately after the last sonication, an additional 3D anatomical anchor was acquired, followed by a contrast-enhanced (CE) 3D T1w scan. The additional 3D anchor was registered to the reference 3D anchor and the resulting deformations were used to map the CE 3D T1w image into the reference space of the planning image. In this manner, the non-perfused volume (NPV) visible on the CE image can be compared to the initial volume due for ablation and the up-stream propagated thermal dose, in a spatially consistent way. The CE 3D T1w acquisition sequence employed the following parameters: TE = 2.6ms, TR = 5.4ms, image matrix 512×512×53, 10° flip angle, with a voxel size of 0.48×0.48×1.5 mm3. Prior to its mapping into the reference space of the planning image, the CE 3D T1w scan was reformatted onto the grid of the 3D anatomical anchors. This facilitated the consistent application of the deformation estimated on the last 3D anatomical anchor. Once the experiments were finished, the animals were euthanized using an overdose of sodium pentobarbital.
Hardware and implementation
A multi-threaded (8 threads) C++ implementation was developed for all registration algorithms included in the proposed motion correction framework. The data resulting from the healthy volunteer study and the phantom experiment were processed retrospectively, with motion estimation and analysis being conducted on an Intel 3.2 GHz i7 workstation (8 cores) with 16 GB of RAM.
During the animal experiments, the calculations associated to the proposed framework were offloaded to a dedicated custom-built node with 32 cores and 64 GB of RAM. The implementation was performed as an additional module directly into the clinical software dedicated to the interventional radiologist delivering the therapy. The node, together with access to the clinical software code, was provided by Philips Healthcare, Vantaa, Finland.
Phantom study
Validation of the optical flow mono-modal registration algorithm
A total of 9 volumes (including the reference 3D anchor) were acquired on the phantom undergoing a known motion pattern, with a 5 mm drift injected in the pattern after every third scan. In order to validate the optical flow mono-modal registration algorithm, the magnitude of the motion vectors estimated on the 3D anchors was compared to the injected drifts. The resulting EE between the estimated and the injected displacements are displayed in Fig. 4. The illustrated boxplots correspond to the statistical distribution of the errors estimated on each of the 3D images (except the reference scan itself). The boxplots were constructed as follows: the box limits are the 25th and the 75th percentiles, the whiskers correspond to the 5th and the 95th percentiles, the red cross indicates the average of the set and the red line is the set median. Note that, for most of the volumes, 95% of the errors remain sub-voxel (<2 mm). It is only for two of the volumes that the 95th percentile of the registration errors marginally exceeds this threshold. Also, with one exception (the first of the volumes), the average EE remains sub-millimeter. The analysis was restricted to the structures included in the body of the phantom.
Validation of the optical flow algorithm on the phantom dataset. The boxplots correspond to the statistical distribution of the EE between the estimated motion vectors and the injected drifts for each 3D anatomical anchor acquired during the phantom experiment. The abscissa provides the index of the 3D anatomical anchor, for which the errors are displayed, within the corresponding time-series
Validation of the multi-modal registration algorithm
A validation of the motion estimates provided by the MIND multi-modal registration algorithm was also performed on the phantom dataset. However, instead of registering the non-distorted EPI image to its preceding 3D anatomical anchor, it was registered directly to the 3D reference anchor (see Fig. 2 for reference). The estimated motion vectors were then compared in terms of the EE to the drifts induced in the motion pattern. The spatial distribution of the temporally averaged EE maps is displayed in Fig. 5(a) and (b) for a coronal and a sagittal slice respectively. It can be observed that the EE remain below the in-plane voxel size (<2.5 mm). This is further confirmed by analyzing their statistical distribution illustrated in Fig. 5(c). Moreover, the two boxplots in Fig. 5(c) show that 95% of the errors remain sub-millimeter with an average of ∼0.5 mm. Note that the analysis was restricted to the two structures embedded in the phantom. The borders of the phantom were also excluded from the analysis, due to a signal drop in the non-distorted EPI images, particularly visible in Fig. 5(b).
Validation of the multi-modal registration algorithm on the moving phantom data set. The spatial distribution of the temporally averaged EE in a (a): coronal (b): sagittal plane through the phantom (c): Statistical distribution of the errors illustrated in (a) and (b)
Quantification and correction of the geometric distortions present in the MR-thermometry images
Figure 6(a) and (b) display a coronal and a sagittal slice from an MR-thermometry image acquired on the phantom, before distortion correction. It can be observed that some segments of the structures inside the phantom appear to be bent, when in reality the structures are made of straight elements. This effect is notably reduced after distortion correction, as shown in Fig. 6(c) and (d).
Validation of the proposed distortion correction scheme employed for the MR-thermometry images. Example of a coronal (left) and a sagittal (middle) MR-thermometry magnitude image acquired on the phantom (a), (b): before and (c), (d): after distortion correction. (e): Extent of the geometric distortions measured for the grid points marked with "x" in a and b, before (left boxplot) and after (right boxplot) correction
The grid-like shape of the structures integrated in the body of the phantom, in both the coronal and sagittal plane, allowed the quantification of the geometric distortions present in the MR-thermometry images and also the extent to which these are corrected by the proposed method. The boxplots in Fig. 6(e) display the extent of the geometric distortions before and after applying the proposed correction scheme. Measurements were performed for the grid points marked with "x" in Fig. 6(a) and (b), for all the 2D reference - non-distorted EPI pairs acquired on the phantom. It can be observed that, in the absence of correction, distortions extend up to ∼6 mm. After correction, however, distortions were reduced to in-plane voxel size values (<2.5 mm), with an average reduced from ∼5 mm to ∼1 mm.
Volunteer study
Analysis of the 3D slow physiological drifts
Figure 7 illustrates the temporal evolution of the long term drifts estimated on the 10 healthy volunteers. Figure 7(a) and (b) show, separately for the liver and kidneys, the statistical distribution of the magnitude of the 3D motion vectors, pooled from all volunteers, at each 3D anatomical anchor acquisition time point. It can be observed, for both the liver and kidneys, that there is a tendency of the displacements to increase over time, with the magnitude of the motion vectors exceeding 7 mm at the acquisition time point of the last 3D anatomical anchor. Figure 7(c) and (d) display, separately for the liver and kidneys, the average magnitude of the displacement vectors over time, individually for each of the 10 volunteers. The different lengths of some curves compared to others are due to the respective volunteers terminating the experiment before the 60 min maximum observation period. A rather large inter-subject variability can be observed in both the liver and kidneys. In volunteer #4, for example, the average displacement remained under 2 mm for the entire duration of the study, for all organs of interest. However, in volunteer #1, already halfway through the experiment, the average displacement exceeded 6 mm.
Temporal evolution of the 3D slow physiological drifts estimated on the 10 healthy volunteers. a, b: Statistical distribution of the magnitude of the 3D motion vectors estimated on the liver and kidneys, pooled from all volunteers, at each 3D anatomical anchor acquisition time point. c, d: Time evolution of the average magnitude of the motion vectors estimated on the liver and kidneys, illustrated individually for each of the 10 healthy volunteers. The abscissa in panels a–d provides the index, within the time-series, of the 3D anatomical anchor for which the displacements are displayed
Assessment of the residual respiratory motion present in the MR-thermometry series
For each dynamic of the multiple MR-thermometry series acquired on the healthy volunteers, the average liver and kidney residual respiratory displacement was estimated and pooled in a separate set for each volunteer. Figure 8(a) and (b) display, per individual, the statistical distribution of the average magnitude of the 2D motion vectors corresponding to the residual respiratory displacements, separately for the liver and the kidneys. While the average displacements remain close to 1 mm, they occupy a rather large range of values, in some instances exceeding 4 mm. The extent of the residual motion and the inter-individual variations are most likely determined by the stability/reproducibility of the breathing cycle of each individual volunteer.
Quantification of the residual respiratory motion for the 10 healthy volunteers. Statistical distribution of the average magnitude of the motion vectors estimated for a: liver b: kidneys
As specified in the methods section, in order to validate the MIND registration algorithm, an endpoint error map (see Eq. 1) was calculated for each sparse anatomical anchor - non-distorted EPI image pair acquired on the volunteers (see Fig. 2 for scan nomenclature). Figure 9(a) and (b) illustrate, for one of the volunteers, the spatial distribution in the organs of interest of the temporally averaged EE maps. It can be observed that, for this particular volunteer, the estimation errors for the MIND multi-modal algorithm remain under 2.5 mm, which corresponds to the in-plane voxel size. The pixel-wise EE were pooled separately for the liver and kidneys of each volunteer and are shown as boxplots in Fig. 9(c) and (d). For all volunteers, the estimation errors associated with the MIND algorithm remain below the in-plane voxel size.
Validation, on the healthy volunteer dataset, of the MIND registration algorithm. a, b: Spatial distribution of the temporally averaged registration errors in the liver and kidneys of one of the volunteers, in a coronal and a sagittal plane respectively. c, d: Statistical distribution of the multi-modal registration errors in the liver and kidneys of each volunteer
Animal experiments
The proposed motion correction framework was validated under clinical conditions during 3 separate in-vivo MRg-HIFU thermal ablations conducted on porcine liver. Figure 10(a) illustrates the therapy planning for animal experiment #3, showcasing a coronal (left) and a sagittal (right) slice through the 3D planning image, upon which the 9 sonication cells are overlaid. For better visibility, the interventional plan is magnified in Fig. 10(b). For this particular experiment, an artificial motion event was induced in the abdominal area of the animal after every 3 sonications. In effect, the initially planned sonication positions were updated twice during the experiment, according to the displacements estimated on the 3D anatomical anchors after each artificial motion event. Figure 11(a) illustrates the "down-propagated" sonication cells overlaid as blue ellipses on a coronal and a sagittal slice from the planning image. Following the two motion events, displacements of over 5 mm can be observed for all initially planned sonication cells. The red overlay represents the lethal thermal dose accumulated from all sonications, without mapping the thermal dose delivered by the individual sonications into the reference space of the planning image. This provides the means to evaluate the effects of motion on the outcome of the therapy in the absence of the proposed motion compensation framework. This scenario is better illustrated in Fig. 11(b), where the initial plan is overlaid as blue ellipses on the planning image, together with the non-registered lethal thermal dose in red. In the absence of a motion compensation strategy, a large part of the anatomy initially due for ablation would have been left untreated, while at the same time the therapy would have caused considerable collateral damage. Following the "up-stream" propagation and accumulation of the thermal dose delivered by each individual sonication, the result shown in Fig. 11(c) was obtained: the initially planned location of the sonications is displayed as blue ellipses overlaid on the interventional planning image, together with the motion-corrected lethal thermal dose. The good overlap between the latter and the initial plan indicates that the proposed motion compensation framework performed as intended. Figure 11(d) displays the initially planned sonication cells overlaid as blue ellipses on a CE T1w image (acquired exclusively on animal #3), after its registration to the planning image. A good correspondence can be observed between the NPV, the initial volume due for ablation and the registered lethal thermal dose map displayed in Fig. 11(c). Note that the contrast of Fig. 11(d) was digitally enhanced for improved NPV visibility. In all images from Figs. 10 and 11, the white overlay traces an approximation of the HIFU beam cone, while the yellow overlay defines a search region within which the HIFU system checks for the existence of a focal spot.
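For orientation, the lethal-dose overlays discussed here are CEM43 maps accumulated from the per-voxel MR-thermometry time-series. The following is a minimal sketch of the standard Sapareto-Dewey dose accumulation; the 5 s update interval, the temperature trace and the 240 min lethal threshold are illustrative assumptions, not values taken from this study:

```python
import numpy as np

def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 degC (Sapareto-Dewey) for a
    per-voxel temperature series temps_c of shape (n_timepoints, ...)."""
    r = np.where(temps_c >= 43.0, 0.5, 0.25)
    return np.sum(r ** (43.0 - temps_c), axis=0) * dt_s / 60.0

# One voxel heated to 55 degC for 20 s during a sonication (illustrative)
temps = np.concatenate([np.full(5, 37.0), np.full(4, 55.0), np.full(5, 40.0)])
dose = cem43(temps, dt_s=5.0)   # thermometry update assumed every 5 s
print(f"dose = {dose:.0f} CEM43 -> lethal: {dose >= 240}")
```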
Therapy plan for animal experiment #3. Coronal (left) and sagittal (right) slice through the 3D planning image together with the 9 sonication cells a: overall b: magnified for better visibility
Validation of the proposed motion compensation framework under clinical conditions. Outcome of the MRg-HIFU liver ablation in animal experiment #3. a: Down-propagated/Motion corrected sonication locations (blue ellipses) overlapped with the non-registered lethal thermal dose (in red). b: Originally planned sonication locations (blue ellipses) overlapped with the non-registered lethal thermal dose (in red). c: Originally planned location of the sonications (blue ellipses) overlapped with the motion corrected lethal thermal dose (in green). d: The initially planned sonication cells (blue ellipses) overlaid on a coronal (left) and a sagittal (right) slice of the registered contrast-enhanced T1w image. Figures a, b and c all showcase a coronal (left image) and a sagittal (right image) slice through the 3D planning image acquired at the beginning of the therapy as background
Table 1 reports, for the animal experiments, the percentage of the anatomical volume due for ablation that was estimated to receive a lethal amount of thermal dose, with and without the proposed motion compensation framework enabled. Although to different extents, improvements in coverage were observed in all reported cases when motion compensation was enabled.
Table 1 Validation of the proposed motion compensation framework under clinical conditions. Percentage of the volume due for ablation estimated to have received a lethal amount of thermal dose with (first row) and without (second row) the proposed motion compensation framework enabled
Computational performance of the registration algorithms
In order to ensure a smooth work-flow for the MRg-HIFU intervention, the registration algorithms included in the proposed framework should provide positional information with minimal latency. Table 2 reports the average convergence time for each of the registration steps included in the framework. See Fig. 1 for details concerning the purpose of each algorithm.
Table 2 Average convergence time per pair of images of the registration algorithms included in the proposed motion compensation framework
Lengthy MRg-HIFU thermal therapies in the abdomen are usually hampered by various types of physiological motion occurring at different time-scales. So far, studies have focused on developing correction schemes for displacements arising at a particular time-scale [12, 34] which, as we have demonstrated in the current work, may be conceptually insufficient. In effect, the present study proposes a motion compensation framework that encompasses the entire work-flow of an MRg-HIFU intervention. The framework consists of several linked components, each dedicated to estimating a particular type of motion/deformation, with the resulting displacements being used for two purposes: 1) down-stream propagation of the initially planned sonication locations such that they match the current position of the anatomy and 2) up-stream propagation of the thermal dose delivered by each individual sonication such that therapy progress can be evaluated in a single frame of reference (namely the reference space of the planning image(s)). Due to its modular nature, the proposed approach has increased flexibility, facilitating the addition, modification, replacement and/or removal of individual components. The proposed motion compensation strategy was tested and validated in three complementary experiments: 1) an experiment carried out on a phantom undergoing a known motion pattern; 2) a study conducted on the abdomen of 10 healthy volunteers and 3) an in-vivo study involving HIFU ablations on porcine liver.
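Conceptually, both propagation directions amount to chaining the displacement fields estimated by the individual components. The sketch below shows how a planned target position could be pushed through such a chain; the field layout, voxel-coordinate convention and linear interpolation are assumptions for illustration, not the framework's actual implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_point(p_vox, disp_fields):
    """Push a target position (voxel coordinates) through a chain of dense
    displacement fields, each of shape (nx, ny, nz, 3), in voxel units."""
    p = np.asarray(p_vox, dtype=float)
    for d in disp_fields:
        # sample each displacement component at the current sub-voxel position
        shift = np.array([map_coordinates(d[..., k], p[:, None], order=1)[0]
                          for k in range(3)])
        p = p + shift
    return p

# e.g. two successive drift estimates applied to one sonication target
fields = [np.full((32, 32, 16, 3), 0.8), np.full((32, 32, 16, 3), -0.3)]
print(propagate_point([10.0, 12.0, 5.0], fields))  # -> [10.5, 12.5, 5.5]
```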
The healthy volunteer study reconfirmed that over a duration of 1 h, the human liver and kidneys can manifest slow physiological drifts of up to 7–8 mm, exceeding acceptable therapeutic margins. This is in good correspondence with previous reports [13, 45, 46]. Additionally, a rather large inter-subject variability was observed. Concerning the proposed correction scheme for respiratory motion, the study conducted on the 10 volunteers demonstrated that gating during energy deliveries, as a first order method, has good motion compensation capabilities, with estimated average residual displacements of ∼1–1.5 mm. Of importance are, however, the instances in which the estimated average residual displacements extend up to 4 mm or more (see Fig. 8). Due to the misalignments induced between the temperature maps, such displacements during MR-thermometry can impact the thermal dose measurements to the extent that they become unreliable for that particular sonication. A simple solution to reduce the range of residual respiratory displacements is to narrow down the size of the gating window. However, depending on the reproducibility of the patient's breathing cycle, this can lead to a poor duty cycle of the HIFU beam, affecting the overall therapy efficiency. The proposed dedicated registration scheme, on the other hand, allows energy deliveries with a wide gating window (≥ 5 mm), facilitating a higher duty cycle not only for the HIFU beam, but also for the MR-thermometry.
The reliability of the motion estimates provided by the proposed framework was analyzed in both the phantom and the volunteer experiment. Following the phantom study, both the mono- and the multi-modal registration algorithms demonstrated, on average, sub-voxel accuracy and precision. In order to avoid a bias due to outliers during the validation of the multi-modal registration algorithm, the borders of the phantom were excluded from the analysis (see Fig. 5). This was done due to a signal drop in these areas in the non-distorted EPI images, a drop which was not present in the reference 3D anatomical anchor. Due to a violation of the basic assumption made by the MIND algorithm, namely that all structures in the reference image have a counterpart in the moving image, the reliability of the motion estimates was poor on the phantom borders. Since this aspect is known a priori, these low-signal areas are not representative of the algorithm's performance. Instead, they simply emphasize some of its limitations. The phantom experiment, however, has only limited validation capabilities, since the phantom was able to undergo motion with fewer degrees of freedom than an actual abdominal organ. On the other hand, while the known motion patterns induced in the phantom can be used as a robust gold standard when analyzing the performance of the registration algorithms, obtaining a gold standard for in-vivo studies is a challenging task. In particular for the cross-contrast registration algorithm, this issue was addressed by comparing the in-vivo motion estimates against a silver standard. The latter was constructed based on motion estimates provided by the optical flow algorithm. This decision was made due to its prior successful in-vivo validation in previous independent studies [47–50]. However, the drawback of such an approach is that the optical flow algorithm has its own shortcomings which affect its performance (discussed at length in [37]). Thus, the in-vivo errors reported for the multi-modal algorithm stem from both its own mis-registrations and the errors in the silver standard itself. Nevertheless, the overall estimation errors remain sub-voxel for 3D registration and lower than the in-plane voxel size for the 2D registration methods, which is in good correspondence with previous studies [38, 47–50]. Potential errors may also occur during the registration of the 2D reference scan to its preceding 3D anatomical anchor (RS #3 in Fig. 1) or during the registration of the non-distorted EPI scan and the sparse anatomical anchor, again, to their preceding 3D anatomical anchor (RS #3 and RS #5 in the extended framework from Fig. 2). This is due to the fact that the elastic refinement of the registration between the 2D scans and the corresponding planes from the reformatted 3D anatomical anchor (following the initial rigid alignment step) was only performed in 2D. In case of severe through-plane motion, misregistration may occur, since through-plane motion might be interpreted as in-plane motion. The risk of such a development is, however, considerably reduced, since the 3D anatomical anchor, the 2D reference scan, the sparse anatomical anchor and the non-distorted EPI scan are respiratory gated and acquired in rapid succession. In the scope of this study, this led to residual displacements predominantly in the cranio-caudal direction, with the anterior-posterior component being well under the voxel size.
For the estimation of through-plane deformations, a dense 3D elastic registration between the 2D scans and their preceding 3D anatomical anchor would be necessary. This is, from a mathematical point of view, a severely ill-posed problem and a topic in itself, making it the object of future studies.
Since the proposed motion compensation framework consists of several linked components, with the estimated displacements in some instances being successively added to one another, error accumulation becomes an important aspect. While slow physiological drifts and residual respiratory displacements are the result of independent registrations between a scan and its corresponding reference image, with sub-voxel/sub-pixel estimation errors, projecting the thermal dose delivered by a particular sonication onto the reference space of the planning image relies on a chain of up to 4 registration algorithms (see Fig. 2). Even so, the accumulated estimation errors remained within 10–20% of the total average displacement.
An effective way to improve the precision and accuracy of the registration algorithms is to increase the spatial resolution of the acquired MR images. A higher spatial resolution typically implies a greater level of detail and structural information in the images, which facilitates a better performance of the registration algorithms. Note, however, that MR imaging usually implies a trade-off between spatial resolution, temporal resolution and signal-to-noise ratio (SNR). For a smooth work-flow of an MRg-HIFU therapy, constraints may have to be imposed on the image acquisition times. Therefore, a higher spatial resolution may result in a lower SNR of the acquired images. Particular attention is thus required, in order to ensure that the losses in terms of SNR do not counteract the gains in precision and accuracy facilitated by a higher spatial resolution.
The overall performance of the proposed motion compensation framework, together with its compatibility with the typical work-flow of an MRg-HIFU intervention under clinical conditions, was validated during 3 animal experiments. In all 3 cases, mapping the thermal dose into the reference space of the planning image, according to the displacements estimated by the framework, resulted in varying degrees of improvement in the overlap between the planned sonication locations and the lethal thermal dose. Moreover, as shown in Fig. 11(c) and (d), for the animal for which a CE 3D T1w image was acquired at the end of the HIFU ablation session, a good correspondence can be observed between the planned location of the sonication cells, the registered lethal thermal dose and the registered NPV. This further confirms the success of the proposed motion correction framework. Although in the scope of this study a CE T1w image was acquired only for the third animal, the NPV visible upon such images can generally be used as a metric for evaluating acute therapeutic response. Furthermore, in the context of motion correction schemes for MRg-HIFU, the NPV pattern can be used for additional validation, as demonstrated in animal experiment #3.
The majority of the MR-scans and registration algorithms included in the framework were integrated during the cool-down intervals between successive sonications. In order to ensure a smooth therapeutic work-flow, the acquisition times of the scans together with the computational requirements of the registration algorithms must not exceed typical cool-down durations. For the animal experiments, each sonication consisted of the delivery of ∼13.5 kJ of energy, which led to the HIFU system imposing cool-down intervals of 2-5 min, determined by the perfusion effects in the near-field. In practice, more typical values are 5 kJ of energy per sonication with a 2-3 min cool-down. For the proposed framework, the duration of the MR-scans together with the registration algorithms integrated during the cool-down intervals resulted in average delays of 2-3 min, depending on the subject's respiratory frequency. For the purpose of this study this was sufficient, since such a duration is well within the cool-down threshold imposed by the HIFU system. Nevertheless, the protocol can be further accelerated if necessary. For example, this can be achieved by re-optimizing the acquisition sequence of the 3D anatomical anchors for speed rather than resolution and spatial coverage, which were favored in the current study.
In interventional oncology, pathology identification and delineation, interventional planning, therapy monitoring and therapy response evaluation are preferably performed in the same frame of reference. Especially in moving organs, this can become problematic for HIFU thermal ablations, since physiological motion frequently induces spatial mismatches between these steps. The current study aims to render the work-flow of an MRg-HIFU therapy in mobile organs compatible with pre-existing work-flows from interventional oncology by proposing a suitable motion estimation/correction strategy that encompasses all the previously specified phases of an intervention. Results have shown that physiological drifts of 7–8 mm have to be expected when therapy is conducted in the liver or kidneys, displacements which, if left unaddressed, can have severe consequences. For example, as illustrated in Fig. 11(b), in case the ablation area is situated in the proximity of the gallbladder, there is a high risk that the latter and/or the associated structures (such as the bile duct) are perforated/damaged, leading to complications due to the possible release of emulsifying enzymes into the bloodstream. Similarly, if therapy is conducted in the kidney, damage to the renal pelvis or the ureter might occur. Moreover, such displacements might lead to large areas of residual pathological tissue. Such a case is again depicted in Fig. 11(b), where only 40% of the initially planned anatomy would have been ablated (see also Table 1). However, the animal experiments have demonstrated that the risk of such developments can be considerably reduced when therapy is conducted with the proposed motion compensation framework active. As shown in Table 1, in all analyzed cases the framework led to improvements of the overlap between the planned anatomy due for ablation and the delivered lethal thermal dose, compared to the scenario in which no corrections are performed. Moreover, the framework demonstrated good compatibility with the typical work-flow of an MRg-HIFU thermal therapy, with the latencies introduced by the integrated MR scans and the registration algorithms allowing a smooth progress of the intervention. Additionally, the fact that the proposed motion compensation strategy was implemented directly into the clinical software stack granted the possibility of on-the-fly re-optimization of the therapeutic plan according to the estimated displacements, and of the projection of the thermal dose delivered by each individual sonication into the same frame of reference (see Fig. 11). Particular attention was paid during the implementation of the framework such that the work-flow of an intervention conducted on (quasi-)static anatomies remains unchanged, with the additional motion compensation features being hidden from the radiologist. Noteworthy is also the fact that the implementation and execution of the proposed framework relied on commercially available hardware, all being integrated on an existing HIFU platform in conjunction with a standard 1.5 T Achieva MRI.
One of the drawbacks of the proposed motion correction framework is the lack of a component that addresses spontaneous motion. Thus, the latter is assumed to be circumvented by some other means, such as patient sedation. Problematic might also be the fact that the framework does not include an objective quality evaluation criterion for either the MR images used for tracking or the estimated displacements. While a visual inspection of the MR images and the resulting estimated displacements was deemed sufficient for the in-vivo experiments in the scope of this study, future studies need to address this issue by establishing an objective quality evaluation criterion. In addition, note that all deformations estimated and corrected by the proposed framework rely entirely on image registration algorithms. However, in particular for correcting geometric distortions, more specialized correction schemes can be employed. For example, the problem can be entirely avoided by an optimization of the acquisition parameters, such that thermometry with a high update rate is still possible, while at the same time minimizing geometric distortions. It is, however, difficult to predict whether such alternative approaches perform better than the proposed registration-based method, making this again the object of future studies.
The present study proposes a motion correction framework encompassing the entire work-flow of an MRg-HIFU thermal therapy, ensuring spatial coherence between the different stages of the therapeutic work-flow. It was demonstrated that the framework allows both the adjustment of the interventional plan and the projection of the therapy observables (e.g. temperature and thermal dose measurements) into a common frame of reference on-the-fly, with the patient on the interventional table. Furthermore, the proposed motion correction strategy was implemented as an additional feature directly into the clinical software stack, while maintaining compatibility with MRg-HIFU therapies for static anatomies.
CEM43:
Cumulative equivalent minutes at 43°C
EPI:
Echo planar imaging
EE:
Endpoint error
HIFU:
High intensity focused ultrasound
MRg-HIFU:
Magnetic resonance-guided high intensity focused ultrasound
MIND:
Modality independent neighborhood descriptor
PRFS:
Proton resonance frequency shift
ROI:
Region of interest
TR:
Repetition time
TE:
Echo time
Chu KF, Dupuy DE. Thermal ablation of tumours: biological mechanisms and advances in therapy. Nat Rev Cancer. 2014; 14:199–208.
Moros EG. Physics of Thermal Therapy: Fundamentals and Clinical Applications, 1st edn. Boca Raton: CRC Press Taylor and Francis Group; 2012, pp. 23–41.
Cline HE, Schenck JF, Hynynen K, Watkins RD, Souza SP, Jolesz FA. MR-guided focused ultrasound surgery. J Comput Assist Tomogr. 1992; 16:956–65.
Hynynen K, Freund WR, Cline HE, Chung AH, Watkins RD, Vetro JP, Jolesz FA. A clinical, noninvasive, MR imaging-monitored ultrasound surgery method. Radiographics. 1996; 16:185–95.
Illing RO, Kennedy JE, ter Haar GR, Protheroe AS, Friend PJ, Gleeson FV, Cranston DW, Phillips RR, Middleton MR. The safety and feasibility of extracorporeal high-intensity focused ultrasound (HIFU) for the treatment of liver and kidney tumours in a western population. Br J Cancer. 2005; 93(8):890–5.
Kennedy JE. High-intensity focused ultrasound in the treatment of solid tumours. Nat Rev Cancer. 2005; 5:321–7.
Tempany CM, McDannold NJ, Hynyen K, Jolesz FA. Focused ultrasound surgery in oncology: Overview and principles. Radiology. 2011; 36(5):39–56.
Goitein M. Organ and tumor motion: An overview. Semin Radiat Oncol. 2004; 14:2–9.
Ries M, Denis de Senneville B, Roujol S, Berber Y, Quesson B, Moonen C. Real-time 3D target tracking in MRI guided focused ultrasound ablations in moving tissues. Magn Reson Med. 2010; 64:1704–12.
Vijayan S, Klein S, Hofstad EF, Lindseth F, Ystgaard B, Lango T. Validation of a non-rigid registration method for motion compensation in 4D ultrasound of the liver. In: IEEE 10th International Symposium on Biomedical Imaging. Piscataway: IEEE: 2013. p. 792–5.
Seppenwoolde Y, Shirato H, Kitamura K, Shimizu S, van Herk M, Lebesque JV, Miyasaka K. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy. Int J Radiat Oncol Biol Phys. 2002; 53(4):822–34.
Escoffre JM, Bouakaz A. Therapeutic Ultrasound, vol. 880. Switzerland: Springer; 2016, pp. 43–65.
von Siebenthal M, Szekely G. Systematic errors in respiratory gating due to intrafraction deformations in the liver. Med Phys. 2007; 34(9):265–78.
Langen KM, Willoughby TR, Meeks SL, Santhanam A, Cunningham A, Levine L, Kupelian PA. Observations on real-time prostate gland motion using electromagnetic tracking. Int J Radiat Oncol Biol Phys. 2008; 71(4):1084–90.
Arnold P, Preiswerk F, Fasel B, Salomir R, Scheffler K, Cattin PC. 3D organ motion prediction for MR-guided high intensity focused ultrasound. Med Image Comput Comput Assist Interv. 2011; 14(Pt 2):623–30.
Smitmans MHP, Pos FJ, de Bois J, Heemsbergen WD, Sonke JJ, Lebesque JV, van Herk M. The influence of a dietary protocol on a cone beam CT-guided radiotherapy for prostate cancer patients. Int J Radiat Oncol Biol Phys. 2008; 71(4):1279–86.
Emmott J, Sanghera B, Chambers J, Wong WL. The effects of n-butylscopolamine on bowel uptake: An F-FDG PET study. Nuclear Med Commun. 2008; 29:11–16.
Froehlich JM, Daenzer M, von Weymarn C, Eturk SM, Zolikofer CL, Patak MA. Functional monitoring of small bowel motility: Comparison of spasmolysis induced by glucagon or buscopan. In: International Society for Magnetic Resonance in Medicine. Concord: ISMRM: 2009. p. 1122–3.
Mirabell R, Nouet P, Rouzaud M, Bardina A, Heijira N, Schneider D. Radiotherapy of bladder cancer: relevance of bladder volume changes in planning boost treatment. Int J Radiat Oncol Biol Phys. 1998; 41(4):741–6.
Thariat J, Aluwini S, Housset M. Adaptive partial-boost stereotactic radiation therapy for muscle-invasive carcinoma of the urinary bladder In: Ponsky LE, Fuller DB, Meier RM, Ma C, editors. Robotic Radiosurgery. Treating Prostate Cancer and Related Genitourinary Applications. Berlin: Springer: 2012. p. 185–200.
Verhey LJ. Immobilizing and positioning patients for radiotherapy. Semin Radiat Oncol. 1995; 5(2):100–14.
Zhou YF. High intensity focused ultrasound in clinical tumor ablation. World J Clin Oncol. 2011; 2(1):8–27.
Auboiroux V, Petrusca L, Viallon M, Goget T, Becker CD, Salomir R. Ultrasonography-based 2D motion-compensated HIFU sonication integrated with reference-free MR temperature monitoring: a feasibility study ex vivo. Phys Med Biol. 2012; 57(10):159–71.
Holbrook AB, Ghanouni P, Santos JM, Dumoulin C, Medan Y, Pauly KB. Respiration based steering for high intensity focused ultrasound liver ablation. Magn Reson Med. 2014; 71(2):797–806.
Kennedy JE, Wu F, ter Haar GR, Gleeson FV, Phillips RR, Middleton MR, Cranston D. High-intensity focused ultrasound for the treatment of liver tumours. Ultrasonics. 2004; 42(1-9):931–5.
Liu D, Ebbini ES. Real-time 2D temperature imaging using ultrasound. IEEE Trans Bio-Med Eng. 2010; 57(1):12–16.
Wood BJ, Locklin JK, Viswanathan A, Kruecker J, Haemmerich D, Cebral J, Sofer A, Cheng R, McCreedy E, Cleary K, McAuliffe MJ, Glossop N, Yanof J. Technologies for guidance of radiofrequency ablation in the multimodality interventional suite of the future. J Vasc Interv Radiol. 2007; 18(1):9–24.
Cheng M, Fay M, Steinke K. Percutaneous CT-guided thermal ablation as salvage therapy for recurrent non-small cell lung cancer after external beam radiotherapy: A retrospective study. Int J Hyperth. 2016; 32(3):316–23.
Sahani DV, Bajwa MA, Andrabi Y, Bajpai S, Cusack JC. Current status of imaging and emerging techniques to evaluate liver metastases from colorectal carcinoma. Ann Surg. 2014; 259(5):861–72.
Rieke V, Pauly KB. MR-thermometry. J Magn Reson Imaging. 2008; 27(2):376–90.
Separeto S, Dewey W. Thermal dose determination in cancer therapy. Int J Radiat Oncol Biol Phys. 1984; 10(6):787–800.
Damianou C, Hynynen K. The effect of various physical parameters on the size and shape of necrosed tissue volume during ultrasound surgery. J Acoust Soc Am. 1994; 95(3):1641–49.
Dragonu J, Denis de Senneville B, Quesson B, Moonen C, Ries M. Real-time geometric distortion correction for interventional imaging with echo-planar imaging (EPI). Magn Reson Med. 2009; 61:994–1000.
Muller A, Petrusca L, Auboiroux V, Valette PJ, Salomir R, Cotton F. Management of respiratory motion in extracorporeal high-intensity focused ultrasound treatment in upper abdominal organs: current status and perspectives. Cardiovasc Intervent Radiol. 2016; 36(6):1464–76.
Mani VRS, Arivazhagan S. Survey of medical image registration. J Biomed Eng Technol. 2013; 1(2):8–25.
Hill DL, Batchelor PG, Holden M, Hawkes DJ. Medical image registration. Phys Med Biol. 2001; 46(3):1–45.
Zachiu C, Papadakis N, Ries M, Moonen CTW, Denis de Senneville B. An improved optical flow tracking technique for real-time MR-guided beam therapies in moving organs. Phys Med Biol. 2015; 60(23):9003.
Heinrich MP, Jenkinson M, Bhushan M, Matin T, Gleeson FV, Brady SM, Schnabel JA. MIND: Modality independent neighbourhood descriptor for multi-modal deformable registration. Med Image Anal. 2012; 16(7):1423–35.
Ehman RL, Felmlee JP. Adaptive technique for high-definition MR imaging of moving structures. Radiology. 1989; 173(1):255–63.
Poutschi-Amin M, Mirowitz SA, Brown JJ, McKinstry RC, Li T. Principles and applications of echo-planar imaging: A review for the general radiologist. Radiographics. 2001; 21(3):767–79.
Buades A, Coll B, Morel JM. A review of image denoising algorithms, with a new one. J Multiscale Model Simul. 2005; 4(2):490–530.
Yushkevich PA, Piven J, Cody Hazlett H, Gimpel Smith R, Ho S, Gee JC, Gerig G. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006; 31(3):1116–28.
Wijlemans J, de Greef M, Schubert G, van den Bosch M, Moonen C, Ries M. Near-field management during MR-HIFU ablation in highly perfused organs. In: International Society for Magnetic Resonance in Medicine. Concord: ISMRM: 2014. p. 2340.
Wijlemans J. Towards MR-HIFU ablation of liver tumors. PhD thesis, Utrecht University. 2014.
Zachiu C, Denis de Senneville B, Moonen C, Ries M. A framework for the correction of slow physiological drifts during MR-guided HIFU therapies: Proof of concept. Med Phys. 2015; 42(7):4137–48.
Case RB, Sonke JJ, Moseley DJ, Kim J, Brock KK, Dawson LA. Inter- and intrafraction variability in liver position in non-breath-hold stereotactic body radiotherapy. Int J Radiat Oncol Biol Phys. 2009; 75(1):302–8.
Ostergaard NK, Denis de Senneville B, Elstrom UV, Tanderup K, Sorensen TS. Acceleration and validation of optical flow based deformable registration for image-guided radiotherapy. Acta Oncol. 2008; 47(7):1286–93.
Castillo R, Castillo E, Guerra R, Johnson VE, McPhail T, Garg AK, Guerrero T. A framework for evaluation of deformable image registration spatial accuracy using large landmark point sets. Phys Med Biol. 2009; 54:1849–70.
Brock KK. Results of a multi-institution deformable registration accuracy study (MIDRAS). Int J Radiat Oncol Biol Phys. 2010; 76(2):583–96.
Kadoya N, Fujita Y, Katsuta Y, Dobashi S, Takeda K, Kishi K, Kubozono M, Umezawa R, Sugawara T, Matsushita H, Jingu K. Evaluation of various deformable image registration algorithms for thoracic images. J Radiat Res. 2013; 55(1):175–82.
The authors would like to thank Johanna M. M. van Breugel (Dept. of Radiology, UMC Utrecht, Netherlands), Gerald Schubert (Philips Healthcare, Vantaa, Finland) and Martijn de Greef (TomTom, Eindhoven, Netherlands) for their support during the animal experiments. The authors also gratefully acknowledge the two anonymous referees whose remarks and suggestions have helped us greatly in improving the quality of the manuscript.
This work was supported by the Dutch Technology Foundation (STW) (project OnTrack no. 12813) and in part by the European Research Council (project ERC-2010-AdG-20100317, Sound Pharma) and ITEA 2 (project SoRTS).
Availability of data and material
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Imaging Division, UMC Utrecht, Heidelberglaan 100, Utrecht, 3508, GA, Netherlands
Cornel Zachiu, Baudouin Denis de Senneville, Ivan D. Dmitriev, Chrit T. W. Moonen & Mario Ries
Institut de Mathématiques de Bordeaux, CNRS UMR5251/Université de Bordeaux, Talence Cedex, Bordeaux, 33405, France
Baudouin Denis de Senneville
All authors have contributed to the development of this study. CZ performed the experiments, analyzed and interpreted the data and drafted the manuscript. MR, BDS and CTWM helped design the study and experiments and draft the manuscript. MR and IDD provided assistance and technical support during the phantom and animal experiments. CZ, BDS and IDD performed the software implementation of the methods described in the current work. All authors read and approved the manuscript.
Correspondence to Cornel Zachiu.
The animal experiments were approved by our institution's animal experiment committee 2014.III.01.013 (Utrecht, The Netherlands). For the healthy volunteer study approval was provided by our institution's ethics committee number 07-235. All participants were counseled on the nature of the procedure and all provided an informed consent to participate in the MR imaging study.
Written informed consent for publication of their clinical details and/or clinical images was obtained from the healthy volunteer whose images were included in the manuscript. A copy of the consent form is available for review by the Editor of this journal.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Zachiu, C., Denis de Senneville, B., Dmitriev, I. et al. A framework for continuous target tracking during MR-guided high intensity focused ultrasound thermal ablations in the abdomen. J Ther Ultrasound 5, 27 (2017). https://doi.org/10.1186/s40349-017-0106-y
Keywords: Therapy guidance, Motion correction
Practical example of multiple antibody screening for evaluation of malaria control strategies
Marie-Louise Varela, David Koffi, Michael White, Makhtar Niang, Babacar Mbengue, Fatoumata Diene Sarr, André Offianan Touré & Ronald Perraut
Ongoing efforts to fight Plasmodium falciparum malaria have reduced malaria in many areas, but new tools are needed to monitor further progress, including indicators of decreasing exposure to parasite infection. Sero-surveillance is considered promising to monitor exposure, transmission and immunity.
IgG responses to three antigen biomarkers were evaluated in a retrospective study involving: (i) surveys of 798 asymptomatic villagers from 2 Senegalese endemic settings conducted before 2002 and after the 2013 intensification of control measures, and (ii) 105 symptomatic individuals from different settings in Côte d'Ivoire. Responses to up to eight P. falciparum antigens, including the recombinant MSP1p19 antigen and the LSA141 peptide, were analysed using multiplex technology, and responses to a whole P. falciparum schizont extract (SE, local strain adapted to culture) were measured by ELISA.
MSP1p19 and LSA141 IgG responses were shown to be relevant indicators for monitoring immune status in the different study sites in both Côte d'Ivoire and Senegal. Between 2002 and 2013, individuals participating in both studies showed a greater decline in sero-positivity among young (< 15 years: range 12% to 50%) than among older (> 15 years: no decline to 15%) individuals from Dielmo and Ndiop. A mathematical sero-catalytic model of the complete Dielmo/Ndiop survey was used to reconstruct declining levels of sero-positivity in more detail, demonstrating that anti-SE seroprevalence levels most accurately reflected malaria exposure in the two villages.
For standard screening of population immune status at sites envisaging elimination, the use of ELISA-based assays targeting selected antigens can provide important epidemiologic surveillance data to aid malaria control programmes.
In recent years, the scale-up of control efforts has resulted in a major decline in malaria transmission in many regions, fueling hopes for elimination in numerous countries. Nevertheless, the malaria case load is still very high (over 200 million cases annually), including an estimated 236,000–635,000 deaths according to the World Health Organization (WHO) [1]. Declining transmission has been attributed to improved control policies including rapid diagnosis and effective treatment with artemisinin-based combination therapy (ACT), vector control strategies, most notably long-lasting insecticide-treated bed nets (LLIN), intermittent preventive treatments, and close follow-up during pregnancy [2].
Monitoring changes in malaria transmission intensity and disease prevalence through surveillance allows health authorities to evaluate control programmes and plan interventions. This welcome reduction in malaria transmission poses substantial challenges to surveillance efforts, because when transmission becomes too low, clinical surveillance and the entomological inoculation rate (EIR) become insufficiently sensitive to track potential transient asymptomatic parasite carriers and infected mosquitoes, with a potential risk of re-emergence of malaria from this invisible reservoir. Furthermore, in addition to the threats associated with the emergence of resistance to artemisinin in Southeast Asia and to insecticides in Africa, malaria has shown rebounds in countries such as Rwanda, Sao Tome and Principe, and Zambia, some of which were leaders in new control strategies [3]. In Senegal, longitudinal analyses of data from the villages of Dielmo and Ndiop showed that after a sharp decline in malaria cases during the 2 years following the use of ACT and LLINs, a rebound in the number of malaria cases occurred [4, 5]. Such rebounds affect all age groups and not only young children, who are usually the most susceptible, indicating that the sharp decrease of exposure resulted in a clinically observable, population-wide decrease in natural protective immunity [4, 6]. These observations underline the need for increased surveillance to monitor these rapid epidemiological changes.
A key tool is sero-surveillance based on the use of Plasmodium species-specific antibodies as indicators of exposure, transmission, and immunity. Such tools have significant potential for contributing to the effectiveness of malaria control and elimination programmes [7]. Antibodies are very sensitive markers of population-level malaria exposure in low-transmission settings and reflect cumulative exposure over a period of time [8, 9]. Although this approach was used historically as part of malaria control programmes, it has not had widespread use, in part because of the lack of standardized antigens and methodology [9]. Of the more than 5000 proteins expressed by Plasmodium, only a few have been examined in detail [10]. A comprehensive evaluation of candidate antigens is required to identify those antibody responses that are most sensitive for detecting changes in transmission. Studies employing protein microarrays [8] or expanded repertoires of purified antigens are beginning to address this knowledge gap, and it is likely that multiple antigens will need to be included in serologic assays [8, 9, 11, 12]. Previous studies conducted in the villages of Dielmo and Ndiop [13,14,15,16,17,18] and other settings [11, 19,20,21] have shown associations between antibody responses against Plasmodium antigens and trends in clinical malaria in the context of stable epidemiological conditions. Some antibody responses are short-lived, decreasing during the dry season, while others remain positive for years [8, 22, 23]. Recent work identified antigen markers of recent exposure, contributing to precise estimates of community-wide exposure [24]. However, there is a lack of comprehensive information regarding the consequences of transmission changes on antibody responses in different age groups. In addition, few analyses have focused on symptomatic cases.
In this report, a practical example is provided as a snapshot cross-sectional picture using samples from symptomatic and asymptomatic cohorts in Côte d'Ivoire and Senegal. A large panel of antigens was initially explored using the multiplex Luminex assay [25,26,27]; here, the analysis was focused on 3 main antigen targets: (i) a pre-erythrocytic antigen, LSA141; (ii) a merozoite antigen, MSP1p19; and (iii) a whole schizont extract (SE, measured by standard ELISA) from a local strain adapted to in vitro culture [28], as a reference antigen for the overall antibody response against P. falciparum. In the cohort from Senegal, antibody responses to six additional antigens are given with regard to the sero-catalytic models. The analysis was also focused on a subgroup of individuals from the two villages who participated in the two cross-sectional studies in 2002 and 2013. It is shown that these IgG responses could differentiate settings with different epidemiological contexts and help characterize the decay of immunity.
Study area and recruitment
The Côte d'Ivoire study involved 163 individuals (Table 1). This included 93 patients consulting for symptomatic fever in three health centres: Korhogo, located in a savannah area with a Sudanian climate; the village of Man, in the Western forest and mountain area; and Abobo, located in the southern part of the township of Abidjan, characterized by the presence of a lagoon. In addition, 35 asymptomatic individuals from a cross-sectional survey in Abobo and 35 young patients with severe malaria from the University Hospital Centre (CHU) of Abidjan were included, as shown in Table 1. In these areas, the level of transmission is high, with an EIR around 400 infective bites/person/year [29].
Table 1 Characteristics of recruitment of patients and individuals from Côte d'Ivoire
The protocol of surveillance was approved by the National Committee of the Ministry of Health. Individual informed written consent was obtained from participants/parents/guardians. All clinical cases were treated and followed-up according to the standard national procedure as described [25, 26].
The study from Senegal was done in the endemic villages of Dielmo and Ndiop, where a long-term longitudinal survey designed to study acquisition and maintenance of natural immunity has been carried out over 20 years [4, 6, 30]. The project protocol and objectives were carefully explained to the assembled villagers, and informed written consent was obtained from all participants or their parents or guardians. Consent is individually renewed from all subjects; anyone can withdraw from the study and the follow-up procedure at any time. The protocol was approved by the Senegalese National Health Research Ethics Committee. An agreement between the Fondation Institut Pasteur de Dakar, Institut de Recherche pour le Développement (IRD) and the Ministère de la Santé et de la Prévention of Senegal defines all research activities. In July 2008, vector control was massively implemented with provision of LLINs to each household. New LLINs were provided in July 2011 [6].
Two cross-sectional samplings were done in July 2002 and July 2013 before the rainy season, i.e. before peak transmission in Dielmo (holoendemic area) and before the transmission season in Ndiop (meso-endemic). Overall, 184 and 196 villagers in Dielmo, and 202 and 216 villagers in Ndiop were evaluated in 2002 and 2013, respectively, with a sub-group of samples from 75 (Dielmo) and 86 (Ndiop) individuals who participated in both studies (Table 2) [27]. After blood withdrawal, plasma and red blood cells were separated by centrifugation and stored at − 20 °C.
Table 2 Characteristics, antibody levels and prevalence to antigens in villagers from Dielmo and Ndiop in 2002 and 2013
ELISA and Multiplex techniques for antibody responses
The P. falciparum schizont extract (SE) of the 07/03 Dielmo strain was prepared and used in indirect ELISA as described [31, 32]. The multiplex bead-based assay (MBA), using antigen (MSP1p19) or BSA-peptide (LSA141) covalently coupled to carboxylated magnetic Luminex beads, was done as described [33,34,35,36]. In the cohort from Senegal, antibody responses to an additional six antigens were measured using the multiplex Luminex assay: circumsporozoite protein (CSP); liver-stage antigen 3 (LSA-3); glutamate-rich protein (GLURP); sporozoite and liver stage antigen (Salsa); erythrocyte associated antigen PF13 from membrane protein 1 (PfEMP1/PF13); and apical membrane antigen 1 (AMA-1) [32,33,34,36]. A pool of sera from immune adults from Dielmo and a pool of European and African non-immune sera (20 Senegalese individuals confirmed as having a negative IgG response to P. falciparum schizont extract) were included in each assay as positive and naïve/normalization controls, respectively. For ELISA, IgG levels were expressed as OD ratio = OD sample/mean OD naïve pool. For MBA, IgG levels were expressed as mean fluorescence intensity (MFI) or log MFI. The positivity cut-off was set as the mean net signal of the naïve controls + 3 SD. The MFI of naïve controls + 3 SD ranged from 40 to 175 for the overall panel of antigens (MSP1p19 = 175; LSA141 = 63).
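As an illustration of the positivity calls described above, the sketch below computes the OD ratio and a naive-controls-based cut-off; the signal values are hypothetical placeholders, and the same logic applies to ELISA OD and Luminex MFI alike:

```python
import numpy as np

def classify(signal, naive):
    """Sero-positivity call: signal above mean of naive controls + 3 SD."""
    cutoff = naive.mean() + 3 * naive.std()
    od_ratio = signal / naive.mean()   # ELISA-style OD ratio
    return signal > cutoff, od_ratio, cutoff

naive = np.array([30.0, 45.0, 38.0, 52.0, 41.0])   # hypothetical naive MFIs
samples = np.array([25.0, 180.0, 950.0, 60.0])
positive, od_ratio, cutoff = classify(samples, naive)
print(f"cutoff = {cutoff:.0f}", positive)
```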
Statistical analysis and modelling
Categorical variables were compared using the Fisher exact test; continuous antibody response variables were analysed using the Mann–Whitney rank test for non-normally distributed data. p values < 0.05 were considered significant. Seroconversion (SCR) and sero-reversion (SRR) rates were calculated using an age-specific reversible catalytic conversion model [7]. Analyses were performed with R and Statview 5.0® (SAS Institute).
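For concreteness, both tests can be reproduced with SciPy; the counts and levels below are hypothetical placeholders, not study data:

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 2x2 table: sero-positive / sero-negative in 2002 vs 2013
odds_ratio, p_fisher = fisher_exact([[120, 64], [85, 111]])

# Hypothetical MFI levels in two groups (non-normal, hence a rank test)
res = mannwhitneyu([210, 540, 980, 132, 610], [95, 60, 310, 150, 88])
print(f"Fisher p = {p_fisher:.3g}, Mann-Whitney p = {res.pvalue:.3g}")
```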
Sero-catalytic model
Sero-catalytic models can be used to describe the rate at which individuals sero-convert following malaria exposure and subsequently sero-revert due to decaying levels of antibodies. Sero-catalytic models have typically been applied under the assumption of constant transmission over time, or a single reduction in transmission at some time in the past [7].
Here, an extended class of sero-catalytic models is described that accounts for multiple past changes in transmission, providing more flexible estimates of past transmission as well as a better accounting of uncertainty.
The sero-reversion rate is denoted ρ. Tmax = 80 years was defined to be the maximum time considered before the first cross-sectional survey. The sero-conversion rate at this time is assumed to be λ0. It is assumed that between Tmax years in the past and the time of survey, the sero-conversion rate changed on n occasions at times (T1, T2,…,Tn), so that the sero-conversion rate during the interval (Ti, Ti+1) was λi. For a person of age α at the time of the cross-sectional survey, it is necessary to calculate their age at the times when transmission changed: (α1, α2,… αm), where αm = α (the age at the cross-sectional survey), and m ≤ n to account for the possibility that some of the times when transmission changed may have occurred before that person was born. It is then possible to denote λj to be the sero-conversion rate that an individual experienced between the ages αj-1 and αj. The following recurrence equation gives the estimated proportion sero-positive at each point when the sero-conversion rate changes, with Pm being the probability that a person of age α is sero-positive at the time of the cross-sectional survey.
$$P_{1} = \frac{\lambda_{1}}{\lambda_{1} + \rho}\left( 1 - e^{-\left( \lambda_{1} + \rho \right)\alpha_{1}} \right)$$

$$P_{j} = \left( 1 - P_{j-1} \right)\frac{\lambda_{j}}{\lambda_{j} + \rho}\left( 1 - e^{-\left( \lambda_{j} + \rho \right)\left( \alpha_{j} - \alpha_{j-1} \right)} \right) + P_{j-1}\,\frac{\lambda_{j} + \rho\, e^{-\left( \lambda_{j} + \rho \right)\left( \alpha_{j} - \alpha_{j-1} \right)}}{\lambda_{j} + \rho}$$
The sero-catalytic model is parameterized by θ = (λ0…n, T1…n, ρ). The model is fitted in a Bayesian statistical framework with uniform prior distributions. By sampling from the estimated posterior distribution, historical sero-conversion rates in Dielmo and Ndiop can be estimated. Importantly, the inference framework provides estimates of uncertainty, allowing the identification of situations in which sero-conversion rates are not identifiable (e.g. it is not possible to accurately estimate sero-conversion rates 40 years before a cross-sectional survey). Parameter estimates for the nine antigens are presented in Additional file 1: Table S1.
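A minimal sketch of the recurrence above follows, using an algebraically equivalent form of P_j (the solution of the reversible catalytic model with initial condition P_{j-1}). The rates, change times and sero-reversion rate in the example are hypothetical, not the fitted values from Additional file 1: Table S1:

```python
import numpy as np

def seroprevalence(age, change_times, lambdas, rho):
    """P(sero-positive) at a given age under piecewise-constant
    sero-conversion rates.
    change_times: years before the survey at which the rate changed,
                  in decreasing order (e.g. [40, 11]).
    lambdas: rates for the successive epochs, len(change_times) + 1 values."""
    # ages at which this person experienced each change, plus current age
    boundaries = [age - t for t in change_times if age - t > 0] + [age]
    rates = lambdas[-len(boundaries):]   # drop epochs ending before birth
    p, prev = 0.0, 0.0
    for lam, a in zip(rates, boundaries):
        decay = np.exp(-(lam + rho) * (a - prev))
        # equivalent to the recurrence: equilibrium term + decaying memory
        p = lam / (lam + rho) * (1.0 - decay) + p * decay
        prev = a
    return p

# e.g. a 20-year-old, with transmission dropping 11 years before the survey
print(seroprevalence(20.0, change_times=[11.0], lambdas=[0.8, 0.1], rho=0.05))
```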
IgG biomarker responses and characteristics of symptomatic malaria
Among the individuals recruited in Côte d'Ivoire, only patients with severe malaria showed a significantly younger age distribution than the others (p < 0.01) (Table 1). For asymptomatic individuals and severe malaria cases, parasitaemia was checked by blood smear. Among the severe hospitalized cases with confirmed clinical malaria (based on WHO-defined clinical symptoms, which include a positive RDT, a history of previous infection and treatment in a health centre before reaching hospital), some showed a negative blood smear at the time of sampling (likely due to treatment before/during hospitalization). In the asymptomatic cross-sectional survey, 29% of individuals had no circulating parasites detectable by blood smear. Accounting only for individuals with positive parasitaemia (detectable on blood smear) from the asymptomatic survey and from the hospitalized severe cases, levels of parasitaemia in positive asymptomatic carriers were significantly lower than in severe cases (mean = 11 300 vs 41 300 trophozoites per µl of blood; p < 0.01). Among patients from the three health centres, those from the township of Abobo had the highest levels of parasitaemia (Fig. 1).
Comparison of parasitaemia and distribution of antibody responses in different groups from Côte d'Ivoire. Parasitaemia and antibody responses from individuals with urban severe malaria (Sev_M, black), the asymptomatic survey (Abo_As, white), and symptomatic cases from Abobo (Abobo, light grey), Korhogo (Korhogo, dark grey) and Man (Man, middle grey) are plotted as boxplots. Asterisks indicate significantly different levels of IgG to SE, MSP1p19 and LSA141 (p < 0.05)
There was little difference between cohorts when comparing IgG responses to SE, except for severe malaria (SM) cases, which showed significantly lower levels of antibodies to SE compared to symptomatic cases in Abobo and Korhogo. This finding may be attributable to the younger age of this group.
High levels of IgG to MSP1p19 were found in Korhogo, in SM cases and in asymptomatic carriers. IgG to LSA141 was more variable among groups: Korhogo patients showed significantly higher levels than all others. The lowest levels of anti-LSA141, statistically significant, were found in asymptomatic carriers and in Man (Fig. 1).
Individual decrease of antibody levels in asymptomatic villagers of Dielmo and Ndiop
The IgG responses of the subgroup of villagers who were present and sampled in both 2002 and 2013 are shown in Fig. 2. Changes in IgG levels between 2002 and 2013 for each individual are plotted as dot plots connected by arrows, coloured black when the antibody response did not decrease. The majority of individuals, but not all, showed reduced antibody responses to the three antigens.
Individual variations of antibody responses to SE (a), MSP1p19 (b) and LSA141 (c) in individuals from Dielmo and Ndiop between 2002 and 2013. This figure show detailed individual antibody responses in Dielmo and Ndiop as dot plot in 2002 and 2013. Red arrows link responses from each individual from 2002 to 2013 in groups of younger villagers (< 15 years in 2002) and older ones (> 15 years). Arrows linking individual measures that did not decrease between 2002 and 2013 are in black colour. The decrease of antibody levels was significant for all antigens and for all age groups in Dielmo (p < 0.01, Wilcoxon signed rank test for paired data). In Ndiop, there was a significant decrease for IgG to SE and MSP1p19 (p < 0.05) but not LSA141. The mean decay of Ab levels ranged from 17 to 49%
Across all antigens, the proportion of individuals that experienced a decrease in antibody responses ranged from 73% to 80% in Dielmo and from 67% to 76% in Ndiop.
Regarding global mean levels of antibody responses (Table 2), the magnitude of decrease in Dielmo was greatest for LSA141 (−49%), followed by MSP1p19 (−35%), with the smallest decrease for SE (−22%). In Ndiop, the decay of antibody levels was lower: the decrease was largest for MSP1p19 (−21%), followed by LSA141 (−19%), and smallest for SE (−17%).
An estimate of the effective loss of IgG levels in young individuals (< 15 years of age) can be calculated as the difference between the mean antibody levels measured in 2013 and their potential antibody levels under constant transmission, i.e. the mean antibody responses measured in a comparable > 15-year-old group in 2002 from the entire survey.
In Dielmo, mean levels in 2013 (N = 20, mean age = 7.7 in 2002, i.e. 18.8 in 2013), expressed as OD ratio and MFI, were 3.5, 432 and 386 (Table 2), compared to expected values of 5.5, 1176 and 2395 (N = 44 individuals, mean age = 19.2) for SE, LSA141 and MSP1p19, respectively.
In Ndiop, mean levels in 2013 (N = 28, mean age = 7.8 in 2002, i.e. 18.8 in 2013), expressed as OD ratio and MFI, were 2.5, 558 and 1234 (Table 2), compared to expected values of 3.8, 1913 and 4132 (N = 44 individuals, mean age = 18.6) for SE, LSA141 and MSP1p19, respectively.
The potential decreases are −34% and −34% for SE; −63% and −71% for LSA141; and −84% and −70% for MSP1p19, for Dielmo and Ndiop respectively.
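These percentages follow directly from the tabulated means; the short sketch below reproduces them to within rounding of the reported values:

```python
# Observed 2013 means vs age-matched 2002 expectations (SE, LSA141, MSP1p19)
cohorts = {"Dielmo": ([3.5, 432, 386], [5.5, 1176, 2395]),
           "Ndiop":  ([2.5, 558, 1234], [3.8, 1913, 4132])}
for village, (obs, exp) in cohorts.items():
    drops = [100.0 * (o / e - 1.0) for o, e in zip(obs, exp)]
    print(village, [f"{d:.0f}%" for d in drops])
# Dielmo -> ['-36%', '-63%', '-84%'];  Ndiop -> ['-34%', '-71%', '-70%']
```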
Sero-catalytic model and prediction of transmission
In this analysis, the anti-malarial antibody responses were those measured in samples from the 2 cross-sectional studies done before the rainy season in 2002 and 2013 (184 and 196 villagers in Dielmo, and 202 and 216 villagers in Ndiop) [27]. A comparison of the levels of prevalence using this set of data is shown in Fig. 3.
Prevalence of responders to the antigens tested in 2002 vs 2013. Comparisons of the prevalence of responders to LSA141, MSP1p19 and SE in Dielmo and Ndiop are plotted as vertical bars. The study involved the entire cross-sectional analysis of 184 (Dielmo 2002), 196 (Dielmo 2013), 202 (Ndiop 2002) and 216 villagers (Ndiop 2013) [27]. Prevalence is plotted age-stratified: ≤ 15 years (black and white bars) and > 15 years (dark and light grey). Classification of the magnitude of decrease (Dielmo, Ndiop, respectively) was: LSA141 (54%, 65%) < MSP1p19 (69%, 72%) < SE (71%, 80%) for the younger age group. In older individuals, the decrease in prevalence was around 15% (8–17%) without a clear difference between the prevalences of responses to the 3 antigens
The age-dependent variation in sero-prevalence levels can be formally analysed using sero-catalytic models [7]. Figure 4 shows the results of such an analysis using the data for LSA141, MSP1p19, and SE from the Dielmo-Ndiop cross-sectional surveys of 2002 and 2013.
Sero-catalytic models fitted to data from cross sectional studies done in 2002 and 2013. Age-stratified sero-prevalence of anti-malarial antibodies to LSA141, MSP1p19 and SE in Dielmo (top row) and Ndiop (middle row). Data are plotted as points with vertical bars representing 95% confidence intervals. The fitted lines represent the posterior median predictions of the sero-catalytic model, and the shaded region denotes the 95% credible interval. The bottom row shows the estimated historical trends in transmission as measured by changes in sero-conversion rate. The times of the cross-sections are indicated with vertical lines in 2002 (red) and 2013 (blue). Shaded regions denote the 95% credible interval
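For readers unfamiliar with these models: in the simplest reversible sero-catalytic model, individuals sero-convert at rate SCR and sero-revert at rate SRR, so that the expected sero-prevalence at age a is SCR/(SCR + SRR) × (1 − exp(−(SCR + SRR) × a)). A minimal sketch of this age profile follows, with purely illustrative rate values; the analysis reported here is a Bayesian fit that additionally allows historical changes in SCR.

```python
import numpy as np

def seroprevalence(age, scr, srr):
    # Reversible sero-catalytic model: sero-conversion at rate scr,
    # sero-reversion at rate srr; the age profile saturates at
    # scr / (scr + srr).
    k = scr + srr
    return scr / k * (1.0 - np.exp(-k * age))

ages = np.array([1.0, 5.0, 15.0, 40.0])
print(seroprevalence(ages, scr=0.2, srr=0.05))  # illustrative rates only
```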
In both Dielmo and Ndiop, there was a significant and substantial reduction in sero-prevalence from 2002 to 2013. The difference in sero-prevalence is most notable in children younger than 10 years of age, where very low levels are observed in the 2013 cross-section. It has been documented that malaria transmission has historically been higher in Dielmo than in Ndiop. However, for antibody responses to most antigens, there were no substantial differences in sero-positivity (Additional file 2: Fig. S1) or in model-estimated sero-conversion rate between Dielmo and Ndiop (Additional file 1: Table S1, Additional file 2: Fig. S1). The important exceptions are liver-stage antigen 3 (LSA-3), erythrocyte membrane protein 1 (PfEMP1/PF13) and schizont extract (SE), where sero-positivity (and hence the model-estimated sero-conversion rates) was higher in Dielmo than in Ndiop.
In the study of patients from Côte d'Ivoire, parasite invasion strongly stimulates immune responses, and antibody responses may be higher in more exposed persons who, therefore, have a higher degree of acquired immunity [37]. Cross-sectional analysis of randomly recruited, symptomatic and well-documented cases from sentinel sites showed that a multi-target measure of antibody responses could constitute a surrogate of actual immune status [25]. Here, IgG responses to SE, MSP1 and LSA141 in sentinel sites were compared with results from asymptomatic recruitment (AR) in Abobo and severe malaria cases (SM) from the CHU of Abidjan (Table 1).
When using this limited set of potential markers for immunity in patients with clinical malaria, significant and unexpected differences in antibody responses were found depending on geographical setting and clinical status. While patients from the Sentinel National Network health centre showed a clearly augmented antibody response to MSP1p19 and LSA141 in the Korhogo cohort [25], asymptomatic carriers and young individuals with SM had comparably high levels of antibodies to MSP1p19 but variable levels to LSA141. The response to the SE antigen was moderately associated with setting, except for asymptomatic carriers, who had lower levels of IgG. Thus, antibody responses measured during a mild clinical episode could represent a surrogate of effective immunity, which may depend on the duration and intensity of parasitaemia before treatment. For a given health centre, patients can be considered to have comparable susceptibility to clinical infections requiring hospitalization and treatment, independent of the setting and the individual's previous history of infection. It must be noted that recruitment in urban central hospitals is based on symptomatic cases presenting at the facilities, whereas cross-sectional representative surveys are more likely to capture asymptomatic cases with different immune profiles. Thus, cross-sectional surveillance of immune responses to selected malarial antigens in symptomatic patients at local health centres could be a convenient way of assessing immunity at the population level in different areas with high levels of transmission. Such immunity is related to the degree of adherence to control measures, and analysis of antibody responses to validated antigens could be an indirect way of tracking compliance with measures such as LLINs, which are the largest contributor to sustained protection [2]. Importantly, immunity affects treatment outcome, with a significant impact when using the current first-line artemisinin combined treatments [38]. Monitoring of parasite clearance times in symptomatic malaria, which is routinely done in Côte d'Ivoire, is a complementary approach for assessing immunity. The establishment of such annual prospective multi-centre cross-sectional studies should help document the effectiveness of large-scale integrated long-lasting control interventions at the community level.
In the cross-sectional studies from the villages of Dielmo and Ndiop done in 2002 and 2013, 75 and 86 villagers respectively, aged from 3.4 to 80 years, were evaluated both in 2002 and 2013. During that time, the EIR dropped dramatically from 215 and 171 to 7.5 and 2.5 infective bites/person/year in Dielmo and Ndiop, respectively. The cumulative numbers of infective bites/person were 1604 and 218 from 2002 to 2008, and 229 and 27 from 2009 to July 2013, in Dielmo and Ndiop, respectively, i.e. a decrease of around 86% in transmission intensity [6]. Results showed larger decreases of IgG to all antigens in the younger group of 2002 than in the older groups. Importantly, the reductions in antibody levels in the younger age group between 2002 and 2013 are substantially higher than reported in Table 2 when estimated by the difference between the actual antibody level measured in 2013 and the expected responses with no reduction in transmission. The decay of IgG levels was 34%, 63%, 84% in Dielmo and 34%, 71%, 70% in Ndiop for SE, LSA141 and MSP1p19, respectively.
These observations suggest that the waning of antibody responses measured using these antigens as markers was much more dramatic for younger individuals in the process of acquiring immunity, despite 8.9 to 20.3 years of cumulative unchanged exposure from 2002 to 2008 and 5.5 years of decay from 2008 to 2013. A confounding factor may be the high levels of parasite prevalence in children under 5 years in the 2002 cross-section from Dielmo [27]. Another limitation was that simultaneous sampling before the peak of transmission was considered a relatively 'steady' state for IgG responses, before the high fluctuations due to transmission in the rainy season, which was high in 2002 and much lower in 2013. Individuals with circulating blood-stage parasites are also likely to have higher levels of circulating IgG antibodies. The individual decrease of antibody responses appears to follow a complex pattern depending on previous cumulative and possibly recurrent exposure after 2008, when transmission had almost disappeared. The observed increase of specific IgG in some individuals did not correlate between the antigens tested. There were 39/75 villagers in Dielmo and 47/86 in Ndiop with black arrows in Fig. 2, meaning that over 50% of individuals (mostly adults) showed higher responses in 2013 than in 2002, although less than 7% had simultaneous increases of IgG to all three antigens. It is clear that the changing epidemiology of infection and disease cannot be easily explained by increased vector control alone. Transient peaks of IgG can result from rebounds in transmission linked to different events, such as reduced efficiency of LLINs requiring replacement at least every 3 years [5, 39], climate change [40] or unexplained/undetected asymptomatic sub-microscopic carriage [41].
Once transmission has declined, sero-surveillance helps to identify populations where transmission still occurs, allowing targeted interventions. Thus, MSP1p19 appears to be a relevant antigen for detecting IgG signaling parasite presence. It is applicable both to asymptomatic and symptomatic malaria, for example allowing categorization of transmission into two major groups among the complex epidemiological strata in Cameroon [42].
When anti-malarial antibody responses are measured in samples from cross-sectional studies, changes in sero-prevalence levels with age can be used to make inferences about changing transmission [27]. For example, if high levels of sero-prevalence are observed in individuals older than 5 years of age but very low levels in children less than 5 years, this would suggest that there was a sharp reduction in transmission 5 years previously. The changes in sero-prevalence with age have been used to identify changing trends in transmission. For the three antigens, it is clear that the reduction in transmission occurred after the 2002 survey. Over the period during which data have been collected, P. falciparum prevalence and incidence have been substantially higher in Dielmo than in Ndiop. Anti-SE sero-prevalence levels reflect this pattern, with higher levels of sero-prevalence in Dielmo than in Ndiop. However, levels of anti-LSA141 and anti-MSP1p19 sero-prevalence are comparable between Ndiop and Dielmo. There is an important need to identify antigens or combinations of antigens that best reflect known transmission trends.
The goal is to identify a limited number of relevant antigens to develop standard screening methods for monitoring the antibody-related immune levels of populations residing in regions where the incidence of clinical malaria remains high or that are proceeding towards pre-elimination. As a simple ELISA-like technology for measuring antibody responses to multiple antigens, the multiplex approach offers the potential to generate important epidemiologic surveillance data accessible to malaria control programmes.
MFI: Mean fluorescence intensity
ELISA: Enzyme linked immunosorbent assay
MSP1p19: P. falciparum merozoite surface protein 1–19kd fragment
LSA1: Liver stage antigen 1
SE: P. falciparum schizont extract
CHU: University Hospital Centre
MBA: Multiplex bead-based assay
SCR: Sero-conversion rate
SRR: Sero-reversion rate
AR: Asymptomatic recruitment
SM: Severe malaria cases
LLINs: Long lasting impregnated bed nets
ACT: Artemisinin-based combination therapy
EIR: Entomological inoculation rate
IgG: Immunoglobulin G
WHO. World Malaria Report 2016. Geneva: World Health Organization; 2016. http://www.who.int/malaria/publications/world-malaria-report-2016/en/.
WHO. World Malaria Report 2008. Geneva: World Health Organization; 2008. http://www.who.int/malaria/publications/atoz/9789241563697/en/index.html.
Trape JF, Tall A, Diagne N, Ndiath O, Ly AB, Faye J, et al. Malaria morbidity and pyrethroid resistance after the introduction of insecticide-treated bednets and artemisinin-based combination therapies: a longitudinal study. Lancet Infect Dis. 2011;11:925–32.
Wotodjo AN, Doucoure S, Gaudart J, Diagne N, Diene Sarr F, Faye N, et al. Malaria in Dielmo, a Senegal village: is its elimination possible after seven years of implementation of long-lasting insecticide-treated nets? PLoS ONE. 2017;12:e0179528.
Trape JF, Tall A, Sokhna C, Ly AB, Diagne N, Ndiath O, et al. The rise and fall of malaria in a West African rural community, Dielmo, Senegal, from 1990 to 2012: a 22 year longitudinal study. Lancet Infect Dis. 2014;14:476–88.
Corran P, Coleman P, Riley E, Drakeley C. Serology: a robust indicator of malaria transmission intensity? Trends Parasitol. 2007;23:575–82.
Crompton PD, Kayala MA, Traore B, Kayentao K, Ongoiba A, Weiss G, et al. A prospective analysis of the Ab response to Plasmodium falciparum before and after a malaria season by protein microarray. Proc Natl Acad Sci USA. 2010;107:6958–63.
Drakeley C, Cook J. Potential contribution of sero-epidemiological analysis for monitoring malaria control and elimination: historical and current perspectives. Adv Parasitol. 2009;69:299–352.
Elliott SR, Fowkes FJ, Richards JS, Reiling L, Drew DR, Beeson JG. Research priorities for the development and implementation of serological tools for malaria surveillance. F1000 Prime Rep. 2014;6:100.
Osier FH, Fegan G, Polley SD, Murungi L, Verra F, Tetteh KK, et al. Breadth and magnitude of antibody responses to multiple Plasmodium falciparum merozoite antigens are associated with protection from clinical malaria. Infect Immun. 2008;76:2240–8.
Richards JS, Arumugam TU, Reiling L, Healer J, Hodder AN, Fowkes FJ, et al. Identification and prioritization of merozoite antigens as targets of protective human immunity to Plasmodium falciparum malaria for vaccine and biomarker development. J Immunol. 2013;191:795–809.
Oeuvray C, Theisen M, Rogier C, Trape JF, Jepsen S, Druilhe P. Cytophilic immunoglobulin responses to Plasmodium falciparum glutamate-rich protein are correlated with protection against clinical malaria in Dielmo, Senegal. Infect Immun. 2000;68:2617–20.
Perraut R, Joos C, Sokhna C, Polson HE, Trape JF, Tall A, et al. Association of antibody responses to the conserved Plasmodium falciparum Merozoite Surface Protein 5 with protection against clinical malaria. PLoS ONE. 2014;9:e101737.
Perraut R, Marrama L, Diouf B, Sokhna C, Tall A, Nabeth P, et al. Antibodies to the conserved C-terminal domain of the Plasmodium falciparum merozoite surface protein 1 and to the merozoite extract and their relationship with in vitro inhibitory antibodies and protection against clinical malaria in a Senegalese village. J Infect Dis. 2005;191:264–71.
Perraut R, Varela ML, Joos C, Diouf B, Sokhna C, Mbengue B, et al. Association of antibodies to Plasmodium falciparum merozoite surface protein-4 with protection against clinical malaria. Vaccine. 2017;35:6720–6.
Roussilhon C, Oeuvray C, Muller-Graf C, Tall A, Rogier C, Trape JF, et al. Long-term clinical protection from falciparum malaria is strongly associated with IgG3 antibodies to merozoite surface protein 3. PLoS Med. 2007;4:e320.
Toure-Balde A, Perlaza BL, Sauzet JP, Ndiaye M, Aribot G, Tall A, et al. Evidence for multiple B- and T-cell epitopes in Plasmodium falciparum liver-stage antigen 3. Infect Immun. 2009;77:1189–96.
Dodoo D, Aikins A, Kusi KA, Lamptey H, Remarque E, Milligan P, et al. Cohort study of the association of antibody levels to AMA1, MSP119, MSP3 and GLURP with protection from clinical malaria in Ghanaian children. Malar J. 2008;7:142.
Richards JS, Stanisic DI, Fowkes FJ, Tavul L, Dabod E, Thompson JK, et al. Association between naturally acquired antibodies to erythrocyte-binding antigens of Plasmodium falciparum and protection from malaria and high-density parasitaemia. Clin Inf Dis. 2010;51:e50–60.
Taylor RR, Allen SJ, Greenwood BM, Riley EM. IgG3 antibodies to Plasmodium falciparum merozoite surface protein 2 (MSP2): increasing prevalence with age and association with clinical immunity to malaria. Am J Trop Med Hyg. 1998;58:406–13.
Kinyanjui SM, Bejon P, Osier FH, Bull PC, Marsh K. What you see is not what you get: implications of the brevity of antibody responses to malaria antigens and transmission heterogeneity in longitudinal studies of malaria immunity. Malar J. 2009;8:242.
Perraut R, Mercereau-Puijalon O, Diouf B, Tall A, Guillotte M, Le Scanf C, et al. Seasonal fluctuation of antibody levels to Plasmodium falciparum parasitized red blood cell-associated antigens in two Senegalese villages with different transmission conditions. Am J Trop Med Hyg. 2000;62:746–51.
Helb DA, Tetteh KK, Felgner PL, Skinner J, Hubbard A, Arinaitwe E, et al. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities. Proc Natl Acad Sci USA. 2015;112:E4438–47.
Koffi D, Toure AO, Varela ML, Vigan-Womas I, Beourou S, Brou S, et al. Analysis of antibody profiles in symptomatic malaria in three sentinel sites of Ivory Coast by using multiplex, fluorescent, magnetic, bead-based serological assay (MAGPIX). Malar J. 2015;14:509.
Koffi D, Varela ML, Loucoubar C, Beourou S, Vigan-Womas I, Toure A, et al. Longitudinal analysis of antibody responses in symptomatic malaria cases do not mirror parasite transmission in peri-urban area of Côte d'Ivoire between 2010 and 2013. PLoS ONE. 2017;12:e0172899.
Perraut R, Varela ML, Loucoubar C, Niass O, Sidibe A, Tall A, et al. Serological signatures of declining exposure following intensification of integrated malaria control in two rural Senegalese communities. PLoS ONE. 2017;12:e0179146.
Perraut R, Guillotte M, Drame I, Diouf B, Molez JF, Tall A, et al. Evaluation of anti-Plasmodium falciparum antibodies in Senegalese adults using different types of crude extracts from various strains of parasite. Microbes Infect. 2002;4:31–5.
Betsi AN. La riziculture et le paludisme dans la région forestière ouest de la Côte d'Ivoire. Cah Agricultures. 2003;12:341–6.
Trape JF, Rogier C. Combating malaria morbidity and mortality by reducing transmission. Parasitol Today. 1996;12:236–40.
Diop F, Diop G, Niang M, Diouf B, Ndiaye D, Richard V, et al. The value of local malaria strains for serological studies: local strains versus Palo Alto reference strain. Malar J. 2015;14:229.
Perraut R, Marrama L, Diouf B, Fontenille D, Tall A, Sokhna C, et al. Distinct surrogate markers for protection against Plasmodium falciparum infection and clinical malaria identified in a Senegalese community after radical drug cure. J Infect Dis. 2003;188:1940–50.
Ambrosino E, Dumoulin C, Orlandi-Pradines E, Remoue F, Toure-Balde A, Tall A, et al. A multiplex assay for the simultaneous detection of antibodies against 15 Plasmodium falciparum and Anopheles gambiae saliva antigens. Malar J. 2010;9:317.
Kerkhof K, Sluydts V, Willen L, Kim S, Canier L, Heng S, et al. Serological markers to measure recent changes in malaria at population level in Cambodia. Malar J. 2016;15:529.
Perraut R, Richard V, Varela ML, Trape JF, Guillotte M, Tall A, et al. Comparative analysis of IgG responses to Plasmodium falciparum MSP1p19 and PF13-DBL1 alpha1 using ELISA and a magnetic bead-based duplex assay (MAGPIX(R)-Luminex) in a Senegalese meso-endemic community. Malar J. 2014;13:410.
Perraut R, Varela ML, Mbengue B, Guillotte M, Mercereau-Puijalon O, Vigan-Womas I. Standardization of a multiplex magnetic bead-based assay for simultaneous detection of IgG to Plasmodium antigens. J Immunol Tech Infect Dis. 2015;4:1–8.
Greenhouse B, Ho B, Hubbard A, Njama-Meya D, Narum DL, Lanar DE, et al. Antibodies to Plasmodium falciparum antigens predict a higher risk of malaria but protection from symptoms once parasitemic. J Infect Dis. 2011;204:19–26.
O'Flaherty K, Maguire J, Simpson JA, Fowkes FJI. Immunity as a predictor of anti-malarial treatment failure: a systematic review. Malar J. 2017;16:158.
Solomon T, Loha E, Deressa W, Balkew M, Gari T, Overgaard HJ, et al. Bed nets used to protect against malaria do not last long in a semi-arid area of Ethiopia: a cohort study. Malar J. 2018;17:239.
Ishengoma DS, Mmbando BP, Mandara CI, Chiduo MG, Francis F, Timiza W, et al. Trends of Plasmodium falciparum prevalence in two communities of Muheza district North-eastern Tanzania: correlation between parasite prevalence, malaria interventions and rainfall in the context of re-emergence of malaria after two decades of progressively declining transmission. Malar J. 2018;17:252.
Niang M, Thiam LG, Sane R, Diagne N, Talla C, Doucoure S, et al. Substantial asymptomatic submicroscopic Plasmodium carriage during dry season in low transmission areas in Senegal: implications for malaria control and elimination. PLoS ONE. 2017;12:e0182189.
Kwenti TE, Moye AL, Wiylanyuy AB, Njunda LA, Nkuo-Akenji T. Variation in the immune responses against Plasmodium falciparum merozoite surface protein-1 and apical membrane antigen-1 in children residing in the different epidemiological strata of malaria in Cameroon. Malar J. 2017;16:453.
We are particularly grateful to Dr Shirley Longacre (Vaximax, Paris), who generously provided the baculovirus Plasmodium falciparum MSP1p19 antigen. We gratefully thank Drs Odile Mercereau-Puijalon, Aissatou Touré and Pr Alioune Dieye for constant support, Joseph Faye for Dielmo/Ndiop database management and Cheikh Loucoubar for bioinformatics support. We are indebted to Dr Jean-François Trape and Cheikh Sokhna (IRD, Dakar) for their institutional support in the follow-up of the Dielmo and Ndiop villages. We gratefully thank all villagers from Dielmo and Ndiop for their generous and committed long-term participation in the study. We thank all patients from Côte d'Ivoire for their participation in the survey.
The work was supported by grants from the Institut Pasteur Foundation, the prix Jacques Piraud of the Fondation pour la Recherche Médicale and grants from Institut Pasteur ACIP 25_2012 and from the Rotary International associated with the Rotary Paris Alliance and Rotary Dakar Almadies.
Ronald Perraut
Present address: Centre Pasteur du Cameroun, Annexe de Garoua, Garoua, Cameroun
Marie-Louise Varela, David Koffi, André Offianan Touré and Ronald Perraut contributed equally to the work reported
Unité d'Immunologie, Institut Pasteur de Dakar, Dakar, Senegal
Marie-Louise Varela, Makhtar Niang & Ronald Perraut
Unité de Paludologie, Institut Pasteur de Côte d'Ivoire, Abidjan, Côte d'Ivoire
David Koffi & André Offianan Touré
Malaria Unit, Parasites and hosts, Institut Pasteur, Paris, France
Unité d'Epidémiologie, Institut Pasteur de Dakar, Dakar, Senegal
Fatoumata Diene Sarr
Unité d'Immunogénétique, Institut Pasteur de Dakar, Dakar, Senegal
Babacar Mbengue & Ronald Perraut
Service d'Immunologie FMPO, Université Cheikh Anta Diop de Dakar, Dakar, Senegal
Babacar Mbengue
Marie-Louise Varela
David Koffi
Makhtar Niang
André Offianan Touré
RP, AO and MLV designed the study. FDS, DK, AO and MLV supervised the collection of field samples and clinical data. MLV, DK and RP conducted the laboratory experiments and statistical analyses, with help and input from MN, BM and FDS. Advanced statistics and mathematical models were done by MW; analysis of data was done by RP, MLV and BM. MW, MLV and RP drafted the paper, with input and comments from all other authors. Figures and tables were prepared by RP, MW and MLV with input from all authors. All authors read and approved the final manuscript.
Correspondence to Ronald Perraut.
In Sénégal, the follow-up in the endemic villages of Dielmo and Ndiop is a long-term longitudinal survey to study the acquisition and maintenance of natural immunity. The Dielmo/Ndiop project has been carried out over 25 years and is presently ongoing. The project protocol and objectives were carefully explained to the assembled villagers, and informed written consent was obtained from all participants or their parents or guardians. Consent is individually renewed from all subjects; anyone can withdraw from the study and the follow-up procedure at any time. The protocol was approved by the Senegalese National Health Research Ethics Committee. An agreement between the Fondation Institut Pasteur de Dakar, Institut de Recherche pour le Développement (IRD) and the Ministère de la Santé et de la Prévention of Senegal defines all research activities.
In Côte d'Ivoire, the protocol of surveillance was approved by the National Committee of the Ministry of Health as part of the national procedure of malaria surveillance in the health centres of the country. Individual informed written consent was obtained from participants/parents/guardians. In the case of an illiterate patient, his/her thumb impression and the signature of an independent witness were obtained. The study was conducted in accordance with local laws and regulations and the International Conference on Harmonization—Good Clinical Practice (ICH-GCP). The protocol was reviewed and approved by the Comité National d'Ethique et de Recherche de Côte d'Ivoire (N°56/MSLS/CNER-dkn). All clinical cases were treated and followed up according to the standard national procedure.
The authors declare that no competing interests exist.
Additional file 1: Table S1.
Estimates from sero-catalytic models. Statistical inference was implemented in a Bayesian framework with uniform priors. Parameters are presented as median and 95% credible intervals of the estimated posterior distributions.
Additional file 2: Fig. S1.
Sero-catalytic models fitted to data from antibody responses to eight antigens plus schizont extract measured in cross-sectional studies done in 2002 and 2013. Age-stratified sero-prevalence of anti-malarial antibodies in Dielmo and Ndiop. Data are plotted as points with vertical bars representing 95% confidence intervals. The fitted lines represent the posterior median predictions of the sero-catalytic model, and the shaded region denotes the 95% credible interval. The third and sixth rows show the estimated historical trends in transmission as measured by changes in sero-conversion rate. Shaded regions denote the 95% credible interval.
Varela, ML., Koffi, D., White, M. et al. Practical example of multiple antibody screening for evaluation of malaria control strategies. Malar J 19, 117 (2020). https://doi.org/10.1186/s12936-020-03186-9
Multiple antigens
MAGPIX
Symptomatic malaria
Asymptomatic carriage
Electronic – Complex Impedances
What does it mean to have a complex impedance?
For example, the impedance of a capacitor (in the Laplace domain?) is given by 1/sC (I believe), which equates to \$ \dfrac{1}{j \cdot 2 \pi \cdot f \cdot C}\$ when transients are neglected. What does it mean for the impedance to be imaginary?
I'm currently in my 2nd year of Electrical Engineering at University so, if possible, I'd appreciate a mathematically valid and thorough response if it's not too much trouble, with the reference of study material (web and paper resources) ideal.
TL;DR The imaginary part of the impedance tells you the reactive component of the impedance; this is responsible (among other things) for the difference in phase between current and voltage and for the reactive power used by the circuit.
The underlying principle is that any periodic signal can be treated as the sum of (sometimes infinitely many) sine waves called harmonics, with equally spaced frequencies. Each of them can be treated separately, as a signal of its own.
For these signals you use a representation like: $$ v(t) = V_{0} \cos (2 \pi f t + \phi) = \Re \{ V_{0}e^{j (2 \pi f t + \phi)} \} $$
And you can see that we already jumped in the domain of complex numbers, because you can use a complex exponential to represent rotation.
So impedance can be active (resistance) or reactive (reactance); while the former by definition doesn't affect the phase of signals (\$ \phi \$), the reactance does, so using complex numbers it is possible to evaluate the variation in phase introduced by the reactance.
So you obtain: $$ V = I \cdot Z = I \cdot |Z| \cdot e^{j \theta} $$
where |Z| is the magnitude of the impedance, given by: $$|Z|=\sqrt{R^2+X^2}$$
and theta is the phase introduced by the impedance, and is given by: $$\theta = \arctan \left( \frac{X}{R} \right) $$
When applied to the previous function, it becomes: $$ v(t) = \Re \{ I_{0}|Z|e^{j (2 \pi f t + \phi + \theta) } \} = I_{0} |Z| \cos (2 \pi f t + \phi + \theta ) $$
Let's consider the ideal capacitor: its impedance will be \$ \frac{1}{j \omega C} = -\frac{j}{\omega C} \$, which is imaginary and negative; if you put it on the trigonometric circle, you obtain a phase of -90°, which means that with a purely capacitive load the voltage will be 90° behind the current.
So why?
Let's say that you want to sum two impedances, 100 Ohm and 50+j50 Ohm (or, without complex numbers, \$ 70.7 \angle 45 ^\circ \$ ). Then with complex numbers you sum the real and imaginary parts and obtain 150+j50 Ohm.
Without using complex numbers the thing is quite a bit more complicated, as you can either use cosines and sines (but that is the same as using complex numbers) or get into a mess of magnitudes and phases. It's up to you :).
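For what it's worth, here is a quick Python sketch of the example above, using the built-in complex type to do the bookkeeping:

```python
import cmath, math

z1 = 100 + 0j    # purely resistive: 100 Ohm
z2 = 50 + 50j    # 70.7 Ohm at +45 degrees
z = z1 + z2      # series sum: (150+50j) Ohm

print(abs(z))                         # |Z| = sqrt(150^2 + 50^2) ~ 158.1 Ohm
print(math.degrees(cmath.phase(z)))   # theta = atan(50/150) ~ 18.4 degrees
```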
Some additional notions, trying to address your questions:
The harmonics representation of signals is usually addressed by Fourier series decomposition:
$$ v(t) = \sum_{- \infty}^{+ \infty} c_{n}e^{jnt} , \text{ where } c_{n} = \frac{1}{2 \pi } \int_{-\pi}^{\pi} v(t)e^{-jnt} \, dt $$
The complex exponential is related to the cosine also by the Euler's formula:
$$ cos(x) = \frac{e^{ix}+e^{-ix}}{2} $$
Solving three impedances in series
Let's translate your words into an equation for the equivalent impedance.
Since C1 and R1 are in parallel
$$Z_1 = R_1||\frac{1}{sC_1}$$
likewise, for C2 and R2 in parallel:
$$Z_2 = R_2||\frac{1}{sC_2}$$
and the equivalent impedance of the whole series chain is
$$Z_E = Z_1 + Z_2 + R_3$$
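A quick numerical sanity check of \$Z_E\$; note that the test frequency and component values below are made up purely for illustration:

```python
import math

def parallel(a, b):
    # impedance of two elements in parallel
    return a * b / (a + b)

f = 1e3                   # test frequency in Hz (arbitrary choice)
s = 1j * 2 * math.pi * f  # evaluate on the imaginary axis: s = jw

R1, C1 = 1e3, 100e-9      # hypothetical component values
R2, C2 = 2.2e3, 47e-9
R3 = 470.0

Z1 = parallel(R1, 1 / (s * C1))
Z2 = parallel(R2, 1 / (s * C2))
ZE = Z1 + Z2 + R3
print(ZE, abs(ZE))        # complex impedance and its magnitude
```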
Electronic – Selecting different impedances than 50 ohm in DDR3
There are no real downsides to any impedance for EMI, as long as you have matched impedances at the source, termination and trace.
One thing that many fail to consider is the typical routing of DDR2/3, with a data line that goes far enough from the CPU to be considered a transmission line, then T's off to reach each chip (or 4 chips, etc.). The 50 ohm impedance just dropped to 25 ohm, with two traces in parallel. This creates an impedance mismatch, which always creates a reflected signal and noise on the line.
The proper way to handle this is to T the lines either close enough to the CPU to still be considered a lumped length, with a series termination of 1/2 the line impedance at the T. The signal then sees a 25 ohm resistor into a 25 ohm parallel set of 50 ohm transmission lines, and everything is good. Alternatively, this can be done where the T's are short enough to be considered lumped at the two RAM chips.
A similar setup can be used to match differing impedances between CPU and RAM, but it is best to keep the SoC and RAM source and termination impedances the same, then make the traces match.
If you have differing impedance only at the RAM side, you should be able to use parallel or series termination resistors to match to that.
neverendingbooks
big Witt vectors for everyone (1/2)
Published February 2, 2010 by lievenlb
Next time you visit your math-library, please have a look whether these books are still on the shelves: Michiel Hazewinkel's Formal groups and applications, William Fulton's and Serge Lang's Riemann-Roch algebra and Donald Knutson's lambda-rings and the representation theory of the symmetric group.
I wouldn't be surprised if one or more of these books are borrowed out, probably all of them to the same person. I'm afraid I'm that person in Antwerp…
Lately, there's been a renewed interest in $\lambda $-rings and the endo-functor W assigning to a commutative algebra its ring of big Witt vectors, following Borger's new proposal for a geometry over the absolute point.
However, as Hendrik Lenstra writes in his 2002 course-notes on the subject Construction of the ring of Witt vectors : "The literature on the functor W is in a somewhat unsatisfactory state: nobody seems to have any interest in Witt vectors beyond applying them for a purpose, and they are often treated in appendices to papers devoted to something else; also, the construction usually depends on a set of implicit or unintelligible formulae. Apparently, anybody who wishes to understand Witt vectors needs to construct them personally. That is what is now happening to myself."
Before doing a series on Borger's paper, we'd better run through Lenstra's elegant construction in a couple of posts. Let A be a commutative ring and consider the multiplicative group of all 'one-power series' over it $\Lambda(A)=1+t A[[t]] $. Our aim is to define a commutative ring structure on $\Lambda(A) $ taking as its ADDITION the MULTIPLICATION of power series.
That is, if $u(t),v(t) \in \Lambda(A) $, then we define our addition $u(t) \oplus v(t) = u(t) \times v(t) $. This may be slightly confusing, as the ZERO-element in $\Lambda(A),\oplus $ will then turn out to be the constant power series 1…
We are now going to define a multiplication $\otimes $ on $\Lambda(A) $ which is distributive with respect to $\oplus $ and turns $\Lambda(A) $ into a commutative ring with ONE-element the series $~(1-t)^{-1}=1+t+t^2+t^3+\ldots $.
We will do this inductively, so consider $\Lambda_n(A) $ the (classes of) one-power series truncated at term n, that is, the kernel of the natural augmentation map between the multiplicative group-units $~A[t]/(t^{n+1})^* \rightarrow A^* $.
Again, taking multiplication in $A[t]/(t^{n+1}) $ as a new addition rule $\oplus $, we see that $~(\Lambda_n(A),\oplus) $ is an Abelian group, whence a $\mathbb{Z} $-module.
For all elements $a \in A $ we have a scaling operator $\phi_a $ (sending $t \rightarrow at $) which is an A-ring endomorphism of $A[t]/(t^{n+1}) $, in particular multiplicative wrt. $\times $. But then, $\phi_a $ is an additive endomorphism of $~(\Lambda_n(A),\oplus) $, so is an element of the endomorphism-RING $End_{\mathbb{Z}}(\Lambda_n(A)) $. Because composition (being the multiplication in this endomorphism ring) of scaling operators is clearly commutative ($\phi_a \circ \phi_b = \phi_{ab} $) we can define a commutative RING $E $ being the subring of $End_{\mathbb{Z}}(\Lambda_n(A)) $ generated by the operators $\phi_a $.
The action turns $~(\Lambda_n(A),\oplus) $ into an E-module and we define an E-module morphism $E \rightarrow \Lambda_n(A) $ by $\phi_a \mapsto \phi_a((1-t)^{-1}) = (1-at)^{-1} $.
All of this looks pretty harmless, but the upshot is that we have now equipped the image of this E-module morphism, say $L_n(A) $ (which is the additive subgroup of $~(\Lambda_n(A),\oplus) $ generated by the elements $~(1-at)^{-1} $) with a commutative multiplication $\otimes $ induced by the rule $~(1-at)^{-1} \otimes (1-bt)^{-1} = (1-abt)^{-1} $.
Explicitly, $L_n(A) $ is the set of one-truncated polynomials $u(t) $ with coefficients in $A $ such that one can find elements $a_1,\ldots,a_k \in A $ such that $u(t) \equiv (1-a_1t)^{-1} \times \ldots \times (1-a_kt)^{-1}~mod~t^{n+1} $. We multiply $u(t) $ with another such truncated one-polynomial $v(t) $ (taking elements $b_1,b_2,\ldots,b_l \in A $) via
$u(t) \otimes v(t) = ((1-a_1t)^{-1} \oplus \ldots \oplus (1-a_kt)^{-1}) \otimes ((1-b_1t)^{-1} \oplus \ldots \oplus (1-b_lt)^{-1}) $
and using distributivity and the multiplication rule this gives the element $\prod_{i,j} (1-a_ib_jt)^{-1}~mod~t^{n+1} \in L_n(A) $.
Being a ring-quotient of $E $ we have that $~(L_n(A),\oplus,\otimes) $ is a commutative ring and, from the construction, it is clear that $L_n $ behaves functorially.
For rings $A $ such that $L_n(A)=\Lambda_n(A) $ we are done, but in general $L_n(A) $ may be strictly smaller. The idea is to use functoriality and do the relevant calculations in a larger ring $A \subset B $ where we can multiply the two truncated one-polynomials and observe that the resulting truncated polynomial still has all its coefficients in $A $.
Here's how we would do this over $\mathbb{Z} $ : take two irreducible one-polynomials u(t) and v(t) of degrees r resp. s smaller or equal to n. Then over the complex numbers we have
$u(t)=(1-\alpha_1t) \ldots (1-\alpha_rt) $ and $v(t)=(1-\beta_1t) \ldots (1-\beta_st) $. Then, over the field $K=\mathbb{Q}(\alpha_1,\ldots,\alpha_r,\beta_1,\ldots,\beta_s) $ we have that $u(t),v(t) \in L_n(K) $ and hence we can compute their product $u(t) \otimes v(t) $ as before to be $\prod_{i,j}(1-\alpha_i\beta_jt)^{-1}~mod~t^{n+1} $. But then, all coefficients of this truncated K-polynomial are invariant under all permutations of the roots $\alpha_i $ and the roots $\beta_j $ and so are invariant under all elements of the Galois group. But then, these coefficients are algebraic integers lying in $\mathbb{Q} $, whence integers. That is, $u(t) \otimes v(t) \in \Lambda_n(\mathbb{Z}) $. It should already be clear from this that the rings $\Lambda_n(\mathbb{Z}) $ contain a lot of arithmetic information!
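To see the multiplication rule in action over $\mathbb{Z} $, here is a minimal sympy sketch (the function names are mine) working with truncated one-series: with roots $\{1,2\} $ and $\{3\} $ one gets $(1-3t)^{-1}(1-6t)^{-1}~mod~t^{n+1} $.

```python
import sympy as sp

t = sp.symbols('t')
n = 4  # truncation order: we work mod t^(n+1)

def trunc(expr):
    # truncate a power series at order n (i.e., mod t^(n+1))
    return sp.expand(sp.series(expr, t, 0, n + 1).removeO())

def oplus(u, v):
    # "addition" in Lambda_n: ordinary multiplication of one-series
    return trunc(u * v)

def otimes(roots_u, roots_v):
    # multiplication rule on factored one-series:
    # (1 - a t)^(-1) (x) (1 - b t)^(-1) = (1 - a b t)^(-1)
    prod = sp.Integer(1)
    for a in roots_u:
        for b in roots_v:
            prod *= 1 / (1 - a * b * t)
    return trunc(prod)

u = trunc(1 / ((1 - t) * (1 - 2 * t)))   # roots {1, 2}
v = trunc(1 / (1 - 3 * t))               # root  {3}
print(oplus(u, v))                       # u (+) v: plain series product
print(otimes([1, 2], [3]))               # = trunc of 1/((1-3t)(1-6t))
```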
For a general commutative ring $A $ we will copy this argument by considering a free overring $A^{(\infty)} $ (with 1 as one of the base elements) by formally adjoining roots. At level 1, consider $M_0 $ to be the set of all non-constant one-polynomials over $A $ and consider the ring
$A^{(1)} = \bigotimes_{f \in M_0} A[X]/(f) = A[X_f, f \in M_0]/(f(X_f) , f \in M_0) $
The idea being that every one-polynomial $f \in M_0 $ now has one root, namely $\alpha_f = \overline{X_f} $ in $A^{(1)} $. Further, $A^{(1)} $ is a free A-module with basis elements all $\alpha_f^i $ with $0 \leq i < deg(f) $.
Good! We now have at least one root, but we can continue this process. At level 2, $M_1 $ will be the set of all non-constant one-polynomials over $A^{(1)} $ and we use them to construct the free overring $A^{(2)} $ (which now has the property that every $f \in M_0 $ has at least two roots in $A^{(2)} $). And, again, we repeat this process and obtain in succession the rings $A^{(3)},A^{(4)},\ldots $. Finally, we define $A^{(\infty)} = \underset{\rightarrow}{lim}~A^{(i)} $ having the property that every one-polynomial over A splits entirely in linear factors over $A^{(\infty)} $.
But then, for all $u(t),v(t) \in \Lambda_n(A) $ we can compute $u(t) \otimes v(t) \in \Lambda_n(A^{(\infty)}) $. It remains to show that the resulting truncated one-polynomial has all its entries in A. The ring $A^{(\infty)} \otimes_A A^{(\infty)} $ contains two copies of $A^{(\infty)} $, namely $A^{(\infty)} \otimes 1 $ and $1 \otimes A^{(\infty)} $, and the intersection of these two rings is exactly $A $ (here we use the freeness property and the additional fact that 1 is one of the base elements). But then, by functoriality of $L_n $, the element
$u(t) \otimes v(t) \in L_n(A^{(\infty)} \otimes_A A^{(\infty)}) $ lies in the intersection $\Lambda_n(A^{(\infty)} \otimes 1) \cap \Lambda_n(1 \otimes A^{(\infty)})=\Lambda_n(A) $. Done!
Hence, we have endo-functors $\Lambda_n $ in the category of all commutative rings, for every number n. Reviewing the construction of $L_n $ one observes that there are natural transformations $L_{n+1} \rightarrow L_n $ and therefore also natural transformations $\Lambda_{n+1} \rightarrow \Lambda_n $. Taking the inverse limits $\Lambda(A) = \underset{\leftarrow}{lim} \Lambda_n(A) $ we therefore have the 'one-power series' endo-functor
$\Lambda~:~\mathbf{comm} \rightarrow \mathbf{comm} $
which is 'almost' the functor W of big Witt vectors. Next time we'll take you through the identification using 'ghost variables' and how the functor $\Lambda $ can be used to define the category of $\lambda $-rings.
Prove that an equation has no elementary solution
There are methods for proving that a polynomial isn't solvable in radical extensions (see the Abel–Ruffini theorem) or for proving that an integral or a differential equation has no solutions expressible through elementary functions (the Risch algorithm and differential Galois theory).
But I've never seen a proof that an equation (like $xe^x=1$) is not solvable in terms of elementary functions. Is there a mathematical theory for that?
Is there a proof that the solutions of $xe^x=1$ cannot be expressed using basic arithmetic, trig, exponentials, logarithms and a composition of them?
abstract-algebra closed-form elementary-functions
Andrei Kh
A solution of an equation is elementary iff it is an elementary number.
Polynomial functions, rational functions and irrational algebraic functions of one variable are algebraic elementary functions. Their solutions are algebraic numbers. An algebraic number is elementary iff it can be represented as a radical expression. The problem of when an equation of algebraic functions is solvable by radicals is solved by Galois theory. You are asking for the solvability of equations of transcendental elementary functions in terms of elementary expressions.
This part answers when it is possible to find an elementary solution by transforming the equation using only elementary operations that are readable from the equation.
This task can be related to the question of existence of closed-form inverses of the functions which are contained in the equation: If only elementary expressions are allowed, an equation $F(x)=0$ with elementary function $F$ given in closed form can be transformed according to $x$ simply by reading off the partial inverses of $F$ from the equation only if all partial inverses needed are elementary functions.
The unfortunately hardly noticed theorem of Joseph Fels Ritt in Ritt, J. F.: Elementary functions and their inverses. Trans. Amer. Math. Soc. 27 (1925) (1) 68-90 answers which kinds of Elementary functions can have an inverse which is an Elementary function.
Risch also proved this theorem in Risch, R. H.: Algebraic Properties of the Elementary Functions of Analysis. Amer. J. Math 101 (1979) (4) 743-759 by his structure theorem for Elementary functions.
A method of proof for certain transcendental equations is given in Rosenlicht, M.: On the explicit solvability of certain transcendental equations. Publications mathématiques de l'IHÉS 36 (1969) 15-22.
Two methods for simpler transcendental elementary equations are given in [Lin 1983] Ferng-Ching Lin: Schanuel's Conjecture Implies Ritt's Conjectures. Chin. J. Math. 11 (1983) (1) 41-50 and in [Chow 1999] Chow, T.: What is a closed-form number. Am. Math. Monthly 106 (1999) (5) 440-448. Both need the proof of Schanuel's conjecture, which is currently an unsolved mathematical problem.
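As a concrete aside: the root of $xe^x=1$ from the question is $W(1)$, the omega constant, which is not an elementary number if Schanuel's conjecture holds. It is nevertheless trivial to evaluate numerically, e.g. with mpmath:

```python
from mpmath import mp, lambertw, exp

mp.dps = 30          # work with 30 decimal digits
x = lambertw(1)      # the omega constant W(1)
print(x)             # 0.567143290409783872999968662210...
print(x * exp(x))    # sanity check: x * e^x = 1
```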
IV_
January 2012, 32(1): 125-166. doi: 10.3934/dcds.2012.32.125
Spectral analysis for transition front solutions in Cahn-Hilliard systems
Peter Howard and Bongsuk Kwon
Department of Mathematics, Texas A&M University, College Station, TX 77843-3368, United States
Received: July 2010; Revised: March 2011; Published: September 2011
We consider the spectrum associated with the linear operator obtained when a Cahn--Hilliard system on $\mathbb{R}$ is linearized about a transition wave solution. In many cases it is possible to show that the only non-negative eigenvalue is $\lambda = 0$, and so stability depends entirely on the nature of this neutral eigenvalue. In such cases, we identify a stability condition based on an appropriate Evans function, and we verify this condition under strong structural conditions on our equations. More generally, we discuss and implement a straightforward numerical check of our condition, valid under mild structural conditions.
Keywords: transition fronts, stability, Cahn-Hilliard systems, Evans function.
Mathematics Subject Classification: Primary: 35B35, 35P05; Secondary: 35Q9.
Citation: Peter Howard, Bongsuk Kwon. Spectral analysis for transition front solutions in Cahn-Hilliard systems. Discrete & Continuous Dynamical Systems - A, 2012, 32 (1) : 125-166. doi: 10.3934/dcds.2012.32.125
Adverse Selection, Heterogeneous Beliefs, and Evolutionary Learning
Clemens Buchen ORCID: orcid.org/0000-0003-2448-23261 &
Alberto Palermo ORCID: orcid.org/0000-0002-7075-11472
Dynamic Games and Applications volume 12, pages 343–362 (2022)Cite this article
We relax the common assumption of homogeneous beliefs in principal-agent relationships with adverse selection. Principals are competitors in the product market and write contracts also on the base of an expected aggregate. The model is a version of a cobweb model. In an evolutionary learning set-up, which is imitative, principals can have different beliefs about the distribution of agents' types in the population. The resulting nonlinear dynamic system is studied. Convergence to a uniform belief depends on the relative size of the bias in beliefs.
Usually in mechanism design it is assumed that players have a subjective probability distribution over a set of possible elements or outcomes, which represents information privately known to other players. More specifically, in a principal–agent relationship with adverse selection the principal does not know the type of agent that she is matched with, but the distribution of types is common knowledge. Given a belief about this distribution, principals write court-enforceable contracts and agents self-select. Against this backdrop of the standard model, we introduce a bias on the part of the principals concerning their beliefs.
We model an aggregative game where principals are firms in a competitive market. Each principal is randomly matched with an agent who, in exerting effort, generates an output. Principals offer contracts based on the expected aggregate quantity and their belief about the distribution of types in the economy. Whereas the payoff of the agent depends on his (privately known) cost, the principals' payoff is affected by the realized aggregate output and their beliefs.
We then study imitation equilibria in this market characterized by adverse selection and heterogeneous beliefs. The overarching question in the set-up then becomes: What are possible long-run equilibria of the beliefs principals hold? To this end, we formulate conditions under which biased beliefs can persist.
The aggregate of all individual firm decisions has an externality effect on all market participants. Building on that notion, we are interested in the way that a bias can affect that externality and, in a feedback effect, how the externality affects the bias. This means that, on the one hand, and this is to be expected, firms acting on biased beliefs influence the aggregate quantity in the market, because their individual output decisions are changed by the bias. On the other hand, however, convergence toward a beliefs equilibrium depends on the market quantity and the realized profits.
There is a long-term effect on market fundamentals, i.e., price, quantity, utilities and labor contracts. We show that the magnitude of the bias is decisive for the long-run outcomes. As intuition suggests, a large bias would be eradicated by market forces, whereas a modest degree of bias can persist. The reason is that for a range of biases the net of all externalities affecting an individual firm is positive.
Imitation in a game-theoretic setting is developed by Björnerstedt and Weibull [5], Vega-Redondo [22] and Schlag [18], where individuals imitate those strategies that offer higher profits. Apesteguia et al. [2] synthesize these approaches and test the theory with an experiment. Selten and Ostmann [20] study an imitation equilibrium in which higher profits also determine who will be imitated. In addition, they introduce the notion of the reference group, which comprises all other players an individual would consider imitating at all. The precise definition of the reference group depends on the problem at hand: it can be spatial, as in Selten and Apesteguia [19], where firms imitate only neighboring firms, or contractual, as in Rothschild and Stiglitz [15] and Ania et al. [1], where the reference group for principals (while not so named) includes contracts that are similar enough to one's own.
We contribute to this literature on imitation. We assume (w.l.o.g.) that some of the principals are optimistic about the distribution.Footnote 1 This means that they believe that the distribution of types is more favorable than it actually is.Footnote 2 As a result, the profits are different depending on the belief a principal holds. Hence, we study a polymorphic population characterized by unbiased and optimistic principals. To the best of our knowledge, our paper is the first to introduce a bias for the uninformed market side in an evolutionary set-up.Footnote 3 In our model, learning takes place by imitation of other principals' beliefs. That is, information about the beliefs is shared by word-of-mouth communication as in Banerjee and Fudenberg [4]. Alternatively, beliefs can be inferred if the menus of contracts are observable as in Ania et al. [1].
In putting the model squarely in the tradition of imitation, we effectively assume that principals are memoryless about past matches. Instead, we could assume that the lifespan is short. Alternatively, if we assumed principals were able to revise their beliefs as Bayesian updaters, then, with an appeal to the law of large numbers, the answer would be straightforward. As more information accumulates over time, Bayesian updating with perfect memory would ultimately lead to all principals having the same unbiased belief. However, in this scenario, the information requirements are strong, because principals have to live a sufficiently long time to collect enough data points. We imagine a continuum between a perfect Bayesian updater at the one end and imitation only at the other. A Bayesian approach requires that firms potentially live forever and that they (in the long run) perfectly recover the underlying distribution of types. In assuming memorylessness, we are at the one end of the spectrum where the evolution of beliefs is not straightforward. Fundamentally, we are interested in studying potential outcomes if economic actors do not or are unable to accumulate enough observations to recover the true distribution. In this context, the assumption of memorylessness is a mathematical convenience.
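To make the information requirements of the Bayesian benchmark concrete, the following minimal sketch (in Python, with illustrative values that are not part of the model) tracks how a Beta-Binomial updater recovers the true share of low-cost agents from repeated matches:

```python
import random

# The Bayesian benchmark we do NOT use: a principal with a Beta(1, 1)
# prior over the share of low-cost agents updates after every match.
# v_true is an illustrative value, not a parameter of the model.
random.seed(1)
v_true = 0.6
successes, failures = 1, 1   # Beta prior parameters

for t in range(1, 501):
    if random.random() < v_true:   # matched with a low-cost agent
        successes += 1
    else:
        failures += 1
    if t in (10, 100, 500):
        print(f"after {t:>3} matches: posterior mean = "
              f"{successes / (successes + failures):.3f}")
```

The posterior mean settles near the true share only after many matches, which is exactly the amount of data our memoryless principals never accumulate.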
In our set-up, the probability to change beliefs depends on the matching, the propensity to switch, and the payoff difference between principals. We partially follow Selten and Ostmann's [20] notion of reference groups. There it is stated that individuals tend to compare with similar others (i.e., membership of the same reference group). In this sense, we introduce a mechanism that describes the willingness of individuals to compare themselves with others. To be concrete, consider a model with two types of agents. There is one group including the principals matched with a high-cost agent and the second with those matched with a low-cost agent. This lets us define a propensity to compare for each principal, which expresses the willingness to compare with a randomly chosen different principal. Hence, we do not restrict the comparison to a given member of a reference group, but rather we conceive of a probability representing the willingness of a given principal to compare herself with others. Whereas this assumption represents an extension of the notion of reference groups in economics, it is a common view in the social comparison theory, which is commonplace in other disciplines. Starting with Festinger [9], psychologists point out that individuals tend to carry out "social comparisons" preferably (but not exclusively) with similar others. The latter point suggests that our principals treat information gained from different reference groups differently. A precursor to our approach in experimental economics is the work by Todt [21]. The subjects in these early experiments tended to be more open to imitation if the situation of the other party was perceived to be more similar to their own.
Principals offer contracts that stipulate the production of a given quantity of a homogeneous good. Aside from a belief about the distribution of types, each principal also forms an expectation about the aggregate quantity produced in the market. We focus our attention on naive adaptive expectations, which implies that each principal writes a contract assuming that the aggregate quantity in a given period is the same as in the period before. The rationale is the following. Principals in our model do not know the salient characteristics of the market, simply because they are not aware that there are different beliefs. Rational expectations about the quantity would run contrary to this view of the role of principals and therefore are not useful in this respect.Footnote 4 Given the assumption about naive expectations, our model is then a version of a cobweb model where, traditionally, fluctuations arise due to a disconnect between the time quantities are chosen and prices are realized.
We focus on just two possible beliefs, but obviously one could imagine a population in which each principal holds her own prior belief about the distribution. Then, the switching process described above should sooner or later lead to a situation in which the polymorphism of the population consists of either two remaining strategies or a stable configuration with more than two beliefs present. Whereas the latter case would require an analysis of the conditions under which a configuration of multiple beliefs can coexist, the former is a study of convergence toward a unique belief. We focus on this case keeping in mind that this is a "reduced" problem, because we start the analysis at a moment in time in which a potentially large number of beliefs has already been eliminated from the population and where the only polymorphism consists of two beliefs.Footnote 5
The paper proceeds as follows: the next section introduces the interaction of the different groups in the model. The resulting dynamic system is studied in Sect. 3. Finally, we offer some concluding remarks in Sect. 4. All proofs are relegated to the Appendix.
Population Interactions
In Sect. 2.1, we describe the interaction between principals and agents in the stage game of the model. Then, in Sect. 2.2, we set up and describe the interaction among principals with different beliefs. This entails defining a mechanism that allows principals to modify their beliefs and therefore change their contract offers in the stage game. Hence, in the next two subsections we derive the nonlinear map which governs the evolution of the population composition and of the quantity.
Stage Game
There are two large populations of principals and agents of equal size. Each principal wants to delegate a task to an agent in order to produce a quantity q. Agents are heterogeneous with regard to their ability to produce the quantity. They have a linear cost function defined as \(C\left( q,\theta \right) =\theta q\). As is standard, we assume that \(\theta \in \left\{ \underline{\theta },\overline{\theta }\right\} \), where \(\overline{\theta }>\underline{\theta }\) and we denote \(\Delta \theta =\overline{\theta }-\underline{\theta }\). The proportion of agents with marginal cost \(\underline{\theta }\) is \(v\in \left( 0,1\right) \). In the next step, the stage game will be defined.
Principals are heterogeneous in that they hold different beliefs about the distribution of agents' abilities. In particular, we assume that some principals believe that the proportion of low-cost agents in the population is larger than it really is:
Assumption 1
Each principal has a belief \(\phi \) about the prevalence of low-cost agents, with \(\phi \in \{\rho ,v\}\) where \(\rho >v\). We will sometimes refer to those biased principals as optimistic.
The agent's production provides a benefit to a principal i, which is measured by a function \(S(q_t^i,\tilde{q}_t)\), where \(q_t^i\) is the quantity produced by the agent working for principal i in period t and \(\tilde{q}_{t}\) is a sufficient statistic of the aggregate quantity in the market. The precise definition of \(\tilde{q}_{t}\) will be given below.
Timing. Time is discrete. In a generic period t, the fraction of principals who write contracts on the basis of belief v is denoted by \(\alpha _{t}\).
The timing of the game follows the timing of the cobweb model where there is a lag between output decisions and realizations. In our model, this disconnect is between the contracting stage and the observability of outcomes. This means that the principal designs a contract in t for a quantity that is only observed in \(t+1\). The payment is conditioned on the observed quantity as well, contracted in t, but paid only when the quantity is observed. Technically speaking, the contracting stage takes place in a period t, when principals offer menus of contracts for the next period based on the beliefs as in Assumption 1.
The functional form ([23], p. 231) of the benefit that the principal expects to gain in \(t+1\) is:
$$\begin{aligned} S\left[ q_{t+1},\mathbb {E}_{t}\left( \tilde{q}_{t+1}\right) \right] =\beta q_{t+1}-\frac{\left( q_{t+1}\right) ^{2}}{2}+\delta q_{t+1}\mathbb {E}_{t}\left( \tilde{q}_{t+1}\right) \end{aligned}$$
In t, each principal defines a mechanism \(\left\langle q_{t+1}\left( \theta \right) ,w_{t+1}\left( \theta \right) \right\rangle \) which entails a transfer \(w_{t+1}\) for each observed quantity in \(t+1\). We assume \(\beta \) to be a positive constant and \(\delta \in (-1,0)\), which is a measure of the degree of substitutability between principals' outputs. \(\mathbb {E}_{t}\left( \tilde{q}_{t+1}\right) \) denotes the expectation a principal forms in t about the value of \(\tilde{q}_{t+1}\) in \(t+1\), when all production is carried out. We make the following assumption about this expectation.
Assumption 2

In each period t, each principal has a naive expectation about the aggregate quantity in the market: \(\mathbb {E}_{t}\left( \tilde{q}_{t+1}\right) =\tilde{q}_{t}\).
Assumption 2 is the simplest way of modeling adaptive expectations compared to the alternative of Bayesian learning.
The timing for the contracting-production stage in a flow period \(t, t+1\) can be summarized as follows:
(Period t) Each agent realizes his type.
(Period t) Principals write contracts according to their beliefs about the distribution of types and according to naive expectations about the aggregate quantity \(\tilde{q}_{t+1}\).
(Period t) Each agent is randomly matched with a principal and the agent decides whether to accept the contract or not.
(Period \(t+1\)) Contracts are executed and outcomes are realized and observed: profits and payments to agents are realized.
Contracts. In each period t, principals write contracts which entail a rent for each quantity observed in \(t+1\). The quantities contracted in t and observed in \(t+1\) are indicated by \(\underline{q}_{t+1} \triangleq q_{t+1}(\underline{\theta })\) for the low-cost type and \(\overline{q}_{t+1} \triangleq q_{t+1}(\overline{\theta })\) for the high-cost type. We will use either of the two notations where convenient. In addition, a similar notation will hold for the transfers \(w_{t+1}(\theta )\).
We restrict our analysis to direct revelation mechanisms that are truthful. This can be done because the agent's rent is only a function of his principal's contract and of the aggregate quantity in the market in the previous period.
The rent is \(U\left( q_{t+1}\left( \theta \right) ,w_{t+1}\left( \theta \right) ,\theta \right) =w_{t+1}\left( \theta \right) -\theta q_{t+1}\left( \theta \right) \). Moreover, we assume that agents are protected in every state of the world by limited liability on the rent. Formally, each principal maximizes expected profits given the usual incentive (IC) and participation constraints (PC):
$$\begin{aligned}&\underset{\{\underline{q}_{t+1},\underline{w}_{t+1},\overline{q}_{t+1},\overline{w}_{t+1}\}}{\max } \phi \left\{ S\left[ \underline{q}_{t+1},\mathbb {E}_{t}\left( \tilde{q}_{t+1}\right) \right] -\underline{w}_{t+1}\right\} +\left( 1-\phi \right) \left\{ S\left[ \overline{q}_{t+1},\mathbb {E}_{t}\left( \tilde{q}_{t+1}\right) \right] -\overline{w}_{t+1}\right\}&\\&\text {s.t}&\\&w_{t+1}\left( \theta \right) -\theta q_{t+1}\left( \theta \right) \geqslant 0\qquad \forall \,\theta \in \left\{ \underline{\theta },\overline{\theta }\right\} \qquad \left( \text {PCs}\right)&\\&w_{t+1}\left( \underline{\theta }\right) -\underline{\theta }q_{t+1}\left( \underline{\theta }\right) \geqslant w_{t+1}\left( \overline{\theta }\right) -\underline{\theta }q_{t+1}\left( \overline{\theta }\right) \qquad \left( \text {IC}\left( \underline{\theta }\right) \right)&\\&w_{t+1}\left( \overline{\theta }\right) -\overline{\theta }q_{t+1}\left( \overline{\theta }\right) \geqslant w_{t+1}\left( \underline{\theta }\right) -\overline{\theta }q_{t+1}\left( \underline{\theta }\right) \qquad \left( \text {IC}\left( \overline{\theta }\right) \right)&\end{aligned}$$
Recall that principals in this model use the aggregate quantity from the previous period as expectation for the next. That is, with regard to the quantity, the model we present is a cobweb model, because given the timing and the specification of the benefit function it is mathematically equivalent to a model where principals form expectations about the price instead of the quantity with a linear demand function.Footnote 6
Given the standard nature of the maximization problem, the following proposition is straightforward.

Proposition 1

Given different beliefs and the same naive expectations about \(\tilde{q}_{t+1}\), the quantities for the low-cost types are equal, or \(\underline{q}_{t+1}^{v}=\underline{q}_{t+1}^{\rho }\), whereas for the high-cost type we have \(\overline{q}_{t+1}^{v}>\overline{q}_{t+1}^{\rho }\). The rent \(U(\cdot ,\overline{\theta })\) for the high-cost type is equal to zero for both types of principals, whereas for the low-cost type the rent \(U(\cdot ,\underline{\theta })\) is higher with a v-principal than with a \(\rho \)-principal.
For the low-cost agent both types of principals stipulate the same, first-best quantity. However, the \(\rho \)-principal offers a smaller rent, because she mistakenly believes that there are more low-cost agents than there really are. For the high-cost type both contracts offer the same (zero) rent, but the v-principal stipulates a bigger quantity. This is so because the odds of being matched with a low-cost agent appear too large for the \(\rho \)-principal. Given that the quantity for the high-cost type is decreasing in the odds of being matched with one, the quantity of the optimistic principal is set too low.
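The closed-form solutions derived in the Appendix (Eqs. (A.2), (A.3) and the binding constraints) make Proposition 1 easy to verify numerically. The following Python sketch, with purely illustrative parameter values, computes the menu offered by a principal with belief \(\phi \):

```python
def contract_menu(beta, delta, q_agg, theta_low, theta_high, phi):
    """Menu <q(theta), w(theta)> for a principal with belief phi, using
    the closed forms from the Appendix: first-best quantity for the
    low-cost type, downward-distorted quantity and zero rent for the
    high-cost type, and the low-cost rent pinned down by the binding IC."""
    d_theta = theta_high - theta_low
    q_low = beta + delta * q_agg - theta_low                                 # (A.2)
    q_high = beta + delta * q_agg - theta_high - phi / (1 - phi) * d_theta   # (A.3)
    w_high = theta_high * q_high                   # binding PC: zero rent
    w_low = theta_low * q_low + d_theta * q_high   # binding IC(theta_low)
    return (q_low, w_low), (q_high, w_high)

# Illustrative values only: beta = 10, delta = -0.5, expected aggregate = 4
v, rho = 0.5, 0.7
print(contract_menu(10, -0.5, 4, 1, 2, v))    # ((7.0, 12.0), (5.0, 10.0))
print(contract_menu(10, -0.5, 4, 1, 2, rho))  # ((7.0, 10.67), (3.67, 7.33))
# q_low is identical across beliefs; the optimistic rho-principal
# distorts q_high further down and pays a lower low-cost rent.
```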
Quantities. We denote the expected quantity over the different types for a principal with belief \(\phi \) by \(\mathbb {E}_{\theta }\left[ q_{t+1}^{\phi }\left( \theta \right) \right] = v \underline{q}_{t+1}^{\phi } + (1-v) \overline{q}_{t+1}^{\phi }\) and by \(\tilde{q}_{t}=\int _{i} q_{t,i}\, di\) the aggregate quantity in the market, where i is an indicator of the principals in the population. Given the different proportions of principals with different beliefs, an informal appeal to the law of large numbers allows us to write the aggregate quantity as:
$$\begin{aligned} \tilde{q}_{t+1}=\int _{i}q_{t+1,i}\,di=\alpha _{t} \mathbb {E}_{\theta }\left[ q_{t+1}^{v}\left( \theta \right) \right] + \left( 1-\alpha _{t}\right) \mathbb {E}_{\theta }\left[ q_{t+1}^{\rho } \left( \theta \right) \right] \end{aligned}\tag{1}$$
Profits. For contracts stipulated in t, the realized profits in \(t+1\) for each \(\theta \) and for a given belief are functions \(\underline{\pi }_{t+1}\left( \tilde{q}_{t+1},\tilde{q}_{t},\phi , \underline{\theta }\right) \) and \(\overline{\pi }_{t+1}\left( \tilde{q}_{t+1},\tilde{q}_{t},\phi , \overline{\theta }\right) \). The expected profits are \(\mathbb {E}_{\theta }\left[ \pi _{t+1}\left( \tilde{q}_{t+1}, \tilde{q}_{t},\phi ,v\right) \right] \). The presence of \(\tilde{q}_{t}\) comes from the fact that each contracted quantity \(q_{t+1}\left( \theta \right) \) is a function of \(\tilde{q}_{t}\) (Assumption 2).
Proposition 2

Given different beliefs and the same naive expectations about the total quantity, for any realization \(\theta \in \left\{ \underline{\theta },\overline{\theta }\right\} \), the realized profits \(\pi _{t+1}^{\phi }\left( \cdot ,\theta \right) \) are such that:
\(\underline{\pi }_{t+1}^{\rho }>\underline{\pi }_{t+1}^{v}\) with
$$\begin{aligned} \underline{\pi }_{t+1}^{\rho }-\underline{\pi }_{t+1}^{v}= \left( \Delta \theta \right) ^{2}\frac{\rho -v}{\left( 1-\rho \right) \left( 1-v\right) } \end{aligned}\tag{2}$$
\(\overline{\pi }_{t+1}^{v}\lesseqqgtr \overline{\pi }_{t+1}^{\rho }\) with
$$\begin{aligned} \overline{\pi }_{t+1}^{v}-\overline{\pi }_{t+1}^{\rho }=\Delta \theta \frac{\rho -v}{\left( 1-\rho \right) \left( 1-v\right) }\left[ \delta \left( \tilde{q}_{t+1}-\tilde{q}_{t}\right) +\frac{1}{2}\Delta \theta \left( \frac{v}{1-v}+\frac{\rho }{1-\rho }\right) \right] \end{aligned}\tag{3}$$
Parts of the results in Proposition 2 follow directly from Proposition 1. Due to the fact that the unbiased v-principal pays a higher rent for the low-cost agent but produces the same quantity as the biased \(\rho \)-principal, profits must be smaller. This can be seen from Eq. (2). Further, as can be seen in Eq. (3), the difference in profits for the high-cost agent depends on the change of the quantity \(\tilde{q}\) from one period to the next. The v-principal makes a larger profit than the \(\rho \)-principal if the quantity decreases or is constant from one period to the next. The reverse is true if the change is positive and large enough.
To summarize, the basic stage game defines an aggregative game in which the profit of principals in a particular period depends on the belief about the distribution of types, the specific match and the behavior of all other principals, which affects the aggregate quantity in the market.
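Proposition 2 can likewise be checked numerically. The sketch below, again with hypothetical aggregates \(\tilde{q}_{t}\) and \(\tilde{q}_{t+1}\), computes realized profits from the quadratic benefit function and reproduces the gaps in Eqs. (2) and (3):

```python
def realized_profit(q, w, beta, delta, q_agg_next):
    # S(q, q~_{t+1}) - w with the quadratic benefit function of Sect. 2.1
    return beta * q - q ** 2 / 2 + delta * q * q_agg_next - w

beta, delta, q_t, q_t1 = 10.0, -0.5, 4.0, 4.3   # hypothetical aggregates
th_l, th_h, v, rho = 1.0, 2.0, 0.5, 0.7
d = th_h - th_l
profits = {}
for phi in (v, rho):
    q_low = beta + delta * q_t - th_l
    q_high = beta + delta * q_t - th_h - phi / (1 - phi) * d
    w_high = th_h * q_high
    w_low = th_l * q_low + d * q_high
    profits[phi] = (realized_profit(q_low, w_low, beta, delta, q_t1),
                    realized_profit(q_high, w_high, beta, delta, q_t1))

print(profits[rho][0] - profits[v][0])  # d**2*(rho-v)/((1-rho)*(1-v)), Eq. (2)
print(profits[v][1] - profits[rho][1])  # a*(delta*(q_t1 - q_t) + b), Eq. (3)
```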
Evolutionary Learning by Imitation
We use a proportional imitation rule to model the replicator equation ([18]). For that purpose we define the conditional switch rate, which is the probability that at the end of a period a principal changes beliefs. To do that, we allow some principals at the end of each period to observe the profit of a second principal. For each principal two scenarios are possible. Either she meets a principal from the same reference group, who got matched with the same type of agent, or from a different reference group, i.e., a principal who got matched with a different type of agent.
The propensity to compare is a measure of how open a principal is toward comparing her situation to a different principal. In what follows we use the following assumption.
Assumption 3

The propensity to compare is equal to zero if the principals come from different reference groups.
Assumption 3 greatly simplifies the following exposition and analysis, so that the intuitions are not hidden behind the algebra. We shall relax this assumption in Sect. 3.4.
Given the proportion of low-cost types v and the proportion \(\alpha _{t}\) of principals using v, \(P(\phi \rightsquigarrow \lnot \phi )=\alpha _{t}\left( 1-\alpha _{t}\right) \) is the probability that a principal with a belief \(\phi \) meets a principal with the different belief. Since we assume that matching between principals and different types of agents is type-independent, the probability that two principals were matched with a low-cost agent is simply \((v)^2\) and the probability that both were matched with a high-cost agent is given by \((1-v)^2\). Hence, we have the probabilities that two principals with different beliefs and in the same reference group meet:
$$\begin{aligned}&\gamma _{t}^{v\rho }=P\left( v \rightsquigarrow \rho \right) v^{2}=\alpha _{t}\left( 1-\alpha _{t}\right) v^2\\&\gamma _{t}^{\rho v}=P\left( \rho \rightsquigarrow v \right) \left( 1-v\right) ^{2}=\alpha _{t}\left( 1-\alpha _{t}\right) (1-v)^2 \end{aligned}$$
In words, \(\gamma _{t}^{v\rho }\) is the probability that a v-principal would consider switching to belief \(\rho \), with a similar interpretation of \(\gamma _{t}^{\rho v}\). The probability of switching to the other strategy is linearly dependent on the payoff difference. Formally, it is the product \(\Omega \cdot \left[ \pi _{t+1}^{\phi }-\pi _{t+1}^{\lnot \phi }\right] \), where \(\Omega >0\) is chosen to scale the payoff difference in such a way that it can be used as a probability.
There are three mechanisms by which principals can infer whether or not they come from the same reference group: word-of-mouth communication as in Banerjee and Fudenberg [4]; observation of contracts and profits as in Ania et al. [1]; or simple observation of the contracted quantities, since principals matched with low-cost agents stipulate equal quantities.
We assume that principals are memoryless about past plays or past switches and that learning takes place by imitation of other principals' beliefs. This assumption is based on the following considerations. Information about the beliefs is shared by word-of-mouth communication (see above). Alternatively, beliefs can be inferred if the menus of contracts are observable.
Putting the pieces together, the dynamic over time is described by the following equation:
$$\begin{aligned} \alpha _{t+1}=\alpha _{t} + \gamma _{t}^{\rho v}\left\{ \Omega \left[ \overline{\pi }_{t+1}^{v}-\overline{\pi }_{t+1}^{\rho }\right] \right\} -\gamma _{t}^{v\rho }\left\{ \Omega \left[ \underline{\pi }_{t+1}^{\rho }-\underline{\pi }_{t+1}^{v}\right] \right\} \end{aligned}\tag{4}$$
The equation should be read as follows. The fraction of v-principals in a period is equal to the fraction in the previous period plus all \(\rho \)-principals who switch to v minus all v-principals who switch to \(\rho \). From Proposition 2 we know that the term \(\overline{\pi }_{t+1}^{v}-\overline{\pi }_{t+1}^{\rho }\) can be positive or negative depending on the magnitudes and direction of fluctuations of the quantity in the market. If the term is negative the direction of proportional imitation is reversed, which means that the v-principal switches to \(\rho \) with the given probability. The resulting equation is equivalent.Footnote 7 Substituting the specific switch rates defined above, we arrive at the discrete change of \(\alpha \) from one period to the next:
$$\begin{aligned} \alpha _{t+1}=\alpha _{t}+\alpha _{t}\left( 1-\alpha _{t}\right) \Omega \left\{ \left( 1-v\right) ^{2}\left[ \overline{\pi }_{t+1}^{v} -\overline{\pi }_{t+1}^{\rho }\right] -v^{2}\left[ \underline{\pi }_{t+1}^{\rho } -\underline{\pi }_{t+1}^{v}\right] \right\} \end{aligned}$$
To recap, the model aims at combining insights from cobweb models and the problem of asymmetric information, in particular, adverse selection outcomes. The three main assumptions we make reflect this basic goal.
Certainly, one could imagine alternatives to Assumption 1. If one assumes that all principals have the same belief (effectively, \(\rho =v\)), then Eq. (4) simply disappears and the model reverts to a basic cobweb model. If, instead of optimism, one assumed pessimism (\(\rho <v\)), all of the results presented below would be symmetrically reversed.
Assumption 2 is integral to the cobweb model reflecting a version of adaptive expectations. As discussed, rational expectations would go against the spirit of boundedly rational principals. We use naive expectations, which are the simplest version of adaptive expectations taking only one preceding period into account. For an overview of the role of expectations see Evans and Honkapohja [8].
Lastly, Assumption 3 governs the set of possible subjects for comparison for a principal. This assumption comes mainly from Selten and Ostmann [20] and Selten and Apesteguia [19], whose ideas are based on behavioral studies from psychology [9] and early experimental economics (Todt [21]). For the moment the propensity to compare has been set rather narrowly by allowing a principal to compare only with another principal of a different belief if they were matched with the same type of agent.
We shall return to the last two points in the discussion section.
Equilibria, Stability and Dynamics
The economy in our model is governed by Eqs. (1) and (4). They can be rearranged using the functional forms of the quantities and profits. The algebraic derivation can be found in Appendix "Derivation of the Nonlinear Map".
$$\begin{aligned} \alpha _{t+1}&=\alpha _{t}+\alpha _{t}(1-\alpha _{t})a\Omega \left\{ (1-v)^{2} \left\{ \delta \left[ \beta -\overline{\theta }+(\delta -1)\tilde{q}_{t} -\left( 1-\alpha _{t}\right) c\right] +b\right\} -v^{2}\Delta \theta \right\} \end{aligned}\tag{5}$$

$$\begin{aligned} \tilde{q}_{t+1}&=\beta -\overline{\theta }+\delta \tilde{q}_{t}-\left( 1-\alpha _{t}\right) c \end{aligned}\tag{6}$$
where \(a=\Delta \theta \frac{\rho -v}{\left( 1-\rho \right) \left( 1-v\right) }\), \(b=\frac{1}{2}\Delta \theta \left( \frac{v}{1-v}+\frac{\rho }{1-\rho }\right) \) and \(c=\Delta \theta \frac{\rho -v}{1-\rho }\) are used to simplify the expressions.Footnote 8 The dynamic system described by equations (5) and (6) is a map \(X_{t+1}=\Gamma (X_t)\) where \(X_t=(\alpha _t,\tilde{q}_t)\).
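A direct way to explore the map \(\Gamma \) is to iterate Eqs. (5) and (6) numerically. The following Python sketch clamps \(\alpha \) to the unit interval as in footnote 8; all parameter values are illustrative only (with them, \(k<0\), so the orbit approaches \(X^0\)):

```python
def gamma_map(alpha, q, v, rho, d_theta, beta, theta_high, delta, omega):
    """One step of X_{t+1} = Gamma(X_t) from Eqs. (5)-(6); alpha is
    clamped to [0, 1] as in footnote 8."""
    a = d_theta * (rho - v) / ((1 - rho) * (1 - v))
    b = 0.5 * d_theta * (v / (1 - v) + rho / (1 - rho))
    c = d_theta * (rho - v) / (1 - rho)
    drift = (1 - v) ** 2 * (delta * (beta - theta_high + (delta - 1) * q
                                     - (1 - alpha) * c) + b) - v ** 2 * d_theta
    alpha_next = min(1.0, max(0.0, alpha + alpha * (1 - alpha) * a * omega * drift))
    q_next = beta - theta_high + delta * q - (1 - alpha) * c  # Eq. (6) uses alpha_t
    return alpha_next, q_next

# Hypothetical parameter values, chosen only for illustration
params = dict(v=0.6, rho=0.7, d_theta=1.0, beta=10.0,
              theta_high=2.0, delta=-0.5, omega=1.0)
alpha, q = 0.5, 4.0
for _ in range(200):
    alpha, q = gamma_map(alpha, q, **params)
print(alpha, q)
# Here k < 0: alpha tends to 0 and q tends to the lower of the two
# steady-state quantities derived below, roughly 5.11 with these values.
```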
Fixed Points
The nullclines, i.e., the loci of points where a variable does not vary from one period to the next, are plotted in Fig. 1. The dashed line gives the combination of points where \(\tilde{q}_{t+1}=\tilde{q}_t\) from Eq. (6) and all solid lines (including both vertical lines) gives the locus of points where \(\alpha _{t+1}=\alpha _t\) from Eq. (5).
Fig. 1: Two phase diagrams with \(\alpha \) on the horizontal and \(\tilde{q}\) on the vertical axis. The dashed line shows the locus of points where \(\tilde{q}_{t+1} = \tilde{q}_t\), all solid lines show the locus of points where \(\alpha _{t+1}=\alpha _t\). Panel (a) (b) shows the case where \(k<0\) (\(k>0\)) as defined in the text. \(\tilde{q}_{high}\) (\(\tilde{q}_{low}\)) is the fixed point-quantity associated with \(\alpha =1\) (\(\alpha =0\))
To better describe the fixed points and their dependence on the beliefs, we define \(k \triangleq \left( 1-v\right) ^{2}b-v^{2}\Delta \theta \). This k determines the relative location of the nullclines. It will play a crucial role in determining the stability of the steady state. As will become clear, for \(k\ne 0\), any steady state is hyperbolic meaning that the associated eigenvalues are different from unity. For \(k<0\) (\(>0\)) the nullcline giving the steady states for the quantity \(\tilde{q}\) is below (above) the one for \(\alpha \) (see Fig. 1). The third case (not shown) is \(k=0\), when the two diagonal nullclines overlap. From Fig. 1, it is clear that the system admits two or infinitely many fixed points, where the latter occurs only when \(k=0\). Focusing on the two non-degenerate cases (where the nullclines do not overlap) we can claim:
Lemma 1
Whenever \(k=\left( 1-v\right) ^{2}b-v^{2}\Delta \theta \ne 0\), the nonlinear system admits two hyperbolic steady states \(X^{0}=\left( 0,\tilde{q}_{low}\right) \), \(X^{1}=\left( 1,\tilde{q}_{high}\right) \) with
$$\begin{aligned} \tilde{q}_{low}&=\frac{\beta -\overline{\theta }}{1-\delta }-\frac{c}{1-\delta } \\ \tilde{q}_{high}&=\frac{\beta -\overline{\theta }}{1-\delta } \end{aligned}$$
The quantities \(\tilde{q}_{low}\) and \(\tilde{q}_{high}\) are simply the intersections of the nullcline associated with the quantity (the dashed line in Fig. 1) with the vertical parts of the nullclines for \(\alpha \). Either of the two steady states represents a population in which all principals hold the same belief. The presence of two steady states is due to the principals' inability to update their priors in a Bayesian fashion; they rely on imitation instead. Therefore, in a steady state in which all principals have a given belief about the distribution, there is no learning. The aggregate quantities reflect the earlier finding in Proposition 1. There we show that being a \(\rho \)-principal means demanding a lower quantity from the high-cost type, whereas for the low-cost type both principals produce the same quantity, i.e., the commonly known result that there is no distortion "at the top" is preserved. Therefore, the aggregate quantity in the steady state \(X^0\) is lower than in \(X^1\). This leads to higher prices, which can be dubbed a "cartel of the ignorant", because the collusion is not the result of a coordinated action, but of the spillover effects of the imitative learning of its members.
Next, in order to study the stability, the Jacobians are evaluated at the steady states:
$$\begin{aligned} J\left( X^{0}\right) = \left[ \begin{array}{cc} 1+a\Omega k & 0\\ c & \delta \end{array}\right] \qquad J\left( X^{1}\right) =\left[ \begin{array}{cc} 1-a\Omega k & 0\\ c & \delta \end{array}\right] \end{aligned}$$
Using the usual definitions related to local bifurcations, from the Jacobians the following proposition immediately follows:
Proposition 3

Given beliefs v and \(\rho \), the following holds for the system defined in (5) and (6):
The system always has either
a stable and an unstable fixed point, which are both hyperbolic (for \(k\ne 0\)), or
two non-hyperbolic fixed points and infinitely many fixed points (for \(k=0\)).
The system has a local fold bifurcation for both fixed points if \(k=0\).
The system undergoes a transcritical bifurcation: for \(k<0\), \(X^0\) is the stable and \(X^1\) the unstable fixed point; for \(k>0\), it is the other way around.
The local stability comes from the design of the imitation protocol involving realized payoffs. More precisely, the evolutionary pressure is not based on the mere difference in expected payoffs, which is larger for the unbiased principal in the steady state; rather the difference in realized payoffs has to be taken into consideration. Proposition 3 lays bare the mechanics of the exchange of stability synthesized in k. Before we get to the economic interpretation, the following subsection discusses the robustness of this finding.
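The stability classification in Proposition 3 can be checked by computing the eigenvalues of the Jacobians directly; the sketch below uses numpy and illustrative parameter values:

```python
import numpy as np

def steady_state_eigenvalues(v, rho, d_theta, delta, omega):
    """Eigenvalues of J(X^0) and J(X^1) above: delta and 1 +/- a*omega*k."""
    a = d_theta * (rho - v) / ((1 - rho) * (1 - v))
    b = 0.5 * d_theta * (v / (1 - v) + rho / (1 - rho))
    c = d_theta * (rho - v) / (1 - rho)
    k = (1 - v) ** 2 * b - v ** 2 * d_theta
    J0 = np.array([[1 + a * omega * k, 0.0], [c, delta]])
    J1 = np.array([[1 - a * omega * k, 0.0], [c, delta]])
    return k, np.linalg.eigvals(J0), np.linalg.eigvals(J1)

k, eig0, eig1 = steady_state_eigenvalues(v=0.6, rho=0.7, d_theta=1.0,
                                         delta=-0.5, omega=1.0)
print(k, eig0, eig1)
# k < 0 here: both eigenvalues of J(X^0) lie inside the unit circle
# (X^0 stable), while J(X^1) has an eigenvalue above 1 (a saddle).
```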
Two Qualitative Remarks
First, a remark is needed concerning a possible shutdown policy of principals, which in the literature of mechanism design refers to a situation in which principals choose to write contracts only for the low-cost type (e.g., Laffont and Martimort [14], chapter 2). We note that linear demand and supply functions in the standard cobweb model with naive expectations necessarily lead to fluctuations in the quantity (and therefore in the price) such that negative values are unavoidable unless more restrictive assumptions are imposed. Therefore, it appears clear that high values of the quantity \(\tilde{q}_t\) in a preceding period could lead principals (who solve the maximization problem with naive expectations about \(\tilde{q}_t\)) to adopt a shutdown policy for the high-cost type. In our dynamic context, a shutdown policy would apply whenever the gain from a negotiation with this type is, in expectation, negative. We assume that in our model \(\beta \) is large enough so that this never happens.Footnote 9
Second, in the previous section we defined \(\Omega \) as a parameter needed to bound the difference in payoffs to unity such that the whole expression could be considered a probability. The following lemma shows that for a qualitative analysis this is unproblematic.
Lemma 2

For every two different \(\Omega \) and \(\Omega '\) there exists \(\Delta \theta '=\overline{\theta }'-\underline{\theta }'\) such that the system with \(\Omega \) and \(\Delta \theta \) is topologically equivalent near the steady state to the system with \(\Omega '\) and \(\Delta \theta '\).
The result in Lemma 2 implies that the normalization by the parameter \(\Omega \) is not problematic for a local qualitative analysis of the dynamic system given a rescaling of \(\Delta \theta \).
Global Behavior
Having identified the local stability of the steady states, we move from a local to a global analysis of the dynamic system. Given that the phase diagram of our map consists of a possible saddle-sink connection, the study of convergence reduces to identifying the sink, which acts as an attractor, and the saddle, whose invariant unstable manifold acts as a repeller. As seen, this can be done easily by observing that the condition \(k \triangleq \left( 1-v\right) ^{2}b-v^{2}\Delta \theta \ne 0\) determines the magnitude of the eigenvalues and therefore the topological structure of the fixed points. Therefore, any preliminary analysis should start from the relationship between the different beliefs described by k.
As is well known, a standard cobweb model generates oscillating time series and presents a limit-two cycle whenever the ratio between the slopes of the demand and supply functions is equal to \(-1\). The analogy of our set-up with the standard "stable" cobweb model is helpful in this regard (see footnote 6). The mere observation that our map describes a standard linear cobweb model with a shifting supply curve suggests the possible presence of similar patterns. Hence, the analysis should account for the existence of cycles also when the ratio between the slopes is not (necessarily) \(-1\).
Convergence to a Monomorphic State
This section analyzes the global convergence to a monomorphic state where all the principals have the same beliefs. We start with the inspection of the condition \(k \triangleq \left( 1-v\right) ^{2}b-v^{2}\Delta \theta \gtreqless 0\). If \(k=0\), it defines a critical value \(\rho ^{c}\) as a function of the true distribution \((v,1-v)\) of agents' types in the economy. Hence, we refer to \(\rho ^{c}\) as an indicator of the degree of optimism and call principals with a belief above \(\rho ^{c}\) overoptimistic. It holds that if \(\rho >\rho ^{c}\) (\(\rho <\rho ^{c}\)), then \(k>0\) (\(k<0\)). Given that the sign of k determines the magnitude of the two eigenvalues, it is sufficient to analyze how it depends on the relationship between v and \(\rho \). With this aim, we observe:
Theorem 1
Whenever some principals are optimistic, and the map \(\Gamma (X_t)\) only has the two steady states \(X^0\) and \(X^1\), the following holds:
Whenever the proportion of low-cost agents is greater than half of the population (\(v>\frac{1}{2}\)), for all \(\rho <\rho ^{c}\) the population will converge to a state where all principals are optimistic (\(X^0\)). Conversely, for a high degree of optimism (\(\rho >\rho ^{c}\)), the population will converge to a state where all principals are unbiased (\(X^1\)).
For \(v \leqslant \frac{1}{2}\), any \(\rho >v\) is such that \(\rho >\rho ^c\). Hence, for every \(\rho \) the population will converge to a state where all principals are unbiased (\(X^1\)).
To grasp the intuition, consider the two effects of being a \(\rho \)-principal. On the one hand, a \(\rho \)-principal pays a lower informational rent. On the other hand, she reduces the quantity of the high-cost agent more. In addition, given the switching protocol based on the reference group, the comparison between principals is undoubtedly successful for a \(\rho \)-principal matched with a low-cost agent. If \(\rho \) is not too large, the lower informational rent represents the marginal benefit, whereas the lower quantity is the marginal cost. If \(\rho \) is too high (\(>\rho ^c\)), the cost is too high; in fact, in this case, the steady state becomes unstable. An identical rationale explains the (in)stability of the steady state \(X^1\) where all principals hold the unbiased belief.
If the proportion of low-cost types is small (i.e., \(v \leqslant \tfrac{1}{2}\)), then the convergence is necessarily to the unbiased case. The cost of being a \(\rho \)-principal is higher than the benefit. Moreover, as seen, \(\rho \)-principals realize a higher profit with low-cost agents. Intuitively, for v small, matches are more likely to be with a high-cost type and, therefore, \(\rho \)-principals lose their advantage.
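For a worked example of the threshold, setting \(k=0\) and solving for \(\rho \) gives a closed form for \(\rho ^{c}\) in which \(\Delta \theta \) cancels. The following sketch (our own rearrangement of \(k=0\), not a formula stated in the text) evaluates it for several values of v:

```python
def rho_critical(v):
    """Threshold degree of optimism obtained by solving k = 0 for rho
    (Delta-theta cancels): rho^c / (1 - rho^c) = v(3v - 1) / (1 - v)^2."""
    r = v * (3 * v - 1) / (1 - v) ** 2
    return r / (1 + r)

for v in (0.4, 0.5, 0.6, 0.7):
    print(v, round(rho_critical(v), 4))
# For v <= 1/2 the value is at most v, so every rho > v exceeds it
# (Theorem 1, part 2). For v = 0.6 the threshold is 0.75: a bias
# rho in (0.6, 0.75) persists, while rho > 0.75 is competed away.
```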
A Limit-Two Cycle
In the basic cobweb model a cyclical behavior comes from the fact that demand and supply have identical (in absolute value) slopes and that suppliers have naive expectations about the next period's aggregate quantity. In our variant of the cobweb model, cycles can arise even though the underlying cobweb model is stable, because shifting the supply function can induce a cyclical pattern.Footnote 10 Given two states \(X^{\prime }=(\alpha ^{\prime },\tilde{q}^{\prime })\) and \(X^{\prime \prime }=(\alpha ^{\prime \prime },\tilde{q}^{\prime \prime })\) in two different generic time periods, a limit two cycle is defined as:
$$\begin{aligned} \Gamma (X')=X''\quad {\text { and }}\; \Gamma (X'')=X'\quad {\text { for }}\; X' \ne X'' \end{aligned}$$
The following theorem gives conditions under which this can occur.
Theorem 2

All else equal, a necessary condition for the existence of limit-two cycles for the map \(\Gamma \left( X\right) \) is that \(\delta \) is sufficiently small or \(\Delta \theta \) is sufficiently large.
In the standard cobweb model, fluctuations are present and come from the expectation about the aggregate. The evolutionary learning in our model adds an additional factor which influences the supply curve. More precisely, the polymorphic configuration synthesized in \(\alpha \) implies a shift of the supply. If \(\delta \) increases, the demand function becomes flatter making the cobweb stable. Nevertheless, a large enough difference in abilities (measured as the difference in the agents' marginal costs \(\Delta \theta \)) implies larger shifts in the supply over time, which can then again create cyclical patterns. Depending on the interplay between the two parameters \(\delta \) and \(\Delta \theta \), our model can generate more than one limit-two cycle without assuming equal slopes.
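Since Theorem 2 only gives a necessary condition, locating an actual cycle requires a numerical search. The sketch below classifies the long-run orbit of the map for given \((\delta ,\Delta \theta )\); all parameter values are illustrative, and the classifier proves nothing by itself:

```python
def classify_orbit(delta, d_theta, v=0.6, rho=0.7, beta=20.0,
                   theta_high=2.0, omega=0.01, n_burn=20000, tol=1e-8):
    """Classify the long-run orbit of the map (5)-(6) numerically.
    By Theorem 2, cycles can only appear for delta close enough to -1
    or d_theta large enough; this routine merely checks one orbit."""
    a = d_theta * (rho - v) / ((1 - rho) * (1 - v))
    b = 0.5 * d_theta * (v / (1 - v) + rho / (1 - rho))
    c = d_theta * (rho - v) / (1 - rho)

    def step(state):
        alpha, q = state
        drift = (1 - v) ** 2 * (delta * (beta - theta_high + (delta - 1) * q
                                         - (1 - alpha) * c) + b) - v ** 2 * d_theta
        alpha_next = min(1.0, max(0.0,
                                  alpha + alpha * (1 - alpha) * a * omega * drift))
        return alpha_next, beta - theta_high + delta * q - (1 - alpha) * c

    x1 = (0.5, 4.0)
    for _ in range(n_burn):
        x1 = step(x1)
    x2 = step(x1)
    x3 = step(x2)
    if abs(x2[0] - x1[0]) + abs(x2[1] - x1[1]) < tol:
        return "fixed point"
    if abs(x3[0] - x1[0]) + abs(x3[1] - x1[1]) < tol:
        return "limit-two cycle"
    return "other"

for delta in (-0.5, -0.9, -0.99):
    print(delta, classify_orbit(delta, d_theta=1.0))
```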
Discussion

Before we conclude, we would like to return to some of the assumptions made in the model and discuss their implications.
First, the assumption of optimism (Assumption 1) can be reversed. If one assumes pessimism, all findings presented so far will be symmetrically reversed, as well. This implies, for example, for one of the main results in Theorem 1 that there exists a level of pessimism below which (overly pessimistic principals) the population converges to an unbiased equilibrium. The opposite holds when this threshold is not passed.
Second, Assumption 3 about the reference groups for comparison can be relaxed. So far, we have only considered the reference group including all the principals who were matched with the same type. However, it is readily possible to assume that principals matched with a different type also play a role in the imitation. In this case, the general formalization of the model includes a probability, \(\xi \), representing the propensity to compare. This implies that principals are not precluded from comparing themselves with any other principal independent of the match. Technically, this would change the value of the composite parameter k to a new value \(\tilde{k}\).Footnote 11 In our model so far the propensity to compare with a principal whose match differs is set equal to zero. If this is not the case, the result of our main theorem changes slightly. This can be summarized in the following.
Corollary 1
Whenever an optimistic or pessimistic bias is present and the propensities to compare are different, there is a critical value \(\tilde{k}\) such that for \(\tilde{k}<0\), the population converges to a monomorphic biased state.
As we have seen, in the absence of a cycle, the economy converges to one of the two steady states according to the degree of optimism: a high degree leads to the unbiased equilibrium. Enlarging the set of possible references, i.e., increasing \(\xi \), leaves the structure of the results intact. Whenever the propensity to compare with different matches increases, the over-optimism threshold decreases.
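The shift of the threshold can be quantified with the formula for \(\tilde{k}\) from footnote 11, as in the following sketch (illustrative values only):

```python
def k_tilde(v, rho, d_theta, xi):
    """Composite stability parameter with cross-group comparison
    propensity xi (footnote 11): k_tilde = k + xi*v*(1-v)*(b - d_theta)."""
    b = 0.5 * d_theta * (v / (1 - v) + rho / (1 - rho))
    k = (1 - v) ** 2 * b - v ** 2 * d_theta
    return k + xi * v * (1 - v) * (b - d_theta)

for xi in (0.0, 0.5, 1.0):
    print(xi, round(k_tilde(0.6, 0.7, 1.0, xi), 4))
# With these values b > d_theta, so raising xi pushes k_tilde above
# zero: the same bias rho = 0.7 no longer persists once comparisons
# across reference groups become likely enough (Corollary 1).
```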
The model presented is very much in the tradition of approaches modeling boundedly rational individuals' imitation (see Schlag [18] and references therein). Put simply, in all of these approaches, individuals use a version of comparing realized profits. This is true also for our model whenever \(\xi <1\). For completeness' sake, if all principals are potential objects for comparison independent of the match (\(\xi = 1\)), the threshold disappears, and the steady state \(X^0\) (all biased) loses stability. The rationale for this finding is based on the characterization of the resulting protocol in this extreme case. Mathematically, the proportional expected protocol is linear in payoff differences. Consequently, if principals are open to comparing their expected (hypothetical) profit with a randomly matched principal, this would de facto lead to a situation where each principal linearly compares her (expected) profit to the average profit in the markets. Under this scenario, every steady state with a biased belief is unstable with respect to a perturbation with some principals using the true belief. Moreover, the naive expectation assumption (Assumption 2) also gives rise to a linear nullcline describing the evolution of the quantity. The last two points and the absence of stochastic perturbations lead to a case where the nullclines in our phase diagram do not present interior intersections and, therefore, no interior steady states. Whereas our paper aims to provide a simple intuition on how the imitative dynamic affects results under adverse selection, it appears interesting for future research to include more sophisticated learning procedures and alternative imitative protocols (see, e.g., Schlag [18] and Sandholm [16] for discussions about imitative protocols, Evans and Honkapohja [8] for the role of expectation and, Schlag [18] and Hommes [12] for alternative expectation modeling with bounded and behavioral rationality).
In addition, we assume that there are just two types of agents, whereas the mechanism design literature also allows for a continuum of types. From a formal point of view, it is possible to include a continuum of abilities. Then, the belief of the biased principal is represented by a cumulative distribution function, which is first-order stochastically dominated by the true one. In order to allow a comparison between principals one should define a norm for each matched ability level. This would clearly add complexity to the algebra of the model, but would not add anything of substance to our results.
Conclusion

Our paper introduces an evolutionary learning model with beliefs into a market characterized by an adverse selection problem. We relax the common assumption of homogeneous beliefs: principals have one of two possible beliefs about the distribution of the ability of agents, in the sense that some overestimate the true fraction of low-cost agents. In our model, the evolutionary learning takes place in the form of a non-Bayesian updating characterized by imitation. The higher the fraction of principals with a particular belief and the higher the payoff difference between two randomly chosen principals, the higher the probability to switch to this belief. We study convergence toward different compositions of the population showing how heterogeneity drives the economy toward possibly different equilibria.
We show that if the bias is relatively moderate, the learning process leads to a uniformly biased population. The reverse is true for large biases. The model homes in on the externality of a learning process, as the decision to update one's beliefs impacts other market participants. The interplay between quantity decisions based on beliefs, on the one hand, and the effect biased beliefs have on aggregate market outcomes, on the other hand, raises new questions to study in competitive markets.
Footnotes

1. The analysis is carried out with the assumption of an optimistic bias. A pessimistic bias would lead to a specular model with specular results.
2. This is different from Arifovic and Karaivanov [3], who start from an adverse selection model evolving over time assuming that principals are unable to solve the correct maximization problem.
3. There is a related literature on biases of agents, for example with respect to the perception of their own ability ([11]), their own and others' ability ([17]) or the success probability of a project and the agent's contribution to the success ([6, 24]).
4. There is a complementary behavioral approach to our set-up in Esponda [7] and Frick et al. [10]. They go in the direction of aggregation under misperception, where actors would attempt to estimate an aggregate ignoring the effect of biased choices. In our behavioral approach, evolution is not driven by an incomplete estimation but by imitation.
5. Including the belief which turns out to be the true one (and not assuming a situation characterized by two biases) is less restrictive and comes from the aim of showing convergence toward a biased belief, and a possible coexistence of beliefs. A situation with two biases would mean, therefore, assuming that such convergence has already happened. However, a model with only biased beliefs would not substantially change the dynamic.
6. This equivalent model can be summarized as follows:
Each principal maximizes:
$$\begin{aligned} \underset{\{\underline{q}_{t+1},\underline{w}_{t+1},\overline{q}_{t+1}, \overline{w}_{t+1}\}}{\max }\phi \left\{ \mathbb {E}_{t}\left[ P_{t+1}\right] \underline{q}_{t+1}-\tfrac{\underline{q}^2_{t+1} }{2}-\underline{w}_{t+1}\right\} +\left( 1-\phi \right) \left\{ \mathbb {E}_{t}\left[ P_{t+1}\right] \overline{q}_{t+1}-\tfrac{\overline{q}^2_{t+1}}{2} -\overline{w}_{t+1}\right\} \end{aligned}$$
under (ICs) and (PCs), and therefore a linear supply function is obtained.
Principals have naive expectations about the price: \(\mathbb {E}_{t}\left[ P_{t+1}\right] =P_{t}\).
The demand is linear: \(Q_{t+1}=A-BP_{t+1}\).
Market clears: the prices are computed on the demand function.
The connection to our model is established for \(\beta =\frac{A}{B}\) and \(\delta =-\frac{1}{B}\). Our choice of the interval for \(\delta \in (-1,0)\) defines a standard stable cobweb model (in the absence of any kind of heterogeneity of expectations about any variable). See Hommes [12] for an overview and a recent reappreciation of the cobweb model.
7. To see this, write the dynamic for \(\overline{\pi }_{t+1}^{v}<\overline{\pi }_{t+1}^{\rho }\) as \(\alpha _{t+1}=\alpha _{t}-\gamma _{t}^{\rho v}\{ \Omega [\overline{\pi }_{t+1}^{\rho }-\overline{\pi }_{t+1}^{v}]\}-\gamma _{t}^{v\rho } \{\Omega [\underline{\pi }_{t+1}^{\rho }-\underline{\pi }_{t+1}^{v}]\}\), which is equivalent.
8. To ensure that \(\alpha \) never leaves the unit interval the long form of Eq. (5) should be written as: \(\alpha _{t+1}=\min \{ 1,\max \{ \alpha _{t}+\alpha _{t}\left( 1-\alpha _{t}\right) a\Omega \{ \left( 1-v\right) ^{2} \{ \delta \left[ \beta -\overline{\theta }+(\delta -1)\tilde{q}_{t}- \left( 1-\alpha _{t}\right) c\right] +b \} -v^{2}\Delta \theta \} ,0 \} \}.\)
9. Alternatively, this could be dealt with by assuming that in some periods (given some wide fluctuation of the quantity) only contracts for low-cost types are offered. This would imply possible lag periods, in which only principals who got matched with a low-cost agent consider switching. The results would be essentially the same, only complicating the calculus.
10. In our model \(\delta \) is the ratio between the slope of the demand and supply function, where we normalize the slope of the latter to 1. Therefore, if \(\left| \delta \right| =1\), we would have the standard limit-two cycle for the aggregate quantity. See also footnote 6.
11. The change in the reference group would change Eq. (4) in the following way. With \(\xi \) as the propensity to compare with principals with a different match: \(\alpha _{t+1}=\alpha _{t}+\alpha _{t}(1-\alpha _{t}) \Omega ((1-v)^{2}[\overline{\pi }_{t+1}^{v}-\overline{\pi }_{t+1}^{\rho }]-v^{2}[\underline{\pi }_{t+1}^{\rho }-\underline{\pi }_{t+1}^{v}] + \xi \{v(1-v)[\overline{\pi }_{t+1}^{v}-\overline{\pi }_{t+1}^{\rho }]-v(1-v) [\underline{\pi }_{t+1}^{\rho } - \underline{\pi }_{t+1}^{v}]\}) \). The critical k then changes to \(\tilde{k} = k+\xi v(1-v)(b-\Delta \theta )\).
References

[1] Ania AB, Tröger T, Wambach A (2002) An evolutionary analysis of insurance markets with adverse selection. Games Econ Behav 40(2):153–184
[2] Apesteguia J, Huck S, Oechssler J (2007) Imitation - theory and experimental evidence. J Econ Theory 136(1):217–235
[3] Arifovic J, Karaivanov A (2010) Social learning in a model of adverse selection. In: Industrial organization, trade and social interaction: essays in Honour of B. Curtis Eaton. University of Toronto Press, Toronto
[4] Banerjee A, Fudenberg D (2004) Word-of-mouth learning. Games Econ Behav 46(1):1–22
[5] Björnerstedt J, Weibull JW (1996) Nash equilibrium and evolution by imitation. In: Arrow KJ, Colombatto E, Perlman M, Schmidt C (eds) The rational foundations of economic behavior. Macmillan, Houndmills, pp 155–171
[6] de la Rosa LE (2011) Overconfidence and moral hazard. Games Econ Behav 73(2):429–451
[7] Esponda I (2008) Behavioral equilibrium in economies with adverse selection. Am Econ Rev 98(4):1269–1291
[8] Evans GW, Honkapohja S (2001) Learning and expectations in macroeconomics. Princeton University Press, Princeton
[9] Festinger L (1954) A theory of social comparison processes. Hum Relat 7(2):117–140
[10] Frick M, Iijima R, Ishii Y (2020) Stability and robustness in misspecified learning models. Cowles Foundation Discussion Paper 2235
[11] Gervais S, Goldstein I (2007) The positive effects of biased self-perceptions in firms. Rev Finance 11(3):453–496
[12] Hommes CM (2013) Behavioral rationality and heterogeneous expectations in complex economic systems. Cambridge University Press, Cambridge
[13] Kuznetsov YA (1998) Elements of applied bifurcation theory. Springer, New York
[14] Laffont J-J, Martimort D (2002) The theory of incentives: the principal-agent model. Princeton University Press, Princeton
[15] Rothschild M, Stiglitz JE (1976) Equilibrium in competitive insurance markets: an essay on the economics of imperfect information. Q J Econ 90(4):629–649
[16] Sandholm WH (2011) Population games and evolutionary dynamics. MIT Press, Cambridge
[17] Santos-Pinto L (2008) Positive self-image and incentives in organisations. Econ J 118(531):1315–1332
[18] Schlag KH (1998) Why imitate, and if so, how? J Econ Theory 78(1):130–156
[19] Selten R, Apesteguia J (2005) Experimentally observed imitation and cooperation in price competition on the circle. Games Econ Behav 51(1):171–192
[20] Selten R, Ostmann A (2001) Imitation equilibrium. Homo oeconomicus 18(1):111–149
[21] Todt H (1972) Pragmatic decisions on an experimental market. In: Sauermann H (ed) Contributions to experimental economics, pp 608–634
[22] Vega-Redondo F (1997) The evolution of Walrasian behavior. Econometrica 65(2):375–384
[23] Vives X (2001) Oligopoly pricing: old ideas and new tools. MIT Press, Cambridge
[24] Wang J, Zhuang X, Yang J, Sheng J (2014) The effects of optimism bias in teams. Appl Econ 46(32):3980–3994
Open Access funding enabled and organized by Projekt DEAL.
WHU-Otto Beisheim School of Management, Burgplatz 2, 56179, Vallendar, Germany
Clemens Buchen
Institute for Labour Law and Industrial Relations in the European Union (IAAEU), Trier University, Behringstr. 21, 54296, Trier, Germany
Alberto Palermo
Correspondence to Clemens Buchen.
Acknowledgements

We are grateful to seminar participants at the University of Tartu, the University of Marburg, EBS Business School, and the EARIE conference in Munich for comments on a previous version. We thank an associate editor and two anonymous referees for very helpful comments and remarks, which helped us greatly improve the exposition of the paper.
A Appendix
A.1 Proof of Proposition 1
As is standard (see, e.g., Laffont and Martimort [14]), the participation constraint of the low-cost type is implied by PC(\(\overline{\theta }\)) and IC(\(\underline{\theta }\)). The incentive constraint of the high-cost type is slack at the optimum, and the other two constraints are binding. Then, using the binding constraints to substitute wages in the objective function, the maximization problem reads as follows:
$$\begin{aligned} \begin{aligned} \underset{\{\underline{q}_{t+1},\overline{q}_{t+1}\}}{\max }&\phi \{ S[\underline{q}_{t+1},\mathbb {E}_{t}(\tilde{q}_{t+1})]- \underline{\theta } \underline{q}_{t+1}\} - \phi \Delta \theta \overline{q}_{t+1} \\&+(1-\phi )\{ S\left[ \overline{q}_{t+1},\mathbb {E}_{t}\left( \tilde{q}_{t+1}\right) \right] -\overline{\theta }\overline{q}_{t+1}\}. \end{aligned} \end{aligned}\tag{A.1}$$
The quantities at the optimum are defined implicitly by \(S_{\underline{q}}^{\prime }(\cdot )=\underline{\theta }\) and \(S_{\overline{q}}^{\prime }(\cdot )=\overline{\theta }+\frac{\phi }{1-\phi }\Delta \theta \). Substituting \(\mathbb {E}_{t}(\tilde{q}_{t+1})=\tilde{q}_{t}\) and using the specific functional form for \(S(\cdot )\), we obtain that in any generic period the quantities set by a principal are:
$$\begin{aligned} \underline{q}_{t+1}^{v}=\underline{q}_{t+1}^{\rho }=\beta +\delta \tilde{q}_{t}-\underline{\theta } \end{aligned}\tag{A.2}$$

$$\begin{aligned} \overline{q}_{t+1}^{\phi }=\beta +\delta \tilde{q}_{t}-\overline{\theta }-\frac{\phi }{1-\phi }\Delta \theta \end{aligned}\tag{A.3}$$
From \(\rho >v\) it follows that \(\underline{q}_{t+1}^{v}=\underline{q}_{t+1}^{\rho }>\overline{q}_{t+1}^{v}>\overline{q}_{t+1}^{\rho }\).
The binding PC\((\overline{\theta })\) clarifies that the high-cost types realize a zero rent independently of the principal they are matched with. Conversely, from the binding IC\((\underline{\theta })\), we have that the rent of the low-cost types \((\text {rent}^\phi _{t+1}(\underline{\theta }))\) differs according to the principals' belief. It holds:
$$\begin{aligned} \text {rent}^\phi _{t+1}(\underline{\theta })=\Delta \theta \overline{q}^\phi _{t+1}, \end{aligned}\tag{A.4}$$
and therefore \(\text {rent}^v_{t+1}(\underline{\theta })>\text {rent}^\rho _{t+1}(\underline{\theta })\).
A.2 Proof of Proposition 2

Recall that principals design contracts based on the naive expectation \(\mathbb {E}_{t}(\tilde{q}_{t+1})=\tilde{q}_{t}\); meaning that in a generic time t contracts are defined on the basis of quantities as in (A.2) and (A.3). Hence, their choices about quantities in a time \(t+1\) are based on the belief about the aggregate quantity, which in our set-up equals the quantity one period before, \(\tilde{q}_{t}\). However, in \(t+1\) the payoff is affected by the realization of the aggregate quantity \(\tilde{q}_{t+1}\), which is described by (1).
To compute the differences in payoffs, it is useful to compute the difference in quantities for the high-cost type. From Eq. (A.3) we obtain: \(\overline{q}_{t+1}^{v}-\overline{q}_{t+1}^{\rho }=\Delta \theta \left[ \frac{\rho }{1-\rho }-\frac{v}{1-v}\right] \). For a match with a low-cost type the quantity is equal for both principals. Hence, the surpluses are equal and the only difference is in the informational rent paid. It follows:
$$\begin{aligned} \underline{\pi }_{t+1}^{\rho }-\underline{\pi }_{t+1}^{v}= \Delta \theta \overline{q}_{t+1}^{v} - \Delta \theta \overline{q}_{t+1}^{\rho } = (\Delta \theta )^{2}\frac{\rho -v}{(1-\rho )(1-v)}, \end{aligned}\tag{A.5}$$
which is Eq. (2) in the paper.
Conversely, for a match with a high-cost type
$$\begin{aligned} \begin{aligned}&\overline{\pi }_{t+1}^{v}-\overline{\pi }_{t+1}^{\rho }\\&\quad =(\beta -\overline{\theta }+\delta \tilde{q}_{t+1}) \overline{q}_{t+1}^{v} -\frac{ (\overline{q}_{t+1}^{v})^2}{2}-(\beta -\overline{\theta }+\delta \tilde{q}_{t+1}) \overline{q}_{t+1}^{\rho }+\frac{ (\overline{q}_{t+1}^{\rho })^2}{2}\\&\quad =(\beta -\overline{\theta }+\delta \tilde{q}_{t+1})(\overline{q}_{t+1}^{v} -\overline{q}_{t+1}^{\rho })-\frac{(\overline{q}_{t+1}^{v}+\overline{q}_{t+1}^{\rho }) (\overline{q}_{t+1}^{v}-\overline{q}_{t+1}^{\rho })}{2}\\&\quad =(\overline{q}_{t+1}^{v}-\overline{q}_{t+1}^{\rho })\left[ (\beta -\overline{\theta }+\delta \tilde{q}_{t+1})-\frac{(\overline{q}_{t+1}^{v}+\overline{q}_{t+1}^{\rho })}{2} \right] \\&\quad =(\overline{q}_{t+1}^{v}-\overline{q}_{t+1}^{\rho })\left[ (\beta -\overline{\theta }+\delta \tilde{q}_{t+1})-\frac{1}{2} (2 \beta + 2\delta \tilde{q}_{t}-2\overline{\theta }-\frac{v}{1-v}\Delta \theta -\frac{\rho }{1-\rho }\Delta \theta )\right] \\&\quad =\Delta \theta \frac{\rho -v}{\left( 1-\rho \right) \left( 1-v\right) }\left[ \delta \left( \tilde{q}_{t+1}-\tilde{q}_{t}\right) +\frac{1}{2}\Delta \theta \left( \frac{v}{1-v}+\frac{\rho }{1-\rho }\right) \right] , \end{aligned} \end{aligned}\tag{A.6}$$
which is Eq. (3) in the text.
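Both identities can be confirmed symbolically. The following is a minimal sympy sketch (the symbol names are ours, not the paper's) checking that (A.5) and (A.6) follow from the quantities in (A.2) and (A.3):

```python
import sympy as sp

beta, thbar, dth, delta, v, rho, qt, qt1 = sp.symbols(
    'beta thetabar Dtheta delta v rho q_t q_t1')

# High-cost quantities under beliefs v and rho, from (A.3)
qv = beta + delta*qt - thbar - v/(1 - v)*dth
qr = beta + delta*qt - thbar - rho/(1 - rho)*dth

# (A.5): payoff difference for a low-cost match = difference in rents
lhs5 = dth*qv - dth*qr
rhs5 = dth**2*(rho - v)/((1 - rho)*(1 - v))
print(sp.simplify(lhs5 - rhs5))  # -> 0

# (A.6): payoff difference for a high-cost match, evaluated at the
# realized aggregate quantity q_t1
pi_v = (beta - thbar + delta*qt1)*qv - qv**2/2
pi_r = (beta - thbar + delta*qt1)*qr - qr**2/2
rhs6 = dth*(rho - v)/((1 - rho)*(1 - v)) * (
    delta*(qt1 - qt) + dth/2*(v/(1 - v) + rho/(1 - rho)))
print(sp.simplify(pi_v - pi_r - rhs6))  # -> 0
```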
A.3 Derivation of the Nonlinear Map
We start by computing the equation describing the evolution of the aggregate quantity over time. From (1), we know:
$$\begin{aligned} \tilde{q}_{t+1}=\alpha _{t}\mathbb {E}_{\theta }[q_{t+1}^{v} (\theta )]+(1-\alpha _{t})\mathbb {E}_{\theta }[q_{t+1}^{\rho }(\theta )] \end{aligned}$$
(A.7)
Using (A.2) and (A.3), we compute \(\mathbb {E}_{\theta }[q_{t+1}^{v}(\theta )]\) and \(\mathbb {E}_{\theta }[q_{t+1}^{\rho }(\theta )]\), where the expectation is w.r.t. the true realization of the variable \(\theta \) (i.e., the distribution for which it holds \(Pr(\theta =\underline{\theta })=v\)).
$$\begin{aligned} \begin{aligned} \mathbb {E}_{\theta }[q_{t+1}^{v}(\theta )]&= v \underline{q}_{t+1}^{v}+(1-v) \overline{q}_{t+1}^{v} \\&=v (\beta +\delta \tilde{q}_{t}-\underline{\theta }) + (1-v) (\beta +\delta \tilde{q}_{t}-\overline{\theta }-\frac{v}{1-v}\Delta \theta )\\&=\beta -\overline{\theta }+\delta \tilde{q}_{t} \end{aligned} \end{aligned}$$
(A.8)
$$\begin{aligned} \begin{aligned} \mathbb {E}_{\theta }[q_{t+1}^{\rho }(\theta )]&= v \underline{q}_{t+1}^{\rho }+(1-v) \overline{q}_{t+1}^{\rho } \\&= v (\beta +\delta \tilde{q}_{t}-\underline{\theta }) + (1-v) (\beta +\delta \tilde{q}_{t}-\overline{\theta }-\frac{\rho }{1-\rho }\Delta \theta )\\&= \beta -\overline{\theta }+\delta \tilde{q}_{t}+v \Delta \theta -\frac{\rho (1-v)}{1-\rho }\Delta \theta \\&= \beta -\overline{\theta }+\delta \tilde{q}_{t}-c, \end{aligned} \end{aligned}$$
(A.9)
with \(c=\Delta \theta \frac{\rho -v}{1-\rho }\) as defined in the text.
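The two simplifications can be checked the same way; a short sympy sketch (again with our own symbol names, not the paper's):

```python
import sympy as sp

beta, thbar, thlow, delta, v, rho, qt = sp.symbols(
    'beta thetabar thetalow delta v rho q_t')
dth = thbar - thlow  # Delta_theta

q_low = beta + delta*qt - thlow                                    # (A.2)
q_high = lambda phi: beta + delta*qt - thbar - phi/(1 - phi)*dth   # (A.3)

Ev = v*q_low + (1 - v)*q_high(v)     # E_theta[q^v], belief v
Er = v*q_low + (1 - v)*q_high(rho)   # E_theta[q^rho], belief rho

c = dth*(rho - v)/(1 - rho)
print(sp.simplify(Ev - (beta - thbar + delta*qt)))       # -> 0, i.e. (A.8)
print(sp.simplify(Er - (beta - thbar + delta*qt - c)))   # -> 0, i.e. (A.9)
```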
Using (A.8) and (A.9) in (A.7), we immediately obtain:
$$\begin{aligned} \tilde{q}_{t+1}=\beta -\overline{\theta }+\delta \tilde{q}_{t} -\left( 1-\alpha _{t}\right) c, \end{aligned}$$
(A.10)
which is Eq. (6) in the paper. Subtracting \(\tilde{q}_{t}\) from both sides of this equation, we obtain:
$$\begin{aligned} \tilde{q}_{t+1}-\tilde{q}_{t}= \beta -\overline{\theta }+(\delta -1)\tilde{q}_{t}-(1-\alpha _{t})c \end{aligned}$$
(A.11)
Substituting (A.11) in Eq. (A.6) to eliminate its dependence on \(\tilde{q}_{t+1}\) gives the difference in realized payoffs:
$$\begin{aligned} \overline{\pi }_{t+1}^{v}-\overline{\pi }_{t+1}^{\rho }=a\Delta \theta \delta \left\{ \left[ \beta -\overline{\theta }+(\delta -1)\tilde{q}_{t}-\left( 1-\alpha _{t}\right) c\right] +b\right\} \end{aligned}$$
(A.12)
Then, using both differences in realized payoffs (A.5 and A.12) in the replicator equation (4), we obtain (5).
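Equation (5) itself is not reproduced in this appendix, so the sketch below only iterates the aggregate-quantity map (A.10), holding the population share \(\alpha\) fixed; all parameter values are hypothetical and serve purely as an illustration:

```python
# Hypothetical parameter values, for illustration only
beta, thbar, delta = 2.0, 0.5, 0.8
dth, v, rho = 0.3, 0.4, 0.6
c = dth*(rho - v)/(1 - rho)

def step_q(q, alpha):
    """One iteration of the aggregate-quantity map (A.10)."""
    return beta - thbar + delta*q - (1 - alpha)*c

q, alpha = 1.0, 0.5   # arbitrary initial state; alpha is held fixed here,
                      # whereas in the model it co-evolves with q via (5)
for t in range(200):
    q = step_q(q, alpha)

# With |delta| < 1 the iteration settles on the fixed point of (A.10)
print(q, (beta - thbar - (1 - alpha)*c)/(1 - delta))
```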
A.4 Proof of Proposition 3
The proof involves a simple inspection of the eigenvalues of the Jacobians. The two eigenvalues are \(\delta \) and \(1\pm a\Omega k\). Then, for \(k=0\) one eigenvalue crosses the unit circle at both points (fold bifurcation). Moreover, as k changes sign, one point has both eigenvalues smaller than one in modulus, whereas the other becomes a saddle, the two exchanging stability (transcritical bifurcation). This implies that for \(k<0\) the point \(X^0\) is a stable hyperbolic steady state, which corresponds to the situation depicted in Fig. 1a. Accordingly, the reverse case of \(k>0\) is shown in Fig. 1b, in which \(X^1\) is stable.
A.5 Proof of Lemma 2
The following proof relies on the center manifold theorem. The theorem states that whenever the system is close enough to a steady state, the stable and unstable manifolds are tangent to the respective stable and unstable eigenvectors of the linearized system (see, e.g., Kuznetsov [13] page 157). Given the theorem, the proof can be formulated as follows. Given the Jacobians at the steady states, the two eigenvalues are \(\delta \) and \(1\pm a\Omega k\). Since \(\left| \delta \right| <1\), the corresponding eigenvectors are the stable ones; they are invariant and correspond to the vertical lines at \(\alpha =0\) and \(\alpha =1\). Then, for an \(\Omega '\) it is sufficient to define a rescaling of \(\Delta \theta \) such that \(a\Omega k=a'\Omega 'k'\). The rest follows from the center manifold theorem.
A.6 Proof of Theorem 1
The proof is based on the results of Proposition 3 and therefore requires identifying the stable and unstable fixed points. As seen, the stability of the steady states depends on the sign of k. Whenever the proportion of low-cost agents is greater than half of the population (\(v>\frac{1}{2}\)), k can be greater than, smaller than or equal to zero. Recall that \(k=0\) is satisfied for \(\rho =\rho ^{c}\) and that the sign of k depends on the relation between \(\rho \) and \(\rho ^c\). If \(\rho >\rho ^{c}\) then \(k>0\) and, therefore, from Proposition 3 the fixed point \(X^{1}\) is a sink and \(X^{0}\) is a saddle; the opposite is true for \(\rho <\rho ^{c}\), which implies \(k<0\). Hence, given that there are only two fixed points, one stable and the other unstable, for any initial state the population converges to the stable one.
It remains to prove that for \(v \leqslant \tfrac{1}{2}\) every \(\rho >v\) also satisfies \(\rho >\rho ^c\) and, therefore, that \(X^{1}\) is the sink. Indeed, solving \(k=0\) for \(\rho ^c\), we have:
$$\begin{aligned} \rho ^c(v)=\frac{v (3 v-1)}{4 v^2-3 v+1}. \end{aligned}$$
The function \(\rho ^c(v)\) has a unique minimum in the interval \(v\in [0,\tfrac{1}{2}]\) and is therefore U-shaped there. Moreover, it holds \(\rho ^c(v=0)=\rho ^c(v=\tfrac{1}{3})=0\). Hence, for \(v\in (0,\tfrac{1}{3})\) it holds that \(\rho ^c<0\) and therefore every \(\rho>0>\rho ^c\). Conversely, for \(v\in (\tfrac{1}{3}, \tfrac{1}{2}]\) the function \(\rho ^c(v)\) is increasing, with \(\rho ^c(v=\tfrac{1}{2})=\tfrac{1}{2}\). Hence, \(\rho ^c(v)\) always lies below the straight line \(\rho=v\), and therefore every \(\rho >v\) is also such that \(\rho >\rho ^c\).
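A direct numerical check of this last inequality (a small Python sketch):

```python
import numpy as np

def rho_c(v):
    # critical belief solving k = 0; the denominator has no real roots
    return v*(3*v - 1)/(4*v**2 - 3*v + 1)

v = np.linspace(1e-6, 0.5, 1001)
print(np.all(rho_c(v) <= v))    # -> True; equality only at v = 1/2
print(rho_c(1/3), rho_c(0.5))   # -> ~0 and 0.5, as stated above
```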
A.7 Proof of Theorem 2
The aim is to show that there can exist a limit two-cycle. Hence, we proceed by computing the second iterate of the equations describing the evolution of the aggregate quantity and the fraction \(\alpha \). Then, we show that the conditions (\(\delta \) relatively small or \(\Delta \theta \) relatively large) stated in the theorem ensure the existence of the limit two-cycle.
To simplify the algebra, let \(R\equiv a\Omega \left( 1-v\right) ^{2}\frac{\delta }{1+\delta }c\) and \(S\equiv a\Omega k\). Iterating Eq. (6), we compute the second iterate, i.e., \(\tilde{q}_{t+2}\) as dependent on \(\tilde{q}_t\) only. We denote by \(q^{\left( 2\right) } \) the solution obtained by imposing \(\tilde{q}_{t+2}=\tilde{q}_{t}\), which is equal to:
$$\begin{aligned} q^{\left( 2\right) }=\frac{\beta -\overline{\theta }}{1-\delta }- \left( 1-\alpha _{t}\right) \frac{\delta }{1-\delta ^{2}}c- \left( 1-\alpha _{t+1}\right) \frac{1}{1-\delta ^{2}}c. \end{aligned}$$
(A.13)
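This expression can be verified symbolically; a sympy sketch (our own symbol names):

```python
import sympy as sp

beta, thbar, delta, c, a_t, a_t1, q = sp.symbols(
    'beta thetabar delta c alpha_t alpha_t1 q')

step = lambda x, a: beta - thbar + delta*x - (1 - a)*c   # map (A.10)
q2 = step(step(q, a_t), a_t1)                            # second iterate

sol = sp.solve(sp.Eq(q2, q), q)[0]                       # impose q_{t+2} = q_t
target = ((beta - thbar)/(1 - delta)
          - (1 - a_t)*delta*c/(1 - delta**2)
          - (1 - a_t1)*c/(1 - delta**2))
print(sp.simplify(sol - target))  # -> 0, reproducing (A.13)
```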
Inserting (A.13) in (5), and simplifying using the expressions for R and S, we can write:
$$\begin{aligned} \alpha _{t+1}\left[ 1+\alpha _{t}\left( 1-\alpha _{t}\right) R\right] = \alpha _{t}\left[ 1+\alpha _{t}\left( 1-\alpha _{t}\right) R\right] + \alpha _{t}\left( 1-\alpha _{t}\right) S \end{aligned}$$
(A.14)
The same relationship holds for \(\alpha _{t+1}\), \(\alpha _{t+2}\):
$$\begin{aligned} \alpha _{t+2}\left[ 1+\alpha _{t+1}\left( 1-\alpha _{t+1}\right) R\right] =\alpha _{t+1}\left[ 1+\alpha _{t+1}\left( 1-\alpha _{t+1}\right) R\right] +\alpha _{t+1}\left( 1-\alpha _{t+1}\right) S \end{aligned}$$
(A.15)
To simplify the algebra (and, more importantly, the subsequent analysis) even further, we will use the following substitution:
$$\begin{aligned} H := 1 + \alpha _t (1 -\alpha _t) R \end{aligned}$$
With this substitution, it is helpful to rewrite (A.14) as:
$$\begin{aligned} \alpha _{t+1}=\alpha _{t}+\frac{\alpha _{t}\left( 1-\alpha _{t}\right) S}{H} =\alpha _t \frac{H+(1-\alpha _t)S}{H} \end{aligned}$$
To reduce the amount of computations, we will write:
$$\begin{aligned} \alpha _{t+1}=\alpha _t \frac{L}{H}\quad {\text { with }}\; L:= H+(1-\alpha _t)S \end{aligned}$$
(A.16)
Substituting (A.16) in (A.15):
$$\begin{aligned}&\alpha _{t+2}\left( 1+\frac{\alpha _t}{H} L \left( 1-\frac{\alpha _t}{H} L \right) R \right) \\&\quad =\frac{\alpha _t}{H} L \left( 1+\frac{\alpha _t}{H} L \left( 1-\frac{\alpha _t}{H} L \right) R \right) +\frac{\alpha _t}{H} L \left( 1-\frac{\alpha _t}{H} L \right) S \end{aligned}$$
We denote with \(\alpha ^{(2)}\) the steady state of the second iterate, i.e., \(\alpha ^{(2)} \equiv \alpha _{t+2}=\alpha _{t}\). Hence, dividing the previous equation by \(\alpha ^{(2)}\), we can write:
$$\begin{aligned}&\left( 1+\frac{\alpha ^{(2)}}{H} L \left( 1-\frac{\alpha ^{(2)}}{H} L \right) R \right) \\&\quad =\frac{L}{H} \left( 1+\frac{\alpha ^{(2)}}{H} L \left( 1-\frac{\alpha ^{(2)}}{H} L \right) R \right) +\frac{L}{H} \left( 1-\frac{\alpha ^{(2)}}{H} L \right) S \end{aligned}$$
Adding and subtracting L/H in the last bracket and collecting common factors:
$$\begin{aligned} \left( 1+\frac{\alpha ^{(2)}}{H} L \left( \frac{H-\alpha ^{(2)}}{H} L \right) R -\frac{L S}{H}\right) =\frac{L^2}{H} \left( \frac{1-\alpha ^{(2)}}{H} \right) S \end{aligned}$$
This last expression can be simplified, obtaining:
$$\begin{aligned}&(1-\alpha ^{(2)} ) H L^2 S+(H-1) L S (H-\alpha ^{(2)} L)-(1-\alpha ^{(2)} ) H L S^2=-(1-\alpha ^{(2)} ) H^2 S \longrightarrow \\&L\left( HL + \frac{(H-1)(H-\alpha ^{(2)} L)}{1-\alpha ^{(2)}}-HS\right) =- H^2 \end{aligned}$$
Using the expression for L and simplifying:
$$\begin{aligned} (H+(1-\alpha ^{(2)})S)(2H^2-2H \alpha ^{(2)} S-H+\alpha ^{(2)} S) =- H^2 \end{aligned}$$
Factoring and recalling the expression for H, we write:
$$\begin{aligned}&\left[ H+\left( 1-\alpha ^{\left( 2\right) }\right) S\right] \left[ H-\alpha ^{\left( 2\right) } S\right] \left[ 2H-1\right] =-H^{2} \end{aligned}$$
(A.17)
$$\begin{aligned}&H=1+\alpha ^{\left( 2\right) }\left( 1-\alpha ^{\left( 2\right) }\right) R \end{aligned}$$
(A.18)
Equations (A.17) and (A.18) determine \(\alpha ^{\left( 2\right) }\), (A.14) determines \(\alpha _{t+1}\) and (A.13) determines \(q^{\left( 2\right) }\).
Equation (A.13) has a unique solution for \(q^{(2)}\). The solutions for \(\alpha ^{\left( 2\right) }\) are not easily obtainable, and they may be complex. Hence, in what follows, we discuss the conditions leading to real solutions. Equations (A.17) and (A.18) define a polynomial of degree 6 for \(\alpha ^{(2)}\). Observe that \(\delta \rightarrow -1\) implies \(R \rightarrow -\infty \). It follows that (independently of \(\Delta \theta \)) H from (A.18) can be sufficiently negative to allow the LHS of (A.17) to be negative and therefore potentially ensure real solutions for \(\alpha ^{(2)}\). Conversely, suppose \(\delta \) is sufficiently large; we show that \(\Delta \theta \) sufficiently large also ensures the existence of a solution. Equations (A.17) and (A.18) describe a function of \(\alpha ^{(2)}\), say, \(f(\alpha ^{(2)})\). We have to prove that \(f(\alpha ^{(2)})=0\) is possible. With this aim, observe that \(\lim _{\alpha ^{(2)} \rightarrow 0}f(\alpha ^{(2)})=2+S\) and \(\lim _{\alpha ^{(2)} \rightarrow 1}f(\alpha ^{(2)})=2-S\), implying that there is at least one solution whenever \(\vert S \vert >2\). Notice that the sign of k does not depend on \(\Delta \theta \), and it is straightforward to see that \(S\equiv a\Omega k \propto (\Delta \theta )^2 \). It follows that the value of \(\Delta \theta \) can be chosen large enough to ensure \(\vert S \vert >2\).
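The sign-change argument can be made concrete numerically. In the sketch below (hypothetical values of R and S with \(\vert S\vert >2\)), f is the left-hand side of (A.17) minus its right-hand side, so that a period-two point solves \(f=0\):

```python
import numpy as np
from scipy.optimize import brentq

def f(alpha, R, S):
    H = 1 + alpha*(1 - alpha)*R               # (A.18)
    L = H + (1 - alpha)*S
    return L*(H - alpha*S)*(2*H - 1) + H**2   # (A.17) written as f(alpha) = 0

R, S = 0.5, 2.5                           # hypothetical, |S| > 2
print(f(0.0, R, S), f(1.0, R, S))         # -> 2+S > 0 and 2-S < 0: sign change
alpha2 = brentq(f, 0.0, 1.0, args=(R, S))
print(alpha2)                             # a real root alpha^(2) in (0, 1)

# Consistency check of the algebra: alpha2 is a fixed point of the second
# iterate of the alpha-map (A.14) (the cycle itself need not stay inside
# [0, 1] for these illustrative parameter values).
F = lambda a: a*(1 + a*(1 - a)*R + (1 - a)*S)/(1 + a*(1 - a)*R)
print(F(F(alpha2)) - alpha2)              # -> ~0
```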
Buchen, C., Palermo, A. Adverse Selection, Heterogeneous Beliefs, and Evolutionary Learning. Dyn Games Appl 12, 343–362 (2022). https://doi.org/10.1007/s13235-021-00396-x
Issue Date: June 2022
Keywords: Evolutionary game theory, Imitation equilibrium, Heterogeneous beliefs, Cobweb model
Moduli of symplectic instanton vector bundles of higher rank on projective space $\mathbb{P}^3$
Central European Journal of Mathematics. 2012. Vol. 10. No. 4. P. 1232-1245.
Tikhomirov A. S., Bruzzo U., Markushevich D.
Symplectic instanton vector bundles on the projective space $\mathbb{P}^3$ constitute a natural generalization of mathematical instantons of rank 2. We study the moduli space $I_{n;r}$ of rank-$2r$ symplectic instanton vector bundles on $\mathbb{P}^3$ with $r\ge2$ and second Chern class $n\ge r$, $n\equiv r \pmod{2}$. We introduce the notion of tame symplectic instantons by excluding a kind of pathological monads and show that the locus $I_{n;r}^*$ of tame symplectic instantons is irreducible and has the expected dimension, equal to $4n(r+1)-r(2r+1)$.
Keywords: vector bundles, symplectic bundles, moduli space
Modified Elliptic Genus
Gritsenko V. In bk.: Partition Functions and Automorphic Forms. Springer Publishing Company, 2020. P. 87-119.
This mini-course is a supplementary part of my semester course on the theory of Jacobi modular forms given at the mathematical department of NRU HSE in Moscow (see Gritsenko, Jacobi modular forms: 30 ans après; COURSERA (12 lectures and seminars), 2017–2019). This additional part contains some applications of Jacobi modular forms to the theory of elliptic genera and the Witten genus. The subject of this course is related to my old talk given in Japan (see Gritsenko, Proc. Symp. "Automorphic forms and L-functions" 1103:71–85, 1999).
Gushel-Mukai varieties: moduli
Kuznetsov A. G., Debarre O. arxiv.org. math. Cornell University, 2018
We describe the moduli stack of Gushel-Mukai varieties as a global quotient stack and its coarse moduli space as the corresponding GIT quotient. The construction is based on a comprehensive study of the relation between this stack and the stack of Lagrangian data; roughly speaking, we show that the former is a generalized root stack of the latter. As an application, we define the period map for Gushel-Mukai varieties and construct some complete nonisotrivial families of smooth Gushel-Mukai varieties. In an appendix, we describe a generalization of the root stack construction used in our approach to the moduli space.
Bubble tree compactification of moduli spaces of vector bundles on surfaces
Tikhomirov A. S., Markushevich D., Trautmann G. Central European Journal of Mathematics. 2012. Vol. 19. No. 4. P. 1331-1355.
We announce some results on compactifying moduli spaces of rank 2 vector bundles on surfaces by spaces of vector bundles on trees of surfaces. This is thought of as an algebraic counterpart of the so-called bubbling of vector bundles and connections in differential geometry. The new moduli spaces are algebraic spaces, arising as quotients by group actions according to a result of Kollár. As an example, the compactification of the space of stable rank 2 vector bundles with Chern classes $c_1=0, c_2=2$ on the projective plane is studied in more detail. Proofs are only indicated and will appear in separate papers.
On the Sn-equivariant Euler characteristic of moduli spaces of hyperelliptic curves.
Gorsky E. Mathematical Research Letters. 2009. Vol. 16. No. 4. P. 591-603.
The generating function for S_n-equivariant Euler characteristics of moduli spaces of pointed hyperelliptic curves for any genus g ≥ 2 is calculated. This answer generalizes the known ones for genera 2 and 3 and the answers obtained by J. Bergström for any genus and n ≤ 7 points.
On the S_n-equivariant Euler characteristic of M_{2,n}.
Gorsky E. Working papers by Cornell University. Cornell University, 2007
Getzler's formula relates the S_n-equivariant Hodge-Deligne polynomial of the space of ordered tuples of distinct points on a given variety X with the Hodge-Deligne polynomial of X. We obtain the analogue of this formula for the case when X has a nontrivial automorphism group. Collecting together all strata of M_2 with different automorphism groups, we derive a formula for the S_n-equivariant Euler characteristic of M_{2,n}.
Moduli of symplectic instanton vector bundles of higher rank on projective space $\mathbb{P^3}$. II.
Tikhomirov A. S., Bruzzo U., Markushevich D. MPIM. MPIM. Max Planck Institute for Mathematics, 2014. No. 2014-22.
Symplectic instanton vector bundles on the projective space $\mathbb{P^3}$ are a natural generalization of mathematical instantons of rank 2. We study the moduli space $I_{n,r}$ of rank-$2r$ symplectic instanton vector bundles on $\mathbb{P^3}$ with $r\ge2$ and second Chern class $n\ge r+1,\ n-r \equiv 1(\mod2)$. We introduce the notion of tame symplectic instantons by excluding a kind of pathological monads and show that the locus $I_{n,r}^∗$ of tame symplectic instantons is irreducible and has the expected dimension, equal to $4n(r+1)-r(2r+1)$.
Teichmüller space for hyperkähler and symplectic structures
Amerik E., Verbitsky M. Journal of Geometry and Physics. 2015. Vol. 97. P. 44-50.
Let S be an infinite-dimensional manifold of all symplectic, or hyperkähler, structures on a compact manifold M, and Diff_0 the connected component of its diffeomorphism group. The quotient S/Diff_0 is called the Teichmüller space of symplectic (or hyperkähler) structures on M. MBM classes on a hyperkähler manifold M are cohomology classes which can be represented by a minimal rational curve on a deformation of M. We determine the Teichmüller space of hyperkähler structures on a hyperkähler manifold, identifying any of its connected components with an open subset of the Grassmannian variety SO(b_2-3, 3)/SO(3)×SO(b_2-3) consisting of all Beauville-Bogomolov positive 3-planes in H^2(M,R) which are not orthogonal to any of the MBM classes. This is used to determine the Teichmüller space of symplectic structures of Kähler type on a hyperkähler manifold of maximal holonomy. We show that any connected component of this space is naturally identified with the space of cohomology classes v∈H^2(M,R) with q(v,v)>0, where q is the Bogomolov-Beauville-Fujiki form on H^2(M,R).
Introduction to Mathematical Statistics
Ivchenko G. I., Medvedev Yu. I. Moscow: LKI, 2010.
Model for organizing cargo transportation with an initial station of departure and a final station of cargo distribution
Khachatryan N., Akopov A. S. Business Informatics. 2017. No. 1(39). P. 25-35.
A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The cargo moves in one direction. Such a situation may occur, for example, if one of the node stations is located in a region producing raw materials for a manufacturing industry located in another region, where the other node station is situated. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by a given control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. The model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow, which results in the need for a "correct" extension of the solutions of the system of differential equations to a class of quasi-solutions whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, we were able to numerically construct these quasi-solutions and determine their rate of growth. We note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions on a number of model parameters characterizing the control rule, the cargo transportation technologies and the intensity of cargo supply at the node station.
Nullstellensatz over quasi-fields
Trushin D. Russian Mathematical Surveys. 2010. Vol. 65. No. 1. P. 186-187.
Enatskaya N. Yu., Khakimullin E. R. Moscow: MIEM, 2011.
Business Climate in Wholesale Trade in Q2 2014 and Expectations for Q3
Lola I. S., Ostapkovich G. V. Sovremennaya Torgovlya. 2014. No. 10.
Business Climate in Wholesale Trade in Q1 2012
Lola I. S., Lukashina Zh. E., Pakhomova O. P. Moscow: NRU HSE, 2012.
Laminations from the Main Cubioid
Timorin V., Blokh A., Oversteegen L. et al. arxiv.org. math. Cornell University, 2013. No. 1305.5788.
According to a recent paper [bopt13], polynomials from the closure $\overline{\mathrm{PHD}}_3$ of the Principal Hyperbolic Domain $\mathrm{PHD}_3$ of the cubic connectedness locus have a few specific properties. The family $\mathcal{CU}$ of all polynomials with these properties is called the Main Cubioid. In this paper we describe the set $\mathcal{CU}^c$ of laminations which can be associated to polynomials from $\mathcal{CU}$.
Entropy and the Shannon-McMillan-Breiman theorem for beta random matrix ensembles
Bufetov A. I., Mkrtchyan S., Scherbina M. et al. arxiv.org. math. Cornell University, 2013. No. 1301.0342.
Bounded limit cycles of polynomial foliations of ℂP²
Goncharuk N. B., Kudryashov Y. arxiv.org. math. Cornell University, 2015. No. 1504.03313.
In this article we prove in a new way that a generic polynomial vector field in ℂ² possesses countably many homologically independent limit cycles. The new proof needs no estimates on integrals, provides thinner exceptional set for quadratic vector fields, and provides limit cycles that stay in a bounded domain.
New Information Technologies. Abstracts of the XVIII International Student Conference–School–Seminar
Moscow: MIEM, 2010.
The Parametrix Method for Diffusions and Markov Chains
Konakov V. D. STI. WP BRP. Publishing house of the Board of Trustees of the Faculty of Mechanics and Mathematics, Moscow State University, 2012. No. 2012.
Is the function field of a reductive Lie algebra purely transcendental over the field of invariants for the adjoint action?
Colliot-Thélène J., Kunyavskiĭ B., Vladimir L. Popov et al. Compositio Mathematica. 2011. Vol. 147. No. 2. P. 428-466.
Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively, k(g), be the field of k-rational functions on G, respectively, g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.
Absolutely convergent Fourier series. An improvement of the Beurling-Helson theorem
Vladimir Lebedev. arxiv.org. math. Cornell University, 2011. No. 1112.4892v1.
We obtain a partial solution of the problem on the growth of the norms of exponential functions with a continuous phase in the Wiener algebra. The problem was posed by J.-P. Kahane at the International Congress of Mathematicians in Stockholm in 1962. He conjectured that (for a nonlinear phase) one cannot achieve growth slower than the logarithm of the frequency. Though the conjecture is still not confirmed, the author has obtained the first nontrivial results.
Justification of the Adiabatic Limit for Hyperbolic Ginzburg–Landau Equations
Palvelev R., Sergeev A. G. Proceedings of the Steklov Institute of Mathematics. 2012. Vol. 277. P. 199–214.
Hypercommutative operad as a homotopy quotient of BV
Khoroshkin A., Markaryan N. S., Shadrin S. arxiv.org. math. Cornell University, 2012. No. 1206.3749.
We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus 0 curves) and BV/Δ (the homotopy quotient of Batalin-Vilkovisky operad by the BV-operator). In other words we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.
Cross-sections, quotients, and representation rings of semisimple algebraic groups
V. L. Popov. Transformation Groups. 2011. Vol. 16. No. 3. P. 827-856.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny Ĝ → G is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map $T \dashrightarrow G/T$ where T is a maximal torus of G and W the Weyl group.
New Information Technologies. Abstracts of the XIX International Student Conference–School–Seminar
Turbulent spacetime from Einstein equation?
It is well known that the fluid equations (Euler equation, Navier-Stokes, ...), being non-linear, may have highly turbulent solutions. Of course, these solutions are non-analytical. The laminar flow solutions (Couette flow for example) may be unstable to perturbations, depending on viscosity.
Also, low-viscosity fluids (water for example) are more turbulent than high-viscosity fluids (oil, for example).
I was wondering if something similar may happen with gravity and spacetime itself. The Einstein equations are highly non-linear: do turbulent solutions exist?
Or is gravity like some highly viscous fluid, i.e. without any turbulence?
What might a turbulent metric look like? Of course it would not be an analytic solution.
I imagine that spacetime turbulences may be relevant on a very large scale only (cosmological scales, or even at the Multiverse level). And maybe at the Planck scale too (quantum foam). But how could we define geometric turbulence?
The only reference I've found on this subject, which shows that the idea isn't crazy, is this :
https://www.perimeterinstitute.ca/news/turbulent-black-holes
EDIT : I have posted an answer below, which I think is very interesting. I don't know if this hypothesis was already studied before.
general-relativity gravity spacetime turbulence non-linear-systems
$\begingroup$ Have a look in 'Galtier & Nazarenko, PRL 119, 221101 (2017)' where analytical solutions have been found for turbulence made of gravitational waves. $\endgroup$ – Bodganskii Mar 15 '18 at 16:24
$\begingroup$ The following paper gives the theory of space-time turbulence journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.221101 Someone $\endgroup$ – user197779 Jun 7 '18 at 10:13
$\begingroup$ absolutely wonderful question! I've heard it (the turbulence) referred to as a "storm" in spacetime. (: $\endgroup$ – R. Rankin Dec 6 '18 at 9:55
Gravity can, of course, become turbulent if it is coupled to a turbulent fluid. The interesting question is thus, as John Rennie points out, whether a vacuum solution can be "turbulent".
As far as I'm aware this is not known. If turbulence does occur in vacuum gravity, it is remarkably hard to stir up. Even in very extreme situations like colliding black hole binaries, which are now simulated pretty routinely, no turbulence has been observed.
EDIT: One approach one might take to study this is the "Post-Newtonian expansion", in which GR is formulated as an expansion in powers of some characteristic speed $\frac{v}{c}$. This has been conducted to extremely high order and the accuracy of the results at least for binary black holes rivals that of full nonlinear simulation. To all existing orders, the PN expansion is known to be exactly integrable. So if GR exhibits turbulent behaviour, it does so only in very extreme situations.
There are some theoretical reasons one might expect turbulence, which are hinted at in the press release you link to. Because of AdS/CFT one expects at least certain vacuum GR spacetimes to be equivalently modelled by a certain quantum field theory with a special symmetry. But that field theory, in some limit, should itself be approximately described by the Navier-Stokes equations. Therefore, again perhaps only in some weird and not-entirely-understood limit, the vacuum EFE's ought to be described by the Navier-Stokes equations.
The point of the study you linked to was to investigate what sorts of behaviour in the gravitational theory one might get when the corresponding hydrodynamic theory is turbulent. The conclusion seems to be that certain turbulence-like behaviours appear in the gravitational theory. It seems to me a bit of an overstatement to say this group has discovered full-blown gravitational turbulence.
It's also, by the way, not yet known whether more pedestrian sorts of chaos can occur in the GR two-body problem. The Kerr spacetime is exactly integrable and geodesics are not chaotic. However, an actual particle will not move in the Kerr spacetime, but in a deformed spacetime including also its own gravitational field. An open question is if and when this perturbation can lead to chaotic motion.
EDIT2: There are also some theoretical reasons one might not expect turbulence. Basically what I'm imagining by turbulence is something like highly-nonlinear gravitational waves self-interacting strongly enough to excite vortex stretching, etc. But attempts to simulate such self-interactions (e.g. http://relativity.livingreviews.org/Articles/lrr-2007-5/) typically find that more or less generically, such strong gravitational fields either lead to rapid dispersal to infinity or the formation of a black hole. In closed spacetimes even small perturbations seem to eventually form a black hole more or less generically, although this is still unsettled. However, these studies are almost always done in high symmetry, so the question is far from resolved.
AGML
$\begingroup$ -1: 1) Any metric perturbation away from the Plebański-Demianski class (homogeneously accelerated charged rotating black holes with cosmological constant and NUT charge) makes geodesics chaotic. For instance, gravitation from the galactic halo makes the motion near black holes weakly chaotic. 2) Motion of a classical spinning body in Kerr space-time is weakly chaotic. 3) In the post-Newtonian two-body expansion, the motion is chaotic once you include spin-spin interaction. $\endgroup$ – Void Jan 8 '19 at 15:03
$\begingroup$ 4) There is nothing really not-entirely-understood about the AdS turbulence, they just solve full Einstein equations with a negative cosmological constant without any really special approximation. It is only that reflective boundary conditions allow for a setup where turbulence is easy to achieve and they can study the CFT side of the picture on the boundary. $\endgroup$ – Void Jan 8 '19 at 15:03
$\begingroup$ " In closed spacetimes even small perturbations seem to eventually form a black hole more or less generically, although this is still unsettled." What is meant by "closed" spacetime? $\endgroup$ – N. Steinle Feb 19 '19 at 19:04
Thanks to holography, we now know that solutions to the Einstein equation in certain $d+1$ dimensional spaces are equivalent (dual) to solutions of the Navier-Stokes equation in $d$ dimensions. This is the fluid-gravity correspondence. As a result, turbulence can be studied using the Einstein equations, see, for example, http://arxiv.org/abs/1307.7267.
Recently, there was a talk titled Turbulent gravity in asymptotically AdS spacetimes which may be of interest. In these papers, spacetimes which are anti-de Sitter asymptotically with reflecting boundary conditions are considered, and the notion of turbulence in this case is that small perturbations about these spacetimes exhibit 'turbulent behavior.'
The most relevant paper I think would be A Holographic Path to the Turbulent Side of Gravity which makes use of the gravity/fluid correspondence:
We study the dynamics of a 2+1 dimensional relativistic viscous conformal fluid in Minkowski spacetime. Such fluid solutions arise as duals, under the "gravity/fluid correspondence", to 3+1 dimensional asymptotically anti-de Sitter (AAdS) black brane solutions to the Einstein equation. We examine stability properties of shear flows, which correspond to hydrodynamic quasinormal modes of the black brane. We find that, for sufficiently high Reynolds number, the solution undergoes an inverse turbulent cascade to long wavelength modes.
This relates to the answer posted by Thomas.
There are indeed vacuum solutions to the Einstein field equations which are unstable under perturbations. A famous example is the result of Gregory and Laflamme for black strings, which essentially have the geometry of $\mathrm{Sch}_d \times \mathbb{R}$. For example, a five dimensional black string could have a metric,
$$ds^2 = \left( 1-\frac{2GM}{r}\right)dt^2 - \left( 1-\frac{2GM}{r}\right)^{-1}dr^2 - r^2 d\Omega_2^2 - d\sigma^2$$
where $\sigma$ is the additional fifth coordinate. Clearly, this metric will also satisfy the vacuum equations. Gregory and Laflamme showed that under a perturbation, $g_{ab} \to g_{ab} + h_{ab}$, the solution is unstable, and the instability itself is a tensor mode. (An argument is made to show there is no instability due to the scalar and vector modes.)
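One quick way to convince yourself that this really is a vacuum solution is to check Ricci-flatness symbolically. Below is a sketch using sympy (my own variable names; coordinates and signature as in the line element above) that builds the Christoffel symbols and verifies that every component of the Ricci tensor vanishes:

```python
import sympy as sp

t, r, th, ph, sg = sp.symbols('t r theta phi sigma')
G, M = sp.symbols('G M', positive=True)
x = [t, r, th, ph, sg]
f = 1 - 2*G*M/r

# 5D black string: Schwarzschild x R with signature (+,-,-,-,-)
g = sp.diag(f, -1/f, -r**2, -r**2*sp.sin(th)**2, -1)
ginv = g.inv()
n = len(x)

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
                                     + sp.diff(g[d, c], x[b])
                                     - sp.diff(g[b, c], x[d]))
                         for d in range(n))/2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gam^a_{bc} - d_c Gam^a_{ba}
#                      + Gam^a_{ad} Gam^d_{bc} - Gam^a_{cd} Gam^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
        + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
              for d in range(n))
        for a in range(n)))

print(all(ricci(b, c) == 0 for b in range(n) for c in range(n)))  # True
```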
A subsequent paper by Lehner (who is in the article you linked) and Pretorius, Black Strings, Low Viscosity Fluids, and Violation of Cosmic Censorship (which I highly recommend) reveals that:
The [Gregory Laflamme] instability unfolds in a self-similar fashion, in which the horizon at any given time can be seen as thin strings connected by hyper-spherical black holes of different radii. As the evolution proceeds pieces of the string shrink while others give rise to further spherical black holes, and consequently the horizon develops a fractal structure. At this stage its overall topology is still $\mathbb R \times S^2$; the fractal geometry arises along $\mathbb R$...
Eventually, it shrinks to zero and you are left with a naked singularity. Of course, there are bound to be other unstable vacuum solutions, but this one in particular comes to mind, as it is also of relevance to cosmic censorship.
JamalS
The fluid-gravity correspondence that Thomas referred to in his answer is a very concrete set-up where we can import intuition from fluid dynamics to suggest how we might get turbulence in vacuum GR (with negative cosmological constant). I thought it deserved more explanation.
First, fluid dynamics is a universal description, applicable in any system (e.g. water, the quark-gluon plasma, lump of metal,...), describing the regime of long wavelength fluctuations away from equilibrium. Its starting point is thermodynamics, which describes a system in equilibrium in terms of just a few variables (temperature T and chemical potential $\mu$), from which everything else (density, pressure, entropy density, ...) is determined. Fluid dynamics goes a step beyond that, by allowing the system to be far out of equilibrium, but still locally in equilibrium, so in any sufficiently small patch the system is well equilibrated, with some local temperature $T(x)$, chemical potential $\mu(x)$, and now velocity $\vec{u}(x)$ defining the local equilibrium rest frame. These functions must vary sufficiently slowly, for example over distances much longer than the molecular mean free path, so that thermodynamics is locally a good approximation. Technically, fluid dynamics is then a "derivative expansion", allowing terms in the equations of motion up to some given order. First order gives perfect fluids, second order introduces viscosity and gives Navier-Stokes and its generalisations. At each order, new `transport coefficients', like viscosities, must be introduced, but these are the only things that depend on the underlying theory.
This all applies to your favourite quantum field theory, and in particular to certain strongly interacting, relativistic, scale invariant theories that have alternative gravitational descriptions. In that context, equilibrium maps to a static, uniform black brane, and adding long-wavelength fluctuations of the horizon is equivalent to studying fluid dynamics in the field theory. In this approximation, Einstein's equations reduce exactly to relativistic Navier-Stokes, with some transport coefficients, and in particular very low viscosity (conjectured to be the lowest possible).
This means that much of what we're used to in fluids, like the turbulent cascade of energy to shorter and shorter length scales, appears in the fluctuations of black branes. Eventually, the turbulence will cause the structure to appear on shorter wavelengths than the mean free path, so the fluid approximation breaks down and the full glory of GR must take over. (This is the same in water or whatever else: the molecular dynamics become important when the turbulence gets to a small enough scale, so you can no longer treat it as a fluid).
Holographer
$\begingroup$ The point of the work of people doing AdS/CFT in 4d is the fact that the cascade is inverse in Einstein gravity in this dimension. I.e., energy goes from shorter-wavelength modes to longer-wavelength ones. The same happens with the Navier-Stokes equations in 2+1d, of course. $\endgroup$ – Void Jan 8 '19 at 14:53
The obvious example of a chaotic solution to the Einstein equations is the Mixmaster metric. However this is not a vacuum solution, and when matter is present it shouldn't be any surprise that it can evolve in a chaotic fashion.
The more interesting question is whether a vacuum solution can evolve in a chaotic fashion. I can offer only a vague recollection from the 1980s when a friend of mine was working on the interactions between gravitational waves i.e. scattering of one GW by another when the energy is high enough that the linear approximation breaks down. My recollection is that bizarre behaviour could result, but whether this counts as chaos I don't know.
John Rennie
$\begingroup$ The notion of chaotic vacuum solutions is indeed interesting! It's briefly mentioned in this video of Thorne giving a lecture: youtube.com/watch?v=1mziud_XV-Y granted it is geometrodynamics. I think he calls it a "storm in spacetime" $\endgroup$ – R. Rankin Dec 23 '18 at 5:39
I would just like to add a few things to the answers already presented.
If we accept that both general relativity and quantum mechanics are valid in their own rights (or close approximations to, once a quantum-gravity marriage occurs) then we can get turbulent type behaviours on very small scales. The uncertainty principle suggests that particle–antiparticle creations and annihilations may occur on Planck length scales, and the energy of the virtual particles increases as we go to smaller and smaller scales. As a result, GR tells us that the spacetime may behave very wildly at these Planck-type scales, and would indeed be turbulent in this sense. See Quantum Foam.
On a slightly different note, there are many still inherently-classical theories of gravity that go beyond GR. Instead of using the Einstein-Hilbert action one can say the spacetime abides by some different geometric relations, i.e. not just $R_{\mu \nu} = 0$ (e.g. f(R) gravity). These are introduced to avoid the dark matter and dark energy problems, as well as some others. It turns out that so much richness comes from these theories that you can imagine choices of $f$ that allow for peculiar metrics to arise as vacuum solutions. Indeed it can be shown that the Mixmaster solution is a vacuum-$f(R)$ solution for a judicious choice of $f$. So the turbulent matter-filled solution of GR is a vacuum solution in $f(R)$.
As for classical GR, I expect one can still concoct a vacuum solution in a less crazy sense by using the inverse scattering method to generate N-soliton solutions. If you dump enough non-linear solitons (solitary waves) into the space (Here's a good starting place -- other papers are behind paywall) and have the freedom to place them at any position, I am sure you can get turbulence! Physical or not -- you decide.
Arthur Suvorov
I'm posting an hypothesis on dark matter to work with, that I'll call the "Turbulence Dark Matter" (TDM).
The universe is filled with matter clumped into stars inside galaxies, and galaxies into clusters and superclusters. There's also gas and dust everywhere. Their distributions are mostly random and include voids and swiss cheese-like "holes".
Suppose that spacetime is already "turbulent" at a not-so-large scale in space and in time. It could even have been created that way from a very violent Big Bang, and turbulent matter would just make things worse after that. The exact metric of that spacetime, $g_{\mu \nu}(x)$, is so complicated that there's no hope in solving the exact Einstein equation: $$\tag{1} G_{\mu \nu}(g) + \Lambda \, g_{\mu \nu} = -\, \kappa \; T_{\mu \nu}(\phi, \, g). $$ The symbol $\phi$ represents all the matter and radiation fields. We could write the exact (turbulent) metric components like this: $$\tag{2} g_{\mu \nu}(x) = \bar{g}_{\mu \nu}(x) + \theta_{\mu \nu}(x), $$ where $\bar{g}_{\mu \nu}(x)$ is a smooth and regular metric, while $\theta_{\mu \nu} \equiv g_{\mu \nu} - \bar{g}_{\mu \nu}$ describes the turbulence. The matter fields could also be written as $\phi = \bar{\phi} + \delta\phi$. Then, equation (1) can be written like this: $$\tag{3} G_{\mu \nu}(\bar{g}) + \Lambda \, \bar{g}_{\mu \nu} = -\, \kappa \, \big( \, T_{\mu \nu}(\bar{\phi}, \, \bar{g}) + \Theta_{\mu \nu} \big), $$ where I have defined $$\tag{4} \Theta_{\mu \nu} = T_{\mu \nu}(\phi, \, g) - T_{\mu \nu}(\bar{\phi}, \, \bar{g}) + \frac{1}{\kappa} \big( G_{\mu \nu}(g) - G_{\mu \nu}(\bar{g}) \big) + \frac{\Lambda}{\kappa} \, \theta_{\mu \nu}. $$ This tensor could be explicitly developed to first order in $\theta_{\mu \nu}$ and $\delta\phi$. It could be interpreted as the stress tensor of the "dark matter", induced by the neglected turbulence.
In this interpretation, dark matter is just an artifact of some averaging procedure, using a smooth and regular metric on a large scale (homogeneous and isotropic on cosmological scales) plus a perturbation. By its definition, this dark matter doesn't interact directly with normal matter, and cannot be detected in any lab! The TDM doesn't really exist, and yet it is out there as an effective field.
EDIT : Take note that since $\Theta_{\mu \nu}$ depends on $\Lambda$ (the cosmological constant, which has nothing to do with the turbulences), it may explain the "coincidence" in the DM and DE proportions in the universe (around 25% and 71%, respectively, plus 4% of normal matter).
Also, if the universe was empty ; $T_{\mu \nu} = 0$, you still could have TDM if the spacetime curvature is very lumpy and chaotic, filled with random primordial gravitational waves : $\Theta_{\mu \nu} \ne 0$ even without normal matter.
Much of this discussion centers around whether a vacuum solution can be "turbulent". Such turbulence may be entering the realm of Quantum Gravity, and as such neither the EFE nor NS can be applied. And now, the Perimeter Institute makes a strong case for turbulence: https://www.perimeterinstitute.ca/news/turbulent-black-holes
Ken Abbott
As follows from the relationship between the Navier–Stokes equation and the Schrödinger equation, the vacuum has a kinematic viscosity $i\hbar/(2m)$ and low density. It is not empty space, but is the medium providing this kinematic viscosity. How to introduce Planck's constant into the GR equation is a separate conversation.
Evgeniy Yakubovskiy
The turbulent solution is complex: the real part is the mean value and the imaginary part is the rms value. The non-linear Navier-Stokes equation must therefore be solved in the complex plane. Similarly, the solution of GR should be complex in the case of a turbulent regime. But there is a problem of how to recalculate the imaginary part of the solution into the real part; for this you need to use special methods. Moreover, in the case of a liquid medium, the roughness must be taken into account.
$\begingroup$ Turbulent solutions requiring a imaginary soutions is a new idea to me. Could you please provide souces or explain how this is true? $\endgroup$ – psitae Dec 27 '18 at 23:44
$\begingroup$ I can give a link to the site in Russian, but this site has a list of references in English famous-scientists.ru/direction/view/293 $\endgroup$ – Evgeniy Yakubovskiy Dec 28 '18 at 10:34
$\begingroup$ I specify the link to the files used YAKUBOVSKIY, EG. "STUDY OF NAVIER-STOKES EQUATION SOLUTION I. world-science.ru/pdf/2016/3/14.pdf YAKUBOVSKIY, EG. "STUDY OF NAVIER-STOKES EQUATION SOLUTION II. world-science.ru/pdf/2016/3/15.pdf YAKUBOVSKIY, E. G. "STUDY OF NAVIER–STOKES EQUATION SOLUTION III. world-science.ru/pdf/2016/3/16.pdf $\endgroup$ – Evgeniy Yakubovskiy Dec 30 '18 at 20:26
Hmmmn. Strictly speaking spacetime is an abstract mathematical "space" which models space at all times. It's the block universe. You can draw worldlines in it to represent motion through space over time, but there's nothing moving through it or in it. It's static. The worldlines aren't waving around like seaweed in the surf. However space can change over time. Gas clouds collapse to form stars and gravitational fields can become more pronounced. Since a gravitational field is "curved spacetime" we can reasonably say that spacetime changes. Let's call it space-time to distinguish it from the static block-universe spacetime. But can this change be a "turbulent" change? Hmmmn.
It is well known that the fluid equations (Euler equation, Navier-Stokes, ...), being non-linear, may have highly turbulent solutions. Of course, these solutions are non-analytical. The laminar flow solutions (Couette flow for example) may be unstable to perturbations, depending on viscosity. Also, fluids which have a low viscosity (water for example) are more turbulent than fluids with high viscosity (oil, for example).
No problem there. Apart from the fact that space-time isn't a fluid.
I was wondering if something similar may happen with gravity and spacetime itself. Einstein's equation being highly non-linear, do turbulent solutions exist?
No. Because space-time isn't a fluid. Instead it's a gin-clear ghostly elastic solid! This is why you can see a shear stress term in Einstein's stress-energy-momentum tensor:
[Image: the components of the stress-energy tensor. Public domain image by Maschen, based on an image created by Bamse; see Wikipedia.]
I kid ye not! Google on Einstein elastic. Then try to imagine a jelly on a plate. You can deform it, and you can curve it, and you can wibble it and you can wobble it. But you can't make it turbulent.
Or is gravity like some highly viscous fluid, i.e. without any turbulence ?
A gravitational field is where space is non-uniform, this being modelled as curved space-time. Ah, here we go, this is what Einstein said:
"According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that 'empty space' in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials $g_{\mu\nu}$), has, I think, finally disposed of the view that space is physically empty."
While being non-analytical, what might a turbulent metric look like? I imagine that spacetime turbulence may be relevant on very large scales only (cosmological scales, or even at the Multiverse level). And maybe at the Planck scale too (quantum foam). But how could we define geometric turbulence?
I think it's reasonable to propose a chaotic metric, but turbulent doesn't seem to fit in with General Relativity. Which, as I'm sure you're aware, is one of the best-tested theories we've got. Meanwhile multiverses and quantum foam remain speculative. IMHO it's good to speculate and ask what if? It's good to think for yourself. But I'd say you're in danger of wandering away from hard science here into pseudoscience, where you won't find any evidence or answers.
The only reference I've found on this subject, which shows that the idea isn't crazy, is this: https://www.perimeterinstitute.ca/news/turbulent-black-holes
The mention of the holographic conjecture does not augur well. And I'm afraid the fact that this comes from the Perimeter Institute doesn't mean it's correct. What we seem to have here is a rather speculative idea that looks as if it's at odds with general relativity. Ah, see the paper on the arXiv: Turbulent black holes. I've skimmed it, and I wonder if there could be some important issues. For example, the "coordinate" speed of light at the event horizon is zero. So if the black hole is spinning at half the speed of light, how fast is it spinning? What's half of zero? Anyway, it's late, and I have to go. You could try a new question asking for feedback on this paper. It's always better to refer to the actual paper rather than the reportage, because the latter can sometimes be misleading.
John Duffield
The extension of a code is itself a code
I'm reading Cover's "Elements of Information Theory" and I have a problem with the definition of a uniquely decodable code.
A code is said to be singular if there exist two elements that map to the same string. Otherwise it is said to be non-singular.
Let $C$ be a code. The extension of $C$ is the homomorphism $C^{*}$ from $\chi^{*}$ to $D^{*}$ with respect to concatenation, that is, $C^{*}(x_1 \cdots x_n) = C(x_1) \cdots C(x_n)$.
Let $C$ be a code. $C$ is uniquely decodable if its extension $C^{*}$ is non-singular.
According to the last definition the extension of a code is itself a code, but it has domain $\chi^{*}$, where $\chi \subseteq \mathbb{R}$ (that last statement is mine).
How can I view the extension as a code?
information-theory coding-theory
Rodrigo
$\begingroup$ Why did $\mathbb{R}$ show up? $\endgroup$ – Yuval Filmus Oct 10 '16 at 13:24
Cover and Thomas are using here the following definition of a code: given a domain $D$ and an alphabet $A$, a code is a mapping from $D$ to $A^*$. That is, a code assigns every element of $D$ a word over $A$. The extension of the code is a code from $D^*$ to $A^*$ defined by concatenation.
For example, let $D = \{a,b\}$ and $A = \{0,1\}$, and consider the code $C$ given by $C(a) = 0$, $C(b) = 00$. Its extension $C^*$ is a function from $\{a,b\}^*$ to $\{0,1\}^*$ obtained by replacing each $a$ by $0$ and each $b$ by $00$. For example, $C^*(ab) = C(a)C(b) = 000$. Since $C^*(ba) = C(b)C(a) = 000$ as well, the code $C^*$ is singular.
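As an illustration (an editorial addition, not part of the original answer), here is a minimal Python sketch of the extension, assuming source symbols are single characters:

```python
def extend(code):
    """Build C*: map a word over the source alphabet to the
    concatenation of the codewords of its symbols."""
    def c_star(word):
        return "".join(code[symbol] for symbol in word)
    return c_star

C = {"a": "0", "b": "00"}  # the code from the example above
c_star = extend(C)
print(c_star("ab"))  # prints '000'
print(c_star("ba"))  # also prints '000', so C* is singular
```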
A code $C$ is uniquely decodable if there do not exist two distinct sequences of words $x_1\ldots x_n$ and $y_1\ldots y_m$ such that $C(x_1)\ldots C(x_n) = C(y_1)\ldots C(y_m)$. This is exactly the same as saying that $C^*$ is non-singular, since the condition is completely equivalent to $C^*(x_1\ldots x_n) = C^*(y_1\ldots y_m)$.
Yuval Filmus
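(Editorial addition, not from the original thread.) Unique decodability of a finite code can also be tested mechanically with the Sardinas-Patterson algorithm; a sketch in Python, assuming the codewords are nonempty strings:

```python
def quotient(ds, es):
    """All w such that d + w = e for some d in ds, e in es."""
    return {e[len(d):] for d in ds for e in es if e.startswith(d)}

def is_uniquely_decodable(code):
    """Sardinas-Patterson test for a finite set of nonempty codewords."""
    c = set(code)
    s = quotient(c, c) - {""}  # dangling suffixes between pairs of codewords
    while True:
        if "" in s:
            return False  # some string admits two distinct parsings
        new = s | quotient(c, s) | quotient(s, c)
        if new == s:
            return True  # fixpoint reached without finding an ambiguity
        s = new

print(is_uniquely_decodable({"0", "00"}))        # False: the code above
print(is_uniquely_decodable({"0", "10", "11"}))  # True: a prefix code
```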
Theory Colloquium 2017
The theory colloquium is held, as a rule, every Wednesday from 13:30.
Past years: AY2010 AY2011 AY2012 AY2013 AY2014 AY2015 AY2016
4/5 all internal members self-introduction Conference room, Cosmos Lodge / 13:30
4/12 Shing Chi Leung (Kavli IPMU) Nucleosynthesis of Type Ia supernovae Conference room, Cosmos Lodge / 13:30
4/17 Toshihiko Kawano (LANL/Tokyo Tech) beta-delayed neutron emission and fission for r-process nucleosynthesis Conference room, Cosmos Lodge / 13:30
4/19 Masaki Yamaguchi (U. Tokyo) The number of black hole-star binaries discovered by the astrometric satellite, Gaia Conference room, Cosmos Lodge / 13:30
4/26 Tomohisa Kawashima (NAOJ DTA) Radiation hydrodynamic simulations of super-critical accretion columns onto neutron stars in ULX-pulsars Conference room, Cosmos Lodge / 13:30
5/08 Jonathan C. Tan (University of Florida) Inside-Out Planet Formation Conference room, Cosmos Lodge / 13:30
5/10 Shinpei Shibata (Yamagata University) Physics of The Rotation Powered Pulsar Conference room, Cosmos Lodge / 13:30
5/17 Tomohiro Ono (Kyoto University) Large-scale Gas Vortex Formed by the Rossby Wave Instability Rinkoh room / 13:30
5/24 Naonori Sugiyama (IPMU) Kinematic Sunyaev-Zel'dovich effect Conference room, Cosmos Lodge / 13:30
5/31 Shogo Ishikawa (NAOJ CfCA) The Galaxy-Halo Connection in High-redshift Universe Conference room, Cosmos Lodge / 13:30
6/7 Tomoya Kinugawa (U. Tokyo) Compact binary remnants from first stars for the gravitational wave source Conference room, Cosmos Lodge / 13:30
6/12 Yamaç Pehlivan (Mimar Sinan University) Stars as extreme laboratories for neutrino physics Conference room, Cosmos Lodge / 13:30
6/14 Cemsinan Deliduman (Mimar Sinan University) Astrophysics with Weyl Gravity Rinkoh room / 13:30
6/21 Hiroyuki Kurokawa (ELSI, Tokyo Tech) Hydrodynamics of first atmospheres of planets embedded in protoplanetary disk Conference room, Cosmos Lodge/ 13:30
6/28 Masanobu Kunitomo (Nagoya University) Revisiting the pre-main sequence evolution of low-mass stars: Importance of accretion and deuterium abundance Conference room, Cosmos Lodge / 13:30
7/5 Yuta Asahina (NAOJ CfCA) MHD Simulations of the Feedback via an AGN outflow to the inhomogeneous interstellar medium Conference room, Cosmos Lodge / 13:30
7/12 Shoko Oshigami (NAOJ CfCA) Mare volcanism: Reinterpretation based on Kaguya Lunar Radar Sounder data Conference room, Cosmos Lodge / 13:30
7/19 Shinsuke Takasao (Nagoya University) MHD Simulations of Accretion onto Star from Surrounding Disk Conference room, Cosmos Lodge / 13:30
7/26 Jean Coupon (University of Geneva) Probing the galaxy-mass connection in TeraByte-scale imaging surveys Conference room, Cosmos Lodge / 13:30
9/27 Takayoshi Kusune (NAOJ) Magnetic field of the bright-rimmed cloud SFO 74 Conference room, Cosmos Lodge / 13:30
10/03 Matthew Kenworthy (Leiden Observatory) Looking for exorings towards Beta Pictoris, J1407 and PDS 110 Conference room, Cosmos Lodge / 13:30
10/04 Yuri Aikawa (University of Tokyo) Deuterium Fractionation in Protoplanetary Disks Conference room, Cosmos Lodge / 13:30
10/10 Peter Behroozi (University of Arizona) Maximizing Inference from Galaxy Observations Conference room, Cosmos Lodge / 13:30 Tuesday
10/18 Ryosuke Hirai (Waseda University) Understanding core-collapse supernovae in binaries with various numerical approaches Conference room, Cosmos Lodge / 13:30
10/19 Sergey Blinnikov (Institute for Theoretical and Experimental Physics ) GRB Central Engines within Superluminous Supernovae and their environment Conference room, Cosmos Lodge / 13:30 Thursday
10/25 Sho Fujibayashi (Kyoto University) The evolution and mass ejection from the remnant of the binary neutron star merger Conference room, Cosmos Lodge / 13:30
11/01 No colloquium NAOJ decadal workshop
11/02 Wanggi Lim (NAOJ) Dust in Infrared Dark Clouds Conference room, Cosmos Lodge / 13:30 Thursday
11/08 No colloquium DTA workshop 2017
11/15 Sanemichi Takahashi (Kogakuin University) Early evolution of protoplanetary disks: a ring-gap structure formation Conference room, Cosmos Lodge / 13:30
11/22 Wolfgang Loeffler (Heidelberg ARI) Gaia Sky: A 3D visualisation of the Gaia Catalogues Conference room, Cosmos Lodge / 13:30
11/29 No colloquium CfCA UM
12/06 Kazuyuki Sugimura (Tohoku University) Accretion onto seed BHs: the impacts of anisotropic radiation and gas angular momentum Conference room, Cosmos Lodge / 13:30
12/20 Yutaka Hirai (NAOJ) The role of enrichment of heavy elements in the chemodynamical evolution of dwarf galaxies Conference room, Cosmos Lodge / 13:30
01/17 Kohei Hayashi (NAOJ) The universal dark halo scaling relation for the dwarf spheroidal galaxies in the Local Group Conference room, Cosmos Lodge / 13:30
01/24 Hiroshi Kobayashi (NAOJ) Three-Dimensional Radiation-Hydrodynamic Simulation of Clumpy Outflow and Its Application to Supercritical Accretors around Black Holes *practice for the PhD defence. *in Japanese TBD / 13:30
01/31 Misako Tatsuuma (UTokyo) Gravitational Instability of a Dust Layer Composed of Porous Silicate Dust Aggregates in a Protoplanetary Disk *practice for the master thesis defence. *in Japanese Rinko-room/ 13:30
01/31 Kanji Mori (UTokyo) Quantum Mechanical Constraint on Carbon Fusion Reaction and Its Impact on Type Ia Supernovae *practice for the master thesis defence. *in Japanese Rinko-room/ 14:15
02/21 Kaiki Inoue (Kinki University) Detecting Sub-lunar Mass Compact Objects toward the Local Group Galaxies Rinko-room / 13:30
03/28 Tomoyuki Hanawa (Chiba University) Conservation of Total Energy Including Gravity in Hydrodynamical Simulations Lecture-room/ 13:30
4/11 Adriana Pohl (Max-Planck Institute of Astronomy, Heidelberg Germany) TBA (about protoplanetary disks) Conference room, Cosmos Lodge / 13:30
4/12 Shing Chi Leung (Kavli IPMU) Nucleosynthesis of Type Ia supernovae
Type Ia supernovae (SNe Ia) are an important class of astrophysical objects. They are the standard candles of the universe and the major sources of iron-peak elements. An SN Ia is known to be the explosion of a carbon-oxygen white dwarf by thermonuclear runaway. However, many theoretical uncertainties still persist, for example whether the progenitor of SNe Ia belongs to the single-degenerate or double-degenerate scenario. Furthermore, the diversity in observations, such as the subclasses of Type Iax or super-luminous SNe Ia, suggests that the standard picture using the explosion of a Chandrasekhar-mass white dwarf is insufficient to explain the variety of the observed SNe Ia. To resolve these issues, a systematic understanding of SNe Ia nucleosynthesis becomes necessary. In this talk, I shall present hydrodynamics and nucleosynthesis results of multi-dimensional models for the explosion phase of SNe Ia. We explore the effects of model parameters on the explosion energetics and chemical production. The influences of our SNe Ia models on galactic chemical evolution are discussed. I also present constraints on the progenitor properties of some recently observed SNe Ia and their remnants.
4/17 Toshihiko Kawano (LANL/Tokyo Tech) beta-delayed neutron emission and fission for r-process nucleosynthesis
We give a brief summary of our recent development of nuclear reaction theories with a particular focus on nuclear data production for the r-process nucleosynthesis. The topics include calculations of the beta-delayed process for neutron-rich nuclei, where several neutrons can be emitted, and eventually fission may take place as well. Our recent studies on fission itself are also given.
4/19 Masaki Yamaguchi (U. Tokyo) The number of black hole-star binaries discovered by the astrometric satellite, Gaia
Although it is believed that there are 10^8-9 stellar-mass black holes (BHs) in the Milky Way, until now only ~60 BHs have been discovered. Moreover, the masses of only a dozen of them are constrained. By discovering more BHs and estimating their masses, we would obtain the mass distribution of BHs with a higher confidence level. This distribution is expected to constrain theoretical models of the supernova explosions in which a BH is produced as a remnant. Gaia is now in operation and has the capability to detect binaries with an unseen companion, such as a BH or a neutron star. Gaia performs high-precision astrometry in the optical band (0.3-1.0um) and surveys the whole sky, where the main observational targets are stars. If a target star has an unseen companion, it should show an elliptical motion on the celestial sphere. Gaia can confirm the companion by detecting such motion. Moreover, this elliptical motion yields all the orbital elements, which enables us to estimate the mass of the companion. If this mass is larger than 3 solar masses, we can confirm the companion as a BH. In my talk, I will show how many BHs can be detected by such a method with Gaia. Considering binary evolution, we obtain the number of detectable BHs, ~600, for main-sequence targets. This means that Gaia can discover on the order of one thousand BHs whose masses can be found, although we know only a dozen such BHs now. We conclude that astrometric observation of binaries is a very powerful method for finding BHs.
4/26 Tomohisa Kawashima (NAOJ) Radiation hydrodynamic simulations of super-critical accretion columns onto neutron stars in ULX-pulsars
Ultraluminous X-ray sources (ULXs) are off-centered, extragalactic X-ray sources with luminosities exceeding the Eddington limit for stellar-mass black holes. After the recent discovery of pulsed X-ray emission in three ULXs, it is widely thought that some ULXs are powered by super-critical column accretion onto neutron stars. The mechanism of super-critical column accretion is, however, still poorly understood. We have, therefore, carried out two-dimensional radiation hydrodynamic simulations of super-critical accretion columns onto neutron stars, and have found that super-critical accretion can be realized because most photons escape from the side walls of the accretion columns (i.e., the radiation field is anisotropic in the accretion columns). The simulated accretion columns are luminous enough to be consistent with the observed ULX-pulsars.
5/08 Jonathan C. Tan (University of Florida) Inside-Out Planet Formation
The Kepler-discovered systems with tightly-packed inner planets (STIPs), typically with several planets of Earth to super-Earth masses on well-aligned, sub-AU orbits may host the most common type of planets in the Galaxy. They pose a great challenge for planet formation theories, which fall into two broad classes: (1) formation further out followed by migration; (2) formation in situ from a disk of gas and planetesimals. I review the pros and cons of these classes, before focusing on a new theory of sequential in situ formation from the inside-out via creation of successive gravitationally unstable rings fed from a continuous stream of small (~cm-m size) "pebbles," drifting inward via gas drag. Pebbles first collect at the pressure trap associated with the transition from a magnetorotational instability (MRI)-inactive ("dead zone") region to an inner MRI-active zone. A pebble ring builds up until it either becomes gravitationally unstable to form an Earth to super-Earth-mass planet directly or induces gradual planet formation via core accretion. The planet continues to accrete until it becomes massive enough to isolate itself from the accretion flow via gap opening. The process repeats with a new pebble ring gathering at the new pressure maximum associated with the retreating dead-zone boundary. I discuss the theory's predictions for planetary masses, relative mass scalings with orbital radius, and minimum orbital separations, and their comparison with observed systems. Finally I speculate about potential causes of diversity of planetary system architectures, i.e. STIPs versus Solar System analogs.
5/10 Shinpei Shibata (Yamagata University) Physics of The Rotation Powered Pulsar
I review the physics of rotation-powered pulsars with special interest in how the energy and angular momentum are emitted from the system. I will briefly mention a recent observational result that the torque on neutron stars varies on various time scales. This talk is given in Japanese.
5/17 Tomohiro Ono (Kyoto University) Large-scale Gas Vortex Formed by the Rossby Wave Instability
Large-scale gas vortexes induced by the Rossby wave instability (RWI) are one of the plausible explanations of the lopsided structures recently observed in several protoplanetary disks. For comparison with the observations, it is important to investigate quantitatively the properties of the vortexes formed by the RWI. However, our knowledge on the properties and outcomes of the RWI has been limited until recent years. We have studied the RWI with linear stability analyses and hydrodynamical simulations using the Athena++ code. As a result of the linear stability analyses, we show that the RWI is one of the shear instabilities which are explained by the interaction between two Rossby waves. We also derive the critical condition for the onset of the RWI in semi-analytic form. From the numerical simulations, we investigate the properties of the vortexes formed by the RWI and discuss possible observational predictions. In my talk, I will present our three results on the RWI: (1) the physical mechanism, (2) the critical condition for the onset and (3) the properties of the vortexes.
5/24 Naonori Sugiyama (IPMU) Kinematic Sunyaev-Zel'dovich effect
Over the past few years, cosmologists have been able to make the first detections of the kinematic Snuyaev-Zel'dovich (kSZ) effect by combining galaxy data with measurements from CMB experiments. The kSZ effect is well-suited for studying properties of the optical depth of halos hosting galaxies or galaxy clusters. As the measured optical depth via the kSZ effect is insensitive to gas temperature and redshift, the kSZ effect can be used to detect ionized gas that is difficult to observe through its emission, so-called "missing baryons". This work presents the first measurement of the kSZ effect in Fourier space. While the current analysis results in the kSZ signals with only evidence for a detection, the combination of future CMB and spectroscopic galaxy surveys should enable precision measurements. This talk emphasizes the potential scientific return from these future measurements.
5/31 Shogo Ishikawa (NAOJ CfCA) The Galaxy-Halo Connection in High-redshift Universe
We present the results of clustering analyses of Lyman break galaxies (LBGs) at z~3, 4, and 5 using the final data release of the Canada–France–Hawaii Telescope Legacy Survey (CFHTLS). Deep- and wide-field images of the CFHTLS Deep Survey enable us to obtain sufficiently accurate two-point angular correlation functions to apply a halo occupation distribution analysis. The mean halo masses increase with the stellar-mass limit of LBGs. Satellite fractions of dropout galaxies, even at less massive halos, are found to drop sharply, from z=2 down to less than 0.04, at z=3-5, suggesting that satellite galaxies form inefficiently even for less massive satellites. We compute stellar-to-halo mass ratios (SHMRs) assuming a main sequence of galaxies, which is found to provide SHMRs consistent with those derived from a spectral energy distribution fitting method. The observed SHMRs are in good agreement with model predictions based on the abundance-matching method, within 1sigma confidence intervals. We derive observationally, for the first time, the pivot halo mass, which is the halo mass at a peak in the star-formation efficiency, at 3<z<5, and it shows a small increasing trend with cosmic time at z>3. In addition, the pivot halo mass and its normalization are found to be almost unchanged during 0<z<5. Our study provides observational evidence that galaxy formation is ubiquitously most efficient near a halo mass of 10^12Msun over cosmic time.
6/7 Tomoya Kinugawa (U. Tokyo) compact binary remnants from first stars for the gravitational wave source
Using our population synthesis code, we found that the typical chirp mass of binary black holes (BH-BHs) whose origin is the first stars (Pop III) is ~30 Msun. This result predicted gravitational wave events like GW150914, and the LIGO paper said "recently predicted BBH total masses agree astonishingly well with GW150914 and can have sufficiently long merger times to occur in the nearby universe (Kinugawa et al. 2014)" (Abbott et al. ApJL 818, 22 (2016)). Thus, the compact binary remnants of the first stars are interesting targets for LIGO, VIRGO and KAGRA. Nakano, Tanaka & Nakamura (2015) show that if the S/N of the QNM is larger than 35, we can confirm or refute General Relativity at more than the 5 sigma level. In our standard model, the detection rate of Pop III BH-BHs whose S/N is larger than 35 is 3.2 events/yr (SFR_p/(10^{-2.5} Msun/yr/Mpc^3))*([f_b/(1+f_b)]/0.33)*Err_sys. Thus, there is a good chance to check whether GR is correct or not in the strong gravity region. Furthermore, Pop III binaries become not only BH-BHs but also NS-BHs. We found the Pop III NS-BH merger rate is ~1 event/Gpc^3 and the chirp mass of Pop III NS-BHs is more massive than that of Pop I and II. Therefore, we might get information about Pop III stars from massive BH-BHs and NS-BHs.
6/12 Yamaç Pehlivan (Mimar Sinan University) Stars as extreme laboratories for neutrino physics
Neutrinos are the second most abundant particle species in the universe after photons. Due to their small cross sections, their last point of scattering (and hence their memory) lies deep within dense astrophysical objects. As a new observational window on the Universe, neutrinos hold great potential. But an equally exciting possibility is to use these observations as a probe of their minuscule properties under the Universe's most extreme conditions. In this talk, I will focus on the neutrinos emitted by core-collapse supernovae where, in the deep regions, neutrino-neutrino interactions turn their flavor oscillations into a nonlinear many-body phenomenon. Various tiny neutrino properties can be amplified by these nonlinear effects with detectable consequences. These can show themselves directly in a future galactic supernova signal detected by Super-Kamiokande, or indirectly (through their effect on nucleosynthesis) in elemental abundance surveys by Subaru and TMT.
6/14 Cemsinan Deliduman (Mimar Sinan University) Astrophysics with Weyl Gravity
This talk will introduce an attempt to describe diverse astrophysical phenomena via Weyl gravity. In the first part I will review my work on the resolution of the flat galactic rotation curve problem via geometry instead of assuming the existence of dark matter. Motivation for this work came from the observation that the scale independence of the rotational velocity in the outer region of galaxies could point to the possible existence of a local scale symmetry, and therefore the gravitational phenomena inside such regions should be described by the unique locally scale-symmetric theory, namely Weyl's theory of gravity. The solution to the field equations of Weyl gravity will determine the special geometry of the outer region of galaxies. In the second part of the talk it will be conjectured that this special geometry could be valid up to the scale of galaxy clusters. One challenge of this approach will then be to explain the gravitational lensing characteristics of galaxy clusters by Weyl geometry without assuming the existence of dark matter. Research in this direction will be summarized.
6/21 Hiroyuki Kurokawa (ELSI, Tokyo Tech) Hydrodynamics of first atmospheres of planets embedded in protoplanetary disk
Exoplanet observations revealed that a significant fraction of Sun-like stars harbor super-Earths, here defined as objects having masses between a few and ~20 Earth masses. Though their masses overlap with the range of core masses believed to trigger runaway accretion of disk gas, these super-Earths retain only small amounts of gas: ~1%-10% by mass. How did super-Earths avoid becoming gas giants? One possible solution is late-stage core formation; super-Earths were formed by the final assembly of proto-cores during disk dispersal (Lee et al. 2014). Another solution is rapid recycling of envelope gas. Ormel et al. (2015) conducted hydrodynamical simulations of isothermal flow past a low-mass planet embedded in disk gas. They found that the atmosphere (inside the Bondi sphere) is an open system where disk gas enters from high latitude (inflow) and leaves through the midplane region (outflow). They argued that the recycling is faster than the cooling (namely, the contraction) of the envelope gas, so that further accretion of disk gas is prevented. To evaluate the influence of the cooling process on the recycling process, we performed non-isothermal hydrodynamical simulations of the flow around an embedded planet, where radiative cooling was approximated by the beta cooling model. We found that the recycling is limited in the non-isothermal cases because of the difference in entropy between the inflow (high entropy) and the atmosphere (low entropy). The high-entropy flow cannot penetrate the low-entropy atmosphere, and therefore the recycling is limited to the upper region of the Bondi sphere. Our results suggest that the recycling process may not be able to explain the ubiquity of super-Earths. Nevertheless, the midplane outflow induced by the recycling may prevent or reduce the accretion of pebbles onto proto-cores. This would delay the growth of these cores and help us to explain the ubiquity of super-Earths in the context of the late-stage core-formation scenario.
6/28 Masanobu Kunitomo (Nagoya University) Revisiting the pre-main sequence evolution of low-mass stars: Importance of accretion and deuterium abundance
Recent theoretical work has shown that the pre-main sequence (PMS) evolution of stars is much more complex than previously envisioned: Instead of the traditional one-dimensional solution of the contraction of a spherically symmetric gaseous envelope, protostars grow from the first formation of a small seed and subsequent accretion of material. This material is shocked, and accretion may be episodic and not necessarily symmetrical, thereby affecting the energy deposited inside the star and its interior structure. Given this new framework, we confirm the findings of previous works (e.g., Baraffe et al. 2009, 2012, Hosokawa et al. 2011) that the evolution changes significantly with the amount of energy that is lost during accretion. We find that deuterium burning also regulates the PMS evolution. In the low-entropy accretion case, the evolutionary tracks in the Hertzsprung-Russell diagram are significantly different from the classical ones and sensitive to the deuterium content. Our results agree with previous work that the variation of heat injection can be a solution of the luminosity spread problem of PMS stars, and show the importance of the deuterium content. We also discuss the internal structure evolution of young stars and the impact on the stellar surface composition.
7/5 Yuta Asahina (NAOJ CfCA) MHD Simulations of the Feedback via an AGN outflow to the inhomogeneous interstellar medium
Co-evolution between central supermassive black holes and host galaxies is a hotly debated issue in astrophysics. Outflows are thought to have an impact on the interstellar medium (ISM), and are probably responsible for the establishment of the widely known correlation between black hole mass (M) and the stellar velocity dispersion in the galactic bulge (σ), the so-called M-σ relation. Feedback by the quasar wind has been investigated by Silk & Rees (1998), Fabian (1999), and King (2003). However, the quasar winds are assumed to be spherically symmetric outflows. Wagner et al. (2012) studied the feedback via AGN jets. They revealed that the feedback via the AGN jet can be the origin of the M-σ relation. Magnetic fields are not included in their simulations, although magnetic fields of 0.01-1 mG have been reported to exist in the galactic center. In order to study the effect of the magnetic field on the feedback via the AGN jet, we carry out 3D MHD simulations. Our simulations reveal that the magnetic tension force promotes the acceleration of the ISM and enhances the feedback efficiency.
7/12 Shoko Oshigami (NAOJ CfCA) Mare volcanism: Reinterpretation based on Kaguya Lunar Radar Sounder data
The Lunar Radar Sounder (LRS) onboard Kaguya (SELENE) detected widespread horizontal reflectors under some nearside maria. Previous studies estimated that the depths of the subsurface reflectors were up to several hundreds of meters and suggested that the reflectors were interfaces between mare basalt units. The comparison between the reflectors detected in the LRS data and surface age maps indicating the formation age of each basalt unit allows us to discuss the lower limit volume of each basalt unit and its space and time variation. We estimated volumes of basalt units in the ages of 2.7 to 3.8 Ga in the nearside maria. The lower limit volumes of the geologic units estimated in this study were on the order of 10^3 to 10^4 km^3. This volume range is consistent with the total amount of erupted lava flows derived from numerical simulations of thermal erosion models of lunar sinuous rille formation and is also comparable to the average flow volumes of continental flood basalt units formed after the Paleozoic and calculated flow volumes of Archean komatiite flows on the Earth. The lower limits of average eruption rates estimated from the unit volumes were on the order of 10^{-5} to 10^{-3} km^3/yr. The estimated volumes of the geologic mare units and average eruption rate showed clear positive correlations with their ages within the same mare basin, while they vary among different maria compared within the same age range. This talk is given in Japanese.
7/26 Jean Coupon (University of Geneva): Probing the galaxy-mass connection in TeraByte-scale imaging surveys
The past decade has seen the emergence of new techniques and exciting discoveries powered by wide-field imaging surveys from the UV to the near-IR domain. Owing to gravitational lensing, galaxy clustering and abundance matching (to name but a few), coupled with advanced statistical interpretation, the informative power of astronomical imaging surveys has significantly increased. In particular, the connection between galaxies and dark matter, a keystone in cosmology and the study of galaxy evolution, has widely gained from this "scale revolution" and the future is bright, as the next experiments such as HSC, LSST, Euclid or WFIRST are dedicated "survey" machines that will further increase imaging data by orders of magnitude (without mentioning the tremendous gain in image resolution, time domain and deep near-IR imaging). I will focus my talk on reviewing the main techniques to connect galaxies and dark matter in the context of wide-field surveys and I will show some concrete examples of applied data analysis in the CFHTLenS and COSMOS projects, showing that these techniques are now well proven, although the challenges in reducing some critical systematic uncertainties are ahead of us.
9/27 Takayoshi Kusune (NAOJ): Magnetic field of the bright-rimmed cloud SFO 74
Magnetic fields are believed to play an important role in the formation and evolution of molecular cloud. In this talk, I will present the results of near-infrared polarimetric observations toward a bright-rimmed cloud (SFO 74). Bright-rimmed clouds, which are small molecular clouds located at the periphery of the HII regions, are considered to be potential sites for induced star formation by UV radiation from nearby OB stars. The obtained polarization vector maps clearly show that the plane-of-sky (POS) magnetic field structure inside the cloud is quite different from its ambient POS magnetic field direction. By applying the Chandrasekhar-Fermi method, I estimate the POS magnetic field strength toward the two regions inside the cloud. Our results indicate that the magnetic field (configuration and strength) of SFO 74 is affected by the UV-radiation-induced shock. I will discuss the relationship between the POS magnetic field and the cloud structure.
10/03 Matthew Kenworthy (Leiden Observatory) : Looking for exorings towards Beta Pictoris, J1407 and PDS 110
Circumplanetary disks are part of the planet and moon formation process, passing from an optically thick regime of gas and dust through to a planet with retinue of moons and Roche lobe rings formed from the accreted material. There should therefore be a transitional phase where moons are beginning to form and these will clear out lanes in the circumplanetary disk, producing Hill sphere filling 'rings' hundreds of times larger than Saturn's rings. We have seen evidence of these objects transiting their young star - with J1407, and more recently, with the young star PDS 110. This star shows periodic eclipses lasting over two weeks of up to 30% in depth, and the next eclipse is predicted to occur in September this year. The star is 10th magnitude in the belt of Orion, and can be followed in the early morning skies from most places on Earth. We are also following the Hill sphere transit of Beta Pictoris b, a gas giant planet around a nearby bright star, and I will also present the latest light curves from this experiment.
10/04 Yuri Aikawa (University of Tokyo): Deuterium Fractionation in Protoplanetary Disks
Deuterium enrichments in molecules are found in star-forming regions, as well as in Earth's ocean. Astrochemical models show that the enrichment originates in exothermic exchange reactions at low temperatures, which could proceed not only in molecular clouds, but also in the cold regions of protoplanetary disks. In recent years, several groups observed deuterated molecules in disks using ALMA, in order to investigate the significance and spatial distribution of the fractionation. Brightness distributions of deuterated molecular lines vary with species and objects. In TW Hya, DCN is centrally peaked, while DCO+ is offset from the center, which suggests that they are formed via different deuteration paths. In AS 209, on the other hand, DCO+ and DCN emissions show similar distribution. Motivated by these observations, we calculate the reaction network model of deuterium chemistry in protoplanetary disks. Our model includes various deuterated molecules, exchange reactions, and nuclear spin-state chemistry of H2 and H3+, which affects the efficiency of deuterium enrichment. We found that the exchange reaction responsible for the fractionation varies among regions. While the exchange reactions of HD with H3+ and CH3+ are effective, as expected, the exchange reaction of D atom with HCO+ is also found to be important in warm regions and the disk surface. As long as cosmic rays penetrate the disk, the ortho/para ratio of H2 is found to be almost thermal, which lowers the efficiency of fractionation via CH2D+ compared with previous models which assume that H2 is all in the para state. We also discuss the effects of grain size and turbulent mixing on deuterium chemistry.
10/10 Peter Behroozi (University of Arizona): Maximizing Inference from Galaxy Observations
I discuss new methods to combine multiple datasets to maximally constrain galaxy evolution and the galaxy—dark matter halo connection, and show how these methods have already changed our understanding of galaxy formation physics (including why galaxies stop forming stars). Basic extensions to the same techniques allow constraining internal galaxy processes, including coevolution between galaxies and supermassive black holes as well as time delays for supernova / GRB progenitors. Finally, I discuss how these methods will benefit from the enormous amount of upcoming data in widefield (HETDEX, LSST, Euclid, WFIRST) and targeted (JWST, GMT) observations, as well as ways they can benefit observers, including making predictions for future telescopes (especially JWST) and testing which of many possible targeted observations would best constrain galaxy formation physics.
10/18 Ryosuke Hirai (Waseda University): Understanding core-collapse supernovae in binaries with various numerical approaches
Compact binaries have rapidly attracted attention since the recent detection of gravitational waves from a binary black hole merger event. The two components should have originated from massive stars which experience core collapse at the end of their lives. However, the evolution of massive stars is extremely uncertain, and the presence of a close-by companion complicates it even more. A close binary can undergo mass transfer when one star overflows its Roche lobe, or dynamical evolution called a common envelope phase when a star plunges into the envelope of the other. Another possible effect is the direct impact of supernova ejecta colliding with the companion when one of the stars explodes. In this talk I will discuss the consequences of the "ejecta-companion interaction", using hydrodynamical simulations and stellar evolution calculations. Our results have coincidentally helped us understand the nature of the progenitor system of a supernova called iPTF13bvn. I will also discuss the overall evolution of this progenitor system. If I have time, I will also introduce some of my latest works on numerical techniques.
10/19 Sergey Blinnikov (Institute for Theoretical and Experimental Physics): GRB Central Engines within Superluminous Supernovae and their environment
Large amounts of mass may be expelled by a star a few years before a supernova explosion. The collision of SN ejecta with the dense CSM may provide the required light output to make the supernova much more luminous. This class of models is referred to as "interacting SNe". Many SLSNe-I have photospheric velocities of order 10⁴ km/s, which are hard to explain in the interacting scenario with a modest explosion energy. A strong "hypernova" explosion improves the situation, and the properties of SLSNe near maximum light are explained by a GRB-like central engine, embedded in a dense envelope and shells ejected prior to the final collapse/explosion of a massive star. In this case a velocity up to 1.5x10⁴ km/s is no problem. The problem remains with the nature of the central engine and evolution scenarios leading to double explosions. In view of new LIGO/VIRGO detections of gravitational waves and accompanying events, a few comments and historical remarks will be given.
10/25 Sho Fujibayashi (Kyoto University): The evolution and mass ejection from the remnant of the binary neutron star merger
We perform general relativistic, long-term, axisymmetric neutrino radiation hydrodynamics simulations for a remnant massive neutron star (MNS) surrounded by a torus, which is a canonical remnant formed after the binary neutron star merger. In this work, we take into account effects of viscosity which is likely to arise in the merger remnant due to magnetohydrodynamical turbulence. We find that two viscous effects play a key role for the evolution of the remnant system and resulting mass ejection. In the first ~10 ms, the structure of the MNS is changed due to the viscous angular momentum transport. As a result, a sound wave, which subsequently becomes a shock wave, is formed in the vicinity of the MNS and the shock wave leads to significant mass ejection. For the longer-term evolution with ~ 0.1--10 s, viscous effects on the torus surrounding the MNS play an important role for mass ejection. The mass ejection rate depends on the viscous parameter for both mass ejection mechanisms, but even for the conservative alpha viscous parameter $\alpha_{\rm vis}\sim0.01$, total ejecta mass is $\sim 5\times 10^{-3}\ M_\odot$ and for $\alpha_{\rm vis}\sim0.04$, it could be $0.02\ M_\odot$. In this talk, I will explain the viscosity-driven mass ejection processes and discuss the electromagnetic signal from the ejecta.
11/02 Wanggi Lim (NAOJ): Dust in Infrared Dark Clouds
Dust grains play key roles in determining the physical properties of dense molecular structures. Despite their importance, our knowledge of dust properties mostly depends on experimental tests and theoretical models, due to observational limits in high-extinction regions. Here we introduce Mid, Far and Spectroscopic infrared extinction (MIREX, FIREX & SIREX) mapping methods that show the first observational evidence of grain growth in extremely high density regions, i.e. Infrared Dark Clouds. We utilize archival image data of Spitzer-IRAC band 1-4 (3.5 - 8μm), WISE band 3 (12μm), Spitzer-MIPS (24μm) and Herschel-PACS (70μm) in order to make MIREX & FIREX maps of an IRDC, G028.07+00.07. Spectroscopic data of the Spitzer-IRS Long-Low slit (15 to 38μm) is analyzed to produce a SIREX map of the same IRDC. The pixel-by-pixel relative extinction laws of the IRDC show tentative evidence of dust grain growth via coagulation and ice mantle formation in the region of Av~10-100 mag. We then compare mass surface density probability distribution functions (Σ-PDFs) of the IRDC and surrounding giant molecular cloud (GMC) that are derived from the MIREX map (8μm) and sub-mm (Herschel 160-500μm) dust emission based Σ map (via grey-body fit). The PDFs can be well fit by a single log-normal distribution, with only a small mass fraction (0.03-0.08) in a high-Σ power-law tail, even though gas kinematics indicate the IRDC and GMC are self-gravitating with virial parameter α~1. We also show the effect of spectral index β variation on the mass fraction of high-Σ power-law tails.
11/15 Sanemichi Takahashi (Kogakuin University): Early evolution of protoplanetary disks: a ring-gap structure formation
Planets are formed in protoplanetary disks, which are formed around protostars simultaneously with them. Investigation of the early stages of evolution of protoplanetary disks is important to understand how the star formation and planet formation processes are connected. Recent observations with ALMA reveal the detailed structures of protoplanetary disks at the early evolutionary stage. Sheehan and Eisner (2017) found that a gap structure is formed in the protoplanetary disk of the protostar WL 17 within about 10 au. Such a structure is thought to be a feature of old disks (>10^6 yr) called transition disks. However, the age of WL 17 is estimated to be a few times 10^5 yr. The mechanisms of gap formation in such young disks have not been investigated well. Sheehan and Eisner (2017) suggested that the gap structure is formed by a planet. However, the planets forming the gap have not been observed, and it seems difficult to form a planet in such young protoplanetary disks. Thus, it is important to investigate other mechanisms for gap formation in young disks. In this talk, we discuss gap formation by the disk wind in young disks like WL 17 using a 1D disk model for the formation and evolution of protoplanetary disks.
11/22 Wolfgang Loeffler (Heidelberg ARI): Gaia Sky: A 3D visualisation of the Gaia Catalogues
The aim of the ESA Gaia mission is to chart the three-dimensional positions and motions of about 1 billion stars in our Milky Way. Gaia Sky is an open-source, multi-platform, real-time, 3D, astronomy software being developed in the Gaia group of the Astronomisches Rechen-Institut (ZAH, Universität Heidelberg) to visualise these positions and motions. We will give an overview of the software features and catalogues which have already been released for the Gaia Data Release 1 and which are under current development for the Gaia Data Release 2. After briefly touching some technical aspects of the Gaia Sky implementation, we will present some educational and scientific use cases.
12/06 Kazuyuki Sugimura (Tohoku University): Accretion onto seed BHs: the impacts of anisotropic radiation and gas angular momentum
Seed black hole (BH) growth by gas accretion is supposed to play a crucial role in the formation of supermassive BHs. In this talk, I will present the results of our simulations of accretion onto seed BHs under radiation feedback and explain the impact of anisotropic radiation and gas angular momentum on the accretion flow. Then, I will shortly discuss the growth of Pop III remnant BHs based on our findings.
12/20 Yutaka Hirai (NAOJ): The role of enrichment of heavy elements in the chemodynamical evolution of dwarf galaxies
Recent astronomical observations have shown that there is large star-to-star scatter in the abundances of r-process elements and an increasing trend of Zn in extremely metal-poor stars. However, it is not yet clear how the early chemo-dynamical evolution of the building blocks of the Milky Way halo affects the abundances of heavy elements. In this talk, we present high-resolution N-body/smoothed particle hydrodynamics simulations of dwarf galaxies. We find that galaxies with star formation rates less than 0.001 Msun per year reproduce the observed abundances of r-process elements. On the other hand, r-process elements appear at a higher metallicity in galaxies with star formation rates greater than 0.01 Msun per year. We also find that the ejecta from electron-capture supernovae contribute to stars with [Zn/Fe] > 0.5. The scatter of heavy elements mainly comes from the inhomogeneity of the metals in the interstellar medium. We find that the timescale of metal mixing is less than 40 Myr. This timescale is shorter than the typical dynamical times of dwarf galaxies. Our results demonstrate that future observations of r-process elements in extremely metal-poor stars will be able to constrain the early chemo-dynamical evolution of the Local Group galaxies.
1/17 Kohei Hayashi (NAOJ): The universal dark halo scaling relation for the dwarf spheroidal galaxies in the Local Group
Dwarf spheroidal galaxies (dSphs) are excellent laboratories to shed light on fundamental properties of dark matter because these galaxies are the most dark matter dominated systems. In this talk, we propose the universal dark halo scaling relation for the dSphs in the Milky Way and M31. We calculate the dark halo surface density within a radius, r_max, giving the maximum circular velocity, V_max, with respect to observed dSphs and simulated subhalos associated with MW-sized dark halos, and find that the values of surface densities from pure dark matter simulations are in good agreement with those from observations even without employing any fitting procedures. This implies that this surface density would not be largely affected by any baryonic feedbacks and thus universal. Moreover, all subhalos on the small scales of dwarf satellites are expected to obey the universal relation, irrespective of differences in their orbital evolutions, host halo properties, and observed redshifts. In order to understand the origin of this universal dark halo relation, we also investigate orbital and dynamical evolutions of subhalos and find that most subhalos evolve generally along the specific r_max - V_max sequence even though these subhalos have undergone different histories of mass assembly and tidal stripping. This sequence, therefore, should be the key feature to understand the nature of the universal scaling relation.
1/24 Hiroshi Kobayashi (NAOJ): Three-Dimensional Radiation-Hydrodynamic Simulation of Clumpy Outflow and Its Application to Supercritical Accretors around Black Holes
We study clumpy outflows from supercritical accretion flow around a stellar mass black hole by means of global three-dimensional (3-D) radiation-hydrodynamic (RHD) simulations. In this work, we find that the radiatively driven outflow with the outflow rate of ~10 LEdd/c2 fragments into many clumps above the photosphere located at a few hundreds of Schwarzschild radius (rS) from the central black hole. Such clumps have a shape of a torn sheet, and are rotating around the central black hole with a sub-Keplerian velocity. The typical clump size is 30 rS or less in the radial direction and is more elongated in the angular directions, ~ hundreds of rS at most. When such clumps pass across the line of the sight of a distant observer, stochastic luminosity variations will be produced. The variation timescales are several seconds for a stellar-mass black hole with mass of ten to several tens of the solar mass and are in rough agreement with the observational results of some ultraluminous X-ray sources (ULXs). Our results thus provide a strong support of a hypothesis that the ULXs are powered by the supercritical accretion onto the stellar mass black hole.
1/31 Misako Tatsuuma (UTokyo): Gravitational Instability of a Dust Layer Composed of Porous Silicate Dust Aggregates in a Protoplanetary Disk
Planetesimal formation is one of the most important unsolved problems in planet formation theory. In particular, rocky planetesimal formation is difficult because silicate dust grains are easily disrupted when they collide. Recently, it has been proposed that they can grow as porous aggregates when their monomer radius is smaller than ~ 10 nm, which can also avoid the radial drift toward the central star. However, the stability of a layer composed of such porous silicate dust aggregates has not been investigated. Therefore, we investigate the gravitational instability of this dust layer. To evaluate the disk stability, we calculate Toomre's stability parameter Q, for which we need to evaluate the equilibrium random velocity of dust aggregates. We calculate the equilibrium random velocity considering gravitational scattering and collisions between dust aggregates, drag by mean flow of gas, stirring by gas turbulence, and gravitational scattering by gas density fluctuation due to turbulence. We derive the condition of the gravitational instability using the disk mass, dust-to-gas ratio, turbulent strength, orbital radius, and dust monomer radius. We find that, for the minimum mass solar nebula model at 1 au, the dust layer becomes gravitationally unstable when the turbulent strength alpha<10^{-5}. If the dust-to-gas ratio is increased twice, the gravitational instability occurs for alpha<10^{-4}. We also find that the dust layer is more unstable in disks with larger mass, higher dust-to-gas ratio, and weaker turbulent strength, at larger orbital radius, and with a larger monomer radius.
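For reference (an editorial addition, not part of the abstract): Toomre's stability parameter in its standard forms for gas and particle disks is \[ Q_{\mathrm{gas}} = \frac{c_s \kappa}{\pi G \Sigma}, \qquad Q_{\mathrm{particle}} = \frac{\sigma_R \kappa}{3.36\, G \Sigma}, \] where $c_s$ is the sound speed, $\sigma_R$ the radial random velocity (here, that of the dust aggregates), $\kappa$ the epicyclic frequency, and $\Sigma$ the surface density; gravitational instability is expected for $Q \lesssim 1$.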
1/31 Kanji Mori (UTokyo): Quantum Mechanical Constraint on Carbon Fusion Reaction and Its Impact on Type Ia Supernovae
Type Ia supernovae (SNe Ia) are thought to be thermonuclear explosions of white dwarfs (WDs). Their progenitors are not well understood, but one of the popular scenarios is the double degenerate (DD) scenario, which attributes SNe Ia to WD-WD binary mergers. The fate of the WD mergers depends on the rate of the 12C+12C reaction. We assume a low-energy resonance and impose an upper limit on the resonance strength using the Wigner limit. The resultant resonant rate is applied to the DD scenario. We show that the strongest possible resonance will enhance the reaction rate by ~10^3 times compared with the standard rate, although some of the resonances introduced in previous works are too strong. The resonant rate decreases the ignition temperature of carbon burning, therefore accretion-induced collapse occurs more easily and the contribution of the DD scenario to the SNe Ia rate becomes smaller.
2/21 Kaiki Inoue (Kinki University):Detecting Sub-lunar Mass Compact Objects toward the Local Group Galaxies
Sub-lunar mass compact objects (SULCOs) such as primordial black holes and rogue dwarf planets may constitute a substantial portion of the Milky Way halo. In order to detect SULCOs, I propose to observe a large number of stars in the Local Group galaxies. By monitoring these stars, one can detect nanolensing events by SULCOs. In contrast to microlensing events by stellar-mass objects, the finite source size effect plays an important role. Using stars with V<26 in M33, for a one-night observation, we would be able to detect 10^(3-4) nanolensing events if SULCOs with a mass range of 10^(-9) to 10^(-7) solar masses constitute the whole dark matter component.
3/28 Tomoyuki Hanawa (Chiba University): Conservation of Total Energy Including Gravity in Hydrodynamical Simulations
The total energy of an astronomical system is of great interest, since the evolution of the system depends on it. Hence we solve the hydrodynamical equations taking account of energy conservation in numerical simulations of astrophysical objects. However, gravitational energy is often taken into account as a source term, and conservation of the total energy including gravity is not guaranteed. This is partly because it takes additional computational cost to solve the hydrodynamical equations in fully conservative form. This paper shows that the total energy and momentum of a system are fully conserved down to the round-off error if the source terms due to gravity are properly taken into account. The method is applicable whether the gravitational force is given by the Poisson equation or explicitly as a function.
Problems in Mathematics
by Yu · Published 04/30/2018
Let $T: \R^n \to \R^m$ be a linear transformation.
Suppose that the nullity of $T$ is zero.
If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$.
Given any constants $a,b,c$ where $a\neq 0$, find all values of $x$ such that the matrix $A$ is invertible if \[ A= \begin{bmatrix} 1 & 0 & c \\ 0 & a & -b \\ -1/a & x & x^{2} \end{bmatrix}. \]
Find all eigenvalues and corresponding eigenvectors for the matrix $A$ if \[ A= \begin{bmatrix} 2 & -3 & 0 \\ 2 & -5 & 0 \\ 0 & 0 & 3 \end{bmatrix}. \]
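A quick numerical cross-check of the eigenvalue problem (an editorial addition, not part of the exercise; the exercise itself should be done by hand):

```python
import numpy as np

A = np.array([[2, -3, 0],
              [2, -5, 0],
              [0,  0, 3]])
values, vectors = np.linalg.eig(A)
print(values)   # the three eigenvalues
print(vectors)  # corresponding eigenvectors as columns
```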
Let $A$ be the matrix given by \[ A= \begin{bmatrix} -2 & 0 & 1 \\ -5 & 3 & a \\ 4 & -2 & -1 \end{bmatrix}
\] for some variable $a$. Find all values of $a$ which will guarantee that $A$ has eigenvalues $0$, $3$, and $-3$.
Let \[ A= \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix} .
\] Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square.
Compute the determinant of $A$.
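A one-line numerical check (NumPy sketch):

```python
import numpy as np

A = np.array([[8, 1, 6],
              [3, 5, 7],
              [4, 9, 2]])

print(round(np.linalg.det(A)))  # -360
```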
Define two functions $T:\R^{2}\to\R^{2}$ and $S:\R^{2}\to\R^{2}$ by \[ T\left( \begin{bmatrix} x \\ y \end{bmatrix} \right) = \begin{bmatrix} 2x+y \\ 0 \end{bmatrix} ,\; S\left( \begin{bmatrix} x \\ y \end{bmatrix} \right) = \begin{bmatrix} x+y \\ xy \end{bmatrix} .
\] Determine whether $T$, $S$, and the composite $S\circ T$ are linear transformations.
Using Gram-Schmidt orthogonalization, find an orthogonal basis for the span of the vectors $\mathbf{w}_{1},\mathbf{w}_{2}\in\R^{3}$ if \[ \mathbf{w}_{1} = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix} ,\quad \mathbf{w}_{2} = \begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix} . \]
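A minimal NumPy sketch of Gram-Schmidt for these two vectors (orthogonal but not normalized, as the problem requests):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal basis for the span of `vectors`."""
    basis = []
    for v in vectors:
        u = v.astype(float)
        for b in basis:
            u = u - (u @ b) / (b @ b) * b  # subtract the projection onto b
        basis.append(u)
    return basis

w1 = np.array([1, 0, 3])
w2 = np.array([2, -1, 0])
u1, u2 = gram_schmidt([w1, w2])
print(u1, u2, u1 @ u2)  # the last number should be ~0
```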
Let \[ \mathbf{v}_{1} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} ,\; \mathbf{v}_{2} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} .
\] Let $V=\Span(\mathbf{v}_{1},\mathbf{v}_{2})$. Do $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ form an orthonormal basis for $V$?
If not, then find an orthonormal basis for $V$.
Let $W$ be the set of $3\times 3$ skew-symmetric matrices. Show that $W$ is a subspace of the vector space $V$ of all $3\times 3$ matrices. Then, exhibit a spanning set for $W$.
Determine bases for $\mathcal{N}(A)$ and $\mathcal{N}(A^{T}A)$ when \[ A= \begin{bmatrix} 1 & 2 & 1 \\ 1 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix} .
\] Then, determine the ranks and nullities of the matrices $A$ and $A^{T}A$.
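The key fact behind this exercise, namely that $A$ and $A^{T}A$ have the same nullspace and hence the same rank and nullity, can be checked numerically (NumPy sketch):

```python
import numpy as np

A = np.array([[1, 2, 1],
              [1, 1, 3],
              [0, 0, 0]], dtype=float)

print(np.linalg.matrix_rank(A))        # 2
print(np.linalg.matrix_rank(A.T @ A))  # 2, so both nullities are 3 - 2 = 1
```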
Let $A$ be an $m \times n$ matrix.
Suppose that the nullspace of $A$ is a plane in $\R^3$ and the range is spanned by a nonzero vector $\mathbf{v}$ in $\R^5$. Determine $m$ and $n$. Also, find the rank and nullity of $A$.
Using the axioms of a vector space, prove the following properties.
Let $V$ be a vector space over $\R$. Let $u, v, w\in V$.
(a) If $u+v=u+w$, then $v=w$.
(b) If $v+u=w+u$, then $v=w$.
(c) The zero vector $\mathbf{0}$ is unique.
(d) For each $v\in V$, the additive inverse $-v$ is unique.
(e) $0v=\mathbf{0}$ for every $v\in V$, where $0\in\R$ is the zero scalar.
(f) $a\mathbf{0}=\mathbf{0}$ for every scalar $a$.
(g) If $av=\mathbf{0}$, then $a=0$ or $v=\mathbf{0}$.
(h) $(-1)v=-v$.
The first two properties are called the cancellation law.
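As an illustration of the intended style of argument, here is how part (a) typically goes, using only commutativity, associativity, the zero vector and additive inverses: adding $-u$ to both sides of $u+v=u+w$ gives
\[ v = \mathbf{0} + v = (-u + u) + v = -u + (u + v) = -u + (u + w) = (-u + u) + w = \mathbf{0} + w = w. \]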
Find a basis for $\Span(S)$ where $S= \left\{ \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} , \begin{bmatrix} -1 \\ -2 \\ -1 \end{bmatrix} , \begin{bmatrix} 2 \\ 6 \\ -2 \end{bmatrix} , \begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix}
\right\}$.
Let $S=\{\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3},\mathbf{v}_{4},\mathbf{v}_{5}\}$ where \[ \mathbf{v}_{1}= \begin{bmatrix} 1 \\ 2 \\ 2 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{2}= \begin{bmatrix} 1 \\ 3 \\ 1 \\ 1 \end{bmatrix} ,\;\mathbf{v}_{3}= \begin{bmatrix} 1 \\ 5 \\ -1 \\ 5 \end{bmatrix} ,\;\mathbf{v}_{4}= \begin{bmatrix} 1 \\ 1 \\ 4 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{5}= \begin{bmatrix} 2 \\ 7 \\ 0 \\ 2 \end{bmatrix}
.\] Find a basis for the span $\Span(S)$.
Let $A=\begin{bmatrix} 2 & 4 & 6 & 8 \\ 1 &3 & 0 & 5 \\ 1 & 1 & 6 & 3
\end{bmatrix}$.
(a) Find a basis for the nullspace of $A$.
(b) Find a basis for the row space of $A$.
(c) Find a basis for the range of $A$ that consists of column vectors of $A$.
(d) For each column vector which is not a basis vector that you obtained in part (c), express it as a linear combination of the basis vectors for the range of $A$.
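All four parts can be read off from the reduced row echelon form; here is a SymPy sketch (the method names below are the current SymPy API):

```python
from sympy import Matrix

A = Matrix([[2, 4, 6, 8],
            [1, 3, 0, 5],
            [1, 1, 6, 3]])

R, pivots = A.rref()     # rref and pivot column indices, used in (a)-(c)
print(R, pivots)
print(A.nullspace())     # (a) basis for the nullspace
print(A.rowspace())      # (b) basis for the row space
print(A.columnspace())   # (c) pivot columns of A: a basis for the range
# (d): the entries of each non-pivot column of R give the coefficients
# expressing the corresponding column of A in terms of the pivot columns.
```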
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^3$. Is it possible that $S_2=\{\mathbf{v}_1\}$ is a spanning set for $V$?
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^5$. If $\mathbf{v}_4$ is another vector in $V$, then is the set
\[S_2=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\] still a spanning set for $V$? If so, prove it. Otherwise, give a counterexample.
For a set $S$ and a vector space $V$ over a scalar field $\K$, define the set of all functions from $S$ to $V$
\[ \Fun ( S , V ) = \{ f : S \rightarrow V \} . \]
For $f, g \in \Fun(S, V)$ and $c \in \K$, addition and scalar multiplication are defined by
\[ (f+g)(s) = f(s) + g(s) \, \mbox{ and } (cf)(s) = c (f(s)) \, \mbox{ for all } s \in S . \]
(a) Prove that $\Fun(S, V)$ is a vector space over $\K$. What is the zero element?
(b) Let $S_1 = \{ s \}$ be a set consisting of one element. Find an isomorphism between $\Fun(S_1 , V)$ and $V$ itself. Prove that the map you find is actually a linear isomorphism.
(c) Suppose that $B = \{ e_1 , e_2 , \cdots , e_n \}$ is a basis of $V$. Use $B$ to construct a basis of $\Fun(S_1 , V)$.
(d) Let $S = \{ s_1 , s_2 , \cdots , s_m \}$. Construct a linear isomorphism between $\Fun(S, V)$ and the vector space of $m$-tuples of elements of $V$, defined as
\[ V^m = \{ (v_1 , v_2 , \cdots , v_m ) \mid v_i \in V \mbox{ for all } 1 \leq i \leq m \} . \]
(e) Use the basis $B$ of $V$ to construct a basis of $\Fun(S, V)$ for an arbitrary finite set $S$. What is the dimension of $\Fun(S, V)$?
(f) Let $W \subseteq V$ be a subspace. Prove that $\Fun(S, W)$ is a subspace of $\Fun(S, V)$.
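Part (d) is essentially the observation that a function on a finite set is determined by its tuple of values. A small Python sketch (the three-element set and the choice $V=\R^2$, modeled by pairs of floats, are illustrative assumptions, not part of the problem):

```python
S = ["s1", "s2", "s3"]

def to_tuple(f):
    """Fun(S, V) -> V^m: evaluate f at each point of S, in order."""
    return tuple(f(s) for s in S)

def from_tuple(values):
    """V^m -> Fun(S, V): the function sending S[i] to values[i]."""
    lookup = dict(zip(S, values))
    return lambda s: lookup[s]

f = lambda s: (float(len(s)), 0.0)    # an element of Fun(S, R^2)
print(to_tuple(f))                    # its image in (R^2)^3
g = from_tuple(((1.0, 2.0), (0.0, 0.0), (3.0, -1.0)))
print(g("s2"))                        # (0.0, 0.0): the two maps are mutually inverse
```

Since addition and scalar multiplication in $\Fun(S,V)$ are pointwise, both maps are linear, which is the content of the isomorphism.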
Using the definition of the range of a matrix, describe the range of the matrix \[A=\begin{bmatrix} 2 & 4 & 1 & -5 \\ 1 &2 & 1 & -2 \\ 1 & 2 & 0 & -3
\end{bmatrix}.\]
A soft introduction to physics for mathematicians who don't know the first thing about physics
There have been similar questions on MathOverflow, but the answers always gave some advanced introduction to the mathematics of quantum field theory, or string theory, and so forth. While those may be good introductions to the mathematics of those subjects, what I require is different: a soft and readable introduction to the (many) concepts and theories out there, with the mathematics involved presented in comfortable generality. What makes this a "for mathematicians" question is that a standard soft introduction will also assume that the reader is uncomfortable with the word "manifold", or certainly with "sheaf" and "Lie algebra". So I'm looking for the benefit of scope and narrative, together with a presumption of mathematical maturity.
N.B. If your roadmap is several books, that is also very welcome.
mp.mathematical-physics physics
James D. Taylor
$\begingroup$ IMHO requiring "introduction to basic physics which is soft and mathematically general" is contradictory. Sure, you can start an introduction to classical mechanics with talk about Poisson manifolds and symplectic geometry, or start quantum mechanics with C*-algebras, but this completely obscures the underlying physical ideas with formalism that is irrelevant for most physical purposes. My advice would be to first learn physics the physicist's way and then delve into the general mathematical framework, not the other way round. $\endgroup$ – Marcin Kotowski Jan 7 '11 at 16:08
$\begingroup$ @Marcin: I respectfully disagree. There are certain details that books written for physicists just don't pay attention to, e.g. what structures on R^n are necessary to set up the theory, exactly what changes of coordinates are allowed, that I need as a mathematician to make sense of it. Also, books written for physicists tend to focus too much on calculations for my taste and, in both physics and mathematics, I get bored reading calculations and would much prefer concepts. You are also assuming that the end goal is physical applications. At least for some mathematicians, the end goal... $\endgroup$ – Qiaochu Yuan Jan 7 '11 at 18:08
$\begingroup$ ...may be purely mathematical, but in a field where the mathematics is influenced by physics. In that situation I don't see anything wrong with learning the formalism first and then seeing how it descends to physical applications. $\endgroup$ – Qiaochu Yuan Jan 7 '11 at 18:09
$\begingroup$ @Qiaochu: The problem I see with that approach is you're deliberately approaching physics like an outsider. Mathematical formalism in some sense isn't fundamental to thinking about physics. If you study any subject you really ought to always approach it first like an insider. Learn physics the ways physicists do. If you're mathematically inclined you can always re-package the physics later in to various formalisms. But focusing on a particular formalism first IMO is the best way to get distracted and miss the point. $\endgroup$ – Ryan Budney Jan 7 '11 at 20:03
$\begingroup$ @Ryan: Sure, and surely the best way to learn French is to go live in France for a year. But that doesn't mean it's wrong-headed to try to get someone who already knows the language to teach it to you in your native language first. $\endgroup$ – Cam McLeman Jan 8 '11 at 1:03
I haven't looked at it yet, but Michael Spivak's Physics for Mathematicians: Mechanics I was published a month or so ago. It seems very interesting. Here's an excerpt from the preface:
" I want to explore the working of elementary physics ... which I have always found so hard to fathom.[...] I have written this work in order to learn the subject myself, in a form that I find comprehensible.[...] By physics I mean ... well, physics, what physicists mean by physics, i.e., the actual study of physical objects ... (rather than the study of symplectic structures on cotangent bundles, for example)."
Some lecture notes covering what presumably turned into the first few chapters was put online some time ago here:
http://alpha.math.uga.edu/~shifrin/Spivak_physics.pdf (elementary mechanics from a mathematican's viewpoint)
$\begingroup$ I would like to give a strong second to this recommendation. I do have a copy and have been reading and liking it. And let me add that it contains tons of fascinating historical etc. material. While not everything that I know about is done just as I would have presented it, as anyone who knows Spivak's other famous books would expect, the presentation is exceptionally clear and readable. I saw Spivak at the New Orleans Joint Annual Meeting this weekend and he told me that electricity and magnetism is next---but don't hold your breath, he said the Mechanics volume took 6 yrs of very hard work. $\endgroup$ – Dick Palais Jan 10 '11 at 14:52
$\begingroup$ I had the opportunity to study this book and must say it's a real gem, it is one of the few books that balances precise definitions with concise intuition. Just wanna stress the importance of classical mechanics for the physics as a whole, besides it being genereally though as a "first step" before quantum mechanics that one should invest much time, so definitely a good place to start, probably followed by Arnold's classic whcih is mentioned a few answers bellow. $\endgroup$ – cesaruliana Jul 2 '13 at 20:57
$\begingroup$ The version I am using is useful, but hardly a great book for mathematicians. For example, he explains that 1/x + 1/y does not equal 1/(x+y). Also, in many cases the symbols used are inconsistent. Further, he often does not distinguish between variables with direction (like velocity) from related variables without direction (like speed). But, given the dearth of physics books for mathematicians, I too would recommend it. $\endgroup$ – MPitts Mar 22 '19 at 13:27
$\begingroup$ I thought the first chapter was far too verbose for me. $\endgroup$ – Andres Mejia Aug 18 '19 at 5:02
If you really know nothing about physics I suggest you begin with any textbook on physics for undergraduates. Easy to read, it will introduce the main usual suspects. After that, you'll ask again :)
I am not sure that jumping from nothing to quantum mechanics, or even worse quantum field theory, would be wise, like jumping from nothing in math to algebraic geometry or K-theory.
After that, it depends of course at what level of mathematical physics you want to stop. I will illustrate this with some examples:
Question: What is the "mass" of an isolated dynamical system?
Math Answer: It is the cohomology class of the action of the Galilei group, measuring the lack of equivariance of the moment map, on a symplectic manifold representing the isolated dynamical system.
Another question: Why in general relativity $E = mc^2$?
Math Answer: Because the Poincaré group has no cohomology
Another, other question: What is the theorem of decomposition of motions around the center of gravity?
Math Answer: Let $(M,\omega)$ be a symplectic manifold with a Hamiltonian action of the Galilei group. If the "mass" of the system is not zero (in the sense above), then $M$ is the symplectic product of $({\bf R}^6, {\rm can})$, representing the motions of the center of gravity, with another symplectic manifold $(M_0,\omega_0)$, representing the motions around the center of gravity. The Galilei group acts naturally on $\bf R^6$ and $SO(3) \times {\bf R}$ on $M_0$.
Another, other, other question: What are the constants of motions?
Math Answer: Let $(M,\omega)$ be a pre-symplectic manifold with a Hamiltonian action of a Lie group $G$. Then the moment map is constant on the characteristics of $\omega$, that is, the integral manifolds of the vector distribution $x \mapsto \ker(\omega_x)$.
These answers are the mathematical versions of classical constructions in physics, but it would be very difficult to appreciate them if you have had no pedestrian introduction to physics. You may also enjoy Aristotle's book "Physics", as a first dish, just for tasting the flavor of physics :)
After that, you will be able to appreciate also quantum mechanics, but this is another question.
Just before entering the modern world of physics I would suggest a few basic readings for the winter evenings, near the fireplace (I'm sorry, I write them down in French because I read them in French).
• Platon, Timée, trad. Émile Chambry.
• Aristote, La Physique, Éd. J. Vrin.
• Maïmonide, Le Guide des Égarés, Éd. Maisonneuve & Larose. (the part about time as an accident of motion, accident of the thing. Very deep and modern thoughts).
• Giordano Bruno, Le Banquet des Cendres, Éd. L'éclat.
• Galileo Galilei, Dialogue sur les Deux Grands Systèmes du Monde, Éd. Points.
• Albert Einstein, La Relativité, Éd. Payot.
• Joseph-Louis Lagrange, Mécanique Analytique, Éd. Blanchard.
• Felix Klein, Le Programme d'Erlangen, Éd. Gauthier-Villars.
• Jean-Marie Souriau, Structure des Systèmes Dynamiques, Éd. Dunod.
• Victor Guillemin & Shlomo Sternberg, Geometric Asymptotics, AMS Math Books
• François De Gandt, Force and Geometry in Newton's Principia.
$\begingroup$ Dear Patrick, could you point to references where answers are given in this style? $\endgroup$ – Michael Bächtold Jan 7 '11 at 17:38
$\begingroup$ "Structure des systèmes dynamiques" Jean-Marie Souriau (there is an english version but I prefer the french one :) and few books from Shlomo Sternberg: GeometricsAasymptotics and some others. I will complete maybe later. $\endgroup$ – Patrick I-Z Jan 7 '11 at 20:38
$\begingroup$ Could you give some references where your answer to the $E=mc^2$ question is discussed in detail? As a physics student, I'm roughly familiar with cohomology but I've never made the connection. $\endgroup$ – balu Mar 27 '16 at 0:19
$\begingroup$ I don't see how these books provide a "soft introduction" for those who "don't know the first thing about physics" - as the o.p. requested. $\endgroup$ – MPitts Mar 22 '19 at 13:21
Dolgachev has some lecture notes for an introduction to physics course he taught to math graduate students. Certainly it presumes mathematical maturity.
Let me suggest a reading plan :
Feynman's books are a pleasure to read and provide great insights into basic physics.
For classical mechanics, "Mathematical methods of Classical mechanics" by V.I. Arnold and "Mechanics" by Landau-Lifshitz.
For quantum mechanics, "Mathematical Foundations of Quantum Mechanics" by von Neumann. Another nice book on quantum mechanics is by R. Shankar.
For statistical physics, the two volumes by Landau are my favorites.
For general relativity, R.M. Wald; more mathematically inclined is "The Large Scale Structure of Space-Time" by Hawking-Ellis.
For QFT and String Theory, read AMS book "Quantum Fields and Strings for mathematicians". It contains beautiful lectures by experts in the field addressed to mathematicians.
Another good book is the Clay monograph "Mirror Symmetry" by Hori et al. It starts with classical mechanics, moves through quantum mechanics to QFT, String Theory.
Ivan Mirkovic has nice lecture notes here http://www.math.umass.edu/~mirkovic/ the notes on string theory. It also has discussion on classical and quantum mechanics.
J Verma
I recommend the book The Road to Reality: A Complete Guide to the Laws of the Universe by Roger Penrose.
It tries to touch almost all areas in physics, including the hot ones. Penrose emphasizes the mathematical part (especially the geometric interpretations), and avoids being superficial (many scientific writers, when trying to make things easier, use misleading metaphors). One warning: be aware that sometimes he expresses his personal viewpoint, which is not always mainstream. But it is clear when he does this, and he is very careful to do justice to the mainstream viewpoint by presenting it very well.
This book is indeed a road map to modern physics, and I recommend to anyone interested in this area to read it at least once, and then to consult the chapters of interest as they need.
Cristi Stoica
$\begingroup$ The book has its pros and cons, but it's the antithesis of what the OP asked for; it assumes no mathematical background whatsoever, and therefore spends hundreds of pages on mathematical preliminaries before doing almost any physics. $\endgroup$ – Ben Crowell Dec 13 '12 at 1:11
First of all "physics" is rather general. You're more likely to find good books on more specific topics like special relativity, quantum mechanics, etc.
Second, if you have the time I would encourage you to read physics books that are written for physicists, not for mathematicians. There are numerous differences in terminology and worldview between the physics and mathematics community, even when the underlying subject matter is in some sense the same. It's very valuable for a mathematician to be able to read and understand recent physics arxiv postings, and the only way to do this is to go through some (perhaps accelerated) version of physics grad school.
Here are some physics books which I have enjoyed. The list is of course constrained by my own limited experience.
Electricity and Magnetism, Berkeley Physics Course Vol. II by Edward M. Purcell. This book presupposes knowledge of special relativity, but I thought it was really great when I read it as an undergraduate.
Feynman lectures on physics. Not mathematically sophisticated, but very readable and also covers many different topics.
The Quantum Theory of Fields, volume 1 by Steven Weinberg. I found this book to be much less impenetrable (from the point of view of a mathematician who foolishly stopped taking physics courses when he was an undergraduate) than the typical QFT textbook.
Quantum Field Theory in a Nutshell by Anthony Zee. This book omits a lot of details and emphasizes the big picture. It's a great companion to a more detailed book on QFT.
$\begingroup$ Textbooks you propose are generally quite a good choice, although in my opinion Weinberg is somewhat hard to read. As to electrodynamics, IMHO Griffiths' textbook is wonderful (one of the best textbooks I have seen, perfect if you want to gain physical intuition). $\endgroup$ – Michal Kotowski Jan 7 '11 at 16:17
$\begingroup$ A mechanics text that would fit naturally into this list is Kleppner and Kolenkow, An introduction to mechanics. $\endgroup$ – Ben Crowell Dec 13 '12 at 1:07
The Feynman lectures are good, but one of the main things which separates physics from mathematics is the role of experiment and observation. Physics is not just a matter of getting the formulae and models right, but also of testing mathematical models against observations to see whether they stand up or break down in "the real world". Part of the role of mathematical models is to give physicists some guidance on potentially fruitful places to look.
So it rather depends whether you are looking at mathematical/theoretical physics as a mathematician/theoretician would understand it, or whether you are looking to understand the role of mathematics in physics as a discipline.
Mark Bennet
$\begingroup$ I strongly agree with your first sentence, that physics differs from math because of experiments; that is not to say that there are no experiments in mathematics too (there are), but they don't play the same crucial role. However, there would be no physics without mathematics, for the same reason that there would be no literature without vocabulary, syntax, grammar. And math is just the way to express nature, that is physics, as much as poetry is a way to express feelings. If I were to compare with painting I would say that mathematics for physics is figurative art and math for math is abstract art. $\endgroup$ – Patrick I-Z Jan 8 '11 at 1:53
$\begingroup$ "Physics is a branch of experimental science;mathematics is that part of physics where the experiments are cheap."-Vladimir Arnold $\endgroup$ – The Mathemagician Jul 22 '11 at 4:11
I really liked Feynman's popular QED book:
http://en.wikipedia.org/wiki/QED:_The_Strange_Theory_of_Light_and_Matter
and really hated anything by Schwinger.
For Russian readers, there is a really nice little book by Faddeev (I asked him whether he wanted it translated once, and he demurred that it would need more work, but I respectfully disagree).
Igor Rivin
$\begingroup$ Please translate Faddeev's book! $\endgroup$ – Deane Yang Jan 7 '11 at 23:43
$\begingroup$ Is this related to: L. D. Faddeev and O. A. Yakubovskii "Lectures on Quantum Mechanics for Mathematics Students"? $\endgroup$ – Justin Moore Dec 12 '12 at 15:11
As someone who knows a fair amount of math and has never had a physics course, I felt that Sudbery's book on Quantum Mechanics and the Particles of Nature was written specifically for me.
Steven Landsburg
$\begingroup$ (Is there an electronic version available somewhere on the Net?) $\endgroup$ – Sadiq Ahmed Sep 20 '12 at 15:25
Hermann Weyl's The Theory of Groups and Quantum Mechanics is a great read to learn about quantum mechanics (and math!) if you are a mathematician.
Mariano Suárez-Álvarez
$\begingroup$ I want to like this book, but the outdated notation and terminology makes it somewhat difficult to read. I did learn that eigenvalues and eigenvectors used to be called characteristic values and characteristic vectors, which certainly explains the origin of the term characteristic polynomial. Kind of makes you wonder why people don't call it the eigenpolynomial. $\endgroup$ – Qiaochu Yuan Jan 7 '11 at 21:25
$\begingroup$ They also used to be known under "latent roots" and many other names. If Weyl's "Theory of Groups" is similar to his "Classical Groups" then I understand why it is hard to read nowadays. (One word: coordinates.) $\endgroup$ – darij grinberg Jan 8 '11 at 19:27
$\begingroup$ My favorite name for the polynomial (or to the equation resulting from equating it to zero, rather) is secular equation. $\endgroup$ – Mariano Suárez-Álvarez Jan 9 '11 at 2:10
Edit: The list below does not fit the requirements you describe all that well, but the texts there are what I found helpful. If you can read German, I would recommend W. Greiner's "Theoretische Physik", which explains basically all the needed mathematics. Useful too may be J. Baez's "Gauge Fields, Knots and Gravity", which contains a "rapid course on manifolds and differential forms, emphasizing how these provide a proper language for formulating Maxwell's equations on arbitrary spacetimes. The authors then introduce vector bundles, connections and curvature in order to generalize Maxwell theory to the Yang-Mills equations".
I found Novikov, Shifman, Vainshtein and Zakharov's "ABC of Instantons" very good and helpful for entering the 'physicist's mindspace'.
And then, I found F. C.'s recommendation of Nahm's very fascinating "Conformal Field Theory and Torsion Elements of the Bloch Group" very good; e.g., Nahm writes "readable for mathematicians", "much of this article is aimed at mathematicians who want to see quantum field theory in an understandable language .... all computations should be easily reproducible by the reader". Nahm's subject is a strange connection between some quantum field theories and algebraic K-theory, and he hopes his article could stimulate mathematicians to become interested in this exciting topic. A forthcoming article by Zagier on "quantum modular forms" may relate to that too.
Very interesting too is Nahm's article on the very strange and puzzling history of quantum field theory and string theory, which makes mathematicians so much headaches.
Connes/Marcolli's book "Noncommutative Geometry, Quantum Fields and Motives" contains a very readable introduction to quantum field theory, renormalization, etc.; Marcolli's "Feynman Motives" has a chapter on "Perturbative Quantum Field Theory and Feynman Diagrams".
Rabin gave a very readable "Introduction to Quantum Field Theory for Mathematicians" at this conference.
Thomas Riepe
$\begingroup$ I would heartily recommend Greiner's 12-volume magnum opus; they're the books most graduate and doctoral students I know like best for teaching themselves physics. They're concise yet clear, very mathematical and, best of all, have hundreds of solved examples. Unfortunately, the English translations supposedly introduced many errors, so if you can read German, definitely go with the originals. $\endgroup$ – The Mathemagician Jul 22 '11 at 4:08
Here is a list of books I find useful that present some physical topics from a mathematical viewpoint. Sadly I don't know a good reference for electromagnetism, quantum field theory or statistical physics.
Arnold, Mathematical methods of classical mechanics.
Woodhouse, Special relativity.
Woodhouse, General relativity.
Woodhouse, Geometric Quantization.
algori
Jeffrey Rabin has written a lightning-fast introduction to physics designed for exactly the audience you describe: people with "the mathematical background of a first-year graduate student," but "[no] prior knowledge of physics beyond F = ma."
It's a bit single-minded, because Rabin's ultimate goal is quantum field theory, but it hits most of the important subjects in modern physics, including:
Classical mechanics (Newtonian, Lagrangian, and Hamiltonian formalisms)
The Lorentz group (presumably that means some special relativity?)
Quantum mechanics (not sure how in-depth this section is, but it's better than nothing)
The only glaring omissions I can see are classical electromagnetism, statistical mechanics, and general relativity.
You can find Rabin's introduction in the book Geometry and Quantum Field Theory.
Vectornaut
Hmmm, the first thing that occurs to me is that mathematicians need to learn about "time", because something as fundamental as "conservation of energy" is not directly to be found in mathematics in its physics form. The two are connected by what is usually known as "Noether's theorem" and so this provides a more manageable question: where can mathematicians genuinely learn about the role of symmetry principles in physics? This starts getting us somewhere, but observe what goes on: the traditional route goes through calculus of variations in some form, and that is a theory not in Bourbaki.
So the deal looks like this to me: do we want to "bridge the gap" between contemporary mathematics and contemporary physics in the way that hits the Zeno paradox? Or do we want to invoke a bisection method and claim that it works? In the first, the question "are we doing real physics yet?" has the status of the kids in the back of the car asking "are we nearly there yet?": you only get anywhere close to the destination long after you stop asking. And probably if you have to be told what is "real" physics you aren't even close. The second idea seems more promising. If I just said "find a readable introduction to moment maps and find out how they work, and you will have grasped a Bourbaki-type intermediate between mathematics for its own sake and mainstream Newtonian dynamics, avoiding calculus of variations, with use of symmetry", it seems to me that I would have communicated something. I don't know the second thing about physics (which might be how you would know that you had quantised a system), but what I have said might be a first thing.
Charles Matthews
I add another one I would recommend, separately to allow the votes to sort them: Mikio Nakahara, Geometry, topology, and physics. I quote from Google Books:
Differential geometry and topology have become essential tools for many theoretical physicists. In particular, they are indispensable in theoretical studies of condensed matter physics, gravity, and particle physics. Geometry, Topology and Physics, Second Edition introduces the ideas and techniques of differential geometry and topology at a level suitable for postgraduate students and researchers in these fields. The second edition of this popular and established text incorporates a number of changes designed to meet the needs of the reader and reflect the development of the subject. The book features a considerably expanded first chapter, reviewing aspects of path integral quantization and gauge theories. Chapter 2 introduces the mathematical concepts of maps, vector spaces, and topology. The following chapters focus on more elaborate concepts in geometry and topology and discuss the application of these concepts to liquid crystals, superfluid helium, general relativity, and bosonic string theory. Later chapters unify geometry and topology, exploring fiber bundles, characteristic classes, and index theorems. New to this second edition is the proof of the index theorem in terms of supersymmetric quantum mechanics. The final two chapters are devoted to the most fascinating applications of geometry and topology in contemporary physics, namely the study of anomalies in gauge field theories and the analysis of Polyakov's bosonic string theory from the geometrical point of view. Geometry, Topology and Physics, Second Edition is an ideal introduction to differential geometry and topology for postgraduate students and researchers in theoretical and mathematical physics.
There are two outstanding books which I found very readable (I belong to the class of mathematicians who have great difficulties reading physics books and papers):
Landau and Lifshitz, Mechanics, and
Faddeev and Yakubovskii, Lectures on quantum mechanics for mathematics students.
Alexandre Eremenko
Nobel Prize winner Gerard 't Hooft has a page called "How to become a GOOD Theoretical Physicist", with lots of useful links and book references.
Andreas Holmstrom
As a more advanced introduction I like and recommend A unified grand tour of theoretical physics by Ian D. Lawrie
I quote from Google Books:
A unified account of the principles of theoretical physics, A Unified Grand Tour of Theoretical Physics, Second Edition stresses the inter-relationships between areas that are usually treated as independent. The profound unifying influence of geometrical ideas, the powerful formal similarities between statistical mechanics and quantum field theory, and the ubiquitous role of symmetries in determining the essential structure of physical theories are emphasized throughout. This second edition conducts a grand tour of the fundamental theories that shape our modern understanding of the physical world. The book covers the central themes of space-time geometry and the general relativistic account of gravity, quantum mechanics and quantum field theory, gauge theories and the fundamental forces of nature, statistical mechanics, and the theory of phase transitions. The basic structure of each theory is explained in explicit mathematical detail with emphasis on conceptual understanding rather than on the technical details of specialized applications. The book gives straightforward accounts of the standard models of particle physics and cosmology.
I have seen that most answers (based, perhaps, on the structure of the original question) have centered on quantum mechanics and relativity. I may suggest, however, the following book, which may be used fairly well together with Feynman's lectures in physics: "Physics and Partial Differential Equations, Volume I", by Tatsien Li and Teihu Qin. Translated by Yachun Li. SIAM, ISBN 978-1-611972-26-9.
I am pretty sure this book has many of the aspects in the spirit of the original question (including assuming some level of mathematical maturity, without going too deep into the details of subjects like group theory), since it is about how some mathematical models are derived from known physics, what mathematics is important around them (including transformation into other useful forms, theorem proving, and so on), and other aspects which show nicely how much mathematics can tell about physics, once it is settled into a mathematical model.
Granted, this book mentions a lot of concepts unfamiliar to a pure mathematician, hence my recommendation to take it along Feynman's lectures.
I am listing the titles of the chapters and one or two important issues covered within them (the list is not comprehensive, but it does give a hint of what you could expect of the book in terms of subjects; also, I have chosen on purpose among the issues with a tendency to mathematical reasoning):
Electrodynamics: proof of Gauss's law; proof of Ampère's theorem; mathematical structure of Maxwell's equations.
Fluid dynamics: some lemmas concerning the convexity of certain functions, which lead to the conclusion that other functions are also convex with respect to their arguments. Theorem 2.1: [a certain] first-order system [...] can be turned into a first-order symmetric hyperbolic system [...] through a transformation of unknown variables (I mention this theorem because the proof is decomposed into a necessity part and a sufficiency part). Theorem 2.2: certain gases may have compressive shock waves, given some entropy conditions. This theorem is also proven in segments, as in "per cases".
Magnetohydrodynamics: systems of magnetohydrodynamic equations, assuming infinite conductivity. Here one begins to see the consequences of assuming limit cases.
Mathematical structure of the magnetohydrodynamics system.
Reacting fluid dynamics: mathematical structure of the system of reacting fluid dynamics.
Elastic mechanics: Theorem 5.1 demands the existence of a certain matrix with some algebraic properties, and this is proven using some geometrical concepts. Mathematical relationship between stress and deformation.
Appendix A: some elementary definitions and useful formulas concerning tensors; several theorems are stated and proved there.
Appendix B: overview of thermodynamics. This includes the Legendre transform, which, roughly speaking, is a geometrical transformation that preserves some other mathematical-physical properties.
Arturo Ortiz Tapia
(some) standard treatments (for physicists) are:
At the most elementary level (taken before or in parallel with a standard calculus sequence) there is, for instance, The M.I.T. Introductory Physics Series by Anthony Philip French (volumes on: mechanics, electromagnetism, vibrations and waves, special relativity,...)
undergraduate (upper-level; start here if you know the math)
Goldstein Classical Mechanics, Griffiths Introduction to electrodynamics, Griffiths Quantum Mechanics, Griffiths Introduction to elementary particles, Reif Fundamentals of Statistical and Thermal Physics
Arnol'd Mathematical Methods of Classical Mechanics, Jackson Electrodynamics, Sakurai Modern Quantum Mechanics, Kardar Statistical Physics of [...] (two volumes), J.Negele H.Orland Quantum Many-particle Systems, Wald General Relativity, M.Peskin D.Schroeder, An Introduction to Quantum Field Theory ...
As well as the older series by Sommerfeld or Landau and the text on electrodynamics by Smythe.
Higher-level courses in mechanics are (often) geared toward teaching the math needed to study modern physics. That's one of their primary motivations: Goldstein (quantum mechanics), Arnol'd (gauge field theories), etc.
$\begingroup$ Arghhh!!! Goldstein is the best way to get turned off from Classical Mechanics, and Physics in general. Worked for me! $\endgroup$ – Igor Rivin Jan 7 '11 at 22:34
$\begingroup$ I was never able to read Goldstein, either. $\endgroup$ – Deane Yang Jan 7 '11 at 23:44
$\begingroup$ That's funny -- I liked Goldstein! Am I in the minority? Of course, I was young and impressionable and perhaps now I would not recommend it necessarily. $\endgroup$ – José Figueroa-O'Farrill Jan 8 '11 at 3:48
$\begingroup$ For what it's worth, back in the day, I liked Goldstein too. $\endgroup$ – Kelly Davis Jan 8 '11 at 8:16
$\begingroup$ @Jose and @Kelly you guys must have longer attention spans than Deane and I... $\endgroup$ – Igor Rivin Jan 10 '11 at 1:49
I like Folland's Quantum Field Theory: a tourist guide for mathematicians. (although it might not be as mathematically soft as the OP had in mind) In the first chapters he quickly deals with classical mechanics, special relativity and quantum mechanics, so that he can focus on QFT straight away.
babubba
Open a book on partial differential equations that doesn't cover theory (only the methods of solution). About 90% of classical physics and much of quantum physics is about solving differential equations. After you finish differential equations, read Jackson's electrodynamics book to understand classical field theory.
As an introduction to quantum mechanics, try Mathematical Structure of Quantum Mechanics, by F. Strocchi. I also like this because it starts out describing (classical) physics in terms of observables and experiments, which is a crucial prerequisite to understanding how physicists think.
Another good book is Hilbert Space and Quantum Logic by David W. Cohen. But this is not really an "introduction to physics" book, rather an "introduction to making sense of experiments with bizarre results".
For classical mechanics, I quite enjoyed John R. Taylor's Classical Mechanics, which is a very clear, self-contained text. It covers Newtonian mechanics, the calculus of variations, electromagnetism and various models for things like air resistance. It does not go into the structural details that mathematicians like, but on the other hand, it does not spend too much time introducing calculus (or worse, avoiding calculus and saying crazy "intuitive" things!).
Andrew Poelstra
As I mentioned in a comment, if you're interested in learning physics you should really consider giving a good amount of time to classical mechanics, for instance with Spivak's book which is great.
After that you could take a look at electromagnetism; I recommend A. O. Barut's "Electrodynamics and Classical Theory of Fields and Particles", though it should probably be supplemented by a more standard reference, such as Griffiths' "Introduction to Electrodynamics".
There have been good suggestions for quantum mechanics, and I don't want to be repetitive, but for general relativity a good place is "Semi-Riemannian Geometry - with Applications to Relativity" by Barrett O'Neill, which does not presuppose differential geometry but gets to the most famous theorems in relativity by the end, as well as giving a good description of the theory in general.
cesaruliana
You have to learn the following:
1 - Learn path integrals. You can do whatever you want using path integrals: the Ising model, the Potts model, quantum field theory, string theory, stochastic processes, etc. It's very important. This book http://www.amazon.com/Integrals-Quantum-Mechanics-Statistics-Financial/dp/9814273554 contains a plethora of applications of path integrals.
2 - Dynamics. You have to know the basics of things like the calculus of variations, Hamiltonians, Lagrangians, Legendre transformations, the principle of least action, etc. The best resource for this in my opinion is http://www.amazon.com/Mathematical-Classical-Mechanics-Graduate-Mathematics/dp/0387968903/ref=sr_1_1?s=books&ie=UTF8&qid=1426831652&sr=1-1&keywords=mathematical+methods+of+classical+mechanics . After you learn this, you can exercise using these tools by solving problems in any classical mechanics textbook.
You can't understand path integrals without understanding Lagrangians and Hamiltonians.
3 - Representation theory: to understand particle physics.
4 - For classical electrodynamics, Landau's Classical Theory of Fields is most important. If you know the Lagrangian and the Hamiltonian, you should read it. The idea is that you start from the principle that physics is invariant under Lorentz transformations. This means spacetime is a four-dimensional semi-Riemannian flat manifold with a metric $g_{\mu \nu}$. Expressions like $d^4x$ and $A_{\mu}A^{\mu}$ are invariant under Lorentz transformations, so you can put these quantities in the action. At the end you can derive Maxwell's equations (a worked sketch of this variation appears after this list). Most of classical electrodynamics is about how to solve these Maxwell equations in various situations.
5 - Learn GTR, which is easy if you know Riemannian geometry. Basically it says that the curvature of the spacetime manifold = the stress-energy tensor. Read Wald's general relativity textbook.
6 - Also, read popular books on physics: books by Stephen Hawking, Leonard Susskind, Brian Greene, etc.
7 - Watch lectures on physics on YouTube (e.g. by Leonard Susskind). There are very interesting lectures at http://www.perimeterinstitute.ca/video-library/
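To make item 4 concrete, here is the standard variational sketch (source-free case; conventions may differ from Landau's by normalization). The Lorentz-invariant action for the gauge field $A_\mu$ is
\[ S = -\frac{1}{4}\int d^4x \, F_{\mu\nu}F^{\mu\nu}, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \]
and varying with respect to $A_\nu$ gives the Euler-Lagrange equations
\[ \partial_\mu F^{\mu\nu} = 0, \]
which are half of Maxwell's equations; the other half, $\partial_{[\lambda} F_{\mu\nu]} = 0$, follows automatically from the definition of $F_{\mu\nu}$.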
When reading physics textbooks, I found that I spend a lot of time doing stupid calculations like multiplying matrices, solving systems of linear equations, etc., so software like Mathematica and MATLAB can be a big time-saver.
The Feynman lectures are what I used. Still the best way to brush up on physics for any mathematician.
aelered
Chi-squared distribution
$\chi^2$-distribution
The continuous probability distribution, concentrated on the positive semi-axis $ ( 0, \infty ) $, with density
$$ p(x) = \frac{1}{2^{n/2} \Gamma(n/2)} e^{-x/2} x^{n/2-1}, $$
where $ \Gamma ( \alpha ) $ is the gamma-function and the positive integral parameter $ n $ is called the number of degrees of freedom. A "chi-squared" distribution is a special case of a gamma-distribution and has all the properties of the latter. The distribution function of a "chi-squared" distribution is an incomplete gamma-function, the characteristic function is expressed by the formula
$$ \phi(t) = (1 - 2it)^{-n/2}, $$
and the mathematical expectation and variance are $ n $ and $ 2n $, respectively. The family of "chi-squared" distributions is closed under the operation of convolution.
The "chi-squared" distribution with $ n $ degrees of freedom can be derived as the distribution of the sum $ \chi _ {n} ^ {2} = X _ {1} ^ {2} + \dots + X _ {n} ^ {2} $ of the squares of independent random variables $ X _ {1} \dots X _ {n} $ having identical normal distributions with mathematical expectation 0 and variance 1. This connection with a normal distribution determines the role that the "chi-squared" distribution plays in probability theory and in mathematical statistics.
Many distributions can be defined by means of the "chi-squared" distribution. Examples include the distribution of the random variable $\sqrt{\chi_n^2}$, the length of the random vector $(X_1, \dots, X_n)$ with independent normally-distributed components (sometimes called a "chi"-distribution; see also the special cases of a Maxwell distribution and a Rayleigh distribution), the Student distribution, and the Fisher $F$-distribution. In mathematical statistics these distributions, together with the "chi-squared" distribution, describe sample distributions of various statistics of normally-distributed results of observations and are used to construct statistical interval estimators and statistical tests. A special reputation in connection with the "chi-squared" distribution has been gained by the "chi-squared" test, based on the so-called "chi-squared" statistic of K. Pearson.
There are detailed tables of the "chi-squared" distribution which are convenient for statistical calculations. For large $n$ one uses approximations by means of the normal distribution; for example, according to the central limit theorem, the distribution of the normalized variable $(\chi_n^2 - n)/\sqrt{2n}$ converges to the standard normal distribution. More accurate is the approximation
$$ {\mathsf P} \{ \chi_n^2 < x \} \rightarrow \Phi( \sqrt{2x} - \sqrt{2n-1} ) \ \ \textrm{ as } n \rightarrow \infty, $$
where $ \Phi ( x) $ is the standard normal distribution function.
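Both approximations are easy to compare against the exact distribution function; a SciPy sketch (the values $n=20$, $x=25$ are arbitrary):

```python
import numpy as np
from scipy import stats

n, x = 20, 25.0
exact  = stats.chi2(n).cdf(x)
clt    = stats.norm.cdf((x - n) / np.sqrt(2 * n))             # central limit theorem version
better = stats.norm.cdf(np.sqrt(2 * x) - np.sqrt(2 * n - 1))  # the more accurate one above
print(exact, clt, better)
```

The second approximation tracks the exact value noticeably better for moderate $n$.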
See also Non-central "chi-squared" distribution.
[1] H. Cramér, "Mathematical methods of statistics" , Princeton Univ. Press (1946)
[2] M.G. Kendall, A. Stuart, "The advanced theory of statistics. Distribution theory" , 1 , Griffin (1969)
[3] H.O. Lancaster, "The chi-squared distribution" , Wiley (1969)
[4] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics" , Libr. math. tables , 46 , Nauka (1983) (In Russian) (Processed by L.S. Bark and E.S. Kedrova)
The name "chi-square" distribution is also used.
This article was adapted from an original article by A.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Iberian Strings 2021
Online meeting organized at Instituto Superior Técnico, Universidade de Lisboa
Lars Aalsma
A New Spin On The Weak Gravity Conjecture
2021/01/19, 17:30 — 18:00
The mild form of the Weak Gravity Conjecture (WGC) requires that (quantum) corrections to extremal charged black holes increase their charge-to-mass ratio. Currently, it is unknown what minimal assumptions are needed to prove this conjecture. To address this issue, I will reformulate the WGC as a necessary and sufficient condition on the stress tensor. Applied to rotating BTZ black holes, this condition suggests a spinning WGC, which I prove for corrections generated by fields holographically dual to relevant deformations. Imposing both the charged and spinning WGC on a five-dimensional black string and compactifications thereof, I derive new positivity bounds on Wilson coefficients. These bounds are stronger than those obtained from the charged WGC alone and further constrain effective theories compatible with quantum gravity. Based on arXiv:2011.05337 with Alex Cole, Gregory Loges and Gary Shiu.
Roberto Vega Álvarez
Instituto Superior Técnico, Universidade de Lisboa
Resurgent properties of minimal string theory: part I
In these talks we will report the findings of our unpublished work on the resurgent properties of the Painlevé I and II equations. These equations play a fundamental role in Minimal String Theories, where they arise as the specific heat of 2D (Super)-Quantum Gravity. We will give a short introduction to the topic of resurgence and explain the role of Stokes constants in constructing solutions to these equations. While reviewing these tools, we will present our new method for numerically calculating these constants in very general setups to reasonably high precision.
Tomas Andrade
Violations of weak cosmic censorship in black hole collisions
We simulate collisions of stable rotating black holes in six and seven dimensions. We find that if the angular momentum of the system is sufficiently large, the post-merger configuration is an elongated dumbbell which is Gregory-Laflamme unstable. As such, it displays a cascade of satellite formation which ultimately leads to pinching off of the horizon, thus violating the weak cosmic censorship conjecture.
Pedro Aniceto
$R^2$ corrected $AdS_2$ holography
We approach the problem of constructing the holographic dictionary for $AdS_2/CFT_1$ in the context of higher derivative gravitational actions in $AdS_2$. We focus on $S_2$ reductions of four-dimensional $N=2$ Wilsonian effective actions with Weyl squared interactions restricted to constant scalar backgrounds. BPS black hole near-horizon spacetimes fall into this class of backgrounds and, by identifying the boundary operators dual to the bulk fields, we explicitly show how the Wald entropy of the BPS black hole is holographically encoded in the anomalous transformation of the operator dual to a composite bulk field. Additionally, using a 2d/3d lift, we show that the $CFT$ holographically dual to $AdS_2$ is naturally embedded in the chiral half of the $CFT_2$ dual to the $AdS_3$ spacetime, and we identify the specific $CFT_1$ operator that encodes the chiral central charge of the $CFT_2$.
Matteo Baggioli
Jiao Tong University Shanghai
How small hydrodynamics can go
Numerous experimental and theoretical results in liquids and plasmas suggest the presence of a critical momentum at which the shear diffusion mode collides with a non-hydrodynamic relaxation mode, giving rise to propagating shear waves. This phenomenon, labelled the $k$-gap, could explain the surprising identification of a low-frequency elastic behavior in confined liquids. More recently, a formal study of the perturbative hydrodynamic expansion showed that critical points in complex momentum space, such as the aforementioned $k$-gap, determine the radius of convergence of linear hydrodynamics, i.e. its regime of applicability. In this talk, we combine the two new concepts and study the radius of convergence of linear hydrodynamics in real liquids using data from simulations and experiments. We show that this radius generically increases with temperature and, surprisingly, decreases with the interaction coupling. More importantly, we find that it is universally set by the characteristic interatomic distance of the liquid, which provides a natural microscopic bound. We finally compare our results with those from holographic theories.
Salvatore Baldino
Seiberg-Witten theory, string theory and WKB analysis
In this talk, we will explore the relationship between non critical string theories, $SU(2)$ supersymmetric gauge theories and WKB analysis. In particular, we will examine the role of Seiberg-Witten geometry in the context of $SU(2)$ theories, and how this is related to the concept of string duality. We will examine the role of WKB analysis in solving the equations that come out in this context. We will focus on the physical interpretation of the various quantities that can be computed in WKB analysis, and how they can be related to observables in Seiberg-Witten theory or string theory. We will conclude by presenting the author's work on WKB analysis of finite difference equations, that naturally appear in this context.
Yago Bea
Queen Mary London
New Insights from Real-time Dynamics
We use relativistic hydrodynamics, numerical relativity and holography together to address novel problems, going beyond the state of the art in several directions. First, we construct a gravitational solution of a fully localized, compact black hole falling through the Poincaré horizon in an asymptotically $AdS$ setting. By holography, this solution is mapped to a localized plasma surrounded by vacuum that disperses away. Second, we study the applicability of hydrodynamics, and in particular we perform time evolution using causal theories of hydrodynamics, for the first time in the context of holography. Third, we perform time evolution of the recent generalized-frame formulation of hydrodynamics (BDNK), also for the first time. Fourth, our setting constitutes a new arena for the fluid/gravity duality, lying outside its usual assumptions. Our results may provide relevant insights into quark-gluon plasma physics and astrophysical scenarios.
Nana Geraldine Cabo Bizet
Testing Swampland Conjectures with Machine Learning
We consider Type IIB string theory compactified on an isotropic torus with geometric and non-geometric fluxes. Employing supervised machine learning, consisting of an artificial neural network coupled to a genetic algorithm, we determine more than sixty thousand flux configurations yielding a scalar potential with at least one critical point. Stable AdS vacua with large moduli masses and small vacuum energy, as well as unstable dS vacua with small tachyonic mass and large energy, are absent, in accordance with the Refined de Sitter Conjecture. Hierarchical fluxes favor perturbative solutions with small values of the vacuum energy and moduli masses, as well as scenarios in which the lightest modulus mass is much smaller than the AdS vacuum scale.
Dmitri Bykov
MPI Physik Munich
Sigma models as Gross-Neveu models
I will show that there is a wide class of integrable sigma models, which includes $CP^{n-1}$, Grassmannian, flag manifold models, that are equivalent to bosonic (and mixed bosonic/fermionic) chiral Gross-Neveu models. The established equivalence allows to effortlessly construct trigonometric/elliptic deformations, provides a new look on the supersymmetric theory and on the cancellation of anomalies in the integrability charges. Using this formalism, we develop criteria for constructing quantum integrable models related to quiver varieties. Based on arXiv:2006.14124 and arXiv:2009.04608
Pablo Cano
Novel higher-curvature variations of $R^2$ inflation
Higher-curvature corrections to the gravitational action are a definite prediction of string theory and they may play an important role in the early universe. Indeed, one of the most successful inflationary scenarios is based on the Lagrangian $R+R^2$, but the effect of more general corrections is not yet fully understood. In this talk I will describe extensions of the $R+R^2$ model with a recently identified type of curvature corrections that keep the cosmological equations second-order and which therefore give rise to a well-behaved cosmological evolution. We use holographic methods to constrain the couplings of the new operators and then we derive the predictions for the power spectrum of tensor and scalar primordial fluctuations. The predicted values of the scalar spectral index and the tensor-to-scalar ratio turn out to lie within experimental constraints, but observations in the near future may be able to distinguish the presence of higher-curvature corrections.
Mattia Cesaro
Kaluza-Klein fermion mass matrices from Exceptional Field Theory and $N=1$ spectra
Using Exceptional Field Theory, the infinite-dimensional mass matrices for the gravitino and spin-1/2 Kaluza-Klein perturbations above a class of anti-de Sitter solutions of M-theory and massive type IIA string theory with topologically-spherical internal spaces can be determined. These mass matrices can be employed to compute the spectrum of Kaluza-Klein fermions about some solutions in this class with internal symmetry groups containing $SU(3)$. Combining these results with previously known bosonic sectors of the spectra, I will present the complete spectra about some $N=1$ and some non-supersymmetric solutions in this class, together with certain generic features they are shown to enjoy.
Mihailo Cubrovic
University of Belgrade
Lyapunov spectra in traversable wormholes and their holographic duals
We study the decay of out-of-time-ordered correlators (OTOCs) in an AdS traversable wormhole and its gravity dual, two coupled Sachdev-Ye-Kitaev models (the "left" and "right" subsystems). The gravity calculation of the OTOC involves perturbative equations more complicated than for a black hole, as the perturbation has complex kinematics and can bounce back and forth through the wormhole many times. The outcome is a phase diagram with three regions. One is black-hole-like, with uniform exponential growth and the Lyapunov exponent $\lambda=2\pi T$ ("the chaos bound"). The intermediate phase has OTOCs with a spectrum of different exponents for different operator modes, all below the maximal chaos bound. The third phase has exponentially small Lyapunov exponents, behaving as $\exp(-1/T)$, in accordance with a recent field-theory calculation in the literature. The Lyapunov spectrum carries more information than just the maximum exponent: it can be related, e.g., to teleportation fidelity from the left to the right subsystem.
Frank Eisenhauer
Max Planck Institute for extraterrestrial Physics
The Discovery of the Massive Black Hole in the Center of the Galaxy. Outreach colloquium on the occasion of The Nobel Prize in Physics 2020
Outreach colloquium on the discovery of a supermassive compact object at the centre of our galaxy.
Black Holes are among the most mysterious objects in the Universe. They are so massive and compact that nothing - not even light - can escape their gravity. The 2020 Nobel Prize in Physics was awarded to Roger Penrose for showing that these exotic objects are a direct consequence of Einstein's general theory of relativity, and to Reinhard Genzel and Andrea Ghez for the discovery of such a monster in the center of our Galaxy. Our presentation will portray the 40-year journey from the first indications to the overwhelming observational evidence for an extremely heavy and compact object in the Galactic Center, for which a supermassive black hole is the only known explanation. Using the world's largest telescopes and most advanced optics technology, astronomers can now follow the stars orbiting the central object, precisely measure its mass, and detect the stunning effects of general relativity. In our talk we will present both the spectacular observations and the technology behind them.
Zachary Elgood
The first law of heterotic stringy black hole mechanics at zeroth order in $\alpha'$
We re-derive the first law of black hole mechanics in the context of the Heterotic Superstring effective action compactified on a torus to leading order in $\alpha'$, using Wald's formalism, covariant Lie derivatives and momentum maps. The Kalb-Ramond field strength of this theory has Abelian Chern-Simons terms which induce Nicolai-Townsend transformations of the Kalb-Ramond field. We show how to deal with all these gauge symmetries, deriving the first law in terms of manifestly gauge-invariant quantities. In the presence of Chern-Simons terms, several definitions of the conserved charges exist, but the formalism picks out only one of them to play a role in the first law. The derivation of a first law is a necessary step towards the derivation of a manifestly gauge-invariant entropy formula, which is still lacking in the literature. In turn, this entropy formula is needed to compare unambiguously macroscopic and microscopic black hole entropies.
Roberto Emparan
ICREA Barcelona
The ultimate trap: a tale of horizons, singularities, and black holes. Outreach colloquium on the occasion of The Nobel Prize in Physics 2020
Outreach colloquium on black hole formation as a robust prediction of the general theory of relativity.
More than fifty years ago, a young mathematician called Roger Penrose revolutionized the study of gravity and spacetime by introducing the idea of a "trapped surface": a region where the pull of gravity is so strong that spacetime is inevitably led to a mysterious singularity. This result is so important that last year Penrose was awarded the Nobel Prize in Physics for "showing that Einstein's theory leads to the formation of black holes, those monsters in time and space". Isn't it fascinating? Sure, but, actually, what does it mean?
Quantum BTZ black hole
The study of quantum effects on black holes including their gravitational backreaction is an important but notoriously hard problem. I will begin by reviewing how the framework of braneworld holography allows one to solve it for strongly-coupled quantum conformal fields. Then I will describe a holographic construction of quantum rotating BTZ black holes (quBTZ) using an exact dual four-dimensional bulk solution. Besides yielding the quantum-corrected geometry and the renormalized stress tensor of quBTZ, we use it to show that the quantum black hole entropy, which includes the entanglement of the fields outside the horizon, rather non-trivially satisfies the first law of thermodynamics, while the Bekenstein-Hawking-Wald entropy does not.
Johanna Erdmenger
Information geometry and QFT
Review lecture on Information geometry and quantum field theory.
The AdS/CFT correspondence is the most prominent example of a duality relating a quantum theory of fields (without gravity) to a gravity theory. As proposed by Maldacena in 1997, the AdS/CFT conjecture is strongly motivated by the duality of D-branes in the open and the closed string theory pictures in the near-horizon limit. Since then, the question has arisen of whether a duality relating quantum field theory to gravity may be established more generally, leading to further insights into the structure of quantum gravity. This development is fuelled in particular by new developments involving concepts from quantum information, following the holographic entanglement entropy proposal of Ryu and Takayanagi in 2006. In this talk, I will review very recent progress in this area, considering insights from information geometry, a branch of mathematics, in particular. Moreover, I will consider bulk reconstruction, modular flows, and computational complexity. Insights from black hole physics and information theory have led to new developments in quantum field theory. As examples, I will present implications of the Fisher metric curvature for phase transitions, complexity proposals for conformal field theories and non-local modular flows.
Mikel Sanchez Garitaonandia
Bubble Mergers from Holography
Phase-separated states are a key feature of theories with a first-order thermal phase transition at infinite volume. Such states can dynamically appear as end states in the real-time evolution of the spinodal instability. Holography is an appealing tool for simulating the out-of-equilibrium, non-linear processes which yield the final phase-separated state. In this talk, we will focus on one such process: the dynamics of bubble mergers. We will study the details of the mergers for a range of speeds, including relativistic ones, and we will discuss three different important scenarios.
Jerome Gauntlett
Geometric Extremization for AdS/CFT and Black Hole Entropy
Review lecture on Geometric Extremization for AdS/CFT and Black Hole Entropy.
Certain physical properties of SCFTs with an abelian R-symmetry are determined by the R-symmetry. Furthermore, the R-symmetry can be obtained by an extremization principle. If the SCFT has a holographic dual there is a geometric version of the extremization principle which is a powerful tool in identifying and studying the dual SCFT as well as being of intrinsic geometric interest.
We focus on supersymmetric $AdS_3 \times Y^7$ solutions of type IIB supergravity dual to $N=(0,2)$ SCFTs in $d=2$, as well as $AdS_2 \times Y^9$ solutions of $D=11$ supergravity dual to $N=2$ supersymmetric QMs, some of which arise as the near horizon limit of supersymmetric, charged black hole solutions in $AdS_4$. Our results allow us to identify infinite classes of $d=2$ SCFTs and susy QMs that are obtained by wrapping higher dimensional SCFTs on Riemann surfaces. For the latter case our results provide a microstate counting of the entropy of a class of supersymmetric black holes in $AdS_4$.
Diego Gutiez
Deep learning generation of holographic geometries from $q \bar q$ potential data
The AdS/CFT correspondence is a very powerful tool that allows us to compute field theory observables from a gravity dual. One of the hurdles of this duality is to obtain a dual geometry for certain QFTs such as QCD. In the past few years, some efforts have been made to use deep learning techniques to generate the dual geometry from known data in the QFT. In particular, Hashimoto et al. were able to propose some geometries dual to QCD using this kind of technique. The aim of our work is to reproduce the metric of a black hole from the $q \bar q$ potential in a deconfined phase. In particular, we compute the potential generated by a given geometry and compare it to the QFT results in order to train the net.
Yang-Hui He
City, University of London & University of Oxford
Machine learning in field theory and string theory
Review lecture on Machine learning in field theory and string theory.
Mario Herrero
Amplitudes Positivity and IR-UV connections in Graviton Exchange
Positivity bounds are standard tools to assess the validity of EFTs for which a unitary, local and Lorentz-invariant UV completion is assumed. They impose positivity of certain (combinations of) Wilson coefficients by connecting IR physics to features of the UV completion through dispersion relations of scattering amplitudes. If the corresponding EFT does not satisfy these bounds, it is assumed to lie in the Swampland.
However, the standard derivation of positivity bounds fails when the exchange of a massless particle is possible, which excludes the very important case of gravitational interactions. In this talk we show how to derive new positivity bounds that take into account this issue. We generalize the standard derivation by writing dispersion relations which are valid when production of massless particles is included.
Furthermore, we show that one can obtain efficient bounds in the case of gravity if one assumes the high-energy limit of the scattering amplitude to be of the Regge form, as implied by String Theory. We will discuss implications of these bounds for different physical settings, such as models of interacting scalar fields, scalar QED, and the Weak Gravity Conjecture.
Johannes Lahnsteiner
A Non-Relativistic Limit of NS-NS Gravity
In this talk, I describe recent progress in understanding the background field dynamics of the non-relativistic string theory pioneered by Gomis and Ooguri. It is well-known that the underlying string sigma model can be obtained via a limiting procedure — based on a crucial cancellation of infinities — from the relativistic Polyakov model. I show that a similar, subtle limit of the effective supergravity description can be defined — giving rise to a non-relativistic analog of NS-NS gravity. The results are compared with constraints on the background geometry coming from (one-loop) beta function calculations. In the final part of my talk, I will comment on non-relativistic T-duality, p-brane solutions, and potential applications to non-relativistic holography.
Gabriel Larios
Kaluza-Klein spectra and consistent truncations
Kaluza-Klein spectra on string/M-theory solutions depend significantly on whether the solution can be obtained from uplift of a maximal gauged supergravity. For some solutions of M-theory, mIIA and IIB obtained from uplift, I will present the spectrum of KK gravitons and discuss a persistent form of universality when solutions with same (super)symmetry and supergravity spectrum are present in different theories. In the second part of the talk, I will discuss the spectrum of KK gravitons around the $N=2 \ AdS_4$ solution that is dual to the IR of a cubic deformation of ABJM. This solution cannot be obtained from uplift of an $N=8 \ D=4$ theory, and this seems to be linked to the fact that its metric cannot be isometrically embedded in $R^8$. Further, the allocation of modes with different spins in $N=2$ supermultiplets cannot be made KK level by KK level, but needs space invaders.
Alejandro Vilar Lopez
University of Santiago de Compostela
Holographic entanglement entropy for perturbative higher-curvature gravities
The very well-known prescription of Ryu and Takayanagi for computing holographic entanglement entropy (HEE) in Einstein gravity was extended to higher-curvature theories in works by Xi Dong and Joan Camps. Unfortunately, obtaining the entanglement entropy functional involved an obscure procedure in which the Riemann tensors had to be split and weighted according to a certain prescription. In this talk, I will show that there is a much simpler way to understand this procedure, at least when corrections to Einstein gravity are perturbative. By means of this new way to obtain the HEE functional, I will also show some explicit results for cubic theories, employing them to obtain universal terms of the entanglement entropy for various symmetric regions in the boundary field theory. In particular, the universal function characteristic of corner regions in $d=3$ can be shown to be modified by cubic corrections. This is the first example of a holographically obtained corner function different to the Einstein gravity one.
Strong-coupling dynamics and entanglement in de Sitter space
We use holography to study the dynamics of a strongly-coupled gauge theory in four-dimensional de Sitter space with Hubble rate $H$. The gauge theory is non-conformal with a characteristic mass scale $M$. We solve Einstein's equations numerically and determine the time evolution of homogeneous gauge theory states. If their initial energy density is high compared with $H^4$ then the early-time evolution is well described by viscous hydrodynamics with a non-zero bulk viscosity. At late times the dynamics is always far from equilibrium. The asymptotic late-time state preserves the full de Sitter symmetry group and its dual geometry is a domain-wall in $AdS_5$. The approach to this state is characterised by an emergent relation of the form $P = w E$ that is different from the equilibrium equation of state in flat space. The constant $w$ does not depend on the initial conditions but only on $H/M$ and is negative if the ratio $H/M$ is close to unity. The event and the apparent horizons of the late-time solution do not coincide with one another, reflecting its non-equilibrium nature. In between them lies an "entanglement horizon" that cannot be penetrated by extremal surfaces anchored at the boundary, which we use to compute the entanglement entropy of boundary regions. If the entangling region equals the observable universe then the extremal surface coincides with a bulk cosmological horizon that just touches the event horizon, while for larger regions the extremal surface probes behind the event horizon.
Thomas Mertens
JT gravity
Review lecture on JT gravity.
In this review talk, I will give an overview of several of the main developments in lower dimensional gravity (and in particular Jackiw-Teitelboim (JT) gravity) that have happened in the last couple of years. In particular, emphasis will be placed on the structure and solution of the model in terms of the Schwarzian wiggly curve and Riemann surface technology. At the semi-classical level, this model provides a concrete set-up for understanding some of the recent developments in the information paradox. At the quantum gravity level, higher topological corrections to amplitudes lead to features of discreteness of the underlying system and make contact with Maldacena's version of the information paradox. In particular, JT gravity itself can be written entirely as a matrix integral. We end with some discussions on how generic these lessons are, in particular for other models of quantum gravity.
Alessandro Mininno
Dynamical Tadpoles and Weak Gravity Constraints
Non-supersymmetric string models are plagued with tadpoles for dynamical fields, which signal uncanceled forces sourced by the vacuum. We argue that in certain cases, uncanceled dynamical tadpoles can lead to inconsistencies with quantum gravity, via violation of swampland constraints. We describe an explicit realization in a supersymmetric toroidal $Z_2 \times Z_2 $ orientifold with $D7$-branes, where the dynamical tadpole generated by displacement of the $D7$-branes off their minimum leads to violation of the axion Weak Gravity Conjecture. In these examples, cancellation of dynamical tadpoles provides consistency conditions for the configuration, of dynamical nature (as opposed to the topological conditions of topological tadpoles, such as RR tadpole cancellation in compact spaces). We show that this approach provides a re-derivation of the Z-minimization criterion for $AdS$ vacua giving the gravitational dual of a-maximization in $4d \ N=1$ toric quiver SCFTs.
Ángel Jesus Murcia
CSIC Madrid
Electromagnetic Quasitopological Gravities
We identify a set of higher-derivative extensions of Einstein-Maxwell theory that allow for spherically symmetric charged solutions characterized by a single metric function $f(r)=-g_{tt}=1/g_{rr}$. These theories are a non-minimally coupled version of the recently constructed Generalized Quasitopological gravities and they satisfy a number of properties that we establish. We study magnetically-charged black hole solutions in these new theories and we find that for some of them the equations of motion can be fully integrated, enabling us to obtain analytic solutions. In those cases we show that, quite generally, the singularity at the core of the black hole is removed by the higher-derivative corrections and that the solution describes a globally regular geometry. In other cases, the equations are reduced to a second order equation for $f(r)$. Nevertheless, for all the theories it is possible to study the thermodynamic properties of charged black holes analytically. We show that the first law of thermodynamics holds exactly and that the Euclidean and Noether-charge methods provide equivalent results. We then study extremal black holes, focusing on the corrections to the extremal charge-to-mass ratio at a non-perturbative level. We observe that in some theories there are no extremal black holes below a certain mass. We also show the existence of theories for which extremal black holes do not represent the minimal mass state for a given charge. The implications of these findings for the evaporation process of black holes are discussed.
Edvard Musaev
Moscow Institute of Physics and Technology
Non-abelian U-duality of M-theory background
Backgrounds of perturbative string theory are known to enjoy various duality symmetries, relating different points in the space of vacua. Among these are the perturbative T-duality symmetries, relating backgrounds with a certain amount of space-time isometries to dual field configurations. Depending on the algebra of Noether currents one finds abelian, non-abelian or Poisson-Lie T-duality symmetries. Non-abelian U-duality symmetry is a generalisation of the Poisson-Lie T-duality transformation to the case of 11-dimensional backgrounds, where the string becomes non-perturbative. This is based on the concept of the exceptional Drinfeld double, which is a generalisation of the classical Drinfeld double Lie algebra to Leibniz algebras. In this talk this algebraic construction is reviewed, a generalisation of the Buscher rules to the case of non-abelian U-duality of group manifold backgrounds is described, and a set of examples is presented. The related concepts of tri-vector and six-vector deformations are also discussed, and the corresponding generalisation of the classical Yang-Baxter equation governing integrable bi-vector deformations is reviewed.
Kevin Nguyen
Slow scrambling in extremal BTZ and microstate geometries
Out-of-time-order correlators (OTOCs) that capture maximally chaotic properties of a black hole are determined by scattering processes near the horizon. This prompts the question of to what extent OTOCs display chaotic behaviour in horizonless microstate geometries.
I will first discuss OTOCs for a class of extremal black holes, namely maximally rotating BTZ black holes, and show that on average they display "slow scrambling", characterized by cubic (rather than exponential) growth. Then I will discuss the extent to which these OTOCs are modified in certain "superstrata", horizonless microstate geometries corresponding to these black holes. Rather than an infinite throat ending on a horizon, these geometries have a very deep but finite throat ending in a cap. We find that the superstrata display the same slow scrambling as maximally rotating BTZ black holes, except that for large enough time intervals the growth of the OTOC is cut off by effects related to the cap region.
Malcom Perry
DAMTP, Cambridge
Soft Charges, Soft Hair and Black Hole Entropy
Review lecture on Asymptotic symmetries in gravity.
I will describe the concept of soft charge emerging from the nature of the BMS group. An explicit construction of the supertranslation charges and their integrability will be presented together with arguments as to why this leads to conserved charges. I will extend this to other asymptotic charges. I will then explain how to construct soft black hole hair and show how it can be used to determine the Hawking black hole entropy through holography. I will conclude by looking at the implications for the information paradox.
Anayeli Ramirez
$1/4$ BPS $AdS_3/CFT_2$
With a focus on their CFT interpretation, in this talk we study $AdS_3$ solutions in massive type IIA supergravity and $AdS_2$ solutions in type IIB supergravity. From the geometry, we engineer the dual CFT with well-known tools and propose a duality with a precise family of quivers. Additionally, we compute field-theory and holographic central charges, finding a clean match between the two descriptions.
Martí Rosselló
Arithmetic of decay walls through continued fractions: a new exact dyon counting solution in $\mathcal{N} =4$ CHL models
We use continued fractions to perform a systematic and explicit characterization of the decays of two-centred dyonic black holes in $4D$ $\mathcal{N} =4$ heterotic $\mathbb{Z}_N$ CHL models. Thereby we give a new exact solution for the problem of counting decadent dyons in these models.
Daniel Hernández Ruipérez
Supermoduli of supersymmetric curves with punctures
Review lecture:
We introduce super schemes (super algebraic varieties) and super analytic spaces and their basic properties. We then focus on SUSY curves (supersymmetric Riemann surfaces) without and with NS and RR punctures and construct a supermoduli space for them. It has the structure of an Artin algebraic superspace, that is, it is the quotient of an étale equivalence relation of superschemes (super algebraic varieties). We also report on compactifications of the supermoduli.
Leonardo Santilli
Faculdade de Ciências, Universidade de Lisboa
$T \bar{T}$–deformation of $q$-Yang-Mills theory
I will present the $T \bar{T}$-perturbed version of $q$-deformed two-dimensional Yang-Mills theory. I will show that the operator $T \bar{T}$ spoils the factorization of the partition function into chiral/anti-chiral sectors. On the other hand, it preserves a large $N$ third order phase transition, although modifying the phase diagram. Implications for the entanglement entropy at large $N$ will be discussed as well. I will conclude with comments on the potential applications of these results to the Bethe/gauge correspondence and to four-dimensional supersymmetric theories.
Based on joint work with Richard J. Szabo and Miguel Tierz, arXiv:2009.00657
Martin Sasieta
A Momentum/Complexity Correspondence
Holographic Volume Complexity naturally incorporates the notion of a Momentum/Complexity correspondence. This correspondence formalizes the idea that gravitational clumping of matter increases the complexity of the quantum state. For purely gravitational states, there is no clear momentum candidate aside from perturbative definitions. A generalization of the Momentum/Complexity correspondence is needed to interpret the gravitational contribution as arising from the Weyl tensor of spacetime.
Max Schwick
Resurgent properties of minimal string theory: part II
We will illustrate the importance of the above concepts in known solutions of the Painlevé equations (tronquée and tritronquée phases). Furthermore, we will expand on the above solutions by introducing some specific characteristics of these equations, such as resonance and relations between Stokes constants. We will then present a conjecture on the analytic form of these constants, which has been checked with our numerical method up to very high precision. Finally, we will talk about possible future directions.
Lárus Thorlacius
University of Iceland
Entanglement islands
Review on Entanglement islands.
Marija Tomasevic
Multi-mouth traversable wormholes
We describe the construction of traversable wormholes with multiple mouths in four spacetime dimensions and discuss the associated quantum entanglement. Our solutions are asymptotically flat up to the presence of magnetic fluxes that extend to infinity. The construction begins with a two-mouth traversable wormhole supported by backreaction from quantum fields. Inserting a sufficiently small black hole into its throat preserves traversability between the original two mouths. This black hole can be the mouth of another wormhole connecting the original throat to a new distant region of spacetime. Our wormholes are traversable between any pair of mouths. This work is based on arXiv:2012.07821.
Pablo Enmanuel León Torres
University of Antofagasta
Massive supermembrane in ten non-compact dimensions
We discuss a formulation of the M2-brane theory in ten non-compact dimensions that exhibits mass terms in the Hamiltonian. The existence of these mass terms improves the quantum behaviour of the theory in comparison with the known compactifications of the M2-brane to ten dimensions. On the other hand, this formulation of the M2-brane can be interpreted as a realization in ten non-compact dimensions of the central charge condition. This result could be interesting, since it represents another well-behaved sector of the M2-brane and it might yield new information about massive Romans supergravity in ten dimensions.
Timo Weigand
The Swampland - From Conjectures to Theorems
Review lecture on The Swampland.
The `swampland' has become known as a term describing effective field theories which, while consistent as quantum field theories, cannot be completed into fully consistent theories of quantum gravity. A growing web of conjectures has been developed in the literature as to when a theory belongs to the swampland. In this talk I will review some of these so-called `Quantum Gravity Conjectures', summarise the general and oftentimes heuristic arguments in their support, and highlight various connections between them. Despite the speculative character of many of the involved ideas, we will see that some of the quantum gravity conjectures can be made extremely precise and even be proven within large classes of string constructions.
Max Wiesner
Quantum Corrections in 4d $N=1$ Infinite Distance Limits and the Weak Gravity Conjecture
We study quantum corrections in four-dimensional theories with $N = 1$ supersymmetry in the context of Quantum Gravity Conjectures. According to the Emergent String Conjecture, infinite distance limits in quantum gravity either lead to decompactification of the theory or result in a weakly coupled string theory. We verify this conjecture in the framework of $N = 1$ supersymmetric F-theory compactifications to four dimensions including perturbative $\alpha'$ as well as non-perturbative corrections. After proving uniqueness of the emergent critical string at the classical level, we show that quantum corrections obstruct precisely those limits in which the scale of the emergent critical string would lie parametrically below the Kaluza-Klein scale. Limits in which the tension of the asymptotically tensionless string sits at the Kaluza-Klein scale, by contrast, are not obstructed.
In the second part of the talk we discuss the effect of quantum corrections for the Weak Gravity Conjecture away from the strict weak coupling limit. We propose that gauge threshold corrections and mass renormalisation effects modify the super-extremality bound in four dimensions. For the infinite distance limits in F-theory the classical super-extremality bound is generically satisfied by a sublattice of states in the tower of excitations of an emergent heterotic string. By matching the F-theory $\alpha'$-corrections to gauge threshold corrections of the dual heterotic theory we predict how the masses of this tower must be renormalised in order for the Weak Gravity Conjecture to hold at the quantum level.
Zhenghao Zhong
$5$d SCFTs, 5-brane webs and (orthosymplectic) magnetic quivers
The Higgs branches of $5d \ N=1$ SQCD theories at infinite gauge coupling (UV superconformal fixed point) are often inaccessible via standard tools. The recently introduced concept of magnetic quivers proves to be a powerful tool able to probe the finite and infinite coupling limits in a uniform manner. In this talk, I will focus on $5d \ N=1$ SQCD theories whose Higgs branches at infinite coupling exhibit exceptional global symmetries. By realizing these theories as 5-brane web configurations with orientifold planes, I will discuss how to derive the corresponding orthosymplectic magnetic quivers. This allows us to study the geometry of the enlarged Higgs branches of $5d \ N=1$ theories with $Sp(k)$ and $SO(k)$ gauge groups at the UV fixed point.
September 2019, 1(3): 307-327. doi: 10.3934/fods.2019014
Power weighted shortest paths for clustering Euclidean data
Daniel Mckenzie 1,* and Steven Damelin 2
Department of Mathematics, University of California, Los Angeles, Los Angeles CA 90095, USA
Department of Mathematics, University of Michigan, Ann Arbor MI 48109, USA
* Corresponding author: [email protected]
Fund Project: The first author gratefully acknowledges the support of the Department of Mathematics, The University of Georgia, where the first author was a graduate student while this work was completed. The second author thanks the Department of Mathematics, The University of Michigan for their support. Both authors thank the anonymous reviewer for many useful suggestions.
We study the use of power weighted shortest path metrics for clustering high dimensional Euclidean data, under the assumption that the data is drawn from a collection of disjoint low dimensional manifolds. We argue, theoretically and experimentally, that this leads to higher clustering accuracy. We also present a fast algorithm for computing these distances.
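As a rough illustration of the metric studied here, the following is a minimal dense-graph sketch, assuming the standard definition of the power weighted shortest path distance (hop costs $\|x_i - x_j\|^p$ summed along the path and minimized over paths); the function name and the use of SciPy's Dijkstra routine are our own choices, not the paper's fast algorithm.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import dijkstra

def power_weighted_path_distances(X, p=2.0):
    """All-pairs p-power weighted shortest path distances (dense sketch).

    Every pair of points is joined by an edge of cost ||x_i - x_j||^p,
    and the distance is the minimal total cost over all paths (Dijkstra).
    Assumes distinct points (zero entries are read as absent edges) and
    uses O(n^2) memory, so it is only suitable for small point clouds.
    """
    W = squareform(pdist(X)) ** p
    return dijkstra(W, directed=False)
```

For $p > 1$, splitting a hop lowers its cost, since $(a+b)^p > a^p + b^p$ for $a, b > 0$, which is why geodesics in this metric favour many small hops, as in Figure 1 below.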
Keywords: Clustering, shortest path distance, manifold hypothesis, unsupervised learning.
Mathematics Subject Classification: Primary: 62H30, 58F15; Secondary: 05C85.
Citation: Daniel Mckenzie, Steven Damelin. Power weighted shortest paths for clustering Euclidean data. Foundations of Data Science, 2019, 1 (3) : 307-327. doi: 10.3934/fods.2019014
Figure 1. Three sample geodesics in the power weighted shortest path metric with $ p=2 $, for the data set "Three Lines" (see §6). Observe how the geodesics consist of many small hops, instead of several large hops. The total lengths of the red and green paths are significantly smaller than the length of the blue path
Figure 2. All three synthetic data sets, projected into $ \mathbb{R}^{2} $. From left to right: Three Lines, Three Moons and Three Circles
Figure 3. Varying $ p $ and recording the accuracy of spectral clustering on the Three Lines data set, for three different values of the ambient dimension
Table 1. Classification accuracy of spectral clustering. Note that $A^{(1)}$ represents using the Euclidean metric.

| | $A^{(f,1)}$ | $A^{(1)}$ | $A^{(2)}$ | $A^{(10)}$ | $A^{(\infty)}$ |
|---|---|---|---|---|---|
| 3 Lines | $66.11\pm 0.94\%$ | $66.35 \pm 3.73\%$ | $66.87 \pm 3.37\%$ | $95.38\pm 9.22\%$ | $\bf{95.38 \pm 9.1\%}$ |
| 3 Moons | $85.90 \pm 1.13\%$ | $94.40 \pm 1.48\%$ | $94.40 \pm 1.48\%$ | $\bf{96.20 \pm 1.76\%}$ | $94.35 \pm 3.34\%$ |
| 3 Circles | $51.87 \pm 0.00\%$ | $51.93 \pm 0.32\%$ | $51.94 \pm 0.36\%$ | $71.22 \pm 9.50\%$ | $\bf{73.61 \pm 10.47\%}$ |
| $\mathtt{DrivFace}$ | $78.88\%$ | $71.62\%$ | $71.62\%$ | $74.71\%$ | $\bf{85.38\%}$ |
| $\mathtt{COIL-20}$ | $63.24\%$ | $75.28\%$ | $\bf{78.61\%}$ | $77.45\%$ | $60.92\%$ |
| $\mathtt{OptDigits}$ | $77.73\%$ | $91.49\%$ | $\bf{91.54\%}$ | $88.39\%$ | $83.17\%$ |
| $\mathtt{USPS}$ | $48.65\%$ | $65.05\%$ | $65.02\%$ | $76.20\%$ | $\bf{77.92\%}$ |
| $\mathtt{MNIST}$ | - | $76.11\%$ | $75.63\%$ | $84.54\%$ | $\bf{86.77\%}$ |
Table 2. Run time of spectral clustering, in seconds. Note that this includes the time required to construct the similarity matrix. $A^{(1)}$ represents using the Euclidean metric. (Column headers as in Table 1.)

| | $A^{(f,1)}$ | $A^{(1)}$ | $A^{(2)}$ | $A^{(10)}$ | $A^{(\infty)}$ |
|---|---|---|---|---|---|
| 3 Lines | $0.32$ | $0.16$ | $1.20$ | $1.22$ | $1.22$ |
| 3 Moons | $0.33$ | $0.17$ | $1.31$ | $1.30$ | $1.36$ |
| 3 Circles | $0.35$ | $0.16$ | $1.00$ | $1.06$ | $1.07$ |
| $\mathtt{DrivFace}$ | $0.37$ | $1.24$ | $1.55$ | $1.64$ | $1.64$ |
| $\mathtt{COIL-20}$ | $0.57$ | $0.72$ | $1.57$ | $1.82$ | $1.78$ |
| $\mathtt{OptDigits}$ | $5.40$ | $1.41$ | $5.28$ | $5.58$ | $5.67$ |
| $\mathtt{USPS}$ | $27.40$ | $17.12$ | $26.75$ | $22.78$ | $23.79$ |
| $\mathtt{MNIST}$ | - | $2060.23$ | $2031.38$ | $1554.15$ | $1613.41$ |
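To connect the tables above to code: a hypothetical end-to-end pipeline in the spirit of these experiments could look like the sketch below. The Gaussian kernel, median bandwidth heuristic, and function names are our assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_with_path_metric(X, n_clusters, p=2.0):
    """Spectral clustering on a similarity matrix built from path distances."""
    D = power_weighted_path_distances(X, p)   # dense sketch defined earlier
    sigma = np.median(D)                      # crude global bandwidth choice
    A = np.exp(-(D / sigma) ** 2)             # Gaussian similarity matrix
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return sc.fit_predict(A)
```

Accuracy as reported in Table 1 would then be the best agreement between the predicted labels and the ground truth over permutations of the cluster labels.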
Differentiable Function Calculator
In cryptography, the DLCT of a vectorial Boolean function can be characterized by means of the Walsh transform of the function and of its differential distribution table, and generic bounds on the highest magnitude occurring in the DLCT coincide (up to a factor 2) with the well-established notion of absolute indicator. That aside, the bulk of this page concerns calculus. A derivative calculator differentiates a function, shows a detailed step-by-step solution and graph, and reports where the function fails to be differentiable. The tools range from scientific calculators (electronic calculators that handle trigonometric, exponential, and similar functions, such as e^x, 10^x, and a^x) to graphing and computer-algebra software such as Desmos and Maple Calculator, free online derivative and ODE calculators, sample Scientific Calculator applications developed in C#, and library routines invoked as derivative(function, x) or derivative_calculator(function; variable), where function is the expression to differentiate and variable is the differentiation variable. (Forcing Mathematica to differentiate u[x, y] symbolically with respect to a combined variable z = f(x, y), with f known, takes extra work, since u is defined in terms of the original coordinates.) Historically, none of this automation existed: the techniques for solving differential equations by numerical approximation were developed before programmable computers, and during World War II it was common to find rooms of people (usually women) working mechanical calculators to numerically solve systems of differential equations for military calculations.

The mathematics behind such a calculator starts with the derivative itself. For y = f(x), the derivative is symbolized by y' or dy/dx, where y is the dependent variable and x the independent variable; the derivative of f is the function whose value at x is the limit of the difference quotient, provided this limit exists, and it measures the instantaneous rate of change of f with respect to one of its variables. The differential of a function is dy = f'(x) dx, where dx is a change in the independent variable. A function is differentiable from the left and from the right when the corresponding one-sided limits exist, and differentiable at the point only when both exist and agree; the absolute value function is continuous at 0 but not differentiable there, since the one-sided difference quotients approach -1 and +1. (A related subtlety: a real-valued univariate function has an infinite discontinuity at a point of its domain when either or both of the lower and upper limits there fail to be finite.) Once a derivative is available, the standard applications follow: set the derivative to 0 to find critical points and use the second derivative test to classify them; use the concept of the second derivative to solve problems in basic motion, mechanics, and optimisation; or use the differential form of the Pythagorean theorem to set up arc-length integrals for curves such as a parabola or a logarithmic spiral. Many calculators also solve pairs of ordinary differential equations of the form dy/dt = f(t, y, z), dz/dt = g(t, y, z) simultaneously.
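A minimal numerical check of these one-sided limits, in plain Python (the helper names are our own):

```python
def central_diff(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def one_sided(f, x, h=1e-6):
    """Right and left difference quotients at x."""
    return (f(x + h) - f(x)) / h, (f(x) - f(x - h)) / h

print(central_diff(lambda t: t**3, 2.0))  # ~12.0, since (t^3)' = 3t^2
print(one_sided(abs, 0.0))                # (1.0, -1.0): the one-sided limits
                                          # disagree, so |x| is not
                                          # differentiable at 0
```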
So a point where a function is not differentiable is a point where the defining limit does not exist: the limit may be infinite (the case of a vertical tangent), the function may be discontinuous, or the two one-sided limits may differ, producing a cusp like that of |x| at 0. Piecewise definitions are a common source of such points, which is why questions on the differentiability of piecewise functions, with emphasis on the graphical meaning of non-differentiability, get so much attention; a graph made of half a parabola connected to half of a cubic, for example, is differentiable at the junction only if the two pieces meet with matching slopes. More extreme failures exist: a nowhere differentiable function is, perhaps unsurprisingly, not differentiable anywhere on its domain, and functions with an oscillating discontinuity behave pathologically, bouncing from point to point without ever settling down enough to admit a slope at any point. Oscillation alone is not fatal, though: a function containing a wildly oscillating sinusoidal component can still be differentiable at the origin if the oscillations are flattened out there.

Differentiability is stronger than continuity. For a function to be differentiable at a point it must be continuous there (any differentiable function is continuous, while the converse fails), and it must be defined on an open interval containing the point. As with continuous functions, differentiable functions can be added, multiplied, divided, and composed with each other to yield again differentiable functions, so tables listing the derivatives of logarithmic, exponential, trigonometric, inverse trigonometric, hyperbolic, and inverse hyperbolic functions, combined with the sum, product, quotient, and chain rules, suffice for differentiating elementary expressions. If a relation can be solved explicitly, do that first: for x² + y = 1, isolate y as a function of x, y = 1 - x², and apply the derivative rules; otherwise use implicit differentiation, which derivative calculators also support, along with higher-order derivatives and finding zeros and roots. Geometrically, the derivative at a point is the slope of the tangent line to the graph there, and the tangent line drives linear approximation; calculators and computers themselves use approximations constantly. For functions of several variables the same idea becomes the total differential, as in approximating ∛(xy) near a convenient base point, sketched below.
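For instance, a total-differential estimate of ∛(xy) near (8, 8); the displacement values here are illustrative, chosen so the base value ∛64 = 4 is exact:

```python
def f(x, y):
    return (x * y) ** (1 / 3)

# Base point and small displacements (illustrative values)
x0, y0, dx, dy = 8.0, 8.0, 0.01, -0.01

# Partials of (xy)^(1/3): f_x = y / (3 (xy)^(2/3)), f_y symmetric
fx = y0 / (3 * (x0 * y0) ** (2 / 3))
fy = x0 / (3 * (x0 * y0) ** (2 / 3))

approx = f(x0, y0) + fx * dx + fy * dy   # linear (total differential) estimate
exact = f(x0 + dx, y0 + dy)
print(approx, exact)                      # 4.0 vs 3.9999997...
```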
The same machinery extends to differential equations. An ODE calculator finds the solution of a given equation, whether first-order, second-order, or nth-order, separable, linear, exact, Bernoulli, homogeneous, or inhomogeneous; a homogeneous differential equations calculator and a cubic-equation solver (input format AX³ + BX² + CX + D = 0) round out the toolbox, alongside online exponents calculators with negative-number support and steps. The order of a differential equation is the order of the highest derivative of the unknown function occurring in the equation, and, in contrast to algebraic equations, the solution is a function that satisfies the equation identically. If a candidate solution can be expressed as a polynomial, a routine such as SCILAB's derivat can be used to check whether it satisfies a given differential equation. For nonhomogeneous equations, substituting the trial solution into the equation determines the unknown function C(x) (variation of the constant), and Laplace transforms often give a solution more compact than a Fourier series expansion of the forcing function, for instance for a spring-mass system in resonance with a square-wave forcing term. Partial differential equations of physics, including those of potential theory, wave propagation, and heat flow, are treated by classical means, generalized functions, and variational principles; separation of variables produces a family of functions u_n(x, t), each of which solves the PDE and its boundary conditions, and superposition applies because for linear equations the sum of solutions is also a solution. Continuity and differentiability feed back into the theory as well: Rolle's theorem applies to any function continuous on a closed interval, differentiable on its interior, and equal at the endpoints, and polynomials are differentiable everywhere, so the theorem applies to them freely. When no closed form exists, numerical schemes take over; in the 18th century Leonhard Euler invented a simple scheme for numerically approximating the solution to an ODE, sketched below.
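A minimal sketch of Euler's scheme for y' = f(t, y):

```python
def euler(f, t0, y0, t_end, n_steps):
    """Euler's scheme: repeatedly step along the local tangent line."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1 has exact solution e^t; Euler approaches it as steps grow
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000))   # ~2.7169 vs e = 2.71828
```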
Due to research on HIV infection, a virus that affects T-lymphocytes, much more is now known about lymphocytes and their functions. Calculate the derivative of a function. An important property of the inverse function is that inverse of the inverse function is the function itself. We updated the beam calculator interface and added additional features for calculating beams (calculation of statically indeterminate beams, image Clear beam. A differential amplifier circuit is a very useful op-amp circuit, since it can be configured to either "add" or "subtract" the input voltages, by suitably adding more resistors in parallel with the input resistors. Make use of this free online derivative calculator to differentiate a function. Tutorial on how to calculate (determine) the transfer function of a dynamic system for control theory In control engineering and control theory the transfer function of a system is a very common concept. Proper position sizing is key to managing risk and to avoid blowing out your account on a single trade. Differential signaling, which is less common than single-ended signaling, employs two complementary voltage signals in order to transmit one information signal. The percentage biased differential relay comprises two restraint coils and one operating coil per phase. The function must also be continuous , but any function that is differentiable is also continuous, so no need to worry about that. Theorderof a differential equation is the order of the high-est derivative of the unknown function occurring in the equation. A turning point of a function is a point where the graph of the function changes from sloping downwards to sloping upwards, or vice versa. 2 Setting of the threshold:. The running calculator uses the distance and finish time of your most recent race to approximate your current level of fitness. BYJU'S online differential equation calculator tool makes the calculation faster, and it displays the derivative of the function in a fraction of seconds. provided this limit exists. Differentiability Implies Continuity. If you can easily express y as a function of x, by all means do that first. Same thing holds for f. The attached project is a Scientific Calculator application developed using C#. The above calculator is an online tool which shows output for the given input. Related terms. Let y = f(x) be a function. The page also includes the approximate yield to maturity. The function is differentiable from the left and right. 5Differentiating elementary functions 0. Differential cost refers to the difference between the cost of two alternative decisions. A nowhere differentiable function is, perhaps unsurprisingly, not differentiable anywhere on its domain. Using Laplace transforms, t his solution is more compact than using a Fourier series expansion of the forcing function. 12MB) Chapter 4 Complex Numbers ( 0. Fractional calculations Memory calculations Last answer memory User-defined functions Function to round calculation results. Fast and multiple calculation of arithmetic formulas to calculate the volume or area for different geometric shapes in large In the cell A3, the formula contains the non-argument function PI (), that. SymPy can compute asymptotic series expansions of functions around a point. We hope that you have got a better understanding of how to calculate the transfer function of differential function. 
Typically, scientific calculators only perform calculations like addition, subtraction, multiplication The GraphCalc graphing calculator is FREE. A number-theoretic function \(\phi\) is said to be recursive if there is a finite sequence of number-theoretic functions \(\phi_1 , \phi_2 , \ldots \phi_n\) that ends with \(\phi\) and has the property that every function \(\phi_k\) of the sequence is recursively defined in terms of two of the preceding functions, or results from any of the. By using differential equations with either velocity or acceleration, it is possible to find position and velocity functions from a known acceleration. Free ordinary differential equations (ODE) calculator - solve ordinary differential equations (ODE) step-by-step. 2) to an exponential input is thus y(t) = CeAt x(0)−(sI−A)−1B +Gyu(s)est. From: Chemical Analysis of Food: Techniques and Applications, 2012. Theorderof a differential equation is the order of the high-est derivative of the unknown function occurring in the equation. Online exponents calculator with negative numbers support and steps. It allows you to calculate sin, cos, tan, sin inverse , cos inverese, tan inverse with radian or degree. For example one could also write: Notice that in all notions the unknown function is written as Y(X) and not simply as Y. Chi-Square Distribution Calculators Exponential Distribution Calculators Gamma Distribution Calculators Gumbel Using this cumulative distribution function calculator is as easy as 1,2,3. This Demonstration gives an example of the differential of a function, using the function. This is a linear partial differential equation of first order for µ: Mµy −Nµx = µ(Nx −My). The percentage biased differential relay comprises two restraint coils and one operating coil per phase. Numerical values are calculated with precision 15 digits. Cell B3 of the following spreadsheet shows the Excel Yearfrac function, used to calculate the number of years between the two dates in cells B1 and B2:. Excel Yearfrac Function Example. Differentiation was never easier don't forget to subscribe and comment any problem that you wish to see it solved. Calculus: Learn Calculus with examples, lessons, worked solutions and videos, Differential Calculus, Integral Calculus, Sequences and Series, Parametric Curves and Polar Coordinates, Multivariable Calculus, and Differential, AP Calculus AB and BC Past Papers and Solutions, Multiple choice, Free response, Calculus Calculator. SymPy can compute asymptotic series expansions of functions around a point. Differential Calculus In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. ?t ( ) We could also write , and could use. Free Online Derivative Calculator allows you to solve first order and higher order derivatives Wolfram|Alpha is a great calculator for first, second and third derivatives; derivatives at a point; and. A coupled system is formed of two differential equations with two dependent variables and an independent variable. Here is an example: Your exercise: This is the graph of your function. Note that for a function to be differentiable at a point, the function must be defined on an open interval containing the point. Free ordinary differential equations (ODE) calculator - solve ordinary differential equations (ODE) step-by-step. is continuous at a point t if the three scalar functions f 1 (t), f 2 (t), and f 3 (t) are continuous at t. Check out all of our online calculators here!. 
Let's write a function called derivative which takes input parameters f, a, method and h (with default values method='central' and h=0. inverse function calculator domain range inverse function calculator symbols inverse function Download Inverse Function Calculator APK Android Game for free to your Android phone. actually now that I think a little more : you don't need to factor the denominator. In the following formulas, $u$, $v$, and $w$ are differentiable functions of $x$ and $a$ and $n$ are constants. 12MB) Chapter 4 Complex Numbers ( 0. For example one could also write: Notice that in all notions the unknown function is written as Y(X) and not simply as Y. To use the calculator during the exam, students need to select the Calculator icon. On the calculator's web page you find a detailed description. In calculus , a differentiable function of one real variable is a function whose derivative exists at each point in its Differentiable function. for given C1-functions M, N. Boolean Expression Calculator. Here are some facts about derivatives in general. Press the button 'Calculate GCD' to start the calculation or 'Reset' to empty the form and start again. the mass flow rate the; liquid density; the differential height - either it is the static lift from one height to an other or the total head loss component of the system - and can be calculated like. We note that the differential entropy of the Gaussian probability density function depends only on the variance and not on the mean. VERY VERY VERYEXTENSIVE. In calculus, a differentiable function is a continuous function whose derivative exists at all points Differentiability lays the foundational groundwork for important theorems in calculus such as the. A nowhere differentiable function is, perhaps unsurprisingly, not differentiable anywhere on its domain. Indeed, suppose Y is a solution of the differential equation and Y(t 0) is zero. In calculus (a branch of mathematics), a differentiable function of one real variable is a function whose derivative exists at each point in its domain. Let and on some region containing the point. When val is greater than one the function halts. Two C1-functions u(x,y) and v(x,y) are said to be functionally dependent if det µ ux. Chapter 5: Rational Functions and the Calculation of Derivatives Chapter 6: Exponential Functions, Substitution and the Chain Rule Chapter 7: Trigonometric Functions and their Derivatives Chapter 8: Inverse Functions and their Derivatives Chapter 9: Numerical Differentiation, and Non-Differentiable Functions. y'+f(x)y=g(x) Transformations: Scaling a Function. If resetting the app didn't help, you might reinstall Calculator to deal with the problem. Geometric Interpretation of the differential equations, Slope Fields. is continuous at a point t if the three scalar functions f 1 (t), f 2 (t), and f 3 (t) are continuous at t. For more information see Calculus; Engineering Mathematics or eFunda FUNCTION CALCULATOR (ENGLISH) - Xiao Gang, Laboratoire J. Follow the rules mentioned in the above derivative calculator and understand the concept for deriving the given function to differentiate. Shift differential premiums vary depending on a number of factors, including job function, level of responsibility, influence of labor unions on specific jobs, location and type of shift. How do we know where the robot ends up? 
Forward Kinematics 90 for Differential Drive Robot ICC (2,4) X (0, r 3) First, Translate ICC to origin Forward Kinematics for Differential Drive Robot ICC (2,4) X (3,0) Then, Rotate by degrees about Z axis Forward Kinematics for Differential Drive. DIFFERENTIATING ELEMENTARY FUNCTIONS 0. A graphing calculator, the TI-84 Plus CE, is integrated into the exam software, and available to students during Section 2 of the exam. Additionally it is clear that is differentiable on (a,b) since polynomials are differentiable everywhere and was assumed to be differentiable on (a,b) to begin with. Solve advanced problems in Physics, Mathematics and Engineering. Let y = f(x) be a function. Contributed by: Izidor Hafner (March 2011). You need to use an iterative method and take. This calculator will tell you the critical value of the F-distribution, given the probability level, the numerator degrees of freedom, and the denominator degrees of freedom. · Calculators · Converters · Equation Solvers · Graphers. Each function un (x,t) is a solution to the PDE (8) and the BCs (10). Since the diff function can be used to represent derivatives, it can also be used to define differential equations. 03MB) Chapter 5 Binary, Octal, Decimal, and Hexadecimal Calculations ( 0. Please give your comments in the comment. I want to try different changes of variables in PDEs. A layer is really just a differentiable function which takes in a vector of size n and spits out a new vector of size m. The second part will use this function in concert with SciPy's ODE solver to calculate solutions over a specified time range assuming given initial conditions. Chapter 5: Rational Functions and the Calculation of Derivatives Chapter 6: Exponential Functions, Substitution and the Chain Rule Chapter 7: Trigonometric Functions and their Derivatives Chapter 8: Inverse Functions and their Derivatives Chapter 9: Numerical Differentiation, and Non-Differentiable Functions. Let f be a function that is differentiable at x = and continuous on an interval (a,b) containing 10; let Ar be a number such that a -20 < Ar 0, as illustrated in Fig. Differential cost is the difference between the cost of two alternative decisions, or of a change in output levels. c 1 f(t) + c 2 g(t) = 0. Detention Forum Archive. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve. As a result, the graph of a differentiable function must have a (non-vertical) tangent line at each interior point in its domain, be relatively smooth. The TI-89 Titanium lets you perform basic math, algebra, calculus, graphs, matrices, and statistical functions and creating animations, graphing 3-D rotations, and plotting contours. The ReLU activation function g(z) = max{0, z} is not differentiable at z = 0. What are piecewise functions? What are open and closed points? How do you figure out the domain and range of a function without graphing it on a calculator? Continuity and Differentiability of Piecewise Function What values of "a" and "b" will make this piecewise function continuous and differentiable? Find a Function, Integrals. The inclusion of both t and x in dy(t,x) is not necessary if there is no t in the right hand side of the differential equation. , we get a finite limit for the difference quotient. In a cooperation between the author and maths online in the beginning of 2000, the calculator's functionality has been extended. 
differential equation related to Y, the numberator is the differential equation related to X. Interactive Graphing Calculator - Desmos Calculator. In this section we will compute the differential for a function. Recall that the tangent line to the The calculator uses an approximation! In fact, calculators and computers use approximations all the. Bernoulli was a Swiss mathematician in the 1700s, studying the conservation of energy. Differential cost is the difference between the cost of two alternative decisions, or of a change in output levels. One of the most important tools in a trader's bag is risk management. The solution to the differential equation for disease spread is an exponential function, where the probability of infecting someone appears in the exponent, and there is a free constant in front of the exponential, which I called N 0. In the 18th century Leonhard Euler invented a simple scheme for numerically approximating the solution to an ODE. Instructions: Fill in the flow rate and the corresponding units. Free Calculus worksheets created with Infinite Calculus. Corticosteroids and other immunosuppressive drugs also cause lymphopenia. The ideal hydraulic power to drive a pump depends on. A number-theoretic function \(\phi\) is said to be recursive if there is a finite sequence of number-theoretic functions \(\phi_1 , \phi_2 , \ldots \phi_n\) that ends with \(\phi\) and has the property that every function \(\phi_k\) of the sequence is recursively defined in terms of two of the preceding functions, or results from any of the. Statistics Calculator. The TI-89 Titanium lets you perform basic math, algebra, calculus, graphs, matrices, and statistical functions and creating animations, graphing 3-D rotations, and plotting contours. Power Rule. The story behind its development is here. Our inverse function calculator will quickly calculate the derivative of a function. 09-differentiability. We will give an application of differentials in this section. Sending completion. In my spreadsheet attached, have a key showing these specific start and stop hours for each shift in W2:W5 (shift differential start time) and X2:X5 (shift differential end time). Polar Curve Functions (Differential Calc) Solomon Xie. Interactive Graphing Calculator - Desmos Calculator. Laboratory Procedures for Determining the Dispersibility of. geometrically, the function #f# is differentiable at #a# if it has a non-vertical tangent at the corresponding point on the graph, that is, at. Includes 53 different calculations. , we get a finite limit for the difference quotient. We need only check for differentiability at x = 0 and 3. Berkeley Electronic Press Selected Works. Enabling on-line discussions. for all t. The stability of a system may also be found from the state-space representation. This a Higher-order Derivatives - math formulas. Our calculator is capable of solving systems with a single unique solution as well as undetermined systems which have infinitely many solutions. Chi-Square Distribution Calculators Exponential Distribution Calculators Gamma Distribution Calculators Gumbel Using this cumulative distribution function calculator is as easy as 1,2,3. Try Open Omnia Today. 2 - Activity 2 - Graphs of Functions and their Derivatives Lesson 28. where the. For example, if a worker earns $20 an hour, and the differential for the night shift is 10 percent: 20 x 0. A function is differentiable if it has a defined derivative for every input, or. 
Using Newton's second law of motion, we get the second order differential equation: d 2 y/dt 2 = (-G - D)/m. Questions on Continuity with Solutions. There are functions that are continuous but differentiable. Calculate integrals online — with steps and graphing! The Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integrating functions with many. The squeezing theorem is used to find limits of functions such as sin x/x a x approaches 0. Sending completion. the max power of s in the denominator, put that many integrators in series, after each integrator put a negative feedback link, with a constant coefficient, to before the first integrator except for the. In fact it is not differentiable there (as shown on the differentiable page). A graphing calculator, the TI-84 Plus CE, is integrated into the exam software, and available to students during Section 2 of the exam. They evaluate the length of the spiral track on a CDrom. When the input signal swings in any direction, the ADC samples the signal and its output is a stream of binary numbers. Write and scan to calculate. 5) An important point in the derivation of the transfer function is the fact. So, a differential equation must be solved not for a parameter, say x, but for a whole function. Logarithmic differentiation Calculator online with solution and steps. Derivative Calculator computes derivatives of a function with respect to given variable using Steps to use the derivative calculator: Enter function you would like to differentiate and pay attention to. List of Derivatives of Log and Exponential Functions List of Derivatives of Trig & Inverse Trig Functions List of Derivatives of Hyperbolic & Inverse Hyperbolic Functions. Log InorSign Up. Both of them use a similar numerical formula, Runge-Kutta, but to a different order of approximation. Here is a circuit that can be simulated, based on the above definition of differential input impedance (values picked to be different). for shift starts at 9pm - 6 am, and say lunch break is at 1am-2am, it will deduct 1 hour from the night differential mentioned above. This online trigonometry calculator will calculate the sine, cosine, tangent, cotangent, secant and cosecant of angle values entered in degrees or radians. For example x²+y=1, isolate y as a function of x: y= (1-x²) and use the derivative rules. 7/42 Highest Weight Representations of SL(2,R) Let Hα be the set of holomorphic functions of T(R+) such that (F| G) = 2α 2πΓ(α) Z H F(z)G(z) yα−1dxdy<∞. Infant Growth Charts - Baby Percentiles Overtime Pay Rate Calculator Salary Hourly Pay Converter - Jobs Percent Off - Sale Discount Calculator Pay Raise Increase Calculator Linear Interpolation Calculator Dog Age Calculator Ideal Gas Law Calculator Hazen Williams Equations Calculator Venturi Meter Flow Rate Fluid Mechanics Equations Calculators. Partial differential equations of physics, including those of potential theory, wave propagation, and heat flow, treated by classical means, generalized functions and variational principles. Non Differentiable Functions. Here are some facts about derivatives in general. We say that is differentiable at a point if the derivative of at exists (as a finite number), i. for one-variable real functions: limits, integrals, roots This is the main site of WIMS (WWW Interactive Multipurpose Server): interactive exercises, online calculators and plotters, mathematical recreation. DNL stands for Differential Non-Linearity and quantifies the ADC or DAC precision. 
that is differentiable at a point x=a x=a. Polar Curve Functions (Differential Calc) Solomon Xie. For example $ 2x+3 $ would be 2*x+3, note that you must show the multiplication sign. Calculus - Introducing Differentiable functions and Differentiation - Outline of Contents (Also check out the MCQ Quizzes at The condition for differentiability of functions will be introduced in the tutorial. I was hoping to reference these values in the formulas, that way if the shift hours change, I can just change the key and the spreadsheet would calculate accordingly. The test gets a lot of information from your blood sample:. Differential equations Calculator Get detailed solutions to your math problems with our Differential equations step-by-step calculator. A real function is said to be differentiable at a point if its derivative exists at that point. Also you can compute a number of solutions in a system of linear equations. Explore many other math calculators, as well as hundreds of calculators addressing topics such as finance. (C) differentiable but not continuous. Using your calculator, graph each of the following functions and determine all values of x that produce a maximum or minimum function value on the given interval. Math is about vocabulary. Deletes a Lambda function. It makes the lives of people who use matrices easier. The rule for differentiating constant functions and the power rule are explicit differentiation rules. We now apply the principle of superposition: if u1 and u2 are two solutions to the PDE (8) and BC (10), then c1u1 + c2u2 is also a solution, for any constants c1, c2. Then the differential for a multivariable function is given by three separate formulas. A first-order differential equation, that may be easily expressed as $${\frac{dy}{dx} = f(x,y)}$$ is said to be a homogeneous differential equation if the function on the right-hand side is homogeneous in nature, of degree = 0. sagews (SageMath Another point of note is that if f is differentiable at c, then f is continuous at c. Enabling on-line discussions. Modeling non-differentiable rewards with policy gradients. PDE = differential equation in which all dependent variables are a function of several independent variables, as in the second example. When you start learning how to integrate functions, you'll probably be introduced to the notion of Differential Equations and Slope Fields. dY/dt = ky and Y(t 0) = 0. Partial Derivative Calculator computes derivatives of a function with respect to given variable utilizing analytical differentiation and displays a step-by-step solution. Cadence Calculator Function for difference between two voltages. The conversion is based on equivalent fitness, not equivalent pace. Differentiable. The concept and application of 'the second derivative' How to use differentiation to solve problems in basic motion, mechanics and optimisation. Calculate Arcsine, Arccosine, Arctangent, Arccotangent, Arcsecant and Arccosecant for values of x and get answers in degrees, ratians and pi. Some RF calculators only perform calculations at a specific frequency, usually 2. Lets Work Out-Example: Consider the function \(f(x)=(2x-3)^{\frac{1}{5}}\). The addition of larger tires can greatly decrease performance and fuel economy. The notion of differentiability can also be extended to complex functions (leading to the Cauchy-Riemann equations and the theory of holomorphic functions), although a few additional subtleties arise in complex differentiability that are not present in the real case. 
This should be rather obvious, but a function that contains a discontinuity is not. Instructions: Fill in the flow rate and the corresponding units. How can I force Mathematica to calculate symbolically the partial derivative of a function u[x,y] with respect to a variable z = f(x, y), where f(x, y) is known? u is a function, defined on reals, taking real values. Let's go through a few examples. To do this, one should learn the theory of the differential equations or use our online calculator with step by step solution. A function to follow the progress of the minimization. 2, Problem 2 Engineering Mathematics and Sciences Solutions to Engineering, Sciences, and Mathematics Problems. This online calculator allows you to solve differential equations online. McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The. Fill in the pipe size and the differential pressure. Limit Function Value Calculator. is continuous at a point t if the three scalar functions f 1 (t), f 2 (t), and f 3 (t) are continuous at t. c 1 f(t) + c 2 g(t) = 0. That is, the graph of a differentiable function must have a (non-vertical) tangent line at each point in its domain, be relatively "smooth" (but not necessarily mathematically smooth), and cannot contain any breaks, corners, or cusps. Then there is some open set V containing a and an open W containing f(a) such that f : V → W has a continuous inverse f−1: W → V which is differentiable for all y ∈ W. To get the value of the derivative of f at a given x, the function misc. Instructions: Fill in the flow rate and the corresponding units. Calculates biologically effective dose (BED) and equivalent dose (EQD2) for cancer radiotherapy. the max power of s in the denominator, put that many integrators in series, after each integrator put a negative feedback link, with a constant coefficient, to before the first integrator except for the. Our inverse function calculator uses derivative formula to solve derivative of trig functions. \) The described algorithm is called the method of variation of a constant. In calculus , a differentiable function of one real variable is a function whose derivative exists at each point in its Differentiable function. The ReLU activation function g(z) = max{0, z} is not differentiable at z = 0. By default, the function equation y is a function of the variable x. It is the nature of differential equations that the sum of solutions is also a solution, so that a general solution can be approached by taking the sum of the two solutions above. Estimating values of states with Value Functions. Non Differentiable Functions. Rotate about ICC 90 degrees. Our inverse function calculator uses derivative formula to solve derivative of trig functions. Jan 21, 2019. Let us consider Cartesian coordinates x and y. Enter the function. This calculator automatically assumes an investor holds to maturity, reinvests coupons, and all payments and coupons will be paid on time. Even after setting the number of decimal places on the display. Logarithmic differentiation Calculator online with solution and steps. The above calculator is an online tool which shows output for the given input. 12MB) Chapter 4 Complex Numbers ( 0. Problem Let $f$ be a continuous one-one function defined on an interval and suppose that $f$ is differentiable at $f^{-1}(b)$ with the derivative $f'(f^{-1} Inverse function differentiability proof #2. As an example, if , then and then we can compute :. 
The inclusion of both t and x in dy(t,x) is not necessary if there is no t in the right hand side of the differential equation. A function to follow the progress of the minimization. becomes equal to R. Make use of this free online derivative calculator to differentiate a function. In Leibniz notation,. Exponential Functions. Log InorSign Up. The graph consists Since the one sided limits are not equal, the function is not continuous at x=3, So, the function can't be differentiable either. For the function y = f(x), the derivative is symbolized by y' or dy/dx, where y is the dependent variable and x the independent variable. In calculus (a branch of mathematics), a differentiable function of one real variable is a function whose derivative exists at each point in its domain. Hydraulic Pump Power. Furthermore, any doubts regarding the applications of differential amplifier and electronics projects. Graphical and analytical explanations. Differential equation, mathematical statement containing one or more derivatives—that is, terms representing the rates of change of continuously varying quantities. I want to try different changes of variables in PDEs. It also has square root, cubing, squaring and many more features. Mathepower calculates other information about your function step-by-step. Interactive, free online graphing calculator from GeoGebra: graph functions, plot data, drag sliders, and much more!. The Ln Calculator has two controls which are used to perform different functions. Estimating values of states with Value Functions. Chapter 3 Solve, Differential/Quadratic Differential, Integration, Maximum/Minimum Value, and ( 0. Shift differential premiums vary depending on a number of factors, including job function, level of responsibility, influence of labor unions on specific jobs, location and type of shift. Trig Equations with Calculators, Part II. If the op-amp was 'railed' (saturated) then the differential input impedance would be higher: R2 + Rg + R1 + Rf. person_outlineTimurschedule1 year ago. Please note: You should not use fractional exponents. This free fraction calculator supports fraction addition, subtraction, multiplication and division. Free Online Scientific Notation Calculator. Derivative of a function is the limit of the ratio of the incremental change of dependent variable to the incremental change of independent variable as change of independent variable approaches zero. You can probably get something closer to the step function by a judicious choice of trigonometric function. We hope you liked our derivative calculator & its theory. In other words, you will need to write a function that takes \(t\), \(y\), and possibly \(c\) and returns \(f(t, y, c)\). Single-ended vs. Calculus: Learn Calculus with examples, lessons, worked solutions and videos, Differential Calculus, Integral Calculus, Sequences and Series, Parametric Curves and Polar Coordinates, Multivariable Calculus, and Differential, AP Calculus AB and BC Past Papers and Solutions, Multiple choice, Free response, Calculus Calculator. , we get a finite limit for the difference quotient. Laboratory Procedures for Determining the Dispersibility of. DNL stands for Differential Non-Linearity and quantifies the ADC or DAC precision. Includes 53 different calculations. Then the differential for a multivariable function is given by three separate formulas. Mathepower calculates other information about your function step-by-step. Calculate derivatives online — with steps and graphing! 
The Derivative Calculator supports computing first, second, …, fifth derivatives as well as differentiating functions. Generally speaking, curves of degree n can have up to (n − 1) turning points. In addition to minor corrections and updates throughout, this new edition contains materials on higher order Melnikov functions and the bifurcation of limit cycles for planar systems of differential equations. The derivative of a sum is the sum of the derivatives, calculator uses this property to differentiate functions. Free Online Derivative Calculator allows you to solve first order and higher order derivatives Wolfram|Alpha is a great calculator for first, second and third derivatives; derivatives at a point; and. The derivative is the instantaneous rate of change of a function with respect to one of its variables. If satisfies the Cauchy-Riemann equations and has continuous first partial derivatives in the neighborhood of , then exists and is given by. When val is greater than one the function halts. Thanks in advance!. In calculus, a differentiable function is a continuous function whose derivative exists at all points on its domain. What are piecewise functions? What are open and closed points? How do you figure out the domain and range of a function without graphing it on a calculator? Continuity and Differentiability of Piecewise Function What values of "a" and "b" will make this piecewise function continuous and differentiable? Find a Function, Integrals. Differential and Integral Calculus by Feliciano and Uy: Limit of a Function, Exercise 1. Thus Rolle's Theorem applies to so there exists a such that which yields Theorem 5. Deletes a Lambda function. Differentiation rules for evaluating the derivative of non-trivial functions. I was hoping to reference these values in the formulas, that way if the shift hours change, I can just change the key and the spreadsheet would calculate accordingly. In calculus, a differentiable function is a continuous function whose derivative exists at all points Differentiability lays the foundational groundwork for important theorems in calculus such as the. Linear Differential Equations These are first degree differential equations. Inverse function for a function y=f(x) is such function x=g(y) that g(f(x))=x for all values of x where f is defined. It is the nature of differential equations that the sum of solutions is also a solution, so that a general solution can be approached by taking the sum of the two solutions above. Questions on Continuity with Solutions. The function must also be continuous , but any function that is differentiable is also continuous, so no need to worry about that. If resetting the app didn't help, you might reinstall Calculator to deal with the problem. ipynb (Jupyter Notebook) and 09-differentiability. The low set point is defined as the maximum differential current that exists in normal transformer operation. BYJU'S online differential equation calculator tool makes the calculation faster, and it displays the derivative of the function in a fraction of seconds. Here is a circuit that can be simulated, based on the above definition of differential input impedance (values picked to be different). That is, the graph of a differentiable function must have a (non-vertical) tangent line at each point in its domain, be relatively "smooth" (but not necessarily mathematically smooth), and cannot contain any breaks, corners, or cusps. Otherwise, all versions and aliases are deleted. 
The calculator will find the reduced fraction and will show all the work so you can see how to do it. Because when a function is differentiable we can use all the power of calculus when working with it. Whether you are doing simple calculations or working on university-level math problems, Maple Calculator can do it all. Differentiability lays the. Corticosteroids and other immunosuppressive drugs also cause lymphopenia. Laguerre Functions and Differential Recursion Relations -p. Tektronix has over 70 years of experience designing Test and Measurement equipment. hp calculators HP 50g Solving differential equations hp calculators - 3 - HP 50g Solving differential equations or (if the current variable VX is X) or All intermixed notions will be also accepted as valid arguments. Suppose the function g is defined by: where k and m are constants. Well, it's not differentiable when x is equal to negative 2. Detention Forum Archive. Limit Calculator - online calculator. As indicated in Chapter 8, SCILAB provides function derivat to calculate derivatives of polynomials. It makes the lives of people who use matrices easier. Functions Containing Vertical Asymptotes. poles) are detected and treated specially. In other words, you will need to write a function that takes \(t\), \(y\), and possibly \(c\) and returns \(f(t, y, c)\). Practice your math skills and learn step by step with our math solver. It performs all of the same functions as the Texas. Free derivative calculator - differentiate functions with all the steps. Instructions: Fill in the flow rate and the corresponding units. For example the absolute value function is actually continuous (though not. polynomial with. In calculus , a differentiable function of one real variable is a function whose derivative exists at each point in its Differentiable function. It has often been demonstrated (for example, Goldman, 1953) that for a given, fixed value of variance, s2, the probability density with the greatest value of H is the Gaussian. Line Equations Functions Arithmetic & Comp. The Ln Calculator has two controls which are used to perform different functions. Berkeley Electronic Press Selected Works. Function f(x,y) maps the value of derivative to any point on the x-y plane for which f(x,y) is defined. Calculator Use. A differential amplifier circuit is a very useful op-amp circuit, since it can be configured to either "add" or "subtract" the input voltages, by suitably adding more resistors in parallel with the input resistors. differential equation related to Y, the numberator is the differential equation related to X. Click on a differential below to view photos, measurements, and carrier breaks, as well as applicable models, and available parts, for that particular fitment. The trigonometric functions are also known as the circular functions. Elementary Differential Equations with Boundary Value Problems is written for students in science, en-gineering,and mathematics whohave completed calculus throughpartialdifferentiation. Excel Yearfrac Function Example. , we get a finite limit for the difference quotient. A piecewise function. By using differential equations with either velocity or acceleration, it is possible to find position and velocity functions from a known acceleration. They are not far apart. There are functions that are continuous but differentiable. 
The calculator will find the solution of the given ODE: first-order, second-order, nth-order, separable, linear, exact, Bernoulli, homogeneous, or inhomogeneous Differential Equation Calculator - eMathHelp. When business executives are faced with such situations, they must select the most viable option that increases revenues. The second part will use this function in concert with SciPy's ODE solver to calculate solutions over a specified time range assuming given initial conditions. In the picture above, m is the ball mass, y is its height, v is its vertical speed, G is its weight and D is the drag of the surrounding air. 5) An important point in the derivation of the transfer function is the fact. 1The power rule The derivative of a power of x is given by d dx xp = pxp−1. See definition of the derivative and derivative as a function. The inclusion of both t and x in dy(t,x) is not necessary if there is no t in the right hand side of the differential equation. Recall that the tangent line to the The calculator uses an approximation! In fact, calculators and computers use approximations all the. Let's use the view. However, a function can still be used if the derivative of the dependent variable need not be returned to the spreadsheet. Add the differential pay to the hourly wage to find out the total earned each hour during the off shift. sagews (SageMath Another point of note is that if f is differentiable at c, then f is continuous at c. A function is differentiable if it has a defined derivative for every input, or. You can also get a better visual and understanding of the function by using our graphing tool. We note that the differential entropy of the Gaussian probability density function depends only on the variance and not on the mean. Corticosteroids and other immunosuppressive drugs also cause lymphopenia. While graphing, singularities (e. HIV causes a reduction in the total number of lymphocytes as well as changes in the ratios of the types of T-lymphocytes. Problem Let $f$ be a continuous one-one function defined on an interval and suppose that $f$ is differentiable at $f^{-1}(b)$ with the derivative $f'(f^{-1} Inverse function differentiability proof #2. Our calculator is capable of solving systems with a single unique solution as well as undetermined systems which have infinitely many solutions. AP Calculus AB 2019 Free Response Question 1 Rate in, rate out problem. Differentiable Function Calculator. Partial Derivative Calculator computes derivatives of a function with respect to given variable utilizing analytical differentiation and displays a step-by-step solution. In Leibniz notation, Note: For an example of the sum rule, see Example #7 below. Exponential Functions. In fact it is not differentiable there (as shown on the differentiable page). Non Differentiable Functions. Partial differential equations of physics, including those of potential theory, wave propagation, and heat flow, treated by classical means, generalized functions and variational principles. When x is equal to negative 2, we really don't have a slope there. Since we can surmise that is an equilibrium for this ODE (that is, the derivative along this line is 0 everywhere), and the slope at (1, 0) is positive, we know that as , the function will converge to 2. Enter the argument(s) for the function, including the symbol x. It has often been demonstrated (for example, Goldman, 1953) that for a given, fixed value of variance, s2, the probability density with the greatest value of H is the Gaussian. 
Before we can use the formula for the differential, we need to find the partial derivatives of the function with respect to each variable. Deletes a Lambda function. Ordinary Differential Equations in Maple. We'll show that if a function is differentiable, then it's continuous. This is a nonzero Hilbert space if α>0 For F∈ Hα we define πα(g)F(z) = (a−bz)−α−1F(g−1 ·z) where g= a b c. provided this limit exists. The conversion is based on equivalent fitness, not equivalent pace. Calculates the exponential functions e^x, 10^x and a^x. Mathepower calculates other information about your function step-by-step. But a function can be continuous but not differentiable. When x is equal to negative 2, we really don't have a slope there. Partial Derivative Calculator computes derivatives of a function with respect to given variable utilizing analytical differentiation and displays a step-by-step solution. If f is a differentiable function, find an expression for the derivative of each of the following functions?. McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The. When val is greater than one the function halts. polynomial with. Square summable function methods and integral equations. This estimate is better the closer we are to the point of tangency, in this case the point where. Homogeneous Differential Equation of the First Order. AMC differentials are found primarily under AMC cars and Jeep vehicles prior to 2007. | CommonCrawl |
Cow's milk allergy skin tests: fresh milk, commercial extracts, or both?
Idit Lachover-Roth (ORCID: orcid.org/0000-0001-8968-4143)1,2,
Nadav Giorno2,
Tzipi Hornik-Lurie3,
Anat Cohen-Engler1,
Yossi Rosman1,2,
Keren Meir-Shafrir1 &
Ronit Confino-Cohen1,2
Allergy, Asthma & Clinical Immunology volume 19, Article number: 6 (2023) Cite this article
The diagnosis of food allergy is based on a history of an immediate allergic reaction following food ingestion and a skin prick test (SPT) demonstrating sensitization with commercial extracts (CE) or fresh food (FF). For most food allergens, SPT with FF is considered more accurate and predictive. Regarding cow's milk, the results are inconclusive. This retrospective study aimed to evaluate the accuracy of SPT with fresh milk compared to CE (cow's milk and casein) for the evaluation of cow's milk allergy (CMA).
This study summarized the medical records of children diagnosed with CMA. The data include demographics, SPT and oral food challenge results, and atopic comorbidities.
Records of 698 patients with a diagnosis of CMA were reviewed; 388 fulfilled the inclusion criteria. Overall, 134 patients (34.54%) had an additional atopic disease. The SPT wheal size with fresh milk was significantly larger than with CE (cow's milk and casein), both at the first evaluation and before oral food challenge (OFC). The combination of SPT results (CE and FF) gave the maximal odds ratio (OR) for a reaction during OFC, and SPT with fresh milk alone gave the minimal OR (34.18 and 4.74, respectively).
SPT with CE for CMA evaluation is more reliable than SPT performed with fresh milk. In patients suspected of having IgE-mediated CMA, before deciding to perform an OFC, it is advisable to perform SPT with at least two different extracts, always including casein. Fresh milk can serve as a backup if commercial extracts are not available. In cases where the SPT with fresh milk is 3 mm or less, there is a 93.3% chance that the OFC will pass without a reaction.
Trial registration This study protocol was reviewed and approved by the Ethics Committee of Meir Medical Center, IRB Number 0083-18 MMC.
Food allergies are common and present an increasing health problem, affecting up to 10% of young children [1]. The prevalence of cow's milk allergy (CMA) is estimated to be 0.5–0.74% in the first year of life [2, 3].
The diagnosis of food allergy is based on a history of immediate allergic reaction following ingestion of the food and a skin prick test (SPT) showing sensitization and/or specific IgE. However, the gold standard for diagnosis or exclusion of food allergy is an oral food challenge (OFC) [4]. OFC is also performed when the history is inconclusive or when the SPT is equivocal.
An SPT is performed using either a commercial extract (CE) or fresh food (FF). It is considered positive when the wheal diameter is ≥ 3 mm [5]. For most food allergens, SPT with FF produces a larger wheal diameter than CE. The age at which the SPT is performed also affects the results, with smaller wheal diameters in younger children [4, 6].
Many studies have tried to define the wheal size that can predict an allergic reaction during OFC [4,5,6,7,8]. SPT with FF is believed to be more predictive than SPT performed with CE [6, 11, 12]. Rance et al. found that, for various foods, the correlation between SPT wheal size and OFC results was higher when the SPT was performed with FF than with CE [13]. Regarding cow's milk (CM), the specificity of SPT with fresh milk (FM) was 100%, compared to 50% with CE, whereas CE was more sensitive than FM (73% vs. 66%) [13]. However, specifically for CM, FF was not superior to CE in predicting an allergic reaction during OFC: the same number of children (nine) with a CE SPT wheal size > 3 mm reacted during OFC as children with an FF SPT wheal size > 3 mm, and among children with an SPT < 3 mm, the same number failed the OFC with CE as with FF [13]. In patients with CMA, Calvani et al. defined SPT cutoffs of 7 mm for casein, 20 mm for CM CE, and 10 mm for SPT with FM [9, 10]. These cutoffs produced a specificity of 100%, but the sensitivity was zero [9]. Most studies were small series, and the recommendations regarding whether SPT to evaluate CMA should be performed with CE and/or FF were inconclusive.
The current study aimed to evaluate the accuracy of SPT with FM compared to CE for evaluating CMA.
This retrospective study included children and teenagers up to age 18 years, with past or current diagnosis of CMA, who were treated at the Allergy Unit from 2010 to 2018.
Data for the entire cohort were retrieved from the Health Maintenance Organization electronic medical record system. The data collected included demographic parameters, age and symptoms during the index reaction, and the size of SPTs at all evaluations. The eosinophil count in proximity to the index reaction, OFC results, current allergic status, and atopic comorbidities (asthma, atopic dermatitis, allergy to other foods, and family history of atopic diseases) were recorded. Allergic rhinitis was excluded as an atopic comorbidity in the analyses due to the young age of the patients and the consequently low and biased prevalence of this comorbidity.
Children and teenagers up to age 18 years, with a history of a suspected immediate allergic reaction to CM products and an SPT confirming the diagnosis, were included in the study.
Patients with anamnestic details implying a late reaction, or without at least one positive SPT, were excluded.
SPTs were performed by trained staff on the volar aspect of the forearm with commercial CM extract (1:10 W/V, ALK-Abello Pharm. Inc), casein (1:100 W/V, ALK-Abello Pharm. Inc), and fresh CM (3% fat). Positive (histamine 1 mg/ml) and negative (0.9% normal saline) controls were also applied. An SPT was defined as positive, for all patients at all ages, when the wheal diameter was at least 3 mm larger than that of the negative control after 15 min. The SPT results taken into the analyses were the first and last SPTs performed, even if a subject had undergone more than two SPTs. For children who underwent OFC, the last SPT taken into account was the one performed before the OFC. For children defined as allergic without OFC, the last SPT was used to define them as allergic in all analyses.
The decision to perform an OFC was made by an allergy specialist according to clinical parameters and SPT results. The OFC was an open challenge with CM formula for infants younger than 12 months and with FM for toddlers and children older than 12 months.
Challenge protocol
A first dose of 0.5 ml was doubled every 30 min until a final dose of 80 or 160 ml, containing 2.72 or 5.44 g of total milk protein, respectively, was reached. Toddlers younger than 2 years of age received a final dose of 80 ml; between 2 and 5 years of age, the final dose depended on the child's cooperation. The OFC was stopped and considered a failure when one of the following symptoms appeared: urticarial rash, cough, wheezing, or vomiting.
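To make the escalation arithmetic concrete, here is a minimal Python sketch, not the authors' protocol code. It assumes that "final dose" refers to the last single dose (capped when doubling would overshoot it) and derives the protein density from the stated 2.72 g per 80 ml; the function name and the cap behavior are our own.

```python
# Sketch of the dose-escalation schedule described above (not the authors' code).
# Assumptions: doses double from 0.5 ml every 30 min, and the last dose is
# capped at the target final dose; protein density (~0.034 g/ml) is derived
# from the stated 2.72 g per 80 ml.

def challenge_schedule(final_dose_ml: float, start_ml: float = 0.5) -> list:
    """Doses (ml) given at 30-minute intervals, ending at the final dose."""
    doses, dose = [], start_ml
    while dose < final_dose_ml:
        doses.append(dose)
        dose *= 2
    doses.append(final_dose_ml)  # cap the last step at the target final dose
    return doses

PROTEIN_G_PER_ML = 2.72 / 80  # ~0.034 g total milk protein per ml

for target in (80, 160):
    steps = challenge_schedule(target)
    print(f"final dose {target} ml: {steps}; "
          f"final-dose protein = {target * PROTEIN_G_PER_ML:.2f} g")
```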
The study cohort was divided into three groups. A. Allergic by OFC or SPT: A.1, allergic by OFC: patients who were referred for OFC by their allergist because of a lack of allergic reactions in the previous few years or an improvement in SPT results, and who had an allergic reaction during the OFC. A.2, allergic by SPT: patients with a high suspicion of existing CMA according to the judgment of the patient's allergist, based on a recent allergic reaction following accidental exposure and/or a large wheal diameter on SPT. B. Not allergic according to OFC.
Data were analyzed using SPSS/PC, version 25.0. Descriptive statistics were used to characterize the study participants. Chi-square and one-way ANOVA tests were employed to examine differences in demographic and clinical characteristics among the three groups (A1, A2, and B). Odds ratios (OR) and 95% confidence intervals (CI) were calculated.
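As an illustration of these group comparisons, the following sketch runs a chi-square test and a one-way ANOVA with SciPy; the counts and wheal sizes are placeholders, not the study's data (the analyses themselves were done in SPSS).

```python
# Sketch of the group-comparison tests named above, on placeholder data.
from scipy.stats import chi2_contingency, f_oneway

# Hypothetical 3x2 table of asthma (yes/no) counts for groups A1, A2, B:
asthma_table = [[7, 23], [61, 136], [14, 147]]
chi2, p_chi, dof, expected = chi2_contingency(asthma_table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")

# Hypothetical casein wheal sizes (mm) per group:
a1 = [5.3, 6.1, 4.8, 5.9]
a2 = [6.7, 7.2, 6.0, 6.9]
b = [3.1, 2.8, 3.5, 3.0]
f_stat, p_anova = f_oneway(a1, a2, b)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
```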
A hierarchical binary logistic regression was performed to determine the relationship between the OFC outcome and demographic variables (sex and age); other comorbidities (asthma, atopic dermatitis, other food allergies); family atopic background; symptoms during the first reaction at the categorical level (yes/no); and continuous parameters (age at reaction, SPT result, eosinophil count). Two-by-two tables were used to calculate the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and odds ratio (OR) of the last SPT values.
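The two-by-two metrics follow directly from the table counts. The sketch below spells out the standard formulas; the example counts are illustrative only, not the study's data.

```python
# Standard diagnostic metrics from a 2x2 table of SPT result vs. OFC outcome.
# The counts in the example call are illustrative placeholders.

def two_by_two_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # positive tests among true allergics
        "specificity": tn / (tn + fp),  # negative tests among non-allergics
        "PPV": tp / (tp + fp),          # allergic among test-positives
        "NPV": tn / (tn + fn),          # non-allergic among test-negatives
        "OR": (tp * tn) / (fp * fn),    # cross-product odds ratio
    }

print(two_by_two_metrics(tp=25, fp=40, fn=5, tn=120))
```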
To define the sensitivity and specificity of the SPT ratio for milk extract, casein, and FM in detecting CMA, receiver-operating characteristic (ROC) curves were plotted. Cutoff levels were optimized for the sum of sensitivity and specificity. Accuracy was measured by area-under-the-curve (AUC) analysis. The OR was calculated with a binary logistic regression model. Two-sided tests with a significance level of p < 0.05 were used in all analyses.
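A sketch of this ROC step, using scikit-learn on synthetic data, is shown below; maximizing sensitivity + specificity corresponds to maximizing Youden's J (tpr - fpr). The data-generating parameters are invented for illustration.

```python
# Sketch of the ROC/AUC analysis on synthetic data (the study used SPSS).
# The cutoff maximizing sensitivity + specificity is found via Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical SPT ratios: allergic children (label 1) tend to have higher ratios.
ratios = np.concatenate([rng.normal(1.3, 0.5, 30), rng.normal(0.7, 0.4, 160)])
labels = np.concatenate([np.ones(30), np.zeros(160)])

fpr, tpr, thresholds = roc_curve(labels, ratios)
auc = roc_auc_score(labels, ratios)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}, optimal cutoff = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```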
The study was approved by the local Ethics Committee.
From 2010 through 2018, 698 patients with a diagnosis of CMA were evaluated. Among them, 272 (39.0%) were excluded based on the exclusion criteria, and another 38 were lost to follow-up (shown in Fig. 1). Of the 388 patients in the study cohort, 215 (55.4%) were male.
Patient flow diagram. CMA cow's milk allergy; OFC oral food challenge; SPT skin prick test
The study cohort of 388 patients included the following subgroups: group A (allergic), 227 patients (58.5%), of whom 30 (7.7%) were proven allergic by OFC (group A1) and 197 (50.8%) by SPT (group A2); and group B (non-allergic), 161 subjects (41.5%) who did not react during the OFC.
There were no significant differences between the three groups regarding age at the first allergic reaction (shown in Table 1). Additional demographic parameters are presented in Table 1.
Table 1 Comparison of demographic and clinical parameters between the 3 study groups
Atopic comorbidities
Overall, 134 patients (34.5%) had at least one additional atopic disease; 82 (21.1%) had asthma and 68 (17.5%) had atopic dermatitis. Food allergies other than milk were found in 78 patients (20.1%). The prevalence of asthma was similar in groups A1 and A2 (p = NS), and significantly higher in both than in group B (22.58% for group A1 and 31.12% for group A2 vs. 8.75% for group B; p < 0.05).
Other food allergies were significantly more common in group A2 than in group B (25.5% vs. 14.4%; p < 0.05) (shown in Table 1).
Symptoms during the first reaction
Urticarial rash was the most common symptom, affecting 335 patients (86.3%); 55 (14.5%) had breathing difficulties and 212 (55.9%) had vomiting. Breathing difficulties were more prevalent in group A2 than in group B (22.9% vs. 4.5%, p < 0.001; shown in Table 1). No other symptoms were reported.
Skin tests
All subjects (n = 388) underwent SPT with milk extract and casein simultaneously. Of them, 322 (83%) also underwent SPT with FM. In the entire cohort, 66 subjects (17%) were examined only once and therefore have results for the first SPT alone. Among those with a last SPT, 274 (85%) also underwent SPT with FM (shown in Additional file 1: Table S1).
Skin test with FM
The wheal size with FM was significantly larger than with milk extract or casein, both at the first recorded SPT and at the last recorded SPT, in the total cohort and in each group separately. At the first SPT, the mean wheal sizes were 5.1 mm, 95% CI (4.62–5.6), 5.91 mm, 95% CI (5.53–6.3), and 8.88 mm, 95% CI (8.24–9.52), for casein, milk extract, and FM, respectively (p < 0.05). At the last SPT, they were 3.74 mm, 95% CI (3.26–4.21), 5.02 mm, 95% CI (4.56–5.48), and 8.21 mm, 95% CI (7.52–8.9), respectively (p < 0.05) (shown in Fig. 2).
Skin prick test wheal sizes (mm) in the different study groups. A Mean skin prick test results at diagnosis. B Mean results of the last skin prick tests. Only significant differences are shown
First SPT
The time elapsed between the first recorded reaction and the first SPT was 11.51 ± 0.72 months.
The mean results of the first SPTs are shown in Fig. 2A. Significant differences in wheal size between the groups were found only for SPT with casein (5.27 mm, 95% CI (3.02–7.52); 6.68 mm, 95% CI (5.99–7.37); and 3.12 mm, 95% CI (2.53–3.7), for groups A1, A2, and B, respectively; p < 0.05). For SPT with milk extract, a significant difference in wheal size was found only between group B and group A2 (4.67 mm, 95% CI (4.19–5.15) vs. 6.95 mm, 95% CI (6.39–7.51), respectively). There were no significant differences between the groups in the wheal size of the first SPT with FM (8.43 mm, 95% CI (6.71–10.15); 9.63 mm, 95% CI (8.82–10.44); and 8.04 mm, 95% CI (6.9–9.18), for groups A1, A2, and B, respectively).
Only two patients in the non-allergic group (1.25%) had a first casein SPT ≥ 14 mm.
Adjusted binary logistic regression models showed a significant association between the wheal size of the first SPT with casein and the OFC results (OR = 1.148, 95% CI (1.046–1.259), p < 0.05; shown in Additional file 1: Table S2A). These results are in line with the unadjusted results.
Last SPT
The time elapsed between the first recorded SPT and the last was 2.48 ± 0.13 years.
The mean results of the last recorded SPTs are shown in Fig. 2B. For patients who underwent OFC, the last SPT results are those obtained prior to the OFC. Significant differences in wheal size between the groups were found for SPT with all extracts, including FM.
Logistic regression models after adjustment showed a statistically significant association between the wheal size of the last SPT with each extract and the OFC results (shown in Additional file 1: Table S2B). However, the OR for casein was significantly higher than the ORs for milk extract and FM (1.907 vs. 1.354 and 1.151, respectively).
Ratio between the first SPT and the last SPT
The ratio \(\frac{\text{last SPT wheal size (mm)}}{\text{first SPT wheal size (mm)}}\) was < 1 for casein, milk extract, and FM in group B, and for casein in group A1. The ratio was > 1 for all extracts in group A2 and for milk extract and FM in group A1 (shown in Fig. 3). The ratio was significantly lower in group B than in the two other groups for all three SPT extracts (p < 0.05), except for FM, where the difference was significant between groups B and A2 but not between groups B and A1. There were no significant differences between the two allergic groups. Logistic regression models after adjustment showed that the ratio for each extract was statistically significant. The maximal OR, 2.922 (95% CI (1.317–6.481), p < 0.01), was obtained for the casein SPT ratio; the ORs for the milk extract and FM ratios did not differ significantly (shown in Additional file 1: Table S2C).
Fig. 3 The ratio between the last and the first recorded skin prick test, according to study group. OFC, oral food challenge; SPT, skin prick test. Valid numbers of SPTs for the calculations: not allergic (casein 145, milk extract 147, fresh milk 118); allergic by OFC (casein 27, milk extract 28, fresh milk 25); allergic by SPT (casein 140, milk extract 143, fresh milk 129)
Cutoff point for the different extracts
Standardization of wheal size to age led to the equations:
$$\text{Milk extract wheal size (mm)} = 1.175 + 0.316 \times \text{age},$$
$$\text{FM wheal size (mm)} = 2.802 + 0.525 \times \text{age}.$$
For casein, the correlation between age and wheal size was not significant, and wheal size < 3 mm was defined as negative for all ages.
From this point, negative and positive SPT results were defined accordingly.
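Read literally, these equations give an age-dependent expected wheal size that serves as the positivity threshold for milk extract and FM, with a fixed 3 mm threshold for casein. A small helper capturing that reading (our interpretation of the text, not the authors' code):

```python
# Hedged helper: age-standardized SPT positivity, per the equations above.
# The decision rule (wheal at or above the age-predicted size = positive)
# is our reading of the text, not a published algorithm.
def spt_positive(extract: str, wheal_mm: float, age_years: float) -> bool:
    if extract == "casein":
        return wheal_mm >= 3.0                          # fixed for all ages
    if extract == "milk_extract":
        return wheal_mm >= 1.175 + 0.316 * age_years
    if extract == "fresh_milk":
        return wheal_mm >= 2.802 + 0.525 * age_years
    raise ValueError(f"unknown extract: {extract!r}")

# Example: a 5 mm fresh-milk wheal in a 3-year-old (cutoff ~4.38 mm).
print(spt_positive("fresh_milk", wheal_mm=5.0, age_years=3.0))  # True
```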
The NPV of all extracts, separately or in combination, ranged from 92.76% to 94.2%, without significant differences. The PPV was maximal (72.73%) with the combination of all three extracts, sensitivity was maximal (84%) with FM, and specificity was maximal (95.92%; range across extracts 55.08–95.92%) with the combination of all three extracts. The OR was maximal for the combination of all three extracts and minimal for FM (34.18 and 6.44, respectively) (shown in Table 2). NPV, PPV, sensitivity, and specificity were calculated only for groups A1 and B.
Table 2 Predictive values of the different extracts
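For reference, the quantities in Table 2 derive from a 2x2 table of SPT result against OFC outcome; a minimal sketch with placeholder counts (not the study's data):

```python
# Sketch: sensitivity, specificity, PPV, NPV and diagnostic OR from a
# 2x2 table (SPT positive/negative vs. OFC failed/passed). Placeholder counts.
def predictive_values(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "diagnostic_OR": (tp * tn) / (fp * fn),
    }

for name, value in predictive_values(tp=21, fp=8, fn=4, tn=140).items():
    print(f"{name}: {value:.2f}")
```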
The AUC for the SPT ratio of milk extract and FM was below the accepted value for discrimination (0.67 and 0.68, respectively), whereas the AUC for the casein SPT ratio was 0.73. ROC curve analysis found that the casein SPT ratio cutoff yielding optimal sensitivity and specificity was 0.68 (OR = 5.36, p < 0.001; shown in Fig. 4). No optimal cutoff was found for milk extract or FM. The ROC curve was constructed for groups A1 and B, excluding group A2, which was defined as allergic without OFC.
Fig. 4 ROC curve for the ratio between the last and the first SPT with the different extracts. ROC curve created for groups A1 and B (allergic by oral food challenge and not allergic by oral food challenge)
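The cutoff search behind such a ROC analysis is typically a Youden-index maximization; a compact scikit-learn sketch with synthetic ratios standing in for groups A1 and B:

```python
# Hedged sketch: ROC analysis of the casein SPT ratio with an optimal
# cutoff chosen by the Youden index (sensitivity + specificity - 1).
# Ratios are synthetic stand-ins for groups A1 (allergic) and B (tolerant).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
ratio_allergic = rng.normal(1.2, 0.4, 31)    # group A1 (placeholder n)
ratio_tolerant = rng.normal(0.6, 0.3, 160)   # group B (placeholder n)
y = np.r_[np.ones(31), np.zeros(160)]
score = np.r_[ratio_allergic, ratio_tolerant]

fpr, tpr, thr = roc_curve(y, score)
best = np.argmax(tpr - fpr)                  # maximal Youden index
print(f"AUC = {roc_auc_score(y, score):.2f}, cutoff = {thr[best]:.2f}")
```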
Food allergy is an increasing health problem, especially in children. The accepted follow-up for allergic children includes periodic SPT with the food allergen. The most readily available material for SPT is FF; however, its reliability is questionable and differs between foods. For peanut, egg, tree nuts, and fruits, SPT with FF gave more reliable results, with sensitivity and specificity at least as good as those of CE [8, 9, 13]. The current study aimed to assess the advantages and drawbacks of each SPT method and to give physicians an opportunity to examine their current practice. Our study demonstrated that, in the case of CM, the commercial milk extract and casein are more accurate than FM, and casein alone is superior to the others. Combining the results of all three extracts, or at least two of them, is the most accurate way to decide who has the best chance of passing an OFC without an allergic reaction, with the highest sensitivity and specificity. FM can serve as a backup if CE are not available: if the SPT with FM is ≥ 4 mm, the OR for a reaction during the OFC is 4.74, whereas if the SPT is ≤ 3 mm, there is a 93.33% chance that the OFC will be passed without a reaction. The low specificity is the drawback of SPT with FM, which is the main reason it is not recommended as the sole extract for SPT.
The cutoff for negative results with CE yields maximal sensitivity and specificity when the wheal is ≤ 3 mm. Previous studies recommended higher cutoff points with higher specificity, but at the cost of a higher rate of reactions during OFC for minimal gain [5,6,7]. We therefore recommend using the safer cutoff. It is important to emphasize that these cutoffs, based on the results of our study, are relevant only for subjects who had immediate symptoms following consumption of dairy products and are suspected of being allergic to cow's milk, or for the follow-up of children already diagnosed with CMA, to decide whether it is safe to challenge them. For children with mild or uncertain symptoms, more studies are needed to assess the reliability of these cutoffs.
The ratio of the last SPT to the first SPT can also indicate the chance of passing the OFC. Logistic regression models showed a maximal OR for the casein ratio. ROC curve analysis showed that the cutoff for the casein SPT ratio giving maximal sensitivity and specificity is 0.68: a ratio > 0.68 carries an OR of 5.36 for an allergic reaction during the OFC. Nonetheless, the added value beyond looking at the last SPT alone is small; in borderline cases, it can support the decision to perform or postpone the OFC.
One of the most common questions that parents of allergic patients ask is the chance that the CMA will resolve. The SPT result with casein from the first allergic evaluation can give a clue about the chance that the CMA will wane over time: the larger the initial casein SPT, the lower the chance that the patient will outgrow the CMA. When the first casein SPT is ≥ 14 mm, the likelihood of developing tolerance within the following three years is exceedingly small. A longer follow-up is needed to assess whether casein SPT ≥ 14 mm is a negative prognostic factor for the overall recovery rate. In contrast to the study by Uncuoglu et al. [7], in our study the first SPT with FM did not predict the chance of developing tolerance in the future. Further studies are needed to create a risk table.
Age affects the wheal size mainly with FM, as shown in the equations we calculated. However, when we applied those equations, the influence of age was minimal and not significant, so for clinical purposes we recommend using fixed cutoffs for all ages.
Some clinical parameters were significantly more prevalent in patients who failed the OFC, but multivariate analyses did not yield a reliable clinical scoring model combining all variables to identify which patients have a high chance of passing an OFC without a reaction. The probable reason is that the SPT results with milk extract and casein carry so much weight that they overshadow the other parameters.
To avoid bias, most of the statistical analyses were performed on the data of patients who underwent OFC. However, the similarity in clinical parameters between patients allergic by SPT (group A2) and those defined as allergic by OFC (group A1), together with their difference from the not allergic group (group B), was enough to justify the "CMP allergy" label for group A2. Based on our results, it is reasonable to assume that these patients would have failed an OFC; but as we did not challenge them, we cannot be completely certain.
This study has limitations inherent to its retrospective nature. The definition of "breathing difficulties" was based on the reports written in the medical records and is difficult to define more precisely. A second limitation relates to the inclusion criteria. We deliberately excluded children without at least one positive SPT result and a diagnosis of CMA. This exclusion created a selection bias toward allergic children, and therefore we cannot assess the NPV of SPT with CE or FM in the general pediatric population referred for allergy evaluation because of a suspected allergic reaction to cow's milk. Focusing on this specific population, especially those who underwent OFC, made our results more accurate in assessing the value of CE and FM. If the SPT results can accurately predict recovery from CMA, we can assume that they can also accurately assess children with a low suspicion of CMA, but further studies are needed. Moreover, there was a bias in the patients selected for OFC, probably in favor of those with a good chance of passing the OFC without an allergic reaction; and as OFC is the "gold standard", it is possible that we labeled non-allergic patients as allergic (group A2). Nevertheless, there were significant differences between the not allergic patients and those allergic by OFC, with no significant differences between the patients allergic by OFC and by SPT.
In conclusion, SPT serves as a decision-support tool for deciding which patients have a good chance of passing an OFC without an allergic reaction, but it cannot replace the OFC. In patients suspected of having IgE-mediated CMA, before deciding to perform an OFC, it is advisable to perform SPT with at least two different extracts, always including casein. When the only material available for SPT is FM, it can still give a good sense of who is likely to pass the OFC without an allergic reaction. Further prospective studies are required to strengthen these findings.
Who is the best candidate for OFC?
SPT wheal size ≤ 3 mm with casein and/or milk extract and/or fresh milk.
Casein SPT wheal size ratio \(\frac{\text{last SPT wheal size (mm)}}{\text{first SPT wheal size (mm)}} < 0.68\).
All data generated or analyzed during this study are included in this article. The dataset used and/or analyzed during the current study is available from the corresponding author on reasonable request.
Abbreviations
CE: Commercial extracts
CM: Cow's milk
CMA: Cow's milk allergy
FF: Fresh food
FM: Fresh milk
OFC: Oral food challenge
SPT: Skin prick test
Lopes JP, Sicherer S. Food allergy: epidemiology, pathogenesis, diagnosis, prevention, and treatment. Curr Opin Immunol. 2020;66:57–64.
Schoemaker AA, Sprikkelman AB, Grimshaw KE, Roberts G, Grabenhenrich L, Rosenfeld L, et al. Incidence and natural history of challenge-proven cow's milk allergy in European children–EuroPrevall birth cohort. Allergy. 2015;70(8):963–72.
Katz Y, Rajuan N, Goldberg MR, Eisenberg E, Heyman E, Cohen A, et al. Early exposure to cow's milk protein is protective against IgE-mediated cow's milk protein allergy. J Allergy Clin Immunol. 2010;126(1):77-82 e1.
Peters RL, Gurrin LC, Allen KJ. The predictive value of skin prick testing for challenge-proven food allergy: a systematic review. Pediatr Allergy Immunol. 2012;23(4):347–52.
Eigenmann PA, Sampson HA. Interpreting skin prick tests in the evaluation of food allergy in children. Pediatr Allergy Immunol. 1998;9(4):186–91.
Hill DJ, Heine RG, Hosking CS. The diagnostic value of skin prick testing in children with food allergy. Pediatr Allergy Immunol. 2004;15(5):435–41.
Uncuoglu A, Cogurlu MT, Eser Simsek I, Ergul N, Baydemir C, Aydogan M. Predicting outgrowth of IgE-mediated cow's milk allergy: Diagnostic tests in children under two years of age. Allergol Immunopathol. 2019;47(5):449–56.
Uncuoglu A, Eser Simsek I, Cogurlu MT, Baydemir C, Aydogan M. Utility of fresh egg skin prick test and egg yolk specific immunoglobulin E for outgrowth. Ann Allergy Asthma Immunol. 2020;125(4):418–24.
Calvani M, Berti I, Fiocchi A, Galli E, Giorgio V, Martelli A, et al. Oral food challenge: safety, adherence to guidelines and predictive value of skin prick testing. Pediatr Allergy Immunol. 2012;23(8):755–61.
Calvani M, Alessandri C, Frediani T, Lucarelli S, Miceli Sopo S, Panetta V, et al. Correlation between skin prick test using commercial extract of cow's milk protein and fresh milk and food challenges. Pediatr Allergy Immunol. 2007;18(7):583–8.
Rosen JP, Selcow JE, Mendelson LM, Grodofsky MP, Factor JM, Sampson HA. Skin testing with natural foods in patients suspected of having food allergies: is it a necessity? J Allergy Clin Immunol. 1994;93(6):1068–70.
Zivanovic M, Atanaskovic-Markovic M, Medjo B, Gavrovic-Jankulovic M, Smiljanic K, Tmusic V, et al. Evaluation of food allergy in children by skin prick tests with commercial extracts and fresh foods, specific IgE and open oral food challenge—our five years experience in food allergy work-up. Iran J Allergy Asthma Immunol. 2017;16(2):127–32.
Rance F, Juchet A, Bremont F, Dutau G. Correlations between skin prick tests using commercial extracts and fresh foods, specific IgE, and food challenges. Allergy. 1997;52(10):1031–5.
This study was conducted as part of the graduation requirements for Nadav Giorno, MD, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel.
The study was not funded.
Allergy and Clinical Immunology Unit, Meir Medical Center, 44261, Kfar Saba, Israel
Idit Lachover-Roth, Anat Cohen-Engler, Yossi Rosman, Keren Meir-Shafrir & Ronit Confino-Cohen
Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
Idit Lachover-Roth, Nadav Giorno, Yossi Rosman & Ronit Confino-Cohen
Meir Medical Center, Kfar Saba, Israel
Tzipi Hornik-Lurie
Idit Lachover-Roth
Nadav Giorno
Anat Cohen-Engler
Yossi Rosman
Keren Meir-Shafrir
Ronit Confino-Cohen
IL-R: Substantial contributions to conception and design, analysis, and interpretation of data, drafting the article. NG: Acquisition of data, drafting the article. TH-L: Data analysis, interpretation of data. AC-E: Revised the manuscript critically for important intellectual content. YR: Revised the manuscript critically for important intellectual content. KM-S: Revised the manuscript critically for important intellectual content. RC-C: Substantial contributions to conception and design of the study, analysis, and interpretation of data, revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.
Correspondence to Idit Lachover-Roth.
This study protocol was reviewed and approved by the Ethics Committee of Meir Medical Center, IRB Number 0083-18 MMC. Informed consent was not required by the ethics committee of Meir Medical Center since the study was a retrospective study.
The authors declare no competing financial interests.
Additional file 1: Table S1. Number of subjects who underwent the first and last SPT. Table S2. Logistic regression models for the different skin prick test results.
Lachover-Roth, I., Giorno, N., Hornik-Lurie, T. et al. Cow's milk allergy skin tests: fresh milk, commercial extracts, or both? Allergy Asthma Clin Immunol 19, 6 (2023). https://doi.org/10.1186/s13223-023-00763-w
Review. Published online: 14 January 2022
Rajpal Sirohi*
Light: Advanced Manufacturing 3, Article number: 1 (2022)
Alabama A&M University, Normal, AL 35762, USA
Corresponding author:
Rajpal Sirohi ([email protected])
Revised: 21 December 2021
Accepted article preview online: 31 December 2021
doi: https://doi.org/10.37188/lam.2022.001
This paper presents the activities in the field of shearography in chronological order and highlights the great potential of this speckle-based measurement technology. After a brief introduction, the basic theory of shearography is presented. Shear devices, phase-shifting arrangements, and multiplexed shearography systems are described. Finally, the application areas where shearography has been accepted and successfully used as a tool are presented.
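For orientation, the "basic theory" referred to above reduces, in the standard speckle-shearing literature (this relation is textbook material, not quoted from the present paper), to a phase change proportional to the displacement derivative across the shear:

$$\Delta\phi = \frac{2\pi\,\delta x}{\lambda}\left[\sin\theta\,\frac{\partial u}{\partial x} + \left(1+\cos\theta\right)\frac{\partial w}{\partial x}\right],$$

where \(\lambda\) is the wavelength, \(\delta x\) the applied shear, \(\theta\) the illumination angle, and \(u\), \(w\) the in-plane and out-of-plane displacement components; for normal illumination and viewing (\(\theta = 0\)) this reduces to \(\Delta\phi = (4\pi/\lambda)\,\delta x\,\partial w/\partial x\).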
Keywords: Electronic speckle pattern interferometry; Shearography
[1] Leendertz, J. A. Interferometric displacement measurement on scattering surfaces utilizing speckle effect. Journal of Physics E:Scientific Instruments 3, 214-218 (1970). doi: 10.1088/0022-3735/3/3/312
[2] Erf, R. K. (Ed.) Speckle Metrology. (New York: Academic Press, 1978).
[3] Sirohi, R. S. (Ed.) Speckle Metrology. (New York: CRC Press, 1993).
[4] Patorski, K. & Kujawinska, M. Handbook of the Moiré Fringe Technique. (Amsterdam: Elsevier, 1993).
[5] Kaufmann, G. H. (Ed.), Advances in Speckle Metrology and Related Techniques-. (Weinheim: Wiley-VCH 2011).
[6] Viotti, M. R. & Albertazzi, A. Jr. Robust Speckle Metrology: Techniques for Stress Analysis and NDT. (Bellingham: SPIE Press, 2014).
[7] Sollid, J. E. Holographic interferometry applied to measurements of small static displacements of diffusely reflecting surfaces. Applied Optics 8, 1587-1595 (1969). doi: 10.1364/AO.8.001587
[8] Leendertz, J. A. & Butters, J. N. An image-shearing speckle-pattern interferometer for measuring bending moments. Journal of Physics E:Scientific Instruments 6, 1107-1110 (1973). doi: 10.1088/0022-3735/6/11/019
[9] Hung, Y. Y. & Taylor, C. E. Speckle-shearing interferometric camera -- a tool for measurement of derivatives of surface-displacement. Proceedings of SPIE 0041, Developments in Laser Technology II. San Diego, United States: SPIE, 1973.
[10] Hung, Y. Y. A speckle-shearing interferometer: a tool for measuring derivatives of surface displacement. Optics Communications 11, 132-135 (1974). doi: 10.1016/0030-4018(74)90200-4
[11] Hung, Y. Y. & Taylor, C. E. Measurement of slopes of structural deflections by speckle-shearing interferometry. Experimental Mechanics 14, 281-285 (1974). doi: 10.1007/BF02322832
[12] Hariharan, P. Speckle-shearing interferometry: a simple optical system. Applied Optics 14, 2563 (1975). doi: 10.1364/AO.14.002563
[13] Hung, Y. Y., Rowlands, R. E. & Daniel, I. M. Speckle-shearing interferometric technique: a full-field strain gauge. Applied Optics 14, 618-622 (1975). doi: 10.1364/AO.14.000618
[14] Debrus, S. Speckle shearing interferometer using a Savart plate. Optics Communications 20, 257-261 (1977). doi: 10.1016/0030-4018(77)90346-7
[15] Assa, A., Betser, A. A. & Politch, J. Recording slope and curvature contours of flexed plates using a grating shearing interferometer. Applied Optics 16, 2504-2513 (1977). doi: 10.1364/AO.16.002504
[16] Hovanesian, J. D., Hung, Y. Y. & Durelli, A. J. New optical method to determine vibration induced strains with variable sensitivity after recording. Proceedings of the Conference on Experimental Stress Analysis. Smolenice, Czechoslovakia, 1978.
[17] Hung, Y. Y. et al. Optical method for measuring contour slopes of an object. Applied Optics 17, 128-131 (1978). doi: 10.1364/AO.17.000128
[18] Hung, Y. Y. & Durelli, A. J. Simultaneous measurement of three displacement derivatives using a multiple image-shearing interferometric camera. The Journal of Strain Analysis for Engineering Design 14, 81-88 (1979). doi: 10.1243/03093247V143081
[19] Hung, Y. Y. & Liang, C. Y. Image-shearing camera for direct measurement of surface strains. Applied Optics 18, 1046-1051 (1979). doi: 10.1364/AO.18.001046
[20] Nakadate, S., Yatagai T. & Saito, H. Digital speckle-pattern shearing interferometry. Applied Optics 19, 4241-4246 (1980). doi: 10.1364/AO.19.004241
[21] Hung, Y. Y. Shearography: a new optical method for strain measurement and nondestructive testing. Optical Engineering 21, 391-395 (1982).
[22] Murthy, R. K., Sirohi, R. S. & Kothiyal, M. P. Speckle shearing interferometry: a new method. Applied Optics 21, 2865-2867 (1982). doi: 10.1364/AO.21.002865
[23] Murthy, R. K., Sirohi, R. S. & Kothiyal, M. P. Detection of defects in plates and diaphragms using a split-lens speckle-shearing interferometer. NDT International 15, 329-333 (1982). doi: 10.1016/0308-9126(82)90070-0
[24] Mohanty, R. K., Joenathan, C. & Sirohi, R. S. Speckle-shear interferometry with double dove prisms. Optics Communications 47, 27-30 (1983). doi: 10.1016/0030-4018(83)90330-9
[25] Mohanty, R. K., Joenathan, C. & Sirohi, R. S. Measurement of tilt with speckle-shear interferometry. Journal of Optics 12, 118-121 (1983). doi: 10.1007/BF03549086
[26] Joenathan, C., Mohanty, R. K. & Sirohi, R. S. Multiplexing in speckle shear interferometry. Optica Acta:International Journal of Optics 31, 681-692 (1984). doi: 10.1080/713821556
[27] Sharma, D. K., Sirohi, R. S. & Kothiyal, M. P. Non destructive testing of a diaphragm by multiaperture speckle shearing technique. Optik 66, 133-136 (1984).
[28] Sirohi, R. S. Speckle shear interferometry. Optics & Laser Technology 16, 251-254 (1984).
[29] Murthy, R. K., Mohanty, R. K., Sirohi, R. S. & Kothiyal, M. P. Radial speckle shearing interferometer and its engineering applications. Optik 67, 85-94 (1984).
[30] Iwahashi, Y., Iwata, K. & Nagata, R. Single-aperture speckle shearing interferometry with a single grating. Applied Optics 23, 247-249 (1984). doi: 10.1364/AO.23.000247
[31] Mohanty, R. K., Joenathan. C. & Sirohi, R. S. Speckle fringe sharpening in speckle shear interferometry. Journal of Optics 13, 49-51 (1984).
[32] Sharma, D. K., Sirohi, R. S. & Kothiyal, M. P. Simultaneous measurement of slope and curvature with a three-aperture speckle shearing interferometer. Applied Optics 23, 1542-1546 (1984). doi: 10.1364/AO.23.001542
[33] Joenathan, C., Mohanty, R. K. & Sirohi, R. S. On the methods of multiplexing in speckle shear interferometry. Optik 69, 8-12 (1984).
[34] Sharma, D. K., Sirohi, R. S. & Kothiyal, M. P. Multiaperture speckle shearing arrangements for stress analysis. Optics Communications 49, 313-317 (1984). doi: 10.1016/0030-4018(84)90239-6
[35] Joenathan, C., Mohanty, R. K. & Sirohi, R. S. Curvature by halo combination in speckle shear interferometry. Journal of Optics 13, 114-119 (1984). doi: 10.1007/BF03549114
[36] Mohanty, R. K., Joenathan. C. & Sirohi, R. S. Fringe sharpening and information coupling in speckle shear interferometry. Applied Optics 23, 4596-4600 (1984). doi: 10.1364/AO.23.004596
[37] Joenathan, C., Mohanty, R. K. & Sirohi, R. S. Hololens in speckle and speckle shear interferometry. Applied Optics 24, 1294-1298 (1985). doi: 10.1364/AO.24.001294
[38] Joenathan, C. & Sirohi, R. S. Holographic gratings in speckle shearing interferometry. Applied Optics 24, 2750-2751 (1985). doi: 10.1364/AO.24.002750
[39] Mohanty, R. K., Joenathan. C. & Sirohi, R. S. Speckle and speckle-shearing interferometers combined for the simultaneous determination of out-of-plane displacement and slope. Applied Optics 24, 3106-3109 (1985). doi: 10.1364/AO.24.003106
[40] Takezaki J. & Hung, Y. Y. Direct measurement of flexural strains in plates by shearography. Journal of Applied Mechanics 53, 125-129 (1986). doi: 10.1115/1.3171698
[41] Sharma, D. K., Mohan, N. K. & Sirohi, R. S. A holographic speckle shearing technique for the measurement of out-of-plane displacement, slope and curvature. Optics Communications 57, 230-235 (1986). doi: 10.1016/0030-4018(86)90088-X
[42] Pyzin, G. P., Artemenko, S. B. & Ignatev, A. G. Compensation for rigid and deformational displacements in speckle-shearing interferometry. Soviet Physics Technical Physics 31, 528-530 (1986).
[43] Iwahashi, Y., Iwata, K. & Nagata, R. Simultaneous measurement of three slope distributions with single-aperture speckle shearing interferometer. Applied Optics 25, 328-329 (1986). doi: 10.1364/AO.25.000328
[44] Templeton, D. W. & Hung, Y. Y. Computerization of data deduction in shearography. Proceedings of SPIE 0814, Photomechanics and Speckle Metrology. San Diego, CA, United States: SPIE, 1987.
[45] Hung, Y. Y. Nondestructive evaluation by electronic shearography. Proceedings of the 16th Symposium on Nondestructive Evaluation. San Antonio, Texas, 1987.
[46] Hung, M. Y. Y. et al. Fast detection of residual stresses by shearography. Proceedings of SPIE 0955, Industrial Laser Interferometry II. Dearborn, MI, United States: SPIE, 1988.
[47] Hung, Y. Y., Hovanesian, J. D. & Takezaki, J. A fringe carrier technique for unambiguous determination of fringe orders in shearography. Optics and Lasers in Engineering 8, 73-81 (1988). doi: 10.1016/0143-8166(88)90044-9
[48] Ganesan, A. R., Joenathan. C. & Sirohi, R. S. Sharpening of fringes in digital speckle pattern interferometry. Applied Optics 27, 2099-2100 (1988). doi: 10.1364/AO.27.002099
[49] Ganesan, A. R., Sharma, D. K. & Kothiyal, M. P. Universal digital speckle shearing interferometer. Applied Optics 27, 4731-4734 (1988). doi: 10.1364/AO.27.004731
[50] Winther, S. 3D strain measurements using ESPI. Optics and Lasers in Engineering 8, 45-57 (1988). doi: 10.1016/0143-8166(88)90006-1
[51] Yamaguchi, I., Takemori, T. & Kobayashi, K. Stabilized and accelerated speckle strain gauge. Proceedings of SPIE 1162, Laser Interferometry: Quantitative Analysis of Interferograms: Third in a Series. San Diego, United States: SPIE, 1989.
[52] Hung, Y. Y. Shearography: a novel and practical approach for nondestructive inspection. Journal of Nondestructive Evaluation 8, 55-67 (1989). doi: 10.1007/BF00565631
[53] Templeton, D. W. & Hung, Y. Y. Shearographic fringe carrier method for data reduction computerization. Optical Engineering 28, 30-34 (1989).
[54] Lamprecht, H. A. & van Rooyen, E. Sandwich shearography: a technique for the cancellation of unknown rigid body displacements in shearography. Proceedings of SPIE 1162, Laser Interferometry: Quantitative Analysis of Interferograms: Third in a Series. San Diego, United States: SPIE, 1989.
[55] Klumpp, P. A. Simple spatial filtering for shearograms. Optics & Laser Technology 21, 105-111 (1989).
[56] Hung, Y. Y. & Hovanesian, J. D. Fast detection of residual stresses in an industrial environment by thermoplastic-based shearography. Proceedings of 1990 SEM Spring Conference on Experimental Mechanics, Albuquerque. Bethel: SEM, 1990.
[57] Joenathan, C. & Torroba, R. Simple electronic speckle-shearing-pattern interferometer. Optics Letters 15, 1159-1161 (1990). doi: 10.1364/OL.15.001159
[58] Aiyer, A. A. Shearography: a potential portable on-site nondestructive evaluation inspection tool. Proceedings of SPIE 1212, Practical Holography IV. Los Angeles, CA, United States: SPIE, 1990.
[59] Toh, S. L. et al. Application of shearography in nondestructive testing of composite plates. Journal of Materials Processing Technology 23, 267-275 (1990). doi: 10.1016/0924-0136(90)90245-P
[60] Tay, C. J. et al. The measurement of slope using shearography. Optics and Lasers in Engineering 14, 13-24 (1991). doi: 10.1016/0143-8166(91)90035-R
[61] Owner-Petersen, M. Digital speckle pattern shearing interferometry: limitations and prospects. Applied Optics 30, 2730-2738 (1991). doi: 10.1364/AO.30.002730
[62] Kadono, H., Toyooka, S. & Iwasaki, Y. Speckle-shearing interferometry using a liquid-crystal cell as a phase modulator. Journal of the Optical Society of America A 8, 2001-2008 (1991). doi: 10.1364/JOSAA.8.002001
[63] Toh, S. L. et al. Flaw detection in composites using time-average shearography. Optics & Laser Technology 23, 25-30 (1991).
[64] Newman, J. W. Shearographic inspection of aircraft structure. Materials Evaluation 49, 1106-1109 (1991).
[65] Long, K., Hung, Y. Y. & Der Hovanesian, J. Measurement of residual stresses in plastic materials by electronic shearography. Proceedings of SPIE 1554, Second International Conference on Photomechanics and Speckle Metrology. San Diego, CA, United States: SPIE, 1991.
[66] Yamaguchi, I. & Kobayashi, K. Material testing by the laser speckle strain gauge. Proceedings of SPIE 1554, Second International Conference on Photomechanics and Speckle Metrology. San Diego, CA, United States: SPIE, 1991.
[67] Shang, H. M. et al. Interpretation of holographic and shearographic fringes for estimating the size and depth of debonds in laminated plates. Proceedings of SPIE 1554, Second International Conference on Photomechanics and Speckle Metrology. San Diego, CA, United States: SPIE, 1991.
[68] Hung, Y. Y. Electronic shearography versus ESPI in nondestructive evaluation. Proceedings of SPIE 1554, Second International Conference on Photomechanics and Speckle Metrology. San Diego, CA, United States: SPIE, 1991.
[69] Shang, H. M., Soh, C. C. & Chau, F. S. The use of carrier fringes in shearography for locating and sizing debonds in GRP plates. Composites Engineering 1, 157-165 (1991). doi: 10.1016/0961-9526(91)90016-L
[70] Qin, Y. W., Wang, J. Q. & Ji, X. H. Study of electronic shearing speckle technique. Proceedings of SPIE 1554, Second International Conference on Photomechanics and Speckle Metrology. San Diego, CA, United States: SPIE, 1991.
[71] Xu, B. Q. & Wu, X. P. Fringe formation in speckle shearing interferometry. Proceedings of SPIE 1554, Second International Conference on Photomechanics and Speckle Metrology. San Diego, CA, United States: SPIE, 1991.
[72] Hung, Y. Y. Electronic shearography for measurement of time-dependent deformation. Proceedings of the VII International Congress on Experimental Mechanics, vol. 2. Las Vegas, Nevada, 1992.
[73] Tay, C. J. et al. Measurements of surface coordinates and slopes by shearography. Optics & Laser Technology 24, 209-213 (1992).
[74] Chau, F. S. & Ng, T. W. A real-time digital shearing speckle interferometer. Measurement Science and Technology 3, 381-383 (1992). doi: 10.1088/0957-0233/3/4/008
[75] Ng, T. W. & Chau, F. S. Suppression of fringe-modulating in-plane displacement fields in shearing speckle interferometry. Proceedings of SPIE 1821, Industrial Applications of Optical Inspection, Metrology, and Sensing. Boston, MA, United States: SPIE, 1992.
[76] Chau, F. S. & Ng, T. W. Determining the optimum image recording conditions in shearography based on spatial frequency considerations. Proceedings of SPIE 1821, Industrial Applications of Optical Inspection, Metrology, and Sensing. Boston, MA, United States: SPIE, 1992.
[77] Tyson II, J. & Feferman, B. Inspection of Fabricated Fuselage Panels Using Electronic Shearography. Report AD-A257639 (1992).
[78] Pryputniewicz, R. J. Electronic shearography and electronic holography working side by side. Proceedings of SPIE 1821, Industrial Applications of Optical Inspection, Metrology, and Sensing. Boston, MA, United States: SPIE, 1992.
[79] Tay, C. J. et al. Curvature measurement of hydroformed axisymmetrical shells using shearography. Experimental Techniques 17, 27-30 (1993).
[80] Spooren, R., Dyrseth, A. A. & Vaz, M. Electronic shear interferometry: application of a (double-) pulsed laser. Applied Optics 32, 4719-4127 (1993). doi: 10.1364/AO.32.004719
[81] Deaton, J. B. Jr. & Rogowski, R. S. Applications of electronic shearography for the inspection of airskin structures. Proceedings of SPIE 2001, Nondestructive Inspection of Aging Aircraft. San Diego, CA, United States: SPIE, 1993.
[82] Wang, B. S. & Zhang, X. Electronic speckle carrier shearography for automated deformation measurement. Proceedings of SPIE 2003, Interferometry VI: Techniques and Analysis. San Diego, CA, United States: SPIE, 1993.
[83] Deaton, J. B. Jr. & Rogowski, R. S. Electronic shearography for nondestructive evaluation: the influence of the field of view and the shearing angle. Proceedings of SPIE 2066, Industrial Optical Sensing and Metrology: Applications and Integration. Boston, MA, United States: SPIE, 1993.
[84] Ng, T. W. & Chau, F. S. Performance of charge-coupled devices in digital shearography. Proceedings of SPIE 2066, Industrial Optical Sensing and Metrology: Applications and Integration. Boston, MA, United States: SPIE, 1993.
[85] Safai, M. Nondestructive evaluation of aircraft fuselage panels with electronic shearography. Proceedings of SPIE 2066, Industrial Optical Sensing and Metrology: Applications and Integration. Boston, MA, United States: SPIE, 1993.
[86] Mohan, N. K., Saldner, H. O. & Molin, N. E. Electronic shearography applied to static and vibrating objects. Optics Communications 108, 197-202 (1994). doi: 10.1016/0030-4018(94)90648-3
[87] Ng, T. W. & Chau, F. S. A digital shearing speckle interferometry technique for modal analysis. Applied Acoustics 42, 175-185 (1994). doi: 10.1016/0003-682X(94)90006-X
[88] Hung, Y. Y. & Long, K. Evaluation of residual stress in concrete structures by digital shearography. Proceedings of SPIE 2004, Interferometry VI: Applications. San Diego, CA, United States: SPIE, 1993.
[89] Maji, A. K., Satpathi, D. & Zawaydeh, S. Electronic shearography for inspecting steel bridges: a feasibility study. Proceedings of SPIE 2004, Interferometry VI: Applications. San Diego, CA, United States: SPIE, 1993.
[90] Ettemeyer, A. Nondestructive inspection with shearography. Proceedings of SPIE 2248, Optical Measurements and Sensors for the Process Industries. Frankfurt, Germany: SPIE, 1994.
[91] Mohan, N. K. et al. Separation of the influence of in-plane displacement in multiaperture speckle shear interferometry. Optical Engineering 33, 1973-1982 (1994). doi: 10.1117/12.167154
[92] Shakher, C. & Nirala, A. K. Measurement of temperature using speckle shearing interferometry. Applied Optics 33, 2125-2127 (1994). doi: 10.1364/AO.33.002125
[93] Hung, Y. Y., Tang, S. & Hovanesian, J. D. Real-time shearography for measuring time-dependent displacement derivatives. Experimental Mechanics 34, 89-92 (1994). doi: 10.1007/BF02328445
[94] Tay, C. J. et al. On the determination of slope by shearography. Optics and Lasers in Engineering 20, 207-217 (1994). doi: 10.1016/S0143-8166(94)90014-0
[95] Tay, C. J. et al. Direct determination of second-order derivatives in plate bending using multiple-exposure shearography. Optics & Laser Technology 26, 91-98 (1994).
[96] Valera, J. D. & Jones, J. D. C. Phase stepping in fiber-based speckle shearing interferometry. Optics Letters 19, 1161-1163 (1994). doi: 10.1364/OL.19.001161
[97] Steinchen, W. et al. Electronic shearography (ESPSI) for direct measurement of strains. Proceedings of SPIE 2248, Optical Measurements and Sensors for the Process Industries. Frankfurt, Germany: SPIE, 1994.
[98] Bird, L. Evaluating laser shearography for nondestructive testing at the Kennedy Space Center. Proceedings of SPIE 2349, Industrial Optical Sensors for Metrology and Inspection. Boston, MA, United States: SPIE, 1994.
[99] Schulz, B. Electronic speckle pattern interferometrie through shearography. Proceedings of SPIE 2358, First International Conference on Vibration Measurements by Laser Techniques: Advances and Applications. Ancona, Italy: SPIE, 1994.
[100] del Carretto, P. & Perlo, P. P. Compact multibeam shearography for deformation and vibration observation. Proceedings of SPIE 2358, First International Conference on Vibration Measurements by Laser Techniques: Advances and Applications. Ancona, Italy: SPIE, 1994.
[101] Satpathi, D. & Maji, A. K. Electronic shearography for bridge inspection. Proceedings of SPIE 2446, Smart Structures and Materials 1995: Smart Systems for Bridges, Structures, and Highways. San Diego, CA, United States: SPIE, 1995.
[102] Davis, C. K. et al. Shearographic nondestructive evaluation of Space Shuttle thermal protection systems. Proceedings of SPIE 2455, Nondestructive Evaluation of Aging Aircraft, Airports, Aerospace Hardware, and Materials. Oakland, CA, United States: SPIE, 1995.
[103] Newman, J. W. Shearography nondestructive evaluation techniques for aerospace. Proceedings of SPIE 2455, Nondestructive Evaluation of Aging Aircraft, Airports, Aerospace Hardware, and Materials. Oakland, CA, United States: SPIE, 1995.
[104] Bohn, C. E. Shearographic nondestructive inspection on the B-2 program. Proceedings of SPIE 2455, Nondestructive Evaluation of Aging Aircraft, Airports, Aerospace Hardware, and Materials. Oakland, CA, United States: SPIE, 1995.
[105] Hung, Y. Y. Automated shearography for nondestructive evaluation and strain measurement. Proceedings of SPIE 2455, Nondestructive Evaluation of Aging Aircraft, Airports, Aerospace Hardware, and Materials. Oakland, CA, United States: SPIE, 1995.
[106] Spicer, J. W. M. et al. Thermal stressing techniques for flaw characterization with shearography. Proceedings of SPIE 2455, Nondestructive Evaluation of Aging Aircraft, Airports, Aerospace Hardware, and Materials. Oakland, CA, United States: SPIE, 1995.
[107] Chatters, T. C., Pouet, B. F. & Krishnaswamy, S. Additive-subtractive phase-modulated shearography with synchronized acoustic stressing. Experimental Mechanics 35, 159-165 (1995). doi: 10.1007/BF02326475
[108] Shang, H. M., Tham, L. M. & Chau, F. S. Shearographic and holographic assessment of defective laminates with bond-lines of different elasticities. Journal of Engineering Materials and Technology 117, 322-329 (1995). doi: 10.1115/1.2804546
[109] Lape, D., Newman, J. W. & Craig, D. Advanced laser shearography inspection of turbo-fan engine composite fan cases. Proceedings of SPIE 2455, Nondestructive Evaluation of Aging Aircraft, Airports, Aerospace Hardware, and Materials. Oakland, CA, United States: SPIE, 1995.
[110] Sim, C. W., Chau, F. S. & Toh, S. L. Determination of fractional fringe order by phase shifting in real-time shearography. Journal of Modern Optics 42, 271-277 (1995). doi: 10.1080/09500349514550261
[111] Steinchen, W. et al. Out-of-plane and in-plane strain measured by shearography. Proceedings of SPIE 2509, Smart Structures: Optical Instrumentation and Sensing Systems. Munich, Germany: SPIE, 1995.
[112] Yang, L. X. et al. Precision measurement and nondestructive testing by means of digital phase shifting speckle pattern and speckle pattern shearing interferometry. Measurement 16, 149-160 (1995). doi: 10.1016/0263-2241(95)00020-0
[113] Huang, J. R. & Tatam, R. P. Optoelectronic shearography: two wavelength slope measurement. Proceedings of SPIE 2544, Interferometry VII: Techniques and Analysis. San Diego, CA, United States: SPIE, 1995.
[114] Griffen, C. T., Hung, Y. Y. & Chen, F. Three-dimensional shape measurement using digital shearography. Proceedings of SPIE 2545, Interferometry VII: Applications. San Diego, CA, United States: SPIE, 1995.
[115] Hung, Y. Y. & Griffen, C. T. Digital shearography versus TV-holography for vibration measurement. Proceedings of the 15th Biennial Conference on Mechanical Vibration and Noise and 1995 ASME Design Engineering Technical Conferences, Vol. 3. Boston, Massachusetts, 1995.
[116] Ng, T. W. Shear measurement in digital speckle shearing interferometry using digital correlation. Optics Communications 115, 241-244 (1995). doi: 10.1016/0030-4018(94)00702-V
[117] Chen, F., Griffen, C. T. & Hung, Y. Y. Stroboscopic phase shifting shearography for automated vibration measurement. Proceedings of the 15th Biennial Conference on Mechanical Vibration and Noise and 1995 ASME Design Engineering Technical Conferences, Vol. 3. Boston, Massachusetts, 1995.
[118] Steinchen, W. et al. Application of shearography to quality assurance. Journal of Materials Processing Technology 52, 141-150 (1995). doi: 10.1016/0924-0136(94)01435-4
[119] Tay, C. J. et al. Multiple-image shearography: a direct method to determine curvatures. Applied Optics 34, 2202-2206 (1995). doi: 10.1364/AO.34.002202
[120] Valera, J. D. R. & Jones, J. D. C. Vibration analysis by modulated time-averaged speckle shearing interferometry. Measurement Science and Technology 6, 965-970 (1995). doi: 10.1088/0957-0233/6/7/017
[121] Buerkle, L. & Joenathan, C. Electronic speckle pattern shearing interferometer using a holographic grating. Proceedings of SPIE 2622, Optical Engineering Midwest '95. Chicago, IL, United States: SPIE, 1995.
[122] Griffen, C. T., Chen, F. & Hung, Y. Y. Measurement of time-dependent displacement using dual speckle pattern phase-shifting TV holography. Proceedings of SPIE 2622, Optical Engineering Midwest '95. Chicago, IL, United States: SPIE, 1995.
[123] Steinchen, W., Yang, L. X. & Kupfer, G. Vibration analysis by digital shearography. Proceedings of SPIE 2868, Second International Conference on Vibration Measurements by Laser Techniques: Advances and Applications. Ancona, Italy: SPIE, 1996.
[124] Valera, J. D. R., Jones, J. D. C. & Løkberg, O. J. Exact vibration amplitude derivative measurement with TV shearography. Measurement Science and Technology 7, 918-922 (1996). doi: 10.1088/0957-0233/7/6/010
[125] Hung, Y. Y., Griffen, C. T. & Chen, F. High speed shearography for measuring transient deformation and vibration. Proceedings of the VIII International Congress on Experimental Mechanicals. Nashville, Tennessee, 1996.
[126] Pedrini, G., Zou, Y. L. & Tiziani, H. J. Quantitative evaluation of digital shearing interferogram using the spatial carrier method. Pure and Applied Optics:Journal of the European Optical Society Part A 5, 313-321 (1996). doi: 10.1088/0963-9659/5/3/008
[127] Bisle, W. J., Scherling, D. & Tober, G. Phase stepping shearography for testing commercial aircraft structures: an application review of advanced image processing techniques for shearography. in Review of Progress in Quantitative Nondestructive Evaluation (eds Thompson, D. O. & Chimenti, D. E.) (Boston: Springer, 1996).
[128] Santhanakrishnan, T. et al. Slope change contouring for 3D deeply curved objects by multi-aperture speckle shear interferometry. Optik 104, 27-31 (1996).
[129] Long, K. W. 3-Beam phase shift shearography for simultaneous measurement of in-plane and out-of-plane displacements and its applications to residual stress measurements. PhD thesis, Oakland University, 1996.
[130] Burnett, M. & Bryanston-Cross, P. J. Measurements of transonic shock structures using shearography. Proceedings of SPIE 2861, Laser Interferometry VIII: Applications. Denver, CO, United States: SPIE, 1996.
[131] Mohan, N. K., Saldner, H. O. & Molin, N. E. Recent applications of TV holography and shearography. Proceedings of SPIE 2861, Laser Interferometry VIII: Applications. Denver, CO, United States: SPIE, 1996.
[132] Rabal, H., Henao, R. & Torroba, R. Digital speckle pattern shearing interferometry using diffraction gratings. Optics Communications 126, 191-196 (1996). doi: 10.1016/0030-4018(96)00017-X
[133] Hung, Y. Y. & Wang, J. Q. Dual-beam phase shift shearography for measurement of in-plane strains. Optics and Lasers in Engineering 24, 403-413 (1996). doi: 10.1016/0143-8166(95)00098-4
[134] Chen, L. X. Complementary use of TV-holography/shearography and ultrasonic techniques for NDE of fiber-reinforced plastics. Proceedings of SPIE 2944, Nondestructive Evaluation of Materials and Composites. Scottsdale, AZ, United States: SPIE, 1996.
[135] Huang, J. R., Ford, H. D. & Tatam, R. P. Heterodyning of speckle shearing interferometers by laser diode wavelength modulation. Measurement Science and Technology 7, 1721-1727 (1996). doi: 10.1088/0957-0233/7/12/005
[136] Steinchen, W., Yang, L. X. & Schuth, M. TV-shearography for measuring 3D-strains. Strain 32, 49-57 (1996). doi: 10.1111/j.1475-1305.1996.tb00997.x
[137] Waldner, S. Removing the image-doubling in shearography by reconstruction of the displacement field. Optics Communications 127, 117-126 (1996). doi: 10.1016/0030-4018(96)00109-5
[138] Waldner, S. Removing the image-doubling in shearography: theory and application. Proceedings of SPIE 2944, Nondestructive Evaluation of Materials and Composites. Scottsdale, AZ, United States: SPIE, 1996.
[139] Hung, Y. Y. Shearography for non-destructive evaluation of composite Structures. Optics and Lasers in Engineering 24, 161-182 (1996). doi: 10.1016/0143-8166(95)00020-8
[140] Rastogi, P. K. Measurement of curvature and twist of a deformed object by electronic speckle-shearing pattern interferometry. Optics Letters 21, 905-907 (1996). doi: 10.1364/OL.21.000905
[141] Huang, J. R., Ford, H. D. & Tatam, R. P. Phase-stepped speckle shearing interferometer by source wavelength modulation. Optics Letters 21, 1421-1423 (1996). doi: 10.1364/OL.21.001421
[142] Hung, Y. Y. Shearography and applications in experimental mechanics. Proceedings of SPIE 2921, International Conference on Experimental Mechanics: Advances and Applications. Singapore, Singapore: SPIE, 1996.
[143] Weijers, A. L., van Brug, H. H. & Frankena, H. J. Real-time deformation measurement using a transportable shearography system. Proceedings of SPIE 2921, International Conference on Experimental Mechanics: Advances and Applications. Singapore, Singapore: SPIE, 1996.
[144] Tay, C. J., Shang, H. M. & Choong, D. Triple-exposure shearography for the measurement of surface slopes. Proceedings of SPIE 2921, International Conference on Experimental Mechanics: Advances and Applications. Singapore, Singapore: SPIE, 1996.
[145] Xie, L., Chau, F. S. & Toh, S. L. Nondestructive testing and evaluation using phase-shifting electronic shearography. Proceedings of SPIE 2921, International Conference on Experimental Mechanics: Advances and Applications. Singapore, Singapore: SPIE, 1996.
[146] Gundlach, A. et al. Speckle shearing interferometry using a diffractive optical beam splitter. Optical Engineering 36, 1488-1493 (1997). doi: 10.1117/1.601351
[147] Wu, X. P., Fu, S. J. & Pang, L. Y. Special Ronchi grating as the shearing element of shearing ESPI for nondestructive testing. Proceedings of SPIE 2921, International Conference on Experimental Mechanics: Advances and Applications. Singapore, Singapore: SPIE, 1996.
[148] Lee, H., Wang, L. S. & Krishnaswamy, S. Fourier processing of shearing interferometric fringe patterns. Proceedings of SPIE 2921, International Conference on Experimental Mechanics: Advances and Applications. Singapore, Singapore: SPIE, 1996.
[149] Shu, Y., Chau, F. S. & Toh, S. L. Precision phase measurement in digital speckle shearing interferometry. Proceedings of SPIE 2921, International Conference on Experimental Mechanics: Advances and Applications. Singapore, Singapore: SPIE, 1996.
[150] Sirohi, R. S. et al. Spatial phase shifting for pure in-plane displacement and displacement-derivative measurements in electronic speckle pattern interferometry (ESPI). Applied Optics 36, 5787-5791 (1997). doi: 10.1364/AO.36.005787
[151] Bison, P. G. et al. NDE of composite materials by thermal method and shearography. Proceedings of SPIE 3056, Thermosense XIX: an International Conference on Thermal Sensing and Imaging Diagnostic Applications. Orlando, FL, United States: SPIE, 1997.
[152] Patorski, K. & Olszak, A. G. Digital in-plane electronic speckle pattern shearing interferometry. Optical Engineering 36, 2010-2015 (1997). doi: 10.1117/1.601386
[153] Joenathan, C. & Buerkle, L. Electronic speckle pattern shearing interferometer using holographic gratings. Optical Engineering 36, 2473-2477 (1997). doi: 10.1117/1.601472
[154] Dymny, G., Kujawinska, M. & Waldner, S. Modified electronic speckle pattern shearing interferometry for simultaneous derivative map measurements. Proceedings of SPIE 3098, Optical Inspection and Micromeasurements II. Munich, Germany: SPIE, 1997.
[155] Steinchen, W. et al. Vibration analysis by digital speckle pattern shearing interferometry. Proceedings of SPIE 3098, Optical Inspection and Micromeasurements II. Munich, Germany: SPIE, 1997.
[156] Rastogi, P. K. An electronic pattern speckle shearing interferometer for the measurement of surface slope variations of three-dimensional objects. Optics and Lasers in Engineering 26, 93-100 (1997). doi: 10.1016/0143-8166(95)00106-9
[157] Petzing, J. N. & Tyrer, J. R. Analysis of Power ultrasonic components using shearing interferometry. Optics and Lasers in Engineering 26, 235-248 (1997). doi: 10.1016/0143-8166(95)00115-8
[158] Hung, Y. Y., Wang, J. Q. & Hovanesian, J. D. Technique for compensating excessive rigid body motion in nondestructive testing of large structures using shearography. Optics and Lasers in Engineering 26, 249-258 (1997). doi: 10.1016/0143-8166(95)00116-6
[159] Tay, C. J., Shang, H. M. & Choong, D. Cross influence of coordinate and slope related fringes during shearographic profiling. Optics and Lasers in Engineering 26, 259-278 (1997). doi: 10.1016/0143-8166(95)00117-4
[160] Nakadate, S. Phase shifting speckle shearing polarization interferometer using a birefringent wedge. Optics and Lasers in Engineering 26, 331-350 (1997). doi: 10.1016/0143-8166(95)00137-9
[161] Maas, A. A. M. & Somers, P. A. A. M. Two-dimensional deconvolution applied to phase-stepped shearography. Optics and Lasers in Engineering 26, 351-360 (1997). doi: 10.1016/0143-8166(95)00138-7
[162] Valera, J. D. R. et al. Strain and vibration analysis by fibre based speckle shearing interferometry. Optics and Lasers in Engineering 26, 361-376 (1997). doi: 10.1016/0143-8166(95)00144-1
[163] Tyrer, J. R. & Petzing, J. N. In-plane electronic speckle pattern shearing interferometry. Optics and Lasers in Engineering 26, 395-406 (1997). doi: 10.1016/0143-8166(95)00139-5
[164] Aebischer, H. A. & Waldner, S. Strain distributions made visible with image-shearing speckle pattern interferometry. Optics and Lasers in Engineering 26, 407-420 (1997). doi: 10.1016/0143-8166(95)00140-9
[165] Hung, Y. Y. Digital shearography versus TV-holography for non-destructive evaluation. Optics and Lasers in Engineering 26, 421-436 (1997). doi: 10.1016/0143-8166(95)00141-7
[166] Mohan, N. K. & Sirohi, R. S. Fringe formation in symmetric three-aperture speckle shear interferometry: an analysis. Optics and Lasers in Engineering 26, 437-447 (1997). doi: 10.1016/0143-8166(95)00142-5
[167] Ng, T. W. Optical distance sensing using digital speckle shearing interferometry. Optics and Lasers in Engineering 26, 449-460 (1997). doi: 10.1016/0143-8166(95)00143-3
[168] Toh, S. L., Chau, F. S. & Sim, C. W. Theoretical basis for real-time shearography. Journal of Modern Optics 44, 279-286 (1997). doi: 10.1080/09500349708241870
[169] Santhanakrishnan, T., Mohan, N. K. & Sirohi, R. S. Oblique observation speckle shear interferometers for slope change contouring. Journal of Modern Optics 44, 831-839 (1997). doi: 10.1080/09500349708230698
[170] Sirohi, R. S. & Mohan, N. K. An in-plane insensitive multiaperture speckle shear interferometer for slope measurement. Optics & Laser Technology 29, 415-417 (1997).
[171] Hathaway, R. B., Hovanesian, J. D. & Hung, M. Y. Y. Residual stress evaluation using shearography with large-shear displacements. Optics and Lasers in Engineering 27, 43-60 (1997). doi: 10.1016/S0143-8166(95)00011-9
[172] Hung, M. Y. Y., Long, K. W. & Wang, J. Q. Measurement of residual stress by phase shift shearography. Optics and Lasers in Engineering 27, 61-73 (1997). doi: 10.1016/S0143-8166(95)00013-5
[173] Hung, Y. Y. Automated nondestructive shearographic inspection of debonds in composites using multi-frequency vibrational stressing. Proceedings of 1997 SEM Spring Conference. Bellevue, Washington, 1997.
[174] Huang, J. R., Ford, H. D. & Tatam, R. P. Slope measurement by two-wavelength electronic shearography. Optics and Lasers in Engineering 27, 321-333 (1997). doi: 10.1016/0143-8166(95)00124-7
[175] Fomitchov, P. A. & Krishnaswamy, S. A compact dual-purpose camera for shearography and electronic speckle-pattern interferometry. Measurement Science and Technology 8, 581-583 (1997). doi: 10.1088/0957-0233/8/5/019
[176] Steinchen, W. et al. Nondestructive testing of microcracks using digital speckle pattern shearing interferometry. Proceedings of SPIE 3098, Optical Inspection and Micromeasurements II. Munich, Germany: SPIE, 1997.
[177] Song, Y. Z. et al. Digital shearing speckle interferometry applied to optical diagnostics in flow. Proceedings of SPIE 3172, Optical Technology in Fluid, Thermal, and Combustion Flow III. San Diego, CA, United States: SPIE, 1997.
[178] Hung, Y. Y. & Shi, D. H. Technique for rapid inspection of hermetic seals of microelectronic packages using shearography. Optical Engineering 37, 1406-1409 (1998). doi: 10.1117/1.601656
[179] Murukeshan, V. M., Seng, O. L. & Asundi, A. Polarization phase shifting shearography for optical metrological applications. Optics & Laser Technology 30, 527-531 (1998).
[180] Hung, Y. Y. Computerized shearography and its application for nondestructive evaluation of composites. in Manual on Experimental Methods of Mechanical Testing of Composites (ed. Jenkins, C. H.) (Society for Experimental Mechanics, 1998).
[181] Santhanakrishnan, T., Palanisamy, P. K. & Sirohi, R. S. Optical configuration in speckle shear interferometry for slope change contouring with a twofold increase in sensitivity. Applied Optics 37, 3447-3449 (1998). doi: 10.1364/AO.37.003447
[182] Lu, G. W., Bard, B. A. & Wu, S. D. Real-time portable phase-stepping shearography system for NDE. Proceedings of SPIE 3397, Nondestructive Evaluation of Aging Aircraft, Airports, and Aerospace Hardware II. San Antonio, TX, United States: SPIE, 1998.
[183] Bard, B. A., Gordon, G. A. & Wu, S. D. Laser-modulated phase-stepping digital shearography for quantitative full-field imaging of ultrasonic waves. The Journal of the Acoustical Society of America 103, 3327-3335 (1998). doi: 10.1121/1.423046
[184] Walz, T. & Ettemeyer, A. Automatic shearography inspection system for helicopter rotor blades. Proceedings of SPIE 3397, Nondestructive Evaluation of Aging Aircraft, Airports, and Aerospace Hardware II. San Antonio, TX, United States: SPIE, 1998.
[185] Steinchen, W. et al. Developmental steps for double-pulse shearography. Proceedings of SPIE 3478, Laser Interferometry IX: Techniques and Analysis. San Diego, CA, United States: SPIE, 1998.
[186] Steinchen, W. et al. Application of laser diodes in digital speckle pattern shearing interferometry. Proceedings of SPIE 3415, Laser Diodes and Applications III. Quebec, Canada: SPIE, 1998.
[187] Steinchen, W. et al. Digital shearography for strain measurement: an analysis of measuring errors. Proceedings of SPIE 3479, Laser Interferometry IX: Applications. San Diego, CA, United States: SPIE, 1998.
[188] Steinchen, W. et al. Strain analysis by means of digital shearography: potential, limitations and demonstration. The Journal of Strain Analysis for Engineering Design 33, 171-182 (1998). doi: 10.1243/0309324981512896
[189] Steinchen, W. et al. Non-destructive testing of aerospace composite materials using digital shearography. Proceedings of the Institution of Mechanical Engineers,Part G:Journal of Aerospace Engineering 212, 21-30 (1998). doi: 10.1243/0954410981532108
[190] Joenathan, C. et al. Novel temporal Fourier transform speckle pattern shearing interferometer. Optical Engineering 37, 1790-1795 (1998). doi: 10.1117/1.601834
[191] Shang, H. M., He, Y. M. & Tay, C. J. Digital shearography for slope measurement. Proceedings of SPIE 3407, International Conference on Applied Optical Metrology. Balatonfured, Hungary: SPIE, 1998.
[192] Sirohi, R. S. et al. An Optical Strain Gauge. Proceedings of SPIE 3407, International Society for Optical Engineering. SPIE, 1998.
[193] Sirohi, R. S. et al. Shear ESPI with small objects. Proceedings of SPIE 3407, International Conference on Applied Optical Metrology. Balatonfured, Hungary: SPIE, 1998.
[194] Fernandez, J. L. et al. Double-pulsed-carrier speckle-shearing pattern interferometry for transient deformation analysis. Proceedings of SPIE 3478, Laser Interferometry IX: Techniques and Analysis. San Diego, CA, United States: SPIE, 1998.
[195] Wong, W. O. & Chan, K. T. Measurement of modal damping by electronic speckle shearing interferometry. Optics & Laser Technology 30, 113-120 (1998).
[196] Dávila, A., Kaufmann, G. H., & Pérez-López, C. Transient deformation analysis by a carrier method of pulsed electronic speckle-shearing pattern interferometry. Applied Optics 37, 4116-4122 (1998). doi: 10.1364/AO.37.004116
[197] Yang, L. X. et al. Vibration analysis by means of digital shearography. Optics and Lasers in Engineering 30, 199-212 (1998).
[198] van Brug, H. Temporal phase unwrapping and its application in shearography systems. Applied Optics 37, 6701-6706 (1998). doi: 10.1364/AO.37.006701
[199] Sirohi, R. S. et al. Nondestructive assessment of thinning of plates using digital shearography. Optical Engineering 38, 1582-1585 (1999). doi: 10.1117/1.602210
[200] He, Y. M., Tay, C. J. & Shang, H. M. Digital phase-shifting shearography for slope measurement. Optical Engineering 38, 1586-1590 (1999). doi: 10.1117/1.602211
[201] Kästle, R., Hack, E. & Sennhauser, U. Multiwavelength shearography for quantitative measurements of two-dimensional strain distributions. Applied Optics 38, 96-100 (1999). doi: 10.1364/AO.38.000096
[202] van Brug, H. H. Real-time speckle shearography system for defect detection in aircraft materials. Proceedings of SPIE 3586, Nondestructive Evaluation of Aging Aircraft, Airports, and Aerospace Hardware III. Newport Beach, CA, United States: SPIE, 1999.
[203] Siebert, T. & Schmitz, B. New shearing setup for simultaneous measurement of two shear directions. Proceedings of SPIE 3637, Practical Holography XIII. San Jose, CA, United States: SPIE, 1999.
[204] James, S. W. & Tatam, R. P. Time-division-multiplexed 3D shearography. Proceedings of SPIE 3744, Interferometry '99: Techniques and Technologies. Pultusk Castle, Poland: SPIE, 1999.
[205] Murukeshan, V. M., Ganesan, A. R. & Sirohi, R. S. Curvature measurement using double shear TV holography. Optik 110, 57-60 (1999).
[206] Waldner, S. & Brem, S. Compact shearography system for the measurement of 3D deformation. Proceedings of SPIE 3745, Interferometry '99: Applications. Pultusk Castle, Poland: SPIE, 1999.
[207] Osten, W., Kalms, M. K. & Jueptner, W. P. O. Some ways to improve the recognition of imperfections in large-scale components using shearography. Proceedings of SPIE 3745, Interferometry '99: Applications. Pultusk Castle, Poland: SPIE, 1999.
[208] Hung, Y. Y. et al. Evaluation of Residual Stresses in Plastics and Composites by Shearography. (SAE International Congress and Exposition, 1999).
[209] Groves, R. M., James, S. W. & Tatam, R. P. Polarization-multiplexed and phase-stepped fiber optic shearography using laser wavelength modulation. Proceedings of SPIE 3745, Interferometry '99: Applications. Pultusk Castle, Poland: SPIE, 1999.
[210] Kalms, M. K. et al. NDT on wide-scale aircraft structures with digital speckle shearography. Proceedings of SPIE 3824, Optical Measurement Systems for Industrial Inspection. Munich, Germany: SPIE, 1999.
[211] Hung, Y. Y. Applications of digital shearography for testing of composite structures. Composites Part B: Engineering 30, 765-773 (1999). doi: 10.1016/S1359-8368(99)00027-X
[212] Steinchen, W. et al. Determination of strain distribution by means of digital shearography. Measurement 26, 79-90 (1999). doi: 10.1016/S0263-2241(99)00008-1
[213] Elster, C. & Weingärtner, I. Solution to the shearing problem. Applied Optics 38, 5024-5031 (1999). doi: 10.1364/AO.38.005024
[214] Parker, S. C. J. & Salter, P. L. A novel shearography system for aerospace non-destructive testing. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 213, 23-33 (1999). doi: 10.1243/0954410991532819
[215] Osten, W. et al. Shearography system for the testing of large-scale aircraft components taking into account noncooperative surfaces. Proceedings of SPIE 4101, Laser Interferometry X: Techniques and Analysis. San Diego, CA, United States: SPIE, 2000.
[216] Andersson, A. et al. TV shearography: quantitative measurement of shear-magnitude fields by use of digital speckle photography. Applied Optics 39, 2565-2568 (2000). doi: 10.1364/AO.39.002565
[217] Hung, Y. Y. et al. Evaluating the soundness of bonding using shearography. Composite Structures 50, 353-362 (2000). doi: 10.1016/S0263-8223(00)00109-4
[218] Shang, H. M. et al. Surface profiling using shearography. Optical Engineering 39, 23-31 (2000). doi: 10.1117/1.602331
[219] Mohan, N. K. The influence of multiple-exposure recording on curvature pattern using multi-aperture speckle shear interferometry. Optics Communications 186, 259-263 (2000). doi: 10.1016/S0030-4018(00)01072-5
[220] Fernandez, A. et al. Measurement of transient out-of-plane displacement gradients in plates using double-pulsed subtraction TV shearography. Optical Engineering 39, 2106-2113 (2000). doi: 10.1117/1.1305260
[221] Dilhaire, S. et al. Measurement of the thermomechanical strain of electronic devices by shearography. Microelectronics Reliability 40, 1509-1514 (2000). doi: 10.1016/S0026-2714(00)00124-4
[222] Shang, H. M. et al. Generation of carrier fringes in holography and shearography. Applied Optics 39, 2638-2645 (2000). doi: 10.1364/AO.39.002638
[223] Groves, R. M., James, S. W. & Tatam, R. P. Polarization-multiplexed and phase-stepped fibre optic shearography using laser wavelength modulation. Measurement Science and Technology 11, 1389-1395 (2000). doi: 10.1088/0957-0233/11/9/320
[224] Wang, K. F., Tieu, A. K. & Li, E. B. Influence of in-plane displacement and double-aperture orientation on slope fringe formation in double-shearing-aperture speckle interferometry. Optical Engineering 39, 2124 (2000).
[225] Lau, B., Kronthaler, T. & Schilling, R. A microprism array as shearing device for speckle shearing interferometry. Optics and Lasers in Engineering 36, 389-396 (2001). doi: 10.1016/S0143-8166(01)00054-9
[226] Chen, F. Digital shearography: state of the art and some applications. Journal of Electronic Imaging 10, 240-251 (2001). doi: 10.1117/1.1329336
[227] Rastogi, P. K. Measurement of static surface displacements, derivatives of displacements, and three-dimensional surface shapes—examples of applications to non-destructive testing. in Digital Speckle Pattern Interferometry and Related Techniques (ed Rastogi, P. K.) (New York: Wiley, 2001).
[228] Groves, R. M., James, S. W. & Tatam, R. P. Full surface strain measurement using shearography. Proceedings of SPIE 4448, Optical Diagnostics for Fluids, Solids, and Combustion. San Diego, CA, United States: SPIE, 2001.
[229] Murukeshan, V. M. et al. Double shearography for engineering metrology: optical and digital approach. Optics & Laser Technology 33, 325-328 (2001).
[230] Martínez-Celorio, R. A. et al. Visibility enhancement of carrier fringes in Electronic Speckle Shearing Pattern Interferometry using microspheres for light detection in back reflection. Optik 112, 99-104 (2001). doi: 10.1078/0030-4026-00032
[231] Tornari, V. et al. Laser-based systems for the structural diagnostic of artwork: an application to XVII-century Byzantine icons. Proceedings of SPIE 4402, Laser Techniques and Systems in Art Conservation. Munich, Germany: SPIE, 2001.
[232] Osten, W. et al. Progress with the implementation of a shearography system for the testing of technical components. Proceedings of SPIE 4900, Seventh International Symposium on Laser Metrology Applied to Science, Industry, and Everyday Life. Novosibirsk, Russian Federation: SPIE, 2002.
[233] Kalms, M. K., Osten, W. & Jueptner, W. P. O. Advanced shearographic system for nondestructive testing of industrial and artwork components. Proceedings of SPIE 4915, Lasers in Material Processing and Manufacturing. Shanghai, China: SPIE, 2002.
[234] Wong, W. O. A simple electronic speckle shearing interferometer. Optics & Laser Technology 34, 399-403 (2002).
[235] Wang, K. F., Tieu, A. K. & Li, E. B. Influence of displacement and its first-and second-order derivative components on curvature fringe formations in speckle shearography. Applied Optics 41, 4557-4561 (2002). doi: 10.1364/AO.41.004557
[236] Steinchen, W., Kupfer, G. & Mäckel, P. Full field tensile strain shearography of welded specimens. Strain 38, 17-26 (2002). doi: 10.1046/j.0039-2103.2002.00005.x
[237] Steinchen, W. & Yang, L. X. Digital Shearography: Theory and Application of Digital Speckle Pattern Shearing Interferometry. (Bellingham: SPIE Press, 2003).
[238] Falldorf, C., Osten, W. & Kolenovic, E. Speckle shearography using a multiband light source. Optics and Lasers in Engineering 40, 543-552 (2003). doi: 10.1016/S0143-8166(02)00080-5
[239] Kalms, M. K. & Osten, W. Mobile shearography system for the inspection of aircraft and automotive components. Optical Engineering 42, 1188-1196 (2003). doi: 10.1117/1.1566968
[240] Hung, Y. Y., Shang, H. M. & Yang, L. X. Unified approach for holography and shearography in surface deformation measurement and nondestructive testing. Optical Engineering 42, 1197-1207 (2003). doi: 10.1117/1.1567263
[241] Kim, K. S. et al. Analysis of an internal crack of pressure pipeline using ESPI and shearography. Optics & Laser Technology 35, 639-643 (2003).
[242] Groves, R. M., James, S. W. & Tatam, R. P. Multicomponent shearography using optical fiber imaging-bundles. Proceedings of SPIE 5144, Optical Measurement Systems for Industrial Inspection III. Munich, Germany: SPIE, 2003.
[243] Krupka, R., Waltz, T. & Ettemeyer, A. Industrial applications of shearography for inspections of aircraft components. Proceedings of SPIE 5144, Optical Measurement Systems for Industrial Inspection III. Munich, Germany: SPIE, 2003.
[244] Groves, R. M., James, S. W. & Tatam, R. P. Multi-component pulsed-laser shearography using optical fiber imaging-bundles. Proceedings of SPIE 5191, Optical Diagnostics for Fluids, Solids, and Combustion II. San Diego, California, United States: SPIE, 2003.
[245] Groves, R. M., James, S. W. & Tatam, R. P. Pipe weld investigation using shearography. Strain 39, 101-105 (2003). doi: 10.1046/j.1475-1305.2003.00071.x
[246] Santos, F., Vaz, M. & Monteiro, J. A new set-up for pulsed digital shearography applied to defect detection in composite structures. Optics and Lasers in Engineering 42, 131-140 (2004). doi: 10.1016/j.optlaseng.2003.07.002
[247] Mihaylova, E. et al. Electronic speckle pattern shearing interferometer with a photopolymer holographic grating. Applied Optics 43, 2439-2442 (2004). doi: 10.1364/AO.43.002439
[248] Groves, R. M., James, S. W. & Tatam, R. P. Shape and slope measurement by source displacement in shearography. Optics and Lasers in Engineering 41, 621-634 (2004). doi: 10.1016/S0143-8166(02)00177-X
[249] Kurtz, R. M. et al. Reflection shearography for nondestructive evaluation. Proceedings of SPIE 5422, Unmanned Ground Vehicle Technology VI. Orlando, Florida, United States: SPIE, 2004.
[250] Findeis, D. & Gryzagoridis, J. A comparison of the capabilities of portable shearography and portable electronic speckle pattern interferometry. Proceedings of SPIE 5393, Nondestructive Evaluation and Health Monitoring of Aerospace Materials and Composites III. San Diego, CA, United States: SPIE, 2004.
[251] Casillas, F. J. et al. Small amplitude estimation of mechanical vibrations using electronic speckle shearing pattern interferometry. Optical Engineering 43, 880 (2004). doi: 10.1117/1.1666858
[252] Schuth, M., Vössing, F. & Yang, L. X. A shearographic endoscope for nondestructive test. Journal of Holography and Speckle 1, 46-52 (2004). doi: 10.1166/jhs.2004.007
[253] Sujatha, N. U. & Murukeshan, V. M. Nondestructive inspection of tissue/tissue like phantom curved surfaces using digital speckle shearography. Optical Engineering 43, 3055-3060 (2004). doi: 10.1117/1.1810531
[254] Wang, K. F. & Tieu, A. K. Theory and experiment of spatially and temporally partially coherent speckle shearing interferometry. Optics & Laser Technology 36, 43-45 (2004).
[255] Mihaylova, E., Whelan, M. & Toal, V. Simple phase-shifting lateral shearing interferometer. Optics Letters 29, 1264-1266 (2004). doi: 10.1364/OL.29.001264
[256] Barrientos, B. et al. Measurement of out-of-plane deformation by combination of speckle photography and speckle shearing interferometry. Optik 115, 248-252 (2004). doi: 10.1078/0030-4026-00362
[257] Mäckel, P. Die Scherografie-Ein quantitatives Messverfahren zur Schwingungsmessung und zerstörungsfreien Prüfung. Laser Technik Journal 1, 49-54 (2004). doi: 10.1002/latj.200790016
[258] Yang, L. X. et al. Digital shearography for nondestructive testing: potentials, limitations, and applications. Journal of Holography and Speckle 1, 69-79 (2004). doi: 10.1166/jhs.2004.010
[259] Udupa, G. et al. Defect detection in unpolished Si wafers by digital shearography. Measurement Science and Technology 15, 35-43 (2004). doi: 10.1088/0957-0233/15/1/005
[260] Andhee, A., Gryzagoridis, J. & Findeis, D. Comparison of normal and phase stepping shearographic NDE. Proceedings of SPIE 5767, Nondestructive Evaluation and Health Monitoring of Aerospace Materials, Composites, and Civil Infrastructure IV. San Diego, CA, United States: SPIE, 2005.
[261] Kalms, M. & Jueptner, W. Mobile shearography. Proceedings of SPIE 5852, Third International Conference on Experimental Mechanics and Third Conference of the Asian Committee on Experimental Mechanics. Singapore: SPIE, 2004.
[262] Sun, P. Digital phase-shifting shearography for strain measurement by using a rotating platform system. Optical Engineering 44, 085601 (2005). doi: 10.1117/1.2010127
[263] Tay, C. J. & Fu, Y. Determination of curvature and twist by digital shearography and wavelet transforms. Optics Letters 30, 2873-2875 (2005). doi: 10.1364/OL.30.002873
[264] Abdullah, W. S. W. & Petzing, J. N. Development of speckle shearing interferometer error analysis as an aperture function of wavefront divergence. Journal of Modern Optics 52, 1495-1510 (2005). doi: 10.1080/09500340500052887
[265] Hung, Y. Y. & Ho, H. P. Shearography: an optical measurement technique and applications. Materials Science and Engineering: R: Reports 49, 61-87 (2005). doi: 10.1016/j.mser.2005.04.001
[266] Habib, K. Thermally induced deformations measured by shearography. Optics & Laser Technology 37, 509-512 (2005).
[267] Somers, P. A. A. M. & Bhattacharya, N. Maintaining sub-pixel alignment for a single-camera two-bucket shearing speckle interferometer. Journal of Optics A: Pure and Applied Optics 7, S385-S391 (2005). doi: 10.1088/1464-4258/7/6/020
[268] Groves, R. M. et al. Single-axis combined shearography and digital speckle photography instrument for full surface strain characterization. Optical Engineering 44, 025602 (2005). doi: 10.1117/1.1842779
[269] Cordero, R. R. & Labbe, F. Uncertainty evaluation of displacement gradients measured by electronic speckle pattern shearing interferometry (ESPSI). Measurement Science and Technology 16, 1315-1321 (2005). doi: 10.1088/0957-0233/16/6/012
[270] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. A dual-function ESPI system for the measurement of out-of-plane displacement and slope. Optics and Lasers in Engineering 44, 637-644 (2006). doi: 10.1016/j.optlaseng.2005.05.004
[271] Bhaduri, B. et al. Use of spatial phase shifting technique in digital speckle pattern interferometry (DSPI) and digital shearography (DS). Optics Express 14, 11598-11607 (2006). doi: 10.1364/OE.14.011598
[272] Groves, R. M. & Osten, W. Temporal phase measurement methods in shearography. Proceedings of SPIE 6341, Speckle06: Speckles, From Grains to Flowers. Nimes, France: SPIE, 2006.
[273] Zhao, S. A. & Chung, P. S. Digital speckle shearing interferometer using a liquid-crystal spatial light modulator. Optical Engineering 45, 105606 (2006). doi: 10.1117/1.2360940
[274] Mihaylova, E. et al. Photopolymer diffractive optical elements in electronic speckle pattern shearing interferometry. Optics and Lasers in Engineering 44, 965-974 (2006). doi: 10.1016/j.optlaseng.2005.06.017
[275] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. Cyclic-path digital speckle shear pattern interferometer: use of polarization phase-shifting method. Optical Engineering 45, 105604 (2006). doi: 10.1117/1.2361194
[276] Quan, C., Fu, Y. & Miao, H. Wavelet analysis of digital shearing speckle patterns with a temporal carrier. Optics Communications 260, 97-104 (2006). doi: 10.1016/j.optcom.2005.10.023
[277] Viotti, M. R. et al. A portable digital speckle pattern interferometry device to measure residual stresses using the hole drilling technique. Optics and Lasers in Engineering 44, 1052-1066 (2006). doi: 10.1016/j.optlaseng.2005.09.004
[278] Růžek, R., Lohonka, R. & Jironč, J. Ultrasonic C-Scan and shearography NDI techniques evaluation of impact defects identification. NDT & E International 39, 132-142 (2006).
[279] Groves, R. M. et al. Full-field Laser Shearography Instrumentation for the Detection and Characterization of Fatigue Cracks in Titanium 10-2-3. Journal of ASTM International 3, 12757 (2006). doi: 10.1520/JAI12757
[280] Ochoa, N. A. & Silva-Moreno, A. A. Fringes demodulation in time-averaged digital shearography using genetic algorithms. Optics Communications 260, 434-437 (2006). doi: 10.1016/j.optcom.2005.11.045
[281] Yang, L. X. Recent developments of digital shearography for NDT. Material Evaluation 64, 704-709 (2006).
[282] Pauliat, G. & Roosen, G. Continuous monitoring of a surface slope by real-time shearing interferometry with a photorefractive crystal. Applied Optics 45, 993-999 (2006). doi: 10.1364/AO.45.000993
[283] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. (1, N) spatial phase-shifting technique in digital speckle pattern interferometry and digital shearography for nondestructive evaluation. Optical Engineering 46, 051009 (2007). doi: 10.1117/1.2740749
[284] Somers, P. A. A. M. & Bhattacharya, N. Handling unfavourable polarization states in a polarization-based shearing speckle interferometer. Journal of Optics A: Pure and Applied Optics 9, S92-S97 (2007). doi: 10.1088/1464-4258/9/6/S14
[285] Francis, D., James, S. W. & Tatam, R. P. Surface strain measurement using multi-component shearography with coherent fibre-optic imaging bundles. Measurement Science and Technology 18, 3583-3591 (2007). doi: 10.1088/0957-0233/18/11/043
[286] Anand, A. et al. Fresnel wavefront propagation model for shearography shape measurement. Proceedings of SPIE 6617, Modeling Aspects in Optical Metrology. Munich, Germany: SPIE, 2007.
[287] Falldorf, C. et al. Efficient reconstruction of spatially limited phase distributions from their sheared representation. Applied Optics 46, 5038-5043 (2007). doi: 10.1364/AO.46.005038
[288] Hung, Y. Y. et al. Review and comparison of shearography and pulsed thermography for adhesive bond evaluation. Optical Engineering 46, 051007 (2007). doi: 10.1117/1.2741277
[289] Groves, R. M. et al. Surface strain measurement: a comparison of speckle shearing interferometry and optical fibre Bragg gratings with resistance foil strain gauges. Measurement Science and Technology 18, 1175-1184 (2007). doi: 10.1088/0957-0233/18/5/003
[290] Groves, R. M. et al. Shearography as part of a multi-functional sensor for the detection of signature features in movable cultural heritage. Proceedings of SPIE 6618, O3A: Optics for Arts, Architecture, and Archaeology. Munich, Germany: SPIE, 2007.
[291] Bhaduri, B., Kothiyal, M. P. & Mohan, N. K. Curvature measurement using three-aperture digital shearography and fast Fourier transform. Optics and Lasers in Engineering 45, 1001-1004 (2007). doi: 10.1016/j.optlaseng.2007.04.005
[292] Huang, S. J. & Liu, Y. F. The out-of-plane strain measurement of composite sandwich plate with fully-potted insert using digital phase-shifting shearography. ICCES 4, 187-193 (2007). doi: 10.3970/icces.2007.004.187
[293] Kalms, M. Mobile shearography in applications. Proceedings of SPIE 6762, Two- and Three-Dimensional Methods for Inspection and Metrology V. Boston, MA, United States: SPIE, 2007.
[294] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. Simultaneous measurement of out-of-plane displacement and slope using a multiaperture DSPI system and fast Fourier transform. Applied Optics 46, 5680-5686 (2007). doi: 10.1364/AO.46.005680
[295] Rosso, V. et al. Almost-common path interferometer using the separation of polarization states for digital phase-shifting shearography. Optical Engineering 46, 105601 (2007). doi: 10.1117/1.2795632
[296] Groves, R. M., Pedrini, G. & Osten, W. Real-time extended dynamic range imaging in shearography. Applied Optics 47, 5550-5556 (2008). doi: 10.1364/AO.47.005550
[297] Gryzagoridis, J. & Findeis, D. Benchmarking shearographic NDT for composites. Insight 50, 249-252 (2008). doi: 10.1784/insi.2008.50.5.249
[298] Findeis, D., Gryzagoridis, J. & Musonda, V. NDT detection and quantification of induced defects on composite helicopter rotor blade and UAV wing sections. Proceedings of SPIE 7155, Ninth International Symposium on Laser Metrology. Singapore, Singapore: SPIE, 2008.
[299] Rosso, V. et al. Simultaneous coherent imaging and strain measurement using coupled photorefractive holography and shearography. Optics Letters 33, 797-799 (2008). doi: 10.1364/OL.33.000797
[300] Francis, D. Surface strain measurement using pulsed laser shearography with fibre-optic imaging bundles. PhD thesis, Cranfield University, UK, 2008.
[301] Focke, O., Hildebrand, A. & von Kopylow, C. Inspection of laser generated lamb waves using shearographic interferometry. Proceedings of the 1st International Symposium on Laser Ultrasonics: Science, Technology and Applications. Montreal, Canada, 2008.
[302] Francis, D., James, S. W. & Tatam, R. P. Surface strain measurement of rotating objects using pulsed laser shearography with coherent fibre-optic imaging bundles. Measurement Science and Technology 19, 105301 (2008). doi: 10.1088/0957-0233/19/10/105301
[303] Lee, J. R. et al. Investigation of shear distance in Michelson interferometer-based shearography for mechanical characterization. Measurement Science and Technology 19, 115303 (2008). doi: 10.1088/0957-0233/19/11/115303
[304] Huang, S. J. & Lin, Y. T. Out-of-plane strain measurement in sandwich plates with single fully potted insert by using digital shearography. Strain 44, 253-258 (2008). doi: 10.1111/j.1475-1305.2007.00361.x
[305] Sun, C. R., Chen, J. L. & Lu, H. Improved phase-shifted digital speckle shearography for time-dependent deformation measurement. Optical Engineering 47, 065601 (2008). doi: 10.1117/1.2939407
[306] Groves, R. M. et al. Multi-sensor evaluation of a wooden panel painting using terahertz imaging and shearography. Proceedings of SPIE 7391, O3A: Optics for Arts, Architecture, and Archaeology II. Munich, Germany: SPIE, 2009.
[307] Mihaylova, E. & Toal, V. Simple versatile shearing interferometer suitable for measurements on a microscopic scale. Optics and Lasers in Engineering 47, 271-273 (2009). doi: 10.1016/j.optlaseng.2008.05.003
[308] Hung, Y. Y. et al. Review and comparison of shearography and active thermography for nondestructive evaluation. Materials Science and Engineering: R: Reports 64, 73-112 (2009). doi: 10.1016/j.mser.2008.11.001
[309] Kumar, U. P., Kothiyal, M. P. & Mohan, N. K. Microscopic TV shearography for characterization of microsystems. Optics Letters 34, 1612-1614 (2009). doi: 10.1364/OL.34.001612
[310] Huang, Y. H. et al. NDT&E using shearography with impulsive thermal stressing and clustering phase extraction. Optics and Lasers in Engineering 47, 774-781 (2009). doi: 10.1016/j.optlaseng.2009.02.011
[311] Groves, R. M. et al. 2D and 3D non-destructive evaluation of a wooden panel painting using shearography and terahertz imaging. NDT & E International 42, 543-549 (2009).
[312] Groves, R. M. et al. 2.5D virtual reality visualisation of shearography strain data from a canvas painting. Proceedings of SPIE 7391, O3A: Optics for Arts, Architecture, and Archaeology II. Munich, Germany: SPIE, 2009.
[313] Bhaduri, B., Tay, C. J. & Quan, C. G. Direct measurement of curvature and twist using two-channel double-aperture digital shearography. Optical Engineering 49, 033604 (2010). doi: 10.1117/1.3359470
[314] Francis, D., Tatam, R. P. & Groves, R. M. Shearography technology and applications: a review. Measurement Science and Technology 21, 102001 (2010). doi: 10.1088/0957-0233/21/10/102001
[315] Goto, D. T. & Groves, R. M. Error analysis of 3D shearography using finite-element modelling. Proceedings of SPIE 7718, Optical Micro- and Nanometrology III. Brussels, Belgium: SPIE, 2010.
[316] Habib, K. Failure of thin films: optical shearography versus electrochemical impedance spectroscopy. Optik 121, 1317-1323 (2010). doi: 10.1016/j.ijleo.2009.01.018
[317] Wu, S. J., He, X. Y. & Yang, L. X. Enlarging the angle of view in Michelson-interferometer-based shearography by embedding a 4f system. Applied Optics 50, 3789-3794 (2011). doi: 10.1364/AO.50.003789
[318] Dhanotia, J. et al. Slope measurement of bent plates using double grating shearing interferometry. Applied Optics 50, 2958-2963 (2011). doi: 10.1364/AO.50.002958
[319] Taillade, F. et al. Shearography and pulsed stimulated infrared thermography applied to a nondestructive evaluation of FRP strengthening systems bonded on concrete structures. Construction and Building Materials 25, 568-574 (2011). doi: 10.1016/j.conbuildmat.2010.02.019
[320] Wu, S. J. et al. Digital shearography with in situ phase shift calibration. Optics and Lasers in Engineering 50, 1260-1266 (2012). doi: 10.1016/j.optlaseng.2012.03.011
[321] Frade, M., Enguita, J. M. & Álvarez, I. In situ 3D profilometry of rough objects with a lateral shearing interferometry range finder. Optics and Lasers in Engineering 50, 1559-1567 (2012). doi: 10.1016/j.optlaseng.2012.04.012
[322] Fu, Y., Guo, M. & Liu, H. Determination of instantaneous curvature and twist by digital shearography. Optical Engineering 51, 083602 (2012). doi: 10.1117/1.OE.51.8.083602
[323] Cai, C. Q. & He, L. F. Improved Mach-Zehnder interferometer-based shearography. Optics and Lasers in Engineering 50, 1699-1705 (2012). doi: 10.1016/j.optlaseng.2012.07.014
[324] Zhu, L. Q. et al. Real-time monitoring of phase maps of digital shearography. Optical Engineering 52, 101902 (2013). doi: 10.1117/1.OE.52.10.101902
[325] Xie, X. et al. Michelson interferometer based spatial phase shift shearography. Applied Optics 52, 4063-4071 (2013). doi: 10.1364/AO.52.004063
[326] Xie, X. et al. Simultaneous measurement of deformation and the first derivative with spatial phase-shift digital shearography. Optics Communications 286, 277-281 (2013). doi: 10.1016/j.optcom.2012.08.072
[327] Zastavnik, F. et al. Comparison of shearography to scanning laser vibrometry as methods for local stiffness identification of beams. Strain 50, 82-94 (2014). doi: 10.1111/str.12069
[328] Zhang, Y., Li, T. & Li, Q. L. Defect detection for tire laser shearography image using curvelet transform based edge detector. Optics & Laser Technology 47, 64-71 (2013).
[329] Blain, P. et al. An in-line shearography setup based on circular polarization gratings. Optics and Lasers in Engineering 51, 1053-1059 (2013). doi: 10.1016/j.optlaseng.2013.03.003
[330] Lopes, H. et al. Localization of damage with speckle shearography and higher order spatial derivatives. Mechanical Systems and Signal Processing 49, 24-38 (2014). doi: 10.1016/j.ymssp.2013.12.016
[331] Sirohi, R. S. Digital shear speckle pattern interferometry with holo-elements. Asian Journal of Physics 23, 509-515 (2014).
[332] Zastavnik, F. et al. Errors in shearography measurements due to the creep of the PZT shearing actuator. Measurement Science and Technology 25, 085007 (2014). doi: 10.1088/0957-0233/25/8/085007
[333] Xu, N. et al. Shearography for specular object inspection. Optics and Lasers in Engineering 61, 14-18 (2014). doi: 10.1016/j.optlaseng.2014.04.015
[334] Xu, X. et al. Identification of flaws using digital shearography. Lasers in Engineering 28, 201-211 (2014).
[335] Huke, P., Burke, J. & Bergmann, R. B. A comparative study between deflectometry and shearography for detection of subsurface defects. Proceedings of SPIE 9203, Interferometry XVII: Techniques and Analysis. San Diego, California, United States: SPIE, 2014.
[336] Vandenrijt, J. F., Lièvre, N. & Georges, M. P. Improvement of defect detection in shearography by using principal component analysis. Proceedings of SPIE 9203, Interferometry XVII: Techniques and Analysis. San Diego, California, United States: SPIE, 2014.
[337] Zhu, L. Q., Wu, S. J. & Yang, L. X. Stroboscopic digital shearographic system for vibration analysis of large-area object. Instruments and Experimental Techniques 57, 493-498 (2014). doi: 10.1134/S0020441214040113
[338] Khaleghi, M. et al. Long-term effects of cyclic environmental conditions on paintings in museum exhibition by laser shearography. in Advancement of Optical Methods in Experimental Mechanics, Volume 3 (eds Jin, H. et al.) (Cham: Springer, 2014), 283-288.
[339] Krzemień, L. et al. Combining digital speckle pattern interferometry with shearography in a new instrument to characterize surface delamination in museum artefacts. Journal of Cultural Heritage 16, 544-550 (2015). doi: 10.1016/j.culher.2014.10.006
[340] Amar, S. et al. Digital carrier superposition by Hilbert-Huang transform for optical phase recovery in speckle shearing interferometry. Optical Engineering 54, 013101 (2015). doi: 10.1117/1.OE.54.1.013101
[341] Feng, Z. et al. A polarized digital shearing speckle pattern interferometry system based on temporal wavelet transformation. Review of Scientific Instruments 86, 093102 (2015). doi: 10.1063/1.4929533
[342] Falldorf, C., Klattenhoff, R. & Bergmann, R. B. Single shot lateral shear interferometer with variable shear. Optical Engineering 54, 054105 (2015). doi: 10.1117/1.OE.54.5.054105
[343] Bai, P. X., Zhu, F. P. & He, X. Y. Out-of-plane displacement field measurement by shearography. Optics & Laser Technology 73, 29-38 (2015).
[344] Zastavnik, F. et al. Calibration and correction procedure for quantitative out-of-plane shearography. Measurement Science and Technology 26, 045201 (2015). doi: 10.1088/0957-0233/26/4/045201
[345] Xie, X. et al. Measurement of in-plane strain with dual beam spatial phase-shift digital shearography. Measurement Science and Technology 26, 115202 (2015). doi: 10.1088/0957-0233/26/11/115202
[346] Buchta, D. et al. Artwork inspection by shearography with adapted loading. Experimental Mechanics 55, 1691-1704 (2015). doi: 10.1007/s11340-015-0070-9
[347] Schindler, J. et al. Retrieving the axial position of fluorescent light emitting spots by shearing interferometry. Journal of Biomedical Optics 21, 125009 (2016). doi: 10.1117/1.JBO.21.12.125009
[348] Languy, F. et al. Vibration mode shapes visualization in industrial environment by real-time time-averaged phase-stepped electronic speckle pattern interferometry at 10.6 μm and shearography at 532 nm. Optical Engineering 55, 121704 (2016). doi: 10.1117/1.OE.55.12.121704
[349] Deepan, B., Quan, C. G. & Tay, C. J. Determination of slope, curvature, and twist from a single shearography fringe pattern using derivative-based regularized phase tracker. Optical Engineering 55, 121707 (2016). doi: 10.1117/1.OE.55.12.121707
[350] Richoz, G. L. & Schajer, G. S. Simultaneous two-axis shearographic interferometer using multiple wavelengths and a color camera. Optics and Lasers in Engineering 77, 143-153 (2016). doi: 10.1016/j.optlaseng.2015.08.007
[351] Wang, Y. H. et al. Simultaneous dual directional strain measurement using spatial phase-shift digital shearography. Optics and Lasers in Engineering 87, 197-203 (2016). doi: 10.1016/j.optlaseng.2015.12.009
[352] Mininni, M. et al. Damage identification in beams using speckle shearography and an optimal spatial sampling. Mechanical Systems and Signal Processing 79, 47-64 (2016). doi: 10.1016/j.ymssp.2016.02.039
[353] Yang, L. X. & Xie, X. Digital Shearography: New Developments and Applications. (Bellingham: SPIE Press, 2016).
[354] Xie, X. et al. Polarized digital shearography for simultaneous dual shearing directions measurements. Review of Scientific Instruments 87, 083110 (2016). doi: 10.1063/1.4961473
[355] dos Santos, J. V. A., Lopes, H. & Moreno-García, P. Numerical study on damage identification using shearography with different shearing amounts. Procedia Structural Integrity 5, 1198-1204 (2017). doi: 10.1016/j.prostr.2017.07.039
[356] Lopes, H. et al. Influence of shearing amount and vibration amplitude on noise in shearography. Procedia Structural Integrity 5, 1205-1212 (2017). doi: 10.1016/j.prostr.2017.07.043
[357] Falldorf, C. et al. Sparse light fields in coherent optical metrology [Invited]. Applied Optics 56, F14-F19 (2017). doi: 10.1364/AO.56.000F14
[358] Wang, X. et al. Temporal heterodyne shearing speckle pattern interferometry. Optics and Lasers in Engineering 93, 76-82 (2017). doi: 10.1016/j.optlaseng.2017.01.010
[359] Sun, J. F. et al. Dynamic measurement of first-order spatial derivatives of deformations by digital shearography. Instruments and Experimental Techniques 60, 575-583 (2017). doi: 10.1134/S0020441217040145
[360] Zhao, Q. H. et al. Digital shearography for NDT: phase measurement technique and recent developments. Applied Sciences 8, 2662 (2018). doi: 10.3390/app8122662
[361] da Silva, F. A. A. et al. Evaluation of a novel compact shearography system with DOE configuration. Optics and Lasers in Engineering 104, 90-99 (2018).
[362] Macedo, F. J. et al. Inspection of defects of composite materials in inner cylindrical surfaces using endoscopic shearography. Optics and Lasers in Engineering 104, 100-108 (2018). doi: 10.1016/j.optlaseng.2017.06.005
[363] Newman, J. W. Shearography nondestructive testing of composites. Comprehensive Composite Materials II 7, 270-290 (2018).
[364] Buchta, D. et al. Soft tissue elastography via shearing interferometry. Journal of Medical Imaging 5, 046001 (2018).
[365] Barrera, E. S. et al. Multiple-aperture one-shot shearography for simultaneous measurements in three shearing directions. Optics and Lasers in Engineering 111, 86-92 (2018). doi: 10.1016/j.optlaseng.2018.07.018
[366] Kirkove, M. et al. Determination of vibration amplitudes from binary phase patterns obtained by phase-shifting time-averaged speckle shearing interferometry. Applied Optics 57, 8065-8077 (2018). doi: 10.1364/AO.57.008065
[367] Buchta, D. et al. Combination of FEM simulations and shearography for defect detection on artwork. Strain 54, e12269 (2018). doi: 10.1111/str.12269
[368] Liu, H. J. et al. Directed acoustic shearography for crack detection around fastener holes in aluminum plates. NDT & E International 100, 124-131 (2018).
[369] Wang, X. et al. Application of digital shearing speckle pattern interferometry for thermal stress. Measurement 125, 11-18 (2018). doi: 10.1016/j.measurement.2018.04.073
[370] Wang, S. J. et al. Dual-directional shearography based on a modified common-path configuration using spatial phase shift. Applied Optics 58, 593-603 (2019). doi: 10.1364/AO.58.000593
[371] Yan, P. Z. et al. Shearography for non-destructive testing of specular reflecting objects using scattered light illumination. Optics & Laser Technology 112, 452-457 (2019).
[372] Yan, P. Z. et al. Spatial phase-shift digital shearography for simultaneous measurements in three shearing directions based on adjustable aperture multiplexing. Optical Engineering 58, 054105 (2019).
[373] DeWeert, M. J. et al. Principles of surface-phase-resolved shearography. Optical Engineering 58, 114103 (2019).
[374] Katunin, A., Lopes, H. & dos Santos, J. V. A. Identification of multiple damage using modal rotation obtained with shearography and undecimated wavelet transform. Mechanical Systems and Signal Processing 116, 725-740 (2019). doi: 10.1016/j.ymssp.2018.07.024
[375] Sun, F. Y. et al. The application of SLM in shearography detecting system. Optics and Lasers in Engineering 114, 90-94 (2019). doi: 10.1016/j.optlaseng.2018.10.016
[376] Arco, C. et al. Application of spline interpolation to speckle shearography measurements for damage identification. Procedia Structural Integrity 17, 718-725 (2019). doi: 10.1016/j.prostr.2019.08.096
[377] Hooshmand-Ziafi, H., Hassani, K. & Dashtdara, M. Dual-sensitive spatial phase-shifting shearography based on a common-path configuration. Optical Engineering 58, 114104 (2019).
[378] Zhao, Q. H. et al. Simultaneous 3D measurement of deformation and its first derivative with speckle pattern interferometry and shearography. Applied Optics 58, 8665-8672 (2019). doi: 10.1364/AO.58.008665
[379] Dong, J. et al. Real-time dual-sensitive shearography for simultaneous in-plane and out-of-plane strain measurements. Optics Express 27, 3276-3283 (2019). doi: 10.1364/OE.27.003276
[380] Gao, X. Y. et al. Double imaging Mach-Zehnder spatial carrier digital shearography. Journal of Modern Optics 66, 153-160 (2019). doi: 10.1080/09500340.2018.1513173
[381] Yan, P. Z. et al. Pixelated carrier phase-shifting shearography using spatiotemporal low-pass filtering algorithm. Sensors 19, 5185 (2019). doi: 10.3390/s19235185
[382] Hooshmand-Ziafi, H. et al. Common-path spatial phase-shift speckle shearography using a glass plate. Review of Scientific Instruments 90, 105105 (2019). doi: 10.1063/1.5097211
[383] Giordano, A. & Chiang, F. P. Digital laser speckle technique for generating deflection, slope, and curvature contours of bent plates. Optical Engineering 59, 094101 (2020).
[384] Dong, J. et al. Shear-unlimited common-path speckle interferometer. Optics Letters 45, 1305-1308 (2020). doi: 10.1364/OL.382893
[385] Sun, F. Y. et al. A spatial-phase-shift-based defect detection shearography system with independent adjustment of shear amount and spatial carrier frequency. Optics & Laser Technology 124, 105956 (2020).
[386] Zhong, S. M. et al. Multi-directional shearography based on multiplexed Mach-Zehnder interference system. Journal of Modern Optics 67, 346-354 (2020). doi: 10.1080/09500340.2020.1733115
[387] Oliveira, B. C. F. et al. Square-pulse shearography inspections of metallic parts repaired with a glass fiber reinforced polymer using pressure, radiation, vibration, and induction loading methods. International Journal of Pressure Vessels and Piping 187, 104187 (2020). doi: 10.1016/j.ijpvp.2020.104187
[388] Felipe, D. S. et al. Procedure for detecting the shape and size of defects on metallic substrates under composite repairs using shearography. Applied Optics 59, 8089-8097 (2020). doi: 10.1364/AO.399417
[389] Zhang, B. Y. et al. Overview of digital shearography for NDT. Materials Evaluation 78, 342-351 (2020). doi: 10.32548/2020.me-04120
[390] Zhang, B. Y. et al. Spatial-light-modulator-based dual shearing direction shearography. Applied Optics 59, 11080-11086 (2020). doi: 10.1364/AO.404088
[391] Zhao, Q. H. et al. A new multiplexed system for the simultaneous measurement of out-of-plane deformation and its first derivative. Optics Communications 482, 126602 (2021). doi: 10.1016/j.optcom.2020.126602
[392] Gu, G. Q. et al. Improved depth characterization of internal defect using the fusion of shearography and speckle interferometry. Optics & Laser Technology 135, 106701 (2021).
[393] Anisimov, A. G. & Groves, R. M. Extreme shearography: development of a high-speed shearography instrument for quantitative surface strain measurements during an impact event. Optics and Lasers in Engineering 140, 106502 (2021). doi: 10.1016/j.optlaseng.2020.106502
Shearography: A technique for non-destructive testing
Shearography is a displacement gradient sensitive, full-field optical technique that is resilient to environmental disturbances and vibrations and is capable of examining large structures. It can be used on the shop floor as well as in the field. The major application of the technique is the non-destructive inspection of laminates. It has been applied to examine components and systems in the aerospace and automobile industries, and to inspect art objects such as paintings for conservation. This review article presents the evolution of the technique, various optical configurations, recording procedures, and applications.
Rajpal Sirohi*
Light: Advanced Manufacturing, Article number: 1 (2022)
Rajpal Sirohi, [email protected]
Abstract: This paper presents the activities in the field of shearography in chronological order and highlights the great potential of this holographic measurement technology. After a brief introduction, the basic theory of shearography is presented. Shear devices, phase-shift arrangements, and multiplexed shearography systems are described. Finally, the application areas where shearography has been accepted and successfully used as a tool are presented.
A grainy pattern formed in space when a laser beam strikes an object is called a speckle pattern. These patterns were considered a bane by holographers because the images reconstructed from holograms were grainy. Several methods were investigated to reduce this deleterious effect. During the late 1960s, it was discovered that the speckle pattern carries information about the object and hence can be used for measurement1. The field of speckle metrology thus came into existence. Both speckle photography and speckle interferometry were investigated to measure the displacement and deformation of objects. Unlike holographic interferometry, speckle interferometry can be configured to measure the components of the deformation vector. Early recordings were made on photo-emulsions (photographic plates or films), and the technique came to be known as speckle pattern interferometry2. Since it was possible to control the size of the speckles in the speckle pattern, electronic recording using a vidicon tube was attempted successfully, and the technique was renamed electronic speckle pattern interferometry (ESPI)3. When speckle patterns are recorded on a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor and manipulated digitally, the technique is known as digital speckle pattern interferometry (DSPI)3. Using ESPI/DSPI along with phase shifting, the deformation map of an object subjected to an external agency is obtained almost in real time. The technique is whole-field and extremely sensitive, and it can be configured to obtain either the in-plane components, the out-of-plane component, or all three components of the deformation vector simultaneously2,3,5−7. It has also been applied to the study of vibrating objects in both time-averaged and stroboscopic modes.
Material scientists, stress analysts, quality assurance personnel, product developers, and many others are interested not only in deformation but also in strain values. Strain measurements are the basis for predicting how long an object will perform without failure when subjected to certain external influences. There are several techniques for strain measurement, some point-wise and some whole-field. Shearography is a whole-field optical technique that has evolved for both qualitative and quantitative strain (difference quotient) measurements. When shear interferometry features are included in speckle pattern interferometry, the technique is known as speckle pattern shear interferometry (SPSI), speckle shear interferometry (SSI), or simply shearography.
Shearography is a displacement-gradient-sensitive, full-field optical technique that is resilient to environmental disturbances and vibrations. It does not respond to rigid-body translations. Because shearography is an almost common-path technique, the temporal coherence requirement is considerably relaxed. The experimental setup is simple, and the technique's reduced sensitivity fills the gap between DSPI and moiré techniques4. The first paper on shearography was written by Leendertz and Butters8, who validated the theoretical framework of shearography by comparing twist contours of a plate with those generated as moiré fringes from the holographically obtained displacement pattern and its shifted version. For shearing, a Michelson interferometric arrangement was used. This work was followed by publications on the visualization of strain fields and slopes, the use of different shearing methods, and some theoretical aspects of the technique9−47. Recordings were made on photo-emulsions and the information was extracted by Fourier filtering9,23,55. In 1980, Nakadate et al.20 reported the use of a video camera to record the slope contours of static and vibrating objects. In 1988, some publications reported the recording of object images on an electronic detector48−50. Thereafter, researchers continued with strain field visualizations, slope contours, and even curvature, using both electronic and non-electronic detectors51−87. The importance of shearography as a tool for measurement and inspection inspired the organization of several conferences on this topic from 1987 to 2006. These conferences attracted many researchers who presented work demonstrating manifold applications such as nondestructive testing and evaluation (NDT&E), the examination of large structures, particularly in aeronautics, and the nondestructive testing (NDT) of art and cultural heritage objects. At the same time, there were many publications on theoretical aspects and applications of the technique88−279. The first application of phase stepping to obtain quantitative strain was reported by Steinchen et al.97. Many researchers have described the measurement of strain in their publications. In reality, shearography measures not the displacement derivative but the difference quotient, an approximation to the derivative. If the shear is made very small so that the difference quotient approximates a derivative, the sensitivity decreases; for a vanishingly small shear there is no sensitivity. Several mobile and compact shearographic systems have been developed to perform NDT activities in an industrial environment127,182,202,210,214,215. After 2005, activity on this topic slowed, perhaps because the method had matured into an inspection and NDT technique with several systems on the market. In spite of this maturity, some theoretical aspects continue to be explored, new applications such as the NDT of micro-objects have been developed, and research on phase evaluation methods continues280−379. Several review articles226,265,314,360, review-like articles127,188,211,227,258,288,308, and two books237,353 have been published on shearography. It is interesting to note that there is renewed interest in developing shearography systems with and without multiplexing capabilities325,346,349−351,354,361−365,370−372,377−382,384−393.
As the technique evolved, it went by various names. The most common usage has been shearography, with prefixes such as digital shearography, TV-shearography, electronic shearography, laser shearography, and digital speckle shearography. In the beginning, shearography was called speckle-shearing interferometry and speckle-shear interferometry. However, some authors have called it electronic speckle pattern shearing interferometry152−154,157,163,220,255,269, digital speckle pattern shearing interferometry20,132,155,176, electronic speckle-shearing pattern interferometry57,140,194,196,251, digital shearing speckle pattern interferometry177,341,358, and digital speckle shearing interferometry116,145,149,167,186.
A typical experimental setup for shearography is shown in Fig. 1. It consists of a light source (typically a laser), an imaging system, a shear device placed either between the object and the lens or between the lens and the recording medium, and a recording medium such as a photo-emulsion or a CCD/CMOS imager. The object is illuminated by a laser beam, and its image is formed on the detector via the shear device. Shearing causes a point on the object to be imaged as two points; equivalently, two points on the object are imaged at a single point. A wave from one point acts as a reference wave to the wave from the second point; hence, there is no need for a separate reference wave. Between exposures, the object is subjected to an external loading that deforms it.
Fig. 1 Schematic of a speckle pattern shear interferometer: DL-diverging lens; L-imaging lens; I-image sensor.
The theoretical framework of shearography is based on the same phase difference equation that is valid for both holographic interferometry and speckle pattern interferometry2,3. The optical phase difference equation is given by7:
$$ \phi =\left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) \cdot {\boldsymbol{L}}={\boldsymbol{k}} \cdot {\boldsymbol{L}} $$
where ko and ki are the propagation vectors in the directions of observation and illumination, respectively, and L is the deformation vector at a point on the object. The vector k [=$ \left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) $ ] is called the sensitivity vector. The deformation vector is expressed as $ {\boldsymbol{L}}=u\hat{\boldsymbol{i}}+v\hat{\boldsymbol{j}}+w\hat{\boldsymbol{k}} $ with components u, v, and w along the x-, y-, and z-axes.
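As a concrete numerical illustration of Eq. 1, the minimal Python sketch below computes the sensitivity vector and the optical phase for an assumed geometry; the wavelength, the 30° illumination angle, and the deformation vector are placeholder values chosen for the example, not values taken from any referenced setup.

```python
import numpy as np

# Minimal sketch of Eq. 1: phi = (k_o - k_i) . L = k . L
# All geometry values are illustrative assumptions.
wavelength = 632.8e-9                      # assumed He-Ne wavelength [m]
k_mag = 2 * np.pi / wavelength             # wavenumber

# Unit propagation directions (assumed): observation along +z; illumination
# from a source on the +x/+z side, so it propagates along -(sin30, 0, cos30).
n_o = np.array([0.0, 0.0, 1.0])
n_i = -np.array([np.sin(np.radians(30.0)), 0.0, np.cos(np.radians(30.0))])

k = k_mag * (n_o - n_i)                    # sensitivity vector k = k_o - k_i

# Assumed deformation vector L = (u, v, w): 0.5 um out-of-plane displacement.
L = np.array([0.0, 0.0, 0.5e-6])

phi = np.dot(k, L)                         # optical phase difference [rad]
print(f"k = {k}")
print(f"phi = {phi:.2f} rad = {phi / (2 * np.pi):.2f} fringes")
```

For this near-normal geometry the out-of-plane component dominates, with $ {k}_{z}\approx 4\pi /\lambda $ in the limit of normal illumination and observation.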
In shearography, there are two points on the object under consideration: a point P and its sheared counterpart Q. On loading the object, these two points undergo different displacements, as shown in Fig. 2. The point $ P\left(x,y,z\right) $ is on the undeformed surface, and the point $ Q\left(x+\Delta x,y,z\right) $ is on the sheared surface. These are very close to each other on the two surfaces with a lateral shear $ \Delta x $ along the x-axis. When the object is deformed, these points are displaced to their new locations $ P'\left(x+u,y+v,z+w\right) $ and $ Q'(x+\Delta x+ $ $ u+\Delta u,y+v+\Delta v,z+w+\Delta w) $ . Point P undergoes a displacement $ {\boldsymbol{L}}=u\hat{\boldsymbol{i}}+v\hat{\boldsymbol{j}}+w\hat{\boldsymbol{k}} $ and point Q undergoes a displacement $ {{\boldsymbol{L}}}'=\left(u+\Delta u\right)\hat{\boldsymbol{i}}+\left(v+\Delta v\right)\hat{\boldsymbol{j}}+\left(w+\Delta w\right)\hat{\boldsymbol{k}} $ . Therefore, $ \Delta {\boldsymbol{L}} $ , the change in the displacement vector, is given by $ \Delta {\boldsymbol{L}}={{\boldsymbol{L}}}'-{\boldsymbol{L}}=\Delta u\hat{\boldsymbol{i}}+\Delta v\hat{\boldsymbol{j}}+\Delta w\hat{\boldsymbol{k}} $ .
Fig. 2 Original and deformed states of the object showing the location of the sheared points on deformation: P is a point on the surface and Q is its sheared counterpart. On loading, these points displace to P' and Q'. The deformation is very small, and hence the sensitivity vectors remain unchanged.
Following Eq. 1, the phase difference $ {\phi }_{P} $ between the waves reaching any point on the detector plane when point P displaces to point P' is given by
$$ {\phi }_{P}=\left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) \cdot \left(u\hat{\boldsymbol{i}}+v\hat{\boldsymbol{j}}+w\hat{\boldsymbol{k}}\right)={k}_{x}u+{k}_{y}v+{k}_{z}w $$
where kx, ky, and kz are the components of the vector $ {\boldsymbol{k}}=\left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) $ , i.e., $ {\boldsymbol{k}}={k}_{x}\hat{\boldsymbol{i}}+{k}_{y}\hat{\boldsymbol{j}}+{k}_{z}\hat{\boldsymbol{k}} $ .
Similarly, the phase difference between the waves reaching the same point on the detector plane when point Q moves to point Q' is given by
$$ \begin{split}{\phi }_{Q}=&{\boldsymbol{k}} \cdot \left[\left(u+\Delta u\right)\hat{\boldsymbol{i}}+\left(v+\Delta v\right)\hat{\boldsymbol{j}}+\left(w+\Delta w\right)\hat{\boldsymbol{k}}\right]\\=&{k}_{x}\left(u+\Delta u\right)+{k}_{y}\left(v+\Delta v\right)+{k}_{z}\left(w+\Delta w\right) \end{split}$$
Calculations of the phases $ {\phi }_{P} $ and $ {\phi }_{Q} $ in terms of the coordinates of the source and a point on the detector, and their respective distances, can be found in books237,353 and in several references19,265,314.
Because the wave from the sheared point acts as a reference wave to the one from the other point on the surface, or vice-versa, there is no need for a separate reference wave. These two waves produce an interference pattern. Indeed, waves from the scattering points within the resolution element around points P and Q interfere, producing a speckle at the image point. When the entire object is considered, its image is a speckle pattern. A record of the speckle pattern is called a shearogram. Let the amplitudes of the waves from points P and Q at a point $ \left({x}_{i},{y}_{i}\right) $ in the image plane be
$$ {a}_{P}\left({x}_{i},{y}_{i}\right)\propto {a}_{0}\left(x,y\right){e}^{i{\theta }_{P}} $$
$$ {a}_{Q}\left({x}_{i},{y}_{i}\right)\propto {a}_{0}\left(x+\Delta x,y\right){e}^{i{\theta }_{Q}} $$
Both the amplitudes [$ {a}_{0}\left(x,y\right) $ and $ {a}_{0}\left(x+\Delta x,y\right)] $ and phases [$ {\theta }_{P} $ and $ {\theta }_{Q} $ ] are random variables. The irradiance distribution at the image point can be expressed as
$$ \begin{split} I\left({x}_{i},{y}_{i}\right)\propto\;&{\left|{a}_{0}\left(x,y\right)\right|}^{2}+{\left|{a}_{0}\left(x+\Delta x,y\right)\right|}^{2}\\&+2{a}_{0}\left(x,y\right){a}_{0}\left(x+\Delta x,y\right) \cos\phi ;\quad \phi =\left({\theta }_{Q}-{\theta }_{P}\right) \\ I\left({x}_{i},{y}_{i}\right)=\;&{\cal{I}}_{1}\left({x}_{i},{y}_{i}\right)+{\cal{I}}_{2}\left({x}_{i},{y}_{i}\right)+2\sqrt{{\cal{I}}_{1}\left({x}_{i},{y}_{i}\right){\cal{I}}_{2}\left({x}_{i},{y}_{i}\right)} \cos\phi \\ =\;&{I}_{0}\left(1+\gamma \cos\phi \right) \end{split} $$
where $ {\cal{I}}_{1}\left({x}_{i},{y}_{i}\right) $ and $ {\cal{I}}_{2}\left({x}_{i},{y}_{i}\right) $ are the irradiances of the waves from points P and Q at the image point $ \left({x}_{i},{y}_{i}\right) $ , I0 is the total irradiance (sum of irradiances of both waves), and $ \gamma $ is the modulation. Because the two points P and Q are very close to each other, the amplitudes of the waves from these points can be assumed to be equal. In this case, the irradiance distribution can be expressed as
$$ I\left({x}_{i},{y}_{i}\right)={I}_{0}\left(1+ \cos\phi \right) $$
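A quick numerical check of the equal-amplitude simplification used here: from the irradiance equation above, the modulation is $ \gamma =2\sqrt{{\cal{I}}_{1}{\cal{I}}_{2}}/\left({\cal{I}}_{1}+{\cal{I}}_{2}\right) $, which approaches 1 as the two irradiances approach equality. The irradiance values in the sketch below are arbitrary.

```python
import numpy as np

# gamma = 2*sqrt(I1*I2)/(I1+I2): equals 1 only for equal irradiances.
for I1, I2 in [(1.0, 1.0), (1.0, 0.5), (1.0, 0.1)]:
    gamma = 2.0 * np.sqrt(I1 * I2) / (I1 + I2)
    print(f"I1 = {I1}, I2 = {I2}: gamma = {gamma:.3f}")
```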
Usually, two exposures are made in shearography: the first exposure when the object is in its undeformed state and the second exposure after the application of load, that is, when the object is in its deformed state.
The irradiance distribution in the first exposure can be expressed as
$$ {I}_{1}\left({x}_{i},{y}_{i}\right)={I}_{0}\left(1+ \cos\phi \right) $$
On loading, the surface deforms, and points P and Q respectively move to points P' and Q'. The waves from points P' and Q' acquire additional phases $ {\phi }_{P} $ and $ {\phi }_{Q} $ respectively.
The irradiance distribution recorded now is given by
$$\begin{split} {I}_{2}\left({x}_{i},{y}_{i}\right)=&2{a}_{0}^{2}\left[1+ \cos\left\{\left({\theta }_{Q}+{\phi }_{Q}\right)-\left({\theta }_{P}+{\phi }_{P}\right)\right\}\right]\\=&{I}_{0}\left[1+ \cos\left(\phi +{\Delta \phi }_{QP}\right)\right]\end{split}$$
where $ {\Delta \phi }_{QP}=\left({\phi }_{Q}-{\phi }_{P}\right) $ is the phase difference between the two waves from points Q and P at a point on the recording plane. By rewriting the phase difference $ {\Delta \phi }_{QP} $ , we obtain
$$\begin{split}& {k}_{x}\left(u+\Delta u\right)+{k}_{y}\left(v+\Delta v\right)+{k}_{z}\left(w+\Delta w\right)-\\&({k}_{x}u+{k}_{y}v+{k}_{z}w)={k}_{x}\Delta u+{k}_{y}\Delta v+{k}_{z}\Delta w \end{split}$$
Assuming that the magnitude of the shear is very small, the phase difference can be expressed as
$$ \begin{split}{\Delta \phi }_{QP}=&\left({\phi }_{Q}-{\phi }_{P}\right)=\left({k}_{x}\frac{\partial u}{\partial x}+{k}_{y}\frac{\partial v}{\partial x}+{k}_{z}\frac{\partial w}{\partial x}\right)\Delta x\\=&\left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x={\boldsymbol{k}} \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x \end{split}$$
For a very small shear, $ {\Delta {\boldsymbol{L}}}/{\Delta x} $ is expressed as $ {\partial {\boldsymbol{L}}}/{\partial x} $ . The phase difference depends on the derivatives of the components of the deformation vector if an exceedingly small lateral shear is used. In practice, the difference quotient is measured rather than the derivative. If the magnitude of the shear is made very small such that it approximates a derivative, the deformation phase $ {\Delta \phi }_{QP} $ tends to become very small, and hence the sensitivity of the technique becomes poor. It is possible to employ other shear types, e.g., radial shear and theta shear, and obtain the radial derivative $ \left({\partial w}/{\partial r}\right) $ 28,29 and the theta derivative $ \left({\partial w}/{\partial \theta }\right) $ 28,29.
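The shear-size trade-off described above can be made concrete. The sketch below is a toy model assuming pure out-of-plane deformation (u = v = 0) with normal illumination and observation, so that only the $ {k}_{z}\,{\partial w}/{\partial x} $ term survives; the Gaussian bump chosen for w(x) and all dimensions are assumptions for illustration. It compares the exact difference-quotient phase with the derivative approximation for a small and a large shear.

```python
import numpy as np

# Toy model of the shear/sensitivity trade-off. Assumes u = v = 0 and
# normal illumination/observation, so k_z ~ 4*pi/lambda and the
# deformation phase is k_z * [w(x + dx) - w(x)].
wavelength = 532e-9                            # assumed wavelength [m]
k_z = 2.0 * (2.0 * np.pi / wavelength)         # ~4*pi/lambda

x = np.linspace(-0.05, 0.05, 2001)             # surface coordinate [m]

def w(x):
    """Assumed deformation: 2-um-high Gaussian bump, ~10 mm wide."""
    return 2e-6 * np.exp(-((x / 0.01) ** 2))

for dx in (0.5e-3, 5e-3):                      # small vs large shear [m]
    dphi = k_z * (w(x + dx) - w(x))            # exact phase from Delta w
    dphi_deriv = k_z * np.gradient(w(x), x) * dx   # derivative model
    print(f"shear {dx * 1e3:.1f} mm: peak |dphi| = "
          f"{np.max(np.abs(dphi)):.1f} rad, derivative-model error = "
          f"{np.max(np.abs(dphi - dphi_deriv)):.2f} rad")
```

The larger shear yields roughly ten times the phase, and hence more fringes and higher sensitivity, but departs further from the true derivative, which is exactly the trade-off stated in the text.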
Although almost all publications discuss the theory of shearography along with other aspects, including applications, several papers are devoted only to the theoretical aspects of the technique70,71,76,166,168,224.
Photographic Recording
In photographic recording, the irradiance distributions I1(xi, yi) and I2(xi, yi) are recorded sequentially over a time period T. The recorded exposure E is expressed as
$$ E={E}_{1}+{E}_{2}=2{I}_{0}T\left[1+ \cos\left(\phi +\frac{{\Delta \phi }_{QP}}{2}\right) \cos\left(\frac{{\Delta \phi }_{QP}}{2}\right)\right] $$
where T is the time of exposure. Thus, the record contains a speckle pattern modulated by $ \cos\left({{\Delta \phi }_{QP}}/{2}\right) $ . The term $ \cos\left({{\Delta \phi }_{QP}}/{2}\right) $ carries the information of the derivative of the deformation vector that an object has suffered due to loading. This information, which is in the form of a fringe pattern, is extracted by Fourier filtering9,23,55. Usually, laser light is used for Fourier filtering, but in some cases, filtering can also be performed with white light55. The fringe pattern is speckled, and the fringes have a low contrast with a cos2-type distribution. The fringes can be sharpened by multiple exposures or by making use of the nonlinearity of the recording process31,33,36,48. The contrast of the fringes is improved using apertures in front of the lens, albeit at the expense of loss of light13,29,34,36.
If the shearogram corresponding to the undeformed state of an object recorded on photo-emulsion is repositioned exactly after development, the deformation derivatives can be observed in real time as the object is deformed74,93. The theoretical framework of real-time shearography is similar to that of real-time holographic interferometry: there is a phase change of $ \pi $ in the fringe pattern as compared to double-exposure shearography168. Real-time shearography has been used to determine fractional fringe order by the translation of a wedge plate110. Instead of recording a shearogram on a photographic plate, the use of photorefractive crystals is suggested for real-time shearography282. To extract information pertaining to $ {\Delta \phi }_{QP} $ , a doubly exposed shearogram is Fourier filtered9,13, resulting in a fringe pattern. These fringes can be sharpened by making multiple exposures or by making use of the nonlinearity of the recording process.
Electronic Recording and Digital Processing
In electronic recording, the two exposures corresponding to I1(xi, yi) and I2(xi, yi) are handled independently. In one processing method, the exposure corresponding to I2(xi, yi) is subtracted pixel-by-pixel from I1(xi, yi). The voltage output of the detector is proportional to [I1(xi, yi)-I2(xi, yi)]. The brightness on the monitor is proportional to the voltage output of the detector, and therefore it can be expressed as
$$\begin{split} B\propto \left[{I}_{1}\left({x}_{i},{y}_{i}\right)-{I}_{2}\left({x}_{i},{y}_{i}\right)\right]=&{I}_{0}\left[ \cos\phi - \cos\left(\phi +{\Delta \phi }_{QP}\right)\right]\\=&2{I}_{0} \sin \left(\phi +\frac{\Delta {\phi }_{QP}}{2}\right) \sin \frac{{\Delta \phi }_{QP}}{2}\end{split} $$
The bracketed term takes both positive and negative values, and a monitor cannot display negative brightness; wherever the signal is negative, it would be clipped to zero. To avoid this loss of signal, the output signal is squared before being displayed, and therefore the brightness B on the display monitor is expressed as
$$ B\propto 4\;{I}_{0}^{2}{ \sin }^{2}\left(\phi +\frac{{\Delta \phi }_{QP}}{2}\right){ \sin }^{2}\left(\frac{{\Delta \phi }_{QP}}{2}\right) $$
The term $ { \sin }^{2}\left(\phi +{{\Delta \phi }_{QP}}/{2}\right) $ contains the random phase, and therefore represents the speckled portion of the brightness distribution on the monitor, which is modulated by the term $ { \sin }^{2}\left({{\Delta \phi }_{QP}}/{2}\right) $. Dark regions, called fringes, occur wherever $ {{\Delta \phi }_{QP}}/{2}=m\pi $ for integer values of m. Substituting for $ \Delta {\phi }_{QP} $, we obtain the condition for the formation of dark fringes as
$$ \begin{split}\Delta {\phi }_{QP}=&\left({k}_{x}\frac{\partial u}{\partial x}+{k}_{y}\frac{\partial v}{\partial x}+{k}_{z}\frac{\partial w}{\partial x}\right)\Delta x\\=&\left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x={\boldsymbol{k}} \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x=2m\pi \end{split}$$
The fringe pattern arises because of the gradient of the deformation vector. The fringe visibility depends on the correlation between the two interfering waves20,61,256.
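The subtraction-and-squaring operation described above is easy to reproduce numerically. The following sketch is purely illustrative: the speckle phase and the deformation-derivative phase are synthetic fields assumed for demonstration, not data from any cited experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512

phi = rng.uniform(0.0, 2.0 * np.pi, (N, N))  # random speckle phase

# Assumed deformation-derivative phase: a linear ramp across the field,
# giving six fringe periods over the image width.
x = np.linspace(0.0, 1.0, N)
dphi = 12.0 * np.pi * np.tile(x, (N, 1))

I0 = 1.0
I1 = I0 * (1.0 + np.cos(phi))                # undeformed state
I2 = I0 * (1.0 + np.cos(phi + dphi))         # deformed state

# Squared subtraction: B is proportional to
# sin^2(phi + dphi/2) * sin^2(dphi/2), i.e., a speckled pattern modulated
# by the deformation term; dark fringes appear where dphi = 2*m*pi.
B = (I1 - I2) ** 2
```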
It can be seen from Eq. 10 that the DC term is eliminated by the subtraction process. In another recording scheme, two speckle patterns with a phase difference of $ \pi /2 $ are obtained simultaneously using polarized beam illumination for the deformed and undeformed states of the object122. Because the phase difference is fixed, a fast Fourier transform (FFT) can be used to eliminate the DC terms of the speckle patterns. In the early 1980s, TV cameras were used to record shearograms; these were later superseded by CCD and CMOS cameras. Ng and Chau examined the noise-related performance of CCD cameras used for shearography84. Osten et al.232 demonstrated that CMOS cameras produce better shearograms than CCD cameras, even for non-cooperative surfaces under examination.
Recording of vibrating objects
Let an object be vibrating sinusoidally with frequency $ \omega $ and amplitude w. The average irradiance distribution Iav recorded over a period much longer than the period of vibration is expressed as9,63,123,155,226,265,348,366
$$ {I}_{av}\left({x}_{i},{y}_{i}\right)={I}_{0}\left(1+\gamma \cos\phi {J}_{0}\left(\mathrm{\Omega }\right)\right) $$
where $ \mathrm{\Omega }=({4\pi }/{\lambda })({\partial w}/{\partial x})\Delta x $ when the observation and illumination directions are anti-parallel and the object is illuminated normally, $ \Delta x $ is the shear, and J0 is the zero-order Bessel function of the first kind. The output from the CCD camera is proportional to the irradiance distribution, and therefore the fringe pattern on the monitor has a strong speckle background. Furthermore, the rapid decay of the zero-order Bessel function allows only a few fringes to be observed. However, stroboscopic illumination circumvents this problem: the fringes are then cosinusoidal, and the background can be removed by subtraction, as in the static case117,123,183. With stroboscopic illumination, the shearogram can be frozen at any instant by short pulses synchronized with the frequency of the vibrating object, and the phase-shift technique can be applied to obtain the derivative of the vibration amplitude155,197,251. Shearography with stroboscopic illumination and a large shear is used to measure small out-of-plane vibration amplitudes over a large range251. To measure transient vibrations, Hung et al.125 used a high-speed camera, and the images were stored in the memory of the high-speed acquisition system.
In another method to improve the visibility of time-averaged fringes, time-averaged shearograms are captured sequentially. The processor outputs a signal that produces the brightness on the monitor, which is proportional to86,120,162,226,314
$$ B\left(x,y\right)\propto {J}_{0}^{2}\left(\mathrm{\Omega }\right)\propto {J}_{0}^{2}\left(\frac{4\pi }{\lambda }\frac{\partial w\left(x,y\right)}{\partial x}\Delta x\right) $$
The visibility of the fringes is maximum where $ {\partial w\left(x,y\right)}/{\partial x}=0 $ and decreases rapidly as $ {\partial w\left(x,y\right)}/{\partial x} $ increases. Hence, a zero-order Bessel fringe occurs where the amplitude is maximum. The zero-order Bessel fringe can be shifted if one of the mirrors of the shearing Michelson interferometer is excited at the same frequency as the object86 or by using the schemes suggested by Valera and Jones120, Valera et al.124, and Chatters et al.107. The zero-order Bessel fringe can also be shifted by wavelength modulation135,162,314; however, this requires an unbalanced Michelson interferometer for shearing. It has been shown that a single Bessel fringe pattern obtained under the subtraction operation can be sufficient to calculate the phase using a phase recovery method based on genetic algorithms280. Methods for improving the visibility of fringes were described by Chen et al.226. For the study of transient vibrations, such as those arising from impact, the use of a double-pulse laser is suggested185,220. Steinchen et al.185 also mention the use of a double-flash CCD camera that can record two shearograms separated by 100 ns to 50 ms. Procedures for single-pulse and double-pulse shearography for the study of vibrating objects were described by Spooren et al.80. Another study reported the use of a double-pulse laser for shearographic recording in which a carrier frequency is generated by changing the curvature of the illumination beam between the impact stressing of the metallic plates194,196,220. A procedure to measure damping using shearography was described by Wong and Chan195. A method that produces binary phase patterns, where the phase changes are related to the zeros of the Bessel function, has been presented for estimating vibration amplitudes from shearographic fringe patterns366.
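The rapid fall-off of the $ {J}_{0}^{2} $ envelope, which is why so few time-averaged fringes are visible, can be checked in a few lines of code. The parameters below are assumed purely for illustration.

```python
import numpy as np
from scipy.special import j0  # zero-order Bessel function of the first kind

wavelength = 532e-9        # assumed laser wavelength (m)
delta_x = 10e-3            # assumed shear (m)

# Assumed range of slope-of-vibration-amplitude values dw/dx
dw_dx = np.linspace(0.0, 2e-4, 1000)
omega = (4.0 * np.pi / wavelength) * dw_dx * delta_x
brightness = j0(omega) ** 2

# The successive maxima of J0^2 decay roughly as 1/omega, so fringe contrast
# drops quickly with increasing slope; stroboscopic illumination replaces
# this Bessel envelope with constant-contrast cosine fringes.
```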
Optical Configurations
As pointed out in the introduction, a speckle pattern interferometer can be configured to be sensitive either to an in-plane displacement component or an out-of-plane displacement component. The same is true for a speckle pattern shear interferometer, i.e., it can be configured to yield fringes pertaining either to the in-plane derivative (strain) or out-of-plane derivative (slope).
In-plane gradient sensitive configuration
There are two distinct possibilities, as shown in Fig. 3a, b. In one arrangement, the object is illuminated by two collimated beams lying either in the y-z plane or the x-z plane and symmetric about the normal at a point on the object surface; the observation can be along any direction, but preferably along the normal to the surface.
Fig. 3 In-plane gradient sensitive configurations: a symmetric illumination directions, b symmetric observation directions.
The formation of dark fringes in digital shearography is described by Eq. 12, rewritten here as
$$ \left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x=2m\pi $$
When two symmetric directions of illumination are used, as shown in Fig. 3a, the fringe formation is governed by
$$ \left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i2}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x-\left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i1}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x=2{m}_{1}\pi -2{m}_{2}\pi =2m'\pi $$
$$ \left({\boldsymbol{k}}_{i1}-{\boldsymbol{k}}_{i2}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x=2m'\pi $$
where m' is an integer that takes values 0, ±1, ±2, ±3, ···. Assuming that the beams lie in the x-z plane and make angles of $ \pm \theta $ with the surface normal, then $ {\boldsymbol{k}}_{i1}=\frac{2\pi }{\lambda }\left( \sin \theta \,\hat{\boldsymbol{i}}- \cos\theta \,\hat{\boldsymbol{k}}\right) $ and $ {\boldsymbol{k}}_{i2}=\frac{2\pi }{\lambda }\left(- \sin \theta \,\hat{\boldsymbol{i}}- \cos\theta \,\hat{\boldsymbol{k}}\right) $. Hence,
$$\begin{split} \left({\boldsymbol{k}}_{i1}-{\boldsymbol{k}}_{i2}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x&=\frac{4\pi }{\lambda } \sin \theta \frac{\partial u}{\partial x}\Delta x=2m'\pi \\ \Rightarrow \quad \frac{\partial u}{\partial x}&=\frac{m'\lambda }{2 \sin \theta \,\Delta x}=\frac{\lambda {\Delta \phi }_{QP}}{4\pi \sin \theta \,\Delta x}\end{split} $$
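As an illustrative numerical example (the values here are assumed for demonstration and are not drawn from any cited experiment), take $ \lambda =532 $ nm, $ \theta =45^{\circ } $, and a shear $ \Delta x=5 $ mm. One fringe order (m' = 1) then corresponds to a strain increment of
$$ \frac{\partial u}{\partial x}=\frac{\lambda }{2 \sin \theta \,\Delta x}=\frac{532\times {10}^{-9}}{2\times 0.707\times 5\times {10}^{-3}}\approx 7.5\times {10}^{-5} $$
i.e., roughly 75 microstrain per fringe.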
The sensitivity of the principal strain fringe pattern depends on both the magnitude of the shear and the angle of illumination. This method has been implemented experimentally in several ways, including using a Michelson interferometer136,152,164,187,206,236,345,351, a birefringent crystal133,211,265, a birefringent wedge160, and a Wollaston prism379 for shearing. When a wedge plate covers half the aperture of an imaging lens or when a two-aperture or multi-aperture mask with wedges is placed in front of an imaging lens, the recording contains information about the in-plane component along with the derivatives of the displacement. This has been studied by Ng and Chau75, Mohan et al.91, Mohan and Sirohi166, and Wang et al.224.
Similarly, if, for a given direction of illumination, two observation directions symmetric about the surface normal are used, the configuration is sensitive to the in-plane gradient. Assuming that the directions of observation lie in the x-z plane and make angles of $ \pm \alpha $ with the local normal, the fringe formation is governed by
$$\begin{split} \left({\boldsymbol{k}}_{o1}-{\boldsymbol{k}}_{o2}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x&=\frac{4\pi }{\lambda } \sin \alpha \frac{\partial u}{\partial x}\Delta x=2m\pi \\ \Rightarrow \quad \frac{\partial u}{\partial x}&=\frac{m\lambda }{2 \sin \alpha \,\Delta x}=\frac{\lambda {\Delta \phi }_{QP}}{4\pi \sin \alpha \,\Delta x} \end{split}$$
In practice, $ \theta $ can take values approaching 90°, but the angle $ \alpha $ usually takes much smaller values, limited by the lens aperture. However, the experimental arrangement can be configured so that the angle $ \alpha $ can also take large values169,379,392.
When the shear is along the y-direction and the beams are confined to the y-z plane, the y-gradient of the in-plane component under these configurations is given by
$$ \frac{\partial v}{\partial y}=\frac{m\lambda }{2 \sin \theta \Delta y}=\frac{\lambda {\Delta \phi }_{QP}}{4\pi \sin \theta \Delta y} $$
$$ \frac{\partial v}{\partial y}=\frac{m\lambda }{2 \sin \alpha \Delta y}=\frac{\lambda {\Delta \phi }_{QP}}{4\pi \sin \alpha \Delta y} $$
where $ \Delta y $ is the shear. To realize this experimentally, an opaque plate with several openings is placed before the imaging lens. Each opening carries a shear plate, a ground glass plate, or just a plain plate for compensation27,32,34,39,169. This arrangement generates a carrier frequency and is well suited to Fourier filtering, whether the recording is photographic or electronic. A three-aperture arrangement with one aperture carrying a wedge plate produces an in-plane fringe pattern and a combination of in-plane and derivative fringes166. The influence of in-plane displacement on slope fringes was examined by Mohan et al.91,166. It has also been pointed out that a single illumination direction will give the in-plane derivative when the loading produces no out-of-plane deformation, or when illumination from two directions is applied sequentially163. It has been mentioned that an in-plane sensitive configuration can be devised that yields all four in-plane strains independently of the out-of-plane strains163.
Out-of-plane gradient sensitive configuration
It is evident from the phase difference in Eq. 12 that the configuration is sensitive to the gradient of out-of-plane deformation if the directions of illumination and observation are anti-parallel $ \left({\boldsymbol{k}}_{i}=-{\boldsymbol{k}}_{o}\right) $ and are along the normal to the surface. In that situation, the dark-fringe condition becomes
$$ \left({\boldsymbol{k}}_{o}-{\boldsymbol{k}}_{i}\right) \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x=\frac{4\pi }{\lambda }\hat{\boldsymbol{k}} \cdot \frac{\partial {\boldsymbol{L}}}{\partial x}\Delta x=\frac{4\pi }{\lambda }\frac{\partial w}{\partial x}\Delta x=2m\pi $$
$$ \frac{\partial w}{\partial x}=\frac{m\lambda }{2 \Delta x}=\frac{\lambda {\Delta \phi }_{QP}}{4\pi \Delta x} $$
When the shear is along the y-direction, we have
$$ \frac{\partial w}{\partial y}=\frac{m\lambda }{2 \Delta y}=\frac{\lambda {\Delta \phi }_{QP}}{4\pi \Delta y} $$
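A hedged sketch of how these relations would be applied in software is given below: it converts an unwrapped shearographic phase map to slope values for the normal illumination and observation geometry above, with the wavelength and shear magnitude assumed for illustration.

```python
import numpy as np

def phase_to_slope(delta_phi, wavelength=632.8e-9, shear=5e-3):
    """Convert an unwrapped shearographic phase map (rad) to slope dw/dx.

    Assumes anti-parallel illumination and observation along the surface
    normal and a shear along x, so dw/dx = lambda * delta_phi / (4*pi*shear).
    Both wavelength and shear values here are illustrative assumptions.
    """
    return wavelength * delta_phi / (4.0 * np.pi * shear)

# One fringe (a phase change of 2*pi) corresponds to a slope of
# lambda / (2 * shear), about 6.3e-5 for the assumed values.
print(f"slope per fringe: {phase_to_slope(2 * np.pi):.2e}")
```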
Orthogonal slopes, or a slope with doubled sensitivity, as well as curvature, can be obtained using a four-aperture arrangement in which two wedges are appropriately placed at two opposite apertures and compensating plates at the remaining apertures34. Sirohi and Mohan170 described a two-aperture configuration that is insensitive to in-plane deformation; this configuration is used to obtain the slope fringes. Using a similar configuration, the sensitivity to slope change was doubled by coating the object with a retro-reflective paint181. It was shown that the visibility of the correlation fringes increases when the object surface is covered with microspheres of 10.2 μm mean diameter230. Using two CCD cameras in a Michelson interferometer for shearing, both $ {\partial w}/{\partial x} $ and $ {\partial w}/{\partial y} $ are obtained for the same load on the object188.
Multi-component Configurations
As shown in the section on optical configurations, the x-derivative of the displacement is obtained when the shear is along the x-direction and the object is illuminated symmetrically by beams lying in the x-z plane, and the y-derivative when the shear is along the y-direction with the illuminating beams in the y-z plane. For a general direction of illumination and a shear along, say, the x-direction, the phase difference contains information on all three x-derivatives of the displacement components. However, if three different directions of illumination are used, the three derivatives can be separated50,164,204,206,314. The same is true if three different directions of observation are used. Similarly, using a single direction of illumination, four directions of observation, and a single CCD camera, all three x- or y-gradients of the deformation vector can be obtained245. A speckle pattern shearing interferometer has been reported that uses three diode lasers of different wavelengths along with three CCD cameras for the determination of two-dimensional strain distributions201. Using three directions of shear, simultaneous measurement of derivatives of displacement in three different directions with phase-shift using Fourier processing has been demonstrated372; this method is cumbersome, however, and uses six cube beam splitters. Correto and Perlo100 suggested the use of multiple laser diodes to illuminate the object surface uniformly, while Kalms and Osten239 used two mutually incoherent laser sources for illumination. More information about multi-component configurations is provided in the section on multiplexing.
Thus far, the analysis has assumed well-defined directions of illumination and observation and a flat object. In practice, a real object is far from a plane surface, and the illumination is usually divergent to cover a larger area for examination. These departures lead to errors in the measured values of the deformation derivatives. The effect of divergence was examined by Abdullah and Petzing264. Other factors, such as the magnitude of the shear, the sensitivity vector, phase-shifting errors, and rigid body motion, also influence the measurement accuracy of the derivatives187, and many studies have considered these aspects theoretically269,315. The creep of the PZT actuator on which the shearing mirror is mounted can change the magnitude of shear during the measurement, leading to an error332. A calibration procedure has been described that accounts for the local variability of the magnitude of shearing and compensates for variability of the sensitivity vector344. Although shearography is said to be immune to rigid body translation, such translation causes decorrelation, resulting in a decrease in the modulation of the fringes. The effect of rigid body translation can be compensated with sandwich shearography, which combines shearography and sandwich holography54, and by flexible mirrors42. In-plane rigid body translation is compensated by combining speckle photography and shearography256. Shifting the sensor by an appropriate amount, or digitally shifting the image, has also been suggested to alleviate the effect of rigid body translation158. Furthermore, shearography itself has been used to obtain the deformation; to extract deformation from its gradient, the effect of image doubling must be reduced137,138. When the phase distribution is spatially limited, it can be extracted from a single sheared phase distribution287. In that study, out-of-plane deformation and its gradient were captured using ESPI and shearography, respectively; the out-of-plane deformation reconstructed from the shearographic data matched the deformation measured by ESPI well. A shearography system has been described that measures the orthogonal slope maps from which the deflection profile is generated343. A 2D deconvolution technique for images generated by a phase-stepped shearography system has been described, which enables the extraction of quantitative displacement data from shearography phase change images161. Rosso et al.299 used holography to record an object on a photorefractive crystal, and the holographically reconstructed image was used as the object in a shearography setup for strain measurement. The technique of simultaneous coherent imaging and strain measurement by coupling photorefractive holography and shearography appears promising for depth-resolved strain measurement inside optically diffusing biological tissue.
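Where deformation is reconstructed from its measured gradient, the simplest scheme is cumulative integration of the slope map along the shear direction. The sketch below is a minimal illustration of that idea under assumed boundary conditions; practical implementations must additionally suppress the image-doubling effect noted above.

```python
import numpy as np

def integrate_slope(dw_dx, dx, w0=0.0):
    """Reconstruct w(x, y) from its x-gradient by cumulative integration.

    dw_dx : 2-D array of slope values (assumed unwrapped and noise-filtered)
    dx    : pixel pitch along x on the object surface (m)
    w0    : assumed deflection at the left edge (boundary condition)
    """
    return w0 + np.cumsum(dw_dx, axis=1) * dx

# The reconstruction is only as good as the boundary assumption: any error
# in w0 shifts the whole profile, and slope noise accumulates along x.
```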
Some researchers have compared fringe formation under static as well as vibratory loads for holography vs. shearography21,67,78,108,240,389, ESPI vs. shearography68,99,241,250, TV holography vs. shearography115,131,134,165,360, and digital shearography vs. electronic shearography281 when these techniques are used for non-destructive testing. Both holographic interferometry and ESPI respond to deformation and show flaw-induced displacement anomalies in the fringe pattern, whereas shearography shows flaw-induced strain anomalies. Because their sensitivities are very different, they can be used sequentially.
Shear Devices
The most commonly used arrangement for shearing is a balanced Michelson interferometer8,10,78,80,81,97,99,106,152,164,182,206,232,241,251,303,317,339. The object is imaged via two paths, and the shear is achieved by tilting one of the mirrors, thereby producing two images. The amount of shear is controlled by the angle of the tilt. Parker and Salter214 described a shearographic system in which a face of the cube beam splitter is mirrored, and a PZT-mounted mirror is used for shearing and phase-shifting. Thus, the system uses an unbalanced Michelson arrangement. They also described the limitations of this system. Instead of a Michelson interferometer, some researchers have used a Mach-Zehnder interferometer for shearing194,246,301,323,380,386,391. Many researchers have used a wedge plate17,19,27,32,34,41,58,60−62,93,97,170,181, a biprism20,71,75,76,90,111,171,280, parallel plates9,11, a plane parallel plate in reflection130,382, a wedge plate in reflection249, and Dove prisms24,25,28 in front of the imaging lens. A wedge plate in reflection provides both the shear and the carrier fringes for spatial phase-shifting. A plate with four openings at the corners of a square placed in front of the imaging lens results in sheared images at a defocused plane13,16,18. A single grating15,30,38,100,132, a Ronchi grating146,147, a cross-grating43, and a pair of gratings12,153,318 have also been used for shearing.
A holographically recorded lens has been used for both imaging and shearing37,331. A Savart plate14,143,202,321, a birefringent wedge plate160, a birefringent crystal68,133,139, a Wollaston prism122,148,177,179,262,341,358,369, and a single or double circular polarization grating329 along with polarizers have also been used as shear devices. A right-angle prism with its diagonal face coated to separate the TE and TM polarizations, with a glass plate attached to it with an index-matching liquid, followed by a liquid crystal phase modulator and a polarizer, provides both the shear and the phase-shift295,299. A polarization-based shearing device results in an almost common-path speckle pattern shearing interferometer. A split lens, obtained by cutting an imaging lens into two halves, one of which can be shifted laterally or longitudinally, has been used for lateral as well as radial shear22,23,28,29,33,57,58: translating one half in its plane gives laterally shifted images, and translating it longitudinally along the optical axis gives radially sheared images. A multi-segment lens was used to record the out-of-plane derivatives for different loadings26,33. However, split lenses and multi-segment lenses have not been used in practice because they are not commercially available; instead, sheared images were obtained by placing a wedge plate over half the aperture of the lens19. In many cases, an opaque screen with two or several openings was placed before the imaging lens, and the shearing elements were placed appropriately over these openings17,24,27,34,39,181. By judicious choice, it was possible to obtain the slope and principal strains along the x- and y-directions from the same double-exposure shearogram. Details of various shearing arrangements are given by Sirohi et al.193. Radial shear was also achieved in a cyclic shear interferometer275. Several authors have used the 4f arrangement in a shearography setup to increase the field of view317,325,340,345,359,360,378. Falldorf et al.342 used an air wedge in a 4f configuration that provided variable shear. Zhao and Chung273 used a grating realized by an electrically addressable liquid crystal SLM.
Ganesan et al.49 described a shearing arrangement comprising two right-angle prisms, instead of two mirrors, in a Michelson configuration that can introduce all five shear types. A microprism array placed in front of the camera lens has also been used for shearing225; the amount of shear is adjusted by changing the distance between the camera lens and the array, and the arrangement was claimed to be light-efficient. Wong234 described a simpler shearing arrangement consisting of a mirror and a beam splitter; it suffered from magnification error but had the advantage that it can be turned into an ESPI setup by rotating the mirror. Mohanty et al.24 used a pair of Dove prisms in front of the lens to introduce rotational shear. Debrus14 employed a Savart plate between a pair of polarizers to obtain lateral shear. Joenathan and Buerkle121,153 employed a holographic grating followed by a ground glass for shearing, as did Mihaylova et al.247,274. Murty's plane parallel plate130,382 is a convenient shear element that can be used in reflection; the magnitude of the shear depends on the thickness and refractive index of the plate and on the angle of incidence, and is therefore difficult to vary. A convenient alternative, in which an air-spaced parallel plate is formed by two glass plates whose separation can be varied easily, was adopted by Mihaylova et al.255,307 for shearing.
Loading Methods or Excitation Methods
To study the response of an object, it is subjected to an external load or excitation. As mentioned previously, the initial (undeformed) state of the object is compared with the deformed state caused by loading. The objective of loading is to generate surface displacement gradients in the region where subsurface flaws are to be detected. The response of an object to an applied load depends on several factors: the material, the size and location of the fault, the stiffness of the construction, and the type of load. However, the only free parameter in the experiment is the load; consequently, its type and amount should be selected carefully207. A number of loading methods have been applied to study object response105,127,139,142,176,281,387. These are briefly described below.
Mechanical Loading
This is the most commonly used method to obtain strain data as a function of stress, or to study the failure or fatigue response of a particular system under load. It has been applied to visualize strains, slopes, and twist, as well as for the quantitative measurement of strain, slope, twist, and crack detection8,11,23,40,145.
Partial Vacuum
The object is placed in a chamber that is slowly evacuated, thereby creating a partial vacuum. It is suggested that the vacuum should be applied first to the sample, and a reference frame should be captured. Vacuum is then slowly bled off to create a pressure difference, and then successive frames/images are captured81. Vacuum stressing is the most effective method for detecting unbonds109 and impact damage on aluminum honeycomb structures104,182. Using vacuum stressing, the presence of debonds and delamination in composite and glass fiber reinforced plastic (GRP) laminates is easily visualized59,67,81,90,108,156,265. Self-adhesive vacuum stressing is used to test aircraft components and aircraft bodies210,232,233.
Pressure Loading
This is used to detect wall thickness defects and cracks in pressure vessels, pipes, etc. The object is subjected to internal pressure between the exposures or frames/images. Steinchen et al.118,176 used this method to detect and characterize microcracks in GRP pipes. It is also used to locate disbonds in GRP pipes112. This method has also been used to inspect power plant piping for corrosion defects90 and to detect programmed flaws in steel pipes149.
Thermal Loading
This is used to test laminated structures, honeycomb structures, and reinforced structures, and to locate skin-to-core debonds and cracks176. Heating is usually provided with infra-red lamps134,283,329, a hair blower61,182, or a heat gun151. Laser heating is used to test flat-bottomed hole specimens106 and a thin aluminum plate with a scratch at the back249. Impulsive thermal loading is used for crack detection in metal plates and pipes310, and purely thermal loading with a hair dryer is used to detect cracks on a composite turbine blade265. A thermal pad has also been employed for thermal stressing369.
Microwave heating is a special form of thermal loading that can easily detect the presence of water vapor trapped inside the object of study142. Microwave excitation causes moisture in the materials to heat up and thus induces highly localized deformation detectable by shearography. This mode of stressing is only applicable to nonmetallic composites.
Induction Current Loading
Induction current, or eddy current, which is produced by a changing magnetic field, is used to heat materials and is currently employed for defect detection in NDT387.
Vibratory or Acoustical Loading
Vibratory loading, in addition to revealing the modes of an object, has been applied to test welded joints, debonds, and delamination. The object is excited by a piezo-electric device107, a shaker85, or a vibration generator such as a loudspeaker339. Generally, the excitation frequencies are much higher than the resonance frequency for visualizing cracks, delamination, or disbonds. Directed acoustic waves from a transducer appear promising for detecting fatigue cracks in metal plates368.
Impact or Transient Loading
A transient load is applied with an electro-dynamic hammer (firing pin). The shock waves thus produced excite the defects246.
Phase-Shifting
Phase shifting, either temporal or spatial, is employed when quantitative data are required about the strain field or defects. In many cases, phase maps are presented even for the visualization of gradients of the deformation vector. Temporal phase-shifting was first introduced in shearography by Owner-Petersen61, followed by Kadono et al.62, and present-day shearographic instruments are equipped with phase shifting. To obtain the phase value at each pixel, a minimum of three irradiance values with different phase steps in the interval $ 0\;{\rm{to}}\;2\pi $ are required; some algorithms use more than three irradiance values. In temporal phase shifting, these irradiance values are captured sequentially by changing the phase difference between the interfering waves. If three phase steps are used in the interval $ 0\;{\rm{to}}\;2\pi $ such that the phase is stepped by $ 2\pi /3 $, that is, the irradiance distribution in the interference pattern is captured at 0, $ 2\pi /3 $ and $ 4\pi /3 $, then the phase $ \phi $ at each pixel is obtained from62,97,112,141,248,281,324
$$ \phi ={\tan}^{-1}\frac{\sqrt{3}\left({I}_{3}-{I}_{2}\right)}{\left({2I}_{1}-{I}_{2}-{I}_{3}\right)} $$
where I1, I2, and I3 are the irradiance distributions captured at the 0, $ 2\pi /3 $ and $ 4\pi /3 $ phase steps, respectively. When four phase steps are used in the interval $ 0\;{\rm{to}}\;2\pi $ with a phase step of $ \pi /2 $ , the phase $ \phi $ at each pixel is obtained from61,105,179,197,202,265
$$ \phi ={\tan}^{-1}\frac{\left({I}_{4}-{I}_{2}\right)}{\left({I}_{1}-{I}_{3}\right)} $$
where I1, I2, I3, and I4 are the irradiance distributions captured at the 0, $ \pi /2,\;\pi $ and $ 3\pi /2 $ phase steps, respectively. The phase $ \phi $ is a wrapped phase and must be unwrapped. Unwrapping poses no problem when the shear is small. However, there are situations in which variable shear is required to obtain a derivative map. Brug198,202 presented a method to obtain derivatives in real time using a Michelson interferometer for shearing and two CCD cameras, where shear varies with time.
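Both algorithms reduce to one line of code each. The sketch below is a minimal illustration (the irradiance frames are assumed to be co-registered arrays); the four-quadrant arctangent is used so that the wrapped phase falls in $ (-\pi ,\pi ] $.

```python
import numpy as np

def phase_three_step(I1, I2, I3):
    """Wrapped phase from frames captured at 0, 2*pi/3, 4*pi/3 phase steps."""
    return np.arctan2(np.sqrt(3.0) * (I3 - I2), 2.0 * I1 - I2 - I3)

def phase_four_step(I1, I2, I3, I4):
    """Wrapped phase from frames captured at 0, pi/2, pi, 3*pi/2 phase steps."""
    return np.arctan2(I4 - I2, I1 - I3)

def phase_difference(phi_before, phi_after):
    """Deformation phase: difference of wrapped phases, rewrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (phi_after - phi_before)))
```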
The five-step algorithm is also used because it is robust to noise and insensitive to calibration errors214. Wu et al.320 described a method of in situ calibration of phase shift by employing an additional optical element in the Michelson interferometer. Common phase-shift algorithms used in digital shearography were enumerated by Zhao et al.360.
The phase steps can be obtained by translating a PZT-mounted mirror97,111,176,338,339, or using a polarization-based phase-shifter160,202,209,275,284,295,305, or a liquid crystal cell62,265,273, or a Wollaston prism along with a liquid crystal cell305, or a Wollaston prism along with HiBi fiber wrapped around a PZT cylinder96,120,124,162,179, or a combination of a quarter-wave and a rotating half-wave plate275, or by translation of a diffractive optical element146,361 or by source wavelength modulation141,152,223. A lateral shift of the source with a PZT provides a phase shift that has been incorporated in phase-shift algorithms149. It has been shown that rotation of an object introduces an additional phase that varies linearly with the angle of rotation262. The rotation of the object has been used for phase shifting.
The temporal phase-shift method is susceptible to external disturbances, such as vibration, temperature fluctuation, or rapid motion of the test object itself. The spatial phase shifting (SPS) technique is a simple way to mitigate such disturbances. In SPS, a carrier frequency is introduced such that there are three or four pixels per fringe period, depending on whether a three-step or four-step phase-shift algorithm is used150,271,283. In one publication, a four-step error-compensating algorithm was used to generate the phase map271. Alternatively, the Fourier transform method can be applied to obtain the phase difference between the undeformed and deformed states of an object325,326,345,351,372,377,380,382. The zeroth-order and the desired first-order spectra should be well separated; this requirement determines the spatial frequency of the carrier fringes. A shearographic setup with an embedded Michelson interferometer for shearing usually employs temporal phase shifting, but it can be used for spatial phase shifting if a CCD camera with a sufficiently large number of pixels is employed325.
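A minimal sketch of the Fourier-transform route is given below; the carrier frequency, filter width, and input image are all assumed for illustration. The shearogram is transformed, one first-order lobe is isolated and shifted to the origin to remove the carrier, and the wrapped phase is recovered from the angle of the inverse transform.

```python
import numpy as np

def carrier_phase(I, fx0, halfwidth):
    """Extract the wrapped phase of a shearogram with a spatial carrier.

    I         : 2-D irradiance image with carrier fringes along x (assumed)
    fx0       : carrier frequency in FFT bins along x (assumed known)
    halfwidth : half-width (bins) of the band-pass filter around the +1 order
    """
    F = np.fft.fftshift(np.fft.fft2(I))
    _, nx = I.shape
    cx = nx // 2
    # Keep only the +1-order lobe centred at column cx + fx0
    mask = np.zeros_like(F)
    mask[:, cx + fx0 - halfwidth: cx + fx0 + halfwidth] = 1.0
    # Shift the lobe to the DC position to remove the carrier, then invert
    filtered = np.roll(F * mask, -fx0, axis=1)
    return np.angle(np.fft.ifft2(np.fft.ifftshift(filtered)))

# The deformation-derivative phase is the rewrapped difference of the phases
# extracted from the undeformed and deformed shearograms.
```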
A study of the mode shapes of a turbine blade excited by a piezoelectric shaker at different frequencies was carried out by DSPI at a wavelength of 10.6 μm and by shearography at a wavelength of 532 nm to assess which of the two techniques could be used in an industrial environment, and some interesting comparative results were presented348. Measurements of the modal rotation fields of an engineering object, e.g., a beam with single and multiple damage sites, are obtained with shearography, and an optimal sampling technique is used to improve damage localization330,352,374. Because the sensitivity of shearography depends on the magnitude of shear, a numerical study of damage localization as a function of shear has also been reported355. A method for structural damage identification using cubic spline interpolation has been described376; it is based on the interpolation of modal rotations measured using speckle shearography. The signal-to-noise ratio (SNR) in shearography as a function of the magnitude of shear on the modal rotation fields has also been investigated356.
To study time-dependent deformations, it is better to capture the information in a single frame, for which the Fourier transform method of processing is preferred. However, it requires a carrier frequency so that the various Fourier spectra are separated. Several methods have been proposed to provide the carrier frequency in the shearogram47,53,82,126,194,196,222,301,330,370,372,382. Carrier fringes are generated by changing the curvature of the illuminating beam before the second exposure or the second frame47,167,221, or by a small rotation of the object171. When a Mach-Zehnder interferometer is used as a shearing device, a parallel shift of the mirror introduces the shear, and a rotation of the mirror produces the carrier frequency246,330. The use of a multi-aperture mask with wedge plates in front of the imaging lens automatically creates a carrier frequency. Bhaduri et al.291,294 employed three-aperture shearography with two wedge plates to obtain a curvature phase map, and with a single wedge plate both the displacement and the displacement gradient using the Fourier transform method. Carrier fringes have also been used for data reduction44,53, to locate and size debonds in GRP plates69, to obtain surface coordinates and slopes73,159, to obtain flexural strains40, and to determine the order of a fringe47.
Joenathan et al.190 developed a method in which the object is deformed continuously, and a large number of sheared images of the object deformation are acquired using a high-speed CCD camera. The derivative of the object deformation is then retrieved from this large set of data using Fourier transformation190,272. This method is capable of obtaining information for object displacements over 500 μm. In another study, a continuous wavelet transform was applied to extract the phase change from a series of shearograms276. In addition to the Fourier transform and the windowed-Fourier transform methods of phase recovery from a shearogram, some researchers have demonstrated other methods such as the curvelet transform for edge detection328, Hilbert-Huang transform340, and wavelet transform358. A heterodyne shearographic system in which orthogonally polarized beams are frequency shifted by $ \pm {\omega }_{s} $ is described for obtaining derivative information in real time341,358. Frequency-shifted beams illuminate the object, which is imaged through a Wollaston prism sandwiched between two polarizers on the CCD camera. The output of the CCD is a heterodyne signal over which the difference phase rides. The difference phase information is extracted using wavelet transformation and a proper filter. Andhee et al.260 carried out a comparative study of conventional and phase-stepped shearography. Conventional shearography is good for making qualitative observations in real time, while phase-stepped shearography delivers quantitative information161. A procedure to maintain sub-pixel alignment between shearograms obtained with a single camera in a polarization-based phase-stepped two-bucket shearing interferometer has been described267. The procedure was based on cross-correlating the two shearograms. A more recent paper discusses an improvement of the two-bucket shearing interferometer when dealing with the unfavorable polarization states that arise when a rough metallic surface is illuminated with a linearly polarized light284. Recently, a theory of surface phase-resolved shearography has been presented that considers speckle statistics and delivers less noisy specklegrams373.
Multiplexing in Shearography
Separation of the in-plane and out-of-plane gradient components of the deformation of an object surface generally requires illumination of the object from three different directions or observation from three different directions. In spatial-division multiplexing (SDM), the object is illuminated from a single direction and viewed simultaneously from three different directions through three shearing interferometers. The first attempt to obtain all three displacement derivatives was made by Hung and Durelli18. Sharma et al.32,41 used a three-aperture arrangement to obtain slope, curvature, and deformation. A similar arrangement was used by Bhaduri et al.294 with the Fourier transform method; they also used the (1, N) spatial phase-shifting method to obtain derivative phase maps283. Filtering the ±1 order halo of a doubly exposed shearogram, recorded with a holographic grating as the shear element, yields the slope and curvature fringes, and filtering the ±2 order halo yields the slope fringes with double the sensitivity38. Speckle interferometry and speckle shear interferometry have been combined in a simple setup that uses four apertures39; deformation, slope, and curvature fringes can be obtained by filtering the appropriate halos. Iwahashi et al.43 used a cross grating to obtain three slope distributions simultaneously. Joenathan et al.26,33 extracted three out-of-plane derivatives from the same shearogram using a multi-segment lens for imaging. Barrera et al.365 used an aperture-based spatial phase-shift multi-shearing-direction system to acquire information in three different shearing directions simultaneously; because the wedge plates were fixed in orientation on the apertures, neither the shear nor the carrier frequency was variable. Dymny et al.154 demonstrated a configuration that displays the x- and y-derivatives simultaneously using two Michelson interferometers and two CCD cameras. The use of two CCD cameras and two shearing units to simultaneously obtain the x- and y-derivatives is also recommended by Steinchen et al.176. Siebert and Schmitz203 used two CCDs to measure two orthogonal gradients simultaneously in a temporal phase-shift digital shearography system using two orthogonal polarization states; they also suggested that the two orthogonal gradients can be recorded sequentially on a single CCD camera using a liquid crystal cell as a fast polarizer.
Mohan et al.131 combined TV holography and shearography into a single setup such that the deformation map and the derivative map are displayed simultaneously, one above the other, on the monitor. This setup can be used to obtain the deformation and derivative maps for both static and vibratory objects. James and Tatam204 used sequential illumination of the object from three directions and a single CCD camera. The illumination was synchronized with the frame rate of the CCD camera. Each frame contains an image of an object when illuminated from one of the illumination directions. Kästle et al.201 used wavelength-division multiplexing (WDM) to determine two-dimensional strain distributions. They used three diode lasers to illuminate the object simultaneously with three different wavelengths, and the sheared object images in each wavelength were recorded by three CCD cameras. It was shown that using three directions of illumination and two orthogonal shears, all six derivative components can be measured164. A compact system based on this has been reported206. Groves et al.209,223 described a shearography system to measure two orthogonal slopes quasi-simultaneously using polarization multiplexing. They used an unbalanced polarization-sensitive Michelson interferometer to produce sheared images in orthogonal polarizations. They also described a multi-directional viewing configuration with one direction of illumination and a single CCD camera242,244. The images were transported by coherent fiber bundles to the CCD via a balanced Michelson interferometer. The quality of correlation fringes was very poor owing to decorrelation of the speckle patterns due to the influence of environmental variables on the fiber bundles. These authors reported another multi-component shearography system that can fully characterize the surface strain using time-division multiplexing of the measurement channels245. An application of this system to measure strains near a weld joint in a hydrostatically loaded pipe was presented. Francis et al.285,300,302 used a similar pulsed laser shearographic system to measure the surface strains on static and rotating objects. Groves et al.228,279 have also described a multi-directional illumination and single-direction observation system to fully measure surface strain. This system was used to measure crack length and surface strain.
A shearography system was described that employed collinear beams from three different lasers for illumination, a dichroic mirror in the Michelson interferometer, and a single CCD camera350; this provided shear along the x- and y-directions simultaneously and was used to obtain shearograms of the x- and y-derivatives of out-of-plane displacement simultaneously. Xie et al.354 described a polarization-based digital shearography system containing two Mach-Zehnder interferometers to record shearograms in two orthogonal directions simultaneously, and applied it to the NDT of honeycomb structures. Gao et al.380 described a shearography setup that employs a Mach-Zehnder interferometer for shear and spatial frequency generation for the simultaneous measurement of out-of-plane deformation and its first derivative. In a recent paper, Zhao et al.378 described a combined DSPI and shearography system that yields both the deformation and the slope phase maps simultaneously using Fourier transform processing. In the same paper, a system is described that uses three different lasers (blue, green, and red) to illuminate the object from three different directions and a three-chip CCD, one chip responding to each color, to yield 3D deformation and derivative phase maps using Fourier transform processing. They also described another multiplexed system, similar to that described earlier380, which uses a Mach-Zehnder interferometer for shear and spatial frequency generation and a single CCD camera to yield the deformation and the derivative phase maps391. Instead of a spatial frequency carrier for generating the phase map from a single frame, a pixelated sensor has been used381: a large number of shearograms were captured while the object underwent deformation, a spatiotemporal filtering algorithm in the complex domain was applied, and phase maps were obtained from adjacent shearograms. Sun et al.375 described a system that uses a spatial light modulator (SLM) for both shear and phase shift. Another system used two apertures to generate the spatial frequency and an SLM for shear385; the out-of-plane deformation and its derivatives were obtained using Fourier processing. Zhong et al.386 used two Mach-Zehnder interferometers in tandem to obtain gradients of deformation in two orthogonal directions along with the twist. Zhang et al.390 described a shearography system in which one of the mirrors of a Michelson interferometer is replaced by an SLM, which provides both shear and phase shift. It should be noted that SDM, WDM, and polarization multiplexing allow the capture of derivative and deformation information simultaneously, whereas TDM does so sequentially.
An optical arrangement is described in which an ESPI system can be converted to a digital shearography system by rotating one mirror of a Michelson interferometer by 45 degrees270. Thus, both the displacement and its derivative can be obtained sequentially. Another arrangement that uses a two-aperture screen in front of the imaging lens has been used to measure the out-of-plane displacement and slope sequentially271. Because of the two apertures, a carrier frequency is generated, and therefore Fourier processing is used to obtain the phase map. Several researchers have combined the ESPI/DSPI system and the shearography system into a single unit that can measure the deformation and its derivatives either simultaneously or sequentially99,134,175,270,326,339,378,391,392. A single-axis system that comprises a shearography system and a speckle photography system is described that measures two out-of-plane displacement gradients and four in-plane displacement gradients to fully characterize the surface strain268. The in-plane displacement component gradients are obtained by numerical differentiation of the in-plane components measured by the speckle photography system. Wang et al.370 described an optical configuration that uses a Wollaston prism for shear and can yield x- and y- slopes or x-slope and a radial slope simultaneously. The carrier frequency is provided by the tilt of a mirror, and Fourier transform processing is used to generate phase maps. Simultaneous measurement of deformation and its first derivative was demonstrated using the spatial phase shift method followed by Fourier processing326.
A common-path shearography setup was described for the simultaneous measurement of the in-plane and out-of-plane strain components under dynamic loading377, using two laser beams with different wavelengths that symmetrically illuminate the test object, a plane parallel plate for the shear, and a single CMOS camera. The phase was recovered from a single frame using Fourier processing. Another method to separate the in-plane and out-of-plane strains is to use a symmetric illumination configuration with a Wollaston prism as a shear element in which sequential recordings are made379. A recent paper describes the design and development of a high-speed shearography system and the measurement of surface strain during a high-speed impact on aluminum and composite samples393. This study also compares the performance of shearographic systems based on the Michelson and Mach-Zehnder interferometers.
Low Coherence Shearography
The temporal coherence requirement for shearography is not stringent because polarization-based shearographic systems are nearly common-path, and the path difference in a balanced Michelson or Mach-Zehnder interferometer is very small. A source of adequate temporal coherence is required only when an unbalanced Michelson or Mach-Zehnder interferometer is employed for shearing and wavelength modulation is used for phase stepping. In general, a light source with adequate brightness, spatial coherence, and limited temporal coherence suffices for shearography to visualize the deformation gradients232. In an earlier study, it was demonstrated that shearographic fringes were formed on the object even when the coherence length was only 5 mm80. Falldorf et al.238 employed a mercury arc lamp to study the deformation of a mechanically loaded alloy strip using a four-step phase stepping method. Wang and Tieu254 also described the use of a low-coherence source in shearography: a mercury lamp was used to measure the out-of-plane displacement derivative of a centrally loaded flat plate. In another study, two laser diodes without temperature stabilization were used in a shearographic system186; laser diodes are easier to integrate into compact shearographic systems175. Further, for the examination of a large surface area of an object and to overcome the non-uniform illumination due to the Gaussian profile of the laser beam, the use of two or more mutually incoherent laser beams has been suggested239,357. The importance of using mutually incoherent sources is evident from the photographs taken with illumination from two mutually incoherent sources and two mutually coherent sources, shown in Fig. 4.
Fig. 4 Shearogram of a part of an Airbus A310 rudder side panel (1 m × 1.5 m), made with a two mutually incoherent sources, b two mutually coherent sources (Courtesy Prof. Osten).
Applications of Shearography
Shearography is a non-destructive, full-field, non-contact inspection technique that is widely used as a qualitative tool for structural health monitoring and for locating and identifying damage and defect sites. It has also been used to obtain quantitative data on strain fields and on the location and size of defects. Some well-known applications of shearography are presented in this section.
Visualization and measurement of strain fields, slope, curvature and twist
Early work on shearography focused on visualizing twist contours8, derivatives of surface displacement9,10, in-plane strains13,164, slope contours11−17,20,22−29,32,34,36,40,153,160,169,170,181,191,361,365, the slope of the modal amplitude of vibration9,16,117, and curvature15,19,32,35,36,38,79 in plates, beams, and other objects. Slope contours are also obtained by (i) shifting the illumination source60,94,114,144, (ii) changing the wavelength113,174, (iii) changing the refractive index of the medium around the object17, and (iv) rotating the object128,156,169,200 between exposures. These methods were reviewed by Shang et al.218. It has been pointed out that the generation of slope contours by a shift of the source is quite a complex procedure248. Another study showed the dependence of slope and shape determination on the direction of illumination286. Owner-Petersen61 used digital speckle pattern shear interferometry (DSPSI) for both qualitative and quantitative strain evaluation, and commented that DSPSI is not useful for quantitative evaluation unless the space-bandwidth product is increased. It should be noted that the visualization of strain and slope continued even after photographic recording was replaced by video detection105.
With photographic recording, curvature fringes are obtained as a moiré of the slope patterns obtained with a three-aperture shearographic setup in which two outer apertures carry wedge plates32,34,35,41,119,235. In such a configuration, the displacement and its first derivative influence the curvature fringes235. Rastogi140 obtained the curvature and twist fringes by configuring an experimental setup using four mirrors. Murukeshan et al.205,229 used two Michelson interferometers in tandem to generate two overlapping slope patterns, thereby producing curvature fringes. In contrast to multiple imaging using a three-aperture shearographic setup, four exposures are sequentially made with the object laterally shifted after two exposures, thereby recording two shifted slope patterns that produce curvature fringes due to the moiré phenomenon95. A similar result would be obtained if the photographic plate was shifted in its plane after two exposures. It is shown that the curvature fringes can be sharpened by multiple exposures31,36,48,219.
Quantitative measurement of strain, slope, curvature, and twist
Quantitative measurement of strain, slope, curvature, and twist requires the measurement of shear in addition to the phase measurement. Methods to obtain the magnitude of shear have been described by Ng116, Andersson et al.216, and Khaleghi et al.338. Steinchen et al.97,111,136,188 measured the in-plane strains and slopes of a cranked bar and a disc under compression using a four-step phase-shift algorithm and obtained values that closely matched the theoretical values. They also obtained time-averaged shearograms of a disc vibrating at different frequencies. Winther50 measured the strain in a pipe clamp. He et al.200 measured the slopes of various objects by rotating them between frames. Steinchen et al.97 described a method to obtain pure in-plane strain and pure out-of-plane gradient using a single direction of illumination. They also described another method using two independent directions of illumination111,212,236. A method to measure the derivative of the vibration amplitude was described by Valera et al.124,162 using a fiber-based speckle shear interferometer. The out-of-plane strain in a sandwich plate with full-potted inserts was measured using shearography and compared with analytically obtained results292,304. The greatest strains occurred in the vicinity of the boundary edge, but not at the edge.
Bhaduri et al.291 used a three-aperture arrangement along with Fourier transform processing to obtain curvature contours. Tay et al.263 obtained the first derivative of displacement by applying a temporal wavelet transform, and the second derivatives, curvature and twist, were extracted by applying a continuous Haar wavelet transform to a series of speckle patterns. Fu et al.322 captured a large number of frames of a continuously deforming object with a high-speed CCD camera and extracted the curvature and twist contours at any instant using Fourier and windowed-Fourier transforms. Balakrishnan et al.349 used a derivative-based phase tracker to retrieve the curvature and twist maps directly from a single shearography fringe pattern. The deflection, slope, and curvature contours of bent plates were generated computationally from the speckle patterns recorded in front of the plates before and after deformation383. Bhaduri et al.313 demonstrated a method to determine the curvature and twist of an object quantitatively using two-channel double-aperture digital shearography. The double-aperture mask produces spatial carrier fringes inside the speckle, and two channels of the system produce two sets of slope phase maps that are shifted to obtain the curvature and twist data. To measure the in-plane strains in orthogonal directions, a shearography setup uses two Michelson interferometers to create two shearograms, one in the x shearing direction and the other in the y shearing direction, which are recorded by a single CCD camera351. Two lasers with different wavelengths are used for illumination, and the windowed-Fourier transform is used to obtain the phase maps. For measuring the in-plane strains under dynamic loading, the same shearography setup was used, but with two CCD cameras345. A speckle strain gauge was developed with a gauge length of approximately a millimeter that was used to measure the biaxial strain and Poisson's ratio of vinyl tape, polyvinylidene fluoride sheet, and rubber51,66. It was also used to measure the thermal expansion coefficients of several metals and non-metals.
Shearography is accepted as a powerful non-destructive testing technique owing to several unique features: whole-field observation, resilience to environmental disturbances and vibrations, a low coherence requirement, a simple optical setup, readiness for shop-floor applications, and the ability to examine large structures, albeit with demanding illumination. However, some critical issues must be considered when applying shearography for non-destructive testing83,134. An appropriate stressing method must be chosen so that the desired defect or flaw appears. The field of view and resolution are equally important for testing throughput. The sensitivity of shearography depends on the magnitude of shear, but too large a shear causes speckle decorrelation because the configuration no longer remains common-path. Moreover, with a large shear the measured quantity is no longer a derivative but the relative displacement between two neighboring points on the object separated by the shear distance. The actual relationship between resolution and field of view is application-specific because it depends on the geometry of the test structure, the condition of the test surface, and the available laser power81,83. A method to detect flaws by measuring the first derivative and numerically obtaining the second derivative was proposed and demonstrated, along with the influence of the magnitude of shear on flaw detection334. Theoretical and experimental investigations have been reported on the relationship between the shear distance and performance issues such as the immeasurable zone in a target with a discontinuity, the signal-to-noise ratio, the sensitivity, and shear distortion303. Furthermore, it has been shown that by using a Wollaston prism for shearing and a 4f optical configuration, the shear can be varied over a large range384.
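The trade-off just described can be made explicit with the commonly quoted phase-slope relation of shearography. A minimal sketch, assuming illumination and observation nearly along the surface normal, out-of-plane displacement w, wavelength λ, and shear δx:

```latex
% Phase difference introduced by a shear \delta x, for illumination and
% observation assumed to be nearly normal to the surface:
\Delta\phi = \frac{4\pi}{\lambda}\bigl[w(x+\delta x,\,y) - w(x,\,y)\bigr]
           \approx \frac{4\pi}{\lambda}\,\frac{\partial w}{\partial x}\,\delta x
% The sensitivity grows linearly with \delta x, whereas the finite-difference
% approximation to the slope holds only while \delta x remains small.
```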
The first application of shearography to NDT appears to be credited to Hung21,240, who applied it to detect cracks in a pressure vessel and ply separation in a truck tire. There are several publications on this topic originating from him and his research group, dealing with different aspects of non-destructive testing21,45,52,72,105,142,139,158,165,173,178,180,211,217,240,258,265,288,308,310. Shearographic NDT has been applied for defect detection in plates and diaphragms23,27-29. It was observed that theta shear vividly brings out defects when the deflection profile due to loading is circularly symmetric28. It has been shown that NDT of difficult-to-reach objects can be conducted by transmitting the sheared images through a coherent fiber bundle to the detector58,252. Kim et al.241 used it to detect internal cracks in pressure pipelines by applying pressure loading. When the magnitude of the shear was smaller than the crack length, shearography predicted the length correctly; when the shear was larger than the crack length, shearography overestimated it. In contrast to the detection of disbonds, shearography offers less promise for locating and sizing fatigue cracks in riveted aluminum aircraft skins81. Shearography has been used to estimate the size and depth of disbonds in GRP plates59,67 and helicopter rotor blades184,243 using vacuum stressing, and time-averaged shearography has been used to estimate the size of flaws in 2-ply and 4-ply beams63. Shearography offers identical inspection capabilities for aircraft both in production and in the field103,109,127,202,210,215,239. It is used to detect cracks and disbonds in skins77,81,104, debonds in lap joints77, debonds in fan cases109, disbonds in skin panels85, corrosion in aluminum and composite honeycombs107, poor or improper repairs, and impact damage to epoxy-graphite or thermoplastic aircraft structures103,127,189. Inspection of test panels that simulated Boeing 727 and 737 fuselage construction was carried out using shearography77,103; the panels contained defects that represented problems specific to aging aircraft. Principal component analysis (PCA) has been incorporated into shearography, in which a series of phase maps of a sandwich structure with defects at various depths are obtained as the temperature decreases336; PCA improves the defect detection capability of shearography. A good description of the use of shearography to conduct NDT of composites is provided by Newman363.
Several groups have used compact and mobile shearographic systems for the inspection of aircraft structures64,103,127,189,202,210,214,215,239,243,250,261,293. A tripod-mounted system has been used to inspect (i) honeycomb structures, (ii) monolithic FRP structures, and (iii) metallic structures using 3-step and 4-step phase-shifting and all known methods of stressing127. Another mobile shearography system, using two CCD cameras with Savart plates in front of them and phase-shifting by a PZT-activated mirror, captures 25 frames per second143. This system and its improved version have been used for defect detection in aircraft materials202. It records the phase-stepped images simultaneously and performs better in a hostile environment than a sequentially phase-stepped system. A shearography system was developed for comprehensive testing of aircraft structures and was used to inspect the rudder of Airbus aircraft for delamination and debonding using vacuum stressing210,215,239. The sensing head is tripod-mounted and uses a tripod-mounted Nd:YAG laser at 532 nm. Fig. 5a shows a rudder shell from an Airbus A310, constructed from CFRP honeycomb, with marked fault regions that were tested by shearography. Fig. 5b is the shearogram obtained with thermal loading, and Fig. 5c is the demodulated phase map.
Fig. 5 Shearographic testing of an aircraft panel (500 mm × 500 mm) using thermal loading
a A310 rudder shell, CFRP-Honeycomb, with marked fault regions, b shearogram after thermal loading, and c demodulated phase map (Courtesy Prof. Osten).
Surfaces with non-optimal reflectivity are referred to as non-cooperating surfaces. In most cases, they can be treated with paint; however, when any such treatment is forbidden, other optical solutions are employed, such as the use of diffusers and multiple laser diodes215,333. The important features of the shearography system are its ability to deal with non-cooperating surfaces and a fringe-processing system that outputs excellent final results215. The shearography system is used on the shop floor to test aircraft components and whole aircraft at a maintenance facility215,232,233,239, and for testing automobile components such as CFRP cross bars for internal imperfections232,239. For testing large objects, the use of two mutually incoherent lasers, or two mutually incoherent beams derived from the same laser, has been suggested239. Comparison with ultrasonics and thermography favors shearography as a versatile inspection technique in industrial settings owing to its full-field access, robust mode of operation, and fast delivery of results. The design details of a lightweight mobile shearography system that uses a diode laser for illumination, along with its applications for the inspection of aircraft components and systems, are given by Kalms and Jüptner261 and by Kalms293. In another comparative study, shearography was found to be the preferred technique for reliable detection of manufacturing or in-service defects, such as disbonds and impact damage, in aircraft structures278. Shearography was applied for surface strain measurements of a hydrostatically loaded ABS pipe, and the values obtained were compared with those obtained by Bragg gratings and resistance gauges289. In a study specific to detecting defects in thin mirrors, shearography was compared with deflectometry335, and deflectometry was observed to deliver better results. In another study, a specularly reflecting surface was illuminated with scattered light, and shearograms with good contrast were recorded371. This work advanced the application of shearography to the NDT of non-cooperating objects.
Shearography has been used to detect microcracks in poly(methyl methacrylate) (PMMA)148, foam-adhesive bonded metal laminates108, GRP pipes176,281, internal cracks in pressure pipelines241, and fatigue cracks in titanium Ti-10-2-3 alloy279. Inspection of a horizontal stabilizer and an aileron for delamination with vacuum stressing, a radome for delamination, and rotor blades for debonds with thermal stressing has been reported182. It has also been shown that post-processing of the shearogram presents flaw details better against a fringe-free background. A helicopter rotor blade with induced defects at different depths was inspected with shearography using an infrared lamp250,297,298. It was observed that the number of fringes in the characteristic bull's-eye pattern decreased with the depth of the defect. Another interesting observation was that the fringe pattern in the shearogram of two partially overlapping identical disbonds was visually similar to that of a single disbond288. Comparative studies of the NDT of composite materials with shearography and thermography have been reported151,180,288,319. Shearography is employed to inspect power plant pipes for defects arising from corrosion90, pipes in the chemical industry145, automobile dashboards, and some components used in aircraft and helicopters90. It has been used to test for debonds on the external tanks and rocket boosters of space shuttles at the Kennedy Space Center98,102.
Defect detection using TV-holography and shearography can be very fast, but the depth and size of a defect are difficult to determine accurately. Conversely, ultrasonic techniques can characterize defects in detail, including both shape and depth, but they are slow. This observation is based on NDT carried out on a glass-reinforced plastic plate with programmed voids134. Shearography has been applied to characterize the thinning of an aluminum plate by modeling the thinned region as a diaphragm on a rigid boundary193,199. Shearography provides a rapid method to check the hermetic seals of microelectronics packages for leaks178. It has also been used to measure strains in electronic power devices under normal operating conditions221. Indeed, it is a versatile technique for the quantitative measurement of vibration amplitudes and for non-destructive testing257.
Hung173,211 carried out testing and evaluation of composites using shearography. Hung et al.217,240 evaluated the soundness of bonding in laminated composite structures using vacuum and vibratory stressing and carried out NDT of an adhesively laminated plate using vacuum stressing. Shearography can be used in either a fringe mode or a phase-map mode for non-destructive testing in real time324; the phase-map mode is more sensitive than the fringe mode for defect detection, and several algorithms for obtaining phase maps have been demonstrated for real-time NDT. An endoscopic shearography system to inspect the inside of cylindrical surfaces for defects was recently reported362; this system, with radial sensitivity, is used for the internal inspection of adhesion faults in composite-material pipes. A shearographic system in which the image is transmitted through a rigid endoscope has been used for NDT of a turbine blade in an automobile turbocharger, and also of laminates252. Recently, shearography has been used to inspect metallic parts reinforced by GRP387 and to detect the shape and size of defects on metallic substrates under composite repairs388.
Residual stress
For the measurement of residual stress, stress relief is achieved either by drilling a blind hole or by making a micro-indentation on the object surface. The change in stress results in changes in the in-plane and out-of-plane displacements, which are detected by shearography129. This technique has been used to measure residual stress in plastic and reinforced materials65,208, concrete88, and composites208. For an almost real-time presentation of the results, the recording was performed on a thermoplastic material and the stress was relieved by indentation56. A large-shear arrangement has been used in the determination of residual stresses using ball indentation46,142,171,172,211,265 and shallow blind-hole drilling65,88,172,208. Viotti et al.277 described a portable shearographic system to measure residual stresses by the hole-drilling technique, based on a compact in-plane interferometer with a single symmetrical dual-beam illumination.
Art conservation
A shearography system developed by Kalms et al.210 and Osten et al.215 was used to explore the structural robustness of seventeenth-century Byzantine icons231. In another program, a shearography sensor was developed along with an impact assessment procedure (IAP)290. The sensor was used to detect defects in wooden panel paintings and canvas paintings using thermal stressing. A tripod-mounted system was used to examine paintings in the Benaki Museum, Greece233. Extended dynamic range (EDR) imaging technology has been implemented in a shearography system296. EDR imaging in shearography offers several benefits, such as low noise in phase maps, less effort in aligning the illumination beam, and the use of multiple camera shutter speeds. EDR imaging shearography has also been used to detect defects in wooden panel paintings and canvas paintings using thermal stressing. In another study, terahertz imaging was combined with shearography under thermal loading for defect detection in wooden panel paintings306,311; the shearography sensor provides surface and sub-surface information about the artwork, while the terahertz imaging sensor probes the bulk of the sample. To gain an understanding of the appearance and classification of faults in complex anisotropic objects such as artwork, finite element method (FEM) simulations have been carried out along with shearographic measurements367. By comparing simulations with different parameters against the measurements, the depth and width of the faults can be determined with accuracies of approximately 200 μm and 1,000 μm, respectively. The work on art conservation has been further enhanced by incorporating virtual reality into shearography so that a user can remotely interact with the artwork312. Researchers have demonstrated that false height in virtual reality can be used to display shearography displacement-gradient maps. This technique allows the user to view both the displacement-gradient measurements and the original colors of the painting.
The effect of environmental variables such as temperature and humidity on oil-on-canvas paintings has been studied continuously over a period of 30 hours in a museum; significant activity was observed when the temperature varied from 73.5 °F to 75.9 °F and the humidity from 46.6% to 49.8%338. Using a combined DSPI and shearography system, restored art objects were examined with acoustic loading over a range of frequencies to detect delamination and defects339. An important factor in detecting delamination, cracks, and holes in artworks, particularly paintings, is the type of loading employed. For example, thermal loading reveals trapped humidity, trapped air, and delamination, and acoustic loading has been used to detect delamination. A detailed description of various types of loading, such as pressure, temperature, humidity, and vibration, and their influence on the sensitivity of flaw detection on wooden and canvas paintings was provided by Buchta et al.346. Fig. 6 shows a painting on canvas of size 290 mm × 250 mm with a narrow wooden frame. It is illuminated with a green laser beam, and the double-exposure shearogram displayed on the monitor shows a defect in the center. Thermal stressing using infrared lamps was used as the method of excitation.
Fig. 6 a A painting on canvas, b illuminated painting along with its double-exposure shearogram (with permission from Ref. 290).
Other applications
It has been demonstrated that theta shear can be used to measure the tilt of planar rough surfaces with good accuracy, which depends inversely on the magnitude of the theta shear24,25. Stroboscopic shearography has been used to measure ultrasonic waves propagating through aluminum bars183, and the effect of the direction and magnitude of shear on the propagation direction of the ultrasonic waves was also investigated. Shearography has also been used to obtain the flexural strain in a turbine blade197 and to examine large objects such as metal cabinets337. It has been demonstrated that stroboscopic shearography not only identifies defects but can also quantify them388. A feasibility study to examine bridge response by shearography has been reported89,101; unfortunately, no shearograms were presented in this study.
Quantitative whole-field density measurements have been shown to be possible using shearography in a hostile industrial environment130. Essentially, shearography maps the refractive index in the flow, which is then related to the density. Both density and temperature have been mapped in a flow field177, and shearography has been applied to measure the temperature distribution in a liquid petroleum gas flame92. Shearography has also been applied, with a pulsed laser, to the experimental analysis of high-power ultrasonic machining components157.
An interesting application of shearography is the measurement of distance167. A fringe pattern is created by the motion of the source, in which the fringe width is related to the distance; this can also be used to measure velocity192. A shearographic setup using a Savart plate for shearing generates a fringe pattern in which the fringe width is proportional to the depth of the object, thus turning it into a profilometer that provides absolute distance measurements of rough surfaces in the sub-micrometer range321.
A conceptual approach to applying shearography to detect abnormal growth in biological cavities was presented through experiments on a half-cylindrical tissue phantom specimen253. Features that resemble abnormal growth in body cavities, such as polyps in different parts of the gastrointestinal tract, especially the large intestine, were introduced in different layers of the phantom, and it was possible to detect these abnormalities using shearography. Recently, Buchta et al.364 simulated tumor-like tissue in a silicone phantom and applied shearography to detect regions of different stiffness under pressure loading. Fig. 7a shows a sketch of the silicone sample (with dimensions in mm) in which hard silicone spheres of different sizes are embedded at different depths. Two types of silicone with different stiffness properties were chosen: a hard silicone representing cancerous tissue and a softer one representing normal tissue. Fig. 7b is a photograph of the transilluminated sample, and Fig. 7c shows the strain map of a 10 mm sphere obtained with 1.2 mm shear and a 4 mbar pressure change. Shearography was able to discriminate between the two types of silicone; however, it could not provide correct information about the size and depth of the hard silicone spheres. Nevertheless, this technique may prove important for guidance and discrimination during minimally invasive surgery.
Fig. 7 a Sketch of the silicone sample (dimensions in mm), b photograph of the transilluminated sample, and c processed shearography data for two hard-silicone spheres (with permission from Ref. 364).
Shearography can be configured for the study and testing of small objects if the conventional imaging lens is replaced by a long-focus microscope objective307,309. In one study, a pair of parallel plates separated by a small distance acted as both the shearing device and the phase shifter307. A pressure-sensor diaphragm (1.5 mm × 1.5 mm in size and 25 μm thick) etched in a silicon substrate of size 3 mm × 3 mm and approximately 1 mm thickness was studied using a low-magnification zoom lens with a longer working distance309; this setup used a conventional Michelson interferometer for shearing and phase-shifting. Udupa et al.259 investigated a 200 mm Si wafer for defects by thermally loading the wafer with an infrared lamp; shearography successfully detected defects in the millimeter range.
Piezoelectrically, laser-, and impact-generated Lamb waves in CFRP have been studied using pulsed-laser shearography employing a Mach-Zehnder interferometer as the shearing device; this system was also used to generate carrier fringes for spatial phase shifting301. Shearography has been employed to evaluate the thermo-mechanical behavior of various coatings on different substrates and to determine thermal stress and strain as a function of temperature266,316. Owing to its low temporal-coherence requirement, shearography is able to measure the axial positions of phosphors embedded in a medium347. Phosphors embedded in a medium were illuminated by a structured beam from the excitation laser, the phase of the fluorescent wave from the phosphor was measured by shearography using a balanced Michelson interferometer for shearing, and the axial position was reconstructed from the measured phase. A combination of fluorescence microscopy and shearography can provide the axial depth of a phosphor placed several hundred micrometers deep in a highly scattering medium with good resolution.
Several publications describe the measurement of the shape of an object with shearography114,227,248,286; in all of them, the slope was integrated to obtain the shape. Shearography can, however, also recover the form of an object in a more rigorous way. The basic framework of the procedure was provided by Elster and Weingärtner213. By applying several shears in the x- and y-directions, several x- and y-derivatives can be measured, and the form of the object can then be reconstructed by integrating these derivatives.
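A minimal sketch of the final integration step, assuming demodulated, unwrapped slope maps `dzdx` and `dzdy` on a regular grid (the multi-shear reconstruction of Elster and Weingärtner213 is more elaborate; this shows only a naive two-path integration):

```python
import numpy as np

def shape_from_slopes(dzdx, dzdy, dx, dy):
    """Reconstruct a height map by cumulative integration of slope maps.

    dzdx, dzdy : unwrapped slope maps (dz/dx and dz/dy) on a regular grid
    dx, dy     : grid spacings along x and y
    """
    # Path 1: integrate dz/dy down the first column, then dz/dx along rows.
    z1 = np.cumsum(dzdx, axis=1) * dx + np.cumsum(dzdy[:, :1], axis=0) * dy
    # Path 2: integrate dz/dx along the first row, then dz/dy down columns.
    z2 = np.cumsum(dzdy, axis=0) * dy + np.cumsum(dzdx[:1, :], axis=1) * dx
    # Averaging the two paths reduces the path dependence caused by noise.
    return 0.5 * (z1 + z2)
```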
Conclusions
Since its birth in 1973, shearography has evolved into a technique for visualizing and measuring displacement derivatives. Strictly, it measures a difference quotient, which approximates the derivative when the shear is small, while the sensitivity grows linearly with the magnitude of the shear. Because shearographic configurations are nearly common-path, the requirement for temporal coherence is relaxed, and hence shearography can be performed with low-coherence sources such as a mercury source or an LED. Waves from the object can be transported using imaging fiber-optic cables, and therefore inaccessible objects can be examined. The sensitivity also depends on the angle between the local surface normal and the illumination or observation beam, with the maximum sensitivity obtained for the derivative of the out-of-plane displacement (slope). Shearography has been accepted as an NDT technique for examining a variety of objects. Several mobile and non-mobile systems have been developed, by researchers for their own use and by companies for commercialization. The first commercial 3D-shearographic system was offered by Ettemeyer in the early 1990s through his company, and another system using an air-cooled argon-ion laser was developed in 1994. Some of the companies that offer shearography systems are (i) Dantec Dynamics A/S, Denmark; (ii) Optonor, Norway; (iii) isi-sys GmbH, Germany; (iv) Laser Technology, Inc., USA; (v) Laser Optical Engineering Ltd, UK; and (vi) Carl Zeiss, Germany.
Acknowledgements
It is my pleasure to thank Prof. Wolfgang Osten for inviting me to prepare this article. Special thanks are due to Prof. Jayant Sirohi, who painstakingly went over the manuscript and offered critiques that improved the paper considerably. I acknowledge the support provided by the Department of Physics, Alabama A&M University, Huntsville, Alabama, USA. I also thank the reviewers of the manuscript for their constructive suggestions and for bringing some of the references to my attention.
References (393)
[1] Leendertz, J. A. Interferometric displacement measurement on scattering surfaces utilizing speckle effect. Journal of Physics E: Scientific Instruments 3, 214-218 (1970).
[7] Sollid, J. E. Holographic interferometry applied to measurements of small static displacements of diffusely reflecting surfaces. Applied Optics 8, 1587-1595 (1969).
[8] Leendertz, J. A. & Butters, J. N. An image-shearing speckle-pattern interferometer for measuring bending moments. Journal of Physics E: Scientific Instruments 6, 1107-1110 (1973).
[10] Hung, Y. Y. A speckle-shearing interferometer: a tool for measuring derivatives of surface displacement. Optics Communications 11, 132-135 (1974).
[11] Hung, Y. Y. & Taylor, C. E. Measurement of slopes of structural deflections by speckle-shearing interferometry. Experimental Mechanics 14, 281-285 (1974).
[12] Hariharan, P. Speckle-shearing interferometry: a simple optical system. Applied Optics 14, 2563 (1975).
[13] Hung, Y. Y., Rowlands, R. E. & Daniel, I. M. Speckle-shearing interferometric technique: a full-field strain gauge. Applied Optics 14, 618-622 (1975).
[14] Debrus, S. Speckle shearing interferometer using a Savart plate. Optics Communications 20, 257-261 (1977).
[15] Assa, A., Betser, A. A. & Politch, J. Recording slope and curvature contours of flexed plates using a grating shearing interferometer. Applied Optics 16, 2504-2513 (1977).
[17] Hung, Y. Y. et al. Optical method for measuring contour slopes of an object. Applied Optics 17, 128-131 (1978).
[18] Hung, Y. Y. & Durelli, A. J. Simultaneous measurement of three displacement derivatives using a multiple image-shearing interferometric camera. The Journal of Strain Analysis for Engineering Design 14, 81-88 (1979).
[19] Hung, Y. Y. & Liang, C. Y. Image-shearing camera for direct measurement of surface strains. Applied Optics 18, 1046-1051 (1979).
[20] Nakadate, S., Yatagai, T. & Saito, H. Digital speckle-pattern shearing interferometry. Applied Optics 19, 4241-4246 (1980).
[22] Murthy, R. K., Sirohi, R. S. & Kothiyal, M. P. Speckle shearing interferometry: a new method. Applied Optics 21, 2865-2867 (1982).
[23] Murthy, R. K., Sirohi, R. S. & Kothiyal, M. P. Detection of defects in plates and diaphragms using a split-lens speckle-shearing interferometer. NDT International 15, 329-333 (1982).
[24] Mohanty, R. K., Joenathan, C. & Sirohi, R. S. Speckle-shear interferometry with double dove prisms. Optics Communications 47, 27-30 (1983).
[25] Mohanty, R. K., Joenathan, C. & Sirohi, R. S. Measurement of tilt with speckle-shear interferometry. Journal of Optics 12, 118-121 (1983).
[26] Joenathan, C., Mohanty, R. K. & Sirohi, R. S. Multiplexing in speckle shear interferometry. Optica Acta: International Journal of Optics 31, 681-692 (1984).
[30] Iwahashi, Y., Iwata, K. & Nagata, R. Single-aperture speckle shearing interferometry with a single grating. Applied Optics 23, 247-249 (1984).
[32] Sharma, D. K., Sirohi, R. S. & Kothiyal, M. P. Simultaneous measurement of slope and curvature with a three-aperture speckle shearing interferometer. Applied Optics 23, 1542-1546 (1984).
[34] Sharma, D. K., Sirohi, R. S. & Kothiyal, M. P. Multiaperture speckle shearing arrangements for stress analysis. Optics Communications 49, 313-317 (1984).
[35] Joenathan, C., Mohanty, R. K. & Sirohi, R. S. Curvature by halo combination in speckle shear interferometry. Journal of Optics 13, 114-119 (1984).
[36] Mohanty, R. K., Joenathan, C. & Sirohi, R. S. Fringe sharpening and information coupling in speckle shear interferometry. Applied Optics 23, 4596-4600 (1984).
[37] Joenathan, C., Mohanty, R. K. & Sirohi, R. S. Hololens in speckle and speckle shear interferometry. Applied Optics 24, 1294-1298 (1985).
[38] Joenathan, C. & Sirohi, R. S. Holographic gratings in speckle shearing interferometry. Applied Optics 24, 2750-2751 (1985).
[39] Mohanty, R. K., Joenathan, C. & Sirohi, R. S. Speckle and speckle-shearing interferometers combined for the simultaneous determination of out-of-plane displacement and slope. Applied Optics 24, 3106-3109 (1985).
[40] Takezaki, J. & Hung, Y. Y. Direct measurement of flexural strains in plates by shearography. Journal of Applied Mechanics 53, 125-129 (1986).
[41] Sharma, D. K., Mohan, N. K. & Sirohi, R. S. A holographic speckle shearing technique for the measurement of out-of-plane displacement, slope and curvature. Optics Communications 57, 230-235 (1986).
[43] Iwahashi, Y., Iwata, K. & Nagata, R. Simultaneous measurement of three slope distributions with single-aperture speckle shearing interferometer. Applied Optics 25, 328-329 (1986).
[47] Hung, Y. Y., Hovanesian, J. D. & Takezaki, J. A fringe carrier technique for unambiguous determination of fringe orders in shearography. Optics and Lasers in Engineering 8, 73-81 (1988).
[48] Ganesan, A. R., Joenathan, C. & Sirohi, R. S. Sharpening of fringes in digital speckle pattern interferometry. Applied Optics 27, 2099-2100 (1988).
[49] Ganesan, A. R., Sharma, D. K. & Kothiyal, M. P. Universal digital speckle shearing interferometer. Applied Optics 27, 4731-4734 (1988).
[50] Winther, S. 3D strain measurements using ESPI. Optics and Lasers in Engineering 8, 45-57 (1988).
[52] Hung, Y. Y. Shearography: a novel and practical approach for nondestructive inspection. Journal of Nondestructive Evaluation 8, 55-67 (1989).
[57] Joenathan, C. & Torroba, R. Simple electronic speckle-shearing-pattern interferometer. Optics Letters 15, 1159-1161 (1990).
[59] Toh, S. L. et al. Application of shearography in nondestructive testing of composite plates. Journal of Materials Processing Technology 23, 267-275 (1990).
[60] Tay, C. J. et al. The measurement of slope using shearography. Optics and Lasers in Engineering 14, 13-24 (1991).
[61] Owner-Petersen, M. Digital speckle pattern shearing interferometry: limitations and prospects. Applied Optics 30, 2730-2738 (1991).
[62] Kadono, H., Toyooka, S. & Iwasaki, Y. Speckle-shearing interferometry using a liquid-crystal cell as a phase modulator. Journal of the Optical Society of America A 8, 2001-2008 (1991).
[69] Shang, H. M., Soh, C. C. & Chau, F. S. The use of carrier fringes in shearography for locating and sizing debonds in GRP plates. Composites Engineering 1, 157-165 (1991).
[74] Chau, F. S. & Ng, T. W. A real-time digital shearing speckle interferometer. Measurement Science and Technology 3, 381-383 (1992).
[80] Spooren, R., Dyrseth, A. A. & Vaz, M. Electronic shear interferometry: application of a (double-) pulsed laser. Applied Optics 32, 4719-4727 (1993).
[86] Mohan, N. K., Saldner, H. O. & Molin, N. E. Electronic shearography applied to static and vibrating objects. Optics Communications 108, 197-202 (1994).
[87] Ng, T. W. & Chau, F. S. A digital shearing speckle interferometry technique for modal analysis. Applied Acoustics 42, 175-185 (1994).
[91] Mohan, N. K. et al. Separation of the influence of in-plane displacement in multiaperture speckle shear interferometry. Optical Engineering 33, 1973-1982 (1994).
[92] Shakher, C. & Nirala, A. K. Measurement of temperature using speckle shearing interferometry. Applied Optics 33, 2125-2127 (1994).
[93] Hung, Y. Y., Tang, S. & Hovanesian, J. D. Real-time shearography for measuring time-dependent displacement derivatives. Experimental Mechanics 34, 89-92 (1994).
[94] Tay, C. J. et al. On the determination of slope by shearography. Optics and Lasers in Engineering 20, 207-217 (1994).
[96] Valera, J. D. & Jones, J. D. C. Phase stepping in fiber-based speckle shearing interferometry. Optics Letters 19, 1161-1163 (1994).
[107] Chatters, T. C., Pouet, B. F. & Krishnaswamy, S. Additive-subtractive phase-modulated shearography with synchronized acoustic stressing. Experimental Mechanics 35, 159-165 (1995).
[108] Shang, H. M., Tham, L. M. & Chau, F. S. Shearographic and holographic assessment of defective laminates with bond-lines of different elasticities. Journal of Engineering Materials and Technology 117, 322-329 (1995).
[110] Sim, C. W., Chau, F. S. & Toh, S. L. Determination of fractional fringe order by phase shifting in real-time shearography. Journal of Modern Optics 42, 271-277 (1995).
[112] Yang, L. X. et al. Precision measurement and nondestructive testing by means of digital phase shifting speckle pattern and speckle pattern shearing interferometry. Measurement 16, 149-160 (1995).
[116] Ng, T. W. Shear measurement in digital speckle shearing interferometry using digital correlation. Optics Communications 115, 241-244 (1995).
[118] Steinchen, W. et al. Application of shearography to quality assurance. Journal of Materials Processing Technology 52, 141-150 (1995).
[119] Tay, C. J. et al. Multiple-image shearography: a direct method to determine curvatures. Applied Optics 34, 2202-2206 (1995).
[120] Valera, J. D. R. & Jones, J. D. C. Vibration analysis by modulated time-averaged speckle shearing interferometry. Measurement Science and Technology 6, 965-970 (1995).
[124] Valera, J. D. R., Jones, J. D. C. & Løkberg, O. J. Exact vibration amplitude derivative measurement with TV shearography. Measurement Science and Technology 7, 918-922 (1996).
[126] Pedrini, G., Zou, Y. L. & Tiziani, H. J. Quantitative evaluation of digital shearing interferogram using the spatial carrier method. Pure and Applied Optics: Journal of the European Optical Society Part A 5, 313-321 (1996).
[132] Rabal, H., Henao, R. & Torroba, R. Digital speckle pattern shearing interferometry using diffraction gratings. Optics Communications 126, 191-196 (1996).
[133] Hung, Y. Y. & Wang, J. Q. Dual-beam phase shift shearography for measurement of in-plane strains. Optics and Lasers in Engineering 24, 403-413 (1996).
[135] Huang, J. R., Ford, H. D. & Tatam, R. P. Heterodyning of speckle shearing interferometers by laser diode wavelength modulation. Measurement Science and Technology 7, 1721-1727 (1996).
[136] Steinchen, W., Yang, L. X. & Schuth, M. TV-shearography for measuring 3D-strains. Strain 32, 49-57 (1996).
[137] Waldner, S. Removing the image-doubling in shearography by reconstruction of the displacement field. Optics Communications 127, 117-126 (1996).
[139] Hung, Y. Y. Shearography for non-destructive evaluation of composite structures. Optics and Lasers in Engineering 24, 161-182 (1996).
[140] Rastogi, P. K. Measurement of curvature and twist of a deformed object by electronic speckle-shearing pattern interferometry. Optics Letters 21, 905-907 (1996).
[141] Huang, J. R., Ford, H. D. & Tatam, R. P. Phase-stepped speckle shearing interferometer by source wavelength modulation. Optics Letters 21, 1421-1423 (1996).
[146] Gundlach, A. et al. Speckle shearing interferometry using a diffractive optical beam splitter. Optical Engineering 36, 1488-1493 (1997).
[150] Sirohi, R. S. et al. Spatial phase shifting for pure in-plane displacement and displacement-derivative measurements in electronic speckle pattern interferometry (ESPI). Applied Optics 36, 5787-5791 (1997).
[152] Patorski, K. & Olszak, A. G. Digital in-plane electronic speckle pattern shearing interferometry. Optical Engineering 36, 2010-2015 (1997).
[153] Joenathan, C. & Buerkle, L. Electronic speckle pattern shearing interferometer using holographic gratings. Optical Engineering 36, 2473-2477 (1997).
[156] Rastogi, P. K. An electronic pattern speckle shearing interferometer for the measurement of surface slope variations of three-dimensional objects. Optics and Lasers in Engineering 26, 93-100 (1997).
[157] Petzing, J. N. & Tyrer, J. R. Analysis of power ultrasonic components using shearing interferometry. Optics and Lasers in Engineering 26, 235-248 (1997).
[158] Hung, Y. Y., Wang, J. Q. & Hovanesian, J. D. Technique for compensating excessive rigid body motion in nondestructive testing of large structures using shearography. Optics and Lasers in Engineering 26, 249-258 (1997).
[159] Tay, C. J., Shang, H. M. & Choong, D. Cross influence of coordinate and slope related fringes during shearographic profiling. Optics and Lasers in Engineering 26, 259-278 (1997).
[160] Nakadate, S. Phase shifting speckle shearing polarization interferometer using a birefringent wedge. Optics and Lasers in Engineering 26, 331-350 (1997).
[161] Maas, A. A. M. & Somers, P. A. A. M. Two-dimensional deconvolution applied to phase-stepped shearography. Optics and Lasers in Engineering 26, 351-360 (1997).
[162] Valera, J. D. R. et al. Strain and vibration analysis by fibre based speckle shearing interferometry. Optics and Lasers in Engineering 26, 361-376 (1997).
[163] Tyrer, J. R. & Petzing, J. N. In-plane electronic speckle pattern shearing interferometry. Optics and Lasers in Engineering 26, 395-406 (1997).
[164] Aebischer, H. A. & Waldner, S. Strain distributions made visible with image-shearing speckle pattern interferometry. Optics and Lasers in Engineering 26, 407-420 (1997).
[165] Hung, Y. Y. Digital shearography versus TV-holography for non-destructive evaluation. Optics and Lasers in Engineering 26, 421-436 (1997).
[166] Mohan, N. K. & Sirohi, R. S. Fringe formation in symmetric three-aperture speckle shear interferometry: an analysis. Optics and Lasers in Engineering 26, 437-447 (1997).
[167] Ng, T. W. Optical distance sensing using digital speckle shearing interferometry. Optics and Lasers in Engineering 26, 449-460 (1997).
[168] Toh, S. L., Chau, F. S. & Sim, C. W. Theoretical basis for real-time shearography. Journal of Modern Optics 44, 279-286 (1997).
[169] Santhanakrishnan, T., Mohan, N. K. & Sirohi, R. S. Oblique observation speckle shear interferometers for slope change contouring. Journal of Modern Optics 44, 831-839 (1997).
[171] Hathaway, R. B., Hovanesian, J. D. & Hung, M. Y. Y. Residual stress evaluation using shearography with large-shear displacements. Optics and Lasers in Engineering 27, 43-60 (1997).
[172] Hung, M. Y. Y., Long, K. W. & Wang, J. Q. Measurement of residual stress by phase shift shearography. Optics and Lasers in Engineering 27, 61-73 (1997).
[174] Huang, J. R., Ford, H. D. & Tatam, R. P. Slope measurement by two-wavelength electronic shearography. Optics and Lasers in Engineering 27, 321-333 (1997).
[175] Fomitchov, P. A. & Krishnaswamy, S. A compact dual-purpose camera for shearography and electronic speckle-pattern interferometry. Measurement Science and Technology 8, 581-583 (1997).
[178] Hung, Y. Y. & Shi, D. H. Technique for rapid inspection of hermetic seals of microelectronic packages using shearography. Optical Engineering 37, 1406-1409 (1998).
[181] Santhanakrishnan, T., Palanisamy, P. K. & Sirohi, R. S. Optical configuration in speckle shear interferometry for slope change contouring with a twofold increase in sensitivity. Applied Optics 37, 3447-3449 (1998).
[183] Bard, B. A., Gordon, G. A. & Wu, S. D. Laser-modulated phase-stepping digital shearography for quantitative full-field imaging of ultrasonic waves. The Journal of the Acoustical Society of America 103, 3327-3335 (1998).
[188] Steinchen, W. et al. Strain analysis by means of digital shearography: potential, limitations and demonstration. The Journal of Strain Analysis for Engineering Design 33, 171-182 (1998).
[189] Steinchen, W. et al. Non-destructive testing of aerospace composite materials using digital shearography. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 212, 21-30 (1998).
[190] Joenathan, C. et al. Novel temporal Fourier transform speckle pattern shearing interferometer. Optical Engineering 37, 1790-1795 (1998).
[196] Dávila, A., Kaufmann, G. H., & Pérez-López, C. Transient deformation analysis by a carrier method of pulsed electronic speckle-shearing pattern interferometry. Applied Optics 37, 4116-4122 (1998).
[198] van Brug, H. Temporal phase unwrapping and its application in shearography systems. Applied Optics 37, 6701-6706 (1998).
[199] Sirohi, R. S. et al. Nondestructive assessment of thinning of plates using digital shearography. Optical Engineering 38, 1582-1585 (1999).
[200] He, Y. M., Tay, C. J. & Shang, H. M. Digital phase-shifting shearography for slope measurement. Optical Engineering 38, 1586-1590 (1999).
[201] Kästle, R., Hack, E. & Sennhauser, U. Multiwavelength shearography for quantitative measurements of two-dimensional strain distributions. Applied Optics 38, 96-100 (1999).
[211] Hung, Y. Y. Applications of digital shearography for testing of composite structures. Composites Part B:Engineering 30, 765-773 (1999).
[212] Steinchen, W. et al. Determination of strain distribution by means of digital shearography. Measurement 26, 79-90 (1999).
[213] Elster, C. & Weingärtner, I. Solution to the shearing problem. Applied Optics 38, 5024-5031 (1999).
[214] Parker, S. C. J. & Salter, P. L. A novel shearography system for aerospace non-destructive testing. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 213, 23-33 (1999).
[216] Andersson, A. et al. TV shearography: quantitative measurement of shear-magnitude fields by use of digital speckle photography. Applied Optics 39, 2565-2568 (2000).
[217] Hung, Y. Y. et al. Evaluating the soundness of bonding using shearography. Composite Structures 50, 353-362 (2000).
[218] Shang, H. M. et al. Surface profiling using shearography. Optical Engineering 39, 23-31 (2000).
[219] Mohan, N. K. The influence of multiple-exposure recording on curvature pattern using multi-aperture speckle shear interferometry. Optics Communications 186, 259-263 (2000).
[220] Fernandez, A. et al. Measurement of transient out-of-plane displacement gradients in plates using double-pulsed subtraction TV shearography. Optical Engineering 39, 2106-2113 (2000).
[221] Dilhaire, S. et al. Measurement of the thermomechanical strain of electronic devices by shearography. Microelectronics Reliability 40, 1509-1514 (2000).
[222] Shang, H. M. et al. Generation of carrier fringes in holography and shearography. Applied Optics 39, 2638-2645 (2000).
[223] Groves, R. M., James, S. W. & Tatam, R. P. Polarization-multiplexed and phase-stepped fibre optic shearography using laser wavelength modulation. Measurement Science and Technology 11, 1389-1395 (2000).
[225] Lau, B., Kronthaler, T. & Schilling, R. A microprism array as shearing device for speckle shearing interferometry. Optics and Lasers in Engineering 36, 389-396 (2001).
[226] Chen, F. Digital shearography: state of the art and some applications. Journal of Electronic Imaging 10, 240-251 (2001).
[230] Martínez-Celorio, R. A. et al. Visibility enhancement of carrier fringes in Electronic Speckle Shearing Pattern Interferometry using microspheres for light detection in back reflection. Optik 112, 99-104 (2001).
[235] Wang, K. F., Tieu, A. K. & Li, E. B. Influence of displacement and its first- and second-order derivative components on curvature fringe formations in speckle shearography. Applied Optics 41, 4557-4561 (2002).
[236] Steinchen, W., Kupfer, G. & Mäckel, P. Full field tensile strain shearography of welded specimens. Strain 38, 17-26 (2002).
[238] Falldorf, C., Osten, W. & Kolenovic, E. Speckle shearography using a multiband light source. Optics and Lasers in Engineering 40, 543-552 (2003).
[239] Kalms, M. K. & Osten, W. Mobile shearography system for the inspection of aircraft and automotive components. Optical Engineering 42, 1188-1196 (2003).
[240] Hung, Y. Y., Shang, H. M. & Yang, L. X. Unified approach for holography and shearography in surface deformation measurement and nondestructive testing. Optical Engineering 42, 1197-1207 (2003).
[245] Groves, R. M., James, S. W. & Tatam, R. P. Pipe weld investigation using shearography. Strain 39, 101-105 (2003).
[246] Santos, F., Vaz, M. & Monteiro, J. A new set-up for pulsed digital shearography applied to defect detection in composite structures. Optics and Lasers in Engineering 42, 131-140 (2004).
[247] Mihaylova, E. et al. Electronic speckle pattern shearing interferometer with a photopolymer holographic grating. Applied Optics 43, 2439-2442 (2004).
[248] Groves, R. M., James, S. W. & Tatam, R. P. Shape and slope measurement by source displacement in shearography. Optics and Lasers in Engineering 41, 621-634 (2004).
[251] Casillas, F. J. et al. Small amplitude estimation of mechanical vibrations using electronic speckle shearing pattern interferometry. Optical Engineering 43, 880 (2004).
[252] Schuth, M., Vössing, F. & Yang, L. X. A shearographic endoscope for nondestructive test. Journal of Holography and Speckle 1, 46-52 (2004).
[253] Sujatha, N. U. & Murukeshan, V. M. Nondestructive inspection of tissue/tissue like phantom curved surfaces using digital speckle shearography. Optical Engineering 43, 3055-3060 (2004).
[255] Mihaylova, E., Whelan, M. & Toal, V. Simple phase-shifting lateral shearing interferometer. Optics Letters 29, 1264-1266 (2004).
[256] Barrientos, B. et al. Measurement of out-of-plane deformation by combination of speckle photography and speckle shearing interferometry. Optik 115, 248-252 (2004).
[257] Mäckel, P. Die Scherografie - ein quantitatives Messverfahren zur Schwingungsmessung und zerstörungsfreien Prüfung [Shearography - a quantitative measurement technique for vibration analysis and non-destructive testing]. Laser Technik Journal 1, 49-54 (2004).
[258] Yang, L. X. et al. Digital shearography for nondestructive testing: potentials, limitations, and applications. Journal of Holography and Speckle 1, 69-79 (2004).
[259] Udupa, G. et al. Defect detection in unpolished Si wafers by digital shearography. Measurement Science and Technology 15, 35-43 (2004).
[262] Sun, P. Digital phase-shifting shearography for strain measurement by using a rotating platform system. Optical Engineering 44, 085601 (2005).
[263] Tay, C. J. & Fu, Y. Determination of curvature and twist by digital shearography and wavelet transforms. Optics Letters 30, 2873-2875 (2005).
[264] Abdullah, W. S. W. & Petzing, J. N. Development of speckle shearing interferometer error analysis as an aperture function of wavefront divergence. Journal of Modern Optics 52, 1495-1510 (2005).
[265] Hung, Y. Y. & Ho, H. P. Shearography: an optical measurement technique and applications. Materials Science and Engineering: R: Reports 49, 61-87 (2005).
[267] Somers, P. A. A. M. & Bhattacharya, N. Maintaining sub-pixel alignment for a single-camera two-bucket shearing speckle interferometer. Journal of Optics A: Pure and Applied Optics 7, S385-S391 (2005).
[268] Groves, R. M. et al. Single-axis combined shearography and digital speckle photography instrument for full surface strain characterization. Optical Engineering 44, 025602 (2005).
[269] Cordero, R. R. & Labbe, F. Uncertainty evaluation of displacement gradients measured by electronic speckle pattern shearing interferometry (ESPSI). Measurement Science and Technology 16, 1315-1321 (2005).
[270] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. A dual-function ESPI system for the measurement of out-of-plane displacement and slope. Optics and Lasers in Engineering 44, 637-644 (2006).
[271] Bhaduri, B. et al. Use of spatial phase shifting technique in digital speckle pattern interferometry (DSPI) and digital shearography (DS). Optics Express 14, 11598-11607 (2006).
[273] Zhao, S. A. & Chung, P. S. Digital speckle shearing interferometer using a liquid-crystal spatial light modulator. Optical Engineering 45, 105606 (2006).
[274] Mihaylova, E. et al. Photopolymer diffractive optical elements in electronic speckle pattern shearing interferometry. Optics and Lasers in Engineering 44, 965-974 (2006).
[275] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. Cyclic-path digital speckle shear pattern interferometer: use of polarization phase-shifting method. Optical Engineering 45, 105604 (2006).
[276] Quan, C., Fu, Y. & Miao, H. Wavelet analysis of digital shearing speckle patterns with a temporal carrier. Optics Communications 260, 97-104 (2006).
[277] Viotti, M. R. et al. A portable digital speckle pattern interferometry device to measure residual stresses using the hole drilling technique. Optics and Lasers in Engineering 44, 1052-1066 (2006).
[279] Groves, R. M. et al. Full-field laser shearography instrumentation for the detection and characterization of fatigue cracks in titanium 10-2-3. Journal of ASTM International 3, 12757 (2006).
[280] Ochoa, N. A. & Silva-Moreno, A. A. Fringes demodulation in time-averaged digital shearography using genetic algorithms. Optics Communications 260, 434-437 (2006).
[282] Pauliat, G. & Roosen, G. Continuous monitoring of a surface slope by real-time shearing interferometry with a photorefractive crystal. Applied Optics 45, 993-999 (2006).
[283] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. (1, N) spatial phase-shifting technique in digital speckle pattern interferometry and digital shearography for nondestructive evaluation. Optical Engineering 46, 051009 (2007).
[284] Somers, P. A. A. M. & Bhattacharya, N. Handling unfavourable polarization states in a polarization-based shearing speckle interferometer. Journal of Optics A: Pure and Applied Optics 9, S92-S97 (2007).
[285] Francis, D., James, S. W. & Tatam, R. P. Surface strain measurement using multi-component shearography with coherent fibre-optic imaging bundles. Measurement Science and Technology 18, 3583-3591 (2007).
[287] Falldorf, C. et al. Efficient reconstruction of spatially limited phase distributions from their sheared representation. Applied Optics 46, 5038-5043 (2007).
[288] Hung, Y. Y. et al. Review and comparison of shearography and pulsed thermography for adhesive bond evaluation. Optical Engineering 46, 051007 (2007).
[289] Groves, R. M. et al. Surface strain measurement: a comparison of speckle shearing interferometry and optical fibre Bragg gratings with resistance foil strain gauges. Measurement Science and Technology 18, 1175-1184 (2007).
[291] Bhaduri, B., Kothiyal, M. P. & Mohan, N. K. Curvature measurement using three-aperture digital shearography and fast Fourier transform. Optics and Lasers in Engineering 45, 1001-1004 (2007).
[292] Huang, S. J. & Liu, Y. F. The out-of-plane strain measurement of composite sandwich plate with fully-potted insert using digital phase-shifting shearography. ICCES 4, 187-193 (2007).
[294] Bhaduri, B., Mohan, N. K. & Kothiyal, M. P. Simultaneous measurement of out-of-plane displacement and slope using a multiaperture DSPI system and fast Fourier transform. Applied Optics 46, 5680-5686 (2007).
[295] Rosso, V. et al. Almost-common path interferometer using the separation of polarization states for digital phase-shifting shearography. Optical Engineering 46, 105601 (2007).
[296] Groves, R. M., Pedrini, G. & Osten, W. Real-time extended dynamic range imaging in shearography. Applied Optics 47, 5550-5556 (2008).
[297] Gryzagoridis, J. & Findeis, D. Benchmarking shearographic NDT for composites. Insight 50, 249-252 (2008).
[299] Rosso, V. et al. Simultaneous coherent imaging and strain measurement using coupled photorefractive holography and shearography. Optics Letters 33, 797-799 (2008).
[302] Francis, D., James, S. W. & Tatam, R. P. Surface strain measurement of rotating objects using pulsed laser shearography with coherent fibre-optic imaging bundles. Measurement Science and Technology 19, 105301 (2008).
[303] Lee, J. R. et al. Investigation of shear distance in Michelson interferometer-based shearography for mechanical characterization. Measurement Science and Technology 19, 115303 (2008).
[304] Huang, S. J. & Lin, Y. T. Out-of-plane strain measurement in sandwich plates with single fully potted insert by using digital shearography. Strain 44, 253-258 (2008).
[305] Sun, C. R., Chen, J. L. & Lu, H. Improved phase-shifted digital speckle shearography for time-dependent deformation measurement. Optical Engineering 47, 065601 (2008).
[307] Mihaylova, E. & Toal, V. Simple versatile shearing interferometer suitable for measurements on a microscopic scale. Optics and Lasers in Engineering 47, 271-273 (2009).
[308] Hung, Y. Y. et al. Review and comparison of shearography and active thermography for nondestructive evaluation. Materials Science and Engineering: R: Reports 64, 73-112 (2009).
[309] Kumar, U. P., Kothiyal, M. P. & Mohan, N. K. Microscopic TV shearography for characterization of microsystems. Optics Letters 34, 1612-1614 (2009).
[310] Huang, Y. H. et al. NDT&E using shearography with impulsive thermal stressing and clustering phase extraction. Optics and Lasers in Engineering 47, 774-781 (2009).
[313] Bhaduri, B., Tay, C. J. & Quan, C. G. Direct measurement of curvature and twist using two-channel double-aperture digital shearography. Optical Engineering 49, 033604 (2010).
[314] Francis, D., Tatam, R. P. & Groves, R. M. Shearography technology and applications: a review. Measurement Science and Technology 21, 102001 (2010).
[316] Habib, K. Failure of thin films: optical shearography versus electrochemical impedance spectroscopy. Optik 121, 1317-1323 (2010).
[317] Wu, S. J., He, X. Y. & Yang, L. X. Enlarging the angle of view in Michelson-interferometer-based shearography by embedding a 4f system. Applied Optics 50, 3789-3794 (2011).
[318] Dhanotia, J. et al. Slope measurement of bent plates using double grating shearing interferometry. Applied Optics 50, 2958-2963 (2011).
[319] Taillade, F. et al. Shearography and pulsed stimulated infrared thermography applied to a nondestructive evaluation of FRP strengthening systems bonded on concrete structures. Construction and Building Materials 25, 568-574 (2011).
[320] Wu, S. J. et al. Digital shearography with in situ phase shift calibration. Optics and Lasers in Engineering 50, 1260-1266 (2012).
Fig. 2 Original and deformed states of the object showing the location of sheared points on deformation: P is a point on the surface and Q is its sheared counterpart. On loading, these points displace to P' and Q'. Deformation is very small and hence sensitivity vectors remain unchanged.
Fig. 3 a symmetric illumination directions, b symmetric observation directions.
Fig. 5 Shearographic testing of an aircraft panel (500 mm × 500 mm) using thermal load a A310 rudder shell, CFRP-Honeycomb, with marked fault regions, b shearogram after thermal loading, and c demodulated phase map (Courtesy Prof. Osten).
Fig. 6 a A painting on canvas, b illuminated painting along with its double exposure shearogram (with permission from Ref. 290).
Fig. 7 a Sketch of the silicone sample (dimensions in mm), b photograph of the transilluminated sample, and c processed shearography data for two hard-silicone spheres (with permission from Ref. 364).
Superconducting metamaterials for waveguide quantum electrodynamics
Mohammad Mirhosseini1,2,3,
Eunjong Kim1,2,3,
Vinicius S. Ferreira1,2,3,
Mahmoud Kalaee1,2,3,
Alp Sipahigil1,2,3,
Andrew J. Keller1,2,3 &
Oskar Painter1,2,3
Nature Communications volume 9, Article number: 3706 (2018)
Subjects: Qubits; Superconducting devices
Embedding tunable quantum emitters in a photonic bandgap structure enables control of dissipative and dispersive interactions between emitters and their photonic bath. Operation in the transmission band, outside the gap, allows for studying waveguide quantum electrodynamics in the slow-light regime. Alternatively, tuning the emitter into the bandgap results in finite-range emitter–emitter interactions via bound photonic states. Here, we couple a transmon qubit to a superconducting metamaterial with a deep sub-wavelength lattice constant (λ/60). The metamaterial is formed by periodically loading a transmission line with compact, low-loss, low-disorder lumped-element microwave resonators. Tuning the qubit frequency in the vicinity of a band-edge with a group index of ng = 450, we observe an anomalous Lamb shift of −28 MHz accompanied by a 24-fold enhancement in the qubit lifetime. In addition, we demonstrate selective enhancement and inhibition of spontaneous emission of different transmon transitions, which provide simultaneous access to short-lived radiatively damped and long-lived metastable qubit states.
Cavity quantum electrodynamics (QED) studies the interaction of an atom with a single electromagnetic mode of a high-finesse cavity with a discrete spectrum1,2. In this canonical setting, a large photon–atom coupling is achieved by repeated interaction of the atom with a single photon bouncing many times between the cavity mirrors. Recently, there has been much interest in achieving strong light–matter interaction in a cavity-free system such as a waveguide3,4. Waveguide QED refers to a system where a chain of atoms are coupled to a common optical channel with a continuum of electromagnetic modes over a large bandwidth. Slow-light photonic crystal waveguides are of particular interest in waveguide QED because the reduced group velocity near a bandgap preferentially amplifies the desired radiation of the atoms into the waveguide modes5,6,7. Moreover, in this configuration an interesting paradigm can be achieved by placing the resonance frequency of the atom inside the bandgap of the waveguide8,9,10,11. In this case, the atom cannot radiate into the waveguide but the evanescent field surrounding it gives rise to a photonic bound state9. The interaction of such localized bound states has been proposed for realizing tunable spin–exchange interaction between atoms in a chain12,13, and also for realizing effective non-local interactions between photons14,15.
While achieving efficient waveguide coupling in the optical regime requires the challenging task of interfacing atoms or atomic-like systems with nanoscale dielectric structures16,17,18,19,20, superconducting circuits provide an entirely different platform for studying the physics of light–matter interaction in the microwave regime4,21. Development of the field of circuit QED has enabled fabrication of tunable qubits with long coherence times and fast qubit gate times22,23. Moreover, strong coupling is readily achieved in coplanar platforms due to the deep sub-wavelength transverse confinement of photons attainable in microwave waveguides and the large electric dipole of superconducting qubits24. Microwave waveguides with strong dispersion, even "bandgaps" in frequency, can also be simply realized by periodically modulating the geometry of a coplanar transmission line25. Such an approach was recently demonstrated in a pioneering experiment by Liu and Houck26, whereby a qubit was coupled to the localized photonic state within the bandgap of a modulated coplanar waveguide (CPW). Satisfying the Bragg condition in a periodically modulated waveguide requires a lattice constant on the order of the wavelength, however, which translates to a device size of approximately a few centimeters for complete confinement of the evanescent fields in the frequency range suitable for microwave qubits. Such a restriction significantly limits the scaling in this approach, both in qubit number and qubit connectivity.
An alternative approach for tailoring dispersion in the microwave domain is to take advantage of the metamaterial concept. Metamaterials are composite structures with sub-wavelength components, which are designed to provide an effective electromagnetic response27,28. Since the early microwave work, the electromagnetic metamaterial concept has been expanded and extensively studied across a broad range of classical optical sciences29,30,31; however, their role in quantum optics has remained relatively unexplored, at least in part due to the lossy nature of many sub-wavelength components. Improvements in design and fabrication of low-loss superconducting circuit components in circuit QED offer a new prospect for utilizing microwave metamaterials in quantum applications32. Indeed, high quality-factor superconducting components such as resonators can be readily fabricated on a chip33, and such elements have been used as a tool for achieving phase-matching in near quantum-limited traveling wave amplifiers34,35 and for tailoring qubit interactions in a multimode cavity QED architecture36.
In this paper, we utilize an array of coupled lumped-element microwave resonators to form a compact bandgap waveguide with a deep sub-wavelength lattice constant (λ/60) based on the metamaterial concept. In addition to a compact footprint, these sort of structures can exhibit highly nonlinear band dispersion surrounding the bandgap, leading to exceptionally strong confinement of localized intra-gap photon states. We present the design and fabrication of such a metamaterial waveguide, and characterize the resulting waveguide dispersion and bandgap properties via interaction with a tunable superconducting transmon qubit. We measure the Lamb shift and lifetime of the qubit in the bandgap and its vicinity, demonstrating the anomalous Lamb shift of the fundamental qubit transition as well as selective inhibition and enhancement of spontaneous emission for the first two excited states of the transmon qubit.
Band-structure analysis and spectroscopy
We begin by considering the circuit model of a CPW that is periodically loaded with microwave resonators as shown in the inset to Fig. 1a. The Lagrangian for this system can be constructed as a function of the node fluxes of the resonator and waveguide sections \(\Phi _n^b\) and \(\Phi _n^a\)37. Assuming periodic boundary conditions and applying the rotating wave approximation, we derive the Hamiltonian for this system and solve for the energies \(\hbar \omega _{ \pm ,k}\) along with the corresponding eigenstates \(\left| { \pm ,k} \right\rangle = \hat \alpha _{ \pm ,k}\left| 0 \right\rangle\) as (see Supplementary Note 1)
$$\omega_{\pm,k} = \frac{1}{2}\left[\left(\Omega_k + \omega_0\right) \pm \sqrt{\left(\Omega_k - \omega_0\right)^2 + 4g_k^2}\,\right],$$
$$\hat{\alpha}_{\pm,k} = \frac{\omega_{\pm,k} - \omega_0}{\sqrt{\left(\omega_{\pm,k} - \omega_0\right)^2 + g_k^2}}\,\hat{a}_k + \frac{g_k}{\sqrt{\left(\omega_{\pm,k} - \omega_0\right)^2 + g_k^2}}\,\hat{b}_k.$$
Microwave metamaterial waveguide. a Dispersion relation of a CPW loaded with a periodic array of microwave resonators (red curve). The green line shows the dispersion relation of the waveguide without the resonators. Inset: circuit diagram for a unit cell of the periodic structure. b Scanning electron microscope (SEM) image of a fabricated capacitively coupled microwave resonator, here with a wire width of 500 nm. The resonator region is false-colored in purple, the waveguide central conductor and the ground plane are colored green, and the coupling capacitor is shown in orange. We have used pairs of identical resonators symmetrically placed on the two sides of the transmission line to preserve the symmetry of the structure. c Transmission measurement for the realized metamaterial waveguide made from 9 unit cells of resonator pairs with a wire width of 1 μm, repeated with a lattice constant of d = 350 μm. The blue curve depicts the experimental data and the red curve shows the lumped-element model fit to the data
Here, \(\hat a_k\) and \(\hat b_k\) describe the momentum-space annihilation operators for the bare waveguide and bare resonator sections, the index k denotes the wavevector, and the parameters Ωk, ω0, and gk quantify the frequency of traveling modes of the bare waveguide, the resonance frequency of the microwave resonators, and coupling rate between resonator and waveguide modes, respectively. The operators \(\hat \alpha _{ \pm ,k}\) represent quasi-particle solutions of the composite waveguide, where far from the bandgap the quasi-particle is primarily composed of the bare waveguide mode, while in the vicinity of ω0 most of its energy is confined in the microwave resonators.
Figure 1a depicts the numerically calculated energy bands ω±,k as a function of the wavevector k. It is evident that the dispersion has the form of an avoided crossing between the energy bands of the bare waveguide and the uncoupled resonators. For small gap sizes, the midgap frequency is close to the resonance frequency of uncoupled resonators ω0, and unlike the case of a periodically modulated waveguide, there is no fundamental relation tying the midgap frequency to the lattice constant in this case. The form of the band structure near the higher cut-off frequency ωc+ can be approximated as a quadratic function (ω − ωc+) ∝ k², whereas the band structure near the lower band-edge ωc− is inversely proportional to the square of the wavenumber (ω − ωc−) ∝ 1/k². The analysis above has been presented for resonators which are capacitively coupled to a waveguide in a parallel geometry; a similar band structure can also be achieved using series inductive coupling of resonators (see Supplementary Note 1 and Supplementary Fig. 1).
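To make the avoided-crossing dispersion of the equations above concrete, the short script below evaluates ω±,k on a grid of wavevectors. All parameter values here are illustrative assumptions (a linear bare dispersion Ωk = vk and a k-independent coupling g), so this is a minimal sketch of the band structure, not the fitted circuit model of the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not the fitted device values)
w0 = 2 * np.pi * 6.0e9      # bare resonator frequency omega_0 (rad/s)
g = 2 * np.pi * 2.0e9       # waveguide-resonator coupling, taken k-independent
d = 350e-6                  # lattice constant (m)
v = 1.2e8                   # bare-line phase velocity ~ c/2.54 (m/s)

k = np.linspace(1e2, 2 * w0 / v, 500)   # wavevector grid (1/m)
Omega_k = v * k                         # assumed linear bare dispersion

# Quasi-particle bands of the loaded line (avoided crossing at Omega_k = w0)
root = np.sqrt((Omega_k - w0) ** 2 + 4 * g ** 2)
w_plus = 0.5 * ((Omega_k + w0) + root)
w_minus = 0.5 * ((Omega_k + w0) - root)

# The gap opens around w0; its edges are the extrema of the two bands
print("lower band edge (GHz):", w_minus.max() / (2 * np.pi * 1e9))
print("upper band edge (GHz):", w_plus.min() / (2 * np.pi * 1e9))
```

For these stand-in numbers the script returns a gap of roughly a gigahertz centered near ω0, reproducing the qualitative picture of Fig. 1a.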
Physical realization using lumped-element resonators
A coplanar microwave resonator is often realized by terminating a short segment of a coplanar transmission line with a length set to an integer multiple of λ/4, where λ is the wavelength corresponding to the fundamental resonance frequency25,33. However, it is possible to significantly reduce the footprint of a resonator by using components that mimic the behavior of lumped elements. We have used the design presented in ref. 38 to realize resonators in the frequency range of 6–10 GHz. This design provides compact resonators by placing interdigital capacitors at the anti-nodes of the charge waves and double spiral coils near the peak of the current waves at the fundamental frequency (see Fig. 1b). The symmetry of this geometry results in the suppression of the second harmonic frequency and thus the elimination of an undesired bandgap at twice the fundamental resonance frequency of the band-gap waveguide. A more subtle design criterion is that the resonators be of high impedance. Use of high impedance resonators allows for a larger photonic bandgap and greater waveguide–qubit coupling. For the waveguide QED application of interest this enables denser qubit circuits, both spatially and spectrally.
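As a rough numerical illustration of these design targets, the fundamental frequency and characteristic impedance of a lumped-element resonator follow directly from its effective inductance and capacitance. The L and C values below are assumptions chosen only to land in the quoted 6–10 GHz range, not the actual device parameters.

```python
import numpy as np

L, C = 2.0e-9, 0.30e-12   # assumed effective inductance (H) and capacitance (F)
f0 = 1 / (2 * np.pi * np.sqrt(L * C))   # fundamental resonance frequency
Z = np.sqrt(L / C)                      # characteristic impedance of the resonator
print(f"f0 = {f0 / 1e9:.2f} GHz, Z = {Z:.0f} ohm")   # high-Z relative to 50 ohm
```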
The impedance of the resonators scales roughly as the inverse square-root of the pitch of the wires in the spiral coils. Complicating matters is that smaller wire widths have been found to introduce larger resonator frequency disorder due to kinetic inductance effects39. Here, we have selected an aggressive resonator wire width of 1 μm and fabricated a periodic array of N = 9 resonator pairs coupled to a CPW with a lattice constant of d = 350 μm. The resonators are arranged in identical pairs placed on opposite sides of the central waveguide conductor to preserve the symmetry of the waveguide. In addition, the center conductor of each CPW section is meandered over a length of 210 μm so as to increase the overall inductance of the waveguide section which also increases the bandgap. Further details of the design criteria and lumped element parameters are given in Supplementary Note 2. The fabrication of the waveguide is performed using electron-beam deposited Al film (see Methods). Figure 1c shows the measured power transmission through such a finite-length metamaterial waveguide. Here 50-Ω CPW segments, galvanically coupled to the metamaterial waveguide, are used at the input and output ports. We find a midgap frequency of 5.83 GHz and a bandgap extent of 1.82 GHz for the structure. Using the simulated value of effective refractive index of 2.54, the midgap frequency gives a lattice constant-to-wavelength ratio of d/λ ≈ 1/60.
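As a quick consistency check on the quoted ratio, using the stated midgap frequency and simulated effective index:

$$\lambda = \frac{c}{n_{\mathrm{eff}}\,f_{\mathrm{mid}}} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2.54\times 5.83\times 10^{9}\ \mathrm{Hz}} \approx 20.3\ \mathrm{mm},\qquad \frac{d}{\lambda} = \frac{0.35\ \mathrm{mm}}{20.3\ \mathrm{mm}} \approx \frac{1}{58}\approx\frac{1}{60}.$$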
Disorder and Anderson localization
Fluctuations in the electromagnetic properties of the metamaterial waveguide along its length, such as the aforementioned resonator disorder, results in random scattering of traveling waves. Such random scattering can lead to an exponential extinction of propagating photons in the presence of weak disorder and complete trapping of photons for strong disorder, a phenomenon known as the Anderson localization of light40. Similarly, absorption loss in the resonators results in attenuation of wave propagation which adds a dissipative component to the effective localization of fields in the metamaterial waveguide. Figure 2a shows numerical simulations of the effective localization length as a function of frequency when considering separately the effects of resonator frequency disorder and loss (see Supplementary Note 3 for details of independent resonator measurements used to determine frequency variation (0.5%) and loss parameters (intrinsic Q-factor of 7.2 × 10⁴) for this model). In addition to the desired localization of photons within the bandgap, we see that the effects of disorder and loss also limit the localization length outside the bandgap. In the lower transmission band where the group index is largest, the localization length is seen to rapidly approach zero near the band-edge, predominantly due to disorder. In the upper transmission band where the group index is smaller, the localization length maintains a large value of 6 × 10³ periods all the way to the band-edge. Within the bandgap the simulations show that the localization length is negligibly modified by the levels of loss and disorder expected in the resonators of this work, and is well approximated by the periodic loading of the waveguide alone which can be simply related to the inverse of the curvature of the transmission bands of a loss-less, disorder-free structure13. These results indicate that, even with practical limitations on disorder and loss in such metamaterial waveguides, a range of photon length scales of nearly four orders of magnitude should be accessible for frequencies within a few hundred MHz of the band-edges of the gap (see Supplementary Note 4).
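The disorder scaling described above can be reproduced qualitatively with a standard ABCD (transfer-matrix) cascade: each unit cell is a short line segment plus a shunt branch formed by a coupling capacitor in series with a parallel LC resonator, and averaging ln|S21| over disorder realizations gives the localization length in units of the period. This is a minimal sketch with assumed element values, not the lumped-element model fitted in Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed unit-cell element values (illustrative, not the fitted parameters)
Z0, v = 50.0, 1.2e8           # line impedance (ohm), phase velocity (m/s)
d = 350e-6                    # lattice constant (m)
L0, C0 = 2.0e-9, 0.35e-12     # resonator L and C -> f0 near 6 GHz
Cc = 20e-15                   # coupling capacitor (F)
N, sigma = 100, 0.005         # cells, 0.5% fractional frequency disorder

def cell(omega, delta):
    """ABCD matrix of one cell: line segment then shunt resonator branch."""
    theta = omega * d / v
    line = np.array([[np.cos(theta), 1j * Z0 * np.sin(theta)],
                     [1j * np.sin(theta) / Z0, np.cos(theta)]])
    L = L0 / (1 + delta) ** 2                           # shifts omega_0 by (1 + delta)
    z_lc = 1j * omega * L / (1 - omega ** 2 * L * C0)   # parallel LC impedance
    y = 1 / (1 / (1j * omega * Cc) + z_lc)              # Cc in series with the LC
    shunt = np.array([[1, 0], [y, 1]])
    return line @ shunt

def s21(omega, deltas):
    M = np.eye(2, dtype=complex)
    for delta in deltas:
        M = M @ cell(omega, delta)
    A, B, C, D = M.ravel()
    return 2 / (A + B / Z0 + C * Z0 + D)

omega = 2 * np.pi * 5.0e9     # probe frequency in the lower transmission band
logs = [np.log(abs(s21(omega, rng.normal(0.0, sigma, N))))
        for _ in range(200)]
print(f"localization length ~ {-N / np.mean(logs):.0f} periods")
```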
Disorder effects and qubit–waveguide coupling. a Calculated localization length for a loss-less metamaterial waveguide with structural disorder (blue circles). The nominal waveguide parameters are determined from the fit to a lumped element model (including resonator loss) to the transmission data in Fig. 1. Numerical simulation has been performed for N = 100 unit cells, averaged over 10⁵ randomly realized values of the resonance frequency ω0, with the standard deviation δω0/ω0 = 0.5%. The vertical green lines represent the extent of the bandgap region. The red curve outside the gap is an analytic model based on ref. 53. For comparison, the solid black curve shows the calculated effective localization length without resonator frequency disorder but including resonator loss. b SEM image of the fabricated qubit–waveguide system. The metamaterial waveguide (gray) consists of 9 periods of the resonator unit cell. The waveguide is capacitively coupled to an external CPW (red) for reflective read-out. c The transmon qubit is capacitively coupled to the resonator at the end of the array. The Z drive is used to tune the qubit resonance frequency by controlling the external flux bias in the superconducting quantum interference device (SQUID) loop. The XY drive is used to coherently excite the qubit. d Capacitively coupled microwave resonator. e Calculated local density of states (LDOS) at the qubit position for a metamaterial waveguide with a length of 9 unit cells and open boundary conditions (experimental measurements of LDOS tabulated in Supplementary Table 1). The band-edges for the corresponding infinite structure are marked with vertical green lines. f Normalized electromagnetic energy distribution along the waveguide vs. qubit frequency for the coupled qubit–waveguide system. The vertical axis marks the distance from the qubit (x/d) in units of the lattice period d
Anomalous Lamb shift near the band-edge
To further probe the electromagnetic properties of the metamaterial waveguide we couple it to a superconducting qubit. In this work, we use a transmon qubit22 with the fundamental resonance frequency ωge/2π = 7.9 GHz and Josephson energy to single electron charging energy ratio of EJ/EC ≈ 100 at zero flux bias (details of our qubit fabrication methods can also be found in ref. 41). Figure 2b shows the geometry of the device where the qubit is capacitively coupled to one end of the waveguide and the other end is capacitively coupled to a 50-Ω CPW transmission line. This geometry allows for forming narrow individual modes in the transmission band of the metamaterial, which can be used for dispersive qubit state read-out42 from reflection measurements at the 50-Ω CPW input port (see Supplementary Note 2 and Supplementary Table 1). Figure 2e, f shows the theoretical photonic LDOS and spatial photon energy localization versus frequency for this finite length qubit–waveguide system. Within the bandgap the qubit is self-dressed by virtual photons which are emitted and re-absorbed due to the lack of escape channels for the radiation. Near the band-edges surrounding the bandgap, where the LDOS is rapidly varying with frequency, this results in a large anomalous Lamb shift of the dressed qubit frequency10,43. Figure 3a shows the measured qubit transition frequency shift from the expected bare qubit tuning curve as a function of frequency. Shown for comparison is the circuit theory model frequency shift of a finite structure with N = 9 periods (blue solid curve) alongside that of an infinite length waveguide (red dashed curve). It is evident that the qubit frequency is repelled from the band-edges on the two sides of the bandgap, a result of the strongly asymmetric density of states in these two regions. The measured frequency shift at the lower frequency band-edge is 43 MHz, in good agreement with the circuit theory model. Note that at the lower frequency band-edge where the localization length approaches zero due to the anomalous dispersion (see Fig. 2a), boundary-effects in the finite structure do not significantly alter the Lamb shift. Near the upper-frequency band-edge, where finite-structure effects are non-negligible due to the weaker dispersion and corresponding finite localization length, we measure a qubit frequency shift as large as −28 MHz. This again is in good correspondence with the finite structure model; the upper band-edge of the infinite length waveguide occurs at a slightly lower frequency with a slightly smaller Lamb shift.
Measured dispersive and dissipative qubit dynamics. a Lamb shift of the qubit transition vs. qubit frequency. b Lifetime of the excited qubit state vs. qubit frequency. Open circles show measured data. The solid blue line (dashed red line) is a theoretical curve from the circuit model of a finite (infinite) waveguide structure. For determining the Lamb shift from measurement, the bare qubit frequency is calculated as a function of flux bias Φ as \(\hbar \omega _{ge} = \sqrt {8E_{\mathrm{C}}E_{\mathrm{J}}\left( \Phi \right)} - E_{\mathrm{C}}\) using the extracted values of EC, EJ, and assuming the symmetric SQUID flux bias relation EJ(Φ) = EJ,max|cos(πΦ/Φ0)|22. The lifetime characterization is performed in the time domain where the qubit is initially excited with a π pulse through the XY drive. The excited state population, determined from the state-dependent dispersive shift of a close-by band-edge waveguide mode, is measured subsequent to a delay time during which the qubit freely decays. Inset to (b) shows a zoomed-in region of the qubit lifetime near the upper band-edge. Solid blue (red) lines show the circuit model contributions to output port radiation (structural waveguide loss), adjusted to include a frequency independent intrinsic qubit lifetime of 10.86 μs. The black dashed line shows the cumulative theoretical lifetime.
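The bare tuning curve referenced in the caption is straightforward to evaluate; the EC and EJ,max values below are assumptions chosen to reproduce ωge/2π ≈ 7.9 GHz and EJ/EC ≈ 100 at zero flux, not the extracted device numbers.

```python
import numpy as np

EC = 0.30        # charging energy / h (GHz), assumed
EJ_max = 28.0    # maximum Josephson energy / h (GHz), so EJ/EC ~ 93

def f_ge(phi):
    """Bare transmon frequency (GHz) vs. flux phi = Phi/Phi0 (symmetric SQUID)."""
    EJ = EJ_max * abs(np.cos(np.pi * phi))
    return np.sqrt(8 * EC * EJ) - EC

for phi in (0.0, 0.1, 0.2, 0.3):
    print(f"Phi/Phi0 = {phi:.1f}: f_ge = {f_ge(phi):.2f} GHz")
```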
Enhancement and suppression of spontaneous emission
Another signature of the qubit–waveguide interaction is the change in the rate of spontaneous emission of the qubit. Tuning the qubit into the bandgap changes the localization length of the waveguide photonic state that dresses the qubit (see Fig. 2f). Since the finite waveguide is connected to an external port which acts as a dissipative environment, the change in localization length \(\ell (\omega )\) is accompanied by a change in the lifetime of the qubit \(T_{{\mathrm{rad}}}(\omega ) \propto e^{2x/\ell (\omega )}\), where x is the total length of the waveguide (see Supplementary Note 5). In addition to radiative decay into the output channel, losses in the resonators in the waveguide also contribute to the qubit's excited state decay. Using a low power probe in the single-photon regime we have measured intrinsic Q-factors of (7.2 ± 0.4) × 10⁴ for the individual waveguide resonances between 4.6 and 7.4 GHz. Figure 3b shows the measured qubit lifetime (T1) as a function of its frequency in the bandgap. The solid blue curve in Fig. 3b shows a fitted theoretical curve which takes into account the loss in the waveguide along with a phenomenological intrinsic lifetime of the qubit (T1,i = 10.86 μs). The dashed red curve shows the expected qubit lifetime for an infinite waveguide length. Qualitatively, the measured lifetime of the qubit behaves as expected; the qubit lifetime drastically increases inside the bandgap region and is reduced in the transmission bands. More subtle features of the measured lifetime include multiple, narrow Fano-like spectral features deep within the bandgap. These features arise from what are believed to be interference between parasitic on-chip modes and low-Q modes of our external copper box chip packaging. In addition, while the measured lifetime near the upper band-edge is in excellent agreement with the finite waveguide theoretical model, the data near the lower band-edge shows significant deviation. We attribute this discrepancy to the presence of low-Q parasitic resonances, observable in transmission measurements between the qubit XY drive line and the 50-Ω CPW port. Possible candidates for such spurious modes include the asymmetric "slotline" modes of the waveguide, which are weakly coupled to our symmetrically grounded CPW line but may couple to the qubit. Further study of the spectrum of these modes and possible methods for suppressing them will be a topic of future studies.
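The exponential scaling T_rad(ω) ∝ e^{2x/ℓ(ω)} quoted above implies that every additional localization length of waveguide multiplies the radiative lifetime by e² ≈ 7.4. A two-line illustration with a placeholder baseline lifetime:

```python
import numpy as np

T0 = 100e-9   # assumed lifetime with no waveguide protection (s), placeholder
for r in (1, 2, 3, 4):                      # r = x / l(omega)
    print(f"x/l = {r}: T_rad ~ {T0 * np.exp(2 * r) * 1e6:.2f} us")
```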
Focusing on the upper band-edge, we plot as an inset to Fig. 3b a zoom-in of the measured qubit lifetime along with theoretical estimates of the different components of qubit decay. Here, the qubit decay results from two dominant effects: detuning-dependent coupling to the lossy resonances in the transmission band of the waveguide, and emission into the output port of the finite waveguide structure. The former effect is an incoherent phenomenon arising from a multi-mode cavity-QED picture, whereas the latter effect arises from the coherent interference of band-edge resonances which can be related to the photon bound state picture and resulting localization length. Owing to the weaker dispersion at the upper band-edge, the extent of the photon bound state has an appreciable impact on the qubit lifetime in the N = 9 finite length waveguide. This is most telling in the strongly asymmetric qubit lifetime around the first waveguide resonance in the upper transmission band. Quantitatively, the slope of the radiative component of the lifetime curve in the bandgap near the band-edge can be shown to be proportional to the group delay (see Supplementary Note 6), \(\left| {\partial T_{{\mathrm{rad}}}/\partial \omega } \right| = T_{{\mathrm{rad}}}\tau _{{\mathrm{delay}}}\). The corresponding group index, ng ≡ cτdelay/x, is a property of the waveguide independent of its length x. Here, we measure a slope corresponding to a group index ng ≈ 450, in good correspondence with the circuit model of the lossy metamaterial waveguide.
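Inverting the slope relation for the group index is a one-line calculation; the lifetime and slope below are illustrative stand-ins chosen to land near the quoted ng ≈ 450, not the measured dataset.

```python
c = 3.0e8                 # speed of light (m/s)
x = 9 * 350e-6            # waveguide length: 9 cells of 350 um (m)
T_rad = 1.0e-6            # radiative lifetime at the probe detuning (s), assumed
slope = 4.7e-15           # |dT_rad/domega| (s per rad/s), assumed

tau_delay = slope / T_rad          # group delay of the finite waveguide
n_g = c * tau_delay / x            # group index, independent of length x
print(f"n_g ~ {n_g:.0f}")          # ~450 for these stand-in numbers
```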
The sharp variation in the photonic LDOS near the metamaterial waveguide band-edges may also be used to engineer the multi-level dynamics of the qubit. A transmon qubit, by construct, is a nonlinear quantum oscillator and thus has a multilevel energy spectrum. In particular, a third energy level (|f〉) exists at the frequency \(\omega _{{\mathrm{gf}}} = 2\omega _{{\mathrm{ge}}} - E_{\mathrm{C}}/\hbar\). Although the transition g–f is forbidden by selection rules, the f–e transition has a dipole moment that is \(\sqrt 2\) larger than the fundamental transition22. This is consistent with the scaling of transition amplitudes in a harmonic oscillator and results in a second transition lifetime that is half of the fundamental transition lifetime for a uniform density of states in the electromagnetic environment of the oscillator. The sharply varying density of states in the metamaterial, on the other hand, can lead to strong suppression or enhancement of the spontaneous emission for each transition. Figure 4 shows the measured lifetimes of the two transitions for two different spectral configurations. In the first scenario, we enhance the ratio of the lifetimes Teg/Tfe by situating the fundamental transition frequency deep inside in the bandgap while having the second transition positioned near the lower transmission band. The situation is reversed in the second configuration, where the fundamental frequency is tuned to be near the upper frequency band while the second transition lies deep inside the gap. In our fabricated qubit, the second transition is about 300 MHz lower than the fundamental transition frequency at zero flux bias, which allows for achieving large lifetime contrast in both configurations.
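The level frequencies quoted above follow from EC alone; a short check using the ~300 MHz anharmonicity, together with the flat-density-of-states lifetime ratio implied by the √2 larger f–e dipole:

```python
f_ge = 7.9    # GHz, fundamental transition at zero flux bias
EC_h = 0.30   # GHz, anharmonicity EC/h inferred from the ~300 MHz offset

f_gf = 2 * f_ge - EC_h        # two-photon g-f transition / h
f_fe = f_gf - f_ge            # f-e transition sits EC/h below f_ge
print(f"f_fe = {f_fe:.2f} GHz")
# sqrt(2) larger dipole -> twice the emission rate in a flat environment:
print("T_fe / T_ge (flat density of states) =", 0.5)
```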
State-selective enhancement and inhibition of radiative decay. a Measurement with the e–g transition tuned deep into the bandgap (ωeg/2π = 5.37 GHz), with the f–e transition near the lower transmission band (ωfe/2π = 5.01 GHz). b Measurement with the e–g transition tuned near the upper transmission band (ωeg/2π = 6.51 GHz), with the f–e transition deep in the bandgap (ωfe/2π = 6.17 GHz). For measuring the f–e lifetime, we initially excite the third energy level |f〉 via a two-photon π pulse at the frequency of ωgf/2. Following the population decay in a selected time interval, the population in |f〉 is mapped to the ground state using a second π pulse. Finally, the ground state population is read using the dispersive shift of a close-by band-edge resonance of the waveguide. g–e (f–e) transition data shown as red squares (blue circles)
Looking forward, we anticipate that further refinement in the engineering and fabrication of the devices presented here should enable metamaterial waveguides approaching a lattice constant of λ/1000, with limited disorder and a bandgap-to-midgap ratio in excess of 50% (see Supplementary Note 7). Such compact, low loss, low disorder superconducting metamaterials can help realize more scalable superconducting quantum circuits with higher levels of complexity and functionality in several regards. They offer a method for densely packing qubits—both in spatial and frequency dimensions—with isolation from the environment and controllable connectivity achieved via bound qubit–waveguide polaritons7,13,44. Moreover, the ability to selectively modify the transition lifetimes provides simultaneous access to long-lived metastable qubit states as well as short-lived states strongly coupled to waveguide modes. This approach realizes a transmon qubit system with state-dependent bound state localization lengths, which can be used as a quantum nonlinear media for propagating microwave photons15,45,46, or as recently demonstrated, to realize spin-photon entanglement and high-bandwidth itinerant single microwave photon detection47,48. Combined, these attributes provide a unique platform for studying the many-body physics of quantum photonic matter49,50,51,52.
Device fabrication
The devices used in this work are fabricated on silicon substrates [Float zone (FZ) grown, 500 μm thickness, >10 kOhm-cm resistivity]. The ground plane, metamaterial waveguide, and qubit capacitor are patterned by electron-beam lithography followed by electron-beam evaporation of 120 nm Al at a rate of 1 nm/s. A liftoff process is performed in n-methyl-2-pyrrolidone at 80 °C for 1.5 h. The Josephson junctions are fabricated using double-angle electron beam evaporation of suspended bridges, following similar techniques as in ref. 41.
The fabricated devices are characterized in a dilution refrigerator with a base temperature of Tf ≈ 7 mK. The input coaxial lines are thermalized at each stage of the fridge with a series of attenuators to reduce the Johnson thermal noise from the room-temperature environment. The output signal is directed through a pair of isolators at the mixing-chamber stage of the fridge and is subsequently sent into an amplifier chain consisting of a HEMT amplifier (Low Noise Factory LNF-LNC4_8C) at the 4-K fridge stage and a low-noise amplifier (Miteq AFS42-00101200-22-10P-42) at room temperature. Frequency-domain characterization is performed using a two-port vector network analyzer (VNA). The transmission (S21) and reflection (S11, separated by a circulator) signals are selectively directed to the output line by means of a mechanical RF switch. For time-domain characterization, a pair of pulse sequences are synthesized for exciting the qubit and for performing read-out. A Tektronix AWG5014C arbitrary waveform generator (AWG) is used to generate I–Q signals at the IF frequency (<200 MHz), and the signals are upconverted in a pair of mixers with local oscillator tones supplied by CW microwave sources (Rohde & Schwarz SMB100A). The output read-out signal is downconverted with a mixer and is registered using a 1 GS/s PCIe digitizer (AlazarTech ATS9870).
The data that support the findings of this study are available from the corresponding author (O.P.) upon reasonable request.
Raimond, J. M. & Haroche, S. Exploring the Quantum. (Oxford University Press, Oxford, 2006).
Reiserer, A. & Rempe, G. Cavity-based quantum networks with single atoms and optical photons. Rev. Mod. Phys. 87, 1379–1418 (2015).
Roy, D., Wilson, C. M. & Firstenberg, O. Colloquium: strongly interacting photons in one-dimensional continuum. Rev. Mod. Phys. 89, 1617 (2017).
Gu, X., Kockum, A. F., Miranowicz, A., Liu, Y.-x & Nori, F. Microwave photonics with superconducting quantum circuits. Phys. Rep. 718–719, 1–102 (2017).
Yao, P. et al. Ultrahigh Purcell factors and Lamb shifts in slow-light metamaterial waveguides. Phys. Rev. B 80, 195106 (2009).
Goban, A. et al. Superradiance for atoms trapped along a photonic crystal waveguide. Phys. Rev. Lett. 115, 063601 (2015).
Calajó, G., Ciccarello, F., Chang, D. & Rabl, P. Atom-field dressed states in slow-light waveguide QED. Phys. Rev. A 93, 033833 (2016).
Bykov, V. P. Spontaneous emission from a medium with a band spectrum. Sov. J. Quantum Electron. 4, 861–871 (1975).
John, S. & Wang, J. Quantum electrodynamics near a photonic band gap: photon bound states and dressed atoms. Phys. Rev. Lett. 64, 2418–2421 (1990).
Kofman, A. G., Kurizki, G. & Sherman, B. Spontaneous and induced atomic decay in photonic band structures. J. Mod. Opt. 41, 353–384 (1994).
Hood, J. D. et al. Atom–atom interactions around the band edge of a photonic crystal waveguide. Proc. Natl Acad. Sci. USA 113, 10507–10512 (2016).
Munro, E., Kwek, L. C. & Chang, D. E. Optical properties of an atomic ensemble coupled to a band edge of a photonic crystal waveguide. New J. Phys. 19, 083018 (2017).
Douglas, J. S. et al. Quantum many-body models with cold atoms coupled to photonic crystals. Nat. Photonics 9, 326–331 (2015).
Shahmoon, E., Kurizki, G., Stimming, H. P., Mazets, I. & Grišins, P. Highly nonlocal optical nonlinearities in atoms trapped near a waveguide. Optica 3, 725–733 (2016).
Douglas, J. S., Caneva, T. & Chang, D. E. Photon molecules in atomic gases trapped near photonic crystal waveguides. Phys. Rev. X 6, 031017 (2016).
Vetsch, E. et al. Optical interface created by laser-cooled atoms trapped in the evanescent field surrounding an optical nanofiber. Phys. Rev. Lett. 104, 203603 (2010).
Yu, S.-P. et al. Nanowire photonic crystal waveguides for single-atom trapping and strong light-matter interactions. Appl. Phys. Lett. 104, 111103 (2014).
Javadi, A. et al. Single-photon non-linear optics with a quantum dot in a waveguide. Nat. Commun. 6, 8655 (2015).
Lodahl, P., Mahmoodian, S. & Stobbe, S. Interfacing single photons and single quantum dots with photonic nanostructures. Rev. Mod. Phys. 87, 347–400 (2015).
Bhaskar, M. K. et al. Quantum nonlinear optics with a germanium-vacancy color center in a nanoscale diamond waveguide. Phys. Rev. Lett. 118, 223603 (2017).
Blais, A., Huang, R.-S., Wallraff, A., Girvin, S. M. & Schoelkopf, R. J. Cavity quantum electrodynamics for superconducting electrical circuits: an architecture for quantum computation. Phys. Rev. A 69, 062320 (2004).
Koch, J. et al. Charge-insensitive qubit design derived from the Cooper pair box. Phys. Rev. A 76, 042319 (2007).
Chen, Y. et al. Qubit architecture with high coherence and fast tunable coupling. Phys. Rev. Lett. 113, 220502 (2014).
Wallraff, A. et al. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature 431, 162–167 (2004).
Pozar, D. M. Microwave Engineering. 4th edn (John Wiley, USA, 1998).
Liu, Y. & Houck, A. A. Quantum electrodynamics near a photonic bandgap. Nat. Phys. 13, 48–52 (2017).
Smith, D. R., Padilla, W. J., Vier, D. C., Nemat-Nasser, S. C. & Schultz, S. Composite medium with simultaneously negative permeability and permittivity. Phys. Rev. Lett. 84, 4184–4187 (2000).
Itoh, T. Electromagnetic Metamaterials: Transmission Line Theory and Microwave Applications (The Engineering Approach). (John Wiley & Sons, New Jersey, 2006).
Koschny, T., Soukoulis, C. M. & Wegener, M. Metamaterials in microwaves, optics, mechanics, thermodynamics, and transport. J. Opt. 19, 084005 (2017).
Alù, A. & Engheta, N. Enabling a new degree of wave control with metamaterials: a personal perspective. J. Opt. 19, 084008 (2017).
Chen, H.-T., Taylor, A. J. & Yu, N. A review of metasurfaces: physics and applications. Rep. Prog. Phys. 79, 076401 (2016).
Rakhmanov, A. L., Zagoskin, A. M., Savelev, S. & Nori, F. Quantum metamaterials: electromagnetic waves in a Josephson qubit line. Phys. Rev. B 77, 144507 (2008).
Göppl, M. et al. Coplanar waveguide resonators for circuit quantum electrodynamics. J. Appl. Phys. 104, 113904 (2008).
Macklin, C. et al. A near-quantum-limited Josephson traveling-wave parametric amplifier. Science 350, 307–310 (2015).
White, T. C. et al. Traveling wave parametric amplifier with Josephson junctions using minimal resonator phase matching. Appl. Phys. Lett. 106, 242601 (2015).
McKay, D. C., Naik, R., Reinhold, P., Bishop, L. S. & Schuster, D. I. High-contrast qubit interactions using multimode cavity QED. Phys. Rev. Lett. 114, 080501 (2015).
Devoret, M. H. Quantum fluctuations in electrical circuits. Les Houches Lectures (1995).
Zhou, J., Lancaster, M. J. & Huang, F. Superconducting microstrip filters using compact resonators with double-spiral inductors and interdigital capacitors. In IEEE MTT-S International Microwave Symposium Digest (2003).
Underwood, D. L., Shanks, W. E., Koch, J. & Houck, A. A. Low-disorder microwave cavity lattices for quantum simulation with photons. Phys. Rev. A 86, 023837 (2012).
Wiersma, D. S., Bartolini, P., Lagendijk, A. & Righini, R. Localization of light in a disordered medium. Nature 390, 671–673 (1997).
Keller, A. J. et al. Al transmon qubits on silicon-on-insulator for quantum device integration. Appl. Phys. Lett. 111, 042603 (2017).
Wallraff, A. et al. Approaching unit visibility for control of a superconducting qubit with dispersive readout. Phys. Rev. Lett. 95, 060501 (2005).
John, S. & Wang, J. Quantum optics of localized light in a photonic band gap. Phys. Rev. B 43, 12772–12789 (1991).
Sundaresan, N. M., Lundgren, R., Zhu, G., Gorshkov, A. V. & Houck, A. A. Interacting qubit-photon bound states with superconducting circuits. Preprint at http://arxiv.org/abs/1801.10167 (2018).
Nikoghosyan, G. & Fleischhauer, M. Photon-number selective group delay in cavity induced transparency. Phys. Rev. Lett. 105, 013601 (2010).
Albrecht, A., Caneva, T. & Chang, D. E. Changing optical band structure with single photons. New J. Phys. 19, 115002 (2017).
Inomata, K. et al. Single microwave-photon detector using an artificial Λ-type three-level system. Nat. Commun. 7, 12303 (2016).
Besse, J.-C. et al. Single-shot quantum non-demolition detection of itinerant microwave photons. Phys. Rev. X 8, 021003 (2018).
Greentree, A. D., Tahan, C., Cole, J. H. & Hollenberg, L. C. L. Quantum phase transitions of light. Nat. Phys. 2, 856–861 (2006).
Hartmann, M. J., Brandão, F. G. S. L. & Plenio, M. B. Strongly interacting polaritons in coupled arrays of cavities. Nat. Phys. 2, 849–855 (2006).
Houck, A. A., Türeci, H. E. & Koch, J. On-chip quantum simulation with superconducting circuits. Nat. Phys. 8, 292–299 (2012).
Noh, C. & Angelakis, D. G. Quantum simulations and many-body physics with light. Rep. Prog. Phys. 80, 016401 (2017).
Hernández-Herrejón, J. C., Izrailev, F. M. & Tessieri, L. Anomalous localization in the aperiodic Kronig–Penney model. J. Phys. A: Math. Theor. 43, 425004 (2010).
We would like to thank Paul Dieterle, Ana Asenjo Garcia, and Darrick Chang for fruitful discussions regarding waveguide QED. This work was supported by the AFOSR MURI Quantum Photonic Matter (grant 16RT0696), the AFOSR MURI Wiring Quantum Networks with Mechanical Transducers (grant FA9550-15-1-0015), the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center (grant PHY-1125565) with support of the Gordon and Betty Moore Foundation, and the Kavli Nanoscience Institute at Caltech. M.M. (A.J.K., A.S.) gratefully acknowledges support from a KNI (IQIM) Postdoctoral Fellowship.
Kavli Nanoscience Institute, California Institute of Technology, Pasadena, CA, 91125, USA
Mohammad Mirhosseini, Eunjong Kim, Vinicius S. Ferreira, Mahmoud Kalaee, Alp Sipahigil, Andrew J. Keller & Oskar Painter
Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, CA, 91125, USA
Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA, 91125, USA
Mohammad Mirhosseini
Eunjong Kim
Vinicius S. Ferreira
Mahmoud Kalaee
Alp Sipahigil
Andrew J. Keller
Oskar Painter
M.M., V.S.F. and O.P. came up with the concept. M.M., A.S. and O.P. planned the experiment. M.M., M.K., V.S.F., A.J.K. and E.K. performed the device design and fabrication. M.M., E.K. and A.S. performed the measurements. M.M., E.K., A.S. and O.P. analyzed the data. All authors contributed to the writing of the manuscript.
Correspondence to Oskar Painter.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Mirhosseini, M., Kim, E., Ferreira, V.S. et al. Superconducting metamaterials for waveguide quantum electrodynamics. Nat Commun 9, 3706 (2018). https://doi.org/10.1038/s41467-018-06142-z
Accepted: 10 August 2018
Further reading:
Coupling a Superconducting Qubit to a Left-Handed Metamaterial Resonator. S. Indrajeet, H. Wang, M.D. Hutchings, B.G. Taketani, Frank K. Wilhelm, M.D. LaHaye & B.L.T. Plourde. Physical Review Applied (2020).
Interactions between a magnon mode and a cavity photon mode mediated by traveling photons. J. W. Rao, Y. P. Wang, Y. Yang, T. Yu, Y. S. Gui, X. L. Fan, D. S. Xue & C.-M. Hu. Physical Review B (2020).
Chiral quantum optics in photonic sawtooth lattices. Eduardo Sánchez-Burillo, Chao Wan, David Zueco & Alejandro González-Tudela. Physical Review Research (2020).
Analog of multiple electromagnetically induced transparency using double-layered metasurfaces. Siyuan Liu, Zhixia Xu, Xiaoxing Yin & Hongxin Zhao. Scientific Reports (2020).
Imaging nano-defects of metal waveguides using the microwave cavity interference enhancement method. Hao Guo, Xin Li, Qiang Zhu, Zhenrong Zhang, Yusong Liu, Zhonghao Li, Huanfei Wen, Yanjun Li, Jun Tang & Jun Liu. Nanotechnology (2020).
Nature Communications ISSN 2041-1723 (online)
188 articles found
Generation and Applications of Plasma (An Academic Review)
Faizan Mehmood, Tariq Kamal, Umair Ashraf
Subject: Physical Sciences, Fluids & Plasmas Keywords: Plasma Generation Techniques, Applications of Plasma; Fusion technology; Plasma ion implantation and Deposition.
Online: 3 October 2018 (15:54:34 CEST)
Plasma, the fourth and most abundant form of matter, exists extensively in the universe, including the intergalactic regions. It is an electrically quasi-neutral medium of unbound negatively and positively charged particles, and it can be produced by subjecting air and various other gaseous mixtures to a strong electromagnetic field, or by heating compressed air or inert gases to create the charged particles known as ions. Nowadays, many researchers are paying attention to the formation of artificial plasma and its potential benefits for mankind, yet the literature on the applications of plasma remains sparse. This paper presents specific methods of generation and applications of plasma that benefit humankind in various fields, such as the electrical, mechanical, chemical, and medical fields. These applications include hydrogen production from alcohol, copper bonding, semiconductor processing, surface treatment, plasma polymerization, coating, plasma display panels, antenna beam forming, nanotechnology, plasma torches, plasma pencils, low-current non-thermal plasmatrons, treatment of prostate cancer, plasma source ion implantation, plasma cutting, plasma etching, pollution control, neutralization of liquid radioactive waste, etc. As a result, the value of plasma technology in the medical industry is increasing rapidly, which is closing the gap between its benefits and the cost of the equipment used to generate and control it.
Plasma Beam Dumps for EuPRAXIA Facility
Guoxing Xia, Alexandre Bonatto, Roger Pizzato Nunes, Linbo Liang, Oscar Jakobsson, Yuan Zhao, Barney Williamson, Can Davut, Xueying Wang
Subject: Physical Sciences, Fluids & Plasmas Keywords: Beam dumps; laser plasma accelerator; plasma beam dump
Online: 4 March 2020 (09:45:00 CET)
Beam dumps are indispensable components for particle accelerator facilities to absorb or dispose beam kinetic energy in a safe way. However, the design of beam dumps based on conventional technology, i.e. the energy deposition via beam-dense matter interaction, makes the beam dump facility complicated and large in size, partly due to nowadays' high beam intensities and energies achieved. In addition, these high-power beams generate radioactive hazards, which need specific methods to deal with. On the other hand, the EuPRAXIA project can advance the laser-plasma accelerator significantly by achieving 1-5 GeV high quality electron beam in a compact layout. Nevertheless, the beam dump based on conventional technique will still produce radiation hazards and make the overall footprint less compact. Here, we propose to implement a plasma beam dump to absorb the kinetic energy from the EuPRAXIA beam. In doing so, the overall compactness of the EuPRAXIA layout will not be impacted, and the radioactivity generated by the facility can be mitigated. In this paper, results from particle-in-cell (PIC) simulations are presented for plasma beam dumps based on EuPRAXIA beam parameters.
The Effect of Plasma Activated Medium and PBS on Human Melanoma Cells Compared With Other Cancer and Normal Cells
Dominika Sersenová, Zdenko Machala, Vanda Repiská, Helena Gbelcová
Subject: Medicine & Pharmacology, Allergology Keywords: cold plasma; plasma activated liquid; cancer cell; melanoma; fibroblast
Online: 5 January 2021 (10:04:48 CET)
Plasma medicine is a new field focusing on biomedical and clinical applications of cold physical plasmas, including their anticancer effects. Cold plasmas can be applied directly or indirectly as plasma activated liquids (PAL). The effects of plasma activated cell growth medium (PAM) and plasma activated phosphate buffered saline (PAPBS), produced with a plasma pen generating a streamer corona discharge in ambient air, were tested on different cancer cell lines (melanoma A375, glioblastoma LN229, and pancreatic cancer MiaPaCa-2) and on normal cells (human dermal fibroblasts HDFa). Viability reduction and apoptosis induction were detected in all cancer cells after incubation in PAL. In melanoma cells we focused on detailed insights into the apoptotic pathways. The anticancer effects depend on the plasma treatment time and PAL concentration; the first 30 minutes of incubation in PAL were enough to start the processes leading to cell death. In fibroblasts, no apoptosis induction was observed; only PAPBS activated for a longer time slightly decreased their viability. The anticancer effects of PAM and PAPBS on cancer cells were selective relative to normal fibroblasts and depended on a correctly chosen activation time and PAL concentration. This selectivity, supported by an optimum ratio of hydrogen peroxide and nitrites in PAL, is very promising for potential clinical applications.
Deposition of Photo-Catalytic Tio2 Film by Low Power Atmospheric Suspension Plasma Spray Using Ar/N2 Working Gas
Sagr Hsain, Yasutaka Ando
Subject: Engineering, Mechanical Engineering Keywords: photo-catalysis, suspension plasma spray, thermal plasma, titanium oxide
Online: 4 September 2018 (15:08:47 CEST)
As a photo-catalytic titanium oxide film deposition process, thermal spray is expected to find practical use, since it is relatively easy to deposit anatase-rich films by this route. However, because of its high equipment and feedstock powder costs, it is very difficult to introduce thermal spray equipment into small companies. In this study, to develop a low-cost thermal spray system, low-power atmospheric suspension plasma spray was carried out with a titanium hydroxide suspension, created by hydrolysis of titanium tetra-isobutoxide, using Ar and N2 as working gases. To avoid sedimentation of the hydroxide particles, the suspension was mechanically milled into a colloidal suspension before being used as feedstock. Moreover, an ultrasonic container was used to keep the suspension particles moving while the spray process was conducted. As for the coating, an anatase-rich TiO2 film could be obtained. For characterization of the film, microstructure observation by optical microscopy and X-ray diffraction was carried out. Consequently, by creating a colloidal suspension, deposition could be conducted without sedimentation of the hydroxide particles during operation. Furthermore, the film proved to have sufficient photo-catalytic activity to decolorize a methylene-blue droplet.
Temperature and Lifetime Measurements in the SSX Wind Tunnel
M. Kaur, K. D. Gelber, A. D. Light, M. R. Brown
Subject: Physical Sciences, Other Keywords: spectroscopy; plasma diagnostics
We describe electron temperature measurements in the SSX MHD wind tunnel using two different methods. First, we estimate Te along a chord by measuring the ratio of the C III 97.7 nm to C IV 155 nm line intensities using a vacuum ultraviolet monochromator. Second, we record a biasing scan with a double Langmuir probe to obtain a local measurement of Te. The aim of these studies is to increase the Taylor state lifetime, primarily by increasing the electron temperature. A model is also proposed to predict the magnetic lifetime of relaxed states and is found to predict the lifetime satisfactorily. Furthermore, we find that proton cooling can be explained by equilibration with the electrons.
Spatial Molecular AlO Temperature Distributions in Laser-Induced Plasma
David M. Surmick, Daryl J. Dagel, Christian G. Parigger
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: molecular spectroscopy; diatomic spectroscopy; Abel transform; plasma spectroscopy; laser-induced breakdown spectroscopy; laser-induced plasma; plasma dynamics
Online: 11 July 2019 (09:30:23 CEST)
Spatially resolved, line-of-sight measurements of aluminum monoxide emission spectra in laser ablation plasma are used with Abel inversion techniques to extract radial plasma temperatures. Contour mapping of the radially deconvolved signal intensity shows a ring of AlO formation near the plasma boundary with the ambient atmosphere. Simulations of the molecular spectra were coupled with the line profile fitting routines. Temperature results are presented with simultaneous inferences from lateral, asymmetric radial, and symmetric radial AlO spectral intensity profiles. This analysis indicates that we measured shockwave phenomena in the radial profiles, including a temperature drop behind the blast wave created during plasma initiation.
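For readers unfamiliar with the inversion step, a minimal onion-peeling Abel inversion can be sketched in a few lines: concentric shells of constant emissivity are assumed, the chord-length geometry matrix is built, and the resulting (triangular) system is solved. The Gaussian test profile is an arbitrary assumption for illustration, not the AlO data of the paper.

```python
import numpy as np

# Onion-peeling Abel inversion: recover radial emissivity f(r) from
# a lateral line-of-sight profile F(y), assuming cylindrical symmetry.
R, n = 1.0, 100                       # plasma radius and number of shells
r = np.linspace(0, R, n + 1)          # shell boundaries

# Synthetic test emissivity (assumed profile, for illustration only)
f_true = np.exp(-((r[:-1] + r[1:]) / 2) ** 2 / 0.1)

# Geometry matrix: chord i at height y = r[i] crosses shell j (j >= i)
A = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        A[i, j] = 2 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                       - np.sqrt(max(r[j] ** 2 - r[i] ** 2, 0.0)))

F = A @ f_true                        # forward projection: lateral profile
f_rec = np.linalg.solve(A, F)         # inversion of the triangular system
print("max reconstruction error:", np.abs(f_rec - f_true).max())
```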
Diffusion Criteria of O3 Gas Emitted From Non-Thermal Atmospheric-Pressure Biocompatible Plasma Sterilizer for Safety Environment
Jang Sick Park, Eun Ha Choi
Subject: Physical Sciences, Fluids & Plasmas Keywords: non-thermal atmospheric-pressure biocompatible plasma; diffusion; ozone species; plasma sterilizer
Medical institutions, where several patients are treated and medical workers are engaged, are always exposed to secondary viral and bacterial infections. It is critical to prevent infection transmission by indirect as well as direct contact through air or splash. The infections of most diseases can be transmitted through the air. HEPA filters installed in air conditioning equipment are used to prevent infection transmission through air in medical institutions, but air circulation takes a long time in a large space. Viruses and bacteria smaller than 0.3 μm cannot be removed by a HEPA filter; hence, those microbes remain alive throughout the air ventilation. A plasma sterilizer has the capability to provide environmentally friendly sterilization by employing reactive oxygen species and reactive nitrogen species at a low cost. We developed an excellent plasma sterilizer by using a non-thermal atmospheric-pressure biocompatible plasma (NBP). Ozone concentration in plasma sources has been derived by Kuhn et al. [1]. The diffusion coefficients inside (D0) and outside (D1) the plasma sterilizer have been calculated to be 0.0641 m2 s-1 and 0.717 m2 s-1, respectively. To sustain O3 concentrations above 121 ppm inside the plasma source and below 0.05 ppm outside the sterilizer, it is necessary to keep the O3 concentration at the exit of the plasma sterilizer below 0.28 ppm; accordingly, the diffusion coefficient D1 has been designed to be about 11 times as large as D0.
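The design figures quoted above can be checked with a line of arithmetic; a minimal sketch using only the numbers in the abstract:

```python
# Design figures quoted in the abstract.
d_inside = 0.0641   # D0, diffusion coefficient inside the sterilizer, m^2/s
d_outside = 0.717   # D1, diffusion coefficient outside, m^2/s

# The outside coefficient is designed to be ~11x the inside one.
print(f"D1/D0 = {d_outside / d_inside:.1f}")  # -> 11.2

# Target ozone concentrations (ppm) at each stage.
c_source, c_exit, c_ambient = 121.0, 0.28, 0.05
print(f"source -> exit dilution:   {c_source / c_exit:.0f}x")    # ~432x
print(f"exit -> ambient dilution:  {c_exit / c_ambient:.1f}x")   # ~5.6x
```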
Plasma Based Water Purifier: Design And Testing of Prototype with Different Samples of Water
Suraj M, Anuradha T
Subject: Engineering, Other Keywords: Plasma generation, non-thermal Plasma, pulsating DC power, Ozone, cost-improvement
The objective of the prototype is to eliminate the polluting contamination of water sources caused by the discharge of untreated industrial waste, generated mainly by the industrial and home sectors. In this project, a prototype of water purification by plasma technology has been designed. The prototype converts contaminated water into a plasma stream and eliminates the pathogens from the water by exposing it to ultraviolet radiation and plasma sterilisation. The polluted water is accelerated to high speed using a water pump in order to convert it into a liquid-gas mixture for ease of plasma generation. This process is achieved when the electric supply from a source of alternating current (AC) is applied to the water by means of high-voltage electrodes. Afterwards, the mixture slows down to return to liquid form and clean water is obtained. The whole process takes place without significantly raising the temperature, which is also known as non-thermal plasma. The device also has an automatic flow and pressure control system. Finally, a short feasibility study has been conducted on the water samples collected, and the report obtained from the Chennai Metropolitan Water Supply and Sewage Board is presented. It has been concluded that this new plasma-based water treatment system will be more efficient and cheaper than current wastewater treatment techniques and can be used in the future as a replacement for current secondary and tertiary treatments of industrial wastewater.
Plasma Expansion Dynamics in Ultra-High-Pure Hydrogen Gas
Ghaneshwar Gautam, Christian G. Parigger
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: laser-plasma interactions; plasma dynamics and flow; hypersonic flows; emission spectra
Micro-plasma is generated in ultra-high-pure hydrogen gas filled inside a cell at a pressure of (1.08 ± 0.033) × 105 Pa (810 ± 25 Torr) by using a Q-switched Nd:YAG laser device operated at 1064 nm wavelength and 14 ns pulse duration. Micro-plasma emission spectra of the hydrogen Balmer alpha line, Hα, are recorded with a Czerny-Turner type spectrometer and an intensified charge-coupled device. The spectra are calibrated for wavelength and corrected for detector sensitivity. During the first few tens of nanoseconds after initiating optical breakdown, significantly Stark-broadened and Stark-shifted Hα lines mark the outward expansion at speeds well above hypersonic. The vertical diameters of the spectrally resolved plasma images are measured for time delays of 10 ns to 35 ns to determine expansion speeds of the order of 100 km/s to 10 km/s. For time delays of the order of 0.5 µs to 1 µs, the expansion decreases to the speed of sound of 1.3 km/s in the near-ambient temperature and pressure hydrogen gas.
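A minimal sketch of how such image-based expansion speeds follow from diameter-versus-delay data; the diameters below are hypothetical, chosen only to reproduce the quoted order of magnitude:

```python
import numpy as np

# Hypothetical vertical plasma diameters (mm) at the quoted time delays (ns);
# the actual measured values are in the paper, these are illustrative only.
t_ns = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
d_mm = np.array([1.0, 2.0, 2.6, 3.0, 3.2, 3.3])

# Boundary expansion speed: half the rate of change of the diameter.
# 1 mm/ns = 1000 km/s, hence the conversion factor.
v_km_s = 0.5 * (np.diff(d_mm) / np.diff(t_ns)) * 1000.0
print(v_km_s)  # -> [100., 60., 40., 20., 10.], spanning the quoted range
```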
Cold Atmospheric Plasma for COVID-19
Zhitong Chen, Richard Wirz
Subject: Life Sciences, Biophysics Keywords: Cold atmospheric Plasma; sterilization; Plasma-activated medium; Nitric oxide; Immunotherapy; COVID-19
Online: 8 April 2020 (11:34:11 CEST)
The recent pandemic has greatly stressed supply chains, treatment modalities, and medical resources. Cold atmospheric plasma (CAP) has been used for a wide range of applications in biomedical engineering due to its many components, including electrons, charged particles, reactive oxygen species (ROS), reactive nitrogen species (RNS), free radicals, ultraviolet (UV) photons, molecules, electromagnetic fields, physical forces, and electric fields. In this manuscript, we develop CAP devices for COVID-19. Our manuscript indicates the advantages of highly deployable CAP devices for both sanitation and treatment, without the need for supply chains of special consumables such as hand sanitizers and the like. We hope that this timely research will help engage the broader community of engineers that wish to help the medical community with this pandemic and to prevent and treat future outbreaks.
Enhancing Photocatalytic Activity of ZnO Nanoparticles in a Circulating Fluidized Bed with Plasma Jets
Shiwei Ma, Yunyun Huang, Ruoyu Hong, Xuesong Lu, Jianhua Li
Subject: Chemistry, Chemical Engineering Keywords: Plasma, zinc oxide, photocatalysis, nanomaterials.
Online: 9 December 2020 (15:51:59 CET)
In this work, zinc oxide (ZnO) nanoparticles were modified in a circulating fluidized bed through argon-hydrogen (Ar-H) alternating-current (AC) arc plasma, which shows the characteristics of non-equilibrium and equilibrium plasma at the same time. In addition, a circulating fluidized bed with two plasma jets was used for cyclic processing. The catalytic degradation performance on Rhodamine B (Rh B) by Ar-H plasma modified ZnO and pure ZnO was tested in aqueous media to identify the significant role of hydrogen atoms in the Rh B degradation mechanism. Meanwhile, the effects of plasma treatment time on the morphology, size and photocatalytic performance of ZnO were also investigated. The results demonstrated that ZnO treated for 20 minutes by Ar-H plasma showed a Rh B photocatalytic degradation rate ten times greater than that of pure ZnO, and the Rh B degradation process follows first-order kinetics. Furthermore, the photocatalyst cycling experiments showed that the modified ZnO still displays optimum photocatalytic activity after five cycles. The improvement of photocatalytic activity and luminescence performance is attributed to the significant increase of the surface area; in addition, the hydrogen atoms introduced on the surface act as shallow donors and could extend the carrier lifetime.
Plasma MicroRNAs Can be a Potential Diagnostic Biomarker for Endometriosis
Zhihong Zhuo, Chuhan Wang, Gao Li, Huimin Yu
Subject: Medicine & Pharmacology, Obstetrics & Gynaecology Keywords: endometriosis; circular; microRNA; diagnosis; plasma
Online: 8 July 2019 (06:10:33 CEST)
Plasma microRNAs are considered to be potential diagnostic biomarkers for endometriosis. Increasing evidence has shown that a large number of miRNAs are abnormally expressed in endometriosis plasma and play irreplaceable roles in diagnosis. The aim of our study was to identify the differential expression of circulating miRNAs between normal women and women with endometriosis by reviewing the PubMed, ScienceDirect, and Cochrane databases and analyzing the miRNA data downloaded from the GEO database. Based on the differential miRNA expression in this review, we evaluated the diagnostic values of the differentially expressed miRNAs, particularly during the menstrual phases. According to the cut-off criteria of |log2 FC| > 1.0 and P < 0.05, 36 differentially expressed miRNAs were identified, including 13 upregulated miRNAs and 23 downregulated miRNAs. We identified miR-155, miR-574, miR-23a, and miR-520d via a Venn diagram. Functional enrichment analysis indicated that the target miRNAs might be involved in various pathways related to endometriosis, including the neurotrophin, Hippo, oocyte meiosis, ubiquitin-mediated proteolysis, HTLV infection, FoxO, and Rap1 signaling pathways. The transcription factors CTNNB1, MYC, and ESR1 were related to the differentially expressed miRNAs. In summary, our study suggested that a four-miRNA panel could be included as a prognostic marker in endometriosis.
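The stated cut-off criterion is easy to express programmatically; a minimal sketch with hypothetical values (the column names and numbers are illustrative, not the study's GEO data):

```python
import pandas as pd

# Hypothetical table of per-miRNA statistics.
df = pd.DataFrame({
    "mirna":  ["miR-155", "miR-574", "miR-23a", "miR-520d", "miR-21"],
    "log2fc": [1.8, -1.6, 1.2, -2.1, 0.4],
    "pvalue": [0.01, 0.03, 0.04, 0.002, 0.20],
})

# Cut-off criteria quoted in the abstract: |log2 FC| > 1.0 and P < 0.05.
de = df[(df["log2fc"].abs() > 1.0) & (df["pvalue"] < 0.05)]
up, down = de[de["log2fc"] > 0], de[de["log2fc"] < 0]
print(len(up), "upregulated;", len(down), "downregulated")
```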
Level-specific differences in systemic expression of pro- and anti-inflammatory cytokines and chemokines after spinal cord injury
James Hong, Alex Chang, Mohammad Zavvarian, Jian Wang, Yang Liu, Michael Fehlings
Subject: Life Sciences, Molecular Biology Keywords: spinal cord injury; inflammation; plasma
While over half of all spinal cord injuries (SCIs) occur in the cervical region, the majority of preclinical studies have focused on models of thoracic injury. However, these two levels are anatomically distinct—with the cervical region possessing a greater vascular supply, grey-white matter ratio and sympathetic outflow relative to the thoracic region. As such, there exists a significant knowledge gap in the secondary pathology at these levels following SCI. In this study, we characterized the systemic plasma markers of inflammation over time (1, 3, 7, 14, 56 days post-SCI) after moderate-severe, clip-compression cervical and thoracic SCI in the rat. Using high-throughput ELISA panels, we observed a clear level-specific difference in plasma levels of VEGF, leptin, IP10, IL18, GCSF, and fractalkine. Overall, cervical SCI had reduced expressions of both pro- and anti-inflammatory proteins relative to thoracic SCI, likely due to sympathetic dysregulation associated with higher level SCIs. However, contrary to the literature, we did not observe level-dependent splenic atrophy with our incomplete SCI model. This is the first study to compare the systemic plasma-level changes following cervical and thoracic SCI using level-matched and time-matched controls. The results of this study provide the first evidence in support of level-targeted intervention and also challenge the phenomenon of high SCI-induced splenic atrophy in incomplete SCI models.
Experimental Study of n-Decane Decomposition with Microsecond Pulsed Discharge Plasma
Feilong Song, Di Jin, Min Jia, Wenwen Wei, Huimin Song, Yun Wu
Subject: Physical Sciences, Applied Physics Keywords: plasma; DBD; decomposition; n-decane
Online: 19 April 2017 (16:04:45 CEST)
A highly integrated experimental system for plasma decomposition of fuels was built. Experiments were conducted and confirmed that macromolecular chain hydrocarbons were cracked by large-gap dielectric barrier discharge under the excitation of a microsecond-pulse power supply. Alkanes and olefins with a C atom number smaller than 10, as well as hydrogen, were found in the cracked products of n-decane (n-C10H22). The combination of preheating and plasma decomposition had strong selectivity for olefins. Under strong discharge conditions, micromolecular olefins were found in the products. Moreover, there was a general tendency for micromolecular olefins to gradually account for a higher percentage of the products at higher temperature and discharge frequency.
Influence of Plasma Treatment to the Performance of Amorphous IGZO based Flexible Thin Film Transistors
Long-long Chen, Xiang Sun, Ji-feng Shi, Xi-feng Li, Xing-wei Ding, Jian-Hua Zhang
Subject: Engineering, Other Keywords: TFTs; IGZO; flexible; plasma treatment
In this work, thin film transistors (TFTs) using In-Ga-Zn oxide (IGZO) as the active layer were fabricated on flexible PI substrates, with the gate insulator treated with NH3 plasma or N2O plasma, respectively. The performance of IGZO TFTs with different plasma species and treatment times is investigated and compared. The experimental results show that the plasma species and treatment time play an important role in the threshold voltage, field-effect mobility, Ion/Ioff ratio, sub-threshold swing (SS) and bias stress stability of the devices. The TFT with a 10-second NH3 plasma treatment shows the best performance; specifically, a threshold voltage of 0.34 V, field-effect mobility of 15.97 cm2/Vs, Ion/Ioff ratio of 6.33×107, and sub-threshold swing of 0.36 V/dec. The proposed flexible IGZO-TFTs can be used as driving devices in next-generation flexible displays.
SMN Deficiency Destabilizes ABCA1 Expression in Human Fibroblasts. Novel Insights in Pathophysiology of Spinal Muscular Atrophy
Francesca Gabanella, Annalisa Onori, Cinzia Pisani, Marco Fiore, Giampiero Ferraguti, Andrea Colizza, Marco De Vincentiis, Marco Ceccanti, Maurizio Inghilleri, Nicoletta Corbi, Claudio Passananti, Maria Grazia Di Certo
Subject: Life Sciences, Molecular Biology Keywords: SMN; SMA; plasma membrane; ABCA1; cholesterol
Online: 29 December 2022 (09:13:39 CET)
The deficiency of Survival Motor Neuron (SMN) protein causes Spinal Muscular Atrophy (SMA), a rare neuromuscular disease that affects different organs. SMN is a key player in RNA metabolism regulation. An intriguing aspect of SMN function is its relationship with plasma membrane-associated proteins. Here, we provide a first demonstration that SMN affects the ATP-binding cassette transporter A1 (ABCA1), a membrane protein critically involved in cholesterol homeostasis. In human fibroblasts, we showed that SMN associates with ABCA1 mRNA and impacts its subcellular distribution. Consistent with the central role of ABCA1 in the efflux of free cholesterol from cells, we observed cholesterol accumulation in SMN-depleted human fibroblasts. These results were also confirmed in fibroblasts derived from a SMA type I patient. These findings not only validate the intimate connection between SMN and plasma membrane-associated proteins, but also highlight a contribution of dysregulated cholesterol efflux to SMA pathophysiology.
Construction of 2DE-Patterns of Plasma Proteins: Aspect of Potential Tumor Markers
Stanislav Naryzhny, Natalia Ronzhina, Elena Zorina, Fedor Kabachenko, Nikolay Klopov, Victor Zgoda
Subject: Life Sciences, Biochemistry Keywords: plasma; biomarker; proteomics; 2DE; proteoform; pattern
Cancer is a complex systemic disease that changes the entire proteome. The analysis of this transformation makes it possible to determine tumor markers, that is, the most characteristic biomacromolecules produced by tumor cells. Here, the question of finding ideal tumor markers, which should be sensitive, specific, and reliable, is an acute issue. Unfortunately, none of the tumor markers, even those used in the clinic, has all these characteristics. Despite this, many tumor markers have demonstrated excellent clinical relevance for monitoring the effectiveness of different treatments for cancer patients. The use of markers also aids in the early detection of cancer recurrence and in prognosis. Therefore, the situation in this area can be improved in two ways: by attempting to find an ideal single tumor marker, or by generating panels of different markers. In both cases, proteomics certainly plays a major role. Human plasma is one of the most popular samples, as it is commonly collected in the clinic and provides noninvasive, rapid analysis for any type of disease, including cancer. Many efforts have been applied in searching for "ideal" tumor markers by digging deep into the plasma proteome. There is a line of evidence that the most abundant, so-called "classical plasma proteins" may be used to generate a tumor biomarker profile. To be comprehensive, these profiles should contain information not only about protein levels but also about the proteoform distribution of each protein. Initially, the profile of these proteins in the normal state should be generated. Here, we present data about these profiles generated by two-dimensional electrophoresis followed by mass spectrometry and immunodetection.
Supernova Plasma Effects on Extragalactic Light Fields
Paul Marko
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Supernova; Plasma; Gravitational Lens; Einstein Cross; Hubble Constant; Hubble Tension
Simulations based on supernova (SN) observations predict that several galactic SN explosions (SNe) occur every century. Unlike SNe within the interstellar medium (ISM), where ambient gas generally absorbs blast waves within a million years, SNe occurring in a rarefied environment outside of the ISM generate blast waves which remain in a high-velocity free expansion phase for more extended periods. The SN blast wave forms an expanding spherical shell, and when multiple blast waves intersect, the overlapping region naturally takes the form of a ring, an arc, or an Einstein Cross structure. The analysis shows that the high-velocity plasma establishes a medium with a permeability which drives the index of refraction above 1. As a result, when a shock discontinuity forms in the overlapping region, light is reflected from the host galaxy, which exposes the intersecting blast wave regions. The expanding shells are shown to induce an achromatic redshift to the reflected light consistent with those measured for gravitational lenses. Further, it is shown that a Hubble equation for a blast wave around the Milky Way Galaxy can be parameterized to align with measured redshifts of extragalactic light fields over a wide range of distances.
Slag Formation During Reduction of Iron Oxide Using Hydrogen Plasma Smelting Reduction
Masab Naseri Seftejani, Johannes Schenk, Daniel Spreitzer, Michael Andreas Zarl
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: slag; basicity; hydrogen plasma; smelting reduction; iron oxide; plasma arc; hydrogen utilisation; degree of reduction; hematite
Replacing carbon with hydrogen is a huge step towards reducing CO2 emissions in the iron- and steel-making industry. The reduction of iron oxides using hydrogen plasma smelting reduction as an alternative to conventional steel-making routes has been studied at Montanuniversitaet Leoben, Austria. The aim of this work was to study the slag formation during the reduction process and the reduction behaviour of iron oxides. Furthermore, the reduction behaviour of iron ore during continuous feeding was assessed. Mixtures of iron ore and calcined lime with basicities of 0, 0.8, 1.6, 2.3, and 2.9 were melted and reduced by hydrogen. The off-gas composition was measured during the operations to calculate the process parameters. The reduction parameters, namely the degree of reduction, degree of hydrogen utilisation, produced iron, and slag, are presented. The results of the batch-charged experiments showed that at the beginning of the reduction process, the degree of hydrogen utilisation was high, and then it decreased over the operation time. In contrast, during the continuous-feeding experiment, the degree of hydrogen utilisation could be kept approximately constant. The highest degrees of reduction and hydrogen utilisation were obtained upon the application of a slag with a basicity of 2.3. The experiment showed that upon the continuous feeding of iron ore, the best conditions for the reduction process using hydrogen could be applied.
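One common way to compute the degree of hydrogen utilisation from off-gas analysis is the H2O fraction of the total hydrogen leaving the reactor; a minimal sketch assuming this definition (the paper's exact formulation may differ):

```python
# Minimal sketch of one common definition of instantaneous hydrogen
# utilisation from off-gas analysis; not necessarily the paper's formulation.
def hydrogen_utilisation(x_h2o, x_h2):
    """Fraction of hydrogen leaving the reactor as H2O rather than H2."""
    return x_h2o / (x_h2o + x_h2)

# Example: off-gas mole fractions early vs. late in a batch run,
# illustrating the decline described in the abstract (values hypothetical).
print(hydrogen_utilisation(0.20, 0.80))  # 0.20 early in the reduction
print(hydrogen_utilisation(0.03, 0.97))  # 0.03 near the end
```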
Broadband Time-Resolved Absorption and Dispersion Spectroscopy of Methane and Ethane in A Plasma Using a Mid-Infrared Dual-Comb Spectrometer
Muhammad Ali Abbas, Luuk van Dijk, Khalil Eslami Jahromi, Mohammadreza Nematollahi, Frans J.M. Harren, Amir Khodabakhsh
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: dual frequency comb spectroscopy; mid-infrared absorption and dispersion spectroscopy; electrical discharge plasma; time-resolved plasma kinetics
Conventional mechanical Fourier Transform Spectrometers (FTS) are able to simultaneously measure absorption and dispersion spectra of gas-phase samples. However, they usually need very long measurement times to achieve time-resolved spectra with a good spectral and temporal resolution. Here, we present a mid-infrared dual-comb-based FTS in an asymmetric configuration, providing broadband absorption and dispersion spectra with a spectral resolution of 5 GHz, a temporal resolution of 20 μs, and a total measurement time of a few minutes. We used the dual-comb spectrometer to monitor the reaction dynamics of methane and ethane in an electrical plasma discharge. We observed ethane/methane formation as a recombination reaction of hydrocarbon radicals in the discharge in various static and dynamic conditions. The results demonstrate a new analytical approach for measuring fast molecular absorption and dispersion changes and monitoring fast dynamics of chemical reactions, which can be interesting for chemical kinetic research and particularly for the combustion and plasma analysis community.
Reduction of Haematite Using Hydrogen Thermal Plasma
Masab Naseri Seftejani, Johannes Schenk, Michael Andreas Zarl
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: hydrogen plasma; smelting reduction; iron oxide; plasma arc; degree of hydrogen utilization; degree of reduction; hematite; basicity
The development of hydrogen plasma smelting reduction as a CO2 emission-free steel-making process is a promising approach. This study presents a concept for the reduction of hematite using hydrogen thermal plasma. A laboratory scale and a pilot scale hydrogen plasma smelting reduction (HPSR) process are introduced. To assess the reduction behavior of hematite, a series of experiments has been conducted, and the main parameters of the reduction behavior, namely the degree of hydrogen utilization, degree of reduction and the reduction rate, are discussed. The thermodynamic aspect of the hematite reduction is considered, and the pertinent calculations have been carried out using FactSage 7.2. The degree of hydrogen utilization and the degree of reduction were calculated using the off-gas chemical composition. The contribution of carbon, introduced from the graphite electrode, ignition pin and steel crucible, to the reduction reactions was studied. The degree of reduction of hematite, regarding H2O, CO and CO2 as the gaseous reduction products, is determined. It is shown that the degree of hydrogen utilization and the reduction rate were high at the beginning of the experiments and then decreased during the reduction process owing to the depletion of iron oxide. Conducting experiments with a high slag basicity of B2 = 2 led to a decrease of the phosphorus concentration in the produced iron.
Laboratory Hydrogen-Beta Emission Spectroscopy for Analysis of Astrophysical White Dwarf Spectra
Christian G Parigger, Kyle A. Drake, Christopher M Helstern, Ghaneshwar Gautam
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: white dwarfs; burning in stars; plasma diagnostics; atomic spectra; plasma spectroscopy; laser spectroscopy; laser-induced breakdown spectroscopy
This work communicates a review of Balmer series hydrogen beta line measurements and applications for the analysis of white dwarf stars. Laser-induced plasma investigations explore electron density and temperature ranges comparable to white dwarf star signatures such as Sirius B, the companion to the brightest star observable from the earth. Spectral line shape characteristics of the hydrogen beta line include width, peak separation, and central dip-shift, thereby providing three indicators for electron density measurements. The hydrogen alpha line shows two primary line-profile parameters for electron density determination, namely, width and shift. Both Boltzmann plots and line-to-continuum ratios yield temperature. The line shifts recorded with temporally and spatially resolved optical emission spectroscopy of hydrogen plasma in laboratory settings can be larger than the gravitational redshifts that occur in absorption spectra from radiating white dwarfs. Published astrophysical spectra display significantly diminished Stark or pressure broadening contributions to red-shifted atomic lines. Gravitational redshifts allow one to assess the ratio of mass and radius of these stars, and subsequently, the mass from cooling models.
Polarimetry for 3He Ion Beams From Laser-Plasma Interactions
Chuan Zheng, Pavel Fedorets, Ralf Engels, Chrysovalantis Kannis, Ilhan Engin, Sören Möller, Robert Swaczyna, Herbert Feilbach, Harald Glückler, Manfred Lennartz, Heinz Pfeifer, Johannes Pfennings, Claus M. Schneider, Norbert Schnitzler, Helmut Soltner, Markus Büscher
Subject: Physical Sciences, Nuclear & High Energy Physics Keywords: polarized ion beam; polarimetry; laser-plasma acceleration
We present a compact polarimeter for 3He ions with special emphasis on the analysis of short-pulsed beams accelerated during laser-plasma interactions. We discuss the specific boundary conditions for the polarimeter, such as the properties of laser-driven ion beams, the selection of the polarization-sensitive reaction in the polarimeter, the representation of the analyzing-power contour map, the choice of the detector material used for particle identification, as well as the production procedure of the required deuterated foil-targets. The assembled polarimeter has been tested using a tandem accelerator delivering unpolarized 3He ion beams, demonstrating good performance in the few-MeV range. The statistical accuracy and the deduced figure-of-merit of the polarimetry are discussed, including the count-rate requirement and the lower limit of accuracy for beam-polarization measurements at a laser-based ion source.
Surface Characterization of Polytetrafluoroethylene (PTFE) Substrate after Oxygenated Plasma Treatment towards Potential Food Processing Application
Noraziani Zainal Abidin, Haslaniza Hashim, Saiful Irwan Zubairi
Subject: Materials Science, Surfaces, Coatings & Films Keywords: biofouling; deposition; hydrophobic; plasma treatment; PTFE; stickiness
The spray drying process causes the buildup of an unspecified and unique pattern of deposits on the wall. Powder recovery of fruit juice by spray dryer is associated with stickiness problems because such foods contain low molecular weight sugars and organic acids, which have a low glass transition temperature (Tg). The surface properties of the oxygen plasma-treated PTFE substrate were evaluated using different parameters of Plasma Enhanced Chemical Vapour Deposition (PECVD) prior to spray drying analysis. In this study, a fabrication method for nearly perfect superhydrophobic surfaces through plasma treatment with oxygen gas was developed and utilized. The plasma-treated PTFE was prepared at a fixed oxygen gas flow rate of 30 cm3/min, varying the deposition time from 1 to 15 minutes to induce a hydrophobic surface on the PTFE substrate. The characterization techniques used to determine the morphology and chemical bonding of the substrate were field emission scanning electron microscopy (FESEM) and Fourier transform infrared spectroscopy (FTIR), respectively. The hydrophobicity of the samples was determined by the water droplet contact angle. The sample prepared at a radio frequency (RF) power of 90 W for a 15-minute treatment time showed a porous and sponge-like microstructure, which correlates with the best contact angle performance, reaching the superhydrophobic regime (171°). Surface morphology analysis using scanning electron microscopy (SEM) showed changes in the roughness of the treated substrate surface. This method shows great potential for solving food processing issues related to biofouling (e.g., powder stickiness) that would otherwise hamper high productivity and recovery.
Research on Electromagnetic Radiation Mechanism During Detonation of Energetic Material
Yuanbo Cui, Deren Kong, Jian Jiang, Shang Gao
Subject: Physical Sciences, Radiation & Radiography Keywords: energetic material; plasma; detonation; electromagnetism; high temperature
During the deflagration of energetic materials, strong electromagnetic radiation is generated, which causes surrounding electronic equipment to fail to work normally. To solve this problem, it is necessary to clarify the mechanism of the electromagnetic radiation generated by energetic materials. The mechanism of the plasma produced by the deflagration of energetic materials is an important topic in the aerospace and geophysics fields. The academic community holds two main viewpoints on the mechanism of electromagnetic radiation generated by energetic materials: one is that the solid material is squeezed and deformed during the deflagration of energetic materials, and charges of different polarities rub in space to form effective electric dipoles, which eventually generate electromagnetic radiation. The other view is that the deflagration of energetic materials causes the temperature of the medium to rise sharply, and bremsstrahlung is formed during the compression and diffusion of the high-temperature wave front, resulting in the generation of electromagnetic radiation. This paper, based on theoretical analysis and experimental data, holds the view that the electromagnetic radiation is generated by the high-temperature thermal effect. It studies the relationship between temperature and electromagnetic radiation and obtains quantitative analysis conclusions.
Modifying Geopolymer Wettability by Plasma Treatment and Fly Ash
Vojtěch Růžek, Petr Louda, Paweł Just, Karol Prałat, Justyna Ciemnicka, Przemysław Plaskota, Katarzyna Ewa Buczkowska
Subject: Materials Science, Polymers & Plastics Keywords: geopolymer; plasma; surface treatment; fly ash; wettability
This paper investigates changes in geopolymer wettability with increasing mass fraction of high-carbon fly ash and with surface treatment by cold atmospheric plasma (CAP). In this study, multiple samples of geopolymers were prepared, including samples with 5% and 10% high-carbon fly ash from a coal-fired power station. The wettability of the samples was then measured before and after plasma treatment, both on the surface and on a cut surface. The addition of fly ash had only a small effect on wettability: in most cases it merely lowered the initial contact angle without accelerating soaking for compact geopolymer, and it actually slowed soaking for foamed geopolymer. Plasma treatment, by contrast, had a significant impact and made the geopolymer hydrophobic.
Plasma Chemical Production Technology of AlN as High Thermal Conductivity Material
Daniil Aleksandrov, Artem Kim, Pavel Novikov, Anatoly Popovich
Subject: Materials Science, Metallurgy Keywords: aluminum nitride; plasma chemical synthesis; nitride ceramics
A technology for obtaining nanosized aluminum nitride powder by plasma-chemical synthesis is presented. Nitrogen gas (N2), melamine (C3H6N6) and ammonia (NH3) were used as sources of nitrogen. Aluminum powder of different fractions was used as the source of aluminum. The influence of the nitrogen source, the height of the injector, and the input power of the plasma equipment on the synthesized aluminum nitride powder is shown. The resulting aluminum nitride powder has a size of d90 = 60 nm. The parameters of aluminum nitride synthesis did not in any way affect the granulometric composition of the synthesized powder materials. It was found that, due to its high binding energy, the nitrogen molecule (N2) reacts poorly with aluminum powder particles; as a result, a mixture of nitrogen and ammonia gases in a ratio of 70/30 (mol.%) was used for aluminum nitride synthesis.
Development Research on Integrating CNC Machine Tool with Plasma for On-line Surface Heat Treatment
Shao Hsien Chen, Bo-Ting Wang
Subject: Materials Science, Biomaterials Keywords: CNC Machine Tool; Plasma; Surface Heat Treatment
Online: 23 June 2021 (12:47:11 CEST)
Mechanical engineering workpieces require heat treatment after rough machining to reach the required mechanical strength, but heat treatment can induce workpiece deformation so that the workpiece cannot be reworked. In this study, a plasma source was integrated with a lathe, and on-line heat treatment was performed to achieve the required mechanical strength and hardness, so as to reduce the number of machining and handling steps. For on-line heat treatment, however, it is important to study the machine and plasma parameters of the lathe and plasma source; the research method is ultimately used to optimize the process and reduce the machining cost and machining error. The variable factors in on-line real-time surface heat treatment are revolution, feed rate and current; the objective function is the hardness of the mechanical properties. In the screening experiment, the interaction of factors was analyzed using a full factorial experiment. A Central Composite Design was combined with the lack-of-fit test for the optimization experiment, and the R2 coefficient was used to determine whether the regression model was appropriate. The optimum parameters were derived from the contour diagram and response surface diagram. The experimental results show that the significant factors include revolution, feed rate and current; the optimum parameters are a revolution of 168 rpm, a feed rate of 0.068 mm/rev and a current of 86 A. The experimental results at the optimum parameters show that the surface hardness increased from 306 HLD to 806 HLD, a surface hardening enhancement of 163%, so the on-line real-time heat treatment equipment has an excellent hardening effect.
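As a sketch of the response-surface step described above, the following fits a second-order model by least squares to synthetic data; the runs and hardness values are generated for illustration, not the paper's CCD measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the design runs: factors are revolution (rpm),
# feed rate (mm/rev) and current (A); hardness (HLD) is generated from a
# known quadratic surface plus noise, purely to illustrate the fitting step.
rpm = rng.uniform(100, 220, 20)
feed = rng.uniform(0.04, 0.10, 20)
amp = rng.uniform(60, 100, 20)
hard = (800 - 0.01 * (rpm - 168) ** 2 - 2e5 * (feed - 0.068) ** 2
        - 0.05 * (amp - 86) ** 2 + rng.normal(0, 5, 20))

# Full second-order RSM design matrix: intercept, linear, square, interaction.
X = np.column_stack([np.ones_like(rpm), rpm, feed, amp,
                     rpm**2, feed**2, amp**2,
                     rpm*feed, rpm*amp, feed*amp])
beta, *_ = np.linalg.lstsq(X, hard, rcond=None)
# The stationary point of the fitted surface approximates the optimum
# (here 168 rpm, 0.068 mm/rev, 86 A by construction) when noise is small.
```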
Early Universe Plasma Separation and the Creation of a Dual Universe
Mohammed B. Al-Fadhli
Subject: Physical Sciences, Acoustics Keywords: Plasma; accelerated expansion; parallel universe; duality; antimatter
The recent Planck Legacy release revealed a closed and positively curved early universe with a confidence level greater than 99%. In this study, the Friedmann–Lemaître–Robertson–Walker (FLRW) metric is enhanced to model the early universe plasma, incorporating its reference curvature radius upon the emission of the cosmic microwave background (CMB) and the reference scale factor of the energy flux. The evolution of the universe from the early plasma is modelled utilising quantised spacetime worldlines, which revealed both positive and negative solutions, implying that matter and antimatter in the plasma could be separated by electromagnetic fields and evolve in opposite directions as distinct sides of the universe, corroborating the CMB dipole anisotropy. The model indicates a nascent hyperbolic expansion followed by a first phase of decelerating expansion during the first 10 Gyr, and then a second phase of accelerating expansion. The model theoretically resolves the tension in Hubble parameter measurements, with a predicted density at the phase transition of 1.16. Further, it predicts a final time-reversal phase of rapid spatial contraction leading to a Big Crunch, signalling a cyclic universe. Simulations of the quantised spacetime continuum flux through its travel along the predicted worldlines demonstrated the fast orbital speed of stars resulting from an external momentum exerted on galaxies via the spatial curvature through an imaginary time dimension. These findings indicate that the early universe plasma could be separated and evolved into distinct sides, collectively and geometrically influencing the evolution of the universe.
The Coulomb Symmetry and a Universal Representation of Rydberg Spectral Line Shapes in Magnetized Plasmas
Andrei Letunov, Lisitsa Valery
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: Stark-Zeeman effect; Rydberg atom; plasma spectroscopy
A new method for line shape calculations of hydrogen-like atoms in magnetized plasmas is presented. This algorithm makes it possible to solve two fundamental problems in broadening theory: the analytical description of the radiation transition array between excited atomic states, and accounting for the effect of thermal ion motion on line shape formation. The solution to the first problem is based on the semiclassical approach to dipole matrix element calculations and the use of the specific symmetry properties of the Coulomb field. The second is treated in terms of the kinetic theory of the frequency fluctuation model (FFM). As a result, one has a universal description of line shapes under the action of the dynamics of the ionic microfield. The final line shape is obtained by the convolution of the ionic line shape with the Voigt electron-Doppler profile. The method is formally applicable for large values of the principal quantum number. However, the results are demonstrated to be efficient even for the well-known first members of the hydrogen Balmer series, Dα and Dβ. A comparison of the obtained results with accurate quantum calculations is presented. The new method may be of interest for investigations of spectral line shapes of hydrogen-like ions in different kinds of hot ionized environments in the presence of a magnetic field, including SOL and divertor tokamak plasmas.
Performance of Diabetes Screening Tests: An Evaluation Study of Iranian Diabetes Screening Program
Fateme Kianpour, Mohammad Fararouei, Jafar Hassanzadeh, Mohammadnabi Mohammadi, Mostafa Dianatinasab
Subject: Medicine & Pharmacology, Allergology Keywords: Diabetes mellitus, screening, HbA1c, fasting plasma glucose
Background: Diabetes is a common non-communicable disease that is responsible for about 9% of all deaths and a 25% reduction in life expectancy, and nearly half of diabetic patients are not aware of their disease. In this regard, diabetes screening to identify unknown diabetic patients is of great importance. Aims: The aim of this study was to evaluate the performance of two commonly used diabetes screening tests that are currently recommended by the Iranian national screening program for diabetes (NSPD). Methods: The validities of the two diabetes screening tests were measured among 1057 participants older than 30 years. The studied screening tests were capillary fasting blood glucose (CBG) and glycated hemoglobin (HbA1c). The gold standard for measuring the validity of the tests was venous fasting plasma glucose (VPG). Results: According to the results, the sensitivities of the CBG and HbA1c tests were 69.01% and 84.5%, and the specificities were 95.7% and 79.3%, respectively. Positive and negative predictive values were 53.84% and 97.72% for CBG, and 22.72% and 98.61% for HbA1c, respectively. The recommended cut-points for CBG and HbA1c were 116.5 mg/dl and 7.15%, respectively. Using these values as the new cut-points, the sensitivity and specificity changed to 80.30% and 89.10% for CBG, and 77.50% and 94.20% for HbA1c, respectively. Conclusions: Compared to several other countries, the performance of the NSPD is relatively high in Iran. ROC analysis suggested new cut-points for significantly better performance of the NSPD.
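The reported CBG rates are mutually consistent with a 2x2 table of roughly tp = 49, fp = 42, fn = 22, tn = 944 out of n = 1057; a minimal sketch of the computation (counts reconstructed from the quoted rates, so treat them as illustrative):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard test-performance measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Reproduces ~69.01% sensitivity, ~95.7% specificity,
# ~53.84% PPV and ~97.72% NPV for the CBG test.
print(screening_metrics(tp=49, fp=42, fn=22, tn=944))
```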
Immunomodulators Acting on COVID-19: Actual Knowledge and Perspectives
Nelson Durán, Wagner J. Fávaro
Subject: Life Sciences, Immunology Keywords: COVID-19; Immunotherapy; Immunomodulator; Antibody; Plasma; Immunoglobulins
Since the outbreak of SARS-CoV-2 infection (COVID-19), healthcare professionals worldwide have been trying to find disease management and control alternatives, encouraging immunotherapies. Immunotherapy is an efficient therapeutic option used against comparable viral infections such as MERS-CoV and SARS-CoV. The aim of the current study is to assess the existing knowledge on SARS-CoV-2 immunotherapy. The information available in published articles, and its quality, highlights the importance of following strict scientific rules for clinical outcomes. These studies have shown enough data to confirm that immunomodulation is the main topic investigated in research on COVID-19 therapy. Therefore, it is possible to say that immunotherapy is certainly an appropriate option against this virus.
ctDNA as a Cancer Biomarker: A Broad Overview
Luciana Santos Pessoa, Manoela Heringer, Valéria Pereira Ferrer
Subject: Life Sciences, Molecular Biology Keywords: biomarker; blood/plasma; cancer; ctDNA; liquid biopsy
Circulating tumor DNA (ctDNA) in fluids has gained attention because ctDNA seems to identify tumor-specific abnormalities, which could be used for diagnosis, follow-up of treatment, and prognosis: the so-called liquid biopsy. Liquid biopsy is a minimally invasive approach and presents the sum of ctDNA from primary and secondary tumor sites. It has been possible not only to quantify the amount of ctDNA but also to identify (epi)genetic changes. Specific mutations in genes have been identified in the plasma of patients with several types of cancer, which highlights ctDNA as a possible cancer biomarker. However, achieving detectable concentrations of ctDNA in body fluids is not an easy task. ctDNA fragments present a short half-life, and there are no cut-off values to discriminate high and low ctDNA concentrations. Here, we discuss the use of ctDNA as a cancer biomarker, the main methodologies, the inherent difficulties, and the clinical predictive value of ctDNA.
A Single Arm Study to Evaluate the Transfer of Drospirenone to Breast Milk after Reaching Steady State after Oral Administration of 4 MG Drospirenone Only Pill in Healthy Lactating Female Volunteers
Dace Melka, Kalev Kask, Enrico Colli, Pedro-Antonio Regidor
Subject: Medicine & Pharmacology, Nutrition Keywords: Drospirenone 4mg; breastfeeding; plasma concentration; milk concentration
Objective: The primary objective of this trial was to assess the transfer of drospirenone to breast milk after daily administration of an oral test preparation containing 4 mg of drospirenone at steady state. The secondary objective of the trial was to assess the safety of the preparation based on clinical and laboratory safety measurements (at the beginning and at the end of the trial) and the reporting of adverse events and/or adverse drug reactions. Patients and Methods: This was an open-label, non-comparative, single-center study. Drospirenone 4 mg per day was the first postpartum contraceptive for the study participants, who were no longer breastfeeding yet were still lactating. It was administered for 7 (seven) days to achieve steady-state concentration. All participants were volunteers who planned to use oral contraceptives as their family planning method in the future. Results: A total of 12 volunteers completed the trial according to the protocol, and the samples of all 12 study completers were analyzed. The average area under the concentration-time curve of drospirenone in plasma 24 h after administration of the last dose (AUC(0-24h)) was 635.33 ng*h/mL, and 120 h after the single repeat dose administration (AUC(0-120h)) it was 1180.57 ng*h/mL. The average Cmax was 48.64 ng/mL. The average area under the concentration-time curve of drospirenone in milk 24 h after administration of the last dose (AUC(0-24h)) was 134.35 ng*h/mL, and 120 h after the single repeat dose administration (AUC(0-120h)) it was 227.17 ng*h/mL. The average Cmax was 10.34 ng/mL. Conclusion: On average, 18.13% of plasma drospirenone passed into breast milk, and the highest concentration of drospirenone in breast milk was 17.55% of that in plasma. The total quantity of drospirenone passing into breast milk averages 4478 ng during a 24 h period, representing 0.11% of the maternal daily dose. Thus, at the recommended doses, no effects on breastfed newborns/infants are anticipated with drospirenone 4 mg.
Effect of Plasma Nitriding on Fatigue Behavior of Ti-6Al-4V Alloy
Michele C. B. de Castro, Antônio A. Couto, Gisele F. C. Almeida, Marcos Massi, Nelson B. de Lima, Argemiro S. da Silva Sobrinho, Mariano Castagnet, Gleicy L. Xavier, Rene R. Oliveira
Subject: Materials Science, Metallurgy Keywords: Ti-6Al-4V alloy; plasma nitriding; fatigue
The Ti-6Al-4V alloy is widely used in the manufacture of components that should have low density, high corrosion resistance, and fatigue strength. The fatigue strength can be improved by surface modification. The aim of this study was to determine the influence of plasma nitriding on the fatigue behavior of Ti-6Al-4V alloy with a lamellar microstructure (Widmanstätten type). Nitriding was performed at 720 °C for 4 hours in an atmosphere of N2, Ar and H2. Microstructural characterization of the samples was carried out by X-ray diffraction analysis, optical microscopy and scanning electron microscopy. The average roughness of the specimens was determined, and fatigue tests were performed in a rotating-bending machine with reversed tension cycles (R = -1). X-ray diffraction analysis revealed the matrix phases α and β, and the phases Ɛ-Ti2N and δ-TiN in the nitrided alloy. A nitrogen diffusion layer was formed between the substrate and the titanium nitrides. Plasma nitriding resulted in an increase in low-cycle fatigue strength, whereas at high cycles both conditions exhibit similar behavior. The fracture surface of the fatigue-tested specimens clearly revealed the lamellar microstructure. The fracture mechanism appears to be due to cracking at the interface of the α and β phases of the lamellar microstructure.
A Simple Method to Measure Renal Function in Swine by the Plasma Clearance of Iohexol
Sergio Luis-Lima, Consolación Garcia-Contreras, Marta Vazquez-Gomez, Susana Astiz, Fabiola Carrara, Flavio Gaspari, Natalia Negrin-Mena, Alejandro Jiménez-Sosa, Hugo Jiménez-Hernández, Antonio Gonzalez-Bulnes, Esteban Porrini
Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: renal function; iohexol plasma clearance; swine model
Online: 16 November 2017 (03:11:30 CET)
There is no simple method to measure glomerular filtration rate (GFR) in swine, an established model to study renal disease. We developed a protocol to measure GFR in conscious swine with the plasma clearance of iohexol. We used two groups, testing and validation, of 8 animals each. Ten milliliters of iohexol (6.47 g) were injected via the marginal auricular vein, and blood samples (3 ml) were collected from the orbital sinus at different time points after injection. GFR was determined considering two models: two-compartment (CL2: all samples) and one-compartment (CL1: the last six samples). In the testing group, CL1 overestimated CL2 by ~30%: CL2 = 245 ± 93 and CL1 = 308 ± 123 ml/min. This error was corrected by applying a quadratic polynomial to CL1, which was considered the simplified method: SM = -47.909 + (1.176 × CL1) - (0.00063968 × CL1²). SM showed narrow limits of agreement with CL2, a concordance correlation of 0.97, and a total deviation index of 14.73%. Similar results were obtained for the validation group. This protocol is reliable and reproducible, can be performed in conscious animals, uses a single dose of the marker, and requires a reduced number of samples, avoiding urine collection. Finally, it portends a significant improvement in animal-welfare conditions and handling necessities in experimental trials.
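The simplified method reduces to a one-line function; a minimal sketch using the coefficients quoted in the abstract:

```python
def simplified_clearance(cl1):
    """Quadratic correction mapping the one-compartment clearance CL1 (ml/min)
    to an estimate of the two-compartment clearance, per the abstract."""
    return -47.909 + 1.176 * cl1 - 0.00063968 * cl1 ** 2

# Testing-group means: CL1 = 308 ml/min overestimated CL2 = 245 ml/min by ~30%;
# the corrected value lands close to CL2.
print(simplified_clearance(308.0))  # ~253.6 ml/min
```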
Bounding Greybody and Deflection Angle of Improved Schwarzschild Black Hole
Wajiha Javed, Muhammad Aqib, Ali Övgün
Subject: Physical Sciences, General & Theoretical Physics Keywords: General Relativity; Improved Schwarzschild Black hole; Plasma medium; Gauss-Bonnet theorem; Non plasma medium; Hawking radiation; Greybody Factor.
We explore the deflection angle in the framework of the improved Schwarzschild black hole utilizing the geometrical method of Gibbons and Werner. To investigate the deflection angle of a photon ray by weak gravitational lensing for this black hole, we derive the optical curvature and apply the Gauss-Bonnet theorem to the optical metric. Moreover, we study the impact of a plasma medium on weak gravitational lensing by this black hole. Further, we present a graphical analysis of the deflection angle in both plasma and non-plasma media. Also, we find the rigorous bound based upon the greybody factor for the improved Schwarzschild black hole. Finally, we contrast our conclusions about the deflection angle with the deflection angles of the Schwarzschild black hole in plasma and non-plasma media.
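For orientation, the Gibbons-Werner method obtains the weak deflection angle by integrating the Gaussian optical curvature over the infinite region bounded by the light ray; in the standard Schwarzschild case (not the improved metric studied here) this reproduces the familiar leading-order result:

$$\hat{\alpha} \;=\; -\iint_{D_\infty} \mathcal{K}\,\mathrm{d}S, \qquad \hat{\alpha}_{\mathrm{Schw}} \;\simeq\; \frac{4GM}{c^{2}\,b},$$

where $\mathcal{K}$ is the Gaussian curvature of the optical metric, $D_\infty$ is the infinite domain exterior to the light ray, and $b$ is the impact parameter.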
Transport of Gaseous Hydrogen Peroxide and Ozone into Bulk Water vs. Electrosprayed Aerosol
Mostafa Elsayed Hassan, Mario Janda, Zdenko Machala
Subject: Physical Sciences, Acoustics Keywords: plasma-liquid interactions; water electrospray; aerosol microdroplet; bulk water; plasma-activated water; Henry's law solubility; ozone; hydrogen peroxide
The production and transport of reactive species through plasma-liquid interactions play a significant role in multiple applications in biomedicine, environment, and agriculture. We experimentally investigated the transport mechanisms of hydrogen peroxide (H2O2) and ozone (O3), as typical plasma species, into water. We measured the solvation of gaseous H2O2 and O3 from airflow into bulk water vs. electrosprayed microdroplets while changing the gas and water flow rates, the applied voltage that determines the gas-liquid interface area, and the treatment time. The solvation rate of H2O2 and O3 increased with the treatment time and the gas-liquid interface area. The total surface area of the electrosprayed microdroplets was larger than that of the bulk, but their lifetime was much shorter. We estimated that only microdroplets with diameters below ~40 µm could achieve saturation with O3 during their lifetime, while saturation with H2O2 was impossible due to its depletion from the air. Besides the short-lived flying microdroplets, the longer-lived bottom microdroplets substantially contributed to H2O2 and O3 solvation in the water electrospray. This study contributes to a better understanding of the transport of gaseous H2O2 and O3 into water as a function of different parameters and will lead to design optimization of plasma-liquid interaction systems.
Unsteady Collisionless Gaseous Plasma Flow in the Upper Layers of the Charged Atmosphere and the Investigation of the Vlasov-Maxwell Equations System
Taha Abdel Wahid, Adel Morad, Tark Lamoudan, Mohamad Almakrami
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Collisionless plasma dynamics; gaseous plasma; Vlasov equation; Maxwell equations; entropy; extended Gibbs formula; the Earth's plasmasphere; the magnetosphere
In the upper charged layers of the atmosphere, the plasma is very rarefied. Collisions between its molecules are almost non-existent, and the driving forces behind them are the Lorentz forces resulting from the electric and magnetic fields. For this reason, we are interested in studying the behavior of collisionless plasmas because of their essential applications, such as the movement of satellites in the charged atmosphere. In this paper, the flow problem of a collisionless gaseous plasma is examined. For that purpose, we solve the unsteady Vlasov-Maxwell system of non-linear partial differential equations analytically. The method of moments and traveling wave parameters are used to acquire an exact solution. Specific macroscopic properties of the collisionless gaseous plasma are calculated, along with the electric and magnetic fields. Further, thermodynamic estimates, such as entropy and entropy production, are presented. Those calculations allow us to measure the consistency with the laws of non-equilibrium thermodynamics. Contributions to the internal energy change are predicted using Gibbs' equation for collisionless plasma. The change in internal energy due to the electromagnetic fields is found to be small compared with the internal energy change due to entropy. That is because these fields are self-induced by the plasma particles due to the sudden movement of the rigid plane plate. The results are computed for a typical argon gaseous plasma model. Three-dimensional diagrams of the calculated variables are drawn to investigate and discuss their behavior. The problem has many commercial applications for the movement of objects in the charged atmosphere.
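For reference, the collisionless (Vlasov) kinetic equation underlying such a system evolves the distribution function $f(\mathbf{x},\mathbf{v},t)$ under the Lorentz force:

$$\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f + \frac{q}{m}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f = 0,$$

coupled self-consistently to Maxwell's equations through the charge and current densities $\rho = q\int f\,\mathrm{d}^{3}v$ and $\mathbf{J} = q\int \mathbf{v}\,f\,\mathrm{d}^{3}v$.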
Variations in Plant Growth Characteristics due to Oxygen Plasma Irradiation on Leaf and Seed
Nobuya Hayashi, Kyotaro Yamamoto
Subject: Engineering, Other Keywords: oxygen plasma; active oxygen species; plasma irradiation on seed and leaf; growth enhancement; gene expression
Gene expression variations in plant leaves are investigated by irradiating seeds and leaves with oxygen or air plasmas. Enhancement of leaf growth is induced by oxygen plasma irradiation of seeds, which is supported by increased gene expression for protein synthesis and oxidation-reduction reactions, and decreased gene expression concerning DNA methylation and histone modification. Suppression of leaf growth is observed with oxygen plasma irradiation of leaves, which would be owing to increased gene expression concerning heat shock proteins and redox reactions, and decreased expression for photosynthesis and glycoproteins. Also, the gene expression variation due to air plasma irradiation is almost the same as that of oxygen plasma. Active oxygen species are the major factors, in both oxygen and air plasmas, for the variation of gene expression in plants.
Laser-Plasma Spatio-Temporal Cyanide Spectroscopy and Applications
Christian G. Parigger, Christopher M. Helstern, Benjamin S. Jordan, David M. Surmick, Robert Splinter
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: plasma diagnostics; molecular spectra; diatomic molecules; plasma spectroscopy; laser spectroscopy; laser-induced breakdown spectroscopy; optical emission spectroscopy; hypersonic expansion
This article reports new measurements of the hypersonic expansion of diatomic molecular cyanide (CN) in laser-induced plasma. Focused, high-peak-power 1064-nm Q-switched radiation of the order of 1 TW/cm2 generates optical breakdown plasma in a cell containing a 1:1 molar gas mixture of N2 and CO2 at a fixed pressure of 1.1 × 105 Pascal and in a 100 ml/min flow of the mixture. Line-of-sight (LOS) analysis of the recorded molecular spectra indicates an outgoing shockwave at expansion speeds well in excess of Mach number 5. Spectra of atomic carbon confirm an increased electron density near the shock wave, and equally, molecular CN spectra reveal a higher excitation temperature near the shockwave. The results are consistent with corresponding high-speed shadowgraphs obtained by visualization with an effective shutter speed of five nanoseconds. In addition, LOS analysis and the application of integral inversion techniques allow inferences about the spatio-temporal distribution of the plasma.
KDEL Receptor Trafficking to the Plasma Membrane is Regulated by ACBD3 and Rab4-GTP
Chuanting Tan, Yulei Du, Lianhui Zhu, Shuaiyang Jing, Jingkai Gao, Xihua Yue, Yi Qian, Intaek Lee
Subject: Biology, Other Keywords: plasma membranes; secretion; ACBD3; KDEL receptor; Rab4; Rab11
KDEL receptor-1 maintains homeostasis in the early secretory pathway by capturing and retrieving ER chaperones to the ER during heavy secretory activity. We have previously shown that a Golgi scaffolding protein (ACBD3) facilitates KDEL receptor localization at the Golgi via regulation of the cargo wave-induced cAMP/PKA-dependent signaling pathway. Unexpectedly, a fraction of the receptor is also known to reside in the plasma membrane as a stress response, although it is largely unknown exactly how the KDEL receptor gets exported from the Golgi and travels to the PM. In this study, we sought to investigate the mechanism by which the KDEL receptor gets exported from the Golgi en route to the PM and identified two crucial factors that greatly influence post-Golgi trafficking of the KDEL receptor. We show here that ACBD3 depletion results in significantly increased trafficking of the KDEL receptor to the PM via Rab4-positive tubular carriers emanating from the Golgi. Expression of a constitutively activated Rab4 mutant (Q72L) increases surface expression of the KDEL receptor up to 2-3-fold, whereas expression of a GDP-locked Rab4 mutant (S27N) inhibits KDEL receptor localization to the PM. Importantly, KDELR trafficking from the Golgi to the PM is independent of PKA- and Src kinase-mediated mechanisms. Taken together, these results reveal that ACBD3 and Rab4-GTP are key players at the Golgi in regulating KDEL receptor trafficking to the cell surface.
Numerical Study of the Electromagnetic Field Distribution in the Pulsed Power Plasma Stimulation Technique - Potential Feasibility as an Underground Detection and Diagnostic Tool
Yue Xiao, Waylon House, Mohamed Y. Soliman, Akif Ibraguimov
Subject: Engineering, Energy & Fuel Technology Keywords: pulsed power plasma discharge; electromagnetic; COMSOL; numerical simulation
This work extends our previous research on the Pulsed Power Plasma Stimulation technique. Electromagnetic fields generated during a pulsed power plasma discharge were simulated using a finite element method in the COMSOL RF module. The field distributions were calculated during and after the pulse. The current paper takes advantage of prior work measuring and modeling the current distribution in the plasma arc. Agreement with laboratory experimental measurements provides support for extension of the model to reservoir scale. The validated model was used to quantify the signal attenuation level in different medium environments.
The 4-Year Experience with Implementation and Routine Use of Pathogen Inactivation in a Brazilian Hospital
Roberta Maria Fachini, Rita Fontão-Wendel, Ruth Achkar, Patrícia Scuracchio, Mayra Brito, Marcelo Amaral, Silvano Wendel
Subject: Medicine & Pharmacology, Other Keywords: pathogen reduction; blood safety; platelet transfusion; INTERCEPT; plasma
(1) Background: We reviewed the logistics of the implementation of pathogen inactivation (PI) using the INTERCEPT Blood System™ for platelets and the experience with routine use and clinical outcomes in the patient population at the Sírio-Libanês Hospital of São Paulo, Brazil. (2) Methods: Platelet concentrate (PC) production, including pathogen-reduced PC (PR-PC), inventory management, discard rates, blood utilization, and clinical outcomes were analyzed over the 40 months before and after PI implementation. Age distribution and wastage rates were compared over the 10 months before and after approval for PR-PC to be stored for up to 7 days. (3) Results: A 100% PR-PC inventory was achieved by increasing double apheresis collections and production of double doses using pools of two single apheresis units. Discard rates decreased from 6% to 3% after PI implementation and further decreased to 1.2% after the 7-day storage extension for PR-PCs. Blood utilization remained stable, with no increase in component utilization. A significant decrease in adverse transfusion events was observed after PI implementation. (4) Conclusion: Our experience demonstrates the feasibility for Brazilian blood centers to achieve a 100% PR-PC inventory. All patients at our hospital received PR-PC and showed no increase in blood component utilization and decreased rates of adverse transfusion reactions.
Recent Experiments Support an Emulsion Origin of Plasma Membrane Domains: Dependence of Domain Size on Physical Parameters
David W. Allender, M. Schick
Subject: Life Sciences, Biophysics Keywords: plasma membrane; rafts; microemulsion; phase-separation; domain size
It is widely, but not universally, believed that the lipids of the plasma membrane are not uniformly distributed, but that "rafts" of sphingolipids and cholesterol float in a "sea" of unsaturated lipids. The physical origin of such heterogeneities is often attributed to a phase coexistence between the two different domains. We argue that this explanation is untenable for several reasons. Further, we note that the results of recent experiments are inconsistent with this picture. However, they are quite consistent with an alternate explanation, namely that the plasma membrane is an emulsion of the two kinds of regions. To show this, we briefly review a simplified version of this theory and its phase diagram. We also explicate the dependence of the predicted domain size on four physical parameters. Among them are the spontaneous curvature of the membrane and its bending modulus and surface tension. Taking values of the latter two from experiment, we obtain domain sizes for several different cell types that vary from 58 to 88 nm.
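As a rough numerical aside (not the authors' calculation), a generic membrane length scale built from the bending modulus κ and the surface tension σ, ξ = sqrt(κ/σ), already lands in the tens-to-hundreds of nanometers for typical values, the same range as the 58-88 nm quoted above. The sketch below uses order-of-magnitude assumptions, not values from the paper.

```python
import numpy as np

# Rough illustration only: a generic membrane length scale sqrt(kappa/sigma)
# from bending modulus kappa and surface tension sigma. This is NOT the
# paper's full microemulsion result; values are order-of-magnitude guesses.
kB_T = 4.1e-21                       # J, thermal energy at room temperature
kappa = 25.0 * kB_T                  # bending modulus, ~25 kT is typical
for sigma in [1e-6, 1e-5, 1e-4]:     # surface tension, N/m
    xi = np.sqrt(kappa / sigma)
    print(f"sigma = {sigma:.0e} N/m -> length ~ {xi * 1e9:.0f} nm")
```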
Thermodynamic, non-extensive, or turbulent quasi equilibrium for space plasma environment
Peter Yoon
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: non-extensive entropic principle; plasma turbulence; quasi equilibrium
The Boltzmann-Gibbs (BG) entropy has been used in a wide variety of problems for more than a century. It is well known that BG entropy is extensive, but for certain systems such as those dictated by long-range interactions, the entropy must be non-extensive. Tsallis entropy possesses non-extensive characteristics and is parametrized by a variable q (q = 1 being the classic BG limit), but unless q is determined from microscopic dynamics, the model remains but a phenomenological tool. To date, very few examples have emerged in which q can be computed from first principles. This paper shows that the space plasma environment, which is governed by long-range collective electromagnetic interaction, represents a perfect example for which the q parameter can be computed from micro-physics. By taking the electron velocity distribution function measured in the heliospheric environment into account, and considering it to be in a quasi equilibrium state with electrostatic turbulence known as the quasi-thermal noise, it is shown that the value q = 9/13 = 0.6923 may be deduced. This prediction is verified against observations made by spacecraft and is shown to be in excellent agreement.
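For readers unfamiliar with the q-entropy, a minimal sketch of the standard Tsallis definition, S_q = (1 - Σ p_i^q)/(q - 1) with k_B = 1, evaluated at the q value quoted in the abstract; the toy distribution is an assumption for illustration only.

```python
import numpy as np

# Minimal sketch (not from the paper): compare Boltzmann-Gibbs entropy with
# the Tsallis q-entropy S_q = (1 - sum(p_i^q)) / (q - 1) on a toy discrete
# distribution, using the q value quoted in the abstract (k_B = 1).
def bg_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def tsallis_entropy(p, q):
    p = p[p > 0]
    return (1.0 - np.sum(p**q)) / (q - 1.0)

p = np.array([0.5, 0.25, 0.15, 0.1])   # toy probability distribution
q = 9.0 / 13.0                         # value deduced in the abstract
print(bg_entropy(p))                   # the q -> 1 limit recovers this value
print(tsallis_entropy(p, q))
```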
Investigation of the Surface Condition of Gold Electrodes in a Plasma Reactor
Sebastian Gnapowski, Elżbieta Kalinowska-Ozgowicz, Mariusz Sniadkowski, Aleksandra Pietraszek
Subject: Materials Science, Surfaces, Coatings & Films Keywords: gold; surface of electrodes; ozone; plasma; raids layer
During long-term operation of the plasma reactor, decreases in plasma concentration were noticed despite all parameters being held constant. One factor is the decrease of the nitrogen content on the surface of the electrode; to eliminate it, the supply voltage was increased to 11 kV. The next decisive factor in the decrease of plasma concentration is the oxidation of the electrode surface, so two electrodes were used: one of solid gold and the other of copper covered with electroplated gold with a thickness of 10 μm. During the experiment, a large decrease in plasma concentration was observed when the gold-coated electrode was used. High-energy electrons knocked gold atoms out of the electrode, as a result of which the gold evaporated and the raids layers formed. After a month of electrode operation, metallographic examinations were carried out, the results of which are described in this publication.
Physicochemical Properties of Yttria-Stabilized-Zirconia In-Flight Particles During Supersonic Atmospheric Plasma Spray
Guozheng Ma, Shuying Chen, Jiajie Kang, Haidou Wang, Ming Liu, Qin Zhao, Guolu Li, Pengfei He
Subject: Materials Science, Surfaces, Coatings & Films Keywords: plasma spray; in-flight particles; molten status; YSZ
In order to achieve better knowledge of thermal barrier coatings (TBCs) produced by the supersonic atmospheric plasma spraying (SAPS) process, an experimental study was carried out to elaborate the physicochemical properties of in-flight particles during the SAPS process. One type of commercially available agglomerated and sintered yttria-stabilized-zirconia (YSZ) powder was injected into the SAPS plasma jet and collected by a shock chilling method. The in-flight physicochemical properties of the YSZ particles, including melting state, morphology, microstructure, particle size distribution, element composition changes and phase transformation during the SAPS process, have been systematically analyzed. The melting state, morphology and microstructure of the collected particles were determined by scanning electron microscopy (SEM). The particle size distribution was measured by a laser particle size analyzer (LPSA). Element compositions were quantitatively analysed by an electron probe X-ray microanalyzer (EPMA). Additionally, the X-ray diffraction (XRD) method was used to analyse the phase transformation. The results showed that the original YSZ powders injected into the SAPS plasma jet were quickly heated and melted from the outer layer, accompanied by breakup and collision-coalescence. The outer layer of the collected particles, containing roughly hexagonal grains, exhibited a surface texture with high sphericity, and the inside was dense with a hollow structure. The median particle size decreased from 45.65 μm to 42.04 μm. Besides, phase transformation took place, and the zirconium (Zr) and yttrium (Y) content decreased with the evaporation of ZrO2 and Y2O3.
Growth and Differentiation of Human Wharton's Jelly Mesenchymal Stem Cells on Oxygen Plasma-Modified 2D and 3D Polycaprolactone Scaffolds
Kewalin Inthanon, Weerah Wongkham, Wanida Junwikul, Siriwadee Chomdej
Subject: Materials Science, Biomaterials Keywords: polycaprolactone; oxygen plasma; Wharton's Jelly mesenchymal stem cells
Cell-based therapies and tissue engineering applications require biocompatible substrates that support and regulate the growth, survival, and differentiation of specific cell types. Extensive research efforts in regenerative medicine are devoted to the development of tunable biomaterials which support various cell types including stem cells. In this research, the non-cytotoxic biopolymer polycaprolactone (PCL) was fabricated into 2D and 3D scaffolds, with or without low-pressure oxygen plasma treatment to enhance hydrophilicity. Cellular responses and biocompatibility were evaluated using a human Wharton's jelly mesenchymal stem cell line (BCP-K1). The 2D PCL scaffolds enhanced initial cell attachment compared to the 3D scaffolds, as indicated by higher expression of focal adhesion kinase (FAK), whilst the 3D scaffolds promoted cell proliferation and migration, as evidenced by higher cyclin A expression and filopodial protrusion, respectively. The 3D scaffolds also potentially protected cells from entering apoptosis/necrosis programs and induced cell differentiation, as evaluated by gene expression. Both 2D and 3D PCL appeared to have stronger effects on cell behavior than a control substrate (polystyrene). In summary, the different configurations and surface properties of PCL scaffolds provide various options for modulating stem cell behaviors, including attachment, proliferation, survival, and differentiation, when combined with specific growth factors and culture conditions.
Characteristics of Welding and Arc Pressure in the Plasma–TIG Hybrid Welding Process
Bo Wang, Xunming Zhu, Hongchang Zhang, Hongtao Zhang, Jicai Feng
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: plasma-TIG; coupled arc; arc profile; pressure distribution
In this article, a novel hybrid welding process called plasma-TIG coupled arc welding is proposed to improve the efficiency and quality of welding by taking full advantage of the plasma and TIG welding processes. The two arcs of plasma and TIG were pulled into each other to form one coupled arc under the effect of the Lorentz force and plasma flow force during welding experiments. The behavior of the coupled arc was studied by means of its arc profile, arc pressure and arc force conditions. Coupled arc pressure distribution measurements were performed. The effects of welding conditions on coupled arc pressure were evaluated, and the maximum coupled arc pressure was improved compared with a single plasma arc and a single TIG arc. It was found that the maximum arc pressure was mainly determined by the plasma arc current and plasma gas flow. According to the results, the proposed coupled arc welding process has the advantages of both the plasma arc and TIG methods, and it has broad application prospects.
Multimodal Metabolomics Combining UPLC-qToF-MS and GC-MS Data in Plasma and Brain Tissue
Amera A. Ebshiana, Stuart G. Snowden, Cristina Legido-Quigley
Subject: Chemistry, Analytical Chemistry Keywords: metabolomics; LC-MS; GC-MS; plasma; brain tissue
Metabolomic analysis of biological fluids and tissues has become an increasingly routine tool in the biological toolbox. However, challenges remain to be overcome, including developing strategies to maximise coverage of the metabolome without requiring large sample volumes. Here we describe a multimodal strategy that combines data from both LC-MS and GC-MS from a single vial containing a sample of plasma (20 µl) or brain tissue (3 mg). Using a split-phase extraction, the non-aqueous phase was analyzed by reversed phase (RP) LC-MS, whilst the aqueous phase was analyzed using hydrophilic interaction liquid chromatography (HILIC) LC-MS, with both phases also analysed using GC-MS after derivatization of the extract. Analytical performance was assessed in 7 rat cerebellum samples and a pilot study of 40 plasma samples (20 vs. 20: AD vs. healthy controls). The method, which uses four hours of instrument time, measured 20,707 metabolite features in brain samples and 17,266 in plasma samples; of those, 44.1% of features displayed CVs below 15% and 75.2% below 30%. The method has the potential to resolve subtle biological differences and to correlate metabolite composition directly with clinical outcomes including MMSE, age and ADCS-ADL. This method can acquire on the order of 20K metabolic features when low volumes are available.
Hypersonic Imaging and Emission Spectroscopy of Hydrogen and Cyanide Following Laser-Induced Optical Breakdown
Christian G Parigger, Christopher M Helstern, Ghaneshwar Gautam
Subject: Physical Sciences, Acoustics Keywords: laser–plasma interactions; plasma dynamics and flow; hypersonic flows; optical emission spectroscopy; hydrogen; cyanide; Abel inversion; astrophysics; white dwarf stars
This work communicates the connection of measured shadowgraphs from optically induced air breakdown with emission spectroscopy in selected gas mixtures. Laser-induced optical breakdown is generated using 850 mJ and 170 mJ, 6-ns pulses at a wavelength of 1064 nm; the shadowgraphs are recorded using time-delayed 5-ns pulses at a wavelength of 532 nm and a digital camera; and emission spectra are recorded for typically a dozen discrete time-delays from optical breakdown by employing an intensified charge-coupled device. The symmetry of the breakdown event can be viewed as close to spherical for time-delays of several hundred nanoseconds. Spectroscopic analysis explores expansion dynamics well above hypersonic speeds, using primarily the diatomic molecule cyanide and atomic hydrogen emission spectroscopy. Analysis of the air breakdown and selected gas breakdown events permits the use of Abel inversion for inference of the expanding species distribution. Typically, species are prevalent at higher density near the hypersonically expanding shockwave, measured by tracing cyanide and a specific carbon atomic line. Overall, recorded air breakdown shadowgraphs are indicative of laser-plasma expansion in selected gas mixtures, and optical spectroscopy delivers analytical insight into plasma expansion phenomena.
Broadening of the Neutral Helium 492 nm Line in a Corona Discharge: Code Comparisons and Data Fitting
Roshin Raj Sheeba, Mohammed Koubiti, Nelly Bonifaci, Franck Gilleron, Jean-Christophe Pain, Evgeny Stambulchik
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: Stark broadening; van der Waals broadening; line shapes; helium plasma; corona discharge; plasma diagnostics; code comparison; neutral broadening; pressure broadening
Passive plasma spectroscopy is a well-established non-intrusive diagnostic technique. Depending on the emitter and its environment, which determine the dominant interactions and effects governing emission line shapes, passive spectroscopy allows the determination of electron densities, emitter and perturber temperatures, as well as other quantities like abundances. However, using spectroscopy requires appropriate line shape codes retaining all the physical effects governing the emission line profiles. This requires line shape code developers to continuously correct or improve their codes to increase their accuracy when applied for diagnostics. This is exactly the aim of code-code and code-data comparisons. In this context, the He I 492 nm line emitted in a helium corona discharge at room temperature represents an ideal case since its profile results from several broadening mechanisms: Stark, Doppler, resonance and van der Waals. The importance of each broadening mechanism depends on the plasma parameters. Here the profiles of the He I 492 nm line in a helium plasma computed by various codes are compared for a selected set of plasma parameters. In addition, preliminary results related to plasma parameter determination using experimental spectra from a helium corona discharge at low pressures of 1-2 bar are presented.
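As a minimal illustration of how several broadening mechanisms combine (this is a generic textbook construction, not one of the codes compared in the paper), Gaussian (e.g. Doppler) and Lorentzian (e.g. Stark, resonance, van der Waals) contributions are often folded into a Voigt profile, which can be evaluated from the Faddeeva function; the widths below are illustrative assumptions.

```python
import numpy as np
from scipy.special import wofz

# Generic Voigt profile via the Faddeeva function w(z): the convolution of a
# Gaussian (std dev sigma) with a Lorentzian (HWHM gamma), area-normalized.
def voigt(x, sigma, gamma):
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Illustrative detuning grid and widths (nm) around a line center.
x = np.linspace(-0.5, 0.5, 1001)
profile = voigt(x, sigma=0.02, gamma=0.05)
print(profile.max(), np.trapz(profile, x))   # peak value; area close to 1
```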
H-β Line in a Corona Helium Plasma: A Multi-Code Line Shape Comparison
Roshin Raj Sheeba, Mohammed Koubiti, Nelly Bonifaci, Franck Gilleron, Caroline Mossé, Jean-Christophe Pain, Joël Rosato, Evgeny Stambulchik
Many spectroscopic diagnostics are routinely used to infer plasma parameters from line emission spectra, but their accuracy depends on the numerical model or code used for the fitting process. However, the validation of a line shape code requires several steps: comparison of the line shape code with other similar codes for some academic (simple) cases and then more complex ones, comparison of the fitting parameters obtained from the best fit of the experimental spectra with those obtained with other diagnostic techniques, and/or comparison of the fitting parameters obtained by different codes fitting the same experimental data. Here we compare the profiles of the hydrogen Balmer β line in a helium plasma computed by six codes for a selected set of plasma parameters, and we report on the plasma parameters inferred by each of them from fits to a number of experimental spectra measured in a helium corona discharge at pressures in the range 1-5 bar.
Weak Gravitational Lensing in Dark Matter and Plasma Mediums for Wormhole-like Static Aether Solution
Wajiha Javed, Sibgha Riaz, Reggie C. Pantig, Ali Övgün
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: general relativity; gravitational lensing; wormhole-like static aether solution; Gauss-Bonnet theorem; plasma and non-plasma mediums; dark matter; modified gravity
In this paper, we study the deflection angle for a wormhole-like static aether solution by using the Gibbons-Werner technique in non-plasma, plasma and dark matter mediums. For this purpose, we use the optical spacetime geometry to calculate the Gaussian optical curvature and then implement the Gauss-Bonnet theorem in the weak field limit. Moreover, we compute the deflection angle by using the Keeton-Petters technique. Furthermore, we analyze the graphical behaviour of the bending angle ψ with respect to the impact parameter b, the mass m (as an integration constant) and the parameter q in non-plasma and plasma mediums. We find that the deflection angle increases exponentially with charge. Also, we observe that ψ increases for small values of b, while for large values of b the angle decreases. We also analyze the shadow cast by the wormhole relative to an observer at various locations. Compared with the Schwarzschild case, a shadow cast is possible for the wormhole for r < 2m; at r > 2m, the Schwarzschild shadow is larger. As r → ∞, the behavior of the shadow, as well as the weak deflection angle, approaches that of the Schwarzschild black hole. Overall, the effect of plasma tends to decrease the value of the observables due to the wormhole geometry.
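For orientation, the benchmark against which such results are usually compared is the textbook weak-field Schwarzschild deflection angle, α ≈ 4GM/(c²b). The sketch below evaluates it for a light ray grazing the Sun using standard constants; it is a reference point, not a computation from the paper.

```python
import numpy as np

# Textbook weak-field Schwarzschild deflection, alpha ~ 4GM/(c^2 b),
# evaluated for a ray grazing the Sun; standard constants, illustrative only.
GM_sun = 1.327e20          # m^3/s^2, solar gravitational parameter
c = 2.998e8                # m/s, speed of light
b = 6.96e8                 # m, solar radius used as the impact parameter

alpha = 4.0 * GM_sun / (c**2 * b)             # radians
print(alpha, np.degrees(alpha) * 3600.0)      # ~8.5e-6 rad, ~1.75 arcsec
```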
Plasma-Assisted Biomass Gasification in a Drop Tube Reactor at Atmospheric Pressure
Yin Pang, Leo Bahr, Peter Fendt, Lars Zigan, Stefan Will, Thomas Hammer, Manfred Baldauf, Robert Fleck, Dominik Müller, Jürgen Karl
Subject: Engineering, Energy & Fuel Technology Keywords: non-thermal plasma; wood gasification; charcoal gasification; water steam gasification; plasma stability; optical temperature measurement; carbon conversion; reaction kinetics; Arrhenius diagram
Compared to conventional allothermal gasification of solid fuels (e.g. biomass, charcoal, lignite, etc.), plasma-assisted gasification offers an efficient method of supplying energy to the gasification process, increasing the flexibility of operating conditions and the reaction kinetics. In particular, non-thermal plasmas (NTP) are promising; in these, thermal equilibrium is not reached and electrons have substantially higher mean energy than gas molecules. Thus it is generally assumed that in NTP the supplied energy is utilized more efficiently for generating the free radicals that initiate gasification reactions than in thermal plasma processes. In order to investigate this hypothesis, we compared purely thermal to non-thermal plasma-assisted gasification of biomass in steam in a drop tube reactor at atmospheric pressure. The NTP was provided by means of gliding arcs between two electrodes aligned in the inlet steam flow. Electric power of about 1 kW was supplied using a high voltage generator operating at frequencies between 70 and 150 kHz and voltage amplitudes up to 10 kV. A laser-assisted optical method (Raman spectroscopy) was applied for measuring the gas temperature both in the conventionally heated steam and downstream of the visible plasma filaments of the gliding arcs. Reaction yields and rates were evaluated using these measured gas temperatures. The first experimental results show that the non-thermal plasma not only promotes the carbon conversion of the fuel particles, but also accelerates the reaction kinetics. The carbon conversion is increased by nearly 10% using wood powder as the fuel; with charcoal powder, more than 3% more is converted into syngas.
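Since the keywords mention an Arrhenius diagram, a minimal sketch of the generic evaluation may be useful: rate constants k = A·exp(-Ea/(RT)) plotted as ln k against 1/T give a straight line of slope -Ea/R. The pre-exponential factor, activation energy and temperatures below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Generic Arrhenius-rate sketch (illustrative numbers, not the paper's data):
# k = A * exp(-Ea / (R * T)); ln(k) vs. 1/T is linear with slope -Ea/R,
# which is the usual "Arrhenius diagram" evaluation.
R = 8.314                                 # J/(mol K)
A = 1.0e5                                 # 1/s, assumed pre-exponential factor
Ea = 150e3                                # J/mol, assumed activation energy
T = np.array([1100.0, 1200.0, 1300.0])    # K, gasification-like temperatures

k = A * np.exp(-Ea / (R * T))
slope = np.polyfit(1.0 / T, np.log(k), 1)[0]
print(k, -slope * R)                      # recovered Ea in J/mol (~150e3)
```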
Spark Plasma Sintering of Nanostructured TiCrC Carbides Prepared via Mechanical Alloying
Mohsen Mhadhbi, İlker Emin Dağ, Barış Avar, Mohamed Khitouni, Frédéric Schoenstein, Noureddine Jouini
Subject: Materials Science, Nanotechnology Keywords: mechanical alloying; titanium carbide; spark plasma sintering; cermets; corrosion
In order to produce nanostructured Ti0.9Cr0.1C powders, an elemental powder mixture of titanium, chromium, and graphite was milled in this work using a high-energy ball mill for various milling times. Microstructural characteristics such as crystallite size, microstrain, lattice parameter, and dislocation density were determined using X-ray diffraction (XRD). Mechanical alloying successfully produced nanocrystalline (Ti,Cr)C with an average crystallite size of 11 nm. The crystallite size was also directly verified using transmission electron microscopy (TEM). Scanning electron microscopy (SEM) was used to investigate the morphology of the samples. The novelty of this work lies in advancing the scientific understanding of the effect of milling time on the particle size distribution and crystalline structure, and also in understanding the effect of spark plasma sintering on the different properties of the bulks. Densified cermet samples were produced from the nanocrystalline powders, milled for 5, 10 and 20 hours, by the SPS process at 1800 degrees for 5 min under a pressure of 80 MPa. Phase changes of the produced cermets were examined by XRD and SEM/EDX analyses. Significant amounts of Cr and Fe were detected, especially in the 20 h milled cermet. The bulk forms of the powders milled for 5 and 20 h had relative densities of 98.43% and 98.51%, respectively. However, the 5 h milled cermet had a hardness of 93.3 HRA because of the more homogeneous distribution of the (Ti,Cr)C phase, the low iron content and the high relative density. According to the corrosion rate of 0.0011 mm/year and the charge transfer resistance of 371.68 kΩ·cm² obtained from the potentiodynamic polarization and EIS tests, the 20 h cermet was the specimen with the highest corrosion resistance.
Direct Bonding Method for Completely Cured Polyimide by Surface Activation and Wetting
Ying Meng, Runhua Gao, Xinhua Wang, Sen Huang, Ke Wei, Dahai Wang, Fengwen Mu, Xinyu Liu
Subject: Engineering, Electrical & Electronic Engineering Keywords: polyimide bonding; plasma activation; hydrophilic; hybrid bonding; 3D integration
Polymer adhesives have emerged as a promising dielectric passivation layer in hybrid bonding for 3D integration, but they raise misalignment problems during curing. In this work, the synergistic effect of oxygen plasma surface activation and wetting is utilized to achieve bonding between completely cured polyimides. The optimized process achieves void-less bonding with a maximum shear strength of 35.3 MPa at a low temperature of 250 °C in merely 2 min, significantly shortening the bonding period and decreasing thermal stress. It is found that the plasma activation generated hydrophilic groups on the polyimide surface, and the wetting process further introduced more -OH groups and water molecules onto the activated polyimide surface. The synergistic process of plasma activation and wetting facilitates bridging of the polyimide interfaces to achieve bonding, providing an alternative path for adhesive bonding in 3D integration.
Assessment of Biostimulation Methods based on Chemical Communication in Female Doe Reproduction
Paula R. Villamayor, Julián Gullón, Uxía Yáñez, María Sánchez, Pablo Sánchez-Quinteiro, Paulino Martínez, Luis Quintela
Subject: Life Sciences, Other Keywords: rabbit; biostimulation; reproduction; urine; seminal plasma; chemocommunication; olfaction; pheromones
Biostimulation is an animal management practice that helps improve reproductive parameters by modulating animal sensory systems. Chemical signals, mostly known as pheromones, have great potential in this regard. This study was conducted to determine the influence of short-term exposure of female rabbits to different conditions, mainly pheromone-mediated, on the reproductive parameters of inseminated does. Groups of 60 females each were exposed to 1) female urine, 2) male urine, 3) seminal plasma and 4) female-female interaction just before artificial insemination, and compared to isolated female controls (females kept separate). The following reproductive parameters were analyzed for each group: receptivity (vulvar color), fertility (calving rate), prolificacy and number of kits born alive and dead per litter. Our results showed that the biostimulation methods employed in this experiment did not significantly improve any of the analyzed parameters. However, doe exposure to urine, especially to male urine, slightly increased fertility levels when compared to the rest of the experimental conditions. Female-female interaction before artificial insemination, which is a common practice in rabbit farms, did not have any effect, which suggests it could be removed to avoid unnecessary animal management and time costs. On the other hand, fertility was lower for animals with pale vulvar color, whereas no differences were noticed among the other three colours which measure receptivity (pink, red, purple), suggesting that these three colours could be grouped together. Additionally, equine chorionic gonadotropin injection could be replaced with various biostimulation methods, thereby reducing or replacing current hormonal treatments and contributing to animal welfare and a natural image of animal production.
The Corrosion Resistance and Mechanism of AT13/Fe-based Amorphous Composite Coatings
Zhenhua Chu, Shikun Teng, Yuyun Zhou, Xingwei Zheng, Jingxiang Xu, Fang Wang, Baosen Zhang
Subject: Engineering, Other Keywords: Fe-based amorphous coating; AT13; Plasma spraying; Corrosion resistance
In the present study, the corrosion resistance of an amorphous coating and composite coatings in 3.5 wt.% NaCl, 0.5 M H2SO4 and 10 wt.% NaOH solutions was studied. The composite coatings exhibit superior corrosion resistance. When the content of AT13 (Al2O3-13 wt.% TiO2) was 15 wt.%, the composite coating had the lowest corrosion current density (1.75 × 10⁻⁶ A cm⁻², compared with 5.14 × 10⁻⁵ A cm⁻² for the Fe-based metallic glassy coating) and the highest corrosion potential (-411 mV, compared with -580 mV for the Fe-based metallic glassy coating). The breakdown potential of the passivation film in 3.5 wt.% NaCl solution was much higher than that of 316L. Long-term immersion corrosion tests carried out on the different coatings showed that the corrosion protection effect of the coating was enhanced as the amount of added AT13 increased.
Sphingolipidome Quantification by Liquid Chromatography- High Resolution Mass Spectrometry: Whole Blood vs. Plasma
Dezhen Wang, Peining Xu, Clementina Mesaros
Subject: Chemistry, Analytical Chemistry Keywords: sphingolipidome; ceramides; high resolution mass spectrometry; whole blood; plasma
Plasma and serum are the most widely used blood-derived biofluids for metabolomics and lipidomics assays, but the isolation of these products from blood may introduce additional bias, as indicated by the fact that many analytes that are present at high concentrations in blood cells cannot be measured and evaluated in those samples. Of particular concern, variable hemolysis during the pre-processing of blood products could compromise accurate and reproducible quantification. Compared with plasma or serum, whole blood may be a better alternative due to its simplicity of processing. In this study, we provide a comprehensive method for quantification of the whole blood sphingolipidome, and the concentrations were compared with those from plasma. Combining a single-phase extraction method with liquid chromatography high resolution mass spectrometry (R = 120,000), assisted by alkaline hydrolysis, we were able to identify and simultaneously quantify more than 150 sphingolipids. Furthermore, most sphingolipids remained stable after a freeze/thaw cycle. Whole blood contained higher concentrations of most sphingolipids than the corresponding plasma. Moreover, individual variations in the levels of sphingolipids were lower for whole blood than plasma. These findings demonstrate that whole blood could be a better alternative to plasma and could potentially guide the evaluation of the sphingolipidome for biomarker discovery.
Hydrogen Production from Methane Cracking in Dielectric Barrier Discharge Catalytic Plasma Reactor using a Nanocatalyst
Asif Hussain Khoja, Abul Kalam Azad, Faisal Saleem, Bilal Alam Khan, Salman Raza Naqvi, Muhammad Taqi Mehran, Nor Aishah Saidina Amin
Subject: Materials Science, Biomaterials Keywords: Hydrogen production; Methane cracking; DBD plasma reactor; MgAl2O4; CNTs
This study experimentally investigated a novel approach for producing hydrogen from methane cracking in a dielectric barrier discharge (DBD) catalytic plasma reactor using a nanocatalyst. Plasma-catalytic methane (CH4) cracking was undertaken in the DBD reactor using Ni/MgAl2O4. The Ni/MgAl2O4 was synthesised through co-precipitation followed by a customised hydrothermal method. The physicochemical properties of the catalyst were examined using X-ray diffraction (XRD), scanning electron microscopy - energy dispersive X-ray spectrometry (SEM-EDX) and thermogravimetric analysis (TGA). The Ni/MgAl2O4 shows a porous spinel MgAl2O4 structure and thermal stability. In catalytic-plasma methane cracking, the Ni/MgAl2O4 shows a maximum CH4 conversion of 80% with an H2 selectivity of 75%. Furthermore, the stability of the catalyst was encouraging: over 16 hours, CH4 conversion remained above 75% and H2 selectivity above 70%. This is attributed to the synergistic effect of the catalyst and plasma. Plasma-catalytic CH4 cracking is a promising technology for the simultaneous production of H2 and carbon nanotubes (CNTs) for energy storage applications.
Influence of Plasma Heating on the Metallurgical Effects of a Continuous Casting Tundish
Maolin Ye, Mengjing Zhao, Sai Chen, Shufeng Yang, Jingshe Li
Subject: Materials Science, Metallurgy Keywords: Plasma heating; tundish; metallurgical effect; inclusions; tundish covering flux
Steel products have experienced long-standing problems such as unstable product quality and low product homogeneity. In the continuous casting process, realizing constant temperature pouring is an effective way to improve product homogeneity. Plasma heating can compensate for the temperature drop during casting with a tundish and maintain a stable degree of superheating of the molten steel in the tundish. While compensating for the temperature drop, plasma heating also has a certain impact on the cleanliness of the molten steel and on the tundish covering flux. This paper uses SEM-EDS, XRD and FactSage to analyze the cleanliness of the molten steel and the characteristics of the tundish covering flux before and after heating. The results show that the number density of inclusions in the tundish is significantly lower after heating, improving the flotation removal rate of small-sized inclusions. After heating, the surface morphology of the tundish covering flux sample appears transparent and glassy, with uniform morphology. XRD results show that the tundish covering flux after plasma heating exhibits no crystal precipitation and is amorphous, with a certain regularity before and after heating, and there are no obvious changes in the composition of the tundish covering flux in the liquid phase area.
Negative Effective Mass in Plasmonic Systems
Edward Bormashenko, Irina Legchenkova
Subject: Physical Sciences, Condensed Matter Physics Keywords: metamaterials; negative effective mass; plasma oscillations; low frequency plasmons
We report negative effective mass metamaterials based on electro-mechanical coupling exploiting the plasma oscillations of a free electron gas. The negative mass appears as a result of vibration of a metallic particle at a frequency ω close to the frequency of the plasma oscillations of the electron gas of mass m_2 relative to the ionic lattice of mass m_1. The plasma oscillations are represented by an elastic spring of constant k_2 = ω_p^2 m_2, where ω_p is the plasma frequency. Thus, the metallic particle vibrating at the external frequency ω is described by the effective mass m_eff = m_1 + m_2 ω_p^2 / (ω_p^2 - ω^2), which is negative when the frequency ω approaches ω_p from above. The idea is exemplified with two conducting metals, namely Au and Li.
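Since the abstract gives the effective-mass expression explicitly, a few-line numerical check (unit-free illustrative values, not numbers from the paper) shows the sign change just above the plasma frequency:

```python
import numpy as np

# Evaluate the abstract's expression m_eff = m1 + m2*wp^2/(wp^2 - w^2):
# m_eff diverges at w = wp and is negative just above the plasma frequency.
def m_eff(w, m1, m2, wp):
    return m1 + m2 * wp**2 / (wp**2 - w**2)

m1, m2, wp = 1.0, 0.1, 1.0                 # arbitrary illustrative units
for w in [0.5, 0.9, 1.01, 1.1, 2.0]:
    print(w, m_eff(w, m1, m2, wp))         # negative just above w = wp
```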
Microstructural and Corrosion Properties of PEO Coated Zinc-Aluminized (ZA) Steel
Luca Pezzato, Alessio Giorgio Settimi, Pietrogiovanni Cerchier, Claudio Gennari, Manuele Dabala, Katya Brunelli
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: plasma electrolytic oxidation; PEO; coatings; steel; corrosion; zinc-aluminized
Plasma Electrolytic Oxidation (PEO) is a surface treatment, similar to anodizing, that produces thick oxide films on the surface of metals. In the present work, PEO coatings were obtained on zinc-aluminized (ZA) carbon steel using as electrolyte a solution containing sodium silicate and potassium hydroxide, and working with high current densities and short treatment times in DC mode. The surface morphology was the typical one of PEO layers, with the presence of many pores and micro-cracks. In cross section, the thickness of the coating was strongly influenced by the process parameters, with different dissolution grades of the ZA layer depending on the current density and treatment time. The PEO layer was mainly composed of aluminum and zinc oxides and silicates. The corrosion resistance was remarkably increased in the samples with the PEO coating.
Characterization of Porous Phosphate Coatings Created on CP Titanium Grade 2 Enriched with Calcium, Magnesium, Zinc and Copper by Plasma Electrolytic Oxidation
Krzysztof Rokosz, Tadeusz Hryniewicz, Steinar Raaen, Sofia Gaiaschi, Patrick Chapon, Winfried Malorny, Dalibor Matýsek, Łukasz Dudek, Kornel Pietrzak
Subject: Materials Science, Surfaces, Coatings & Films Keywords: Plasma Electrolytic Oxidation (PEO); Micro Arc Oxidation (MAO); Titanium
In this paper, the effect of increasing voltage (from 500 VDC up to 650 VDC) on the structure and chemical composition of porous coatings made on titanium by Plasma Electrolytic Oxidation is presented. Phosphate-based coatings enriched with calcium, magnesium, zinc and copper, obtained in an electrolyte based on 1 L of 85% concentrated H3PO4 with additions of Ca(NO3)2·4H2O, Mg(NO3)2·6H2O, Zn(NO3)2·6H2O, and Cu(NO3)2·3H2O, are described. The morphology and the chemical and phase composition are evaluated using SEM, EDS, XRD, XPS and GDOES. Based on all the analyses, it was found that the PEO coatings are porous and enriched with calcium, magnesium, zinc and copper. They consist mainly of an amorphous phase, which is more visible for higher voltages, and this correlates with the increase of the total PEO coating thickness (the higher the voltage, the thicker the PEO coating). However, at 650 VDC a signal from crystalline Ti2P2O7 was also recorded alongside the amorphous phase and titanium substrate; this was not observed at lower voltages. It was also found that all the obtained coatings may be divided into three sub-layers: porous, semi-porous, and a transition layer.
The Fracture of Plasma-Treated Polyurethane Surface under Fatigue Loading
Ilya A. Morozov, Alexander S. Mamaev, Mikhail V. Bannikov, Anton Yu. Beliaev, Irina V. Osorgina
Subject: Materials Science, Surfaces, Coatings & Films Keywords: ion plasma treatment; fatigue loading; fracture; polyurethane; surface morphology
Plasma treatment of soft polymers is a promising technique to improve biomedical properties. The response of such materials to deformation is not yet clear. Soft elastic polyurethane treated with plasma immersion ion implantation was subjected to fatigue uniaxial loading (50,000 cycles; frequency 1 Hz; strain amplitudes 10, 20, and 40%). The influence of the strain amplitude and the plasma treatment regime on the damage character is discussed. Surface defects are studied in the unloaded and stretched states of the material. As a result of fatigue loading, transverse cracks (with closed overlapping edges as well as with open edges propagating deep into the polymer) and longitudinal folds, which break and bend inward, appear on the surface. The hard edges of cracks cut the soft polymer, which is squeezed from the bulk to the surface.
Diabetogenic Effects of Ochratoxin A in Female Rats
Firdevs Mor, Omur Sengul, Senay Topsakal, Mehmet Akif Kilic, Ozlem Ozmen
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: Ochratoxin A; insulin; glucagon; glucose; rat plasma; pathology; immunohistochemistry
In this study, the diabetogenic effects of long-term Ochratoxin A (OTA) administration in rats were investigated and its role in the etiology of diabetes mellitus (DM) was examined, utilizing 42 female Wistar rats for these purposes. The rats were divided into 3 study and 3 control groups according to the duration of OTA administration. Rats in the study groups received 45 μg OTA daily in their feed for 6, 9 or 24 weeks. Three control groups without any treatment were also used over the same periods. Blood and pancreatic tissue samples were collected during necropsy at the end of 6, 9 and 24 weeks. Plasma values of insulin, glucagon and glucose in the study and control groups were determined. Pancreatic lesions were evaluated by histopathological examination; insulin and glucagon expression in these lesions was then determined by immunohistochemical methods. Statistically significant decreases in insulin levels, in contrast to increases in glucagon and glucose levels in blood, were observed. Slight degeneration in Langerhans islet cells was observed at histopathological examination in all OTA-treated groups. Immunohistochemistry of pancreatic tissue revealed decreased insulin and increased glucagon expression. This study demonstrated that OTA may cause pancreatic damage in the Langerhans islets and predispose rats to DM.
Plasma Polymerization SnOxCy Organic-like Films and Grafted PNIPAAm Composite Hydrogel with Nanogold Particles for Promotion of Thermal Resistive Properties
Chin-Yen Chou, Ko-Shao Chen, Win-Li Lin, Ying-Cian Ye, Shu-Chuan Liao
Subject: Materials Science, Surfaces, Coatings & Films Keywords: nano-Au particles; NIPAAm hydrogel; plasma treatment; UV grafting
In this study, a new type of temperature sensor device was developed. The circular electrode of the thermally sensitive sensor was modified with TMT and O2 plasma to enhance the conductivity by forming a thin SnOxCy layer on the electrode surface. The nano-Au particles were subjected to O2 plasma pretreatment to form peroxide groups on their surface. The thermally sensitive sensor was made by mixing the treated nano-Au particles with N-isopropylacrylamide (NIPAAm) to form a solution, followed by UV-induced graft polymerization of the NIPAAm-containing solution onto the electrode substrate. The composite hydrogels on the electrode introduce thermo-sensitive polymeric surface films for temperature sensing. Using an ambient-environment resistance test, the LCST (lower critical solution temperature) of nano-Au (MUA) mixed with NIPAAm hydrogel was found to be 32 °C. At ambient temperatures higher than the LCST, the electrode resistance decreases linearly.
On the Role of Seminal Fluid Protein and Nucleic Acid Content in the Paternal Epigenetic Inheritance
Bahar Patlar
Subject: Biology, Animal Sciences & Zoology Keywords: seminal fluid; seminal plasma; epigenetics; transgenerational plasticity; sperm; paternal effects
The evidence supports the occurrence of environmentally induced paternal epigenetic inheritance shaping the offspring phenotype in the absence of direct or indirect paternal care, and the empirical results clearly indicate that sperm epigenetics is one of the major actors mediating these paternal effects. However, sperm often make up only a small fraction of the male ejaculate in animals. Males also have a complex mixture of proteins, peptides, various types of small RNAs, and cell-free DNA fragments in their seminal fluid. These molecules are in close contact with reproductive cells, tissues, organs, and other molecules of both males and females during reproduction. Moreover, their production and use are very sensitive to environmental conditions, which makes them potential modulators of environmentally and developmentally induced paternal effects on the next generation(s). Although there is some intriguing evidence of seminal fluid-mediated paternal epigenetic effects, the underlying molecular mechanisms remain poorly defined. In this review, I discuss the current evidence regarding the association between seminal fluid and environmentally induced paternal effects, the possible trajectories, and the mechanisms by which seminal fluid may mediate paternal epigenetic inheritance.
Closed-Form Solution of Adiabatic Particle Trajectories in Axis-Symmetric Magnetic Fields
F. Sattin, D.F. Escande
Subject: Physical Sciences, Fluids & Plasmas Keywords: Hamiltonian dynamics; Adiabatic invariants; Magnetized plasma; Particle trajectory; Canonical transformations
The dynamics of a low-energy charged particle in an axis-symmetric magnetic field is known to be a regular superposition of periodic, although possibly incommensurate, motions. The projections of the particle orbit along the two non-ignorable coordinates (x,y) may be expressed in terms of each other: y = y(x); yet, to our knowledge, such a functional relation has never been directly produced in the literature, but only by way of a detour: first, the equations of motion are solved, yielding x = x(t), y = y(t), and then one of the two relations is inverted, x(t) → t(x). In this paper we present a closed-form functional relation which allows the coordinates of the particle's orbit to be expressed without passing through the time dependence of the motion.
Improvements in Modern Weapons Systems: The Use of Dielectric Materials for the Development of Advanced Models of Electric Weapons Powered by Brushless Homopolar Generator
Daniela Georgiana Golea, Lucian Stefan Cozma
Subject: Engineering, Automotive Engineering Keywords: electric weapons; rail gun; electromagnetic ammunition; hybrid technologies; plasma detonator.
Although the idea of electric weapons emerged at the beginning of the 20th century, a great number of technological problems prevented such technology from being developed. During the Cold War, the US Strategic Defense Initiative programme paid special attention to this category of weapons, but the experiments demonstrated that the prototypes were too big and heavy, involved very high energy consumption, and had low reliability. Under these circumstances, the authors have noticed the trend of designing new types of electric weapons starting from the hybridization of several technologies: compressed-flux weapons, the plasma electromagnetic cannon, and electromagnetic weapons such as the coil-gun and rail-gun.
14Cr-ODS Steel Produced by SPS from Mechanical Milled Powders with High Efficiency Oxide Precursors Incorporation
Eric Macia
Keywords: ODS steel; mechanical alloying; spark plasma sintering; zirconium; co-precipitation
Currently, one of the biggest issues when developing an ODS alloy is the competition established between the different oxide precursors during the precipitation of oxides, whose nature depends on their chemical composition. In the presence of various precursors, the one with the highest affinity for oxygen usually dominates, leading to the absence of the other oxides. In this work, a new process to equilibrate the local concentration of species and to decrease the competition among them is explained. A single compound, containing the diverse oxide precursors as one complex oxide, is introduced into a prealloyed 14Cr steel powder via mechanical alloying. This generates environments enriched in Y, Ti and Zr which, after consolidation, refine the oxide precipitation, improving the thermal stability of the alloy. Spark plasma sintering (SPS) was used as the consolidation technique to guarantee shorter sintering times and to maintain the nanostructure obtained. Mechanical properties were tested by tensile tests and Vickers microhardness.
Plasma Channel Extension by Femtosecond Laser Filamentation with a Circular Aperture Quartz Plate
Yao Li, Ze-liang Zhang, Hai-ying Song, Xun Liu, Wei Li, Shi-bing Liu
Subject: Physical Sciences, Optics Keywords: femtosecond laser; plasma channel; filament elongation; circular aperture quartz plate
We propose a new approach to extending the laser filament plasma channel. By adding a circular aperture quartz plate before the focusing lens, the length of the plasma channel is doubled. The effects of different diameters and thicknesses of the circular aperture quartz plate, and of different pulse energies, on the length of the plasma channel were investigated. The experimental results show that the thickness of the quartz plate and the depth of the hole have little effect on the plasma channel of the filament, while the diameter of the hole in the center of the quartz plate has a significant effect on the length of the optical filament. The moving-focus model is used to explain the extension of the optical filament.
Temporally and Spatially Resolved Emission Spectroscopy of Cyanide, Hydrogen and Carbon in Laser-Induced Plasma
Christian G. Parigger, Christopher M. Helstern, Ghaneshwar Gautam
Subject: Physical Sciences, Fluids & Plasmas Keywords: laser-induced plasma; atomic spectroscopy; molecular spectroscopy; cyanide; hydrogen; carbon
This work examines atomic and molecular signatures in laser-induced plasma in standard ambient temperature and pressure environments, including background contributions to the spectra that depend on the laser pulse-width. Investigations include solids, gases, and nano-particles. Abel inversions of measured line-of-sight data reveal insight into the radial plasma distribution. For nominal 6 nanosecond laser pulses and for pulse-energies in the range of 100 to 800 mJ, expansion dynamics and turbulence due to shock phenomena are elucidated to address local equilibrium details that are frequently assumed in spatially averaged emission spectroscopy. Chemical equilibrium computations reveal temperature dependence of selected plasma species. Specific interests include atomic hydrogen (H) and cyanide (CN). Atomic hydrogen spectra indicate axisymmetric shell structures and isentropic expansion of the plasma kernel over and above the usual shockwave. The recombination radiation of CN emanates within the first 100 nanoseconds for laser-induced breakdown in a 1:1 CO2:N2 gas mixture when using nanosecond laser pulses to create the micro-plasma. The micro-plasma is generated using 1064 nm, 150 mJ, 6 ns Q-switched Nd:YAG laser radiation. Measurements of the optical emission spectra utilize a 0.64 m Czerny-Turner type spectrometer and an intensified charge-coupled device.
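Since this and the preceding cyanide studies lean on Abel inversion of line-of-sight data, a minimal numerical sketch of the inverse Abel transform may help; this is a generic textbook discretization (tested on a Gaussian with a known forward transform), not the authors' procedure.

```python
import numpy as np

# Generic numerical inverse Abel transform (not the authors' code):
#   f(r) = -(1/pi) * integral_r^inf  F'(y) / sqrt(y^2 - r^2)  dy,
# recovering a radial emissivity f(r) from line-of-sight data F(y).
def inverse_abel(F, y):
    dF = np.gradient(F, y)
    f = np.zeros_like(F)
    for i in range(len(y) - 1):
        r, yy = y[i], y[i + 1:]
        integral = np.trapz(dF[i + 1:] / np.sqrt(yy**2 - r**2), yy)
        if r > 0:  # treat the integrable singularity on [r, y_{i+1}] analytically
            integral += dF[i] * np.arccosh(y[i + 1] / r)
        f[i] = -integral / np.pi
    return f

# Test on f(r) = exp(-r^2), whose forward Abel transform is sqrt(pi)*exp(-y^2).
y = np.linspace(0.0, 5.0, 400)
F = np.sqrt(np.pi) * np.exp(-y**2)
f = inverse_abel(F, y)
print(np.abs(f - np.exp(-y**2)).max())   # small discretization error
```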
Plasma Metabolites Associated with Brain MRI Measures in Older Adults in the Atherosclerosis Risk in Communities – Neurocognitive Study (ARIC-NCS)
Danni Li, Jeffrey R. Misialek, Clifford R. Jack, Michelle M. Mielke, David Knopman, Rebecca Gottesman, Thomas Mosley, Alvaro Alonso
Keywords: metabolomics; sphingomyelins; plasma; cerebral small vessel disease; neurodegeneration; brain atrophy
Background: Plasma metabolites are associated with cognitive and physical function in the elderly. Because cerebral small vessel disease (SVD) and neurodegeneration are common causes of cognitive and physical function decline, the primary objective of this study was to investigate the associations of six plasma metabolites (two plasma phosphatidylcholines [PCs]: PC aa C36:5 and PC aa 36:6 and four sphingomyelins [SMs]: SM C26:0, SM [OH] C22:1, SM [OH] C22:2, SM [OH] C24:1) with magnetic resonance imaging (MRI) features of cerebral SVD and neurodegeneration in older adults. Methods: This study included 238 older adults in the Atherosclerosis Risk in Communities study at the fifth exam. Multiple linear regression was used to assess the association of each metabolite (log-transformed) in separate models with MRI measures except lacunar infarcts, for which binary logistic regression was used. Results: Higher concentrations of plasma PC aa C36:5 had adverse associations with MRI features of cerebral SVD (odds ratio of 1.69 [95% confidence interval: 1.01, 2.83] with lacunar infarct, and beta of 0.16 log [cm3] [0.02, 0.30] with log [White Matter Hyperintensities (WMH) volume]) while higher concentrations of 3 plasma SM (OH)s were associated with higher total brain volume (beta of 12.0 cm3 [5.5, 18.6], 11.8 cm3 [5.0, 18.6], and 7.3 cm3 [1.2, 13.5] for SM [OH] C22:1, SM [OH] C22:2, and SM [OH] C24:1, respectively). Conclusions: This study identified associations between certain plasma metabolites and brain MRI measures of SVD and neurodegeneration in older adults, particularly higher SM (OH) concentrations with higher total brain volume.
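To make the analysis style concrete, here is a minimal sketch on synthetic data (NOT ARIC data; all names and numbers are hypothetical): a linear regression of an MRI measure on a log-transformed metabolite concentration, and a logistic regression for lacunar infarct status, mirroring the two model types described above.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the described analysis on synthetic data (not the study's data):
# OLS for a continuous MRI measure, logistic regression for a binary one.
rng = np.random.default_rng(0)
n = 238
log_metabolite = np.log(rng.lognormal(0.0, 0.5, n))   # e.g. log(PC aa C36:5)
brain_volume = rng.normal(1100.0, 50.0, n)            # cm^3, synthetic
lacunar_infarct = rng.integers(0, 2, n)               # 0/1 outcome, synthetic

X = sm.add_constant(log_metabolite)
linear = sm.OLS(brain_volume, X).fit()
print(linear.params[1], linear.conf_int()[1])         # beta and 95% CI

logit = sm.Logit(lacunar_infarct, X).fit(disp=0)
print(np.exp(logit.params[1]))                        # odds ratio per log-unit
```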
The Circulating Transcriptome as a Source of Biomarkers for Melanoma
Carla Solé, Daniela Tramonti, Maike Schamm, Ibai Goicoechea, María Armesto, Luiza I. Hernandez, Lorea Manterola, Marta Fernandez-Mercado, Karmele Mujika, Anna Tuneu, Ane Jaka, Maitena Tellaetxe, Marc R. Friedländer, Xavier Estivill, Paolo Piazza, Pablo L. Ortiz-Romero, Mark R. Middleton, Charles H. lawrie
Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: melanoma; plasma; liquid biopsy; miRNA; mRNA; biomarker; YRNA; RNA species
The circulating transcriptome is a valuable source of cancer biomarkers which, with the exception of miRNAs, remains relatively unexplored. To elucidate which RNAs are present in plasma from melanoma patients and which could be used to distinguish cancer patients from healthy individuals, we used next generation sequencing (NGS), and validation was carried out by qPCR and/or ddPCR. We identified 442 different microRNAs in samples, eleven of which were differentially expressed (P < 0.05). Levels of miR-134-5p and miR-320a-3p were significantly down-regulated (P < 0.001) in melanoma samples (n = 96) compared to healthy controls (n = 28). Differentially expressed protein-coding mRNA 5′-fragments were enriched for the angiopoietin, PAK and EIF2 pathways. Levels of ATM1, AMFR, SOS1 and CD109 gene fragments were up-regulated (P < 0.001) in melanoma samples (n = 144) compared to healthy controls (n = 41) (AUC = 0.825). Over 40% of mapped reads were YRNAs, a class of non-coding RNAs that to date has been little explored. Expression levels of RNY3P1, RNY4P1 and RNY4P25 were significantly higher in patients with stage 0 disease than in either healthy controls or those with more advanced disease (P < 0.001). In conclusion, we have identified a number of novel RNA biomarkers which, most importantly, we validated in multi-centre retrospective and prospective cohorts, suggesting potential diagnostic use of these RNA species.
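For readers unfamiliar with the AUC figure quoted above, a minimal sketch of how such a discrimination summary is computed (synthetic scores with the study's group sizes, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative sketch (synthetic scores, not the study's data): AUC for a
# score separating melanoma samples (1) from healthy controls (0).
rng = np.random.default_rng(1)
y_true = np.r_[np.ones(144), np.zeros(41)]            # cases vs. controls
score = np.r_[rng.normal(1.0, 1.0, 144), rng.normal(0.0, 1.0, 41)]
print(roc_auc_score(y_true, score))                   # ~0.76 for this toy gap
```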
Self-Absorption Analysis of Silver Resonance Lines in Nano-Material Laser Produced Plasma
Ashraf M. El Sherbini, Mohamed A. Hagras, Mohamed R. Rizk, El-Sayed A. El Badawy, Christian G. Parigger
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: laser-induced plasma; atomic spectroscopy; self-absorption; nanoparticles; silver; hydrogen
The resonance spectra of neutral silver indicate self-absorption for the studied Ag I lines at the wavelengths of 327.9 nm and 338.2 nm. The center dip is associated with self-reversal due to self-absorption in the plasma. The Q-switched radiation of 355 nm, 532 nm, or 1064 nm from a Nd:YAG laser device generates the plasma at the surface of silver nano-material targets, with experiments conducted in standard ambient temperature and pressure laboratory air. Procedures for recovery of the spectral line shapes confirm that, over and above the effects of self-reversal, line shape distortions are important in the analysis. The work discusses parameters describing self-absorption when using fluence levels of 2 to 33 J/cm² to generate the plasma. Furthermore, subsidiary calibration efforts that utilize the hydrogen alpha line of the Balmer series show that the Ag I lines at 827.35 nm and 768.7 nm are optically thin.
Laser-Plasma Spectroscopy of Hydroxyl with Applications
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: plasma diagnostics; molecular spectra; diatomic molecules; plasma spectroscopy; laser spectroscopy; laser-induced breakdown spectroscopy; optical emission spectroscopy; molecular excitation temperature; combustion analysis; astrophysical spectroscopy
This article discusses laser-induced laboratory-air plasma measurements and the analysis of hydroxyl (OH) ultraviolet spectra. New experiments with Q-switched laser pulses illustrate the occurrence of molecular recombination spectra for time delays of the order of several dozen microseconds after plasma initiation. The computation of the emission spectra utilizes line strength data that are communicated as a supplementary file. Applications of the detailed OH computations include laser-induced plasma and combustion analyses, to name but two.
Feeding Spray-Dried Porcine Plasma to Pigs Reduces African Swine Fever Virus Load in Infected Pigs and Delays Virus Transmission. Study 1
Elena Blázquez, Joan Pujols, Fernando Rodriguez, Joaquim Segalés, Rosa Rosell, Joy Campbell, Javier Polo
Subject: Life Sciences, Virology Keywords: African swine fever; ASFV; spray-dried porcine plasma; challenge; nutritional intervention
The objective of this study was to evaluate the potential benefits of feeding spray-dried porcine plasma (SDPP) to pigs infected with African swine fever virus (ASFV). Two groups of twelve weaned pigs each were fed CONVENTIONAL or 8% SDPP enriched diets. Two pigs (trojans) per group were injected intramuscularly with the pandemic ASFV (Georgia 2007/01) and commingled with the rest of the pigs (1:5 trojan:naïve ratio) to simulate a natural route of transmission. The trojans developed ASF and died within the first week after inoculation, but the contact pigs did not develop ASF, viremia or seroconversion. Therefore, three more trojans per group were introduced to optimize ASFV transmission (1:2 trojan:naïve ratio). Blood, nasal and rectal swabs were harvested weekly, and at the end of the study, ASFV-target organs were collected. After the second exposure, the rectal temperature of conventionally fed contact pigs increased above 40.5 °C, while fever was delayed in the SDPP contact pigs. Additionally, PCR Ct values in blood, secretions and tissue samples were significantly lower (P < 0.05) for CONVENTIONAL compared to SDPP contact pigs. Under these study conditions, contact-exposed pigs fed SDPP had delayed ASFV transmission and reduced virus load, likely due to enhanced specific T-cell priming after the first ASFV exposure.
The Post Shock Nonequilibrium Relaxation in a Hypersonic Plasma Flow Involving Reflection off a Thermal Discontinuity
Anna Markhotok
Subject: Physical Sciences, Fluids & Plasmas Keywords: Hypersonic plasma dynamics; Optical Discharge; Shock wave structure; Non-equilibrium state
The evolution of the post-shock nonequilibrium relaxation in a hypersonic plasma flow was investigated during a shock's reflection off a thermal discontinuity. Within a transitional period, the relaxation zone parameters past both the reflected and the transmitted waves evolve differently compared to those in the incident wave. In a numerical example for non-dissociating N2 gas heated to 5000 K/10,000 K across the interface and M = 3.5, the relaxation time for the transmitted wave is up to 50% shorter and the relaxation depth for both waves is significantly reduced, resulting in a weakened wave structure. Extensions to larger heating strengths and shock Mach numbers are discussed. The findings can be useful in areas of research involving strong shocks interacting with optical discharges or other heated media on scales where the shock structure becomes important.
Nonlinear Problems of Equilibrium Charge State Transport in Hot Plasmas
Vladimir A. Shurygin
Subject: Physical Sciences, Fluids & Plasmas Keywords: magnetically confined plasma; impurity; charge state; transport; coronal equilibrium; diffusion coefficient
The general coupling between particle transport and ionization-recombination processes in hot plasma is considered through the key concept of equilibrium charge state (CS) transport. A theoretical interpretation of particle and CS transport is gained in terms of a two-dimensional (2D) Markovian stochastic (random) process, a discrete 2D Fokker-Planck-Kolmogorov equation (in charge and space variables) and a generalized 2D coronal equilibrium between atomic processes and particle transport. The basic tool for the analysis of CS equilibrium and transport is the equilibrium cell (EC) (two states in charge and two in space), which provides (i) a unit phase volume, (ii) the characteristic scale of local equilibrium, and (iii) a comprehensive solution for the simplest nonlinear relations between transport and atomic processes. The approach opens up new perspectives on transport studies: (i) direct modelling of impurity equilibrium and transport using the atomic data base, (ii) recovery of the complete recombination rate profile based on knowledge of density profiles and ionization rate profiles, (iii) local transport analysis, based on the reduction of the equilibrium set to a single EC (in particular, central or edge), and (iv) analysis of the reduced transport coefficients (diffusion and convection) based on density profile measurements.
Automated Supported Liquid Extraction for the analysis of a panel of 12 endogenous steroids in human plasma by LC-MS/MS
Scott G Denham, George Just, Catriona J Kyle, Jennifer Richardson, Patricia Lee, Joanna P Simpson, Robert M Gifford, Ruth Andrew, Rebecca M Reynolds, Natalie ZM Homer
Subject: Medicine & Pharmacology, Allergology Keywords: steroids; steroid panel; clinical mass spectrometry; plasma; sample automation; endocrine; Synacthen
Steroid analysis is important in the clinical assessment of endocrine function in health and disease. Although tandem mass spectrometry methods coupled with chromatographic separation are considered the gold standard analytical technique in this setting, enabling profiling of multiple steroids in a single sample, sample processing can be labour-intensive. Here we present a simple, efficient automated 96-well Supported Liquid Extraction method with dichloromethane/isopropanol as organic solvent, carried out on an Extrahera automated sample handler (Biotage), which completes sample preparation of 80 plasma samples (200 µL) in 90 minutes. Compounds were separated on a Kinetex C18 column (150 x 3 mm; 2.6 µm) using a mobile phase of methanol and water (0.1% formic acid). The run time was 16 minutes on a Nexera uHPLC system (Shimadzu) with a QTrap 6500+ linear ion trap mass spectrometer (AB Sciex). Precisions ranged from 8.1 to 18.1% RSD, bias from −10.1 to 5.8%, and extraction recoveries from 73.5 to 111.9%. LOQs ranged from 0.025 to 0.500 ng/mL.
Fabrication of Ultralow Stress Optical Coatings by Plasma Ion-assisted Deposition
Chun Guo, Mingdong Kong
Subject: Materials Science, Surfaces, Coatings & Films Keywords: stress; plasma ion-assisted deposition; TiO2 film; SiO2 film; annealing treatment.
Optical and mechanical properties of multilayer coatings depend on the selected layer materials and the deposition technology; therefore, knowledge of the performance of thin films is essential. In the present work, titanium dioxide (TiO2) and silicon dioxide (SiO2) thin films have been prepared by plasma ion-assisted deposition (PIAD). The optical, structural, and mechanical properties of the thin films have been investigated using a spectrometer/ellipsometer, X-ray diffraction (XRD), atomic force microscopy (AFM), and laser interferometry. The results show that TiO2 film fabricated by PIAD exhibits a high refractive index, wide optical band gap, amorphous structure, smooth surface, and tensile stress. In the case of SiO2 film, high bias voltage leads to a dense structure and compressive stress. As an application, a three-wavelength high-reflectance coating at 632.8, 808, and 1550 nm is optimized and deposited. The dependence of the total stress in the multilayer on the substrate temperature is studied as well. In conclusion, it is demonstrated that PIAD is an effective method for the preparation of ultralow stress TiO2/SiO2 multilayer films. The achieved stress is as low as 1.4 MPa. The result could provide guidance for the stress optimization of most optical components without prefiguring, backside coating, and post-deposition treatments.
The Assessment of Carbon Dioxide Dissociation Using Single Mode Microwave Plasma Generator
George Mogildea, Marian Mogildea, Cristina Popa, Gabriel Chiritoi
Subject: Physical Sciences, Other Keywords: assessment of gases; carbon dioxide dissociation; plasma generator; OES spectroscopy method
The paper focuses on the interaction between tungsten (W) wire and a microwave field in a carbon dioxide (CO2) atmosphere. Our experimental set-up uses a microwave plasma generator to produce plasma from metallic wires. The microwave plasma generator contains a cylindrical cavity (TM_011 propagation mode), a commercial 2.45 GHz magnetron source operating at 800 W, and a power supply. At the focal point of the cylindrical cavity, the energy density of the electromagnetic radiation is high; metallic wires placed in this region are vaporized and ionized, causing CO2 dissociation. The electron temperature of the metallic plasma was estimated using the ratio of atomic emission lines acquired with a high-resolution optical multichannel spectrometer. The amount of CO2 dissociation was estimated by measuring the carbon monoxide (CO) resulting from the dissociation process. We obtained a high electron temperature of the plasma and a strong dissociation of the CO2.
Microfluidic Device for On-Chip Immunophenotyping and Cytogenetic Analysis of Rare Biological Cells
Kumuditha M. Weerakoon-Ratnayake, Swarnagowri Vaidyanathan, Nicholas Larkey, Kavya Dathathreya, Mengjia Hu, Jilsha Jose, Shalee Mog, Keith August, Andrew K. Godwin, Mateusz L. Hupert, Malgorzata A. Witek, Steven A. Soper
Subject: Engineering, Biomedical & Chemical Engineering Keywords: microfluidics; immunophenotyping; FISH; liquid biopsy; circulating leukemia cells; circulating plasma cells
The roles of circulating plasma cells (CPCs) and circulating leukemic cells (CLCs) as biomarkers for several blood cancers, such as multiple myeloma and leukemia, respectively, have recently been reported. These markers can be attractive due to the minimally invasive nature of their acquisition through a blood draw (i.e., liquid biopsy), negating the need for painful bone marrow biopsies. CPCs or CLCs can be used for cellular/molecular analyses, such as immunophenotyping or fluorescence in situ hybridization (FISH). FISH, which is typically carried out on slides involving complex workflows, becomes problematic when operating on CLCs or CPCs due to their relatively modest numbers. Here, we present a microfluidic device for characterizing CPCs and CLCs enriched from peripheral blood using immunofluorescence or FISH. The microfluidic device possessed an array of cross-channels (2-4 µm in depth and width) that interconnected a series of input and output fluidic channels. Placing a cover plate over the device formed microtraps, the size of which was defined by the width and depth of the cross-channels. This microfluidic chip allowed for automating immunofluorescence and FISH, requiring small volumes of reagents, such as antibodies and probes, compared to slide-based immunophenotyping and FISH. In addition, the device could secure FISH results in <4 h compared to 2-3 days for conventional FISH.
Plasma Electrolysis Spraying Al2O3 Nano-coating onto Quartz Fiber for Enhanced Thermal Conductivity and Stability
Aiming Bu, Yongfu Zhang, Yan Xiang, Yunjie Yang, Weiwei Chen, Huanwu Cheng, Lu Wang
Subject: Materials Science, Surfaces, Coatings & Films Keywords: quartz fiber; Al2O3 coating; plasma electrolysis spraying; tensile strength; thermal conductivity
The manuscript reports the synthesis of an Al2O3 nano-coating onto quartz fiber by plasma electrolysis spraying for enhanced thermal conductivity and stability. Nano- and micro-sized clusters were observed on parts of the coating, while most of the coating was relatively smooth. It is suggested that the ceramic coating forms by a nucleation-growth law, that is, the coating clusters arise from locally fast growth, implying an inhomogeneous energy distribution in the electrolysis plasma. The deposition of the Al2O3 coating increased the annealing tensile strength from 19.2 MPa to 58.1 MPa. The thermal conductivity of the coated quartz fiber was measured to be 1.17 W m-1 K-1, an increase of ~45% compared to the bare fiber. The formation mechanism of the Al2O3 coating is preliminarily discussed. We believe that thermally conductive quartz fiber with high thermal stability produced by plasma electrolysis spraying will find a wide range of applications in industry.
Quantitative Analysis of Cerium-Gallium Alloys Using a Hand-Held Laser Induced Breakdown Spectroscopy Device
Ashwin Rao, Matthew Cook, Howard Hall, Michael Shattan
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: plasma spectroscopy; nuclear forensics; analytical chemistry; nuclear chemistry; lanthanide spectroscopy; LIBS
A hand-held laser-induced breakdown spectroscopy device was used to acquire spectral emission data from laser-induced plasmas created on the surface of cerium-gallium alloy samples with Ga concentrations ranging from 0 to 3 weight percent. Ionic and neutral emission lines of the two constituent elements were then extracted and used to generate calibration curves relating the emission line intensity ratios to the gallium concentration of the alloy. The Ga I 287.4 nm emission line was determined to be superior for the purposes of Ga detection and concentration determination. A limit of detection below 0.25% was achieved using a multivariate regression model of the Ga I 287.4 nm line ratio versus two separate Ce II emission lines. This LOD is considered a conservative estimate of the technique's capability given the type of calibration samples available and the low power (5 mJ per 1 ns pulse) and resolving power (λ/∆λ = 4000) of this handheld device. Nonetheless, the utility of the technique is demonstrated via a detailed mapping analysis of the surface Ga distribution of a Ce-Ga sample, which reveals significant heterogeneity resulting from the sample production process.
Metallographic Comparison of Copper and Brass Plasma Reactor Electrodes After a Month of Operation
Sebastian Gnapowski, Elżbieta Kalinowska-Ozgowicz, Mariusz Sniadkowski, Aleksandra Pietraszek
Subject: Materials Science, Surfaces, Coatings & Films Keywords: brass; Cu-Zn-Pb; raids layers; metals; electrode; ozone; plasma reactor
Researchers from around the world are looking for better and cheaper ways to produce ozone. One method of increasing the efficiency of ozone production is the use of a rotating electrode, presented in this paper. The experiments carried out show that the most important parameters are the materials used for the electrodes and the condition of their surfaces. Metallographic investigations of the electrodes after a month of continuous operation were made, showing how the deposit ("raids") layers are formed. As a result of working in a highly oxidizing environment, the electrode oxidizes through chemical corrosion. The layer of corrosion products created during operation of the plasma reactor insulates the surface of the electrode, which reduces the intensity of the electric field, decreasing the amount of plasma generated and thereby lowering the ozone concentration during the process. The dynamics of the plasma generation process and the type of electrode material working under changing process conditions are the decisive factors influencing the concentration of ozone produced. The influence of the electrode material depends mainly on its resistance to corrosion under dynamically changing conditions, e.g. electrode rotation, oxygen flow through the rotating electric field and the long monthly working time of the plasma reactor.
A Comparative Study of Hydroxyapatite Coating Produced with Plasma Electrolytic Oxidation and Hydrothermal Treatment on Titanium Alloys: Ti6Al4V and Ti6Al7Nb for Dental Implants
Elinor Nahum, Svetlana Lugovskoy
Subject: Engineering, Biomedical & Chemical Engineering Keywords: Ti6Al4V; Ti6Al7Nb; Plasma electrolytic oxidation (PEO); hydrothermal treatment (HTT); Hydroxyapatite (HA)
Ti and Ti alloys are materials usually used in contact with hard tissue for applications such as artificial joints and dental implants. Ti6Al4V is a very common alloy used for dental implants, owing to its good mechanical properties and corrosion resistance. Nevertheless, because of uncertainties regarding the toxicity of vanadium and its influence on the human body, other Ti alloys containing no vanadium and retaining suitable properties are used. In this work, Ti6Al4V and Ti6Al7Nb were oxidized in a water solution of calcium acetate (Ca(CH3COO)2) and calcium glycerophosphate (Ca(PO4CH(CH2OH)2)) by Plasma Electrolytic Oxidation (PEO) for 20 minutes and then were hydrothermally treated (HTT) in water (pH=7) and in potassium hydroxide (KOH) solution (pH=11) for 2 hours at 200°C in a pressurized reactor. The surface morphologies, elemental composition and phase components were characterized by Scanning Electron Microscopy (SEM), Energy Dispersive Spectroscopy (EDS) and X-Ray Diffraction (XRD), respectively. The surface roughness was measured by Atomic Force Microscope (AFM), and thickness measurements were made by SEM and a thickness gauge. Corrosion tests were also performed to evaluate the corrosion behavior of the two alloys. The aim of this study is to compare two viable Ti alloys, Ti6Al4V and Ti6Al7Nb, and to obtain on their surfaces a hydroxyapatite (HA) coating that improves osseointegration, as HA simulates human bone.
Heat and Hydrothermal Treatment on the Microstructure Evolution of Plasma Sprayed Hydroxyapatite Coatings Reinforced with Graphene Nanoplatelets
Jia Ren, Dong Zhao, Fei Qi, Yongguang Wang, Yao Chen
Subject: Materials Science, Surfaces, Coatings & Films Keywords: graphene; hydroxyapatite; crystallinity; surface roughness; heat and hydrothermal treatment; plasma spray
Recent advances and demands in clinical applications drive a large amount of research into hydroxyapatite (HA) composite coatings fabricated by plasma spray. However, the lower degree of HA crystallinity caused by high-temperature exposure in plasma spray usually leads to rapid weakening and disintegration of HA coatings and often promotes inflammatory responses in the surrounding tissue. In this research, graphene nanosheet (GNS) reinforced HA coatings were fabricated using plasma spray, followed by heat and hydrothermal treatment (hereafter referred to as thermal treatment). The addition of GNSs resulted in competing phenomena influencing HA crystallinity, viz. an increased portion of partially melted/unmelted zones and a higher cooling rate during splat formation, leading to a slight increase in HA crystallinity (~46.0-51.3%) in the as-sprayed coating. XRD and FTIR results showed that thermal treatment was capable of inducing significant transformation of amorphous HA to the crystalline form and removing other foreign non-HA compounds through regaining the OH- ion, and therefore the HA coatings displayed ~45.5-47.1% improvements in HA crystallinity regardless of whether GNS nanofillers were added. Microstructure observations revealed that thermal treatment enabled microcrack propagation due to stresses caused by crystallisation and phase transformations, and the residual partially melted/unmelted zone of the thermally treated GNS/HA coating was significantly decreased in size. More importantly, the added GNSs contributed greatly to the significant increase in surface nanoroughness of the thermally treated HA coatings, since structural defects along the GNSs induced by thermal treatment may act as nucleation sites that accelerate HA nanoparticle precipitation, which would be beneficial for improved adhesion of osteoblast cells on the coating surface.
Time-Resolved Emission Spectroscopy of Atomic and Molecular Species in Laser-Induced Plasma
Christian Parigger, Ghaneshwar Gautam, Christopher M Helstern
Subject: Physical Sciences, Fluids & Plasmas Keywords: laser-induced plasma; atomic and molecular spectroscopy; Abel inversion; hydrogen; cyanogen
This work examines atomic and molecular signatures in laser-induced plasma in standard ambient temperature and pressure environments, including background contributions to the spectra that depend on the laser pulse-width. Investigations include solids, gases, and nano-particles. Abel inversions of measured line-of-sight data reveal insight into the radial plasma distribution. For nominal 6 nanosecond laser pulses and pulse energies in the range of 100 to 800 millijoules, expansion dynamics and turbulence due to shock phenomena are elucidated to address the local equilibrium details that are frequently assumed in spatially averaged emission spectroscopy. Chemical equilibrium computations reveal the temperature dependence of selected plasma species. Specific interests include atomic hydrogen (H) and cyanide (CN). The atomic H spectra, collected following optical breakdown in ultra-high-pure hydrogen and 9:1 mixtures of ultra-pure hydrogen and nitrogen gases, indicate spherical shell structures and isentropic expansion of the plasma kernel over and above the usual shockwave. The recombination radiation of CN emanates within the first 100 nanoseconds for laser-induced breakdown in a 1:1 CO2:N2 gas mixture when using nanosecond laser pulses to create the micro-plasma. The micro-plasma is generated using 1064 nm, 150 mJ, 6 ns Q-switched Nd:YAG laser radiation. Measurements of the optical emission spectra utilize a 0.64 m Czerny-Turner type spectrometer and an intensified charge-coupled device.
Dose-Dependent Effects of Oral Tyrosine Administration on Plasma Tyrosine Levels and Cognition in Aging
Ondine van de Rest, Mirjam Bloemendaal, Rianne de Heus, Esther Aarts
Subject: Medicine & Pharmacology, Nutrition Keywords: tyrosine; dose-response; aging; working memory; plasma amino acids; catecholamines; dopamine
The effects of tyrosine on plasma response and cognition in aging are unknown. We assessed the dose-dependent response to tyrosine administration in older adults in both plasma tyrosine concentrations and working memory performance. In this double blind randomized cross-over trial 17 older adults (aged 60-75 years) received a single administration of 100, 150 or 200 mg/kg body weight of tyrosine. For comparison, 17 young adults (aged 18-35 years) received a dose of 150 mg/kg body weight of tyrosine. Tyrosine plasma concentrations were determined before and 90, 120, 150, 180, 210 and 240 minutes after tyrosine intake. Working memory was assessed using the N-back task at 90 minutes after tyrosine administration. Older adults showed a dose-dependent increase in plasma tyrosine concentrations (p<.001), and the plasma response was higher than for young adults with the same dose (p<.001). Load-dependent working memory performance decreased with higher doses of tyrosine (p=.048), especially in older adults with greater dose-dependent plasma tyrosine responses (p=.035). Our results show an age-related increase in plasma tyrosine response, which was associated with a dose-dependent decline in cognitive functioning in older adults.
Fatigue Behavior of the Rare Earth Rich EV31A Mg Alloy: Influence of Plasma Electrolytic Oxidation
Lorella Ceschini, Alessandro Morri, Valerio Angelini, Simone Messieri
Subject: Materials Science, Surfaces, Coatings & Films Keywords: Rare earth rich magnesium alloy; Plasma Electrolytic Oxidation; PEO; Microstructure; Fatigue
Rare earth rich magnesium alloys are used in the aerospace and automotive fields because of their high specific strength and good castability. However, due to their low corrosion resistance, protective surface treatments, such as conversion coating or electroless plating, are necessary when they are used in humid or corrosive environments. The present study aimed to evaluate the effect of Plasma Electrolytic Oxidation (PEO) and different surface roughness (Ra≈0.8 μm and Ra≈0.3 μm) on the rotating-bending fatigue behavior of an innovative Mg alloy with a high content of Nd (up to 3.1 wt%) and Gd (up to 1.7 wt%). Fatigue tests revealed a 15% decrease in the fatigue strength of the PEO-treated alloy (fatigue strength = 88 MPa) with respect to the bare alloy (fatigue strength = 103 MPa). The reduction of fatigue strength was mainly due to the residual tensile stresses induced by the PEO treatment. The effect of surface roughness on the bare alloy was, instead, negligible. The mechanisms of crack initiation were similar in the untreated and PEO-treated alloy, with crack nucleation sites located at large facets of the cleavage planes.
Weak Deflection Angle for Curvature-Coupled Antisymmetric Wormhole Solution
Wajiha Javed, Touqeer Zahra, Reggie Pantig, Ali Övgün
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: General Relativity; Gravitational lensing; Dark Matter, Gauss-Bonnet Theorem; Plasma Medium; Wormhole
This paper is devoted to the study of gravitational lensing by the curvature-coupled antisymmetric wormhole solution, computing the bending angle of light using the Gibbons and Werner technique. To achieve this, we find the Gaussian optical curvature and then apply the Gauss-Bonnet theorem in the weak-field limit. We also study the effects of plasma and dark matter media on the bending angle. Moreover, we analyze the graphical behaviour of the deflection angle $\alpha$ with respect to the impact parameter $\sigma$ and the minimal radius $r_{0}$ in non-plasma and plasma media. We find that the deflection angle increases with $r_{0}$: large values of $r_{0}$ give a large deflection angle and small values give a small one, whereas the deflection angle varies inversely with the impact parameter $\sigma$. Additionally, we derive the deflection angle of light using the Keeton and Petters method and compare it with the previous results.
The Plasma Membrane as the Arena for Receptor-Receptor Crosstalk
Ana Sofía Vallés, Francisco J. Barrantes
Subject: Medicine & Pharmacology, Behavioral Neuroscience Keywords: plasma membrane; membrane domains; nanodomains; neurotransmitter receptors; cannabinoids; acetylcholine receptor; cannabinoid receptor.
Fifty years on from the classical fluid-mosaic model of Singer and Nicolson, current views of the plasma membrane portray a much more complex picture of this interface region. Compartmentalization, together with transbilayer and lateral asymmetries, provides the structural foundation for functional specializations at the cell surface, including the active role of the lipid microenvironment in the modulation of membrane-bound proteins. The chemical synapse, the site where neurotransmitter-coded signals are decoded by neurotransmitter receptors, adds another layer of complexity to the plasma membrane's architectural intricacy, mainly due to the need to accommodate a sizeable number of molecules in a minute subcellular compartment with dimensions barely reaching the micrometer. In this review, we discuss how Nature has developed suitable adjustments to accommodate different types of membrane-bound receptors and scaffolding proteins via membrane microdomains, and how this "effort-sharing" mechanism has evolved to optimize crosstalk, separation or coupling where and when appropriate. To this end, we use a fast ligand-gated neurotransmitter receptor, the nicotinic acetylcholine receptor, and a second-messenger G-protein coupled receptor, the cannabinoid receptor, as paradigmatic examples.
Evaluation of B-Cell Kinetics After Acellular Pertussis Vaccination in Four Cohorts of Different Age and Priming Background
Annieck M Diks, Pauline Versteegen, Cristina Teodosio, Rick J Groenland, Bas de Mooij, Anne-Marie Buisman, Alba Torres-Valle, Martín Pérez-Andrés, Alberto Orfao, Guy A.M. Berbers, Jacques J.M. van Dongen, Magdalena A. Berkowska
Subject: Medicine & Pharmacology, Other Keywords: Tdap; flow cytometry; acellular pertussis vaccine; whole cell pertussis vaccine; plasma cells
Pertussis is a vaccine-preventable disease caused by the bacterium Bordetella pertussis. Over the past years, the incidence and mortality of pertussis increased significantly. A possible cause is the switch from whole cell to acellular pertussis vaccines, although other factors may also contribute. To develop future vaccines and improve current vaccination strategies, it is critical to understand factors influencing the generation of immunological memory. We applied high-dimensional flow cytometry to investigate changes in B cells in individuals of different ages and distinct priming backgrounds upon administration of an acellular pertussis booster vaccine. These findings were correlated to vaccine-specific plasma cells and serum Ig levels. Expansion and maturation of plasma cells 7 days post-vaccination was the most prominent cellular change in all age groups, and was most pronounced for more mature IgG1+ plasma cells. Cellular responses were stronger in individuals primed with whole cell vaccine than in individuals primed with acellular vaccine. Moreover, IgG1+ plasma cell expansion weakly correlated with Prn- and PT- specific serum IgG levels. Our study points at plasma cells as a potential early cellular marker of an immune response and contributes to understanding differences in immune responses between age groups and priming backgrounds.
In-Utero Exposure to Cigarette Smoking on Child Long-Term Risk of Obesity: Concordance of Self-Report, Maternal and Cord Blood Biomarkers
Wenpin Hou, Mingyu Zhang, Yuelong Ji, Xiumei Hong, Guoying Wang, Liming Liang, Hongkai Ji, Suchi Saria, Xiaobin Wang
Subject: Medicine & Pharmacology, Pharmacology & Toxicology Keywords: Maternal smoking; obesity; biomarkers; cotinine; hydroxycotinine; plasma; cord; in utero exposure
Most studies on the association of in utero exposure to cigarette smoking and childhood overweight or obesity (OWO) were based on maternal self-reported smoking data and few were based on objective biomarkers. In this study, we evaluated the associations between self-reported and biomarkers of in utero exposure to cigarette smoking with risk of childhood OWO. We analyzed data from 2351 mother-child pairs in the Boston Birth Cohort, a US low-income minority cohort that enrolled children at birth and followed prospectively up to age 18 years. In utero smoking exposure was measured by maternal self-report and by maternal and cord plasma cotinine and hydroxycotinine metabolites. We assessed the individual and joint associations of each smoking exposure measure and maternal OWO with childhood OWO using multinomial logistic regressions. We used nested logistic regressions to investigate the childhood OWO prediction performance when adding maternal and cord plasma biomarkers as input covariates on top of self-reported data. Our results demonstrated that in utero cigarette smoking exposure defined by self-report and by maternal or cord metabolites were consistently associated with increased risk of long-term child OWO. Adding maternal and cord plasma biomarker information to self-reported data improved the prediction accuracy of long-term child OWO risk.
Weak Deflection Angle and Shadow by Tidal Charged Black Hole
Wajiha Javed, Ali Hamza, Ali Övgün
Subject: Physical Sciences, General & Theoretical Physics Keywords: Relativity; Gravitation; Black hole; Tidal Charge; Gauss-bonnet theorem; plasma medium; Shadow
In this article, we calculate the deflection angle of a tidal charged black hole (TCBH) in the weak-field limit. First we obtain the Gaussian optical curvature and then apply the Gauss-Bonnet theorem to it. With the help of the Gibbons-Werner method, we calculate the light's deflection angle by the TCBH in the weak-field limit and examine its graphical behavior. We then find the light's deflection angle in the presence of a plasma medium and check its graphical behavior as well. Moreover, we investigate the shadow of the TCBH. To calculate the shadow, we first find the null geodesics around the TCBH and then find its shadow radius. We also obtain the TCBH's shadow in a plasma medium. Finally, we discuss the shadow of the TCBH using the $M87^{*}$ parameters announced by the Event Horizon Telescope.
Antimicrobial Peptides Grafted onto a Plasma Polymer Interlayer Platform: Performance upon Extended Bacterial Challenge
Stefani Griesser, Marek Jasieniak, Krasimir Vasilev, Hans J. Griesser
Subject: Materials Science, Biomaterials Keywords: antibacterial coating; antimicrobial peptide; plasma polymer; LL 37; Magainin; Parasin; bacterial attachment
To combat infections on biomedical devices, antimicrobial coatings have attracted considerable attention, including coatings comprising naturally occurring antimicrobial peptides (AMPs). In this study the aim was to explore performance upon extended challenge by bacteria growing in media above the samples. The AMPs LL37, Magainin 2, and Parasin 1 were covalently grafted onto a plasma polymer platform, which enables application of this multilayer coating strategy to a wide range of biomaterials. Detailed surface analyses were performed to verify the intended outcomes of the coating sequence. Samples were challenged by incubation in bacterial growth media for 5 and 20 h. Compared with the control plasma polymer surface, all three grafted AMP coatings showed considerable reductions in bacterial colonization even at the high bacterial challenge of initial seeding at 1×10^7 CFU, but there were increasing numbers of dead bacteria attached to the surface. All three grafted AMP coatings were found to be non-toxic to primary fibroblasts. These coatings thus could be useful for producing antibacterial surface coatings for biomaterials, though possible consequences arising from the presence of dead bacteria need to be studied further and compared to non-fouling coatings that avoid attached dead bacteria.
Improving equity in health care financing in China during the progression towards Universal Health Coverage
Mingsheng Chen, Andrew J. Palmer & Lei Si
China is reforming the way it finances health care as it moves towards Universal Health Coverage (UHC) after the failure of market-oriented mechanisms for health care. Improving financing equity is a major policy goal of the health care system during the progression towards universal coverage.
We used progressivity analysis and dominance tests to evaluate the financing channels of general taxation, public health insurance, and out-of-pocket (OOP) payments. A survey of 8854 individuals in 3008 households recorded the socioeconomic and demographic status and the 2012 health care payments of those households.
The overall Kakwani index (KI) of China's health care financing system is 0.0444. For general tax the KI was −0.0241 (95% confidence interval (CI): −0.0315 to −0.0166). The indices for the public health insurance schemes (Urban Employee Basic Medical Insurance, Urban Resident's Basic Medical Insurance, New Rural Cooperative Medical Scheme) were, respectively, 0.1301 (95% CI: 0.1008 to 0.1594), −0.1737 (95% CI: −0.2166 to −0.1308), and −0.5598 (95% CI: −0.5830 to −0.5365); and for OOP payments the KI was 0.0896 (95% CI: 0.0345 to 0.1447). OOP payments are still the dominant part of China's health care finance system.
China's health care financing system is not truly equitable. Reducing the proportion of indirect taxes would considerably improve health care financing equity. The flat-rate contribution mechanism is not recommended for use in public health insurance schemes, and more attention should be given to optimizing benefit packages during China's progression towards UHC.
Various countries have designed and implemented health sector reforms to bring about Universal Health Coverage (UHC), and the World Health Organization (WHO) has called for health systems to move towards UHC, where there are 'key promotive, preventive, curative and rehabilitative health interventions for all at an affordable cost, thereby achieving equity in access' [1]. The 2010 World Health Report was devoted to UHC and it argued that financing systems need to be specifically designed to provide all people with access to the health services that they need, and to ensure that the use of these services does not expose the user to financial hardship, especially regarding poor and vulnerable groups [2, 3]. Consequently, policymakers must ensure that coverage is equitable and they must establish reliable sources of finance to fund health care. Improving the equity of health care financing has become a major policy goal in the development of UHC. However, policymakers often encounter challenges: does health care financing become equitable during the progression towards UHC?
China is reforming the way it finances its health care system as it moves towards UHC after the failure of the market-oriented approach to health care. China's health care financing system has been influenced by economic transitions since the early 1980s, and the system was gradually reformed as it transitioned from a planned economic model to a market-oriented model [4]. Government health care spending declined as health care financing was decentralized. As a result, the share of public funding in the health care system decreased, while the proportion of private financing increased [5].
During the period of the planned economy, China's social health insurance consisted of the Government Welfare Insurance Scheme (GWIS), the Labor Insurance Scheme (LIS) for those in urban areas and the Cooperative Medical Scheme (CMS) for those in rural areas. GWIS mainly covered civil servants, other government employees, veterans and college students, whereas LIS was for workers and their dependents across all the formal sectors of the economy [6]. CMS played a key role in guaranteeing access to basic health services for the vast majority of the rural population, especially the poor [7]. Almost all health care expenditure was funded by the government during the planned economy period. Taking the year 1980 as an example, OOP expenditure accounted for only 21.19% of all health care financing [8].
However, these health insurance schemes faced challenges brought about by the market-oriented economic reforms, which led to substantial changes in hospital management procedures and financing patterns. These reforms, coupled with the adoption of advanced medical technologies and economic inflation, became a major factor that increased health care costs. Along with the greater demands by employees for quality care, and the corresponding financial pressures, financing from GWIS and LIS greatly shrank and citizens had to pay much higher OOP expenditures for health care during the period of the market-oriented economy.
In 2000, OOP payments accounted for 58.98% of all health care financing [8]. The 2000 World Health Report also noted that China had a very high per capita health care expenditure and an inequitable health care system [9]. The heavy dependence on OOP payments resulted in a segmented and tiered health care financing system, in which poor and vulnerable groups faced financial difficulties when accessing health care. The WHO indicated that not only do OOP payments cause financial stress and deter people from using health services, but they also cause inequity in health care financing [9]. The results of China's 2003 national health services survey show that 48.9% of individuals who should have received outpatient care did not visit a health clinic. Among those who needed to be admitted but did not use inpatient services, 75.4% could not afford the hospital charges [10].
The increase in costs and inequality in health care usage was considered to be a major crisis [11], and the Chinese government took steps to address these issues by establishing new types of health insurance schemes, as shown in Table 1. In 1998, the Chinese government established the Urban Employee Basic Medical Insurance (UEBMI), which covers urban workers in the formal sector. During the period 1998–2009, UEBMI gradually expanded to cover all urban workers in all types of organizations, including government institutions, state-owned and collective enterprises, private enterprises, enterprises with foreign investment, social organizations and private non-enterprise organizations. Moreover, the Urban Resident Basic Medical Insurance (URBMI) was established in 2007 for urban residents such as pre-school children, students, the disabled, the unemployed and elderly people without pensions [12]. In addition, the New Rural Cooperative Medical Scheme (NRCMS) was piloted in 2003 and officially implemented in 2007 to provide cover for rural residents.
Table 1 Summary of China's current health insurance schemes
Currently, China's health care financing sources consist of general taxation, OOP payments and public health insurance schemes (UEBMI, URBMI and NRCMS). In 2009, the Chinese government announced that it was to establish UHC by extending the coverage of UEBMI, URBMI and NRCMS in order to provide safe, effective, convenient and affordable health services to all Chinese people by 2020 [13]. As a result of the Chinese government's attempts to accelerate the establishment of UHC by expanding the coverage of the three public health insurance schemes, in 2012, UEBMI, URBMI and NRCMS covered 274 million, 296 million and 802 million individuals, respectively. In 2012, the proportions of the relevant population covered by UEBMI, URBMI and NRCMS were 95.1, 89.2 and 97.3%, respectively [14]. However, 4.4% of the population was still not covered by any type of health insurance scheme and these people had to pay OOP for health care [14].
The Chinese government's initiatives have expanded health coverage and have attempted to encourage progressive payments over regressive payments, with the overall aims of reducing OOP payments and improving the equity of health care financing. Contributions to health care finance have been considered a redistribution of the disposable income of households [15, 16]. Progressive payments mean that the rich contribute a greater proportion of health care payments than the poor, relative to their ability to pay (ATP). In contrast, regressive payments mean that the poor contribute a greater proportion of health care payments than the rich, relative to ATP. However, cross-subsidization from the rich to the sick poses a potential challenge to UHC. For example, the individual contributions associated with UEBMI were a fixed proportion of employees' salaries, whilst the individual contributions associated with URBMI and NRCMS were flat-rate premiums, regardless of each individual's ATP. Although solidarity with the poor is widely supported in many countries [17], the progressivity of health care finance may affect people's willingness to participate in a health insurance scheme.
Some researchers have raised the concern that poor individuals covered by URBMI or NRCMS contribute a greater share of health care payments to the insurance schemes than the rich. Accordingly, evaluating the distribution of health care financing has become fundamental to assessing China's progression towards UHC. However, few empirical studies provide evidence on the actual degree of financing equity. It is intended that this study will help to clarify the positive and negative aspects of China's health care financing system, and thereby discover flaws in the financing mechanisms, which are heavily influenced by UHC initiatives.
The data for the analyses came from a 2013 household survey in North Jiangsu Province, China, which recorded information for 2012. In terms of per capita gross domestic product (GDP), North Jiangsu, in the center of East China, is middle-ranked in China.
Adopting a multistage stratified random sampling method, the survey randomly selected five counties or county-level cities, and then five townships or neighborhoods were selected from each of these. In turn, two communities were selected from each of the townships or neighborhoods. About 60 households from each of the communities were then randomly selected, giving a total of 3008 households with 8854 individuals, as shown in Table 2.
Table 2 Descriptive statistics and socioeconomic characteristics of the sampling data by income quintile
The survey was administered via household interviews. Within each sampled household, all household members aged 15 years and older were interviewed. Information on the children aged under 15 years was obtained via their guardians, as was information on adults with incapacities who required guardians. The face-to-face interviews were carried out by trained data collectors who used a structured questionnaire. This questionnaire contained a series of questions regarding the socioeconomic and demographic characteristics of each household and its members, including expenditure, number of household members, gender, age, employment status, earnings and education status. With regard to household expenditure, monthly expenditures on food, water, transport, housing, clothing, electricity, communications, education, fuel, entertainment, tour, health care and other expenditures were recorded. These data covered the 12-month period prior to the interviews. Household expenditure was recorded by the head of the household, or by members of the household who were familiar with the household's affairs. Data on health care expenditure were collected using the interviewees' medical records. The survey was confidential and personal identifiers were not collected. The study was approved by the Academic Research Ethics Committee of Nanjing Medical University.
Health care payments were computed using three data sources: the survey described above, the tariffs for tax and the contribution rates associated with UEBMI. The tariffs for general taxation were collected from the China Price Statistical Yearbook [14], while the contribution rates associated with UEBMI were obtained from the Jiangsu Statistical Yearbook [18].
General taxation is an important source of funding for health care in China. A variety of tax revenues exist, including excise taxes on food, drink, accommodation, alcohol, cigarettes, entertainment, gas and electricity, and various other consumption taxes. The taxes were estimated by applying the specific tax rates to the corresponding data on expenditures collected in the survey.
With regard to UEBMI, the household financing contributions were estimated by multiplying the contribution rate associated with UEBMI by the salaries of the relevant workers. With regard to URBMI, the annual premium was a flat-rate contribution. Each household was required to pay the same premium, due to the difficulty faced by the insurance agencies in identifying the socioeconomic status of each household. The flat amounts were obtained directly during the household interviews and they were aggregated at the household level. The same method was used for estimating the financing contribution associated with NRCMS.
Data on OOP payments during the two weeks prior to each interview were obtained during the survey, directly from the interviewees.
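To make these computations concrete, the following minimal sketch (our illustration, not the authors' code) assembles one household's annual contributions by financing source. The rate constants and the sample figures are hypothetical placeholders, not the actual yearbook tariffs.

```python
# Minimal sketch of aggregating a household's health-financing contributions
# by source, as described above. All rates and values are hypothetical.

EXCISE_RATES = {"alcohol": 0.10, "cigarettes": 0.36, "fuel": 0.05}  # placeholders
DEFAULT_INDIRECT_RATE = 0.17      # placeholder VAT-like rate on other items
UEBMI_EMPLOYEE_RATE = 0.02        # approx. employee share of salary (see text)

def household_payments(expenditures, salaries, urbmi=0.0, nrcms=0.0, oop=0.0):
    """Return annual contributions by financing source for one household."""
    # General taxation: item-specific rates applied to recorded expenditures.
    tax = sum(EXCISE_RATES.get(item, DEFAULT_INDIRECT_RATE) * amount
              for item, amount in expenditures.items())
    # UEBMI: a fixed proportion of each formal-sector worker's salary.
    uebmi = UEBMI_EMPLOYEE_RATE * sum(salaries)
    # URBMI/NRCMS premiums and OOP payments are recorded directly in the survey.
    return {"tax": tax, "uebmi": uebmi, "urbmi": urbmi, "nrcms": nrcms, "oop": oop}

print(household_payments({"food": 12000.0, "alcohol": 500.0}, [36000.0], nrcms=60.0))
```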
The unit in the analysis of financing progressivity was the household. Expenditures and health care payments were aggregated at the household level. The household expenditure was used as the measurement of ATP [19]. The household expenditure was adjusted for household size and composition in order to obtain adult equivalent estimates. The number of adult equivalent household members was defined as follows:
$$ AE={\left(A+\alpha K\right)}^{\beta } $$
where A is the number of adults in the household, α is the cost of children, K is the number of children and β is the degree of economies of scale [19]. The values of α and β were assumed to be 0.5 and 0.75, respectively [20]. The population was ranked by ATP and grouped into quintiles. Household health care payments were also adjusted for household size and composition in order to obtain adult equivalent estimates.
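A minimal sketch of the adult-equivalent adjustment and the quintile grouping follows (our illustration with invented toy data, not the survey):

```python
import numpy as np

def adult_equivalents(adults, children, alpha=0.5, beta=0.75):
    """AE = (A + alpha*K)**beta with alpha = 0.5 and beta = 0.75, as in the text."""
    return (adults + alpha * children) ** beta

# Toy household data (invented): total expenditure, adults, children.
expenditure = np.array([18000.0, 25000.0, 9000.0, 40000.0, 15000.0])
adults = np.array([2, 2, 1, 3, 2])
children = np.array([1, 0, 2, 1, 1])

atp = expenditure / adult_equivalents(adults, children)  # per-adult-equivalent ATP
rank = np.argsort(np.argsort(atp)) + 1                   # 1 = poorest household
quintile = np.ceil(5 * rank / len(atp)).astype(int)      # quintile groups by ATP
print(np.round(atp, 1), quintile)
```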
The most direct means of assessing the progressivity of health care payments is to examine how the cumulative proportion of health care payments changes with the cumulative proportion of the population, ranked by ATP. Specifically, a progressivity analysis measures departures from proportionality in relation to health care payments and ATP.
The equity of health care financing is measured using the Kakwani index (KI), which is calculated as follows:
$$ {\pi}_{K}=C-G $$
where C is the concentration index (CI) for health care payments and G is the Gini coefficient associated with the ATP variable [21]. The CI measures the proportionality of health care payments within a defined population, but on its own it is not a measure of the equity of health care financing. The KI was used to estimate the degree of equity in the health care financing system. The πK value ranges from −2 to 1, with a positive number indicating progressivity and a negative number indicating regressivity. A πK value of 0 indicates proportionality [19]. Progressivity (regressivity) indicates that the rich (poor) contribute a larger proportion of health care payments than the poor (rich) in comparison with ATP [19].
Computing the CI and the Gini coefficient requires directly relating the covariance between variables and the households' fractional ranks according to their ATP [22, 23]. The estimates of the CI and the Gini coefficient can be obtained from ordinary least squares (OLS) regression of the health care payment variables and ATP, respectively, on the households' fractional rank according to the ATP distribution [19, 24], as follows:
$$ 2{\sigma}^2\left(\frac{Y_i}{\varphi}\right)=\alpha +\beta X_i+\varepsilon $$
where Yi is the health care payment or ATP of household i, φ is the mean health care payment or ATP, Xi is the household fractional rank according to the ATP distribution and σ2 is its variance. The OLS value of β is an estimate of the CI or the Gini coefficient, depending on the variables used in the regression.
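This convenient regression translates directly into code. The sketch below (our illustration, using simulated rather than survey data) estimates the CI of a payment variable from the regression slope, and yields the Gini coefficient when the payment variable is replaced by ATP itself:

```python
import numpy as np

def concentration_index(y, atp):
    """Convenient-regression estimate of the concentration index of y,
    with households ranked by ATP; returns the Gini coefficient when y = atp."""
    n = len(y)
    rank = np.empty(n)
    rank[np.argsort(atp)] = (np.arange(1, n + 1) - 0.5) / n  # fractional ranks
    lhs = 2 * rank.var() * (y / y.mean())
    X = np.column_stack([np.ones(n), rank])
    return np.linalg.lstsq(X, lhs, rcond=None)[0][1]         # slope = CI (or Gini)

rng = np.random.default_rng(0)
atp = rng.lognormal(mean=9.5, sigma=0.6, size=1000)   # simulated ATP distribution
payment = 0.03 * atp + rng.normal(0.0, 50.0, 1000)    # roughly proportional payments
print("CI:  ", round(concentration_index(payment, atp), 4))
print("Gini:", round(concentration_index(atp, atp), 4))
```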
As the KI is the difference between the CI and the Gini coefficient, both of which can be computed using the regression method described above, its value can be computed using a regression of the following form [19]:
$$ 2{\sigma}^2\left[\frac{s_i}{\mu }-\frac{t_i}{\eta}\right]=\alpha +\theta {X}_i+\varepsilon $$
where si is the health care payment of household i, μ is an estimate of its mean, ti is the ATP variable, η is an estimate of its mean, Xi is the household fractional rank according to the ATP distribution and σ2 is its variance. The OLS value of θ is an estimate of the KI. The overall KI of the health care financing system can be computed by taking the weighted sum of the individual KIs for each source of finance, where the weights are equal to the proportions of revenue collected from each source.
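In the same spirit, the KI can be estimated in a single regression, and the overall KI assembled as the revenue-share-weighted sum. In the sketch below, the per-source KIs are those reported in the Results, while the revenue shares are hypothetical stand-ins:

```python
import numpy as np

def kakwani_index(payment, atp):
    """Slope of the convenient regression of 2*var(rank)*(s_i/mu - t_i/eta)
    on the fractional ATP rank; estimates KI = C - G directly."""
    n = len(payment)
    rank = np.empty(n)
    rank[np.argsort(atp)] = (np.arange(1, n + 1) - 0.5) / n
    lhs = 2 * rank.var() * (payment / payment.mean() - atp / atp.mean())
    X = np.column_stack([np.ones(n), rank])
    return np.linalg.lstsq(X, lhs, rcond=None)[0][1]

# Overall KI as the revenue-share-weighted sum of per-source KIs. The KIs are
# those reported in the Results; the revenue shares are hypothetical stand-ins.
shares = {"tax": 0.30, "uebmi": 0.17, "urbmi": 0.02, "nrcms": 0.06, "oop": 0.45}
kis = {"tax": -0.0241, "uebmi": 0.1301, "urbmi": -0.1737, "nrcms": -0.5598, "oop": 0.0896}
print("overall KI:", round(sum(shares[k] * kis[k] for k in shares), 4))
```

With the paper's actual revenue shares in place of these stand-ins, this weighted sum yields the overall KI of 0.0444 reported in the Results.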
In addition, dominance tests were used after the progressivity analysis. In order to determine whether the health care financing mechanisms reduce inequity, in the sense that poor individuals contribute a smaller proportion of their wealth to the health care financing system than wealthy individuals, tests were conducted to determine whether one concentration curve dominates (i.e., lies above) the Lorenz curve or another concentration curve. For dominance testing, the standard errors and differences between ordinates were computed to allow for between-curve dependence, where appropriate [25]. A multiple comparison approach to testing was adopted [26], with the null hypothesis defined as the curves being indistinguishable. This was tested against both dominance and the crossing of curves [27]. The null hypothesis was rejected in favor of dominance if there was at least one statistically significant difference between the ordinates of the two curves in one direction and no significant differences in the other direction across 19 evenly-spaced quantile points from 0.05 to 0.95. The null hypothesis was rejected in favor of crossing of curves if there was at least one statistically significant difference in each direction [28].
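A rough sketch of this decision rule follows. It is our simplification: the standard error `se` and the critical value `z` are placeholders, whereas the paper's standard errors account for dependence between the curves [25, 26]:

```python
import numpy as np

POINTS = np.arange(0.05, 0.96, 0.05)   # 19 evenly spaced quantile points

def curve_ordinates(y, atp):
    """Concentration-curve ordinates of y at the quantile points, with
    households ranked by ATP (use y = atp to get the Lorenz curve)."""
    cum = np.cumsum(y[np.argsort(atp)]) / y.sum()
    return cum[np.rint(POINTS * len(y)).astype(int) - 1]

def dominance(y1, y2, atp, se=0.01, z=2.8):
    """Multiple-comparison decision rule between two concentration curves.
    `se` and `z` are placeholder values for illustration only."""
    d = curve_ordinates(y1, atp) - curve_ordinates(y2, atp)
    above, below = bool((d > z * se).any()), bool((d < -z * se).any())
    if above and below:
        return "curves cross"
    return "one curve dominates" if (above or below) else "indistinguishable"

rng = np.random.default_rng(1)
atp = rng.lognormal(9.5, 0.6, 2000)
progressive = 0.001 * atp ** 1.3       # payments rising faster than ATP
flat = np.full_like(atp, 100.0)        # flat-rate premium, same for everyone
print(dominance(progressive, flat, atp))
```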
Table 3 presents the quintile-based income shares of per capita household expenditures and health care payments in North Jiangsu in 2012. The financing distribution, CIs, KIs, and dominance tests associated with each source of health care financing are also used to describe health care financing progressivity (Table 3).
Table 3 Distribution of household expenditure and the progressivity of the health care financing sources
The values of the CIs for all of the financing sources apart from NRCMS were statistically significantly positive. This confirms that the wealthy contribute a larger proportion of their ATP to the financing of health care than the poor, as is clear from the dominance tests against the 45° line (the line of equity). Out of all the positive CIs, the CI for UEBMI is the largest and the CI for URBMI is the smallest. With regard to NRCMS, the CI value is statistically significantly negative, implying that the wealthy contribute absolutely less to the financing of health care than do the poor through NRCMS.
The values of the KIs associated with UEBMI and OOP payments were statistically significantly positive. This indicates that the wealthy contribute a larger proportion of health care payments than the poor in comparison with ATP, as was clear from the dominance tests against the Lorenz curve. The KIs for general taxation, URBMI and NRCMS were statistically significantly negative, indicating that the wealthy contribute a smaller proportion of health care payments than the poor in comparison with ATP. Among these sources of health care finance, the KI for NRCMS had the largest negative magnitude, implying that the poor funded a much larger share of health care payments relative to ATP than the rich through NRCMS.
In summary, the financing associated with UEBMI and OOP payments was progressive, whereas the financing associated with general taxation, URBMI and NRCMS was regressive. The overall KI was 0.0444, indicating that it is a progressive health care financing system.
The relative progressivity of the different sources of finance was tested using dominance methods (Table 4). The results indicate that the concentration curve associated with UEBMI is dominated by all the others, and so it can be concluded that UEBMI is the most progressive source of finance, because, relative to ATP, the poor contributed a smaller proportion of UEBMI than of any other source of finance. The next most progressive source of finance is OOP payments, the concentration curve for which is dominated by all the others except UEBMI. The concentration curve for general taxation is dominated by those of URBMI and NRCMS, whereas the curve for URBMI is dominated by that of NRCMS. Therefore, it can be concluded that NRCMS is the most regressive source of finance, since, relative to ATP, the poor contributed a larger proportion of NRCMS than of any other source of finance. The ranking of the financing sources by progressivity is consistent with the estimates of the KIs.
Table 4 Tests of dominance between concentration curves for different sources of health finance
This study questioned whether there was evidence that China's health care financing system was equitable during China's progression towards UHC. Overall, it was not truly equitable. The sources of finance for China's health care system primarily comprised general taxation, UEBMI, URBMI, NRCMS and OOP payments. The health care payments associated with UEBMI and OOP payments were both progressive, as the values of the associated KIs were statistically significantly positive. However, the health care payments associated with general taxation, URBMI and NRCMS were regressive, as the values of the associated KIs were statistically significantly negative. Using these KI values and the results of the dominance tests that compared different concentration curves, the relative progressivity (from highest to lowest) of the sources of finance for health care is as follows: UEBMI, OOP payments, general taxation, URBMI and NRCMS. Overall, the health care financing system was slightly progressive, since the overall KI was positive but close to zero.
With regard to general taxation, the tax burden demonstrated a pro-rich bias in the distribution of health care financing. This contradicts findings in the literature that general taxation is a progressive mechanism for funding health care in both high- and middle-income countries [29]. In contrast to developed countries, where direct taxes comprise the majority of the general taxation system, indirect taxes dominate the general taxation system in China. This represents a pro-rich policy because the tax burden is transferred from wealthier people to the lower-income echelons. Within the region of interest in the current study, indirect taxes (which, in China, include value-added tax (VAT), excise tax and sales tax) accounted for 65.82% of general taxation in 2012 [18]. The high reliance on indirect taxes in China has resulted in the regressive effect of general taxation on China's health care financing system. It is suggested that, in the move towards UHC, not only should general tax collection be increased through a variety of tax sources to fund the UHC pool, but the general tax structure should also be reformed by reducing indirect taxes, to improve the financing equity of UHC.
With the goal of achieving UHC by 2020, efforts have been made to expand the coverage of the existing health insurance schemes (UEBMI, URBMI and NRCMS) to a wider Chinese population [30]. By 2012, over 95% of the total population had enrolled in one of these three health insurance schemes [14]. Whilst China has made great progress on coverage, the equity and affordability of health care remain under-investigated. The progressivity of the public health insurance schemes differed between UEBMI, URBMI and NRCMS. UEBMI was the most progressive financing source, whereas URBMI and NRCMS were the second most regressive and the most regressive sources, respectively. These differences can be attributed to the individual contribution mechanisms used in the different schemes. UEBMI was jointly financed by employees and employers. The employees contributed around 2% of their salaries and the employers contributed around 7% of the employees' salaries, although these proportions varied slightly depending on the region and the employee's age [31]. This indicates that the UEBMI contribution was positively correlated with income. In contrast, there was a flat rate of individual contributions associated with both NRCMS and URBMI, as it was difficult for the insurance agencies to measure the income of rural households or urban households whose members often did not have stable jobs. Therefore, the insured population enrolled in NRCMS and URBMI were required to pay the same premium, regardless of their ATP. This contribution mechanism explains why the KIs associated with these schemes were negative, which indicates an inequity that disadvantages the poorer members of society.
OOP payments were found to be progressive, indicating that the wealthy contributed a greater proportion of their ATP via direct payments than the poor. However, the KI and equity associated with OOP payments require cautious interpretation, because the progressive distribution may be attributable to a 'combined effect' of the development of UHC in China. This 'combined effect' emerged after the initiation of China's campaign to establish UHC (which involved extending financial risk protection and access to health services for the poor), as health care expenses were partly compensated for by insurance schemes. The OOP expenditure of the poor decreased, and their share of OOP payments relative to their ATP tended to be smaller than that of the wealthy. On the other hand, unlike other sources of health care finance, OOP payments are post-payments, which means that care can only be provided to people who can finance their own health care. For example, in some middle- and low-income countries that are moving towards UHC, the wealthiest have no difficulty affording health care, even high-priced medical goods, at the cost of some co-payment; for the poor and some middle-class people, however, these co-payments represent obstacles to accessing health care, because they cannot afford the expense [32, 33]. Together, these effects may explain the progressive distribution of OOP payments.
The relatively high level of OOP payments poses a challenge for China as it moves towards UHC. In the region examined in this study, OOP payments accounted for 44.61% of health care financing (Table 3). Heavy dependence on OOP payments encourages overuse by people who can pay and underuse by those who cannot. The high proportion of OOP payments in China can be attributed to the prioritization of China's UHC plans, which focused on increasing population coverage. In 2012, 95.4% of the population in North Jiangsu were reported to be covered by the various health insurance schemes, but coverage of health services and costs was not as comprehensive. For services on the insurance list, the co-insurance rates associated with UEBMI, URBMI and NRCMS were 71.4, 57.2 and 46.6%, respectively [34]; if the required health care services were not on the insurance list, patients had to pay for them entirely out of pocket. Therefore, OOP payments remained at a high level during China's progression towards UHC. This also adversely affected patients' care-seeking behavior, especially that of the poor.
A limitation of our study is that the data were collected from a single province. The results might not be representative of China as a whole and might not apply to the equity of health care financing in other provinces. This limitation notwithstanding, our study used relative (scale-free) indices to evaluate the implementation of national policies and programs across the whole population; accordingly, the results are less dependent on the specific regional economy or geography.
This study shows that China's health care financing distribution was slightly progressive during the progression towards UHC. Financing via general taxation was regressive because indirect taxes dominate the general tax structure; exemptions from indirect taxes, especially those that weigh heavily on vulnerable groups, could play a large part in improving financing equity. The different contribution mechanisms of the public health insurance schemes resulted in different levels of progressivity: UEBMI was progressive, whereas URBMI and NRCMS were both regressive. Flat-rate contributions are not recommended; instead, the contributions of the wealthy should be higher in order to cover non-contributing members. Although OOP payments were progressive, this may be due to the underuse of health services by the poor, and OOP payments still dominated China's health care financing system. This indicates that the next phase of UHC development should focus on updating the benefit packages of the health insurance schemes, including extending the range of covered health services and increasing the proportion of the total costs of care covered by the schemes.
ATP:
Ability to pay
CI:
Concentration index
CMS:
Cooperative Medical Scheme
GWIS:
Government Welfare Insurance Scheme
KI:
Kakwani index
LIS:
Labor Insurance Scheme
NRCMS:
New Rural Cooperative Medical Scheme
OLS:
Ordinary least squares
OOP:
Out-of-pocket
UEBMI:
Urban Employee Basic Medical Insurance
UHC:
Universal health coverage
URBMI:
Urban Resident Basic Medical Insurance
VAT:
Value-added tax
World Health Organization. Social health insurance: sustainable health financing, universal coverage and social health insurance. 58th world health assembly. Geneva: World Health Organization; 2005.
World Health Organization. World health report 2010: health systems financing: the path to universal coverage. Geneva: World Health Organization; 2010.
Lagomarsino G, Garabrant A, Adyas A, Muga R. Moving towards universal health coverage: health insurance reforms in nine developing countries in Africa and Asia. Lancet. 2012;380:933–43.
Hsiao WC, Maynard A. Health economics in China. Foreword. Health Econ. 2009;18(Suppl 1):1–2.
Lou J, Wang S. Public finance in China: reform and growth for a harmonious society. Washington, DC: World Bank; 2008.
Henderson G, Shuigao J, Akin J, Zhiming L, Jianmin W, Haijiang M. Distribution of medical insurance in China. Soc Sci Med. 1995;41:1119–30.
Xing-Yuan G, Sheng-Lan T. Reform of the Chinese health care financing system. Health Policy. 1995;32:181–91.
National Health Development Research Center. China National Health Accounts Report 2012. Beijing: Ministry of Health; 2012.
World Health Organization. World health report 2000: health systems: improving performance. Geneva: World Health Organization; 2000.
Ministry of Health. An analysis report of the third national health services survey. Beijing: Chinese Union Medical University Press; 2004.
Wagstaff A, Yip W, Lindelow M, Hsiao WC. China's health system and its reform: a review of recent studies. Health Econ. 2009;18(Suppl 2):7–23.
Meng Q, Tang S. Universal coverage of health care in China: challenges and opportunities. World health report 2010 background paper; 2010. http://www.who.int/healthsystems/topics/financing/healthreport/whr_background/en. Accessed 25 May 2016.
Center for Health Statistics and Information. An analysis report of the fifth national health services survey. Beijing: Peking Union Medical College Press; 2013.
National Bureau of Statistics of China. China price statistical yearbook 2013. Beijing: China Statistics Press; 2013.
Wagstaff A, van Doorslaer E. Progressivity, horizontal equity and reranking in health care finance: a decomposition analysis for The Netherlands. J Health Econ. 1997;16:499–516.
Bilger M. Progressivity, horizontal inequality and reranking caused by health system financing: a decomposition analysis for Switzerland. J Health Econ. 2008;27:1582–93.
James C, Savedoff W. Risk pooling and redistribution in health care: an empirical analysis of attitudes towards solidarity. World health report 2010 background paper, no. 5. Geneva: World Health Organization; 2010.
Jiangsu Provincial Bureau of Statistics. Jiangsu statistical yearbook 2013. Beijing: China Statistics Press; 2013.
O'Donnell O, Wagstaff A. Analyzing health equity using household survey data: a guide to techniques and their implementation. Washington, DC: World Bank; 2008.
World Bank. Guidelines for constructing consumption aggregates. Washington, DC: World Bank; 2002.
Deaton A. The analysis of household surveys: a microeconometric approach to development policy. Washington, DC: World Bank; 1997.
Jenkins S. Calculating income distribution indices from micro-data. Natl Tax J. 1988;41:139–42.
Lerman RI, Yitzhaki S. Improving the accuracy of estimates of Gini coefficients. J Econ. 1989;42:43–7.
Chen Z, Roy K. Calculating concentration index with repetitive values of indicators of economic welfare. J Health Econ. 2009;28:169–75.
Clarke P, Van Ourti T. Calculating the concentration index when income is grouped. J Health Econ. 2010;29:151–7.
Ashcroft R. Health inequities: evaluation of two paradigms. Health Soc Work. 2010;35:249–56.
Arora S, Pundir S, Jain K. Inference for absolute Lorenz curve and absolute Lorenz ordering. J Korean Stat Soc. 2006;35:305–16.
O'Donnell O, van Doorslaer E, Rannan-Eliya RP, Somanathan A, Adhikari SR, Harbianto D, et al. The incidence of public spending on healthcare: comparative evidence from Asia. World Bank Econ Rev. 2007;21:93–123.
O'Donnell O, van Doorslaer E, Rannan-Eliya RP, Somanathan A, Adhikari SR, Akkazieva B, et al. Who pays for health care in Asia? J Health Econ. 2008;27:460–75.
Chen Z. Launch of the health-care reform plan in China. Lancet. 2009;373:1322–4.
Yip WC, Hsiao WC, Chen W, Hu S, Ma J, Maynard A. Early appraisal of China's huge and complex health-care reforms. Lancet. 2012;379:833–42.
Yip WC, Mahal A. The health care systems of China and India: performance and future challenges. Health Aff (Millwood). 2008;27:921–32.
van Doorslaer E, O'Donnell O, Rannan-Eliya RP, Somanathan A, Adhikari SR, Garg CC, et al. Effect of payments for health care on poverty estimates in 11 countries in Asia: an analysis of household survey data. Lancet. 2006;368:1357–64.
Jiangsu Provincial Commission of Health and Family Planning. An analysis report of the fifth health services survey in Jiangsu Province. Nanjing: Jiangsu Provincial Commission of Health and Family Planning; 2015.
We thank the Ministry of Health, People's Republic of China, for its support and cooperation, and the health department of the project province and the local health bureaus for their cooperation and organization of the field data collection. We would also like to thank all study participants for their time.
This study was funded by the National Natural Science Foundation of China (grant number 71503137). The National Natural Science Foundation of China was not involved in the design of the study; in data collection, analysis or interpretation; or in writing the manuscript.
The datasets used in the current study are not publicly available due to confidentiality policies, but are available from the corresponding author on reasonable request.
School of Health Policy & Management, Nanjing Medical University, 101 Longmian Avenue, Jiangning District, Nanjing, 211166, People's Republic of China
Mingsheng Chen
Institute of Healthy Jiangsu Construction & Development, Nanjing, 211166, China
Menzies Institute for Medical Research, University of Tasmania, Medical Science 1 Building, 17 Liverpool St (Private Bag 23), Hobart, TAS, 7000, Australia
Andrew J. Palmer
& Lei Si
LS led the study: he designed it and led the data collection, analysis, and interpretation. MC contributed to the study design, provided input into the data analysis, and wrote the first draft of the manuscript. AP contributed to the study design, reviewed the manuscript, and helped write the final draft. All authors read and approved the final manuscript.
Correspondence to Lei Si.
This study was approved by the Academic Research Ethics Committee of Nanjing Medical University. All procedures were in accordance with the ethical standards of the Helsinki Declaration. Participants provided informed consent prior to data collection.
Financing equity
Progressivity
Planetary systems in a star cluster II: intermediate-mass black holes and planetary systems
Spurzem, Rainer
Computational Structure Formation, MPI for Astrophysics, Max Planck Society;
Dotti, F. F., Kouwenhoven, M. B. N., Shu, Q., Hao, W., & Spurzem, R. (2020). Planetary systems in a star cluster II: intermediate-mass black holes and planetary systems. Monthly Notices of the Royal Astronomical Society, 497(3), 3623-3637. doi:10.1093/mnras/staa2188.
Cite as: http://hdl.handle.net/21.11116/0000-0007-B6FE-E
Most stars form in dense stellar environments. It is speculated that some dense star clusters may host intermediate-mass black holes (IMBHs), which may have formed from runaway collisions between high-mass stars or from mergers of less massive black holes. Here, we numerically explore the evolution of populations of planets in star clusters with an IMBH. We study the dynamical evolution of single-planet systems and free-floating planets, over a period of 100 Myr, in star clusters without an IMBH and in clusters with a central IMBH of mass $100\,\mathrm{M}_\odot$ or $200\,\mathrm{M}_\odot$. In the central region ($r \lesssim 0.2$ pc), the IMBH's tidal influence on planetary systems is typically 10 times stronger than that of the average neighbour star. For a star cluster with a $200\,\mathrm{M}_\odot$ IMBH, the region in which the IMBH's influence is stronger extends to the virial radius (∼1 pc). The IMBH quenches mass segregation, and the stars in the core tend to move towards intermediate regions. The ejection rate of both stars and planets is higher when an IMBH is present. The rate at which planets are expelled from their host star is higher for clusters with higher IMBH masses for $t < 0.5\,t_{\mathrm{rh}}$, while it remains mostly constant while the star cluster fills its Roche lobe, similar to a star cluster without an IMBH. The disruption rate of planetary systems is higher in initially denser clusters and for wider planetary orbits, and this rate is substantially enhanced by the presence of a central IMBH.
Outcomes and antibiotic use in patients with coronavirus disease 2019 (COVID-19) admitted to an intensive care unit
Megan M. Petteys, Leigh Ann Medaris, Julie E. Williamson, Rohit S. Soman, Travis A. Denmeade, William E. Anderson, Michael K. Leonard, Christopher M. Polk
Journal: Antimicrobial Stewardship & Healthcare Epidemiology / Volume 2 / Issue 1 / 2022
Published online by Cambridge University Press: 17 January 2022, e12
Antibiotic overuse is high in patients hospitalized with coronavirus disease 2019 (COVID-19) despite a low documented prevalence of bacterial infections in many studies. In this study evaluating 65 COVID-19 patients in the intensive care unit, empiric broad-spectrum antibiotics were often overutilized with an inertia to de-escalate despite negative culture results.
Evolution of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) seroprevalence among employees of a US academic children's hospital during coronavirus disease 2019 (COVID-19) pandemic
Brian T. Fisher, Anna Sharova, Craig L. K. Boge, Sigrid Gouma, Audrey Kamrin, Jesse Blumenstock, Sydney Shuster, Lauren Gianchetti, Danielle Collins, Elikplim Akaho, Madison E. Weirick, Christopher M. McAllister, Marcus J. Bolton, Claudia P. Arevalo, Eileen C. Goodwin, Elizabeth M. Anderson, Shannon R. Christensen, Fran Balamuth, Audrey R. Odom John, Yun Li, Susan Coffin, Jeffrey S. Gerber, Scott E. Hensley
Journal: Infection Control & Hospital Epidemiology , First View
Published online by Cambridge University Press: 02 December 2021, pp. 1-9
To describe the cumulative seroprevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies during the coronavirus disease 2019 (COVID-19) pandemic among employees of a large pediatric healthcare system.
Design, setting, and participants:
Prospective observational cohort study open to adult employees at the Children's Hospital of Philadelphia, conducted April 20–December 17, 2020.
Employees were recruited starting with high-risk exposure groups, utilizing e-mails, flyers, and announcements at virtual town hall meetings. At baseline, 1 month, 2 months, and 6 months, participants reported occupational and community exposures and gave a blood sample for SARS-CoV-2 antibody measurement by enzyme-linked immunosorbent assays (ELISAs). A post hoc Cox proportional hazards regression model was performed to identify factors associated with increased risk for seropositivity.
In total, 1,740 employees were enrolled. At 6 months, the cumulative seroprevalence was 5.3%, which was below estimated community point seroprevalence. Seroprevalence was 5.8% among employees who provided direct care and was 3.4% among employees who did not perform direct patient care. Most participants who were seropositive at baseline remained positive at follow-up assessments. In a post hoc analysis, direct patient care (hazard ratio [HR], 1.95; 95% confidence interval [CI], 1.03–3.68), Black race (HR, 2.70; 95% CI, 1.24–5.87), and exposure to a confirmed case in a nonhealthcare setting (HR, 4.32; 95% CI, 2.71–6.88) were associated with statistically significant increased risk for seropositivity.
Employee SARS-CoV-2 seroprevalence rates remained below the point-prevalence rates of the surrounding community. Provision of direct patient care, Black race, and exposure to a confirmed case in a nonhealthcare setting conferred increased risk. These data can inform occupational protection measures to maximize protection of employees within the workplace during future COVID-19 waves or other epidemics.
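The post hoc Cox proportional hazards analysis described above could be sketched as follows. This is a hedged illustration using the `lifelines` package; the file and column names are assumptions rather than the study's actual codebook:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: one row per employee, with follow-up time and an
# indicator of seroconversion during the study period.
df = pd.read_csv("employee_serosurvey.csv")
cols = ["direct_patient_care", "black_race", "community_case_exposure",
        "time_to_event", "seropositive"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_to_event", event_col="seropositive")
cph.print_summary()  # hazard ratios and 95% CIs, as reported above
```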
The Evolutionary Map of the Universe pilot survey
Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov
Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021
Published online by Cambridge University Press: 07 September 2021, e046
We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here.
Clinging to survival: Critically Endangered Chapman's pygmy chameleon Rhampholeon chapmanorum persists in shrinking forest patches
Krystal A. Tolley, Colin R. Tilbury, Jessica M. da Silva, Gary Brown, Yankho Chapeta, Christopher V. Anderson
Journal: Oryx , First View
Published online by Cambridge University Press: 03 August 2021, pp. 1-6
The Critically Endangered Chapman's pygmy chameleon Rhampholeon chapmanorum is endemic to the low elevation rainforest of the Malawi Hills in southern Malawi. Much of this forest has been converted to agriculture and it was uncertain whether chameleon populations have persisted. We used current and historical satellite imagery to identify remaining forest patches and assess deforestation. We then surveyed forest patches for the presence of this chameleon, and assessed its genetic diversity and structure. We estimated that 80% of the forest has been destroyed since 1984, although we found extant populations of the chameleon in each of the patches surveyed. Differentiation of genetic structure was strong between populations, suggesting that gene flow has been impaired. Genetic diversity was not low, but this could be the result of a temporal lag as well as lack of sensitivity in the mitochondrial marker used. Overall, the impact of forest loss is assumed to have led to a large demographic decline, with forest fragmentation preventing gene flow.
Coronavirus disease 2019 (COVID-19) research agenda for healthcare epidemiology
Lona Mody, Ibukunoluwa C. Akinboyo, Hilary M. Babcock, Werner E. Bischoff, Vincent Chi-Chung Cheng, Kathleen Chiotos, Kimberly C. Claeys, K. C. Coffey, Daniel J. Diekema, Curtis J. Donskey, Katherine D. Ellingson, Heather M. Gilmartin, Shruti K. Gohil, Anthony D. Harris, Sara C. Keller, Eili Y. Klein, Sarah L. Krein, Jennie H Kwon, Adam S. Lauring, Daniel J. Livorsi, Eric T. Lofgren, Katreena Merrill, Aaron M. Milstone, Elizabeth A. Monsees, Daniel J. Morgan, Luci P. Perri, Christopher D. Pfeiffer, Clare Rock, Sanjay Saint, Emily Sickbert-Bennett, Felicia Skelton, Katie J. Suda, Thomas R. Talbot, Valerie M. Vaughn, David J. Weber, Timothy L. Wiemken, Mohamed H. Yassin, Matthew J. Ziegler, Deverick J. Anderson
Published online by Cambridge University Press: 25 January 2021, pp. 1-11
This SHEA white paper identifies knowledge gaps and challenges in healthcare epidemiology research related to coronavirus disease 2019 (COVID-19) with a focus on core principles of healthcare epidemiology. These gaps, revealed during the worst phases of the COVID-19 pandemic, are described in 10 sections: epidemiology, outbreak investigation, surveillance, isolation precaution practices, personal protective equipment (PPE), environmental contamination and disinfection, drug and supply shortages, antimicrobial stewardship, healthcare personnel (HCP) occupational safety, and return to work policies. Each section highlights three critical healthcare epidemiology research questions with detailed description provided in supplementary materials. This research agenda calls for translational studies from laboratory-based basic science research to well-designed, large-scale studies and health outcomes research. Research gaps and challenges related to nursing homes and social disparities are included. Collaborations across various disciplines, expertise and across diverse geographic locations will be critical.
The role of income and neighbourhood poverty in the association between the 2009 Special Supplemental Nutrition Program for Women, Infants and Children (WIC) food package change and child obesity among WIC-participating children in Los Angeles County, 2003–2016
M Pia Chaparro, Shannon E Whaley, Christopher E Anderson, May C Wang, Catherine M Crespi
Journal: Public Health Nutrition / Volume 24 / Issue 13 / September 2021
Published online by Cambridge University Press: 22 December 2020, pp. 4212-4219
To determine whether a previously reported association between the Special Supplemental Nutrition Program for Women, Infants and Children (WIC) food package change and reduced child obesity risk among WIC-participating children in Los Angeles County holds across levels of family income and neighbourhood poverty.
Analysis of prospectively collected WIC administrative data. The outcome was obesity at age 4 years (BMI-for-age ≥ 95th percentile). Poisson regression was applied to a matched sample (n = 79 502) to determine if the association between the WIC food package change and child obesity was modified by family income (<50 % federal poverty level (FPL), 50–100 % FPL, >100 % but <185 % FPL) and neighbourhood poverty.
Los Angeles County, California.
Children who participated in WIC in Los Angeles County between 2003 and 2016; children were grouped as receiving the old WIC food package (2003–2009) or the new WIC food package (2010–2016).
Receiving the new WIC food package (i.e., post-2009) was associated with 7–18 % lower obesity risk across all family income categories. Neither family income nor neighbourhood poverty significantly modified the association between the WIC food package and child obesity. However, certain sub-groups seemed to benefit more from the food package change than others. In particular, boys from families with income above poverty but residing in the poorest neighbourhoods experienced the greatest reductions in obesity risk (relative risk = 0·77; 95 % CI 0·66, 0·88).
The WIC food package revisions were associated with reduced childhood obesity risk among all WIC-participating families in Los Angeles County, across levels of income eligibility and neighbourhood poverty.
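A hedged sketch of the kind of model described in the design paragraph: a log-link Poisson regression for relative risk, with an interaction term to test whether family income modifies the food-package association. The variable and file names are invented for illustration:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("wic_matched_sample.csv")  # hypothetical matched sample
model = smf.glm(
    "obese_age4 ~ new_package * income_group + neighborhood_poverty",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")  # robust errors for a risk-ratio interpretation
print(model.summary())
# A non-significant new_package:income_group coefficient corresponds to
# the reported finding that family income did not modify the association.
```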
Modelling the potential non-breeding distribution of Spoon-billed Sandpiper Calidris pygmaea
TOM BRADFER-LAWRENCE, ALISON E. BERESFORD, GUY Q. A. ANDERSON, PYAE PHYO AUNG, QING CHANG, SAYAM U. CHOWDHURY, NIGEL A. CLARK, YURI N. GERASIMOV, RHYS E. GREEN, GEOFF M. HILTON, BAZ HUGHES, ELENA G. LAPPO, JING LI, EVGENY E. SYROECHKOVSKIY, PAVEL S. TOMKOVICH, CHRISTOPH ZOCKLER, GRAEME M. BUCHANAN
Journal: Bird Conservation International / Volume 31 / Issue 2 / June 2021
The Spoon-billed Sandpiper Calidris pygmaea is a 'Critically Endangered' migratory shorebird. The species faces an array of threats in its non-breeding range, making conservation intervention essential. However, conservation efforts are reliant on identifying the species' key stopover and wintering sites. Using Maximum Entropy models, we predicted Spoon-billed Sandpiper distribution across the non-breeding range, using data from recent field surveys and satellite tracking. Model outputs suggest only a limited number of stopover sites are suitable for migrating birds, with sites in the Yellow Sea and on the Jiangsu coast in China highlighted as particularly important. All the previously known core wintering sites were identified by the model including the Ganges-Brahmaputra Delta, Nan Thar Island and the Gulf of Mottama. In addition, the model highlighted sites subsequently found to be occupied, and pinpointed potential new sites meriting investigation, notably on Borneo and Sulawesi, and in parts of India and the Philippines. A comparison between the areas identified as most likely to be occupied and protected areas showed that very few locations are covered by conservation designations. Known sites must be managed for conservation as a priority, and potential new sites should be surveyed as soon as is feasible to assess occupancy status. Site protection should take place in concert with conservation interventions including habitat management, discouraging hunting, and fostering alternative livelihoods.
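A much-simplified stand-in for the MaxEnt workflow described above: true MaxEnt fits presence-background data with specific feature classes and regularization, whereas here a plain logistic regression on presence versus random background points takes its place. All file, column, and covariate names are invented:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

pres = pd.read_csv("sbs_presence_points.csv")  # survey + tracking fixes
back = pd.read_csv("background_points.csv")    # random coastal cells
covs = ["tidal_flat_extent", "sst", "dist_to_coast"]

X = pd.concat([pres[covs], back[covs]])
y = [1] * len(pres) + [0] * len(back)

model = LogisticRegression(max_iter=1000).fit(X, y)
# Relative habitat suitability for each candidate cell:
suitability = model.predict_proba(back[covs])[:, 1]
```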
Changes in antibiotic use following implementation of a telehealth stewardship pilot program
Brigid M. Wilson, Richard E. Banks, Christopher J. Crnich, Emma Ide, Roberto A. Viau, Nadim G. El Chakhtoura, Yvonne R. Jones, Jason B. Cherry, Brett A. Anderson, Robin L. P. Jump
Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 7 / July 2019
Starting in 2016, we initiated a pilot tele-antibiotic stewardship program at 2 rural Veterans Affairs medical centers (VAMCs). Antibiotic days of therapy decreased significantly (P < .05) in the acute and long-term care units at both intervention sites, suggesting that tele-stewardship can effectively support antibiotic stewardship practices in rural VAMCs.
3167 Evaluation of risk factors for progression from carbapenem-resistant Enterobacteriaceae bacteriuria to an invasive infection
Jessica Howard-Anderson, Rebekah Blakney, Christopher Bower, Mary Elizabeth Sexton, Sarah W. Satola, Monica M. Farley, Jesse T. Jacob
Journal: Journal of Clinical and Translational Science / Volume 3 / Issue s1 / March 2019
Published online by Cambridge University Press: 26 March 2019, pp. 43-44
OBJECTIVES/SPECIFIC AIMS: To describe the epidemiology of patients with carbapenem-resistant Enterobacteriaceae (CRE) bacteriuria in metropolitan Atlanta, GA and to identify risk factors associated with progression to an invasive CRE infection. We hypothesize that having an indwelling urinary catheter increases the risk of progression. METHODS/STUDY POPULATION: The Georgia Emerging Infections Program (EIP) performs active population- and laboratory-based surveillance to identify CRE isolated from a sterile site (e.g. blood) or urine among patients who reside in the 8-county metropolitan Atlanta area (population ~4 million). The Georgia EIP performs a chart review of each case to extract data on demographics, culture location, resistance patterns, healthcare exposures, and other underlying risk factors. We used a retrospective cohort study design to include all Georgia EIP cases with Escherichia coli, Klebsiella pneumoniae, Klebsiella oxytoca, Enterobacter cloacae, or Klebsiella (formerly Enterobacter) aerogenes, adapting the current EIP definition of resistance to only include isolates resistant to meropenem, imipenem or doripenem (minimum inhibitory concentration ≥ 4) first identified in a urine culture from 8/1/2011 to 7/31/2017. Patients with CRE identified in a sterile site culture prior to a urine culture will be excluded. Within this cohort, we will identify which patients had a subsequent similar CRE isolate identified from a sterile site between one day and one year after the original urine culture was identified (termed "progression"). CRE isolates will be defined as similar if they are the same species and have the same carbapenem susceptibility pattern. Univariable analyses using T-tests or other nonparametric tests for continuous variables, and Chi-square tests (or Fisher's exact tests as appropriate) for categorical variables will compare patient demographics, comorbidities and presence of invasive devices including urinary catheters between patients who had progression to an invasive infection and those who did not have progression. Covariates with a p-value of < 0.2 will be eligible for inclusion in the multivariable logistic regression model with progression to invasive infection as the primary outcome. All statistical analyses will be done in SAS 9.4. RESULTS/ANTICIPATED RESULTS: From 8/1/2011 to 7/31/2017 we have preliminarily identified 546 patients with CRE first identified in urine, representing an annual incidence rate of 1.1 cases per 100,000 population. Most cases were K. pneumoniae (352, 64%), followed by E. coli (117, 21%), E. cloacae (48, 9%), K. aerogenes (18, 3%), and K. oxytoca (11, 2%). The mean patient age was 64 +/− 18 years and the majority (308, 56%) were female. Clinical characterization through chart review was available for 507 patients. The majority of the patients were black (301, 59%), followed by white (166, 33%), Asian (12, 2%), and other or unknown race (28, 6%). 466 (92%) patients had at least one underlying comorbid condition with a median Charlson Comorbidity Index of 3 (IQR 1-5). 460 (91%) infections were considered healthcare-associated (366 community-onset and 94 hospital-onset), while 44 (9%) were community-associated. 279 (55%) patients had a urinary catheter within the two days prior to the CRE culture. The analysis of patients who progress to an invasive CRE infection, including the results of the univariable and multivariable analyses assessing risk factors for progression is in progress and will be reported in the future. 
DISCUSSION/SIGNIFICANCE OF IMPACT: In metropolitan Atlanta, the annual incidence of CRE first isolated in urine was estimated to be 1.1 cases per 100,000 population between 2011 and 2017, with the majority of the cases being K. pneumoniae. Most patients had prior healthcare exposure and more than 50% of the patients had a urinary catheter. Our anticipated results will identify risk factors associated with progression from CRE bacteriuria to an invasive infection with a specific focus on having a urinary catheter, as this is a potentially modifiable characteristic that could be a target of future interventions.
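The screening-then-model workflow described in this abstract (univariable tests, with covariates at p < 0.2 entering a multivariable logistic regression) can be sketched as follows. The study itself specifies SAS 9.4; this Python version is only a hedged illustration, and all variable names are invented:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cre_bacteriuria_cohort.csv")  # hypothetical file
candidates = ["urinary_catheter", "age", "charlson_index", "female"]

keep = []
for var in candidates:
    p = smf.logit(f"progressed ~ {var}", data=df).fit(disp=0).pvalues[var]
    if p < 0.2:  # screening threshold stated in the abstract
        keep.append(var)
keep = keep or candidates  # fall back if nothing passes screening

final = smf.logit("progressed ~ " + " + ".join(keep), data=df).fit(disp=0)
print(final.summary())  # adjusted odds ratios for progression
```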
Point-prevalence study of antimicrobial use in public hospitals in southern Sri Lanka identifies opportunities for improving prescribing practices
Tianchen Sheng, Gaya B. Wijayaratne, Thushani M. Dabrera, Richard J. Drew, Ajith Nagahawatte, Champica K. Bodinayake, Ruvini Kurukulasooriya, Truls Østbye, Kristin J. Nagaro, Cherin De Silva, Hasini Ranawakaarachchi, A. T. Sudarshana, Deverick J. Anderson, Christopher W. Woods, L. Gayani Tillekeratne
Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 2 / February 2019
Print publication: February 2019
A point-prevalence study of antimicrobial use among inpatients at 5 public hospitals in Sri Lanka revealed that 54.6% were receiving antimicrobials: 43.1% in medical wards, 68.0% in surgical wards, and 97.6% in intensive care wards. Amoxicillin-clavulanate was most commonly used for major indications. Among patients receiving antimicrobials, 31.0% received potentially inappropriate therapy.
A prospective study of transmission of Multidrug-Resistant Organisms (MDROs) between environmental sites and hospitalized patients—the TransFER study
Luke F. Chen, Lauren P. Knelson, Maria F. Gergen, Olga M. Better, Bradly P. Nicholson, Christopher W. Woods, William A. Rutala, David J. Weber, Daniel J. Sexton, Deverick J. Anderson, for the CDC Prevention Epicenters Program
Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 1 / January 2019
Published online by Cambridge University Press: 14 November 2018, pp. 47-52
Hospital environmental surfaces are frequently contaminated by microorganisms. However, the causal mechanism of bacterial contamination of the environment as a source of transmission is still debated. This prospective study was performed to characterize the nature of multidrug-resistant organism (MDRO) transmission between the environment and patients using standard microbiological and molecular techniques.
Prospective cohort study at 2 academic medical centers.
A prospective multicenter study to characterize the nature of bacterial transfer events between patients and environmental surfaces in rooms that previously housed patients with 1 of 4 'marker' MDROs: methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, Clostridium difficile, and MDR Acinetobacter baumannii. Environmental and patient microbiological samples were obtained on admission into a freshly disinfected inpatient room. Repeat samples from room surfaces and patients were taken on days 3 and 7 and each week the patient stayed in the same room. The bacterial identity, antibiotic susceptibility, and molecular sequences were compared between organisms found in the environment samples and patient sources.
We enrolled 80 patient–room admissions; 9 of these patients (11.3%) were asymptomatically colonized with MDROs at study entry. Hospital room surfaces were contaminated with MDROs despite terminal disinfection in 44 cases (55%). Microbiological Bacterial Transfer events either to the patient, the environment, or both occurred in 12 patient encounters (18.5%) from the microbiologically evaluable cohort.
Microbiological Bacterial Transfer events between patients and the environment were observed in 18.5% of patient encounters and occurred early in the admission. This study suggests that research on prevention methods beyond the standard practice of room disinfection at the end of a patient's stay is needed to better prevent acquisition of MDROs through the environment.
Recent changes in the number of spoon-billed sandpipers Calidris pygmaea wintering on the Upper Gulf of Mottama in Myanmar
Pyae-Phyo Aung, Saw Moses, Nigel A. Clark, Guy Q.A. Anderson, Geoff M. Hilton, Graeme M. Buchanan, Christoph Zöckler, Rhys E. Green
Journal: Oryx / Volume 54 / Issue 1 / January 2020
Published online by Cambridge University Press: 07 September 2018, pp. 23-29
The spoon-billed sandpiper Calidris pygmaea, a migratory Arctic-breeding shorebird, is one of the rarest birds and its population has declined since the 1970s. We surveyed its most important known wintering area in the Upper Gulf of Mottama in Myanmar to estimate recent (2009–2016) changes in its numbers there. The total number of small shorebirds present in the Upper Gulf was counted and the proportion of them that were spoon-billed sandpipers was estimated from sample scans. These two quantities were multiplied together to give the estimated number of spoon-billed sandpipers in each of 4 years. Total numbers of combined small shorebird species tripled from 21,000 to 63,000 between 2009 and 2016, coincident with efforts to reduce hunting pressure on waterbirds. However, the proportion of small shorebirds that were spoon-billed sandpipers declined and their estimated absolute numbers fell by about half, from 244 to 112 individuals. It is probable that loss of intertidal habitat and shorebird hunting elsewhere on the migration route of the spoon-billed sandpipers wintering at Mottama is causing a continued decline, although this is occurring at a less rapid rate than that recorded from Arctic Russia before 2010. The number of spoon-billed sandpipers wintering on the Upper Gulf of Mottama remains the highest single-site total for this species from any known wintering site. Preventing resurgence of illegal shorebird hunting and ensuring long-term protection of the intertidal feeding habitats and roost sites in the Gulf are high priorities if extinction of this species is to be averted.
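The two-stage estimator described above is simple enough to show in a few lines; the scan counts here are hypothetical, chosen only to land near the reported 2016 figure:

```python
# Toy version of the estimator: a full count of all small shorebirds,
# times the scan-based proportion of spoon-billed sandpipers (SBS).
total_small_shorebirds = 63_000  # full count across the Upper Gulf
scanned_birds = 5_000            # birds checked in sample scans (hypothetical)
sbs_in_scans = 9                 # SBS among the scanned birds (hypothetical)

proportion_sbs = sbs_in_scans / scanned_birds
estimate = total_small_shorebirds * proportion_sbs
print(f"estimated spoon-billed sandpipers: {estimate:.0f}")  # ~113
```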
History of Education in the News: The Legacy of Slavery, Racism, and Contemporary Black Activism on Campus
James D. Anderson, Christopher M. Span
Journal: History of Education Quarterly / Volume 56 / Issue 4 / November 2016
Published online by Cambridge University Press: 20 January 2017, pp. 646-656
Print publication: November 2016
The History of Education Quarterly editorial team is planning to integrate a new feature, "History of Education in the News," into periodic issues of the journal. Our idea is to highlight relevant historical scholarship on a topic that has contemporary public resonance. Our first piece in this new vein engages the current uptick of interest in the links between slavery and higher education. Recent scholarship and popular press accounts have documented how many eastern colleges and universities benefited from enslaved African-American labor.
We asked Professors James D. Anderson and Christopher M. Span of the University of Illinois at Urbana-Champaign to take up this issue and reflect on how a deep knowledge of history informs recent activism on college and university campuses, particularly activism focused on forcing institutions to reckon with their histories and become antiracist spaces.
Reflections from Within: Ten Years of Service to HEQ
Yoon K. Pak, Christopher M. Span, James D. Anderson
Before we expound on our brief "farewell" essay, we wish to extend our deepest gratitude to those colleagues who contributed as authors, reviewers, associate editors, and editorial board members in sustaining the high quality of scholarship in the history of education. You have been indispensable in this process. I hope you realize the extent to which your role as reviewers serves as a means of mentoring, in contributing to the development of a community of scholars through your topical expertise. The majority of authors, junior and senior faculty alike, shared how appreciative they were of the thoughtful and lengthy feedback offered by the reviewers. They did not view the critiques in a punitive way but rather as a place for creating dialogue. This spirit of collegiality is what also helps our field to thrive.
By Mitchell Aboulafia, Frederick Adams, Marilyn McCord Adams, Robert M. Adams, Laird Addis, James W. Allard, David Allison, William P. Alston, Karl Ameriks, C. Anthony Anderson, David Leech Anderson, Lanier Anderson, Roger Ariew, David Armstrong, Denis G. Arnold, E. J. Ashworth, Margaret Atherton, Robin Attfield, Bruce Aune, Edward Wilson Averill, Jody Azzouni, Kent Bach, Andrew Bailey, Lynne Rudder Baker, Thomas R. Baldwin, Jon Barwise, George Bealer, William Bechtel, Lawrence C. Becker, Mark A. Bedau, Ernst Behler, José A. Benardete, Ermanno Bencivenga, Jan Berg, Michael Bergmann, Robert L. Bernasconi, Sven Bernecker, Bernard Berofsky, Rod Bertolet, Charles J. Beyer, Christian Beyer, Joseph Bien, Joseph Bien, Peg Birmingham, Ivan Boh, James Bohman, Daniel Bonevac, Laurence BonJour, William J. Bouwsma, Raymond D. Bradley, Myles Brand, Richard B. Brandt, Michael E. Bratman, Stephen E. Braude, Daniel Breazeale, Angela Breitenbach, Jason Bridges, David O. Brink, Gordon G. Brittan, Justin Broackes, Dan W. Brock, Aaron Bronfman, Jeffrey E. Brower, Bartosz Brozek, Anthony Brueckner, Jeffrey Bub, Lara Buchak, Otavio Bueno, Ann E. Bumpus, Robert W. Burch, John Burgess, Arthur W. Burks, Panayot Butchvarov, Robert E. Butts, Marina Bykova, Patrick Byrne, David Carr, Noël Carroll, Edward S. Casey, Victor Caston, Victor Caston, Albert Casullo, Robert L. Causey, Alan K. L. Chan, Ruth Chang, Deen K. Chatterjee, Andrew Chignell, Roderick M. Chisholm, Kelly J. Clark, E. J. Coffman, Robin Collins, Brian P. Copenhaver, John Corcoran, John Cottingham, Roger Crisp, Frederick J. Crosson, Antonio S. Cua, Phillip D. Cummins, Martin Curd, Adam Cureton, Andrew Cutrofello, Stephen Darwall, Paul Sheldon Davies, Wayne A. Davis, Timothy Joseph Day, Claudio de Almeida, Mario De Caro, Mario De Caro, John Deigh, C. F. Delaney, Daniel C. Dennett, Michael R. DePaul, Michael Detlefsen, Daniel Trent Devereux, Philip E. Devine, John M. Dillon, Martin C. Dillon, Robert DiSalle, Mary Domski, Alan Donagan, Paul Draper, Fred Dretske, Mircea Dumitru, Wilhelm Dupré, Gerald Dworkin, John Earman, Ellery Eells, Catherine Z. Elgin, Berent Enç, Ronald P. Endicott, Edward Erwin, John Etchemendy, C. Stephen Evans, Susan L. Feagin, Solomon Feferman, Richard Feldman, Arthur Fine, Maurice A. Finocchiaro, William FitzPatrick, Richard E. Flathman, Gvozden Flego, Richard Foley, Graeme Forbes, Rainer Forst, Malcolm R. Forster, Daniel Fouke, Patrick Francken, Samuel Freeman, Elizabeth Fricker, Miranda Fricker, Michael Friedman, Michael Fuerstein, Richard A. Fumerton, Alan Gabbey, Pieranna Garavaso, Daniel Garber, Jorge L. A. Garcia, Robert K. Garcia, Don Garrett, Philip Gasper, Gerald Gaus, Berys Gaut, Bernard Gert, Roger F. Gibson, Cody Gilmore, Carl Ginet, Alan H. Goldman, Alvin I. Goldman, Alfonso Gömez-Lobo, Lenn E. Goodman, Robert M. Gordon, Stefan Gosepath, Jorge J. E. Gracia, Daniel W. Graham, George A. Graham, Peter J. Graham, Richard E. Grandy, I. Grattan-Guinness, John Greco, Philip T. Grier, Nicholas Griffin, Nicholas Griffin, David A. Griffiths, Paul J. Griffiths, Stephen R. Grimm, Charles L. Griswold, Charles B. Guignon, Pete A. Y. Gunter, Dimitri Gutas, Gary Gutting, Paul Guyer, Kwame Gyekye, Oscar A. Haac, Raul Hakli, Raul Hakli, Michael Hallett, Edward C. Halper, Jean Hampton, R. James Hankinson, K. R. Hanley, Russell Hardin, Robert M. Harnish, William Harper, David Harrah, Kevin Hart, Ali Hasan, William Hasker, John Haugeland, Roger Hausheer, William Heald, Peter Heath, Richard Heck, John F. Heil, Vincent F. 
Hendricks, Stephen Hetherington, Francis Heylighen, Kathleen Marie Higgins, Risto Hilpinen, Harold T. Hodes, Joshua Hoffman, Alan Holland, Robert L. Holmes, Richard Holton, Brad W. Hooker, Terence E. Horgan, Tamara Horowitz, Paul Horwich, Vittorio Hösle, Paul Hoβfeld, Daniel Howard-Snyder, Frances Howard-Snyder, Anne Hudson, Deal W. Hudson, Carl A. Huffman, David L. Hull, Patricia Huntington, Thomas Hurka, Paul Hurley, Rosalind Hursthouse, Guillermo Hurtado, Ronald E. Hustwit, Sarah Hutton, Jonathan Jenkins Ichikawa, Harry A. Ide, David Ingram, Philip J. Ivanhoe, Alfred L. Ivry, Frank Jackson, Dale Jacquette, Joseph Jedwab, Richard Jeffrey, David Alan Johnson, Edward Johnson, Mark D. Jordan, Richard Joyce, Hwa Yol Jung, Robert Hillary Kane, Tomis Kapitan, Jacquelyn Ann K. Kegley, James A. Keller, Ralph Kennedy, Sergei Khoruzhii, Jaegwon Kim, Yersu Kim, Nathan L. King, Patricia Kitcher, Peter D. Klein, E. D. Klemke, Virginia Klenk, George L. Kline, Christian Klotz, Simo Knuuttila, Joseph J. Kockelmans, Konstantin Kolenda, Sebastian Tomasz Kołodziejczyk, Isaac Kramnick, Richard Kraut, Fred Kroon, Manfred Kuehn, Steven T. Kuhn, Henry E. Kyburg, John Lachs, Jennifer Lackey, Stephen E. Lahey, Andrea Lavazza, Thomas H. Leahey, Joo Heung Lee, Keith Lehrer, Dorothy Leland, Noah M. Lemos, Ernest LePore, Sarah-Jane Leslie, Isaac Levi, Andrew Levine, Alan E. Lewis, Daniel E. Little, Shu-hsien Liu, Shu-hsien Liu, Alan K. L. Chan, Brian Loar, Lawrence B. Lombard, John Longeway, Dominic McIver Lopes, Michael J. Loux, E. J. Lowe, Steven Luper, Eugene C. Luschei, William G. Lycan, David Lyons, David Macarthur, Danielle Macbeth, Scott MacDonald, Jacob L. Mackey, Louis H. Mackey, Penelope Mackie, Edward H. Madden, Penelope Maddy, G. B. Madison, Bernd Magnus, Pekka Mäkelä, Rudolf A. Makkreel, David Manley, William E. Mann (W.E.M.), Vladimir Marchenkov, Peter Markie, Jean-Pierre Marquis, Ausonio Marras, Mike W. Martin, A. P. Martinich, William L. McBride, David McCabe, Storrs McCall, Hugh J. McCann, Robert N. McCauley, John J. McDermott, Sarah McGrath, Ralph McInerny, Daniel J. McKaughan, Thomas McKay, Michael McKinsey, Brian P. McLaughlin, Ernan McMullin, Anthonie Meijers, Jack W. Meiland, William Jason Melanson, Alfred R. Mele, Joseph R. Mendola, Christopher Menzel, Michael J. Meyer, Christian B. Miller, David W. Miller, Peter Millican, Robert N. Minor, Phillip Mitsis, James A. Montmarquet, Michael S. Moore, Tim Moore, Benjamin Morison, Donald R. Morrison, Stephen J. Morse, Paul K. Moser, Alexander P. D. Mourelatos, Ian Mueller, James Bernard Murphy, Mark C. Murphy, Steven Nadler, Jan Narveson, Alan Nelson, Jerome Neu, Samuel Newlands, Kai Nielsen, Ilkka Niiniluoto, Carlos G. Noreña, Calvin G. Normore, David Fate Norton, Nikolaj Nottelmann, Donald Nute, David S. Oderberg, Steve Odin, Michael O'Rourke, Willard G. Oxtoby, Heinz Paetzold, George S. Pappas, Anthony J. Parel, Lydia Patton, R. P. Peerenboom, Francis Jeffry Pelletier, Adriaan T. Peperzak, Derk Pereboom, Jaroslav Peregrin, Glen Pettigrove, Philip Pettit, Edmund L. Pincoffs, Andrew Pinsent, Robert B. Pippin, Alvin Plantinga, Louis P. Pojman, Richard H. Popkin, John F. Post, Carl J. Posy, William J. Prior, Richard Purtill, Michael Quante, Philip L. Quinn, Philip L. Quinn, Elizabeth S. Radcliffe, Diana Raffman, Gerard Raulet, Stephen L. Read, Andrews Reath, Andrew Reisner, Nicholas Rescher, Henry S. Richardson, Robert C. Richardson, Thomas Ricketts, Wayne D. Riggs, Mark Roberts, Robert C. 
Roberts, Luke Robinson, Alexander Rosenberg, Gary Rosenkranz, Bernice Glatzer Rosenthal, Adina L. Roskies, William L. Rowe, T. M. Rudavsky, Michael Ruse, Bruce Russell, Lilly-Marlene Russow, Dan Ryder, R. M. Sainsbury, Joseph Salerno, Nathan Salmon, Wesley C. Salmon, Constantine Sandis, David H. Sanford, Marco Santambrogio, David Sapire, Ruth A. Saunders, Geoffrey Sayre-McCord, Charles Sayward, James P. Scanlan, Richard Schacht, Tamar Schapiro, Frederick F. Schmitt, Jerome B. Schneewind, Calvin O. Schrag, Alan D. Schrift, George F. Schumm, Jean-Loup Seban, David N. Sedley, Kenneth Seeskin, Krister Segerberg, Charlene Haddock Seigfried, Dennis M. Senchuk, James F. Sennett, William Lad Sessions, Stewart Shapiro, Tommie Shelby, Donald W. Sherburne, Christopher Shields, Roger A. Shiner, Sydney Shoemaker, Robert K. Shope, Kwong-loi Shun, Wilfried Sieg, A. John Simmons, Robert L. Simon, Marcus G. Singer, Georgette Sinkler, Walter Sinnott-Armstrong, Matti T. Sintonen, Lawrence Sklar, Brian Skyrms, Robert C. Sleigh, Michael Anthony Slote, Hans Sluga, Barry Smith, Michael Smith, Robin Smith, Robert Sokolowski, Robert C. Solomon, Marta Soniewicka, Philip Soper, Ernest Sosa, Nicholas Southwood, Paul Vincent Spade, T. L. S. Sprigge, Eric O. Springsted, George J. Stack, Rebecca Stangl, Jason Stanley, Florian Steinberger, Sören Stenlund, Christopher Stephens, James P. Sterba, Josef Stern, Matthias Steup, M. A. Stewart, Leopold Stubenberg, Edith Dudley Sulla, Frederick Suppe, Jere Paul Surber, David George Sussman, Sigrún Svavarsdóttir, Zeno G. Swijtink, Richard Swinburne, Charles C. Taliaferro, Robert B. Talisse, John Tasioulas, Paul Teller, Larry S. Temkin, Mark Textor, H. S. Thayer, Peter Thielke, Alan Thomas, Amie L. Thomasson, Katherine Thomson-Jones, Joshua C. Thurow, Vzalerie Tiberius, Terrence N. Tice, Paul Tidman, Mark C. Timmons, William Tolhurst, James E. Tomberlin, Rosemarie Tong, Lawrence Torcello, Kelly Trogdon, J. D. Trout, Robert E. Tully, Raimo Tuomela, John Turri, Martin M. Tweedale, Thomas Uebel, Jennifer Uleman, James Van Cleve, Harry van der Linden, Peter van Inwagen, Bryan W. Van Norden, René van Woudenberg, Donald Phillip Verene, Samantha Vice, Thomas Vinci, Donald Wayne Viney, Barbara Von Eckardt, Peter B. M. Vranas, Steven J. Wagner, William J. Wainwright, Paul E. Walker, Robert E. Wall, Craig Walton, Douglas Walton, Eric Watkins, Richard A. Watson, Michael V. Wedin, Rudolph H. Weingartner, Paul Weirich, Paul J. Weithman, Carl Wellman, Howard Wettstein, Samuel C. Wheeler, Stephen A. White, Jennifer Whiting, Edward R. Wierenga, Michael Williams, Fred Wilson, W. Kent Wilson, Kenneth P. Winkler, John F. Wippel, Jan Woleński, Allan B. Wolter, Nicholas P. Wolterstorff, Rega Wood, W. Jay Wood, Paul Woodruff, Alison Wylie, Gideon Yaffe, Takashi Yagisawa, Yutaka Yamamoto, Keith E. Yandell, Xiaomei Yang, Dean Zimmerman, Günter Zoller, Catherine Zuckert, Michael Zuckert, Jack A. Zupko (J.A.Z.)
Edited by Robert Audi, University of Notre Dame, Indiana
Book: The Cambridge Dictionary of Philosophy
Print publication: 27 April 2015, pp ix-xxx
By Janice Capel Anderson, S. Wesley Ariarajah, Constantine Belezos, Ian Boxall, Marc Zvi Brettler, Edward Breuer, Daniel Bruno, Mark Chapman, W. T. Dickens, Mark W. Elliott, Eldon Epp, Tassilo Erhardt, Timothy Gorringe, Harriet Harris, Peter C. Hodgson, Leslie Howsam, Werner G. Jeanrond, Scott McLaren, Wayne A. Meeks, Néstor Míguez, Stephen D. Moore, Robert Morgan, Halvor Moxnes, Peter Neuner, Mark Noll, Jorunn Økland, Gaye Ortiz, John Riches, Christopher Rowland, Nicolaas A. Rupke, Edmund J. Rybarczyk, Lamin Sanneh, Constantine Scouteris, R. S. Sugirtharajah, Willard M. Swartley, William R. Telford, David Thompson, Elena Volkova, J. R. Watson, Gerald West, Michael Wheeler, Keith Whitelam
Edited by John Riches, University of Glasgow
Book: The New Cambridge History of the Bible
Published online: 09 June 2015
Print publication: 13 April 2015, pp xi-xii
Factors associated with inconsistency in self-reported mild traumatic brain injury over time among military personnel in Iraq
Nathaniel W. Nelson, Carolyn R. Anderson, Paul Thuras, Shannon M. Kehle-Forbes, Paul A. Arbisi, Christopher R. Erbes, Melissa A. Polusny
Journal: The British Journal of Psychiatry / Volume 206 / Issue 3 / March 2015
Estimates of the prevalence of mild traumatic brain injury (mTBI) among military personnel and combat veterans rely almost exclusively on retrospective self-reports; however, reliability of these reports has received little attention.
To examine the consistency of reporting of mTBI over time and identify factors associated with inconsistent reporting.
A longitudinal cohort of 948 US National Guard Soldiers deployed to Iraq completed self-report questionnaires screening for mTBI and psychological symptoms while in-theatre 1 month before returning home (time 1, T1) and 1 year later (time 2, T2).
Most respondents (n = 811, 85.5%) were consistent in their reporting of mTBI across time. Among those who were inconsistent in their reports (n = 137, 14.5%), the majority denied mTBI at T1 and affirmed mTBI at T2 (n = 123, 89.8%). Respondents rarely endorsed mTBI in-theatre and later denied mTBI (n = 14, 10.2% of those with inconsistent reports). Post-deployment post-traumatic stress symptoms and non-specific physical complaints were significantly associated with inconsistent reporting of mTBI.
Military service members' self-reports of mTBI are generally consistent over time; however, inconsistency in retrospective self-reporting of mTBI status is associated with current post-traumatic stress symptoms and non-specific physical health complaints.
By Brittany L. Anderson-Montoya, Heather R. Bailey, Carryl L. Baldwin, Daphne Bavelier, Jameson D. Beach, Jeffrey S. Bedwell, Kevin B. Bennett, Richard A. Block, Deborah A. Boehm-Davis, Corey J. Bohil, David B. Boles, Avinoam Borowsky, Jessica Bramlett, Allison A. Brennan, J. Christopher Brill, Matthew S. Cain, Meredith Carroll, Roberto Champney, Kait Clark, Nancy J. Cooke, Lori M. Curtindale, Clare Davies, Patricia R. DeLucia, Andrew E. Deptula, Michael B. Dillard, Colin D. Drury, Christopher Edman, James T. Enns, Sara Irina Fabrikant, Victor S. Finomore, Arthur D. Fisk, John M. Flach, Matthew E. Funke, Andre Garcia, Adam Gazzaley, Douglas J. Gillan, Rebecca A. Grier, Simen Hagen, Kelly Hale, Diane F. Halpern, Peter A. Hancock, Deborah L. Harm, Mary Hegarty, Laurie M. Heller, Nicole D. Helton, William S. Helton, Robert R. Hoffman, Jerred Holt, Xiaogang Hu, Richard J. Jagacinski, Keith S. Jones, Astrid M. L. Kappers, Simon Kemp, Robert C. Kennedy, Robert S. Kennedy, Alan Kingstone, Ioana Koglbauer, Norman E. Lane, Robert D. Latzman, Cynthia Laurie-Rose, Patricia Lee, Richard Lowe, Valerie Lugo, Poornima Madhavan, Leonard S. Mark, Gerald Matthews, Jyoti Mishra, Stephen R. Mitroff, Tracy L. Mitzner, Alexander M. Morison, Taylor Murphy, Takamichi Nakamoto, John G. Neuhoff, Karl M. Newell, Tal Oron-Gilad, Raja Parasuraman, Tiffany A. Pempek, Robert W. Proctor, Katie A. Ragsdale, Anil K. Raj, Millard F. Reschke, Evan F. Risko, Matthew Rizzo, Wendy A. Rogers, Jesse Q. Sargent, Mark W. Scerbo, Natasha B. Schwartz, F. Jacob Seagull, Cory-Ann Smarr, L. James Smart, Kay Stanney, James Staszewski, Clayton L. Stephenson, Mary E. Stuart, Breanna E. Studenka, Joel Suss, Leedjia Svec, James L. Szalma, James Tanaka, James Thompson, Wouter M. Bergmann Tiest, Lauren A. Vassiliades, Michael A. Vidulich, Paul Ward, Joel S. Warm, David A. Washburn, Christopher D. Wickens, Scott J. Wood, David D. Woods, Motonori Yamaguchi, Lin Ye, Jeffrey M. Zacks
Edited by Robert R. Hoffman, Peter A. Hancock, University of Central Florida, Mark W. Scerbo, Old Dominion University, Virginia, Raja Parasuraman, George Mason University, Virginia, James L. Szalma, University of Central Florida
Book: The Cambridge Handbook of Applied Perception Research
Published online: 05 July 2015
Print publication: 26 January 2015, pp xi-xiv
By Venkataraman Anantharaman, Philip D. Anderson, Christopher W. Baugh, J. Stephen Bohan, Kirsten Boyd, Matthias Brachmann, Peter R. Brown, Shelley Calder, David Callaway, Peter Cameron, Jody Crane, Meaghan Cussen, Christina Dempsey, Jonathan A. Edlow, Thomas Fleischmann, Robert L. Freitas, John D. Halamka, Manuel Hernandez, Cherri Hobgood, Jock Hoffman, Steven Horng, Kirk B. Jensen, Jennifer R. Johnson, Stephanie Kayden, Tasnim Khan, Daniel G. Kirkpatrick, James Lennon, Mary Leupold, Thom Mayer, J. Lawrence Mottley, Scott B. Murray, Deirdre Mylod, Larry A. Nathanson, Michael P. Pietrzak, Elke Platz, Nadeem Qureshi, Matthew M. Rice, Andrew Schenkel, Chet Schrader, Puneet Seth, Richard B. Siegrist, David Smith, Robert E. Suter, Carrie Tibbles, Sebastian N. Walker, Lee A. Wallis, Julie Welch, Leana S. Wen
Edited by Stephanie Kayden, Philip D. Anderson, Robert Freitas, Elke Platz
Book: Emergency Department Leadership and Management
Print publication: 27 November 2014, pp ix-xii
By Cheryl P. Anderson, Heidi J. Bauer-Clapp, Kathryn M. Baustian, Ute Brinker, John J. Crandall, Elizabeth M. DeVisser, William N. Duncan, Stefan Flohr, Laura C. Fulginiti, Alison Galloway, Michelle D. Hamilton, Ryan P. Harrod, Uwe Kierdorf, Laura M. King, Anna Kjellström, Danielle Kurin, Krista E. Latham, Marisol Intriago Leiva, Debra L. Martin, Melissa Scott Murphy, Jörg Orschiedt, Ventura R. Pérez, Annemarie Schramm, Andrew C. Seidel, Elena Spanagel, Brian Spatola, Vincent H. Stefan, Herbert H. Lehman, Christopher M. Stojanowski, Rebecca Storey, Christina Torres-Rouff, Rick Weathermon
Edited by Debra L. Martin, University of Nevada, Las Vegas, Cheryl P. Anderson, University of Nevada, Las Vegas
Book: Bioarchaeological and Forensic Perspectives on Violence
Published online: 05 March 2014
Print publication: 13 March 2014, pp x-xii | CommonCrawl |
The possibility of predicting learning performance using features of note taking activities and instructions in a blended learning environment
Minoru Nakayama (ORCID: orcid.org/0000-0001-5563-6901)1,
Kouichi Mutsuura2 &
Hiroh Yamamoto3
International Journal of Educational Technology in Higher Education, volume 14, Article number: 6 (2017)
A formative assessment was introduced into a blended learning course in order to predict participants' learning performance from measurements of their note-taking activity and metrics of their attitudes. Lexical metrics were created by analyzing the contents of students' notes from every class and measuring their characteristics. In two types of regression analysis of these measurements, features of note-taking activity proved to be a significant source of information for predicting final exam scores. Temporal changes in prediction performance during the course are discussed in relation to learning progress and to typical changes in the features of note-taking activity. The analyses also confirmed the effectiveness of the lecturer's note-taking instructions.
The online learning environment has recently become a preferred way of delivering learning content, and it enables participants' learning activities to be recorded and analyzed in order to support those activities and optimize the overall system. For example, typical large-scale systems, called massive open online courses (MOOCs), are used for delivering learning content (Seaton, Bergner, Chuang, Mitros, & Pritchard, 2014). To evaluate both participants' learning activity and the learning systems themselves, participants' access logs and learning performance have been analyzed (Seaton, Nesterko, Mullaney, Reich, & Ho, 2014). When the current learning environments based on information communication technology were developed, their effectiveness was evaluated using participants' learning behavioral data (Nakayama, Kanazawa, & Yamamoto, 2007; Nakayama, Kanazawa, & Yamamoto, 2009). Using a conventional learning assessment approach, such as summative evaluation, learning performance during online courses has been discussed, and the cost benefit of the online learning environment has also often been examined (Bates, 2000). Another approach, known as authentic assessment (Gulikers, Bastiaens, & Kirschner, 2004), focuses on the student's learning progress. Likewise, a conventional formative assessment can improve the learning progress of students and the instructive activities of teachers (Bloom, Hastings, & Madaus, 1971). Since students' learning activity may elicit learning achievement, a formative assessment focuses on activities which improve students' learning progress, in contrast with a summative assessment, which is based on their final results. In addition, a formative assessment can draw on regular learning activities such as note taking and on the scores of various non-test work. Though participants in most university courses are evaluated using final test scores, a formative assessment may better reflect their overall learning performance, and can therefore provide various kinds of information about participants' learning activity and performance. This approach has been applied to the educational improvement of the learning progress (Bell et al., 2015; Bennett, 2015).
Using conventional approaches, the varying effectiveness of students' aptitudes (Cronbach & Snow, 1977) and of learning behaviors such as note-taking activity (Kiewra, 1985, 1989; Kiewra, Benton, Kim, Risch, & Christensen, 1995; Kobayashi, 2005; Piolat, Olive, & Kellogg, 2005) has been studied. The relationships between these factors and learning performance have also been widely analyzed and discussed (Cronbach & Snow, 1977; Nye, Crooks, Powley, & Tripp, 1984; Weener, 1974). These survey and analytical techniques were introduced to the study of the online learning environment, and the learning effectiveness of note taking and some of the causal relationships between learning activities and note-taking behavior were analyzed (Nakayama, Mutsuura, & Yamamoto, 2014a, 2015a, b, 2017). Previous studies of note-taking activity examined these activities in aggregate, so their measures serve as a summative assessment of students' achievement. This suggests that metrics of note-taking activity can instead serve as indices for a formative assessment when the contents of notes are assessed for every session.
While some of these measurements affect students' learning, the detailed factors involved and the timing of their effectiveness remain key issues. Analyzing participants' formative learning process, and revising the metrics in response to students' behavior, provide a means to resolve this problem.
This study extracts some features of students' attitudes and of the contents of notes taken during learning activities, and tracks their contributions to learning performance as the course progresses. During online learning courses, behavioral events are organized as learning activities which are supported by information communication technology. The research questions this paper addresses are therefore: examining the possibility of predicting final exam scores, as a measure of learning performance, using information collected as learning progresses; identifying the learning activity metrics necessary for the prediction; and evaluating the effectiveness of note-taking instruction.
The following topics are also addressed in this paper:
A procedure for estimating final exam scores is proposed, using selected features of the contents of notes taken and participants' characteristics.
The effectiveness of note-taking instructions is evaluated, in order to examine the performance of the estimations.
Possible time periods during the progress of the course for making these measurements are discussed, with regard to the performance of the predictions.
Surveys were conducted over two years during a credit course for Bachelor students at a Japanese university. The contents of notes taken by students were collected after every class, and the other metrics were surveyed twice during the course. Mean values of the metrics for the first and second halves of the course were then compared.
Blended learning courses
The subject was Information Networks, a Bachelor-level credit course. The course was taught in a blended learning style, which consists of face-to-face sessions and an online learning environment outside of the classroom, with weekly face-to-face sessions for 15 weeks (Nakayama, Mutsuura, & Yamamoto, 2011). Participants could take online tests (OT) after each session, using a function of the learning management system (LMS) and the assigned textbook. The online tests function as review exercises for the content of each face-to-face session. Since students can repeat the tests until they are satisfied with their scores, most scores are perfect or nearly so. The LMS recorded the final scores of the online tests; these scores were therefore not included in the evaluation of participants' learning performance. A final exam (FE) was given to all participants to measure their performance at the end of the course.
Note taking instructions
All participants were required to present their notebooks so that the progress of their learning could be tracked. The contents of the notes were scanned and stored on a PC, and all texts were then converted into machine-readable text. In addition to recording the contents of notes, the lecturer reviewed and assessed them for 14 of the 15 weeks of the course. The contents were evaluated using a scale, and the sums of the scores were defined as the individual note-taking assessment score (NT-A). The overall survey conditions remained the same during the two years the course was studied.
This survey was conducted as an action research procedure. During the first year, the survey was conducted without any instructions being given and without suggestions to improve note-taking skills; this condition is defined as the "without instruction" condition. According to our series of surveys (Nakayama, Mutsuura, & Yamamoto, 2011, 2014a, b), note-taking activity may be a key learning process and a possible means of improving learning performance. Therefore, during the second year, instructions concerning note-taking techniques, together with examples of good notes, were given twice, at the beginning and at the mid-point of the course. This condition is defined as the "with instruction" condition.
The number of valid participants was 32 for the "without instruction" condition and 24 for the "with instruction" condition.
Characteristics of participants
The individual characteristics of participants may affect their learning activity, as has been noted in previous studies (Nakayama, Mutsuura, & Yamamoto, 2014a). The following metrics were introduced to measure these characteristics during the two surveys. The constructs are: Personality (Goldberg, 1999; International Personality Item Pool, 2004), Information Literacy (Fujii, 2007), Learning Experience (Nakayama, Yamamoto, & Santiago, 2007) and Note-taking skills (Nakayama, Mutsuura, & Yamamoto, 2014a). The total number of variables is 13. The causal relationships between these characteristics and overall learning performance were confirmed using these metrics (Nakayama, Mutsuura, & Yamamoto, 2014a, b).
The personalities of students were measured using an open resource, the International Personality Item Pool (IPIP) inventory (International Personality Item Pool, 2004). This construct consists of 50 questions which measure five factor scores: "Extroversion" (IPIP-1), "Agreeableness" (IPIP-2), "Conscientiousness" (IPIP-3), "Neuroticism" (IPIP-4) and "Openness to Experience" (IPIP-5).
Information literacy was measured using a 32-question inventory which was developed, and defined as an 8-factor structure, by Fujii (2007). The resulting two meta-factors were summarized as Operational Skills (IL-1) and Attitudes toward Information Literacy (IL-2) (Nakayama, Yamamoto, & Santiago, 2008).
Students' online learning experiences at the university were evaluated using three factor scores which were extracted using inventories developed for this purpose. The inventories consist of 10 items which ask participants about their attitudes toward learning, each rated on a 5-point scale. Factor analysis of the surveyed data was conducted using Promax rotation, and three specific factors were extracted (Nakayama, Yamamoto, & Santiago, 2007): Factor 1 (LE-F1): overall evaluation of the e-learning experience; Factor 2 (LE-F2): learning habits; and Factor 3 (LE-F3): learning strategies. The factor scores of students' learning experience may reflect their attitudes toward their learning behavior.
Note-taking skills
Though many universities provide students with self-evaluation quizzes to measure their own abilities (Pennsylvania State University, Learning Centre, 2017), most Japanese universities do not. A set of 17 question items was developed to measure students' note-taking skills, and three factor structures were extracted using factor analysis (Nakayama, Mutsuura, & Yamamoto, 2011, 2012). The three factors are as follows: NT-F1: recognition of the functions of note taking; NT-F2: methodology of utilizing notes; and NT-F3: presentation of notes.
Evaluations of note contents
Note taking assessment
The contents of all notes taken by participants in each of the sessions were evaluated by the lecturer (Nakayama, Mutsuura, & Yamamoto, 2014a, b). The assessments were divided into two categories, "Good" and "Fair". The assessments for all sessions were summed, and the mean scores were defined as note-taking scores (NTS). As mentioned above, the machine-readable texts of the note contents were used in the following lexical analysis.
Lexical analysis for contents of notes taken
The machine-readable texts from the notes students took were prepared, though figures and tables were excluded. During the face-to-face sessions, the lecturer wrote down and presented key points as text using a document camera. These documents were also converted into text, along with the participants' notes.
The texts of the participants and the lecturer were analyzed lexically using the Japanese morphological analysis tool MeCab (2006). During processing, the nouns, and the sequence of appearance of each individual noun, in the texts of each session were recorded. To measure the note-taking activity of participants, the following indices were defined (Nakayama, Mutsuura, & Yamamoto, 2014b, 2016b).
Word ratio (WR): the ratio between the number of terms written down and the number of terms given (the number of terms participants recorded vs. the number of terms the lecturer presented)
Coverage (CV): the percentage of the terms the lecturer presented that participants recorded in their notes.
The above indices represent writing features based on noun term frequencies. To detect key phrases such as conceptual terminology, term connections (co-occurring terms) were also summarized. Consecutive term connections are known as 2-grams (Nakayama, Mutsuura, & Yamamoto, 2014b, 2016a, b); for example, the noun transitions A-B and B-C are extracted from a text A-B-C. The term connections between pairs of nouns can be summarized mathematically as an adjacency matrix, so the adjacency matrix for each individual and each session can be used to represent the features of the notes taken. In addition, the matrices for two sets of texts can be compared using a distance metric such as the Levenshtein distance, which is a kind of edit distance. In comparison with the contents of the lecturer's text, students sometimes omitted terms the lecturer presented, or recorded terms in addition to those the lecturer presented. The frequency of this kind of behavior can be evaluated using a distance metric based on a comparison of the two adjacency matrices.
As a result, the two indices are defined as follows:
Additional distance (AD) means the sum of the number of additional nodes or edges in a participant's matrix in comparison with the lecturer's matrix.
Insufficient distance (ID) means the sum of the number of missing nodes or edges in a participant's matrix in comparison with the lecturer's matrix.
Both distances are influenced by the total number of terms in the lecturer's presentation in each session, so relative distances are calculated by normalizing with the overall distance of the written contents the lecturer presented in each session. As a result, four note-taking indices were extracted for each session and each individual; grand averages across all sessions were then calculated, along with partial averages for the first and second halves of the sessions, respectively. (A sketch of this computation is given below.)
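To make these definitions concrete, the following is a minimal sketch in Python, assuming that the noun sequences for one session have already been extracted with a morphological analyzer such as MeCab. The function names and the exact treatment of nodes and edges are our assumptions, since the paper does not publish its implementation.

def bigrams(seq):
    # 2-grams: consecutive noun pairs, i.e., the edges of the adjacency structure.
    return set(zip(seq, seq[1:]))

def note_taking_indices(student_nouns, lecturer_nouns):
    s_nodes, l_nodes = set(student_nouns), set(lecturer_nouns)
    s_edges, l_edges = bigrams(student_nouns), bigrams(lecturer_nouns)

    # WR: terms the student wrote down relative to terms the lecturer presented.
    wr = len(student_nouns) / len(lecturer_nouns)
    # CV: percentage of the lecturer's terms that the student recorded.
    cv = 100.0 * len(s_nodes & l_nodes) / len(l_nodes)
    # AD: nodes and edges the student added beyond the lecturer's matrix.
    ad = len(s_nodes - l_nodes) + len(s_edges - l_edges)
    # ID: nodes and edges of the lecturer's matrix that the student omitted.
    idist = len(l_nodes - s_nodes) + len(l_edges - s_edges)

    # Normalize both distances by the overall size of the lecturer's matrix,
    # since the paper computes relative distances per session.
    total = len(l_nodes) + len(l_edges)
    return wr, cv, ad / total, idist / total

Calling note_taking_indices with one student's noun sequence and the lecturer's noun sequence for a session yields the four per-session values that are then averaged across sessions.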
Note taking activities
Some simple statistics from the analysis of the contents of the notes taken are summarized below.
First, the number of terms the lecturer presented in each session is summarized in Fig. 1. This course includes mathematical equations and their presentation in several sessions, so the number of nouns detected in those sessions is not large. Also, the number of nouns presented is small for a few sessions at the end of the course, when the lecturer gave explanations using descriptions from the textbook.
Fig. 1 The number of words the lecturer presented in each session
Second, the note-taking activity of participants was measured using the four metrics mentioned above. The means are summarized for the two conditions in Table 1. As the table shows, the means for the two conditions are comparable, except for AD (additional distance). When note-taking instruction was given, the mean AD distance increased in step with WR (word ratio). The larger number of nouns in students' notes reflects the active encouragement to record nouns in their notes. However, there are no significant differences between the two conditions for any of the four metrics, so the effectiveness of the note-taking instruction was not observed in these metrics.
Table 1 Relative means of features of note taking activity
In a more detailed analysis of note-taking activity, the relationships between these metrics were measured using correlation analysis. The correlation coefficients between the metrics without instruction are summarized as a triangular matrix in Table 2; a subjective assessment of the contents of notes was also conducted, and non-significant coefficients are indicated using parentheses. When instruction was not provided, the word ratio (WR) correlates strongly with additional distance (AD), and also with insufficient distance (ID). Active writing promotes the replacement of the terms the lecturer presented with the participant's own terms. In line with this phenomenon, coverage (CV) correlates negatively with ID: ID increases when CV decreases. In the overall assessment, NTS correlates positively with WR, ID and AD, since more note-taking activity gives the lecturer a better impression of the contents of a student's notes. There are, however, no significant relationships between final exam scores (FE) and these activities, so no contribution of note taking was observed.
Table 2 Correlation coefficients across features of NT (without Inst.; N=32)
When note-taking instruction was given, some relationships changed, as Table 3 shows. The word ratio (WR) also correlates with coverage (CV), in addition to correlating with additional distance (AD), and in this condition CV correlates with AD. These results suggest that more note-taking activity increases both CV and AD, so participants record the nouns the lecturer presented as well as nouns of their own. As a result, the lecturer's assessment of the contents of notes (NTS) is affected by the volume of nouns recorded, which underlies both WR and AD. Moreover, all note-taking metrics correlate with final exam scores (FE). Participants' activities contribute significantly to these relationships, which confirms the effectiveness of note-taking instruction during the course. The features of note-taking activity for the two conditions were comparable, as Table 1 shows, and individual features may depend on the student's learning progress; a detailed analysis is presented in the section "Performance in regards to the progress of the course".
Table 3 Correlation coefficients across features of NT (with Inst.; N=24)
The correlation analyses in the previous section confirmed some of the effectiveness of note-taking activity and of the instructions promoting note taking, but the contribution to learning performance was not quantified. To evaluate the impact of note-taking instruction on participants' learning performance, the relationships between all of the variables mentioned above and final exam scores (FE) were examined. The variable set consists of 13 participant-characteristic variables and 4 note-taking activity variables. A possible relationship using this set of variables (x) can be written as follows:
$$\begin{aligned} &\mathbf{x} \in \{ WR, CV, AD, ID, IPIP_{1\text{-}5}, IL_{1\text{-}2}, LE_{1\text{-}3}, NT\text{-}F_{1\text{-}3} \} \\ &G(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle + b \end{aligned}$$
Performance comparisons between the first and second halves of the courses
First, a simple multiple regression analysis using linear models for G(x) was introduced, in order to determine the key variables and their contributions to final exam scores. During the analysis, a stepwise method of selection was used to choose the variables which were significant. This procedure optimizes the model, and the rate of contribution is expressed as an R-squared value. (An illustrative sketch of this procedure follows.)
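The paper does not specify its statistical software, so the following Python sketch shows only one common form of stepwise selection (forward selection on R-squared, with an illustrative stopping threshold), not the authors' exact procedure.

import numpy as np

def r_squared(X, y):
    # R-squared of an ordinary least-squares fit with an intercept term.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def forward_stepwise(X, y, min_gain=0.01):
    # Greedily add the predictor that most improves R-squared, stopping when
    # the improvement falls below min_gain (the threshold is illustrative).
    selected, best_r2 = [], 0.0
    while True:
        rest = [j for j in range(X.shape[1]) if j not in selected]
        if not rest:
            return selected, best_r2
        gains = {j: r_squared(X[:, selected + [j]], y) for j in rest}
        j_best = max(gains, key=gains.get)
        if gains[j_best] - best_r2 < min_gain:
            return selected, best_r2
        selected.append(j_best)
        best_r2 = gains[j_best]

A call such as forward_stepwise(X, y), with the candidate variables as the columns of X and the final exam scores as y, returns the indices of the selected variables and the final R-squared.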
The results are summarized in Table 4. Taking into account the timing of the note-taking instruction and the differences in the number of terms the lecturer presented (Fig. 1), the four note-taking metrics were summarized under three conditions: means of the first half, means of the second half, and means of the overall course.
Table 4 Selected variables and R-squares as contributions in the multiple regression analysis
In the results, the effective variables and their contributions (partial R-squares) are summarized and compared between the two conditions, i.e., the data with instruction and the data without instruction. Comparing the overall performance of the regressions between the two conditions, the R-squares for the condition with instruction are much higher than for the other condition. According to these results, final exam scores can be explained using the individual sets of variables during the course with instruction. Also, performance is better when variables from the first half of the course are employed. The contributions of the multiple regression models are higher than those of single regression models, so every index can contribute to the relationships.
The lists of variables indicate that some student-characteristic variables were selected for the course without instruction, while note-taking features were included when instruction was given. In particular, the four note-taking activity metrics from the first half, together with NT-F3 (presentation of notes), were selected in the regression analyses for the 1st half of the course and for the 1st and 2nd halves of the course.
Performance in regards to the progress of the course
Since participants learn content and improve their notes in every session, learning performance may rest on cumulative learning behavior; for example, the learning performance at the i-th session may reflect the overall learning activities from the first to the i-th session. As mentioned in the previous section, the four note-taking metrics and one of the note-taking skill factors (NT-F3) contributed to final exam scores (FE), i.e., $\mathbf{x} \in \{WR, CV, AD, ID, NT\text{-}F_{3}\}$, so temporal prediction performance was evaluated using cumulative values of these indices. (A sketch of this computation is given below.)
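A minimal sketch of this cumulative, session-by-session analysis is given below; the array layout (students × sessions × four indices) is our assumption about how the data could be organized.

import numpy as np

def cumulative_session_fits(session_features, nt_f3, fe_scores):
    # session_features: array (n_students, n_sessions, 4) of WR, CV, AD, ID.
    # nt_f3: note-taking skill factor score per student; fe_scores: final exams.
    n_students, n_sessions, _ = session_features.shape
    results = []
    for i in range(1, n_sessions + 1):
        # Cumulative means of the four indices over sessions 1..i.
        X = session_features[:, :i, :].mean(axis=1)
        X = np.column_stack([X, nt_f3])  # x = {WR, CV, AD, ID, NT-F3}
        A = np.column_stack([np.ones(n_students), X])
        coef, *_ = np.linalg.lstsq(A, fe_scores, rcond=None)
        pred = A @ coef
        ss_res = np.sum((fe_scores - pred) ** 2)
        ss_tot = np.sum((fe_scores - fe_scores.mean()) ** 2)
        results.append((i, 1 - ss_res / ss_tot, np.sqrt(ss_res / n_students)))
    return results  # one (session, R-squared, RMSE) triple per session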
The contribution ratios of the variables were calculated as R-squares, and the accuracy of prediction is indicated as root mean square errors (RMSE). The temporal changes are summarized in Fig. 2. The contribution ratios stay around 0.2–0.4 without instruction. When instruction was given, the ratios increased with the number of sessions; during sessions 5–12, the ratios were distributed around 0.8, which is quite high. In the last two sessions the ratios decreased, though they stayed above 0.6. In addition to these improvements, after the fourth session the RMSEs with instruction tended to be smaller than those without instruction. These results suggest that the effectiveness of note-taking activity appears at an early stage of the course and can be maintained until the end of the course.
Fig. 2 R-squares and RMSEs of regression models between final exam scores and means of NT features of course sessions
Possibility of predicting final exam scores
According to the results of the multiple regression analysis, final exam scores reflect the metrics of note-taking activity under instruction. The analysis shows significant relationships between final exam scores (FE), the four metrics of note-taking activity, and the one factor score for note-taking skills (NT-F3). The accuracy of prediction of FE scores, however, was not evaluated precisely. Here, prediction accuracy means the precision of the estimate of the score of a novel participant, using his or her scores on the note-taking metrics, as an index of the capability to generalize.
To estimate scores accurately, a support vector regression (SVR) technique was introduced as a more robust model. A Gaussian kernel was employed in the function G, together with the constant b. Prediction accuracy was evaluated using a leave-one-out procedure which estimated the score of each individual using a model trained on the rest of the data. In this way, the model training process and the prediction process can be evaluated independently of each other.
Prediction performance using SVR
The computed function G(x) can provide estimates of final exam scores (\(\tilde{FE}\)) after optimization training on data that excludes the target individual. The actual calculation was conducted using the LIBSVM package (Chang & Lin, 2011). To evaluate prediction performance, R-squares and prediction-error RMSEs were calculated. The performance was tested under several conditions using selected variables, and the results of these trials suggest that the four features of note-taking activity contribute to prediction performance. (A sketch of the procedure is given below.)
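A sketch of this leave-one-out evaluation is shown below, using scikit-learn's SVR with an RBF (Gaussian) kernel in place of the LIBSVM package the authors cite; the hyperparameter values are illustrative assumptions, not the paper's settings.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut

def loo_svr_predictions(X, y, C=10.0, epsilon=1.0):
    # Estimate each participant's final exam score from a model trained
    # on all of the other participants, then evaluate R-squared and RMSE.
    preds = np.empty(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = SVR(kernel='rbf', C=C, epsilon=epsilon)
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    ss_res = np.sum((y - preds) ** 2)
    r2 = 1 - ss_res / np.sum((y - y.mean()) ** 2)
    rmse = np.sqrt(ss_res / len(y))
    return preds, r2, rmse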
The results for several feature sets are summarized in Table 5. Comparing the performance of the estimations with and without note-taking instruction, the predictions were more accurate when note-taking instruction was given. The R-squares show that the note-taking metrics under the instruction condition can contribute to the prediction of FE ($R^2 > 0.6$), though the contribution of these metrics without instruction is low. The RMSEs for both sets of data are comparable with those obtained using multiple regression analysis, as Fig. 2 shows.
Table 5 R-squares and RMSEs between final exam scores and predictions across sets of selected feature variables
Prediction performance of course progress
The possibility of predicting final learning performance provides useful information for improving students' learning activities during the course. In particular, generalization and the period of evaluation are important issues. Prediction performance using SVR and cumulative information about participants' learning features is therefore calculated in this section, in parallel with the subsection "Performance in regards to the progress of the course".
Here, prediction was again conducted using cumulative values of the four note-taking features and the note-taking factor score (NT-F3), for comparison with the subsection "Performance in regards to the progress of the course". Performance is displayed in Fig. 3 using the same format as in Fig. 2. When instruction was provided, R-squares remain around 0.6 after the 4th session, except for the last two sessions; when instruction was not provided, R-squares remain around 0.2. To evaluate prediction accuracy, a leave-one-out procedure was used. Though overall performance is lower than for multiple regression analysis, the accuracy for the instruction condition shows the possibility of estimating individual final exam scores as the course progresses.
Fig. 3 R-squares and RMSEs of prediction models with SVR between final exam scores and means of NT features of course sessions
As mentioned in the introduction, students' characteristics, including note-taking scores (NTS), have been confirmed to have an effect on final exam scores. The effectiveness of note-taking instruction was also introduced, though the detailed relationships between these variables had not been specified. This paper has tried to examine these relationships mathematically in order to improve the learning process.
Fundamentally, the relationships between final exam scores and features of note-taking activity were analyzed. Though the metrics of note-taking activity for the courses with and without instruction are comparable in Table 1, the correlation relationships between final exam scores and note-taking features changed as a result of the note-taking instruction that was given: all correlation coefficients between the variables were significant in the course with instruction, while they were not significant in the course without instruction. Comparing the correlations across the note-taking metrics between Tables 2 and 3 suggests that the contents of the notes participants took changed. Generally, students replaced the terms presented with their own words in their notes; when note-taking instruction was given, students recorded their own words in addition to the words presented. The metric of additional words written down therefore increased in the course with instruction, as shown in Table 1.
In the next step, the process by which the note-taking metrics gain effectiveness was measured as a formative assessment, using two approaches. First, the relationship between final exam scores and variables of students' individual behavior was evaluated using multiple regression analysis with a stepwise method of variable selection. In the results, the overall contribution of most of the selected note-taking activity metrics was high when instruction was given, whereas the contribution of some of the selected student-characteristic variables was small in the no-instruction condition. The contribution of the set of note-taking activity metrics increased with the number of sessions as the course progressed; measured as an R-square, it remained at a high level between the 4th and 12th sessions of the course, as shown in Fig. 2, but decreased in the last two sessions. Regarding the change in the number of terms the lecturer presented (Fig. 1), the number of terms in the last two sessions was the smallest of all sessions during the course: the lecturer explained the contents mainly using a textbook, and so presented few terms. This may have influenced the cumulative metrics.
Once the relationships had been validated using multiple regression analysis, the possibility of predicting final exam scores during the progress of the course was confirmed. To do this, SVR was introduced as a robust prediction procedure to test the note-taking activity metrics. Again, the contribution to predicting final exam scores remained at around 0.6 between the 4th and 12th sessions of the course, as shown in Fig. 3. These results suggest that it is possible to improve participants' final exam scores during the course: since the prediction function is based on metrics of note-taking activity, it may be possible to provide each participant with appropriate instruction tailored to their individual note-taking abilities.
However, the results are based on a single course at one Japanese university, and the number of participants was not large. The validity of this approach should therefore be interpreted with care, and further validation of these points will be the subject of future study.
The possibility of developing a diagnostic procedure to improve learning performance, using participants' characteristics and features of note-taking activity during a blended learning course, was confirmed. The relationships between these indices and the procedure used to predict final exam scores were examined, and the following results were obtained.
Several metrics of note-taking activity were defined using lexical analysis of the contents of the notes students took. The statistics of the two groups of students, in the courses with and without note-taking instruction, were compared, and the effectiveness of note-taking instruction was observed in the correlational relationships between these metrics.
The relationships between final exam scores and the metrics of note-taking activity were analyzed, and the contribution of these metrics during the course with instruction was confirmed. Moreover, formative analysis showed that these relationships were established after only several sessions of the course.
The possibility of predicting final exam scores was confirmed using support vector regression (SVR) functions, and the contribution of the note-taking metrics was confirmed. The sessions from which scores could be predicted were examined using formative analysis.
These techniques can be applied to improving participants' learning activity through the use of better note-taking methods. A detailed procedure, and confirmation of its effectiveness in various other courses, will be the subject of our further study.
Bates, A. W. T (2000). Managing Technological Change: Strategies for College and University Leaders. San Francisco, CA, USA: Jessey-Bass Publishers.
Bell, C., Jones, N., Lewis, J., Qi, Y., Kirui, D., Stickler, L.,... Liu, S. (2015). Understanding Consequential Assessment Systems of Teaching: Year 2 Final Report to Los Angeles Unified School District. Princeton, NJ, USA: Educational Testing Service.
Bennett, R. E (2015). The changing nature of educational assessment. Review of Research in Education, 39, 370–407.
Bloom, B. S., Hastings, J. T., & Madaus, G. F. (1971). Handbook on Formative and Summative Evaluation of Student Learning. New York, USA: McGraw-Hill Inc.
Cronbach, L. J.,& Snow, R. E. (1977). Aptitudes and Instructional Methods. New York, USA: Irvington Publishers, Inc.
Chang, C. C., & Lin, C. J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3), 27:1–27:27. http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Fujii, Y. (2007). Development of a scale to evaluate the information literacy level of young people –comparison of junior high school students in Japan and Northern Europe. Japan Journal of Educational Technology, 30(4), 387–395.
Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research & Development, 52(3), 67–86.
Goldberg, L. R. (1999). A broad-bandwidth, public domain, personality inventory measuring the lower-level facets of several five-factor models. Personality Psychology in Europe, 7, 7–28.
International Personality Item Pool: (2004). A Scientific Collaboratory for the Development of Advanced Measures of Personality Traits and Other Individual Differences. http://ipip.ori.org. Accessed 28 Oct 2004.
Kiewra, K. A. (1985). Students' note-taking behaviors and the efficacy of providing the instructor's notes for review. Contemporary Educational Psychology, 10, 378–386.
Kiewra, K. A. (1989). A review of note-taking: The encoding-storage paradigm and beyond. Educational Psychology Review, 1(2), 147–172.
Kiewra, K. A., Benton, S. L., Kim, S. I., Risch, N., & Christensen, M. (1995). Effects of note-taking format and study technique on recall and relational performance. Contemporary Educational Psychology, 20, 172–187.
Kobayashi, K. (2005). What limits the encoding effect of note-taking? a meta-analytic examination. Contemporary Educational Psychology, 30, 242–262.
MeCab: (2006). Yet Another Part-of-Speech and Morphological Analyzer. http://mecab.sourceforge.net. Accessed 27 Dec 2010.
Nakayama, M., Kanazawa, H., & Yamamoto, H. (2009). Detecting incomplete learners in a blended learning environment among Japanese university students. International Journal of Emerging Technology in Learning, 4(1), 47–51.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2011). Evaluation of student's notes in a blended learning course. International Journal of New Computer Architectures and their Applications, 1(4), 1080–1089.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2012). Causal analysis of student's characteristics of note-taking activities and learning performance during a fully online course. In Proceedings of 2012 IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communication (pp. 1924–1929). Liverpool, UK.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2014a). Impact of learner's characteristics and learning behaviour on learning performance during a fully online course. The Electronic Journal of e-Learning, 12(4), 394–408.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2014b). A note taking evaluation index using term networks in a blended learning environment. In Proceedings of Eighth International Conference on Complex, Intelligent and Software Intensive Systems (pp. 486–490). Birmingham, UK.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2015a). Relationship between the final test scores and indices of note-taking activity. Japan Journal of Educational Technology, 39(Suppl.), 53–56.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2015b). The prediction of learning performance using features of note taking activities. In Proceedings of 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) (pp. 325–330). Brugge.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2016a). Note-taking evaluation using network illustrations based on term co-occurrence in a blended learning environment. International Journal of Distance Education Technologies, 14, 77–91.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2016b). Lexical analysis of student's learning activities during the giving of instructions for note-taking in a blended learning environment. International Journal of Information and Educational Technology, 6, 1–6.
Nakayama, M., Mutsuura, K., & Yamamoto, H. (2017). Effectiveness of student's note-taking activities and characteristics of their learning performance in two types of online learning. International Journal of Distance Education Technologies, 15.
Nakayama, M., Yamamoto, H., & Santiago, R. (2007). The impact of learner characteristics on learning performance in hybrid courses among Japanese students. The Electronic Journal of e-Learning, 5(3), 195–206.
Nakayama, M., Yamamoto, H., & Santiago, R. (2008). Impact of information literacy and learner characteristics on learning behavior of Japanese students in online courses. International Journal of Case Method Research & Application, XX(4), 403–415.
Nye, P. A., Crooks, T. J., Powley, M., & Tripp, G. (1984). Student note-taking related to university examination performance. Higher Education, 13, 85–97.
Pennsylvania State University (2017). Learning Centre: Lecture Note Taking. http://www.ulc.psu.edu/studyskills/note_taking.html. Accessed 30 Aug 2010.
Piolat, A., Olive, T., & Kellogg, R. T. (2005). Cognitive effort during note taking. Applied Cognitive Psychology, 19, 291–312.
Seaton, D. T., Nesterko, S., Mullaney, T., Reich, J., & Ho, A. (2014). Characterizing video use in the catalogue of MITx MOOCs. eLearning Papers, 37, 33–41.
Seaton, D. T., Bergner, Y., Chuang, I., Mitros, P., & Pritchard, D. E. (2014). Who does what in a massive open online course? Communications of the ACM, 57(4), 58–65.
Ueno, M. (2007). Online outlier detection for e-learning time data. IEICE Transaction, J90-D, 40–51.
Weener, P. (1974). Note taking and student verbalization as instrumental learning activities. Instructional Science, 3, 51–74.
This work was supported by JSPS KAKENHI Grant Number B-26282046, 2014–2016. This paper is an extended version which is based on reports at ESANN2015 (Nakayama et al. 2015b) and a JSET short letter (Nakayama, Mutsuura, & Yamamoto, 2015a). The authors would like to thank those who provided useful comments regarding both of these papers.
MN and HY conceived and designed the survey experiments; KM performed the experiments; MN analyzed the data; MN wrote the paper and KM and HY reviewed it. All authors read and approved the final manuscript.
Information and Communications Engineering, Tokyo Institute of Technology, Ookayama, Meguro, Tokyo, 152-8552, Japan
Minoru Nakayama
Faculty of Economics, Shinshu University, Asahi, 390-8621, Matsumoto, Japan
Kouichi Mutsuura
Shinshu University, Asahi, 390-8621, Matsumoto, Japan
Hiroh Yamamoto
Correspondence to Minoru Nakayama.
Nakayama, M., Mutsuura, K. & Yamamoto, H. The possibility of predicting learning performance using features of note taking activities and instructions in a blended learning environment. Int J Educ Technol High Educ 14, 6 (2017). https://doi.org/10.1186/s41239-017-0048-z
Participant's characteristics | CommonCrawl |
July 2011, 29(3): 1277-1290. doi: 10.3934/dcds.2011.29.1277
Regular level sets of Lyapunov graphs of nonsingular Smale flows on 3-manifolds
Bin Yu 1,
Department of Mathematics, Tongji University, Shanghai 200092, China
Received March 2010; Revised June 2010; Published November 2010
In this paper, we first discuss the regular level set of a nonsingular Smale flow (NSF) on a 3-manifold. The main result about this topic is that a 3-manifold $M$ admits an NSF which has a regular level set homeomorphic to $(n+1)T^{2}$ $(n\in \mathbb{Z}, n\geq 0)$ if and only if $M = M' \# n(S^{1}\times S^{2})$. Then we discuss how to realize a template as a basic set of an NSF on a 3-manifold. We focus on the connection between the genus of the template $T$ and the topological structure of the realizing 3-manifold $M$.
Keywords: templates, Lyapunov graphs, three manifolds, nonsingular Smale flows.
Mathematics Subject Classification: Primary: 37D15, 37D20, 37E99; Secondary: 57N1.
Citation: Bin Yu. Regular level sets of Lyapunov graphs of nonsingular Smale flows on 3-manifolds. Discrete & Continuous Dynamical Systems, 2011, 29 (3) : 1277-1290. doi: 10.3934/dcds.2011.29.1277
MSC Classifications
MSC 2010: Dynamical systems and ergodic theory
37Kxx
33 results in 37Kxx
A GENERALISATION OF WITTEN'S CONJECTURE FOR THE PIXTON CLASS AND THE NONCOMMUTATIVE KDV HIERARCHY
Infinite-dimensional Hamiltonian systems
Alexandr Buryak, Paolo Rossi
Journal: Journal of the Institute of Mathematics of Jussieu , First View
Published online by Cambridge University Press: 18 July 2022, pp. 1-23
In this paper, we formulate and present ample evidence towards the conjecture that the partition function (i.e. the exponential of the generating series of intersection numbers with monomials in psi classes) of the Pixton class on the moduli space of stable curves is the topological tau function of the noncommutative Korteweg-de Vries hierarchy, which we introduced in a previous work. The specialisation of this conjecture to the top degree part of Pixton's class states that the partition function of the double ramification cycle is the tau function of the dispersionless limit of this hierarchy. In fact, we prove that this conjecture follows from the double ramification/Dubrovin–Zhang equivalence conjecture. We also provide several independent computational checks in support of it.
Existence of exponentially and superexponentially spatially localized breather solutions for nonlinear Klein–Gordon lattices in $\mathbb{Z}^{d}$, $d \geq 1$
Nonlinear operators and their properties
Dirk Hennig, Nikos I. Karachalios
Journal: Proceedings of the Edinburgh Mathematical Society / Volume 65 / Issue 2 / May 2022
We prove the existence of exponentially and superexponentially localized breather solutions for discrete nonlinear Klein–Gordon systems. Our approach considers $d$-dimensional infinite lattice models with general on-site potentials and interaction potentials being bounded by an arbitrary power law, as well as systems with purely anharmonic forces, cases which are much less studied, particularly in a higher-dimensional set-up. The existence problem is formulated in terms of a fixed-point equation considered in weighted sequence spaces, which is solved by means of Schauder's Fixed-Point Theorem. The proofs provide energy bounds for the solutions depending on the lattice parameters and its dimension under physically relevant non-resonance conditions.
Inverse scattering transforms for non-local reverse-space matrix non-linear Schrödinger equations
Equations of mathematical physics and other areas of application
WEN-XIU MA, YEHUI HUANG, FUDONG WANG
Journal: European Journal of Applied Mathematics / Volume 33 / Issue 6 / December 2022
Published online by Cambridge University Press: 01 December 2021, pp. 1062-1082
The aim of the paper is to explore non-local reverse-space matrix non-linear Schrödinger equations and their inverse scattering transforms. Riemann–Hilbert problems are formulated to analyse the inverse scattering problems, and the Sokhotski–Plemelj formula is used to determine Gelfand–Levitan–Marchenko-type integral equations for generalised matrix Jost solutions. Soliton solutions are constructed through the reflectionless transforms associated with poles of the Riemann–Hilbert problems.
ON THE INSTABILITY OF PERIODIC WAVES FOR DISPERSIVE EQUATIONS—REVISITED
Incompressible inviscid fluids
FÁBIO NATALI, SABRINA AMARAL
Journal: Nagoya Mathematical Journal / Volume 247 / September 2022
Published online by Cambridge University Press: 25 August 2021, pp. 471-493
Print publication: September 2022
The purpose of this paper is to present an extension of the results in [8]. We establish a more general proof for the moving kernel formula to prove the spectral stability of periodic traveling wave solutions for the regularized Benjamin–Bona–Mahony type equations. As applications of our analysis, we show the spectral instability for the quintic Benjamin–Bona–Mahony equation and the spectral (orbital) stability for the regularized Benjamin–Ono equation.
Dispersive fractalisation in linear and nonlinear Fermi–Pasta–Ulam–Tsingou lattices
Numerical problems in dynamical systems
Hamiltonian and Lagrangian mechanics
Classical measure theory
Harmonic analysis in one variable
PETER J. OLVER, ARI STERN
Journal: European Journal of Applied Mathematics / Volume 32 / Issue 5 / October 2021
We investigate, both analytically and numerically, dispersive fractalisation and quantisation of solutions to periodic linear and nonlinear Fermi–Pasta–Ulam–Tsingou systems. When subject to periodic boundary conditions and discontinuous initial conditions, e.g., a step function, both the linearised and nonlinear continuum models for FPUT exhibit fractal solution profiles at irrational times (as determined by the coefficients and the length of the interval) and quantised profiles (piecewise constant or perturbations thereof) at rational times. We observe a similar effect in the linearised FPUT chain at times t where these models have validity, namely $t = O(h^{-2})$, where h is proportional to the intermass spacing or, equivalently, the reciprocal of the number of masses. For nonlinear periodic FPUT systems, our numerical results suggest a somewhat similar behaviour in the presence of small nonlinearities, which disappears as the nonlinear force increases in magnitude. However, these phenomena are manifested on very long time intervals, posing a severe challenge for numerical integration as the number of masses increases. Even with the high-order splitting methods used here, our numerical investigations are limited to nonlinear FPUT chains with a smaller number of masses than would be needed to resolve this question unambiguously.
q-DISCRETE PAINLEVÉ EQUATIONS: THEIR HIERARCHIES AND PROPERTIES
Other special functions
Functional-differential and differential-difference equations
Differential equations in the complex domain
HUDA DAEFALLH ALRASHDI
Journal: Bulletin of the Australian Mathematical Society / Volume 103 / Issue 2 / April 2021
Published online by Cambridge University Press: 14 September 2020, pp. 333-334
Print publication: April 2021
PROJECTIVE LOOPS GENERATE RATIONAL LOOP GROUPS
Global differential geometry
Gang Wang, Oliver Goertsches, Erxiao Wang
Journal: Journal of the Institute of Mathematics of Jussieu / Volume 21 / Issue 2 / March 2022
We generalize Uhlenbeck's generator theorem of ${\mathcal{L}}^{-}\operatorname{U}_{n}$ to the full rational loop group ${\mathcal{L}}^{-}\operatorname{GL}_{n}\mathbb{C}$ and its subgroups ${\mathcal{L}}^{-}\operatorname{GL}_{n}\mathbb{R}$, ${\mathcal{L}}^{-}\operatorname{U}_{p,q}$: they are all generated by just simple projective loops. Recall that Terng–Uhlenbeck studied the dressing actions of such projective loops as generalized Bäcklund transformations for integrable systems. Our result makes a nice supplement: any rational dressing is the composition of these Bäcklund transformations. This conclusion is surprising in the sense that Lie theory suggests the indispensable role of nilpotent loops in the case of noncompact reality conditions, and nilpotent dressings appear quite complicated and mysterious. The sacrifice is to introduce some extra fake singularities. So we also propose a set of generators if fake singularities are forbidden. A very geometric and physical construction of $\operatorname{U}_{p,q}$ is obtained as a by-product, generalizing the classical construction of unitary groups.
LONG TIME BEHAVIOR OF THE SOLUTIONS OF NLW ON THE $d$-DIMENSIONAL TORUS
Hyperbolic equations and systems
JOACKIM BERNIER, ERWAN FAOU, BENOÎT GRÉBERT
Journal: Forum of Mathematics, Sigma / Volume 8 / 2020
Published online by Cambridge University Press: 06 March 2020, e12
We consider the nonlinear wave equation (NLW) on the $d$-dimensional torus $\mathbb{T}^{d}$ with a smooth nonlinearity of order at least 2 at the origin. We prove that, for almost any mass, small and smooth solutions of high Sobolev indices are stable up to arbitrary long times with respect to the size of the initial data. To prove this result, we use a normal form transformation decomposing the dynamics into low and high frequencies with weak interactions. While the low part of the dynamics can be put under classical Birkhoff normal form, the high modes evolve according to a time-dependent linear Hamiltonian system. We then control the global dynamics by using polynomial growth estimates for high modes and the preservation of Sobolev norms for the low modes. Our general strategy applies to any semilinear Hamiltonian Partial Differential Equations (PDEs) whose linear frequencies satisfy a very general nonresonance condition. The (NLW) equation on $\mathbb{T}^{d}$ is a good example since the standard Birkhoff normal form applies only when $d=1$ while our strategy applies in any dimension.
Dispersionless integrable hierarchies and GL(2, ℝ) geometry
Local differential geometry
Classical differential geometry
EVGENY FERAPONTOV, BORIS KRUGLIKOV
Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 170 / Issue 1 / January 2021
Print publication: January 2021
Paraconformal or GL(2, ℝ) geometry on an n-dimensional manifold M is defined by a field of rational normal curves of degree n – 1 in the projectivised cotangent bundle ℙT*M. Such geometry is known to arise on solution spaces of ODEs with vanishing Wünschmann (Doubrov–Wilczynski) invariants. In this paper we discuss yet another natural source of GL(2, ℝ) structures, namely dispersionless integrable hierarchies of PDEs such as the dispersionless Kadomtsev–Petviashvili (dKP) hierarchy. In the latter context, GL(2, ℝ) structures coincide with the characteristic variety (principal symbol) of the hierarchy.
Dispersionless hierarchies provide explicit examples of particularly interesting classes of involutive GL(2, ℝ) structures studied in the literature. Thus, we obtain torsion-free GL(2, ℝ) structures of Bryant [5] that appeared in the context of exotic holonomy in dimension four, as well as totally geodesic GL(2, ℝ) structures of Krynski [33]. The latter possess a compatible affine connection (with torsion) and a two-parameter family of totally geodesic α-manifolds (coming from the dispersionless Lax equations), which makes them a natural generalisation of the Einstein–Weyl geometry.
Our main result states that involutive GL(2, ℝ) structures are governed by a dispersionless integrable system whose general local solution depends on 2n – 4 arbitrary functions of 3 variables. This establishes integrability of the system of Wünschmann conditions.
The massive Thirring system in the quarter plane
Baoqiang Xia
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 150 / Issue 5 / October 2020
Published online by Cambridge University Press: 23 April 2019, pp. 2387-2416
Print publication: October 2020
The unified transform method (UTM) or Fokas method for analyzing initial-boundary value (IBV) problems provides an important generalization of the inverse scattering transform (IST) method for analyzing initial value problems. In comparison with the IST, a major difficulty in the implementation of the UTM, in general, is the involvement of unknown boundary values. In this paper we analyze the IBV problem for the massive Thirring model in the quarter plane, assuming that the initial and boundary data belong to the Schwartz class. We show that for this integrable model, the UTM is as effective as the IST method: the Riemann–Hilbert problems we formulate for such a problem have explicit (x, t)-dependence and depend only on the given initial and boundary values; they do not involve additional unknown boundary values.
CONSERVED QUANTITIES ON MULTISYMPLECTIC MANIFOLDS
Symplectic geometry, contact geometry
LEONID RYVKIN, TILMANN WURZBACHER, MARCO ZAMBON
Journal: Journal of the Australian Mathematical Society / Volume 108 / Issue 1 / February 2020
Published online by Cambridge University Press: 26 December 2018, pp. 120-144
Given a vector field on a manifold $M$, we define a globally conserved quantity to be a differential form whose Lie derivative is exact. Integrals of conserved quantities over suitable submanifolds are constant under time evolution, the Kelvin circulation theorem being a well-known special case. More generally, conserved quantities are well behaved under transgression to spaces of maps into $M$. We focus on the case of multisymplectic manifolds and Hamiltonian vector fields. Our main result is that in the presence of a Lie group of symmetries admitting a homotopy co-momentum map, one obtains a whole family of globally conserved quantities. This extends a classical result in symplectic geometry. We carry this out in a general setting, considering several variants of the notion of globally conserved quantity.
SPATIAL HAMILTONIAN IDENTITIES FOR NONLOCALLY COUPLED SYSTEMS
Infinite-dimensional dissipative dynamical systems
Miscellaneous topics - Partial differential equations
Representations of solutions
Nonlinear integral equations
BENTE BAKKER, ARND SCHEEL
Published online by Cambridge University Press: 14 November 2018, e22
We consider a broad class of systems of nonlinear integro-differential equations posed on the real line that arise as Euler–Lagrange equations to energies involving nonlinear nonlocal interactions. Although these equations are not readily cast as dynamical systems, we develop a calculus that yields a natural Hamiltonian formalism. In particular, we formulate Noether's theorem in this context, identify a degenerate symplectic structure, and derive Hamiltonian differential equations on finite-dimensional center manifolds when those exist. Our formalism yields new natural conserved quantities. For Euler–Lagrange equations arising as traveling-wave equations in gradient flows, we identify Lyapunov functions. We provide several applications to pattern-forming systems including neural field and phase separation problems.
π-TYPE FERMIONS AND π-TYPE KP HIERARCHY
NA WANG, CHUANZHONG LI
Journal: Glasgow Mathematical Journal / Volume 61 / Issue 3 / September 2019
In this paper, we first construct π-type Fermions. Based on these, we define a π-type Boson–Fermion correspondence, which is a generalization of the classical Boson–Fermion correspondence. We can obtain π-type symmetric functions $S_\lambda^\pi$ from the π-type Boson–Fermion correspondence, analogously to the way we get the Schur functions $S_\lambda$ from the classical Boson–Fermion correspondence (which is the same thing as the Jacobi–Trudi formula). Then, as a generalization of the KP hierarchy, we construct the π-type KP hierarchy and obtain its tau functions.
Novel Conformal Structure-Preserving Algorithms for Coupled Damped Nonlinear Schrödinger System
Approximation methods and numerical treatment of dynamical systems
Hao Fu, Weien Zhou, Xu Qian, Songhe Song
Journal: Advances in Applied Mathematics and Mechanics / Volume 9 / Issue 6 / December 2017
Published online by Cambridge University Press: 28 November 2017, pp. 1383-1403
Print publication: December 2017
This paper introduces two novel conformal structure-preserving algorithms for solving the coupled damped nonlinear Schrödinger (CDNLS) system, which are based on the conformal multi-symplectic Hamiltonian formulation and its conformal conservation laws. The proposed algorithms can preserve corresponding conformal multi-symplectic conservation law and conformal momentum conservation law in any local time-space region, respectively. Moreover, it is further shown that the algorithms admit the conformal charge conservation law, and exactly preserve the dissipation rate of charge under appropriate boundary conditions. Numerical experiments are presented to demonstrate the conformal properties and effectiveness of the proposed algorithms during long-time numerical simulations and validate the analysis.
Integrable Properties of a Variant of the Discrete Hungry Toda Equations and Their Relationship to Eigenpairs of Band Matrices
Basic linear algebra
Convergence and divergence of infinite limiting processes
Qualitative behavior
Difference and functional equations
Difference equations
Yusuke Nishiyama, Masato Shinjo, Koichi Kondo, Masashi Iwasaki
Journal: East Asian Journal on Applied Mathematics / Volume 7 / Issue 4 / November 2017
Print publication: November 2017
The Toda equation and its variants are studied in the field of integrable systems. One particularly generalized time discretisation of the Toda equation is known as the discrete hungry Toda (dhToda) equation, which has two main variants referred to as the dhTodaI equation and dhTodaII equation. The dhToda equations have both been shown to be applicable to the computation of eigenvalues of totally nonnegative (TN) matrices, which are matrices without negative minors. The dhTodaI equation has been investigated with respect to the properties of integrable systems, but the dhTodaII equation has not. Explicit solutions using determinants and matrix representations called Lax pairs are often considered as symbolic properties of discrete integrable systems. In this paper, we clarify the determinant solution and Lax pair of the dhTodaII equation by focusing on an infinite sequence. We show that the resulting determinant solution firmly covers the general solution to the dhTodaII equation, and provide an asymptotic analysis of the general solution as the discrete-time variable goes to infinity.
STABILITY THEORY AND HAMILTONIAN DYNAMICS IN THE EULER IDEAL FLUID EQUATIONS
Hydrodynamic stability
JOACHIM WORTHINGTON
Journal: Bulletin of the Australian Mathematical Society / Volume 96 / Issue 3 / December 2017
Stability of line solitons for the KP-II equation in ℝ². II
Tetsu Mizumachi
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 148 / Issue 1 / February 2018
The KP-II equation was derived by Kadomtsev and Petviashvili to explain the stability of line solitary waves in shallow water. Recently, Mizumachi proved nonlinear stability of 1-line solitons for exponentially localized perturbations. In this paper, we prove stability of 1-line solitons for perturbations in $(1+x^2)^{-1/2-0}H^1(\mathbb{R}^2)$ and perturbations in $H^1(\mathbb{R}^2)\cap\partial_x L^2(\mathbb{R}^2)$.
DISCRETE LINEAR WEINGARTEN SURFACES
F. BURSTALL, U. HERTRICH-JEROMIN, W. ROSSMAN
Published online by Cambridge University Press: 04 September 2017, pp. 55-88
Discrete linear Weingarten surfaces in space forms are characterized as special discrete $\Omega$-nets, a discrete analogue of Demoulin's $\Omega$-surfaces. It is shown that the Lie-geometric deformation of $\Omega$-nets descends to a Lawson transformation for discrete linear Weingarten surfaces, which coincides with the well-known Lawson correspondence in the constant mean curvature case.
A Compact Scheme for Coupled Stochastic Nonlinear Schrödinger Equations
Stochastic analysis
Chuchu Chen, Jialin Hong, Lihai Ji, Linghua Kong
Journal: Communications in Computational Physics / Volume 21 / Issue 1 / January 2017
Published online by Cambridge University Press: 05 December 2016, pp. 93-125
In this paper, we propose a compact scheme to numerically study the coupled stochastic nonlinear Schrödinger equations. We prove that the compact scheme preserves the discrete stochastic multi-symplectic conservation law, discrete charge conservation law and discrete energy evolution law almost surely. Numerical experiments confirm well the theoretical analysis results. Furthermore, we present a detailed numerical investigation of the optical phenomena based on the compact scheme. By numerical experiments for various amplitudes of noise, we find that the noise accelerates the oscillation of the soliton and leads to the decay of the solution amplitudes with respect to time. In particular, if the noise is relatively strong, the soliton will be totally destroyed. Meanwhile, we observe that the phase shift is sensibly modified by the noise. Moreover, the numerical results present inelastic interaction which is different from the deterministic case.
Exponentially accurate Hamiltonian embeddings of symplectic A-stable Runge–Kutta methods for Hamiltonian semilinear evolution equations
Numerical analysis in abstract spaces
Partial differential equations, initial value and time-dependent initial-boundary value problems
Claudia Wulff, Marcel Oliver
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 146 / Issue 6 / December 2016
Published online by Cambridge University Press: 25 October 2016, pp. 1265-1301
We prove that a class of A-stable symplectic Runge–Kutta time semi-discretizations (including the Gauss–Legendre methods) applied to a class of semilinear Hamiltonian partial differential equations (PDEs) that are well posed on spaces of analytic functions with analytic initial data can be embedded into a modified Hamiltonian flow up to an exponentially small error. Consequently, such time semi-discretizations conserve the modified Hamiltonian up to an exponentially small error. The modified Hamiltonian is O(h^p)-close to the original energy, where p is the order of the method and h is the time-step size. Examples of such systems are the semilinear wave equation, and the nonlinear Schrödinger equation with analytic nonlinearity and periodic boundary conditions. Standard Hamiltonian interpolation results do not apply here because of the occurrence of unbounded operators in the construction of the modified vector field. This loss of regularity in the construction can be taken care of by projecting the PDE to a subspace in which the operators occurring in the evolution equation are bounded, and by coupling the number of excited modes and the number of terms in the expansion of the modified vector field with the step size. This way we obtain exponential estimates of the form O(exp(−c/h^{1/(1+q)})) with c > 0 and q ⩾ 0; for the semilinear wave equation, q = 1, and for the nonlinear Schrödinger equation, q = 2. We give an example which shows that analyticity of the initial data is necessary to obtain exponential estimates.
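As a toy illustration of this energy behaviour, the sketch below applies the implicit midpoint rule (the one-stage Gauss–Legendre method) to a cubic oscillator and monitors the energy error over a long time window. The Hamiltonian, step size and crude fixed-point solver are illustrative choices of mine, not taken from the paper:

    import numpy as np

    # Implicit midpoint rule for H(q, p) = p^2/2 + q^4/4, i.e. q' = p, p' = -q^3.
    def f(y):
        q, p = y
        return np.array([p, -q**3])

    def midpoint_step(y, h):
        ym = y.copy()
        for _ in range(25):            # fixed-point iteration for the midpoint value
            ym = y + 0.5 * h * f(ym)
        return 2.0 * ym - y            # y_{n+1} = 2*y_mid - y_n

    H = lambda y: 0.5 * y[1]**2 + 0.25 * y[0]**4
    y, h = np.array([1.0, 0.0]), 0.05
    H0, worst = H(y), 0.0
    for _ in range(100_000):           # integrate up to t = 5000
        y = midpoint_step(y, h)
        worst = max(worst, abs(H(y) - H0))
    print(f"max |H - H0| up to t = 5000: {worst:.2e}")   # stays small: no secular drift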
Units in $\mathbb{Z}_2(C_2\times D_\infty)$
Article 4, Volume 1, Issue 4, December 2012, Pages 33-41
Document Type: Research Paper
R. Sharma 1; Pooja Yadav 2; Kanchan Joshi 3
1Indian Institute of Technology Delhi
2Kamla Nehru College, University of Delhi, Delhi
3Department of Mathematics, University of Delhi, Delhi
In this paper we consider the group algebra $R(C_2\times D_\infty)$. It is shown that $R(C_2\times D_\infty)$ can be represented by a $4\times 4$ block circulant matrix. It is also shown that $\mathcal{U}(\mathbb{Z}_2(C_2\times D_\infty))$ is infinitely generated.
Unit Group; infinite dihedral group; Circulant Matrices
16A27 Group rings of infinite groups; 16U60 Units, groups of units; 20G05 Representation theory; 20G35 Linear algebraic groups over adèles and other rings and schemes; 20H25 Other matrix groups over rings
September 2014, 19(7): 2267-2278. doi: 10.3934/dcdsb.2014.19.2267
On some properties of the Mittag-Leffler function $\mathbf{E_\alpha(-t^\alpha)}$, completely monotone for $\mathbf{t> 0}$ with $\mathbf{0<\alpha<1}$
Francesco Mainardi 1,
Department of Physics and Astronomy, University of Bologna, and INFN, Via Irnerio 46, Bologna, I-40126, Italy
Received April 2013 Revised July 2013 Published August 2014
We analyse some peculiar properties of the function of the Mittag-Leffler (M-L) type, $e_\alpha(t) := E_\alpha(-t^\alpha)$ for $0<\alpha<1$ and $t>0$, which is known to be completely monotone (CM) with a non-negative spectrum of frequencies and times, suitable to model fractional relaxation processes. We first note that (surprisingly) these two spectra coincide, thus providing a universal scaling property of this function, not well pointed out in the literature. Furthermore, we consider the problem of approximating our M-L function with simpler CM functions for small and large times. We provide two different sets of elementary CM functions that are asymptotically equivalent to $e_\alpha(t)$ as $t\to 0$ and $t\to +\infty$. The first set is given by the stretched exponential for small times and the power law for large times, following a standard approach. For the second set we chose two rational CM functions in $t^\alpha$, obtained as the Padé Approximants (PA) $[0/1]$ to the convergent series in positive powers (as $t\to 0$) and to the asymptotic series in negative powers (as $t\to \infty$), respectively. From numerical computations we are led to the conjecture that the second set provides upper and lower bounds to the Mittag-Leffler function.
Keywords: fractional relaxation, Mittag-Leffler function, complete monotonicity, asymptotic analysis, rational approximations.
Mathematics Subject Classification: Primary: 26A33, 33E12; Secondary: 35S10, 45K05.
Citation: Francesco Mainardi. On some properties of the Mittag-Leffler function $\mathbf{E_\alpha(-t^\alpha)}$, completely monotone for $\mathbf{t> 0}$ with $\mathbf{0<\alpha<1}$. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 2267-2278. doi: 10.3934/dcdsb.2014.19.2267
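The conjectured two-sided bounds are easy to check numerically. The sketch below evaluates $e_\alpha(t)$ through the standard spectral (complete-monotonicity) representation and compares it with the two rational functions $1/(1+t^\alpha/\Gamma(1+\alpha))$ and $1/(1+\Gamma(1-\alpha)\,t^\alpha)$; identifying these two expressions with the paper's $[0/1]$ Padé approximants is my reading of the abstract, so treat that identification as an assumption.

    import numpy as np
    from math import gamma, pi
    from scipy.integrate import quad

    def e_ml(t, a):
        """e_a(t) = E_a(-t^a) via the spectral representation
           e_a(t) = int_0^inf exp(-r*t) K_a(r) dr, with density
           K_a(r) = sin(a*pi)/pi * r^(a-1) / (r^(2a) + 2 r^a cos(a*pi) + 1)."""
        def integrand(u):            # substitution r = u^2 tames the r^(a-1) endpoint
            r = u * u
            K = (np.sin(a * pi) / pi * r**(a - 1.0)
                 / (r**(2.0 * a) + 2.0 * r**a * np.cos(a * pi) + 1.0))
            return 2.0 * u * np.exp(-r * t) * K
        val, _ = quad(integrand, 0.0, np.inf, limit=200)
        return val

    a = 0.5
    for t in (0.01, 0.1, 1.0, 10.0, 100.0):
        lower = 1.0 / (1.0 + gamma(1.0 - a) * t**a)   # candidate lower bound
        upper = 1.0 / (1.0 + t**a / gamma(1.0 + a))   # candidate upper bound
        print(f"t = {t:7.2f}   {lower:.6f} <= {e_ml(t, a):.6f} <= {upper:.6f}")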
Ebenezer Bonyah, Samuel Kwesi Asiedu. Analysis of a Lymphatic filariasis-schistosomiasis coinfection with public health dynamics: Model obtained through Mittag-Leffler function. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 519-537. doi: 10.3934/dcdss.2020029
Ndolane Sene. Mittag-Leffler input stability of fractional differential equations and its applications. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 867-880. doi: 10.3934/dcdss.2020050
Jean Daniel Djida, Juan J. Nieto, Iván Area. Parabolic problem with fractional time derivative with nonlocal and nonsingular Mittag-Leffler kernel. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 609-627. doi: 10.3934/dcdss.2020033
Raziye Mert, Thabet Abdeljawad, Allan Peterson. A Sturm-Liouville approach for continuous and discrete Mittag-Leffler kernel fractional operators. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020171
Mehmet Yavuz, Necati Özdemir. Comparing the new fractional derivative operators involving exponential and Mittag-Leffler kernel. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 995-1006. doi: 10.3934/dcdss.2020058
Antonio Coronel-Escamilla, José Francisco Gómez-Aguilar. A novel predictor-corrector scheme for solving variable-order fractional delay differential equations involving operators with Mittag-Leffler kernel. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 561-574. doi: 10.3934/dcdss.2020031
Qiang Du, Jingyan Zhang. Asymptotic analysis of a diffuse interface relaxation to a nonlocal optimal partition problem. Discrete & Continuous Dynamical Systems - A, 2011, 29 (4) : 1443-1461. doi: 10.3934/dcds.2011.29.1443
Yulin Zhao. On the monotonicity of the period function of a quadratic system. Discrete & Continuous Dynamical Systems - A, 2005, 13 (3) : 795-810. doi: 10.3934/dcds.2005.13.795
Frank Neubrander, Koray Özer, Teresa Sandmaier. Rational approximations of semigroups without scaling and squaring. Discrete & Continuous Dynamical Systems - A, 2013, 33 (11&12) : 5305-5317. doi: 10.3934/dcds.2013.33.5305
Jon Aaronson, Michael Bromberg, Nishant Chandgotia. Rational ergodicity of step function skew products. Journal of Modern Dynamics, 2018, 13: 1-42. doi: 10.3934/jmd.2018012
Tingzhi Cheng. Monotonicity and symmetry of solutions to fractional Laplacian equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7) : 3587-3599. doi: 10.3934/dcds.2017154
Esha Chatterjee, Sk. Sarif Hassan. On the asymptotic character of a generalized rational difference equation. Discrete & Continuous Dynamical Systems - A, 2018, 38 (4) : 1707-1718. doi: 10.3934/dcds.2018070
Frank Neubrander, Koray Özer, Lee Windsperger. On subdiagonal rational Padé approximations and the Brenner-Thomée approximation theorem for operator semigroups. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020238
Yanzhao Cao, Song Chen, A. J. Meir. Analysis and numerical approximations of equations of nonlinear poroelasticity. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1253-1273. doi: 10.3934/dcdsb.2013.18.1253
Anhui Gu. Asymptotic behavior of random lattice dynamical systems and their Wong-Zakai approximations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10) : 5737-5767. doi: 10.3934/dcdsb.2019104
Chao Ji. Ground state solutions of fractional Schrödinger equations with potentials and weak monotonicity condition on the nonlinear term. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 6071-6089. doi: 10.3934/dcdsb.2019131
Mei-Qin Zhan. Finite element analysis and approximations of phase-lock equations of superconductivity. Discrete & Continuous Dynamical Systems - B, 2002, 2 (1) : 95-108. doi: 10.3934/dcdsb.2002.2.95
Sandra Carillo. Some remarks on the model of rigid heat conductor with memory: Unbounded heat relaxation function. Evolution Equations & Control Theory, 2019, 8 (1) : 31-42. doi: 10.3934/eect.2019002
Moncef Aouadi, Taoufik Moulahi. Asymptotic analysis of a nonsimple thermoelastic rod. Discrete & Continuous Dynamical Systems - S, 2016, 9 (5) : 1475-1492. doi: 10.3934/dcdss.2016059
Yun Li, Fuke Wu, George Yin. Asymptotic behavior of gene expression with complete memory and two-time scales based on the chemical Langevin equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 4417-4443. doi: 10.3934/dcdsb.2019125
[Infobox: Q, q — letter of the Latin alphabet. Writing system: Latin script. Type: alphabetic and logographic. Unicode code points: U+0051, U+0071. Alphabetical position: 17. Time period: unknown to present.]
Without proper rendering support, you may see question marks, boxes, or other symbols instead of Unicode characters. For an introductory guide on IPA symbols, see Help:IPA.
Q, or q, is the 17th letter of the modern English alphabet and the ISO basic Latin alphabet. Its name in English is cue (pronounced ), plural cues.[1]
[Letter evolution: Egyptian hieroglyph → Phoenician qoph → Greek qoppa]
The Semitic sound value of Qôp was /q/ (voiceless uvular stop), and the form of the letter could have been based on the eye of a needle, a knot, or even a monkey with its tail hanging down.[2][3][4] /q/ is a sound common to Semitic languages, but not found in many European languages.[a] Some have even suggested that the form of the letter Q is even more ancient: it could have originated from Egyptian hieroglyphics.[5][6]
In Greek, qoppa (Ϙ) probably came to represent several labialized velar stops, among them /kʷ/ and /kʷʰ/.[7] As a result of later sound shifts, these sounds in Greek changed to /p/ and /pʰ/ respectively.[8] Therefore, qoppa was transformed into two letters: qoppa, which stood for the number 90,[9] and phi (Φ), which stood for the aspirated sound /pʰ/ that came to be pronounced /f/ in Modern Greek.[10][11]
The Etruscans used Q in conjunction with V to represent /kʷ/, and this usage was copied by the Romans with the rest of their alphabet.[4] In the earliest Latin inscriptions, the letters C, K and Q were all used to represent the two sounds /k/ and /ɡ/, which were not differentiated in writing. Of these, Q was used before a rounded vowel (e.g. ⟨EQO⟩ 'ego'), K before /a/ (e.g. ⟨KALENDIS⟩ 'calendis'), and C elsewhere.[12] Later, the use of C (and its variant G) replaced most usages of K and Q: Q survived only to represent /k/ when immediately followed by a /w/ sound.[13]
The five most common typographic presentations of the capital letter Q.
A long-tailed Q as drawn by French typographer Geoffroy Tory in his 1529 book Champfleury
The printed long-tailed Q was inspired by ancient Roman square capitals: this long-tailed Q, used here in the Latin word "POPVLVSQVE", was carved into Trajan's column c. AD 113.
A short trilingual text showing the proper use of the long- and short-tailed Q. The short-tailed Q is only used when the word is shorter than the tail; the long-tailed Q is even used in all-capitals text.[14]:77
Uppercase "Q"
Depending on the typeface used to typeset the letter Q, the letter's tail may either bisect its bowl as in Helvetica,[15] meet the bowl as in Univers, or lie completely outside the bowl as in PT Sans. In writing block letters, bisecting tails are fastest to write, as they require less precision. All three styles are considered equally valid, with most serif typefaces having a Q with a tail that meets the circle, while sans-serif typefaces are more equally split between those with bisecting tails and those without.[16] Typefaces with a disconnected Q tail, while uncommon, have existed since at least 1529.[17] A common method among typographers to create the shape of the Q is by simply adding a tail to the letter O.[16][18][19]
Old-style serif fonts, such as Garamond, may contain two capital Qs: one with a short tail to be used in short words, and another with a long tail to be used in long words.[17] Some early metal type fonts included up to 3 different Qs: a short-tailed Q, a long-tailed Q, and a long-tailed Q-u ligature.[14] This print tradition was alive and well until the 19th century, when long-tailed Qs fell out of favor: even recreations of classic typefaces such as Caslon began being distributed with only short Q tails.[20][14] Not a fan of long-tailed Qs, American typographer D. B. Updike celebrated their demise in his 1922 book Printing Types, claiming that Renaissance printers made their Q tails longer and longer simply to "outdo each other".[14] Latin-language words, which are much more likely than English words to contain "Q" as their first letter, have also been cited as the reason for their existence.[14] The long-tailed Q had fallen completely out of use with the advent of early digital typography, as many early digital fonts could not choose different glyphs based on the word that the glyph was in, but it has seen something of a comeback with the advent of OpenType fonts and LaTeX, both of which can automatically typeset the long-tailed Q when it is called for and the short-tailed Q when not.[21][22]
Owing to the allowable variation in the Q, the letter is a very distinctive feature of a typeface;[16][23] like the ampersand, the Q is cited as a letter that gives typographers a chance to express themselves.[4]
Identifont, an automated typeface identification service that identifies typefaces by questions about their appearance, asks about the Q tail second if the "sans-serif" option is chosen.[24] Out of Identifont's database, Q tails are divided thus:[25]
Q tail type      Identifont counts
Bisecting        1461   2719
Meets bowl       3363   4521
Outside bowl     271    397
"2" (𝒬) shape    304    428
Inside bowl      129    220
Some typographers prefer one "Q" design over another: Adrian Frutiger, famous for the airport typeface that bears his name, remarked that most of his typefaces feature a Q tail that meets the bowl and then extends horizontally.[19] Frutiger considered such Qs to make for more "harmonious" and "gentle" typefaces.[19] Some typographers, such as Sophie Elinor Brown, have listed "Q" as being among their favorite letters.[26][27]
Lowercase "q"
A comparison of the glyphs of ⟨q⟩ and ⟨g⟩
The lowercase "q" is usually seen as a lowercase "o" or "c" with a descender (i.e., downward vertical tail) extending from the right side of the bowl, with or without a swash (i.e., flourish), or even a reversed lowercase p. The "q"'s descender is usually typed without a swash due to the major style difference typically seen between the descenders of the "g" (a loop) and "q" (vertical). When handwritten, or as part of a handwriting font, the descender of the "q" sometimes finishes with a rightward swash to distinguish it from the letter "g" (or, particularly in mathematics, the digit "9").
Pronunciation and use
[Table: pronunciations of Qq across languages and dialects — in most orthographies ⟨q⟩ appears mainly in the digraph ⟨qu⟩ for /kw/ or /k/ (in French sometimes alone, e.g. cinq 'five'); it is used alone for uvular consonants in romanizations and orthographies of languages such as Hadza, K'iche', Mohegan-Pequot and Nuxalk; in some orthographies it is archaic, silent, or part of ⟨qu⟩ for /w/.]
Phonetic and phonemic transcription
The International Phonetic Alphabet uses ⟨q⟩ for the voiceless uvular stop.
English standard orthography
In English, the digraph ⟨qu⟩ most often denotes the cluster /kw/; however, in borrowings from French, it represents /k/, as in 'plaque'. See the list of English words containing Q not followed by U. Q is the second least frequently used letter in the English language (after Z), with a frequency of just 0.1% in words. Q has the third fewest English words where it is the first letter, after Z and X.
Other orthographies
In most European languages written in the Latin script, such as in Romance and Germanic languages, ⟨q⟩ appears almost exclusively in the digraph ⟨qu⟩. In French, Occitan, Catalan and Portuguese, ⟨qu⟩ represents /k/ or /kw/; in Spanish, it represents /k/. ⟨qu⟩ replaces ⟨c⟩ for /k/ before front vowels ⟨i⟩ and ⟨e⟩, since in those languages ⟨c⟩ represents a fricative or affricate before front vowels. In Italian ⟨qu⟩ represents [kw] (where [w] is the semivowel allophone of /u/).
It is not considered to be part of the Cornish (Standard Written Form), Estonian, Icelandic, Irish, Latvian, Lithuanian, Serbo-Croatian, Scottish Gaelic, Slovenian, Turkish, or Welsh alphabets.
⟨q⟩ has a wide variety of other pronunciations in some European languages and in non-European languages that have adopted the Latin alphabet.
The capital letter Q is used as the currency sign for the Guatemalan quetzal.
The Roman numeral Q is sometimes used to represent the number 500,000.[28]
Related characters
Descendants and related characters in the Latin alphabet
Q with diacritics: Ɋ ɋ ʠ q̃
Small capital q: ꞯ (Used in Japanese linguistics[29])
Gha: Ƣ ƣ
Ancestors and siblings in other alphabets
𐤒 : Semitic letter Qoph, from which the following symbols originally derive
Ϙ ϙ : Greek letter Koppa
𐌒 : Old Italic Q, which is the ancestor of modern Latin Q
Ԛ ԛ : Cyrillic letter Qa
Derived signs, symbols and abbreviations
℺ : rotated capital Q, a signature mark
Ꝗ ꝗ, Ꝙ ꝙ : Various forms of Q were used for medieval scribal abbreviations[30]
Computing codes
Character                     Q                        q
Unicode name                  LATIN CAPITAL LETTER Q   LATIN SMALL LETTER Q
Unicode (decimal, code point) 81, U+0051               113, U+0071
UTF-8 (decimal, hex)          81, 51                   113, 71
Numeric character reference   &#81; &#x51;             &#113; &#x71;
EBCDIC family (decimal, hex)  216, D8                  152, 98
ASCII 1 (decimal, hex)        81, 51                   113, 71
1 Also for encodings based on ASCII, including the DOS, Windows, ISO-8859 and Macintosh families of encodings.
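The code-point rows are easy to verify directly; for instance, a quick Python check (illustrative; any language with Unicode support works):

    for ch in "Qq":
        print(ch, ord(ch), hex(ord(ch)),            # code point, decimal and hex
              ch.encode("utf-8").hex(),             # UTF-8 byte sequence
              f"&#{ord(ch)};", f"&#x{ord(ch):X};")  # numeric character references
    # Q 81 0x51 51 &#81; &#x51;
    # q 113 0x71 71 &#113; &#x71;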
Other representations
[Table: other representations of Q — NATO phonetic alphabet: Quebec; Morse code: – – · –; signal flag; flag semaphore; American manual alphabet (ASL fingerspelling); Braille dots-12345 (Unified English Braille).]
List of English words containing Q not followed by U
Mind your Ps and Qs
^ "Q", Oxford English Dictionary, 2nd edition (1989); Merriam-Webster's Third New International Dictionary of the English Language, Unabridged (1993); "cue," op. cit.
^ Travers Wood, Henry Craven Ord Lanchester, A Hebrew Grammar, 1913, p. 7. A. B. Davidson, Hebrew Primer and Grammar, 2000, p. 4. The meaning is doubtful. "Eye of a needle" has been suggested, and also "knot" Harvard Studies in Classical Philology vol. 45.
^ Isaac Taylor, History of the Alphabet: Semitic Alphabets, Part 1, 2003: "The old explanation, which has again been revived by Halévy, is that it denotes an 'ape,' the character Q being taken to represent an ape with its tail hanging down. It may also be referred to a Talmudic root which would signify an 'aperture' of some kind, as the 'eye of a needle,' ... Lenormant adopts the more usual explanation that the word means a 'knot'.
^ a b c Haley, Allan. "The Letter Q". Fonts.com. Monotype Imaging Corporation. Retrieved .
^ Samuel, Stehman Haldeman (1851). Elements of Latin Pronunciation: For the Use of Students in Language, Law, Medicine, Zoology, Botany, and the Sciences Generally in which Latin Words are Used. J.B. Lippincott. p. 56.
^ Hamilton, Gordon James (2006). The Origins of the West Semitic Alphabet in Egyptian Scripts. Catholic Biblical Association of America. ISBN 9780915170401.
^ Woodard, Roger G. (2014-03-24). The Textualization of the Greek Alphabet. p. 303. ISBN 9781107729308.
^ Noyer, Rolf. "Principal Sound Changes from PIE to Greek" (PDF). University of Pennsylvania Department of Linguistics.
^ Boeree, C. George. "The Origin of the Alphabet". Shippensburg University. Shippensburg University of Pennsylvania. Retrieved .
^ Arvaniti, Amalia (1999). "Standard Modern Greek" (PDF). Journal of the International Phonetic Association. 2 (29): 167-172. doi:10.1017/S0025100300006538. Archived from the original on 2016-03-03.
^ Miller, D. Gary (1994-09-06). Ancient Scripts and Phonological Knowledge. John Benjamins Publishing. pp. 54-56. ISBN 9789027276711.
^ Bispham, Edward (2010-03-01). Edinburgh Companion to Ancient Greece and Rome. Edinburgh University Press. p. 482. ISBN 9780748627141.
^ Sihler, Andrew L. (1995), New Comparative Grammar of Greek and Latin (illustrated ed.), New York: Oxford University Press, p. 21, ISBN 0-19-508345-8
^ a b c d e Updike, Daniel Berkeley (1922). Printing types, their history, forms, and use; a study in survivals. Cambridge, Massachusetts: Harvard University Press. ISBN 1584560568 – via Internet Archive.
^ Ambrose, Gavin; Harris, Paul (2011-08-31). The Fundamentals of Typography: Second Edition. A & C Black. p. 24. ISBN 9782940411764. ...the bisecting tail of the Helvetica 'Q'.
^ a b c Willen, Bruce; Strals, Nolen (2009-09-23). Lettering & Type: Creating Letters and Designing Typefaces. Princeton Architectural Press. p. 110. ISBN 9781568987651. The bowl of the Q is typically similar to the bowl of the O, although not always identical. The style and design of the Q's tail is often a distinctive feature of a typeface.
^ a b Vervliet, Hendrik D. L. (2008-01-01). The Palaeotypography of the French Renaissance: Selected Papers on Sixteenth-century Typefaces. BRILL. pp. 58 (a) 54 (b). ISBN 978-9004169821.
^ Rabinowitz, Tova (2015-01-01). Exploring Typography. Cengage Learning. p. 264. ISBN 9781305464810.
^ a b c Osterer, Heidrun; Stamm, Philipp (2014-05-08). Adrian Frutiger - Typefaces: The Complete Works. Walter de Gruyter. pp. 97 (a) 183 (b) 219 (c). ISBN 9783038212607.
^ Loxley, Simon (2006-03-31). Type: The Secret History of Letters. I.B.Tauris. ISBN 9780857730176. The uppercase roman Q...has a very long tail, but this has been modified and reduced on versions produced in the following centuries.
^ Fischer, Ulrike (2014-11-02). "How to force a long-tailed Q in EB Garamond". TeX Stack Exchange. Retrieved .
^ "What are "Stylistic Sets?"". Typography.com. Hoefler & Co. Retrieved .
^ Bosler, Denise (2012-05-16). Mastering Type: The Essential Guide to Typography for Print and Web Design. F+W Media, Inc. p. 31. ISBN 978-1440313714. Letters that contain truly individual parts [are] S, ... Q...
^ "2: Q Shape". Identifont. Retrieved .
^ "3: $ style". Identifont. Retrieved . To get the numbers in the table, click Question 1 (serif or sans-serif?) or Question 2 (Q shape) and change the value. They appear under X possible fonts.
^ Heller, Stephen (2016-01-07). "We asked 15 typographers to describe their favorite letterforms. Here's what they told us". WIRED. Retrieved .
^ Phillips, Nicole Arnett (2016-01-27). "Wired asked 15 Typographers to introduce us to their favorite glyphs". Typograph.Her. Retrieved .
^ Gordon, Arthur E. (1983). Illustrated Introduction to Latin Epigraphy. University of California Press. pp. 44. ISBN 9780520038981. Retrieved 2015.
^ Barmeier, Severin (2015-10-10), L2/15-241: Proposal to encode Latin small capital letter Q (PDF)
^ Everson, Michael; Baker, Peter; Emiliano, António; Grammel, Florian; Haugen, Odd Einar; Luft, Diana; Pedro, Susana; Schumacher, Gerd; Stötzner, Andreas (2006-01-30). "L2/06-027: Proposal to add Medievalist characters to the UCS" (PDF).
^ See references at Voiceless uvular stop#Occurrence
Media related to Q at Wikimedia Commons
The dictionary definition of Q at Wiktionary
Why is the rod moving to and fro?
These are two pulleys, and a plastic plank is placed above them which can slide freely. The plank has uniform density and it touches both the pulleys. Both pulleys are placed in the same plane, and the rod lies on them horizontally. The pulleys can be rotated in such a way that they move in opposite directions with the same angular velocity.
If we rotate pulley A in the anticlockwise direction, the plank falls off the apparatus to the left. But if we rotate pulley A in the clockwise direction, the plank keeps on moving to and fro. What is the reason behind this?
You can watch the video here if you need more clarity on the question. http://www.youtube.com/watch?v=GlrreXLkzrk
homework-and-exercises newtonian-mechanics forces friction free-body-diagram
Qmechanic♦
Aaryan Dewan
$\begingroup$ Watch the video VERY carefully. There is a directional component to the friction on the beam. The first time that the beam falls off of the (right side of the) pulley assembly, you can see from the shadows, if you watch very carefully, that the experimenter turned the beam around to make it slide the other way off of the pulley. This means that the friction coefficient is not the same in both directions. $\endgroup$
– David White
These are two pulleys, and a plastic plank is placed above them which can slide freely.
That's a slightly misleading way of putting it: there is definitely friction between the pulleys and the plank.
Consider the figure below:
Call $L$ the length of the bar and $m$ its mass.
At the points of contact the pulleys exert forces $F_1$ and $F_2$ to ensure no movement in the vertical direction is possible.
Assume also that the centre of gravity of the bar is at $x=D$, then:
$$F_1+F_2=mg$$
Both forces also exert a torque around the centre of gravity of the bar, and these torques need to cancel out:
$$(L/2+D)F_1=(L/2-D)F_2$$
$$F_1=\frac{L/2-D}{L/2+D}F_2$$
$$\alpha=\frac{L/2-D}{L/2+D}$$
$F_2=\frac{mg}{1+\alpha}$ and $F_1=\frac{\alpha mg}{1+\alpha}$
Only if $D=0$, then $\alpha=1$ and $F_1=F_2=\frac{mg}{2}$.
The friction forces in the points of contact are given by:
$$F_{f,1}=\mu F_1$$
It's now apparent that unless the bar is perfectly balanced on the pulleys, i.e. $D=0$, the frictional forces will always be different and the bar will accelerate in the direction of the largest friction force.
In general with Newton we can say:
$$ma=F_{f,1}-F_{f,2}$$
Where $a$ is the acceleration of the bar.
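To see the two regimes quantitatively, here is a minimal numerical sketch of this force balance (my own illustration, not part of the original answer). It assumes the contact points sit at $x=\pm L/2$, a single kinetic friction coefficient $\mu$, and that the bar always slips on both pulleys; inward-spinning pulleys give a net restoring force $-2\mu mgD/L$ (simple harmonic motion), while outward spinning reverses the sign and the bar runs away:

    import numpy as np

    mu, g, L = 0.4, 9.81, 1.0          # friction coefficient, gravity, bar length (m)

    def accel(D, inward=True):
        """Horizontal acceleration of the centre of gravity, per unit mass."""
        F1 = g * (L / 2 - D) / L       # left contact force (Gert's F1), smaller if D > 0
        F2 = g * (L / 2 + D) / L       # right contact force (Gert's F2)
        if inward:                     # pulley tops spinning toward each other
            return mu * (F1 - F2)      # = -2*mu*g*D/L  ->  restoring, oscillation
        return mu * (F2 - F1)          # = +2*mu*g*D/L  ->  runaway, bar slides off

    dt, D, v = 1e-3, 0.05, 0.0         # symplectic Euler integration of D'' = accel(D)
    amps = []
    for _ in range(20000):             # 20 s of motion
        v += accel(D) * dt
        D += v * dt
        amps.append(abs(D))
    print("max |D| stays bounded:", max(amps))
    print("analytic period 2*pi*sqrt(L/(2*mu*g)) =", 2 * np.pi * np.sqrt(L / (2 * mu * g)))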
Gert
$\begingroup$ I haven't done the math behind this, but can you please explain me in terms of a layman, sir? $\endgroup$
– Aaryan Dewan
$\begingroup$ So is it all because of friction? $\endgroup$
$\begingroup$ @AaryanDewan: when the bar isn't perfectly balanced, the friction force is greater in one direction than the other. That then causes the bar to move horizontally. $\endgroup$
– Gert
Adding to Gert's answer. If you assume the bar starts off even $(D=0)$, rotating the pulleys either way would produce two perfectly equal forces. The bar wouldn't even move. Now, that's pretty much impossible.
In the case where you're rotating them anti-clockwise, the bar will tend to move in a way that increases $D$:
Say $F_1$ is the strongest force. That means that the friction exerted by that pulley is bigger, and the bar will move in that direction. That pulley will also be rotating away from the other one, as you're rotating it anti-clockwise (it doesn't actually matter if it's anti-clockwise, that depends on which pulley you're rotating; it just needs to be away from the other one). Not sure if it's apparent to you, but it seems clear that the bar will move away from the pulley exerting less force, and the centre of gravity will drift further off centre until the bar falls.
When they're rotating the other way around, they'll be rotating closer to each other (think of the points touching the bar: after a quarter spin they'll be touching each other; rotating the other way around would take 3/4 of a spin). The pulley that is producing more force will tend to push the bar towards the other, making $D$ smaller and smaller until it pushes a bit too much and the other one wins, and thus the system is an astable oscillator. If you spun them towards each other really slowly you'd probably reach a point where the bar barely moves.
Hope that helps. I don't feel like formalizing the answer, but it should be understandable.
Fuujin
If you carefully observe the video, you will see that in the first two cases the rod has been placed not exactly at the center but a little off. This, as anybody can guess, puts uneven weights on the pulleys, so the pulley which bears more weight offers greater friction and becomes dominant.
In the third case he places the rod exactly at the center and it moves to and fro.
Courage
$\begingroup$ No, in the last case, he moves the pulley in the opposite direction. There's nothing needed with the exact centre here! $\endgroup$
$\begingroup$ It has nothing to do with moving clockwise or anti (assuming perfectly symmetrical systems); it needs to be in the center in order not to move to and fro $\endgroup$
– Courage
$\begingroup$ No. Not at all. Can you prove that? $\endgroup$
$\begingroup$ See Gert's answer for the math, when it is placed at the center $F_1=F_2$ which implies that the frictional forces are equal, and they are in opposite directions. $\endgroup$
Motion of a rod struck at one end
Moving wedge and pulley system
If the force of friction is 20 N for a moving body does acceleration increase with the increase of the mass?
Why does the maximum speed occur when there is no acceleration when moving up the slope?
Why does a freewheeling flywheel counter-rotate slightly before finally stopping?
Why is an object still moving even if force applied is equal to friction?
What is the force of a wall on a revolvable rod?
Distribution Law of Water Pressure on the Lining in Tunnels with Water Blocking and Drainage Control Design
Hongyuan Huang* | Guoyin Wu | Yao Rong
Key Laboratory of Hydraulic and Waterway Engineering of the Ministry of Education, Chongqing Jiaotong University, Chongqing 400074, China
Engineering Research Center of Diagnosis Technology and Instruments of Hydro-Construction, Chongqing Jiaotong University, Chongqing 400074, China
Jiangxi Transportation Research Institute, Nanchang 3302000, China
[email protected]
For mountain tunnels in water-rich areas, the water pressure on the lining (WPOL) has a significant impact on the parameter selection and operation safety of the lining. Based on the theory of groundwater dynamics and complex function, this paper derives the analytical expressions of the WPOL and the seepage pressure outside the grouting ring. Under different supporting conditions, the authors analyzed how the WPOL was influenced by the head of groundwater, the permeability of the surrounding rock, and the permeability of the grouting ring. The results show that the permeability of the secondary lining not only affects the drainage capacity of the drainage system, but also greatly impacts the WPOL on the composite lining; the WPOL decreases linearly with the growing drainage capacity. To control the WPOL on the composite lining, designers of mountain tunnels in water-rich areas should carefully plan the water blocking and drainage control in accordance with the surrounding environment.
tunnel, water pressure on the lining (WPOL), distribution law, permeability
Water inrush is a common and serious disaster in tunnel construction [1]. The occurrence of water inrush will lead to changes in design plan and construction method. If lots of water flows into the tunnel, the shallow groundwater and surface water will be depleted, and the ground surface and building foundations will subside. Therefore, tunnel designers and constructors need to fully consider the influence of the water pressure on the lining (WPOL) [2].
The WPOL acts on the lining as surface force and seepage force, for the tunnel lining is not entirely impervious to water [3]. Since the lining and surrounding rock are both permeable, the water seepage between them can be considered continuous. Thus, the seepage force of water is comparable to a body force [4]. There are often many pores between the lining and surrounding rock. In this case, the body force acting on the surrounding rock cannot be directly transmitted to the lining, making the lining an independent structure under water pressure. It is an important issue for tunnel designers to correctly calculate the WPOL [5].
Many scholars have long been exploring the WPOL. In 1983, Farmer and Jennings [6] studied the effect of hydrostatic stress on mine support, and concluded that groundwater discharge can be effectively controlled by grouting, and that the WPOL can be reduced under specific conditions. After numerically analyzing seepage, Jaeger and Cook [7] combined the ground stress formed after lining construction with the coupling effect between the groundwater permeability of the surrounding rock and that of the lining to calculate the WPOL. Lee et al. [8, 9] reported that the drainage system will be squeezed during lining construction, and blocked by the migration of soil particles during use, which in turn will increase the WPOL. In 2005, Raymer [10] pointed out that water flows only in connected fractures in hard rock, and the inflow is usually concentrated in fissure-dense areas. To disclose the hydraulic interaction between lining and surrounding rock, Nam et al. [11] numerically simulated the effect of pore water pressure on the displacements of lining and surrounding rock, identified lining permeability and the working condition of the drainage system as the main factors affecting the WPOL, and put forward the WPOL load curve. Louis [12, 13], Zimmerman et al. [14], and Louis and Maini [15] suggested that the lining must withstand about 20% of the static head pressure if the permeable cushion layer continuously drains groundwater. Arjnoi et al. [16] studied the distribution law of pore water pressures and internal forces of tunnel surrounding rocks and lining under different drainage conditions by numerical simulation.
To sum up, the existing studies provide good references for many aspects of tunnel design, namely, WPOL calculation, structural features under stress, and water blocking and drainage control. In actual projects, groundwater can only be discharged from the tunnel through the drainage system behind the lining, rather than through the lining itself. However, many factors that are important to the WPOL and to tunnel design in groundwater-rich areas have not yet been considered as influencing factors of the WPOL, such as the layout, degradation, and blockage of the drainage system, as well as the safety, reliability, and maintenance of the lining during operation [17-22].
2. Distribution Law of WPOL
For a deep-buried tunnel, the groundwater is discharged not only from the two sides, but also in the axial direction. In the axial direction, the inflow mainly comes from the front of the tunnel face. Once the constructed section of the tunnel is sufficiently long, the groundwater in that section can be considered as moving into it from both sides of the tunnel. At this point, the three-dimensional (3D) problem of tunnel drainage reduces to a two-dimensional (2D) problem. If the cross-section of the deep-buried tunnel is much smaller than the thickness of the overlying aquifer, each part suffering local water inrush can be treated as a point, whose drainage function is equivalent to a well. The drainage of the tunnel disturbs the seepage field, causing changes to the head pressure.
Under the complex conditions of hydrogeology and rock occurrence, the seepage in surrounding rock and lining of the deep-buried tunnel can be summarized as the movement of confined water to vertical wells. To study the WPOL of a circular tunnel with water blocking and drainage control design (grouting measures), the following assumptions were put forward:
(1) The surrounding rock is a homogeneous and isotropic continuous medium, and the groundwater is incompressible. The permeability K in each direction follows Darcy's law. The seepage speed is directly proportional to the hydraulic gradient.
(2) There is sufficient groundwater recharge. The recharge ability far exceeds the drainage ability of the tunnel. The groundwater level remains constant, and the head does not drop. That is, the groundwater seepage reaches a stable flow state, and the distant water potential H remains constant, regardless of the initial seepage field.
(3) The stress field of the surrounding rock, and the coupling effect between seepage field and stress field are not considered.
(4) The boundary around the tunnel is an equal-head boundary, and water is released as the head drops. The drainage of the tunnel is characterized by uniform seepage through the lining; the waterproof layer of the tunnel is not considered separately.
Under the above assumptions, a simplified axisymmetric calculation model was constructed as shown in Figure 1, where kl, kr, and kg are the permeability of the lining, the surrounding rock, and the grouting ring, respectively; r is the polar distance of the target point; r0 and rl are the inner and outer radii of the lining, respectively; rg is the outer radius of the grouting ring.
Figure 1. The simplified axisymmetric calculation model
Darcy's law and continuity equation can be expressed in axisymmetric forms:
$\frac{Q}{2\pi r}=k\frac{dh}{dr},\quad Q=\text{const}$ (1)
where, Q is the flow; k is the permeability of the medium; r is the polar distance; h is the water head.
The axial direction of the tunnel is z-axis. The seepage satisfies the Laplace continuity equation:
${{\nabla }^{2}}h=0$ (2)
In cylindrical coordinates, formula (2) can be rewritten as:
$\frac{1}{r}\frac{\partial }{\partial r}\left( r\frac{\partial h}{\partial r} \right)+\frac{1}{{{r}^{2}}}\frac{{{\partial }^{2}}h}{\partial {{\theta }^{2}}}+\frac{{{\partial }^{2}}h}{\partial {{z}^{2}}}=0$ (3)
Since the water flow is perpendicular to the z-axis, $\frac{\partial h}{\partial z}=0$. Meanwhile, the head field function h is symmetrical about the z-axis, indicating that $\frac{\partial h}{\partial \theta}=0$. Formula (3) can be simplified as $\frac{1}{r} \frac{d}{d r}\left(r \frac{d h}{d r}\right)=0$, and integrated as $r\frac{d h}{d r}=C$.
The flow rate is equal at different cross-sections. According to Darcy's law, the flow rate per linear meter of the tunnel can be calculated by:
$Q=2\pi rk\frac{dh}{dr}$ (4)
Hence, $C=\frac{Q}{2\pi k}$ and $r\frac{dh}{dr}=\frac{Q}{2\pi k}$ (5)
Through variable separation, we have $d h=\frac{Q}{2 \pi k} \frac{1}{r} d r$.
Then, the boundary conditions can be introduced as h=H at r=r2 (the radius of influence); h=hg at r=rg; h=hl at r=rl; and h=h0 at r=r0. Integrating dh between these boundaries:
$\left\{ \begin{matrix} \int_{{{h}_{g}}}^{H}{dh}\text{=}\frac{Q}{2\pi {{k}_{r}}}\int_{{{r}_{g}}}^{{{r}_{2}}}{\frac{1}{r}dr} \\ \int_{{{h}_{l}}}^{{{h}_{g}}}{dh}\text{=}\frac{Q}{2\pi {{k}_{g}}}\int_{{{r}_{l}}}^{{{r}_{g}}}{\frac{1}{r}dr} \\ \int_{{{h}_{0}}}^{{{h}_{l}}}{dh}\text{=}\frac{Q}{2\pi {{k}_{l}}}\int_{{{r}_{0}}}^{{{r}_{l}}}{\frac{1}{r}dr} \\ \end{matrix} \right.$ (6)
That is, $\frac{2 \pi k_{r}\left(H-h_{g}\right)}{\ln \left(\frac{r_{2}}{r_{g}}\right)}=\frac{2 \pi k_{g}\left(h_{g}-h_{l}\right)}{\ln \left(\frac{r_{g}}{r_{l}}\right)}=\frac{2 \pi k_{l}\left(h_{l}-h_{0}\right)}{\ln \left(\frac{r_{l}}{r_{0}}\right)}$.
In the above formula, the radius of influence is taken numerically equal to the head, r2=H, and h0=0, i.e. there is no water inside the tunnel. Then, the body force on the surrounding rock, the water pressure on the interface between the surrounding rock and lining, and the water pressure on the grouting ring can be respectively obtained by:
${{P}_{l}}=\gamma {{h}_{l}}=\frac{\gamma H\ln \frac{{{r}_{l}}}{{{r}_{0}}}}{\frac{{{k}_{l}}}{{{k}_{r}}}\ln \frac{H}{{{r}_{g}}}+\frac{{{k}_{l}}}{{{k}_{g}}}\ln \frac{{{r}_{g}}}{{{r}_{l}}}+\ln \frac{{{r}_{l}}}{{{r}_{0}}}}$ (7)
$\begin{align} & {{P}_{g}}=\gamma {{h}_{g}} \\ & =\frac{\gamma H\ln \frac{{{r}_{g}}}{{{r}_{l}}}}{\frac{{{k}_{g}}}{{{k}_{r}}}\ln \frac{H}{{{r}_{g}}}+\ln \frac{{{r}_{g}}}{{{r}_{l}}}}+\frac{\gamma {{h}_{l}}\ln \frac{H}{{{r}_{g}}}}{\ln \frac{H}{{{r}_{g}}}+\frac{{{k}_{r}}}{{{k}_{g}}}\ln \frac{{{r}_{g}}}{{{r}_{l}}}} \\ & \text{=}\gamma H-\frac{\gamma H\ln \frac{H}{{{r}_{g}}}}{\ln \frac{H}{{{r}_{g}}}+\frac{{{k}_{r}}}{{{k}_{g}}}\ln \frac{{{r}_{g}}}{{{r}_{l}}}+\frac{{{k}_{r}}}{{{k}_{l}}}\ln \frac{{{r}_{l}}}{{{r}_{0}}}} \\ \end{align}$ (8)
where, kl, kg, and kr are the permeability of the lining, grouting ring, and surrounding rock, respectively; r0 and rl are the inner and outer diameters of the lining, respectively; rg is the radius of the grouting ring; hl, hg, and H are the water heads outside the lining, outside the grouting ring, and at the center of the tunnel (hereinafter referred to as the center water head), respectively.
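For concreteness, formulas (7) and (8) can be evaluated numerically. The following is a minimal Python sketch (ours, not the authors'), assuming the illustrative parameter values quoted later in Section 3.2 and taking the unit weight of water as γ ≈ 9.81 kN/m³:

```python
import math

# Illustrative parameters (Section 3.2); gamma is the unit weight of
# water in kN/m^3, so pressures come out in kPa.
gamma = 9.81          # kN/m^3
H     = 100.0         # center water head, m (r2 = H by assumption)
r0    = 4.94          # inner radius of the lining, m
rl    = 5.44          # outer radius of the lining, m (0.5 m thick)
rg    = rl + 5.0      # outer radius of the grouting ring, m (5 m thick)
kr    = 0.325         # permeability of the surrounding rock, m/d
kl    = 2.93e-3       # permeability of the lining, m/d
kg    = kr / 10.0     # permeability of the grouting ring (n = kr/kg = 10)

# Formula (7): water pressure on the lining.
Pl = gamma * H * math.log(rl / r0) / (
    (kl / kr) * math.log(H / rg)
    + (kl / kg) * math.log(rg / rl)
    + math.log(rl / r0)
)

# Formula (8), last form: water pressure on the grouting ring.
Pg = gamma * H - gamma * H * math.log(H / rg) / (
    math.log(H / rg)
    + (kr / kg) * math.log(rg / rl)
    + (kr / kl) * math.log(rl / r0)
)

print(f"P_l = {Pl:.0f} kPa, P_g = {Pg:.0f} kPa")  # ~539 kPa and ~867 kPa
```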
The following analysis is based on several special cases.
(1) Without considering grouting, make kg=kr and rg=rl in formula (7):
$P_{l}=\frac{\gamma H\ln \frac{r_{l}}{r_{0}}}{\frac{k_{l}}{k_{r}}\ln \frac{H}{r_{l}}+\ln \frac{r_{l}}{r_{0}}}$ (9)
(2) In actual engineering, the lining is not adhered to the surrounding rock everywhere, and a waterproof membrane is usually applied between them. Therefore, the water passing through the surrounding rock is drained directly out of the tunnel through the drainage system rather than through the lining. Hence, the lining permeability kl can be viewed as infinite, i.e. the lining offers no resistance in the derivation. From formula (9), it can be seen that the WPOL is zero in this case.
According to (5), the relationship between the water head outside the lining and the amount of water inrush or drainage can be obtained:
${{h}_{l}}=H-\frac{Q}{2\pi {{k}_{r}}}\ln \frac{{{r}_{2}}}{{{r}_{g}}}-\frac{Q}{2\pi {{k}_{g}}}\ln \frac{{{r}_{g}}}{{{r}_{l}}}$ (10)
During tunnel construction, the initially supported tunnel needs secondary reinforcement, i.e. a secondary lining. As waterproof slabs are laid outside the secondary lining, the lining can be considered impermeable. If the tunnel is closed without any drainage system, the flow rate Q will be zero. Substituting Q=0 into formula (10), the water head on the lining will reach the center water head, that is, hl=H.
If the tunnel is drained without any control, the water behind the lining will be completely discharged, eliminating the possibility of seepage. Under this condition, the water pressure behind the lining is zero, and the maximum amount of water inrush per linear meter of the tunnel can be obtained.
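The closed form for this maximum follows from formula (10) by setting hl = 0 and solving for Q. A short sketch under the same illustrative assumptions as above (with r2 = H):

```python
import math

# Setting h_l = 0 in formula (10) and solving for Q gives the maximum
# inflow per linear meter under uncontrolled drainage. Illustrative
# parameters as above; the radius of influence is taken as r2 = H.
H, rl, rg = 100.0, 5.44, 10.44   # m
kr, kg    = 0.325, 0.0325        # m/d

Q_max = 2 * math.pi * H / (math.log(H / rg) / kr + math.log(rg / rl) / kg)
print(f"Q_max = {Q_max:.1f} m^3/d per linear meter")  # ~23 m^3/d
```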
3. Influencing Factors of WPOL Distribution
From the above formulas, the groundwater seepage and WPOL are affected by the permeability kr of the surrounding rock, the tunnel radius r0, the outer radius rl of the secondary lining, the permeability kl of the secondary lining, the outer radius rg of the grouting ring, the permeability kg of the grouting ring, and the center water head H. Among them, kr, r0, and H are natural factors, reflecting the environment of the tunnel; kl, rl, rg, and kg are engineering factors, which should be controlled to meet the engineering requirements.
Next, the influence of each factor on the WPOL was explored with and without grouting, and the relationship between the WPOL and drainage was analyzed in detail. The calculation parameters were configured as follows: for a non-circular tunnel, the cross-section of the lining is equivalent to a circle, whose inner radius was set to r0 = 4.94 m; for a tunnel passing through a fault zone with moderately weathered surrounding rock, the surrounding rock permeability was set to 0.200 m/d, 0.0277 m/d, and 0.325 m/d; the lining thickness was set to 35 cm, 40 cm, 45 cm, and 50 cm; the center water head H was set to 50 m, 100 m, and 150 m.
3.1 Case 1: Without grouting
(1) The relationship between the WPOL and the permeability of the surrounding rock
Figure 2 shows the relationship between the WPOL and surrounding rock permeability, at a lining thickness of 50 cm and a center water head H of 100 m. It can be seen that, when the lining permeability remained constant, the WPOL increased with surrounding rock permeability. As the surrounding rock became increasingly permeable, the WPOL initially surged up, then increased slowly, and finally tended to be stable. When the surrounding rock permeability reached kr=0.4 m/d, the relationship curves between the WPOL and surrounding rock permeability converged, indicating that, once the surrounding rock permeability reaches a certain extent, slight permeability changes of the lining concrete material have no impact on the WPOL, and the WPOL will remain constant at a certain value.
Figure 2. The relationship between the WPOL and surrounding rock permeability
(2) The relationship between the WPOL and the permeability of the lining
The relationship between the WPOL and the permeability of the lining was investigated at a lining thickness of 50 cm, under different surrounding rock permeabilities and center water heads. Figure 3 displays the relationship between the WPOL and lining permeability at H=100 m. It can be seen that, when the surrounding rock permeability remained constant, the WPOL decreased with the lining permeability, and approached zero after the lining permeability reached 0.3 m/d. This means the WPOL can be effectively reduced by increasing the lining permeability or arranging reasonable drainage holes.
It can also be seen in Figure 3 that the WPOL increased with the surrounding rock permeability. The three relationship curves converged at the two ends, indicating that the WPOL is not affected by the surrounding rock permeability, but only by the lining permeability, when the lining permeability is very large or very small.
Figure 3. The relationship between the WPOL and lining permeability
Figure 4 presents the relationship between the WPOL and lining permeability at a surrounding rock permeability of 0.325 m/d and center water heads H of 50 m, 100 m, and 150 m. It can be seen that, under each center water head, the WPOL decreased with growing lining permeability: when the lining permeability was small, the WPOL was greatly affected by the center water head; as the lining permeability climbed to 0.3 m/d, the curves approximated the x-axis, suggesting that the WPOL is no longer affected by the center water head and reduces to zero once the lining permeability increases beyond a certain threshold.
Figure 4. The relationship between the WPOL and center water head
(3) The relationship between the WPOL and lining thickness
Figure 5 records the relationship between the WPOL and lining thickness at a center water head H of 100 m, and at permeability ratios n=kr/kl between the surrounding rock and the lining of n=10, n=100, and n=1,000. It can be seen that the WPOL rose rapidly as the lining thickness increased toward 0.5 m, and increased only slightly beyond 0.5 m. In particular, when the lining thickness was above 0.5 m and n=1,000 (the lining is almost impervious), the WPOL was close to the hydrostatic pressure, and could hardly be reduced.
Figure 5. The relationship between the WPOL and lining thickness
3.2 Case 2: With grouting
From formula (7), it can be seen that Pl decreases when rg increases or kg decreases. In other words, the WPOL can be reduced or eliminated through grouting only if the lining has a drainage effect. The following is an analysis of how the permeability and thickness of the grouting ring affect the WPOL. The calculation parameters were configured as: equivalent circle radius r0=4.94 m, lining thickness=0.5 m, rl=5.44 m, kr=0.325 m/d, kl=2.93×10-3 m/d, center water head H=100 m, and the permeability ratio between the surrounding rock and grouting ring n=kr/kg.
Figure 6. The relationship between the WPOL and parameters of the grouting ring
Figure 6 shows the relationship between the WPOL and the thickness of the grouting ring. It can be seen that the WPOL decreased with growing thickness and reducing permeability of the grouting ring. This means the WPOL can be further reduced by improving the water blocking effect of the grouting ring, when the design parameters of the drainage system remain the same. When n ≥ 100 and tg ≥ 8 m, the WPOL decrement was not obvious, whether the grouting ring was made thicker or less permeable. As with controlling water inrush into the tunnel, the WPOL reduction does not necessarily improve with growing thickness or reducing permeability of the grouting ring; there are relatively economical and reasonable values of these parameters.
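The diminishing returns visible in Figure 6 can be reproduced qualitatively by sweeping the grouting-ring thickness tg and the ratio n = kr/kg in formula (7). A sketch, again with the illustrative parameters above:

```python
import math

# Sweep of P_l (formula (7)) over grouting-ring thickness t_g and
# n = kr/kg, mirroring the trend of Figure 6. Values are illustrative.
gamma, H = 9.81, 100.0
r0, rl   = 4.94, 5.44
kr, kl   = 0.325, 2.93e-3

for n in (10, 100, 1000):
    kg = kr / n
    cells = []
    for tg in (2, 4, 6, 8, 10):          # grouting-ring thickness, m
        rg = rl + tg
        Pl = gamma * H * math.log(rl / r0) / (
            (kl / kr) * math.log(H / rg)
            + (kl / kg) * math.log(rg / rl)
            + math.log(rl / r0)
        )
        cells.append(f"{Pl:6.1f}")
    print(f"n = {n:4d}:  " + "  ".join(cells) + "  (kPa)")
```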
3.3 Relationship between drainage and WPOL
The relationship between drainage and the WPOL was investigated by formula (10), for grouting rings with different permeabilities and thicknesses. If the drainage system cannot discharge all the groundwater behind the secondary lining in time, the water blocking effect of the secondary lining must be taken into account. The calculation parameters were configured as: center water head H = 100 m, lining thickness = 0.5 m, grouting ring thickness = 5 m, kr = 0.2 m/d, and n = 10.
(a) Different permeability of the grouting ring
(b) Different thickness of the grouting ring
Figure 7. The relationship between drainage and the WPOL
Figure 7 illustrates the relationship between the WPOL on the secondary lining and drainage. Specifically, Figure 7(a) shows the relationship between the WPOL and drainage at a grouting ring thickness of 5 m and different permeabilities of the grouting ring. It can be seen that: when the drainage was zero (the tunnel is completely waterproof), the WPOL did not drop regardless of the water blocking effect of the grouting ring; and the WPOL decreased linearly with growing drainage (the drainage capacity of the drainage system).
When the drainage was sufficiently large, the water accumulated behind the lining was completely discharged, and the WPOL dropped to zero. At the same drainage, the WPOL decrement increased with the water blocking effect of the grouting ring. Hence, the grouting ring cannot reduce the WPOL until drainage measures are taken. Moreover, the drainage volume needed to achieve the same WPOL dropped with the growing water blocking effect of the grouting ring.
When the drainage was zero (without a drainage system), no matter how small the permeability of the grouting ring, the water head hl behind the lining would reach the center water head H. To control the WPOL, the drainage system must be combined with proper parameters of the grouting ring.
Figure 7(b) shows the relationship between the drainage and the WPOL on the secondary lining at different grouting ring thicknesses. It can be seen that the water pressure on the secondary lining decreased linearly with growing drainage. With the increase of the water blocking effect (greater thickness and lower permeability of the grouting ring), the water pressure-drainage curve became steeper. Compared with thickness, a change in grouting ring permeability has a greater effect on the suppression of the WPOL by drainage.
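The linear decline described above follows directly from formula (10), in which hl is affine in Q. A minimal sketch with the Section 3.3 parameters (illustrative values; hl is floored at zero because the head cannot be negative once the water behind the lining is fully discharged):

```python
import math

# Head behind the lining as a function of drainage rate Q, formula (10).
# Section 3.3 parameters; r2 = H is assumed.
H, rl, rg = 100.0, 5.44, 10.44   # m (0.5 m lining, 5 m grouting ring)
kr, kg    = 0.2, 0.02            # m/d (n = kr/kg = 10)

for Q in (0.0, 5.0, 10.0, 15.0): # m^3/d per linear meter
    hl = H - Q / (2 * math.pi * kr) * math.log(H / rg) \
           - Q / (2 * math.pi * kg) * math.log(rg / rl)
    print(f"Q = {Q:4.1f} -> h_l = {max(hl, 0.0):5.1f} m")
```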
4. Discussion

This paper mainly discusses the distribution law of the WPOL on the secondary lining of the tunnel. The authors derived the WPOL on an actual tunnel with and without grouting, and summarized the change law of the WPOL under multiple factors. The following discusses the grouting measures in light of the WPOL.
4.1 Without grouting
When the lining permeability is kl≤1.0×10−4 m/d or kl≥0.3 m/d, the WPOL is not greatly affected by the surrounding rock permeability. When kl≥0.3 m/d, the WPOL approximates zero. This means the WPOL can be effectively reduced by increasing the lining permeability or designing reasonable drainage holes.
When the surrounding rock permeability is kr≥0.4m/d, the WPOL is not affected by the slight changes in lining permeability, and remains constant. The permeability ratio n between the surrounding rock and the lining has a great impact on the WPOL. The smaller the ratio, the smaller the WPOL. The inverse is also true.
To effectively reduce the WPOL, it is necessary to lower the permeability of the surrounding rock while increasing the permeability of the lining. This principle should be adopted in the waterproofing and drainage design of all tunnels to stabilize the surrounding rock and lining. The specific engineering measures include a reasonable arrangement of drainage holes, grouting the surrounding rock, and properly increasing the permeability of the lining concrete materials.
4.2 With grouting
The tunnel drainage decreases with the growing thickness or reducing permeability of the grouting ring. The decline of the WPOL will ensue. When the grouting ring reaches a certain thickness, the WPOL will no longer decrease obviously. When the grouting ring achieves the optimal thickness, the WPOL will cease decreasing. Therefore, the grouting ring thickness should be properly selected to realize good water blocking effect at a reasonable cost.
The control of groundwater discharge mainly depends on the grouting, while the unloading of the WPOL hinges on the lining permeability and the drainage method. If the lining permeability is sufficiently small, or the drainage system is designed for full blocking (no discharge), the WPOL cannot be reduced no matter how much thicker or less permeable the grouting ring becomes. Therefore, if the lining has a proper permeability or the drainage system is effective, grouting the surrounding rock can limit the groundwater discharge and also reduce the WPOL.
4.3 Influencing factors of the WPOL
From the above analysis, it can be seen that the WPOL is closely related to the head of groundwater, the permeability of the surrounding rock, thickness and permeability of the grouting ring (the design of the drainage system), and the permeability of lining.
In mountain tunnels, the WPOL on the composite lining directly bears on the performance of a drainage system, and decreases linearly with the growing drainage capacity. If the tunnel lies in a water-rich area, the WPOL should be regulated by designing the plan for water blocking and drainage control in accordance with the surrounding environment. The following points should be noticed during the design:
(1) The designed WPOL should match the drainage capacity of the tunnel. Full consideration should be given to the reduction of drainage capacity induced by sedimentation and extrusion by external forces.
(2) The design of the drainage system should focus on the slopes of the blind, longitudinal, and transverse drainage pipes. Special drainage media with smooth surfaces could be adopted if conditions permit, aiming to minimize the effect of sediments (e.g. silt and calcium ions) on drainage capacity.
(3) During the construction, the drainage system should be fully protected to prevent local deformation and blockage under external construction forces.
(4) The designer must pay attention to the suppression effect of the grouting ring on the WPOL on the composite lining. The thickness of the grouting ring should be controlled between 6m and 8m.
(5) The permeability of the grouting ring should match the drainage capacity of the drainage system. The WPOL on the composite lining will be small, when the drainage capacity of the system is greater than that of the grouting ring.
Acknowledgment

The authors gratefully acknowledge the financial support from the Natural Science Foundation Project of China (Grant No.: 52068033), and the Science and Technology Project of Jiangxi Provincial Department of Transport (Grant No.: 2018C0028).
References

[1] Zhao, X., Yang, X. (2019). Experimental study on water inflow characteristics of tunnel in the fault fracture zone. Arabian Journal of Geosciences, 12(13): 399. https://doi.org/10.1007/s12517-019-4561-3
[2] Huang, Y., Fu, Z., Chen, J., Zhou, Z., Wang, J. (2015). The external water pressure on a deep buried tunnel in fractured rock. Tunnelling and Underground Space Technology, 48: 58-66. https://doi.org/10.1016/j.tust.2015.02.003
[3] Zhang, X., Jiang, Z., Feng, S., Chen, S. (2011). Study on the determination of permeability coefficient of fractured rock mass under high pressure test condition. Journal of Hydroelectric Engineering, 30(1): 155-159.
[4] Ren, W.F. (2013). Theory research of stress field displacement field and seepage field and study on grouting waterproofing of high water pressures tunnel. Central South University.
[5] Morsali, M., Nakhaei, M., Rezaei, M., Hassanpour, J., Nassery, H. (2017). A new approach to water head estimation based on water inflow into the tunnel (case study: Karaj Water Conveyance Tunnel). Quarterly Journal of Engineering Geology and Hydrogeology, 50(2): 126-132. https://doi.org/10.1144/qjegh2016-015
[6] Farmer, I.W., Jennings, D.H. (1983). Effect of strata permeability on the radial hydrostatic pressures on mine shaft linings. International Journal of Mine Water, 2(3): 17-24. https://doi.org/10.1007/BF02504569
[7] Jaeger, J.C., Cook, N.G.W. (1979). Fundamentals of Rock Mechanics (3rd ed.). London: Chapman and Hall, 86-88.
[8] Lee, I.M., Kim, J.H., Reddi, L.N. (2002). Clogging phenomena of the residual soil-geotextile filter system. Geotechnical Testing Journal, 25(4): 379-390. https://doi.org/10.1520/GTJ11299J
[9] Lee, I.M., Nam, S.W. (2004). Effect of tunnel advance rate on seepage forces acting on the underwater tunnel face. Tunnelling and Underground Space Technology, 19(3): 273-281. https://doi.org/10.1016/j.tust.2003.11.005
[10] Raymer, J.H. (2005). Groundwater inflow into hard rock tunnels: A new look at inflow equations. In Rapid Excavation and Tunneling Conference, pp. 457-468.
[11] Nam, T.S., Joo, E.J., Choi, G.C., Shin, J.H. (2007). Hydraulic lining–ground interaction of subsea tunnels. Chinese Journal of Rock Mechanics and Engineering, 26(S2): 3674-3681.
[12] Louis, C. (2011). Rock Hydraulics in Rock Mechanics. Verlag Wien New York.
[13] Louis, C. (2011). A study of groundwater flow in jointed rock and its influence on the stability of rock masses. London: Imperial College.
[14] Zimmerman, R.W., Chen, G., Hadgu, T., Bodvarsson, G.S. (1993). A numerical dual-porosity model with semianalytical treatment of fracture/matrix flow. Water Resources Research, 29(7): 2127-2137. https://doi.org/10.1029/93WR00749
[15] Louis, C., Maini, Y.N. (1970). Determination of in-situ hydraulic parameters in jointed rock. International Society of Rock Mechanics, Proceedings, 1(1-32): 234-245. https://trid.trb.org/view/127706
[16] Arjnoi, P., Jeong, J.H., Kim, C.Y., Park, K.H. (2009). Effect of drainage conditions on porewater pressure distributions and lining stresses in drained tunnels. Tunnelling and Underground Space Technology, 24(4): 376-389. https://doi.org/10.1016/j.tust.2008.10.006
[17] Zhang, Y.T. (2003). Discussion on external hydraulic pressure upon rock tunnel lining. Modern Tunnelling Technology, 40(3): 1-4.
[18] Shin, H.S., Youn, D.J., Chae, S.E., Shin, J.H. (2009). Effective control of pore water pressures on tunnel linings using pin-hole drain method. Tunnelling and Underground Space Technology, 24(5): 555-561. https://doi.org/10.1016/j.tust.2009.02.006
[19] Bian, K., Xiao, M., Chen, J. (2009). Study on coupled seepage and stress fields in the concrete lining of the underground pipe with high water pressure. Tunnelling and Underground Space Technology, 24(3): 287-295. https://doi.org/10.1016/j.tust.2008.10.003
[20] El Tani, M. (2003). Circular tunnel in a semi-infinite aquifer. Tunnelling and Underground Space Technology, 18(1): 49-55. https://doi.org/10.1016/S0886-7798(02)00102-5
[21] Kolymbas, D., Wagner, P. (2007). Groundwater ingress to tunnels–the exact analytical solution. Tunnelling and Underground Space Technology, 22(1): 23-27. https://doi.org/10.1016/j.tust.2006.02.001
[22] Huangfu, M., Wang, M.S., Tan, Z.S., Wang, X.Y. (2010). Analytical solutions for steady seepage into an underwater circular tunnel. Tunnelling and Underground Space Technology, 25(4): 391-396. https://doi.org/10.1016/j.tust.2010.02.002
Publication details for Prof Ray Sharples
Beifiori, Alessandra, Mendel, J. Trevor, Chan, Jeffrey C. C., Saglia, Roberto P., Bender, Ralf, Cappellari, Michele, Davies, Roger L., Galametz, Audrey, Houghton, Ryan C. W., Prichard, Laura J., Smith, Russell, Stott, John P., Wilman, David J., Lewis, Ian J., Sharples, Ray & Wegner, Michael (2017). The KMOS Cluster Survey (KCS). I. The Fundamental Plane and the Formation Ages of Cluster Galaxies at Redshift 1.4 < Z < 1.6. The Astrophysical Journal 846(2): 120.
ISSN/ISBN: 1538-4357 (electronic)
DOI: 10.3847/1538-4357/aa8368
We present the analysis of the fundamental plane (FP) for a sample of 19 massive red-sequence galaxies ($M_{\star} > 4\times 10^{10}\,M_{\odot}$) in three known overdensities at $1.39 < z < 1.61$ from the K-band Multi-object Spectrograph (KMOS) Cluster Survey, a guaranteed-time program with spectroscopy from the KMOS at the VLT and imaging from the Hubble Space Telescope. As expected, we find that the FP zero-point in B band evolves with redshift, from the value of 0.443 for Coma to −0.10 ± 0.09, −0.19 ± 0.05, and −0.29 ± 0.12 for our clusters at z = 1.39, z = 1.46, and z = 1.61, respectively. For the most massive galaxies ($\log M_{\star}/M_{\odot} > 11$) in our sample, we translate the FP zero-point evolution into a mass-to-light-ratio M/L evolution, finding $\Delta \log M/L_{B}=(-0.46\pm 0.10)z$, $(-0.52\pm 0.07)z$, and $(-0.55\pm 0.10)z$, respectively. We assess the potential contribution of the galaxy structural and stellar velocity dispersion evolution to the evolution of the FP zero-point and find it to be ~6%–35% of the FP zero-point evolution. The rate of M/L evolution is consistent with galaxies evolving passively. Using single stellar population models, we find an average age of $2.33_{-0.51}^{+0.86}$ Gyr for the $\log M_{\star}/M_{\odot} > 11$ galaxies in our massive and virialized cluster at z = 1.39, $1.59_{-0.62}^{+1.40}$ Gyr in a massive but not virialized cluster at z = 1.46, and $1.20_{-0.47}^{+1.03}$ Gyr in a protocluster at z = 1.61. After accounting for the difference in the age of the universe between redshifts, the ages of the galaxies in the three overdensities are consistent within the errors, with possibly a weak suggestion that galaxies in the most evolved structure are older.
Neurogenomic insights into paternal care and its relation to territorial aggression
Syed Abbas Bukhari, Michael C. Saul, Noelle James, Miles K. Bensky, Laura R. Stein, Rebecca Trapp & Alison M. Bell
Motherhood is characterized by dramatic changes in brain and behavior, but less is known about fatherhood. Here we report that male sticklebacks—a small fish in which fathers provide care—experience dramatic changes in neurogenomic state as they become fathers. Some genes are unique to different stages of paternal care, some genes are shared across stages, and some genes are added to the previously acquired neurogenomic state. Comparative genomic analysis suggests that some of these neurogenomic dynamics resemble changes associated with pregnancy and reproduction in mammalian mothers. Moreover, gene regulatory analysis identifies transcription factors that are regulated in opposite directions in response to a territorial challenge versus during paternal care. Altogether these results show that some of the molecular mechanisms of parental care might be deeply conserved and might not be sex-specific, and suggest that tradeoffs between opposing social behaviors are managed at the gene regulatory level.
In many species, parents provide care for their offspring, which can improve offspring survival. There is fascinating diversity in the ways in which parents care for their offspring, from infant carrying behavior in titi monkeys, poison dart frogs and spiders to provisioning of offspring in burying beetles and birds1,2. The burden of parental care does not always land exclusively on females; in some species both parents provide care and in others males are solely responsible for care.
Our understanding of the molecular and neuroendocrine basis of parental care has been largely influenced by studies in mammals, where maternal care is the norm. In mammals, females experience cycles of estrus, pregnancy, childbirth and lactation as they become mothers, all of which are coordinated by hormones. While maternal care is often primed by hormonal and physiological changes related to embryonic or fetal development, the primers for paternal behavior are likely to be more subtle, such as the presence of eggs or offspring3,4. Despite this subtlety, there is growing evidence that males can also experience changes in physiology and behavior as they become fathers, some of which resemble changes in mothers5. For example, men experience increased oxytocin6 and a drop in testosterone7 following the birth of a child. Indeed, a recent study in burying beetles showed that the neurogenomic state of fathers when they are the sole providers of care closely resembles the neurogenomic state of mothers8.
There is taxonomic diversity in the specific behavioral manifestations of care, but all care-giving parents go through a predictable series of stages as they become mothers or fathers, from preparatory stages prior to fertilization (e.g. territory establishment and nest building) to the care of developing embryos (e.g. pregnancy, incubation), to care of free-living offspring (e.g. provisioning of nestlings, lactation, etc). Each stage is characterized by a set of behaviors and events, and the transition to the next stage depends on the successful completion of the preceding stage, e.g. ref. 9. The temporal ordering of stages, combined with our understanding of the neuroendocrine dynamics of reproduction10, prompts at least three non-mutually exclusive hypotheses about how we might expect gene expression in the brain to change over the course of parental care. First, because each stage is characterized by a particular set of behaviors, each stage might have a unique neurogenomic state associated with it (the unique hypothesis). Second, some of the demands of parenting remain constant across stages, e.g. defending a nest site, therefore we might expect to see the signal of a preceding stage persist into subsequent stages (the carryover hypothesis), resulting in shared genes among stages, especially between stages close together in the series. Finally, extending the reasoning further, and considering that parents must pass through one stage before proceeding to the next, genes associated with one stage might be added to the previous stage as a parent proceeds through the stages (the additivity hypothesis, an extension of the carryover hypothesis). Whether changes that occur at the neurogenomic level can be mapped on to behaviorally defined (as opposed to endogenously defined) stages of parental care is unknown. Moreover, we know little about whether there are genes that conform to a unique, carryover or additive pattern across stages of care. These hypotheses provide a novel conceptual framework for improving our understanding of parental care at the molecular level, and could serve as a model for studying other life events that comprise a series of behaviorally defined stages, e.g. stages of territory establishment, stages of pair-bonding, stages of dispersal, etc.
Unlike mammals, paternal care is relatively common in fishes: of the fishes that display parental care, 80% of them provide some form of male care, therefore fish are good subjects for understanding the molecular orchestrators of paternal care11,12. Moreover, the basic building blocks of parental care are ancient and deeply conserved in vertebrates13. For example, the hormone prolactin was named for its essential role in lactation in mammals, but had functions related to parental care in fishes long before mammals evolved14. Growing evidence for deep homology of brain circuits related to social behavior15,16,17,18 suggests that the diversity of parental care among vertebrates is underlain by changes in functionally conserved genes operating within similar neural circuits19.
In this study, we track the neurogenomic dynamics of the transition to fatherhood in male stickleback fish by measuring gene expression (RNA-Seq) in two brain regions containing nodes within the social behavior network, diencephalon and telencephalon. Gene expression in experimental males is compared across five different stages (nest, eggs and three time points after hatching) and relative to a control group. In this species, fathers are solely responsible for the care of the developing offspring, and male sticklebacks go through a predictable series of stages as they become fathers, from territory establishment and nest building to mating, caring for eggs, hatching and caring for fry20.
In addition to providing care, parents must be vigilant to defend their vulnerable dependents from potential predators or other threats. Tradeoffs between parental care and territory defense have been particularly well studied in the ecological literature, e.g.21, and parental care and territorial aggression represent the extremes on a continuum of social behavior—from strongly affiliative to strongly aversive. Therefore, an additional goal of this study is to compare and contrast the neurogenomics of paternal care with the neurogenomic response to a territorial challenge. As parental care and territorial aggression are social behaviors and both utilize circuitry within the social behavior network in the brain15,16,17, we expect to observe similarities between parental care and a territorial challenge at the molecular level. However, given their position at opposite ends of the continuum of social behavior, along with neuroendocrine tradeoffs between them22, here we test the hypothesis that opposition between parental care and territorial aggression is reflected at the molecular and/or gene regulatory level.
Altogether results suggest that some of the molecular mechanisms of parental care are deeply conserved and are not sex-specific, and suggest that tradeoffs between opposing social behaviors are managed at the gene regulatory level.
Neurogenomic dynamics of paternal care
There were dramatic neurogenomic differences associated with paternal care. A large number of genes—almost 10% of the transcriptome—were differentially expressed between the control and experimental groups over the course of the parenting period (Fig. 1a, Supplementary Data 1). Within each stage, a comparable number of genes were up- and down-regulated. There were significant gene expression differences between the control and experimental groups within both brain regions; relatively more genes were differentially expressed in diencephalon.
Neurogenomic dynamics of paternal care. a The number of up- and down-regulated differentially expressed genes (DEGs) at each stage of paternal care in diencephalon and telencephalon. b Summary of GO-terms that were enriched in up- and down-regulated genes at each stage in the two brain regions. c The expression profile of candidate genes related to maternal care (galanin, galanin receptor 1, progesterone receptor, estrogen receptor 1, oxytocin) across stages, with expression in the two brain regions plotted relative to the appropriate circadian control group; data points represent individual samples with means and s.e.m. indicated. Statistical significance of these genes was assessed as a pairwise contrast between a stage and its control (see Supplementary Data 1 for full list of genes; source data are in GEO GSE134508) using negative binomial distribution with generalized linear models in edgeR. Boxes surround means that are statistically different between the control and experimental condition within the stage.
Functional enrichment analysis of the differentially expressed genes (DEGs) suggests that paternal care requires changes in energy metabolism in the brain along with modifications of the immune system and transcription. Genes associated with the immune response were down-regulated in both brain regions and during most stages relative to the control group. Genes associated with energy metabolism and the adaptive component of the immune response were upregulated in telencephalon. Genes associated with the stress response were downregulated in both brain regions around the day of hatching. Genes associated with energy metabolism were downregulated as the fry emerged (Fig. 1b, Supplementary Data 2). The expression profiles of particular candidate genes related to parental care are shown in Fig. 1c, with statistically significant differences between the control and experimental condition within a stage indicated. Altogether these patterns suggest that paternal care involves significant neurogenomic shifts in stickleback males.
Change and stability of neurogenomic state across stages
We used these data to assess evidence for three non-mutually exclusive hypotheses about how neurogenomic state might change across stages of parental care. According to the unique hypothesis, there is a strong effect of stage on brain gene expression and little to no overlap among the genes associated with different stages. To evaluate this hypothesis we tested whether there were DEGs that were unique to each stage, i.e. not shared with other stages. We generated lists of genes that were differentially expressed between the control and experimental group at each stage within each brain region. Then, we excluded the DEGs that were shared between stages in order to identify the genes unique to each stage. To increase confidence that the unique genes are truly unique to each stage, i.e. that they didn't just barely pass the cutoff for differential expression in another stage (false negatives), we followed an empirical approach (as in ref. 23). We kept the cutoff for DEGs at the focal stage at FDR < 0.01 and relaxed the FDR threshold on the other stages (Supplementary Fig. 1). This procedure was repeated for each stage and in each brain region separately. This analysis produced—with high statistical confidence—lists of DEGs that are unique to each stage (Fig. 2a), consistent with the "unique" hypothesis.
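The filtering logic can be sketched with simple set operations. This is our illustration, not the authors' code; the stage names, the fdr lookup table, and the relaxed threshold of 0.10 are hypothetical placeholders:

```python
# Set-operation sketch of the "unique DEG" filter. The stage names, the
# fdr lookup (mapping (gene, stage) -> FDR), and the relaxed threshold
# of 0.10 are hypothetical placeholders, not values from the paper.
STAGES = ["nest", "eggs", "hatch_early", "hatch_mid", "hatch_late"]

def unique_degs(fdr, genes, focal, strict=0.01, relaxed=0.10):
    """Genes significant at the focal stage (FDR < strict) that stay
    clearly non-significant (FDR >= relaxed) at every other stage."""
    others = [s for s in STAGES if s != focal]
    return {g for g in genes
            if fdr[(g, focal)] < strict
            and all(fdr[(g, s)] >= relaxed for s in others)}
```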
Change and stability of neurogenomic state across stages of parental care. a There were DEGs that were only differentially expressed during one stage. Shown is a heat map depiction of the expression profile of the genes that were "unique" to each stage, showing how they were regulated in other stages, separated by stage and by brain region. b The statistical significance of the pair-wise overlap between stages within each brain region. The size of the circle is proportional to the significance of the p-value (hypergeometric test FDR) of the overlap, such that large circles indicate smaller p-values. Note that the stages closest to the focal stage tended to share more DEGs compared to stages further apart in the series. c DEGs that were added to a stage and were also differentially expressed in subsequent stages. Shown is a heat map depiction of the added shared genes for each stage, separated by brain region, showing how they were regulated across stages. Red = upregulated, blue = downregulated. Numbers on the heat maps indicate the number of genes in each heat map. Source data are in GEO GSE134508
Next, we assessed the extent to which genes were shared among different stages of paternal care by testing whether the number of overlapping DEGs between stages was greater than expected using a hypergeometric test. Consistent with the carryover hypothesis, within each brain region, the number of overlapping DEGs between stages was far greater than expected by chance (Supplementary Data 3), and stages that are close together in the series shared more DEGs compared to stages that are further apart (Fig. 2b, Supplementary Fig. 2).
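A hypergeometric overlap test of this kind takes only a few lines; the counts below are illustrative, not taken from the paper:

```python
from scipy.stats import hypergeom

# Hypergeometric test for DEG overlap between two stages (cf. Fig. 2b).
# All counts below are illustrative, not taken from the paper.
N_universe = 20000   # genes in the expressed universe
n_stage_a  = 800     # DEGs at stage A
n_stage_b  = 600     # DEGs at stage B
k_overlap  = 120     # observed shared DEGs

# P(overlap >= k) if stage-B DEGs were drawn at random from the universe.
p = hypergeom.sf(k_overlap - 1, N_universe, n_stage_a, n_stage_b)
print(f"P(overlap >= {k_overlap}) = {p:.3g}")
```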
These results suggest that there are genes whose signal persists across stages of care. We then evaluated the possibility that each new stage triggers a neurogenomic response which persists into subsequent stages, i.e. that genes associated with one stage are added to the previous stage as a parent proceeds through the stages. According to this hypothesis, when a parent is caring for eggs in their nest, for example, the "egg" genes are added to the previously activated "nest" genes, and so on, in an additive fashion. To examine this statistically, for each stage, we identified genes that: (1) were differentially expressed during the stage of interest; (2) were not differentially expressed during any of the preceding stages; (3) were also differently expressed in a subsequent stage, hereafter referred to as "added shared genes". Only genes added during a new stage were used to test for their overlap with subsequent stages, therefore except for the "nest added shared genes", each of the added shared genes from the previous stage(s) were subtracted from the focal stage's added shared genes (Supplementary Fig. 3). This process generated four sets of added shared genes: genes that were differentially expressed during the nest stage and were also differentially expressed during at least one subsequent stage ("nest added shared genes"), genes that were differentially expressed during the egg stage and were also differentially expressed during at least one subsequent stage but not during the nest stage ("egg added shared genes"), and so on.
This analysis revealed genes that became differentially expressed as males proceeded through different stages of paternal care and ROAST24 analysis found that the added shared genes remained differentially expressed in subsequent stages in a statistically significant manner (Supplementary Data 4). This suggests, for example, that there was a transcriptional signal of eggs which persisted after the egg stage. To see if the genes that were added and which persisted over time were similarly regulated across subsequent stages of paternal care, we examined the expression profiles of the added shared genes at each stage and tested if the direction of regulation was consistent across stages. This analysis revealed that added shared genes were indeed similarly regulated across stages (Supplementary Data 4, Fig. 2c). For example, added shared genes that were upregulated in males with nests were also upregulated during subsequent stages, especially during stages close to the nesting stage. To investigate this further, we calculated the probability that all genes within a set of added shared genes were expressed in the same direction due to chance, i.e. either consistently up- or down-regulated. Then, we counted the number of genes within each set of added shared genes that were concordantly expressed. We found that the number of concordantly expressed genes was greater than expected by chance (diencephalon χ2 = 1859, P < 1e-6, telencephalon χ2 = 146, df = 2, P < 1e-4). For example, 172 of the 235 genes in the nest added shared genes in diencephalon were concordantly expressed across stages, much higher than the expected 15 genes due to chance. The concordant expression pattern across stages suggests that an added shared gene serves a similar function in different stages.
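For intuition, the concordance counts can be checked against chance with a goodness-of-fit test. The two-bin sketch below uses the counts reported above (172 of 235 concordant versus roughly 15 expected by chance); it will not reproduce the paper's exact statistic, which was computed with df = 2:

```python
from scipy.stats import chisquare

# Two-bin goodness-of-fit sketch for the concordance count: 172 of 235
# nest added shared genes concordant vs. ~15 expected by chance.
observed = [172, 235 - 172]
expected = [15, 235 - 15]
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.0f}, p = {p:.2g}")   # large chi2, tiny p
```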
Pathways are not sex-specific and are deeply conserved
Some of the candidate genes associated with female pregnancy and maternal care were differentially expressed in different stages of paternal care in sticklebacks (Fig. 1c). For example, in mammals, levels of progesterone, estrogen and their receptors increase during pregnancy and then subside after childbirth. A similar pattern was observed in the diencephalon of male sticklebacks: both estrogen receptor (esr1) and progesterone receptor (pgr) were upregulated during early hatching and then subsided (Fig. 1c). Oxytocin (and its teleost homolog isotocin) plays an important role in social affiliation and parental care in mammals6 and fish19,25,26,27,28. Oxytocin (oxt) was upregulated in diencephalon when male sticklebacks were caring for eggs in their nests, and upregulated in telencephalon mid-way through the hatching process (Fig. 1c).
Genes that have been implicated in infanticide during parental care in mammals were also differentially expressed in sticklebacks, where egg cannibalism is common. Galanin—a gene implicated with infanticidal behavior in mice29—was highly expressed in diencephalon (which includes the preoptic area) during the nest, eggs and early hatching stages. However, the galanin receptor gene was downregulated during the middle to late hatching stages in both brain regions (Fig. 1c). Furthermore, the progesterone receptor—which mediates aggressive behavior toward pups in mice30—gradually declined in both brain regions as hatching progressed, and its level was lowest when all the fry were hatched (Fig. 1c). Up-regulation of galanin during the egg stage and down-regulation of progesterone receptor during the hatching stage could reflect how male sticklebacks inhibit cannibalistic behavior while providing care.
To test if the neurogenomic changes that we observed in stickleback fathers across stages, e.g. unique and added shared genes, are similar to the neurogenomic changes that mothers experience across stages of maternal care, we leveraged a recent dataset where brain gene expression was compared across a series of pregnancy and post-partum stages in mice (Supplementary Data 5)31. Similar to stickleback fathers, there were both unique and added shared DEGs across different stages of pregnancy and postpartum in mouse mothers. We then tested if the enduring (added shared genes) and transient (unique) changes in neurogenomic state that were experienced in stickleback fathers were similar to the enduring and transient signals of pregnancy and the postpartum period in mouse mothers. Specifically, we compared mouse and stickleback added shared genes within the appropriate orthogroup (Supplementary Data 6). For example, we compared 356 stickleback added shared genes within 90 orthogroups in diencephalon and 838 mouse added shared genes within 265 orthogroups in hypothalamus and found that they shared 14 orthogroups. In order to test whether an overlap of 14 shared orthogroups is greater than expected due to chance, we employed a Monte Carlo based permutation approach. We did not use a regular hypergeometric test or regular permutation test here (at the orthogroup level) because each orthogroup contains more than one gene in both the stickleback and mouse genomes, and some of those genes were differentially expressed and others were not. Instead, we sampled the gene sets (e.g. 356 and 838 genes in diencephalon/hypothalamus) repeatedly ($10^5$ times) and with replacement from both species' universes and counted the overlaps at the orthogroup level. This overlap was then tested against the observed overlap to compute p-values, which are highly significant (Fig. 3, note that the overlap never reaches 14 orthogroups). Added shared genes in stickleback and mouse include BDNF (a candidate gene related to anxiety, stress and depression32) and a regulator of G protein receptors RGS3 (related to insulin metabolism33). We followed the same procedure for the unique genes and did not find any evidence of sharing between the two species. For example, there were 33 unique genes in four orthogroups in mouse hypothalamus and 244 unique genes in 54 orthogroups in stickleback diencephalon with no overlap between them (Supplementary Data 6).
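The resampling procedure can be sketched as follows. Here ortho_of and the two gene universes are hypothetical inputs; the sample sizes and the observed overlap follow the diencephalon/hypothalamus comparison described above:

```python
import random

# Monte Carlo overlap test at the orthogroup level. `ortho_of` maps each
# gene to its orthogroup id and is assumed to cover both universes; the
# set sizes (356, 838) and observed overlap (14) follow the
# diencephalon/hypothalamus comparison in the text.
def mc_overlap_p(fish_universe, mouse_universe, ortho_of,
                 n_fish=356, n_mouse=838, observed=14,
                 n_perm=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        og_fish  = {ortho_of[g] for g in rng.choices(fish_universe, k=n_fish)}
        og_mouse = {ortho_of[g] for g in rng.choices(mouse_universe, k=n_mouse)}
        if len(og_fish & og_mouse) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # permutation p-value
```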
DEGs associated with shared orthogroups. Color represents the significance of differential expression between the control and experimental group, expressed as −log(FDR) of the p-values, across the five conditions in stickleback (left) and the five conditions in mouse (right). a shows the significance of DEGs within 14 shared orthogroups between diencephalon in stickleback and hypothalamus in mouse. b shows the significance of DEGs within nine shared orthogroups between telencephalon in stickleback and hippocampus in mouse. Source data are in GEO GSE134508
Altogether, the differential expression of candidate genes related to maternal care along with the deep homology of the enduring signal of care across stages (added shared genes) suggest that some of the neurogenomic shifts that occur during paternal care in a fish are deeply conserved and are not sex-specific.
Parenting and aggression tradeoffs at the molecular level
To better understand how different social demands are resolved in the brain, we compared these data to a previous study on the neurogenomic response to a territorial challenge in male sticklebacks34, which measured brain gene expression 30, 60 or 120 min after a 5 min territorial challenge. The two experiments studied behaviors at the opposite ends of a continuum of social behavior: paternal care provokes affiliative behavior while a territorial challenge provokes aggressive behavior, and the challenge hypothesis originally posited that patterns of testosterone secretion reflect tradeoffs between parental care and territory defense, assuming that testosterone is incompatible with parental care in males22. Subsequent studies have shown that testosterone is not always inhibitory of parental care35, and that a territorial challenge activates gene regulatory pathways that do not depend on the action of testosterone36. Regardless of the specific neuromodulators or hormones, a mechanistic link between parental care and territory defense is likely to operate through the social behavior network in the brain because most nodes of this network express receptors for neuromodulators and hormones that are involved with both parental care and aggression37. Therefore we used these data to assess whether there is commonality at the molecular level between aggression and paternal care. For example, shared genes could reflect general processes such as the response to a social stimulus, while genes that are specific to an experiment could reflect the unique biology of paternal care versus territorial aggression. Alternatively, there might be a set of genes that is associated with both parental care and territorial aggression, but those genes are regulated in different ways depending on whether the animal is responding to a positive (parental care) versus negative (territorial challenge) social stimulus.
To compare the neurogenomics of paternal care and the response to a territorial challenge at the gene level, we pooled genes that were differentially expressed in the experimental compared to the control group (FDR < 0.01) across time points, stages and brain regions within each experiment, which resulted in two sets of genes associated with either a territorial challenge or paternal care (Fig. 4a). There were 177 genes that were shared between the two experiments (Fig. 4b); this overlap is highly statistically significant (hypergeometric test, fdr < 1e-10).
The regulatory dynamics of territorial challenge and paternal care. a Experimental time course sampling design in the two experiments. b Overlap between territorial aggression and paternal care DEGs. DEGs were pooled across time points and brain regions. c ASTRIX-generated transcriptional regulatory network. Each node represents a transcription factor or a predicted transcription factor target gene. Oversized nodes are transcription factors where the size of the node is proportional to the number of targets. Transcription factors whose targets are significantly enriched in either or both experiments are highlighted with different colors. Stickleback image drawn by MB. Source data are in GEO GSE134508
To identify genes that were unique to each experiment while guarding against false positives, we adopted the same empirical approach as described above (Supplementary Fig. 1). There were 153 genes unique to the territorial challenge and 764 genes unique to paternal care, and these unique genes were enriched with non-overlapping functional categories (Supplementary Data 7). For example, some of the genes that were unique to a territorial challenge were related to sensory perception and tissue development, whereas some of the genes that were unique to paternal care were related to oxidative phosphorylation and energy metabolism, which might reflect the high metabolic needs of males as they are providing care38.
The large number of genes that were differentially expressed both during paternal care and in response to a territorial challenge prompted us to test for evidence of their common regulation at the gene regulatory level. Therefore, we used the data from both experiments to build a transcriptional regulatory network and asked if there are transcription factors whose targets were significantly associated with the DEG sets from the paternal care experiment, the territorial challenge experiment or both experiments (Fig. 4, Supplementary Data 8). There were 10 transcription factors that were significantly enriched in both experiments. Eight out of 10 transcription factors were regulated in opposite directions in at least one of the conditions in the two experiments (Fig. 5). Two of the transcription factors that were regulated in opposite directions (NR3C1 and klf7b) have been implicated with social behavior in other studies (the glucocorticoid receptor NR3C1 and psychosocial stress during pregnancy39; klf7b and autism spectrum disorder40). These patterns suggest that for some genes, different salient experiences—providing paternal care and territorial aggression—trigger opposite gene regulatory responses.
Shared regulators of a territorial challenge and paternal care. The panel on the left shows the expression pattern of the 10 transcription factors that were enriched in both experiments (Fig. 4). Columns are conditions within the two experiments (30, 60 or 120 min after a territorial challenge, the five stages of paternal care in diencephalon (D) or telencephalon (T)). Note that 8 of the shared transcription factors were regulated in opposite directions and in different brain regions in the two experiments. The two panels on the right show the expression pattern of two examples of shared, differentially regulated transcription factors (Klf7b and NR3C1) and their targets across all of the conditions. Source data are in GEO GSE134508
Interestingly, the transcription factors showing the opposite expression pattern were differentially expressed in different brain regions in the two experiments. Specifically, shared transcription factors and their predicted targets were up-regulated in telencephalon in response to a territorial challenge and down-regulated in diencephalon during parental care. These findings point to the molecular mechanisms by which transcription factors might differentially modulate the social behavior network15,16,17 in the brain to manage conflicts between paternal care and territory defense.
While maternal care has long been recognized as an intense period when the maternal brain is reorganized41,42, our results suggest that paternal care also involves significant neurogenomic shifts. Many of the neuroendocrine changes that are experienced by mammalian mothers are driven by endogenous cues during pregnancy, birth and lactation, and are required for fetal growth and development31,43, with the neural circuits necessary for maternal care being primed by hormones during pregnancy and the postpartum period42. Our results suggest that males can also experience dramatic neuromolecular changes as they become fathers, even in the absence of ovulation, parturition, postpartum events and lactation and their associated hormone dynamics5. We observed dramatic neurogenomic changes in males in response to cues for care that are exogenous (e.g. the presence of nesting material) and social (e.g. the presence of eggs or the hatching of fry). Such neurogenomic shifts associated with paternal care might be especially likely to occur in species in which fathers are the sole providers of parental care, such as in sticklebacks. The effects might not be as strong in biparental systems where fathers contribute less. Consistent with this hypothesis, in burying beetles, when males were the sole providers of care, their brain gene expression profile was similar to that of mothers, but when they were biparental, fathers' neurogenomic state was less similar to mothers'8.
A key challenge for care-giving parents is to defend their home and vulnerable offspring from threats, such as territorial intruders. Behavioral trade-offs between parental care and territory defense are well-documented35 and work in this area has been influenced by the challenge hypothesis22, which originally posited that androgens mediate the conflict between care and aggression. By comparing the neurogenomic dynamics of paternal care and the response to a territorial challenge, our work offers insights into the gene regulatory mechanisms by which animals resolve these conflicting demands. Our results suggest that opposing social experiences acting over different time scales—providing paternal care over the course of weeks versus responding to a territorial challenge over the course of minutes to hours—trigger opposite gene regulatory responses. In particular, an analysis of the predicted gene regulatory network identified transcription factors that were significantly enriched both following paternal care and in response to a territorial challenge, and the majority of the transcription factors (and their targets) were regulated in opposite directions in the two experiments (Fig. 5).
While previous studies have explored circuit-level changes in the social behavior network in response to different social stimuli15, our results point to the molecular basis of differential modulation of the social behavior network: the transcription factors showing the opposite expression pattern were differentially expressed in different brain regions in the two experiments. Specifically, shared transcription factors and their predicted targets were up-regulated in telencephalon in response to a territorial challenge and down-regulated in diencephalon during parental care. These findings suggest the molecular mechanisms by which transcription factors might differentially modulate the social behavior network15,16,17 in the brain to manage conflicts between paternal care and territory defense. A similar pattern was observed at the transcriptomic (rather than gene regulatory) level when neurogenomic states were compared between territorial aggression and courtship in male threespined sticklebacks: some genes that were upregulated after a territorial challenge were downregulated after a courtship opportunity44. These results are also consistent with a detailed mechanistic study which showed that transcription factors play a role in setting up neural circuits to mediate opposing behaviors45.
Altogether our analysis of changes in neurogenomic state across stages of paternal care offers support for all three hypotheses proposed. For example, consistent with the unique hypothesis, there were genes that were unique to each stage. Genes exhibiting transient, stage-specific differential expression might be involved in facilitating the next stage, priming and/or responding to a particular event or stimulus during that stage, e.g. the arrival of offspring. Whether genes that were unique to a particular stage and not differentially expressed in other stages are a cause of future behavior or consequence of past behavior is unknown. We also found support for the carryover and additivity hypotheses: elements of an acquired neurogenomic state persisted into subsequent stages, which suggests that the events and behaviors that characterize a particular stage of paternal care (e.g. finishing a nest, the arrival of eggs, hatching) trigger a neurogenomic state that persists, perhaps for as long as those events and behaviors continue. Genes whose expression persists across stages could be involved in maintaining the previous neurogenomic state, and/or reflect the constant demands of parenthood, e.g. the nest must be maintained across all stages of care.
Moreover, our results suggest that changes in neurogenomic state in a fathering fish might share commonalities at the molecular level with the neurogenomic changes associated with maternal care in a mammal. The number of orthologous genes that were shared across stages of maternal care in mice31 and paternal care in sticklebacks was greater than expected by chance. This suggests that the neurogenomic state that is maintained across pregnancy and the postpartum period in mice, for example, at least partially resembles the neurogenomic state that is maintained while a male stickleback is caring for eggs and while the eggs are hatching. These results suggest that maternal and paternal care might share similarities at the molecular level, and this finding is consistent with other studies showing that parental males and females can use the same hormones and molecular mechanisms to activate the same pathways in the brain46.
The finding of partial commonality between paternal care in a fish and maternal care in a mammal adds to the growing body of work showing that the underlying neural and molecular mechanisms related to parental care might have been repeatedly recruited during the evolution and diversification of parental care19,47. Indeed, our results suggest that so-called "pregnancy hormones" and added shared genes (for instance BDNF and the G protein regulator RGS3) might have been serving functions related to caregiving long before the evolution of mammals, and that these mechanisms operate just as well in fathers as they do in mothers. These commonalities with maternal care in mammals suggest that the neurogenomic shifts that occur during paternal care in a fish might be deeply conserved and might not be sex-specific. Animals have been dealing with the problem of how to improve offspring survival (as well as avoiding filial cannibalism) for a long time; our results suggest that they have relied on ancient molecular substrates to solve it.
Sticklebacks
In sticklebacks, paternal care is necessary for offspring survival and is influenced by prolactin48, and the main androgen in fishes (11-ketotestosterone, 11KT) does not inhibit paternal care in this species49. Paternal care in sticklebacks is costly both in terms of time and energy38, infanticide and cannibalism are common20, and males must be highly vigilant to challenges from predators and rival males while caring for their vulnerable offspring.
Adult males were collected from Putah Creek, CA, a freshwater population, in spring 2013, shipped to the University of Illinois where they were maintained in the lab on a 16:8 (L:D) photoperiod and at 18 °C in separate 9-l tanks. Males were provided with nesting material including algae, sand and gravel and were visually isolated from neighbors.
In order to track transcriptional dynamics associated with becoming a father, we sampled males for brain gene expression profiling at five different points during the reproductive cycle (n = 5 males per time point): nest, eggs, early hatching, middle hatching and late hatching (control: reproductively mature males with no nests). Males in the nest condition had a nest but had not yet mated. Males in the eggs condition were sampled four days after their eggs were fertilized. Because males in the eggs condition were sampled four days after mating, the transcriptomic effects of mating are likely to have attenuated by the time males were sampled at this stage. Hatching takes place over the course of the 5th day after fertilization, and a previous study found that brain activation as assessed by Egr-1 expression was highest while male sticklebacks were caring for fry as compared to males with nests or eggs50. In order to capture males' response to the new social stimulus of their fry (see ref. 51), we focused on three time points on the day of hatching, which capture the start of the hatching process (9 a.m.), when approximately half of the clutch is hatched (1 p.m.) and when all of the eggs have hatched (5 p.m.).
Males in the nest, eggs and early hatching conditions were sampled at 9 a.m., males in the mid-hatching condition were sampled at 1 p.m. and males in the late hatching condition were sampled at 5 p.m. Males in these conditions were compared to reproductively mature circadian-matched control males that did not have a nest (n = 5 males per control group). Wild-caught females from the same population were used as mothers. Males were quickly netted and sacrificed by decapitation within seconds. All methods were approved by the IACUC of the University of Illinois at Urbana-Champaign (#15077).
Heads were flash frozen in liquid nitrogen and the telencephalon and diencephalon were carefully dissected and placed individually in Eppendorf tubes containing 500 μL of TRIzol Reagent (Life Technologies). Total RNA was isolated immediately using TRIzol Reagent according to the manufacturer's recommendation and subsequently purified on columns with the RNeasy kit (QIAGEN). RNA was eluted in a total volume of 30 μL in RNase-free water. Samples were treated with DNase (QIAGEN) to remove genomic DNA during the extraction procedure. RNA quantity was assessed using a Nanodrop spectrophotometer (Thermo Scientific), and RNA quality was assessed using the Agilent Bioanalyzer 2100 (RIN 7.5–10); one sample was excluded because of low RNA quality. RNA was immediately stored at −80 °C until used in sequencing library preparation.
The RNAseq libraries were constructed with the TruSeq® Stranded mRNA HT (Illumina) using an epMotion 5075 robot (Eppendorf). Libraries were quantified on a Qubit fluorometer, using the dsDNA High Sensitivity Assay Kit (Life Technologies), and library size was assessed on a Bioanalyzer High Sensitivity DNA chip (Agilent). Libraries were pooled and diluted to a final concentration of 10 nM. Final library pools were quantified using real-time PCR, using the Illumina compatible kit and standards (KAPA) by the W. M. Keck Center for Comparative and Functional Genomics at the Roy J. Carver Biotechnology Center (University of Illinois). Single-end sequencing was performed on an Illumina HiSeq 2500 instrument using a TruSeq SBS sequencing kit version 3 by the W. M. Keck Center for Comparative and Functional Genomics at the Roy J. Carver Biotechnology Center (University of Illinois). The 79 libraries were sequenced on 27 lanes.
RNA-seq informatics
FASTQC version 0.11.3 (ref. 52) was used to assess the quality of the reads. RNA-seq produced an average of 60 million reads per sample (Supplementary Data 9). We aligned reads to the Gasterosteus aculeatus reference genome (the repeat-masked reference genome, Ensembl release 75), using TopHat (2.0.8)53 and Bowtie (2.1.0)54. Results of the TopHat alignment were largely in agreement with results from HISAT2 (ref. 55) (Supplementary Fig. 4). Reads were assigned to features according to the Ensembl release 75 gene annotation file (http://ftp.ensembl.org/pub/release-75/gtf/gasterosteus_aculeatus/). We used the default settings in all programs unless otherwise noted.
Defining DEGs
HTSeq v0.6.1 (ref. 56) read counts were generated for genes using the stickleback genome annotation. Any reads that fell in multiple genes were excluded from the analysis. We included genes with at least 0.5 counts per million (cpm) in at least five samples, resulting in 17,659 and 17,463 genes in diencephalon and telencephalon, respectively. Count data were TMM (trimmed mean of M-values) normalized in R using edgeR v3.16.5 (ref. 57). Samples separated cleanly by brain region on an MDS plot; we did not detect any outliers. To assess differential expression, pairwise comparisons between experimental and control conditions were made at each stage using appropriate circadian controls. Because the nest, eggs and early stages were all sampled at 9 a.m., their expression was compared relative to the same 9 a.m. control group.
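As an illustration of the filtering and normalization steps just described, the following R sketch uses the edgeR calls named above; the `counts` matrix and `group` factor are placeholders standing in for the study's count data, not objects from the original code.

```r
library(edgeR)

# counts: genes x samples matrix of HTSeq read counts (placeholder);
# group: factor giving the condition of each sample (placeholder)
dge <- DGEList(counts = counts, group = group)

# Keep genes with at least 0.5 counts per million in at least five samples
keep <- rowSums(cpm(dge) >= 0.5) >= 5
dge  <- dge[keep, , keep.lib.sizes = FALSE]

# TMM (trimmed mean of M-values) normalization, as described above
dge <- calcNormFactors(dge, method = "TMM")
```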
Diencephalon and telencephalon were analyzed separately in edgeR v3.16.5. A tagwise dispersion estimate was used after computing common and trended dispersions. To call differential expression between treatment groups, a "glm" approach was used. We adjusted the p-values via an empirical FDR procedure, in which a null distribution of p-values was generated by permuting sample labels 500 times for each tested contrast and a false discovery rate was estimated58. Similarity across stages of care was assessed using hypergeometric tests and PCA (Supplementary Fig. 2).
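A minimal sketch of the dispersion estimation, glm testing and label-permutation null described above, assuming the `dge` object and `group` factor from the previous sketch; the contrast and level names (`eggs`, `control`) are illustrative.

```r
library(edgeR)
library(limma)

# Fit the model and return p-values for one contrast (illustrative level names)
run_test <- function(g) {
  design <- model.matrix(~ 0 + g)
  colnames(design) <- levels(g)
  fit <- glmFit(estimateDisp(dge, design), design)  # common, trended, then tagwise
  glmLRT(fit, contrast = makeContrasts(eggs - control, levels = design))$table$PValue
}

obs_p  <- run_test(group)                           # observed p-values
null_p <- replicate(500, run_test(sample(group)))   # permuted labels -> null p-values

# Empirical FDR at p-value threshold t:
# expected null discoveries divided by observed discoveries
emp_fdr <- function(t) mean(colSums(null_p <= t)) / max(sum(obs_p <= t), 1)
```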
For a fair comparison between our study and Ray et al.31, we reanalyzed the Ray et al. gene expression dataset by applying the same model, dispersion estimates and false discovery rate procedures.
Unique genes
One of the goals of this study was to identify genes that uniquely characterized a particular state, e.g. a particular stage of paternal care, or either the territorial challenge or the paternal care experiment. To address the possibility that putative unique genes barely passed the cutoff for differential expression in another state (false negatives), we adopted an empirical approach, as in ref. 23. We kept the cutoff for DEGs in the focal state at FDR < 0.01 and relaxed the FDR cutoff for the other states (see Supplementary Fig. 1 for an explanation of this procedure). This procedure was repeated for each state and in each brain region separately.
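The logic of this empirical screen can be sketched as follows; `fdr_mat` (a genes x states matrix of FDR values) and the relaxed cutoff of 0.10 are illustrative assumptions, with the actual procedure detailed in Supplementary Fig. 1.

```r
# A gene is "unique" to the focal state if it passes the strict cutoff there
# but misses even a deliberately relaxed cutoff in every other state.
unique_genes <- function(fdr_mat, focal, strict = 0.01, relaxed = 0.10) {
  others    <- setdiff(colnames(fdr_mat), focal)
  sig_focal <- fdr_mat[, focal] < strict
  ns_others <- rowSums(fdr_mat[, others, drop = FALSE] < relaxed) == 0
  rownames(fdr_mat)[sig_focal & ns_others]
}

# Repeated per state and per brain region, e.g.:
# unique_genes(fdr_diencephalon, focal = "eggs")
```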
Added shared genes
We wanted to know how many of the genes that were differentially expressed in one stage remained differentially expressed in subsequent stages (added shared genes). To find added shared genes, we first selected those stages which had significant pairwise overlap between them (FDR < 0.05, hypergeometric test). Only those genes were tested for overlap with subsequent stages; to qualify as an added shared gene for a particular stage, a gene could not be differentially expressed during any preceding stage and had to be differentially expressed during a subsequent stage, though not necessarily the stage immediately following. Except for the first stage, each stage's genes were first subtracted from the previous stages' DEGs and then tested for overlap with subsequent stages (Supplementary Fig. 3).
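The set logic described above can be sketched as follows, assuming `deg_list` is a named list of DEG vectors in temporal stage order (an illustrative structure; the full procedure, including the pairwise-overlap prefilter, is in Supplementary Fig. 3).

```r
added_shared <- function(deg_list) {
  out  <- list()
  seen <- character(0)                                    # DEGs of all preceding stages
  for (i in seq_along(deg_list)) {
    new_genes <- setdiff(deg_list[[i]], seen)             # not DE in any earlier stage
    later     <- unique(unlist(deg_list[-seq_len(i)]))    # DEGs of all later stages
    out[[names(deg_list)[i]]] <- intersect(new_genes, later)
    seen <- union(seen, deg_list[[i]])
  }
  out
}
```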
To assess the significance of added shared genes, we used the rotation gene set testing functionality (ROAST)24 in the limma package59. ROAST can test whether any of the genes in a given set of added shared genes are differentially expressed in the specified contrast and also whether they are consistently regulated. ROAST tests three alternative hypotheses: "Up" tests whether the genes in the set tend to be up-regulated, "Down" tests whether they tend to be down-regulated and "Mixed" tests whether they tend to be differentially expressed without regard for direction of regulation. Here we used directional ROAST (alternative hypothesis either Up or Down), separated the added shared genes by their direction of regulation (up or down) in a focal stage and then tested for significant differential expression in a consistent direction in subsequent stages. We also complemented this analysis with a chi-square test to determine whether the number of genes within a given set of overlapping genes showing a concordant expression pattern was greater than expected by chance.
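A sketch of the directional ROAST call in limma, together with the chi-square concordance check; `v` (an expression object such as a logCPM matrix), `design`, `con` and the gene sets are placeholders standing in for the study's objects.

```r
library(limma)

# Indices of an added-shared gene set, split by direction in the focal stage
up_idx   <- which(rownames(v) %in% added_up)    # placeholder gene sets
down_idx <- which(rownames(v) %in% added_down)

# Directional ROAST: reports Up, Down and Mixed p-values for each set
roast(v, index = up_idx,   design = design, contrast = con)
roast(v, index = down_idx, design = design, contrast = con)

# Complementary goodness-of-fit test: are concordantly regulated genes more
# frequent than the 50:50 split expected by chance? (illustrative counts)
chisq.test(c(concordant = 40, discordant = 12), p = c(0.5, 0.5))
```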
Stickleback and mouse orthogroups
To compare stickleback and mouse genes, we generated a reliable orthogroup map using OrthoDB v9.1 (ref. 60). This map contains one-to-one, one-to-many and many-to-many orthology associations between stickleback and mouse genes, comprising 3790 orthogroups that represent 4820 stickleback and 4894 mouse genes.
Overlap significance
We tested the significance of unique and added shared DEGs between stickleback and mouse at the orthogroup level. We used Monte Carlo repeated random sampling to determine if an observed orthogroup overlap between species was statistically significant at P < 0.05 (ref. 61). For example, suppose \(t^\ast\) is the observed orthogroup overlap between the stickleback and mouse gene lists and n1 and n2 are the respective gene set sizes. We repeatedly and randomly drew samples of size n1 from the stickleback genome and samples of size n2 from the mouse genome M times (\(M = 10^5\)) with replacement, recorded an overlap \(t_i\) for each of the M iterations and computed an estimated p-value using the following equation,
$$\hat{p} = \frac{1 + \sum_{i=1}^{M} I\left(t_i \ge t^\ast\right)}{1 + M}$$
where I(.) is an indicator function.
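The estimator above can be sketched in R as follows; `universe1`/`universe2` (per-species vectors giving the orthogroup ID of every gene) are simplifying assumptions about the data structure.

```r
# Monte Carlo p-value for an observed orthogroup overlap t_obs between a
# stickleback gene list of size n1 and a mouse gene list of size n2
overlap_p <- function(n1, n2, universe1, universe2, t_obs, M = 1e5) {
  t_null <- replicate(M, {
    s1 <- sample(universe1, n1, replace = TRUE)   # random stickleback draw
    s2 <- sample(universe2, n2, replace = TRUE)   # random mouse draw
    length(intersect(s1, s2))                     # overlap at the orthogroup level
  })
  (1 + sum(t_null >= t_obs)) / (1 + M)            # the estimator given above
}
```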
Transcriptional regulatory network (TRN) analysis
ASTRIX uses gene expression data to identify regulatory interactions between transcription factors and their target genes. A previous study validated ASTRIX-generated TF-target associations using data from the ModENCODE, REDfly and DroID databases62. The predicted targets of TFs were defined as those genes that share very high mutual information (\(P < 10^{-6}\)) with a TF and can be predicted quantitatively with high accuracy (Root Mean Square Deviation (RMSD) < 0.33, i.e. a prediction error less than one-third of each gene expression profile's standard deviation). The list of putative TFs in the stickleback genome was obtained from the Animal Transcription Factor Database. Given the TF and target sets, ASTRIX infers a genome-scale TRN model capable of making quantitative predictions about the expression levels of genes given the expression values of the transcription factors. The ASTRIX algorithm was previously used to infer TRN models for the honeybee, mouse and stickleback34,62,63,64. ASTRIX identified transcription factors that are central actors in regulating aggression, maturation and foraging behaviors in the honeybee brain62.
Here we used ASTRIX to infer a joint gene regulatory network by combining gene expression profiles from a previous study on the transcriptomic response to a territorial challenge in male sticklebacks34 with the data from this experiment. Combining the two datasets increased statistical power and helped identify modules that are shared between and unique to the two experiments. Transcription factors predicted to regulate DEGs in either experiment were identified according to whether a significant number of their targets were differentially expressed, as assessed by a Bonferroni-corrected hypergeometric test.
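The enrichment test can be sketched as a hypergeometric tail probability per transcription factor, corrected across all TFs tested; function and variable names are illustrative, not taken from the ASTRIX code.

```r
# P(observing at least q DEGs among a TF's predicted targets by chance)
tf_enrichment <- function(targets, degs, universe) {
  q <- length(intersect(targets, degs))
  phyper(q - 1,
         m = length(degs),                      # DEGs in the network
         n = length(universe) - length(degs),   # non-DEGs in the network
         k = length(targets),                   # predicted targets of this TF
         lower.tail = FALSE)
}

# tf_targets: list of target-gene vectors, one element per TF (placeholder)
p_vals <- sapply(tf_targets, tf_enrichment, degs = degs, universe = all_genes)
p_adj  <- p.adjust(p_vals, method = "bonferroni")
```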
We derived GO assignments using protein family annotations from the PANTHER database65. Stickleback protein sequences were blasted against all genomes in the database (PANTHER 9.0; 85 genomes). This procedure assigns proteins to PANTHER families on the basis of structural as well as phylogenetic information. Genes were then annotated using GO information derived from the 85 sequenced genomes in the PANTHER database.
GO analyses were performed in R using topGO v2.16.0 and Fisher's exact test. A p-value cutoff of <0.01 was used to select significantly enriched functional terms wherever possible. We summarized the GO terms into larger, more general categories to get an overview of the underlying biology. Terms were grouped together if they were in a similar pathway and/or based on semantic similarity. GO enrichments along with their respective p-values are in Supplementary Data 2 and 7.
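A minimal topGO sketch of the Fisher's exact test enrichment described above, assuming `all_genes` (every analyzed gene), `degs` (the DEG set) and `gene2GO` (the PANTHER-derived gene-to-GO mapping) are available; these names are placeholders.

```r
library(topGO)

geneList <- factor(as.integer(all_genes %in% degs))   # 1 = DEG, 0 = background
names(geneList) <- all_genes

go_data <- new("topGOdata", ontology = "BP",
               allGenes = geneList,
               annot = annFUN.gene2GO, gene2GO = gene2GO)

res <- runTest(go_data, algorithm = "classic", statistic = "fisher")
GenTable(go_data, fisher = res, topNodes = 20)        # inspect terms with p < 0.01
```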
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
The datasets generated and/or analysed during the current study are available in GEO under accession number GSE134508.
Code availability
Code is available on GitHub (https://github.com/bukhariabbas/stickleback-paternal-care). All other relevant data are available upon request.
Clutton-Brock, T. H. The Evolution of Parental Care. (Princeton University Press, 1991).
Royle, N. J., Smiseth, P. T. & Kolliker, M. The Evolution of Parental Care. (Oxford University Press, 2012).
DeAngelis, R. S. & Rhodes, J. S. Sex differences in steroid hormones and parental effort across the breeding cycle in Amphiprion ocellaris. Copeia 104, 586–593 (2016).
Rosenblatt, J. S. Nonhormonal basis of maternal behavior in the rat. Science 156, 1512–1514 (1967).
Feldman, R., Braun, K. & Champagne, F. A. The neural mechanisms and consequences of paternal caregiving. Nat. Rev. Neurosci. 20, 205–224 (2019).
Gordon, I., Zagoory-Sharon, O., Leckman, J. F. & Feldman, R. Oxytocin and the development of parenting in humans. Biol. Psychiatry 68, 377–382 (2010).
Storey, A. E., Walsh, C. J., Quinton, R. L. & Wynne-Edwards, K. E. Hormonal correlates of paternal responsiveness in new and expectant fathers. Evol. Hum. Behav. 21, 79–95 (2000).
Parker, D. J. et al. Transcriptomes of parents identify parenting strategies and sexual conflict in a subsocial beetle. Nat. Commun. 6, 8449 (2015).
Lehrman, D. S. The reproductive behavior of ring doves. Sci. Am. 211, 48–55 (1964).
Kohl, J., Autry, A. E. & Dulac, C. The neurobiology of parenting: a neural circuit perspective. BioEssays 39, 1–11 (2017).
Balshine, S. & Sloman, K. A. in Encyclopedia of Fish Physiology: From Genome to Environment (Anthony Farrell ed) 670–677 (Academic Press, 2011).
Gross, M. R. & Sargent, R. C. The evolution of male and female parental care in fishes. Am. Zool. 25, 807–822 (1985).
Whittington, C. M., Griffith, O. W., Qi, W., Thompson, M. B. & Wilson, A. B. Seahorse brood pouch transcriptome reveals common genes associated with vertebrate pregnancy. Mol. Biol. Evol. 32, 3114–3131 (2015).
Whittington, C. M. & Wilson, A. B. The role of prolactin in fish reproduction. Gen. Comp. Endocrinol. 191, 123–136 (2013).
Newman, S. W. The medial extended amygdala in male reproductive behavior. A node in the mammalian social behavior network. Ann. N. Y. Acad. Sci. 877, 242–257 (1999).
Goodson, J. L. The vertebrate social behavior network: evolutionary themes and variations. Horm. Behav. 48, 11–22 (2005).
O'Connell, L. A. & Hofmann, H. A. The Vertebrate mesolimbic reward system and social behavior network: A comparative synthesis. J. Comp. Neurol. 519, 3599–3639 (2011).
Young, R. L. et al. Conserved transcriptomic profiles underpin monogamy across vertebrates. Proc. Natl Acad. Sci. USA 116, 1331–1336 (2019).
O'Connell, L. A., Matthews, B. J. & Hofmann, H. A. Isotocin regulates paternal care in a monogamous cichlid fish. Horm. Behav. 61, 725–733 (2012).
Wootton, R. J. A Functional Biology of Sticklebacks. (University of California Press, 1984).
Ketterson, E. D., Nolan, V., Wolf, L. & Ziegenfus, C. Testosterone and avian life histories—Effects of experimentally elevated testosterone on behavior and correlates of fitness in the dark-eyed junco (Junco hyemalis). Am. Nat. 140, 980–999 (1992).
Wingfield, J. C., Hegner, R. E., Dufty, Alfred, M. & Ball, G. F. The challenge hypothesis: Theoretical implications for patterns of testosterone secretion, mating systems, and breeding strategies. Am. Nat. 136, 829–846 (1990).
Stein, L. R., Bukhari, S. A. & Bell, A. M. Personal and transgenerational cues are nonadditive at the phenotypic and molecular level. Nat. Ecol. Evol. 2, 1306–1311 (2018).
Wu, D. et al. ROAST: rotation gene set tests for complex microarray experiments. Bioinformatics (Oxford, England) 26, 2176–2182 (2010).
Kleszczynska, A. et al. Determination of the neuropeptides arginine vasotocin and isotocin in brains of three-spined sticklebacks (Gasterosteus aculeatus) by off-line solid phase extraction-liquid chromatography-electrospray tandem mass spectrometry. J. Chromatogr. A. 1150, 290–294 (2007).
Kleszczynska, A., Sokolowska, E. & Kulczykowska, E. Variation in brain arginine vasotocin (AVT) and isotocin (IT) levels with reproductive stage and social status in males of three-spined stickleback (Gasterosteus aculeatus). Gen. Comp. Endocrinol. 175, 290–296 (2012).
Kulczykowska, E. & Kleszczynska, A. Brain arginine vasotocin and isotocin in breeding female three-spined sticklebacks (Gasterosteus aculeatus): the presence of male and egg deposition. Gen. Comp. Endocrinol. 204, 8–12 (2014).
Lema, S. C., Sanders, K. E. & Walti, K. A. Arginine vasotocin, isotocin and nonapeptide receptor gene expression link to social status and aggression in sex-dependent patterns. J. Neuroendocrinol. 27, 142–157 (2015).
Wu, Z., Autry, A. E., Bergan, J. F., Watabe-Uchida, M. & Dulac, C. G. Galanin neurons in the medial preoptic area govern parental behaviour. Nature 509, 325–330 (2014).
Schneider, J. S. et al. Progesterone receptors mediate male aggression toward infants. Proc. Natl Acad. Sci. USA 100, 2951–2956 (2003).
Ray, S. et al. An examination of dynamic gene expression changes in the mouse brain during pregnancy and the postpartum period. G3 (Bethesda, Md.) 6, 221–233 (2015).
Martinowich, K., Manji, H. & Lu, B. New insights into BDNF function in depression and anxiety. Nat. Neurosci. 10, 1089–1093 (2007).
Raab, R. M., Bullen, J., Kelleher, J., Mantzoros, C. & Stephanopoulos, G. Regulation of mouse hepatic genes in response to diet induced obesity, insulin resistance and fasting induced weight reduction. Nutr. Metab. (Lond). 2, 15–15 (2005).
Bukhari, S. A. et al. Temporal dynamics of neurogenomic plasticity in response to social interactions in male threespined sticklebacks. PLoS Genet. 13, e1006840 (2017).
Hirschenhauser, K. & Oliveira, R. F. Social modulation of androgens in male vertebrates: meta-analyses of the challenge hypothesis. Anim. Behav. 71, 265–277 (2006).
Rosvall, K. A. & Peterson, M. P. Behavioral effects of social challenges and genomic mechanisms of social priming: What's testosterone got to do with it? Current Zoology 60, 791–803 (2014).
Cardoso, S. D., Teles, M. C. & Oliveira, R. F. Neurogenomic mechanisms of social plasticity. J. Exp. Biol. 218, 140–149 (2015).
Smith, C. & Wootton, R. J. Parental energy expenditure of the male three-spined stickleback. J. Fish Biol. 54, 1132–1136 (1999).
Palma-Gudiel, H., Cordova-Palomera, A., Leza, J. C. & Fananas, L. Glucocorticoid receptor gene (NR3C1) methylation processes as mediators of early adversity in stress-related disorders causality: a critical review. Neurosci. Biobehav. Rev. 55, 520–535 (2015).
Powis, Z. et al. De novo variants in KLF7 are a potential novel cause of developmental delay/intellectual disability, neuromuscular and psychiatric symptoms. Clin. Genet. 93, 1030–1038 (2018).
Kinsley, C. H. & Amory-Meyer, E. Why the maternal brain? J. Neuroendocrinol. 23, 974–983 (2011).
Hillerer, K. M., Jacobs, V. R., Fischer, T. & Aigner, L. The maternal brain: an organ with peripartal plasticity. Neural. Plast. 2014, 574159 (2014).
Brunton, P. J. & Russell, J. A. The expectant brain: adapting for motherhood. Nat. Rev. Neurosci. 9, 11–25 (2008).
Sanogo, Y. O. & Bell, A. M. Molecular mechanisms and the conflict between courtship and aggression in three-spined sticklebacks. Mol. Ecol. 25, 4368–4376 (2016).
Choi, G. B. et al. Lhx6 delineates a pathway mediating innate reproductive behaviors from the amygdala to the hypothalamus. Neuron 46, 647–660 (2005).
Wynne-Edwards, K. E. & Timonin, M. E. Paternal care in rodents: Weakening support for hormonal regulation of the transition to behavioral fatherhood in rodent animal models of biparental care. Horm. Behav. 52, 114–121 (2007).
Dulac, C., O'Connell, L. A. & Wu, Z. Neural control of maternal and paternal behaviors. Science 345, 765–770 (2014).
de Ruiter, A. J., Wendelaar Bonga, S. E., Slijkhuis, H. & Baggerman, B. The effect of prolactin on fanning behavior in the male three-spined stickleback, Gasterosteus aculeatus L. Gen. Comp. Endocrinol. 64, 273–283 (1986).
Pall, M. K., Mayer, I. & Borg, B. Androgen and behavior in the male three-spined stickleback, Gasterosteus aculeatus II. Castration and 11-ketoandrostenedione effects on courtship and parental care during the nesting cycle. Horm. Behav. 42, 337–344 (2002).
Kent, M. & Bell, A. M. Changes in behavior and brain immediate early gene expression in male threespined sticklebacks as they become fathers. Horm. Behav. 97, 102–111 (2018).
Zilkha, N., Scott, N. & Kimchi, T. Sexual dimorphism of parental care: from genes to behavior. Annu. Rev. Neurosci. 40, 273–305 (2017).
FastQC: a quality control tool for high throughput sequence data. http://www.bioinformatics.babraham.ac.uk/projects/fastqc (2010)
Kim, D. et al. TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome. Biol. 14, R36 (2013).
Langmead, B. & Salzberg, S. L. Fast gapped-read alignment with Bowtie 2. Nat. Methods 9, 357–359 (2012).
Kim, D., Langmead, B. & Salzberg, S. L. HISAT: a fast spliced aligner with low memory requirements. Nat. Methods 12, 357 (2015).
Anders, S., Pyl, P. T. & Huber, W. HTSeq-a Python framework to work with high-throughput sequencing data. Bioinformatics (Oxford, England) 31, 166–169 (2015).
Robinson, M. D., McCarthy, D. J. & Smyth, G. K. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics (Oxford, England) 26, 139–140 (2010).
Storey, J. D. & Tibshirani, R. Statistical significance for genomewide studies. Proc. Natl Acad. Sci. USA 100, 9440–9445 (2003).
Ritchie, M. E. et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 43, e47 (2015).
Zdobnov, E. M. et al. OrthoDB v9.1: cataloging evolutionary and functional annotations for animal, fungal, plant, archaeal, bacterial and viral orthologs. Nucleic Acids Res. 45, D744–d749 (2017).
Ernst, M. D. Permutation methods: a basis for exact inference. Stat. Sci. 19, 676–685 (2004).
Chandrasekaran, S. et al. Behavior-specific changes in transcriptional modules lead to distinct and predictable neurogenomic states. Proc. Natl Acad. Sci. USA 108, 18020–18025 (2011).
Shpigler, H. Y. et al. Deep evolutionary conservation of autism-related genes. Proc. Natl Acad. Sci. USA 114, 9653–9658 (2017).
Saul, M. C. et al. Transcriptional regulatory dynamics drive coordinated metabolic and neural response to social challenge in mice. Genome Res. 27, 959–972 (2017).
Mi, H. et al. PANTHER version 11: expanded annotation data from gene ontology and reactome pathways, and data analysis tool enhancements. Nucleic Acids Res. 45, D183–d189 (2017).
We thank Gene Robinson, Mark Hauber, Dave Zhao, Saurabh Sinha, Lisa Stubbs, Mikus Abolins-Abols and members of the Bell lab for comments on the paper. Bukhari was supported by a Dissertation Improvement Grant from the University of Illinois during the preparation of this paper. This material is based upon work supported by the National Science Foundation under Grant No. IOS 1121980, by the National Institutes of Health under award number 2R01GM082937-06A1 and by a grant from the Simons Foundation to L. Stubbs and Gene Robinson.
Michael C. Saul
Present address: Jackson Labs, 600 Main St., Bar Harbor, ME, 04609, USA
Laura R. Stein
Present address: Department of Biology, University of Oklahoma, 730 Van Vleet Oval, Room 314, Norman, OK, 73019, USA
Rebecca Trapp
Present address: Department of Biological Sciences, Purdue University, 915 W. State St., West Lafayette, IN, 47907, USA
Carl R. Woese Institute for Genomic Biology, University of Illinois, Urbana Champaign, 1206 Gregory Drive, Urbana, IL, 61801, USA
Syed Abbas Bukhari, Michael C. Saul & Alison M. Bell
Illinois Informatics Institute, University of Illinois, Urbana Champaign, 616 E. Green St., Urbana, IL, 61820, USA
Syed Abbas Bukhari
Department of Evolution, Ecology and Behavior, University of Illinois, Urbana Champaign, 505 S. Goodwin Avenue, Urbana, IL, 61801, USA
Syed Abbas Bukhari, Laura R. Stein, Rebecca Trapp & Alison M. Bell
Neuroscience Program, University of Illinois, Urbana Champaign, 505 S. Goodwin Avenue, Urbana, IL, 61801, USA
Noelle James & Alison M. Bell
Program in Ecology, Evolution and Conservation Biology, University of Illinois, Urbana Champaign, 505 S. Goodwin Avenue, Urbana, IL, 61801, USA
Miles K. Bensky & Alison M. Bell
Noelle James
Miles K. Bensky
Alison M. Bell
S.A.B. contributed to study design, analyzed the data and wrote the first draft of the paper. M.S. contributed to study design and data analysis. N.J., M.B., L.R.S. and R.T. contributed to study design and collected the data. A.M.B. designed the study, contributed to data analysis and interpretation and edited the paper. All authors approved the final version of the paper.
Correspondence to Alison M. Bell.
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Bukhari, S.A., Saul, M.C., James, N. et al. Neurogenomic insights into paternal care and its relation to territorial aggression. Nat Commun 10, 4437 (2019). https://doi.org/10.1038/s41467-019-12212-7
Volume 10 Supplement 2
Proceedings of the 15th American Heartworm Society Triennial Symposium 2016
Shifting the paradigm in Dirofilaria immitis prevention: blocking transmission from mosquitoes to dogs using repellents/insecticides and macrocyclic lactone prevention as part of a multimodal approach
John W. McCall1,
Marie Varloud2,
Elizabeth Hodgkins3,
Abdelmoneim Mansour1,
Utami DiCosty1,
Scott McCall1,
James Carmichael1,
Ben Carson1 &
Justin Carter1
Parasites & Vectors volume 10, Article number: 525 (2017)
This study assessed the influence of a topical ectoparasiticide (dinotefuran-permethrin-pyriproxyfen, DPP, Vectra® 3D, Ceva Animal Health) combined with a macrocyclic lactone (milbemycin oxime, MBO, Interceptor®, Virbac) on transmission of heartworm L3 from mosquitoes to dogs and subsequent development of worms in treated dogs exposed to infected mosquitoes.
Thirty-two beagle dogs were allocated to four groups of eight: Group 1, untreated controls; Group 2, treated topically with DPP on Day 0; Group 3, treated orally with MBO on Day 51; and Group 4, treated with DPP on Day 0 and MBO on Day 51. Dogs were exposed under sedation for 1 h to Dirofilaria immitis (JYD-34)-infected Aedes aegypti on Days 21 and 28. At the end of each exposure, mosquitoes were classified as live, moribund, or dead and engorged or non-engorged. Live or moribund mosquitoes were incubated for daily survival assessment for 3 days. Mosquitoes were dissected before and after exposure to estimate the number of L3 transmitted to each dog. Dogs were necropsied 148 to 149 days postinfection.
A total of 418 mosquitoes fed on the 16 dogs in Groups 1 and 3, while only 6 fed on the 16 DPP-treated dogs in Groups 2 and 4. Mosquito anti-feeding (repellency) effect in Groups 2 and 4 was 98.1 and 99.1%, respectively. The estimated numbers of L3 transmitted to controls, DPP-treated, MBO-treated and DPP + MBO-treated dogs were 76, 2, 78, and 1, respectively. No heartworms were detected in any of the DPP + MBO-treated dogs (100% efficacy), while 8 out of 8 were infected in the control group (range, 21–66 worms per dog), 8 out of 8 were infected in the MBO-treated group (58% efficacy), and 3 out of 8 were infected in the DPP-treated group (96% efficacy).
DPP repelled and killed most mosquitoes that were capable of transmitting heartworm L3 to dogs. The "Double Defense" protocol of DPP + MBO had better efficacy for protecting dogs against heartworm transmission and infection than MBO alone. This added DPP benefit is more pronounced when macrocyclic lactone-resistant strains of heartworms are involved or lack of compliance in macrocyclic lactone administration is known or suspected.
Heartworm (Dirofilaria immitis) is generally considered the most important vector-borne disease of dogs in the United States and many other parts of the world [1, 2]. Cats, ferrets, wild canids and felids, numerous other animals and even humans are also susceptible to infection; and some of these animals add substantially to the pool of microfilaremic reservoirs of infection [2, 3] in warm, humid regions throughout the world. The disease can be life-threatening to infected pets, and these animals face risky and expensive treatment to clear the infection or to manage multi-organ pathology. The disease can be prevented, and it is far better to prevent infection than to treat an animal that is already infected with adult heartworms and microfilariae.
Macrocyclic lactone (ML) preventive drugs have been widely used for the past three decades [2]. These preventive drugs are safe, highly effective, easy to administer and relatively inexpensive, particularly when compared with treatment. Even with these excellent products readily available, however, the prevalence of heartworm disease has not declined; and one report suggests that the number of positive cases compared with the total number of dogs tested in the United States increased by 15.3% from 2013 to 2016 (www.capcvet.org; accessed April 26, 2017). This is generally attributed to owner and clinic lack of compliance, as less than 40% of dogs leave veterinary hospitals with preventive medication, and inconsistent dosing by owners further adds to prevention failures [4].
It is now well-documented that resistance to all ML has occurred, particularly in the Mississippi Delta area of the United States; however, the extent and degree of spread of resistant populations of worms is still not known [5, 6]. In Australia, recent evidence raises the suspicion of heartworm resistance [7]. Once a resistant gene(s) is established in a population of heartworms, ML drug pressure further selects for resistant worms and ML will become less effective over time; mosquitoes, if not repelled or killed, will spread these resistant worms to animals in other geographical areas.
Many species of mosquitoes can apparently serve as vectors of the heartworm parasite, and in any suitable climate at least one vector species can breed in even a small amount of water, either polluted or clean. More than 70 different species of mosquitoes have been shown to allow development of microfilariae to infective third-stage larvae (L3) in the laboratory [3], and the limited number of field studies to date have detected more than 20 different species of mosquitoes carrying L3 [8,9,10]. The high susceptibility of numerous animals, especially dogs and wild canids, along with the ubiquitous presence of susceptible mosquitoes and inadequate heartworm preventive measures and vector control, ensures a high prevalence of heartworm disease in pets in most warm, humid climates of the world.
Arthropod-targeted disease preventive strategies in a multimodal approach to controlling several vector-borne human diseases have been used with much success for several decades. For example, the use of permethrin-impregnated bed nets and clothing, repellents and various vector control measures has been invaluable in blocking the transmission of malaria, lymphatic filariasis, onchocerciasis and some viral diseases, as well as preventing vector biting (Robert Wirtz, personal oral communication, March 2016). For the past three decades, veterinarians have used with much success the unimodal approach of killing heartworm larvae by monthly, semiannual or annual (Australia) administration of ML preventives after the animal has become infected, but mosquito control has received little attention. The American Heartworm Society canine guidelines [1] recommend "…environmental control measures, including treatment of standing water sources with insect growth regulators (IGR) combined with mosquito adulticidal measures (sprays, CO2 traps, etc). In addition to mosquito control, keeping pets inside during peak mosquito hours and/or the use of mosquito repellents on pets may also reduce the risk of infection." Despite this recommendation, the veterinary profession has never focused on vector control as an integral part of a multimodal approach to blocking heartworm transmission. This is probably due mainly to the dearth of research data on the subject.
Several studies have reported varying levels of success with the use of topically applied products for repelling and/or killing several different species of mosquitoes [11,12,13]. One study assessed the infection of dogs and cats by West Nile virus–infected mosquitoes [14], but only one of these studies included mosquitoes or animals infected with D. immitis [15]. Encouraged by the high level of effectiveness of one of these products (Vectra® 3D, dinotefuran-permethrin-pyriproxyfen, DPP; Ceva Animal Health) against Aedes aegypti [16] and Culex pipiens [17], we investigated the role of DPP in blocking heartworm parasite transmission in dogs in two studies. The first of these two studies was an exploratory study with microfilaremic dogs [18]. In that study we confirmed that DPP was more than 95% effective in repelling (anti-feeding) and killing mosquitoes for 1 month. In the same study, we demonstrated that DPP was completely effective in killing the few mosquitoes that fed on the treated dogs before they lived long enough for the microfilariae to develop to L3 and, consequently, was completely effective in blocking the transmission of L3 to other animals [18].
The results of the second study are reported herein. The primary objective of this study was to evaluate the effectiveness of milbemycin oxime (MBO) when combined with DPP against experimental infection of dogs exposed to D. immitis–infected mosquitoes.
This clinical efficacy study was GCP, negative-controlled, single-site and blinded. The products were administered to animals by individuals who were not involved in performing the posttreatment assessments and observations. All personnel making observations, performing tests and procedures, and collecting data were blinded in regard to which were treated and control animals. Groups were color-coded for identification by laboratory personnel throughout the study.
Study design and schedule
The 32 dogs were ranked by descending body weight (BW) within gender and randomly allocated to four groups of eight dogs each, with equal numbers of male and female dogs in each group. After randomization to treatment blocks, the dogs were further randomly allocated to one of two replicates (A and B), each consisting of 16 dogs in 4 subgroups of 4 dogs each, with equal numbers of male and female dogs (Table 1). The two replicates were run 1 day apart. The dogs in Group 1 served as the untreated controls; those in Group 2 were treated topically with DPP on Day 0; those in Group 3 were treated orally with MBO on Day 51; and those in Group 4 were treated topically with DPP on Day 0 and orally with MBO on Day 51 (Table 2). All dogs were infected on Days 21 and 28 by exposure to infected mosquitoes, and necropsy was conducted on Days 176–177 (i.e. 148–149 days after the second infection).
Table 1 Study design
Table 2 Study schedule
A total of 32 purpose-bred beagles, 16 males and 16 females, 4.8 to 6.7 months of age and weighing 7.5 to 13.7 lb, were involved in this study. Prior to the start of the study, all dogs were bathed with a noninsecticidal grooming shampoo. On Day −5 and again on Day 141 (120 days after the first infection), all dogs were negative on the modified Knott's test [1] and antigen test (DiroCHEK® Canine Heartworm Antigen Test Kit, Synbiotics). The dogs were housed individually in mosquito-proof indoor pens (5 × 4 ft) in a purpose-built building, with controlled temperature and ventilation systems. The dogs were fed at least once daily an appropriate quantity of commercially available maintenance diet, and water was supplied ad libitum. The animals were maintained with due regard for their welfare and in accordance with applicable laws, regulations and guidelines. The protocol was approved by an ethics committee (TRS Labs' Institutional Animal Care and Use Committee) prior to initiation of the study.
Parasite and vector
The JYD-34 isolate of D. immitis was used. This isolate is known to have varying degrees of resistance to all four of the ML used for heartworm prevention in dogs [19] (JWM, unpublished data, August 2013). The blackeyed Liverpool strain of A. aegypti mosquitoes was used as the vector. Female mosquitoes were raised in one-gallon oyster cartons and infected by feeding on microfilaremic blood 16 days prior to dog exposure. Dead mosquitoes were removed daily from each carton and the mosquito count was updated. The targeted number of mosquitoes to be released per dog was matched by prior removal of extra mosquitoes from the cartons.
The dogs in Groups 2 and 4 were treated on Day 0 with DPP (topical solution of dinotefuran 4.95% w/w, pyriproxyfen 0.44% w/w and permethrin 36.08% w/w; Vectra® 3D) at an average rate of 0.35 ± 0.06 mL/kg BW (Table 3). The product was applied topically according to the label, as a line-on from the base of the tail to the shoulders. The dogs in Groups 3 and 4 were treated orally with MBO (tablet of milbemycin oxime, Interceptor®) 30 days after the first infection (i.e. on Day 51). According to the instructions on the label and to the BW measured on Day 47 ± 2, an average dose of 0.92 ± 0.15 mg/kg BW of milbemycin oxime was delivered to the dogs (Table 3). The control dogs (Group 1) were not treated. For each treatment, the dogs were under observation every hour (±15 min) for the first 4 h after the last animal was treated.
Table 3 Dogs and treatment
Exposure of dogs to infected mosquitoes
All dogs were exposed to mosquitoes infected with D. immitis L3 on Days 21 and 28. Prior to each exposure, each dog was sedated by IM injection of dexmedetomidine 0.02 mg/kg BW (Dexdomitor®, Orion, Espoo, Finland) and butorphanol 0.2 mg/kg BW (Torbugesic®, Zoetis). Each dog was placed in a dedicated container (73.7 cm long × 40.6 cm wide × 33 cm high), and the lid was replaced, making the container mosquito-proof (Fig. 1). The infected mosquitoes were released into the container and the dog was exposed to the infected mosquitoes for 60 (±10) minutes. The procedure was conducted during the day and under artificial light.
Fig. 1 Exposure of a dog to mosquitoes in a mosquito-proof container prior to the release of non-blood-fed, D. immitis (JYD-34 strain)-infected female mosquitoes (Aedes aegypti) from the carton
Immediately after exposure, mosquitoes were aspirated from the container and categorized as live, moribund or dead, and fed or unfed. A mosquito was considered live when it exhibited normal behavior and was capable of flying. Moribund mosquitoes were unable to move normally and clearly exhibited neurological disruption. Mosquitoes with red and enlarged abdomens were considered engorged. Dead mosquitoes were preserved in ethanol (70%) and stored at −20 °C. Live and moribund mosquitoes from each container were placed in an insectary (28 °C, 80% RH), and mortality was assessed daily for 3 days.
Estimation of number of L3 transmitted to each dog
Prior to each exposure, for each replicate of 16 dogs, approximately 48 mosquitoes (~3 from each carton) were dissected individually to determine the average number of D. immitis L3 per mosquito. Based on this calculation, a sufficient number of infected mosquitoes to transmit ~25 L3/dog/exposure for a total of ~40–60 L3/dog/2 exposures was made available in each carton. During each exposure period, up to three blood-engorged mosquitoes were removed from each container and dissected individually to determine the average number of L3 remaining in the mosquitoes after feeding. The estimated number of L3 transmitted to each of the 16 dogs in each replicate and for each exposure was calculated by subtracting the average number of L3 remaining in each mosquito after feeding from the average number of L3 in each mosquito prior to feeding, and then multiplying this number by the number of mosquitoes that fed on that dog.
Worm counts
On Days 176–177 all dogs were humanely euthanized and necropsied for recovery and enumeration of adult worms in the pleural and peritoneal cavities and the heart and associated pulmonary arteries. At necropsy, all worms were recovered and enumerated by gender. Mortality was assessed by observing the motility of each worm in saline. Worms that were immotile in warm saline were considered dead.
Anti-feeding effect or repellency
For each day of exposure (Days 21 and 28), the anti-feeding effect was calculated:
$$\text{Anti-feeding effect}\ (\%) = 100 \times \frac{\mathrm{Cf} - \mathrm{Tf}}{\mathrm{Cf}}$$
where Cf was the geometric mean number of fed female mosquitoes (live fed + dead fed) in Groups 1 and 3 (no DPP), and Tf was the geometric mean number of fed female mosquitoes in Groups 2 and 4 (DPP-treated).
Knock-down effect
For each day of exposure (Days 21 and 28), the knock-down effect was calculated:
$$\text{Knock-down effect}\ (\%) = 100 \times \frac{\mathrm{Cl} - \mathrm{Tl}}{\mathrm{Cl}}$$
where Cl was the geometric mean number of live female mosquitoes (live engorged + live unengorged) in Groups 1 and 3 (no DPP), and Tl was the geometric mean number of live female mosquitoes in Groups 2 and 4 (DPP-treated).
The knock-down effect was calculated based on the data collected at the end of the exposure (1 h).
Insecticidal effect
For each day of exposure (Days 21 and 28), the mortality effect was calculated:
$$\text{Mortality effect}\ (\%) = 100 \times \frac{\mathrm{Clm} - \mathrm{Tlm}}{\mathrm{Clm}}$$
where Clm was the geometric mean number of live + moribund female mosquitoes (live engorged + live unengorged + moribund engorged + moribund unengorged) in Groups 1 and 3 (no DPP), and Tlm was the geometric mean number of live + moribund female mosquitoes in Groups 2 and 4 (DPP-treated). The mortality effect was calculated daily for 3 days of the postexposure incubation.
Theoretical exposure of dogs to D. immitis L3
The estimated number of L3 deposited on each dog was calculated by taking into account the number of fed mosquitoes and the counts of L3 in mosquitoes prior to and after blood-feeding on dogs.
$$\text{L3 transmitted per dog} = \mathrm{NFed} \times (\mathrm{L3}_{\mathrm{before}} - \mathrm{L3}_{\mathrm{after}})$$
where NFed was the cumulative number of mosquitoes that fed on each dog, L3 before was the average number of L3 found in mosquitoes upon dissection before exposure of the dogs, and L3 after was the average number of L3 found in fed mosquitoes upon dissection after exposure of the dogs.
Heartworm preventive efficacy
The heartworm preventive efficacy was calculated using the worm counts found at necropsy (Days 176–177) in the circulatory system of the dogs.
$$\text{Preventive efficacy}\ (\%) = 100 \times \frac{\mathrm{Cw} - \mathrm{Tw}}{\mathrm{Cw}}$$
where Cw was the arithmetic mean number of worms found at necropsy of dogs from Group 1 (control), and Tw was the arithmetic mean number of worms found at necropsy of dogs from each of the treated groups (Group 2, 3 or 4).
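All four percentage endpoints above share the same reduction form, and the L3 estimate is a simple product; the R sketch below illustrates them (the original analyses were run in SAS, and the geometric-mean zero-count offset is an assumption not stated in the original).

```r
# Generic percentage reduction used for anti-feeding, knock-down,
# insecticidal and preventive efficacy (control mean Cm vs treated mean Tm)
efficacy <- function(Cm, Tm) 100 * (Cm - Tm) / Cm

# Geometric mean for the mosquito endpoints; the +1 offset guarding
# against zero counts is an assumption, not stated in the original
gmean <- function(x) exp(mean(log(x + 1))) - 1

# Estimated L3 deposited on one dog across both exposures
l3_transmitted <- function(n_fed, l3_before, l3_after) n_fed * (l3_before - l3_after)

# Worked check against the reported results: MBO-alone preventive efficacy
efficacy(Cm = 41.0, Tm = 17.1)   # ~58.3%, matching the reported 58.2% up to rounding
```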
For the primary endpoint, a Poisson model (GENMOD proc., SAS) was used for comparison of the L3 estimated number and of the heartworm counts between treatments: (1) DPP (Group 2) vs DPP + MBO (Group 4) and (2) DPP (Group 2) vs untreated control (Group 1). The "fed" status of mosquitoes was considered a covariable. For the secondary endpoint, a Friedman ANOVA model (rank transformation for normalization) was used for comparison of the number of fed or live mosquitoes between DPP-treated groups (Groups 2 and 4) and non-DPP-treated groups (Groups 1 and 3).
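For readers working in R rather than SAS, the primary-endpoint Poisson model and the rank-based secondary comparison could be sketched as follows; the data-frame and column names are illustrative.

```r
# Poisson regression of worm counts on treatment group, with mosquito
# "fed" status as a covariable (cf. PROC GENMOD in the original analysis)
fit <- glm(worms ~ group + fed, family = poisson, data = dogs)
summary(fit)

# Secondary endpoint: Friedman test comparing fed (or live) mosquito counts
# between DPP-treated and untreated groups within blocks
friedman.test(fed_count ~ dpp_status | block, data = mosquito_counts)
```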
This study was carried out in compliance with Good Clinical Practice requirements (VICH GL 9, 2001), with FDA Guidance No. 111 and with US EPA Product Performance Test Guidelines OPPTS 810.3300: Treatments to Control Pests of Humans and Pets.
No adverse effects due to any of the treatment applications were observed in any dogs during the study.
Anti-feeding efficacy
In the no-DPP Groups (1 and 3, n = 16), mosquito blood-feeding occurred in all the dogs. The number of fed mosquitoes per dog ranged from 7 to 24 on Day 21 and from 6 to 17 on Day 28 (Table 4). The feeding rate in the no-DPP groups was 43% on Day 21 and 50% on Day 28. In the DPP Groups (2 and 4, n = 16), the number of fed mosquitoes per dog ranged from 0 (n = 15) to 1 (n = 1) on Day 21 and from 0 (n = 13) to 3 on Day 28. On Day 28, mosquito blood-feeding was detected in 3 out of 16 dogs from the DPP-treated groups. On both days of mosquito exposure, the number of fed mosquitoes was higher (P < 0.0001) in the no-DPP groups than in the DPP groups. The anti-feeding efficacy of DPP assessed 3 and 4 weeks after treatment was 99.7% and 98.4%, respectively. The overall anti-feeding efficacy against the 867 mosquitoes released on the DPP-treated dogs on Days 21 and 28 was 99.0%.
Table 4 Geometric mean number of blood-fed and live mosquitoes per dog and immediate anti-feeding and knock-down efficacy (%) of DPP* after 1 h exposure (%) on Days 21 and 28 after administration in Group 2 and 4 (n = 16) dogs
Knock-down efficacy
In the no-DPP groups (1 and 3, n = 16), the mosquitoes did not exhibit signs of knock-down or death. The average proportion of live mosquitoes was 93.2% and 91.7% after exposure on Days 21 and 28, respectively (Table 4). In the DPP groups (2 and 4, n = 16), most of the mosquitoes showed impaired coordination and signs of death. The average proportion of live mosquitoes was 7.4% and 7.7% after exposure on Days 21 and 28, respectively (data not shown). On both days of mosquito exposure, the number of live mosquitoes was higher (P < 0.0001) in the no-DPP groups than in the DPP groups. The knock-down efficacy of DPP assessed 3 and 4 weeks after treatment was 91.8% and 91.9%, respectively. The overall knock-down efficacy against the 867 mosquitoes released on the DPP-treated dogs on Days 21 and 28 was 91.8%.
Insecticidal efficacy
The survival of the total cumulated 1600 mosquitoes found live or moribund at the end of the 1-h exposure to dogs was assessed daily over 3 days after each exposure day (Table 5). In the no-DPP groups (1 and 3, n = 16), the average survival of mosquitoes per dog was 73.5% (20.0 out of 27.2 incubated) and 80.7% (17.6 out of 21.8 incubated) 24 h after exposure on Days 21 and 28, respectively. The survival of the mosquitoes from the no-DPP groups decreased during the incubation and was recorded as 61.8% and 69.7% when assessed at 72 h after exposure on Days 21 and 28, respectively (data not shown). In the DPP groups (2 and 4, n = 16), the average survival of mosquitoes per dog was 12.7% (3.6 out of 28.4 incubated) and 15.5% (3.5 out of 22.6 incubated) 24 h after exposure on Days 21 and 28, respectively. After both mosquito exposure days, the number of mosquitoes dying during incubation was higher (P < 0.0001) in the DPP groups than in the no-DPP groups. After 24 h of incubation of live or moribund mosquitoes collected after dog exposure, the insecticidal efficacy of DPP was 81.8% and 80.4%, 3 and 4 weeks after DPP administration, respectively. The overall insecticidal efficacy of DPP against infected A. aegypti mosquitoes assessed after 24 h of incubation postexposure was 81.1%.
Table 5 Geometric mean number of live or moribund mosquitoes per dog and insecticidal efficacy (%) of DPP after 24, 48 and 72 h of incubation after exposure on Days 21 and 28
Theoretical L3 transmission
Estimated numbers of D. immitis L3 deposited per dog were calculated based on the number of fed mosquitoes and the number of L3 detected in mosquitoes before and after blood-feeding (Table 6). Across all groups, the average number of L3 per mosquito before exposure and blood-feeding (n = 192) was 3.2 (range 0–13) on Day 21 and 4.6 (range 0–21) on Day 28. After blood-feeding, the average number of L3 per mosquito (n = 79) was 0.8 (0–8) on both days of exposure (data not shown). In the no-DPP groups (1 and 3, n = 16), the estimated number of L3 deposited per dog was 32.2 and 45.3 on Days 21 and 28 (data not shown), respectively. In the DPP groups (2 and 4, n = 16), the estimated number of L3 deposited per dog was 0.1 and 1.1 on Days 21 and 28, respectively (data not shown). The total expected number of L3 deposited per dog after the two exposures to infected mosquitoes was higher (P < 0.0001) in Groups 1 and 3 (76 and 78, respectively) than in Groups 2 and 4 (2 and 1, respectively).
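The deposition estimates above are consistent with a simple per-dog reconstruction (our reading, not a formula stated in this section): the number of fed mosquitoes multiplied by the average number of L3 lost per mosquito during blood-feeding,

$$ \widehat{L3}_{\text{deposited}} = n_{\text{fed}}\times\left(\overline{L3}_{\text{pre}}-\overline{L3}_{\text{post}}\right), $$

where \( \overline{L3}_{\text{pre}} \) and \( \overline{L3}_{\text{post}} \) are the average larval loads per mosquito before and after blood-feeding.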
Table 6 Average number of Dirofilaria immitis (JYD-34 strain) infected mosquitoes (Aedes aegypti), average L3 load of mosquitoes used to infect dogs and average L3 potentially deposited on dogs
L3 transmission/development blocking efficacy
At necropsy, no worms were found outside of the circulatory system in any of the dogs, and all of the worms were viable based on their motility and appearance observed in saline after collection. The D. immitis worm counts were assessed for each dog on Days 176–177 (Table 7). In Groups 1 and 3, all the dogs were infected with at least 7 worms. In Group 2, 5 dogs were free of worms. None of the Group 4 dogs was infected. In Groups 1, 2, 3 and 4 the average (range) worm burden was 41.0 (21–66), 1.5 (0–7), 17.1 (7–39), and 0.0 (0–0) D. immitis per dog, respectively. Two of the dogs from Group 2 with no fed mosquitoes observed on Days 21 and 28 had a few (2 and 3) worms. The worm counts were lower (P < 0.0001) in the DPP-treated group (2) as compared to the untreated group (1) and in the DPP + MBO-treated group (4) as compared to the MBO-treated group (3) (Fig. 2). The heartworm preventive efficacy was 58.2% for MBO alone, 96.3% for DPP alone and 100% for DPP combined with MBO.
Table 7 Worm counts in dogs 148 to 149 days after the last infection by exposure to Aedes aegypti mosquitoes previously infected with Dirofilaria immitis (JYD-34 strain) 21 and 28 days after DPP administration
Number of worms at necropsy 148 to 149 days after the last of two weekly exposures to infected mosquitoes (Days 21 and 28) in dogs either untreated, treated topically with DPP* (Day 0), treated orally with MBO† (Day 51) or treated with DPP (Day 0) and MBO (Day 51)‡. *DPP: dinotefuran + permethrin + pyriproxyfen (Vectra® 3D). †MBO: milbemycin oxime (Interceptor®)
‡See Table 7 for details
Methodological considerations
The model was considered successful since the control group was infected at rates (21–64 worms/dog) close to those usually targeted with other experimental infection techniques, such as the use of subcutaneous injection of L3 (up to 37 worms/dog) [20, 21], or to natural worm burdens where up to 85 worms/dog were found [22]. This model is expected to be close to the natural exposure of dogs by infected mosquitoes, but higher or lower exposure levels can certainly be achieved because animals can be bitten by uninfected mosquitoes, as well as mosquitoes with variable and unpredictable numbers of L3. A study performed in Italy documented the exposure of dogs that could be bitten by about 80 mosquitoes per night and per individual [23]. This number is probably an underestimation of some common situations at certain locations in the United States. In Alaska, more than 1000 mosquito bites per hour were reported over 4 consecutive hours of exposure to natural infestations by eight volunteers [24]. Unlike in humans, for whom a human biting rate is documented for malaria mosquito vectors [25], a dog biting rate is missing for the competent vectors of heartworm.
Efficacy against mosquitoes (A. aegypti)
The efficacy of DPP against mosquitoes was assessed using 32 dogs that were either treated with DPP (n = 16, Groups 2 and 4) on Day 0 or untreated with DPP (n = 16, Groups 1 and 3) at the time of mosquito exposures that occurred on Days 21 and 28.
The anti-feeding efficacy against mosquitoes is a consequence of the repellent effect of the product. It is also directly affected by the feeding performance of the mosquitoes on the untreated animals. After release of the infected mosquitoes into the exposure containers with the animals, we observed an overall impaired behavior of the insects as compared to previous observations on uninfected mosquitoes of the same species and strain [18]. Although experimental transmission from dog to dog was recorded [26], to our knowledge, this is the first time that the feeding behavior of free, heartworm-infected A. aegypti mosquitoes has been documented on dogs. In the present experiment, the mosquitoes were slower to start feeding and did not move as well as usual. The age and health status of mosquitoes are expected to influence their feeding behavior.
In our earlier study [18], the mosquitoes were 4 to 5 days old and not infected, while in the present study, the mosquitoes were 20 to 21 days old and heavily infected with D. immitis L3. Feeding behavior is likely to be impaired in older, unhealthy mosquitoes. This disruption of behavior was observed with other filarial-infected A. aegypti mosquitoes [27] and can explain the lower feeding rate (42–50% in the no-DPP groups) versus the 79–97% observed in the previous study with uninfected mosquitoes [18]. An increased duration of exposure (beyond 1 h) was not expected to improve this feeding rate, since the feeding occurred mainly in the first 20 min after release of the mosquitoes.
The insecticidal efficacy of DPP slightly decreased over the incubation period since the mosquitoes were already over 16 days old, and natural mortality occurred in the DPP-untreated groups as well. This phenomenon contributes greatly to lowering the measured insecticidal efficacy of DPP, especially when compared with assessments performed on younger mosquitoes of the same species [16].
As demonstrated in a side experiment, both effects are triggered only by contact between the treated animal and the insect [28]. There is no evidence of action by vapor-pressure release of any of the DPP active ingredients in the cage, which was under natural ventilation. At 20 °C, the vapor pressures of cis- and trans-permethrin are 2.5 and 1.5 μPa [29], respectively, and the vapor pressure increases only to 6.58 μPa at 40 °C, above the dog's skin temperature [30].
Efficacy against heartworm (D. immitis)
There was an obvious relationship between the number of fed, infected mosquitoes on dogs and the worm burden determined at necropsy (Fig. 3): higher worm burdens were observed in dogs on which more than 15 blood-fed mosquitoes were found. However, we noticed that worms (n = 2 and 3) were found in two of the dogs (Group 2, DPP-treated) for which no visually fed mosquitoes were recorded at any of the mosquito exposures. This important observation underlines not only that repellent insecticidal products cannot provide 100% protection against mosquito bites, but also that the visual assessment of the engorgement of mosquitoes cannot be considered a standalone and reliable way to assess the potential heartworm transmission to dogs [18]. In the present experiment, a small but unknown number of mosquitoes were able to transmit heartworm L3 to dogs without visually detectable blood-feeding. Since these DPP-treated dogs were sedated, we assume that the blood-feeding process was disrupted by the treatment and that probing through the skin allowed the deposition of a few heartworm L3.
Relationship between the cumulated number of fed infected mosquitoes per dog on Days 21 and 28 and the Dirofilaria immitis worm burden in the circulatory system of the untreated (Group 1), DPP*-treated (Group 2), MBO†-treated (Group 3) or DPP + MBO-treated (Group 4) dogs on Days 176–177. *DPP: dinotefuran + permethrin + pyriproxyfen (Vectra® 3D). †MBO: milbemycin oxime (Interceptor®)
To our knowledge, the influence of a topical repellent (imidacloprid-permethrin, IP) in the prevention of heartworm transmission has been assessed previously only once [15]. In that experiment, however, the methodology was completely different: there were three dogs per group compared with eight in the present study, the experimental infection of the control untreated dogs (1–4 worms/dog) was lower than the threshold required by the FDA (≥5 worms/dog), and the animals were treated with the product 3 or 36 days prior to exposure to D. immitis-infected mosquitoes. Regardless of the time interval between administration and challenge, the anti-feeding efficacy of the repellent tested was markedly lower (70.2% and 72.7%) than that of DPP (>98%). In two different studies, the anti-feeding efficacy of IP against A. aegypti was only 50.4% [12] and 51.5% [11] 28 days after administration. After a topical application of a product containing fipronil and permethrin (FP, not available in the United States), the anti-feeding effect against A. aegypti was >96.1%, and the insecticidal efficacy was >80% over a 28-day period [13]. Wide variations in efficacy can also occur between formulations, since the anti-feeding effect of a different FP combination tested on dogs against A. aegypti was 77% to 91.7% and the insecticidal (killing) efficacy never exceeded 59% during weekly challenges repeated over a 1-month period [31].
In the present study, we selected a challenge based on a well-known resistant heartworm isolate (JYD-34). The efficacy results of MBO against JYD-34 obtained in the present study (58.2%) are within the range of previous efficacy measurements (52.2% after three monthly consecutive administrations and 72.0%, 72.2%, and 76.1% after six, three [19], and two repeated administrations [32], respectively). Until now, all available MLs have been shown [19, 32–34] (JWM, unpublished data, March 2013) to fail against this isolate, consistent with the lack of efficacy reported from the field. Of course, the epidemiology of heartworm resistance in the United States and across worldwide endemic areas is not well known at present. However, this challenge is representative of other situations in which the MLs suffer from insufficient efficacy against heartworm. For example, we know that the risk of efficacy failure will increase with missed chemoprophylactic doses.
This experiment demonstrated the benefit of adding insecticidal repellency against the mosquito vectors to larvicidal efficacy against these pathogenic worms. The benefit of the concomitant use of DPP and MBO is risk reduction implemented at two levels: transmission to the dog and development of the worms. The two products rely on very different and complementary strategies that can easily be combined in the field for increased protection. Such strategies are already implemented for human health protection. For example, vector control is one of the core measures against dengue [35], malaria and, more recently, Zika virus. In dogs, the combination of a larvicide with an insecticidal repellent appears to be realistic and achievable by pet owners from an economic point of view, since DPP also provides protection against a wide spectrum of ectoparasites that usually infest pets.
DPP repelled and killed mosquitoes that were capable of transmitting heartworm L3 to dogs. The combination of DPP + MBO had better efficacy for protecting dogs against heartworm transmission and infection than MBO alone. This research supports a "Double Defense" protocol in which DPP can be combined with any heartworm preventive drug. All dogs exposed to heartworm infection should benefit from the mosquito repellency and insecticidal efficacy of DPP added to their heartworm preventive protocol, and this benefit is even more obvious when macrocyclic lactone-resistant strains of heartworm are involved or lack of compliance in the administration of ML preventives is known or suspected.
DPP:
Dinotefuran-permethrin-pyriproxyfen
IGR:
Insect growth regulator
MBO:
Milbemycin oxime
MF:
Microfilariae
ML:
Macrocyclic lactone
American Heartworm Society (AHS). Current canine guidelines for the diagnosis, prevention and management of heartworm (Dirofilaria immitis) infections in dogs. 2014. https://www.heartwormsociety.org/images/pdf/2014-AHS-Canine-Guidelines.pdf. Accessed 8 Sept 2016.
McCall JW, Genchi C, Kramer L, Guerrero J, Venco L. Heartworm disease in animals and humans. Adv Parasitol. 2008;66:193–285.
Otto GH, Jachowski LA. Mosquitoes and canine heartworm disease. In: Morgan HC, editor. Proceedings of the heartworm symposium 1981. Edwardsville, KS: Veterinary Medicine Publishing Company; 1981. p. 17–32.
Atkins CE, Murray MJ, Olavessen LJ, Burton KW, Marshall JW, Brooks CC. Heartworm "lack of effectiveness" claims in the Mississippi delta: computerized analysis of owner compliance – 2004–2011. Vet Parasitol. 2014;206:106–13. https://doi.org/10.1016/j.vetpar.2014.08.013.
Pulaski CN, Malone JB, Bourguinat C, Prichard R, Geary T, Ward D, et al. Establishment of macrocyclic lactone resistant Dirofilaria immitis isolates in experimentally infected laboratory dogs. Parasit Vectors. 2014;7:494. https://doi.org/10.1186/s13071-014-0494-6.
Bourguinat C, Lee AC, Lizundia R, Blagburn BL, Liotta JL, Kraus MS, et al. Macrocyclic lactone resistance in Dirofilaria immitis: failure of heartworm preventives and investigation of genetic markers for resistance. Vet Parasitol. 2015;210:167–78. https://doi.org/10.1016/j.vetpar.2015.04.002.
Nguyen C, Koh WL, Casteriano A, Beijerink N, Godfrey C, Brown G, et al. Mosquito-borne heartworm Dirofilaria immitis in dogs from Australia. Parasit Vectors. 2016;9:535.
Scoles GA. Vectors of canine heartworm in the United States: a review of the literature including new data from Indiana, Florida, and Louisiana. In: Seward RL, editor. Recent advances in heartworm disease: symposium '98. Batavia, IL: American Heartworm Society; 1998. p. 21–36.
Bowman DD, Atkins CE. Heartworm biology, treatment, and control. Vet Clin Small Anim. 2009;39:1127–58.
Ledesma N, Harrington L. Mosquito vectors of dog heartworm in the United States: vector status and factors influencing transmission efficiency. Top Comp Anim Med. 2011;26:178–85.
Cruthers L, Slone R, Arther RG, Davis W, Settje T. K9 Advantix (imidacloprid plus permethrin) for controlling ticks and mosquitoes on dogs. Compend Contin Educ Pract Vet. 2003;25(suppl):15–8.
Tiawsirisup S, Nithiuthai S, Kaewthamasorn M. Repellent and adulticide efficacy of a combination containing 10% imidacloprid and 50% permethrin against Aedes aegypti mosquitoes on dogs. Parasitol Res. 2007;101:527–31. https://doi.org/10.1007/s00436-007-0508-9.
Fankhauser B, Dumont P, Hunter JS III, McCall JW, Kaufmann C, Mathis A, et al. Repellent and insecticidal efficacy of a new combination of fipronil and permethrin against three mosquito species (Aedes albopictus, Aedes aegypti and Culex pipiens) on dogs. Parasit Vectors. 2015;8:64.
Austgen LE, Bowen RA, Bunning ML, Davis BS, Mitchell CJ, Chang GJ. Experimental infection of cats and dogs with West Nile virus. Emerg Infect Dis. 2004;10:82–6.
Hayasaki M, Saeki H. Inhibition and prevention efficacy against mosquito bloodsucking and Dirofilaria immitis infection by administration of topical insecticide. J Vet Med Sci. 2009;71:1049–52.
Franc M, Genchi C, Bouhsira E, Warin S, Kaltsatos V, Baduel L, et al. Efficacy of dinotefuran, permethrin and pyriproxyfen combination spot-on against Aedes aegypti mosquitoes on dogs. Vet Parasitol. 2012;189:333–7. https://doi.org/10.1016/j.vetpar.2012.04.026.
Bouhsira E, Lienard E, Lyazrhi F, Jacquiet P, Varloud M, Deflandre A, et al. Repellent and insecticidal efficacy of a combination of dinotefuran, pyriproxyfen and permethrin (Vectra® 3D) against Culex pipiens in dogs. Parasite Epidemiol Control. 2016;1:233–8.
McCall J, Hodgkins E, Varloud M, Mansour A, DiCosty U. Blocking the transmission of heartworm (Dirofilaria immitis) to mosquitoes (Aedes aegypti) by weekly exposure for one month to microfilaremic dogs treated once topically with dinotefuran-permethrin-pyriproxyfen. Parasit Vectors. 2017;10(Suppl 2). https://doi.org/10.1186/s13071-017-2439-3.
Blagburn BL, Arther RG, Dillon AR, Butler JM, Bowles JV, von Simson C, et al. Efficacy of four commercially available heartworm preventive products against the JYD-34 laboratory strain of Dirofilaria immitis. Parasit Vectors. 2016;9:191. https://doi.org/10.1186/s13071-016-1476-7.
Snyder DE, Wiseman S, Cruthers LR, Slone RL. Ivermectin and milbemycin oxime in experimental adult heartworm (Dirofilaria immitis) infection of dogs. J Vet Intern Med. 2011;25:61–4. https://doi.org/10.1111/j.1939-1676.2010.0657.x.
McCall JW, Kramer L, Genchi C, Guerrero J, Dzimianski MT, Mansour A, et al. Effects of doxycycline on heartworm embryogenesis, transmission, circulating microfilaria, and adult worms in microfilaremic dogs. Vet Parasitol. 2014;206:5–13.
Kaiser L, Williams JF. Dirofilaria immitis: worm burden and pulmonary artery proliferation in dogs from Michigan (United States). Vet Parasitol. 2004;124:125–9.
Capelli G, Frangipane di Regalbono A, Simonato G, Cassini R, Cassin S, Cancrini G, et al. Risk of canine and human exposure to Dirofilaria immitis infected mosquitoes in endemic areas of Italy. Parasit Vectors. 2013;6:60. https://doi.org/10.1186/1756-3305-6-60.
Lillie TH, Schreck CE, Rahe AJ. Effectiveness of personal protection against mosquitoes in Alaska. J Med Entomol. 1988;25:475–8.
Kenea O, Balkew M, Tekie H, Gebre-Michael T, Deressa W, Loha E, et al. Human-biting activities of Anopheles species in south-central Ethiopia. Parasit Vectors. 2016;9:527.
Bickley WE. Failure of Culex salinarius to transmit D. immitis from dog to dog. Mosquito News. 1976;36:366–7.
Gleave K, Cook D, Taylor MJ, Reimer LJ. Filarial infection influences mosquito behaviour and fecundity. Sci Rep. 2016;6:36319. https://doi.org/10.1038/srep36319.
McCall J, Hodgkins E, Ramiro V, Varloud M. Contact is required between dogs treated with Vectra® 3D and Aedes aegypti mosquitoes for insecticidal efficacy. ESCCAP Symposium Vector-Borne Diseases 2016 Granada: ESCCAP; 2016.
Wells D, Terence Grayson B, Langner E. Vapour pressure of permethrin. Pest Manag Sci. 1986;17:473–6. https://doi.org/10.1002/ps.2780170503.
Apparao K, Surendra Babu MS, Krishna Rao N, Kumar G, Nageswara RT. Determination of vapour pressure of permethrin technical pesticide using gas saturation method with gas chromatographic method. MSAIJ. 2016;14:456–63.
Virbac: Effitix® Technical standards:22. https://www3.epa.gov/pesticides/chem_search/cleared_reviews/csr_PC-109701_08-Nov-11.pdf. Accessed 17 Oct 2017.
CVMP: Assessment report for Nexgard Spectra. 2014. http://www.ema.europa.eu/docs/en_GB/document_library/EPAR_Public_assessment_report/veterinary/003842/WC500181963.pdf. Accessed 16 Oct 2017.
Bayer: Data on file. Study TRS 2013. http://www.bayerdvm.com/show.aspx/productdetail/advantage-multi-for-dogs/recent-heartworm-findings. Accessed 19 Dec 2016.
Bowman DD, McTier TL, Adams E, Mahabir S, Login JA, et al. Evaluation of the efficacy of moxidectin (ProHeart® 6) against a resistant isolate of Dirofilaria immitis (JYD-34) in dogs. Parasit Vectors. 2017;10(Suppl 2). https://doi.org/10.1186/s13071-017-2431-y.
Rey JR. Dengue in Florida (USA). Insects. 2014;5:991–1000. https://doi.org/10.3390/insects5040991.
The authors acknowledge the TRS Labs team members for their respective contributions in this experiment. The authors are grateful to the SOLADIS team and to Maëva Deniaud for their contributions in the statistical analysis of the data.
This study was funded by Ceva Santé Animale. The article publication fee was funded by the American Heartworm Society.
Ethics approvals and consent to participate
The animals in this study were maintained with due regard for their welfare and in accordance with applicable laws, regulations and guidelines. The protocol was approved by the TRS Labs' Institutional Animal Care and Use Committee prior to initiation of the study.
It is not appropriate to store and share this data, as the authors are planning to include some of it in future publications.
About this supplement
This article has been published as part of Parasites and Vectors Volume 10 Supplement 2, 2017: Proceedings of the 15th American Heartworm Society Triennial Symposium 2016. The full contents of the supplement are available online at https://parasitesandvectors.biomedcentral.com/articles/supplements/volume-10-supplement-2.
TRS Labs, Inc., 215 Paradise Boulevard, Athens, GA, 30607, USA
John W. McCall
, Abdelmoneim Mansour
, Utami DiCosty
, Scott McCall
, James Carmichael
, Ben Carson
& Justin Carter
Ceva Santé Animale, Libourne, France
Marie Varloud
Ceva Animal Health, Lenexa, Kansas, USA
Elizabeth Hodgkins
JWM, EH and MV wrote the protocol; AM, UD, SC, JC, BC, JC and JWM performed the technical aspects of the study; and JWM, EH and MV prepared the initial draft of the manuscript. All authors read and approved the final manuscript.
Correspondence to John W. McCall.
John McCall is Professor Emeritus of the University of Georgia and Chief Scientific Officer of TRS Labs, Inc., a research organization that was contracted by Ceva Santé Animale to perform the study. Abdelmoneim Mansour, Utami DiCosty, Scott McCall, James Carmichael, Ben Carson and Justin Carter are employees of TRS Labs, Inc. Marie Varloud and Elizabeth Hodgkins are employees of Ceva Santé Animale.
McCall, J.W., Varloud, M., Hodgkins, E. et al. Shifting the paradigm in Dirofilaria immitis prevention: blocking transmission from mosquitoes to dogs using repellents/insecticides and macrocyclic lactone prevention as part of a multimodal approach. Parasites Vectors 10, 525 (2017) doi:10.1186/s13071-017-2438-4
Prevention strategy
Dirofilaria immitis
Lethal repellency
BMC Bioinformatics
Volume 21 Supplement 6
Selected articles from the 15th International Symposium on Bioinformatics Research and Applications (ISBRA-19): bioinformatics
MHSNMF: multi-view hessian regularization based symmetric nonnegative matrix factorization for microbiome data analysis
Yuanyuan Ma1 (ORCID: orcid.org/0000-0001-9640-3362),
Junmin Zhao2 &
Yingjun Ma3
BMC Bioinformatics volume 21, Article number: 234 (2020) Cite this article
With the rapid development of high-throughput techniques, multiple heterogeneous omics datasets have accumulated rapidly (e.g., genomics, proteomics and metabolomics data). Integrating information from multiple sources or views is challenging, yet it is needed to obtain a profound insight into the complicated relations among micro-organisms, nutrients and the host environment. In this paper we propose a multi-view Hessian regularization based symmetric nonnegative matrix factorization algorithm (MHSNMF) for clustering heterogeneous microbiome data. Compared with many existing approaches, the advantages of MHSNMF lie in: (1) MHSNMF combines multiple Hessian regularizers to leverage the high-order information from the same cohort of instances with multiple representations; (2) MHSNMF utilizes the advantages of SNMF and naturally handles the complex relationships among microbiome samples; (3) using the consensus matrix obtained by MHSNMF, we also design a novel approach to predict the classification of new microbiome samples.
We conduct extensive experiments on two real-world datasets (the Three-source dataset and the Human Microbiome Project dataset), and the experimental results show that the proposed MHSNMF algorithm outperforms other baseline and state-of-the-art methods. Compared with other methods, MHSNMF achieves the best performance (accuracy: 95.28%, normalized mutual information: 91.79%) on microbiome data. This suggests the potential application of MHSNMF in microbiome data analysis.
Results show that the proposed MHSNMF algorithm can effectively combine the phylogenetic, transporter, and metabolic profiles into a unified paradigm to analyze the relationships among different microbiome samples. Furthermore, the proposed prediction method based on MHSNMF has been shown to be effective in judging the types of new microbiome samples.
With the rapid development of bio-techniques such as high-throughput sequencing, plenty of multi-omics data (e.g., metagenomics and metabolomics data) have been generated in microbiome studies. These resources pave the way for researchers to explore and understand the structure and functions of microbiome communities. In addition, they help to reveal the relationships between microbiota and the host environment, and between microbes and diseases. In order to further dissect the structure and functions of the microbiome, many microbiome projects, including the Human Microbiome Project (HMP) [1], the Integrative Human Microbiome Project (iHMP) [2], and Metagenomics of the Human Intestinal Tract (MetaHIT) [3], have been launched and have accumulated large amounts of microbiome data. With suitable analysis tools, these data can be computationally represented as phylogenetic or functional composition profiles of the microbiome [4]. Although some approaches have been designed to analyze the differences and connections among microbiome samples, they only considered one kind of biological profile data; thus, the conclusions obtained from these approaches may be one-sided or incorrect. In order to draw a reasonable conclusion, integrating multiple omics data from different biological scenarios to jointly analyze latent patterns becomes a feasible way.
However, to the best of our knowledge, there have been few approaches that simultaneously combine multiple biological profiles into one paradigm to study the underlying microbiome structure shared by different representations. Hence, it is urgent and necessary to design novel data integration methods and tools to explore the complicated relationships among microorganisms.
As a kind of clustering method, nonnegative matrix factorization (NMF) has drawn great public attention recently. In text mining, image processing, bioinformatics and other fields, many new data integration methods based on NMF have emerged. Greene et al. proposed a joint nonnegative matrix factorization algorithm that concatenates the features of all views to form a new representation, which is then factorized into two low-rank matrices, one of which is used as the cluster indicator [5]. Liu et al. proposed the Multi-NMF algorithm, which searches for a common consensus solution across different views [6]. Zhang et al. developed a novel NMF framework (CSMF) to reveal the common and specific patterns obtained from multiple interrelated biological scenarios [7]. All these methods can obtain good performance when the data distribution satisfies certain conditions, e.g., linear relationships. However, real-world data often have complicated structures and nonlinear relations. For example, the interactions among microbes are easily influenced by food intake, the host environment or other species, particularly for the intestinal flora, so the relationships among microbes may be delicate and complicated. Traditional approaches based on NMF are not sufficient for revealing the latent relations hidden in multiple biological data profiles.
In order to improve clustering performance, the graph Laplacian, which makes use of the geometric information of the original data, was introduced into the NMF framework. Cai et al. proposed a graph-regularized nonnegative matrix factorization approach (GNMF) for data clustering and obtained good performance [8]. Jiang et al. proposed a new joint nonnegative matrix factorization algorithm with a robust Laplacian graph (LJ-NMF) to cluster microbiome data [4] and achieved better clustering performance. Chen et al. proposed a novel co-module mining framework based on tri-factor nonnegative matrix factorization (NetNMF) to identify heterogeneous biological modules [9], which is easily extended to the Laplacian case with prior knowledge. Even though the Laplacian can boost performance, Kim et al. pointed out that Laplacian regularization can lead to poor extrapolating power because it biases the solution towards a constant function [10]. Compared with Laplacian regularization, the Hessian can not only effectively exploit the local geometric information of the original data, but also extrapolate beyond the data points [11].
To solve the above problems, in this paper we propose a novel multi-view Hessian regularization based symmetric nonnegative matrix factorization algorithm (MHSNMF) to integrate multiple biological profiles into a unified framework to analyze the potential clustering patterns across all views. MHSNMF utilizes the local geometric information of different views and automatically assigns corresponding weights to each view in each iteration. We conduct extensive experiments on two real datasets, and the experimental results show that the proposed MHSNMF algorithm outperforms other integration approaches, suggesting its potential application in microbiome data analysis.
The contributions of this study lie in: (1) an effective integration method to explore the differences among distinct microbiome samples with multiple views is proposed; the experimental results show that it outperforms state-of-the-art algorithms in terms of AC and NMI; (2) high-order information of the original data is exploited to reveal the underlying clustering patterns across different views; (3) a novel approach based on the consensus matrix obtained from MHSNMF is proposed to predict the classification of new microbiome samples; extended experiments demonstrate the effectiveness of the proposed method. Figure 1 shows the flowchart of the MHSNMF algorithm.
Illustration of the MHSNMF framework on human microbiome data. a Example representation of the phylogenetic profile and metabolic profile for the same cohort of samples. b Sample-sample similarity matrices obtained from each view. c Using MHSNMF, each similarity matrix is factorized into a low-rank matrix and its transpose. The matrix fusion process iteratively updates each view's clustering with information from the other views. d The iterative fusion converges to the final consensus matrix H∗. e Given a new sample xnew from the i ‐ th view, we can obtain its subspace representation h via H∗ and the proposed mapping approach. Here, \( {V}_{tr}^i \) indicates the training samples from the i ‐ th view, S denotes the similarity between xnew and \( {V}_{tr}^i \), and α is the regularization parameter. f Once h is obtained, applications such as classification and prediction follow naturally
The rest of this paper is organized as follows: in the next section, a brief review of SNMF and multi-view clustering is provided, and the multi-view Hessian regularization based SNMF algorithm is proposed. Then, extensive experimental results and comparisons with other methods are presented. Finally, conclusions and future research plans are given.
Symmetric nonnegative matrix factorization
Nonnegative matrix factorization (NMF) has been widely used in many fields, including text clustering, image recognition and bioinformatics, and has drawn great attention. In NMF, the data matrix V is factorized into the product of two low-rank matrices W and H. Each column V.i of the original matrix V can be approximated as a linear combination of the basis vectors W.j, with coefficients given by the corresponding elements of H.i. Hence, when the data have a linear structure, NMF can achieve good performance. However, real-world data distributions are usually complex, and it is hard to dissect the relations among different objects, especially for microbial data. Symmetric nonnegative matrix factorization (SNMF) views the data samples as vertices in a graph and minimizes a certain objective function of graph cuts [12]. SNMF can adopt multiple metrics to characterize the similarity between two nodes, including the inner-product kernel, the Gaussian kernel, correlation coefficients and so on.
The objective function of SNMF is defined as:
$$ O=\underset{H\ge 0}{\mathit{\operatorname{Min}}}{\left\Vert A-H{H}^T\right\Vert}_F^2. $$
where ‖∗‖F is the Frobenius norm of a matrix, \( A\in {R}_{+}^{n\times n} \) is the similarity matrix, \( H\in {R}_{+}^{n\times k} \) is the factorized low-rank matrix, and k is the degree of factorization. Aij denotes the similarity between the i ‐ th and j ‐ th nodes.
Eq. 1 is minimized by iteratively updating H with the following rule [11, 13]:
$$ {H}_{ij}\leftarrow {H}_{ij}\frac{(AH)_{ij}}{{\left(H{H}^TH\right)}_{ij}}. $$
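As a concrete illustration, the update of Eq. 2 can be implemented in a few lines of NumPy. The following is a minimal sketch (random nonnegative initialization and a fixed iteration count; the function name and defaults are ours, not from the paper):

```python
import numpy as np

def snmf(A, k, n_iter=500, eps=1e-10, seed=0):
    """Symmetric NMF: minimize ||A - H H^T||_F^2 over H >= 0
    via the multiplicative rule of Eq. 2."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))          # nonnegative initialization
    for _ in range(n_iter):
        num = A @ H                          # (AH)_{ij}
        den = H @ (H.T @ H)                  # (H H^T H)_{ij}
        H *= num / np.maximum(den, eps)      # eps guards against division by zero
    return H

# Hard cluster assignment: each sample joins its largest coordinate in H.
# labels = snmf(A, k=6).argmax(axis=1)
```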
Once the similarity matrix A is established, the low-rank solution H is easily obtained. For text data, the cosine function is used to compute the similarity between two documents. For microbiome data, the Gaussian kernel function can be used to measure the similarity between different microbiome samples:
$$ {W}_{ij}=\exp \left(-\frac{{\left\Vert {V}_i-{V}_j\right\Vert}_F^2}{\sigma_i{\sigma}_j}\right)\left(i\ne j\right). $$
where Vi denotes the i ‐ th data point in original matrix. σi is the Euclidean distance between Vi and its k ‐ th neighbor. We set k to be 7 as suggested in [14]. Note that the self-similarity of the nodes is eliminated in all cases.
Next, we construct a sparse graph for the microbiome sample-sample similarity network; the edge weights are redefined as
$$ {W}_{ij}=\left\{\begin{array}{l}{W}_{ij}\kern1.5em \mathrm{i}\mathrm{f}\kern0.5em \mathrm{i}\in N(j)\ \mathrm{or}\kern0.5em \mathrm{j}\in N(i)\ \\ {}0\kern2.25em \mathrm{otherwise}\end{array}\right.. $$
where N(i) is the neighborhood of node i. In our study, we set the number of the neighbors to be 12 empirically.
Furthermore, the obtained weight matrix Wij is normalized to
$$ A={D}^{-1/2}W{D}^{-1/2}. $$
where D is the diagonal matrix and \( {D}_{ii}={\sum}_{j=1}^n{W}_{ij} \).
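Putting Eqs. 3–5 together, a sketch of the affinity construction might look as follows (k_sigma = 7 for the kernel bandwidth and p = 12 neighbors for sparsification, matching the values above; the helper name is ours):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def normalized_affinity(V, k_sigma=7, p=12):
    """Build A = D^{-1/2} W D^{-1/2} from the rows of V (Eqs. 3-5)."""
    n = V.shape[0]
    dist = squareform(pdist(V))                    # pairwise Euclidean distances
    sigma = np.sort(dist, axis=1)[:, k_sigma]      # distance to the k-th neighbor
    W = np.exp(-dist**2 / np.outer(sigma, sigma))  # Gaussian kernel (Eq. 3)
    np.fill_diagonal(W, 0.0)                       # drop self-similarity
    # keep W_ij only if i is in N(j) or j is in N(i) (Eq. 4)
    nn = np.argsort(dist, axis=1)[:, 1:p + 1]
    mask = np.zeros((n, n), dtype=bool)
    mask[np.repeat(np.arange(n), p), nn.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)
    d_inv = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    return W * np.outer(d_inv, d_inv)              # normalization (Eq. 5)
```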
Multi-view symmetric nonnegative matrix factorization
Given a multi-view dataset \( \left\{{V}^1,{V}^2,\cdots, {V}^{n_v}\right\} \), the corresponding similarity matrices are represented as \( \left\{{A}^1,{A}^2,\cdots, {A}^{n_v}\right\} \), where nv denotes the number of views. Inspired by the study [6], multi-view symmetric nonnegative matrix factorization (multi-view SNMF) can be formulated as
$$ {\displaystyle \begin{array}{l}O=\mathit{\operatorname{Min}}\left(\sum \limits_{v=1}^{n_v}{\left\Vert {A}^v-{H}^v{\left({H}^v\right)}^T\right\Vert}_F^2+\sum \limits_{v=1}^{n_v}{\gamma}^v{\left\Vert {H}^v{Q}^v-{H}^{\ast}\right\Vert}_F^2\right)\\ {}\mathrm{s}.\mathrm{t}.{H}^v,{H}^{\ast}\ge 0.\end{array}} $$
where H∗ denotes the consensus matrix toward which the solutions of all views are regularized. \( {Q}^v= Diag\left(1/\sum \limits_{i=1}^m{H}_{i,1}^v,1/\sum \limits_{i=1}^m{H}_{i,2}^v,\cdots, 1/\sum \limits_{i=1}^m{H}_{i,k}^v\right) \) is an auxiliary matrix which guarantees that the clustering solutions of different views are comparable. γv is the weight of the v ‐ th view and keeps a balance between the SNMF reconstruction error and the regularization term (the second term of Eq. 6). In this study, we set all the γv to be equal for convenience of computation.
Multi-view SNMF follows the basic hypothesis that there exists an underlying consensus structure shared by all views. This is reasonable because each view describes only part of the truth; nevertheless, these partial descriptions are complementary components of the underlying structure.
Hessian regularization
Given a smooth manifold M ⊂ Rn, at each point p the tangent space is defined as Tp(M) ⊂ Rn. Np denotes the neighborhood of p. For each point p' ∈ Np, there is a unique closest point v' ∈ Tp(M) such that the implied mapping p' → v' is smooth. In order to obtain the Hessian of a function f : M ↦ R, an orthogonal coordinate system of Tp(M) needs to be defined. This can be achieved via the d largest eigenvectors of Np, which correspond to an orthogonal basis of Tp(M). Hence, in the tangent space f(p) can be represented as g(x) : Tp(M) ↦ R. In this way, the Hessian of f at point p can be defined as
$$ {\left({H}_f^{\mathrm{tan}}(p)\right)}_{i,j}=\frac{\partial }{\partial {x}_i}\frac{\partial }{\partial {x}_j}{\left.g(x)\right|}_{x=0}. $$
Previous studies point out that the Frobenius norm of the Hessian matrix is invariant to coordinate changes [10]. Hence, the total Hessian is obtained to measure the average curviness of f along the manifold M as follows
$$ H(f)={\int}_{p\in M}{\left\Vert {H}_f^{\mathrm{tan}}(p)\right\Vert}_F^2 dp. $$
Hessian regularization (HR) steers the solution to vary smoothly along the manifold. Compared with Laplacian regularization, the Hessian fits the data well and has a stronger capability to extrapolate to unseen data [15]. Next, we summarize the computation of the Hessian as follows.
For each sample vi, find its k nearest neighbors Ni and construct the neighborhood matrix Vi whose rows are the centralized samples vj = vj − vi for each vj ∈ Ni.
Conduct SVD on Vi so that Vi = UDST. The first d columns of U give the tangent coordinates of the points in Ni.
Construct the matrix Mi = [1, U.1, U.2, ⋯, U.d, U.1U.1, U.1U.2, ⋯, U.dU.d], where 1 denotes the all-ones vector, followed by the first d columns of U and d × (d + 1)/2 columns consisting of the various cross products and squares of these d columns. Then perform the Gram-Schmidt process on Mi, yielding \( \hat{M^i} \). The last d × (d + 1)/2 columns of \( \hat{M^i} \) are extracted to form Bi. Bi is the Hessian matrix of the tangent space formed by the k nearest neighbors of the i ‐ th sample.
Thus, a symmetric Hessian matrix can be obtained by summing up all point's Hessian energy:
$$ {B}_{ij}=\sum \limits_l\sum \limits_r\left({\left({B}^l\right)}_{ri}{\left({B}^l\right)}_{rj}\right). $$
where l indexes the data points on the manifold and i denotes the i ‐ th data point in Nl.
In contrast to Laplacian regularization (LR), HR can make full use of the intrinsic geometric information of the data manifold. It can not only fit the training data well, but also predict unseen data points [16]. In this paper, we use multiple Hessian matrices obtained from different data representations to maintain structural consistency during dimension reduction, analogously to the Laplacian case. A sketch of the Hessian computation is given below.
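The following is a minimal NumPy sketch of steps (1)–(3) and Eq. 9. The intrinsic dimension d and the neighborhood size are assumptions the caller must supply (a neighborhood of 30 is used later in the experiments):

```python
import numpy as np

def hessian_energy(V, n_neighbors=30, d=2):
    """Sketch of the symmetric Hessian energy matrix B of Eq. 9.
    V: n x m data matrix; d: assumed intrinsic dimension of the manifold."""
    n = V.shape[0]
    dp = d * (d + 1) // 2
    dist = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
    B = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:n_neighbors + 1]   # k nearest neighbors of v_i
        Vi = V[idx] - V[i]                             # centralized neighborhood (step 1)
        U = np.linalg.svd(Vi, full_matrices=False)[0]  # tangent coordinates (step 2)
        X = U[:, :d]
        # M^i = [1, tangent coords, their squares and cross products] (step 3)
        quad = [X[:, a] * X[:, b] for a in range(d) for b in range(a, d)]
        M = np.column_stack([np.ones(n_neighbors), X] + quad)
        Q = np.linalg.qr(M)[0]                         # Gram-Schmidt via QR
        Bi = Q[:, -dp:]                                # last d(d+1)/2 columns
        B[np.ix_(idx, idx)] += Bi @ Bi.T               # accumulate Eq. 9
    return B
```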
Multi-view hessian regularization based symmetric nonnegative matrix factorization
Based on the analyses above, we propose a novel data integration method, called multi-view Hessian regularization based symmetric nonnegative matrix factorization (MHSNMF). MHSNMF combines the advantages of SNMF and Hessian regularization, and takes full advantage of the local geometric structure of the original data. Hence, MHSNMF is theoretically expected to perform better.
The objective function of MHSNMF can be formulated as
$$ {\displaystyle \begin{array}{l}O=\mathit{\operatorname{Min}}\left\{\sum \limits_{v=1}^{n_v}{\left\Vert {A}^v-{H}^v{\left({H}^v\right)}^T\right\Vert}_F^2+\sum \limits_{v=1}^{n_v}\ {\gamma}^v{\left\Vert {H}^v{Q}^v-{H}^{\ast}\right\Vert}_F^2+\beta\ tr\left({\left({H}^{\ast}\right)}^T\left(\sum \limits_{v=1}^{n_v}\ {\alpha}^v{B}^v\right){H}^{\ast}\right)\right\}\\ {}\mathrm{s}.\mathrm{t}.{H}^v,{H}^{\ast}\ge 0,{\alpha}^v\ge 0,\sum \limits_v{\alpha}^v=1.\end{array}} $$
where Bv denotes the Hessian matrix derived from the v ‐ th view and tr(·) denotes the trace of a matrix. αv is the coefficient of Bv, and β is the regularization parameter used to tune the smoothness of the solution.
The optimization problem of MHSNMF is solved in three alternating steps: (1) updating Hv with the consensus matrix H∗ and graph coefficients αv fixed; (2) updating H∗ with Hv and αv fixed; (3) finding the optimal graph coefficients αv with Hv and H∗ fixed. The optimization of these three sub-problems is presented below.
Fixing H∗ and αv, computing Hv
With H∗ and αv fixed, and considering only the terms relevant to Hv at this step, Eq. 10 reduces to
$$ {\displaystyle \begin{array}{l}O=\mathit{\operatorname{Min}}\left\{{\left\Vert {A}^v-{H}^v{\left({H}^v\right)}^T\right\Vert}_F^2+{\gamma}^v{\left\Vert {H}^v{Q}^v-{H}^{\ast}\right\Vert}_F^2\right\}\\ {}\mathrm{s}.\mathrm{t}.{H}^v,{H}^{\ast}\ge 0.\end{array}} $$
To minimize Eq. 11, we can solve the optimal problem with Lagrange method [6, 17]. Introducing the Lagrange multiplier ψ, Lagrange function can be written as
$$ {\displaystyle \begin{array}{l}L={\left\Vert A-H{H}^T\right\Vert}_F^2+\gamma {\left\Vert HQ-{H}^{\ast}\right\Vert}_F^2+ tr\left(\psi {H}^T\right)\\ {}\kern0.5em \propto tr\left(-2 AH{H}^T+H{H}^TH{H}^T\right)+\gamma tr\left( HQ{Q}^T{H}^T-2 HQ{H^{\ast}}^T\right)+ tr\left(\psi {H}^T\right).\end{array}} $$
For simplicity, A, H and Q are written for Av, Hv and Qv, respectively.
Taking the partial derivative of L with respect to H gives
$$ \frac{\partial L}{\partial H}=-4AH+4H{H}^{T}H+2\gamma HQ{Q}^{T}-2\gamma {H}^{\ast }{Q}^{T}+\psi . $$
Using KKT condition, we can obtain the following updating rule
$$ {H}_{i,k}\leftarrow {H}_{i,k}\frac{2{(AH)}_{i,k}+\gamma {\left({H}^{\ast }{Q}^T\right)}_{i,k}}{2{\left({HH}^TH\right)}_{i,k}+\gamma {\left( HQ{Q}^T\right)}_{i,k}}. $$
Fixing Hv and αv, updating H∗
This sub-problem is similar to (1), the objective function can be rewritten as
$$ {\displaystyle \begin{array}{l}O=\sum \limits_{v=1}^{n_v}{\gamma}^v{\left\Vert {H}^v{Q}^v-{H}^{\ast}\right\Vert}_F^2+\beta tr\left({\left({H}^{\ast}\right)}^T{BH}^{\ast}\right)+ tr\left(\psi {\left({H}^{\ast}\right)}^T\right)\\ {}\kern0.75em \propto \sum \limits_{v=1}^{n_v}{\gamma}^v tr\left(-2{H}^v{Q}^v{\left({H}^{\ast}\right)}^T+{\left({H}^{\ast}\right)}^T{H}^{\ast}\right)+\beta tr\left({\left({H}^{\ast}\right)}^T{BH}^{\ast}\right)+ tr\left(\psi {\left({H}^{\ast}\right)}^T\right).\end{array}} $$
where \( B=\sum \limits_{v=1}^{n_v}{\alpha}^v{B}^v \), \( {\alpha}^v>0,\sum \limits_v{\alpha}^v=1 \).
The rule of iteration for H∗ is given
$$ \kern3em {H^{\ast}}_{ij}={H^{\ast}}_{ij}\frac{{\left({\sum}_{v=1}^{n_v}{\gamma}^v{H}^v{Q}^v+\beta {B}^{-}{H}^{\ast}\right)}_{ij}}{{\left({\sum}_{i=1}^{n_v}{\gamma}^v{H}^{\ast }+\beta {B}^{+}{H}^{\ast}\right)}_{ij}}. $$
where B = B+ − B− splits B into its positive and negative parts. It is easy to see that H∗ remains nonnegative after each iteration.
Fixing Hv and H∗, learning αv
This sub-problem can be formulated as
$$ {\displaystyle \begin{array}{l}\min tr\left({\left({H}^{\ast}\right)}^T\left(\sum \limits_{v=1}^{n_v}\ {\alpha}^v{B}^v\right){H}^{\ast}\right).\\ {}\mathrm{s}.\mathrm{t}.{\alpha}^v\ge 0,\sum \limits_v{\alpha}^v=1\end{array}} $$
When tr((H∗)TBiH∗) is the minimum among all views, the solution w.r.t. α is αi = 1 and αj = 0 for all the other views. This means that only one view takes effect, and the complementary information carried by multiple views cannot be utilized effectively.
In this study, we employ a trick [18, 19] to avoid this problem: we substitute (αv)r for αv with r > 1. In this case, each graph makes a particular contribution to the consensus matrix. Eq. 17 can then be rewritten as
$$ {\displaystyle \begin{array}{l}\min tr\left({\left({H}^{\ast}\right)}^T\left(\sum \limits_{v=1}^{n_v}\ {\left({\alpha}^v\right)}^r{B}^v\right){H}^{\ast}\right).\\ {}\mathrm{s}.\mathrm{t}.{\alpha}^v\ge 0,\sum \limits_v{\alpha}^v=1\end{array}} $$
To solve Eq. 18, we introduce Lagrange multiplier λ and consider the constraint \( \sum \limits_v{\alpha}^v=1 \) and then obtain the Lagrange function
$$ L\left(\alpha, \lambda \right)= tr\left({\left({H}^{\ast}\right)}^T\left(\sum \limits_{v=1}^{n_v}\ {\left({\alpha}^v\right)}^r{B}^v\right){H}^{\ast}\right)-\lambda \left(\sum \limits_{v=1}^{n_v}{\alpha}^v-1\right). $$
Taking the partial derivatives of L(α, λ) with respect to αv and λ and setting them to zero gives
$$ \left\{\begin{array}{l}\frac{\partial L}{\partial {\alpha}^v}=r{\left({\alpha}^v\right)}^{r-1} tr\left({\left({H}^{\ast}\right)}^T{B}^v{H}^{\ast}\right)-\lambda =0,\kern1em v=1,2,\cdots, {n}_v\\ {}\frac{\partial L}{\partial \lambda }=\sum \limits_{v=1}^{n_v}{\alpha}^v-1=0\end{array}\right.. $$
Finally, a closed-form solution for αv is obtained:
$$ {\alpha}^v=\frac{{\left(1/ tr\left({\left({H}^{\ast}\right)}^T{B}^v{H}^{\ast}\right)\right)}^{1/r-1}}{\sum \limits_{v=1}^{n_v}{\left(1/ tr\left({\left({H}^{\ast}\right)}^T{B}^v{H}^{\ast}\right)\right)}^{1/r-1}}. $$
From Eq. 21 we can see that αv is always nonnegative because the Hessian matrix Bv is positive semidefinite.
Table 1 gives the pseudocode of the proposed MHSNMF; a minimal sketch of the update loop is shown after the table.
Table 1 The pseudocode of MHSNMF
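For concreteness, the alternating updates of Table 1 (Eqs. 14, 16 and 21) can be sketched in NumPy as below. The sketch assumes equal weights γ across views, as in the paper, and random initialization in place of NNDSVD; the function name and defaults are ours:

```python
import numpy as np

def mhsnmf(As, Bs, k, gamma=0.05, beta=0.01, r=2, n_iter=300, eps=1e-10, seed=0):
    """Sketch of MHSNMF. As: list of n x n similarity matrices A^v;
    Bs: list of Hessian matrices B^v; k: factorization rank."""
    rng = np.random.default_rng(seed)
    nv, n = len(As), As[0].shape[0]
    Hs = [rng.random((n, k)) for _ in range(nv)]
    Hstar = rng.random((n, k))
    alpha = np.full(nv, 1.0 / nv)
    for _ in range(n_iter):
        # (1) update each H^v with H* and alpha fixed (Eq. 14)
        for v in range(nv):
            H, A = Hs[v], As[v]
            Q = np.diag(1.0 / np.maximum(H.sum(axis=0), eps))
            num = 2 * A @ H + gamma * Hstar @ Q.T
            den = 2 * H @ (H.T @ H) + gamma * H @ Q @ Q.T
            Hs[v] = H * num / np.maximum(den, eps)
        # (2) update H* with H^v and alpha fixed (Eq. 16), B = B+ - B-
        B = sum(a * Bv for a, Bv in zip(alpha, Bs))
        Bp, Bm = np.maximum(B, 0.0), np.maximum(-B, 0.0)
        num = sum(gamma * Hs[v] @ np.diag(1.0 / np.maximum(Hs[v].sum(axis=0), eps))
                  for v in range(nv)) + beta * Bm @ Hstar
        den = nv * gamma * Hstar + beta * Bp @ Hstar
        Hstar = Hstar * num / np.maximum(den, eps)
        # (3) closed-form view weights alpha^v (Eq. 21)
        t = np.array([max(np.trace(Hstar.T @ Bv @ Hstar), eps) for Bv in Bs])
        alpha = (1.0 / t) ** (1.0 / (r - 1))
        alpha /= alpha.sum()
    return Hstar, Hs, alpha
```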
Datasets and evaluation metrics
In this paper, two public multi-view datasets are used to verify the performance of the proposed MHSNMF algorithm.
Three-source text story dataset. The dataset was collected from three online news sources: BBC, Reuters and the Guardian. One hundred sixty-nine stories were reported in all three sources. Each of them was manually classified into one of the six topical labels: business, entertainment, politics, sport, health and technology. These roughly correspond to the principal section headings used across these three sources. To facilitate comparisons using the AC and NMI metrics, only the main topic for each story was considered. More details can be found in [20]. Table 2 describes the detailed statistical information.
Human microbiome dataset (HMP). This dataset includes three compositional profiles from the HMP site: phylogenetic, metabolic and transporter profiles. It consists of 637 samples drawn from seven body sites, including one vaginal site (posterior fornix), one gut site (stool), one nasal site (anterior nares), one skin site (retroauricular crease), and three oral sites (supragingival plaque, tongue dorsum and buccal mucosa). The phylogenetic profile, which contains the relative abundances of microorganisms, was estimated at the species level with the software MetaPhlAn (710 × 637). For the functional profiles, the transporter profile (4941 × 637) and the metabolic profile (295 × 637) are investigated after filtering out features with low variances (see Table 3 for a detailed statistical summary) [4]. All the data are available from the HMP site: http://hmpdacc.org/ [21].
Table 2 Statistics of the Three-source dataset
Table 3 Statistics of the HMP dataset
Evaluation metrics
In the following experiments, two frequently used metrics are applied to evaluate the clustering performance of MHSNMF, i.e., accuracy (AC) and normalized mutual information (NMI). Generally speaking, a higher AC or NMI indicates better clustering performance. More details are described in [22].
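For reference, a common way to compute these two metrics is sketched below (the usual best-map definition of AC via the Hungarian algorithm, and NMI from scikit-learn; integer labels starting at 0 are assumed):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """AC under the best one-to-one mapping between predicted and true clusters."""
    n_cls = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((n_cls, n_cls), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1                       # contingency table
    row, col = linear_sum_assignment(-cost)   # maximize matched pairs
    return cost[row, col].sum() / len(y_true)

# NMI directly from scikit-learn:
# nmi = normalized_mutual_info_score(y_true, y_pred)
```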
In this section, we conduct extensive experiments to demonstrate the effectiveness of the proposed MHSNMF approach. The following baseline algorithms are compared:
Single view (BSSV and WSSV). Running standard SNMF on each view, BSSV is the most informative view that has the best clustering quality; WSSV refers to the worst view.
Multi-NMF. Iteratively fusing the coefficient matrices learnt from different views to form a consensus clustering solution. In the fusion process, the coefficient matrix from each view is normalized to guarantee that they are comparable and meaningful [6].
Co-training spectral clustering (Co-training SC). Performing multi-view spectral clustering with co-training paradigm [23] to update iteratively the graph structure of one view by using the discriminative eigenvectors obtained from the other view.
Similarity network fusion (SNF). Constructing similarity network for each view and then iteratively fusing these networks so that global and local information from different views can be shared and interchanged. More details can be obtained from [24].
LJ-NMF. Fixing a common coefficient matrix across different views and then performing joint nonnegative matrix factorization as shown in [4].
CSMF. Extracting common and specific patterns from multiple data generated under interrelated biological scenarios via nonnegative matrix factorization [7].
NetNMF. Utilizing tri-factor NMF to construct two-layer modular networks. For each biological network, the samples are reordered according to the obtained feature modules, and the optimal clustering performance is recorded [9].
MHSNMF. This is the proposed algorithm. In the experiments, we used the NNDSVD method to enhance the initialization stage of MHSNMF [25]. The parameter selection will be discussed later.
Table 4 shows the clustering results of different algorithms on these two datasets. From this table, we can see that MHSNMF outperforms the baseline and the state-of-art algorithms in terms of AC and NMI.
Table 4 The best clustering performance on two datasets
As we can see, on these two real datasets MHSNMF achieves a considerable improvement in terms of AC and NMI compared with the other algorithms. One possible reason is that MHSNMF takes advantage of the local geometric information preserved in the data and thus satisfies the manifold consistency assumption well. The proposed MHSNMF algorithm can effectively find the latent consensus clustering solution across different views.
Parameter tuning
There are two types of parameters in the proposed MHSNMF algorithm: γv and β. γv is the regularization parameter for the v ‐ th view. On the one hand, γv reflects each view's relative importance among all views; on the other hand, it also indicates the strength that we want to impose on the regularization constraint. For convenience of computation, we set all the γv to be equal. β is the graph regularization parameter. In our experiments, the values of β are tuned over the candidate set {10−4, 5 × 10−4, 10−3, 5 × 10−3, 10−2, 0.05, 0.1, 0.5, 1} and γv varies over the set {10−3, 5 × 10−3, 10−2, 0.05, 0.1, 0.5, 1} for all the datasets. Besides, in computing the Hessian, the neighborhood size is set to 30.
Figure 2 shows how the performance of MHSNMF varies with changes in the parameters γv and β on the two datasets. As shown in Fig. 2, MHSNMF obtains the best performance when γ equals 0.1 and β equals 0.5 on the Three-source data. Moreover, for other values of β, MHSNMF still shows stable and reliable performance. On the HMP dataset, MHSNMF performs relatively stably when γ equals 0.05 and β varies over the set {10−4, 5 × 10−4, 10−3, 5 × 10−3, 10−2, 0.05, 0.1}.
The performance of MHSNMF w.r.t parameters γ and β on three-source and HMP datasets, respectively
Convergence curve and the performance
According to the iterative rules (Eqs. 14, 16 and 21), the objective function value decreases monotonically and converges. Figure 3 shows the convergence curves along with the accuracy values on the two datasets. The results below are obtained when γ is set to 0.05 and β is set to 0.01. As we can see, MHSNMF converges after a few iterations. Interestingly, on the Three-source data the performance curve shows some oscillations during the iterations. One possible reason is that the clustering solutions obtained from multiple views may be misaligned for some clusters. This is beyond the scope of this paper.
Convergence and corresponding AC curve of MHSNMF on three-source and HMP datasets
As shown in Fig. 3, on the HMP dataset the performance of MHSNMF achieves the optimal values of 95.28%/91.76% in terms of AC/NMI after around 250 iterations. It is worth noting that MHSNMF converges very fast on both the Three-source and HMP data. This suggests the effectiveness and efficiency of MHSNMF for clustering multi-view omics data.
Parameter study
In this subsection, extensive experiments are conducted on the HMP data to further validate the performance of MHSNMF w.r.t. the numbers of neighbors p and knn used in constructing the affinity graphs and in computing the Hessian, respectively. Figure 4 demonstrates how the accuracy varies with changes in the number of neighbors.
Performance of MHSNMF versus p and knn on HMP data
As shown in Fig. 4, the accuracy of MHSNMF achieves its best value when p is set to 12. Meanwhile, the performance of MHSNMF is stable across the various values of knn. For other values of p, in most cases AC does not vary significantly with changes in knn, which demonstrates that the number of neighbors used in computing the Hessian does not have a remarkable impact on the performance of MHSNMF on the HMP dataset. This is important for studying microbiome data: a fixed knn value can be used in computing the Hessian for convenience of computation. This study also offers a new reference for the fusion of multiple heterogeneous omics data.
Analysis on HMP data
To further explore the structures and functions of the human microbiome, we apply the proposed MHSNMF algorithm to the HMP data and find it very useful. Classical multidimensional scaling (MDS) is applied to the consensus matrix H∗ to depict the relationships among microbiome samples in three-dimensional space. Figure 5 reveals clear clustering patterns derived from the consensus matrix. This supports Jeffery et al.'s argument that variation at the species level of the human microbiome does not form discrete clusters (enterotypes) but is continuous [26].
Scatter plot of the HMP data in three-dimensional space. The result is obtained when γ equals 0.05 and β is set to 1e-4. Seven colors indicate the true labels of microbiome samples from different body sites
As shown in Fig. 5, MHSNMF clearly identifies different clusters corresponding to microbiome samples from the seven body sites. The samples from the anterior nares (red), gut (cyan) and posterior fornix (yellow) are well separated, particularly the gut microbiome samples. One possible reason is that the gut microbiome has a more complicated composition and is more distant from the other sites. We can also see that samples from the three oral sites (buccal mucosa, plaque, tongue dorsum) overlap with each other. This might be because these three sites are all from the oral cavity; therefore, these samples may have similar microbiome composition and diversity.
Other application
Besides clustering, MHSNMF also has other potential applications, for instance, predicting the classification of new samples via the consensus matrix H∗ obtained from multiple views. When applied to the HMP data with multiple views, Eq. (10) can also be understood as finding a consensus basis H∗ (similar to the basis matrix in NMF) such that, in the space spanned by H∗, the representation of a new microbiome sample also reflects its structural information. Therefore, we can express a new microbiome sample xnew as h by solving the following optimization problem:
$$ \underset{h\ge 0}{\min }{\left\Vert S-{H}^{\ast }h\right\Vert}_F^2+\alpha {\left\Vert h\right\Vert}_2^2. $$
where \( S={V}_{tr}^i\ast {x}_{new} \), \( {V}_{tr}^i \) is the training set from the i ‐ th view, and the second term is an L2 regularization term.
We can use the closeness of h to the rows of H∗ to decide which body site the new microbiome sample most likely belongs to. For example, one can predict the class of a new microbiome sample using the kNN method.
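A sketch of this prediction scheme, under our reading of Eq. 22 (ridge solution clipped to be nonnegative, followed by a kNN vote in the consensus space), is given below; V_tr holds training samples as columns and labels_tr holds integer site labels:

```python
import numpy as np

def predict_body_site(x_new, V_tr, Hstar, labels_tr, alpha=0.1, knn=5):
    """Predict the body site of a new sample via Eq. 22 and kNN."""
    S = V_tr.T @ x_new                         # similarity of x_new to training samples
    k = Hstar.shape[1]
    # ridge solution of min_h ||S - H* h||^2 + alpha ||h||^2, clipped to h >= 0
    h = np.linalg.solve(Hstar.T @ Hstar + alpha * np.eye(k), Hstar.T @ S)
    h = np.maximum(h, 0.0)
    # kNN vote among the training rows of H* closest to h
    nearest = np.argsort(np.linalg.norm(Hstar - h, axis=1))[:knn]
    return np.bincount(labels_tr[nearest]).argmax()
```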
To evaluate our approach, we recollected and extended the human microbiome samples to 653 cases, and then separated the HMP data (phylogenetic profile and metabolic profile) into a training set and a test set by randomly selecting 70% of the samples from each body site as the training set and using the remaining samples as the test set. We first learn a consensus matrix H∗ from the phylogenetic and metabolic profile samples in the training set, and then predict the classification of the phylogenetic (or metabolic) samples in the test set.
To verify that the consensus matrix H∗ computed by the proposed MHSNMF algorithm indeed represents the geometric structure well, we also compare several baseline approaches. One is to learn the matrix Hi by single-view SNMF only; the remaining steps for making predictions are the same as for MHSNMF. The other two methods, based on subspace learning, are Canonical Correlation Analysis (CCA) and Partial Least Squares Regression (PLSR) [27]. We use the consensus matrix H∗ to predict the classification of new samples from each view. The experimental results are shown in Table 5.
Table 5 The prediction accuracy on HMP data
As shown in Table 5, MHSNMF obtains a substantial improvement in accuracy over the three baseline methods on the HMP data. It should be noted that CCA fails to utilize the complementary information from multiple views and cannot find the underlying subspace shared by multiple biological compositional profiles. One possible reason is that the objective of CCA is to find the maximum linear correlation between two feature profiles. Therefore, CCA-based methods may not be suitable for data with nonlinear structure, such as microbiome data. In contrast, by adopting a graph and Hessian regularization framework to learn the consensus matrix H∗ across all views, MHSNMF succeeds in capturing such structure.
In this paper, we introduced a novel multi-view Hessian regularization based symmetric nonnegative matrix factorization algorithm (MHSNMF) for the task of integrating multiple omics data. On human microbiome data, the proposed MHSNMF algorithm can effectively combine the phylogenetic, transporter, and metabolic profiles into a unified paradigm to analyze the relationships among different microbiome samples. Experimental results demonstrate that MHSNMF has potential applications in the analysis of multiple biological profiles. Furthermore, the prediction method based on MHSNMF has been shown to be effective in judging the types of new microbiome samples.
To the best of our knowledge, the interactions among microorganisms are complicated owing to influences from the host environment, diet and other species, particularly for the intestinal flora. Dissecting and exploring the structure and functions of the intestinal microbiota is an essential step toward understanding the occurrence and development of microbiota-related disease. In future work, we plan to combine the phylogenetic information of species with the microbial interaction network to analyze functional modules.
The datasets generated or analyzed during the current study are available in the GitHub repository, https://github.com/chonghua-1983/MHSNMF.
HMP:
Human Microbiome Project
iHMP:
Integrative Human Microbiome Project
MetaHIT:
Metagenomics of the Human Intestinal Tract
NMF:
Nonnegative Matrix Factorization
GNMF:
Graph Regularized Nonnegative Matrix Factorization
SNMF:
Symmetric Nonnegative Matrix Factorization
MHSNMF:
Multi-view Hessian regularization based Symmetric Nonnegative Matrix Factorization
SVD:
Singular Value Decomposition
AC:
Accuracy
NMI:
Normalized Mutual Information
BSSV:
Best Single view
WSSV:
Worst Single View
Multi-NMF:
Multi-view Nonnegative Matrix Factorization
Co-training SC:
Co-training spectral clustering
SNF:
Similarity network fusion
LJ-NMF:
Joint Nonnegative Matrix Factorization with Laplacian
CSMF:
Common and Specific Matrix Factorization
NetNMF:
Two Layers Network based Nonnegative Matrix Factorization
Turnbaugh PJ, Ley RE, Hamady M, Fraserliggett CM, Knight R, Gordon JI. The human microbiome project. Nature. 2007;449(7164):804–10.
Consortium IHN. The integrative human microbiome project: dynamic analysis of microbiome-host omics profiles during periods of human health and disease. Cell Host Microbe. 2014;16(3):276.
Qin J, Li R, Raes J, Arumugam M, Burgdorf KS, Manichanh C, Nielsen T, Pons N, Levenez F, Yamada T. A human gut microbial gene catalogue established by metagenomic sequencing. Nature. 2010;464(7285):59–65.
Jiang X, Hu X, Xu W. Microbiome data representation by joint nonnegative matrix factorization with Laplacian regularization. IEEE/ACM Trans Comput Biol Bioinformatics. 2017;14(2):353–9.
Greene D, Cunningham P. A matrix factorization approach for integrating multiple data views. In: European conference on machine learning; 2009. p. 423–38.
Liu J, Wang C, Gao J, Han J. Multi-view clustering via joint nonnegative matrix factorization. In: Proceedings of the 2013 SIAM International Conference on Data Mining; 2013. p. 252–60.
Zhang L, Zhang S. Learning common and specific patterns from data of multiple interrelated biological scenarios with matrix factorization. Nucleic Acids Res. 2019;47(13):6606–17.
Cai D, He X, Han J, Huang TS. Graph regularized nonnegative matrix factorization for data representation. IEEE Trans Pattern Anal Mach Intell. 2011;33(8):1548–60.
Chen J, Zhang S. Discovery of two-level modular organization from matched genomic data via joint matrix tri-factorization. Nucleic Acids Res. 2018;46(12):5967–76.
Kim KI, Steinke F, Hein M. Semi-supervised regression using hessian energy with an application to semi-supervised dimensionality reduction. In: Neural information processing systems; 2009. p. 979–87.
Ma Y, Hu X, He T, Jiang X. Hessian regularization based symmetric nonnegative matrix factorization for clustering gene expression and microbiome data. Methods. 2016;111:80–4.
Kuang D, Ding CHQ, Park H. Symmetric nonnegative matrix factorization for graph clustering. In: Siam international conference on data mining; 2012. p. 106–17.
Long B, Zhang Z, Yu PS. Co-clustering by block value decomposition. In: Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. New York: ACM; 2005. p. 635–40. https://doi.org/10.1145/1081870.1081949.
Zelnikmanor L, Perona P. Self-tuning spectral clustering. In: Advances in neural information processing systems; 2005. p. 1601–8.
Donoho D, Grimes C. Hessian eigenmaps: locally linear embedding techniques for high dimensional data. Proc Natl Acad Sci. 2003;100(10):5591–6.
Liu W, Tao D. Multiview hessian regularization for image annotation. IEEE Trans Image Process. 2013;22(7):2676–87.
Ma Y, Hu X, He T, Jiang X. Clustering and integrating of heterogeneous microbiome data by joint symmetric nonnegative matrix factorization with laplacian regularization. IEEE/ACM Trans Comput Biol Bioinformatics. 2017;PP(99):1–1. https://doi.org/10.1109/TCBB.2017.2756628.
Wang M, Hua XS, Yuan X, Song Y, Dai LR. Optimizing multi-graph learning: towards a unified video annotation scheme. In: ACM International Conference on Multimedia; 2007. p. 862–71.
Xia T, Tao D, Mei T, Zhang Y. Multiview spectral embedding. IEEE Trans Syst Man Cybernetics Part B. 2010;40(6):1438–46.
Greene D. A matrix factorization approach for integrating multiple data views. In: European conference on machine learning and knowledge discovery in databases; 2009. p. 423–38.
Huttenhower C, Gevers D, Knight R, Abubucker S, Badger JH, Chinwalla AT, Creasy HH, Earl AM, Fitzgerald MG, Fulton RS. Structure, function and diversity of the healthy human microbiome. Nature. 2012;486(7402):207–14.
Xu W, Liu X, Gong Y. Document clustering based on non-negative matrix factorization. In: International ACM sigir conference on research and development in information retrieval; 2003. p. 267–73.
Blum A, Mitchell TM. Combining labeled and unlabeled data with co-training. In: Conference on learning theory; 1998. p. 92–100.
Wang B, Mezlini AM, Demir F, Fiume M, Tu Z, Brudno M, Haibekains B, Goldenberg A. Similarity network fusion for aggregating data types on a genomic scale. Nat Methods. 2014;11(3):333–7.
Boutsidis C, Gallopoulos E. SVD based initialization: a head start for nonnegative matrix factorization. Pattern Recogn. 2008;41(4):1350–62.
Jeffery IB, Claesson MJ, O'toole PW, Shanahan F. Categorization of the gut microbiota: enterotypes or gradients? Nat Rev Microbiol. 2012;10(9):591.
Rasiwasia N, Costa Pereira J, Coviello E, Doyle G, Lanckriet GR, Levy R, Vasconcelos N. A new approach to cross-modal multimedia retrieval. In: Proceedings of the 18th ACM international conference on multimedia. New York: ACM; 2010. p. 251–60. https://doi.org/10.1145/1873951.1873987.
The authors are grateful to all of the reviewers and editors of this manuscript.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 21 Supplement 6, 2020: Selected articles from the 15th International Symposium on Bioinformatics Research and Applications (ISBRA-19): bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-21-supplement-6
This study is supported by the National Natural Science Foundation of China (No.61532008), the Key Technology R&D Program of Henan Province (202102310561) and the Key Research Projects of Henan Higher Education Institutions (No.20B520002).
School of Computer & Information Engineering, Anyang Normal University, Anyang, China
Yuanyuan Ma
School of Computer & Data Science, Henan University of Urban Construction, Pingdingshan, China
Junmin Zhao
School of Computer, Central China Normal University, Wuhan, China
Yingjun Ma
YM developed the algorithms, co-implemented the experiments and helped to draft the manuscript. JZ co-implemented the experiment used in the paper and YM contributed to the writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Yuanyuan Ma.
Ma, Y., Zhao, J. & Ma, Y. MHSNMF: multi-view hessian regularization based symmetric nonnegative matrix factorization for microbiome data analysis. BMC Bioinformatics 21, 234 (2020). https://doi.org/10.1186/s12859-020-03555-w
Multi-view clustering
Effects of the long-term feeding of diets enriched with inorganic phosphorus on the adult feline kidney and phosphorus metabolism
British Journal of Nutrition, Volume 121, Issue 3
Janet Alexander (a1), Jonathan Stockman (a1), Jujhar Atwal (a1), Richard Butterwick (a1), Alison Colyer (a1), Denise Elliott (a2), Matthew Gilham (a1), Penelope Morris (a1), Ruth Staunton (a1), Helen Renfrew (a3), Jonathan Elliott (a4) and Phillip Watson (a1)
1WALTHAM Centre for Pet Nutrition, Melton Mowbray, Leicestershire LE14 4RT, UK
2Royal Canin SAS, 650 Avenue de la Petite Camargue, 30470 Aimargues, France
3Renfrew Imaging, Grove Road, Bladon, Woodstock OX20 1RD, UK
4Department of Comparative Biomedical Sciences, Royal Veterinary College, University of London, London NW1 0TU, UK
© The Authors 2018
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Published online by Cambridge University Press: 21 December 2018
Figures:
Table 1 Diet composition*
Fig. 1 Study 1, (a) daily faecal and urinary phosphorus excretion (g/kg) n 23–24, (b) daily faecal and urinary calcium excretion (g/kg) n 23–24, (c) phosphorus balance (daily intake minus total daily faecal and urinary excretion in g), n 24 and (d) calcium balance (g), n 24, by study week and diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI. * Significant between diet difference in change from baseline (P <0·05). Total mineral balance (g) calculated as intake–(faecal excretion + urinary excretion).
Fig. 2 Study 1, glomerular filtration rate (GFR) determined by iohexol clearance (ml/min per kg) by study week and diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI, n 23–24. * Significant difference in GFR relative to baseline between diets (P <0·05).
Table 2 Study 1 selected blood biochemistry, markers of bone metabolism and vitamin D metabolites (Mean values and 95 % confidence intervals)
Fig. 4 Study 1, (a) serum fibroblast growth factor 23 (FGF23, pg/ml), n 23–24 and (b) parathyroid hormone (PTH, pg/ml), n 23–24, by study week and diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI. * Significant between diet difference in change from baseline (P <0·05).
Table 3 Study 1 urine analysis (Mean values and 95 % confidence intervals)
Fig. 6 Study 2, (a) daily faecal and urinary phosphorus excretion (g/kg), n 22–24, (b) daily faecal and urinary calcium excretion (g/kg), n 22–24, (c) phosphorus balance (g), n 22–24, (d) calcium balance (g), n 22–24, by study week and by diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI. * Significant between-diet difference in change from baseline (P <0·05). Total mineral balance (g) calculated as intake–(faecal excretion + urinary excretion). BW, body weight.
Fig. 7 Study 2, glomerular filtration rate (GFR, ml/min per kg) by study week and diet. Individual cat data are shown as small symbols and means are shown as large symbols with 95 % CI, n 22–25. No significant between-diet difference in change relative to baseline (P >0·05) was observed.
Fig. 9 Study 2, (a) serum fibroblast growth factor 23 (FGF23, pg/ml) and (b) serum parathyroid hormone (PTH, pg/ml), by study week and by diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI, n 21–25. * Significant between diet difference in change from baseline (P <0·05).
Renal disease has a high incidence in cats, and some evidence implicates dietary phosphorus (P) in its development. To investigate this further, two studies in healthy adult cats were conducted. Study 1 (36 weeks) included forty-eight cats, stratified to control or test diets providing 1·2 or 4·8 g/1000 kcal (4184 kJ) P (0 or approximately 3·6 g/1000 kcal (4184 kJ) inorganic P, Ca:P 1·2, 0·6). Study 2 (29 weeks) included fifty cats, stratified to control or test diets, providing 1·3 or 3·6 g/1000 kcal (4184 kJ) P (0 or approximately 1·5 g/1000 kcal (4184 kJ) inorganic P, Ca:P 1·2, 0·9). Health markers, glomerular filtration rate (GFR) and mineral balance were measured regularly, alongside abdominal ultrasound. Study 1 was halted after 4 weeks as the test group GFR reduced by 0·4 (95 % CI 0·3, 0·5) ml/min per kg, and ultrasound revealed changes in renal echogenicity. In study 2, at week 28, no change in mean GFR was observed (P >0·05); however, altered renal echogenicity was detected in 36 % of test cats. In agreement with previous studies, feeding a diet with Ca:P <1·0 and a high total and inorganic P inclusion resulted in loss of renal function and changes in echogenicity suggestive of renal pathology. Feeding a diet containing lower total and inorganic P with Ca:P close to 1·0 led to more subtle structural changes in a third of test cats; however, nephrolithiasis occurred in both diet groups, complicating data interpretation. We conclude that the no observed adverse effects level for total dietary P in adult cats is lower than 3·6 g/1000 kcal (4184 kJ); however, the effects of inorganic P sources and Ca:P require further investigation.
Chronic kidney disease (CKD) has a high prevalence in domestic cats, with approximately 33 % of those over the age of 12 years being affected( 1 ). CKD is defined as a sustained decrease in renal function over at least 3 months and is an important cause of morbidity and mortality in the cat population. Both congenital and acquired disorders can lead to the development of CKD and acute kidney damage owing to urinary obstruction, nephrotoxins and pyelonephritis, or ischaemic injury can also progress to chronic disease( 2 ). Although dietary P is essential for life, unrestricted P intake has been associated with the progression of existing renal disease( 2 , 3 ). There is, however, little evidence to indicate adverse effects of high P intake in healthy adult cats. For this reason, there is currently no safe upper limit for feline dietary P cited in nutritional guidelines set out by the US National Research Council (NRC), European Pet Food Industry Federation or Association of American Feed Control Officials (AAFCO)( 4 – 6 ).
The first study to suggest a link between high dietary P and reduced renal function in healthy adult cats was reported by Pastoor et al. ( 7 ). To evaluate the effects of high P intake, the authors fed a purified test diet containing 3·6 g P/1000 kcal (4184 kJ) with a Ca:P ratio of 0·3 for 4 weeks. A small, but significant, reduction in endogenous creatinine clearance and a slight decrease in plasma P were reported in cats offered the test diet compared with those offered P at a level of 2·3 g/1000 kcal (4184 kJ) or below( 7 ). A neutral P balance was observed owing to an increase in both urinary and faecal P excretion when higher levels were fed, suggesting that P levels were regulated and excess P excreted rather than retained. These authors recommended that high dietary P levels should be discouraged owing to the associated reduction in plasma P concentrations and creatinine clearance. Subsequently, Dobenecker et al. ( 8 ) offered thirteen healthy adult cats a diet comparable to a home-prepared formulation with a total P of 3·0 g/1000 kcal (4184 kJ) with approximately 2·0 g/1000 kcal (4184 kJ) supplemented as calcium monophosphate and sodium dihydrogen phosphate (SDHP) with a Ca:P of 0·4. After 29 d of feeding, in agreement with the findings of Pastoor et al. ( 7 ), glucosuria and microalbuminuria were observed, creatinine clearance significantly decreased and blood urea nitrogen content increased( 8 ).
It has been suggested previously in several species that P homoeostasis is not only influenced by total dietary P content and the Ca:P, but also by the source of P( 9 – 11 ). Dietary P provided by organic raw materials (e.g. poultry meal or meat and bone meal) has been shown to be less bioavailable than that arising from added inorganic sources (e.g. P containing mono and dibasic Na salts)( 7 , 12 ). P-based additives are widely used in commercial food manufacturing not only to supplement essential P but also to serve a number of processing functions including pH stabilisation, metal cation sequestration, emulsification, leavening, hydration and as antibacterials( 13 ). This difference in bioavailability is probably because of in vivo binding of organic P to proteins and intra-cellular signalling molecules, while inorganic P salts are readily dissociated and absorbed( 10 ). Data from rats fed high-phosphate diets supplemented with monophosphate or polyphosphate salts have indicated that the development of nephrocalcinosis and diminished kidney function is more severe when polyphosphate salts as compared with monophosphate salts are fed( 11 ). In cats, Finco et al. ( 12 ) compared a diet with 100 % P from an organic source (total P content 2·7 g/1000 kcal (4184 kJ), Ca:P 1·6) with a diet containing a similar total P (3·3 g/1000 kcal (4184 kJ) and Ca:P 1·4), but with 63·5 % P from neutral monobasic/dibasic sodium phosphate. These authors noted a greater percentage recovery of P in the urine when the high-inorganic P diet was fed compared with the organic P diet (34·9 % compared with 14·7 %). However, these findings may have been influenced by the higher Na level present in the inorganic P diet, which is known to increase P absorption by stimulating Na+-dependent uptake of phosphate( 14 ).
Together, these studies highlight a possible risk of changes in renal function when healthy adult cats are fed diets containing levels of P in excess of 3·0 g/1000 kcal (4184 kJ), or when high levels of P from inorganic sources are included in diets. However, the available evidence is not conclusive and the published data do not determine whether the reported changes exceeded normal reference ranges or have clinical relevance. The diets used in these studies do not represent commercially relevant formats, being either purified( 7 ) or representative of 'home-prepared' diets( 8 ). Therefore, to elucidate the effects of dietary P in cats, a long-term feeding study was initiated with the objective of evaluating the health effects of an extruded dry format diet containing total and inorganic P (in the form of SDHP) at levels higher than previously reported, but possible in some commercial pet foods( 15 ). This study observed adverse changes in markers of renal health after 4 weeks of feeding and was terminated at this stage. We hypothesised that the adverse effects may have been due to the inclusion of highly available inorganic SDHP. Subsequently, a second 29-week study was initiated in which an extruded dry format diet with a lower inclusion of inorganic SDHP and a total P level equivalent to that previously reported by Pastoor et al. ( 7 ) was fed, in an attempt to identify a level of dietary P inclusion resulting in no observed adverse effects (NOAEL) in adult cats.
This work was approved by the WALTHAM Animal Welfare and Ethical Review Body and conducted under the authority of the Animals (Scientific Procedures) Act 1986.
In a parallel design, forty-eight healthy neutered adult cats (twenty-three males and twenty-five females) aged between 1·7 and 9·1 years at the start of the study were stratified into two study groups and offered one of two extruded dry format diets, differing in P content and Ca:P. Throughout an initial baseline period of 20 weeks, all cats were offered a control diet 74 % lower in P than the test (Table 1), and baseline measurements were made within the final 4 weeks. Subsequently, test cats were offered a high-P test diet (Table 1), whereas the control cats remained on the control diet. After 4 weeks, measurements were repeated for both groups. Following review of these data, the study was terminated and the test cats returned to the control diet in week 6.
PME, predicted metabolisable energy.
* Diet analysis g/1000 kcal (4184 kJ) or g/100 g DM.
† Value after supplementation of Ca at 200 mg/1000 kcal (4184 kJ).
‡ International units IU/1000 kcal (4184 kJ) or IU/100 g.
§ Calculated by proximate analysis to PME kcal/100 g according to LaFlamme (75) .
In a parallel design, fifty healthy neutered adult cats (twenty-nine males and twenty-one females) aged between 1·4 and 7·8 years at the start of the study were stratified into two study groups and offered one of two extruded diets, differing in P content and Ca:P. Throughout an initial baseline period of 10 weeks, all cats were offered a lower P control diet (Table 1) and baseline measurements were made towards the end of this phase. Subsequently, test cats were offered a moderate P test diet (Table 1), whereas the control cats remained on the lower P control diet. Measurements were repeated after 2, 4, 8, 12, 20 and 28 weeks.
Study 1 included forty-eight healthy neutered adult cats stratified into two study groups balancing energy intake, body weight (BW) and age. The control group was made up of twelve males and twelve females with a median starting age of 4·7 (1·7–8·1) years and the test of eleven males and thirteen females with a median starting age of 4·3 (1·7–9·1) years. Before selection, cats were health-screened by assessment of plasma biochemistry, haematology, urinary health parameters and abdominal ultrasound. Those with findings outside of normal range or considered abnormal by the site veterinarian were excluded, as were cats identified as having crystalluria, pre-existing uroliths, renoliths or abnormal changes in renal size or echogenicity, as determined by a diplomate of the European College of Veterinary Diagnostic Imaging (H. R.). Study 2 included fifty healthy neutered adult cats stratified into two study groups balancing energy intake, BW and age. The control group consisted of thirteen males and twelve females with a median starting age of 5·4 (1·4–7·7) years, and the test group was made up of fifteen males and ten females with a mean starting age of 4·6 (1·4–7·7) years. Before selection, cats were health-screened by applying the same criteria as for study 1. All cats were housed at the WALTHAM Centre for Pet Nutrition in social rooms, except during feeding or urine and faecal collection periods; to facilitate sample collection, they were individually housed in lodges. Cats received two 30-min meals (50 % of maintenance energy requirement) per day, a pattern they were habituated to. Throughout the study, diets were offered in amounts to maintain an ideal body condition score according to the WALTHAM Size, Health and Physical Evaluation Guide( 16 ). Deionised water was freely available at all times. Initial energy requirements were determined using observed energy intakes over the preceding 2 months.
In study 1, single batches of two extruded dry format experimental diets were specifically formulated for this study and manufactured (Royal Canin) using the same core recipe and raw materials, but with differing levels of P.
Both diets were composed of similar quantities of pork meal, pork fat and rind, poultry fat, soya and fish oils, yeast, beet pulp, ground rice, cellulose fibre, maize flour and gluten, potassium chloride, potassium citrate, sodium chloride, magnesium oxide, calcium bicarbonate (study 1), a vitamin premix, anticaking agent and flavourings. The control diet contained 1·2 g P/1000 kcal (4184 kJ) (all provided from organic raw materials) with 1·3 g Ca/1000 kcal (4184 kJ) (Ca:P 1·1) and the higher P test diet contained 4·8 g P/1000 kcal (4184 kJ) (approximately 3·6 g/1000 kcal (4184 kJ) from inorganic sources) with 2·8 g Ca/1000 kcal (4184 kJ) (Ca:P 0·6) (Table 1). The additional Ca and P in the test diet were provided by inorganic calcium carbonate (2·6 %) and SDHP (5·2 %). However, to achieve a Ca:P of 0·6, the test diet was supplemented before feeding with calcium citrate malate (Metabolics Ltd) at a dose of 200 mg Ca/1000 kcal (4184 kJ). Diet analysis confirmed that the nutritional composition of both diets met NRC (2006) adult cat recommendations for all nutrients except choline in the test diet, which was supplemented with 118 mg/1000 kcal (4184 kJ) choline chloride (Metabolics Ltd) to ensure compliance (online Supplementary Table S1).
In study 2, single batches of two extruded dry format experimental diets were specifically formulated for this study and manufactured (Royal Canin) using the same core recipe used in study 1 and the same raw materials for both diets with additional pork meal added to the test diet. Control diet contained 1·3 g P/1000 kcal (4184 kJ) (all provided from organic raw materials) with 1·6 g Ca/1000 kcal (4184 kJ) (Ca:P 1·2) and the moderate P test diet contained 3·6 g P/1000 kcal (4184 kJ) (approximately 1·5 g/1000 kcal (4184 kJ) from inorganic sources) with 3·3 g Ca/1000 kcal (4184 kJ) (Ca:P 0·9) (Table 1). The additional Ca and P in the test diet were provided through organic sources: bone ash from protein meal supplemented with inorganic calcium carbonate (1·5 %) and SDHP (2·2 %). Both diets met NRC (2006) recommendations for adult cats (online Supplementary Table S1).
All diets were produced through a standard extrusion process commonly used in the pet food industry( 17 ). Diet ingredients were mixed before being extruded and kibbled under identical processing conditions in a single-screw extruder (Royal Canin). All nutritional chemical analyses were carried out using Association of Official Agricultural Chemists (AOAC) procedures at Eurofins Ltd.
Intake was recorded on an individual basis as mass (g) of diet offered minus mass (g) of diet refused. BW was recorded weekly (in kg) on a standard scale (FB 34 3DE P top-pan balance; Sartorius).
At baseline and after 4, 12, 20 and 28 weeks of feeding, five day total faeces and urine collections were carried out to determine mineral apparent digestibility. Samples were analysed for Ca using flame photometry and P using spectrophotometry at the Royal Canin SAS European Regional Laboratory. Faeces were stored frozen at –20°C in a sealed container until processing. Each pooled 5-d faecal collection was weighed and then freeze-dried (VirTis Benchtop BTP9ES Freeze Dryer, Biopharma Process Systems). Once dried, the faeces was reweighed and manually homogenised in a mortar and pestle; any foreign material was removed and weighed separately. A 20-g aliquot of the homogenate was analysed for moisture, crude fibre and ash content, according to methods described in the Journal Official de l'Union Européenne procedures( 18 ); fat content was determined by acid hydrolysis followed by diethyl ether extraction and crude protein calculated from total N content by combustion using the Dumas principle according to Association Française de Normalisation (AFNOR ISO 16634-1:2008 November 2008). P content was determined via spectral photometry and Ca and Mg by flame photometry according to AFNOR methodology at Invivo Labs Chateau-Thierry. Apparent mineral digestibility (%) was calculated for Ca and P to represent the fraction of the minerals that were retained from the diet rather than being excreted in the faeces using the following formula:
$$ \text{Apparent digestibility}\ (\%) = \frac{\text{intake} - \text{faecal excretion}}{\text{intake}} \times 100. $$
Mineral balance (g) was also calculated for P and Ca as:
$$ \text{Total mineral balance}\ (\text{g}) = \text{intake} - (\text{faecal excretion} + \text{urinary excretion}). $$
All urine excreted over a 3-d (study 1) or 5-d (study 2) period was collected and urine pH assessed twice daily. The method for urine RSS analysis has been previously described( 19 ). Samples were analysed for oxalate, citrate, pyrophosphate, K, Ca, Na, ammonium, chloride, sulphate and phosphate via HPLC. The concentrations of minerals were then analysed by SUPERSAT software( 19 ) to calculate the RSS (activity product/solubility product) for struvite (magnesium ammonium phosphate (MAP)), calcium oxalate (CaOx) and brushite (study 2 only). The urine P and Ca content were determined via spectral photometry and flame photometry, respectively, according to AFNOR methodology at Invivo Labs Chateau-Thierry and used for calculation of the urinary fractional excretion of these minerals.
At baseline and after 4 weeks of feeding (study 1) or baseline and after 4, 12, 20 and 28 weeks of feeding (study 2), a 5-ml freshly voided urine sample was collected from each cat, and creatinine, microalbumin, pH, urine-specific gravity, glucose and urine protein:creatinine ratio were measured (InSight MS-11 and MS-2 Vet urine strips; Woodley Equipment Company Ltd) within 30 min. Specific gravity was also determined using a refractometer (Sinotech RHCN-200ATC; Sinotech) and urine albumin and creatinine measured using the Beckman Coulter microalbumin (OSR6167) and creatinine (OSR6178) assays for the Olympus AU480 biochemistry analyser (Olympus Europe GmbH). The urine albumin:creatinine ratio (UACR) was then calculated for each sample.
At baseline and every 2–8 weeks thereafter, morning fasted (>12 h) blood samples were collected for the measurement of standard biochemistry and haematology, ionised Ca (iCa), vitamin D metabolites, TAG, the markers of bone turnover serum crosslaps (CTx) and bone-specific alkaline phosphatase (BAP), parathyroid hormone (PTH) and fibroblast growth factor 23 (FGF23). EDTA-treated blood was used to measure standard haematology parameters (leucocyte and erythrocyte counts, Hb concentration, haematocrit, platelet count, mean corpuscular volume, mean corpuscular Hb, number and percentage of lymphocytes, monocytes and granulocytes) via a Mythic 18 cell counter (Orphee).
Lithium-heparinised plasma was analysed to quantify standard biochemistry parameters (total protein, albumin, phosphate, alkaline phosphatase (ALP), alanine transaminase, aspartate aminotransferase, Ca, cholesterol, urea, creatinine, TAG and glucose). In study 1, this was carried out in-house using an AU480 (Olympus) analyser. In study 2, heparinised serum was sent to an external laboratory (IDEXX Laboratories) for biochemical analysis as in study 1 with the addition of symmetric dimethylarginine (SDMA) measurement. Serum was collected for the measurement of markers of bone turnover: BAP and CTx using the BAP MicroVue™ Quidel ELISA (TECO Medical Group) and CartiLaps® ELISA (Immunodiagnostic Systems Ltd), both according to the manufacturer's instructions. In the Department of Comparative and Biomedical Sciences, Royal Veterinary College, London, EDTA plasma was analysed to quantify intact plasma FGF23 concentrations using a sandwich ELISA (Kainos Laboratories Inc.) as detailed by Geddes et al. ( 20 ) and PTH concentrations by a total intact PTH immunoradiometric assay (Scantibodies Laboratory, Inc.) previously validated for use with feline samples( 21 ). Serum was also collected for the measurement of the vitamin D metabolites 25-hydroxyvitamin D (25(OH)D) and 24,25-dihydroxyvitamin D (24,25(OH)2D) by liquid chromatography (LC)/MS–MS (Agilent) using a modified method described by Aronov et al. ( 22 ); these measurements were performed at the Bioanalytical Facility at the University of East Anglia, Norwich, UK. Total 25(OH)D, total 24,25(OH)2D and the epimer C3-Epi-25(OH)D3 were measured by LC/MS–MS (performed using a Micromass Quattro Ultima Pt Mass Spectrometer; Waters Corp.). Total serum 1,25-dihydroxyvitamin D (1,25(OH)2D) levels were measured using an EIA Kit (IDS Ltd). iCa analysis was conducted using heparinised whole blood in a Stat Profile Prime Critical Care analyser (Nova Biomedical).
Plasma and serum calcium–phosphorus (CaP) product (mmol²/l²) was calculated by multiplying plasma total Ca and inorganic P concentrations in mmol/l.
In study 1, at baseline and 5 weeks after diet change, and in study 2 at baseline, 5, 13, 21 and 29 weeks after diet change, iohexol clearance tests were carried out as an estimate of glomerular filtration rate (GFR) using the method described by Finch et al. ( 23 ). Briefly, for topical anaesthesia, 2 ml of EMLA cream (2·5 % lidocaine, 2·5 % prilocaine; AstraZeneca) was applied to the skin over the cephalic veins and covered for 1 h before removal and aseptic preparation of the area with chlorhexidine gluconate. The cephalic vein was raised by pressure to the dorsal aspect of the foreleg and a catheter was inserted into the vein before flushing with 0·5 ml of heparinised saline (100 IU/ml of heparin in 0·9 % saline; Wockhardt UK Ltd). A discard volume of 300 µl was removed from the catheter and 647 mg/kg iohexol (Omnipaque 300; Amersham Health) was administered over a 3-min period, followed by a heparinised saline flush. The completion of the injection represented time zero. Blood samples of 1 ml volume were collected via a cephalic catheter into serum tubes at 2, 3 and 4 h post infusion. Iohexol concentration in the serum of each cat at each time point was analysed using high-performance capillary electrophoresis (undertaken in the laboratories of deltaDOT Ltd and London BioScience Innovation Centre). Weight-adjusted clearance (in ml/min per kg) was calculated from the slope of the concentration gradient.
The fractional excretion of P (FEP) and Ca (FECa) was calculated as the percentage filtered by the glomerulus excreted into the urine, expressed as a ratio to creatinine clearance as below:
$$ \%\,\text{FE} = \frac{\text{urinary concentration of X} \times \text{circulating creatinine concentration}}{\text{urinary concentration of creatinine} \times \text{circulating X concentration}} \times 100 $$
Imaging: at baseline and at the end of both studies, general physical health examinations, whole-body radiographs and abdominal ultrasound scans were carried out to detect soft tissue mineralisation, urolithiasis or other pathologies. Radiographs of the abdomen were taken in ventrodorsal and right lateral recumbency, those of the thorax were taken in right lateral recumbency (AGFA CR 30-X, exposure range 46–64 kV, 2·36–4·73 mAs) and full abdominal ultrasound scans (GE Logiq-E with a linear probe 307 12L-RS, 5–13 MHz, 47 mm footprint) were performed, with the cats under sedation (induced through administration of 0·3 mg/kg butorphanol and 0·025 mg/kg dexmedetomidine and reversed with 0·075 mg/kg atipamezole). All scans and interpretation of the radiographs were carried out by a diplomate of the European College of Veterinary Diagnostic Imaging (H. R.) who was blinded to the dietary grouping.
In study 2, one cat from the test group presenting at week 27 with acute renal failure did not respond to treatment and was euthanised following a pre-defined clinical management pathway. Post-mortem and histological investigations, including Von Kossa staining to visualise Ca mineralisation( 24 ), were carried out by a diplomate of the European College of Veterinary Pathologists at the Animal Health Trust, Diagnostic Laboratory Services, Lanwades Park, Kentford, Newmarket, Suffolk, UK.
Both studies were powered according to the primary response variable GFR (iohexol clearance ml/min per kg), by simulation using baseline values from study 1 to estimate variance. To detect a change in distribution where 10 % of cats had values <0·92 ml/min per kg, with approximately 80 % power and using a test level of 5 %, twenty cats were required per diet group. An additional 4–5 cats/group were used in study 1 and an extra five in study 2, to allow for potential drop-out.
For each measure, linear mixed model, restricted maximum likelihood (REML) analyses were performed to estimate variance parameters allowing for repeated measures on each subject over time. Specifically, a random effect of cat and categorical fixed effects of diet, week and their interaction were used. For some measures, data were missing owing to samples not being taken or to undue assay variability; however, the REML method of analysis was used to account for this and assumed these values were missing at random. The model made use of the incomplete data so that these did not bias the estimates. Distributional assumptions were checked to ensure robustness of the statistical models by performing residual checks (e.g. for randomness and constant variance). Residuals were found to have increasing variability for a number of measures, and these were log10 transformed before analyses.
GFR was the primary measure; however, an overall test level of 5 % was also applied for all other supportive measures regarded as secondary. Planned comparisons were calculated between diet groups at baseline and within and between each diet groups from baseline to subsequent time points. For each measure, family-wise adjustments were made for the number of comparisons to maintain an overall test level at 5 %. Accordingly, mean, difference in mean or fold change of mean (where log10 transformation was necessary) are reported with 95 % family wise CI. Analyses were performed in R version 3.3.3( 25 ) using libraries 'lme4' for linear mixed effects models( 26 ), 'multcomp' for simultaneous inference of planned contrasts( 27 ) and 'ggplot'( 28 ) for figures.
For behavioural reasons, occasional missed or partial samples occurred and sample numbers are presented in the relevant figures and tables.
Increased incidence of vomiting was noted in the test group from week 1 of feeding and average daily energy intake (per kg BW) reduced significantly (P <0·001) in the test group from 43·14 (95 % CI 39·23, 47·05) to 37·56 (95 % CI 33·65, 41·47) kcal/kg BW (180·5 (95 % CI 164·1, 196·9) to 157·1 (95 % CI 140·8, 173·5) kJ/kg BW) and this change was significantly (P <0·001) different from the control group, which remained stable at 42·87 (95 % CI 38·96, 46·78) kcal/kg BW at baseline and 43·93 (95 % CI 40·02, 47·84) kcal/kg BW (179·4 (95 % CI 163·0, 195·78) kJ/kg BW at baseline and 183·8 (95 % CI 167·4, 200·2) kJ/kg BW) at week 4 (online Supplementary Table S2). Correspondingly, the mean BW in the test group reduced significantly (P <0·001) from baseline, whereas that of the control group did not change (P >0·05) and this change was significantly (P <0·001) greater in the test group at week 4 (online Supplementary Table S2). When transferred to the test diet, mean P intake in the test group increased significantly (P <0·001) to 0·18 (0·17, 0·19) g/kg BW compared with 0·06 (0·04, 0·07) g/kg BW in the control (online Supplementary Table S2). Mean Ca intake was also observed to increase significantly (P <0·001) in the test group to 0·13 (0·12, 0·14) g/kg BW, compared with 0·06 (0·05, 0·07) g/kg BW in the control.
Urinary and faecal P excretion in the test group increased significantly (P <0·001) from baseline and in comparison with the control group (Fig. 1(a)). Urinary excretion of Ca did not change from baseline in either group; however, faecal excretion increased significantly (P <0·001) in the test group and was significantly different (P <0·001) from the change in the control (Fig. 1(b)). Apparent digestibility (the percentage of intake not excreted in faeces) of dietary P was not significantly (P >0·05) different between the control and test groups at baseline: 37·83 (32·34, 43·33) and 38·02 (32·41, 43·63) %, respectively. After 4 weeks, P apparent digestibility in the test group increased by 11·28 (3·70, 18·86) % and this change was significantly (P = 0·003) different from that in the control. In the test group, Ca apparent digestibility reduced significantly (P = 0·001) from baseline over the feeding period, but this change was not significantly different (P = 0·139) from the change in the control group.
Total P and Ca balance were not significantly different between diet groups at baseline (Fig. 1(c) and (d)). After the feeding period, P balance was not significantly different from baseline (P = 0·211) (Fig. 1(c)), whereas total Ca balance reduced (P <0·001) by 0·46 (0·16, 0·76) g to become negative in the test group at week 4 and this change was significantly different from that of the control group (Fig. 1(d)).
No difference between groups in GFR was observed at baseline (Fig. 2). Although GFR reduced significantly (P <0·05) in both groups over the feeding period, a significantly greater reduction (P <0·001) was observed in the test group compared with the control by 0·26 (0·11, 0·41) ml/min per kg (Fig. 2).
At baseline, no significant differences were observed between diet groups in any parameter except iCa, which was significantly (P = 0·035) higher in the control group by 0·021 (0·001, 0·041) mmol/l (Table 2). However, over the feeding period, control group iCa levels reduced to those of the test group, which did not change (Table 2). After 4 weeks of feeding, significant changes were observed in a number of parameters (Table 2). Plasma creatinine increased significantly in the test group by 24·95 (24·38, 25·52) μmol/l (Fig. 3(a)). In the test group, plasma urea increased significantly (P <0·05) by 1·38 (1·33, 1·44) mmol/l, whereas albumin levels decreased significantly (P <0·05) by 2·08 (2·05, 2·11) g/l (Fig. 3(b)). Plasma ALP concentration also increased significantly in the test group by 0·10 (0·09, 0·12) µkat/l (Table 2). No significant between group difference was observed in plasma total Ca levels, although those of the control group reduced significantly (P = 0·020) (Fig. 3(c)). Plasma inorganic P concentration decreased significantly (P <0·001) in the test but not the control group over the feeding period by 0·40 (0·39, 0·41) mmol/l (Fig. 3(d)). The calculated plasma CaP product was not significantly different between groups at baseline (Table 2); however, after 4 weeks of feeding the test group CaP had reduced significantly (P <0·001) from 3·13 (2·88, 3·40) to 2·12 (1·97, 2·29) mmol²/l², and this change was significantly (P <0·001) different from the control group, which remained unchanged (Table 2). Additional blood biochemistry values can be seen in online Supplementary Table S3.
ALP, alkaline phosphatase; CaP, plasma Ca–P product; BAP, bone-specific alkaline phosphatase; CTx, serum crosslaps; 25(OH)D, 25-hydroxyvitamin D; 24,25(OH)2D, 24,25-dihydroxyvitamin D; 1,25(OH)2D, 1,25-dihydroxyvitamin D.
a,b Mean values within a row with unlike superscript letters were significantly different between test and control at baseline (P <0·05).
* A significant difference from baseline within group (P <0·05). P-value provides a comparison of the change from baseline at week 4, between test and control diet groups.
Fig. 3 Study 1, (a) serum creatinine (µmol/l) and (b) serum urea (mmol/l), (c) serum calcium (mmol/l), (d) serum inorganic phosphorus (mmol/l) by study week and diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI, n 24. * Significant between diet difference in change from baseline (P <0·05).
In the test group, total 25(OH)D concentrations fell significantly (P <0·05) from 138·36 (122·47, 156·31) nmol/l at baseline to 115·36 (102·18, 130·24) nmol/l at week 4 (Table 2). No significant (P <0·05) changes were observed in levels of serum total 24,25(OH)2D over the 4-week feeding period (Table 2). Serum levels of C3-Epi-25(OH)D3 were not significantly different (P >0·05) between groups at baseline, but after 4 weeks of feeding had increased significantly (P = 0·01) in the control group, whereas no change was observed in the test group (Table 2). No significant changes in serum 1,25(OH)2D were observed over the 4-week feeding period (Table 2).
Over the 4-week feeding period, levels of FGF23 increased significantly (P <0·001) in the test group by 5·27 (4·14, 6·71)-fold (Fig. 4(a)). PTH also increased significantly in both diet groups; however, in the test group this increase was significantly (P = 0·01) greater by 4·15 (3·41, 5·06) pg/ml (Fig. 4(b)).
BAP concentrations increased in the test group by 1·39 (1·29, 1·50)-fold, whereas those of the control group did not change (Table 2) and the difference in change from baseline between diets was significant (P <0·001). Serum levels of CTx increased in both the control and test groups by 1·11 (1·02, 1·19)-fold and 1·23 (1·14, 1·32)-fold, respectively; therefore, the difference in change between groups did not meet the level of significance (P >0·05) (Table 2).
A total of five parameters were observed to change significantly over the feeding period. Mean corpuscular volume and mean corpuscular Hb reduced in both diet groups, although this was significantly greater (P <0·05) in the test group (online Supplementary Table S4). Red cell distribution width and mean platelet volume both increased significantly in the test group, but not in the control (online Supplementary Table S4), whereas platelet distribution width reduced significantly only in the test group (online Supplementary Table S4).
Although total daily urine volume increased significantly (P = 0·007) in the test group over the feeding period, no significant between diet difference in change from baseline was observed (Table 3). Urine specific gravity was significantly lower in the test group compared with the control group at baseline (P = 0·04), but increased significantly (P <0·05) by week 4 and no significant between diet difference in change from baseline was observed (Table 3). However, urine relative supersaturation for CaOx and MAP decreased significantly in the test group by 1·73 (1·38, 2·17) and 2·26 (1·39, 3·69)-fold, respectively (Table 3). Urine microalbumin levels increased significantly (P <0·05) in the test cats (Table 3) as did UACR from 1·69 (1·07, 2·67) to 11·70 (7·41, 18·48) mg/g (Fig. 5).
RSS CaOx, urine relative supersaturation for calcium oxalate; RSS MAP, urine relative supersaturation for magnesium ammonium phosphate (struvite); FE, fractional excretion of P or Ca.
Fig. 5 Study 1, urine albumin creatinine ratio (UACR, mg/g) by week and diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI, n 24. * Significant between-diet difference in change from baseline (P <0·05).
Calculated FEP was not significantly different between groups at baseline (Table 3), but after 4 weeks of feeding increased significantly (P <0·001) in the test group from 11·02 (9·56, 12·71) to 183·05 (158·79, 211·02) % and this change was significantly (P <0·001) different from that in the control group (Table 3). Change in FECa from baseline over the feeding period was not significantly different between groups (P = 0·550), even though that of the test group did increase significantly (P = 0·039) by 1·23 (1·01, 1·50)-fold (Table 3).
A total of twenty-two cats (92 %) in the test group had changes in echogenicity indicative of altered renal morphology. Of these, three had subtle, seven narrow and seven broad renal cortical rim sign (RS) identified, whereas five had a broad renal cortical RS with parenchymal speckling and generalised increased echogenicity. In all, two test cats (8 %) had developed renoliths and one had pyelectasia. In contrast, one cat in the control group was observed to have developed a narrow renal cortical RS. Examination of the whole-body radiographs did not indicate any evidence of mineralisation of soft tissues or skeletal changes.
In all, three cats in the control group were removed from the trial after the baseline sample owing to behavioural unsuitability for sample collection. Data from these cats are not included in the analysis. In all, four cats were removed from the study before 28 weeks; one in the control group was removed in week 19 for behavioural reasons and its data were not included after this time. In all, three test group cats were removed after the 21-week sample point revealed changes in serum biochemistry (creatinine and urea outside of normal range) consistent with an adverse effect on renal function; their data were included up to and including week 21. For behavioural reasons, occasional missed or partial samples occurred for other measures and sample numbers are presented in the relevant figures and tables.
No significant between-group differences in BW were observed with mean values at baseline of 4·26 (3·88, 4·64) and 4·52 (4·15, 4·89) kg for the control and test groups, respectively, and 4·27 (3·89, 4·66) and 4·49 (4·12, 4·86) kg at week 29 (online Supplementary Table S5). No significant (P >0·05) between-group differences in the change in energy intake were determined at any time (online Supplementary Table S5). After baseline, a significant increase (P <0·001) in mean P intake to 0·16 (0·15, 0·17) g/kg was observed in the test group compared with the control, which did not change from a baseline level of 0·06 (0·05, 0·07) g/kg (online Supplementary Table S5). Mean Ca intake was also observed to increase significantly (P <0·001) from baseline in the test group to a mean of 0·15 (0·14, 0·16) g/kg at week 28 from 0·07 (0·06, 0·08) g/kg.
Urinary and faecal P excretion in the test group increased significantly (P <0·001) from baseline and in comparison with the control group (Fig. 6(a)). The change from baseline in urinary Ca excretion was only significantly (P = 0·036) different between groups at week 28 (Fig. 6(b)). Faecal Ca excretion increased significantly (P <0·001) in the test group compared with baseline and the control group. Apparent P digestibility at baseline was not significantly different (P = 0·996) between the control and test groups: 38·26 (32·14, 44·38) and 40·26 (34·27, 46·25) %, respectively. However, over the 29-week feeding period P apparent digestibility increased in both groups, to 50·36 (43·8, 56·92) % in the control group and 51·53 (44·8, 58·25) % in the test group, and therefore no significant (P >0·05) between-diet difference in change from baseline was observed. Apparent Ca digestibility did not change significantly (P >0·05) from baseline (14·33 (5·52, 23·13) %) in the test group. However, in the control group Ca retention increased significantly (P <0·001) from baseline by 23·05 (9·94, 36·15) to 37·37 (27·93, 46·81) % at week 28, a change significantly (P = 0·015) greater than in the test group.
Total P balance was not significantly different between the diet groups at baseline (P = 0·998) (Fig. 6(c)). From week 4 of the feeding period, total P balance increased significantly (P <0·001) in the test group from 0·39 (0·18, 0·59) to 0·74 (0·52, 0·96) g/d at week 28, whereas that of the control group was significantly (P = 0·047) higher than baseline only at the final time point (Fig. 6(c)). Total Ca balance change from baseline was not significantly different between the diet groups at any time point (Fig. 6(d)).
In both diet groups, no significant (P >0·05) change from baseline was observed and no significant between-diet difference determined (Fig. 7).
After 13 weeks of feeding, serum creatinine levels in the control group had fallen significantly (P <0·001) and remained lower than at baseline for the rest of the feeding period. However, in the test group creatinine concentration did not change from baseline at any point (Fig. 8(a)). A significant (P <0·05) difference in change from baseline was observed between groups at weeks 13 and 21 (Fig. 8(a)), but not at week 28 (P = 0·064) (Fig. 8(a)). Of the three cats that developed kidney disease, one was removed after serum creatinine levels were observed to increase from 190 to 283 µmol/l between weeks 13 and 21. The other two individuals were observed to have serum creatinine levels of 192 and 186 µmol/l at week 21 and were removed before week 28. Over the feeding period, serum urea levels fell significantly (P <0·001) below baseline in the control group from 13 weeks onwards (Fig. 8(b)), whereas those in the test group remained unchanged (P >0·05). A significant between-group difference in change in serum urea (P <0·05) was observed at weeks 21 and 28 (Fig. 8(b)). The three cats that developed kidney disease were observed to have serum urea levels of 12·1, 9·5 and 9·0 mmol/l at week 21. In both diet groups serum albumin levels decreased significantly (P <0·001) from baseline, although no significant (P >0·05) between-group difference was observed (Table 4). Conversely, serum globulin levels increased significantly (P <0·001) from baseline in both groups; no significant between diet differences in change from baseline were detected (Table 4). Serum levels of SDMA did not change from baseline in either diet group over the course of the study and no between-group differences were observed (Table 4). However, fifteen cats in the control group were observed to have SDMA levels of between 0·75 and 1·09 μmol/l (15 and 22 μg/dl) at one or more time point, whereas seventeen test cats were determined to have levels between 0·74 and 1·14 μmol/l (15 and 23 μg/dl) at least once. The three cats that developed kidney disease were observed to have SDMA levels of 1·14, 0·84 and 0·74 μmol/l (23, 17 and 15 μg/dl) at week 21. Serum inorganic P concentrations in the test group decreased significantly (P <0·001) from a baseline mean of 1·42 (1·34, 1·52) to 1·22 (1·14, 1·3) mmol/l at week 28, whereas those of the control group did not change (Fig. 8(c)). The change in serum inorganic P concentration was therefore significantly different (P <0·01) in the test group compared with the control group from week 13 onwards (Fig. 8(c)). Serum levels of Ca fell significantly (P <0·05) from baseline in both groups, but a significant between diet difference was only observed at week 28 (P = 0·002, Fig. 8(d)). Post-baseline changes in serum cholesterol levels were observed in both groups, with significant (P <0·05) between-group differences detected between weeks 8 and 21 (online Supplementary Table S6). ALP concentrations were observed to have increased (P = 0·009) from baseline in the test group at week 13, and this change was significantly different (P = 0·003) from that in the control group at this time and at week 21 (online Supplementary Table S6). A transient difference (P = 0·006) in change from baseline was detected in serum K and Na:K ratio at week 13, but no other significant between-diet differences were observed in the biochemical parameters measured (online Supplementary Table S6).
Plasma CaP product was not significantly different between groups at baseline (Table 4); however, over the feeding period this reduced significantly (P <0·001) in the test group from 3·6 (3·3, 3·9) mmol²/l² at baseline to 2·9 (2·7, 3·10) mmol²/l² at week 28 and this change was significantly (P <0·001) different from the control group (Table 4). Additional blood biochemistry values can be seen in online Supplementary Table S6.
Fig. 8 Study 2, (a) serum creatinine (µmol/l), (b) urea (mmol/l), (c) inorganic P (mmol/l), (d) Ca (mmol/l) by study week and diet. Individual cat data are shown as small symbols and means as large symbols with 95 % CI, n 22–25. * Significant between diet difference in change from baseline (P <0·05). (Legend symbols distinguishing control and test diets are not reproduced here.)
SDMA, symmetric dimethylarginine; ALP, alkaline phosphatase; CaP, serum Ca–P product; BAP, bone-specific alkaline phosphatase; CTx, serum cross laps; 25(OH)D, 25-hydroxyvitamin D; 24,25(OH)2D, 24,25-dihydroxyvitamin D; 1,25(OH)2D, 1,25-dihydroxyvitamin D.
a Mean values within a row with unlike superscript letters were significantly different between test and control at baseline (P <0·05).
Serum total 25(OH)D concentrations in the test group fell significantly (P <0·05) from a baseline concentration of 137·89 (117·21, 162·22) to 123·58 (104·85, 145·66) nmol/l at week 28 (Table 4). However, this change was not significantly (P >0·05) different from that of the control group (Table 4). Total 24,25(OH)2D serum concentration increased significantly (P <0·05) from baseline in the test group from 24·52 (19·43, 30·95) to 30·87 (24·36, 39·12) nmol/l at week 28, and this change was significantly (P <0·05) different from the control group from week 13 (Table 4). Serum levels of C3-Epi-25(OH)D3 fell significantly from baseline in the test group from week 13 (P <0·05), although this change was not significantly different (P >0·05) from that in the control group at any point (Table 4). In the test group, other than a transient reduction (P = 0·012) in week 4, serum levels of 1,25(OH)2D did not change from baseline over the feeding period (Table 4). In the control group, serum levels of 1,25(OH)2D were significantly lower than at baseline at weeks 13 and 28 and this change was significantly different (P <0·05) from that in the test group (Table 4).
Over the feeding period, serum levels of FGF23 in the test group increased from baseline by up to 2·91 (1·92, 4·42)-fold and this change was significantly (P <0·05) different from that in the control group at weeks 2, 4, 13 and 21 (Fig. 9(a)). Serum levels of PTH increased above baseline in the test group, although when compared with the control this change did not reach the level of statistical significance at time points other than weeks 21 and 28 (Fig. 9(b)). No significant between-diet differences in change from baseline were observed in the serum levels of CTx, although those of the control group fell significantly (P <0·05) (Table 4). No sustained change in BAP levels was observed in the test group, although levels reduced significantly (P <0·05) compared with baseline from week 8 onwards in the control group and this change was significantly different (P <0·05) to that of the test group from week 4 (Table 4).
Other than transient changes in granulocyte numbers, no significant between-diet differences were observed in any of the haematological parameters measured (online Supplementary Table S7).
No significant between-group differences in change in urine volume, specific gravity or urine pH were observed (Table 5). Urinary creatinine levels did not change from baseline in either group until week 28, when a reduction of 2·43 (2·16, 2·72) mmol/l was observed in the control group (Table 5). Urinary microalbumin levels in the test group increased significantly (P <0·001) from 8·14 (5·49, 12·07) to 48·15 (32·08, 72·26) mg/l, although no change was observed in the control group (Table 5). Consequently, UACR in the test group was observed to increase (P <0·001) from 2·37 (1·46, 3·84) to 14·38 (8·54, 24·21) mg/g at week 28, a change significantly (P <0·001) different from that in the control (Fig. 10). Changes from baseline in urinary Ca, citrate and sulphate were not significantly different between groups at any time point (Table 5). Changes in urinary Mg, Na, urate and ammonium concentrations were small, but at some time points were significantly (P <0·05) different between groups (Table 5). Urinary oxalate concentrations were higher in the test group than in the control by up to 0·64 (0·17, 1·12) mmol/l (Table 5) and the change from baseline in the test group was significantly (P <0·05) different from that of the control. However, changes in urinary RSS for CaOx and struvite were not significantly different between groups (Table 5). In contrast, brushite urinary RSS increased significantly (P <0·05) from baseline in the test group, and this change was significantly different (P <0·001) from the control at week 12 (Table 5).
RSS CaOx, urine relative supersaturation for calcium oxalate; RSS MAP, urine relative supersaturation for magnesium ammonium phosphate (struvite); RSS BRU, urinary RSS for brushite (CaHPO4.2H2O); FE, fractional excretion of P or Ca; CI, confidence interval.
Fig. 10 Study 2, urinary albumin creatinine ratio (UACR, mg/g) by study week and diet. Individual cat data are shown as small symbols and means are shown as large symbols with 95 % CI, n 20–25. * Significant between diet difference in change from baseline (P <0·05). (Legend symbols distinguishing control and test diets are not reproduced here.)
Calculated FEP was not significantly different between groups at baseline, but after 4 weeks of feeding had increased significantly (P <0·001) in the test group from 23·34 (95 % CI 20·41, 26·69) to 75·22 (95 % CI 65·18, 86·80) % (Table 5). This increase was sustained for the rest of the feeding period and was significantly different (P <0·001) from that in the control group (Table 5). FECa was not significantly different between groups and did not change significantly in either group over the feeding period (Table 5).
At the end of the study, ultrasound examination revealed that fifteen cats (60 %) in the test group had developed renoliths compared with six cases in the control group (27 %). Evidence of changes in kidney echogenicity indicative of structural changes were not observed in the control group, but were observed in nine test cats (36 %). Of these, one test cat was observed to have developed narrow and four broad renal cortical RS, whereas three were observed to have parenchymal speckling, hyperechoic medullary radiating linear echoes or generalised increased echogenicity. In all, five test cats (20 %) were identified as having both changes in echogenicity with renoliths present, two of which had also developed pyelectasia. Examination of the whole-body radiographs did not indicate evidence of any soft tissue mineralisation outside of the kidney or skeletal changes.
In week 27, following a pre-defined clinical management pathway, one cat from the test group was euthanised after developing acute azotaemia that did not respond to treatment. Post-mortem and histological investigations were carried out. Macroscopically the kidney surface was observed to be irregular and within the cortex several white radial stripes were observed with chronic inflammation and fibrosis. Lesions were radially distributed, supporting the presence of renal infarcts. Inflammatory cells, multifocal flattening of the tubule epithelial lining, accumulation of necrotic debris, dystrophic calcifications and crystals were observed under microscopic examination. The inflammatory infiltrate was noted to extend into the medulla, pelvis and renal capsule. Von Kossa staining gave a positive signal in the foci of tubular necrosis, although Von Kossa-negative intratubular crystals were also observed.
In study 1, feeding cats an extruded diet with a total P content of 4·8 g/1000 kcal (4184 kJ) and Ca:P of 0·6 led to a series of adverse health effects within 4 weeks, including reduced appetite, increased incidence of vomiting, weight loss and changes in renal echogenicity indicative of structural abnormalities. In agreement with the findings of Pastoor et al. ( 7 ) and Dobenecker et al. ( 8 ), elevated serum creatinine concentrations were observed and these were accompanied by a reduced GFR. A persistent reduction in GFR generally indicates direct damage to the renal tubular cells leading to loss of functioning nephrons( 29 ). If the initial damage is to the tubular cells( 29 ), the cells may desquamate into the tubular lumen causing obstruction and back flow of filtrate into the peritubular interstitium, thus increasing glomerular pressure and decreasing GFR( 30 ). In turn this leads to a reduction in creatinine and urea clearance from the blood and serum levels increase as observed in study 1. A significant reduction in serum albumin levels was also observed in the test group, possibly indicating impairment in the kidney's ability to retain this protein. However, as only a relatively small corresponding increase in urinary microalbumin was observed, this may have been because of losses due to blood sampling, as well as reduced production due to decreased food intake. Haematological changes observed in the study 1 test group were similar to those reported in cats suffering from hyperthyroidism( 31 ) – for example, erythrocyte poikilocytosis indicated by changes in mean corpuscular volume, mean corpuscular Hb, reduced erythrocyte distribution width and mean platelet volume. These changes may have been because of haemodynamic disturbance due to reduced GFR( 32 ). The observed alterations to serum and urinary biochemistry are indicative of stage 2 kidney disease classified using the International Renal Interest Society scale( 33 ) and are supported by the changes in renal morphology identified by ultrasonography. Subsequent monitoring has indicated that these changes are persistent, but not progressive.
Subsequently, study 2 was undertaken with the aim of determining a NOAEL for dietary P. In this study, clinically relevant changes in markers of kidney function were not apparent in the majority of cats offered a test diet containing 3·6 g/1000 kcal (4184 kJ) of P and a Ca:P of 0·9. The mean GFR and circulating concentrations of serum creatinine, urea and SDMA did not change in the test group over the course of the feeding period. It is also of note that serum levels of creatinine and urea fell in the control group, possibly owing to differences between the control diet and the standard diets the cats were habituated to before the study, although it might be expected that adaptation would have taken place over a shorter time period. P regulatory factors, FGF23, PTH, iCa and vitamin D remained within published normal ranges in all but three cats throughout the second study( 20 , 34 , 35 ). However, in common with the first study, FGF23 and PTH levels increased significantly in response to the increased P intake, although the changes were less marked. Notwithstanding the lesser degree of adaptive changes observed in study 2 and the stability of mean GFR, the incidence of kidney stones and renal structural changes, as determined by ultrasound, were observed to be higher in the test group. In addition, three cats developed signs of renal dysfunction towards the end of the feeding period, with clinically relevant changes from week 21 of feeding. Taking these observations together, it was not possible to establish a NOAEL for dietary P.
The bioavailability of dietary P is not only influenced by the P content of the diet, but the ratio of Ca:P and source of P have also been demonstrated to play important roles in absorption( 9 – 11 ). Here, in study 1, extra Ca in both organic (bone and meat meal) and inorganic (calcium carbonate) forms was added to balance the P content of the test diets at the desired ratio. Dietary P provided by organic raw materials (e.g. bone meal, poultry meal or P bound by phytates) is less bioavailable than that arising from added inorganic sources (e.g. P containing mono and dibasic Na salts)( 7 , 12 ). Phosphates are widely used in commercial human and pet food manufacturing, where they serve a number of processing functions and as a source of required dietary P( 13 ). Such P salts easily disassociate, solubilise and are readily absorbed in the intestinal tract; hence, P absorption can be greatly influenced by the chemical form ingested, with both circulating P concentration and urinary P excretion influenced by differences in dietary P availability( 12 ). Evidence suggests that over 90 % of inorganic P may be bioavailable, compared with between 40 and 60 % for naturally occurring sources( 10 ); the proportion of dietary P coming from inorganic sources probably contributed to the effects seen after consumption of high P diets in the present studies, as well as those reported by Pastoor et al. ( 7 ) and Dobenecker et al. ( 8 ). The test diets used here were formulated using relatively high levels of inorganic SDHP, which has been reported to be highly bioavailable and to have a greater effect on renal parameters than calcium monophosphate( 36 ). Inorganic neutral monobasic/dibasic sodium phosphates have been observed to increase urinary P concentration more than those originating from organic dietary P sources( 12 ), and the higher level of dietary Na associated with these salts may stimulate Na+-dependent uptake of phosphate( 14 ). This action could have exacerbated the adverse effects of P intake in the studies presented here and particularly in study 2 when total P level was moderate. However, it is unlikely that Na levels alone induced the renal changes observed here. A study on the effects of dietary salt intake on renal function in healthy aged cats using a test diet at a similar level of Na inclusion did not induce deleterious effects on renal function or changes in plasma P, creatinine and PTH over a 2-year feeding period( 37 ). In a recent study of the postprandial effects of dietary P in cats, Coltherd et al. ( 38 ) observed that plasma P profiles are responsive to changes in dietary P content and that inorganic P content drives this response in a dose-dependent manner. These authors report that feeding diets with a high inclusion of SDHP and a Ca:P <1·0 induced a significant increase in postprandial plasma P levels that was sustained for 6 h. Our findings therefore support other reports indicating that diets including high levels of P from inorganic salts may have adverse effects on renal health in healthy cats( 12 , 36 , 39 , 40 ).
Circulating P levels are regulated by a number of hormones including FGF23 and PTH. In both studies, when high levels of P were ingested, levels of these hormones increased possibly in an attempt to maintain homoeostasis. PTH and FGF23 reduce reabsorption of phosphate in the proximal tubule, resulting in increased P concentration in the tubule, potentially leading to crystallisation( 41 ). PTH acts by reducing sodium phosphate cotransporter expression on the apical membrane of the proximal tubule and induces increased urinary P excretion( 42 ), as observed here. FGF23 and its cofactor, klotho, bind to FGF receptors type 1 in the proximal tubule and inhibit synthesis of the sodium phosphate cotransporter and 1α-hydroxylase, an enzyme involved in hydroxylation of 25(OH)D (calcifediol) to 1,25(OH)2D (calcitriol, the bioactive form of vitamin D)( 43 ). As a result, more P would be expected to be excreted in urine and, owing to reduced levels of vitamin D, less P absorbed from the intestine, hence reducing the circulating P level. In the current studies, although urinary P concentration increased in the test groups as might be expected, there was no reduction in P apparent digestibility. While remaining within the normal range (0·8–1·9 mmol/l), circulating P concentrations in both studies were observed to reduce, possibly indicating an over-compensation in urinary excretion. In the first study, owing to the relatively low circulating P levels compared with the high urinary excretion observed, calculated FEP increased significantly over the feeding period to levels much higher than those reported as normal for cats (<73 %)( 44 , 45 ) and considered physiologically impossible as net tubular secretion of phosphate does not occur. This may be because the fasting levels of circulating P measured in the study are not indicative of transient postprandial changes in concentration. It has been reported in dogs that FE of a number of minerals is significantly higher after feeding, and postprandial measurements may offer further information on adaptations made in response to dietary intake( 46 ). In the second study, FEP also increased significantly in response to the more moderate levels of dietary P; although this is still likely to be an over-estimation owing to postprandial fluctuations in plasma P concentration, values were closer to the reported physiological maximum. Other minerals may also exert effects on P regulation – for example, Mg and P metabolism have previously been linked and hypomagnesaemia has been associated with raised FGF23 production and the progression of CKD in cats( 47 ). However, here dietary Mg levels, while meeting nutritional requirements( 4 ), were not fixed and neither serum nor faecal Mg levels were assessed; hence, the contribution of this mineral cannot be interrogated.
P absorption in the gut is regulated by the effect of 1,25(OH)2D on intestinal NaPi-2b transporters( 48 ). However, in both studies, the expected suppression of serum 1,25(OH)2D by FGF23 was not observed, although circulating levels of its precursor 25(OH)D did reduce significantly in the test groups. Increased PTH owing to ingestion of significant quantities of bioavailable P could counteract the effect of FGF23 by upregulating synthesis of 1,25(OH)2D in an attempt to increase Ca absorption. Although in the first study Ca balance became negative, no significant reduction in serum Ca or iCa was observed and FECa remained within reported normal range( 44 , 49 ). In the absence of changes in iCa, it is, however, possible that PTH secretion could be increased owing to transient fluctuations in postprandial extracellular P concentration( 50 , 51 ). Such postprandial changes in circulating iCa or P, although sufficient to induce PTH, would not be detected in the fasted blood samples taken in the current study. Circadian patterns of serum P, which can be modified by intake, have been observed in humans and it has been suggested that studies targeting intestinal phosphate absorption should consider afternoon phosphate measurements( 52 ). Coltherd et al. ( 38 ) have reported that feeding a diet with a high inclusion of SDHP decreased iCa and increased PTH postprandially. It has also been reported that stimulation of PTH by hypocalcaemia or hyperphosphataemia may be counteracted by balancing dietary Ca:P( 53 , 54 ). This postulated protective effect of Ca:P is supported by Coltherd et al. ( 38 ), who observed altered iCa and PTH responses when diets of similar SDHP concentration but differing Ca:P were fed. PTH is also reported to increase the rate of bone resorption( 55 , 56 ) and remodelling, a recognised consequence of increased P load( 3 ). After ingestion of excess P, deposition of Ca and P in bone may increase hypocalcaemia and raise markers of bone turnover( 57 ). In study 1, cats offered the test diet were observed to have increased levels of ALP and BAP. ALP breaks down pyrophosphate or β-glycerophosphate into inorganic phosphate essential to osteoblast differentiation and mineralisation( 58 ) and may also play a role in the intestinal absorption of inorganic P to facilitate transport( 57 ). However, such changes were not observed in study 2, when a more moderate amount of P was fed( 59 ) and no skeletal mineralisation or demineralisation was observed in radiographs from either study. Despite this, Ca balance was observed to become negative in the first study, indicating increased secretion of Ca into the intestine – a phenomenon previously correlated with high P intake( 60 ). This endogenous Ca is probably released from bone, which may be evidenced by the raised levels of ALP owing to increased bone remodelling( 61 ).
Increased total P burden can induce soft tissue mineralisation in the kidneys and other organs and deleterious effects of phosphate accumulation can be a direct consequence of calcium phosphate precipitates in the tissues. However, in both studies reported here the fasting CaP product was observed to be well below the level at which mineralisation is reported to occur (70 mg2/dl2; 5·6 mmol2/l2)( 62 , 63 ). The reduction in serum P concentration observed in the test groups actually lowered CaP, which would be expected to reduce the risk of mineralisation( 63 ). In the urinary tract, calculi are the most common form of mineralisation and cortical nephrocalcinosis may appear as increased echogenicity in or around the cortex. In both of the current studies, changes in renal echogenicity consistent with mineralisation were observed and histological examination of renal tissue from one cat indicated the presence of crystals and Ca deposits. P intake in cats has previously been reported to play a role in urolithiasis( 7 ), and in study 2 a higher incidence of renolith formation was observed in the test group. P is a component of struvite (MAP) and diets with a high P content have been associated with increased risk of struvite stone formation( 64 ). However, no sustained increase in urinary struvite RSS was observed with mean levels remaining at less than that regarded as metastable (<2·5)( 65 ). Crystallisation required for struvite renolith formation only occurs when mineral concentrations are sufficiently high and the urine pH is neutral or alkaline. Here, mean urinary pH was between 6·3 and 6·6, in the range recommended to reduce the risk of struvite crystal formation in cats( 66 ). CaOx is reported as the most common mineral found in feline renoliths( 67 ). In the second study, urine concentrations of oxalate were significantly increased in the test group, although mean urinary RSS for CaOx was similar between groups and below 10, a level reported to reduce the risk of stone formation( 65 ). In cats, primary hyperoxaluria has been noted to result in the presence of birefringent crystals in the tubules and renal fibrosis; however, such cases are rare and linked to a genetic disease similar to human primary hyperoxaluria type 2( 68 ). In the current study, urinary RSS for brushite (CaHPO4.2H2O), another stone-forming mineral, was increased significantly in the test group compared with the control group. Although brushite stones are normally uncommon in cats( 67 ), in humans metabolic abnormalities such as renal tubular acidosis and primary hyperparathyroidism are common in brushite stone disease( 69 ). Furthermore, humans who form brushite renoliths demonstrate renal cellular damage( 70 ) and kidney cell injury is associated with the presence of brushite crystals( 71 ). It has been proposed that a physiological change or insult causing loss of collecting duct cell pH regulation can trigger crystallisation resulting in cell injury and death. In alignment with the histological observations made in study 2, the crystallisation process in humans was observed to enlarge the collecting duct and to initiate interstitial inflammation( 71 ). It is possible that in the current studies the levels of P reaching the kidney could initiate a similar process, with length of exposure to the P a contributing factor as increased incidence of renolith formation was only observed in study 2.
However, as renoliths were also identified in a number of control cats, the effects of other promoters of crystal precipitation and aggregation cannot be excluded. While little information is available on renolith incidence and development in the general cat population, the development of renoliths in the control group over the feeding period was surprising and higher than that observed in a comparable population of colony cats (approximately 11 % of cats)( 72 ). The incidence of renolith formation could possibly have been influenced by the lower P content of the control diet as a study evaluating dietary risk factors for CaOx urolithiasis reported that cats consuming diets with lower P content had increased risk for CaOx urolithiasis( 64 ). However, the control diet was at a P level similar to commercial diets designed for senior cats and not reported to induce negative effects on renal function( 73 ). One shortcoming of the study is the lack of renolith analysis to confirm chemical composition and indicate mechanism of formation. However, surgical intervention to remove the stones was deemed not to be in the therapeutic interest of the cats by the attending veterinary team. It is possible that the moderate protein levels of the diets used could have also influenced the outcome of these studies. Although the formulations were based on standard extruded commercial products, protein inclusion was limited owing to the requirement to produce diets with similar nutrient levels other than P and Ca inclusion. Furthermore, if high-protein inclusion study diets had been used, this could have been a confounding factor in assessment of the impact of dietary P on renal health, as a recent retrospective study has identified a link between high protein intakes and CKD in cats( 74 ). Although the test diet offered in the first study and the control diet offered in the second study were, respectively, 1·3 and 0·4 % lower than the AAFCO minimum for adult maintenance (26 % of DM)( 6 ), all the diets used were at least 20 % above the NRC (2006) adult protein RA( 4 ). It would therefore seem unlikely that physiologically meaningful protein deficiency would have occurred over the course of these studies, although this may have resulted in lower circulating creatinine and albumin levels. The conclusions drawn from the studies presented here must, however, be considered in the context of a moderate dietary protein intake.
In conclusion, the results of the present studies suggest that 4-week feeding of a diet containing 4·8 g/1000 kcal (4184 kJ) total P, with a Ca:P of 0·6 and formulated to contain 5·2 % (3·6 g/1000 kcal (4184 kJ)) SDHP, causes renal dysfunction in healthy adult cats. In addition, in most cats prolonged feeding of a diet with a total dietary P content of 3·6 g/1000 kcal (4184 kJ), Ca:P 0·9, formulated to contain 2·2 % (1·5 g/1000 kcal (4184 kJ)) SDHP results in no change in renal function. However, sonographic changes to the kidneys were observed in some individuals, and although the clinical significance of these is not known, in a small number of individuals renal dysfunction did occur. Multiple sources of evidence suggest that dietary P provided by organic raw materials is less bioavailable than that arising from added inorganic sources, including the SDHP used in the present study. Furthermore, recent studies by our group suggest that diets with a high inclusion of inorganic P and specifically SDHP, along with a Ca:P <1·0, induce a prolonged increase in serum P concentration postprandially( 38 ) and that this may be an important factor in the adverse effects observed here. As a NOAEL for dietary P was not established, further work is required to determine both this and the effects dietary P source and Ca:P ratio have on renal health in healthy adult cats.
The authors acknowledge the skills and expertise of colleagues at WALTHAM in the care and training of the cats and for the analysis of samples collected throughout the study.
This work was funded by Mars Petcare. The WALTHAM® Centre for Pet Nutrition is a fundamental research centre for Mars Petcare.
J. S., J. Alexander, D. E., M. G., P. M., J. E., R. B. and P. W. conceived the project. J. S., J. Alexander, D. E., P. W. and M. G. developed the overall research plan and had study oversight. J. Alexander, J. S., M. G. and J. Atwal conducted the research and oversaw the analysis of the samples. H. R. carried out and assessed the abdominal ultrasounds and radiographs. R. S. and A. C. performed the statistical analysis. J. Alexander authored the paper with J. S., P. W., M. G., J. Atwal, P. M., J. E. and H. R.
J. Alexander, J. S., J. Atwal, R. B., A. C., D. E., M. G., P. M., R. S. and P. W. were employees of Mars Petcare during the study. J. E. and H. R. acted as paid independent consultants for Mars Petcare at the time of the study.
For supplementary material/s referred to in this article, please visit https://doi.org/10.1017/S0007114518002751
1. O'Brien, TD, Osborne, CA, et al. (1992) Feline renal failure: questions, answers, questions. Compend Cont Ed Pract Vet 14, 127–152.
2. Elliott, D & Lefebvre, H (2006) Chronic renal disease: the importance of nutrition. In Encyclopedia of Canine Clinical Nutrition, pp. 252–282 [P Pibot, editor]. Aimargues, France: Royal Canin.
3. DiBartola, SP & Willard, MD (2006) Disorders of phosphorus: hypophosphatemia and hyperphosphatemia. In Fluid, Electrolyte, and Acid-Base Disorders in Small Animal Practice, 3rd ed., pp. 195–209 [SP DiBartola, editor]. St Louis, MO: Elsevier Science.
4. National Research Council (US) (2006) Nutrient Requirements of Dogs and Cats. Ad Hoc Committee on Dog and Cat Nutrition. Washington, DC: National Academies Press.
5. FEDIAF-European Pet Food Industry Federation (2016) FEDIAF Nutritional Guidelines for Complete and Complementary Pet Food for Cats and Dogs. Brussels.
6. Association of American Feed Control Officials (2016) American Association of Feed Control Officials Official Publication. Washington, DC: The Association of Feed Control Officials Inc.
7. Pastoor, F, Klooster, A, Mathot, J, et al. (1995) Increasing phosphorus intake reduces urinary concentrations of magnesium and calcium in adult ovariectomized cats fed purified diets. J Nutr 125, 1334–1341.
8. Dobenecker, B, Webel, A, Reese, S, et al. (2018) Effect of a high phosphorus diet on indicators of renal health in cats. J Feline Med Surg 20, 339–343.
9. Siedler, S & Dobenecker, B (2015) Effect of different P sources in high phosphorus diets with balanced Ca/P ratio on serum PTH, P and calcium levels as well as apparent digestibility of these minerals in dogs. In Proceedings of the 19th ESVCN Congress, 17 September 2015, pp. 17–19.
10. Noori, N, Sims, JJ, Kopple, JD, et al. (2010) Organic and inorganic dietary phosphorus and its management in chronic kidney disease. Iran J Kidney Dis 4, 89–100.
11. Matsuzaki, H, Kikuchi, T, Kajita, Y, et al. (1999) Comparison of various phosphate salts as the dietary phosphorus source on nephrocalcinosis and kidney function in rats. J Nutr Sci Vitaminol 45, 595–608.
12. Finco, D, Barsanti, J & Brown, S (1989) Influence of dietary source of phosphorus on fecal and urinary excretion of phosphorus and other minerals by male cats. Am J Vet Res 50, 263–266.
13. Gutiérrez, OM (2013) Sodium- and phosphorus-based food additives: persistent but surmountable hurdles in the management of nutrition in chronic kidney disease. Adv Chronic Kidney Dis 20, 150–156.
14. Marks, J, Lee, GJ, Nadaraja, SP, et al. (2015) Experimental and regional variations in Na+-dependent and Na+-independent phosphate transport along the rat small intestine and colon. Physiol Rep 3, e12281.
15. Davies, M, Alborough, R, Jones, L, et al. (2017) Mineral analysis of complete dog and cat foods in the UK and compliance with European guidelines. Sci Rep 7, 17107.
16. German, AJ, Holden, SL, Moxham, GL, et al. (2006) A simple, reliable tool for owners to assess the body condition of their dog or cat. J Nutr 136, 2031S–2033S.
17. Rokey, GJ (1994) Petfood and fishfood extrusion. In The Technology of Extrusion Cooking, pp. 144–189 [ND Frame, editor]. New York: Chapman & Hall.
18. European Union (2009) Official Journal of the European Union, Regulation (EC) N ° 152 / (2009) of the commission of 27/01/2009. In Regulation (EC) No. 152, vol. 152. Brussels: European Union.
19. Robertson, WG, Jones, JS, Heaton, MA, et al. (2002) Predicting the crystallization potential of urine from cats and dogs with respect to calcium oxalate and magnesium ammonium phosphate (struvite). J Nutr 132, 1637S–1641S.
20. Geddes, R, Finch, N, Elliott, J, et al. (2013) Fibroblast growth factor 23 in feline chronic kidney disease. J Vet Intern Med 27, 234–241.
21. Williams, T, Elliott, J & Syme, H (2012) Calcium and phosphate homeostasis in hyperthyroid cats–associations with development of azotaemia and survival time. J Small Anim Pract 53, 561–571.
22. Aronov, PA, Hall, LM, Dettmer, K, et al. (2008) Metabolic profiling of major vitamin D metabolites using Diels–Alder derivatization and ultra-performance liquid chromatography–tandem mass spectrometry. Anal Bioanal Chem 391, 1917–1930.
23. Finch, NC, Syme, HM, Elliott, J, et al. (2011) Glomerular filtration rate estimation by use of a correction formula for slope–intercept plasma iohexol clearance in cats. Am J Vet Res 72, 1652–1659.
24. Von Kossa, J (1901) Über die im Organismus künstlich erzeugten Verkalkungen (About calcifications that are artificially produced in the organism). Beitr Anat 29, 163.
25. R Core Team (2017) R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.
26. Bates, D, Maechler, M, Bolker, B, et al. (2014) lme4: Linear Mixed-effects Models Using Eigen and S4. https://cran.r-project.org/web/packages/lme4/index.html
27. Hothorn, T, Bretz, F & Westfall, P (2008) Simultaneous inference in general parametric models. Biom J 50, 346–363.
28. Wickham, H (2009) ggplot2: Elegant Graphics for Data Analysis. New York, NY: Springer-Verlag.
29. Braun, J-P & Lefebvre, H (2008) Kidney function and damage. Clin Biochem Domest Anim 6, 485–528.
30. Monaghan, K, Nolan, B & Labato, M (2012) Feline acute kidney injury: 1. Pathophysiology, etiology and etiology-specific management considerations. J Feline Med Surg 14, 775–784.
31. Bucknell, D (2000) Feline hyperthyroidism: spectrum of clinical presentions and response to carbimazole therapy. Aust Vet J 78, 462–465.
32. Pechereau, D, Martel, P & Braun, J (1997) Plasma erythropoietin concentrations in dogs and cats: reference values and changes with anaemia and/or chronic renal failure. Res Vet Sci 62, 185–188.
33. Lees, GE, Brown, SA, Elliott, J, et al. (2005) Assessment and management of proteinuria in dogs and cats: 2004 ACVIM Forum Consensus Statement (small animal). J Vet Intern Med 19, 377–385.
34. Barber, P & Elliott, J (1998) Feline chronic renal failure: calcium homeostasis in 80 cases diagnosed between 1992 and 1995. J Small Anim Pract 39, 108–116.
35. Barber, P & Elliott, J (1996) Study of calcium homeostasis in feline hyperthyroidism. J Small Anim Pract 37, 575–582.
36. Demmel, A (2011) Der Einfluss der alimentären Phosphorversorgung auf ausgewählte Nierenfunktionsparameter bei Katzen (The influence of dietary phosphorus supply on selected kidney function parameters in cats). DVM Thesis, Ludwig-Maximilians-Universität, München.
37. Reynolds, B, Chetboul, V, Nguyen, P, et al. (2013) Effects of dietary salt intake on renal function: a 2-year study in healthy aged cats. J Vet Intern Med 27, 507–515.
38. Coltherd, JC, Staunton, RH, Colyer, A, et al. (2018) Not all forms of dietary phosphorus are equal: an evaluation of postprandial phosphorus concentrations in the plasma of the cat. Br J Nutr (In the Press).
39. Dobenecker, B, Webel, A, Hertel-Böhnke, P, et al. (2013) Effect of high phosphorus intake on renal parameters in cats is influenced by phosphorus concentration in urine. In Waltham International Nutritional Sciences Symposium, pp. 35, Portland, OR, 2 October 2013.
40. Hertel-Böhnke, PK & Dobenecker, B (2012) A high phosphorus diet with a Ca/P ratio of 0.9/1 affects renal parameters in healthy cats. Eur Soc Vet Comp Nutr Congr 63, 63–64.
41. Blaine, J, Chonchol, M & Levi, M (2014) Renal control of calcium, phosphate, and magnesium homeostasis. Clin J Am Soc Nephrol 10, 1257–1272.
42. Levi, M, Kempson, S, Lötscher, M, et al. (1996) Molecular regulation of renal phosphate transport. J Membr Biol 154, 1–9.
43. Razzaque, MS (2009) The FGF23–Klotho axis: endocrine regulation of phosphate homeostasis. Nat Rev Endocrinol 5, 611–619.
44. Finco, D, Brown, S, Barsanti, J, et al. (1997) Reliability of using random urine samples for "spot" determination of fractional excretion of electrolytes in cats. Am J Vet Res 58, 1184–1187.
45. Russo, E, Lees, G & Hightower, D (1986) Evaluation of renal function in cats, using quantitative urinalysis. Am J Vet Res 47, 1308–1312.
46. Lulich, J, Osborne, C, Polzin, D, et al. (1991) Urine metabolite values in fed and nonfed clinically normal beagles. Am J Vet Res 52, 1573–1578.
47. van den Broek, DH, Chang, YM, Elliott, J, et al. (2018) Prognostic importance of plasma total magnesium in a cohort of cats with azotemic chronic kidney disease. J Vet Intern Med 32, 1359–1371.
48. Rizzoli, R, Fleisch, H & Bonjour, J-P (1977) Role of 1,25-dihydroxyvitamin D3 on intestinal phosphate absorption in rats with a normal vitamin D supply. J Clin Invest 60, 639–647.
49. Lefebvre, HP, Dossin, O, Trumel, C, et al. (2008) Fractional excretion tests: a critical review of methods and applications in domestic animals. Vet Clin Pathol 37, 4–20.
50. Nagode, LA, Chew, DJ & Podell, M (1996) Benefits of calcitriol therapy and serum phosphorus control in dogs and cats with chronic renal failure: both are essential to prevent or suppress toxic hyperparathyroidism. Vet Clin North Am Small Anim Pract 26, 1293–1330.
51. Adler, AJ, Ferran, N & Berlyne, GM (1985) Effect of inorganic phosphate on serum ionized calcium concentration in vitro: a reassessment of the "trade-off hypothesis". Kidney Int 28, 932–935.
52. Ix, JH, Anderson, CAM, Smits, G, et al. (2014) Effect of dietary phosphate intake on the circadian rhythm of serum phosphate concentrations in chronic kidney disease: a crossover study. Am J Clin Nutr 100, 1392–1397.
53. Lamberg-Allardt, C, Karp, H & Kemi, V (2010) Phosphorus and bone. In Nutritional Influences on Bone Health, pp. 87–97 [P Burckhardt, B Dawson-Hughes and C Weaver, editors]. London: Springer-Verlag.
54. Ferrari, SL, Bonjour, J-P & Rizzoli, R (2005) Fibroblast growth factor-23 relationship to dietary phosphate and renal phosphate handling in healthy young men. J Clin Endocrinol Metab 90, 1519–1524.
55. O'Brien, CA, Jilka, RL, Fu, Q, et al. (2005) IL-6 is not required for parathyroid hormone stimulation of RANKL expression, osteoclast formation, and bone loss in mice. Am J Physiol Endocrinol Metab 289, E784–E793.
56. Bellido, T, Ali, AA, Plotkin, LI, et al. (2003) Proteasomal degradation of Runx2 shortens parathyroid hormone-induced anti-apoptotic signaling in osteoblasts; a putative explanation for why intermittent administration is needed for bone anabolism. J Biol Chem 278, 50259–50272.
57. DiBartola, SP (2011) Fluid, Electrolyte, and Acid–Base Disorders in Small Animal Practice. St Louis, MO: Elsevier Health Sciences.
58. Gutiérrez, OM (2010) Fibroblast growth factor 23 and disordered vitamin D metabolism in chronic kidney disease: updating the "trade-off" hypothesis. Clin J Am Soc Nephrol 5, 1710–1716.
59. Kemi, VE, Rita, HJ, Kärkkäinen, MU, et al. (2009) Habitual high phosphorus intakes and foods with phosphate additives negatively affect serum parathyroid hormone concentration: a cross-sectional study on healthy premenopausal women. Public Health Nutr 12, 1885–1892.
60. Heaney, RP & Recker, RR (1982) Effects of nitrogen, phosphorus, and caffeine on calcium balance in women. J Lab Clin Med 99, 46–55.
61. Klein, L, Lafferty, FW, Pearson, OH, et al. (1964) Correlation of urinary hydroxyproline, serum alkaline phosphatase and skeletal calcium turnover. Metabolism 13, 272–284.
62. Landau, D, Krymko, H, Shalev, H, et al. (2007) Transient severe metastatic calcification in acute renal failure. Pediatr Nephrol 22, 607–611.
63. McLeland, SM, Lunn, KF, Duncan, CG, et al. (2014) Relationship among serum creatinine, serum gastrin, calcium–phosphorus product, and uremic gastropathy in cats with chronic kidney disease. J Vet Intern Med 28, 827–837.
64. Lekcharoensuk, C, Osborne, CA, Lulich, JP, et al. (2001) Association between dietary factors and calcium oxalate and magnesium ammonium phosphate urolithiasis in cats. J Am Vet Med Assoc 219, 1228–1237.
65. Evason, M, Remillard, R & Bartges, J (2016) Understanding urinary relative supersaturation. Clinician's Brief, April 2016. https://www.cliniciansbrief.com/article/understanding-urinary-relative-supersaturation
66. Lulich, J, Berent, A, Adams, L, et al. (2016) ACVIM small animal consensus recommendations on the treatment and prevention of uroliths in dogs and cats. J Vet Intern Med 30, 1564–1574.
67. Osborne, CA, Lulich, JP, Kruger, JM, et al. (2009) Analysis of 451,891 canine uroliths, feline uroliths, and feline urethral plugs from 1981 to 2007: perspectives from the Minnesota Urolith Center. Vet Clin Small Anim Pract 39, 183–197.
68. McKerrell, R, Blakemore, W, Heath, M, et al. (1989) Primary hyperoxaluria (l-glyceric aciduria) in the cat: a newly recognised inherited disease. Vet Rec 125, 31–34.
69. Pak, CY, Poindexter, JR, Adams-Huet, B, et al. (2003) Predictive value of kidney stone composition in the detection of metabolic abnormalities. Am J Med 115, 26–32.
70. Krambeck, AE, Handa, SE, Evan, AP, et al. (2010) Brushite stone disease as a consequence of lithotripsy? Urol Res 38, 293–299.
71. Evan, AP, Lingeman, JE, Coe, FL, et al. (2005) Crystal-associated nephropathy in patients with brushite nephrolithiasis. Kidney Int 67, 576–591.
72. Hall, JA, Yerramilli, M, Obare, E, et al. (2017) Serum concentrations of symmetric dimethylarginine and creatinine in cats with kidney stones. PLOS ONE 12, e0174854.
73. Plantinga, E, Everts, H, Kastelein, A, et al. (2005) Retrospective study of the survival of cats with acquired chronic renal insufficiency offered different commercial diets. Vet Rec 157, 185–187.
74. Böswald, L, Kienzle, E & Dobenecker, B (2018) Observation about phosphorus and protein supply in cats and dogs prior to the diagnosis of chronic kidney disease. J Anim Physiol Anim Nutr 102, 31–36.
75. Laflamme, DP (2001) Determining metabolizable energy content in commercial pet foods. J Anim Physiol Anim Nutr 85, 222–230.
Specification Patterns for Robotic Missions
Pattern Catalog
Requirements Collection
Ordered Patrolling
Given a set of locations, the robot should repeatedly visit all areas following a given order.
This pattern requires a robot to keep visiting a set of locations, in some specified order, similarly to sequenced patrolling. However, given an order between two locations, e.g., 1 before 2, the robot is not allowed to visit 2 before 1.
LTL Template
$\mathcal{G} (\mathcal{F} (l_1 \wedge \mathcal{F}(l_2 \wedge \ldots \mathcal{F}(l_n)))) \wedge \overset{n-1}{\underset{i=1}{\bigwedge}} ((\neg l_{i+1})\, \mathcal{U}\, l_i) \wedge \overset{n}{\underset{i=1}{\bigwedge}} \mathcal{G} (l_{(i+1)\%n} \rightarrow \mathcal{X} ( (\neg l_{(i+1)\%n})\, \mathcal{U}\, l_{i}))$, where $l_1, l_2, \ldots$ are location propositions.
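For instance, instantiating the template for $n=3$ (reading the modulo arithmetic so that location indices stay in $\{1,\ldots,n\}$) gives:

$\mathcal{G}(\mathcal{F}(l_1 \wedge \mathcal{F}(l_2 \wedge \mathcal{F}(l_3)))) \wedge ((\neg l_2)\,\mathcal{U}\,l_1) \wedge ((\neg l_3)\,\mathcal{U}\,l_2) \wedge \mathcal{G}(l_2 \rightarrow \mathcal{X}((\neg l_2)\,\mathcal{U}\,l_1)) \wedge \mathcal{G}(l_3 \rightarrow \mathcal{X}((\neg l_3)\,\mathcal{U}\,l_2)) \wedge \mathcal{G}(l_1 \rightarrow \mathcal{X}((\neg l_1)\,\mathcal{U}\,l_3))$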
Note that the pattern is general and considers the case in which a robot can be in two locations at the same time. For example, a robot can simultaneously be in an area of a building indicated as l1 (e.g., area 01) and in a room of that area indicated as l2 (e.g., room 002). If the topological intersection of the considered locations is empty, then the robot cannot be in two locations at the same time and the transitions labeled with both l1 and l2 cannot be fired.
Examples and Known Uses
Locations $l_1$, $l_2$, and $l_3$ must be patrolled following the order $l_1$, $l_2$, and $l_3$. The trace $l_1\rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ violates the mission requirement since $l_3$ precedes $l_2$. The trace $l_1 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow l_4 \rightarrow l_3 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement.
The Ordered Patrolling pattern specializes Sequenced Patrolling by requiring locations to be visited in order.
Occurrences
An example of ordered patrolling can be obtained by combining the sequence patrolling formulation provided by Chen et al. with the absence pattern proposed by Dwyer et al.
Büchi Automaton representing accepting sequences of events
[Automaton figure not reproduced here.] Circled states are accepting states and states with an incoming arrow with no source are initial states. The automaton is deterministic.
CTL Template
$\forall \mathcal{G} (\forall \mathcal{F} (l_1 \wedge \forall \mathcal{F}(l_2 \wedge \ldots \forall \mathcal{F}(l_n)))) \wedge \overset{n-1}{\underset{i=1}{\bigwedge}} \forall ((\neg l_{i+1})\, \mathcal{U}\, l_i) \wedge \overset{n}{\underset{i=1}{\bigwedge}} \forall \mathcal{G} (l_{(i+1)\%n} \rightarrow \forall \mathcal{X} ( \forall ((\neg l_{(i+1)\%n})\, \mathcal{U}\, l_{i})))$
Tagged: surveillance
Claudio Menghi, Christos Tsigkanos, Thorsten Berger, Patrizio Pelliccione and Carlo Ghezzi © 2018
Trapezium Rule – approximating integrals
StudyWell > Numerical Methods in A-Level Mathematics > Trapezium Rule – approximating integrals
Throughout this page we use methods such as the trapezium rule to approximate the area beneath a curve. This serves as a numerical alternative to evaluating definite integrals exactly, and we will see below that the approximations can be overestimates or underestimates. Recall that the expression $\int_a^b f(x)dx$ finds the area between the curve of $f(x)$, the $x$-axis and the limits $x=a$ and $x=b$ (see definite integrals). Up until now, we have found integrals using integration, that is, we have found the area analytically (using algebra etc.). However, if we cannot integrate $f(x)$ like this we must adopt a numerical method. For example, we could fit rectangles over the area and sum their individual areas.
Approximating Integrals
In this diagram, we use 4 rectangles. Each of these rectangles has a width of $\Delta x$ (meaning a change in $x$) which we can find from $\frac{b-a}{4}$ when the full width is $b-a$ and there are 4 rectangles spanning it. The height of each rectangle is given by the $y$-coordinate of the point on the curve at the centre of the top of that rectangle. Hence, the area of the $i$th rectangle is $f(x_i)\Delta x$. It follows that we can approximate the area in this diagram as $\sum_{i=1}^4 f(x_i)\Delta x$ (see sigma notation). The more rectangles we use, the better the approximation to the integral. In general, if the number of rectangles we use is $n$ then:
$\int_a^b f(x)dx\approx\sum_{i=1}^nf(x_i)\Delta x$
In which case, $\Delta x=\frac{b-a}{n}$ and can be thought of as the discrete version of $dx$. Similarly, we can think of $dx$ as the continuous infinitesimal version of $\Delta x$ and $\int$ as the continuous infinitesimal version of $\sum$. So, for large finite $n$, we can say $\int_a^bf(x)dx\approx \sum_{i=1}^n f(x_i)\Delta x$. However, if we take $n\rightarrow\infty$, then we can say $ \int_a^b f(x) dx=\lim_{n\rightarrow\infty} \sum_{i=1}^n f(x_i)\Delta x$ where $\Delta x=\frac{b-a}{n}$.
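As a minimal illustration (a Python sketch of our own; the function, interval and names here are not part of the page's material), the rectangle sum can be computed directly:

def rectangle_rule(f, a, b, n):
    # Approximate the integral of f over [a, b] with n midpoint rectangles.
    dx = (b - a) / n                 # width of each rectangle
    # x_i is the midpoint of the i-th strip, f(x_i) its height
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Example: integral of x^2 over [0, 1] is exactly 1/3
print(rectangle_rule(lambda x: x * x, 0, 1, 4))     # 0.328125
print(rectangle_rule(lambda x: x * x, 0, 1, 1000))  # ~0.33333, closer with more rectangles

Notice how increasing $n$ moves the answer towards the exact value, as the limit argument above predicts.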
Trapezium Rule
In this diagram, instead of fitting 4 rectangles, we use 4 trapezia to approximate the area. In general, this is a more accurate approximation to the area. Conventionally, we use $h=\frac{b-a}{4}$ (instead of $\Delta x$) to describe the width of each trapezium. We also use $y$-coordinates to describe heights instead of the function notation. Notice that we require 5 points instead of 4 since we need the lengths of the parallel sides when computing the area of a trapezium. The area of each trapezium is 'half the sum of the parallel sides multiplied by the distance between them'. Hence, the area of the second trapezium here, for example, is $\frac{(y_1+y_2)h}{2}$. It follows that we can approximate the full area with the trapezium rule as:
$\int_a^b y \,dx\approx \frac{(y_0+y_1)h}{2}+\frac{(y_1+y_2)h}{2}+\frac{(y_2+y_3)h}{2}+\frac{(y_3+y_4)h}{2}=\frac{h}{2}\left\lbrace y_0+2\left(y_1+y_2+y_3\right)+y_4\right\rbrace$
where $h=\frac{b-a}{4}$. Note that, in general, the trapezium rule for $n$ trapezia is given by
$\int_a^b y \,dx\approx \frac{(y_0+y_1)h}{2}+\frac{(y_1+y_2)h}{2}+…+\frac{(y_{n-1}+y_n)h}{2}=\frac{1}{2}h\left\lbrace y_0+y_n+2\left(y_1+y_2+…+y_{n-1}\right)\right\rbrace$
where $h=\frac{b-a}{n}$. This formula is given in the Edexcel Formula Booklet. See Examples 1 and 2 for a demonstration of the trapezium rule in use.
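As a quick sanity check, the general formula can be implemented in a few lines (a Python sketch of our own; the function here is taken from Example 2 below):

import math

def trapezium_rule(f, a, b, n):
    # Approximate the integral of f over [a, b] with n trapezia.
    h = (b - a) / n                            # width of each trapezium
    ys = [f(a + i * h) for i in range(n + 1)]  # y_0, y_1, ..., y_n
    return 0.5 * h * (ys[0] + ys[-1] + 2 * sum(ys[1:-1]))

# f(x) = 5*sqrt(x) + 3 on [0, 1] with 5 trapezia (Example 2 below)
print(trapezium_rule(lambda x: 5 * math.sqrt(x) + 3, 0, 1, 5))
# ~6.2487; the worked example's 6.2486 uses table values rounded to 3 d.p.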
Overestimate or Underestimate?
The trapezium rule will only ever give the exact area if the graph has the same shape as the trapezia, that is, if it is a straight line. In general, the trapezium rule will give an overestimate or an underestimate. Which one of these it is will depend on the shape of the curve. Recall the difference between a concave and a convex function. The trapezium rule applied to parts of the graph where it is concave will give an underestimate. Conversely, the trapezium rule applied to parts of the graph where it is convex will give an overestimate.
It is harder to tell what kind of estimate we get using rectangles instead. This is because the rectangles don't tend to follow the line of the graph as closely as the trapezia do.
Examples using the Trapezium Rule
The following shows the graph of the function $f(x)=2-\cos^2(x)$ and the dashed line $x=1$:
Complete the following table of values, giving 3 decimal places where appropriate:
x 0 0.2 0.4 0.6 0.8 1
f(x) 1 2.416
Using the trapezium rule with 5 trapezia, estimate the area bounded by the curve, the $x$-axis, the $y$-axis and the line $x=1$.
Is the estimate an overestimate or an underestimate? Explain why increasing the number of trapezia improves the approximation.
f(x) 1 1.003 1.051 1.248 1.713 2.416
For 5 trapezia, the formula is: $\frac{1}{2}h\left\lbrace y_0+y_5+2\left(y_1+y_2+y_3+y_4\right)\right\rbrace$. Setting the trapezium width $h=0.2$ and using the $y$-values from the table above we have: $0.1(1+2.416+2(1.003+1.051+1.248+1.713))$. Hence, the area is approximately 1.3446 square units.
On this interval, the curve is mostly convex and so the estimate is likely to be an overestimate. Increasing the number of trapezia will improve the estimate as the closer you zoom, the more the curve begins to look like a straight line. In this case, the top of each trapezium will match more closely to the curve.
The following shows the graph of $y=5\sqrt{x}+3$ and the dashed line $x=1$:
Use the trapezium rule with 5 trapezia to approximate the area between the curve, the $x$-axis, the $y$-axis and the line $x=1$. Is this approximation an overestimate or an underestimate?
Use integration to find the exact area.
What is the percentage error in the approximation?
We can organise the $x$ and $y$ values in a table:
x 0 0.2 0.4 0.6 0.8 1

f(x) 3 5.236 6.162 6.873 7.472 8
For 5 trapezia, the formula is: $\frac{1}{2}h\left\lbrace y_0+y_5+2\left(y_1+y_2+y_3+y_4\right)\right\rbrace$. Setting the trapezium width $h=0.2$ and using the $y$-values from the table above we have: $0.1(3+8+2(5.236+6.162+6.873+7.472))$. Hence, the area is approximately 6.2486 square units. The curve is concave and so this will be an underestimate.
We can find the exact area by integrating: $\begin{array}{rcl}\int_0^1\left(5\sqrt{x}+3\right)dx&=&\int_0^1\left(5x^{\frac{1}{2}}+3\right)dx\\&=&\left[\frac{5}{3/2}x^{\frac{3}{2}}+3x\right]_0^1\\&=&\left[\frac{10}{3}x^{\frac{3}{2}}+3x\right]_0^1\\&=&\left(\frac{10}{3}(1)^{\frac{3}{2}}+3(1)\right)-\left(\frac{10}{3}(0)^{\frac{3}{2}}+3(0)\right)\\&=&\frac{19}{3}\end{array}$. See more on definite integrals. Note that $\frac{19}{3}=6.3333…$ and so the approximation from part 1 is a good estimate.
The percentage error is given by $\frac{6.2486-\frac{19}{3}}{\frac{19}{3}}\times 100=-1.34\%$ to 2 decimal places.
Introduction to Data Abstraction
In section 1.1.8, we noted that a procedure used as an element in creating a more complex procedure could be regarded not only as a collection of particular operations but also as a procedural abstraction. That is, the details of how the procedure was implemented could be suppressed, and the particular procedure itself could be replaced by any other procedure with the same overall behavior. In other words, we could make an abstraction that would separate the way the procedure would be used from the details of how the procedure would be implemented in terms of more primitive procedures. The analogous notion for compound data is called data abstraction. Data abstraction is a methodology that enables us to isolate how a compound data object is used from the details of how it is constructed from more primitive data objects.
The basic idea of data abstraction is to structure the programs that are to use compound data objects so that they operate on "abstract data." That is, our programs should use data in such a way as to make no assumptions about the data that are not strictly necessary for performing the task at hand. At the same time, a "concrete" data representation is defined independent of the programs that use the data. The interface between these two parts of our system will be a set of procedures, called selectors and constructors, that implement the abstract data in terms of the concrete representation. To illustrate this technique, we will consider how to design a set of procedures for manipulating rational numbers.
2.1.1 Example: Arithmetic Operations for Rational Numbers
Suppose we want to do arithmetic with rational numbers. We want to be able to add, subtract, multiply, and divide them and to test whether two rational numbers are equal.
Let us begin by assuming that we already have a way of constructing a rational number from a numerator and a denominator. We also assume that, given a rational number, we have a way of extracting (or selecting) its numerator and its denominator. Let us further assume that the constructor and selectors are available as procedures:
(make-rat n d) returns the rational number whose numerator is the integer n and whose denominator is the integer d.
(numer x) returns the numerator of the rational number x.
(denom x) returns the denominator of the rational number x.
We are using here a powerful strategy of synthesis: wishful thinking. We haven't yet said how a rational number is represented, or how the procedures numer, denom, and make-rat should be implemented. Even so, if we did have these three procedures, we could then add, subtract, multiply, divide, and test equality by using the following relations: $$ \frac{n_1}{d_1} + \frac{n_2}{d_2} = \frac{n_1 d_2 + n_2 d_1}{d_1 d_2} $$ $$ \frac{n_1}{d_1} - \frac{n_2}{d_2} = \frac{n_1 d_2 - n_2 d_1}{d_1 d_2} $$ $$ \frac{n_1}{d_1} \cdot \frac{n_2}{d_2} = \frac{n_1 n_2}{d_1 d_2} $$ $$ \frac{n_1/d_1}{n_2/d_2} = \frac{n_1 d_2}{d_1 n_2} $$ $$ \frac{n_1}{d_1} = \frac{n_2}{d_2} \quad \text{if and only if} \quad n_1 d_2 = n_2 d_1 $$
We can express these rules as procedures:
(define (add-rat x y)
  (make-rat (+ (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))

(define (sub-rat x y)
  (make-rat (- (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))

(define (mul-rat x y)
  (make-rat (* (numer x) (numer y))
            (* (denom x) (denom y))))

(define (div-rat x y)
  (make-rat (* (numer x) (denom y))
            (* (denom x) (numer y))))

(define (equal-rat? x y)
  (= (* (numer x) (denom y))
     (* (numer y) (denom x))))
Now we have the operations on rational numbers defined in terms of the selector and constructor procedures numer, denom, and make-rat. But we haven't yet defined these. What we need is some way to glue together a numerator and a denominator to form a rational number.
To enable us to implement the concrete level of our data abstraction, our language provides a compound structure called a pair, which can be constructed with the primitive procedure cons. This procedure takes two arguments and returns a compound data object that contains the two arguments as parts. Given a pair, we can extract the parts using the primitive procedures car and cdr. Thus, we can use cons, car, and cdr as follows:
(define x (cons 1 2))
(car x)
1

(cdr x)
2
Notice that a pair is a data object that can be given a name and manipulated, just like a primitive data object. Moreover, cons can be used to form pairs whose elements are pairs, and so on:
(define x (cons 1 2))
(define y (cons 3 4))
(define z (cons x y))
(car (car z))
1

(car (cdr z))
3
In section 2.2 we will see how this ability to combine pairs means that pairs can be used as general-purpose building blocks to create all sorts of complex data structures. The single compound-data primitive pair, implemented by the procedures cons, car, and cdr, is the only glue we need. Data objects constructed from pairs are called list-structured data.
Representing rational numbers
Pairs offer a natural way to complete the rational-number system. Simply represent a rational number as a pair of two integers: a numerator and a denominator. Then make-rat, numer, and denom are readily implemented as follows:
(define (make-rat n d) (cons n d))
(define (numer x) (car x))
(define (denom x) (cdr x))
Also, in order to display the results of our computations, we can print rational numbers by printing the numerator, a slash, and the denominator:
(define (print-rat x)
  (display (numer x))
  (display '/)
  (display (denom x))
  (newline))
Now we can try our rational-number procedures:
(define one-half (make-rat 1 2))
(print-rat one-half)
1/2
(define one-third (make-rat 1 3))
(print-rat (add-rat one-half one-third))
5/6

(print-rat (mul-rat one-half one-third))
1/6

(print-rat (add-rat one-third one-third))
6/9
As the final example shows, our rational-number implementation does not reduce rational numbers to lowest terms. We can remedy this by changing make-rat. If we have a gcd procedure like the one in section 1.2.5 that produces the greatest common divisor of two integers, we can use gcd to reduce the numerator and the denominator to lowest terms before constructing the pair:
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))

(define (make-rat n d)
  (let ((g (gcd n d)))
    (cons (/ n g) (/ d g))))
Now we have:

(print-rat (add-rat one-third one-third))
2/3

as desired. This modification was accomplished by changing the constructor make-rat without changing any of the procedures (such as add-rat and mul-rat) that implement the actual operations.
Exercise 2.1. Define a better version of make-rat that handles both positive and negative arguments. Make-rat should normalize the sign so that if the rational number is positive, both the numerator and denominator are positive, and if the rational number is negative, only the numerator is negative.
2.1.2 Abstraction Barriers
Before continuing with more examples of compound data and data abstraction, let us consider some of the issues raised by the rational-number example. We defined the rational-number operations in terms of a constructor make-rat and selectors numer and denom. In general, the underlying idea of data abstraction is to identify for each type of data object a basic set of operations in terms of which all manipulations of data objects of that type will be expressed, and then to use only those operations in manipulating the data.
We can envision the structure of the rational-number system as shown in Figure 2-1. The horizontal lines represent abstraction barriers that isolate different "levels" of the system. At each level, the barrier separates the programs (above) that use the data abstraction from the programs (below) that implement the data abstraction. Programs that use rational numbers manipulate them solely in terms of the procedures supplied "for public use" by the rational-number package: add-rat, sub-rat, mul-rat, div-rat, and equal-rat?. These, in turn, are implemented solely in terms of the constructor and selectors make-rat, numer, and denom, which themselves are implemented in terms of pairs. The details of how pairs are implemented are irrelevant to the rest of the rational-number package so long as pairs can be manipulated by the use of cons, car, and cdr. In effect, procedures at each level are the interfaces that define the abstraction barriers and connect the different levels.
Figure 2.1: Data-abstraction barriers in the rational-number package.
This simple idea has many advantages. One advantage is that it makes programs much easier to maintain and to modify. Any complex data structure can be represented in a variety of ways with the primitive data structures provided by a programming language. Of course, the choice of representation influences the programs that operate on it; thus, if the representation were to be changed at some later time, all such programs might have to be modified accordingly. This task could be time-consuming and expensive in the case of large programs unless the dependence on the representation were to be confined by design to a very few program modules.
For example, an alternate way to address the problem of reducing rational numbers to lowest terms is to perform the reduction whenever we access the parts of a rational number, rather than when we construct it. This leads to different constructor and selector procedures:
(define (make-rat n d)
  (cons n d))

(define (numer x)
  (let ((g (gcd (car x) (cdr x))))
    (/ (car x) g)))

(define (denom x)
  (let ((g (gcd (car x) (cdr x))))
    (/ (cdr x) g)))
The difference between this implementation and the previous one lies in when we compute the gcd. If in our typical use of rational numbers we access the numerators and denominators of the same rational numbers many times, it would be preferable to compute the gcd when the rational numbers are constructed. If not, we may be better off waiting until access time to compute the gcd. In any case, when we change from one representation to the other, the procedures add-rat, sub-rat, and so on do not have to be modified at all.
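As a quick check (a sketch, assuming the access-time-gcd definitions just given), the stored pair remains unreduced; reduction happens inside the selectors, and the higher-level procedures behave exactly as before:

(define r (make-rat 6 9))

(numer r)
2

(denom r)
3

(print-rat (add-rat r r))
4/3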
Constraining the dependence on the representation to a few interface procedures helps us design programs as well as modify them, because it allows us to maintain the flexibility to consider alternate implementations. To continue with our simple example, suppose we are designing a rational-number package and we can't decide initially whether to perform the gcd at construction time or at selection time. The data-abstraction methodology gives us a way to defer that decision without losing the ability to make progress on the rest of the system.
Exercise 2.2. Consider the problem of representing line segments in a plane. Each segment is represented as a pair of points: a starting point and an ending point. Define a constructor make-segment and selectors start-segment and end-segment that define the representation of segments in terms of points. Furthermore, a point can be represented as a pair of numbers: the x coordinate and the y coordinate. Accordingly, specify a constructor make-point and selectors x-point and y-point that define this representation. Finally, using your selectors and constructors, define a procedure midpoint-segment that takes a line segment as argument and returns its midpoint (the point whose coordinates are the average of the coordinates of the endpoints). To try your procedures, you'll need a way to print points:
(define (print-point p)
  (display "(")
  (display (x-point p))
  (display ",")
  (display (y-point p))
  (display ")")
  (newline))
Exercise 2.3. Implement a representation for rectangles in a plane. (Hint: You may want to make use of Exercise 2-2.) In terms of your constructors and selectors, create procedures that compute the perimeter and the area of a given rectangle. Now implement a different representation for rectangles. Can you design your system with suitable abstraction barriers, so that the same perimeter and area procedures will work using either representation?
2.1.3 What Is Meant by Data?
We began the rational-number implementation in section 2.1.1 by implementing the rational-number operations add-rat, sub-rat, and so on in terms of three unspecified procedures: make-rat, numer, and denom. At that point, we could think of the operations as being defined in terms of data objects---numerators, denominators, and rational numbers---whose behavior was specified by the latter three procedures.
But exactly what is meant by data? It is not enough to say "whatever is implemented by the given selectors and constructors." Clearly, not every arbitrary set of three procedures can serve as an appropriate basis for the rational-number implementation. We need to guarantee that, if we construct a rational number x from a pair of integers n and d, then extracting the numer and the denom of x and dividing them should yield the same result as dividing n by d. In other words, make-rat, numer, and denom must satisfy the condition that, for any integer n and any non-zero integer d, if x is (make-rat n d), then $$ \frac{\text{(numer x)}}{\text{(denom x)}} = \frac{n}{d} $$
In fact, this is the only condition make-rat, numer, and denom must fulfill in order to form a suitable basis for a rational-number representation. In general, we can think of data as defined by some collection of selectors and constructors, together with specified conditions that these procedures must fulfill in order to be a valid representation. 5
This point of view can serve to define not only "high-level" data objects, such as rational numbers, but lower-level objects as well. Consider the notion of a pair, which we used in order to define our rational numbers. We never actually said what a pair was, only that the language supplied procedures cons, car, and cdr for operating on pairs. But the only thing we need to know about these three operations is that if we glue two objects together using cons we can retrieve the objects using car and cdr. That is, the operations satisfy the condition that, for any objects x and y, if z is (cons x y) then (car z) is x and (cdr z) is y. Indeed, we mentioned that these three procedures are included as primitives in our language. However, any triple of procedures that satisfies the above condition can be used as the basis for implementing pairs. This point is illustrated strikingly by the fact that we could implement cons, car, and cdr without using any data structures at all but only using procedures. Here are the definitions:
(define (cons x y)
  (define (dispatch m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 -- CONS" m))))
  dispatch)

(define (car z) (z 0))

(define (cdr z) (z 1))
This use of procedures corresponds to nothing like our intuitive notion of what data should be. Nevertheless, all we need to do to show that this is a valid way to represent pairs is to verify that these procedures satisfy the condition given above.
The subtle point to notice is that the value returned by (cons x y) is a procedure---namely the internally defined procedure dispatch, which takes one argument and returns either x or y depending on whether the argument is 0 or 1. Correspondingly, (car z) is defined to apply z to 0. Hence, if z is the procedure formed by (cons x y), then z applied to 0 will yield x. Thus, we have shown that (car (cons x y)) yields x, as desired. Similarly, (cdr (cons x y)) applies the procedure returned by (cons x y) to 1, which returns y. Therefore, this procedural implementation of pairs is a valid implementation, and if we access pairs using only cons, car, and cdr we cannot distinguish this implementation from one that uses "real" data structures.
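To see this verification in slow motion, here is a substitution-model trace of one concrete case (a sketch; the general argument is identical with 3 and 4 replaced by arbitrary objects):

; (car (cons 3 4))
; => (car dispatch)           ; (cons 3 4) returns dispatch, which closes over x = 3, y = 4
; => (dispatch 0)             ; car applies its argument to 0
; => (cond ((= 0 0) 3) ...)   ; dispatch compares its argument with 0
; => 3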
The point of exhibiting the procedural representation of pairs is not that our language works this way (Scheme, and Lisp systems in general, implement pairs directly, for efficiency reasons) but that it could work this way. The procedural representation, although obscure, is a perfectly adequate way to represent pairs, since it fulfills the only conditions that pairs need to fulfill. This example also demonstrates that the ability to manipulate procedures as objects automatically provides the ability to represent compound data. This may seem a curiosity now, but procedural representations of data will play a central role in our programming repertoire. This style of programming is often called message passing, and we will be using it as a basic tool in Chapter 3 when we address the issues of modeling and simulation.
Exercise 2.4. Here is an alternative procedural representation of pairs. For this representation, verify that (car (cons x y)) yields x for any objects x and y.
(define (cons x y)
  (lambda (m) (m x y)))

(define (car z)
  (z (lambda (p q) p)))
What is the corresponding definition of cdr?
(define (cdr z) 'your-answer-here)
Hint: To verify that this works, make use of the substitution model of section 1.1.5.
Exercise 2.5. Show that we can represent pairs of nonnegative integers using only numbers and arithmetic operations if we represent the pair $a$ and $b$ as the integer that is the product $2^a 3^b$. Give the corresponding definitions of the procedures cons, car, and cdr.
(define (cons a b) 'your-answer-here)
Exercise 2.6. In case representing pairs as procedures wasn't mind-boggling enough, consider that, in a language that can manipulate procedures, we can get by without numbers (at least insofar as nonnegative integers are concerned) by implementing $0$ and the operation of adding $1$ as
(define zero (lambda (f) (lambda (x) x)))

(define (add-1 n)
  (lambda (f) (lambda (x) (f ((n f) x)))))
This representation is known as Church numerals, after its inventor, Alonzo Church, the logician who invented the lambda calculus.
Define one and two directly (not in terms of zero and add-1). (Hint: Use substitution to evaluate (add-1 zero)). Give a direct definition of the addition procedure + (not in terms of repeated application of add-1).
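Before attempting the exercise, it may help to confirm the behavior of zero itself (a sketch, assuming the definition of zero above; the helper inc is introduced here and is not part of the exercise). Since zero ignores f and returns x unchanged, applying (zero inc) to any number gives that number back:

(define (inc n) (+ n 1))

((zero inc) 5)
5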
2.1.4 Extended Exercise: Interval Arithmetic
Alyssa P. Hacker is designing a system to help people solve engineering problems. One feature she wants to provide in her system is the ability to manipulate inexact quantities (such as measured parameters of physical devices) with known precision, so that when computations are done with such approximate quantities the results will be numbers of known precision.
Electrical engineers will be using Alyssa's system to compute electrical quantities. It is sometimes necessary for them to compute the value of a parallel equivalent resistance $R_p$ of two resistors $R_1$ and $R_2$ using the formula $$ R_p = \frac{1}{\frac{1}{R_1} + \frac{1}{R_2}} $$
Resistance values are usually known only up to some tolerance guaranteed by the manufacturer of the resistor. For example, if you buy a resistor labeled "6.8 ohms with 10% tolerance" you can only be sure that the resistor has a resistance between 6.8 - 0.68 = 6.12 and 6.8 + 0.68 = 7.48 ohms. Thus, if you have a 6.8-ohm 10% resistor in parallel with a 4.7-ohm 5% resistor, the resistance of the combination can range from about 2.58 ohms (if the two resistors are at the lower bounds) to about 2.97 ohms (if the two resistors are at the upper bounds).
Alyssa's idea is to implement "interval arithmetic" as a set of arithmetic operations for combining "intervals" (objects that represent the range of possible values of an inexact quantity). The result of adding, subtracting, multiplying, or dividing two intervals is itself an interval, representing the range of the result.
Alyssa postulates the existence of an abstract object called an "interval" that has two endpoints: a lower bound and an upper bound. She also presumes that, given the endpoints of an interval, she can construct the interval using the data constructor make-interval. Alyssa first writes a procedure for adding two intervals. She reasons that the minimum value the sum could be is the sum of the two lower bounds and the maximum value it could be is the sum of the two upper bounds:
(define (add-interval x y)
  (make-interval (+ (lower-bound x) (lower-bound y))
                 (+ (upper-bound x) (upper-bound y))))
Alyssa also works out the product of two intervals by finding the minimum and the maximum of the products of the bounds and using them as the bounds of the resulting interval. (Min and max are primitives that find the minimum or maximum of any number of arguments.)
(define (mul-interval x y)
  (let ((p1 (* (lower-bound x) (lower-bound y)))
        (p2 (* (lower-bound x) (upper-bound y)))
        (p3 (* (upper-bound x) (lower-bound y)))
        (p4 (* (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))
To divide two intervals, Alyssa multiplies the first by the reciprocal of the second. Note that the bounds of the reciprocal interval are the reciprocal of the upper bound and the reciprocal of the lower bound, in that order.
(define (div-interval x y)
  (mul-interval x
                (make-interval (/ 1.0 (upper-bound y))
                               (/ 1.0 (lower-bound y)))))
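With a concrete representation in hand, Alyssa's operations reproduce the parallel-resistor range quoted earlier. The following is a sketch that anticipates Exercise 2.7 by representing an interval as a pair; taking lower-bound as car and upper-bound as cdr is one possible choice, not the only one:

(define (make-interval a b) (cons a b))
(define (lower-bound i) (car i))
(define (upper-bound i) (cdr i))

(define r1 (make-interval 6.12 7.48))    ; 6.8 ohms with 10% tolerance
(define r2 (make-interval 4.465 4.935))  ; 4.7 ohms with 5% tolerance
(define one (make-interval 1 1))

(div-interval one
              (add-interval (div-interval one r1)
                            (div-interval one r2)))

The result is an interval of roughly 2.58 to 2.97 ohms, matching the range computed by hand above.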
Exercise 2.7: Alyssa's program is incomplete because she has not specified the implementation of the interval abstraction. Here is a definition of the interval constructor:
(define (make-interval a b) (cons a b))
Define selectors upper-bound and lower-bound to complete the implementation.
(define (lower-bound interval) 'your-answer-here) (define (upper-bound interval) 'your-answer-here)
Exercise 2.8: Using reasoning analogous to Alyssa's, describe how the difference of two intervals may be computed. Define a corresponding subtraction procedure, called sub-interval.
(define (sub-interval x y) 'your-answer-here)
Exercise 2.9: The width of an interval is half of the difference between its upper and lower bounds. The width is a measure of the uncertainty of the number specified by the interval. For some arithmetic operations the width of the result of combining two intervals is a function only of the widths of the argument intervals, whereas for others the width of the combination is not a function of the widths of the argument intervals. Show that the width of the sum (or difference) of two intervals is a function only of the widths of the intervals being added (or subtracted). Give examples to show that this is not true for multiplication or division.
Exercise 2.10: Ben Bitdiddle, an expert systems programmer, looks over Alyssa's shoulder and comments that it is not clear what it means to divide by an interval that spans (includes) zero. Modify Alyssa's code to check for this condition and to signal an error if it occurs.
Exercise 2.11: In passing, Ben also cryptically comments: "By testing the signs of the endpoints of the intervals, it is possible to break mul-interval into nine cases, only one of which requires more than two multiplications." Rewrite this procedure using Ben's suggestion.
After debugging her program, Alyssa shows it to a potential user, who complains that her program solves the wrong problem. He wants a program that can deal with numbers represented as a center value and an additive tolerance; for example, he wants to work with intervals such as 3.5 +/- 0.15 rather than [3.35, 3.65]. Alyssa returns to her desk and fixes this problem by supplying an alternate constructor and alternate selectors:
(define (make-center-width c w)
  (make-interval (- c w) (+ c w)))

(define (center i)
  (/ (+ (lower-bound i) (upper-bound i)) 2))

(define (width i)
  (/ (- (upper-bound i) (lower-bound i)) 2))
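For instance (a sketch assuming the definitions above, with values shown up to floating-point rounding), the user's example interval 3.5 +/- 0.15 round-trips through the new constructor and selectors:

(define i (make-center-width 3.5 0.15))

(center i)
3.5

(width i)
.15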
Unfortunately, most of Alyssa's users are engineers. Real engineering situations usually involve measurements with only a small uncertainty, measured as the ratio of the width of the interval to the midpoint of the interval. Engineers usually specify percentage tolerances on the parameters of devices, as in the resistor specifications given earlier.
Exercise 2.12: Define a constructor make-center-percent that takes a center and a percentage tolerance and produces the desired interval. You must also define a selector percent that produces the percentage tolerance for a given interval. The center selector is the same as the one shown above.
(define (make-center-percent center percent) 'your-answer-here) (define (percent interval) 'your-answer-here)
Exercise 2.13: Show that under the assumption of small percentage tolerances there is a simple formula for the approximate percentage tolerance of the product of two intervals in terms of the tolerances of the factors. You may simplify the problem by assuming that all numbers are positive.
After considerable work, Alyssa P. Hacker delivers her finished system. Several years later, after she has forgotten all about it, she gets a frenzied call from an irate user, Lem E. Tweakit. It seems that Lem has noticed that the formula for parallel resistors can be written in two algebraically equivalent ways: $$ \frac{R_1 R_2}{R_1 + R_2} $$
and $$ \frac{1}{\frac{1}{R_1} + \frac{1}{R_2}} $$
He has written the following two programs, each of which computes the parallel-resistors formula differently:
(define (par1 r1 r2)
  (div-interval (mul-interval r1 r2)
                (add-interval r1 r2)))

(define (par2 r1 r2)
  (let ((one (make-interval 1 1)))
    (div-interval one
                  (add-interval (div-interval one r1)
                                (div-interval one r2)))))
Lem complains that Alyssa's program gives different answers for the two ways of computing. This is a serious complaint.
Exercise 2.14: Demonstrate that Lem is right. Investigate the behavior of the system on a variety of arithmetic expressions. Make some intervals A and B, and use them in computing the expressions A/A and A/B. You will get the most insight by using intervals whose width is a small percentage of the center value. Examine the results of the computation in center-percent form (see Exercise 2-12).
Exercise 2.15: Eva Lu Ator, another user, has also noticed the different intervals computed by different but algebraically equivalent expressions. She says that a formula to compute with intervals using Alyssa's system will produce tighter error bounds if it can be written in such a form that no variable that represents an uncertain number is repeated. Thus, she says, par2 is a "better" program for parallel resistances than par1. Is she right? Why?
Exercise 2.16: Explain, in general, why equivalent algebraic expressions may lead to different answers. Can you devise an interval-arithmetic package that does not have this shortcoming, or is this task impossible? (Warning: This problem is very difficult.)
2 The name cons stands for "construct." The names car and cdr derive from the original implementation of Lisp on the IBM 704. That machine had an addressing scheme that allowed one to reference the "address" and "decrement" parts of a memory location. Car stands for "Contents of Address part of Register" and cdr (pronounced "could-er") stands for "Contents of Decrement part of Register."
3 Another way to define the selectors and constructor is
(define make-rat cons)
(define numer car)
(define denom cdr)
The first definition associates the name make-rat with the value of the expression cons, which is the primitive procedure that constructs pairs. Thus make-rat and cons are names for the same primitive constructor.
Defining selectors and constructors in this way is efficient: Instead of make-rat calling cons, make-rat is cons, so there is only one procedure called, not two, when make-rat is called. On the other hand, doing this defeats debugging aids that trace procedure calls or put breakpoints on procedure calls: You may want to watch make-rat being called, but you certainly don't want to watch every call to cons. We have chosen not to use this style of definition in this book.
4 display is the Scheme primitive for printing data. The Scheme primitive newline starts a new line for printing. Neither of these procedures returns a useful value, so in the uses of print-rat below, we show only what print-rat prints, not what the interpreter prints as the value returned by print-rat.
5 Surprisingly, this idea is very difficult to formulate rigorously. There are two approaches to giving such a formulation. One, pioneered by C. A. R. Hoare (1972), is known as the method of abstract models. It formalizes the "procedures plus conditions" specification as outlined in the rational-number example above. Note that the condition on the rational-number representation was stated in terms of facts about integers (equality and division). In general, abstract models define new kinds of data objects in terms of previously defined types of data objects. Assertions about data objects can therefore be checked by reducing them to assertions about previously defined data objects. Another approach, introduced by Zilles at MIT, by Goguen, Thatcher, Wagner, and Wright at IBM (see Thatcher, Wagner, and Wright 1978), and by Guttag at Toronto (see Guttag 1977), is called algebraic specification. It regards the "procedures" as elements of an abstract algebraic system whose behavior is specified by axioms that correspond to our "conditions," and uses the techniques of abstract algebra to check assertions about data objects. Both methods are surveyed in the paper by Liskov and Zilles (1975).
Based on Structure and Interpretation of Computer Programs, a work at https://mitpress.mit.edu/sicp/.
Journal of the Korean Mathematical Society (대한수학회지)
Korean Mathematical Society (대한수학회)
This journal endeavors to publish significant research of broad interests in pure and applied mathematics. One volume is published each year, and each volume consists of six issues (January, March, May, July, September, November).
http://jkms.kms.or.kr/submission
BESSEL MULTIPLIERS AND APPROXIMATE DUALS IN HILBERT C∗ -MODULES
Azandaryani, Morteza Mirzaee 1063
https://doi.org/10.4134/JKMS.j150701
Two standard Bessel sequences in a Hilbert $C^*$-module are approximately duals if the distance (with respect to the norm) between the identity operator on the Hilbert $C^*$-module and the operator constructed by the composition of the synthesis and analysis operators of these Bessel sequences is strictly less than one. In this paper, we introduce (a, m)-approximate duality using the distance between the identity operator and the operator defined by multiplying the Bessel multiplier with symbol m by an element a in the center of the $C^*$-algebra. We show that approximate duals are special cases of (a, m)-approximate duals and we generalize some of the important results obtained for approximate duals to (a, m)-approximate duals. Especially we study perturbations of (a, m)-approximate duals and (a, m)-approximate duals of modular Riesz bases.
GLOBAL EXISTENCE AND UNIFORM DECAY OF COUPLED WAVE EQUATION OF KIRCHHOFF TYPE IN A NONCYLINDRICAL DOMAIN
Ha, Tae Gab 1081
In this paper, we consider coupled wave equation of Kirchhoff type in a noncylindrical domain. This work is devoted to prove the existence and uniqueness of global solutions and decay for the energy of solutions.
ON 𝜙-SEMIPRIME SUBMODULES
Ebrahimpour, Mahdieh;Mirzaee, Fatemeh 1099
Let R be a commutative ring with non-zero identity and M be a unitary R-module. Let S(M) be the set of all submodules of M and ${\phi}:S(M){\rightarrow}S(M){\cup}\{{\emptyset}\}$ be a function. We say that a proper submodule P of M is a ${\phi}$-semiprime submodule if $r{\in}R$ and $x{\in}M$ with $r^2x{\in}P{\setminus}{\phi}(P)$ implies that $rx{\in}P$. In this paper, we investigate some properties of this class of submodules. Also, some characterizations of ${\phi}$-semiprime submodules are given.
FINITE p-GROUPS WHOSE NON-ABELIAN SUBGROUPS HAVE THE SAME CENTER
Wang, Lifang 1109
For an odd prime p, finite p-groups whose non-abelian subgroups have the same center are classified in this paper.
THE PROBABILISTIC METHOD MEETS GO
Farr, Graham 1121
Go is an ancient game of great complexity and has a huge following in East Asia. It is also very rich mathematically, and can be played on any graph, although it is usually played on a square lattice. As with any game, one of the most fundamental problems is to determine the number of legal positions, or the probability that a random position is legal. A random Go position is generated using a model previously studied by the author, with each vertex being independently Black, White or Uncoloured with probabilities q, q, 1 - 2q respectively. In this paper we consider the probability of legality for two scenarios. Firstly, for an $N{\times}N$ square lattice graph, we show that, with $q=cN^{-{\alpha}}$ and c and ${\alpha}$ constant, as $N{\rightarrow}{\infty}$ the limiting probability of legality is 0, exp($-2c^5$), and 1 according as ${\alpha}$ < 2/5, ${\alpha}=2/5$ and ${\alpha}$ > 2/5 respectively. On the way, we investigate the behaviour of the number of captured chains (or chromons). Secondly, for a random graph on n vertices with edge probability p generated according to the classical Gilbert-Erdős-Rényi model ${\mathcal{G}}$(n; p), we classify the main situations according to their asymptotic almost sure legality or illegality. Our results draw on a variety of probabilistic and enumerative methods including linearity of expectation, second moment method, factorial moments, polyomino enumeration, giant components in random graphs, and typicality of random structures. We conclude with suggestions for further work.
COLORED PERMUTATIONS WITH NO MONOCHROMATIC CYCLES
Kim, Dongsu;Kim, Jang Soo;Seo, Seunghyun 1149
An ($n_1,\;n_2,\;{\ldots},\;n_k$)-colored permutation is a permutation of $n_1+n_2+{\cdots}+n_k$ in which $1,\;2,\;{\ldots},\;n_1$ have color 1, and $n_1+1,\;n_1+2,\;{\ldots},\;n_1+n_2$ have color 2, and so on. We give a bijective proof of Steinhardt's result: the number of colored permutations with no monochromatic cycles is equal to the number of permutations with no fixed points after reordering the first $n_1$ elements, the next $n_2$ elements, and so on, in ascending order. We then find the generating function for colored permutations with no monochromatic cycles. As an application we give a new proof of the well known generating function for colored permutations with no fixed colors, also known as multi-derangements.
HYPERSURFACES OF INFINITE TYPE WITH NULL TANGENTIAL HOLOMORPHIC VECTOR FIELDS
Thu, Ninh Van 1163
In this paper, we introduce the condition (I) (cf. Section 2) and prove that there is no nontrivial tangential holomorphic vector field of a certain hypersurface of infinite type in ${\mathbb{C}}^2$.
ON THE UNIFORM CONVERGENCE OF SPECTRAL EXPANSIONS FOR A SPECTRAL PROBLEM WITH A BOUNDARY CONDITION RATIONALLY DEPENDING ON THE EIGENPARAMETER
Goktas, Sertac;Kerimov, Nazim B.;Maris, Emir A. 1175
The spectral problem $$-y''+q(x)y=\lambda y,\quad 0<x<1,$$ $$y(0)\cos\beta=y'(0)\sin\beta,\quad 0\leq\beta<\pi;\qquad \frac{y'(1)}{y(1)}=h(\lambda)$$ is considered, where $\lambda$ is a spectral parameter, q(x) is a real-valued continuous function on [0, 1], and $$h(\lambda)=a\lambda+b-\sum\limits_{k=1}^{N}\frac{b_k}{\lambda-c_k},$$ with real coefficients and $a\geq 0$, $b_k>0$, $c_1<c_2<\cdots<c_N$, $N\geq 0$. The sharpened asymptotic formulae for eigenvalues and eigenfunctions of the above-mentioned spectral problem are obtained and the uniform convergence of the spectral expansions of the continuous functions in terms of eigenfunctions are presented.
L2 HARMONIC FORMS ON GRADIENT SHRINKING RICCI SOLITONS
Yun, Gabjin 1189
In this paper, we study vanishing properties for $L^2$ harmonic 1-forms on a gradient shrinking Ricci soliton. We prove that if (M, g, f) is a complete oriented noncompact gradient shrinking Ricci soliton with potential function f, then there are no non-trivial $L^2$ harmonic 1-forms which are orthogonal to df. Second, we show that if the scalar curvature of the metric g is greater than or equal to (n - 2)/2, then there are no non-trivial $L^2$ harmonic 1-forms on (M, g). We also show that any multiplication of the total differential df by a function cannot be an $L^2$ harmonic 1-form unless it is trivial. Finally, we derive various integral properties involving the potential function f and $L^2$ harmonic 1-forms, and handle their applications.
ANALYTICAL TECHNIQUES FOR SYSTEM OF TIME FRACTIONAL NONLINEAR DIFFERENTIAL EQUATIONS
Choi, Junesang;Kumar, Devendra;Singh, Jagdev;Swroop, Ram 1209
We coupled the so-called Sumudu transform with the homotopy perturbation method (HPM) and the homotopy analysis method (HAM), which are called homotopy perturbation Sumudu transform method (HPSTM) and homotopy analysis Sumudu transform method (HASTM), respectively. Then we show how HPSTM and HASTM are more convenient than HPM and HAM by conducting a comparative analytical study for a system of time fractional nonlinear differential equations. A Maple package is also used to enhance the clarity of the involved numerical simulations.
FACTORIZATION OF CERTAIN SELF-MAPS OF PRODUCT SPACES
Jun, Sangwoo;Lee, Kee Young 1231
In this paper, we show that, under some conditions, self-maps of product spaces can be represented by the composition of two specific self-maps if their induced homomorphism on the i-th homotopy group is an automorphism for all i in some section of positive integers. As an application, we obtain closeness numbers of several product spaces.
TRIPLE SYMMETRIC IDENTITIES FOR w-CATALAN POLYNOMIALS
Kim, Dae San;Kim, Taekyun 1243
In this paper, we introduce w-Catalan polynomials as a generalization of Catalan polynomials and derive fourteen basic identities of symmetry in three variables related to w-Catalan polynomials and analogues of alternating power sums. In addition, specializations of one of the variables as one give us new and interesting identities of symmetry even for two variables. The derivations of identities are based on the p-adic integral expression for the generating function of the w-Catalan polynomials and the quotient of p-adic integrals for that of the analogues of the alternating power sums.
ON SPLIT LEIBNIZ TRIPLE SYSTEMS
Cao, Yan;Chen, Liangyun 1265
In order to study the structure of arbitrary split Leibniz triple systems, we introduce the class of split Leibniz triple systems as the natural extension of the class of split Lie triple systems and split Leibniz algebras. By developing techniques of connections of roots for this kind of triple systems, we show that any of such Leibniz triple systems T with a symmetric root system is of the form $T=U+{\sum}_{[j]{\in}{\Lambda}^1/{\sim}}I_{[j]}$ with U a subspace of $T_0$ and any $I_{[j]}$ a well described ideal of T, satisfying $\{I_{[j]},T,I_{[k]}\}=\{I_{[j]},I_{[k]},T\}=\{T,I_{[j]},I_{[k]}\}=0 \text{ if }[j]{\neq}[k]$.
ON 𝑺-CLOSED SUBMODULES
Durgun, Yilmaz;Ozdemir, Salahattin 1281
A submodule N of a module M is called ${\mathcal{S}}$-closed (in M) if M/N is nonsingular. It is well-known that the class Closed of short exact sequences determined by closed submodules is a proper class in the sense of Buchsbaum. However, the class $\mathcal{S}-Closed$ of short exact sequences determined by $\mathcal{S}$-closed submodules need not be a proper class. In the first part of the paper, we describe the smallest proper class ${\langle}\mathcal{S-Closed}{\rangle}$ containing $\mathcal{S-Closed}$ in terms of $\mathcal{S}$-closed submodules. We show that this class coincides with the proper classes projectively generated by Goldie torsion modules and coprojectively generated by nonsingular modules. Moreover, for a right nonsingular ring R, it coincides with the proper class generated by neat submodules if and only if R is a right SI-ring. In abelian groups, the elements of this class are exactly torsionsplitting. In the second part, coprojective modules of this class which we call ec-flat modules are also investigated. We prove that injective modules are ec-flat if and only if each injective hull of a Goldie torsion module is projective if and only if every Goldie torsion module embeds in a projective module. For a left Noetherian right nonsingular ring R of which the identity element is a sum of orthogonal primitive idempotents, we prove that the class ${\langle}\mathcal{S-Closed}{\rangle}$ coincides with the class of pure-exact sequences of modules if and only if R is a two-sided hereditary, two-sided $\mathcal{CS}$-ring and every singular right module is a direct sum of finitely presented modules.
HAUSDORFF DIMENSION OF THE SET CONCERNING WITH BOREL-BERNSTEIN THEORY IN LÜROTH EXPANSIONS
Shen, Luming 1301
It is well known that every $x{\in}(0,1]$ can be expanded to an infinite Lüroth series of the form $$x={\frac{1}{d_1(x)}}+{\cdots}+{\frac{1}{d_1(x)(d_1(x)-1){\cdots}d_{n-1}(x)(d_{n-1}(x)-1)d_n(x)}}+{\cdots},$$ where $d_n(x){\geq}2$ for all $n{\geq}1$. In this paper, the set of points with some restrictions on the digits in Lüroth series expansions is considered. Namely, the Hausdorff dimension of the following set $$F_{\phi}=\{x{\in}(0,1]\;:\;d_n(x){\geq}{\phi}(n)\;\text{for infinitely many}\;n\}$$ is determined, where ${\phi}$ is an integer-valued function defined on ${\mathbb{N}}$, and ${\phi}(n){\rightarrow}{\infty}$ as $n{\rightarrow}{\infty}$.
LINEAR PRESERVERS OF SYMMETRIC ARCTIC RANK OVER THE BINARY BOOLEAN SEMIRING
Beasley, LeRoy B.;Song, Seok-Zun 1317
A Boolean rank one matrix can be factored as $\text{uv}^t$ for vectors u and v of appropriate orders. The perimeter of this Boolean rank one matrix is the number of nonzero entries in u plus the number of nonzero entries in v. A Boolean matrix of Boolean rank k is the sum of k Boolean rank one matrices, a rank one decomposition. The perimeter of a Boolean matrix A of Boolean rank k is the minimum over all Boolean rank one decompositions of A of the sums of perimeters of the Boolean rank one matrices. The arctic rank of a Boolean matrix is one half the perimeter. In this article we characterize the linear operators that preserve the symmetric arctic rank of symmetric Boolean matrices.
SLANT HELICES IN THE THREE-DIMENSIONAL SPHERE
Lucas, Pascual;Ortega-Yagues, Jose Antonio 1331
A curve ${\gamma}$ immersed in the three-dimensional sphere ${\mathbb{S}}^3$ is said to be a slant helix if there exists a Killing vector field V(s) with constant length along ${\gamma}$ and such that the angle between V and the principal normal is constant along ${\gamma}$. In this paper we characterize slant helices in ${\mathbb{S}}^3$ by means of a differential equation in the curvature ${\kappa}$ and the torsion ${\tau}$ of the curve. We define a helix surface in ${\mathbb{S}}^3$ and give a method to construct any helix surface. This method is based on the Kitagawa representation of flat surfaces in ${\mathbb{S}}^3$. Finally, we obtain a geometric approach to the problem of solving natural equations for slant helices in the three-dimensional sphere. We prove that the slant helices in ${\mathbb{S}}^3$ are exactly the geodesics of helix surfaces.
SOME ARITHMETIC PROPERTIES ON NONSTANDARD NUMBER FIELDS
Lee, Junguk 1345
For a given number field K, we show that the ranks of elliptic curves over K are uniformly finitely bounded if and only if the weak Mordell-Weil property holds in all (some) ultrapowers $^*K$ of K. We introduce the nonstandard weak Mordell-Weil property for $^*K$ considering each Mordell-Weil group as $^*{\mathbb{Z}}$-module, where $^*{\mathbb{Z}}$ is an ultrapower of ${\mathbb{Z}}$, and we show that the nonstandard weak Mordell-Weil property is equivalent to the weak Mordell-Weil property in $^*K$. In a saturated nonstandard number field, there is a nonstandard ring of integers $^*{\mathbb{Z}}$, which is definable. We can consider definable abelian groups as $^*{\mathbb{Z}}$-modules so that the nonstandard weak Mordell-Weil property is well-defined, and we conclude that the nonstandard weak Mordell-Weil property and the weak Mordell-Weil property are equivalent. We have valuations induced from prime numbers in nonstandard rational number fields, and using these valuations, we identify two nonstandard rational numbers.
Factors contributing to measles transmission during an outbreak in Kamwenge District, Western Uganda, April to August 2015
Fred Nsubuga ORCID: orcid.org/0000-0002-0264-70141,
Lilian Bulage1,
Immaculate Ampeire2,
Joseph K. B. Matovu3,
Simon Kasasa3,
Patricia Tanifum4,
Alex Ario Riolexus1 &
Bao-Ping Zhu4
In April 2015, Kamwenge District, western Uganda reported a measles outbreak. We investigated the outbreak to identify potential exposures that facilitated measles transmission, assess vaccine effectiveness (VE) and vaccination coverage (VC), and recommend prevention and control measures.
For this investigation, a probable case was defined as onset of fever and generalized maculopapular rash, plus ≥1 of the following symptoms: Coryza, conjunctivitis, or cough. A confirmed case was defined as a probable case plus identification of measles-specific IgM in serum. For case-finding, we reviewed patients' medical records and conducted in-home patient examination. In a case-control study, we compared exposures of case-patients and controls matched by age and village of residence. For children aged 9 m-5y, we estimated VC using the percent of children among the controls who had been vaccinated against measles, and calculated VE using the formula, VE = 1 - ORM-H, where ORM-H was the Mantel-Haenszel odds ratio associated with having a measles vaccination history.
We identified 213 probable cases with onset between April and August, 2015. Of 23 blood specimens collected, 78% were positive for measles-specific IgM. Measles attack rate was highest in the youngest age-group, 0-5y (13/10,000), and decreased as age increased. The epidemic curve indicated sustained propagation in the community. Of the 50 case-patients and 200 controls, 42% of case-patients and 12% of controls visited health centers during their likely exposure period (ORM-H = 6.1; 95% CI = 2.7–14). Among children aged 9 m-5y, VE was estimated at 70% (95% CI: 24–88%), and VC at 75% (95% CI: 67–83%). Excessive crowding was observed at all health centers; no patient triage-system existed.
The spread of measles during this outbreak was facilitated by patient mixing at crowded health centers, suboptimal VE and inadequate VC. We recommended emergency immunization campaign targeting children <5y in the affected sub-counties, as well as triaging and isolation of febrile or rash patients visiting health centers.
Measles is one of the most infectious human diseases and frequently results in widespread outbreaks. It can lead to lifelong complications and death [1, 2]. The World Health Organization (WHO) estimated that approximately 535,000 children died of measles in 2000 globally, the majority from developing countries, which accounted for 5% of all under-five mortality [3]. In 2009, the Regional Committee for Africa adopted a regional measles elimination goal for 2020 at its 59th session. It urged member states to invest in strengthening immunization and health systems, because routine immunization plays a central role in the elimination efforts [4]. In 2010, the World Health Assembly set a Year 2015 Target to reduce measles deaths by 95% of the 2000 levels. By 2010, global measles mortality decreased by an estimated 74%, from 535,300 deaths in 2000 to 139,300 in 2010 [5].
Accelerated measles control activities started in 2001 in countries in the WHO African Region [6]. By 2008, reported measles cases decreased by 93% and estimated measles mortality decreased by 92% in the African Region compared with the figures for 2000 [6]. The WHO African Region set targets as part of the regional measles elimination goal, which include the following: Reducing annual regional measles incidence to fewer than five cases per million; achieving measles vaccination coverage (VC) of 90% nationally and exceeding 80% VC at every districts; and achieving at least 95% coverage with measles vaccines during Supplementary Immunization Activities nationally and in at least 80% of districts.
Despite the effort and progress made, measles incidence appears to have rebounded in recent years in Uganda. Kamwenge District in western Uganda began reporting increasing numbers of measles cases in April 2015. In June 2015 the district requested assistance to control the outbreak. Measles VC (with 1 dose measles-containing vaccine given at age 9 m) in Kamwenge District is estimated at 80% based on the administrative data, which does not provide adequate population protection.
Kamwenge District (0.2258° N, 30.4818° E) has an estimated total population of 421,470. It is bordered by Kyenjojo District to the north, Kyegegwa and Kiruhura Districts to the northeast, Ibanda District to the east and southeast, Rubirizi District to the southeast, Kasese District to the west, and Kabarole District to the northwest. The district also houses a refugee settlement, with an estimated refugee population in excess of 50,000, a vulnerable population often with low VC, inadequate access to care, and compromised health status.
We conducted an investigation in Kamwenge District to identify potential exposures for measles transmission, estimate vaccine effectiveness (VE), estimate VC, and provide evidence-based recommendations for measles control in Uganda.
Case definitions
We defined a probable case as onset of fever and generalized maculopapular rash in a resident of Kamwenge from 1 March to 31 August 2015 with at least one of the following symptoms: Coryza, conjunctivitis, or cough. A confirmed case was a probable case with serum positivity of measles-specific IgM antibody. We developed the case definition after reviewing the clinical presentations of some of the measles patients and discussing with clinicians.
Case finding
We conducted systematic case finding by visiting health centers that served the most affected sub-counties. We reviewed patient records from 1 March 2015 to 31 August 2015 to identify probable and confirmed cases based on the case definition. The surveillance officers, village health team (VHT) members and village administrators visited the case-patients' homes to verify the cases. The records in the Health Management Information System had basic information for each patient, including name, age, sex, residence, admission date, and symptoms. We trained other health workers and VHT members on case finding using the case definition. Those meeting the definition of a probable case were referred to health centers for further management. The VHT members played a key role during this process because they reside in the villages and know practically every village resident, and they are trained to conduct surveillance of notifiable diseases and to refer patients for health services.
Descriptive epidemiologic analysis
We conducted a descriptive epidemiologic study examining the distribution of the cases. We described the clinical symptoms of the case-patients. We computed the attack rates by age, sex, and sub-county of residence, and nationality. We constructed an epidemic curve to examine the development of the epidemic over time.
Hypothesis generation
During hypothesis generation, we used a standardized case investigation form to interview 24 probable case-patients conveniently sampled at health facilities and in the surrounding communities, on the assumption that they were representative of all case-patients in this outbreak. The sample size was not formally calculated; it was based on experience from past outbreak investigations. These interviews explored potential exposures at health facilities, schools, and churches, and during any mass vaccination campaigns, within the 21 days before rash onset.
We conducted a matched case-control study to evaluate the potential exposures that emerged during hypothesis generation. Cases for the case-control study were selected among the probable cases identified through case finding. If a household had more than one case, only the case-patient with the earliest onset was enrolled.
For each case, we selected four controls, individually matched with the case by age group (0–5, 6–12, 13–18 and 19–30) and village of residence. We matched by age because age is a major confounder during investigations of virtually all communicable diseases. We matched by village of residence to ensure cases and controls had comparable probability of exposure. We used a case-to-control ratio of 1:4 because little additional statistical power is gained beyond four controls per case [7]. To qualify, a control-person must have had neither fever nor generalized maculopapular rash since March 2015.
We administered a structured questionnaire to cases and controls in-person, to collect information on demographic characteristics (age, sex, and education level), potential exposures, and vaccination status. In interviewing the cases and controls about their potential exposures, we defined the effective exposure period to be the time window during 7–21 days prior to the case-patients' onset of rash (i.e., between the minimum and maximum incubation period for measles). These calendar days were then applied in interviewing both the case-patient and his/her matched control-persons regarding their exposures. Cases that had rash onset between 30 April and 13 July 2015 were recruited for the case-control study.
Vaccination history was assessed by a vaccination card whenever available; for 24% (12/50) of the case-patients and 13% (25/200) of the control-persons whose vaccination cards were unavailable, we relied on the recall of the case-patients (for adult cases) or their parents (for child cases).
Data management and analysis
We managed the data using Microsoft Excel, and conducted data analysis using Epi Info 7.1.5. We used the 2014 census data to calculate attack rates. We used the Mantel-Haenszel method to analyze the data from the case-control study to account for the matched study design.
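For reference (standard epidemiologic notation, not reproduced from the study protocol): if matched stratum i contributes a 2 × 2 table with $a_i$ exposed cases, $b_i$ unexposed cases, $c_i$ exposed controls, $d_i$ unexposed controls, and $n_i$ subjects in total, the Mantel-Haenszel estimator pools the strata as $$ {\mathrm{OR}}_{\mathrm{M}-\mathrm{H}}=\frac{\sum_i a_i d_i/n_i}{\sum_i b_i c_i/n_i} $$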
We estimated the VE for measles vaccine using the following formula [8]:
$$ \mathrm{VE}=\left(1-{\mathrm{OR}}_{\mathrm{M}-\mathrm{H}}\right) $$
where ORM-H is the protective Mantel-Haenszel odds ratio associated with vaccination estimated from the case-control study. We estimated the VE for all persons aged ≥9 m and by age group (9 m-5y, 6-12y, and 13-52y). We excluded children <9 m of age from the calculation of the VE because Uganda's routine one-dose measles vaccination is administered at age 9 m.
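As a worked illustration using the estimate obtained below for children aged 9 m-5y (see Table 4, where ORM-H = 0.30): $$ \mathrm{VE}=1-{\mathrm{OR}}_{\mathrm{M}-\mathrm{H}}=1-0.30=0.70=70\% $$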
To obtain a quick estimate of VC in the outbreak area for the purpose of outbreak control, we used the percentage of controls who had a history of measles vaccination to estimate measles VC for all persons aged ≥9 m and by age groups (9 m-5y, 6–12, and 13-52y), assuming that the controls were representative of the general population [9, 10]. As with the calculation of VE, children aged <9 m were excluded in calculating VC.
Laboratory and environmental investigations
We collected blood samples from 23 probable case-patients who sought care at health centers and sent the samples to the Uganda Virus Research Institute for serological testing. Measles IgM antibody levels were measured using enzyme linked immunosorbent assay (ELISA) kits.
During the case-finding activities, we conducted on-site inspections of the health centers in the outbreak area to observe the crowdedness, and interviewed the clinic attendants on the reasons for the crowdedness.
The Ministry of Health of Uganda (MoH) gave the directive and approval to investigate this outbreak. The Office of the Associate Director for Science, CDC/Uganda, determined that this activity was not human subject research, and its primary intent was public health practice or a disease control activity (specifically, epidemic or endemic disease control activity). Verbal informed consent was obtained from all patients who provided blood samples and from all participants or their caretakers (if the interviewee/patient was a minor) before the start of each interview. We sought verbal consent because this study was conducted as part of an outbreak investigation whose primary purpose was to inform disease control efforts rather than outright human subjects' research.
We identified 213 probable cases with onset between 17 April and 30 August, 2015 in the three affected sub-counties, including 18 laboratory confirmed cases. The epidemic curve shows sustained community transmission from April to August with no apparent periodicity. The highest number of cases had onset in July (Fig. 1).
Epi curve showing the number of measles cases by rash onset date in Kamwenge District, from April to August 2015
The median age of the case-patients was 5y (Inter-Quartile Range: 2.3–9.5y). The clinical presentations of case-patients' were consistent with measles (Table 1). The attack rate was highest in children aged 0-5y (13 per 10,000) and declined as age increased. Of the three sub-counties, Biguli (where the outbreak started) had the highest attack rate. Males and females were relatively equally affected. The attack rate also did not differ greatly between Ugandan nationals and refugees (Table 2).
Table 1 Distribution of symptoms of measles case-patients during an outbreak in Kamwenge District, Uganda, April to August 2015
Table 2 Attack rates of measles (per 10,000) during an outbreak in Kamwenge District, Uganda, April to August 2015
The hypothesis-generating interviews identified three potential exposures that might have driven the outbreak, i.e., exposures at school, at health facilities, and at the church. The case-control investigation of those potential exposures showed that visiting a health center during the effective exposure period was associated with an increased risk of developing measles by approximately 6 fold (ORM-H = 6.1, 95% CI: 2.7–14) (Table 3). The other potential exposures were not significantly associated with measles disease.
Table 3 Exposures for measles transmission during an outbreak in Kamwenge District, Uganda, April to August 2015
On-site observation of health centers' out-patient department revealed that the patient waiting areas were grossly overcrowded. Patients saturated the waiting areas most of the time during the day. Interviews of health center administrators indicated that the overcrowding was due to delays in consultation and disposition as a result of inadequate healthcare workers or an influx of patients.
Measles vaccine administered at age 9 m by the routine vaccination schedule in Uganda was protective against measles infection for the population aged ≥9 m (ORM-H = 0.36, 95% CI = 0.15–0.83), with a corresponding VE of 64% (95% CI = 17–85%). When the data were stratified by age, the vaccine was protective in children aged 9 m-5y (ORM-H = 0.30, 95% CI = 0.12–0.76), yielding a VE of 70% (95% CI = 24–88%). The VE estimates for the other age groups were not statistically significant due to small sample sizes (Table 4).
Table 4 Measles vaccination effectiveness by age group during an outbreak in Kamwenge District, Uganda, April to August, 2015
The VC, estimated by the percent of control-persons who had a history of measles vaccination, was 77% for all persons age ≥ 9 m, 75% among children aged 9 m-5y, 75% among children aged 6-12y, and 95% among persons aged ≥13y (Table 5).
Table 5 Measles vaccination coverage by age group, estimated by the percent of controls who were vaccinated, during an outbreak in Kamwenge District, Uganda, April to August, 2015
Our investigation indicated that exposure at crowded healthcare facilities, vaccine failure, and failure to vaccinate all facilitated measles transmission during the measles outbreak in Kamwenge District, Western Uganda.
Measles is an extremely infectious disease [11]. When a measles case is introduced into a naïve population, 12–40 secondary cases might be produced [12]. Measles virus has the ability to remain viable for an extended period of time in small droplets expelled by infected individuals when they cough, hence the disease can be easily transmitted from a measles patient to other patients if they share the same confined space such as a waiting area in a healthcare setting, especially if the waiting area is crowded [13]. Similar observations have been made in other countries [14,15,16]. Therefore, reducing healthcare-associated transmission should be an integral and important part of measles control strategy [17, 18]. During measles outbreaks, healthcare centers could consider setting up a triaging system to separate patients with fever and rash from other patients at the reception area. Public health authorities could also consider setting up special measles clinics and advise anyone with fever and rash to go to these special clinics for treatment.
The effectiveness of a single-dose measles vaccine currently administered at age 9 m in the outbreak area during this investigation (64% in all persons and 70% among children aged 9 m-5y) was lower than a previous estimate in three large hospitals in Dhaka, Bangladesh (80%) [19]. A literature review found that VE varied by WHO region, with lower estimates in countries belonging to the African Region and the Southeast Asian Region [20]. The effectiveness of measles vaccination is influenced by several host and vaccine factors [21], including the number of doses given, age at which vaccine is administered [22], the quality of vaccine and the adequacy of the cold chain [23, 24]. It should also be noted that in outbreak situations, usually only a small percentage of the cases are confirmed; hence researchers have to use observational study designs to estimate VE, which tend to underestimate the true VE due to the inclusion of false positives in the cases. Case-control designs usually produce the highest and most accurate estimates because the use of odds ratio tends to over-estimate the relative risk, which counterbalances the underestimation of VE due to the inclusion of false-positives in the cases [25]. Estimating VE is an important part of an outbreak investigation involving a vaccine-preventable disease because it can provide crucial evidence to guide outbreak response and routine immunization activities [20].
Measles antibodies develop in approximately 85% of children vaccinated at age 9 m, 95% of children vaccinated at 12 m, and 98% of children vaccinated at 15 m. WHO recommends vaccination at 9 m of age in countries at the mortality reduction stage [26]. However, to improve the effectiveness of measles vaccine and to slow the buildup of the susceptible population, one or a combination of the following strategies is generally recommended, depending on the specific situation: introducing a second dose in the vaccination schedule, increasing the age of measles vaccination to ≥12 m for the first dose, and conducting supplemental immunization activities [24]. In the Ugandan situation, it would be best to introduce a second dose of measles vaccine into the routine vaccination schedule at age 15 m. Literature has shown that a second dose of measles vaccine would boost the immunity level in the vaccinated population to as high as 98% and reduce the buildup of the susceptible population [27].
The VC estimated in this investigation among children aged 9 m-5y (75%) was lower than that from the administrative data of Kamwenge District (79%), and both were lower than the VC of ≥90% recommended by the WHO African Region to achieve population immunity [6]. Even in countries with good immunization coverage, measles outbreaks still occur because the susceptible population accumulates fairly rapidly even at high coverage, as measles vaccine is not 100% effective [28]. For example, despite an overall measles vaccination coverage of 92–94% between 2004 and 2010, WHO/Euro experienced several outbreaks; in 2010 the region reported 30,639 cases, the highest number since 2006 [29]. Likewise, a measles outbreak occurred in San Diego, California in 2008 despite a high community vaccination coverage of 91% [30]. The low VC found in our study therefore calls for the Uganda MoH to regularly conduct coverage surveys to complement administrative data and improve program implementation.
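As an illustrative back-of-the-envelope calculation of our own (not computed in the paper), combining the point estimates above shows why this coverage was insufficient:

```latex
% Crude effectively-protected fraction among children aged 9 m-5y,
% using the point estimates reported above (VC = 75%, VE = 70%).
% This approximation ignores naturally acquired immunity.
\[
  \text{Protected fraction} \approx \mathrm{VC} \times \mathrm{VE}
  = 0.75 \times 0.70 \approx 0.53
\]
% That is, only about half of the children were effectively protected,
% well below the immunity implied by the recommended >= 90% coverage.
```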
Strengths and limitations
Our investigation used a rigorous epidemiologic approach to describe the roles of exposure at crowded healthcare facilities, vaccine failure, and failure to vaccinate during a measles outbreak. On the other hand, our study had several limitations. Some of the cases and controls lacked immunization cards. In Uganda, the measles vaccine is the last antigen in the routine schedule, given in the upper left arm at age 9 m; we therefore asked caretakers whether their children had received a measles shot in the upper left arm at age 9 m. This reliance on respondents' memory of vaccination status may have introduced recall bias. Also, formal contact tracing was impossible to implement in our setting, so we might have had an incomplete case count, which prevented us from examining the role of community transmission. Additionally, we used the proportion of control-persons vaccinated in the case-control study to estimate the measles VC, in order to provide data quickly to MoH for rapid outbreak control. While similar methods have been used previously in outbreak investigations [9, 10], this method assumed that the control-persons represented the general population, which might have been invalid and could have biased the estimate; a population sample survey would have been more appropriate. Lastly, the use of probable cases instead of confirmed cases might have led us to underestimate the VE.
We conclude from our investigation that exposure at crowded health centers, together with low VC and suboptimal VE, facilitated the spread of measles during this outbreak. To control the current outbreak, we recommend an emergency immunization campaign targeting children ≤5y in the affected sub-counties, and triaging and isolating febrile or rash patients at health centers. We also recommend changing the routine measles immunization schedule from the current single dose at age 9 m to two doses, at 9 m and 15 m.
CDC: US Centers for Disease Control and Prevention
ORadj: Adjusted odds ratio
VE: Vaccine effectiveness
Antona D, Levy-Bruhl D, Baudon C, Freymuth F, Lamy M, Maine C, et al. Measles elimination efforts and 2008-2011 outbreak, France. Emerg Infect Dis. 2013;19(3):357–64.
Onoja A, Adeniji A, Faneye A. Measles complications in a Nigerian hospital setting. Clin Rev Opin. 2013;5(2):18–23.
Simons E, Ferrari M, Fricks J, Wannemuehler K, Anand A, Burton A, et al. Assessment of the 2010 global measles mortality reduction goal: results from a model of surveillance data. Lancet. 2012;379(9832):2173–8.
Levin A, Burgess C, Garrison L Jr, Bauch C, Babigumira J, Simons E, et al. Global eradication of measles: an epidemiologic and economic evaluation. J Infect Dis. 2011;204:S98.
Global Measles and Rubella Strategic plan. http://apps.who.int/iris/bitstream/10665/44855/1/9789241503396_eng.pdf. Accessed 07 Jan 2015.
African Regional Guidelines for Measles and Rubella Surveillance. www.afro.who.int/sites/default/files/2017-06/who-african-regional-measles-and-rubella-surveillance-guidelines_updated-draft-version-april-2015_1.pdf. Accessed 11 Jan 2016.
Setia MS. Methodology series module 2: case-control studies. Indian J Dermatol. 2016;61(2):146–51.
Weinberg GA, Szilagyi PG. Vaccine epidemiology: efficacy, effectiveness, and the translational research roadmap. J Infect Dis. 2010;201(11):1607–10.
Mafigiri R, Nsubuga F, Ario AR. Risk factors for measles death: Kyegegwa District, western Uganda, February–September, 2015. BMC Infect Dis. 2017;17(1):462.
Ma C, Li F, Zheng X, Zhang H, Duan M, Yang Y, et al. Measles vaccine coverage estimates in an outbreak three years after the nation-wide campaign in China: implications for measles elimination, 2013. BMC Infect Dis. 2015;15(1):23.
Lopez AD, Mathers CD, Ezzati M, Jamison DT, Murray CJ. Global and regional burden of disease and risk factors, 2001: systematic analysis of population health data. Lancet. 2006;367(9524):1747–57.
Guidelines for epidemic preparedness and response to measles outbreaks. www.who.int/csr/resources/publications/measles/whocdscsrisr991.pdf. Accessed 05 Dec 2015.
Remington PL, Hall WN, Davis IH, Herald A, Gunn RA. Airborne transmission of measles in a physician's office. JAMA. 1985;253(11):1574–7.
Steingart K, Thomas A, Dykewicz C, Redd S. Transmission of measles virus in healthcare settings during a communitywide outbreak. Infect Control Hosp Epidemiol. 1999;20(2):115.
Biellik R, Clements CJ. Strategies for minimizing nosocomial measles transmission. Bull World Health Organ. 1997;75(4):367.
Maltezou H, Wicker S. Measles in health-care settings. Am J Infect Control. 2013;41(7):661.
Chen SY, Anderson S, Kutty PK, Lugo F, McDonald M, Rota PA, et al. Health care–associated measles outbreak in the United States after an importation: challenges and economic impact. J Infect Dis. 2011;203(11):1517–25.
Hope K, Boyd R, Conaty S, Maywood P. Measles transmission in health care waiting rooms: implications for public health response. Western Pac Surveill Response J. 2012;3(4):33–38.
Akramuzzaman S, Cutts F, Hossain M, Wahedi O, Nahar N, Islam D, et al. Measles vaccine effectiveness and risk factors for measles in Dhaka, Bangladesh. Bull World Health Organ. 2002;80(10):776.
Uzicanin A, Zimmerman L. Field effectiveness of live attenuated measles-containing vaccines: a review of published literature. J Infect Dis. 2011;204(suppl 1):S133–49.
Hayden GF. Clinical review: measles vaccine failure: a survey of causes and means of prevention. Clin Pediatr. 1979;18(3):155–6.
Sudfeld CR, Navar AM, Halsey NA. Effectiveness of measles vaccination and vitamin a treatment. Int J Epidemiol. 2010;39(suppl 1):i48–55.
Akande T. A review of measles vaccine failure in developing countries. Niger Med Pract. 2007;52(5–6):112–6.
Breakwell L, Moturi E, Helgenberger L, Gopalani SV, Hales C, Lam E, et al. Measles outbreak associated with vaccine failure in adults, Federated States of Micronesia, February–August 2014. MMWR Morb Mortal Wkly Rep. 2015;64(38):1088.
Orenstein E, De Serres G, Haber M, Shay D, Bridges C, Gargiullo P, et al. Methodologic issues regarding the use of three observational study designs to assess influenza vaccine effectiveness. Int J Epidemiol. 2007;36(3):623.
Module on best practices for measles surveillance. http://apps.who.int/iris/bitstream/10665/67192/1/WHO_V-B_01.43_eng.pdf. Accessed 26 Aug 2016.
De Serres G, Boulianne N, Defay F, Brousseau N, Benoît M, Lacoursière S, Guillemette F, Soto J, Ouakki M, Ward BJ, et al. Higher risk of measles when the first dose of a 2-dose schedule of measles vaccine is given at 12–14 months versus 15 months of age. Clin Infect Dis. 2012;55(3):394–402.
Marin M, Nguyen H, Langidrik J, Edwards R, Briand K, Papania M, Seward J, LeBaron C. Measles transmission and vaccine effectiveness during a large outbreak on a densely populated island: implications for vaccination policy. Clin Infect Dis. 2006;42(3):315.
Centers for Disease Control and Prevention. Increased transmission and outbreaks of measles, European Region, 2011. MMWR Morb Mortal Wkly Rep. 2011;60(47):1605.
Sugerman D, Barskey A, Delea M, Ortega-Sanchez I, Bi D, Ralston K, et al. Measles outbreak in a highly vaccinated population, San Diego, 2008: role of the intentionally undervaccinated. Pediatrics. 2010;125(4):747–55.
We thank Dr. Robert Mayanja of Uganda National Expanded Program on Immunization, and Dr. Luzze Henry, MoH, Kampala, Uganda for their advice and assistance during the investigation. We also thank Steven Ndugwa Kabwama, Uganda Public Health Fellowship Program, for his comments and suggestions on manuscript preparation. We would like to acknowledge the following organizations for their efforts in ensuring the successful completion of the investigation: Kamwenge District Health Team, Uganda Virus Research Institute and Epidemiology and Surveillance Division MoH.
This investigation was supported by funding from the Uganda Public Health Fellowship Program and from the United States Centers for Disease Control and Prevention.
The data from this investigation are not available in public repositories due to ethical concerns. However, they can be accessed upon request from the field supervisor at www.musph.ac.ug.
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Uganda Public Health Fellowship Program – Field Epidemiology Track, Makerere University School of Public Health, P.O. Box 7072, Kampala, Uganda
Fred Nsubuga, Lilian Bulage & Alex Ario Riolexus
Uganda National Expanded Program on Immunization, Ministry of Health, Kampala, Uganda
Immaculate Ampeire
Makerere University School of Public Health, Kampala, Uganda
Joseph K. B. Matovu & Simon Kasasa
Division of Public Health Protection, Center for Global Health, US Centers for Disease Control and Prevention, Kampala, Uganda
Patricia Tanifum & Bao-Ping Zhu
FN, LB, PT, AA, and AI conceived and designed the study. FN, LB and AA contributed to data collection, cleaning and analysis, and implemented the study. FN and LB analyzed the data. FN prepared the original manuscript. FN, AA, JKBM, PT, SK and B-PZ had primary responsibility for the final content. B-PZ contributed to study design, data analysis, interpretation, and preparation of the final manuscript. All authors contributed to the final draft of the paper, and all authors read and approved the final manuscript.
Correspondence to Fred Nsubuga.
The Ministry of Health of Uganda gave the directive and approval to investigate this outbreak. The Office of the Associate Director for Science, CDC/Uganda, determined that this activity was not human subjects research and that its primary intent was public health practice or a disease control activity (specifically, epidemic or endemic disease control). Verbal informed consent was obtained from all patients who provided blood samples, and from all participants or caretakers (if the interviewee/patient was a minor), before the start of each interview. We sought verbal consent because this study was conducted as part of an outbreak investigation whose primary purpose was to inform disease control efforts rather than to conduct human subjects research.
Nsubuga, F., Bulage, L., Ampeire, I. et al. Factors contributing to measles transmission during an outbreak in Kamwenge District, Western Uganda, April to August 2015. BMC Infect Dis 18, 21 (2018) doi:10.1186/s12879-017-2941-4
Vaccine failure
Viral diseases