I've been working through Silverman and Tate's book Rational Points on Elliptic Curves. They use conic equations as an introduction to singular/nonsingular curves. I've reproduced the problem with my comments below, then my questions follow.
(a) Show that if $\delta \neq 0$ then $C$ has no singular points.
(b) Conversely, show that if $\delta = 0$ and $b^2 - 4ac \neq 0$ then there is a unique singular point on $C$.
Given these two conditions we can classify the curve as two intersecting lines or a single point. So there is one singular point on the curve. This is from wikipedia.
(c) Let $L$ be the line $y = \alpha x + \beta$ with $\alpha \neq 0$. Show that the intersection of $L$ and $C$ consists of either zero, one, or two points.
Substitute the equation of the line into the equation of the conic. Then we get a quadratic equation $(a + \alpha b + \alpha^2 c)x^2 + (\beta b + 2\alpha\beta c + d + \alpha e)x + (c\beta^2 + \beta e + f) = 0$. This equation has zero, one, or two solutions.
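As a quick sanity check of that expansion (a sketch; it assumes the conic is written as $ax^2 + bxy + cy^2 + dx + ey + f = 0$, which is the form the coefficients above come from), a computer algebra system can collect the coefficients:

```python
# Sketch: substitute y = alpha*x + beta into the conic
# a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 (assumed form) and collect in x.
from sympy import symbols, expand, Poly

x, a, b, c, d, e, f, alpha, beta = symbols('x a b c d e f alpha beta')

y = alpha * x + beta
conic = a*x**2 + b*x*y + c*y**2 + d*x + e*y + f
quadratic = Poly(expand(conic), x)

# Highest degree first: the three entries should match
# (a + alpha*b + alpha^2*c), (beta*b + 2*alpha*beta*c + d + alpha*e), (c*beta^2 + beta*e + f)
print(quadratic.all_coeffs())
```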
(d) Determine the conditions on the coefficients which ensure that the intersection $L \cap C$ consists of exactly one point. What is the geometric significance of these conditions?
Write $(a + \alpha b + \alpha^2 c)x^2 + (\beta b + 2\alpha\beta c + d + \alpha e)x + (c\beta^2 + \beta e + f) = a'x^2 + b'x + c'$. There is exactly one solution if the discriminant $b'^2 - 4a'c'$ of the quadratic equation is 0. Geometrically, the intersections collide: the line must be tangent.
(b) Can anyone provide a reference other than Wikipedia for this? I've searched a lot but can't find anything that actually gives a derivation.
But the two roots could be one root of multiplicity two. Is that correct? Or should I only be thinking about the third case (over real numbers), then considering the possible roots, with complex roots left out.
Therefore, there exists only one point $(u, v)$ with $Q_x = 0$, $Q_y = 0$ and $Q = 0$. Hence $(u, v)$ is the unique singular point of $Q$.
Hence $L$ intersects $Q$ at one point, at two points, or not at all, provided $p, q, r$ are not all zero.
d): I don't understand this part. $L$ is tangent to $Q$ when $q^2 - 4pr = 0$ but finding the condition on coefficients of $Q$ and $L$ is a lot of dull calculations (at least with my method).
| CommonCrawl |
The Weil conjectures are famous conjectures formulated by André Weil in 1949. They have driven a large part of the development of algebraic geometry in the following decades, until they were proved by Pierre Deligne in the 1970s. The proof uses, in an essential way, the language of schemes and the theory of étale cohomology developed by Alexander Grothendieck and his school.
The Weil conjectures (now properly speaking the Theorem of Deligne, but the old name is still in use) describe in an elegant way an astonishing regularity of the numbers of solutions of systems of polynomial equations over finite fields $\mathbb F_q$, where $q$ runs through all powers of a fixed prime $p$. Part of the fascination is caused by the appearance of so-called zeta functions which are defined in close analogy to the famous Riemann zeta function. The most difficult part of the Weil conjectures asserts that for these functions the analogue of the Riemann hypothesis holds true.
For algebraic curves, the situation is much simpler than in the general case. In this situation the conjectures had already been formulated in 1924 by Emil Artin, and were proved by Weil. Understanding this theorem and its proof by Bombieri is the topic of the seminar.
Credits: The seminar is a joint Bachelor/Master seminar. For a successful Bachelor seminar talk, you earn 6 ECTS points. For a successful Master seminar talk you earn 6 or 9 ECTS points (depending on the version of the regulations (Prüfungsordnung) which applies to you). Several of the talks can serve as the basis of a Bachelor's thesis; if you are interested, then it would be useful to also learn some commutative algebra and algebraic geometry. Most of the talks in the last third of the seminar have the level of Master seminar talks.
E. Bombieri, Counting points on curves over finite fields, Séminaire Bourbaki, Exposé no. 430 (1972/73).
U. Görtz, T. Wedhorn, Algebraic Geometry I. Schemes, Vieweg+Teubner, 2010.
S. H. Hansen, Rational Points on Curves over Finite Fields, Lect. Notes Ser., Aarhus Univ. Mat. Institute, 1995.
R. Hartshorne, Algebraic Geometry, Springer Graduate Texts in Mathematics 52, 1977.
D. Lorenzini, An invitation to Arithmetic Geometry, Grad. Studies in Math. 9, Amer. Math. Soc., 1996.
H. Stichtenoth, Algebraic Function Fields and Codes, Springer Graduate Texts in Math. 254, 2nd ed., 2009. | CommonCrawl |
You are an employee of Automatic Cleaning Machine (ACM) and a member of the development team of Intelligent Circular Perfect Cleaner (ICPC). ICPC is a robot that cleans up the dust of the place which it passed through.
Your task is an inspection of ICPC. This inspection is performed by checking whether the center of ICPC reaches all the $N$ given points.
However, since the laboratory is small, it may be impossible to place all the points in the laboratory so that the entire body of ICPC is contained in the laboratory during the inspection. The laboratory is a rectangle of $H \times W$ and ICPC is a circle of radius $R$. You decided to write a program to check whether you can place all the points in the laboratory by rotating and/or translating them while maintaining the distance between arbitrary two points.
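One possible way to check this (a rough sketch, not an official solution: it treats $W$ as the horizontal extent and $H$ as the vertical one, and relies on the guarantee stated below that the answer does not change when $R$ changes by $1$, so a dense sweep over rotation angles of the axis-aligned bounding box should be accurate enough):

```python
# Sketch: can the points be rotated/translated so that they all fit in the
# rectangle [R, W-R] x [R, H-R], i.e. inside a (W-2R) x (H-2R) box?
import math

def fits(points, H, W, R, steps=200000):
    w_max, h_max = W - 2 * R, H - 2 * R
    if w_max < 0 or h_max < 0:
        return False
    for k in range(steps):
        t = math.pi * k / steps            # sweeping [0, pi) covers every orientation
        c, s = math.cos(t), math.sin(t)
        xs = [c * x - s * y for x, y in points]
        ys = [s * x + c * y for x, y in points]
        if max(xs) - min(xs) <= w_max and max(ys) - min(ys) <= h_max:
            return True
    return False

# print('Yes' if fits(points, H, W, R) else 'No')
```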
The first line consists of four integers $N, H, W$ and $R$ ($1 \leq N \leq 100$, $1 \leq H, W \leq 10^9$, $1 \leq R \leq 10^6$). The following $N$ lines represent the coordinates of the points which the center of ICPC must reach. The ($i+1$)-th line consists of two integers $x_i$ and $y_i$ ($0 \leq x_i, y_i \leq 10^9$). $x_i$ and $y_i$ represent the $x$ and $y$ coordinates of the $i$-th point, respectively. It is guaranteed that the answer will not change even if $R$ changes by $1$.
If all the points can be placed in the laboratory, print 'Yes'. Otherwise, print 'No'.
All the points can be placed in the laboratory by rotating them through $45$ degrees. | CommonCrawl |
Classification of $C^*$-algebras and $*$-homomorphisms.
KK-Theory, connectivity and unsuspended E-theory, Cuntz semigroups.
Approximation and dimension theory; quasidiagonality. $W^*$-algebras.
Graph $C^*$-algebras and path groupoids. Groupoid modeling of $C^*$-algebras.
Higher category theory. $\infty$-categories and homotopy theory. | CommonCrawl |
A set of recent results indicates that fractionally filled bands of Chern insulators in two dimensions support fractional quantum Hall states analogous to those found in fractionally filled Landau levels. We provide an understanding of these results by examining the algebra of Chern band projected density operators. We find that this algebra closes at long wavelengths and for constant Berry curvature, whereupon it is isomorphic to the $W_\infty$ algebra of lowest Landau level projected densities first identified by Girvin, MacDonald and Platzman [Phys. Rev. B 33, 2481 (1986).] For Hamiltonians projected to the Chern band this provides a route to replicating lowest Landau level physics on the lattice. | CommonCrawl |
This result is taken, for example, from the book 'Paul Wilmott on Quantitative Finance'.
Why I can not use the same technique to price American perpetual call option? When I apply the same method I obtain that my price has a form: $$ V(S) = A S. $$ But I am not able to derive that the coefficient $A$ should be equal to $1$.
Can anybody explain to me what the key issue of this problem is?
I suggest you first value a perpetual up and out call with a barrier B above max of strike K and initial spot and a rebate paid at first barrier hit equal to B - K. Then maximize this value over B. Continuing to assume no dividends, I believe you will find that the optimal B is infinite and that the up and out call value converges to spot. I haven't actually done the calculation but it seems like a worthwhile exercise.
Since this is strictly positive, it follows that the $B^* = \infty$ and thus $A^* = 1$. The only exception is when $S = 0$. In this case the option is worthless no matter what exercise policy you employ.
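To flesh that out a little (a sketch, assuming the standard Black-Scholes dynamics with risk-free rate $r > 0$, volatility $\sigma$ and no dividends): a perpetual claim has no time dependence, so it satisfies $$\tfrac{1}{2}\sigma^2 S^2 V''(S) + r S V'(S) - r V(S) = 0.$$ Trying $V = S^\lambda$ gives $\tfrac{1}{2}\sigma^2 \lambda(\lambda - 1) + r\lambda - r = 0$, with roots $\lambda = 1$ and $\lambda = -2r/\sigma^2$. The up-and-out claim paying the rebate $B - K$ at the first hitting time of the barrier $B > S$ must vanish as $S \to 0$, which kills the negative root, so its value is $(B - K)\,S/B = S\left(1 - \tfrac{K}{B}\right)$. This is increasing in $B$, so the optimal barrier is $B^* = \infty$ and the value converges to $S$, i.e. $A = 1$, consistent with the answer above.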
| CommonCrawl |
Put operations signs ($+$ or $-$ or $\times$ or $\div$) between the numbers 3, 4, 5, 6 to make the highest possible number and lowest possible number.
This activity gives a good opportunity to explore using the knowledge and skills the pupils already have in a "safe" environment.
Start off by writing the four numbers down in order and putting the same sign between each pair. Which operation gives the highest total and which the lowest?
The children can vary the order themselves either working in pairs or individually. After a short period of independent work ask some of the children to explain their thinking to the others before continuing to see what the highest and lowest possible solutions are.
Having tried this challenge, many children will be able to explore further some of the attributes associated with the four rules of number and place value.
Could you make this answer bigger somehow?
Could you make this answer smaller somehow?
How have you got your ideas?
How do you know that this is the biggest possible answer?
How do you know that this is the smallest possible answer?
Other forms of number manipulation may be applied e.g. using powers. The children could also try to make a range of target numbers using the same numbers or alternatively choose a different set of starting numbers and see what results they can make.
Many pupils will benefit from using a calculator so that their energies can be applied to exploring ideas and reasoning rather than just be taken up with calculating. However if your focus is on gaining fluency in calculating skills then it would be better to restrict the set to just three of the numbers such as $3$, $4$ and $5$. A further reduction in the challenge would be to take two numbers and consider all the different solutions that could be made by using any of the operations on them.
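If you would like to check the extreme answers quickly yourself (this is an added sketch, not part of the original notes; it assumes the usual order of operations and keeps the numbers in the order 3, 4, 5, 6), a short brute-force search does the job:

```python
# Sketch: try every choice of +, -, x, / between 3, 4, 5 and 6
# (standard operator precedence, numbers kept in this order).
from itertools import product

numbers = [3, 4, 5, 6]
results = {}
for ops in product('+-*/', repeat=3):
    expr = ''.join(str(n) + op for n, op in zip(numbers, list(ops) + ['']))
    results[expr] = eval(expr)           # e.g. '3+4*5-6'

print('highest:', max(results.items(), key=lambda item: item[1]))
print('lowest :', min(results.items(), key=lambda item: item[1]))
```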
| CommonCrawl |
Jacob Lurie's stuff seems to develop derived algebraic geometry via $E_\infty$ rings and/or maybe something like simplicial commutative rings. Ben Wieland's comment in this question indicates that Lurie never deals with commutative dg algebras. However, it is supposed to be true that all of these different things are the same (meaning more precisely that their model categories are Quillen equivalent) in characteristic zero.
Is the theory of derived algebraic geometry via dg rings or dg algebras in characteristic zero developed anywhere? If not, why not?
I feel like there must be a good reason why Lurie does not use dg rings/algebras, other than the fact that they apparently don't work well in positive characteristic. So I wonder what the reasons are.
I don't know very much about homotopy theory, so I find the $E_\infty$ rings approach to DAG a bit daunting. I am personally more comfortable with dg algebras.
I am personally more interested in things involving "sheaves of dg algebras" than things involving "sheaves of $E_\infty$ rings" (such as elliptic cohomology (and TMF), which I understand is one of Lurie's motivations).
This is more or less an amplification of Tyler's comment. You shouldn't take it too seriously, since I am certainly talking outside my area of expertise, but maybe it will be helpful.
In particular, one of Lurie's achievements is (I believe) constructing equivariant versions of TMF, which (as I understand it) involves (among other things) studying deformations of $p$-divisible groups of derived elliptic curves. It seems hard to do this kind of thing without having a theory that can cope with torsion phenomena.
| CommonCrawl |
$6 \times 3 = 6 + 6 + 6 = 18$.
3.OA.A.1. Interpret products of whole numbers, e.g., interpret $5 \times 7$ as the total number of objects in 5 groups of 7 objects each. For example, describe a context in which a total number of objects can be expressed as $5 \times 7$.
The parent had every right to be upset: a correct answer is a correct answer. Comments on the post correctly pointed out that, since multiplication is commutative, it shouldn't matter in what order the calculation interpreted the product. But hang on, I hear you ask, doesn't that contradict 3.OA.A.1, which clearly states that $6 \times 3$ should be interpreted as 6 groups of 3?
None of this dictates the way of doing $6 \times 3$, that is, the method of computing it. In fact, it expands the possibilities, including deciding to work with the more efficient $3 \times 6$, as this child did. The way of thinking does not constrain the way of doing. If you want to test whether a child understands 3.OA.A.1, you will have to come up with a different task than computation of a product. There are some good ideas from Student Achievement Partners here. | CommonCrawl |
The most well-known story is a tale from when Gauss was still at primary school. One day Gauss' teacher asked his class to add together all the numbers from $1$ to $100$, assuming that this task would occupy them for quite a while. He was shocked when young Gauss, after a few seconds thought, wrote down the answer $5050$. The teacher couldn't understand how his pupil had calculated the sum so quickly in his head, but the eight year old Gauss pointed out that the problem was actually quite simple.
He had added the numbers in pairs - the first and the last, the second and the second to last and so on, observing that $1+100=101$, $2+99=101$, $3+98=101$, ...so the total would be $50$ lots of $101$, which is $5050$.
While the story may not be entirely true, it is a popular tale for maths teachers to tell because it shows that Gauss had a natural insight into mathematics. Rather than performing a great feat of mental arithmetic, Gauss had seen the structure of the problem and used it to find a short cut to a solution.
Gauss could have used his method to add all the numbers from $1$ to any number - by pairing off the first number with the last, the second number with the second to last, and so on, he only had to multiply the total of each pair by half the last number, just one swift calculation.
Can you see how Gauss's method works? Try using it to work out the total of all the numbers from $1$ to $10$. What about $1$ to $50$? The answers are at the bottom of this page.
Or why not challenge a friend to add up the numbers from $1$ to a nice large number, and then amaze them by getting the answer in seconds!
The rest of the article explains how you could use algebra to write Gauss's method - if you haven't yet learned any algebra you may wish to skip this part.
One way of presenting Gauss' method is to write out the sum twice, the second time reversing it as shown.
If we add both rows we get the sum of $1$ to $n$, but twice. Gauss added the rows pairwise - each pair adds up to $n+1$ and there are $n$ pairs, so the sum of the rows is also $n\times (n+1)$. It follows that $2\times (1+2+\ldots +n) = n\times (n+1)$, from which we obtain the formula.
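(The displayed formula is not reproduced in this extract; in symbols the argument reads $S = 1 + 2 + \ldots + n$ and, reversed, $S = n + (n-1) + \ldots + 1$, so adding the two rows pairwise gives $2S = n\times(n+1)$ and hence $$1 + 2 + \ldots + n = \frac{n\times(n+1)}{2}.$$ For the exercises above, this gives $\frac{10\times 11}{2} = 55$ for the numbers from $1$ to $10$ and $\frac{50\times 51}{2} = 1275$ for $1$ to $50$.)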
Gauss' formula is a result of counting a quantity in a clever way. The problems Picturing Triangular Numbers, Mystic Rose, and Handshakes all use similar clever counting to come up with a formula for adding numbers. | CommonCrawl |
Abstract: We study static BPS black hole horizons in four dimensional N=2 gauged supergravity coupled to $n_v$-vector multiplets and with an arbitrary cubic prepotential. We work in a symplectically covariant formalism which allows for both electric and magnetic gauging parameters as well as dyonic background charges and obtain the general solution to the BPS equations for horizons of the form $AdS_2\times \Sigma_g$. In particular this means we solve for the scalar fields as well as the metric of these black holes as a function of the gauging parameters and background charges. When the special Kahler manifold is a symmetric space, our solution is completely explicit and the entropy is related to the familiar quartic invariant. For more general models our solution is implicit up to a set of holomorphic quadratic equations. For particular models which have known embeddings in M-theory, we derive new horizon geometries with dyonic charges and numerically construct black hole solutions. These correspond to M2-branes wrapped on a Riemann surface in a local Calabi-Yau five-fold with internal spin. | CommonCrawl |
Neutrino Mass Hierarchy and Neutron-Antineutron Oscillation from Baryogenesis (High Energy Physics - Phenomenology).
Abstract: It has been recently proposed that the matter-antimatter asymmetry of the universe may have its origin in "post-sphaleron baryogenesis" (PSB). It is a TeV scale mechanism that is testable at the LHC and other low energy experiments. In this paper we present a theory of PSB within a quark-lepton unified scheme based on the gauge group $SU(2)_L \times SU(2)_R \times SU(4)_c$ that allows a direct connection between the baryon asymmetry and the neutrino mass matrix. The flavor changing neutral current constraints on the model allow successful baryogenesis only for an inverted mass hierarchy for neutrinos, which can be tested in the proposed long baseline neutrino experiments. The model also predicts observable neutron-antineutron oscillation accessible to the next generation of experiments as well as TeV scale colored scalars within reach of the LHC. | CommonCrawl |
We have fabricated an isolated thin-film superconducting Al lumped-element resonator (resonant frequency 6.72 GHz) on a sapphire substrate and mounted it inside an Al 3d cavity (TE101 mode frequency 7.50 GHz). The thin-film resonator is very weakly coupled to the microwave drive line with $Q_e \approx 5 \times 10^9$. We illuminated the resonator with 780 nm light from an optical fiber and measured the internal loss in the resonator and its dependence on applied optical and rf powers at temperatures as low as 20 mK. With no applied optical power, the resonator reaches an internal quality factor $Q_i \approx 2 \times 10^6$ at high rf photon numbers. Our measurements show that the applied optical power causes an increase in loss due to an apparent increased contribution from two-level systems as well as the expected increase from quasiparticles. We discuss our results and possible mechanisms for the optical activation of two-level systems.
*Work supported by NSF through the Physics Frontier Center at the Joint Quantum Institute, Dept. of Physics, Univ. of Maryland. | CommonCrawl |
The evil Dr. Procrastinator, supreme overlord of planet forgotzal12 hit us with a forgetting ray and a number of the events below just say "title." We can assure you that we did great things on those days but that forgetting ray has left us with no memory for what we did those days. If only we had countered that forgetting ray with our second greatest weapon system: pen/paper. Our minds are our greatest weapon system, but for all their power, they are wildly susceptible to that forgetting ray. The pen/paper system is impervious to the forgetting ray.
Parents day! We learned how to play SET game and just how mathematical this wonderful game is. Finite geometries, combinatorics, and FUN!
Laundry math, ABBA, and Knotted! By using topology or "rubber sheet geometry," we freed ourselves from knotted ropes, turned our shirts right side out while keeping our hands looped, and more!
A while back, Jim, a friend of BK's, showed BK his set of machines that exploded a specified number of dots into some other specified number of dots. $1\leftarrow 2$ machines, $1\leftarrow 3$ machines, $2\leftarrow 3$ machines, and $1\leftarrow x$ machines are amazingly powerful machines.
Javier demonstrates his magical abilities by guessing the number you are thinking of, with the help of only a few cards. BK shared his magical abilities by guessing which number you lied about in Liar's Bingo.
Mission 001 : : February 08, 2014 : : Title?
If 7 LEAGUErs + Javier all shook hands, how many handshakes would that be? Some said $8\times 7\times \cdots \times 2\times 1$. Others said $8+7+\cdots +1$. Who was right?
Can you keep your ally between you and your nemesis? What if everyone else in the room is doing the same with THEIR allies and THEIR nemeses? What does that look like? | CommonCrawl |
I added to [[vertical categorification]] the [comments](http://mathoverflow.net/questions/4841/what-precisely-is-categorification/5094#5094) that I'd made at MathOverflow, as Urs has requested. I'm not sure that I'm happy with where I put them and how I labelled them, but maybe it's better if other people judge that.
I think of "categorification" as *including* "laxification". Adding only invertible higher arrows I would call "groupoidification" (if that weren't already being used for something else...).
Yeah, I might do that too if that weren't already being used.

Except that a lot of examples called 'categorification' in the wild *don't* add noninvertible higher arrows; an example is moving from [[locales]] to [[Grothendieck topoi]] (or from [[topological spaces]] to [[ionaid]], for that matter).

Conceptually, I think that it helps me to see adding higher arrows and allowing arrows (at whatever level) to be noninvertible as separate steps, even if you might want to do both.
I think the distinction is a fine one, but I would still prefer to allow "categorification" to refer to the whole shebang. What I'm objecting to is I think confined to the third paragraph [here](http://ncatlab.org/nlab/show/vertical+categorification#contrast_to_laxification_11).
I would allow it, certainly, but I also need a term for not the whole shebang. So I said 'categorification proper'. The main point of categorification (whatever you call it), as I see it, is that.
Ok, I edited it a bit. What I object to is that "categorification proper" makes it sound like using "categorification" for the whole shebang is somehow "improper," i.e. not quite right.
The opposite of 'proper' in this sense, I think, is 'greater' (with the latter coming before the noun, as usual in English, rather than after). So they're both emotionally positive terms. (^_^) A random example from an Internet discussion board: 'Did Magna Grecia ("Greater Greece") ever replace Greece proper as the center of Greek civilization?'.
It's not entirely clear to me that the two can be completely disentangled, but it's interesting to think about. Are you thinking of always composing them in a specific order? E.g. we can get from sets to categories via groupoids, which is maybe what you are thinking of, or via preorders -- is going from preorders to categories a process of "adding higher morphisms but not allowing noninvertible things"?

Another possible word for "adding higher invertible morphisms" is "homotopification" since when carried to the limit it replaces all sets by $\infty$-groupoids, i.e. homotopy types. Or from the type theorists' point of view it could be "intensionalization" since $\infty$-groupoids "are" intensional types.

I'm not all that taken by "laxification," partly I think because "lax" things are often viewed as weird and technical (which I think lax functors and "lax categories" often are, although lax transformations and lax morphisms of algebras are certainly ubiquitous), while I think in some circles (such as algebraic geometry) there is already too much of a tendency to regard noninvertible higher cells as weird or pathological or not important. It could also be called "directification" or "directionalization" or "directing" (argh, English) since it involves making things "directed" that weren't before.

I don't really like any of these words, but I'm just thinking out loud.
> is going from preorders to categories a process of "adding higher morphisms but not allowing noninvertible things"

Yes, if you mean not allowing *additional* noninvertible things. (If you do, then you get 2-posets, of course.) Although when speaking of *specific* categories, you get a very different result by decategorifying to a poset and then delaxifying to a set, than you get by delaxifying to a groupoid and then decategorifying to a set, and the latter route is usually (but not always) more interesting.

> Or from the type theorists' point of view it could be "intensionalization"

Maybe. I'm still a bit cautious about approaching higher categories through intensional type theory; while a type in intensional type theory is an $\infty$-groupoid, I see no reason why most $\infty$-groupoids should arise in this way. Indeed, adding an extensionality axiom to an intensional type theory is often consistent, in which case none of these $\infty$-groupoids can be proved to be not discrete. (But I need to catch up on the latest ideas here.)

I'll bet that Urs would like 'homotopification'; we should ask him.

> I think because "lax" things are often viewed as weird and technical

But that's wrong, right? For people who already know and love categorification, you point out that (half of the time) they've been laxifying all along! Although that won't help with geometers who dislike noninvertible higher cells already.

Something based on 'direct' could be good, although none of those particular words sound nice to me either.
> Although that won't help with geometers who dislike noninvertible higher cells already.

I do not understand the motivation for this comment. My experience with monoidal functors is just the opposite. In works in geometry monoidal functors are almost always lax, while most of the papers in category theory mainly concentrate on the pseudo-version as a default.

> "groupoidification" (if that weren't already being used for something else...)

But except for this cafe circle I would not say that the word "groupoidification" is generally accepted: I mean doing the linear algebra of correspondences ("spans") is a great and old idea (very popular in algebraic geometry: Fourier-Mukai transforms being the most well-known archetypal example) and does not need to include groupoids in the game, so the term is in my opinion misleading. Spanification is not a yet used term; but I do not think one needs to have a term for linear algebra in some setup. Linear algebra has been many times generalized, from vector spaces to modules, sheaves of modules, additive categories etc. It still stays linear algebra and the basic idea of the Fourier-Mukai transform is still at its basis the idea of the Fourier transform, never mind the sheaves, inner homs and other bells and whistles.
Why don't you use something like the term "local n-categorification" for categorification where all morphisms of dimension greater than n are invertible? This idea, of course, comes from the idea of localizing a ring at a prime ideal, making that prime maximal, and by default, making all elements in the complement invertible. Then, we can use the term categorification to refer to all categorifications, local categorifications to denote what you guys have been throwing around as possibly "groupoidification", and laxification can stay the same. That way we've avoided the "commutative noncommutative ring" problem. I checked google, and there is no term "local categorification". The only problem I could see happening with this definition is if it were mixed up with localizing a category, but the good thing is, with this terminology, localizing a category becomes 1-truncated local 0-categorification.
Certainly, if there's going to be a type theory that is sufficient as an internal logic for $\infty$-toposes, then adding an extensionality axiom to it must not be consistent. I think one important axiom of such a theory will be exactness, which in the $\infty$-case means that any internal groupoid has a "quotient." In low dimensions, exactness and extensivity are what prevent degenerate models. For instance, any Heyting algebra is an exact Heyting category, but it is not extensive. And any extensive Heyting category is an extensive Heyting 2-category, but it is not (2-)exact.
Zoran, I had a discussion with an algebraic geometer about Lurie's use of "$\infty$-category" to mean $(\infty,1)$-category, and the similar tendency to use "2-category" to mean "(2,1)-category". His response was that most algebraic geometers think of noninvertible 2-cells and higher as "pathological," so that adding the ",1" is just a "niceness condition" which one can easily omit to mention, as in saying "space" for "compactly generated space." I can't understand this, for the reasons you gave and others, but that's what he said, and the fact is that many geometers do use the words "$\infty$-category" and (I believe) even "2-category" in this way.

I also don't think it's true that category-theory papers concentrate on pseudo/strong monoidal functors. Lax functors of bicategories are fairly exotic, but lax monoidal functors are so important that in many circles they are simply called "monoidal functors," with "strong" added when the comparison maps are invertible. Lax morphisms of algebras for 2-monads, of which monoidal categories are a special case, play a very important role in the theory and are studied in a lot of places.
Harry, I don't like "local" because there is another category-theoretic meaning of "local" that that could easily be confused with (and which confused me at first when I read your post), namely "applying to hom-sets." To me "local categorification" "obviously" means "categorifying the hom-sets."
It would be really nice if we found a way to recover something along those lines, since the analogy with localization of a ring seems very straightforward to me at least.
> Is this some sort of indication of a general principle?

H'm, maybe it is. It just seems the obvious thing to do, and it seemed to be the obvious thing to you as well. That actually strengthens the case that decategorifying from a category to a set is just as much 'decategorification' simple as from a category to a poset, since each is a left adjoint.

> I think one important axiom of such a theory will be *exactness*

Interesting, since one potential way to distinguish a 'type theory' from a 'set theory' is that the former is usually *not* exact (so that one must pass to setoids to get an exact set theory).

@ Harry

'groupoidal categorification' is a tad long, but it also strikes me as a very clear term.
If we have that, then we can change laxification to lax categorification, and the notion of a general categorification should encompass both.
But laxification is not categorification at all, as I see it. At least not [[vertical categorification]].
There's not really a precise definition of vertical categorification on that page at all. I feel like in a vague way, they are both vertical categorifications, since they both add define structure at the top. I feel like a general categorification should be some kind of "formal linear combination (intentionally vague)" of lax and groupoidal categorifications.
Laxification could be done at any level, moving say from a $(3,1)$-category to a $(3,2)$-category, not only at the top. But Mike may feel as you do; see discussion at [[vertical categorification]].
A theorem on the possibility of uniform approximation of functions of one complex variable by polynomials. Let $K$ be a compact subset of the complex $z$-plane $\mathbf C$ with a connected complement. Then every function $f$ continuous on $K$ and holomorphic at its interior points can be approximated uniformly on $K$ by polynomials in $z$.
This theorem was proved by S.N. Mergelyan (see , ); it is the culmination of a large number of studies on approximation theory in the complex plane and has many applications in various branches of complex analysis.
In the case where $K$ has no interior points this result was proved by M.A. Lavrent'ev ; the corresponding theorem in the case where $K$ is a compact domain with a connected complement is due to M.V. Keldysh (cf. also Keldysh–Lavrent'ev theorem).
Mergelyan's theorem has the following consequence. Let $K$ be an arbitrary compact subset of $\mathbf C$. Let a function $f$ be continuous on $K$ and holomorphic in its interior. Then in order that $f$ be uniformly approximable by polynomials in $z$ it is necessary and sufficient that $f$ admits a holomorphic extension to all bounded connected components of the set $\mathbf C\setminus K$.
The problem of polynomial approximation is a particular case of the problem of approximation by rational functions with poles in the complement of $K$. Mergelyan found also several sufficient conditions for rational approximation (see ). A complete solution of this problem (for compacta $K\subset\mathbf C$) was obtained in terms of analytic capacities (cf. Analytic capacity), .
Mergelyan's theorem touches upon a large number of papers concerning polynomial, rational and holomorphic approximation in the space $\mathbf C^n$ of several complex variables. Here only partial results for special types of compact subsets have been obtained up till now.
Another important forerunner of Mergelyan's theorem was the Walsh theorem: the case where $K$ is the closure of a Jordan domain (a set with boundary consisting of Jordan curves, cf. Jordan curve).
An interesting proof of Mergelyan's theorem, based on functional analysis, is due to L. Carleson, see [a1].
For analogues of Mergelyan's theorem in $\mathbf C^n$, see [a2]. See also Approximation of functions of a complex variable.
| CommonCrawl |
Check the Javadocs. You need to use Math.log10(). Math.log() returns the logarithm to base e – a_horse_with_no_name Dec 28 '12 at 7:35... Return Math.Log(value + Math.Sqrt(value * value + 1.0)) End Function Example This example uses the Round method of the Math class to round a number to the nearest integer.
The remainder of this page explains how to use the Log machine. The three red text windows contain the numbers y, x, and b, in that sequence. You should read them from the bottom to the top: b to the power x equals y.
18/05/2010 · I'm glad you posted this question - I had been looking for the log function also and could not find it in reference section. Now because of your thread, I find it in the link - just hope it works when I try to use it later tonight or this weekend.... x86's BSR instruction does 32 - numberOfLeadingZeros, but undefined for 0, so a (JIT) compiler has to check for non-zero if it can't prove it doesn't have to.
How can I replace the $\log(x)$ function by simple math operators like $+,-,\div$, and $\times$? I am writing a computer code and I must use $\log(x)$ in it. However, the technology I am using do...... to bring up a Logarithm Calculator that lets you pick two of the numbers in (*) and computes the third. It's pretty straightforward to use, but here is documentation.
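One standard way to do this (offered here as an illustrative sketch, not taken from the quoted sources) uses only $+$, $-$, $\times$ and $\div$: for $x > 0$, $\log_e(x) = 2\left(z + \frac{z^3}{3} + \frac{z^5}{5} + \cdots\right)$ with $z = \frac{x-1}{x+1}$, and other bases follow by dividing by the same series evaluated at the base.

```python
# Sketch: natural log from +, -, * and / only, via
# ln(x) = 2*(z + z^3/3 + z^5/5 + ...), z = (x - 1)/(x + 1), valid for x > 0.
def ln_approx(x, terms=60):
    z = (x - 1.0) / (x + 1.0)
    z2 = z * z
    total, power = 0.0, z
    for k in range(terms):
        total += power / (2 * k + 1)
        power *= z2
    return 2.0 * total

def log10_approx(x, terms=60):
    return ln_approx(x, terms) / ln_approx(10.0, terms)

# import math; print(ln_approx(2.0), math.log(2.0))  # these should agree closely
```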
Return Math.Log(value + Math.Sqrt(value * value + 1.0)) End Function Example This example uses the Round method of the Math class to round a number to the nearest integer. | CommonCrawl |
The fundamental counting principle: If we apply this principle to our previous example, we can easily calculate the number of possible outcomes by multiplying the number of possible die rolls with the number of outcomes of tossing a coin: \(6 \times 2 = 12\) outcomes.
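As a tiny check of that arithmetic (an added sketch, not from the quoted text), the outcome pairs can simply be enumerated:

```python
# Sketch: list every (die roll, coin face) pair and count them.
from itertools import product

outcomes = list(product(range(1, 7), ['heads', 'tails']))
print(len(outcomes))   # 12, i.e. 6 * 2
```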
Search the term "Paleo Dessert But in this case, the sugar industry cherry picked the studies and told researchers the conclusions to draw. The researchers confirmed they were "well aware" of the stipulations – and received $50,000 (in today's dollars). 23. And what did the sugar industry get for their funding? A prestigious Harvard study, published in New England Journal of... How to draw dessert sweet Reviews and opinions written by visitors like you in a few seconds without registration. Share quick how to draw dessert sweet review with others and describe your own experience or read existing feedback.
How to draw dessert Reviews and opinions written by visitors like you in a few seconds without registration. Share quick how to draw dessert review with others and describe your own experience or read existing feedback.
Learn how to draw dessert characters step by step Easy and cute. | CommonCrawl |
Team Socket was defeated by Bash — the Pokenom trainer — and Bash's best Pokenom Chikapu for the $10^9 + 7$-th time. Team Socket realized that Bash and Chikapu were simply too strong together. Now team Socket is devising an evil plan to keep Bash and Chikapu separated! Team Socket has built an evil machine, which can instantly build a rectangular wall or instantly remove a rectangular wall.
Given the locations where Team Socket is going to build and remove walls, can you help Team Socket check whether Bash and Chikapu are separated?
$1 \, x_1 \, y_1 \, x_2 \, y_2$: Team Socket builds a rectangular wall, with sides parallel to the axes, and $2$ opposite corners at $(x_1, y_1)$ and $(x_2, y_2)$ $(x_1 \ne x_2, y_1 \ne y_2)$.
$2 \, j$: Team Socket removes the rectangular wall built in the $j$-th query. It is guaranteed that $j$-th query is of the $1$st type, the wall was built before this query (i.e. $j < i$), and the wall was not removed previously.
$3 \, x_1 \, y_1 \, x_2 \, y_2$: Bash is standing at $(x_1, y_1)$, and Chikapu is standing at $(x_2, y_2)$. Please let Team Socket know if there is a path from Bash to Chikapu. Of course, both Bash and Chikapu cannot walk through any walls.
The first line of input contains exactly one integer $Q$ — the number of queries $(1 \le Q \le 10^5)$.
After each query, no 2 walls have a common point.
In all queries of 1st type, $x_1, y_1, x_2, y_2$ are odd numbers.
In all queries of 3rd type, $x_1, y_1, x_2, y_2$ are even numbers.
For each query of the third type, print the character 'Y' if there is a path from Bash to Chikapu. Otherwise, print the character 'N'. Please note that this problem uses a case-sensitive checker.
Abstract : Using structures of Abstract Wiener Spaces and their reproducing kernel Hilbert spaces, we define a fractional Brownian field indexed by a product space $(0,1/2] \times L^2(T,m)$, $(T,m)$ a separable measure space, where the first coordinate corresponds to the Hurst parameter of fractional Brownian motion. This field encompasses a large class of existing fractional Brownian processes, such as Lévy fractional Brownian motion and multiparameter fractional Brownian motion, and provides a setup for new ones. We prove that it has satisfactory incremental variance in both coordinates and derive certain continuity and Hölder regularity properties in relation with metric entropy. Also, a sharp estimate of the small ball probabilities is provided, generalizing a result on Lévy fractional Brownian motion. Then, we apply these general results to multiparameter and set-indexed processes, proving the existence of processes with prescribed local Hölder regularity on general indexing collections. | CommonCrawl |
Farmer John is going into the ice cream business! He has built a machine that produces blobs of ice cream but unfortunately in somewhat irregular shapes, and he is hoping to optimize the machine to make the shapes produced as output more reasonable.
Each '.' character represents empty space and each '#' character represents a $1 \times 1$ square cell of ice cream.
Unfortunately, the machine isn't working very well at the moment and might produce multiple disconnected blobs of ice cream (the figure above has two). A blob of ice cream is connected if you can reach any ice cream cell from every other ice cream cell in the blob by repeatedly stepping to adjacent ice cream cells in the north, south, east, and west directions.
Farmer John would like to find the area and perimeter of the blob of ice cream having the largest area. The area of a blob is just the number of '#' characters that are part of the blob. If multiple blobs tie for the largest area, he wants to know the smallest perimeter among them. In the figure above, the smaller blob has area 2 and perimeter 6, and the larger blob has area 13 and perimeter 22.
Knowing both the area and perimeter of a blob of ice cream is important, since Farmer John ultimately wants to minimize the ratio of perimeter to area, a quantity he calls the icyperimetric measure of his ice cream. When this ratio is small, the ice cream melts slower, since it has less surface area relative to its mass.
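One straightforward way to compute both quantities (a sketch of a possible approach, not an official solution) is a flood fill: count the cells of each blob for its area, and count every cell edge that borders empty space or the outside of the grid for its perimeter.

```python
# Sketch: flood fill each blob of '#' cells; area = cell count,
# perimeter = number of cell edges bordering '.' or the grid boundary.
from collections import deque

def largest_blob(grid):
    n = len(grid)
    seen = [[False] * n for _ in range(n)]
    best_area, best_perim = 0, 0
    for si in range(n):
        for sj in range(n):
            if grid[si][sj] != '#' or seen[si][sj]:
                continue
            area = perim = 0
            queue = deque([(si, sj)])
            seen[si][sj] = True
            while queue:
                i, j = queue.popleft()
                area += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] == '#':
                        if not seen[ni][nj]:
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                    else:
                        perim += 1          # this edge is exposed
            # larger area wins, ties broken by smaller perimeter
            if area > best_area or (area == best_area and perim < best_perim):
                best_area, best_perim = area, perim
    return best_area, best_perim

# n = int(input()); grid = [input() for _ in range(n)]
# print(*largest_blob(grid))
```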
The first line of input contains $N$, and the next $N$ lines describe the output of the machine. At least one '#' character will be present.
Please output one line containing two space-separated integers, the first being the area of the largest blob, and the second being its perimeter. If multiple blobs are tied for largest area, print the information for whichever of these has the smallest perimeter. | CommonCrawl |
If two neighboring cells become empty after you take a number, then the score is decreased by \(2(x\&y)\). Here \(x\) and \(y\) are the values that used to be on these two cells. Please pay attention that "neighboring cells" means there exists one and only one common border between the two cells.
Before you start the game, you are given some positions and the numbers on these positions must be taken away.
Can you help onmylove to calculate: what's the highest score onmylove can get in the game?
Multiple input cases. For each case, there are three integers \(n, m, k\) in a line.
\(n\) and \(m\) describe the grid, whose size is \(n\times m\). \(k\) means there are \(k\) positions whose numbers you must take.
Then \(n\) lines follow, each containing \(m\) numbers, representing the numbers on the \(n\times m\) grid cells. Then \(k\) lines follow. Each line contains two integers, representing the row and column of one position whose number you must take.
Also, the rows and columns are counted start from \(1\).
Limits: \(1\le n, m\le 50, 0\le k\le n \times m\), the integer in every cell is not more than \(1000\).
For each test case, output the highest score on one line. | CommonCrawl |
How to solve this question by using convolution?
I have to find $y[n]$. I have spent hours on this question but I am unable to solve it. Kindly please tell me the way to solve it.
(a) is the convolution symbol.
(b) is using the definition of discrete convolution.
(d) $2^n$ is independent of the summation index.
(e) $u[m]=1$ if $m\geq 0$ and $0$ otherwise.
(f) if $n-m<0$, or if $n<m$ then the summation is zero. So we have (for non-zero quantities), $m = 0 \ldots n$.
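The original expressions are not reproduced in this extract, but the labelled steps are consistent with something like $x[n] = 2^n u[n]$ convolved with $h[n] = u[n]$ (treat that as an assumption); in that case the sum evaluates to $y[n] = (2^{n+1} - 1)\,u[n]$, which is easy to check numerically:

```python
# Sketch: numeric check of (2^n u[n]) * u[n] = (2^(n+1) - 1) u[n],
# assuming these are the signals in the original problem (not shown here).
import numpy as np

N = 10
n = np.arange(N)
x = 2.0 ** n               # 2^n u[n], truncated to n = 0..N-1
h = np.ones(N)             # u[n], truncated likewise
y = np.convolve(x, h)[:N]  # the first N output samples are unaffected by truncation

print(np.allclose(y, 2.0 ** (n + 1) - 1))   # True
```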
| CommonCrawl |
What does this mean? I understand a stable filter is one whose coefficients $a[n]$ approach 0 as $n \rightarrow \infty$, and a causal filter is one which does not use values from the future (i.e. no negative time lags). I have read that the innovation in terms of stochastic processes is defined as the difference between the actual next value of a time series, and the value given by an optimal prediction based on the information already available.
I cannot, however, find any description of an innovation filter. I have a mathematics background rather than a DSP/engineering background, and so sometimes some of the terminology trips me up, which is why I was hoping DSP stack exhange could kindly help!
where $h(t)$ is the impulse response of the innovations filter.
The (causal and stable) inverse filter of $h(t)$ is called the whitening filter of $X(t)$. Its response to the input $X(t)$ is white noise $N(t)$.
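For reference (this is my paraphrase of the standard statement in Papoulis, not part of the quoted passage): a regular process admits the innovations representation $$X(t) = \int_0^{\infty} h(\tau)\, N(t - \tau)\, d\tau, \qquad S(\omega) = |H(\omega)|^2,$$ where $N(t)$ is white noise and $H(\omega)$ is the frequency response of the causal, stable, minimum-phase filter $h(t)$; such a factorization exists exactly when $S(\omega)$ satisfies the Paley-Wiener condition $\int_{-\infty}^{\infty} \frac{|\log S(\omega)|}{1 + \omega^2}\, d\omega < \infty$.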
This implies that $S(\omega)$ cannot contain spectral lines, and it cannot be band-limited. Unlike a singular process (consisting of spectral lines), a regular process cannot be parameterized by a finite set of random variables and it is not predictable, i.e., it is not completely determined in terms of its past.
Reference: Probability, Random Variables, and Stochastic Processes, A.Papoulis, Athanasios 1965. McGraw-Hill.
| CommonCrawl |
Latent Dirichlet Allocation (LDA) is an algorithm used to discover the topics that are present in a corpus. See the slides for details.
Non-Negative Matrix Factorization is a dimension reduction technique that factors an input matrix of shape $m \times n$ into a matrix of shape $m \times k$ and another matrix of shape $n \times k$.
In text mining, one can use NMF to build topic models. Using NMF, one can factor a Term-Document Matrix of shape documents x word types into a matrix of documents x topics and another matrix of shape word types x topics. The former matrix describes the distribution of each topic in each document, and the latter describes the distribution of each word in each topic.
Non-negative Matrix Factorization (NMF) can also be used to find topics in text. The mathematical basis underpinning NMF is quite different from LDA: LDA is based on probabilistic graphical modeling while NMF relies on linear algebra. Both algorithms take as input a bag-of-words matrix (i.e., each document represented as a row, with each column containing the counts of the words in the corpus). The aim of each algorithm is then to produce two smaller matrices: a document-to-topic matrix and a word-to-topic matrix that, when multiplied together, reproduce the bag-of-words matrix with the lowest error.
NMF sometimes produces more meaningful topics for smaller datasets.
NMF has been included in scikit-learn. scikit-learn brings API consistency which makes it almost trivial to perform Topic Modeling using both LDA and NMF.
Scikit Learn also includes seeding options for NMF which greatly helps with algorithm convergence and offers both online and batch variants of LDA.
The creation of the bag-of-words matrix is very easy in scikit-learn. All the heavy lifting is done by the feature extraction functionality provided for text datasets. For NMF, the bag-of-words matrix is built with the TfidfVectorizer, which applies a tf-idf weighting to the counts that NMF must process.
LDA on the other hand, being a probabilistic graphical model (i.e. dealing with probabilities) only requires raw counts, so a CountVectorizer is used. Stop words are removed and the number of terms included in the bag of words matrix is restricted to the top 1000.
As mentioned previously the algorithms are not able to automatically determine the number of topics and this value must be set when running the algorithm. Comprehensive documentation on available parameters is available for both NMF and LDA. Initialising the W and H matrices in NMF with 'nndsvd' rather than random initialisation improves the time it takes for NMF to converge. LDA can also be set to run in either batch or online mode. | CommonCrawl |
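A rough sketch of the workflow described above (the toy corpus and the number of topics are placeholders, not part of the original text):

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation

documents = ["the cat sat on the mat", "dogs and cats are pets",
             "stock markets fell sharply", "investors sold shares"]  # toy corpus
n_topics = 2

# NMF works on a tf-idf weighted bag-of-words matrix
tfidf_vec = TfidfVectorizer(max_features=1000, stop_words='english')
tfidf = tfidf_vec.fit_transform(documents)
nmf = NMF(n_components=n_topics, init='nndsvd', random_state=0)
doc_topic_nmf = nmf.fit_transform(tfidf)      # documents x topics
topic_word_nmf = nmf.components_              # topics x word types

# LDA works on raw counts
count_vec = CountVectorizer(max_features=1000, stop_words='english')
counts = count_vec.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=n_topics,
                                learning_method='online', random_state=0)
doc_topic_lda = lda.fit_transform(counts)

# Top words per NMF topic
terms = tfidf_vec.get_feature_names_out()
for k, row in enumerate(topic_word_nmf):
    top = row.argsort()[::-1][:5]
    print("topic", k, [terms[i] for i in top])
```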
APS 59th Annual Meeting of the APS Division of Plasma Physics: Potential profile near the virtual cathode in a dusty plasma device.
The existence of a virtual cathode in the presence of a dusty plasma has been studied by theoretical and numerical analysis. Using the basic equations for charged dust, ions and electrons, the behavior of the potential in the presence of dust has been calculated and plotted as a function of dust density. It was found that there is a change in the potential difference between the cathode and the sheath potential, which changes the threshold wall temperature compared to the normal plasma condition. The threshold wall temperature is increased because the micro-particles acquire some electron charge and hence reduce the potential at the wall. Further, for different values of $\alpha$ (which depends on dust density), the threshold temperature remained the same for an observed virtual cathode. Hence, the behavior of the potential was plotted for different $\alpha$ with increasing wall temperatures. It has been observed that, at lower dust density, a double-layer-like structure is formed near the virtual cathode. The occurrence of two virtual cathodes is observed, one before the threshold temperature and one after it. However, irrespective of the variation of the potential difference near the wall and the existence of two virtual cathodes, the threshold temperature remained the same. | CommonCrawl
The blue squares represent the possible positions that the knight can move given it's position. So the question is, "Can a knight visit each square of a chessboard exactly once and return to the starting square if the chessboard is of size $m \times n$ ?"
Notice that the only way for the knight to reach both corners of the board is to start on either of the corners. Hence, let's arbitrarily start at the vertex incident to the blue and purple edge. The knight can either travel to vertex $a$ or $b$ (which is symmetrically the same) and then travel to the other corner of the board. However, then the knight must travel to vertex $a$ or $b$ again (the opposite from the vertex picked originally). But then the only two vertices returning to the corner of the board have been used, so the knight cannot return to its start square. Hence there is no knight's tour on a $4 \times 4$ board.
Once again, no closed knight's tour exists. The only way for us to reach every corner of the board is via the vertices $a$, $b$, $c$, or $d$. Starting at the corner of the board and traversing along, we eventually get to vertex $d$ without filling up all of the other squares first. Hence both vertices $a$ and $d$ were used, so we cannot return to the start point.
Actually, no knight's tours exist on an $m \times m$ board where $m$ is an odd integer, and no knight's tours exist on a $4 \times 4$ board as we examined earlier. | CommonCrawl |
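The claims for small boards can also be checked by brute force. The sketch below searches for a closed tour by backtracking; it is only feasible for small boards, and the starting square is irrelevant because a closed tour is a cycle:

```python
def closed_knight_tour_exists(rows, cols):
    """Backtracking search for a closed knight's tour on a rows x cols board."""
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    total = rows * cols
    start = (0, 0)
    visited = {start}

    def extend(square, count):
        r, c = square
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            nxt = (nr, nc)
            if count == total and nxt == start:
                return True           # back at the start after visiting every square
            if nxt in visited or count == total:
                continue
            visited.add(nxt)
            if extend(nxt, count + 1):
                return True
            visited.remove(nxt)
        return False

    return extend(start, 1)

print(closed_knight_tour_exists(4, 4))   # False, matching the argument above
```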
It is clear that the trend is rising, but I would like to know how to determine the rate of change, so that I can tell when the rising trend is slowing. For example here, if 110 is a local max, then the rate of change from there goes from positive to negative.
How to calculate floating point numbers?
Why is $3 \times 0.3 = 0.8999999999999999$ in floating point?
How to determine a number closest to a given number in floating point. | CommonCrawl |
Number theory is one of the most important topics in the field of Math and can be used to solve a variety of problems. Many times one might have come across problems that relate to the prime factorization of a number, to the divisors of a number, to the multiples of a number and so on.
Euler's Totient function counts the numbers that are coprime to a certain number $$X$$ and less than or equal to it. In short, for a certain number $$X$$ we need to find the count of all numbers $$Y$$ where $$ gcd(X,Y)=1 $$ and $$1 \le Y \le X$$.
A naive method to do so would be to Brute-Force the answer by checking the gcd of $$X$$ and every number less than or equal to $$X$$ and then incrementing the count whenever a $$GCD$$ of $$1$$ is obtained. However, this can be done in a much faster way using Euler's Totient Function.
According to Euler's product formula, the value of the Totient function is given by a product over all distinct prime factors of a number. This formula simply states that the value of the Totient function is obtained by multiplying the number $$N$$ by $$(1-(1/p))$$ for each distinct prime factor $$p$$ of $$N$$.
Generate a list of primes.
While dealing with a certain $$N$$, check and store all the primes that perfectly divide $$N$$.
Now, use these primes and the above formula to get the result, as in the sketch below.
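A minimal sketch of these three steps (function and variable names are mine, not from the original text):

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

def totient(n, primes):
    """Euler's product formula: phi(n) = n * prod(1 - 1/p) over distinct primes p | n."""
    result = n
    remaining = n
    for p in primes:
        if p * p > remaining:
            break
        if remaining % p == 0:
            result -= result // p          # multiply by (1 - 1/p) in integer arithmetic
            while remaining % p == 0:
                remaining //= p
    if remaining > 1:                      # leftover prime factor larger than sqrt(n)
        result -= result // remaining
    return result

primes = primes_up_to(1000)
print(totient(36, primes))   # 12
print(totient(97, primes))   # 96, since 97 is prime
```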
There are a few subtle observations that one can make about Euler's Totient Function.
The sum of all values of Totient Function of all divisors of $$N$$ is equal to $$N$$.
The value of Totient function for a certain prime $$P$$ will always be $$P-1$$ as the number $$P$$ will always have a $$GCD$$ of $$1$$ with all numbers less than or equal to it except itself.
For two numbers $$A$$ and $$B$$, if $$GCD(A,B)=1$$ then $$Totient(A) \times Totient(B) = Totient(A \cdot B)$$. | CommonCrawl
The seminar series, Homological Mirror Symmetry, will be held on selected Thursdays from 2PM – 4pm in CMSA Building, 20 Garden Street, Room G10.
Abstract: This is the first talk of the seminar series. We survey the statement of Homological Mirror Symmetry (introduced by Kontsevich in 1994) and some known results, as well as briefly discussing its importance, and the connection to other formulations of Mirror Symmetry and the SYZ conjecture. Following that, we will begin to review the definition of the A-side (namely, the Fukaya category) in some depth. No background is assumed! Also, in the last half hour, we will divide papers and topics among participants.
Abstract: In the second talk, we review (some) of the nitty-gritty details needed to construct a Fukaya categories. This include basic Floer theory, the analytic properties of J-holomorphic curves and cylinders, Gromov compactness and its relation to metric topology on the compactified moduli space, and Banach setup and perturbation schemes commonly used in geometric regularization. We then proceed to recall the notion of an operad, Fukaya's differentiable correspondences, and how to perform the previous constructions coherently in order to obtain $A_\infty$-structures. We will try to demonstrate all concepts in the Morse theory 'toy model'.
which use homological algebra techniques and formal deformation theory of Lagrangians etc.
Abstract: We will review the semi-flat mirror symmetry setting in Strominger-Yau-Zaslow, and discuss the correspondence between special Lagrangian sections on the A-side and deformed Hermitian-Yang-Mills connections on the B-side using real Fourier-Mukai transform, following Leung-Yau-Zaslow.
Abstract: While mirror symmetry was originally conjectured for compact manifolds, the phenomenon applies to non-compact manifolds as well. In the setting of Liouville domains, a class of open symplectic manifolds including affine varieties, cotangent bundles and Stein manifolds, there is an A-infinity category called the wrapped Fukaya category, which is easier to define and often more amenable to computation than the original Fukaya category. In this talk I will construct it, along with symplectic cohomology (its closed-string counterpart), and compute some examples. We will then discuss how compactifying a symplectic manifold corresponds, on the B-side of mirror symmetry, to turning on a Landau-Ginzburg potential.
According to the SYZ conjecture, the mirror of a Calabi-Yau variety can be constructed by dualizing the fibers of a special Lagrangian fibration. Following Auroux, we consider this rubric for an open Calabi-Yau variety X-D given as the complement of a normal crossings anticanonical divisor D in X. In this talk, we first define the moduli space of special Lagrangian submanfiolds L with a flat U(1) connection in X-D, and note that it locally has the structure of a Calabi-Yau variety. The Fukaya category of such Lagrangians is obstructed, and the degree 0 part of the obstruction on L defines a holomorphic function on the mirror. This "superpotential" depends on counts of holomorphic discs of Maslov index 2 bounded by L. We then restrict to the surface case, where there are codimension 1 "walls" consisting of Lagrangians which bound a disc of Maslov index 0. We examine how the superpotential changes when crossing a wall and discuss how one ought to "quantum correct" the complex structure on the moduli space to undo the discontinuity introduced by these discs.
I will present Auroux-Katzarkov-Orlov's proof of one side of the homological mirror symmetry for Del Pezzo surfaces. Namely I will prove their derived categories are equivalent to the categories of vanishing cycles for certain LG-models together with B-fields. I plan to show how the general B-field corresponds to non-commutative Del Pezzo surfaces and time allowing may mention HMS for simple degenerations of Del Pezzo surfaces. The tools include exceptional collections( and mutations for degenerate case), explicit description of NC deformations, etc.
Abstract: In this talk I will discuss the Fukaya category of a toric manifold following the work of Fukaya-Oh-Ohta-Ono. I will start with an overview of the general structure of the Fukaya category of a compact symplectic manifold. Then I will consider toric manifolds in particular the Fano case and construct its mirror. | CommonCrawl |
Free variables occur often in practice. Handling free variables in interior point algorithms is a pending issue (see for example [Andersen2002], [Anjos2007], and [Meszaros1998]). Frequently users convert a problem with free variables into one with restricted variables by representing the free variables as a difference of two nonnegative variables. This approach increases the problem size and introduces ill-posedness, which may lead to numerical difficulties.
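As a toy illustration of that conversion, outside of VSDP and in SciPy rather than MATLAB, one can compare keeping a variable free with splitting it into a difference of two nonnegative variables; the tiny LP below is made up for the example:

```python
from scipy.optimize import linprog

# Toy LP: minimize x1 + 2*x2  subject to  x1 + x2 = 1,  x1 free,  x2 >= 0.

# Formulation 1: keep x1 free through its bounds.
res_free = linprog(c=[1, 2], A_eq=[[1, 1]], b_eq=[1],
                   bounds=[(None, None), (0, None)])

# Formulation 2: split x1 = x1p - x1m with x1p, x1m >= 0 (one extra variable).
res_split = linprog(c=[1, -1, 2], A_eq=[[1, -1, 1]], b_eq=[1],
                    bounds=[(0, None)] * 3)

print(res_free.fun, res_split.fun)   # same optimal value, but the split problem is larger
```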
Solver 'sdpt3': Normal termination, 1610.8 seconds.
SDPT3 solves the problem without warnings, although it is ill-posed according to Renegar's definition [Renegar1994].
Now we try to get rigorous error bounds using the approximation of SDPT3.
Solver 'sdpt3': Unknown, 1434.9 seconds, 1 iterations.
Solver 'sdpt3': Normal termination, 3481.1 seconds, 2 iterations.
These results reflect that the interior of the dual feasible solution set is empty. An ill-posed problem has the property that the distance to primal or dual infeasibility is zero. If as above the distance to dual infeasibility is zero, then there are sequences of dual infeasible problems with input data converging to the input data of the original problem. Each problem of the sequence is dual infeasible and thus has the dual optimal solution $-\infty$. Hence, the result $-\infty$ of rigorous_lower_bound is exactly the limit of the optimal values of the dual infeasible problems and reflects the fact that the distance to dual infeasibility is zero. This demonstrates that the infinite bound computed by VSDP is sharp, when viewed as the limit of a sequence of infeasible problems. We have a similar situation if the distance to primal infeasibility is zero.
If the free variables are not converted into restricted ones then the problem is well-posed and a rigorous finite lower bound can be computed.
Solver 'sdpt3': Normal termination, 1567.6 seconds.
Solve perturbed problem using 'sdpt3'.
Solver 'sdpt3': Normal termination, 1579.5 seconds, 1 iterations.
Normal termination, 6.9 seconds, 0 iterations.
Therefore, without splitting the free variables, we get rigorous finite lower and upper bounds of the exact optimal value with an accuracy of about eight decimal digits. Moreover, verified interior solutions are computed for both the primal and the dual problem, proving strong duality.
In Table benchmark_dimacs_free_2012_12_12.html we display rigorous bounds for the optimal value of eight problems contained in the DIMACS test library that have free variables (see [Anjos2007] and [Kobayashi2007]). These problems have been modified by reversing the substitution of the free variables. We have listed the results for the problems with free variables and for the same problems when representing the free variables as the difference of two nonnegative variables. The table contains the rigorous upper bounds $fU$, the rigorous lower bounds $fL$, and the computing times measured in seconds for the approximate solution $t_s$, the lower bound $t_u$, and the upper bound $t_l$, respectively. The table demonstrates the drastic improvement if free variables are not split.
Independent of the transformation of the free variables, the primal problems of the nql instances are ill-posed. The weak error bound of the optimal solution for the qssp180 instance is due to the large number of equality constraints: a system with 130141 equality constraints and 261365 variables has to be solved rigorously. In the next version of VSDP the accuracy for such large problems will be improved. | CommonCrawl
Abstract: We introduce a wide class of countable groups, called properly proximal, which contains all non-amenable bi-exact groups, all non-elementary convergence groups, and all lattices in non-compact semi-simple Lie groups, but excludes all inner amenable groups. We show that crossed product II$_1$ factors arising from free ergodic probability measure preserving actions of groups in this class have at most one weakly compact Cartan subalgebra, up to unitary conjugacy. As an application, we obtain the first $W^*$-strong rigidity results for compact actions of $SL_d(\mathbb Z)$ for $d \geq 3$. | CommonCrawl |
Here $n$ and $k$ are positive integer numbers, and all the numbers $i_1, i_2, \ldots, i_k$ are positive integers.
The reader may be surprised to learn that this can be solved with cycle indices.
Combinatorial proof. What is this question asking? | CommonCrawl |
Find all quadruplet(s) of non-zero real numbers $ (a,b,c,d) $ such that $ a,b,c$ and $ d$ are roots (of $x$) to the equation $ x^4 + ax^3 + bx^2 + cx + d = 0 $.
My friend found a set of irrational roots that satisfy this condition through experimentation.
And no, I don't know how to solve this question properly.
This question was inspired by another math question in another math platform.
EDIT2: Changed the word "rational" to "irrational", because I'm a klutz.
Here is a largely modified version of my initial answer.
(thanks to @Yves Daoust who has pointed out a typo).
In particular, I have used powerful algebraic tools such as Gröbner basis (https://en.wikipedia.org/wiki/Gröbner_basis) and resultants (https://en.wikipedia.org/wiki/Resultant) relying on a computer algebra system (Mathematica) for computations.
1) First approach: Groebner basis of ((1),(2)).
This is a set of equivalent equations to (1)+(2), simpler in a certain sense, as we will see, at the price, in our case, of degree elevation.
equation (2') determines in a unique manner the value of $a$ once a value of $b$ is given.
$c=1/b=-1$, and, as a consequence, $d=-2a-b-c=0$, thus not acceptable.
$P_3(b)=0$ has no real solution, as can be "seen" either on the numerical values of its roots, all complex non real, or by having a look at its graphical representation as a function. But, till now, I haven't been able to prove it rigorously.
This "quatuor" $(a_0,b_0,c_0,d_0)$ is the unique solution of the problem for which $abcd \neq 0$.
Note: we have nothing to check because conditions (1') and (2') (which are equivalent to (1)+(2)) are fulfilled.
A resultant (equal to $0$) is a necessary and sufficient compatibility condition between two parametric polynomials (here $f(a,b)$ and $g(a,b)$) for them to have at least one common root. There are two ways to consider this issue (with notations of "abstract algebra": $K[ a,b ] \approx K [a ] [ b ] \approx K [ b ] [ a ]$).
either $a$ is considered as the main variable and $b$ is a parameter. In this case the resultant will generate a condition on $b$, which is exactly the same as condition (1').
which is an interesting relationship giving solutions $a=1$, $a=0$ (not possible) and no other solution because the third factor (the 14th degree polynomial) has no real roots. The interesting thing in (3) is that $a$ can - theoretically - play the main rôle in the computation of the solution as $b$ has played upwards.
The first two factors $b(a-1)$ account for special values $b=0$ (not considered) and $a=1$, which has been the final value for the solution.
with roots: $x_1 = 1$ , $x_2 = -1.7548776662316001862...$ , $x_3 = -0.5698402909809427985...$ $x_4=0.32471795721254298472...$.
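A quick numerical sanity check of this quadruple (the values are rounded from the roots quoted above; this only verifies the claim, it does not derive it):

```python
import numpy as np

# Approximate quadruple (a, b, c, d) from the answer above: the four numbers are
# themselves the four roots of x^4 + a x^3 + b x^2 + c x + d.
a, b, c, d = 1.0, -1.7548776662316, -0.5698402909809, 0.3247179572125

coeffs = [1.0, a, b, c, d]
for r in (a, b, c, d):
    print(r, np.polyval(coeffs, r))   # each residual should be essentially 0

print(np.roots(coeffs))               # the roots reproduce {a, b, c, d}
```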
Roots of a polynomial equation are homogeneous?
How to find the value of $(a+b+c)(a+b+d)(a+c+d)(b+c+d)$ from the following equation?
What are the roots of this equation? | CommonCrawl |
Let $X$ be a connected CW complex and $G$ a group such that every homomorphism $\pi_1(X)\to G$ is trivial. Show that every map $X\to K(G, 1)$ is nullhomotopic.
A $K(G,1)$ space is a path connected space whose fundamental group is isomorphic to a given group $G$ and which has a contractible universal covering space.
A map is nullhomotopic if is homotopic to the constant map.
Let $X$ be a connected CW complex and let $Y$ be a $K(G,1)$. Then every homomorphism $\pi_1(X, x_0)\to\pi_1(Y, y_0)$ is induced by a map $(X, x_0)\to(Y, y_0)$ that is unique up to homotopy fixing $x_0$ .
Indeed, Theorem 1B.9 is a good tool here. In particular, you'll want to use the uniqueness part of the theorem. That means that any two maps $(X,x_0)\to (K(G,1),y_0)$ which induce the same map on $\pi_1$ are homotopic. What does that tell you about a map $f:(X,x_0)\to (K(G,1),y_0)$ which induces the trivial homomorphism on $\pi_1$?
The answer is hidden below.
It tells you $f$ is homotopic to the constant map $(X,x_0)\to (K(G,1),y_0)$, since the constant map also induces the trivial homomorphism on $\pi_1$. So, if there are no nontrivial homomorphisms $\pi_1(X)\to G$, this applies to every map $f:X\to Y$ (for an appropriate choice of basepoints $x_0$ and $y_0$).
Does trivial fundamental group imply contractible?
How to prove that $\phi:G\rightarrow \pi_1(X/G,p(x_0))$ is a homomorphism of groups?
How does attaching a 1-cell to a path connected CW-complex affect the fundamental group?
$\langle X, K(G, 1)\rangle \to H^1(X; G)$ sending $f: X \to K(G, 1)$ to induced homomorphism $f_*: H_1(X) \to H_1(K(G, 1)) \approx G$ is a bijection? | CommonCrawl |
In this paper, inspired by the previous work of Franco Montagna on infinitary axiomatizations for standard BL-algebras, we focus on a uniform approach to the following problem: given a left-continuous t-norm *, find an axiomatic system (possibly with infinitary rules) which is strongly complete with respect to the standard algebra $[0, 1]_*$. This system will be an expansion of MTL (Monoidal t-norm based logic). First, we introduce an infinitary axiomatic system $L^\infty_*$, expanding the language with Delta and countably many truth-constants, and with only one infinitary inference rule, that is inspired in Takeuti-Titani density rule. Then we show that $L^\infty_*$ is indeed strongly complete with respect to the standard algebra $[0,1]_*$. Moreover, the approach is generalized to axiomatize expansions of these logics with additional operators whose intended semantics over [0,1] satisfy some regularity conditions. | CommonCrawl |
Abstract: We compute the colour fields of SU(3) lattice QCD created by static pentaquark systems, in a $24^3\times 48$ lattice at $\beta=6.2$ corresponding to a lattice spacing $a=0.07261(85)$ fm. We find that the pentaquark colour fields are well described by a multi-Y-type shaped flux tube. The flux tube junction points are compatible with Fermat-Steiner points minimizing the total flux tube length. We also compare the pentaquark flux tube profile with diquark-diantiquark central flux tube profile in the tetraquark and the quark-antiquark fundamental flux tube profile in the meson, and they match, thus showing that the pentaquark flux tubes are composed of fundamental flux tubes. | CommonCrawl |
How can I estimate how much memory will be needed to find eigenvalues and eigenvectors of a given large sparse matrix?
I have a real symmetric matrix with roughly $5 \times 10^4$ rows and columns, and an average of $10$ nonzero elements per row. I would like to find the smallest eigenvalue and the corresponding eigenvector, using the built-in Eigensystem function in Mathematica (which treats the matrix as sparse and uses an ARPACK Arnoldi algorithm). Is there a simple way of estimating how much memory this will take?
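A back-of-the-envelope estimate, independent of Mathematica's internals, is that the dominant costs are the sparse matrix itself and the Arnoldi/Lanczos basis vectors kept by ARPACK; the constants below (8-byte values and indices, about 20 basis vectors) are assumptions:

```python
n = 5 * 10**4        # matrix dimension
nnz = 10 * n         # about 10 nonzeros per row
ncv = 20             # assumed number of Arnoldi basis vectors kept by ARPACK

bytes_matrix = nnz * (8 + 8) + (n + 1) * 8   # CSR: values + column indices + row pointers
bytes_basis = ncv * n * 8                    # dense basis vectors of length n
print((bytes_matrix + bytes_basis) / 1e6, "MB, order of magnitude only")
```

So the dominant structures come to a few tens of megabytes, excluding workspace and any copies Mathematica makes internally.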
Does Lanczos have trouble with large matrix elements?
Why does PETSc matrix memory allocation improve performance so much? | CommonCrawl |
This is the fourth in a series of blog posts about my genome, which I recently had sequenced through Illumina's Understand Your Genome program.
Last week's data wrangling produced eight FASTQ files containing the sequencing reads for my genome ($8=4 \times 2$, four lanes' worth of paired-end reads). The next step in making sense of these 1.3 billion reads is to map their positions of origin in the human reference genome assembly. This post will continue somewhat down in the weeds technically, but we'll end up in position to look at some interesting genetics next time.
If you're not interested in the technical minutiae - and that would be fair enough - you could skip to the last section.
This shows that, across all the 100bp reads in one pair of my FASTQ files, the Phred quality scores assigned by the sequencer tend to be high across most of the read, but degrade to either end. That's typical with current technology. My other FastQC results also looked as expected.
Mapping is the process of taking each 100bp sequencing read and determining the most likely corresponding position in the human genome reference assembly, enabling us to locate variants throughout my genome. Mapping is challenging for many reasons, such as: (1) the reads contain outright sequencing errors, in addition to bona fide differences between my genome and the reference; (2) the human genome contains extensive repetition, making 100bp reads sometimes difficult to uniquely place; (3) the paired reads convey implicit proximity information that can be tricky to use to maximum effect; (4) with 1.3 billion reads to map, it has to be done efficiently.
Aside: it would be great to produce a de novo assembly of my genome from the reads, rather than just mapping them onto a generic reference assembly. One day this will be standard, and I'll explore it a little bit in a future post - but with current technology, it's just far more difficult than mapping.
BWA-MEM will take my FASTQ reads and produce a BAM file encoding the mappings, which we'll eventually feed to a variant calling algorithm. Best practices call for a few additional "finishing" steps on the mappings beforehand, however. One is deduplication, flagging or removing reads that are redundant in a certain sense - arising from PCR amplification prior to sequencing - which could otherwise bias downstream analyses. I'll perform deduplication using Picard MarkDuplicates, which is pretty standard.
There are additional finishing steps in common use today, including disposal of low-quality reads or mappings, recalibration of base quality scores, and local realignment around indels. I'll save myself the trouble because it seems the very latest generation of variant callers don't benefit much from these steps.
I'm running BWA-MEM separately on each of my four pairs of FASTQ files. It's possible to run it on all of the FASTQ files at once, but it'll be useful later on to preserve the information of where each read arose from, and mapping the four "read groups" separately is the easiest way to do so.
Then I'm using bamtools to merge the four intermediate BAM files into one, preserving the read group information, prior to deduplication.
I'm creating a BAM index, which is useful for various downstream programs.
I wrote an applet to report various statistics about the BAM files, useful for sanity checking the results.
Lastly, I'm importing the mappings into a format suitable for our genome browser.
This workflow, as well as those from previous and future episodes, is available in this DNAnexus public project (free account required).
You can find the definition of each column in the SAM format specification. Briefly, each row includes a read identifier, its sequence and quality scores, a position in the reference genome assembly, and a "CIGAR string", a compact encoding of how the read differs from the reference at the specified position.
This plot shows a (complementary) cumulative distribution function for the basewise coverage of the mappings, or how many mapped reads cover each nucleotide position in the reference assembly. The black curve represents the complete hs37d5 reference assembly, which is about 3.2 Gbp, while the blue curve represents only positions within Consensus CDS (CCDS), a conservative annotation of protein-coding exons, totaling about 32 Mbp.
According to this plot, the typical (median) position is covered by about 40 mapped reads, a depth sufficient to enable confident variant detection despite occasional errors and noise in the sequencing and mapping. About 7% of the hs37d5 reference has no coverage at all - largely accounted for by centromeres and telomeres, but including other difficult regions as well - while almost all CCDS exon positions are covered by at least several reads. At the other extreme, very few positions have more than 70-fold coverage.
It's interesting that the curves cross, and the typical CCDS position has slightly lower coverage than the genome-wide median. This could have to do with the relative GC-richness of protein-coding regions, which might make sequencing or mapping to them a little less efficient. In any case, the difference is slight and not too concerning.
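For what it's worth, a complementary cumulative distribution like the one described here is easy to compute from a per-base depth array; the depths below are simulated stand-ins rather than the actual sequencing data:

```python
import numpy as np

# Simulated per-base read depths standing in for real genome-wide coverage.
rng = np.random.default_rng(0)
depth = rng.poisson(lam=40, size=1_000_000)

thresholds = np.arange(0, 81)
ccdf = np.array([(depth >= t).mean() for t in thresholds])  # P(coverage >= t)

print("median coverage:", np.median(depth))
print("fraction with zero coverage:", (depth == 0).mean())
print("fraction covered by at least 40 reads:", ccdf[40])
```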
We're looking at a 150bp stretch of the reference chromosome 12. Each of the light blue and green bars represents the mapping of one of my 100bp reads. (The color reflects the strand orientation of the read with respect to the reference, which is basically random and not very interesting in WGS, except to account for the lower reliability of the 3' ends of the reads.) We can see that my reads cover the genome many times over, which is typical, as shown previously. The reference DNA sequence is displayed just above the mappings.
The next algorithmic analysis step will use the mappings to detect variants across my whole genome - similar, to a first approximation, to the logic we just applied to this one locus. First, however, I'll have much more to say about this particular homozygous variant; it wasn't mentioned in Illumina's clinical interpretation of my genome, but it's actually had a significant effect on my adult life. | CommonCrawl |
We analyze the statistics of an estimator, denoted by $\xi_t$ and referred to as the slave, for the equilibrium susceptibility of a one dimensional Langevin process $x_t$ in a potential $\phi(x)$~. The susceptibility can be measured by evolving the slave equation in conjunction with the original Langevin process. This procedure yields a direct estimate of the susceptibility and avoids the need, when performing numerical simulations, to include applied external fields explicitly. The success of the method however depends on the statistical properties of the slave estimator. The joint probability density function for $x_t$ and $\xi_t$ is analyzed. In the case where the potential of the system has a concave component the probability density function of the slave acquires a power law tail characterized by a temperature dependent exponent. Thus we show that while the average value of the slave, in the equilibrium state, is always finite and given by the fluctuation dissipation relation, higher moments and indeed the variance may show divergences. The behavior of the power law exponent is analyzed in a general context and it is calculated explicitly in some specific examples. Our results are confirmed by numerical simulations and we discuss possible measurement discrepancies in the fluctuation dissipation relation which could arise due to this behavior. | CommonCrawl |
A recurring task: the tracking of large numbers of cells or particles and the analysis of their (morpho)dynamic behavior.
the development of a vast spectrum of fluorescent proteins and nanocrystals and groundbreaking advances in optical microscopy technology have made live imaging of dynamic processes at the cellular and molecular levels possible.
simplest: link every segmented cell in any given frame to the nearest cell in the next frame, where "nearest" may refer not only to spatial distance but also to difference in intensity, volume, orientation, and other features.
Individual proteins or other (macro) molecular complexes within cells (collectively referred to as particles) are hardly (if at all) visible in bright field or phase-contrast microscopy and require fluorescent labeling and imaging.
pure random walk (such as Brownian motion of particles): $MSD(t) = cDt$, with $c=4$ in 2D and $c=6$ in 3D, and where $D$ denotes the so-called diffusion coefficient.
motion impeded by obstacles: $MSD(t)=cDt^\alpha$ with $\alpha < 1$.
motion confined to some region: $MSD(t)=R[1-a_1\exp(-a_2cDt/R)]$.
directed motion (flow) in addition to diffusion: $MSD(t)=cDt+(vt)^2$.
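A minimal sketch of estimating an MSD curve from a single 2D track and comparing it with the pure-diffusion model $MSD(t)=4Dt$ (the trajectory and the value of $D$ are simulated assumptions):

```python
import numpy as np

def mean_squared_displacement(track, max_lag):
    """track: (T, d) array of positions; returns MSD for lags 1..max_lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = track[lag:] - track[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

# Simulated 2D Brownian track with diffusion coefficient D and time step 1.
rng = np.random.default_rng(1)
D, T = 0.5, 10_000
track = np.cumsum(rng.normal(scale=np.sqrt(2 * D), size=(T, 2)), axis=0)

lags = np.arange(1, 51)
msd = mean_squared_displacement(track, 50)
print(np.polyfit(lags, msd, 1)[0] / 4)   # slope/4 should be close to D = 0.5
```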
measures of size and orientation: perimeter (surface area), area (volume), and the major and minor axes.
aggregation: object based (a histogram of the per-object mean speeds will reveal objects consisting of two subpopulations) or frame based (detection of different modes of motion). | CommonCrawl |
The surface parametrized by the family of lines lying on a non-singular cubic hypersurface $V_3\subset P^4$. G. Fano studied the family of lines $F(V_3)$ on a three-dimensional cubic.
Through a generic point of a non-singular cubic $V_3\subset P^4$ there pass exactly 6 lines lying on it, and the Fano surface $F(V_3)$ is a non-singular irreducible reduced algebraic surface of geometric genus $p_g=10$ and irregularity $q=5$, with topological Euler characteristic (in case $k=\mathbf C$) equal to 27. From the Fano surface $F(V_3)$ one can reconstruct the cubic $V_3$.
| CommonCrawl
Indefinite integral of interpolating polynomial. The blue curve is the graph of a polynomial $f(x)$. You can change $f$ by dragging the blue points, as $f$ is an interpolating polynomial through those points. The indefinite integral $\int f(x)dx$ of the function $f(x)$ is shown by the red curve. Since the slope of tangent line to the integral is the function itself, the integral $\int f(x)dx$ increases when the function $f$ is positive, is horizontal where the function $f$ is zero, and is decreasing where the function $f$ is negative. Since the integral $\int f(x) dx$ is determined only up to a constant, you can raise or lower the function by dragging the red point up and down. All these vertical translations of the red curve are the integral of $f$. To test your ability to estimate the integral from the function, you can uncheck the "show integral" checkbox and attempt to sketch what you think the integral is. Alternatively, you can uncheck the "show function" checkbox to test your ability to sketch the function from its integral.
Indefinite integral of a function. The function $f(x)$ is plotted by the thick blue curve. If it can be calculated, the function's indefinite integral $\int f(x) dx$ is shown by the thin red curve. You can move the large green diamond along the graph of $\int f(x)dx $ by dragging with your mouse; its $x$-coordinate is $x_0$. A tangent line to $\int f(x)dx$ calculated at $x=x_0$ is shown by the green line. Its slope is the value $f(x_0)$ of the function $f$ itself evaluated at $x=x_0$. This slope is also displayed by the smaller green diamond on the graph of $f$, which is at the point $(x_0,f(x_0))$. As you change $x_0$, this smaller diamond representing the slope traces out the graph of the function itself. You can change $f(x)$ by typing a new value in its box. The value of $\int f(x)(x)$ is displayed to the right of the box. Since you can always add an arbitrary constant $C$ to the integral, you can move the graph of $\int f(x) dx$ up and down by dragging the red point. You can hide items by unchecking the corresponding check boxes in order to test yourself on how well you can determine the indefinite integral from the function or vice versa. You can use the buttons at the top to zoom in and out as well as pan the view.
Developing intuition about the indefinite integral by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us. | CommonCrawl |
I have encountered this inequality in Spivak's Calculus (first chapter exercises), which I'm not sure how to solve.
I might be wrong, but my gut feeling says the inequality holds for all $x$ in $(-\infty, 1)$, but I cannot prove it.
As I read, there seems to be no standard scheme for solving this type of inequality/equation. How do you then usually proceed when dealing with one like the above? Thanks.
Because $x$ and $3^x$ are increasing, so is $x+3^x$.
Defining $$f(x)=3^x+x-4$$, we get $$f'(x)=3^x\ln(3)+1>0.$$ Can you finish?
What books are prerequisites for Spivak's Calculus?
How can show the following inequality exponential? | CommonCrawl |
The following code solves the N-queens problem in C++ using backtracking. I saw other people's solutions and they are very short, like 20 to 30 lines. Is there a way to improve the following code?
// * Ends when row is 0 and col is n!
Welcome to Code Review. This post is a mixture between review, remarks, tutorial and general guidelines. Overall, your code has two complexity issues: space complexity due to types (see "Tip: Use implicit data") and algorithmic complexity due to algorithms (see "Diagonals"). Several of your references should be const (see "Tip: Use const Type& if you don't want to change the argument"), and some of your names could be chosen better.
Every section of this post should be readable on its own, however, some code uses declarations and types written in earlier examples, so keep that in mind.
When we use a backtracking algorithm, a stack is usually used throughout its execution. In programming languages that provide function calls, there is always an implicit stack present: the function call stack. We have a caller and a callee. The caller calls the callee, and at some point, the callee either returns and hands control back to the caller, or the callee terminates the program.
Why is this important? Because it provides a tool for backtracking problems with limited depth: instead of an explicit stack, we can use the implicit one. So if you were to solve the N-queens problem for a limited \$N\$, the use of the implicit stack would be a lot simpler. But more on that later.
the Stack's elements always consist of exactly two values, so a std::pair<int,int> is more appropriate.
Although both type synonyms have the same meaning, their different name already tells us that we want to use a stack or a board at certain positions. Still, std::vector<std::pair<int,int>> would be more fitting for the stack.
Several functions have an int n parameter you never use. You should remove that. Also, enable compiler warnings. GCC and Clang use -Wall.
Your main doesn't have a return type. Add int.
However, even with that in mind (and std::pair), we still carry around a lot of information with us all the time. The Board gets updated in every iteration, whenever we put a new queen on the board. We need that Board for print_board, right?
That will take at most \$\mathcal O(n^2)\$. Since print_board will also take \$\mathcal O(n^2)\$, we're not going to increase the asymptotic complexity of our program.
That way the function's type already tells us that this function won't change its argument, and we will get a compiler error if we erroneously try to.
The same holds for all the check_ functions, see declarations above.
Throughout the range-based for loops, it and s get used. What's it? For example, what is it in the following loop?
Well, it's the Board's row. So we should call it row, not it. it is fine if we use iterators, but with range-based for loops, we already have a value or a reference at hand, not a raw iterator. The name it is therefore misleading.
For s, we can use placement, position, pos or even queen. The Stack can actually get renamed to Placements or even Queens, as it contains the queens' positions. The form can yield a name, but the name should foremost fit the contents.
By the way, the check_ functions are ambiguous. If a check returns true, does that mean that the queen is safe? Or does true mean that the queen is threatened? A name like can_place_, is_safe_ or threatens_ is unambiguous.
Even with those small tips, the code will be larger than the other variants you've encountered. That's due to a small, but significant optimization that's usually applied to the board: we don't store the row. Indeed, one dimension is never stored, at all. It's already implicit in the Stack, if we reorder it.
But if that holds, we can just get rid of the row and use using stack_type = std::vector<int>.
If we have x = s - row and y = -(s - col), we can interpret x as the rows we need to traverse to get from s to the new queen, and y as the respective columns in the other direction. If both are the same, both queens are on the same diagonal (this time the right-to-left diagonals).
To get back to the variant with the implicit row, just remember that Stack[i] is the queen at i, Stack[i], and col will be the Stack.size(), col queen (the row is now implicit!).
# That's actually almost the whole valid python solution.
It's a little bit longer, but that's the complete C++ code necessary to solve the problem, we just need to call solve on an empty initial stack. Every recursive function can get rewritten into a non-recursive one if we use an explicit stack, however, we will end up with a function that's similar to your original one, so that's left as an exercise.
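For reference, a Python sketch of the recursive formulation with the implicit row (each entry of queens is the column of the queen placed in that row); this mirrors the idea above but is not the elided code from the review:

```python
def solve(n, queens=()):
    """Count N-queens solutions; queens[i] is the column of the queen in row i."""
    row = len(queens)
    if row == n:
        return 1
    count = 0
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(queens)):
            count += solve(n, queens + (col,))
    return count

print(solve(8))   # 92
```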
| CommonCrawl
Abstract: Collective behaviour in suspensions of microswimmers is often dominated by the impact of long-ranged hydrodynamic interactions. These phenomena include active turbulence, where suspensions of pusher bacteria at sufficient densities exhibit large-scale, chaotic flows. To study this collective phenomenon, we use large-scale (up to $N=3\times 10^6$) particle-resolved lattice Boltzmann simulations of model microswimmers described by extended stresslets. Such system sizes enable us to obtain quantitative information about both the transition to active turbulence and characteristic features of the turbulent state itself. In the dilute limit, we test analytical predictions for a number of static and dynamic properties against our simulation results. For higher swimmer densities, where swimmer-swimmer interactions become significant, we numerically show that the length- and timescales of the turbulent flows increase steeply near the predicted finite-system transition density. | CommonCrawl |
bosons and fermions are fundamentally different for the case of a 1D compact ring.
Is this true? How is the Bosonization/Fermionization different on a line segment or a compact ring? Does it matter whether the line segment is finite $x\in[a,b]$ or infinite $x\in(-\infty,\infty)$? Why? Can someone explain it physically? Thanks!
An alternative, algebraic way to introduce interactions. Are there other ways out there?
$\phi^4$ theory kinks as fermions?
Why do we assume the spatial volume is infinite? | CommonCrawl |
The answer, I reckon, is that as stated it isn't.
However, the hydrolysis of ATP to AMP and PPi yields considerably more free energy than does the hydrolysis of ATP into ADP and Pi.
That is, the free energy change for the hydrolysis of ATP into AMP and PPi is considerably more negative than that for the hydrolysis of ATP into ADP and Pi and for the hydrolysis of ADP into AMP and Pi.
If we consider the free energy change for the reaction ATP = AMP + 2 Pi (where, say, the pyrophosphate (PPi) produced by argininosuccinate synthetase is broken down by a pyrophosphatase), then it is equivalent to the hydrolysis of 2 ATP to ADP and Pi and (more or less) to the hydrolysis of 2 ADP to AMP and Pi.
When dealing with ATP hydrolysis, we must take the 'chemical environment' of the phosphate group into account, not only in determining whether the bond is 'high energy' (in the Lipmann sense) or 'low energy', but also when considering the amount of free energy released by hydrolysis of a 'high energy' linkage: not all pyrophosphate linkages yield the same free energy on hydrolysis.
Cleavage of the 'inner' pyrophosphate linkage in ATP (to give AMP and PPi) 'releases' considerably more free energy than cleavage of the pyrophosphate linkage in ADP (to give AMP and Pi).
Cleavage of the 'inner' pyrophosphate linkage in ATP (to give AMP and PPi) releases considerably more free energy than cleavage of the 'outer' pyrophosphate linkage (to give ADP and Pi).
Hydrolysis of ATP to AMP and 2 Pi roughly releases 2 units.
Hydrolysis of AMP to adenosine and Pi roughly releases 0.4 units.
The question posed by the OP may now be answered as follows.
Hydrolysis of the inner pyrophosphate linkage of ATP (to give AMP) rather than the outer linkage (to give ADP) provides a 'thermodynamic pull' of 0.4 'free energy units' to the argininosuccinate synthase reaction. If the PPi is hydrolyzed by a pyrophosphatase an extra 0.6 'free energy units' are obtained and the free energy change for the combined reaction (that of argininosuccinate synthase and the pyrophosphatase) is equivalent to 2 'free energy units' (about -70 kJ mol-1).
As pointed out by Frey & Arabshahi (1995), hydrolysis of the $\alpha$,$\beta$-phosphoanhydride of ATP, rather than the $\beta$,$\gamma$ linkage, is a common strategy in biosynthetic reactions. To quote the final line of this paper: Cleavage of the $\beta$,$\gamma$-phosphoanhydride bridge in ATP takes place in metabolic reactions in which a smaller driving force is required.
It was not always accepted that the hydrolysis of ATP to AMP and PPi proceeded with a more negative free energy than the hydrolysis of ATP to ADP and Pi. To quote from Standard Free Energy Change for the Hydrolysis of the $\alpha, \beta$-Phosphoanhydride Bridge in ATP, by Frey & Arabshahi (1995).
The standard free energy of hydrolysis of the $\beta$,$\gamma$-phosphoanhydride of ATP to give ADP and Pi is about -32 to -36 kJ/mol.
The standard free energy of hydrolysis of ATP to AMP and 2 Pi is about -70 kJ mol-1.
(For a diagram illustrating the nomenclature of the $\alpha$, $\beta$ and $\gamma$ phosphates of ATP, see here).
Following Alberty (2000), all equations are written as 'biochemical equations' where everything is balanced except hydrogen ions.
[1 ] Frey, & Arabshahi (1995) give a value of -45.6 kJ mol-1 (-10.9 kcal mol-1).
Dixon et al (2000) give a value of -48.5 kJ mol-1 (-11.6 kcal mol-1).
Schuegraf et al (1960) give a value of -43 kJ mol-1 (-10.3 kcal mol-1).
Frey, & Arabshahi (1995) give a value of -32.6 kJ mol-1 (-7.8 kcal mol-1).
Rosing & Slater, 1972 give a value of -31.5 kJ mol-1 (-7.53 kcal mol-1).
Frey, & Arabshahi (1995) give a value of -19.24 kJ mol-1 (-4.6 kcal mol-1).
| CommonCrawl
A measure space $(M,\mathfrak B, \mu)$ (where $M$ is a set, $\mathfrak B$ is a $\sigma$-algebra of subsets of $M$, called measurable sets, and $\mu$ is a measure defined on the measurable sets), isomorphic to the "standard model", consisting of an interval $\Delta$ and an at most countable set of points $\alpha_i$ (in "extreme" cases this "model" may consist of just the interval $\Delta$ or of just the points $\alpha_i$) endowed with the following measure $\mathfrak m$: on $\Delta$ one takes the usual Lebesgue measure, and to each of the points $\alpha_i$ one ascribes a measure $\mathfrak m(\alpha_i) = \mathfrak m_i$; the measure is assumed to be normalized, that is, $\mu(M) = \mathfrak m(\Delta) + \sum\mathfrak m_i = 1$. The "isomorphism" can be understood here in the strict sense or modulo $0$; one obtains, respectively, a narrower or wider version of the concept of a Lebesgue space (in the latter case one can talk about a Lebesgue space modulo $0$). One can give a definition of a Lebesgue space in terms of "intrinsic" properties of the measure space $(M,\mathfrak B, \mu)$.
A Lebesgue space is the most frequently occurring type of space with a normalized measure, since any complete separable metric space with a normalized measure (defined on its Borel subsets and then completed in the usual way) is a Lebesgue space. Apart from properties common to all measure spaces, a Lebesgue space has a number of specific "good" properties. For example, any automorphism of a Boolean $\sigma$-algebra on a measure space $(\mathfrak B, \mu)$ is generated by some automorphism of a Lebesgue space $M$. Under a number of natural operations, from a Lebesgue space one again obtains a Lebesgue space. Thus, a subset $A$ of positive measure in a Lebesgue space $M$ is itself a Lebesgue space (its measurable subsets are assumed to be those that are measurable in $M$, and the measure is $\mu_A(X)=\mu(X) / \mu(A)$); the direct product of finitely or countably many Lebesgue spaces is a Lebesgue space. Other properties of Lebesgue spaces are connected with measurable partitions (cf. Measurable decomposition).
Cf. also [a1] for a discussion of Lebesgue spaces and measurable partitions, including an intrinsic description of Lebesgue spaces.
| CommonCrawl
The book discusses continuous and discrete systems in a systematic and sequential approach for all aspects of nonlinear dynamics. The unique feature of the book is its mathematical theories on flow bifurcations, oscillatory solutions, symmetry analysis of nonlinear systems and chaos theory. The logically structured content and sequential orientation provide readers with a global overview of the topic. A systematic mathematical approach has been adopted, and a number of examples are worked out in detail and exercises have been included. Chapters 1–8 are devoted to continuous systems, beginning with one-dimensional flows. Symmetry is an inherent character of nonlinear systems, and the Lie invariance principle and its algorithm for finding symmetries of a system are discussed in Chap. 8. Chapters 9–13 focus on discrete systems, chaos and fractals. The conjugacy relationship among maps and its properties are described with proofs. Chaos theory and its connection with fractals, Hamiltonian flows and symmetries of nonlinear systems are among the main focuses of this book.
Over the past few decades, there has been unprecedented interest and advances in nonlinear systems, chaos theory and fractals, which is reflected in undergraduate and postgraduate curricula around the world. The book is useful for courses in dynamical systems and chaos, nonlinear dynamics, etc., for advanced undergraduate and postgraduate students in mathematics, physics and engineering.
Many problems in science and engineering are described by nonlinear differential equations, which can be notoriously difficult to solve. Through the interplay of topological and variational ideas, methods of nonlinear analysis are able to tackle such fundamental problems. This graduate text explains some of the key techniques in a way that will be appreciated by mathematicians, physicists and engineers.
Alberto P. Calderón (1920-1998) was one of this century's leading mathematical analysts. His contributions, characterized by great originality and depth, have changed the way researchers approach and think about everything from harmonic analysis to partial differential equations and from signal processing to tomography.
The theory of random dynamical systems originated from stochastic differential equations. It is intended to provide a framework and techniques to describe and analyze the evolution of dynamical systems when the input and output data are known only approximately, according to some probability distribution.
… → 0 when x → 0. 8. Prove that the solutions of the initial value problem $\dot{x} = x^{1/n}$ for $x > 0$ with $x(0) = 0$ are not unique for $n = 2, 3, 4, \ldots$. 9. What do you mean by a fixed point of a system? Determine the fixed points of the system $\dot{x} = x^2 - x$, $x \in \mathbb{R}$. Show that solutions exist for all time and become unbounded in finite time. 10. Give mathematical definitions of the 'flow evolution operator' of a system. Write the basic properties of an evolution operator of a flow. 11. Show that the dynamical system (or evolution) forms a dynamical group.
We shall now take another look at the analytical solution of the system. The analytical solution can be expressed as $t = \log|\tan(x/2)| + c \Rightarrow x(t) = 2\tan^{-1}(Ae^t)$, where $A$ is an integration constant. Let the initial condition be $x_0 = x(0) = \pi/4$. Then from the above solution we obtain $A = \tan(\pi/8) = -1 + \sqrt{2} = 1/(1+\sqrt{2})$. So the solution is expressed as $x(t) = 2\tan^{-1}\!\left(e^t/(1+\sqrt{2})\right)$. We see that the solution $x(t) \to \pi$ as $t \to \infty$. Without using the analytical solution, for this particular initial condition the same result can be found by drawing the graph of $x$ versus $t$.
Then we obtain another eigenvector $(0, 1, 1)^T$. Clearly, these two eigenvectors are linearly independent. Thus, we have two linearly independent eigenvectors corresponding to the repeated eigenvalue $-2$. Hence, the general solution of the system is given by $x(t) = c_1 (1, 1, 2)^T e^{4t} + c_2 (1, 1, 0)^T e^{-2t} + c_3 (0, 1, 1)^T e^{-2t}$, where $c_1$, $c_2$ and $c_3$ are arbitrary constants. 8. Solve the system $\dot{x} = Ax$ where $A = \begin{pmatrix} -1 & -1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & -2 \\ 0 & 0 & 1 & 2 \end{pmatrix}$. Solution: Here the matrix $A$ has two pairs of complex conjugate eigenvalues $\lambda_1 = -1 \pm i$ and $\lambda_2 = 1 \pm i$. | CommonCrawl
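As a quick check of the eigenvalues quoted in this example (the matrix below is a reconstruction of the garbled excerpt, so treat it as an assumption):

```python
import numpy as np

A = np.array([[-1, -1, 0,  0],
              [ 1, -1, 0,  0],
              [ 0,  0, 0, -2],
              [ 0,  0, 1,  2]])

print(np.sort_complex(np.linalg.eig(A)[0]))   # -1±i and 1±i, as stated
```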
You are given a tree that consists of $n$ nodes, and $m$ paths in the tree.
Your task is to calculate for each node the number of paths that contain the node.
The first input line contains integers $n$ and $m$: the number of nodes and paths. The nodes are numbered $1,2,\ldots,n$.
Finally, there are $m$ lines that describe the paths. Each line contains two integers $a$ and $b$: there is a path between nodes $a$ and $b$.
Print $n$ integers: for each node $1,2,\ldots,n$, the number of paths that contain the node. | CommonCrawl |
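The statement does not prescribe an algorithm; one standard approach, sketched below with made-up function names, combines binary-lifting LCA with +1/-1 difference marks that are then summed over subtrees:

```python
from collections import deque

def count_paths_per_node(n, edges, paths):
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    LOG = max(1, n.bit_length())
    parent = [[0] * (n + 1) for _ in range(LOG)]
    depth = [0] * (n + 1)

    # BFS from the root (node 1) for parents, depths and a processing order.
    order, seen, queue = [], [False] * (n + 1), deque([1])
    seen[1] = True
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in adj[v]:
            if not seen[u]:
                seen[u] = True
                parent[0][u] = v
                depth[u] = depth[v] + 1
                queue.append(u)

    for k in range(1, LOG):
        for v in range(1, n + 1):
            parent[k][v] = parent[k - 1][parent[k - 1][v]]

    def lca(a, b):
        if depth[a] < depth[b]:
            a, b = b, a
        diff = depth[a] - depth[b]
        for k in range(LOG):
            if diff >> k & 1:
                a = parent[k][a]
        if a == b:
            return a
        for k in reversed(range(LOG)):
            if parent[k][a] != parent[k][b]:
                a, b = parent[k][a], parent[k][b]
        return parent[0][a]

    # +1 at both endpoints of each path, -1 at the LCA and at the LCA's parent.
    marks = [0] * (n + 1)
    for a, b in paths:
        l = lca(a, b)
        marks[a] += 1
        marks[b] += 1
        marks[l] -= 1
        if parent[0][l]:
            marks[parent[0][l]] -= 1

    # Subtree sums, accumulated from the leaves upward (reverse BFS order).
    answer = marks[:]
    for v in reversed(order):
        if parent[0][v]:
            answer[parent[0][v]] += answer[v]
    return answer[1:]

# Tiny example: a path graph 1-2-3 with paths (1,3) and (2,3).
print(count_paths_per_node(3, [(1, 2), (2, 3)], [(1, 3), (2, 3)]))  # [1, 2, 2]
```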
FAST approaches to scalable similarity-based test case prioritization
DOI:10.1145/3180155.3180210
Conference: ICSE 2018
Breno Miranda
Federal University of Pernambuco
Emilio Cruciani
Roberto Verdecchia
Vrije Universiteit Amsterdam (VU)
Antonia Bertolino
Italian National Research Council
Many test case prioritization criteria have been proposed for speeding up fault detection. Among them, similarity-based approaches give priority to the test cases that are the most dissimilar from those already selected. However, the proposed criteria do not scale up to handle the many thousands or even some millions test suite sizes of modern industrial systems and simple heuristics are used instead. We introduce the FAST family of test case prioritization techniques that radically changes this landscape by borrowing algorithms commonly exploited in the big data domain to find similar items. FAST techniques provide scalable similarity-based test case prioritization in both white-box and black-box fashion. The results from experimentation on real world C and Java subjects show that the fastest members of the family outperform other black-box approaches in efficiency with no significant impact on effectiveness, and also outperform white-box approaches, including greedy ones, if preparation time is not counted. A simulation study of scalability shows that one FAST technique can prioritize a million test cases in less than 20 minutes.
... The test suite reduction problem [24,2,18,1,29] is the problem of reducing the size of a given test suite while satisfying a given test criterion. Typical criteria are the so-called coverage-based criteria, which ensure that the coverage of the reduced test suite is above a certain minimal threshold. ...
... Typical criteria are the so-called coverage-based criteria, which ensure that the coverage of the reduced test suite is above a certain minimal threshold. The test case selection problem [24,2,18,1,29] is the dual problem, in that it tries to determine the minimal number of tests to be added to a given test suite so that a given test criterion is attained. As most of these algorithms are targeted at the industrial setting, they assume severe time constraints on the test selection process. ...
... As most of these algorithms are targeted at the industrial setting, they assume severe time constraints on the test selection process. Hence, the vast majority of the proposed approaches for test suite reduction and selection are based on approximate algorithms, such as similarity-based algorithms [2,18], which are not guaranteed to find the optimal test suite even when given enough resources. In order to achieve a compromise between precision and scalability, the authors of [1] proposed a combination of standard ILP encodings and heuristic approaches. ...
TestSelector: Automatic Test Suite Selection for Student Projects -- Extended Version
Filipe Marques
António Morgado
José Fragoso Santos
Mikoláš Janota
Computer Science course instructors routinely have to create comprehensive test suites to assess programming assignments. The creation of such test suites is typically not trivial as it involves selecting a limited number of tests from a set of (semi-)randomly generated ones. Manual strategies for test selection do not scale when considering large testing inputs needed, for instance, for the assessment of algorithms exercises. To facilitate this process, we present TestSelector, a new framework for automatic selection of optimal test suites for student projects. The key advantage of TestSelector over existing approaches is that it is easily extensible with arbitrarily complex code coverage measures, not requiring these measures to be encoded into the logic of an exact constraint solver. We demonstrate the flexibility of TestSelector by extending it with support for a range of classical code coverage measures and using it to select test suites for a number of real-world algorithms projects, further showing that the selected test suites outperform randomly selected ones in finding bugs in students' code.
... That is, considering the practical application of TCP, including the GA algorithm, both effectiveness and efficiency are important. However, existing TCP approaches, including the GA algorithm, suffer from the efficiency problem, e.g., the previous work shows that most existing TCP approaches cannot deal with large-scale application scenarios [13], [15], [18]. Furthermore, some work [13], [15], [18] points out that the GA algorithm spends dramatically long time on prioritization. ...
... However, existing TCP approaches, including the GA algorithm, suffer from the efficiency problem, e.g., the previous work shows that most existing TCP approaches cannot deal with large-scale application scenarios [13], [15], [18]. Furthermore, some work [13], [15], [18] points out that the GA algorithm spends dramatically long time on prioritization. Note that in the 20-year history of GA, there is no approach proposed to improve its efficiency while preserving the high effectiveness. ...
... We also empirically compared AGA with FAST [18], which focuses on the TCP efficiency problem. As FAST [18] targets a different problem, improving the time efficiency by sacrificing effectiveness, such a comparison in terms of efficiency may be a bit unfair to our AGA approach. ...
AGA: An Accelerated Greedy Additional Algorithm for Test Case Prioritization
Feng Li
Yinzhu Li
Jianyi Zhou
In recent years, many test case prioritization (TCP) techniques have been proposed to speed up the process of fault detection. However, little work has taken the efficiency problem of these techniques into account. In this paper, we target the Greedy Additional (GA) algorithm, which has been widely recognized to be effective but less efficient, and try to improve its efficiency while preserving effectiveness. In our Accelerated GA (AGA) algorithm, we use some extra data structures to reduce redundant data accesses in the GA algorithm and thus the time complexity is reduced from $\mathcal{O}(m^2n)$ to $\mathcal{O}(kmn)$ when $n > m$, where $m$ is the number of test cases, $n$ is the number of program elements, and $k$ is the iteration number. Moreover, we observe the impact of iteration numbers on prioritization efficiency on our dataset and propose to use a specific iteration number in the AGA algorithm to further improve the efficiency. We conducted experiments on 55 open-source subjects. In particular, we implemented each TCP algorithm with two kinds of widely-used input formats, adjacency matrix and adjacency list. Since a TCP algorithm with adjacency matrix is less efficient than the algorithm with adjacency list, the result analysis is mainly conducted based on TCP algorithms with adjacency list. The results show that AGA achieves 5.95X speedup ratio over GA on average, while it achieves the same average effectiveness as GA in terms of Average Percentage of Fault Detected (APFD). Moreover, we conducted an industrial case study on 22 subjects, collected from Baidu, and find that the average speedup ratio of AGA over GA is 44.27X, which indicates the practical usage of AGA in real-world scenarios.
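For context, here is a minimal sketch of the classic "additional greedy" strategy that AGA accelerates: repeatedly pick the test covering the most not-yet-covered elements, resetting the coverage target once everything has been covered. The coverage sets are toy data; this is not the AGA implementation itself, which adds extra data structures to avoid redundant accesses.

```python
def additional_greedy(coverage):
    """coverage: dict mapping test name -> set of covered code elements."""
    remaining = set(coverage)
    all_elems = set().union(*coverage.values())
    uncovered = set(all_elems)
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            if uncovered != all_elems:
                uncovered = set(all_elems)   # everything covered: reset and repeat
                continue
            best = next(iter(remaining))     # leftover tests add no coverage at all
        order.append(best)
        uncovered -= coverage[best]
        remaining.remove(best)
    return order

if __name__ == "__main__":
    cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2, 3, 4}}
    print(additional_greedy(cov))   # ['t4', 't3', 't1', 't2']
```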
... If the population is large, this can be an expensive process. To reduce this cost, we perform a selection procedure on a randomly-chosen subset of the population (lines 19-20, explained below): Identify the best solution in the subset. ...
... Later research have proposed ways to speed diversity calculations up. One study used locality-sensitive hashing to speed up the diversity calculations [20]. Another study used the pair-wise distance values of all test cases as input to a dimensionality reduction algorithm so that a twodimensional (2D) visual "map" of industrial test suites could be provided to software engineers [21]. ...
... Try random mutations until we see a better solution, or until we exhaust the number of tries. ...
Automated Support for Unit Test Generation: A Tutorial Book Chapter
Afonso Fontes
Gregory Gay
Francisco Gomes de Oliveira Neto
Robert Feldt
Unit testing is a stage of testing where the smallest segment of code that can be tested in isolation from the rest of the system - often a class - is tested. Unit tests are typically written as executable code, often in a format provided by a unit testing framework such as pytest for Python. Creating unit tests is a time and effort-intensive process with many repetitive, manual elements. To illustrate how AI can support unit testing, this chapter introduces the concept of search-based unit test generation. This technique frames the selection of test input as an optimization problem - we seek a set of test cases that meet some measurable goal of a tester - and unleashes powerful metaheuristic search algorithms to identify the best possible test cases within a restricted timeframe. This chapter introduces two algorithms that can generate pytest-formatted unit tests, tuned towards coverage of source code statements. The chapter concludes by discussing more advanced concepts and gives pointers to further reading for how artificial intelligence can support developers and testers when unit testing software.
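As a toy illustration of the idea (not the chapter's tooling), the sketch below randomly searches for inputs that exercise both branches of a small function and emits pytest-style tests. The function under test and all names are invented.

```python
import random

def classify(x):                      # toy function under test
    return "big" if x > 100 else "small"

def generate_tests(budget=200, seed=42):
    """Random search: keep the first input found for each branch, then emit pytest tests."""
    rng = random.Random(seed)
    chosen = {}                        # branch id -> covering input
    for _ in range(budget):
        x = rng.randint(-1000, 1000)
        branch = "gt" if x > 100 else "le"
        chosen.setdefault(branch, x)
        if len(chosen) == 2:
            break
    lines = []
    for branch, x in sorted(chosen.items()):
        expected = classify(x)
        lines.append(f"def test_classify_{branch}():\n"
                     f"    assert classify({x}) == {expected!r}\n")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_tests())
```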
... APFD continues to be one of the standard and widely used (Miranda et al., 2018) metric to evaluate prioritization effectiveness. This is defined as follows: ...
... The original definition of APFD specifies faults instead of failures. However, as the fault knowledge ceases to exist apriori, we followed the assumptions of (Miranda et al., 2018;Chen et al., 2018;Yu et al., 2019; Mondal and Nasre, 2019) by treating each failure as a different fault. ...
... Our choice was to disregard fault knowledge although works having controlled experimentation using SIR subjects have extensively utilized this information. We observed that treating each failure as a different fault has already been leveraged in some existing works (Yang et al., 2011;Lidbury et al., 2015;Chen et al., 2018;Miranda et al., 2018;Yu et al., 2019;Peng et al., 2020;Lam et al., 2020). We followed this approach due to a wider scope as software testing can be performed even when fault information was not available. ...
Hansie: Hybrid and Consensus Regression Test Prioritization
Shouvick Mondal
Rupesh Nasre
Traditionally, given a test-suite and the underlying system-under-test, existing test-case prioritization heuristics report a permutation of the original test-suite that is seemingly best according to their criteria. However, we observe that a single heuristic does not perform optimally in all possible scenarios, given the diverse nature of software and its changes. Hence, multiple individual heuristics exhibit effectiveness differently. Interestingly, together, the heuristics bear the potential of improving the overall regression test selection across scenarios. In this paper, we pose the test-case prioritization as a rank aggregation problem from social choice theory. Our solution approach, named Hansie , is two-flavored: one involving priority-aware hybridization, and the other involving priority-blind computation of a consensus ordering from individual prioritizations. To speed-up test-execution, Hansie executes the aggregated test-case orderings in a parallel multi-processed manner leveraging regular windows in the absence of ties, and irregular windows in the presence of ties. We show the benefit of test-execution after prioritization and introduce a cost-cognizant metric (EPL) for quantifying overall timeline latency due to load-imbalance arising from uniform or non-uniform parallelization windows. We evaluate Hansie on 20 open-source subjects totaling 287,530 lines of source code, 69,305 test-cases, and with parallelization support of up to 40 logical CPUs.
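For a flavor of the rank-aggregation view, here is a minimal Borda-count sketch over toy orderings; Hansie's actual priority-aware hybridization and consensus schemes are richer than this.

```python
def borda_consensus(orderings):
    """orderings: list of lists, each a permutation of the same test names."""
    scores = {}
    n = len(orderings[0])
    for order in orderings:
        for rank, test in enumerate(order):
            scores[test] = scores.get(test, 0) + (n - rank)   # higher = earlier
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    h1 = ["t1", "t2", "t3", "t4"]      # e.g. a coverage-based ordering
    h2 = ["t2", "t1", "t4", "t3"]      # e.g. a history-based ordering
    h3 = ["t2", "t3", "t1", "t4"]
    print(borda_consensus([h1, h2, h3]))   # ['t2', 't1', 't3', 't4']
```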
... The output of this program is sorted into the following sets: "Select", "Medium", and "Discard". Input variables, as well as output variables, can take values between 1 and 10. In this case study, triangular membership functions are used for mapping random and flexible input sets during fuzzification as well as for making dynamic output and complex sets during defuzzification. ...
... Think of a test suite $T$ with several test cases; $F$ is a set of $m$ faults detected by $T$. $TF_i$ is the position of the first test case in $T'$ (one of $T$'s orderings) that reveals fault $i$. Thereafter, the APFD of $T'$ is defined by the following equation [1,4,5,8,9]: ...
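The equation itself is elided in the excerpts above; for reference, the standard APFD formulation as commonly given in the test case prioritization literature is

$\mathrm{APFD} = 1 - \dfrac{TF_1 + TF_2 + \cdots + TF_m}{n \cdot m} + \dfrac{1}{2n}$

where $n$ is the number of test cases in the ordering $T'$, $m$ is the number of faults, and $TF_i$ is the position of the first test case revealing fault $i$.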
The Fundamental of TCP Techniques
Pritee Nivrutti Hulule
Test case prioritization strategies schedule test cases so as to reduce the cost of regression testing and to improve a specific objective function. Test cases are prioritized so that the most important ones, under certain conditions, are executed before the re-testing process. Many strategies are available in the literature that focus on achieving various testing objectives and thus reducing cost. In practice, testers often select only a few well-known strategies for prioritizing test cases, the main reason being the lack of guidelines for the selection of TCP strategies. Therefore, this part of the study introduces a novel approach to TCP strategy selection using fuzzy concepts to support the effective selection of test case prioritization strategies. This work is an extension of previously proposed selection schemes for test case prioritization.
... Further heuristics include topic modeling [64], or models of the system [37]. Miranda et al. [51] proposed fast methods to speed up the pair-wise distance computation, namely shingling and locality-sensitive hashing. Recently, Henard et al. [38] empirically compared many white-box and black-box prioritization techniques. ...
... This is an important aspect to investigate since a critical constraint in regression testing is that the cost of prioritizing test cases should be smaller than the time needed to run the test suite [68]. Therefore, fast approaches are fundamental from a practical point of view to enable rapid and continuous test iterations during SDC development [51]. ...
Automated Test Cases Prioritization for Self-driving Cars in Virtual Environments
Christian Birchler
Sajad Khatiri
Pouria Derakhshanfar
Annibale Panichella
Testing with simulation environments helps to identify critical failing scenarios in emerging autonomous systems such as self-driving cars (SDCs), and is safer than in-field operational tests. However, these tests are very expensive and are too many to be run frequently within limited time constraints. In this paper, we investigate test case prioritization techniques to increase the ability to detect SDC regression faults with virtual tests earlier. Our approach, called SDC-Prioritizer, prioritizes virtual tests for SDCs according to static features of the roads used within the driving scenarios. These features are collected without running the tests and do not require past execution results. SDC-Prioritizer utilizes meta-heuristics to prioritize the test cases using diversity metrics (black-box heuristics) computed on these static features. Our empirical study conducted in the SDC domain shows that SDC-Prioritizer doubles the number of safety-critical failures that virtual tests can detect at the same level of execution time compared to baselines: random and greedy-based test case orderings. Furthermore, this meta-heuristic search performs statistically better than both baselines in terms of detecting safety-critical failures. SDC-Prioritizer effectively prioritizes test cases for SDCs with a large improvement in fault detection while its overhead (up to 0.34% of the test execution cost) is negligible.
... Furthermore, approximate NNS (ANNS) should be able to significantly alleviate the computational overheads of distance calculations, especially in high dimensional input domains [42]. In software testing, NNS has been used to find the most similar test cases in regression testing [43], test case prioritization (TCP) [44], and model-based testing [45]. It has also been used to find the most diverse (opposite to similar) test cases in ART [46] and software product lines [47]. ...
... It has also been used to find the most diverse (opposite to similar) test cases in ART [46] and software product lines [47]. ANNS has also been successfully applied to enhance the efficiency in other areas of software testing, including TCP [44], test suite reduction [48] and prediction of test flakiness [49]. ...
SWFC-ART: A cost-effective approach for Fixed-Size-Candidate-Set Adaptive Random Testing through small world graphs
J SYST SOFTWARE
Muhammad Ashfaq
Tao Zhang
Rubing Huang
Dave Towey
Adaptive random testing (ART) improves the failure-detection effectiveness of random testing by leveraging properties of the clustering of failure-causing inputs of most faulty programs: ART uses a sampling mechanism that evenly spreads test cases within a software's input domain. The widely-used Fixed-Sized-Candidate-Set ART (FSCS-ART) sampling strategy faces a quadratic time cost, which worsens as the dimensionality of the software input domain increases. In this paper, we propose an approach based on small world graphs that can enhance the computational efficiency of FSCS-ART: SWFC-ART. To efficiently perform nearest neighbor queries for candidate test cases, SWFC-ART incrementally constructs a hierarchical navigable small world graph for previously executed, non-failure-causing test cases. Moreover, SWFC-ART has shown consistency in programs with high dimensional input domains. Our simulation and empirical studies show that SWFC-ART reduces the computational overhead of FSCS-ART from quadratic to log-linear order while maintaining the failure-detection effectiveness of FSCS-ART, and remaining consistent in high dimensional input domains. We recommend using SWFC-ART in practical software testing scenarios, where real-life programs often have high dimensional input domains and low failure rates.
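To illustrate the baseline that SWFC-ART speeds up, here is a minimal one-dimensional FSCS-ART sketch. It is illustrative only: real input domains are higher-dimensional, and SWFC-ART replaces the brute-force nearest-neighbor scan below with queries on a hierarchical navigable small world graph.

```python
import random

def fscs_art_next(executed, k=10, lo=0.0, hi=1.0, rng=random):
    """Pick, among k random candidates, the one farthest from all executed inputs."""
    candidates = [rng.uniform(lo, hi) for _ in range(k)]
    if not executed:
        return candidates[0]
    return max(candidates,
               key=lambda c: min(abs(c - e) for e in executed))

if __name__ == "__main__":
    rng = random.Random(1)
    executed = []
    for _ in range(5):
        t = fscs_art_next(executed, rng=rng)   # evenly spreads inputs over [0, 1]
        executed.append(t)
    print([round(t, 3) for t in executed])
```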
... A FAST family of prioritization techniques has been described in [23]. The FAST techniques handle huge size test suites by utilizing Big Data techniques to achieve scalability in TCP to meet current industrial demands. ...
Value-Based Test Case Prioritization for Regression Testing Using Genetic Algorithms
CMC-COMPUT MATER CON
Farrukh Shahzad Ahmed
Awais Majeed
Tamim Khan
... Bertolino et al. [10] present the FAST family of test case prioritization techniques, which changes this landscape by borrowing algorithms widely used in the big data domain for finding similar items. The FAST techniques provide scalable similarity-based prioritization in both white-box and black-box fashion. ...
Prioritization of Test Cases in Software Testing Using M 2 H 2 Optimization
Kodepogu Koteswara Rao
Babu RAO Markapudi
Kavitha Chaduvula
Yalamanchili Surekha
... Due to the relatively high computation cost of TCP algorithms, proposing TCP methods with lower computation costs for large-scale test suites has been investigated. Miranda et al. (2018) propose using hashing-based approaches to provide faster TCP algorithms. ...
Test case prioritization using test case diversification and fault-proneness estimations
AUTOMAT SOFTW ENG
Mostafa Mahdieh
Seyed-Hassan Mirian-Hosseinabadi
Mohsen Mahdieh
Regression testing activities greatly reduce the risk of faulty software release. However, the size of the test suites grows throughout the development process, resulting in time-consuming execution of the test suite and delayed feedback to the software development team. This has urged the need for approaches such as test case prioritization (TCP) and test-suite reduction to reach better results in case of limited resources. In this regard, proposing approaches that use auxiliary sources of data such as bug history can be interesting. We aim to propose an approach for TCP that takes into account test case coverage data, bug history, and test case diversification. To evaluate this approach we study its performance on real-world open-source projects. The bug history is used to estimate the fault-proneness of source code areas. The diversification of test cases is preserved by incorporating fault-proneness on a clustering-based approach scheme. The proposed methods are evaluated on datasets collected from the development history of five real-world projects including 357 versions in total. The experiments show that the proposed methods are superior to coverage-based TCP methods. The proposed approach shows that improvement of coverage-based and fault-proneness-based methods is possible by using a combination of diversification and fault-proneness incorporation.
... Despite the large body of research on coverage-based TCP [66,68,69,70], the total-greedy and additional-greedy strategies remain the most widely investigated prioritization strategies [7]. In addition to the above greedy-based strategies, researchers have also investigated other generic strategies [30,31]. ...
Test Case Prioritization Using Partial Attention
Chunrong Fang
Yutao Xu
Quanjun Zhang
Weisong Sun
Test case prioritization (TCP) aims to reorder the regression test suite with a goal of increasing the fault detection rate. Various TCP techniques have been proposed based on different prioritization strategies. Among them, the greedy-based techniques are the most widely-used TCP techniques. However, existing greedy-based techniques usually reorder all candidate test cases in prioritization iterations, resulting in both efficiency and effectiveness problems. In this paper, we propose a generic partial attention mechanism, which adopts the previous priority values (i.e., the number of additionally-covered code units) to avoid considering all candidate test cases. Incorporating the mechanism with the additional-greedy strategy, we implement a novel coverage-based TCP technique based on partition ordering (OCP). OCP first groups the candidate test cases into different partitions and updates the partitions on the descending order. We conduct a comprehensive experiment on 19 versions of Java programs and 30 versions of C programs to compare the effectiveness and efficiency of OCP with six state-of-the-art TCP techniques: total-greedy, additional-greedy, lexicographical-greedy, unify-greedy, art-based, and search-based. The experimental results show that OCP achieves a better fault detection rate than the state-of-the-arts. Moreover, the time costs of OCP are found to achieve 85%-99% improvement than most state-of-the-arts.
... There exist several benchmarks in recent APR literature [62]-[64]. After searching the literature for benchmarks, we adopt Defects4J [15], as it has been continuously developed for a long time and has become the most widely studied dataset in APR studies [7], [8], [16], [37], or even other software engineering research (e.g., fault localization [65]-[68] and test case prioritization [69], [70], etc.) in general. Defects4J consists of hundreds of known and reproducible real-world bugs from a collection of 16 real-world Java programs. ...
Program Repair: Automated vs. Manual
Yuan Zhao
Lingming Zhang
Various automated program repair (APR) techniques have been proposed to fix bugs automatically in the last decade. Although recent researches have made significant progress on the effectiveness and efficiency, it is still unclear how APR techniques perform with human intervention in a real debugging scenario. To bridge this gap, we conduct an extensive study to compare three state-of-the-art APR tools with manual program repair, and further investigate whether the assistance of APR tools (i.e., repair reports) can improve manual program repair. To that end, we recruit 20 participants for a controlled experiment, resulting in a total of 160 manual repair tasks and a questionnaire survey. The experiment reveals several notable observations that (1) manual program repair may be influenced by the frequency of repair actions sometimes; (2) APR tools are more efficient in terms of debugging time, while manual program repair tends to generate a correct patch with fewer attempts; (3) APR tools can further improve manual program repair regarding the number of correctly-fixed bugs, while there exists a negative impact on the patch correctness; (4) participants are used to consuming more time to identify incorrect patches, while they are still misguided easily; (5) participants are positive about the tools' repair performance, while they generally lack confidence about the usability in practice. Besides, we provide some guidelines for improving the usability of APR tools (e.g., the misleading information in reports and the observation of feedback).
... Miranda et al. [10] presented the FAST family of TCP methods which radically changes the landscapes through borrowing algorithms, widely used in big data, to find relevant items. FAST technique provides scalable similarity-based TCP in white-box and black-box methods. ...
Modified Harris Hawks Optimization Based Test Case Prioritization for Software Testing
Manar Hamza
Abdelzahir Abdelmaboud
Souad Larabi Marie-Sainte
Ishfaq Yaseen
... The last Table A4 shows the review protocol process. ...
Trend Application of Machine Learning in Test Case Prioritization: A Review on Techniques
Muhammad Khatibsyarbini
Mohd Adham Isa
Dayang Norhayati Abang Jawawi
Dhiauddin Suffian
Software quality can be assured by passing the process of software testing. However, the software testing process involves many phases, which leads to greater resource and time consumption. To reduce these downsides, one approach is to adopt test case prioritization (TCP), and numerous works have indicated that TCP does improve overall software testing performance. TCP comprises several kinds of techniques, each with its own strengths and weaknesses. The main objective of this review paper is to examine machine learning (ML) techniques more deeply, based on the research questions created. The research method for this paper was designed in parallel with the research questions. Consequently, 110 primary studies were selected, of which 58 were journal articles, 50 were conference papers, and 2 were considered as other articles. Overall, it can be said that ML techniques in TCP have been trending in recent years, yet some improvements are certainly welcome. Multiple ML techniques are available, each with its own potential value, advantages, and limitations. It is notable that ML techniques have been considerably discussed in TCP approaches for software testing.
... Test report prioritization simulates the ideal of test case prioritization which aims to rank test cases to reveal bugs earlier [18]. In evaluating the effectiveness of various test case prioritization techniques, the Average Percentage of Fault Detected (APFD) [12] is widely adopted to measure how rapidly a prioritized test suite detects defects when executing the test suite [19]. Therefore, we also employ APFD to evaluate the effectiveness of DivClass. ...
Crowdsourced Test Report Prioritization Based on Text Classification
Yuxuan Yang
Xin Chen
In crowdsourced testing, crowd workers from different places help developers conduct testing and submit test reports for the observed abnormal behaviors. Developers manually inspect each test report and make an initial decision for the potential bug. However, due to the poor quality, test reports are handled extremely slowly. Meanwhile, due to the limitation of resources, some test reports are not handled at all. Therefore, some researchers attempt to resolve the problem of test report prioritization and have proposed many methods. However, these methods do not consider the impact of duplicate test reports. In this paper, we focus on the problem of test report prioritization and present a new method named DivClass by combining a diversity strategy and a classification strategy. First, we leverage Natural Language Processing (NLP) techniques to preprocess crowdsourced test reports. Then, we build a similarity matrix by introducing an asymmetric similarity computation strategy. Finally, we combine the diversity strategy and the classification strategy to determine the inspection order of test reports. To validate the effectiveness of DivClass, experiments are conducted on five crowdsourced test report datasets. Experimental results show that DivClass achieves 0.8887 in terms of APFD (Average Percentage of Fault Detected) and improves the state-of-the-art technique DivRisk by 14.12% on average. The asymmetric similarity computation strategy can improve DivClass by 4.82% in terms of APFD on average. In addition, empirical results show that DivClass can greatly reduce the number of inspected test reports.
... Graphical representation of the Dynamic Pairs Prioritization Approach example, where the nStartUp pairs (PR1, PR2 and PR3, shadowed with diagonal lines) are placed into the DynamicPairs output during the first stage. The first iteration of the second stage is also represented, where the pair with the highest priority (PR6, emphasized with a dark gray shadow) is selected to be placed into the DynamicPairs output. ... these have been successfully used as criteria to prioritize test cases both in general purpose software systems (Fang et al., 2014; Miranda et al., 2018; Noor & Hemmati, 2015; Thomas et al., 2014), as well as in the context of product line engineering at the Domain engineering level (Al-Hajjaji et al., 2014). The second reason is that, unlike output-based test similarity or other quality metrics proposed in (Arrieta et al., 2018), input-based test similarity does not require tests to be executed beforehand. ...
Dynamic test prioritization of product lines: An application on configurable simulation models
SOFTWARE QUAL J
Urtzi Markiegi
Aitor Arrieta
Leire Etxeberria
Goiuria Sagardui
Product line testing is challenging due to the potentially huge number of configurations. Several approaches have tackled this challenge; most of them focused on reducing the number of tested products by selecting a representative subset. However, little attention has been paid to product line test optimization using test results, while tests are executed. This paper aims at optimizing the testing process of product lines by increasing the fault detection rate. To this end we propose a dynamic test prioritization approach. In contrast to traditional static test prioritization, our dynamic test prioritization leverages information of tests being executed in specific products. Processing this information, the initially prioritized tests are rearranged in order to find non-discovered faults. The proposed approach is valid for any kind of product lines, but we have adapted it to the context of configurable simulation models, an area where testing is especially time-consuming and optimization methods are paramount. The approach was empirically evaluated by employing two case studies. The results of this evaluation reveal that the proposed test prioritization approach improves both the static prioritization algorithm and the selected baseline technique. The results provide a basis for suggesting that the proposed dynamic test prioritization approach is appropriate to optimize the testing process of product lines.
... The first two account for reducing the expense of the testing process, by selecting a relevant subset and by minimizing the test suite to a subset satisfying the prior coverage criteria, respectively. Prioritization organizes and ranks test cases in a way that aims to improve code coverage efficiency and thus deals effectively with early detection of faults (Miranda et al., 2018). Besides, it provides faster feedback, thereby allowing developers to debug as early as possible. ...
Analysing a novel multi-objective prioritization model using improved fuzzy c mean clustering
Sarika Chaudhary
Aman Jatain
Consistent regression testing (RT) is considered indispensable for assuring the quality of software systems, but it is expensive. To minimize the computational cost of RT, test case prioritization (TCP) is the most widely adopted methodology in the literature. The TCP process has been implemented using various hard clustering techniques, but fuzzy clustering, one of the most sought-after clustering techniques for selecting appropriate test cases, has not been explored on a wider platform. Therefore, the proposed work discusses a novel density-based fuzzy c-means (NDB-FCM) algorithm with a newly derived initial membership function for prioritizing test cases. It first generates the optimal number of clusters (Copt) using a density-based algorithm, which in turn minimizes the search needed to find Copt, especially in cases where a given data set does not follow the empirical rule. It then creates an initial fuzzy partition matrix based upon the newly suggested initial membership method. In addition, a novel multiobjective prioritization model (NDS-FCMPM) is proposed to achieve the performance goal of enhanced fault recognition. Initially, feature extraction is carried out by exploiting the dependencies between test cases; the test cases are then clustered using the proposed fuzzy clustering approach and finally prioritized using a newly developed prioritization algorithm. To validate the performance of the suggested fuzzy clustering algorithm, two performance measures, namely "Fuzzy Rand Index" and "Run Time", are exercised, and for the prioritization algorithm the "APFD" metric is analysed. The proposed model is assessed using Eclipse data extracted from a GitHub repository. The inferences generated depict that NDB-FCM clustering provides more stable results in terms of classification accuracy, run time, and convergence speed when compared with other state-of-the-art techniques. It is also verified that NDS-FCMPM achieves an improved rate of fault identification at an early stage.
... heuristically driven to improve the rate of failure observation. The approximation of faults by failures is not new and had been followed by prior approaches [30], [31], [32]. We adopt this simplifying approximation (treating each failure as a different fault) due to two reasons. ...
Colosseum: Regression Test Prioritization by Delta Displacement in Test Coverage
IEEE T SOFTWARE ENG
The problem of test-case prioritization has been pursued for over three decades now and continues to be one of the active topics in software testing research. In this paper, we focus on a code-coverage based regression test-prioritization solution (Colosseum) that takes into account the position of changed (delta) code elements (basic-blocks) along the loop-free straight-line execution path of the regression test-cases. We propose a heuristic that logically associates each of these paths with three parameters: (i) the offset (displacement a) of the first delta from the starting basic-block, (ii) the offset (displacement c) of the last delta from the terminating basic block, and (iii) the average scattering (displacement b) within all the intermediate basic-blocks. We hypothesize that a regression test-case path with a shorter overall displacement has a good chance of propagating the affects of the code-changes to the observable outputs in the program. Colosseum prioritizes test-cases with smaller overall displacements and executes them early in the regression test-execution cycle. The underlying intuition is that the probability of a test-case revealing a regression fault depends on the probability of the corresponding change propagation. The change in this context can potentially lead to an error. Extending this logic, delta displacement provides an approximation to failed error propagation. Evaluation on 20 open-source C projects from the Software-artifact Infrastructure Repository and GitHub (totaling: 694,512 SLOC, 280 versions, and 69,305 test-cases) against four state-of-the-art prioritizations reveals that: Colosseum outperforms the competitors with an overall 84.61% success in terms of 13 prioritization effectiveness metrics, majority of which prefer to execute top-k% prioritized test-cases.
... Proposed TCP approaches do not scale up to handle the test cases of large-scale projects. There are industrial projects whose test suites contain millions of test cases, and simple heuristic TCP approaches are used instead [118]. ...
Functional Requirement-Based Test Case Prioritization in Regression Testing: A Systematic Literature Review
Imran Ghani
Muhammad Hassnain
Muhammad Fermi Pasha
Seung Ryul Jeong
Regression testing, as an important part of the software life cycle, ensures the validity of modified software. Researchers' focus of this research is on functional requirement-based 'Test Case Prioritization' (TCP) because requirement specifications help keep the software correctness on customers' perceived priorities. This research study is aimed to investigate requirement-based TCP approaches, regression testing aspects, applications regarding the validation of proposed TCP approaches, systems' size under regression testing, test case size and relevant revealed faults, TCP related issues, TCP issues and types of primary studies. Researchers of this paper examined research publications, which have been published between 2009 and 2019, within the seven most significant digital repositories. These repositories are popular, and mostly used for searching papers on topics in software engineering domain. We have performed a meticulous screening of research studies and selected 35 research papers through which to investigate the answers to the proposed research questions. The final outcome of this paper showed that functional requirement-based TCP approaches have been widely covered in primary studies. The results indicated that fault size and the number of test cases are mostly discussed as regression testing aspects within primary studies. In this review paper, it has been identified that iTrust system is widely examined by researchers in primary studies. This paper's conclusion indicated that most of the primary studies have been demonstrated in the real-world settings by respective researchers of focused primary studies. The findings of this "Systematic Literature Review" (SLR) reveal some suggestions to be undertaken in future research works, such as improving the software quality, and conducting evaluations of larger systems.
... There is no clear winner among these popular testing models and frameworks either [11]. The other factors which may affect the accuracy of prioritization techniques are the size of the software under test, the size of the test suites available for testing, the testing scenarios under these prioritization techniques, and the testing environment supporting these prioritization techniques [12,15,16]. The limitations of the previous frameworks for unit testing, together with these factors impacting the accuracy and usefulness of prioritization techniques, pose a challenge for the multiobjective and multicriterion test suite prioritization research space [17]. ...
Multiobjective Test Case Prioritization Using Test Case Effectiveness: Multicriteria Scoring Method
Ali Samad
Hairulnizam Mahdin
Rafaqat Kazmi
Zirawani Baharum
Modified source code validation is done by regression testing. In regression testing, the time and resources are limited, in which we have to select the minimal test cases from test suites to reduce execution time. The test case minimization process deals with the optimization of the regression testing by removing redundant test cases or prioritizing the test cases. This study proposed a test case prioritization approach based on multiobjective particle swarm optimization (MOPSO) by considering minimum execution time, maximum fault detection ability, and maximum code coverage. The MOPSO algorithm is used for the prioritization of test cases with parameters including execution time, fault detection ability, and code coverage. Three datasets are selected to evaluate the proposed MOPSO technique including TreeDataStructure, JodaTime, and Triangle. The proposed MOPSO is compared with the no ordering, reverse ordering, and random ordering technique for evaluating the effectiveness. The higher values of results represent the more effectiveness and the efficiency of the proposed MOPSO as compared to other approaches for TreeDataStructure, JodaTime, and Triangle datasets. The result is presented to 100-index mode relevant from low to high values; after that, test cases are prioritized. The experiment is conducted on three open-source java applications and evaluated using metrics inclusiveness, precision, and size reduction of a matrix of the test suite. The results revealed that all scenarios performed well in acceptable mode, and the technique is 17% to 86% more effective in terms of inclusiveness, 33% to 85% more effective in terms of precision, and 17% minimum to 86% maximum in size reduction of metrics.
... In previous studies [19], [20] we have shown that test code similarity can provide an effective instrument for test suite prioritization and reduction. Inspired by such studies, in this work we leverage test code similarity for identifying flaky tests. ...
Know You Neighbor: Fast Static Prediction of Test Flakiness
Context: Flaky tests plague regression testing in Continuous Integration environments by slowing down change releases and wasting testing time and effort. Despite the growing interest in mitigating the burden of test flakiness, how to efficiently and effectively detect flaky tests is still an open problem. Objective: In this study, we present and evaluate FLAST, an approach designed to statically predict test flakiness. FLAST leverages vector-space modeling, similarity search, dimensionality reduction, and k-Nearest Neighbor classification in order to timely and efficiently detect test flakiness. Method: In order to gain insights into the efficiency and effectiveness of FLAST, we conduct an empirical evaluation of the approach by considering 13 real-world projects, for a total of 1,383 flaky and 26,702 non-flaky tests.We carry out a quantitative comparison of FLAST with the state-of-the-art methods to detect test flakiness, by considering a balanced dataset comprising 1,402 real-world flaky and as many non-flaky tests. Results: From the results we observe that the effectiveness of FLAST is comparable with the state-of-the-art, while providing considerable gains in terms of efficiency. In addition, the results demonstrate how by tuning the threshold of the approach FLAST can be made more conservative, so to reduce false positives, at the cost of missing more potentially flaky tests. Conclusion: The collected results demonstrate that FLAST provides a fast, low-cost and reliable approach that can be used to guide test rerunning, or to gate the inclusion of new potentially flaky tests.
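A minimal sketch in the same spirit (a vector-space representation plus k-nearest-neighbor classification) is shown below. It is not FLAST itself; the token-count vectors, toy test bodies, and labels are invented for illustration.

```python
from collections import Counter
import math

def vectorize(test_code):
    """Bag-of-tokens vector for a test's source code."""
    return Counter(test_code.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_flaky(new_test, labeled, k=3):
    """labeled: list of (test_code, is_flaky) pairs; majority vote among k nearest."""
    v = vectorize(new_test)
    neighbors = sorted(labeled,
                       key=lambda item: cosine(v, vectorize(item[0])),
                       reverse=True)[:k]
    votes = sum(1 for _, flaky in neighbors if flaky)
    return votes > len(neighbors) / 2

if __name__ == "__main__":
    history = [
        ("def test_a(): time.sleep(1); assert fetch_remote() == 200", True),
        ("def test_b(): assert add(2, 2) == 4", False),
        ("def test_c(): assert sorted([3, 1]) == [1, 3]", False),
        ("def test_d(): thread.start(); assert queue.get(timeout=1)", True),
    ]
    new = "def test_e(): time.sleep(2); assert fetch_remote() == 404"
    print(predict_flaky(new, history, k=1))  # True: nearest neighbor is the flaky remote-fetch test
```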
... Many studies have studied diversity in test generation [11,23,24,43]. For example, researchers have studied the diversity of test inputs and outputs [2]. ...
Adversarial Specification Mining
Hong Jin Kang
David Lo
There have been numerous studies on mining temporal specifications from execution traces. These approaches learn finite-state automata (FSA) from execution traces when running tests. To learn accurate specifications of a software system, many tests are required. Existing approaches generalize from a limited number of traces or use simple test generation strategies. Unfortunately, these strategies may not exercise uncommon usage patterns of a software system. To address this problem, we propose a new approach, adversarial specification mining, and develop a prototype, DICE (Diversity through Counter-Examples). DICE has two components: DICE-Tester and DICE-Miner. After mining Linear Temporal Logic specifications from an input test suite, DICE-Tester adversarially guides test generation, searching for counterexamples to these specifications to invalidate spurious properties. These counterexamples represent gaps in the diversity of the input test suite. This process produces execution traces of usage patterns that were unrepresented in the input test suite. Next, we propose a new specification inference algorithm, DICE-Miner, to infer FSAs using the traces, guided by the temporal specifications. We find that the inferred specifications are of higher quality than those produced by existing state-of-the-art specification miners. Finally, we use the FSAs in a fuzzer for servers of stateful protocols, increasing its coverage.
... Although Defects4J is widely used in SE experimentation [4,28] and has allowed us to know actual bugs and have more reliable BIC commits as compared to other solutions we cannot guarantee that our results can be generalized to other contexts (e.g., Ray et al.'s dataset, industrial projects, non-Java projects, and so on) or the universe of Java projects. When we planned our study, we had to reach a trade-off between generalizability of results and reliability of measures. ...
Sentiment Polarity and Bug Introduction
Simone Romano
Maria Caulo
Giuseppe Scanniello
Danilo Caivano
Researchers have shown a growing interest in the affective states (i.e., emotions and moods) of developers while performing software engineering tasks. We investigate the association between developers' sentiment polarity—i.e., negativity and positivity—and bug introduction. To pursue our research objective, we executed a case-control study in the Mining Software Repository (MSR) context. Our exposures are developers' negativity and positivity captured, by using sentiment analysis, from commit comments of software repositories; while our "disease" is bug introduction—i.e., if the changes of a commit introduce bugs. We found that developers' negativity is associated to bug introduction, as well as developers' positivity. These findings seem to foster a continuous monitoring of developers' affective states so as to prevent the introduction of bugs or discover bugs as early as possible.
... A considerable amount of research has been conducted into regression testing techniques with a goal of improving the testing performance. This includes test case prioritization [1,50], reduction [51,52] and selection [53,54]. This Related Work section focuses on test case prioritization, which aims to detect faults as early as possible through the reordering of regression test cases [55,56]. ...
Regression test case prioritization by code combinations coverage
Jinfu Chen
Regression test case prioritization (RTCP) aims to improve the rate of fault detection by executing more important test cases as early as possible. Various RTCP techniques have been proposed based on different coverage criteria. Among them, a majority of techniques leverage code coverage information to guide the prioritization process, with code units being considered individually, and in isolation. In this paper, we propose a new coverage criterion, code combinations coverage, that combines the concepts of code coverage and combination coverage. We apply this coverage criterion to RTCP, as a new prioritization technique, code combinations coverage based prioritization (CCCP). We report on empirical studies conducted to compare the testing effectiveness and efficiency of CCCP with four popular RTCP techniques: total, additional, adaptive random, and search-based test prioritization. The experimental results show that even when the lowest combination strength is assigned, overall, the CCCP fault detection rates are greater than those of the other four prioritization techniques. The CCCP prioritization costs are also found to be comparable to the additional test prioritization technique. Moreover, our results also show that when the combination strength is increased, CCCP provides higher fault detection rates than the state-of-the-art, regardless of the levels of code coverage.
Investigating the Adoption of History-based Prioritization in the Context of Manual Testing in a Real Industrial Setting
Vinicius Siqueira
State of Practical Applicability of Regression Testing Research: A Live Systematic Literature Review
ACM COMPUT SURV
Renan Greca
Context: Software regression testing refers to rerunning test cases after the system under test is modified, ascertaining that the changes have not (re-)introduced failures. Not all researchers' approaches consider applicability and scalability concerns, and not many have produced an impact in practice. Objective: One goal is to investigate industrial relevance and applicability of proposed approaches. Another is providing a live review, open to continuous updates by the community. Method: A systematic review of regression testing studies that are clearly motivated by or validated against industrial relevance and applicability is conducted. It is complemented by follow-up surveys with authors of the selected papers and 23 practitioners. Results: A set of 79 primary studies published between 2016-2022 is collected and classified according to approaches and metrics. Aspects relative to their relevance and impact are discussed, also based on their authors' feedback. All the data are made available from the live repository that accompanies the study. Conclusions: While widely motivated by industrial relevance and applicability, not many approaches are evaluated in industrial or large-scale open-source systems, and even fewer approaches have been adopted in practice. Some challenges hindering the implementation of relevant approaches are synthesized, also based on the practitioners' feedback.
Exploring Better Black-Box Test Case Prioritization via Log Analysis
Zhichao Chen
Weijing Wang
Jianmin Wang
Junjie Chen
Test case prioritization (TCP) has been widely studied in regression testing, which aims to optimize the execution order of test cases so as to detect more faults earlier. TCP has been divided into white-box test case prioritization (WTCP) and black-box test case prioritization (BTCP). WTCP can achieve better prioritization effectiveness by utilizing source code information, but is not applicable in many practical scenarios (where source code is unavailable, e.g., outsourced testing). BTCP has the benefit of not relying on source code information, but tends to be less effective than WTCP. That is, both WTCP and BTCP suffer from limitations in the practical use. To improve the practicability of TCP, we aim to explore better BTCP, significantly bridging the effectiveness gap between BTCP and WTCP. In this work, instead of statically analyzing test cases themselves in existing BTCP techniques, we conduct the first study to explore whether this goal can be achieved via log analysis. Specifically, we propose to mine test logs produced during test execution to more sufficiently reflect test behaviors, and design a new BTCP framework (called LogTCP), including log pre-processing, log representation, and test case prioritization components. Based on the LogTCP framework, we instantiate seven log-based BTCP techniques by combining different log representation strategies with different prioritization strategies. We conduct an empirical study to explore the effectiveness of LogTCP. Based on 10 diverse open-source Java projects from GitHub, we compared LogTCP with three representative BTCP techniques and four representative WTCP techniques. Our results show that all of our LogTCP techniques largely perform better than all the BTCP techniques in average fault detection, to the extent that then become competitive to the WTCP techniques. That demonstrates the great potential of logs in practical TCP.
TestSelector: Automatic Test Suite Selection for Student Projects
Computer Science course instructors routinely have to create comprehensive test suites to assess programming assignments. The creation of such test suites is typically not trivial as it involves selecting a limited number of tests from a set of (semi-)randomly generated ones. Manual strategies for test selection do not scale when considering large testing inputs needed, for instance, for the assessment of algorithms exercises. To facilitate this process, we present TestSelector, a new framework for automatic selection of optimal test suites for student projects. The key advantage of TestSelector over existing approaches is that it is easily extensible with arbitrarily complex code coverage measures, not requiring these measures to be encoded into the logic of an exact constraint solver. We demonstrate the flexibility of TestSelector by extending it with support for a range of classical code coverage measures and using it to select test suites for a number of real-world algorithms projects, further showing that the selected test suites outperform randomly selected ones in finding bugs in students' code.
Automatic Extraction of Behavioral Features for Test Program Similarity Analysis
Emanuele De Angelis
Alessandro Pellegrini
Maurizio Proietti
Transformation, vectorization, and optimization
Sahar Tahvili
Leo Hatvani
Chapter 3 focuses on transformation, vectorization, and optimization.
Comparing and combining file-based selection and similarity-based prioritization towards regression test orchestration
Milos Gligoric
Yulei Liu
Single and Multi-objective Test Cases Prioritization for Self-driving Cars in Virtual Environments
However, these tests are very expensive and are too many to be run frequently within limited time constraints. In this paper, we investigate test case prioritization techniques to increase the ability to detect SDC regression faults with virtual tests earlier. Our empirical study conducted in the SDC domain shows that
Value-based cost-cognizant test case prioritization for regression testing
Shahid Nazir Bhatti
Software Test Case Prioritization (TCP) is an effective approach for regression testing to tackle time and budget constraints. The major benefit of TCP is to save time through the prioritization of important test cases first. Existing TCP techniques can be categorized as value-neutral and value-based approaches. In a value-based fashion, the cost of test cases and severity of faults are considered whereas, in a value-neutral fashion these are not considered. The value-neutral fashion is dominant over value-based fashion, and it assumes that all test cases have equal cost and all software faults have equal severity. But this assumption rarely holds in practice. Therefore, value-neutral TCP techniques are prone to produce unsatisfactory results. To overcome this research gap, a paradigm shift is required from value-neutral to value-based TCP techniques. Currently, very limited work is done in a value-based fashion and to the best of the authors' knowledge, no comprehensive review of value-based cost-cognizant TCP techniques is available in the literature. To address this problem, a systematic literature review (SLR) of value-based cost-cognizant TCP techniques is presented in this paper. The core objective of this study is to combine the overall knowledge related to value-based cost-cognizant TCP techniques and to highlight some open research problems of this domain. Initially, 165 papers were reviewed from the prominent research repositories. Among these 165 papers, 21 papers were selected by using defined inclusion/exclusion criteria and quality assessment procedures. The established questions are answered through a thorough analysis of the selected papers by comparing their research contributions in terms of the algorithm used, the performance evaluation metric, and the results validation method used. Total 12 papers used an algorithm for their technique but 9 papers didn't use any algorithm. Particle Swarm Optimization (PSO) Algorithm is dominantly used. For results validation, 4 methods are used including, Empirical study, Experiment, Case study, and Industrial case study. The experiment method is dominantly used. Total 6 performance evaluation metrics are used and the APFDc metric is dominantly used. This SLR yields that value-orientation and cost cognition are vital in the TCP process to achieve its intended goals and there is great research potential in this research domain.
ExVivoMicroTest: ExVivo Testing of Microservices
Luca Gazzola
Maayan Goldstein
Leonardo Mariani
Luca Ussi
Microservice-based applications consist of multiple services that can evolve independently. When a service must be updated, it is first tested with in-house regression test suites. However, the test suites that are executed are usually designed without the exact knowledge about how the services will be accessed and used in the field; therefore, they may easily miss relevant test scenarios, failing to prevent the deployment of faulty services. To address this problem, we introduce ExVivoMicroTest, an approach that analyzes the execution of deployed services at run-time in the field, in order to generate test cases for future versions of the same services. ExVivoMicroTest implements lightweight monitoring and tracing capabilities, to inexpensively record executions that can be later turned into regression test cases that capture how services are used in the field. To prevent accumulating an excessive number of test cases, ExVivoMicroTest uses a test coverage model that can discriminate the recorded executions between the ones that are worth to be turned into test cases and the ones that should be discarded. The resulting test cases use a mocked environment that fully isolates the service under test from the rest of the system to faithfully reply interactions. We assessed ExVivoMicroTest with the PiggyMetrics and Train Ticket open source microservice applications and studied how different configurations of the monitoring and tracing logic impact on the capability to generate test cases.
Parallel Test Prioritization
Dan Hao
Although regression testing is important to guarantee software quality during software evolution, it suffers from the widely known cost problem. To address this problem, researchers have made dedicated efforts on test prioritization, which optimizes the execution order of tests to detect faults earlier, while practitioners in industry have leveraged more computing resources to save the time cost of regression testing. By combining these two orthogonal solutions, in this article, we define the problem of parallel test prioritization, which is to conduct test prioritization in the scenario of parallel test execution to reduce the cost of regression testing. Different from traditional sequential test prioritization, parallel test prioritization aims at generating a set of test sequences, each of which is allocated to an individual computing resource and executed in parallel. In particular, we propose eight parallel test prioritization techniques by adapting the existing four sequential test prioritization techniques, by including and excluding testing time in prioritization. To investigate the performance of the eight parallel test prioritization techniques, we conducted an extensive study on 54 open-source projects and a case study on 16 commercial projects from Baidu, a famous search service provider with 600M monthly active users. According to the two studies, parallel test prioritization does improve the efficiency of regression testing, and the cost-aware additional parallel test prioritization technique significantly outperforms the other techniques, indicating that this technique is a good choice for practical parallel testing. Besides, we also investigated the influence of two external factors, the number of computing resources and time allowed for parallel testing, and find that more computing resources indeed improve the performance of parallel test prioritization. In addition, we investigated the influence of two more factors, test granularity and coverage criterion, and find that parallel test prioritization can still accelerate regression testing in the parallel scenario. Moreover, we investigated the benefit of parallel test prioritization on the regression testing process of continuous integration, considering both the cumulative acceleration performance and the overhead of prioritization techniques, and the results demonstrate the superiority of parallel test prioritization.
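One way to make the cost-aware flavour of parallel test prioritization concrete: take an already prioritized, time-annotated suite and distribute it over k workers by always handing the next test to the worker with the least accumulated execution time. This is a hedged illustration of the general idea only, not one of the eight techniques evaluated in the article.

```python
import heapq

def allocate_parallel(prioritized_tests, test_times, k):
    """Split a prioritized test sequence across k parallel workers.

    Greedy, cost-aware allocation: the next most important test goes to
    the worker that currently has the smallest accumulated runtime.
    Returns a list of k test sequences.
    """
    sequences = [[] for _ in range(k)]
    heap = [(0.0, i) for i in range(k)]   # (accumulated time, worker id)
    heapq.heapify(heap)
    for test in prioritized_tests:
        load, worker = heapq.heappop(heap)
        sequences[worker].append(test)
        heapq.heappush(heap, (load + test_times[test], worker))
    return sequences

suite = ["t1", "t2", "t3", "t4", "t5"]          # already prioritized order
times = {"t1": 3.0, "t2": 1.0, "t3": 2.0, "t4": 4.0, "t5": 1.0}
print(allocate_parallel(suite, times, 2))       # [['t1', 't4'], ['t2', 't3', 't5']]
```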
Access Control Tree for Testing and Learning
Davrondzhon Gafurov
Arne Erik Hurum
Margrete Sunde Grovan
Dissimilarity‐based test case prioritization through data fusion
Yinyin Xu
Ning Yang
Test case prioritization (TCP) aims at scheduling test case execution so that more important test cases are executed as early as possible. Many TCP techniques have been proposed, according to different concepts and principles, with dissimilarity-based TCP (DTCP) prioritizing tests based on the concept of test case dissimilarity: DTCP chooses the next test case from a set of candidates such that the chosen test case is farther away from previously selected test cases than the other candidates. DTCP techniques typically only use one aspect/granularity of the information or features from test cases to support the prioritization process. In this article, we adopt the concept of data fusion to propose a new family of DTCP techniques, data-fusion-driven DTCP (DDTCP), which attempts to use different information granularities for prioritizing test cases by dissimilarity. We performed an empirical study involving 30 versions of five subject programs, investigating the testing effectiveness and efficiency by comparing DDTCP against DTCP techniques that use a dissimilarity granularity. The experimental results show that not only does DDTCP have better fault-detection rates than single-granularity DTCP techniques, but it also appears to only incur similar prioritization costs. The results also show that DDTCP remains robust over multiple system releases.
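A minimal sketch of the dissimilarity-based idea with a naive form of data fusion: distances computed at two coverage granularities (say, method level and statement level) are averaged, and tests are ordered farthest-first from the ones already selected. The Jaccard distance and the averaging rule are assumptions chosen for illustration; the DDTCP techniques in the article may fuse the information differently.

```python
def jaccard_distance(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def fused_distance(t1, t2, profiles_by_granularity):
    """Average the Jaccard distances computed at each granularity."""
    dists = [jaccard_distance(p[t1], p[t2]) for p in profiles_by_granularity]
    return sum(dists) / len(dists)

def prioritize_farthest_first(tests, profiles_by_granularity):
    remaining = list(tests)
    ordered = [remaining.pop(0)]          # seed with the first test (arbitrary choice)
    while remaining:
        # pick the candidate whose minimum distance to the already
        # ordered tests is largest, i.e. the most dissimilar one
        nxt = max(remaining,
                  key=lambda t: min(fused_distance(t, s, profiles_by_granularity)
                                    for s in ordered))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

methods = {"t1": {"m1"}, "t2": {"m1", "m2"}, "t3": {"m3"}}
stmts   = {"t1": {1, 2}, "t2": {1, 2, 3}, "t3": {7, 8}}
print(prioritize_farthest_first(["t1", "t2", "t3"], [methods, stmts]))  # ['t1', 't3', 't2']
```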
A dataset of regressions in web applications detected by end-to-end tests
Óscar Soto-Sánchez
Michel Maes Bermejo
Micael Gallego
Francisco Gortázar
End-to-end tests present many challenges in the industry. The long running times of these tests make it difficult to apply research on, for instance, test case prioritization or test case selection to them, as most work on these two problems is based on datasets of unit tests, which are fast to run and for which time is not usually a criterion. This is partly because there is no dataset of end-to-end tests, due to the infrastructure needed for running this kind of test, the complexity of the setup and the lack of proper characterization of the faults and their fixes. Therefore, running end-to-end tests for any research work is hard and time-consuming, and the availability of a dataset containing regression bugs, documentation and logs for these tests might foster the usage of end-to-end tests in research works. This paper presents a) a dataset for this kind of tests, including six well-documented manually injected regression bugs and their corresponding fixes in three web applications built using Java and the Spring framework; b) tools for easing the execution of these tests no matter the infrastructure; and c) a comparative study with two well-known datasets of unit tests. The comparative study shows that there are important differences between end-to-end and unit tests, such as their execution time and the amount of resources they consume, which are much higher for end-to-end tests. End-to-end testing deserves some attention from researchers. Our dataset is a first effort toward easing the usage of end-to-end tests in research works.
A Systematic Literature Review on Regression Test Case Prioritization
Ani Rahmani
Sabrina Ahmad
Intan Ermahani A. Jalil
Adhitia Putra Herawan
Test case prioritization (TCP) is considered an effective way to improve testing efficiency, especially in regression testing, as retest-all is costly. TCP schedules the test case execution order to detect bugs faster. For this benefit, test case prioritization has been intensively studied. This paper reviews the development of TCP for regression testing with 48 papers from 2017 to 2020. In this paper, we present four critical surveys. First is the development of approaches and techniques in regression TCP studies, second is the identification of software under test (SUT) variations used in TCP studies, third is the trend of metrics used to measure the effectiveness of TCP studies, and fourth is the state-of-the-art of requirements-based TCP. Furthermore, we discuss development opportunities and potential future directions on regression TCP. Our review provides evidence that interest in TCP is increasing. We also discovered that requirements-based utilization would help to prepare test cases earlier to improve TCP effectiveness.
Architectural Technical Debt: Identification and Management
Architectural technical debt (ATD) in a software-intensive system is the sum of all design choices that may have been suitable or even optimal at the time they were made, but which today are significantly impeding progress: structure, framework, technology, languages, etc. Unlike code-level technical debt which can be readily detected by static analysers, and can often be refactored with minimal or only incremental efforts, architectural debt is hard to detect, and its remediation rather wide-ranging, daunting, and often avoided. The objective of this thesis is to develop a better understanding of architectural technical debt, and determine what strategies can be used to identify and manage it. In order to do so, we adopt a wide range of research techniques, including literature reviews, case studies, interviews with practitioners, and grounded theory. The result of our investigation, deeply grounded in empirical data, advances the field not only by providing novel insights into ATD related phenomena, but also by presenting approaches to pro-actively identify ATD instances, leading to its eventual management and resolution.
Empirically evaluating readily available information for regression test optimization in continuous integration
Daniel Elsner
Florian Hauer
Alexander Pretschner
Silke Reimer
RLTCP: A reinforcement learning approach to prioritizing automated user interface tests
INFORM SOFTWARE TECH
Vu Nguyen
Bach Le
Context: User interface testing validates the correctness of an application through visual cues and interactive events emitted in real-world usage. Performing user interface tests is a time-consuming process, and thus many studies have focused on prioritizing test cases to help maintain the effectiveness of testing while reducing the need for a full execution. Objective: This paper describes a novel test prioritization method called RLTCP whose goal is to maximize the number of faults detected while reducing the amount of testing. Methods: We define a weighted coverage graph to model the underlying association among test cases for user interface testing. Our method combines Reinforcement Learning (RL) and the coverage graph to prioritize test cases. While RL has been found to be suitable for rapidly changing projects with abundant historical data, the coverage graph considers in depth the event-based aspects of user interface testing and provides a fine-grained level at which the RL system can gain more insights into individual test cases. Results: We experiment and assess the proposed method using nine data sets obtained from two mature web applications, finding that the method outperforms the six compared methods, including the state of the art. Conclusions: The use of both reinforcement learning and the underlying structure of user interface tests modeled via the coverage graph has the potential to improve the performance of test prioritization methods. Our study also shows the benefit of using the coverage graph to gain insights into test cases, their relationships and execution history.
There have been numerous studies on mining temporal specifications from execution traces. These approaches learn finite-state automata (FSA) from execution traces when running tests. To learn accurate specifications of a software system, many tests are required. Existing approaches generalize from a limited number of traces or use simple test generation strategies. Unfortunately, these strategies may not exercise uncommon usage patterns of a software system. To address this problem, we propose a new approach, adversarial specification mining, and develop a prototype, Diversity through Counter-examples (DICE). DICE has two components: DICE-Tester and DICE-Miner. After mining Linear Temporal Logic specifications from an input test suite, DICE-Tester adversarially guides test generation, searching for counterexamples to these specifications to invalidate spurious properties. These counterexamples represent gaps in the diversity of the input test suite. This process produces execution traces of usage patterns that were unrepresented in the input test suite. Next, we propose a new specification inference algorithm, DICE-Miner, to infer FSAs using the traces, guided by the temporal specifications. We find that the inferred specifications are of higher quality than those produced by existing state-of-the-art specification miners. Finally, we use the FSAs in a fuzzer for servers of stateful protocols, increasing its coverage.
JTeC: A Large Collection of Java Test Classes for Test Code Analysis and Processing
Federico Corò
LCCSS: A Similarity Metric for Identifying Similar Test Code
Lucas Pereira da Silva
Patricia Vilain
RTPTorrent: An Open-source Dataset for Evaluating Regression Test Prioritization
Toni Mattis
Patrick Rein
Falco Dürsch
Robert Hirschfeld
iSENSE2.0: Improving Completion-aware Crowdtesting Management with Duplicate Tagger and Sanity Checker
Wang Junjie
Ye Yang
Tim Menzies
Software engineers are regularly asked "how much testing is enough". Existing approaches in software testing management employ experience-, risk-, or value-based analysis to prioritize and manage testing processes. However, very few are applicable to the emerging crowdtesting paradigm, which must cope with extremely limited information and control over unknown, online crowdworkers. In practice, deciding when to close a crowdtesting task is largely done by experience-based guesswork and frequently results in ineffective crowdtesting. More specifically, an average of 32% of testing cost is found to be wasted in current crowdtesting practice. This article addresses this challenge by introducing automated decision support for monitoring and determining the appropriate time to close crowdtesting tasks. To that end, it first investigates the necessity and feasibility of close prediction of crowdtesting tasks based on an industrial dataset. Next, it proposes a close prediction approach named iSENSE2.0, which applies an incremental sampling technique to process crowdtesting reports arriving in chronological order and organizes them into fixed-sized groups as dynamic inputs. Then, a duplicate tagger analyzes the duplicate status of received crowd reports, and a CRC-based (Capture-ReCapture) close estimator generates the close decision based on the dynamic bug arrival status. In addition, a coverage-based sanity checker is designed to reinforce the stability and performance of close prediction. Finally, the evaluation of iSENSE2.0 is conducted on 56,920 reports of 306 crowdtesting tasks from one of the largest crowdtesting platforms. The results show that a median of 100% of bugs can be detected with 30% of cost saved. The performance of iSENSE2.0 does not differ significantly from the state-of-the-art approach iSENSE, although the latter relies on duplicate tags, which are generally considered time-consuming and tedious to obtain.
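The CRC-based (Capture-ReCapture) close estimator mentioned above builds on capture-recapture population estimation. Below is a minimal sketch using the classic Lincoln-Petersen estimator on two samples of crowdtesting reports; iSENSE2.0 itself may use a different CRC model and closing rule, so this only illustrates the underlying idea.

```python
def lincoln_petersen(sample1, sample2):
    """Estimate the total number of distinct bugs from two report samples.

    sample1, sample2: sets of bug identifiers observed in two groups of
    incoming crowd reports. The estimate is n1 * n2 / m, where m is the
    number of bugs seen in both samples.
    """
    n1, n2 = len(sample1), len(sample2)
    m = len(sample1 & sample2)
    if m == 0:
        return float("inf")          # no overlap yet: cannot estimate
    return n1 * n2 / m

def should_close(detected_bugs, estimated_total, target_ratio=0.9):
    """Close the crowdtesting task once the detected bugs reach a chosen
    fraction of the estimated total (the threshold here is illustrative)."""
    return detected_bugs / estimated_total >= target_ratio

first_batch = {"b1", "b2", "b3", "b4"}
second_batch = {"b2", "b3", "b5"}
total = lincoln_petersen(first_batch, second_batch)            # 4 * 3 / 2 = 6.0
print(total, should_close(len(first_batch | second_batch), total))   # 6.0 False
```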
Taming Google-Scale Continuous Testing
Atif Memon
Zebao Gao
Bao Nguyen
John Micco
Comparing white-box and black-box test prioritization
Christopher Henard
Mike Papadakis
Yves Le Traon
Mark Harman
Although white-box regression test prioritization has been well-studied, the more recently introduced black-box prioritization approaches have neither been compared against each other nor against more well-established white-box techniques. We present a comprehensive experimental comparison of several test prioritization techniques, including well-established white-box strategies and more recently introduced black-box approaches. We found that Combinatorial Interaction Testing and diversity-based techniques (Input Model Diversity and Input Test Set Diameter) perform best among the black-box approaches. Perhaps surprisingly, we found little difference between black-box and white-box performance (at most 4% fault detection rate difference). We also found the overlap between black- and white-box faults to be high: the first 10% of the prioritized test suites already agree on at least 60% of the faults found. These are positive findings for practicing regression testers who may not have source code available, thereby making white-box techniques inapplicable. We also found evidence that both black-box and white-box prioritization remain robust over multiple system releases.
On rapid releases and software testing: a case study and a semi-systematic literature review
Mika Mäntylä
Bram Adams
Foutse Khomh
Kai Petersen
Large open and closed source organizations like Google, Facebook and Mozilla are migrating their products towards rapid releases. While this allows faster time-to-market and user feedback, it also implies less time for testing and bug fixing. Since initial research results indeed show that rapid releases fix proportionally fewer reported bugs than traditional releases, this paper investigates the changes in software testing effort after moving to rapid releases in the context of a case study on Mozilla Firefox, and performs a semi-systematic literature review. The case study analyzes the results of 312,502 execution runs of the 1,547 mostly manual system-level test cases of Mozilla Firefox from 2006 to 2012 (5 major traditional and 9 major rapid releases), and triangulates our findings with a Mozilla QA engineer. We find that rapid releases have a narrower test scope that enables a deeper investigation of the features and regressions with the highest risk. Furthermore, rapid releases make testing more continuous and have proportionally smaller spikes before the main release. However, rapid releases make it more difficult to build a large testing community, and they decrease test suite diversity and make testing more deadline oriented. In addition, our semi-systematic literature review presents the benefits, problems and enablers of rapid releases from 24 papers found using systematic search queries and a similar number of papers found through other means. The literature review shows that rapid releases are a prevalent industrial practice that are utilized even in some highly critical domains of software engineering, and that rapid releases originated from several software development methodologies such as agile, open source, lean and internet-speed software development. However, empirical studies providing evidence of the claimed advantages and disadvantages of rapid releases are scarce.
Test Set Diameter: Quantifying the Diversity of Sets of Test Cases
Simon Poulding
Shin Yoo
A common and natural intuition among software testers is that test cases need to differ if a software system is to be tested properly and its quality ensured. Consequently, much research has gone into formulating distance measures for how test cases, their inputs and/or their outputs differ. However, common to these proposals is that they are data type specific and/or calculate the diversity only between pairs of test inputs, traces or outputs. We propose a new metric to measure the diversity of sets of tests: the test set diameter (TSDm). It extends our earlier, pairwise test diversity metrics based on recent advances in information theory regarding the calculation of the normalized compression distance (NCD) for multisets. An advantage is that TSDm can be applied regardless of data type and on any test-related information, not only the test inputs. A downside is the increased computational time compared to competing approaches. Our experiments on four different systems show that the test set diameter can help select test sets with higher structural and fault coverage than random selection even when only applied to test inputs. This can enable early test design and selection, prior to even having a software system to test, and complement other types of test automation and analysis. We argue that this quantification of test set diversity creates a number of opportunities to better understand software quality and provides practical ways to increase it.
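To make the metric concrete, here is a rough sketch of a test set diameter computed with zlib as a stand-in compressor and a multiset NCD of the form (C(X) - min_x C(x)) / max_x C(X \ {x}), where C is the compressed length of the concatenated elements. Real TSDm computations rely on stronger compressors and on the exact definitions in the paper; the formulation below is an approximation for illustration only.

```python
import zlib

def c(strings):
    """Compressed length (zlib) of the concatenation of a multiset of
    strings; sorting makes the value independent of element order."""
    return len(zlib.compress("".join(sorted(strings)).encode("utf-8"), 9))

def test_set_diameter(test_inputs):
    """Sketch of TSDm: a compression-based diversity of a set of test inputs."""
    xs = list(test_inputs)
    whole = c(xs)
    min_single = min(c([x]) for x in xs)
    max_leave_one_out = max(c(xs[:i] + xs[i + 1:]) for i in range(len(xs)))
    return (whole - min_single) / max_leave_one_out

similar = ["abcabcabc", "abcabcabd", "abcabcabe"]
diverse = ["abcabcabc", "xyzuvwpqr", "1234567890"]
# the diverse set is expected to have the larger diameter
print(test_set_diameter(similar), test_set_diameter(diverse))
```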
Defects4J: a database of existing faults to enable controlled testing studies for Java programs
Rene Just
Darioush Jalali
Michael Ernst
Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source programs. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program's version control system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to conduct and reproduce empirical studies. Defects4J is publicly available at http://defects4j.org.
Similarity-based test case prioritization using ordered sequences of program entities
Zhenyu Chen
Kun Wu
Zhihong Zhao
Test suites often grow very large over many releases, such that it is impractical to re-execute all test cases within limited resources. Test case prioritization rearranges test cases to improve the effectiveness of testing. Code coverage has been widely used as a criterion in test case prioritization. However, using coverage in this simple way may fail to reveal some bugs, so the fault detection rate decreases. In this paper, we use the ordered sequences of program entities to improve the effectiveness of test case prioritization. The execution frequency profiles of test cases are collected and transformed into the ordered sequences. We propose several novel similarity-based test case prioritization techniques based on the edit distances of ordered sequences. An empirical study of five open source programs was conducted. The experimental results show that our techniques can significantly increase the fault detection rate and be effective in detecting faults in loops. Moreover, our techniques are more cost-effective than the existing techniques.
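The core ingredient described above is an edit distance between the ordered sequences of program entities exercised by two tests; a farthest-first loop (as in the dissimilarity-based sketch earlier in this list) can then consume it. A minimal, illustrative implementation of the distance:

```python
def edit_distance(seq_a, seq_b):
    """Levenshtein distance between two ordered sequences of program
    entities (e.g. method or branch identifiers), computed with the
    classic dynamic-programming recurrence."""
    prev = list(range(len(seq_b) + 1))
    for i, a in enumerate(seq_a, start=1):
        curr = [i]
        for j, b in enumerate(seq_b, start=1):
            cost = 0 if a == b else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

trace_t1 = ["m1", "m2", "m3", "m2"]
trace_t2 = ["m1", "m3", "m2"]
print(edit_distance(trace_t1, trace_t2))   # 1 (delete the second entity)
```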
Test Case Prioritization for Continuous Regression Testing: An Industrial Case Study
Dusica Marijan
Arnaud Gotlieb
Sagar Sen
Regression testing in a continuous integration environment is bounded by tight time constraints. To satisfy time constraints and achieve testing goals, test cases must be efficiently ordered in execution. Prioritization techniques are commonly used to order test cases to reflect their importance according to one or more criteria. Reduced time to test or high fault detection rate are such important criteria. In this paper, we present a case study of a test prioritization approach ROCKET (Prioritization for Continuous Regression Testing) to improve the efficiency of continuous regression testing of industrial video conferencing software. ROCKET orders test cases based on historical failure data, test execution time and domain-specific heuristics. It uses a weighted function to compute test priority. The weights are higher if tests uncover regression faults in recent iterations of software testing and reduce time to detection of faults. The results of the study show that the test cases prioritized using ROCKET (1) provide faster fault detection, and (2) increase regression fault detection rate, revealing 30% more faults for 20% of the test suite executed, compared to manually prioritized test cases.
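A hedged sketch of the weighted-function idea behind ROCKET: each test receives a score that combines recent failure history with execution time. The weights, the decay scheme and the normalization below are assumptions made for illustration, not the values used in the industrial case study.

```python
def rocket_like_priority(tests, failure_history, exec_time,
                         w_fail=0.8, w_time=0.2):
    """Order tests by a weighted score of historical failures and cost.

    failure_history: per test, list of 0/1 outcomes over recent runs
                     (most recent first); recent failures weigh more.
    exec_time:       per test, execution time in seconds.
    Weights and the decay scheme are illustrative assumptions.
    """
    max_time = max(exec_time[t] for t in tests) or 1.0
    def score(t):
        # geometric decay: the most recent outcome counts the most
        fail_score = sum(outcome * (0.5 ** i)
                         for i, outcome in enumerate(failure_history[t]))
        time_score = 1.0 - exec_time[t] / max_time   # cheaper is better
        return w_fail * fail_score + w_time * time_score
    return sorted(tests, key=score, reverse=True)

history = {"t1": [1, 0, 0], "t2": [0, 0, 1], "t3": [0, 0, 0]}
times = {"t1": 10.0, "t2": 2.0, "t3": 1.0}
print(rocket_like_priority(["t1", "t2", "t3"], history, times))   # ['t1', 't2', 't3']
```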
Normalized Compression Distance of Multisets with Applications
IEEE T PATTERN ANAL
Andrew R. Cohen
Paul M. B. Vitányi
Normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity measure between a pair of finite objects based on compression. However, it is not sufficient for all applications. We propose an NCD of finite multisets (a.k.a. multiples) of finite objects that is also a metric. Previously, attempts to obtain such an NCD failed. We cover the entire trajectory from theoretical underpinning to feasible practice. The new NCD for multisets is applied to retinal progenitor cell classification questions and to related synthetically generated data that were earlier treated with the pairwise NCD. With the new method we achieved significantly better results. Similarly for questions about axonal organelle transport. We also applied the new NCD to handwritten digit recognition and improved classification accuracy significantly over that of pairwise NCD by incorporating both the pairwise and NCD for multisets. In the analysis we use the incomputable Kolmogorov complexity that for practical purposes is approximated from above by the length of the compressed version of the file involved, using a real-world compression program. Index Terms--- Normalized compression distance, multisets or multiples, pattern recognition, data mining, similarity, classification, Kolmogorov complexity, retinal progenitor cells, synthetic data, organelle transport, handwritten character recognition
On the Fault-Detection Capabilities of Adaptive Random Test Case Prioritization: Case Studies with Large Test Suites
Zhi Quan Zhou
Arnaldo Sinaga
Willy Susilo
An adaptive random (AR) testing strategy has recently been developed and examined by a growing body of research. More recently, this strategy has been applied to prioritizing regression test cases based on code coverage using the concepts of Jaccard Distance (JD) and Coverage Manhattan Distance (CMD). Code coverage, however, does not consider frequency; furthermore, no comparison between JD and CMD has yet been made. This research fills the gap by first investigating the fault-detection capabilities of using frequency information for AR test case prioritization, and then comparing JD and CMD. Experimental results show that "coverage" was more useful than "frequency" although the latter can sometimes complement the former, and that CMD was superior to JD. It is also found that, for certain faults, the conventional "additional" algorithm (widely accepted as one of the best algorithms for test case prioritization) could perform much worse than random testing on large test suites.
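The two distances compared in this study can be sketched as follows: a Jaccard distance over sets of covered code elements and a Manhattan distance over per-element coverage counts (one plausible way frequency information could enter; the paper's exact CMD definition may differ).

```python
def jaccard_distance(cov_a, cov_b):
    """1 - |A ∩ B| / |A ∪ B| over the sets of covered code elements."""
    a, b = set(cov_a), set(cov_b)
    union = a | b
    return (1.0 - len(a & b) / len(union)) if union else 0.0

def coverage_manhattan_distance(count_a, count_b):
    """Sum of absolute differences between per-element coverage counts."""
    elements = set(count_a) | set(count_b)
    return sum(abs(count_a.get(e, 0) - count_b.get(e, 0)) for e in elements)

t1 = {"s1": 3, "s2": 1}          # statement -> execution count
t2 = {"s1": 1, "s3": 2}
print(jaccard_distance(t1, t2))               # 1 - 1/3 ≈ 0.667
print(coverage_manhattan_distance(t1, t2))    # |3-1| + |1-0| + |0-2| = 5
```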
Adaptive Random Testing: The ART of test case diversity
Tsong Yueh Chen
Fei-Ching Kuo
Robert G. Merkel
T.H. Tse
Random testing is not only a useful testing technique in itself, but also plays a core role in many other testing methods. Hence, any significant improvement to random testing has an impact throughout the software testing community. Recently, Adaptive Random Testing (ART) was proposed as an effective alternative to random testing. This paper presents a synthesis of the most important research results related to ART. In the course of our research and through further reflection, we have realised how the techniques and concepts of ART can be applied in a much broader context, which we present here. We believe such ideas can be applied in a variety of areas of software testing, and even beyond software testing. Amongst these ideas, we particularly note the fundamental role of diversity in test case selection strategies. We hope this paper serves to provoke further discussions and investigations of these ideas.
Adaptive Random Test Case Prioritization
Bo Jiang
Zhenyu Zhang
Wing Kwong Chan
Regression testing assures changed programs against unintended amendments. Rearranging the execution order of test cases is a key idea to improve their effectiveness. Paradoxically, many test case prioritization techniques resolve tie cases using the random selection approach, and yet random ordering of test cases has been considered ineffective. Existing unit testing research unveils that adaptive random testing (ART) is a promising candidate that may replace random testing (RT). In this paper, we not only propose a new family of coverage-based ART techniques, but also show empirically that they are statistically superior to the RT-based technique in detecting faults. Furthermore, one of the ART prioritization techniques is consistently comparable to some of the best coverage-based prioritization techniques (namely, the "additional" techniques) and yet involves much less time cost. Keywords—Adaptive random testing; test case prioritization
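A minimal sketch of coverage-based adaptive random prioritization as described above: each step draws a random candidate set from the not-yet-prioritized tests and keeps the candidate whose minimum distance to the already prioritized tests is largest. The candidate-set size and the use of a Jaccard distance are illustrative choices, not necessarily those of the proposed family of techniques.

```python
import random

def art_prioritize(tests, coverage, candidate_size=5, seed=0):
    """Adaptive random test case prioritization (coverage-based sketch)."""
    rng = random.Random(seed)

    def dist(a, b):                       # Jaccard distance on coverage sets
        u = coverage[a] | coverage[b]
        return (1.0 - len(coverage[a] & coverage[b]) / len(u)) if u else 0.0

    remaining = list(tests)
    ordered = [remaining.pop(rng.randrange(len(remaining)))]
    while remaining:
        candidates = rng.sample(remaining, min(candidate_size, len(remaining)))
        best = max(candidates,
                   key=lambda c: min(dist(c, p) for p in ordered))
        remaining.remove(best)
        ordered.append(best)
    return ordered

cov = {"t1": {1, 2}, "t2": {1, 2, 3}, "t3": {9}, "t4": {2, 9}}
print(art_prioritize(["t1", "t2", "t3", "t4"], cov, candidate_size=2))
```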
Prioritising Test Cases with String Distances
Yves Ledru
Alexandre Petrenko
Sergiy Boroday
Nadine Mandran
Test case prioritisation aims at finding an ordering which enhances a certain property of an ordered test suite. Traditional techniques rely on the availability of code or a specification of the program under test. We propose to use string distances on the text of test cases for their comparison and elaborate a prioritisation algorithm. Such a prioritisation does not require code or a specification and can be useful for initial testing and in cases when code is difficult to instrument. In this paper, we also report on experiments performed on the "Siemens Test Suite", where the proposed prioritisation technique was compared with random permutations and four classical string distance metrics were evaluated. The obtained results, confirmed by a statistical analysis, indicate that prioritisation based on string distances is more efficient in finding defects than random ordering of the test suite: the test suites prioritized using string distances are more efficient in detecting the strongest mutants, and, on average, have a better APFD than randomly ordered test suites. The results suggest that string distances can be used for prioritisation purposes, and Manhattan distance could be the best choice.
Big data: The next frontier for innovation, competition, and productivity
J. Manyika
Michael Chui
Brad Brown
Angela Hung Byers
A Large-Scale Empirical Comparison of Static and Dynamic Test Case Prioritization Techniques
Qi Luo
Kevin Moran
The large body of existing research in Test Case Prioritization (TCP) techniques can be broadly classified into two categories: dynamic techniques (that rely on run-time execution information) and static techniques (that operate directly on source and test code). Absent from this current body of work is a comprehensive study aimed at understanding and evaluating the static approaches and comparing them to dynamic approaches on a large set of projects. In this work, we perform the first extensive study aimed at empirically evaluating four static TCP techniques, comparing them with state-of-research dynamic TCP techniques at different test-case granularities (e.g., method and class-level) in terms of effectiveness, efficiency and similarity of faults detected. This study was performed on 30 real-world Java programs encompassing 431 KLoC. In terms of effectiveness, we find that the static call-graph-based technique outperforms the other static techniques at test-class level, but the topic-model-based technique performs better at test-method level. In terms of efficiency, the static call-graph-based technique is also the most efficient when compared to other static techniques. When examining the similarity of faults detected for the four static techniques compared to the four dynamic ones, we find that on average, the faults uncovered by these two groups of techniques are quite dissimilar, with the top 10% of test cases agreeing on only ≈ 25% - 30% of detected faults. This prompts further research into the severity/importance of faults uncovered by these techniques, and into the potential for combining static and dynamic information for more effective approaches.
Learning for test prioritization: an industrial case study
Benjamin Busjaeger
Tao Xie
Modern cloud-software providers, such as Salesforce.com, increasingly adopt large-scale continuous integration environments. In such environments, assuring high developer productivity is strongly dependent on conducting testing efficiently and effectively. Specifically, to shorten feedback cycles, test prioritization is popularly used as an optimization mechanism for ranking tests to run by their likelihood of revealing failures. To apply test prioritization in industrial environments, we present a novel approach (tailored for practical applicability) that integrates multiple existing techniques via a systematic framework of machine learning to rank. Our initial empirical evaluation on a large real-world dataset from Salesforce.com shows that our approach significantly outperforms existing individual techniques.
Regression Testing Minimisation, Selection and Prioritisation – A Survey
Test-case prioritization: achievements and challenges
Hong Mei
Test-case prioritization, proposed at the end of last century, aims to schedule the execution order of test cases so as to improve test effectiveness. In the past years, test-case prioritization has gained much attention, and has significant achievements in five aspects: prioritization algorithms, coverage criteria, measurement, practical concerns involved, and application scenarios. In this article, we will first review the achievements of test-case prioritization from these five aspects and then give our perspectives on its challenges.
A similarity-based approach for test case prioritization using historical failure data
Tanzeem Bin Noor
Hadi Hemmati
Techniques for improving regression testing in continuous integration development environments
John Penix
S. Elbaum
Gregg Rothermel
In continuous integration development environments, software engineers frequently integrate new or changed code with the mainline codebase. This can reduce the amount of code rework that is needed as systems evolve and speed up development time. While continuous integration processes traditionally require that extensive testing be performed following the actual submission of code to the codebase, it is also important to ensure that enough testing is performed prior to code submission to avoid breaking builds and delaying the fast feedback that makes continuous integration desirable. In this work, we present algorithms that make continuous integration processes more cost-effective. In an initial pre-submit phase of testing, developers specify modules to be tested, and we use regression test selection techniques to select a subset of the test suites for those modules that render that phase more cost-effective. In a subsequent post-submit phase of testing, where dependent modules as well as changed modules are tested, we use test case prioritization techniques to ensure that failures are reported more quickly. In both cases, the techniques we utilize are novel, involving algorithms that are relatively inexpensive and do not rely on code coverage information -- two requirements for conducting testing cost-effectively in this context. To evaluate our approach, we conducted an empirical study on a large data set from Google that we make publicly available. The results of our study show that our selection and prioritization techniques can each lead to cost-effectiveness improvements in the continuous integration process.
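A hedged sketch of the pre-submit selection idea: starting from the modules a developer changed, walk the reverse dependency graph and select the test suites of every affected module, without any coverage data. The graph representation and the selection rule are assumptions made for illustration, not the algorithms used at Google.

```python
from collections import deque

def affected_modules(changed, reverse_deps):
    """Modules that (transitively) depend on any changed module."""
    affected, queue = set(changed), deque(changed)
    while queue:
        m = queue.popleft()
        for dependant in reverse_deps.get(m, ()):
            if dependant not in affected:
                affected.add(dependant)
                queue.append(dependant)
    return affected

def select_presubmit_tests(changed, reverse_deps, tests_of_module):
    """Select the test suites of all modules affected by a change."""
    return sorted(t for m in affected_modules(changed, reverse_deps)
                  for t in tests_of_module.get(m, ()))

rdeps = {"core": ["api", "ui"], "api": ["ui"]}      # core is used by api and ui
tests = {"core": ["core_T1"], "api": ["api_T1"], "ui": ["ui_T1", "ui_T2"]}
print(select_presubmit_tests(["core"], rdeps, tests))
# ['api_T1', 'core_T1', 'ui_T1', 'ui_T2']
```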
Mining of massive datasets: Second edition
Jure Leskovec
Anand Rajaraman
Jeffrey David Ullman
Written by leading authorities in database and Web technologies, this book is essential reading for students and practitioners alike. The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be applied successfully to even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The authors explain the tricks of locality-sensitive hashing and stream processing algorithms for mining data that arrives too fast for exhaustive processing. Other chapters cover the PageRank idea and related tricks for organizing the Web, the problems of finding frequent itemsets and clustering. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction.
Experimentation in Software Engineering
Claes Wohlin
Per Runeson
Martin Höst
Anders Wesslén
The experiment data from the operation is input to the analysis and interpretation. After collecting experimental data in the operation phase, we want to be able to draw conclusions based on this data. To be able to draw valid conclusions, we must interpret the experiment data.
Achieving scalable model-based testing through test case diversity
Lionel C. Briand
Andrea Arcuri
The increase in size and complexity of modern software systems requires scalable, systematic, and automated testing approaches. Model-based testing (MBT), as a systematic and automated test case generation technique, is being successfully applied to verify industrial-scale systems and is supported by commercial tools. However, scalability is still an open issue for large systems, as in practice there are limits to the amount of testing that can be performed in industrial contexts. Even with standard coverage criteria, the resulting test suites generated by MBT techniques can be very large and expensive to execute, especially for system level testing on real deployment platforms and network facilities. Therefore, a scalable MBT technique should be flexible regarding the size of the generated test suites and should be easily accommodated to fit resource and time constraints. Our approach is to select a subset of the generated test suite in such a way that it can be realistically executed and analyzed within the time and resource constraints, while preserving the fault revealing power of the original test suite to a maximum extent. In this article, to address this problem, we introduce a family of similarity-based test case selection techniques for test suites generated from state machines. We evaluate 320 different similarity-based selection techniques and then compare the effectiveness of the best similarity-based selection technique with other common selection techniques in the literature. The results based on two industrial case studies, in the domain of embedded systems, show significant benefits and a large improvement in performance when using a similarity-based approach. We complement these analyses with further studies on the scalability of the technique and the effects of failure rate on its effectiveness. We also propose a method to identify optimal tradeoffs between the number of test cases to run and fault detection.
Test Case Prioritization Using Requirements-Based Clustering
Md. Junaid Arafeen
Hyunsook Do
The importance of using requirements information in the testing phase has been well recognized by the requirements engineering community, but to date, a vast majority of regression testing techniques have primarily relied on software code information. Incorporating requirements information into the current testing practice could help software engineers identify the source of defects more easily, validate the product against requirements, and maintain software products in a holistic way. In this paper, we investigate whether the requirements-based clustering approach that incorporates traditional code analysis information can improve the effectiveness of test case prioritization techniques. To investigate the effectiveness of our approach, we performed an empirical study using two Java programs with multiple versions and requirements documents. Our results indicate that the use of requirements information during the test case prioritization process can be beneficial.
Test case prioritization: A systematic mapping study
Cagatay Catal
Deepti Mishra
Test case prioritization techniques, which are used to improve the cost-effectiveness of regression testing, order test cases in such a way that those cases that are expected to outperform others in detecting software faults are run earlier in the testing phase. The objective of this study is to examine what kind of techniques have been widely used in papers on this subject, determine which aspects of test case prioritization have been studied, provide a basis for the improvement of test case prioritization research, and evaluate the current trends of this research area. We searched for papers in the following five electronic databases: IEEE Explorer, ACM Digital Library, Science Direct, Springer, and Wiley. Initially, the search string retrieved 202 studies, but upon further examination of titles and abstracts, 120 papers were identified as related to test case prioritization. There exists a large variety of prioritization techniques in the literature, with coverage-based prioritization techniques (i.e., prioritization in terms of the number of statements, basic blocks, or methods test cases cover) dominating the field. The proportion of papers on model-based techniques is on the rise, yet the growth rate is still slow. The proportion of papers that use datasets from industrial projects is found to be 64 %, while those that utilize public datasets for validation are only 38 %. On the basis of this study, the following recommendations are provided for researchers: (1) Give preference to public datasets rather than proprietary datasets; (2) develop more model-based prioritization methods; (3) conduct more studies on the comparison of prioritization methods; (4) always evaluate the effectiveness of the proposed technique with well-known evaluation metrics and compare the performance with the existing methods; (5) publish surveys and systematic review papers on test case prioritization; and (6) use datasets from industrial projects that represent real industrial problems.
Regression test suite prioritization using system models
SOFTW TEST VERIF REL
Luay Tahat
Bogdan Korel
Hasan Ural
During regression testing, a modified system is often retested using an existing test suite. Since the size of the test suite may be very large, testers are interested in detecting faults in the modified system as early as possible during this retesting process. Test prioritization attempts to order tests for execution so that the chances of early detection of faults during retesting are increased. The existing prioritization methods are based on the source code of the system under test. In this paper, we present and evaluate two model-based selective methods and a dependence-based method of test prioritization utilizing the state-based model of the system under test. These methods assume that the modifications are made both on the system under test and its model. The existing test suite is executed on the system model and information about this execution is used to prioritize tests. Execution of the model is inexpensive as compared with execution of the system under test; therefore, the overhead associated with test prioritization is relatively small. In addition, we present an analytical framework for evaluation of test prioritization methods. This framework may reduce the cost of evaluation as compared with the framework that is based on observation. We have performed an empirical study in which we compared different test prioritization methods. The results of the empirical study suggest that system models may improve the effectiveness of test prioritization with respect to early fault detection.
Empirical Investigation of the Effects of Test Suite Properties on Similarity-Based Test Case Selection
L. Briand
Our experience with applying model-based testing on industrial systems showed that the generated test suites are often too large and costly to execute given project deadlines and the limited resources for system testing on real platforms. In such industrial contexts, it is often the case that only a small subset of test cases can be run. In previous work, we proposed novel test case selection techniques that minimize the similarities among selected test cases and outperforms other selection alternatives. In this paper, our goal is to gain insights into why and under which conditions similarity-based selection techniques, and in particular our approach, can be expected to work. We investigate the properties of test suites with respect to similarities among fault revealing test cases. We thus identify the ideal situation in which a similarity-based selection works best, which is useful for devising more effective similarity functions. We also address the specific situation in which a test suite contains outliers, that is a small group of very different test cases, and show that it decreases the effectiveness of similarity-based selection. We then propose, and successfully evaluate based on two industrial systems, a solution based on rank scaling to alleviate this problem.
On the use of a similarity function for test case selection in the context of model-based testing
Emanuela Cartaxo
Patrícia Machado
Test case selection in model-based testing is discussed focusing on the use of a similarity function. Automatically generated test suites usually have redundant test cases. The reason is that test generation algorithms are usually based on structural coverage criteria that are applied exhaustively. These criteria may not be helpful to detect redundant test cases as well as the suites are usually impractical due to the huge number of test cases that can be generated. Both problems are addressed by applying a similarity function. The idea is to keep in the suite the less similar test cases according to a goal that is defined in terms of the intended size of the test suite. The strategy presented is compared with random selection by considering transition-based and fault-based coverage. The results show that, in most of the cases, similarity-based selection can be more effective than random selection when applied to automatically generated test suites. Copyright © 2009 John Wiley & Sons, Ltd.
Supporting Controlled Experimentation with Testing Techniques: An Infrastructure and its Potential Impact
Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has had and can be expected to have.
Experimentation In Software Engineering: An Introduction
Like other sciences and engineering disciplines, software engineering requires a cycle of model building, experimentation, and learning. Experiments are valuable tools for all software engineers who are involved in evaluating and choosing between different methods, techniques, languages and tools. The purpose of Experimentation in Software Engineering is to introduce students, teachers, researchers, and practitioners to empirical studies in software engineering, using controlled experiments. The introduction to experimentation is provided through a process perspective, and the focus is on the steps that we have to go through to perform an experiment. The book is divided into three parts. The first part provides a background of theories and methods used in experimentation. Part II then devotes one chapter to each of the five experiment steps: scoping, planning, execution, analysis, and result presentation. Part III completes the presentation with two examples. Assignments and statistical material are provided in appendixes. Overall the book provides indispensable information regarding empirical studies in particular for experiments, but also for case studies, systematic literature reviews, and surveys. It is a revision of the authors' book, which was published in 2000. In addition, substantial new material, e.g. concerning systematic literature reviews and case study research, is introduced. The book is self-contained and it is suitable as a course book in undergraduate or graduate studies where the need for empirical studies in software engineering is stressed. Exercises and assignments are included to combine the more theoretical material with practical aspects. Researchers will also benefit from the book, learning more about how to conduct empirical studies, and likewise practitioners may use it as a "cookbook" when evaluating new methods or techniques before implementing them in their organization. © Springer-Verlag Berlin Heidelberg 2012. All rights are reserved.
Using spanning sets for coverage testing
Martina Marré
A test coverage criterion defines a set E_r of entities of the program flowgraph and requires that every entity in this set is covered under some test case. Coverage criteria are also used to measure the adequacy of the executed test cases. In this paper, we introduce the notion of spanning sets of entities for coverage testing. A spanning set is a minimum subset of E_r, such that a test suite covering the entities in this subset is guaranteed to cover every entity in E_r. When the coverage of an entity always guarantees the coverage of another entity, the former is said to subsume the latter. Based on the subsumption relation between entities, we provide a generic algorithm to find spanning sets for control flow and data flow-based test coverage criteria. We suggest several useful applications of spanning sets: they help reduce and estimate the number of test cases needed to satisfy coverage criteria. We also empirically investigate how the use of spanning sets affects the fault detection effectiveness.
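A hedged sketch of the subsumption idea: if every test that covers entity e1 also covers entity e2, then covering e1 guarantees covering e2, and a (not necessarily minimum) spanning set can be read off the entities whose covering-test sets are minimal under inclusion. The original work derives subsumption statically from the flowgraph; here it is approximated from a coverage matrix purely for illustration.

```python
def spanning_set(coverage):
    """coverage: entity -> set of tests that cover it.

    e1 subsumes e2 when tests(e1) is a subset of tests(e2): covering e1
    forces e2 to be covered. Entities whose test sets are minimal under
    inclusion (one representative per distinct minimal set) span all
    entities, though not necessarily minimally.
    """
    entities = list(coverage)
    spanning, seen_test_sets = [], []
    for e in entities:
        te = coverage[e]
        # e is minimal if no other entity has a strictly smaller test set
        if any(coverage[o] < te for o in entities if o != e):
            continue
        if te not in seen_test_sets:          # one representative per set
            seen_test_sets.append(te)
            spanning.append(e)
    return spanning

cov = {
    "branch_a": {"t1"},           # only t1 reaches it
    "stmt_x":  {"t1", "t2"},      # covered whenever branch_a is
    "stmt_y":  {"t2"},
    "stmt_z":  {"t1", "t2", "t3"},
}
print(spanning_set(cov))          # ['branch_a', 'stmt_y'] covers the rest
```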
Prioritizing test cases for regression testing
Mary Jean Harrold
Roland H. Untch
Chengyun Chu
Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites
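The first two families of techniques described above, total-coverage and additional-coverage ordering, can be sketched as follows (the fault-proneness-weighted variants are omitted):

```python
def total_coverage_order(tests, coverage):
    """Order tests by the total number of code elements each one covers."""
    return sorted(tests, key=lambda t: len(coverage[t]), reverse=True)

def additional_coverage_order(tests, coverage):
    """Greedy 'additional' strategy: repeatedly pick the test adding the
    most not-yet-covered elements; reset when nothing new can be added."""
    remaining = list(tests)
    ordered, covered = [], set()
    while remaining:
        best = max(remaining, key=lambda t: len(coverage[t] - covered))
        if not coverage[best] - covered:      # everything already covered:
            covered = set()                   # reset and keep going
            continue
        remaining.remove(best)
        ordered.append(best)
        covered |= coverage[best]
    return ordered

cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2}}
print(total_coverage_order(["t1", "t2", "t3", "t4"], cov))       # ['t1', 't2', 't4', 't3']
print(additional_coverage_order(["t1", "t2", "t3", "t4"], cov))  # ['t1', 't2', 't3', 't4']
```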
Test Case Prioritization: A Family of Empirical Studies
Alexey Malishevsky
To reduce the cost of regression testing, software testers may prioritize their test cases so that those which are more important, by some measure, are run earlier in the regression testing process. One potential goal of such prioritization is to increase a test suite's rate of fault detection. Previous work reported results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: 1) Can prioritization techniques be effective when targeted at specific modified versions; 2) what trade-offs exist between fine granularity and coarse granularity prioritization techniques; 3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? To address these questions, we have performed several new studies in which we empirically compared prioritization techniques using both controlled experiments and case studies
Finding Similar Files in a Large File System
Udi Manber
We present a tool, called sif, for finding all similar files in a large file system. Files are considered similar if they have a significant number of common pieces, even if they are very different otherwise. For example, one file may be contained, possibly with some changes, in another file, or a file may be a reorganization of another file. The running time for finding all groups of similar files, even for as little as 25% similarity, is on the order of 500MB to 1GB an hour. The amount of similarity and several other customized parameters can be determined by the user at a post-processing stage, which is very fast. Sif can also be used to very quickly identify all similar files to a query file using a preprocessed index. Applications of sif can be found in file management, information collecting (to remove duplicates), program reuse, file synchronization, data compression, and maybe even plagiarism detection.
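The idea of detecting partially similar files can be approximated with shingling: hash the overlapping fixed-length pieces of each file and compare the resulting fingerprint sets. The sampling and indexing tricks that make the actual tool fast on large file systems are omitted; this is an illustration of the similarity notion only.

```python
import hashlib

def fingerprints(text, k=8):
    """Hashes of all overlapping k-character pieces of a file (shingles)."""
    return {hashlib.md5(text[i:i + k].encode()).hexdigest()
            for i in range(max(1, len(text) - k + 1))}

def similarity(text_a, text_b, k=8):
    """Fraction of shared fingerprints (a rough containment measure)."""
    fa, fb = fingerprints(text_a, k), fingerprints(text_b, k)
    return len(fa & fb) / min(len(fa), len(fb))

a = "the quick brown fox jumps over the lazy dog"
b = "the quick brown fox leaps over the lazy dog"
print(round(similarity(a, b), 2))   # high value: the files share most pieces
```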
Test Case Prioritization: An Empirical Study
Test case prioritization techniques schedule test cases for execution in an order that attempts to maximize some objective function. A variety of objective functions are applicable; one such function involves rate of fault detection --- a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during regression testing can provide faster feedback on a system under regression test and let debuggers begin their work earlier than might otherwise be possible. In this paper, we describe several techniques for prioritizing test cases and report our empirical results measuring the effectiveness of these techniques for improving rate of fault detection. The results provide insights into the tradeoffs among various techniques for test case prioritization.
Let's assume we had to pay for testing. Keynote at AST 2016
Kim Herzig
Kim Herzig. 2016. Let's assume we had to pay for testing. Keynote at AST 2016. (2016). https://www.kim-herzig.de/2016/06/28/keynote-ast-2016/
Fei-Ching Tsong Yueh Chen
Robert G Kuo
T H Merkel
Regression Testing Minimization, Selection and Prioritization: A
CHOREOS
Maira Lescevica
Egils Ginters
Riccardo Mazza
James Lockerbie
IEEE Internet Computing editorial board
Munindar P. Singh
D. Katsaros
POLOLAS- Guided solutions for the systematization of early quality assurance in software engineering
M.J. Escalona
Javier Aroba
Irene Barba
José González Enríquez
Software quality assurance is a crucial aspect that has been a focus of interest to the research community and the business sector. In addition, with the continued advancement of information technology, it is crucial that this quality assurance occurs in the early stages of the software life cycle, and in the most systematic way possible, even including automation. Project POLOLAS - Guided solutions for the systematization of early quality assurance in software engineering ...
NDT 4.0. User-oriented mechanisms for the design and management of software
Wiktionary:Beer parlour/2009/August
This is an archive page that has been kept for historical purposes. The conversations on this page are no longer live.
Beer parlour archives
April-June
July-September
October-December
1.1 Multiple forms in translation sections
1.2 Template to go with Appendix:French spelling reforms of 1990
1.3 Language maps
1.4 AWB
1.5 suffix or interfix?
1.6 Attesting color names
1.7 Template:Xyzy
1.8 threshold for voting
1.9 Ouch!
1.10 one's vs. someone's in English verb phrase headwords
1.11 CFI for English versus other languages
1.12 Numbers
1.13 Template:Spanish possessive adjective
1.14 Names
1.15 Google searches with non-alphanumeric characters
1.16 Transliteration in Template:l
1.17 Wiktionary logo 2009 refresh voting
1.18 Linking headwords
1.19 Wiktionary:Picture Dictionary
1.20 Drop encyclopedic categories
1.21 Clarification of WT:CFI#Names of specific entities
1.22 Rename Category:US Category:American English
1.23 political parties
1.24 Wiktionary:Spelling variants in entry names
1.25 wotifeelgoeswrong
1.26 American Sign Language
1.27 Misbehaving audio file....
1.28 Inclusion of SOPs for translations — proposal
1.29 Halfwidth and Fullwidth Forms
1.30 Questionable sense of a word (with no citations to support it)
1.31 aspired h vs silent h
1.32 Authorization to run bot
1.33 rfd RENAME
1.34 Phrasal sentence adverbs
1.35 curation
1.36 Me
1.37 Appendix:List of legal Latin terms
1.38 Redundant articles?
1.39 Stereotypical sample sentences
1.40 Wiktionary:About Old French
1.41 Greek derivations.
1.42 wrong translation of section name Beer Parlour...
1.43 Wiktionary:Citations
1.44 Appendix:Spanish names for María
1.45 Wiktionary:Votes/pl-2009-08/Add en: to English topical categories
1.46 Wiktionary:Anagrams
1.47 Database of English words
1.48 shut the fuck up [rfd'nd
1.49 Split communal discussion pages by month
Multiple forms in translation sections
With the creation of a new Index:French by Conrad.Irwin's awesome script, I'm noticing an odd little side effect that calls for a discussion:
Should the feminine of nouns and adjectives be given in translation section? Should it be linked?
The presence of a linked entry creates (IMHO) undesirable non-lemma entries in the Index. I've dealt with this in several ways, and considered others. Here's a summary (example for abject, simplest to most elaborate):
abject (fr) m
abject (fr) m, e
abject (fr) m, e f
abject (fr) m, abjecte f
abject (fr) m (abjecte f)
As far as adjectives go, something like "abject, e (fr)" would be an ideal solution (if only because adjectives do not have inherent genders in French, unlike nouns), but a similar problem arises with variable nouns. In either case I'm not too keen on giving a full feminine entry, since it's just a call for trouble as later editors are likely to link them. Circeus 02:18, 2 August 2009 (UTC)
I don't think putting the feminine form at all is necessary. The same would apply to Spanish and Italian, then (and a score of other languages). I think that information belongs on abject#French, not the translation. I do appreciate the point you're making, though. Mglovesfun (talk) 09:20, 2 August 2009 (UTC)
Swedish would have three forms (for some irregulars, four) there, and I have been recommended only to use the lemma... But I have never considered giving the gender of this lemma (it's always common, if we are talking adjectives). Should I have done so? \Mike 10:32, 2 August 2009 (UTC)
In the case of adjectives, I agree with Mglovesfun and \Mike, and feel very strongly that it should be just abject (fr); I see no need to list the feminine singular, masculine plural, or feminine plural, and no need to indicate that this is the masculine singular form. (And I certainly see no need for half-measures, listing the feminine singular but not either plural, and indicating that the form abject is masculine but not that it's singular.) It's a given that French adjectives have four forms, of which we list only the lemma form (masculine singular), just as with French nouns (two forms; singular) and French verbs (forty-eight synthetic forms; infinitive). A reader looking for inflection information will click the link.
In the case of nouns, I do think we should include both in full, because I don't think they're two forms of one word, but rather, two very closely related words. (Arguments can certainly be made either way, but I prefer to err on the side of including both, since otherwise it's not obvious to a reader that both even exist.)
—RuakhTALK 03:33, 4 August 2009 (UTC)
If the "form" is included (and is often useful, as long as one doesn't go overboard with f, f pl, m, m pl ... ;-) it should be linked:
abject (fr) m, abjecte (fr) f
If this confuses the script, the script needs a bit of fixing I think. (Yes, it is awesome ;-) In any case, using "e" or something like "-e" should never be done. It is one of the irritations of paper dictionaries, where one is never quite sure exactly which letters, if any, are replaced. Robert Ullmann 14:04, 5 August 2009 (UTC)
Template to go with Appendix:French spelling reforms of 1990
I've adapted this from the French (re-reading for errors very welcome). A template to allow quicker creation of these 'alternative spellings' seems a plus to me (fr:Modèle:ortho1990 in French). As far as I can see, either a variant of {{alternative spelling of}} with a usage note, or my preference, something that goes directly under ====Usage notes==== would be better. Any ideas what name would be good? Preferably starting with fr-. Mglovesfun (talk) 09:17, 2 August 2009 (UTC)
Perhaps relevant is the German template {{de-note obsolete spelling}}. It's used in usage notes to show that the spelling was made obsolete by a German spelling reform. —Rod (A. Smith) 17:36, 2 August 2009 (UTC)
Thanks for that link. The spelling reform of 1990 didn't actually make the old spellings obsolete; it tried to, but it didn't gain traction, and the eventual result was that new spellings were deemed also-correct. Even French governmental publications generally use the old ones; and as you can imagine, the new ones really got nowhere in Francophonia-at-large. I'd argue that for most of the reforms, it's more important to label the post-1990 spelling as "supposedly O.K., but don't try it at home" than to label the pre-1990 spelling in any way. But Stephen's comment suggests that something similar is the case for the last German spelling reform, so perhaps French and German editors can share thoughts on a good way to present this information. —RuakhTALK 03:41, 4 August 2009 (UTC)
Yeah, I used gout in a university article, and got told it was wrong, at which point I said it wasn't, and since we didn't have a dictionary to hand (and well, there are other students!) we dropped it. Mglovesfun (talk) 16:00, 7 August 2009 (UTC)
One might know the experiment of putting translations on a world map to arrange them geographically. While I think this is not a good idea (imagine that for water), we have the Languages by country categories which are currently a bit bland and could benefit greatly from maps of languages. It's mainly useful if you don't exactly know the name of a language (or if it's ambiguous), but you know where it's approximately spoken, in which case the map would help a lot, but there are probably other uses for it. By extension, the Languages by genetic classification categories could also get these maps, though it might be a bit more problematic with these. Ethnologue, in their 16th edition, also introduced language maps, and I believe Wiktionary's category system would be much more useful if we did the same. -- Prince Kassad 19:49, 2 August 2009 (UTC)
Can I be approved for AWB? For now I want to change 'Suffix' to 'Affix' in a family of articles I created until discussion comes to consensus at Tea Room, then change to whichever term we agree on. kwami 02:37, 3 August 2009 (UTC)
I've added you to the list of approved users. --Ivan Štambuk 02:46, 3 August 2009 (UTC)
Thanks! kwami 06:47, 3 August 2009 (UTC)
suffix or interfix?
I raised this question in the Tea Room, but perhaps it belongs here, since it has wider policy implications.
There are "suffixes" such as English -tum- and Esperanto -aĉ- which cannot occur word finally. Question is, can we label as a suffix s.t. which has the form "-X-", or is our "suffix" label restricted to "-X"? Neither intrafix nor infix would seem to apply to these cases. (I'm not sure what -tum- is, since it always attaches to other affixes, forming words which contain no root or stem, but -aĉ- is universally described as a suffix in Esperanto, when that term is used at all.)
Anyway, the discussion is over there. kwami 07:16, 3 August 2009 (UTC)
Attesting color names
This continues a discussion begun at WT:RFV#outer space. DCDuring TALK 15:18, 4 August 2009 (UTC)
It seems to me that we have very basic questions about color words (nouns and adjectives, mostly):
To what extent do we accept the nomenclature of standard-setting bodies?
What is it that we should be attesting to?
What are the limits on what we can achieve that are imposed by current and near-future computer technology?
Analogies come from taxonomy and chemical nomenclature. In those cases there are standards for current usage, whose specific institutional systems have been in operation for less than a century, I think. There are previous naming practices which have some carryover usage, sometimes differing significantly. Matching vernacular names to standard names might be a valuable service to normal users. The analogies mostly seem relevant for question 1.
It seems possible that what we should do initially is identify non-standard terms in our Category:Colors and run those through RfV. That only requires that we identify one or more standard-setting bodies that seem to address nomenclature in a way consistent with wiktionary.
There is obviously a special role played by the standards for color representation on a computer screen. At one level we just have to accept it. At other levels color on a computer screen is a match to real-world color only through our interpretative meat-mechanisms (and prosthetics). DCDuring TALK 15:46, 4 August 2009 (UTC)
Definitions in technical subject areas may be prescriptive, and may be more precisely defined than general-use definitions. So, e.g., while plum may be defined generally as a deep blue-purple colour, another technical sense may carry the subject label {{web design}} and be defined as "#DDA0DD". Because of our descriptive methodology, I would prefer to see some actual usage attested for each of these, rather than just mass-producing entries from technical glossaries, even if the definition is based on a prescriptive source—if web designers don't actually write or speak about the technical plum colour, then we oughtn't define it. This has a broad application, for example an arm is different in medicine, a tint in visual arts, or DNA in genetics. I believe clay has different meanings in soil science, hydrology, and ceramic arts. —Michael Z. 2009-08-04 17:08 z
When used as a definition, "#DDA0DD" may have that restricted context, but, when used to generate a representation of what people might expect, it has potentially greater use. Mass-reproduction of starter entries with a highly restricted context sounds like a great idea to me. We could proceed from there, adding broader contexts where we had confidence and facing selective rfv challenges. After all, it's not clear that it is so much more important that we attest color entries as opposed to multi-word entries or prepositions or proverbs or engage in any of the hundreds of other classes of tasks that await us at every turn. DCDuring TALK 19:33, 4 August 2009 (UTC)
The colour swatches can be misleading. Indian red is not a particular colour, but an open series of chemical pigments, both natural earths and artificial chemicals. Its defining qualities are of interest to printers, artists, and house painters: richness and intensity of hue, mechanical coverage, and chemical permanence of the colour; and to chemists: the inclusion of ferric oxide and no other common ingredient. It is not a particular red colour, but a wide range of rusty red and purplish hues. Although crayons, coloured pencils, and an HTML hex triplet have been named after it, they are not Indian red. Representing Indian red with a swatch of #CD5C5C is like throwing up a picture of Walt Disney's Goofy to illustrate dog. The number only represents the (non-standard) HTML colour, and nothing else. It fails to give the reader the correct impression, which is easily done with a half-dozen words.
I'm also not crazy about dumping a technical glossary into the dictionary, although we already have something like that in Appendix:Colors. What is needed now is attestation or some corpus research. Without that, do we know if ecru is really used as an adjective, or tow-colored as a noun, or if they are actually used by anyone at all? —Michael Z. 2009-08-05 03:59 z
Template:Xyzy
Mr. Ullmann seems to be under the impression that any changes to his template have to go through lots of red tape and his personal approval before being implemented. This wouldn't be much of an issue if I were trying to add support for some language spoken by 500 people, but Urdu? I'll point out that Hindi is supported. So why not Urdu? Basically the same language with a different script. Why should this be an issue that requires a vote? <edit> We also actually have at least one Urdu editor who I would expect this to be useful for. Do we have anyone who does major work in Belarusian, Macedonian, Ukrainian? Or Tamil or Telugu? — [ R·I·C ] opiaterein — 13:40, 5 August 2009 (UTC)
It would be very useful. (could you possibly can your abusive attitude? someday?) The issue is (as explained on the talk page) that changes must be made very carefully, as each addition is a tremendous amount of overhead, and deletions are/will be very, very painful (finding every default use by 'bot, and adding sc=). So it is just about going slowly and looking at them. make the suggestions, and at some point we will add a few carefully. This template is used more times than any other (411 thousand pages, but more times on each page than the two before it, see Special:MostLinkedTemplates). Changes are not trivial. Robert Ullmann 13:54, 5 August 2009 (UTC)
I will probably continue to be abusive to you if you keep stunting my contributions with what I perceive to be ridiculous bullshit and meaningless, extremist sensationalism. (This "proposal", and "vote" are, in themselves a crime against humanity)
If each addition is a "tremendous amount of overhead", why are there so many to begin with? — [ R·I·C ] opiaterein — 14:05, 5 August 2009 (UTC)
None for any of those listed except Macedonian which has 1-2 semi-regular editors but is usually inactive 10 out of 12 months a year. All Indian languages except Hindustani are way underrepresented considering the number of their speakers. Hindi and Urdu are usually added in pairs (since they're basically the same language in 2 scripts) by Dijan so it makes sense to allow either both or neither of them.
Now reading upon the technical limitations of {{Xyzy}}, it appears to be of only limited assistance to the editors. If the defaulted scripts get introduced/dumped according to the # of Wiktionary entries the languages they reflect have at a particular moment in time, it could possibly prove to do more damage than good, considering that it can apparently support only a limited number of default scripts (how many?), and there is no way for us to know how many entries Wiktionary will have in e.g. 10 years in what language, so sooner or later when we hit the limit there'll be no way to keep in sync the top languages which should have their script defaulted and those actually supported by {Xyzy}, since once they're added to {Xyzy} they cannot be removed without doing damage that will not be easy to fix. IMHO it's simply better to add sc= manually (it could also be added by a bot in most of the scenarios) than to rely on such a mutable template. --Ivan Štambuk 14:01, 5 August 2009 (UTC)
Quite so. Note that the defaulting of script is only part of what the template does; the more important bit is generating language tags when there is no other script template, the tags being tasked to script templates. The defaulting is only useful to a small number of languages, where it is very useful. (Japanese, Armenian ...). It isn't really based on the number of entries (which as you observe correctly, would only be a statistical starting point). And the point is that it must not be "such a mutable template" to work effectively. (;-) Robert Ullmann 14:12, 5 August 2009 (UTC)
threshold for voting
As far as I know, everyone who had created an account here before the vote started is eligible to vote. This has led recently to appalling manipulations (in the vote of unifying Serbo-Croatian) and influx of unknown novices with less than 10 edits for Wiktionary. It is insulting to see how the votes of (not one or two, but dozens of) unknown editors stir discord and impede important policies and how those novices' votes are influencing the decision with the same weight as Stephen's, Ruakh's, Prince Kassad's, Ivan's and so on. Therefore I suggest adopting a policy prohibiting users with less than x contributions from partaking of votes. In French Wikipedia the threshold is 50 contributions in the main space and at least 7 days of contributing before the vote started. In Bulgarian Wikipedia it is 400 contributions in the main space and 40 days before the vote started. In German Wikipedia, if I remember aright, it was 300 (or 200...) contributions in the main space. So, before starting a vote I would like to know what kind of threshold most of you would indorse? 150, 100 contributions in the main space, Appendices or Citations? How many days of contributing prior to the vote ought to be required (and not just of creating the account, which became trivial after the SUL had been introduced)? The uſer hight Bogorm converſation 09:47, 6 August 2009 (UTC)
I thought we had already implemented such a threshold for voting. I agree that we should require at least the 50 contributions with 7 days anticipation that the French have, but I would not be opposed to a requirement of 300 contributions with 30 days in anticipation of a vote. —Stephen 10:00, 6 August 2009 (UTC)
Contributing to Wiktionary is much easier and faster than to Wikipedia, so I'd be rather in favor of the figure of some 200-300 edits in the main namespace, and 7 days before the vote was started. --Ivan Štambuk 10:38, 6 August 2009 (UTC)
500 contribs and 30 days. --Vahagn Petrosyan 11:32, 6 August 2009 (UTC)
10,000, one year, verified En-N status and identity, and Mensa membership. DCDuring TALK 22:50, 6 August 2009 (UTC)
Your cynicism is not appreciated. 500 edits is a figure that can be achieved in a few days by any decent contributor. --Ivan Štambuk 22:58, 6 August 2009 (UTC)
Sarcastic is perhaps the word you were looking for. How could you know that I was cynical? With equal basis, I could say that I find the proposals of a self-proclaimed elite to be the height of free-loading cynicism, attempting to appropriate a valuable resource for their own purposes. Also, how do you know that there isn't someone who appreciates what I said. Did you take a straw poll or have you determined that you are the spokesman for a silent majority? DCDuring TALK 23:16, 6 August 2009 (UTC)
No, I meant "cynical". --Ivan Štambuk 23:23, 6 August 2009 (UTC)
Perhaps you could explain yourself. DCDuring TALK 18:22, 10 August 2009 (UTC)
I don't mind if a new or casual contributor votes, I just don't want a non-contributor voting, or someone contributing in order to vote. Casual and new contributors are as entitled to their opinions as I am to mine (and while some are very stubborn, most tend to take a "go with the flow" approach until they have their sea legs). But they have to be here because they want to contribute, not because they want to vote, or else we risk becoming a battleground for vote-canvassers with their own agendas. (And given recent events, the word "risk" may be an understatement.) —RuakhTALK 21:26, 6 August 2009 (UTC)
I don't want regular contributors recruiting people to manipulate the outcome of a vote. — [ R·I·C ] opiaterein — 18:34, 7 August 2009 (UTC)
This is absolutely the wrong way to approach this problem. I'm not entirely sure I can explain myself, but I'll try. Firstly, I can't see the use in an edit-count based privilege system - restricting to whitelisted users would make a lot more sense - but again is an outrageously blunt measurement of ability to make sensible decisions (and to actually use the whitelist as qualification for voting would bring suspicion on that process too, so let's forget I said it). Secondly voting is a mainly useless way of making decisions anyway, which is exactly why Wikipedia have their !vote page (which of course no-one anywhere sticks to because votes are by far the easiest way of doing things). Votes have two places that I can see, 1) We have come to a common conclusion, let us ratify it and document it formally; 2) We have decided to take action, but are ambivalent to which of A,B,C we actually do. When a vote no longer falls into those two categories it becomes a pointless waste of time - all the discussion time wasted trying to "inform the misled" could be much better spent working on either the dictionary, or a counter-proposal. A lecturer of mine once pointed out (somewhat more eloquently) that, while economists trade to bring mutual benefits, politicians fight to try and be the one who wins. The recent S-C vote is a prime example of where politics gets in the way of mutual benefit, simply because it's easier to fight than to compromise. Conrad.Irwin 21:29, 6 August 2009 (UTC)
What's to prevent someone from dragging in a bunch of ringers from the Wikipedias and whitelisting them en masse? A contributions quota and probationary period may not be the perfect way to insure that only those with a real interest in Wiktionary will cast a vote, but it's probably the most practical way we will find. Let's make it 500 contributions and a 30-day waiting period. Also whitelisting. —Stephen 23:33, 6 August 2009 (UTC)
There shouldn't even be a vote if it's contentious enough for people to care that much - is I think my main point anyway. Conrad.Irwin 00:14, 7 August 2009 (UTC)
Hm, I was just coming to the BP to start a topic similar to this... I suppose 30 days and 500 contributions is reasonable. How about a no previous bans? Or no bans within the past year or two? Kinda like the no votes for convicted felons :) — [ R·I·C ] opiaterein — 17:45, 7 August 2009 (UTC)
This would prevent anyone who previously took a wikibreak and new admins experimenting with the block function from voting. -- Prince Kassad 17:48, 7 August 2009 (UTC)
"Experimenting with the block function"? I have to say, Kassad, as much of a p.o.s. I think Ullmann is, I'm really disappointed in you. — [ R·I·C ] opiaterein — 18:32, 7 August 2009 (UTC)
We can make that a reasonable exception to the rules. The important thing is to filter out the malicious voters with zero interest in both improving Wiktionary and actually contributing here. These seem to thrive recently and we must stop such votes. --Ivan Štambuk 17:50, 7 August 2009 (UTC)
What's the percent of voters in the oppose section who created accounts in 2008 and made no edits until the vote? Bet it'll be high. — [ R·I·C ] opiaterein — 18:32, 7 August 2009 (UTC)
Presumably the way to implement this would be to vote on it? And, assuming rationality, everyone who does not fulfil these criteria will vote against such a proposal. Such restrictions are not fair, and impose a larger bias on the votes than already exists. Stopping legitimate newbies from voting is vindictive, stopping deliberately subversive votes is not necessary >90% of the time. Additionally, the amount of bureaucracy needed to ensure that everyone voting fulfils any criteria is an absolute waste of resources. The only correct solution to votes that people are willing to cheat at is to accept that the outcome of the vote is undecided and return to the drawing board - making correct decisions can not be down to the amount of voting force you can muster, it must be down to proper discussion. Conrad.Irwin
Re: "And, assuming rationality, everyone who does not fulfil these criteria will vote against such a proposal": That's not even close to true, unless you plan to traipse through the 'pedias canvassing for "oppose" votes. But it's true that, as an open wiki, we probably can't even enforce the kinds of controversial decisions that these restrictions might seem useful for. —RuakhTALK 19:48, 7 August 2009 (UTC)
I didn't say there would be any, and the chances are there would be very few, but it is a good example of a vote that would be very skewed by implementing such arbitrary criteria. Conrad.Irwin 22:18, 7 August 2009 (UTC)
Re: "stopping deliberately subversive votes is not necessary >90% of the time" - The ongoing SC vote is exactly a situation where well-defined criteria for vote-acceptance are necessary, and where relatively significant amount of votes (both supportive and opposing, more of the latter group I'd say :D) appear to come from users expressing their political opinions and not voting on the proposed WT:ASH policy per se. It simply doesn't make sense to equally treat all votes in relatively "controversial" votes such as this one, as the "good faith" principle would always be abused in such cases. --Ivan Štambuk 20:09, 7 August 2009 (UTC)
Ok, so Ullmann's team is better at cheating than Ivan's - this is exactly why decisions should never be made by vote; votes are there to acknowledge that a compromise has been agreed on. I don't think anyone in particular is to blame in this case, but the grown-up thing to do is just abandon the current destructive snow-ball and start afresh. Even if you make it harder for newbies to contribute, you don't remove the significant effect that rhetoric can have; you don't even remove the probability that Wiktionarians who know nothing about this particular issue will vote anyway. Conrad.Irwin 22:18, 7 August 2009 (UTC)
votes are there to acknowledge that a compromise has been agreed on - there was a consensus amongst all the contributors for 3 months, nobody was bothered when I announced it in March. As far as I can see, of the regulars having any proficiency in Slavic languages, only 2 of them are voting for oppose The apparent "lack of consensus" was introduced by Ullmann by political FUD. These canvassed opposing votes by nationalist bigots who imagine that "linguistics does not determine what language is" are worthless, and it's just matter of formally making them so. --Ivan Štambuk 22:46, 7 August 2009 (UTC)
I agree that this vote is not ideal, but I still notice that you completely ignore any possibility that the people opposing might be right; as I have no knowledge on this issue, only your opinion, I am very wary. It is clear that objections should have been raised before the vote started, but hey, not everything will work all the time. The mature thing to do is just let the vote run its course, the conclusion of a vote does not prevent it from being run again, and there would be much less wastage of time if there was less discussion on the vote page. In fact, now I think about it, preventing any discussion in the Support/Oppose section of all votes might well be a good thing. Conrad.Irwin 07:52, 8 August 2009 (UTC)
I don't see on what exactly the people voting for oppose may be right. We have Croatian nationalist bigots that plainly lie that Serbian and Croatian are as distinct as Romance languages. We have others that imagine that I'm a "Yugo-chauvinist" ! Several of them openly say "this is not matter of linguistics, but politics". Not a single one of them has actually opened a WT:ASH talkpage and contributed to the discussion of the possible deficiencies of the proposal. This pretty much proves everything I wrote, on the "differences" being more imaginary than real. The only argument I've seen from the opposing clique worth discussing is that they're against "forbidding languages", which I personally consider absurd as we are not forbidding anything, only treating it commonly at a single ==BCSM== header, in a 100% NPOV way. I also agree that prolonging the vote wasn't a good thing to do. We'll perhaps have to reiterate it later, but with voted voting-acceptance threshold (for this particular vote, the date would still be July 1st). --Ivan Štambuk 12:12, 8 August 2009 (UTC)
Maybe we should be more specific, then. Maybe for votes that aren't controversial, no major guidelines should be in place, but for votes that need to be repeated (as this one probably will), we should be more strict on who we allow to vote. For instance, this current vote isn't going to affect 90% of the people who voted. They'll just go back to their respective projects and forget it ever happened. Why should their opinions matter as much as those of the ones who actually contribute in the area? — [ R·I·C ] opiaterein — 20:21, 7 August 2009 (UTC)
We can always "freeze" the vote, and close (i.e. interpret the results) later, when we agree on the details of the vote acceptance rules. It's pointless to force all the people to waste their time again and again to repeat the position which they already explicitly expressed by a vote before, and stood by it for 5 weeks. Or e.g. allow everyone to change their vote (during some reasonable period), if they feel like doing so. But simply repeating the whole procedure again...it makes me shudder at the very thought. --Ivan Štambuk 20:37, 7 August 2009 (UTC)
Modifying the conditions under which a vote is being run, while it is running, is a ludicrous proposal. You simply cannot ask for a community decision and then ignore the outcome because you don't like it. Sure, it's not the terms under which a vote should be running, but it is too late now. Conrad.Irwin 22:19, 7 August 2009 (UTC)
But there are no conditions now! (with the advent of SUL), and with obvious canvassing it's imperative that we introduce some. You still didn't explain why it would be a "bad precedent", or "ludicrous"? It's absurd to have the vote close up with different end-results in a timeframe of several weeks. We all agree that the votes of these nationalists bear little value, it's just a matter of formally acknowledging it. --Ivan Štambuk 22:39, 7 August 2009 (UTC)
It undermines any point of voting - I don't understand why you can't see that. What is the point in having a vote if the outcome is decided by someone based not on the number of votes, but simply on the number of votes that they choose to acknowledge, on criteria that they choose to impose. It's exactly as if we had a vote for a new prime-minister and labour noticed that the polls implied most 18-25 year-olds voted for the lib-dems so decided that anyone under 25 is not mature enough to vote, and then discounted all of those votes. You are certainly of the opinion that "the votes of these nationalists bear little value"; they presumably are not. How do I, who knows nothing about either political viewpoint, know which is "more right", I've seen references to academic work from both sides. I'm quite happy to not consider the political/linguistic consequences and implications of this, clearly some people aren't, maybe they don't understand the issue, but maybe it is me who doesn't understand. Conrad.Irwin 07:52, 8 August 2009 (UTC)
The present, unrestricted, situation is akin to requiring no citizenship qualification for voting in an election; i.e., allowing tourists and other foreign visitors to vote: maybe no big deal, unless the country has a small electorate that is liable to be swamped or the vote is particularly close or controversial. Do you see the analogy? Allowing this isn't democracy, it's heterarchy. † ﴾(u):Raifʻhār (t):Doremítzwr﴿ 14:08, 15 August 2009 (UTC)
Your analogy is really far-fetched. The only reason why those folks are eligible for voting now is because we haven't set any threshold at all. That omission on our part could be trivially fixed later. As I said, if we reiterate the vote after a few more weeks, but with vote-acceptance rules set at e.g. 300 edits before July 1st, you'll get pretty much exactly the same end-result. It's not a big deal if we do it, it's just the easier way to apply the vote-acceptance rules retroactively on a frozen vote, than to wait for 1 month again. 1 month of more needless stress upon the community, and I'm pretty sure that everyone is fed up with this mess and simply wants it resolved ASAP.
As for the "uninvolved party doesn't know whom to trust": Just wait a few more days until I collect more e-mails from professionals who actually wrote books on SC (dictionaries, grammars, cutting-edge research). You can either trust them, or "academicians" who are interested in "proving" that these are separate languages by long political and historical tirades... Every single dictionary of "Croatian" is also a 95% valid dictionary of Serbian and Bosnian, and this has been so for the last 100 years and will not change in the next 100 years. This is an undeniable fact, and discussing anything else is a waste of time. For a lexicographer, there is only one way to go. --Ivan Štambuk 12:25, 8 August 2009 (UTC)
If we leave the vote alone, we'll save ourselves a lot of stress, and it'll come up as no consensus. So you can just go back to doing what you've been doing. If the other HBS contributors agree, just do it. If Ullmann wants to have a hissy fit about it, let him. He doesn't contribute to HBS so his opinion on it means shit, IMO :D. If our only HBS contributors think it better to not be racist and divide things up meaninglessly... let them. Who knows, it might help keep out the racists. Or when they get here they'll bitch themselves to death. — [ R·I·C ] opiaterein — 02:53, 8 August 2009 (UTC)
I just went to vote in the Wikimedia board elections and noted that they have a similar threshold for voting. There they require that a voter not be blocked, not be a bot, and have made at least 600 edits before 01 June 2009, and have made at least 50 edits between 01 January and 01 July 2009. —Stephen 18:14, 8 August 2009 (UTC)
It makes sense, and as has been pointed out before, 600 isn't even that difficult to get to on Wiktionary, especially with our new assisted editing tools.
I noticed just a minute ago that in our current "voting guidelines" it says "Anyone can vote, especially regulars from other language Wiktionaries" which not only contradicts the previous 'rule', but I think it gives a lot of room for the kind of meatpuppeting we're seeing on the current BCS vote. I don't vote at the French Wiktionary... it wouldn't be right. I may edit there once in a while, but their votes don't affect me. Why should users from hr.wikipedia be able to dictate how we do things here if immediately after the vote, they're going back to their main projects, never to be seen by us (at least here) again? — [ R·I·C ] opiaterein — 18:41, 9 August 2009 (UTC)
About the threshold, I would like to point out that it's very easy to get any number of contributions on a Wikipedia: you just have to look for common misspellings or typographic problems and to correct them. It's very easy, trust me (from time to time, I enter televison in the Wikipedia search box, and I correct a number of pages). I would say that it's easier to reach any given threshold on Wikipedia than on Wiktionary (because there are fewer misspellings here). Therefore, I would adopt the same kind of rules and threshold as Wikipedias. The most important rule is that voters should explain the reason for their vote: any vote without giving a reason is useless when you try to conclude.
But the most important point in the talk page is "There shouldn't even be a vote if it's contentious enough for people to care that much" (see above). This is related to the NPOV principle. Lmaltier 08:30, 15 August 2009 (UTC)
That's contrary to Stephen's perception that the facility of editing Wiktionary is higher. By adding translations one is able to make dozens of edits per day, whereas apart from spelling corrections this is not the case in Wikipedia. So you insist on explaining the vote? Well, I could incorporate that rule too. Do I need to decrease the amount of votes for that or you suggest just adding it to the proposed threshold? The uſer hight Bogorm converſation 10:53, 15 August 2009 (UTC)
It's very easy to contribute here by adding translations, I agree, or by adding new pages (it's easier to find new pages to be added than for Wikipedia). Nonetheless, I think that it's still much easier to find something to correct on Wikipedia. I see no reason to require a higher threshold here. The condition about explaining the vote is something different, but somewhat related: I think that anybody, even somebody with very few contributions (or no contribution at all), may bring helpful arguments, and that's what matters. Lmaltier 12:06, 15 August 2009 (UTC)
I can't really be bothered reading all that above, but couldn't you semi-protect votes (or at least, controversial ones) which has the same effect, right? Mglovesfun (talk) 13:18, 15 August 2009 (UTC)
No, semiprotection would allow anyone who registers a username to vote, which would apply to almost all of the RU's Yugoslavian thugs who came, registered a username, and then tried to shove their uninformed but political-extremist views down our throats. —Stephen 15:51, 15 August 2009 (UTC)
I'm assuming that the decision by Apple to censor Wiktionary from the iPhone mentioned in this article is a result of assumption rather than research, or could they have a point? Would anyone be interested in trying to communicate with Apple to see if we can improve our standards to match theirs? Conrad.Irwin 21:31, 6 August 2009 (UTC)
I got from a WMF list that the writer of the app has to rate us "17+" for Apple to approve the app. I guess someone could do a "G" version, but is a wide-open wiki likely to be able to guarantee a "G" rating? DCDuring TALK 22:41, 6 August 2009 (UTC)
Read Gruber's original article[1] and follow-up[2] for the whole story. Apple is twice removed from us, but it appears that for whatever reasons, several developers have a need to be able to filter our database. Ultimately they have to be responsible for their results, but perhaps we can assist this kind of thing with a high standard of consistent labelling: coarse, vulgar, offensive, etc, or perhaps in more detail: sexual slang, vulgar insult, etc. —Michael Z. 2009-08-07 04:06 z
Yes, I agree [with Mzajac]. It's not "improv[ing] our standards" to remove words that meet CFI, but if someone wants to bowdlerize, we can help with that. That sort of sense-label is useful even for human readers. That said, much of our content will not be useful to a determined bowdlerizer; if we tag well, they can filter out sense lines and the stuff under sense lines; but they won't be able to use our onyms, translations, etc. —RuakhTALK 14:25, 7 August 2009 (UTC)
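(For a reuser, the kind of filtering described above could be as simple as dropping sense lines that carry certain context labels, along with the example and quotation lines nested under them. The sketch below is purely illustrative: the label set and the simplified wikitext conventions are assumptions, and this is not an existing Wiktionary or Apple tool.)

import re

OFFENSIVE_LABELS = {"vulgar", "offensive", "derogatory"}   # assumed label set

def drop_labelled_senses(wikitext: str) -> str:
    """Remove definition lines whose context labels match OFFENSIVE_LABELS,
    together with the example/quotation lines nested under them."""
    kept, skipping = [], False
    for line in wikitext.splitlines():
        is_sense = line.startswith("#") and not line.startswith(("#:", "#*"))
        if is_sense:
            labels = set(re.findall(r"\{\{(?:context\|)?([a-z ]+?)[|}]", line))
            skipping = bool(labels & OFFENSIVE_LABELS)
        if skipping and line.startswith("#"):
            continue
        if not line.startswith("#"):
            skipping = False
        kept.append(line)
    return "\n".join(kept)

(Anything outside the definition lines — translations, synonyms, etymologies — would still need separate handling, which is the limitation noted above.)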
I think that the reusers can take care of themselves. They have already bowdlerized the content.
I also understand that some libraries and schools block WMF sites. And some of our direct users or rather the parents thereof have complained about the same kind of content. I don't think we have been willing to consider the implications of those complaints. Should only registered users have access to such content so that we can serve educational needs more broadly? Should we have a bowdlerized version for parents and institutions who are attempting to retard children's use of such vocabulary or respond to religious, moral, social or political norms? Or we could just leave the bowdlerizing process to institutions that have values more in line with "censorship" than WMF?
Bowdlerizing seems to me like a bad fit with the base of contributors we have, though I think we once had someone who might have had an interest. If someone would like to do it, then we could try to make that easier, but I wonder whether:
we would agree on how to do it and
we would be willing to enforce any sanctions against someone who undermined any rules we were able to enact. DCDuring TALK 15:54, 7 August 2009 (UTC)
one's vs. someone's in English verb phrase headwords
I just wanted to check my understanding of a simple point. To bust one's chops (bust one's own chops) is not the same as bust someone's chops (bust someone else's chops). The use of one's relative to an object of a simple verb, phrasal verb, or preposition phrase implies that the subject of the verb is the "one". The use of "someone's" implies that someone else is involved. "One" has crept into more than a few headwords where it does not belong, it seems to me. I haven't checked all of the OneLook dictionaries as to their practice in this regard, but RHU/Dictionary.com uses one's and someone's exactly as I would have expected.
If this is so, how is it that there are so many entries which use "one's" where "someone's" seems more appropriate? Is there a regional (UK/US) difference? If so, I dread the implications. DCDuring TALK 01:59, 7 August 2009 (UTC)
I believe some progressive dictionaries use plainer language, like bust your chops. COD and NOAD both use one's for the reflexive and someone's for the transitive. OED uses one's and the or (a person's), for example, tickle the fancy, but hate (a person's) guts [boldface and italics sic]. —Michael Z. 2009-08-07 03:45 z
Thanks for the UK and Canadian/North American confirmation. I wouldn't object to less affected language, but the trashing of the distinctions for many English idioms really distresses me. Sometimes the two are conflated with redirects. DCDuring TALK 14:42, 7 August 2009 (UTC)
I don't think I confirmed any difference (and a sample of 1 wouldn't confirm anything)—COD is the Concise Oxford Dictionary. But I found some more: NODE (UK), Random House (US), AHD (US), and CanOD (Can.) also use one's and a person's the same way. —Michael Z. 2009-08-08 22:54 z
Do we need separate entries for expressions that have both reflexive and transitive uses of verb idioms? For example: cool one's jets and cool someone's jets? To me the "someone's" version is more general, more appropriate for a lemma. DCDuring TALK 11:56, 8 August 2009 (UTC)
I think cool one's jets is only reflexive: you cool your jets, I cool my jets. I can't cool your jets.
Anything transitive can probably be used reflexively, even if it's unusual: "I busted his chops," "I swung wildly and accidentally busted my own chops." There may be cases where the reflexive meaning is different, though I can't think of one at the moment. —Michael Z. 2009-08-08 22:54 z
Yes. I thought you confirmed no difference, though I wasn't clear about that. I should have mentioned that 2/20 of the uses in COCA of "cool someone's jets" were not reflexive. (Linguistic creativity at work?) Perhaps given the relative infrequency it could be left to a usage note, but that makes it even harder to find or notice.
Do you think "a person's" would be more acceptable than "someone's"?
In reviewing our verb-phrase headwords containing "one's", I see a majority that "permit" transitive, non-reflexive usage -- not that one couldn't find some exceptions. Very many have objects that are considered exclusively under one's own personal control or experience (eg, temper, tongue, time). But perhaps one could bide one's master's or employer's or client's or principal's time.
Contributors have vastly preferred using "one's" to "someone's" or "somebody's". In many cases, such as prepositional phrases, there is no direct harm. But they also use "one's", even when the reflexive use is not common or even "impossible". If we were starting fresh perhaps "someone's" could be mandated in all headwords, except those with reflexive use (ie certain verb phrases). This would have the effect of sensitizing users to the difference in meaning between the two terms in the cases where it matters: verb phrases with objects. But it doesn't seem realistic to change any non verb-phrase headwords with this rationale.
So the open questions, just for verb phrases, are:
Given that transitive includes reflexive, logically it should never be a problem to substitute "someone's" for "one's". But it seems to be. At how high a level of relative frequency of non-reflexive usage should the lemma be worded with "someone's" instead of "one's"? 1%? 2%? 5%? 10%? 20%? 40%?
Does it ever pay to have separate reflexive and non-reflexive/transitive entries?
I am looking forward to hearing more thoughts on this. DCDuring TALK 01:02, 9 August 2009 (UTC)
I'm not sure that there are more than a dozen more verb-phrase entries that use "one's" where "someone's" would be better. There must be more problems in the bodies of the entries, but that is a second-order problem. We also have many uses of "somebody's", but that is only a matter of style consistency. DCDuring TALK 01:45, 9 August 2009 (UTC)
I only found one in COCA: "maybe the prospect of a 15-hour flight has cooled your jets," but it does seem to show that this is transitive. (Bonus points for someone who demonstrates that the transitive is a new sense extended from the reflexive.)
Well, the reflexive (only-reflexive) is not transitive, so it should have "one's." We don't have the capability or knowledge to do frequency studies right now, so I wouldn't set a quantitative threshold—we have to rely on citations and editors' judgment.
Separate entries? Depends on the case. I think there is a lot of subtle judgment to be shown in these. For example, the stereotypical phrase is "not on my watch'" but there's nothing wrong with "I won't mess up on your watch"—so I think on one's watch is technically transitive, but typically reflexive.
But a look at a couple of pages of search results for one's and someone's tells me that most editors have intuitively done this right. The only mistake I found is under one's thumb—it seems to me that you can be under anyone's thumb except your own. Is this an example of a transitive, non-reflexive case? —Michael Z. 2009-08-09 02:03 z
Yes. Once I was a fish who just swam, reasonably well. Then I thought I could describe how I swam. Then I found my description wasn't very accurate. Now I find that I can't swim while thinking about it. I've even lost faith that others really know how to swim. It might just be time for a wikibreak. DCDuring TALK 02:27, 9 August 2009 (UTC)
CFI for English versus other languages
Am I the only one that feels like there are a load of deletion requests for English words, but when someone (okay, thinking of me) makes a nomination in another language that is equally unidiomatic, often much worse, it gets kept? On fr: as well we have some English stuff that I'd like to get rid of, but I can't get it through a vote. swing away was one, surely swing (make a swinging movement) + away covers this nicely? It's that old chestnut where the translation of an idiomatic (or even single word) term in English is unidiomatic in the target language. Consider the Spanish conmigo which means with me, I'd be very unsurprised if that got created in Spanish. Mglovesfun (talk) 16:05, 7 August 2009 (UTC)
Could fr.wikt count as evidence our exclusion of an English term? How different are their standards for inclusion? Perhaps we could provide SoP determination services for them on a request basis. We might get a few entries or improved/additional component-word senses. DCDuring TALK 16:45, 7 August 2009 (UTC)
What are some examples of French entries that you wanted to delete? I suspect that the apparent different attitude is based on the fact that English is our native language, so we don't need to bother ourselves with unidiomatic SoP terms in English, while French is a foreign language that most of us do not know, and entries that the French Wiktionary might reject, such as aux, are useful and needed by us because for us they are not so crystal clear. So it will be easier to understand your argument if you bring some examples. —Stephen 18:23, 8 August 2009 (UTC)
I created 1992 which was tagged as RFD almost straight away ([3]). 1992 has its own entry in the online Oxford English Dictionary. I won't duplicate that specific debate here but what is the policy about entries made (only) from the digits 0 - 9.
My first thought was that some such as 101, 911 and 999 are useful but I would question 5555.
Having thought about it more, there is a case for allowing all digits 0 to 9 just to illustrate the number system; there is also a case for including numbers like 11 and 50 which have their own word ("eleven", "fifty") which cannot be broken down into smaller words. You could go further and say 1 to 2009 are acceptable as they refer to calendar years. I think that there will always be special cases that should be included such as 101, 911, 1471, 1992 whether or not they fall in the 'basic range'.
I would also support including entries starting with 0 where appropriate e.g. 007 (see James Bond).
I'd say every entry acceptable as a written out word is also appropriate as a number (i. e. 0-20, 30, 40 and so on), plus numbers with special meanings such as 666 or 1337. -- Prince Kassad 19:26, 9 August 2009 (UTC)
This last would exclude few years. It is also not part of our sole applicable policy and so requires a vote to have lasting impact. It would be better to first determine how existing WT:CFI applies.
There is nothing about a year number that would mean it has to be excluded automatically by CFI. But it could be argued that, after the basic elements of a numerical system are defined 0-9, "-", all other numbers qua numbers are defined by the "morphology" or "grammar" of numbers and are SoP in their most basic meaning and not worthy of lexical treatment just as an infinite number of constructable phrases are excluded.
The interesting question to me is: What should be valid attestation of the meaning of "1992" or similar? IMO, our standard should probably be that the year number must be used in a way that brings forth the referent events at the year number's first use in a given document (I would argue that a teaser paragraph or title should not count, precisely because such use depends on the reader not knowing the importance of the year in question: eg 1421: The Year China Discovered America, Gavin Menzies.) Would the attestation be specific to a language? If 1066 were attested only in Middle English, would 1066 only have a Middle English L2 header? Similarly 1968 in Czech, etc.? DCDuring TALK 19:58, 9 August 2009 (UTC)
I don't really "get" what the 1992 entry is meant to be, what does 1992 mean? I'm not against numerical entries either, I added fr:360 (and 540 and 720) to fr.wikt as they (to me) are undeniably English words. Mglovesfun (talk) 20:27, 9 August 2009 (UTC)
Okay I get it now it's been cleaned up - so how would comparable terms like 7/7 and 9/11 do? I think these are also synonyms of proper nouns. Mglovesfun (talk) 20:36, 9 August 2009 (UTC)
"1992" came to refer to the harmonization of legislation affecting trade within the EU and the signing of the Maastricht treaty. In the business press worldwide, "1992" was sometimes used as a reference to the process. A journalist might have asked a president of a multinational: "What are you doing to prepare for 1992?" not meaning the year, but the harmonization, and do so without any need for explanation. "Since 1992" can still be found in discussions of recent economic and political history, referring not to the year itself. There is a clearer case for inclusion than for almost any other year number because of the use in advance and the fact that some of the anticipated events did not actually happen in 1992, but are still apparently referred to as "1992". HTH, DCDuring TALK 20:41, 9 August 2009 (UTC)
Template:Spanish possessive adjective
Hello, as yo#Spanish uses Template:Spanish pronoun to navigate quickly, I suggest extending this kind of template to the Spanish determiners. Moreover on fr: we also already have the French ones, ready to be imported. JackPotte 15:20, 10 August 2009 (UTC)
So I'm going to proceed as on fr:. JackPotte 21:21, 6 November 2009 (UTC)
Take a look at User talk:Alasdair, I agree these need sort out, the appendices and categories. I appreciate Alasdair's massive input, but it does need to follow WT:ELE. For example, do you write Category:oc:Names or Category:Occitan names? The first one looks okay to me, by comparison with Category:oc:Place names. I have a few more things to add, but I can't find the page names yet. Mglovesfun (talk) 11:40, 11 August 2009 (UTC)
Okay, this is pretty horrible, somewhat unfinished and abandoned, possible irrelevant too.
Appendix:Names male-A, weird title, something like Appendix:Male given names/A seems more appropriate.
I don't want to "chase" Alasdair away from the Wiktionary, but this is not a blog or a personal website, there are rules and guidelines. Mglovesfun (talk) 11:49, 11 August 2009 (UTC)
Obviously we cannot have both Category:Occitan names and Category:oc:Names. Given names and surnames are parts of speech, so I would go for "Occitan names". Place names are topics (London is defined as a city, not as a place name), and Category:Place names is an erratic title. It should be Category:Places (see Dan Polansky's user page, under Surnames ), but nobody has the energy to change it.User Daniel. has recently created Category:Spanish names, to be used in his Template:namecatboiler.
But do we need a top level name category at all? A "name" can mean too many things, every proper noun can be called a name, rose is a name of a flower.
As for Alasdair, Appendix:Names and Appendix:Surnames are quite useful, although they do contain many mistakes. It's good to have all given names and surnames in alphabetical order. I wish we could persuade Alasdair to stick to them. Every time Alasdair steps outside them he creates a mess - not only in format, but I'd say about five percent of his information is erratic or mere guesswork.--Makaokalani 12:13, 11 August 2009 (UTC).
I always wondered how Appendix:Names and Appendix:Surnames are supposed to be used? What are they for? I could clean up Appendix:Armenian given names if I knew its function. --Vahagn Petrosyan 13:18, 11 August 2009 (UTC)
I can see two reasons for name appendices. One is a preliminary list to see which names we are missing - a good example: Appendix:Hungarian male given names. Or something that cannot be explained in an ordinary entry or through categories, like frequency - a good example: Appendix:Chinese surnames. I don't understand why Alasdair copies names from Danish or Norwegian categories and calls them Appendix:Danish/Norwegian given names. But we also get strange appendices transwikied from the Wikipedia.
Appendix:Names and Appendix:Surnames can work as preliminary lists, if you use discretion. Alasdair has been adding secret explanations for them for three years, nobody understands why. For example, Kettu is supposed to be a Finnish male name, from the surname Kettunen (!!). But what's the harm? Very few people are likely to push the edit button. Maybe Kettu really is a name in some language. What I'm worried about is that he'll put it in Appendix:Finnish given names, or make an actual entry.
Maybe a bot could make all these remarks visible, in small text for example. But then somebody would have to clean them up. I'm certainly not volunteering.
While we are on the subject of Armenian names, what do you think, Vahagn, if I created "Category:Armenian male/female given names in Roman script"? Or should they be grouped separately by every language using Latin alphabet, is there too much variation? It seems wrong to define transliterations as "English", I think they should be "Translingual", even if they are not used in every language of the world. But whatever the language statement, a proper category is missing. --Makaokalani 09:36, 12 August 2009 (UTC)
You're raising a very tough question. I'm sure you remember this discussion which did not end in a real consensus. As I see it, we need three types of categories:
1) Armenian names in Armenian script, e.g. Աժդահակ (Aždahak) (such categories already exist at Category:Armenian male given names and Category:Armenian female given names)
2) Non-Armenian names in Armenian script, e.g. Օբամա (Ōbama) (no such category yet)
3) Armenian names in Latin/Cyrillic/etc. script, e.g. Vahagn (no such category yet)
3a) Armenian names in English, transliterated according to English pronunciation rules, e.g. Azhdahak
3b) Armenian names in French, transliterated according to French pronunciation rules, e.g. Ajdahak
3c) Armenian names in Russian, transliterated according to Russian pronunciation rules, e.g. Аждахак, etc. --Vahagn Petrosyan 14:01, 12 August 2009 (UTC)
That's the most logical comment I've heard about this problem so far. And the list would go on:Armenian names in Thai, Thai names in Armenian...Here and here is some more discussion. Real names (like English Natasha, pronounced as in English) should be separated from the way a foreign name appears in that language (like Kirill, pronounced as in Russian). Surnames are easier, immigrant families usually keep them but change the pronunciation in a few generations. "Category:English surnames from Armenian" is fine. But if Vahagn is an English proper noun, how can it be entered in a category beginning with "Armenian..."? Should transliterations be grouped by script or by language? Hundreds of languages use Roman script and there usually isn't so much variation. We have very few names of this kind yet, maybe it's too early to worry. I'm just nervous that somebody will create categories like "English given names from Armenian/Armenian given names from Thai" etc. --Makaokalani 12:22, 13 August 2009 (UTC)
The categories can be called Category:English renderings of Armenian given names if you want an English proper noun to be in a category starting with "English...". Re the grouping: I think it definitely should be done by language, not scripts. For example, Armenians in France render their names in Latin differently from the ones in the US, Uruguay or Turkey. Besides, different sections in different languages would have different inflections. If you are worried about the same spellings in hundreds of languages being entered into Wiktionary, I don't see that happening. I'm sure just an English entry for Latin spelling or a Russian for Cyrillic one will suffice in practice. --Vahagn Petrosyan 23:16, 14 August 2009 (UTC)
I'm skeptical about much of that (anyone surprised?)
Names (a.k.a. proper names, proper nouns) are a special kind of word.[4] They are translingual in a way. Whichever language you speak of me in, I still have the same name.
Romanizations and other transliterations are typically not "English renderings" or whatever. Some romanization schemes are language-specific, but others are not, and even the ones that are are usually used in different languages. People who romanize their names tend to use one form in any language. I remained Michael when I travelled in several non-anglophone places, and Russians have one official passport romanization that they must use throughout the world.
Language is more clearly an etymological attribute of a name than a synchronic one. Михайло is a traditional Ukrainian name, but it may also be used in Russian or Bulgarian. Michael is an English version of it, but used in many languages. Mykhajlo is a romanization of it, also used in many languages, including English.
By the way, Place names and Surnames are lexicographical categories classifying words. Places and people are encyclopedic categories classifying things. People, places, and things are already well categorized in Wikipedia—let's stop trying to duplicate their work, because we will forever do it worse—and anyway we only have entries for words in the dictionary, not entries for things. —Michael Z. 2009-08-19 03:35 z
Aside: Names are never "male" or "female", as they have no biological sex, no reproductive organs, and neither mate with each other nor reproduce. Names have grammatical gender, which is properly expressed as "masculine" or "feminine". If we are going to talk about corrections in pages names, etc., then this should be addressed. --EncycloPetey 05:31, 16 August 2009 (UTC)
I disagree. English names do not have grammatical gender, because English does not have grammatical gender. They do, however, have social gender, in that, say, "Michael" is used for men, "Michelle" for women. Similar things can be said about languages like Hungarian/Magyar and Finnish and Persian/Farsi, which all lack a masculine-feminine distinction linguistically (even more so than English — they don't have separate pronouns for men as for women), but nonetheless have names that tend to belong to one sex or the other. Nowadays "masculine" and "feminine" are usually preferred to "male" and "female" when social gender rather than biological sex is at issue, but in a dictionary, we need to be extra careful not to give the impression that grammatical gender is in play. —RuakhTALK 01:39, 21 August 2009 (UTC)
(off topic, but...) English does have grammatical gender, but not in agreement between head and dependent or in morphology (a few rare nouns aside). Grammatical gender in English is evident only in pronoun selection. With personal pronouns, we have masculine (e.g., he), feminine (she), and neuter (it), and with relative pronouns, we have personal (who) and nonpersonal (which). Because of gender, you can say the baby is in its crib, but you can't say *baby Mia is in its crib. --Brett 23:30, 21 August 2009 (UTC)
I've seen some "stolen gender": a blond man, a blonde woman, a naïf man, a naïve woman. Vanishingly rare, though. Equinox ◑ 23:37, 21 August 2009 (UTC)
Google searches with non-alphanumeric characters
A common problem, I bet this has come up before, but Google tends to ignore (or just deal badly with) special characters like é, è, ë when doing searches, which makes it harder to verify stuff. Is there any way to get round this, or is there another search engine that deals with them better? Mglovesfun (talk) 12:50, 14 August 2009 (UTC)
Put it between quotation marks, and then only the exact form will be matched. E.g. try "Bronte" vs. "Brontë" Qorilla 14:13, 14 August 2009 (UTC)
I'm already doing that, for example réglement vs. règlement. Either it can't tell the difference, or it assumes I want all of these (also reglement, Reglement, REGLEMENT, etc.) which I don't. Mglovesfun (talk) 14:17, 14 August 2009 (UTC)
You must have a different Google than me. It works perfectly fine here. -- Prince Kassad 14:28, 14 August 2009 (UTC)
Try a plus sign before the word: +réglement. --Vahagn Petrosyan 16:23, 14 August 2009 (UTC)
When I follow google:"réglement" and google:"règlement", I get two different result sets. --Dan Polansky 16:53, 14 August 2009 (UTC)
Years ago Google used to ignore most diacritics in the USA, but not elsewhere (e.g., in Canada, where French is an official language). Perhaps it still works that way. —Michael Z. 2009-08-16 03:18 z
Transliteration in Template:l
I will add support for tr= to Template:l so that people don't add the transliteration in brackets after the word, (and would have the bonus that if we get Extension:Transliterator installed it can automatically add them). This is working on the assumption that we want transliterations beside links, which I personally think are good, but no doubt there's a whole 'nother argument to be had about that. The only hitch is that it would use the same space as the current gloss= parameter (in brackets after the word). At the moment, the gloss= parameter is used by American Sign Language to indicate the English spelling of the word, and by some foreign language definitions to point to the correct English definition. (see وصل and 1@TipFinger-PalmBack-1@CenterChesthigh-FingerUp 1@BaseThumb-PalmBack-1@CenterChesthigh-FingerUp). As neither language need transliteration this shouldn't be a problem, but it is an issue that may need resolving at some point. Conrad.Irwin 22:17, 14 August 2009 (UTC)
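(To illustrate the idea concretely — the Russian word below, and the assumption that {{l}} takes the language code as its first parameter, are placeholders for the sake of example rather than anything from the template's documentation:

current: {{l|ru|мир}} (mir)
proposed: {{l|ru|мир|tr=mir}}

i.e. the template itself would render the transliteration in brackets after the link, in the same slot the existing gloss= parameter uses.)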
Template:term handles that issue well, I think.—msh210℠ 02:56, 17 August 2009 (UTC)
{l} was supposed to be a simple template used for the listings in appendices, ====X terms==== and such, where there would be no need for transliteration and glosses, essentially simply a shorthand for typing the full language name and the linked term twice. Now it appears that the transliterations are needed almost everywhere in case of obscure scripts, and esp. in case of obscure fonts supporting obscure scripts, and people like to add glosses in ====Related terms==== and similar, and with this new functionality this template would simply become a clone of {{term}}. Perhaps it should simply redirect to {term}? --Ivan Štambuk 03:46, 17 August 2009 (UTC)
But {term} italicizes.—msh210℠ 21:20, 17 August 2009 (UTC)
Wiktionary logo 2009 refresh voting
Hello Wiktionarians! The Wiktionary logo 2009 refresh has been going on for a while now, and as the logo submissions have been made, the voting will start soon. Please visit meta:Wiktionary/logo/refresh#Voting for a discussion about the voting, and participate when the voting actually starts at meta:Wiktionary/logo/refresh/voting. One of the reasons why the old logo vote failed to reach consensus was that too few people from Wiktionary joined the vote, so please consider helping the project get a universal high-quality logo (even if you prefer the current logo, you're allowed to vote for it). Thanks! Wyvernoid 11:56, 15 August 2009 (UTC)
Linking headwords
WT:ELE#The entry core doesn't directly address treatment of the headword, except to say "For uninflected words it is enough to repeat the entry word in boldface."
Many editors link parts of the headword, although this is not mentioned in the guidelines. Some infer that this serves as an etymology, to the extent that I recently had words with an experienced editor who was removing information from etymology sections in part or in whole, because he felt that headword links already communicated the information.
Problems with linked headwords:
Links in the headword are not etymological, except by coincidence.
Example: farmer's sausage doesn't come from farmer's + sausage but is a borrowing from something like bauerwurst ("farmer's sausage")—others: art for art's sake, ball lightning, flea market.
In thousands of entries, compounds are linked but not etymologically—for example, we see links like Spanish Water Dog (now improved), but the etymology is Spanish + water dog.
The links hide subtleties: in the above example, the words Water and Dog, or Water Dog link to water and dog or water dog, hiding the case distinction from the reader. It's bad practice and confusing for the reader to have link targets differ from link text (especially when the difference is as subtle as capitalization, and most especially when the capitalization is significant, as in Wiktionary).
The links hide everything: you can only see where they start and stop one word at a time, while you mouse over the term and watch carefully for the underlines appearing. In trompe l'oeil#Italian the link boundary falls within a word.
Coloured links turn the most important part of the entry into a multicoloured collage of blue link, purple visited link, and black unlinked text. See St. Elmo's fire (in my browser: purple St., blue Elmo, black 's, purple fire), trompe l'oeil (two variations in the entry), etc.
If the links actually did represent etymology, there's no way for the reader to know this, and having a standard "Etymology" section discourages the reader from guessing that this may be the case.
The headword links water down the visual impact of the headword, give uninterpretable and inconsistent information, and link terms for unknown reasons, which should either be explicitly mentioned and linked in another part of the entry, or are linked for no reason.
I'd like to propose we add a line to ELE saying not to link the headword. Alternately, we must explain to editors and readers how headwords should be linked, and exactly what the links represent. —Michael Z. 2009-08-16 04:18 z
I don't agree, it certainly (well, not often) doesn't do any harm. In some cases an etymology is needed as well, if it's a calque or a derivation of something. I don't think saying spring clean necessarily suggests that this is the etymology, just that you can click on these links for further information. Sometimes autogenerated titles need to be corrected, like Statue of Liberty needs lower case (statue of liberty) because the two nouns are only capitalized as part of proper nouns. Mglovesfun (talk) 10:45, 18 August 2009 (UTC)
At the very least we should state clearly in ELE that such links are not an etymology, and in all cases an etymology is needed as well. Is the reader supposed to guess whether these links are etymological or not in each entry? What prevents readers and editors from assuming that these are always etymological?
But what exactly is the point of these links? What do you mean "further information?" Such unfocussed and inconsistently-used elements water down the functionality of entries. Pointless design elements do real harm (let's add a dozen other things which "do no harm (well, not often)" and see what a great experience Wiktionary becomes).
No great design ever included elements just because they "do no harm." Leaving these links in brings down Wiktionary. —Michael Z. 2009-08-18 17:41 z
Wiktionary:Picture Dictionary
A new user started off this page with no discussion whatsoever. Looks like it has some merit to me. Mglovesfun (talk) 11:28, 17 August 2009 (UTC)
Yes. I think that these pictures showing the names of the different parts of a washing machine, etc., should be integrated to Wikisaurus. A thesaurus is more useful when illustrated, and should include words related to the thing (the main entry focusing more on words related to the word, such as derived words, etc.). Lmaltier 16:58, 18 August 2009 (UTC)
See eighteen-wheeler for a thought-starter. DCDuring TALK 16:26, 22 August 2009 (UTC)
Drop encyclopedic categories
Let's get rid of encyclopedic categories, and put some energy into cross-linking with Wikipedia.
Of course we'd keep lexicographical categories like Category:Surnames, Category:Place names, Category:Exonyms, and technical vocabulary categories like Category:Geography and Category:Onomastics.
But let's get rid of encyclopedic categories for things, because our entries represent words and names, not things. Wikipedia has articles about things, and categorizes things, and if we spend 1,000 editor-years working on our redundant copy of their categories, they will have spent 50,000 improving the original category tree.
Things like Category:Countries should be renamed Category:Names of countries, or deleted. We should make an effort to add sister-project boxes to all of the corresponding categories in both Wiktionary and Wikipedia, to make it easy and clear for the reader to jump to the appropriate category in the appropriate project.
Who's with me? —Michael Z. 2009-08-19 04:08 z
I'm with you on the fact that our entries represent words (names should be included only when they are words, in my opinion), not things.
I'm with you on the fact that we should not have encyclopedic categories such as Sparidae (or all category names understandable only by specialists). But categories such as Fish (i.e. categories using very common names) are useful. If you don't remember the name of some fish in Japanese, and you cannot enter Japanese characters easily, a category such as fish in Japanese is very useful.
I'm not with you on the renaming of categories: while you are right that a Countries category groups country names, this is always implicit here, so why do you want to make category names longer, more complex? The only difficult case for category naming is for separating e.g. names of towns (e.g. London) and words carrying the sense of town (e.g. megalopolis) or, generally speaking, separating "types" from "-nyms" (common nouns from proper nouns). Such cases should be discussed. Lmaltier 06:03, 19 August 2009 (UTC)
Hm—fish in Japanese is a good counter-example. One resolution would be for the Wiktionary link-box in Wikipedia to let you choose the language, but let's not get into a project like that now. I'll give this some thought.
Regarding naming: we'd like it to always be understood, but I see both newbies from Wikipedia and veteran Wikilexicographers having real trouble understanding the distinction. As a result, our categories are a mish-mash of technical subject vocabularies, thematically grouped concepts, and grammatical categories of words. Why on earth do we have separate Category:Names and Category:proper nouns? Why is it impossible to rename Category:US to Category:American English so it matches every other kind of Category:Regional English? The categories' names have to clearly define their nature, to help keep the hierarchies separate. For starters, let's give all of our 'nym categories explicit 'nym names. —Michael Z. 2009-08-19 07:09 z
I disagree with dropping topic categories, but maybe I do not quite understand the classification under your proposal. What is the hyponymy structure of the terms for categories that you are using? To clarify, the top part of the category taxonomy as I understand it:
hyponym
term category
Category:Place names
Category:Surnames
Category:Physics
Category:Countries
From what you have written and I have understood, you use the following terms to classify categories (but am I correct?):
lexicographical category
Category:Exonyms
encyclopedic category
Category:Rivers
Category:Trees (?)
Category:Vehicles (?)
Category:Boats (?)
Category:Sound (?)
Category:Movement (?)
Category:Communication (?)
Category:Language (?)
technical vocabulary category
Category:Geography - is the category allowed to include "river", given "river" is not a technical term?
Category:Onomastics
Category:Mathematics
Category:Philosophy
Until you document your classification scheme with a broader list of examples, I have a hard time understanding the impact of your proposal.
Admittedly, I am inclined to oppose your proposal regardless.
--Dan Polansky 07:23, 19 August 2009 (UTC)
That looks about right. Geography illustrates the problem: it is clearly a specialized subject field and not a classification of referents, but our category naming is so unclear and inconsistent that anything goes: the category is full of entries and subcategories that relate to "geography" in three or four ways. The meaning of categories is further watered down because the same categories are applied with restricted-usage labels and [[Category:xxx]] tags, so technical vocabulary is lumped together with thematic categorization (furthermore, labels are widely misused, and unjustified labels like {{bird}} promote misunderstanding).
We really need to do something to improve this embarrassing state. —Michael Z. 2009-08-19 08:57 z
I agree that categories for referents are for WP, not us. Fish in Japanese can be found (in theory) by looking up the word fish in Japanese (if there is one; I'm sure some categories of things in some languages have no name for the type but do for the individuals) and looking at its listed hyponyms or Wikisaurus entry. I think topical categories should go, though categories indicating fields' jargon should not.—msh210℠ 20:14, 20 August 2009 (UTC)
"The following is a list of Estonian words related to geography." I think that's pretty clear. I love the topical categories. Love. — [ R·I·C ] opiaterein — 22:44, 20 August 2009 (UTC)
I love the topical categories as well. They allow me to find missing entries on a theme, and I've seen quite a number of other editors make use of them for that. It's not possible to determine accurately whether non-editors are making as much use of them, since such activity doesn't show up the way that new entries and edits do. Consider Category:Hair, which led to the creation of many entries on names of hairstyles, facial hair patterns etc., and not only in English. The categories have a use also in finding a word that you know is in a particular field or related to an idea, but you can't figure out how to express it. Additionally, I learned words related to hair that I never knew, all because contributors categorized articles that I had missed in my ignorance. There, I have presented three good reasons to keep these categories. I haven't seen an actual reason presented for getting rid of them. Some problems in our current categories have been pointed out, but that's reason to fix them, not to eliminate them. --EncycloPetey 02:53, 25 August 2009 (UTC)
Then how about separating technical vocabulary categories from encyclopedic/thematic categories? Terms marked with regional and usage context labels are sorted into categories representing dialects and specific usages. We also have a rich set of restricted-usage labels which represent a very specific set of lexical information (applied by Topical context labels), but the terms so marked are mixed in with plain category tags applied to thousands of other entries. Perhaps we can use a set of prefixes for separate hierarchies? —Michael Z. 2009-08-25 06:29 z
Clarification of WT:CFI#Names of specific entities
This comes from Wiktionary:Requests for deletion#Uncle Scrooge, and many other discussions. I'd like to amend the guideline as follows. Please suggest improvements, and I'll start a vote shortly. —Michael Z. 2009-08-19 04:47 z
A name should be included if it is used attributively, with a widely understood meaning, independent of its referent. For example: New York is included because "New York" is used attributively in phrases like "New York delicatessen", to describe refer to a particular sort of delicatessen. A person or place name that is not used attributively (and that is not a word that otherwise should be included) should not be included. Lower Hampton, Sears Tower, and George Walker Bush thus should not be included. Similarly, whilst Jefferson (an attested family name word with an etymology that Wiktionary can discuss) and Jeffersonian (an adjective) should be included, Thomas Jefferson (which isn't used attributively) should not.
Started a vote at Wiktionary:Votes/pl-2009-08/Clarify names of specific entities. —Michael Z. 2009-08-27 04:53 z
Since there's no discussion, I'll start the vote now. —Michael Z. 2009-08-30 17:23 z
Rename Category:US Category:American English
This was discussed to death before, and I relented because one editor had a strong objection. In retrospect, I should have taken it to a vote. So here's my justification one more time.
All categories in Category:Regional English are named for the dialects they represent. The very important dialect spoken in the USA is called American English by linguists and lexicographers. Nobody calls it US English, United States English, or "US". Keeping the wrong name for this very important category is confusing for readers and editors, and just plain embarrassing.
Throw in your two bits, and I'll start a vote. —Michael Z. 2009-08-19 09:05 z
I support. I'd expect to find things like cowboy or cola in Category:US and not what is there currently. --Vahagn Petrosyan 10:59, 19 August 2009 (UTC)
The word English is clearly redundant in all of the English-language context tags. I'd favor eliminating it for all of the sense-level contexts in the interests of increasing the space available on a single screen for more useful non-redundant content.
I suppose the other extravagant wastes of visual space make any one waste of space seem like a trivial matter, especially compared to professional embarrassment.
I think we can take comfort from the likelihood that we would only be driving away users by making our entries more and more technically correct and complete and less and less useful to folks looking for definitions. We clearly already have more than enough contributors. If we could just reduce the demands on contributors' time from patrolling, feedback, answering silly questions, correcting entries that don't fit our unwritten framework, the extra time available should enable us to make great strides toward our eventual goal of […] (What was that goal again?) DCDuring TALK 12:25, 19 August 2009 (UTC)
The labels don't include the word English—the text is like US, Canadian, British, Irish, Newfoundland, or Hartlepool. They are usable for different languages, e.g., {{Canada|lang=fr}} puts an entry into Category:Canadian French. Only a few have language-specific text, like African American Vernacular and Western Armenian.
But the strings of labels do get long, and I wouldn't mind abbreviating them, as other dictionaries do. We won't run out of paper, but it's awkward to have labels longer than definitions, and we could be terse and expressive with US, Cdn, Brit, Ir, Nfld.
This proposal is just to make the category name reflect its contents. By the way, the corresponding encyclopedic category is Category:United States of America. —Michael Z. 2009-08-19 17:59 z
When re-implementing {{context}}, the intent was/is to have {{US}} and {{UK}} be regional context labels, just like others, defaulting to English. The labels are thus US and UK (as now), and the cats Category:US English and Category:UK English, consistently with all other region/language combinations.
As to "American English": it may be the usual name, but is an illiteracy. (Do you know how effing amusing for most of the world to listen to the US English usage of the word "American". People in the US really do think the US is the only country in the Americas. E.g. PBS Newshour: "Venezuela is America's third-largest source of imported oil." which sort of silly nonsense we see regularly. In the US, of course, it sounds just fine: Venezuela is some alien, not-really-existing place. It is especially hilarious when reading the US press referring to "Mexican immigration to America". ;-) In any case, we want to keep the tag "US", and it is best for the cat name to match the tag. Robert Ullmann 07:16, 20 August 2009 (UTC)
I agree with Mzajac, we should use the usual names (but full names, not abbreviations), even when we disapprove of them. But is American English supposed to cover both Canada and the US? (it's not clear to me). If it covers both, why not create 3 categories (American, US and Canada)? If the category is reserved for words used only in the US, and not used in Canada, US English might be less ambiguous. Lmaltier 21:20, 20 August 2009 (UTC)
I would not add cowboy and cola to this category: these words might originate from the US, but they are used everywhere. Lmaltier 21:20, 20 August 2009 (UTC)
<rant> "American English" is not an illiteracy. It is a standard term used by many professionals in linguistics. Just because many (generally Central/South Americans) get rilled up when people from the USA use "American" in a different sense than others, does not make that usage "wrong". Remember, we are primarily descriptive here. Let's not have proposals to adopt neologisms such as "USan". </rant> --Bequw → ¢ • τ 02:03, 22 August 2009 (UTC)
Yes, thank you, Beq. The two main branches of English are British English (formerly called "English" in the Empire) and American English, and they existed before the Thirteen Colonies got uppity about tea and split off from British North America. It may not be PC to say so, but although Canadian English constitutes the third English orthography, it is a (distinct) variety of American English (in the last few decades, North American English has been coined in recognition that a Canadian English dialect exists). For precision, Wiktionary usage restricts American English to mean the USA, and North American English (in Category:North American English) to encompass that with Canadian and Newfoundland English (and considers Newfoundland English to be Canadian, even though we are a historical dictionary—Nfld. was separate until 1949).
"US English" and "UK English" are not used in linguistics and lexicography except by mistake. —Michael Z. 2009-08-23 23:19 z
(You may find the attributive use of US in combination—for example, US English speakers, meaning "English-speakers of the US." This is different.) —Michael Z. 2009-08-26 00:57 z
Started a vote for further discussion at Wiktionary:Votes/2009-08/Rename Category:US Category:American English. —Michael Z. 2009-08-26 00:57 z
No discussion, so I'm starting the vote. —Michael Z. 2009-08-30 17:23 z
There are loads of political parties that I think should be deleted, but since they haven't been deleted yet, maybe they meet CFI and I'm missing something. Here are the ones I've discovered so far. Republican Party, Liberal Democrats, Labour Party, Democratic Party. Green Party seems to have merit as it seems like every country has one (should this be a common noun maybe?) but for the others, I just don't see it. Any chance of having a fairly complete list of these to see which ones we want to nominate, if any. Mglovesfun (talk) 09:56, 20 August 2009 (UTC)
what'boutdeletor phenotype? [o-did imentionithoroughlyHATE thoseINCREDIBLYSUBLIMINAL REMARKS Of quite afew Engl-nativs here-isthattheirPREROGATIV!?!butpl cmylilrant below.--史凡>voice-MSN/skypeme!RSI>typin=hard! 19:54, 20 August 2009 (UTC)
Wiktionary:Spelling variants in entry names
Input unbelievably welcome; last edits seem to be more than a year ago. Mglovesfun (talk) 19:32, 20 August 2009 (UTC)
wotifeelgoeswrong
[movd frm rfd]
i'dgo4wt tobe abroaddict>a.bit of grammar[ala Swan,whichsome entrys ractualy~dict.styl],gazeteer/geo,bitencycl.,phrasebooki/SHORTish entrysREFERING2wp,wm-books,etc>userFRIENDLY,klik-efficient[here:guidance2find wotevastuf:)[thoputinboundaryshard,irealiz
wp entrys<therisETYL2that,when1.used,changin'use praps-thatdef.sthWE'dbedoin~placenames[who rtheynamd after,lit.quotesundundund--史凡>voice-MSN/skypeme!RSI>typin=hard! 04:18, 20 August 2009 (UTC)
"Wikipedia is an encyclopedia, and they deal with topics in their articles. We are a dictionary, and deal with words in our entries. The principles of organizing an encyclopedia do not apply here because our goals are quite different." -
hate/likeit:there isOVERLAP--likalthose discusions here'bout saytheDEF OFA WORD[lexicografik1]-itookmejust2weeks dealin intesivly w/apliedlinguistics waybak i/oz2c thatMOSTofthose holy/bigwordHOTOPICS/TECHN.TERMSrpoorly defind>wotsthepoint inalthefiting??encycl/wp do alilbitof linguistiks[ipa,etyl],weneed2HELP'EMw/that[styloid-ipa?spica-etyl?let alone spica splint--have funsearchin i/wp..]>INCLUDING WP entrys [w/justLILdef-flesh,that indeed4wp],doinOURJOB w/etyl,ipa etc andsoHELPourusers.[imtrulyfedupw/althese mostlynarowsens def getinpalmdofasTHEdef[ex.:WOT IS A DICTIONARY,answerREALYNOTASTRAIGHT4WARDasu regulars'dlik2makebeliv,ncomin downw/big[policy usay?perdef>{punintended ;)}ALWAYS IN FLUX]stiks isntv.RESPECTFULeither],aweaknes esp.ofalthoseSOFTsciences as sociology,psychology etc imo[lookatsuch wp-entrys,howlers!!],nlet alonethe impresion itmaks uponanewby]
nmostofthose"dict.constraints"had2do w/SPACElimits["so we'lmakesomARBITRARYCRITERIAup"]-why esp.here onaproject ofsuchunprecedenteddimension/breadthppl rso"closed"2wotburgeonin'technologys cando4them-itleavesmebafled,butrealy..:(
nthisimhoPERVERS/DESTRUCTIVfocus on"shalwe deletethisentry,yea?!{hyper-tone intended.}"[mywatchp.isnowINUNDATEDbythem--isCREATINstuf realysoborin??]-rwe here2BUILDUPor2smashea others efortsunderthepretensofGARDIN'THEGRAIL--itsaWORKINPROGRES,4krist'sake
thenwehavthe bl/whitepushing[mywayornon],thecomnsensmidlwayislarglyuntrodn.
asstephensays-me2istartfeelinliknotmakin'entrys/substantialchangesenymore-wot'dbethepoint-nSMdesire2cthenextrfdstupiditythatlmanage2puldfromthewal??
iobviesly'vninputprob-wotsudestructivpplexcuse?ichardlyanysignofwillines2sitdown, givthingsathoughtn consequentlyformulate'em outinaclearncomprensivconstructivway-nifuvcomitmnt2get2urloftygoals,thatswotitltake,hardsciensinstedof kaffeeklatsch[findthataharshstatmnt?sitdownntake alongdeeplookatwoturdoin-idothatregularly,nfindthathelpfl.
wannadeletethiscozoftypin?go-on,nmakeafurthercharadeofurselfs!![iwenton2breakmyscaphoid,just4goodmeasure.]
thesametrustedcontributorskeepflipflopin,sayinthis,sayinthat--'gain:HOWSTHISGONNAWORK??
instedofaquiki rv,2hard2takthetim2weighthemeritnwhakitinbettershape?
theonliconsistentlinice'nthoughtflpersnhereinmyexp.isvisviva[welkombak!]--speculation:mitethe isuesrisdcontribute2sucha users absens??
thisisamegarant,sure--butwhohasbeenbrewinitfromthefirstfoot iputinsidehere???'vathought.[ivgotmorecontributionsnowalredy,mainlyonactual dict-p.,thansomsysops,w/outmuchguidans butWITHunclearguidelinz a3yo'dmostlydoABETTERJOB'ritin' jee..]
furthermore,ithoroughlyHATE thoseINCREDIBLYSUBLIMINAL REMARKS Of quite afew E-nativs here-isthattheirPREROGATIV!?!
wot1gets here:RUDEnHARSHNES,wot1NOTgets:TOLERANS,EVENHANDEDnOPENMINDEDNES
reRUnwritin2WMFboard:ivbeenontheverge3TIMS,ofdointhesame[DISCRIMINATION ISUES]1s'gain:aintnobubl here,paradise4pplkeenonDESTROYIN ES OTHERS TOYS[abitofAPRECIATION4 1s eforts?soNAIV huh--plSPENDTIMEmakin'GOOGFAITHEDITS,treat'emassuch!![ex:"recently we'd aneditor.."AINTREEdebacl,anotherNICEWELCOM!![dearth ofeditors,howkum eh?
urCMNref.manJUSTCARZ'BOUT HISPRIVAT TOYS-butCOMPLAINShethe only1adin'entrys[8000WOW-imSOimpresd,maksitUSEFL,realy.],havin'made itin2aTYPIN'SLOG-but askin4anINPUTMASK,MAKIN'HISPOINTBOUT TR-ISUES-ohno,hezurINVISIBLEXPERT,pulindastringfromhisrelmofshadz.
Posta q boutengl[letalochin.]>75%NOanswer-2busyDELETIN'STUFey,funnyglovz4most,ni'dntcalthisPATHETIC?!?
butdunspendtimeREADIN'THIS,iunderstandnowhere urtimispent.
usysopSILLY,IMATURnNOTVKNOLEDGBLppl[nothatimup4that,alredycozofmyhands]-dunufeelalotoftheprobshere ROFUROWNMAKING??[orurself-proclaimd'inclusionist'1whospecializesi/SPEEDYDELETIOONS4FORMAT-ISUES [nocontradiction?helpful?],not'avin'a clueboutmostforeignlanguagesn'even'avin'LIMITATIONSI/ENGLISH["wotsthesubjectofabiographynamd"-no ref-booksI/YOUROWNCOUNTRY??orDELETEDthepertainin'entry alredy?thegruelin'incOMPETENShere,je-sus!]-icanTRY2be niceANDthinkamapretydecentchap,butaSPADISASPADIsAS-P-A-D-E,sory4thatfact.
thathisranthasurvivd>12h w/out rv,isonliCOZICONTRIBUTED-wotimentiondheretho isclearFROMSTARTERShere-sowhySILENCE[By rv ofcours]newbys whotaketheirtimeTELIN'USO??howDAREupplcomplainboutbeinSHORTHANDEDwhenaludo=CHUCKIN'newpplOUT??[theSELFRIGHTEOUSNES'doffendGODiftheris1:(
whywasthentry"hypothenar"NOTthere-cozENCYCLOPEDICword???althoseARCHAIConceptspplike2bash eaotherw/here,makesmeflee2CCEDICT['dntwe importheirwordsbytheway soourchin.sectionbekumzFUNCTIONAL??NDIDIcreatesth2day-NO,isuccumbd2theNEGATIVFORCES HERE!!!!!!!!!!!!!!!!!!!!!
if som1 taksmycontrbutions'nmaks'em beter,imthe'apiestman i/theworld;rude/plain rm/del justCHEESEMEOFF,MEGA,asonlyLAZYINCONSIDERATEppl'ddoso.[go'n'count rv done byme vs. ACTUALIMPROVMENTS,exemplaryindeed--ori'lASKthepersonconcernd--stilthinkin'uguyzhavANYhighmoralground???
'dufeelaMENTALITYnATITUDCHANG'dbe considerd??
>igotridofmyeg, indisgust--史凡>voice-MSN/skypeme!RSI>typin=hard! 15:09, 20 August 2009 (UTC)
What the fuck is all this gibberish!!!!!
Please do an audio recording of your message and upload it to commons. Then, people might reply. -- Prince Kassad 01:55, 22 August 2009 (UTC)
isugestedAUDIOwaybak[butwasrebuted]-how2dosuch pl?--史凡>voice-MSN/skypeme!RSI>typin=hard! 02:32, 22 August 2009 (UTC)
"Please make a proposal to amend WT:CFI so that we can apply our resources to more entries. I know that we have already made all of our existing entries as good as we know how to. We need most especially to add entries that other dictionaries omit. It is particularly important that we make sure that language learners never have to work through the meaning of a phrase using entries for the constituent words. Better we should lexicalize everything. Let a billion collocations bloom."
Please make a proposal WHENI CANINPUT+constr.MILIEU/MIDSTto amend WT:CFI so that we can apply our resources to more entries. I know that we have already made all of our existing entries as good as we know how to.UR2BUSY'DELETIN'4THAT2HAPEN We need most especially to add entries that other dictionaries omit.INDEED-MYSTREETNAME:IWANT ETYL,OBSCURSPORTSTERM-IWANT PLAINENGLIS EXPL ETC. It is particularly important that we make sure that language learners never have to work through [DICT=GOLDSTANDED,NEEDS ENTRYS]the meaning of a phrase U[THEABOVPOSTER]HAVNO DEEPLEARN/TEACHING OF2NDLANGUAGE EXPERIENS,N'HENCE LAKPERSPECTIV ,AOTH BOUTHE 'CONSTANTGUESIN'N'WORKIN'OUTREQUIRD INTHATTPROCES.using entries for the constituent words.LIKE GOIN'THRU THE28SENSESOF'OFF' JUST COS SB POSTEDAN INCOMPEHENSIBLTECHN.DEF OFA CRICKETERM-NOTX[MINUTS=OK,HRS NOT4GETTIN ANEWCOLOCUTION,SOI STILDONTNO]. Better we should lexicalize everything.YES!! Let a billion collocations=NOTORIOUSTUMBLIN'BLOK4LEARNERS bloom. MYCAPS---史凡>voice-MSN/skypeme!RSI>typin=hard! 03:09, 22 August 2009 (UTC)
itsnotPOLICYppl dislike,butur0-TOLERANS~POLICESTATE[butCOMNSENS/PSYCHOLOGYSKIL1realy'lNOTfindhere.
uppl rv4TECHNICALITYS,nthansosurprisednew-editorsDONTLIKEIT-DOUPPL LIVONTHE MOONORWOT???--史凡>voice-MSN/skypeme!RSI>typin=hard! 10:49, 22 August 2009 (UTC)
For the past half-year, I've been trying to develop a writing system for American Sign Language based on the Roman alphabet. I've made some progress -- enough, at least in my estimation, to warrant adding some of my results as entries in Wiktionary. However, my results are still limited in extent, only preliminary, and yet without sanction from the ASL community. Nonetheless, despite these limitations, I really do believe this method I've developed could potentially be a foundation for a writing system that could benefit ASL research and deaf culture. And with a lot of input, know-how, and initiative from the Wiktionary and ASL communities, I think this project could be a success. So, because I'm new to Wiktionary and not familiar with its policies or capacity, I've started this new discussion topic to determine whether members of Wiktionary might believe my data and initial results are appropriate to the mission of this wiki site and, if so, how an ASL section with words represented in this manner might best be implemented. However, although I hope my proposal gets a lot of sympathetic feedback, there are at least three potentially complicating factors that should be considered first: (1) Other writing or transcription systems have been attempted in the past, although only one of which, Valerie Sutton's Sign Writing, in my inexpert opinion has any following among present ASL signers -- and even its popularity seems minimal. (Anyone interested in learning more about Sign Writing should see its entry in Wikipedia.) Now, as you might expect, I think the method I've tried to start is, on balance, more helpful than Sign Writing -- mainly because Sign Writing's mode is quasi-pictorial and thus incompatible with most people's communication software. Even so, I would like to sincerely advocate for its adjunct inclusion in any ASL section in Wiktionary, if technically feasible, because I think its strengths and weaknesses complement those of my proposed method. (2) ASL communication is intimately tied to the English language, so much so perhaps that an ASL section in Wiktionary strictly separate from English might not be optimal. And (3) just as letters in any writing system are associated with certain sounds native to a language, letters in the system I'm proposing are linked to phonemic features native to ASL, and so, if possible, a page in Wiktionary dedicated just to these phonemic correspondences might be helpful. And, well, I hope after reading all the above, people could reply, give their opinion, and offer any constructive advice they might have -- I will be very appreciative; thanks. 66.213.98.17 20:56, 20 August 2009 (UTC)
Sorry to inform you that you've duplicated fairly recent efforts (especially by User:Rodasmith though also by User:Positivesigner and others) to develop a way to include sign-language entries into Wiktionary. See WT:ASL and Index:American Sign Language.—msh210℠ 20:59, 20 August 2009 (UTC)
Thank you for your response. I wasn't aware that an ASL section was already on Wiktionary, although really I shouldn't be surprised. Still (and if this is the appropriate forum to be asking), what do others think about a community-developed writing system based on the Roman alphabet? I know Sign Writing has strong proponents, but I've also heard some criticism as well. Would perhaps ASL signers in general prefer a writing system more like the standard Western European type? decimus 21:13, 20 August 2009 (UTC)
Our system (see the links above) uses the Latin alphabet already. If you wish to change that system, I recommend you make your recommendation at the more specialized page Wiktionary talk:About sign languages, since I suspect that most people watching the Beer parlour don't care how we do it. But note that we have a good number of entries and translations in the existing system already, and a lot of work has gone into developing it, so if your recommendation is rejected, that doesn't reflect badly on the recommendation necessarily (though I haven't seen it yet): perhaps it's merely not so much better as to warrant changing everything around that's already in place.—msh210℠ 22:56, 20 August 2009 (UTC)
Misbehaving audio file....
i am using firefox 3.5.2 on win xp sp2. i tried to play the audio file on the page lion. when i click on it it takes me to a new page where firefox opens the audio file and plays lion. but when i replay the file from there it spells li-lion. there is some error here.
i downloaded the ogg file to my comp and played it. it spelled fine. no prob with the file... i haven't tested any other audio file. i don't know where the prob is. but if u experience the same prob join me to report it.... —This comment was unsigned.
This is almost certainly an issue of browser configuration. When an entry includes an audio file, it just offers a standard hyperlink to that audio file, just like a link to any other page or resource. If you can't play it properly, or it opens new windows etc., it's probably your setup. Equinox ◑ 23:41, 21 August 2009 (UTC)
Inclusion of SOPs for translations — proposal
I'd like to propose that we allow English SOPs that are found in major English-to-X dictionaries for multiple values of X. My reasons are severalfold:
Stephen G. Brown (talk • contribs), who seems to be an experienced translator, seems to think that many SOPs are useful for translators.
One presumes that the compilers of these dictionaries must also think such entries are useful.
Users frequently add such entries. I think these users are nearly always misguided (sorry, users!), thinking that something is an idiom when in fact they've simply failed to look up the component words — indeed, we sometimes get comments at RFD that basically take the form "Keep, because I've failed to look up the component words, so apparently we need all possible SOPs that ever use the senses I'm not familiar with" — but it's probably more welcoming to convert such entries into translation-hangers than to delete them. (DCDuring (talk • contribs) has often observed that we can use such entries to improve the entries for the component parts, and that's true, but I don't think users get warm fuzzies when we redlinkify their entries, even if their contribution does end up helping out in this way.)
Most major dictionaries don't take our approach of splitting out idioms into their own entries; rather, most of them treat these idioms in the entry for their most important component word. This has problems of its own — "most important component word" is often subjective — but it renders irrelevant the often-blurry distinction between SOPs and idioms: no matter which one it is, you can look it up at the most important component word. Our approach means that readers have to try multiple entries to determine if we consider it SOP or idiomatic; and worse yet, if we consider it SOP, then we usually do very little to help them find the relevant senses of the component words. (There are exceptions — someone looking up "have a cow", for example, would have relatively little difficulty finding the right sense of cow — but I wonder if, now that I point it out, someone will "fix" it to have the normal, unhelpful presentation. And even [[cow]] isn't as good as what you'll see in many other dictionaries, where the salient phrase would appear in bold and/or italics at the start of the sense line, in the style of a sub-headword.)
I therefore suggest that we give minimal definitions:
The brother of one's mother; see maternal, uncle.
and dispense with etymologies, usage notes, related terms, etc. (unless there's a specific reason to have them — in which case it's probably not actually SOP), and encourage the addition of translations.
Questions? Comments? Concerns? Death threats? —RuakhTALK 00:33, 22 August 2009 (UTC)
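(A sketch of what such a translation-hanger entry might look like — the headword and the particular templates here are chosen only for illustration, not prescribed by the proposal:

==English==
===Noun===
{{en-noun|head=[[maternal]] [[uncle]]}}
# The brother of one's mother; see [[maternal]], [[uncle]].
====Translations====
{{trans-top|brother of one's mother}}
* French: {{t|fr|oncle maternel|m}}
{{trans-bottom}}

i.e. just enough English to hang translations on, with everything else left to the entries for the component words.)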
Are there to be any criteria for including/excluding these? At least one translation at time of creation? Within a month? Within a month after challenge? Does the translation have to be verifiable? How?
One thing I have often wondered about is the appropriateness of the wording of translations, especially the use of awkward English, obsolete words, or mixed-register phrasing. Headwords with these ills will be more visible to search engines in this scenario. Will we have some tags to mark these? We will probably have many more critics of translations than we will have folks to correct the translations, but the tags might be useful to prevent translators from wasting time on awkward English expressions. DCDuring TALK 01:43, 22 August 2009 (UTC)
YES!=1.step2moreUSERFRIENDLINES--史凡>voice-MSN/skypeme!RSI>typin=hard! 02:14, 22 August 2009 (UTC)
Could we perhaps use this as the long-sought inclusion criterion for Phrasebook? I would replace "major dictionaries" with "at least [3] dictionaries published in print," but otherwise it seems like as good a basis for this problematic area as any. If we are going to have these entries, it would be much better to integrate them with our existing translations-only entries under a single, clear criterion. -- Visviva 05:49, 22 August 2009 (UTC)
wotstheOBSESIONw/PRINTEDICT.s-theySUKanyway!--史凡>voice-MSN/skypeme!RSI>typin=hard! 10:15, 22 August 2009 (UTC)
I like the idea. However, I would not express it this way. An English-French dictionary might mention in its bird entry: little bird: oisillon. This is not a reason for accepting little bird as a separate page, this information should be given in the translation section of the bird page. Anyway, nobody would consult a little bird page (or am I wrong?). But, if it can be considered as a verb or a set phrase, such as thrust back, vector graphics or ranked society, it makes sense to include it, even when its meaning can be deduced from the sum of its parts.
These cases are simple cases. There are more difficult cases. My English-French dictionary mentions to give an account of and provides a translation. Should give an account be considered as a set phrase and created as a page here? Maybe, but I'm not sure. Is it a set phrase?
In other words, I think that all set phrases should be accepted, even when SOP. A good reason is that, although they can be understood when heard, their existence cannot be guessed by people not knowing them, who might use slightly different words, and be misunderstood. But when something is not a word nor a set phrase, it should not be included (and, anyway, the page would not be consulted) and the translations given in the page(s) for its components. Lmaltier 06:21, 22 August 2009 (UTC)
I've always been against adding SoP terms in any language because they can be translated by a single word in another language (usually English for us). We've had some real atrocities on fr.wikt like high school student because that's collégien (or lycéen) in French, which is of course one word. Mglovesfun (talk) 10:19, 22 August 2009 (UTC)
I do think set phrases get a rough deal here. I tend to think if something is really common it should get an entry, unless it's unbelievably sum of parts. Be able to, to me anyway, is so commonly used, and since be and able have a lot of meanings, I'd support it. Mglovesfun (talk) 10:24, 22 August 2009 (UTC)
(unindent) I welcome the initiative, but do not know the impact of allowing translation targets. Quite possibly, the likely impact could be discouraging, meaning we would let too much in. The likely impact—the likely added new terms—should better be documented.
I have created Wiktionary talk:CFI#Translation target as a home location of the topic, from which I have linked to this discussion in Beer Parlour. --Dan Polansky 17:17, 22 August 2009 (UTC)
I'd be in favour of a good lexically-based proposal for accepting "set phrases," common expressions, or whatever. I'm against making up new English terms and creating entries for them, just to support foreign-language entries—this is just multiplying the work required for a single entry. Headwords get entries, glosses and definitions go into the entries. —Michael Z. 2009-08-23 18:16 z
Re: "I'm against making up new English terms and creating entries for them, just to support foreign-language entries": Yeah, of course. No one is suggesting that. —RuakhTALK 19:32, 23 August 2009 (UTC)
I am sorry for the confusion that I have created by creating the section page "Translation target" at the talk page of WT:CFI. Let us continue the discussion here, if you don't mind, or otherwise correct me if I'm wrong.
I am reposting the terms that possibly lie within the impact of the proposal, although people disagree on whether they do.
Examples of possible translation targets:
high school student – French: collégien or lycéen; but: "highschooler"
indoor football – Dutch: zaalvoetbal; is this actually a non-SoP name of a sport?
problem solving – German: Problemlösen; but: "problemsolving"; but-but: "problemsolving" is much less common than "problem solving"
small boat – Czech: loďka, lodička; diminutives in general; but: "boatlet"; but-but: "boatlet" is rare.
two-wheeled – Finnish: kaksipyöräinen
Candidate criteria for inclusion of translation targets, even if sketchy:
(C1) The translation target has to be included in at least three printed translation dictionaries.
I think it could work. In fact I would go further and say that terms like "high-school student" are properly idiomatic, in the true sense of the word, in that they are the most natural way of expressing the concept in English. So if a term is idiomatic then I would always consider it worth including, even if it is also sum-of-parts. Ƿidsiþ 22:45, 24 August 2009 (UTC)
I disagree. Allowing high school student opens the door for middle school student, elementary school student, art school student, community college student, etc. All of these are the most natural way to express the concept, and all are [attributive noun] + [noun]. Likewise, small boat should not get an English entry. It is in no way idiomatic. There are other words in English used for "small boat", depending on the context or type of boat. Indoor football is called arena football in the US, and that (at least) might merit an entry, since it is not clear from the combination that the arena must be indoor. However, it is used only for indoor American football, not for indoor association football (soccer). --EncycloPetey 02:39, 25 August 2009 (UTC)
thisis justMOREfrom som1whoNEVERGOTANYWHERE IN FOREIGNLANGUAGSKILLS-butelmeTRAD-userHOSTILE-DICT-PETER,where amigonafind fi the chin.tr-l 4'internet acces',tr-literations like4'gwBush',spREDOVERTHEPARTSUTHINK?!?o-leme gues,icango'nCHEK WP INTERWIKIS,thatsolvzit ay--reread a user's w/PERTINENTEXPERIENS like sgb 's coments,letitSINK,rUMINATEit,NTHEN COMBAKw/aMORSENSICAL ELABORATIONifupl,blAMATEURtalk here!
ps mostengl-speakers rNONATIVS[bilions of'em!!],theyneed2be aHI PRIORITY4en.-theDE FACTO WORLDLANGUAGE-wt,nNOTjust som oldfashiondGRAMARFREAKSwhothink theygothe a&o boutMAKIN'A DICT justcos theypourd a lotoverOLDSTUFYBOOKS--ivhad anABSOLUT OVERDOSE ofBAAAD DICT.S 'n wt 'ljoin'em/thoseranks OVERMYDEADBODYonly,n throwtherest ofurOLDRESTRICTIVGUARDatme,i'l[grudginly admitently ] deal w/it[no1 everbeen abl2say i'dlak pluckines.]--史凡>voice-MSN/skypeme!RSI>typin=hard! 06:42, 25 August 2009 (UTC)
Echoing Widsith, there is one thing that bothers me about Wiktionary's use of the term "idiomatic". I've grown to read "idiomatic" here as "not sum of parts", but my pre-Wiktionary understanding of the term "idiomatic" was different. My original understanding was that a phrase or an expression is idiomatic if it sounds fully natural in the given language, but its naturalness cannot be derived purely from the knowledge of the meaning of the parts, and, equally importantly, the non-naturalness of a non-idiomatic phrase cannot be derived purely from the knowledge of the meanings of the parts. So what cannot be derived from the meaning of the part is the naturalness of the sum, while the meaning of the sum may still be perfectly clear from the meaning of the parts. To disambiguate, I store the concepts under two terms in my mind: "Wiktionary:idiomatic" and "Dan:idiomatic".
Keeping an additional concept on a given overloaded term "idiomatic" is not a big deal. But there is the template {{idiomatic}} that may be not wholly consistent with the "nonSoP" reading of "idiomatic". Per Wiktionary:idiomatic (=nonSoP), each multi-word term is idiomatic. And yet, not every multi-word term is marked using {{idiomatic}}. The term "black hole" is a multi-word term and a nonSoP, but it is not Dan:idiomatic; the non-SoP-ness of "black hole" has nothing to do with Dan:idiomacity. What is Dan:idiomatic is "to make sense", instead of the German "Sinn ergeben" or Czech "dávat smysl".
I am writing this to share a confusion that I have had about the term "idiomatic" for some time. I am not proposing with this that WT:CFI's "idiomatic" should be redefined to mean Dan:idiomatic. --Dan Polansky 10:48, 25 August 2009 (UTC)
Halfwidth and Fullwidth Forms
Some graphemes are represented in Unicode as both halfwidth and fullwidth characters. We're not super consistent in our treatment of these distinctions. Does anyone have a preference of using a hard-redirect, soft-redirect, or not including them at all?
Halfwidth and Fullwidth Forms Unicode Block - Unicode.org chart (PDF)
U+FF0x ! " # $ % & ' ( ) * + , - . /
U+FF1x 0 1 2 3 4 5 6 7 8 9 : ; < = > ?
U+FF2x @ A B C D E F G H I J K L M N O
U+FF3x P Q R S T U V W X Y Z [ \ ] ^ _
U+FF4x ` a b c d e f g h i j k l m n o
U+FF5x p q r s t u v w x y z { | } ~ ⦅
U+FF6x ⦆ 。 「 」 、 ・ ヲ ァ ィ ゥ ェ ォ ャ ュ ョ ッ
U+FF7x ー ア イ ウ エ オ カ キ ク ケ コ サ シ ス セ ソ
U+FF8x タ チ ツ テ ト ナ ニ ヌ ネ ノ ハ ヒ フ ヘ ホ マ
U+FF9x ミ ム メ モ ヤ ユ ヨ ラ リ ル レ ロ ワ ン ゙ ゚
U+FFAx (ᅠ) ᄀ ᄁ ᆪ ᄂ ᆬ ᆭ ᄃ ᄄ ᄅ ᆰ ᆱ ᆲ ᆳ ᆴ ᆵ
U+FFBx ᄚ ᄆ ᄇ ᄈ ᄡ ᄉ ᄊ ᄋ ᄌ ᄍ ᄎ ᄏ ᄐ ᄑ ᄒ
U+FFCx ᅡ ᅢ ᅣ ᅤ ᅥ ᅦ ᅧ ᅨ ᅩ ᅪ ᅫ ᅬ
U+FFDx ᅭ ᅮ ᅯ ᅰ ᅱ ᅲ ᅳ ᅴ ᅵ
U+FFEx ¢ £ ¬ ̄ ¦ ¥ ₩ │ ← ↑ → ↓ ■ ○
Note: U+FF65–FFDC encodes halfwidth forms. U+FFE0–FFEE includes fullwidth and halfwidth symbols. --Bequw → ¢ • τ 00:53, 22 August 2009 (UTC)
From The Unicode Standard 5.0, pages 434-435, they were only added "[t]o achieve round-trip conversion compatibility with [...] mixed-width encoding systems[...]". We have no need to convert between legacy encodings. They are an artifact of encoding systems, and we have no business using them. Just redirect them. There is no need to create entries with words written in these characters either, so perhaps even better, something should be done at the base Wiktionary level to automatically convert these to their standard versions. Bendono 01:20, 22 August 2009 (UTC)
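(For the sake of illustration — treating ＣＤ here as a hypothetical fullwidth page title — a hard redirect is just a one-line page reading:

#REDIRECT [[CD]]

so anyone who pastes in the compatibility form lands straight on the standard entry; a soft redirect would instead be a stub entry of the "fullwidth form of CD" type that the reader still has to click through.)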
ｗ is the only one that can even be considered keeping. All the others should just be hard redirected. -- Prince Kassad 01:52, 22 August 2009 (UTC)
Anything other than a soft redirect seems to violate the principle of least astonishment IMO. If I enter this into the searchbox or URL line, it's probably because I ran across it somewhere and want to know what the heck it is; redirecting doesn't really give me the information I'm looking for. This is actually one area where Wiktionary can (currently) provide better information than Google and most other search engines. So if we do redirect, the redirect should go to the "Appendix:Variations of Foo" page rather than the page for the half-width form. But given the community's unwillingness to delete or redirect other random Unicode codepoints (thinking of the Hangul syllabic blocks here), I see no reason why we would exclude these. -- Visviva 05:38, 22 August 2009 (UTC)
Prince Kassad, they are not non-standard characters, they are just foreign characters. Japanese standard input produces full-width ０, １, ２, ３, ４, ５, ６, ７, ８, ９, instead of "standard" 0123456789, some other symbols from the above table are also used in standard Japanese, e.g. ＊ and ＆ (counterparts of * and &). ２ is a redirect page, should be removed IMHO. They occupy more space and look wider and look harmoniously with the rest of the characters. Chinese also use them but not as consistently as Japanese. We have entries for Arabic numerals, we might have Japanese as well: ٠ zero, ١ one, ٢ two, ٣ three, ٤ four, ٥ five, ٦ six, ٧ seven, ٨ eight, ٩ nine. Anatoli 10:21, 22 August 2009 (UTC)
I have created Japanese entries for full-width numerals. I could use some more help in usage in other languages. The full-width Roman characters have a similar usage in Japanese, so CD is normally spelled ＣＤ (full-width) or CD (half-width) in Japanese but I have to find if it's standard. The half-width Roman characters seem to be more prevalent in Japanese but some Japanese word-processors and dictionaries use full-width only. Anatoli 10:50, 22 August 2009 (UTC)
The difference being that half/fullwidth is strictly a computer issue. I don't go around and write halfwidth and fullwidth letters on a piece of paper. The same reason is why we hard redirect entries with typographical apostrophe to entries with the normal ASCII apostrophe. -- Prince Kassad 11:03, 22 August 2009 (UTC)
Whilst we have hard redirects from entries with the typographical apostrophë (such as from ha’p’orth to ha'p'orth), we have distinct entries for the individual characters themselves; e.g., ', ', ', &c. Accordingly, we ought to hard-redirect something like ６９ to 69, but we should have full explanatory entries (especially for technical data) for the individual characters themselves. † ﴾(u):Raifʻhār (t):Doremítzwr﴿ 11:22, 22 August 2009 (UTC)
We don't write here on a piece of paper, I see the use for this info. The codes, look and especially usage differs and it matters. If in Japan they use ２００９ or 二〇〇九, not 2009, users may want to know it. On a piece of paper, Japanese would write numbers with larger spaces, especially noticeable in the vertical script. Some Japanese words with Roman letters might need some fixes, like JR -> ＪＲ (Japan Railway) or CD -> ＣＤ. That's the way they appear in a Japanese text online. Although, the half-width are also used. Anatoli 11:32, 22 August 2009 (UTC)
Note also that this will be a precedent for creating 1,000 entries for italic, bold, bold italic, fraktur/blackletter, double-struck, monospace and sans-serif Latin letters in Unicode (why they were encoded in the first place is beyond me). -- Prince Kassad 15:04, 22 August 2009 (UTC)
True, but that doesn't seem problematic as long as we do them all, with some measure of consistency as to format. The total number of assigned codepoints in Unicode 5.1 -- about 240k -- is much less than the number of attested lemmata in a typical written language. -- Visviva 03:01, 23 August 2009 (UTC)
I think there's an even stronger reason to have real entries for the w:Mathematical alphanumeric symbols (that Prince Kassad mentions) than for both full/half-width forms. While many fonts do display full/half-width forms differently, they are not necessarily different (generally one's format determines the choice between ０ and 0 not some semantic difference). Unicode specifically said the difference between the styles in mathematical symbols was "fundamentally semantic rather than stylistic", so I would want entries for these symbols for sure. --Bequw → ¢ • τ 05:35, 23 August 2009 (UTC)
I agree that the stylistic variants aren't nearly as important as the semantic variants, but nevertheless, when I look up a character on here, I want to know all the technical information pertaining to it, who uses it and for what reasons, why a variant exists, and so on. As Visviva said above: "This is actually one area where Wiktionary can (currently) provide better information than Google and most other search engines." † ﴾(u):Raifʻhār (t):Doremítzwr﴿ 11:35, 23 August 2009 (UTC)
I agree with Doremítzwr, and for his reasons: include the characters, but hard-redirect words they comprise. ✡ ﴾(u):msh (t):210﴿ 19:47, 23 August 2009 (UTC)
I believe the non-(Katakana/Hangul) symbols can be used in all the w:CJK langauges, so I imagine we'd want separate L2 langauges (up to 3) listed on each of those pages (like the latin numerals are now) rather than one "Translingual" header (like most of the punctuation marks are now). I also started a usage note template. --Bequw → ¢ • τ 00:22, 24 August 2009 (UTC)
I strongly question this. "Translingual" does not mean omnilingual. I think that the best approach is the one we have long taken with Han characters (which have a similar distribution of usage): a ==Translingual== section for those aspects that are language-independent (such as technical information), and ==(Language)== sections for any aspects that are more or less language-specific. Otherwise a complete entry for full-width characters would include redundant sections for not only Japanese, Korean, and Mandarin, but also Cantonese, Min Nan, Shanghainese and so forth. For most punctuation marks, I think Translingual alone is probably sufficient (unless there are specific, documented peculiarities of usage in a specific language). If we can get by without, say, a German entry for ~, I think we can get by without a Korean entry for ~. -- Visviva 05:16, 26 August 2009 (UTC)
I disagree. Translingual is indeed too broad. "East Asian" is already a limitation, especially if it's in the English Wiktionary, out of the East Asian languages, it's only Chinese and Japanese that count, not Korean. No need to mislead users with characters, which have no relevance to English and European languages. Listing all Chinese dialects to show the punctuation is again absurd, as there is (basically) only one written standard form. A ==Chinese== umbrella would do a better job but since there was so much pressure to abandon "Chinese" term for "Mandarin", ==Mandarin== and ==Japanese== will do.
These full-width characters are no longer used in Korea; Japanese use them more consistently than Chinese. The usage in Chinese is fading for numbers and Roman characters, but commas, colons, question marks and other punctuation is used (examples of characters in today's newspapers: China: 文章摘录如下：, Taiwan: 歐鴻鍊請辭？). Today's date in the Japanese newspaper (horizontal style) looks like ２００９年８月２６日, where characters are aligned better than 2009年8月26日 (the idea is that any character occupies the same square space, including punctuation symbols). In my opinion, Mandarin and Japanese flags are needed. No need for Korean and Chinese dialects. We don't have a special category for Cantonese punctuation; it is shared with Mandarin. Anatoli
Certainly there are some technical and visual differences between "２００９年８月２６日" and "2009年8月26日", but to me, a Japanese native speaker, they are identical as characters, in the layer of lexical recognition. I mean, the difference I feel is quite similar to that between "August 26, 2009" and bold "August 26, 2009", though the technical difference resides in an upper layer, not in Unicode code points but in Wiki text, in the latter case. I would call both pairs lexically the same, and don't feel a need to employ a different 2nd-level header for full-width ２ than that of half-width 2, as in the case of ２ and 2. While the fact that those full-width characters appear mainly in Chinese or Japanese text is important, I believe it can be sufficiently described as a part of the definition or a usage note.
Anyway, of course, I agree that providing explanation for each of those possibly-exotic characters will be a big help to our users. --Tohru 16:35, 26 August 2009 (UTC)
Thanks, Tohru, but what do you suggest? A simple redirect won't give any information, that means there must be an entry. Luckily, it's not such a large unmanageable subset. Unlike bold or italic, these characters' usage is specific to Japan and Chinese speaking countries. Anatoli 00:52, 27 August 2009 (UTC)
I've already given my opinion above. While I recognize that fullwidth forms are in use, similar to Tohru, I do not recognize them as lexically distinct from their halfwidth forms. Accepting them now will potentially open up many future problems. For instance, they are not necessarily limited to Japanese or Chinese. I have received complete e-mail messages numerous times written either entirely or partially in fullwidth English characters. The reason for this is due to the Japanese IME. Just like with dates, you can type "English" such as this without needing to switch IMEs. I am sure that this is annoying some, so I will stop. Just because you can Google such text and verify that it exists does not necessarily mean that we should start adding entries for them.
On a similar note, just now I wanted to type U+FA5B. This is a glyph variant of 者 with an extra dot in the center. Wiktionary automatically converted (without even a redirect) this to U+8005. And rightly so, as U+F900-U+FAFF are compatibility ideographs, again encoded for round-trip conversion. This is the same situation. Technical information is nice, but do not lose sight of the fact that we are compiling a dictionary of words first. Bendono 08:11, 27 August 2009 (UTC)
Questionable sense of a word (with no citations to support it)
Hi. I occasionally run into a sense of a word in a Wiktionary entry that seems questionable; sometimes they "seem" dead wrong or "as if" they might be the work of a vandal or bored teenager. If there is no citation, and no example of usage, I tentatively confirm in my mind that this might be an issue worth noting, for the sake of the quality of Wiktionary. My question is: How can/should novice editors bring that to the attention of one of the more serious and competent wordsmiths on Wiktionary?
Is there any sort of a template with which one should tag that particular sense of the word? I could not find any on Wiktionary:Index_to_templates that seemed appropriate to the purpose.
Should we just note it on the discussion page and move on, hoping that some serious wordsmith will one day read the discussion page and catch the item? (This is what I did on the sense I noted this morning on tenant; my comment is here: Talk:tenant. But I don't want to sideline my main question with this specific example.)
So how should we bring such an issue to the attention of one of the more serious and competent wordsmiths on Wiktionary? N2e 16:29, 22 August 2009 (UTC)
I use the {{rfv-sense}} template. SemperBlotto 16:33, 22 August 2009 (UTC)
Thanks SemperBlotto. I did not find that template when I looked. My bad. And now I see that DCDuring has already marked that sense I was concerned about on tenant. So all is very well. N2e 19:25, 22 August 2009 (UTC)
aspirated h vs silent h
As this absence of sound has no phonetic symbol, we decided last month on fr.wikt to use a Template:h. Here the aspirated h is currently represented by the inappropriate "ʔ", as in haricot#French. But I really think that adopting a {{h}} would be more practical. JackPotte 23:48, 22 August 2009 (UTC)
This issue is being discussed on Wiktionary talk:About French. I agree with you that ʔ is inappropriate and I created a {{asph}}. This template is still in its infancy and feel free to make any changes. (I like the way fr.wikt does it!) No {{muteh}} exists yet. —Internoob (Talk|Cont.) 18:14, 23 August 2009 (UTC)
Authorization to run bot
Hi. I would like permission to run my bot:
My user name: Malafaya
My bot's user name: User:MalafayaBot
Software: Pywikipediabot
Task: It will exchange interwiki links among categories only with other Wiktionaries and update them accordingly
Due to the very low update rate required, I believe a bot flag is not absolutely necessary.
Thanks, Malafaya 00:58, 23 August 2009 (UTC)
Uh, isn't that exactly what User:VolkovBot does? I don't see the necessity for another interwiki bot. -- Prince Kassad 01:20, 23 August 2009 (UTC)
Yes, I believe VolkovBot does that and yet there are still lots of categories without interwiki links or outdated (because that's lots of work anyway). Allow me to explain why I'm asking for permission here: I already run my bot in smaller Wiktionaries. More often than not, my bot ends up not updating anything even if there was supposedly things to update. Imagine for example a category "Numbers in greek", existing in Portuguese Wiktionary and here, and nowhere else (at least, linked anywhere). English Wiktionary is not aware of the same category in PT, but PT is aware of the EN category. After I run my bot, things will be kept exactly the same, because the bot is not allowed to update here. So, even if VolkovBot runs here, it still won't find the Portuguese category because there's nothing linking to it (only the other way). This happens very often and it's the main reason why I'm applying for bot use here. Malafaya 01:31, 23 August 2009 (UTC)
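To make the asymmetry described above concrete: an interwiki bot effectively computes a closure over whatever links it can read, and a link that exists only on a wiki the bot may not edit never gets mirrored back. A toy sketch of that closure step (the category names and the dictionary-of-sets data structure are invented for illustration; a real run would go through the Pywikipediabot framework and actual page edits):

```python
# Each (wiki, category) maps to the set of (wiki, category) pages it links to.
links = {
    ("pt", "Categoria:Grego"): {("en", "Category:Greek language")},
    ("en", "Category:Greek language"): set(),  # en is unaware of the pt page
}

def complete(links):
    """Repeatedly add missing back-links and shared links until every page
    in a cluster points at every other page in that cluster."""
    changed = True
    while changed:
        changed = False
        for page, targets in list(links.items()):
            for other in list(targets):
                known = links.setdefault(other, set())
                missing = (targets | {page}) - {other} - known
                if missing:
                    known |= missing
                    changed = True
    return links

complete(links)
# ("en", "Category:Greek language") now lists the pt page as well -- but only
# if the bot is actually allowed to save that edit on en.wiktionary.
```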
Due to the lack of interest and the seemingly intrinsic counter-interwiki-bot spirit present here, I hereby withdraw my request. The only comment I got after 2.5 days was analogous to "we already have someone working on Italian words so why would we want you to do it too?" (and I thank Prince Kassad for taking the time to post his comment, even if it's not what I was hoping for). My perspective is that, wikiwise, the more the better, even for bots. With the exception of interwiki linking in the Wiktionary main namespace, which is governed by shared orthography rather than by translation of concepts into the wiki language, pages need at least an interwiki link somewhere else (i.e., a link from category German to category Deutsch in de.wikt). This means that the wiki you process on makes a difference, as some interwiki-isolated pages may have information that links to other wikis. VolkovBot is just one and, even if it runs against all Wiktionaries, it will take a long time before a cycle is completed, and even then it's not certain that it will catch all the relations between categories of different wikis. Two bots would be better and quicker than just one (despite VolkovBot running all around, take a look at my bot's contributions at pt.wikt and ca.wikt, for instance).
Enough said, I have applied for bot flag at the French Wiktionary and it should go well, so I'll be updating there. VolkovBot will eventually get the new interwikis it needs from there, so the long term result is approximately as if I was updating here directly.
Thank you for your attention, Malafaya 11:22, 25 August 2009 (UTC)
VolkovBot doesn't seem very active, I'd strongly support this, having seen it in action. Mglovesfun (talk) 14:55, 25 August 2009 (UTC)
VolkovBot seems to be more active in the main namespace lately. Nevertheless, its work is a very important contribution and it's the only globally established Wiktionary category interwiki linking bot. MalafayaBot should discover new data (interwikis) for VolkovBot, and VolkovBot should spread that data globally. It's a good symbiosis :). Malafaya 22:52, 25 August 2009 (UTC)
You need to pass a vote to get the bot flag. You should start one. --Ivan Štambuk 16:08, 25 August 2009 (UTC)
Maybe that's it. Maybe I should have started a vote immediately. I followed the directions at WT:BOT which mention gathering a consensus here at BP, but I guess no one takes that step too seriously (except myself :) ). Thanks, Malafaya 16:12, 25 August 2009 (UTC)
I started here as I'm not sure how well you know the English Wiktionary. Mglovesfun (talk) 16:32, 25 August 2009 (UTC)
Thanks, Mglovesfun. Yes, you're right. I don't know it that well in what concerns community decisions. As I mentioned above, I followed the procedure described in WT:BOT which directed me first to conducting an opinion poll here at WT:BP, with "meager" replies. Again, thank you for clearing out what I thought was lack of interest by the community. I'll be following the vote page. Over and out on this topic here :). Malafaya 22:46, 25 August 2009 (UTC)
rfd RENAME
in2VERIFICATION [ONLY!]which ofcours hasitsplace;ifsthFAILSthat,it'l inth endgetTAKENOUT[orLABELDasuch imo],i/part orentirely,thatsDUEPROCES,we doNOTneed2stres"delx5"likenow,itjustmakus lookbadlikeNAROWMINDED OGRS,eagerto eatawayNEWBYS EFORTS[butrealy,whostilfeels likeCREATINGentrys i/the PRESENTATMOSPHERE?!?{nwe rcomplete alredy,sure,c hypothenar's history.}--史凡>voice-MSN/skypeme!RSI>typin=hard! 04:13, 23 August 2009 (UTC)
Assuming you were attempting to say "RFD should be merged into RFV because RFD sounds negative and the process is the same" (which is incidentally much easier to type and read than what you put), RFD serves a different purpose from RFV: while RFV is for words that might not be words, RFD is for things that, even if they would pass an RFV, we might not want to include anyway. While they could be combined, it's quite nice to have the two separate, as a stale RFV gets deleted, while a stale RFD is kept. Conrad.Irwin 22:36, 25 August 2009 (UTC)
Also, the file size of such a page would break the MediaWiki limit for page size. -- Prince Kassad 22:42, 25 August 2009 (UTC)
Phrasal sentence adverbs
I am not sure why we would want all attestable phrasal sentence adverbs. The idiom rationale that we are now applying seems to effectively provide a lower standard for inclusion of such phrases than for any others. It appears that we could include to be honest and various simple derivatives of the form "to be X honest", where X ranges over a subset of adverbs that includes [perfectly, brutally, very, totally, completely, more, really, and absolutely]. Similarly "In all X", where X ranges over a list of nouns like "fairness", "honesty", "frankness", "innocence", etc.
Do we want all of them as entries?
Should the ones with adverbs be redirects to the forms without adverbs?
Do we want some of them only to appear in appendices with titles of the form Appendix:English sentence adverbs of the form "in all N"?
At the next level:
Is this controversial?
Does it need research?
Do we want to include this among the considerations to be addressed in "technical amendments" to WT:CFI? DCDuring TALK 20:00, 23 August 2009 (UTC)
These look a bit like snowclones, but offhand I can't recall how we decided to handle those. The arguments in that discussion might be relevant and helpful, if someone can find them. --EncycloPetey 02:28, 25 August 2009 (UTC)
Wiktionary:Beer_parlour_archive/2008/April#Gaps in entry titles. seems to be our most recent full discussion of the general X-formula/snowclone issue. DCDuring TALK 11:23, 25 August 2009 (UTC)
IMO: 1. No. These are an open set; our goal as a project is wildly ambitious, but still finite.
2. Yes (if they are common enough that someone might plausibly search for them).
3. I don't think we should delude ourselves that such appendices are anything but a black hole at present.
1'. Ha ha.
2'. Seems cut and dried to me. Research might be merited in a specific case, if there were e.g. a question of whether "to be brutally honest" is anything but sum of parts.
3'. Seems like a good idea. Pity it's such an effing pain to update policies around here.
One advantage of these is that there is a natural home for the main entry, at the adverb-less version. Obviously this does not work for many snowclone cases. -- Visviva 13:19, 28 August 2009 (UTC)
Looking up the meaning of a word in wiktionary : "curation" :
"Curation : The act of curating". Clicking on to to "curating".
"Curating : Present participle of curate". Clicking on to "curate".
"Curate : 1. (transitive) to act as a curator". Clicking on to "curator".
"Curator : A person who manages, administers or organizes a collection , historically at a museum, library, archive or zoo. The function is now quite detached from institutions and many curators, some of the most famous, tend to be independant , organizing exhibition all over the world, with different partners, as public as private." Jackpot! Only took me 3 more clicks!
157.193.203.65 08:05, 24 August 2009 (UTC)
Then you can easily improve it into "Curation : The act of managing, administering or organizing a collection , historically at a museum, library, archive or zoo." :-) --Kipmaster 12:06, 24 August 2009 (UTC)
For your information: since I am not much active lately, I have been desysoped at my request [5]. --Kipmaster 12:08, 24 August 2009 (UTC)
<grins> Can someone please unlock my userpage...? :-) --Kipmaster 12:11, 24 August 2009 (UTC)
Done :) --Dijan 12:14, 24 August 2009 (UTC)
At least you're around enough that we know you're alive. —Neskaya kanetsv? 08:51, 5 September 2009 (UTC)
Appendix:List of legal Latin terms
I created this appendix (by which I mean I copied it from Wikipedia) and made a few edits. I thought it might be useful to users but also to contributors.
A lot of the links are red and I might want to add some of these terms.
I was wondering what the policy is for Latin phrases that get used a lot in (otherwise) English texts.
Can I use the heading "English"?
John Cross 21:55, 24 August 2009 (UTC)
If you're going to copy stuff from Wikipedia, or another Wikimedia project, then AFAICT (IANAL) you have to preserve the edit history, per the GFDL (or whatever license is being used), which requires attribution. This can be done by using special:import (for those who can see that page) or by asking someone else to do so or by copying the edit history to the talkpage of the page you make. (I think there's a script somewhere that produces a page's edit history wikified. Try w:WP:JS perhaps?)—msh210℠ 22:31, 24 August 2009 (UTC)
The newly created list can probably be deleted, as it is mostly redundant to Appendix:Legal terms, unless you want to specifically select Latin terms used in English to the exclusion of natively English terms.
Appendix:Legal terms is linked to from the law entry, from the section "See also". --Dan Polansky 10:20, 25 August 2009 (UTC)
I think there is some benefit to having a list of Latin legal terms used regularly in English texts/courts in English-speaking countries. I can't use special:import; perhaps someone with more admin rights could help me. I think it would be tough to argue the word list was the copyright of anyone other than the Wikimedia Foundation, so I don't really see a major issue here. John Cross 18:25, 31 August 2009 (UTC)
That depends. Do you mean "terms from Latin that are used as legal terms in English courts" or do you mean "Latin legal terms that have since been borrowed into English"? There is a big difference there. Some "Latin" terms were in fact used in courts of law where Latin was the language of the court, but many of those expressions are just everyday Latin phrases or collection of words that had no special meaning in Latin. Such terms have only taken on specific legal meanings within the corpus of English law (and its derivatives in other countries). For these words, the language is English, since even though it is composed of Latin words, it did not have a special legal meaning in Latin. If you are indexing those words, then the title of your appendix is misleading, since it implies they are terms from Roman law and not from English law.
As for the copying, the copyright isn't the issue. MW documents are required by the licensing to display the contribution and edit history. If you copy the contents without the edit history, you are claiming to be the author of the content, which is unethical as well. User:Goldenrowley is probably our most experienced admin when it comes to importing from WP. --EncycloPetey 04:14, 3 September 2009 (UTC)
Redundant articles?
Is there a policy saying that we should always use articles in our definitions? It is customary to use articles in dictionaries, but to me it seems redundant. For example, the definition of "a table" would be "an item of furniture with a flat top surface raised above the ground, usually on one or more legs" and a definition of "the table" would be "the item of furniture with a flat top surface raised above the ground, usually on one or more legs", so should the definition of "table" be simply "item of furniture with a flat top surface raised above the ground, usually on one or more legs"? Is there any good argument why we should include the articles? What do we all think about making it a policy to use no articles unless they are necessary? Gregcaletta 02:22, 25 August 2009 (UTC)
There isn't a fixed policy I'm aware of, but house style is to include the indefinite article when defining a common noun and (often) including the definite in defining a proper noun. --EncycloPetey 02:26, 25 August 2009 (UTC)
It just reads better with the indefinite article IMO. A definition -- for a noun, at least -- is basically an answer to the question "What is a(n) _____?" If someone asks what a table is, it is far more natural to answer "An item of furniture (...)" than just "Item of furniture (...)".
It is easy to imagine uses for Wiktionary data that would require that definitions be perfectly substitutable for the definiendum -- e.g. some sort of AI/NLP application. But in those cases, it is trivial to remove the "a"s and "to"s. Our own style should be human-oriented, I think, and should follow lexicographic precedent unless there is reason to do otherwise. -- Visviva 10:42, 25 August 2009 (UTC)
The same applies to the use of the particle "to" in our definitions of verbs. In both glosses and definitions "to", "a", and "the" often serve to disambiguate between a verb and noun. The other means of doing so may not be present in users' working memories as they read the gloss or definition. DCDuring TALK 11:04, 25 August 2009 (UTC)
I agree. They also help to distinguish between a countable noun and an uncountable one (though they're not foolproof in that regard). —RuakhTALK 00:24, 26 August 2009 (UTC)
Fair enough. Someone could add this to policy, if it hasn't been already, but it's probably not necessary as articles seem to be used pretty consistently anyway. Gregcaletta 04:17, 28 August 2009 (UTC)
It is not so consistent in glosses, such as appear in many pages in {{trans}}, {{term}}, and {{sense}}, where the same considerations apply, except with more force because there are often fewer clues as to how to read a word in a gloss. It might be worth a proposal and vote. See Wiktionary talk:Entry layout explained#Including articles and particles in definitions and glosses. DCDuring TALK 14:36, 28 August 2009 (UTC)
Stereotypical sample sentences
I just changed the sample sentence of indecisive from "Girls are very indecisive. They spend ages choosing a dress." to something a little less generalizing. No matter what one's opinions are about gender roles, or indeed any kind of stereotype, these sample sentences are wholly unnecessary and should in the name of neutrality be avoided when possible. Here's another example[6] of how one can, if not actually provide outright counter-culture images, then at least uphold a semblance of reasonable balance, i.e. that men are not always portrayed as active and women as passive, especially when it comes to issues of romance or sexuality.
I would assume that gender-specific statements are probably among the most common when writing sample sentences, but I wouldn't be surprised if this might occasionally occur when it comes to ethnicity and other categories as well. Do we have any guidelines on this? Has this been discussed before?
Peter Isotalo 09:23, 25 August 2009 (UTC)
Help:Example sentences contains the guidelines and the policy (transcluded); I have done this to the page to reflect what you said, feel free to improve upon it further - I've noticed that the guidelines are all written in slightly different styles if you're looking to improve the whole section (though the policy cannot be modified without a vote). Example sentences are there to demonstrate how words are used, and I feel that providing a natural sounding example is more important than providing a "politically correct" example - though there is little need to deliberately be controversial when writing them. At least one user has been blocked permanently for persistently adding unacceptably explicit sentences, so there is some control over them, but at the same time it would be a mistake to overregulate them - they are useful both for providing context and also for providing tiny nuances that won't fit into a definition. In the particular instance of indecisive, I think your example is less natural than the original, but I'm not sure I can pinpoint why. While in the change to come on I see no particular gender issues, I would be very careful in deliberately going against stereotypes in example sentences; we are seeking for examples that show the language as it is commonly used (whether that be correct or not). Conrad.Irwin 00:04, 26 August 2009 (UTC) Conrad.Irwin 00:01, 26 August 2009 (UTC)
Also, I think it can always benefit an entry to replace a contrived "sample" showing how we think a word is used with an actual citation quoted from a book or other source. Even some of the better print dictionaries have included made-up examples which don't match real usage at all. —Michael Z. 2009-08-26 01:18 z
The diff you mention changes the entry [[come on]] from having two sentences with a man's coming on to a woman to having one of those and one with a woman's coming on to a man. For lexical purposes (viz, to show users that the word come on can be used when referring to either gender), that's reasonable. But the purposes you mention (to show "that men are not always active and while women passive, especially when it comes to issues of romance or sexuality"), that's, if you'll excuse me, ridiculous. We should provide sentences that provide usage information of the word, and that's it. We're a dictionary.—msh210℠ 18:24, 26 August 2009 (UTC)
I agree that reflecting proper usage is the first priority of a dictionary, but it doesn't mean that any other aims are "ridiculous". What's the point of that kind of characterization anyway? To try to prove that dictionaries aren't intended for real world usage? That our readers couldn't possibly care and would never notice? That we as wiktionarians are never prejudiced? I never suggested that we write specific guidelines that prescribe that exactly 50% of the personal pronouns have to be female or that any sentence even remotely touching the issue of gender roles has to be 100% politically correct. It's a suggestion that, when choosing between two equally relevant example sentences, there are generally only benefits in picking the one that doesn't potentially offend, and that this applies not just to racial and ethnic slurs but to stereotyping in general.
The change that Conrad made seems like it could solve the worst of the problems, though.
Wiktionary:About Old French
Does anyone mind if I start an article for this? I've been down to Leeds Uni (where I study) and read the introductions to a couple of French-Old French dictionaries. I'm also trying to coordinate it with the French Wiktionary at the same time. Mglovesfun (talk) 16:06, 26 August 2009 (UTC)
Go for it. I had the impetus once to add some basic words from an Old French grammar I picked up, but quickly discovered the spelling variations problems and lost all hope. --EncycloPetey 04:08, 3 September 2009 (UTC)
It would be useful. Our etymologies of Middle English and English words often refer to Norman French (xno), Old French (fro), and Middle French (frm). I have also seen Old North French (no separate ISO code, I think) in etymologies. Clarifying each of these would be useful. Old French could be the first and mention the others. DCDuring TALK 10:04, 3 September 2009 (UTC)
Agreed. I would love to see this. Old French is a very important language for the English Wiktionary, probably only surpassed by its importance to the French Wiktionary. -Atelaes λάλει ἐμοί 11:32, 3 September 2009 (UTC)
Greek derivations.
Previous discussion: Wiktionary:Beer parlour archive/2007/November#Greek_derivations.
What do we want to do about Modern Greek derivations? el means Modern Greek and grc means Ancient Greek, but due to a confluence of various well-meaning past actions, {{etyl|el}} has come to be used in many entries that actually require {{etyl|grc}}. Ideally, I think {{etyl|el}} should present the text "Modern Greek", link to w:Modern Greek, and categorize in Category:Modern Greek derivations; but that seems inappropriate until we fix the existing entries.
So, I propose the following multi-step plan:
Create {{etyl:el-GR}} for Modern Greek, that does what I describe above. (GR is the country code for Greece.)
Go through Category:Greek derivations, and its other-language counterparts, and edit all entries to use either {{etyl|grc}} or {{etyl|el-GR}}.
Move {{etyl:el-GR}} to {{etyl:el}}, or redirect {{etyl:el}} to {{etyl:el-GR}}, or something.
While we're at it, we may also want to create templates for other forms of Greek, such as Byzantine/Medieval Greek.
—RuakhTALK 19:42, 26 August 2009 (UTC)
This is a known problem. I have been very slowly cleaning up Category:Greek derivations but the number of entries is too much for me alone. The situation is exacerbated by the fact that a huge number of etymologies is missing the Greek script, another portion of those is missing the polytonic diacritic marks, and yet another large group (of verbs) is wrongly lemmatized to the infinitive and not to the first-person singular.
We need more helping hands.
PS. Byzantine/Medieval Greek does not have an ISO code. As far as I know we treat everything with polytonic diacritics as "Ancient Greek", the rest as "Greek" (except for the recently created Category:Cappadocian Greek language). --Vahagn Petrosyan 20:14, 26 August 2009 (UTC)
Right. And I'm not suggesting that we start giving Greek words under ==Modern Greek== L2 headers. It's just that it was decided a long time ago that Category:Greek derivations would be split, but then later changes undid that. So I'm re-suggesting that we split it, and suggesting a way to do it. It'll take a lot of a hard work on the part of knowledgeable editors, and I'm not expecting it to happen overnight; indeed, the fact that it won't happen overnight is the reason that we need a way. —RuakhTALK 21:28, 26 August 2009 (UTC)
Similar approaches have been previously proposed (e.g. {{MGr.}}) and ultimately rejected. I think in large part because it confuses an already confused issue. If we wanted to take an automated stab at cleaning this up, the best bet would really be to have a bot auto-replace all instances of {{el}} with {{grc}}, as there are so few instances of Greek derivations, especially compared to Ancient Greek derivations. Such an approach would allow for easy monitoring of additions to Category:Greek derivations, and reproaching of editors adding {{etyl|el}}. If we follow your route, we're going to have a whole lot of entries claiming to have come from modern Greek, when they really didn't, whereas now, they're at least ambiguous (at least to a user who doesn't know our language naming policies). As for Byzantine Greek, that really needs a unified approach to dialect forms. As it currently stands, Byzantine Greek is a dialect of Ancient Greek (which includes everything up to 1453). I've created a number of Ancient Greek dialect templates, which are currently only used in {{grc-alt}}, but I think could easily be incorporated into {{etyl}}, if we could agree on it. However, all things considered, I think that this issue should be put on the back-burner, as I'm unwilling to work on it at the moment (I'm working on something else with Ancient Greek at the moment, which, when finished, will I think be worth the wait), and as far as I've seen, I'm the only editor interested in consistently working with the language. -Atelaes λάλει ἐμοί 21:52, 26 August 2009 (UTC)
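For what it's worth, the mechanical part of the auto-replace suggested above is a one-line substitution; the real work is reviewing the handful of genuine Modern Greek derivations afterwards. A rough sketch only — the function name and the sample string are made up for illustration:

```python
import re

def retag_greek_etyl(wikitext):
    """Retag {{etyl|el}} / {{etyl|el|<lang>}} as Ancient Greek ({{etyl|grc}})."""
    # the lookahead keeps longer codes such as "ell" from matching
    return re.sub(r"\{\{etyl\|el(?=[|}])", "{{etyl|grc", wikitext)

print(retag_greek_etyl("From {{etyl|el}} {{term|λόγος|tr=logos}}."))
# From {{etyl|grc}} {{term|λόγος|tr=logos}}.
```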
Soon Middle Greek will get its own code gkm which we might utilize. --Ivan Štambuk 22:03, 26 August 2009 (UTC)
Re: "If we follow your route, we're going to have a whole lot of entries claiming to have come from modern Greek, when they really didn't": I don't get it. My entire goal was to avoid that: entries that currently use {{etyl|el}} would only claim to have come from modern Greek if they were manually edited to use {{etyl|el-GR}}. What am I missing? :-/
Re: Byzantine Greek: Oh, that's good, then. I was imagining that Category:Greek derivations must include some Byzantine derivations we'd need to create a home for; but if it's considered O.K. to use {{etyl|grc}} for them, then that works great. :-)
When these derivation categories are finally sorted (so that all the derivations are properly sorted, be they from the Ancient or the Modern language), I think that {{etyl|el}} should display Modern Greek (and not just Greek) — the fact that the language of derivation will explicitly state "Modern" will probably cause editors to pause and correct their misuse of ISO codes. † ﴾(u):Raifʻhār (t):Doremítzwr﴿ 01:35, 27 August 2009 (UTC)
I may have misunderstood, but I thought you proposed redirecting {{etyl:el}} to {{etyl-GR}}. Wouldn't that cause {{etyl|el}} to say Modern Greek? -Atelaes λάλει ἐμοί 22:46, 26 August 2009 (UTC)
Oh, I did, but only as a last step, after all the entries that currently have {{etyl|el}} are sorted out. Maybe I just haven't looked enough, but I haven't seen any examples of human editors using {{etyl|el}} inappropriately; all I've seen are cases where an editor used {{Gr.}} (which IMHO was legitimately ambiguous, even though it "officially" meant the same as the ==Greek== L2 header, i.e. Modern Greek), where AutoFormat (talk • contribs) autoconverted that to {{etyl|el}}. But if it does happen that editors use {{etyl|el}} inappropriately, then I could go either way on that part of the proposal: we'd want to clean up any entry that showed up in Special:WhatLinksHere/Template:etyl:el, but while it was there, would it be better if it assumed Modern Greek, or not? Either way, we're talking about a far-off date — there are 841 entries using either {{etyl|el}} or {{etyl|el|xx}}, and it'll be a while before they're all sorted out — so we can probably cross that bridge when we come to it. :-) —RuakhTALK 01:56, 27 August 2009 (UTC)
wrong translation of section name Beer Parlour...
Sorry to inform you, but the translation of the section named Beer Parlour from English to Portuguese is wrong. The correct translation of beer parlour from English to Portuguese is 'Cervejaria', not 'esplanada'. I'm a Portuguese native and a speaker of Portuguese as my mother tongue. I hope this will help... —This comment was unsigned.
That's not a translation, but a link to a page with a similar purpose on pt.wiktionary. So the name can be very different, since every community chooses its own. --Nemo bis 08:31, 28 August 2009 (UTC)
Wiktionary:Citations
See Wiktionary_talk:Citations#Why_split_the_citations.3F. I can't understand this policy. --Nemo bis 08:28, 28 August 2009 (UTC)
Thanks for checking the documentation and bringing the inconsistencies here. I have explained it as best I can, but, as you suggest, the guideline (not policy, I think) needs to be updated to fully reflect our current best practices. DCDuring TALK 14:53, 28 August 2009 (UTC)
Appendix:Spanish names for María
What's this, and what does it do? Mglovesfun (talk) 10:04, 28 August 2009 (UTC)
Changing the title to Spanish names referring to Mary would be clearer. But some of the columns are not Spanish at all... Lmaltier 19:10, 28 August 2009 (UTC)
Wiktionary:Votes/pl-2009-08/Add en: to English topical categories
Pretty much does what it says on the tin. Mglovesfun (talk) 15:03, 28 August 2009 (UTC)
Curious how I was just thinking of this problem, as my bot is at the moment under a vote. Currently, there's ambiguity between "Category:X (All languages)" and "Category:X (English only)" on this Wiktionary which is a problem for example for interwiki linking. Malafaya 15:09, 28 August 2009 (UTC)
+en: for English where every other language has its own ISO code? I'm for that. † ﴾(u):Raifʻhār (t):Doremítzwr﴿ 15:11, 28 August 2009 (UTC)
Yes, I think that's it. I.e., "Category:Countries" would be split into "Category:Countries" (same but non-language-specific, with all "Category:xx:Countries" inside) and "Category:en:Countries" just with English words. I'm also for :). Malafaya 15:25, 28 August 2009 (UTC)
So am I to understand that, on English Wiktionary, in a list of categories, users should be made to learn by trial and error that they need to go through the list of language categories to find English, provided that they aren't put off Wiktionary altogether? Is everyone just writing off the notion that we might want en.wikt to serve users who are primarily looking for/expecting English? Why does convenience in writing templates and bots override the needs of actual end users? Is en.wikt just our playpen? DCDuring TALK 16:30, 28 August 2009 (UTC)
OK, I understand your point of view. In that case, shouldn't categories in other languages move out of the English category? Something like "Category:Countries (all languages)". It's not logical that, say, category "Countries in Ukrainian" is sitting side by side with country names pages in English. Malafaya 16:59, 28 August 2009 (UTC)
P.S. You say en.wikt is supposed to "serve primarily users in English". Serve primarily in English, such as definitions, etc., yes, but serve primarily English words, maybe not. This is a multilingual project in what concerns words, terms, expressions, etc., so it doesn't bother me too much if categories for English concepts are marked as being English, no defaults assumed. Malafaya 17:06, 28 August 2009 (UTC)
English meaning being strongly determined by word order, what I wrote above, "to serve users who are primarily looking....", is not equivalent to "to serve primarily users who are looking...."
I simply want to make sure that any users who come to English Wiktionary with the naive expectation that it is well suited to helping them find out in English about English words are not excessively disappointed. After all, as benighted as they might be, they might have some quaint or colorful regional dialectal expression to contribute. I am not in a position to predict whether the slogan "all words in all languages" will turn out to have been a quixotic goal, but it is clear that we are having trouble reconciling the needs of the various user groups. As a result, we often seem to settle on just meeting our own. DCDuring TALK 18:30, 28 August 2009 (UTC)
For example Category:Vulgarities interwikied to fr:Catégorie:Mots vulgaires en anglais and VolkovBot understandably then linked it to all the macro-categories, not the "English" ones (pt:Categoria:Obscenidade for example). Mglovesfun (talk) 17:13, 28 August 2009 (UTC)
I share DCDuring's concern. But it's common to all languages, not only English. Countries in English, Countries in Ukrainian... would be better category names. Lmaltier 17:18, 28 August 2009 (UTC)
I believe that the native language of each Wiktionary deserves special treatment on that Wiktionary to facilitate use for that language by people who are seeking a reference work in that language. That includes monolingual users. I would argue that only categories that have no prospect of ever being visible to users should have the language codes as part of the name. DCDuring TALK 18:30, 28 August 2009 (UTC)
(from the left) In fairness, what does that mean DCD? Mglovesfun (talk) 15:09, 29 August 2009 (UTC)
I'm not sure what "that" is intended to refer to. I will assume you mean the lead sentence and hope that I cover your actual intended question.
It's a statement of principle, like "all words in all languages", but not catchy enough to be a good slogan. I think all users of something called "Wiktionary, the free dictionary" have a right to expect things like the following (which we have at present for the most part, except for some user pages, stray definitions, and recent changes):
All running text on all pages in English other than usage examples/citations for non-English terms.
All terms used in headers, context tags, category names, attributes in English.
English language section first (Translingual debatable)
English at the top of all multi-lingual category and other listings.
Omitted lang= parameter defaults to English where required.
More controversially, perhaps no non-Roman and accented characters in transcriptions.
The purpose of all of this is to make sure that en.wikt can compete against mono-lingual English dictionaries reasonably effectively, also thereby attracting potential contributors who can help broaden and keep up to date our English content for the benefit of all those interested in English as she is spoke. The very inclusion of non-English material is a liability in serving monolingual English users. Of course, having non-English material is also a major advantage in serving EN users who are seeking information about words from other languages, in etymologies and in recently borrowed terms. It enables us to have a productive multi-lingual community of contributors that provides a stimulating environment for people interested in words and language. DCDuring TALK 15:56, 29 August 2009 (UTC)
O.K., but that's moot. The fact is that we do have lots of non-English stuff in (e.g.) Category:Fruits, and a proposal like this one would make it easier for an English-only reader to ignore non-English content. You write above, "So am I to understand that, on English Wiktionary, in a list of categories, users should be made to learn by trial and error that they need to go through the list of language categories to find English […] ?", and I simply don't understand why that should be. If they're navigating from English entries, they'll automatically find themselves in the English topic-category hierarchy. If, for whatever reason, they do reach Category:Fruits when what they want is Category:en:Fruits, there's no reason we have to make them "go through the list of language categories"; if we separate {English} from {everything}, then it will be easy for {everything} to link prominently to {English}. The only thing I don't like about this proposal is that our readers may not understand the meaning of the en: prefix; but then, we already have that problem. Just because someone is learning about, or interested in, a foreign language, doesn't mean they're instantly and magically a tech-savvy person familiar with the ISO language codes. (Worse yet, they probably will be familiar with the ISO country codes used as TLDs, which look similar, but which don't correlate very well.) I'd prefer names like Category:Fruits (English), Category:Fruits (French), etc. —RuakhTALK 18:52, 29 August 2009 (UTC)
I agree that the ISO codes are not user friendly unless one is saving time by typing in a code instead of a name and has the code available in one's memory.
Some of what I would like might be accomplished by the sort=* parameter or something similar that pushes English to or near the top of listings. The alternative is to allow all default contexts and all lang=en context tags to appear in the top category.
If context tags without a lang= parameter default to either Category:Fruits or to Category:en:Fruits, we will have a cleanup problem. It is not terribly difficult to clean non-English terms out of the top category (no lang=) by eyeballing the terms and inserting the correct lang parameter usually based on information in the language section. If one can use the Language header itself, that makes the process even more certain. There is a labor tradeoff between having to insert "en" in almost all English word context tags and having to pick non-English terms out of the category used for English terms. DCDuring TALK 20:21, 29 August 2009 (UTC)
Sorry, I think what I wrote was ambiguous. When I said, "we do have lots of non-English stuff in (e.g.) Category:Fruits", I wasn't referring to miscategorized entries; I was referring to the fact that Category:Fruits is the parent category for (e.g.) Category:fr:Fruits. Take a look at http://en.wiktionary.org/wiki/Category:Fruits, and tell me if that Web-page, as it stands, meets your "English-first" expectations. The screenful of language-specific subcategories pushes the English entries far down on the page; and furthermore, it causes the English entries to be split across two pages (you have to click the "next 200" link), even though there are fewer than 200 of them, so the software would happily show them all on one page if it weren't for the subcategories.
That said, I should point out that there are other ways to address this problem. The current proposal is to split Category:Fruits into a generic Category:Fruits and an English-specific Category:en:Fruits; but an alternative approach might be to split it into an English-specific Category:Fruits and a generic Category:xx:Fruits or something. (That's a bad name, but you see what I'm getting at.) Would that suit your sensibilities better?
BTW, I think sorting under * looks tacky. Personally, I'd prefer something like [[Category:Fruits| en:Fruits]] (using a space instead of an asterisk), which puts it first, before any of the character-headed groupings. But that's a minor thing. :-) It's just too bad that it's not possible to list it both first (for prominence) and in its properly-sorted place (for someone who knows what they're looking for); but then, we can always link to e.g. Category:en:Fruits in the actual template-provided text of Category:Fruits, and then not override the sort order.
I'm with DCD on this. There's no reason to add an extra level of hierarchy for the basic finding of English words in the category tree. Currently, words of our main language fall under a simple hierarchy of topics, and they lead the reader an extra step to the related categories in every other language.
This proposal would remove words from the topic categories, which would be reserved for subcategories only. I think the result would be conceptually more difficult, forcing reader to make the leap to figure out how the two different types of category pages form a single branch-and-leaf structure.
We should rethink category naming as Ruakh suggests. Our restricted-use labels already categorize tens of thousands of special-vocabulary words. But we completely destroy this lexical categorization by mixing in thousands of thematically categorized words to create a lame Wikipedia-wannabe category tree instead. Using category prefixes may be a way to fix this. —Michael Z. 2009-08-30 17:18 z
I think that Ruakh's view is, as is thankfully often the case, virtually identical to mine, superior where it differs, and much more clearly expressed. That said, I in no way want to imply that encyclopedic categories are necessarily useful. Language, grammar, context, and maintenance categories seem much more clearly in line with being a dictionary. Our other topical categories seem capricious. Our semantic relations framework, supplemented by Appendices and see also links provides a system superior to categories in many regards. DCDuring TALK 19:02, 30 August 2009 (UTC)
Possibly, the solution does not have to be adding "en:" to all English categories but my opinion is surely that the main category should be split in two. I was one of the people who had trouble finding English categories for countries (I believe it was "Countries" I was looking for at the time). I had noticed the categorization was following the convention "Category:xx:Countries". When that didn't work for English, I went to "Category:Countries" and at first glance I still could not find the countries in English. Had I scrolled the page to the end, I would have found them... but I did find them there only after a few head bumps. Malafaya 11:47, 31 August 2009 (UTC)
There is an unrelated problem in play—the poor MediaWiki interface which gives the reader no hint that the page contains a list of entries to be found by scrolling down past the list of subcategories. Even worse, it gives no hint that more subcategories are to be found by clicking through "Next 200" pages listed. We can't really address this here. —Michael Z. 2009-08-31 12:29 z
Wiktionary:Anagrams
I'd like to propose some changes to this. Not only does this totally ignore foreign languages, but why not allow diacritics? In Scrabble diacritics are always ignored, because there are no tiles that have them! Only a few languages use diacritics in Scrabble - French and Romanian don't, nor does Italian, and Spanish only has ñ, nothing else. Mglovesfun (talk) 15:13, 29 August 2009 (UTC)
I'm not responding to your specific points here, but since (accents aside) the definition of an anagram is so algorithmically measurable, I'd like to see this automated. I realise it's a lot of work for something of no use to most readers, though. In the same way that the alphabetical index is periodically generated, perhaps someone could look into a process that determines which (newly added or deleted) words are anagrams of others...? Equinox ◑ 15:39, 29 August 2009 (UTC)
In French, [7]. Certainly adding anagrams by hand is pretty futile, yeah. On fr.wikt we consider that eéêèëEÉ (etc.) are all e and iîïiI (etc.) are all just i, in terms of anagrams that is. So pâté is a perfectly acceptable anagram of tape. Mglovesfun (talk) 16:06, 29 August 2009 (UTC)
It would make sense to not add anagrams directly to entries, but either to have a category for each one, Category:English anagrams of aenv, or (I prefer) a template that is included on each entry, {{anagrams:en/adeht}}. It makes sense, in this case, for the title to be included in the template, as that way the edit link will point to the correct place. I have created that template and added it to the two anagrams death and hated. While creating these templates and adding them to entries with a bot is very doable, it is not totally trivial and we need to work out (probably on a per-language basis) what to do with diacritics, punctuation (and clicks), how to deal with Mapudungun (and any other language that has two separate writing systems using the same set of letters), what to do with multi-glyph letters (does the Hungarian cs just count as c + s), and whether the phrases created must be dictionary entries (i.e. should "the da" and "Ed hat" also appear on {{anagrams:en/adeht}}?). Given the large number of entries that may have anagrams, it seems to me that we should amend the WT:ELE example of a vertical list for them and have a horizontal list instead (I doubt there are many words for which this list is huge). I might start having a go at doing this with User:Conrad.Bot, given that I already have the word lists, though I will wait for further comments and a VOTE before editing in earnest. Conrad.Irwin 16:06, 30 August 2009 (UTC)
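The core of such a bot run is just a grouping pass over the word list; the per-language decisions listed above (diacritics, multi-glyph letters, exclusions) all come down to how the key function is written. A minimal sketch, assuming a plain list of entry titles and the fr.wikt-style folding of diacritics — both assumptions, not settled policy:

```python
import unicodedata
from collections import defaultdict

def anagram_key(word):
    """Lower-case, strip combining diacritics, sort the letters.
    'death' and 'hated' both give 'adeht' (cf. {{anagrams:en/adeht}})."""
    decomposed = unicodedata.normalize("NFD", word.lower())
    letters = [c for c in decomposed if unicodedata.category(c).startswith("L")]
    return "".join(sorted(letters))

def group_anagrams(words):
    groups = defaultdict(set)
    for w in words:
        groups[anagram_key(w)].add(w)
    # only groups with at least two distinct spellings need a template
    return {k: ws for k, ws in groups.items() if len(ws) > 1}

print(group_anagrams(["death", "hated", "Death", "tape", "pâté", "eft", "EFT"]))
# e.g. {'adeht': {'death', 'hated', 'Death'}, 'aept': {'tape', 'pâté'}, 'eft': {'eft', 'EFT'}}
```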
About diacritics and multi-glyph letters for anagrams, no general rule can be defined: in French, diacritics are traditionally ignored for anagram purposes, but the tradition might be different in other languages.
About phrases, I would include everything included in the Wiktionary, e.g. bien sûr but not Ed hat. I would make an exception for famous anagrams (e.g. un veto corse la finira for Révolution française, but without linking this anagram sentence, of course). Lmaltier 16:19, 30 August 2009 (UTC)
Other things to not include are misspelling entries, and (presumably) entries for Abbreviations, Acronyms and Initialisms? Conrad.Irwin 17:14, 30 August 2009 (UTC)
I agree for misspellings, of course. But why not abbreviations, acronyms...? If the reader is not interested, he can skip them... Lmaltier 17:19, 30 August 2009 (UTC)
No reason really, I just don't count them as "real words" :). The other issue is for alternatives like cafe and café where it is not clear from the entries pages which spelling is preferred. It would seem strange to list them as "anagrams" of each other, but then maybe we'd want to do that for linking purposes. For pages like co-operate and cooperate, the bot can detect the {{alternative spelling of}} template and not include the alternatives when they use the same letters in the same order. Conrad.Irwin 17:17, 31 August 2009 (UTC)
Or links s.v. =Alternative spellings/forms=? In any event, I like the idea of automating this.—msh210℠ 18:21, 31 August 2009 (UTC)
Anagrams imply a different order. café is not an anagram of cafe. Lmaltier 18:26, 31 August 2009 (UTC)
Ok, how do {{anagrams:en/adeht}} (death, Death, hated), {{anagrams:en/acef}} (cafe, café, face) and {{anagrams:en/eft}} (eft, EFT, FET) look? Conrad.Irwin 01:06, 4 September 2009 (UTC)
At fr.wikt, we use a single line when several words are the "same" anagram. An example in fr:écran:
ancre, ancré
caner
carne, carné
crâne, crâné
créna
encra
nacre, nacré
Lmaltier 05:27, 5 September 2009 (UTC)
I've now done that, the only issue remaining with my implementation (for English anyway) is that it lists theatres of war and theaters of war as anagrams (it excludes theater of war and theatre of war as they are clearly marked as alternatives). I presume this isn't too much of a problem, and the system has the ability to be manually overwritten (using the templates {{include anagram}} and {{exclude anagram}} in the created templates). Any other suggestions? Conrad.Irwin 23:10, 10 September 2009 (UTC)
Why consider that theatre and theater are not anagrams (the letters are the same, in a different order)? Is that rule traditional for English anagrams? Lmaltier 21:25, 12 September 2009 (UTC)
They are anagrams, even though they are different spellings of the same word. I think we should certainly have them listed as such. Equinox ◑ 21:44, 12 September 2009 (UTC)
Since this is, hopefully, going to be added to the majority of English entries, I'd prefer switching to a horizontal layout. We could use parentheses to group words differing only by diacritics. So Lmaltier's example would be "* ancre (ancré), caner, carne (carné), cerna, crâne (crâné), ... ". The word left before the parenthesis could be the one that would come first in an alphabetical sort. This would be the form w/o diacritics, if one exists. Is there much support for this? It would eventually take a vote to change the WT:ELE. --Bequw → ¢ • τ 19:03, 12 September 2009 (UTC)
Ok, I'll just include them then, there are around 75000 templates I could create, so yes they'd be on most entries. The format could be changed to the one you describe just by editing {{anagrams}}. Conrad.Irwin 22:08, 12 September 2009 (UTC)
Please indicate your opinion at Wiktionary:Votes#User:Conrad.Bot_to_do_anagrams. Conrad.Irwin 22:25, 13 September 2009 (UTC)
This has now started. If you notice problems, please let me know ASAP. It will take it about a week to do English. Conrad.Irwin 14:50, 26 September 2009 (UTC)
Database of English words
I am currently in the last phase of development of my mathematical software AlgoSim II [8] (a program quite similar to Wolfram's Mathematica in many respects). One feature I would like to add to the application is the ability to search an English dictionary for words, definitions, synonyms, etc. Of course, Wiktionary is the best choice when it comes to the source of the data. What I really would like is a plain-text UTF-8 file with all English words at en.wiktionary.org, in a simple-to-parse format, e.g.
word1(SOME PRIVATE-USE CHARACTER)def1
wordN(SOME PRIVATE-USE CHARACTER)defN
where N ∈ ℤ⁺ is the number of such words in the dictionary. Is there such a file already available? If not, how can I create one? --Andreas Rejbrand 12:55, 30 August 2009 (UTC)
If you only want words and definitions -- as in your example -- it's fairly easy to create such a file from the database dump. Some cleanup is likely to be required, though; how much will depend in part on how you want to handle inflected forms, alternative spellings, and so forth.
On the other hand, if you want synonyms and other relations, it gets a lot hairier -- for example, you have to decide whether to associate synonyms with words or with senses, and if with senses, how to handle cases where glosses are unavailable or inadequate. -- Visviva 14:23, 30 August 2009 (UTC)
Also bear in mind that the contents of the dictionary are constantly changing (more words added, and the occasional nonsense word deleted) so you can't just do this dump once and rely on it forever. Equinox ◑ 14:28, 30 August 2009 (UTC)
Thank you for your comments. I think the structure displayed above would be enough (that is, no synonyms). My best idea right now is to create a command-line utility that reads the Wiktionary *.xml file and creates the desired plain-text file. But the task becomes nontrivial unless all articles have the same, valid structure, which is rather unlikely. I guess I can simply ignore the incorrectly-formatted articles, though... --Andreas Rejbrand 16:54, 30 August 2009 (UTC)
The format is good enough for getting the definition line (well, in well over 99.99% of entries). You simply need to find every line between ==English== and the next ---- that starts with a "#" but does not start with "#:" or "#*". Translating that into a useful definition is harder: many entries have "definitions" such as '# {{plural of|fudge}}' or (even worse) '# {{misspelling of|collaborate}}', and you will need to work out how you want to deal with them. There are also context tags (at the start of a definition line) which may or may not have the same label as the wikitext (i.e. {{Scotland}} renders as (Scotland)). If you come up with something good to do the definition-line -> definition, then please let us know! Conrad.Irwin 17:40, 30 August 2009 (UTC)
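(A minimal sketch of that extraction step, producing the "word(SEP)definition" lines requested above. The choice of separator and the exact section-splitting regex are just assumptions, and template calls such as {{plural of|fudge}} are left unexpanded here:

import re

SEP = "\uE000"  # some private-use character, as suggested above

def english_definitions(title, wikitext):
    # Yield 'title<SEP>definition' lines for the ==English== section of one entry.
    match = re.search(r"==English==(.*?)(?:\n----|\Z)", wikitext, re.S)
    if not match:
        return
    for line in match.group(1).splitlines():
        # Definition lines start with '#' but not '#:' (examples) or '#*' (quotations).
        if line.startswith("#") and not line.startswith(("#:", "#*")):
            yield title + SEP + line.lstrip("# ").strip()

Feeding this each page title and wikitext from the dump would give one output line per sense.)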
You can save yourself a lot of grief by using the xmlreader module from Pywikipedia. Instead of messing around with saxutils or whatever, you can just download the dump and --without even unzipping it -- write:
import xmlreader  # ships with Pywikipedia

dump = xmlreader.XmlDump("wikt.bz2")
for entry in dump.parse():
    # ... do something with entry.text (and entry.title) ...
Conrad is right (of course) about templates and the like, but there are quick-and-dirty ways of getting messy but usable output. I'm currently running (approximately) the code in User:Visviva/sloppy.py, and will post the result off-wiki once it's done. Output looks decent so far; there are some catastrophes like noun phrase (now fixed), where someone had helpfully numbered the examples, but not too many such cases. I'm thinking this would be a useful thing to run on a regular basis for each language. -- Visviva 18:49, 30 August 2009 (UTC)
Thanks for the clue about dumps. Lmaltier 21:14, 30 August 2009 (UTC)
You're welcome. Never would have known about xmlreader myself, if Robert hadn't shown the way. -- Visviva 08:13, 31 August 2009 (UTC)
Here is a rough cut, after some very minimal cleanup (has some chopped lines due to sloppy coding on the first run). If you slice out the lines enclosed entirely in parentheses (form-ofs), and anything that has "(" after the beginning of the sense line -- and also anything with an invalid part of speech -- you'll still have a very large number of mostly-usable definitions. I would also remove all the proper nouns (or at least any containing templates/parentheses), but YMMV.
Somewhat off topic, it seems like emulating Special:Expandtemplates, using a DB dump, should not be an impossible task in Python. Anybody know of an already-written function that does this? I know there's mwlib, but it seems to require that you install every library on the planet. -- Visviva 08:13, 31 August 2009 (UTC)
Thank you very much! This is great! I will scrutinize the data when I get more time. --Andreas Rejbrand 10:44, 31 August 2009 (UTC)
Besides problems with {{tags}}, I found this line:
GTA Initialism rand Theft Auto
Apparently, a "G" is missing. (By the way: is GTA, as in the game, really appropriate for a dictionary?) --Andreas Rejbrand 12:46, 31 August 2009 (UTC)
Hilbert Proper Noun (surname, from given names, dot=) derived from a (etyl, enm) given name of (etyl, gem) origin, _hild_ + _berht_.
--Andreas Rejbrand 12:54, 31 August 2009 (UTC)
Yes, I made a foolish assumption when first running the code, so the first letter of any definition that did not have a space after the "#" was zapped. I will fix this when I run it again (which I think I will do in any case, since this seems like a useful thing for us to have) ... but as it took about 15 hours to run on my little machine, it might be a few days before I can swing an update. Feel free to take the code and run with it, if you need something in a hurry.
The templates that have recently been introduced for surname and given name entries are parameter-heavy, so my little cheat of replacing the "{{}}" with "()" doesn't work at all. This is why -- at least in this iteration -- it would be necessary to slice out the proper nouns, or at least anything that contains "(surname" or "(given name". Similar problems apply to some other, less-common templates. On the plus side, I have just put together a Python function that will render {{surname}} accurately (though it will still fail badly on more complex templates).
As for GTA, I can only say that our inclusion policy for initialisms has been a bit ... odd. We wouldn't accept the actual game as an entry. -- Visviva 15:07, 31 August 2009 (UTC)
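(A rough illustration of the "{{}}" to "()" cheat discussed above, but skipping named parameters, so a parameter-heavy call like the {{surname}} one in the broken line quoted earlier would come out as a bare "(surname)" rather than "(surname, from given names, dot=)". This is only a sketch: it ignores nested templates and makes no attempt to render meta-templates properly:

import re

def flatten_templates(text):
    # Turn {{label|arg1|key=val|...}} into (label, arg1, ...),
    # dropping named parameters entirely.
    def render(match):
        parts = [p for p in match.group(1).split("|") if "=" not in p]
        return "(" + ", ".join(parts) + ")"
    # Only matches templates with no nested braces -- good enough for a quick look.
    return re.sub(r"\{\{([^{}]*)\}\}", render, text)

flatten_templates("{{surname|from=given names|dot=}} derived from a {{etyl|enm}} given name")
# -> "(surname) derived from a (etyl, enm) given name"
)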
Here is a more satisfactory version, I think: [9] (about 8.5 megs zipped). Still got a ways to go, but it looks pretty usable overall, if you don't mind the usual wiki palimpsest effect. Known issues include the fact that the special properties of some context templates are not accounted for, so one gets "(usually, informal)" rather than "(usually informal)". This can be fixed easily, but I've run out of time again.
Significant further improvements on my end will depend on my figuring out how to get Python to render our more omnipresent meta-templates, particularly {{form of}}. Any help would be appreciated (or maybe I should just scotch the idea and hard-code them, as I did with {{wlink}} and {{isValidPageName}}).
NB: there is a PHP script that can be used to create static HTML dumps of a wiki (that one has loaded onto a local server), from which extracting fully-formatted definitions would be trivial. The copy on svn.wikimedia.org seems to have gone AWOL, however.
Anyway, this is a very satisfactory way of looking over a slice of our content, IMO. Could be useful for QC, if anyone is inclined. -- Visviva 15:29, 3 September 2009 (UTC)
Now updated from the latest dump. Fun fact: when pasted into MS Word in a suitably dictionaric 8-point font and 2-column layout, this takes up 4,648 pages. -- Visviva 04:15, 4 September 2009 (UTC)
Have you made any progress in the last few weeks? Perhaps the best thing would be if there were a command-line utility that accepts a database dump and creates the required file. Then anyone could use his/her own CPU time to parse the data, and update the dictionary whenever he/she wants. Or, even better, there could be an integrated CGI application in Wiktionary that creates the file perhaps once a month. --Andreas Rejbrand 08:01, 25 September 2009 (UTC)
Some progress has been made; see current file (based on the 9/17 dump). I have been using this to create User:Visviva/Page of the day; it is reasonably clean (IMO), but still not exactly perfect. I am thinking that a really satisfactory approach is going to have to involve working from a static HTML dump. I hope to have such a dump ready by next week (requires a bit of setup, since I can't really use a computer that I'm using for anything else). Of course it may turn out to be more problematic than I imagine...
At any rate, I daresay it still isn't anything a professional programmer would want to be seen in public with, but User:Visviva/transclusion.py is now running to completion in a couple of hours on my laptop, so time isn't a big issue anymore. "transclusion.py <name of dumpfile>" should do the trick, should you care to take it for a spin. It does require Pywikipedia (specifically xmlreader.py) -- I suppose it wouldn't be too hard to make it self-sufficient, but it didn't seem worth the hassle at this juncture. -- Visviva 08:58, 25 September 2009 (UTC)
Have a look at dict.zip @ privat.rejbrand.se. Unpack the compressed archive and run the executable. --Andreas Rejbrand 00:36, 7 October 2009 (UTC)
shut the fuck up [RFD'ed]
A learner should be able to cursor over this [phrase] and get a drop-down with transliteration/definition etc. [according to his preferences], so please get rid of the sum-of-parts foible [we are not paper-based and need to help users!] --史凡>voice-MSN/skypeme!RSI>typin=hard! 17:13, 30 August 2009 (UTC)
Start a vote to eliminate the rule you don't like. Complaining here will achieve little. Equinox ◑ 17:20, 30 August 2009 (UTC)
Split communal discussion pages by month
I propose that this page, WT:TR, and WT:GP use subpages to stop the pages becoming unbelievably long. The process is also very simple: for September, you just add {{/September 2009}}. The advantages are that it divides the page up more, and that to shorten the page you just remove the link and it disappears - it's already archived without anyone doing anything!
There's a major drawback with doing this with deletion and verification pages, because the whole thing gets archived at once, including ones that haven't actually been closed. Mglovesfun (talk)
Support. -- Visviva 08:22, 31 August 2009 (UTC)
Support for BP, GP, TR, but not for RFV, etc. per nom. -Atelaes λάλει ἐμοί 08:59, 31 August 2009 (UTC)
I'd prefer to do this on a per-topic basis as we do for WT:VOTE, per month is arbitrary and has the disadvantage that if you get a link to WT:BP#title you don't know where to start looking (if it's been archived). For RFV and RFD I'd like an approach similar to WT:ES, though I appreciate the wiki-markup is ugly - it makes indexed archiving so easy. Conrad.Irwin 09:07, 31 August 2009 (UTC)
Agreed, but is there a way that we can make this remarkably simpler for an editor? If not, the point is moot? -Atelaes λάλει ἐμοί 10:09, 31 August 2009 (UTC)
Yes, it would take either some javascript (i.e. make the (+) links on {{rfv}} etc. do the transclusion onto the main page, and load up the talk page with the boilerplate already present, and do something similar with the new section link on WT:BP) or use a bot that detects the addition of a normal section to any of these pages and sub-pages it automagically. Conrad.Irwin 17:13, 31 August 2009 (UTC)
Could there be some other approach to this? Could a bot automatically archive a TR, BP, GP, ID, or Feedback topic once there has been no discussion for 30 days (or some other period)? RfV and RfD just seem to need a different approach, or at least an adjusted one. Should a discussion inactive for a period of time be moved to an "active archive", with the topic heading remaining on the main page with a link to the active archive? A brave soul might try to restart the discussion on the main page by attempting a brief summary of the previous discussion and adding any new insight thereto. The headings might bear the date of the last archived comment.
I think Connel used to run one of these, it shouldn't be too hard if someone wants to implement it; given that everyone leaves the latest timestamp on their edit. Conrad.Irwin 17:13, 31 August 2009 (UTC)
What would be involved in said implementation? Is it a question of finding what CM ran or of developing the bot? Of monitoring the bot, stopping it and calling for help or of constantly recovering from and redesigning in light of major problems? DCDuring TALK 18:37, 31 August 2009 (UTC)
Projected Gradient Descent Analysis
Gradient descent is a first-order iterative optimization algorithm for finding a (local) minimum of a differentiable function. Starting from a point \(x\), each step moves downhill in the direction of the negative gradient, to a new point \(X = x - s\,(\partial F / \partial x)\), where the step size \(s\) (the learning rate) controls how far the iterate moves. The idea goes back to Cauchy (1847), and it is the workhorse for fitting machine learning models: in linear and logistic regression the parameters are the coefficients, in neural networks they are the weights, and in each case the update nudges the parameters in the negative direction of the gradient of the loss. Although gradient descent can get stuck in local minima in general, the optimization problem posed by linear regression is convex, so there are no spurious local minima to get trapped in.

Three variants are distinguished by how much data is used for each gradient evaluation. Batch (full) gradient descent uses the entire training set on every iteration; stochastic gradient descent (SGD), also known as incremental gradient descent, estimates the gradient from a single randomly picked training example (in practice the training set is reshuffled between passes); and mini-batch gradient descent uses a small batch of examples, the usual compromise in deep learning. SGD trades a noisier search direction for much cheaper iterations, and variants such as projected stochastic gradient descent with weighted averaging have been analysed specifically for least-squares regression.

The learning rate matters: too small and convergence is slow, too large and the iterates oscillate or diverge. A common diagnostic is to run the algorithm for a fixed number of iterations and plot the cost J against the iteration number; the curve should decrease and then flatten. Second-order information can also be blended in, for instance by using gradient descent until the Hessian is barely positive definite, loading the diagonal for a few iterations, and then switching to pure Newton steps; loading the diagonal is a solution method in between gradient descent and Newton's method.

Projected gradient descent (PGD) extends this machinery to constrained problems \(\min f(x)\) subject to \(x \in C\), where \(C \subset \mathbb{R}^n\) is a closed convex set. A plain gradient step \(x_t - \eta \nabla f(x_t)\) may leave the feasible set, so each iteration projects the result back onto it: \(x_{t+1} = P_C(x_t - \eta \nabla f(x_t))\), where \(P_C(y) = \arg\min_{x \in C} \tfrac{1}{2}\|x - y\|_2^2\) is the Euclidean projection onto \(C\). The same recipe works online (the "greedy projection" algorithm): at each time step, update the prediction by moving in the negative direction of the gradient of the loss just received, then project back onto the feasible set. A minimal implementation of the offline version is sketched below.
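A minimal sketch in Python/NumPy, assuming box constraints so that the projection is just element-wise clipping; the function names, the quadratic example, the step size, and the iteration count are all placeholders chosen for illustration:

import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box {x : lo <= x_i <= hi} is element-wise clipping.
    return np.clip(x, lo, hi)

def projected_gradient_descent(grad_f, x0, step=0.1, lo=0.0, hi=1.0, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x - step * grad_f(x), lo, hi)  # gradient step, then project
    return x

# Example: minimize f(x) = ||x - c||^2 over the unit box; the solution is clip(c, 0, 1).
c = np.array([1.7, -0.3, 0.4])
x_star = projected_gradient_descent(lambda x: 2 * (x - c), np.zeros(3))
# x_star is approximately [1.0, 0.0, 0.4]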
Two facts drive the analysis. First, the negative gradient is a descent direction: any direction \(d_k\) with \(\nabla f(x_k)^T d_k < 0\) decreases \(f\) locally, and \(-\nabla f(x_k)\) is the steepest such direction (conjugate gradient schemes such as Dai-Yuan retain this property under mild conditions, e.g. when \(f\) is strongly convex or the line search satisfies the Wolfe conditions; a classical reference for this material is Avriel, Nonlinear Programming: Analysis and Methods, 2003). Second, the projection onto a closed convex set is well defined and non-expansive, so projecting a gradient step back onto \(C\) does not undo the progress the step made; with a suitable step size each PGD iteration is still a descent step. Intuitively, gradient descent behaves like a ball rolling downhill on the graph of the function, and the projection keeps the ball inside the feasible region.

The standard convergence guarantee mirrors the unconstrained case. Suppose \(f\) is convex and \(L\)-smooth (its gradient is \(L\)-Lipschitz). Then projected gradient descent, and more generally proximal gradient descent, with fixed step size \(t \le 1/L\) satisfies \(f(x^{(k)}) - f(x^\star) \le \|x^{(0)} - x^\star\|_2^2 / (2tk)\), an \(O(1/k)\) rate; the same bound holds with backtracking line search, with \(t\) replaced by \(\beta/L\). The analysis is often phrased in terms of the gradient mapping \(G_t(x) = \tfrac{1}{t}\big(x - P_C(x - t\nabla f(x))\big)\), which plays the role of the gradient for the constrained problem: the update can be written \(x_{t+1} = x_t - t\,G_t(x_t)\), and \(G_t(x) = 0\) exactly at constrained optima. Smoothness also explains why progress slows near the solution: if \(f\) is smooth, \(\nabla f\) nearly vanishes in a neighbourhood of the minimizer, so the steps shrink. If \(f\) is in addition strongly convex, the \(O(1/k)\) rate improves to a linear (geometric) rate.

Projected gradient descent sits inside a larger family of methods. It is the special case of proximal gradient descent in which the non-smooth term \(h\) is the characteristic (indicator) function of the set \(C\), so that the proximal operator reduces to the projection \(P_C\); taking \(h = 0\) recovers plain gradient descent, and keeping only the non-smooth part gives the proximal point method. Mirror descent replaces the Euclidean projection by a projection adapted to the geometry of \(C\) (exponentiated gradient updates are the instance suited to the probability simplex), and natural gradient descent takes steps in distribution space rather than in parameter space. Conditional gradient descent (the Frank-Wolfe algorithm) avoids projections altogether by calling a linear optimization oracle over \(C\), which is traditionally counted as an \(O(1)\) cost per iteration and disregarded in the analysis. Momentum and Nesterov's accelerated gradient (NAG) can be combined with the projection step, step sizes can be chosen by the Barzilai-Borwein (1988) rule, and in the stochastic setting the exact gradient is replaced by an estimate computed from one example or a small batch.

Everything above assumes the projection \(P_C\) is cheap to compute. For a box it is element-wise clipping, as in the sketch above; for the probability simplex (the constraint set \(\{x : x_i \ge 0, \sum_i x_i = 1\}\), which appears for instance when optimizing over weights \(x_1, x_2, x_3 > 0\) with \(x_1 + x_2 + x_3 = 1\)), there is a simple sort-based algorithm, sketched next.
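A sketch of that sort-based simplex projection, assuming v is a one-dimensional NumPy array; this is the standard O(n log n) algorithm, with arbitrary variable names:

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto {x : x_i >= 0, sum_i x_i = 1}.
    n = v.size
    u = np.sort(v)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)                        # cumulative sums
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)    # optimal shift
    return np.maximum(v + theta, 0.0)

project_simplex(np.array([0.5, 1.2, -0.3]))   # -> array([0.15, 0.85, 0.  ])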
When \(f\) is convex but not differentiable, the same scheme works with subgradients. The projected subgradient method uses the iteration \(x^{(k+1)} = P\big(x^{(k)} - \alpha_k g^{(k)}\big)\), where \(P\) is the projection onto \(C\) and \(g^{(k)}\) is any subgradient of \(f\) at \(x^{(k)}\), typically with a diminishing step-size sequence \(\alpha_k\). Mirror descent (MDA), introduced by Nemirovsky and Yudin, generalizes this further by measuring distances with a Bregman divergence instead of the Euclidean norm. In the stochastic setting the exact (sub)gradient is replaced by an unbiased estimate computed from a single randomly drawn training example, and convergence can still be established for convex objectives, for example in the L2 sense or for weighted averages of the iterates.

Much of the recent interest in projected gradient descent comes from problems whose constraint set is not convex but has useful structure. Examples include ℓp-constrained least squares, where convergence guarantees rely on the Restricted Isometry Property of the design matrix (Bahmani S, Raj B. A unifying analysis of projected gradient descent for ℓp-constrained least squares. Appl Comput Harmon Anal, 2013, 34: 366-378); high-dimensional and generalized low-rank tensor regression, where the iterates are projected onto a set of low-rank tensors and the guarantees are stated in terms of the localized Gaussian width of the constraint set; non-negative matrix factorization (NMF), where the projection enforces non-negativity; phase retrieval under generative priors; and generalized multiple kernel learning (GMKL), where each iteration minimizes a quadratic approximation of the objective W(d) over the feasible kernel weights, much as decomposition methods do for SVM training. In these non-convex settings the guarantees are local, typically requiring structural assumptions and a suitable initialization (initialization at 0 in some of these analyses), but the algorithm itself is unchanged: gradient step, then project. Related strands of work analyse gradient descent itself in richer settings, such as trajectory-based analyses of deep linear networks, kernel-based analyses whose sample-complexity bounds depend on the separation margin of the limiting kernel, and Stein variational gradient descent (SVGD), which in the population limit performs gradient descent on the KL divergence in the space of probability distributions, with the gradient smoothed through a kernel integral.

A practical caveat: gradient methods are not always the right tool. For a small unconstrained least-squares problem, solving the normal equations directly gives essentially the same answer that gradient descent reaches only after a huge number of iterations (over 122 million in one reported experiment), so first-order methods pay off mainly when the problem is large, the constraints are non-trivial, or the objective is only available through (stochastic) gradient estimates.

Finally, projected gradient descent is also the standard tool for crafting adversarial examples. To perturb an input so as to maximize a model's loss while staying within an ℓ∞ ball of radius ε around the original (and within the valid pixel range), one runs projected gradient ascent: step along the sign of the gradient, then project back by clamping. The projection onto such a ball really is a clamp, not a softmax. A sketch follows.
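A sketch of the ℓ∞ PGD attack in PyTorch; model, loss_fn, x, and y are placeholders, eps, alpha, and the step count are illustrative values, and inputs are assumed to lie in [0, 1]:

import torch

def pgd_attack(model, loss_fn, x, y, eps=0.03, alpha=0.01, steps=10):
    # Untargeted L-infinity PGD: maximize the loss within an eps-ball around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # ascent step on the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project onto the eps-ball (a clamp)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                    # stay inside the valid pixel range
    return x_adv.detach()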
Uptake of new antidiabetic medicines in 11 European countries
Nika Mardetko1,
Urska Nabergoj Makovec ORCID: orcid.org/0000-0001-5194-33141,
Igor Locatelli ORCID: orcid.org/0000-0002-0052-89861,
Andrej Janez ORCID: orcid.org/0000-0002-6594-52542 &
Mitja Kos ORCID: orcid.org/0000-0002-6801-64501
BMC Endocrine Disorders volume 21, Article number: 127 (2021)
Several new antidiabetic medicines (GLP-1 receptor agonists, DPP-4 inhibitors, and SGLT-2 inhibitors) have been approved by the European Medicines Agency since 2006. The aim of this study was to evaluate the uptake of new antidiabetic medicines in European countries over a 10-year period.
The study used IQVIA quarterly value and volume sales data January 2006–December 2016. The market uptake of new antidiabetic medicines together with intensity of prescribing policy for all antidiabetic medicines were estimated for Austria, Croatia, France, Germany, Hungary, Italy, Poland, Slovenia, Spain, Sweden, and the United Kingdom. The following measures were determined: number of available new active substances, median time to first continuous use, volume market share, and annual therapy cost.
All countries had at least one new antidiabetic medicine in continuous use and an increase in intensity of prescribing policy for all antidiabetic medicines was observed. A tenfold difference in median time to first continuous use (3–30 months) was found. The annual therapy cost in 2016 of new antidiabetic medicines ranged from EUR 363 to EUR 769. Among new antidiabetic medicines, the market share of DPP-4 inhibitors was the highest. Countries with a higher volume market share of incretin-based medicines (Spain, France, Austria, and Germany) in 2011 had a lower increase in intensity of prescribing policy. This kind of correlation was not found in the case of SGLT-2 inhibitors.
This study found important differences and variability in the uptake of new antidiabetic medicines in the included countries.
Diabetes is one of the most challenging health problems in Europe. It is one of the leading causes of death, and its macro- and microvascular complications result in population disability and increased healthcare cost [1]. The prevalence and financial burden of diabetes have increased in European countries and another 10 million patients are expected by 2035. However, diabetes prevalence as well as trends in diabetes prevalence vary significantly between European countries, reflecting differences in management of diabetes as well as its financial burden [2].
Diabetes management consists of lifestyle intervention along with pharmacological therapy and routine blood glucose monitoring [3]. Oral antidiabetics are the most commonly used medicines. Metformin is the first line of treatment and is the most widely prescribed antihyperglycemic medicine. A second-line agent will be added to metformin to achieve individualized glycaemic targets in order to prevent diabetes-related chronic complications. According to the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD) the decision on the second-line agent is based on the risk of comorbidities, risk of hypoglycaemia, body weight, medicine cost, adverse effects, or contraindications [4]. The major classes of old antidiabetic medicines include biguanides, insulin secretagogues (sulfonylureas and glinides), insulin sensitizers (thiazolidinediones), α-glucosidase inhibitors, and insulin. New agents approved by the European Medicines Agency (EMA) are incretin-based therapy (glucagon-like peptide-1 (GLP-1) receptor agonists and dipeptidyl peptidase-4 (DPP-4) inhibitors) and sodium-glucose cotransporter (SGLT-2) inhibitors [4, 5]. The introduction of new agents started with the marketing authorisation of GLP-1 receptor agonist exenatide at the end of 2006 [6].
New treatments for diabetes also come at significantly higher prices [7,8,9]. For example, an approximately 30-fold difference between metformin and the GLP-1 receptor agonists liraglutide and exenatide was observed in France and Switzerland. Important price differences between metformin and new antidiabetic agents were also reported for the United Kingdom (UK) and Germany [7, 10]. Moreover, prescribed antidiabetic medicines already represent the largest part of costs in diabetes management, followed by the costs of managing diabetes complications [1]. Hence, the introduction and uptake level of new antidiabetic medicines in a particular country could be affected by the healthcare system's financial capabilities and its priorities. Moreover, differences in country-specific health technology assessment processes supporting payers and decision-makers on the adoption and reimbursement of new medicines could significantly affect patient access to new antidiabetic medicines [11, 12].
This study evaluates the uptake of new antidiabetic medicines in European countries over a 10-year period.
Selection of medicines
The ATC codes of the A10 group were used to define all antidiabetic medicines. The products included in the study were categorized into three main groups: new antidiabetics (DPP-4 inhibitors, GLP-1 receptor agonists, SGLT-2 inhibitors), insulins, and old antidiabetic medicines (the rest of the antidiabetic medicines). Insulin degludec was considered a new medicine among insulins. In addition, the requirement, in force since 2004, that all medicines used in diabetes need to be authorized via a centralized procedure was taken into account [13, 14]. Therefore, medicines containing new active substances for diabetes treatment that were authorized via a centralized procedure at the EMA between 2006 and 2016 were considered new antidiabetic medicines (see Appendix Table S1 for all medicines included in the study). Fixed combinations for which one of the active substances was a new active substance were assigned to the corresponding group of new medicines.
The study was based on the IQVIA quarterly database, January 2006 – December 2016. The IQVIA quarterly value sales data in EUR and quarterly volume sales data expressed in days of treatment (DOTs) for the products from the ATC A10 group were analysed for the purpose of the study. DOTs are estimated based on volume in standard unit measure adjusted to the average (or defined) daily dose.
Selected countries
The study included a set of 11 different European countries in terms of pharmaceutical market value, population size, geographical location in the EU, and diabetes treatment approach. Consequently, Austria, Croatia, France, Germany, Hungary, Italy, Poland, Slovenia, Spain, Sweden, and the UK were selected. For each country, the data were given either as hospital and retail channels separately or as hospital and retail combined. For eight countries, both channels were given separately, whereas data for Sweden were given combined. In the case of Austria and Hungary, only retail sales data were available and hospital panel data were missing.
Number of new antidiabetic medicines and time to their first continuous use
A new antidiabetic medicine was considered to be available in a particular country when its sales were detected in the IQVIA quarterly database. Continuous use was defined as constant 1-year product sales. The time to first continuous use was determined based on the number of products containing one of the new active substances for diabetes treatment available in a particular country. Each country's median time to first continuous use was therefore determined using a different number of available products. The time difference was calculated between the new antidiabetic medicine's authorisation date and the date of the medicine's first continuous use. The quarter of the year within which the medicine authorisation occurred and the quarter of the first continuous use were considered for the time difference calculation. When a medicine's continuous use was recognized before marketing authorization, this was attributed to other practices of patient supply, such as compassionate use. However, in such cases the time to first continuous use was set equal to the marketing authorization date (quarter of the year).
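A minimal sketch of such a quarter-based time difference, assuming quarters are encoded as (year, quarter) pairs and each quarter counts as three months; the encoding and the example dates are illustrative assumptions, not values from the study.

```python
from statistics import median

def months_between_quarters(authorisation, first_continuous_use):
    """Difference in months between two (year, quarter) pairs (1 quarter = 3 months)."""
    (y0, q0), (y1, q1) = authorisation, first_continuous_use
    return ((y1 - y0) * 4 + (q1 - q0)) * 3

# Illustrative example: three products in one country.
times = [
    months_between_quarters((2006, 4), (2007, 1)),  # 3 months
    months_between_quarters((2007, 1), (2008, 2)),  # 15 months
    months_between_quarters((2012, 4), (2015, 2)),  # 30 months
]
print(times, median(times))
```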
Volume market share and annual therapy cost
Based on the volume sales in DOTs, the market shares of new antidiabetics, old antidiabetics, and all insulins (including insulin degludec) were defined. Annual therapy cost was calculated separately for all three groups by dividing value sales by volume sales (consumption in DOTs) and then multiplying by the intensity of prescribing policy for all antidiabetic medicines (see Eq. 1 for the case of new antidiabetic medicines). The annual therapy cost provides an estimate of the cost of annual therapy per patient in each country.
$$ \text{Annual therapy cost (new medicine)} = \frac{\text{Annual value sales (new)}}{\text{Annual consumption in DOTs (new)}} \times \text{Intensity of prescribing policy} $$
Intensity of prescribing policy for all antidiabetic medicines
The intensity of prescribing policy is an index allowing comparison of medicine consumption adjusted for the population at risk [15]. In our study, the volume sales data in DOTs represent annual consumption in DOTs per day in the specified population of diabetes patients (Eq. 2). The annual consumption was calculated for all antidiabetic medicines. The size of the diabetes population was calculated by multiplying the prevalence of diabetes in each country (in %; data from the NCD Risk Factor Collaboration [16]) by the country's total population size (WHO Global Health Expenditure Database [17]).
$$ \text{Intensity of antidiabetic medicines prescribing policy} = \frac{\text{Annual consumption in DOTs} \,/\, 365\ \text{days}}{\text{Number of diabetes patients}} $$
The value 1 for intensity of prescribing policy implies that all diabetes patients in the country receive a defined (or average) daily dose of antidiabetic medicine every day within a year, indicating an adequate coverage with medicines. The values above and below 1 would indicate medicines' abundance or sub-optimal utilization per average diabetic patient, respectively.
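A minimal sketch of how these two measures could be computed from annual aggregates; the function names and the example figures are illustrative assumptions, and the code implements Eqs. 1 and 2 exactly as printed above.

```python
def intensity_of_prescribing_policy(annual_consumption_dots, prevalence, population):
    """Eq. 2: annual consumption in DOTs per day, per diabetes patient."""
    diabetes_patients = prevalence * population
    return (annual_consumption_dots / 365.0) / diabetes_patients

def annual_therapy_cost(value_sales_eur, consumption_dots, intensity):
    """Eq. 1: value sales per DOT, scaled by the intensity of prescribing policy."""
    return (value_sales_eur / consumption_dots) * intensity

# Illustrative, made-up national aggregates (not IQVIA data).
intensity = intensity_of_prescribing_policy(
    annual_consumption_dots=250_000_000, prevalence=0.07, population=10_000_000)
cost_new = annual_therapy_cost(
    value_sales_eur=45_000_000, consumption_dots=30_000_000, intensity=intensity)
print(round(intensity, 2), round(cost_new, 2))
```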
Correlation between the intensity of prescribing policy for all antidiabetic medicines and volume market share of new antidiabetic medicines
Correlation analysis was performed to explain between-country differences in the volume market share of new antidiabetic medicines with regard to the intensity of prescribing policy for all antidiabetic medicines. Pearson correlation coefficient with corresponding statistical test was applied. According to the new antidiabetic medicines marketing authorization and market entry dates, the first correlation analysis was related to incretin-based medicines (DPP-4 inhibitors and GLP-1 receptor agonists) from 2007 to 2011. The second correlation analysis was related to SGLT-2 inhibitors from 2012 to 2016. For this purpose, the ratio of the intensity of prescribing policy for 2011 compared to 2007 and the ratio of the intensity of prescribing policy for 2016 compared to 2012 were calculated. In addition, the volume market share of DPP-4 inhibitors and GLP-1 receptor agonists in 2011 and volume market share of SGLT-2 inhibitors in 2016 were applied.
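A minimal sketch of this analysis, assuming the per-country ratios and market shares are held in two aligned lists; the placeholder values below are not the study data, and scipy.stats.pearsonr returns both the coefficient and the p-value of the corresponding test.

```python
from scipy.stats import pearsonr

# Placeholder per-country values (11 countries): ratio of intensity of prescribing
# policy (2011 vs. 2007) and volume market share (%) of incretin-based medicines in 2011.
intensity_ratio = [1.05, 1.08, 1.10, 1.12, 1.35, 1.38, 1.42, 1.45, 1.48, 1.50, 1.60]
incretin_share = [14.0, 12.5, 11.0, 10.0, 4.8, 4.0, 3.5, 4.5, 2.5, 3.0, 2.0]

r, p_value = pearsonr(incretin_share, intensity_ratio)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```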
Number of new antidiabetic medicines
Fourteen new active substances were introduced from 2006 to 2016; five DPP-4 inhibitors, five GLP-1 receptor agonists, and three SGLT-2 inhibitors. Insulin degludec was considered a new insulin. The availability of new antidiabetic substances in included countries is presented in Fig. 1.
The number of available new antidiabetic active substances in use for each country within the study period. Values in brackets indicate the total number of new active substances. Asterisks indicate countries with only retail sales data available
In the set of new antidiabetic medicines there was also a combination of two new active substances, the GLP-1 receptor agonist liraglutide and insulin degludec, which was available in France, Hungary, Germany, Austria, Sweden, and the UK. In further analyses, the combination of liraglutide and degludec was taken into account in the group of GLP-1 analogues.
Time to first continuous use of new antidiabetic medicines
The pooled median time to the continuous use of new antidiabetic medicines for all countries together was 13 months. Figure 2 shows the median time to first continuous use of new antidiabetic medicines in each of the selected countries. In this, the estimation of time to first continuous use is based on the country's available antidiabetics.
Box plots representing times to first continuous use of new antidiabetic medicines in the included countries. The countries are listed according to increasing median times. Upper and lower bars indicate the values of the third and first quartiles, respectively. The number of products available in each country is given in brackets
A tenfold difference in median time to first continuous use was found among the selected countries. The fastest were Germany and the UK, with a median time of 3 months, followed by Austria and Sweden with median times of less than 1 year. Apart from the differences between countries' median times to first continuous use, a wide range of individual medicines' times to first continuous use within a particular country was also found.
The results show a decrease in the volume market share of old antidiabetics, whereas the new antidiabetic medicines' market share increased. The volume market share of insulins remained more or less unchanged. Among the new antidiabetic medicines, the market share of DPP-4 inhibitors was the highest in all the selected countries. Table 1 shows the volume market share of new and old antidiabetic medicines and all insulins (including insulin degludec) in 2016. The volume market shares were also determined for the other years of the study period and are provided in Appendix Table S2. Similarly, the proportions of new antidiabetic consumption and expenditure were determined for all years in the study period and are presented in Appendix Figure S1.
Table 1 Volume market share and annual therapy cost per patient of new antidiabetic medicines, all insulins, and old antidiabetic medicines in 2016
Intensity of antidiabetic medicines prescribing policy
Figure 3 shows each country's intensity of prescribing policy for all antidiabetic medicines. Overall, an increase in intensity of prescribing policy was observed in all the selected countries; the greatest increase was observed in the UK, followed by Poland, Croatia, and Slovenia.
Each country's intensity of prescribing policy for all antidiabetic medicines in the period 2006–2016. The value 1 for intensity of prescribing policy indicates that all diabetes patients in the country receive a defined (or average) daily dose of antidiabetic medicine every day within a year
A correlation analysis showed a relatively strong negative and statistically significant correlation (Pearson correlation coefficient = -0.841, p = 0.001) between the volume market share of DPP-4 inhibitors and GLP-1 receptor agonists in 2011 and the ratio of intensity of prescribing policy for 2011 compared to 2007. Spain, France, Austria, and Germany had higher volume market shares of DPP-4 inhibitors and GLP-1 receptor agonists, yet a lower ratio of intensity of prescribing policy (Fig. 4.). In contrast, other countries, where volume market share of DPP-4 inhibitors and GLP-1 receptor agonists was lower than 5 %, had a significantly higher ratio of intensity of prescribing policy. However, this kind of correlation was not found in the case of SGLT-2 inhibitors (Pearson correlation coefficient = -0.351, p = 0.290).
Correlation between the ratio (2011 compared to 2007) of intensity of prescribing policy for all antidiabetic medicines and volume market share of incretin-based medicines (DDP-4 inhibitors and GLP-1 receptor agonists) in 2011
This study provides useful insight into differences in the intensity of prescribing policy for all antidiabetic medicines and the market uptake of new antidiabetic medicines in 11 selected European countries. An increase in the intensity of prescribing policy of antidiabetic medicines was observed in all the countries, suggesting growth in the pharmacological care of diabetic patients and reflecting diabetes management as a healthcare priority. This is also in line with the increased number of antidiabetic medicines per patient reported in other recently published literature [18]. However, considering other outcomes of this study, it may also derive from other factors, e.g. changes in country-specific clinical guidelines, national antidiabetic medicines policy, and reimbursement restrictions [12, 14].
At least one new active substance from the group of DPP-4 inhibitors, GLP-1 receptor agonists, and SGLT-2 inhibitors, as well as insulin degludec, was in continuous use in all countries included in the study. The only exception was France, where SGLT-2 inhibitors and insulin degludec as a sole agent were not available between 2006 and 2016. After our study period, insulin degludec and the SGLT-2 inhibitor dapagliflozin were introduced there in 2018 and 2020, respectively [19].
Spain was the only country where all new active substances were available. In contrast, median times to first continuous use of new antidiabetic medicines differed significantly between countries. As expected, Germany was the fastest in launching new agents, most likely due to its reimbursement policy of free launch followed by early benefit assessment [20, 21]. Croatia, Poland, and Slovenia were shown to be the slowest in the introduction and continuous use of new antidiabetic agents. It should be mentioned that all available antidiabetic medicines were considered in this time analysis, which could mean that the first representative of a new antidiabetic group was available soon after marketing authorisation, whereas price negotiations for subsequent medicines lasted longer due to payer requirements for the same or an even lower price. Additionally, prescribing restrictions could affect the intensity of prescribing policy. For instance, Slovenia introduced SGLT-2 inhibitors with several prescribing restrictions [22]; these were removed in February 2020 and will most probably result in increased prescribing.
Although Germany was the fastest in the introduction of new antidiabetic agents, it did not have the greatest consumption. This could be linked to the finding that, out of seven evaluated new antidiabetics, only one received an added benefit (non-quantifiable benefit) during the early benefit assessment of the reimbursement procedure by the Federal Joint Committee (G-BA) [23].
The highest volume market shares (around 27 %) of new antidiabetic medicines were observed in Spain and Austria. On the other hand, Poland had the lowest volume market shares, probably due to high patient co-payments for all new antidiabetic medicines [24, 25]. Reserved use of new antidiabetic medicines in almost all of the selected countries could be related to inadequate evidence of benefits relative to the price (cost) of new antidiabetic medicines in the investigated time period. The available evidence was usually limited to surrogate outcomes such as short-term glycaemic control and treatment of adverse effects [20]. Decisions on the reimbursement of new antidiabetic agents at that time therefore had to be made under this lack of evidence, which is less affordable for lower-income countries [26, 27].
Based on the results of the correlation analysis (Fig. 4) related to incretin-based medicines (DPP-4 inhibitors and GLP-1 receptor agonists), two groups of countries can be defined. The first group (Spain, Austria, France, and Germany) represents countries with a high volume market share of new antidiabetic medicines (Table 1 and Appendix Table S2) and only a slight increase in intensity of prescribing policy from 2007 to 2011 (Fig. 3). The second group consists of all other countries. Therefore, most of the countries evaluated in this study tried to optimize diabetes care through more intense use of old antidiabetic medicines and insulins, and were probably forced to be more conservative in the use of new antidiabetic medicines.
Countries were also shown to differ in the extent of insulin use. Germany, Sweden, and Slovenia are predominant, each with a volume market share of insulins of at least 30 % (Table 1 and Table S2 in the Appendix). The literature has already shown that Sweden has a relatively high use of insulin for the treatment of type 2 diabetes compared to other European countries [28]. A market share of up to 38.6 % for insulins prescribed in primary care practices was also reported in Germany [29]. Insulin treatment is usually started after the oral therapy is already optimized (in double or triple combination and at the maximum tolerated doses) yet fails to achieve optimal glycaemic control. Nonetheless, insulin initiation is often inappropriately delayed, putting patients at unnecessarily increased risk of complications and potentially reducing quality of life and life expectancy. This is termed "clinical inertia," and it can occur due to a number of factors, including clinical concerns (i.e., risk of weight gain, hypoglycaemia, or patient distress), professional concerns (e.g., lack of clinical experience, skills, or confidence in insulin titration), or health system concerns (e.g., competing priorities, regulatory or financial constraints, or a lack of impartial continued medical education) [30, 31].
Furthermore, a great difference in the annual therapy cost of old antidiabetic medicines compared to the annual therapy cost of insulins and new antidiabetic medicines was observed in all the selected countries. The highest annual therapy cost of old antidiabetic medicines was observed in Slovenia, €58. The highest annual therapy cost of new antidiabetic agents was observed in Sweden, €769; however, the annual cost of insulin therapy was significantly lower in Sweden, €373. In contrast, the annual therapy costs of insulins and new antidiabetic agents in Germany were shown to be almost the same, around €500. Nevertheless, the enormous cost gap between the old and new antidiabetic medicines and their financial burden could affect market uptake and consequently patient access to the new agents [8].
The study included 11 European countries and addresses an important therapeutic area, with evaluation of all relevant antidiabetic classes over a 10-year study period. It provides useful insight and strengthens the evidence regarding European countries' variability in the introduction and adoption practices of new antidiabetic medicines at a time when only limited evidence to assess the risk/benefit of new agents was available. Indeed, inclusion of additional countries would contribute to the overall assessment; however, the IQVIA data exhibit limitations in terms of the quality and type of data. The database combines two levels of data (retail and hospital consumption), although not in the same manner for all countries. Hence, certain countries were not eligible for inclusion, and the study provides estimates of the intensity of prescribing policy and market uptake of new antidiabetic medicines. Furthermore, we were not able to divide the fixed combinations into two separate entities; therefore, aiming to detect any new antidiabetic agent available in the selected countries, fixed combinations for which one of the active substances was a new active substance were assigned to the corresponding group of new medicines. Taking a different approach might yield some differences from our current results. Secondly, the study period ended in 2016, when two major clinical trials [32, 33] changed the perspective on GLP-1 receptor agonists and SGLT-2 inhibitors, which resulted in updated guidelines on diabetes management in 2018 [4]. Extension of the study period would provide additional insights and comparisons into how the new evidence influenced trends in intensity of prescribing policy, volume market shares, and annual therapy cost of all antidiabetic classes. However, the availability of data limited the scope of our study.
All the countries had at least one new active substance among the DPP-4 inhibitors, GLP-1 receptor agonists, SGLT-2 inhibitors, and insulin degludec and overall growth in medication therapy for diabetic patients, shown through the increased intensity of prescribing policy, was observed. Nonetheless, the study found important differences in the uptake of new antidiabetic medicines. A similar comparative study using recent data would introduce new evidence on the evolution and changes of trends detected in this study.
Data obtained under license from the following IQVIA information service: IQVIA MIDAS Quarterly Sales Data, January 2006 – December 2016, IQVIA. All Rights Reserved.
ATC:
Anatomical Therapeutic Chemical (ATC) Classification
DOT:
Days of treatment
DPP-4 inhibitors:
Dipeptidyl Peptidase-4 inhibitors
EMA:
European Medicines Agency
GLP-1 receptor agonists:
Glucagon-like peptide-1 receptor agonists
SGLT-2 inhibitors:
Sodium-glucose cotransporter 2 inhibitors
Tamayo T, Rosenbauer J, Wild SH, Spijkerman AM, Baan C, Forouhi NG, et al. Diabetes in Europe: an update. Diabetes Res Clin Pract. 2014;103(2):206–17.
OECD/EU (2016). "Diabetes prevalence" in Health at a Glance: Europe 2016: State of Health in the EU Cycle, OECD Publishing, Paris 2016 [Available from: https://doi.org/10.1787/health_glance_eur-2016-18-en.
Chaudhury A, Duvoor C, Reddy Dendi VS, Kraleti S, Chada A, Ravilla R, et al. Clinical Review of Antidiabetic Drugs: Implications for Type 2 Diabetes Mellitus Management. Front Endocrinol (Lausanne). 2017;8:6.
Davies MJ, D'Alessio DA, Fradkin J, Kernan WN, Mathieu C, Mingrone G, et al. Management of Hyperglycemia in Type 2 Diabetes, 2018. A Consensus Report by the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetes Care. 2018;41(12):2669–701.
Blind E, Janssen H, Dunder K, de Graeff PA. The European Medicines Agency's approval of new medicines for type 2 diabetes. Diabetes Obes Metab. 2018;20(9):2059–63.
European Medicine Agency. Authorisation details Byetta exenatide [Available from: www.ema.europa.eu/en/medicines/human/EPAR/byetta#authorisation-details-section.]
Jacob L, von Vultee C, Kostev K. Prescription Patterns and the Cost of Antihyperglycemic Drugs in Patients With Type 2 Diabetes Mellitus in Germany. J Diabetes Sci Technol. 2017;11(1):123–7.
Pemminati S, Millis RM, Kamath A, Shenoy AK, Gangachannaiah S. Are the Newer Antidiabetic Agents Worth the Cost? J Clin Diagn Res. 2016;10(3):FL01.
Pichetti S. The Diffusion of New Anti-diabetic drugs: an International Comparison 2013 [Available from: https://www.irdes.fr/EspaceAnglais/Publications/IrdesPublications/QES187.pdf.]
Beran D, Ewen M, Lipska K, Hirsch IB, Yudkin JS. Availability and Affordability of Essential Medicines: Implications for Global Diabetes Treatment. Curr Diab Rep. 2018;18(8):48.
Droeschel D, de Paz B, Houzelot D, Vollmer L, Walzer S. A Comparison of Market Access Evaluations for Type II Diabetes Mellitus In France and Germany: An Analysis Using The Prismaccess Database. Value Health. 2015;18(7):A620.
Naci H, Lehman R, Wouters OJ, Goldacre B, Yudkin JS. Rethinking the appraisal and approval of drugs for type 2 diabetes. Bmj. 2015;351:h5260.
European Commision. Authorisation procedures - The centralised procedure [Available from: https://ec.europa.eu/health/authorisation-procedures-centralised_en. ]
REGULATION (EC) No 726/2004 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL [Available from: https://ec.europa.eu/health/sites/health/files/files/eudralex/vol-1/reg_2004_726/reg_2004_726_en.pdf.]
Cornu JN, Cussenot O, Haab F, Lukacs B. A Widespread Population Study of Actual Medical Management of Lower Urinary Tract Symptoms Related to Benign Prostatic Hyperplasia Across Europe and Beyond Official Clinical Guidelines. European Urology. 2010; 58:450–456. doi:https://doi.org/10.1016/j.eururo.2010.05.045
NCD Risk Factor Collaboration (NCD – RisC). Diabetes [Available from: http://ncdrisc.org/data-downloads-diabetes.html.]
World Health Organisation. Global Health Expenditure Database [Available from: http://apps.who.int/nha/database/Select/Indicators/en.]
Higgins V, Piercy J, Roughley A, Milligan G, Leith A, Siddall J, et al. Trends in medication use in patients with type 2 diabetes mellitus: a long-term view of real-world treatment between 2000 and 2015. Diabetes Metab Syndr Obes. 2016;9:371–80.
Vidal 2020 [Available from: https://www.vidal.fr/Medicament/forxiga-123958-prescription_delivrance_prise_en_charge.html]
International Society for Pharmacoeconomics and Outcomes Research ISPOR. Germany - Pharmaceutical, Global Health Technology Assessment Road Map [Available from: https://tools.ispor.org/htaroadmaps/Germany.asp]
IGES Institut GmbH. Reimbursement of Pharmaceuticals in Germany [Available from: https://www.iges.com/e15094/e15095/e15096/e17469/IGES_Reimbursement_Pharmaceuticals_2018_WEB_ger.pdf.]
Lunder M, Janic, M., Sabovic, M., Janez, A. SGLT-2 inhibitors: a novelty in the treatment of type 2 diabetes [in Slovene]. Zdravniski Vestnik. 2018;87(9–10):493–505.
OECD. Pharmaceutical reimbursement and pricing in Germany [Available from: http://www.oecd.org/els/health-systems/Pharmaceutical-Reimbursement-and-Pricing-in-Germany.pdf.
Jahnz-Rozyk K, Kawalec P, Malinowski K, Czok K. Drug Policy in Poland. Value Health Reg Issues. 2017;13:23–6.
Sliwczynski A, Brzozowska M, Jacyna A, Iltchev P, Iwanczuk T, Wierzba W, et al. Drug-class-specific changes in the volume and cost of antidiabetic medications in Poland between 2012 and 2015. PLoS One. 2017;12(6):e0178764.
Raimond V, Josselin JM, Rochaix L. HTA agencies facing model biases: the case of type 2 diabetes. Pharmacoeconomics. 2014;32(9):825–39.
Eichler HG, Pignatti F, Flamion B, Leufkens H, Breckenridge A. Balancing early market access to new drugs with the need for benefit/risk data: a mounting dilemma. Nat Rev Drug Discov. 2008 Oct;7(10):818–26. doi: https://doi.org/10.1038/nrd2664.
Nystrom T, Bodegard J, Nathanson D, Thuresson M, Norhammar A, Eriksson JW. Novel oral glucose-lowering drugs are associated with lower risk of all-cause mortality, cardiovascular events and severe hypoglycaemia compared with insulin in patients with type 2 diabetes. Diabetes Obes Metab. 2017;19(6):831–41.
Jacob L, Waehlert L, Kostev K. Changes in Type 2 Diabetes Mellitus Patients in German Primary Care Prior to (2006) and After (2010, 2014) Launch of New Drugs. J Diabetes Sci Technol. 2016;10(2):414 – 20.
Campbell MD, Babic D, Bolcina U, Smircic-Duvnjak L, Tankova T, Mitrakou A, et al. High level of clinical inertia in insulin initiation in type 2 diabetes across Central and South-Eastern Europe: insights from SITIP study. Acta Diabetol. 2019.
Aujoulat I, Jacquemin P, Rietzschel E, Scheen A, Trefois P, Wens J, et al. Factors associated with clinical inertia: an integrative review. Adv Med Educ Pract. 2014;5:141–7.
Marso et al. Liraglutide and Cardiovascular Outcomes in Type 2 Diabetes. N Engl J Med 2016; 375:311–322; DOI: https://doi.org/10.1056/NEJMoa1603827
Zinman et al. Empagliflozin, Cardiovascular Outcomes, and Mortality in Type 2 Diabetes. N Engl J Med 2015; 373:2117–2128; DOI: https://doi.org/10.1056/NEJMoa1504720
The statements, findings, conclusions, views, and opinions contained and expressed in this article are based in part on data obtained under license from the following IQVIA information service: IQVIA MIDAS Quarterly Sales Data, January 2006 – December 2016, IQVIA. All Rights Reserved. The statements, findings, conclusions, views, and opinions contained and expressed herein are not necessarily those of IQVIA or any of its affiliated or subsidiary entities.
The authors thank Astra Zeneca UK Limited, Slovenia, for allowing access to the IQVIA data; specifically, the IQVIA MIDAS Quarterly Sales Data, January 2006 – December 2016.
This work was financially supported by the Slovenian Research Agency, Grant No. P1-0189.
This work was presented as an Oral Communication during the 48th ESCP Annual Symposium held by European Society of Clinical Pharmacy in Ljubljana, Slovenia, 23-25th October 2019.
Code availability (software application or custom code)
Department of Social Pharmacy, University of Ljubljana, Faculty of Pharmacy, Askerceva cesta 7, 1000, Ljubljana, Slovenia
Nika Mardetko, Urska Nabergoj Makovec, Igor Locatelli & Mitja Kos
Department of Endocrinology, Diabetes and Metabolic Diseases, University Medical Centre Ljubljana, Zaloska Cesta 7, 1000, Ljubljana, Slovenia
Andrej Janez
Nika Mardetko
Urska Nabergoj Makovec
Igor Locatelli
Mitja Kos
All authors (NM, UNM, IL, AJ and MK) took an active role in different study phases: designing and performing the study, reviewing and discussing the results of the study, and preparing the manuscript. The manuscript was drafted by NM (the first author) and reviewed in depth by all other authors (UNM, IL, AJ, MK). All authors have read and approved the manuscript.
Correspondence to Mitja Kos.
The study was performed on an aggregated database of medicines volume and sales data, without including patients or patients' data. Hence, the ethical approval was not sought and consent to participate was not needed.
To prepare this manuscript, data obtained under license from the IQVIA information service through Astra Zeneca UK Limited, Slovenia, were used. The authors declare that they have no other (financial or non-financial) competing interests related to the work described in this manuscript.
Mardetko, N., Nabergoj Makovec, U., Locatelli, I. et al. Uptake of new antidiabetic medicines in 11 European countries. BMC Endocr Disord 21, 127 (2021). https://doi.org/10.1186/s12902-021-00798-3
Glucagon-Like Peptide 1 receptor agonists
Dipeptidyl-Peptidase 4 inhibitors
Market uptake | CommonCrawl |
Textiles and Clothing Sustainability
Improving dyeability and antibacterial activity of Lawsonia inermis L on jute fabrics by chitosan pretreatment
Muhammad Abdur Rahman Bhuiyan1,
Ariful Islam1,
Shafiqul Islam1,
Anowar Hossain2 &
Kamrun Nahar3
Textiles and Clothing Sustainability volume 3, Article number: 1 (2017) Cite this article
This paper investigates the dyeing and antimicrobial properties of jute fiber with natural dye henna after treatment with biopolymer chitosan. The treatment was carried out by applying chitosan solution on the fiber followed by dyeing with henna dye. Then, the performance was assessed in terms of the depth of shade by measuring K/S value and colorfastness properties of chitosan-treated dyed fabric samples. It has been observed that chitosan-treated fabrics showed a higher depth of shade compared to untreated dyed samples. As far as colorfastness is concerned, the dyed samples with and without chitosan pretreatment exhibited almost similar dry rubbing fastness. However, chitosan-treated fabrics showed inferior fastness ratings for wet rubbing and washing, particularly for the fabrics with higher chitosan concentrations. Again, the experimental results demonstrated that the combination of chitosan and henna dye can significantly enhance the antibacterial activity of jute fiber against the organism Staphylococcus aureus and Klebsiella pneumoniae. These findings suggest that the application of chitosan and natural dye from henna onto jute fiber is an approach to get the desired dyeing and antibacterial property.
Jute is a natural lignocellulosic fiber that consists of α-cellulose along with hemicellulose and lignin (Lewin 2006). This fiber is increasingly popular due to its biodegradability, high tensile strength, and good permeability (Bhuiyan et al. 2013; Ghosh et al. 2004). Therefore, the demand for naturally biodegradable and eco-friendly fibers like jute has been rising gradually in recent times because of greater global ecological awareness (Wang et al. 2008).
The dyeing of jute fiber can be carried out with a wide range of synthetic dyestuffs, such as direct, vat, basic, and reactive dyes (Bhuiyan et al. 2016a). These dyes are attractive as textile colorants due to their availability, low cost, brilliant shades, and excellent colorfastness properties, but to some extent these synthetic dyes are allergenic, carcinogenic, and detrimental to human health (Siva 2007). For this reason, there has been a revival of interest in the application of non-allergenic, non-toxic, and eco-friendly natural dyes on natural fibers like jute because of their high compatibility with the environment, as well as the availability of various natural coloring resources (Samanta and Agarwal 2009).
A number of research studies (Pan et al. 2003; Deo and Desai 1999) were performed on the dyeing of jute fiber with several natural dyes such as deodara, jackfruit, eucalyptus leaf, and tea. In addition, jute fiber can also be dyed with the extract of the leaves of henna commonly known as lawsone or hennotannic acid. Chemically, the molecule of lawsone is 2-hydroxy-1,4-naphthoquinone (Scheme 1), a red-orange pigment which is the chief constituent of henna leaves (Rehman et al. 2012). Industrial classifications also depict lawsone as Natural Orange 6 and CI 75480 (Saxena and Raja 2014); it acts as a substantive dye for protein fiber and imparts an orange color on the substrate (Gulrajani et al. 1992). However, some intrinsic limitations such as poor color fastness properties and low substantivity of henna dye towards cellulose hinder its widespread applications on jute and other cellulosic fibers. This may be ascribed to the polar nature of both cellulose and lawsone (hydroxyl groups are usually active sites) and also complexity of natural aspects (Omer et al. 2015). Therefore, many attempts have been made to elucidate the functional aspects of henna dye in order to enhance its fastness properties.
Chemical structure of lawsone (2-hydroxy-1,4-naphthoquinone)
The introduction of cationic sites within the cellulose is a novel treatment not only to increase the dye adsorption but also to enhance the color strength and fastness properties of dyed fiber (Bashar and Khan 2013). Cationic sites can be introduced to the cellulose polymer through aminization technique by applying chitosan on jute fiber. The chemical structure of chitosan is very similar to that of cellulose which consists of several hundreds to more than a thousand β-(1-4) linked d-glucose units (Islam et al. 2016) (Scheme 2). A variety of methods namely pad-dry, pad-dry-steam, pad-batch, and exhaustion (Houshyar and Amirshahi 2002) can be used for the application of chitosan on cellulosic fiber to initiate crosslink (Lee et al. 2010), resulting in the formation of positive dyesites on the fiber surface (Bhuiyan et al. 2014).
Structural relationships between cellulose (a) and chitosan (b)
The chitosan treatment of textiles is considered a multi-functional finish, as it not only enhances color strength but also contributes to the microbial reduction of textile materials (Tang et al., 2016), and it has thus garnered considerable interest from researchers across the globe. The antimicrobial activity of chitosan is well documented. It is attributed to the polycationic nature of chitosan, which most likely interacts with the predominantly anionic components of the cell, resulting in changes in permeability that lead to cell death by inducing leakage of intracellular components (Klaykruayat et al. 2010). This antibacterial characteristic leads to a considerable enhancement of the microbial resistance of textile materials following treatment with chitosan.
In addition, extracts of henna leaves are capable of inhibiting the growth of both gram-positive and gram-negative bacteria. In the past, several studies (Ali et al. 2001; Bonjar 2004) have explored the antimicrobial properties of henna extracts against the microorganisms Staphylococcus aureus, Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Proteus mirabilis. The effectiveness of henna was also observed in the management of wound infections and also against the primary invaders of burn wounds (Nayak et al. 2007; Muhammad and Muhammad 2005). However, the combined antimicrobial effect of chitosan and natural dyes on cellulosic fiber like jute has not been yet investigated. Hence, the main objectives of this study were to explore the synergic effect of chitosan and henna on microbial reduction in jute fiber followed by the analysis of dyeability and color strength of henna dye on jute fiber being treated with chitosan.
Scoured, bleached 100% plain hessian jute fabric (261 GSM) was purchased from the local market of Bangladesh and used in all the experiments. For the dyeing of jute fabric, natural dye henna (Lawsonia inermis) was bought from the local market and used to obtain the dyestuff lawsone, which is reddish brown in color. A commercial-grade water-soluble chitosan (straw yellow powder, deacetylation = 95.3%) was collected from Zhengzhou Sigma Chemical Company Ltd., China, and used as received in all experiments without any physical or chemical modification. Analytical-grade acetic acid (CH3COOH) from Merck, India, was used to dissolve the chitosan in water. Ethanol (C2H5OH) for the extraction of dye from henna leaves was purchased from Merck, India. Antimicrobial activity of the chitosan-treated fabric was examined against the organisms S. aureus (gram-positive) and K. pneumoniae (gram-negative), which were obtained from the Department of Microbiology, University of Dhaka, Bangladesh.
Treatment of jute fiber with chitosan
For the treatment of jute fiber with chitosan, 0.25, 0.5, 1.0, and 1.5% (w/v) concentrations of chitosan solution were prepared at room temperature by dissolving the required amount of chitosan powder in 2% (v/v) acetic acid solution and stirring the dispersion for 1 h at 60 °C. The scoured and bleached jute fabrics were then immersed in chitosan solutions of different concentrations for 24 h at room temperature. The fabrics were then padded by using the Laboratory Padding Mangle from Copower Technology Ltd., Taiwan. The padding of fabric samples was performed at room temperature, 2 kg/cm2 pressure, squeezed to remove excess solution, and cured in the curing chamber at 120 °C for 5 min. For the ease of identification, all the test fabric samples were coded as shown in Table 1.
Table 1 Test fabric sample coding
Extraction of natural dye from henna
Fresh leaves of henna were washed and cleaned with distilled water, dried in the sunlight for 1 day, and then dried again at 80 °C for 1 h in a hot air oven. The dried leaves were ground into powder form to obtain a proper extraction result. The dye extract was obtained by immersing 20 g of henna powder in 100 mL of water–ethanol mixture (90:10 v/v) for 24 h. This dye extract solution was used for the dyeing of both chitosan-treated and untreated jute fabric samples.
Dyeing with henna dyes
The dyeing of treated and untreated jute fiber was carried out using the required amount of extracts in water–ethanol mixture for 45 min at 75 °C temperature with material-to-liquor ratio 1:30 for optimum dye exhaustion. The dyeing was performed in Sandolab infrared lab dyeing machine from Copower Technology Ltd., Taiwan. After dyeing, the dyed fabric samples were rinsed with cold water and washed in a bath with 1 g/L of soaping agent at 60 °C temperature for 10 min and then dried in a dryer.
Measurement of color strength and related parameters
The dyeing performance of treated jute fabric samples with henna dye was analyzed using color measurement spectrophotometer (Datacolor 650 from China) in terms of depth of color and color difference from untreated dyed jute fabric sample. The depth of color of the dyed fabric was determined by analyzing the K/S value of a given dyed sample through the Kubelka–Munk equation (Eq. (1)) (Bhuiyan et al. 2016b).
$$ \frac{K}{S} = \frac{(1-R)^2}{2R} $$
where R = reflectance percentage, K = absorption coefficient, and S = scattering coefficient of dyes. This value was derived from the attenuation ratio of light due to absorption and scattering, which was found based on reflectance. The relative color strength of chitosan-treated dyed fabric samples was obtained using Eq. (2) (Yusuf et al. 2012).
$$ \text{Relative color strength}\ (\%) = \frac{K/S\ \text{of treated dyed sample}}{K/S\ \text{of untreated dyed sample}} \times 100 $$
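A minimal sketch of these two calculations, assuming reflectance R is expressed as a fraction (e.g. 0.20 for 20 % reflectance); the reflectance values in the example are illustrative, not measured data.

```python
def k_over_s(reflectance):
    """Kubelka-Munk K/S value from reflectance expressed as a fraction (0-1)."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def relative_color_strength(ks_treated, ks_untreated):
    """Relative color strength (%) of a treated dyed sample vs. the untreated one."""
    return ks_treated / ks_untreated * 100.0

# Illustrative reflectances of an untreated and a chitosan-treated dyed sample.
ks_untreated = k_over_s(0.30)
ks_treated = k_over_s(0.18)
print(round(ks_untreated, 2), round(ks_treated, 2),
      round(relative_color_strength(ks_treated, ks_untreated), 1))
```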
The color differences, i.e., CIELab color deviation (∆E*) value was calculated by using Eq. (3) (Broadbent 2001).
$$ \Delta E^{*} = \sqrt{(\Delta L^{*})^2 + (\Delta a^{*})^2 + (\Delta b^{*})^2} $$
The values of L*, a*, and b* for a given color locate its position in the three-dimensional CIELab color space, where ∆L* is the deviation in lightness, ∆a* is the deviation of color along the green–red axis, and ∆b* is the deviation of color along the yellow–blue axis. Both the color deviation and the depth of shade of the dyed fabric were evaluated according to AATCC test method 173-2006 under illuminant D65, large area view, and the CIE 10° standard observer. Each sample was folded twice to give an opaque view, and color reflectance was measured four times at different parts of the fabric surface.
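A minimal sketch of the CIELab colour-difference calculation, assuming the L*, a*, b* values of the treated and untreated dyed samples are already available; the values below are illustrative.

```python
import math

def delta_e(lab_sample, lab_reference):
    """CIELab colour difference between two (L*, a*, b*) triples (Eq. 3)."""
    dl, da, db = (s - r for s, r in zip(lab_sample, lab_reference))
    return math.sqrt(dl ** 2 + da ** 2 + db ** 2)

# Illustrative values: chitosan-treated dyed sample vs. untreated dyed sample.
print(round(delta_e((48.2, 18.5, 30.1), (55.0, 15.2, 27.4)), 2))  # about 8.0
```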
Determination of color fastness properties
The fastness properties of dyed fabric samples, i.e., color fastness to washing was done according to ISO 105 C03:1989 by wash fastness tester (Gyrowash 415/8 from James H. Heal and Co., UK). The change and staining of color due to washing were assessed by comparing the untreated fabric with the treated fabric samples with respect to the ratings of color change and color staining gray scales. The evaluation of color fastness to rubbing was performed according to ISO 105 X 12: 2001 by rubbing fastness tester (Crockmeter 670 from James H. Heal and Co., UK). All the analyses of test fabric samples were conducted after conditioning the dyed fabrics in testing atmosphere (temperature 27 ± 2 °C and relative humidity 65 ± 2%) for 24 h.
The determination of specific functional groups or chemical bonds formed between chitosan and the cellulose polymer of jute fiber after dyeing with henna was carried out with a Fourier transform infrared (FTIR) spectrophotometer (Cary 630) from Agilent Technologies, USA. FTIR spectra were taken in absorption mode and measured using potassium bromide (KBr) pellets made of finely cut and ground jute fibers.
Antibacterial property testing
The antimicrobial properties of the chitosan-treated and untreated dyed fabrics were assessed using ASTM E2149-01, which is a quantitative antibacterial test method intended to assess the resistance of non-leaching antimicrobial treated specimens to the growth of microbes under dynamic contact conditions (Ferrero and Periolatto 2012). Antimicrobial activity of the test specimens was observed against S. aureus (gram-positive) and K. pneumoniae (gram-negative) bacteria. Each culture was suspended in a small amount of nutrient broth, spread on a nutrient agar plate, and incubated at 37 °C for 24 h. Two single colonies were picked up with an inoculating loop from the agar plate, suspended in 5 mL of nutrient broth, and incubated for 18 h at 37 °C. The preparation of the bacterial cultures and their transfer to agar plates were carried out in a safety cabinet (Clernair, Belgium); the inoculated plates were incubated in an incubator (Binder, USA) at 37 °C for 18–24 h, and the surviving cells were then counted. The antimicrobial activity was expressed in terms of the percentage reduction of the organism after contact with the test specimen compared to the number of bacterial cells surviving after contact with the control, using Eq. (4) (Arif et al. 2015).
$$ \%\ \text{Reduction} = \frac{B-A}{B} \times 100 $$
where A and B are the surviving cells (colony forming units per milliliter (CFU/mL)) for the flasks containing test samples (chitosan-treated and dyed jute) and the control (untreated and undyed jute), respectively, after 1 h contact time.
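A minimal sketch of this reduction calculation from colony counts; the CFU/mL values below are illustrative, not measured data.

```python
def percent_reduction(control_cfu, test_cfu):
    """Eq. 4: percentage reduction of surviving cells relative to the control."""
    return (control_cfu - test_cfu) / control_cfu * 100.0

# Illustrative counts after 1 h contact time (CFU/mL).
print(round(percent_reduction(control_cfu=1.8e5, test_cfu=2.7e4), 1))  # 85.0
```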
Measurement of color strength
The depth of color or color strength of the dyed fabric samples was analyzed by the K/S value through the Kubelka–Munk equation, the most useful and widely applicable theoretical model for colorants that exhibit both light absorption and scattering. This value numerically represents the nature of the coloring material layer and provides an easy way to relate a color to colorant concentration (Nobbs 1985). The function K/S is directly proportional to the concentration of colorant in the substrate, and the values found for all the experimental fabric samples are shown in Fig. 1.
The gradual improvement of dye absorption (K/S value) of jute fiber with the increment of chitosan concentration
Figure 1 shows the K/S values of dyed fabric samples for different percentages of chitosan treatment. The greater the amount of dyestuff in the fabric, the deeper the shade, resulting in higher K/S values. The longer bar shown by the fabric samples treated with 1.5% chitosan indicates greater absorption of dye, whereas the fabric without chitosan treatment forms the shortest bar due to its lower K/S value and consequently lower absorption of dye (Fig. 2).
Shade card of treated and untreated jute fabric samples dyed with henna dye
The better absorption of henna dye by jute fiber due to chitosan treatment has not been investigated extensively so far. However, chitosan has been proved to increase the rate of dye uptake and dye exhaustion of cellulosic fibers in the case of reactive dyes (Kitkulnumchai et al. 2008; Bhuiyan et al. 2013). It may be suggested that the introduction of amino groups into the fiber structure due to chitosan treatment offers an additional affinity (Dev et al. 2009). The cationization of the cellulose through the formation of ammonium ions (NH4+) on chitosan (Bashar and Khan 2013), caused by treating the jute fiber in an acid medium, may attract the acidic hydroxylated structure of the coloring component lawsone (2-hydroxy-1,4-naphthoquinone) of henna dye (Ali et al. 2009), resulting in higher absorption of dye by the fiber.
Color difference and relative color strength
The evaluation of the color parameter and the color difference of dyed fabric samples was performed by using CIELab system. The lightness value of color (from 100 to 0) is represented by L*; a higher lightness value indicates a lower color yield by the dyed fabric. Again, a* and b* values characterize the tone of the color; positive values of a* and b* stand for redder and yellower tones, while negative values show greener and bluer tones (Kuehni 2003).
It can be observed from Table 2 that the L* values decrease with the increase in chitosan concentration indicating that the sample becomes darker compared to the untreated dyed sample. The reduced lightness value of dyed fabric samples signifies greater absorption of dye and consequently higher relative strength of color. Moreover, the positive a* and b* values suggest the combination of red and yellow tones resulting in the orange color of the dyed fabric sample which is the characteristic natural color of the lawsone pigment of henna dye.
Table 2 Spectrophotometer characterization of chitosan-treated and untreated henna dyed jute fabric samples
The color difference (ΔE) values are also given in Table 2 and it is clearly demonstrated that there is a significant color difference between the untreated and chitosan-treated dyed samples. The color difference is more prominent in the case of fabric samples treated with 1.5% chitosan solution, and it decreases steadily with the reduction in chitosan concentration. The larger color difference with the increment of chitosan concentration is due to the greater absorption of dye by the fiber and accordingly leads to a higher K/S value with chitosan concentration.
Color fastness properties of dyed fabric
Color fastness is the property of a dye to retain its color when the dyed or printed textile material is exposed to washing or other conditions. The assessment of color fastness properties (rubbing and washing) of dyed textile material is carried out using the gray scale. Gray scales are used to measure the color fastness against washing by color change and color staining (with multifiber fabric) options (Saville, 1999). However, rubbing fastness of dyed fabric samples was evaluated only by color staining option in both dry and wet conditions. The color fastness ratings of all experimental dyed fabric samples are tabulated in Tables 3 and 4.
Table 3 Color fastness to washing (color change and color staining) of untreated and chitosan-treated dyed jute fabric samples
Table 4 Color fastness to rubbing (dry and wet) of untreated and chitosan-treated dyed jute fabric samples
The wash fastness ratings in Table 3 show that chitosan-treated and untreated jute fabrics dyed with henna dye have almost identical fastness ratings of "good" to "excellent" (within numerical grades 4–5) with very little variation. However, the staining of color on the adjacent multifiber fabric was found to be higher for chitosan-treated dyed samples (D, E), particularly on the cotton and wool fibers (3–4 and 3). Moreover, in the case of dry rubbing (Table 4), all types of dyed samples showed almost similar ratings; however, chitosan-treated samples with higher concentrations showed a lower rating (3–4) compared to untreated samples (4–5) for wet rubbing. In general, a deeper shade shows inferior fastness to washing and rubbing than a lighter one on a similar type of fabric for the same dyestuff (Bhuiyan et al. 2013), because in the case of a deep shade, dye molecules are more saturated and tend to move out from the interior of the fiber during washing. As discussed earlier, the chitosan present in the fabric increases the number of dyesites, causing higher absorption of dye and resulting in a greater depth of shade. Hence, as a general consequence of achieving a deeper shade, the chitosan-treated fabric samples showed a slightly lower fastness rating in comparison to the lighter, untreated fabric.
Characterization of dyed fabric samples
FTIR spectroscopic analysis
The structural properties of chitosan-treated and untreated jute fiber after dyeing with henna dyes were investigated by FTIR spectroscopy. This analytical technique provides information about the chemical bonds and molecular structure of a material. The existence of a specific chemical bond in any material is indicative of the presence of a peak at a specific wavenumber being revealed through scanning the test samples in the infrared light source.
The chemical interaction between chitosan and jute fiber polymer is investigated by the change of the peaks from the characteristic spectra as shown in Fig. 3. In general, no significant changes were observed in bands or their intensities in case of all test fabrics. The broad absorption band which appears in the range of 3500–3100 cm−1 indicates the presence of –OH groups in the cellulose polymer (Agarwal and Bhattacharya 2010). However, the strong absorption in the region 3500–3100 cm−1 in case of chitosan-treated fiber signifies the strong hydrogen bonded chitosan and jute fiber and also the existence of primary –NH2 and secondary –NH groups (Bhuiyan et al. 2016c). The presence of –NH2 group in the treated fabric is responsible for introducing a cationic site in jute polymer resulting in improved dye exhaustion and also interaction with microorganisms for antibacterial properties. Again, the spectrum near 2900 cm−1 corresponding to the symmetric stretching of methylene (–CH2–) groups (Davulcu et al. 2014) and 1735 cm−1 related to the C=O stretch of esters were found to be similar in all the tested samples. The absorption peak that appeared at 1640 cm−1 for chitosan-treated jute suggested the formation of Schiff base (C=N double bond) between aldehydic carbonyl group of cellulose and amino group of chitosan. All three samples showed peaks near 1430 and 1325 cm−1, being related to –OH bending of C–O–H alcohol groups and 1060 cm−1 corresponding to C–O mainly of C3–O3H secondary alcohol. Moreover, the peak at 898 cm−1 corresponding to asymmetric out-of-phase ring stretching of C1–O–C4 β–glucosidic bonds (Chung et al. 2004) was increased after the dyeing process.
FTIR absorption spectra of selected jute fabric samples
Improvement of antimicrobial properties
The antibacterial activity of chitosan-treated and untreated dyed jute fabric samples was evaluated against S. aureus (gram-positive) and K. pneumoniae (gram-negative) bacteria. The pathogenic gram-positive bacterium S. aureus is the most frequently evaluated species; it is commonly found in the nose, respiratory tract, and on the skin and is a major cause of cross-infection in hospitals as well as in commercial and home laundry practices (Kluytmans et al. 1997). It causes skin and tissue infections, respiratory infections, and food poisoning (Weese and van Duijkeren 2010). Again, the gram-negative bacterium K. pneumoniae, which is a popular test organism, is found in the normal flora of the mouth, skin, and intestines and can cause severe bacterial infections leading to pneumonia, bloodstream infections, wound infections, urinary tract infections, and meningitis (Eliopoulos et al. 2008).
The reduction of microorganisms (S. aureus and K. pneumoniae) on chitosan-treated dyed jute fabrics is shown in Table 5. In the case of untreated and undyed fabric samples, no antibacterial activity was observed against S. aureus and K. pneumoniae. However, jute fabrics dyed with henna dye following treatment with chitosan demonstrated a significant improvement in antibacterial property against both organisms (Figs. 4 and 5), which revealed a wide-spectrum activity and a high killing rate of chitosan against gram-positive and gram-negative bacteria. Furthermore, the microbial inhibition was found to increase with the increase in chitosan concentration. The most accepted mechanism for the microbial reduction by chitosan is its polycationic structure, which causes cell death by interacting with the anionic proteins of microorganisms (Lim and Hudson, 2004). Moreover, the strong electrostatic interaction caused by the higher positive charge density adsorbs the electronegative substances in the cell and disturbs the physiological activities of the microorganism (Bhuiyan et al. 2016c; Zheng and Zhu 2003). Several intrinsic and extrinsic factors, such as pH, microorganism species, positive charge density, molecular weight (MW), and the degree of deacetylation (DD) of chitosan, can influence the antimicrobial activity (Zivanovic et al. 2004), while the density of cations depends on the concentration of chitosan and its degree of substitution, resulting in a greater reduction of microorganisms with the increment of chitosan concentration.
Table 5 Microbial reduction percentages of henna dyed jute fabrics with the increment of chitosan concentration against S. aureus (gram-positive) and K. pneumoniae (gram-negative) bacteria
Number of surviving cells (S. aureus) after contact with untreated and undyed (a), untreated and dyed (b), and chitosan-treated (1.5%) dyed fabric sample (c)
Number of surviving cells (K. pneumoniae) after contact with untreated and undyed (a), untreated and dyed (b), and chitosan-treated (1.5%) dyed fabric sample (c)
On the other hand, fabrics dyed with henna alone, without chitosan application, also exhibited antimicrobial activity, especially against S. aureus (Table 5). Moreover, residual ethanol from the ethanolic extraction also contributes to the antibacterial property of the henna dye. Several studies have shown that the ethanolic extract of L. inermis is the most active against all the bacteria in the test system compared with the aqueous extract of henna dye (Sukanya et al. 2009; Ali et al. 2001). In addition, the colored pigment lawsone (2-hydroxy-1,4-naphthoquinone) is also responsible for the inherent antimicrobial activity of henna dye. The highly reactive ketone groups (>C=O) in the aromatic ring of the quinone lawsone impart antimicrobial activity by forming irreversible complexes with nucleophilic amino acids in proteins, leading to inactivation of the protein and loss of function (Dev et al. 2009). Hence, the natural antimicrobial efficacy of henna dye can be enhanced considerably by coupling it with chitosan in the treatment of textile materials to protect clothing against common infections.
The treatment of jute fiber with the biopolymer chitosan and its effect on dyeing with natural colorants and on the antibacterial characteristics of the fiber have been investigated. The detailed study has demonstrated the twofold effect of chitosan on jute fiber. The treatment of jute with chitosan appreciably enhances the uptake of dye by the fiber. Moreover, the color fastness of the dyed fabrics to washing and rubbing falls within the acceptable range of good to excellent. On the other hand, the antimicrobial activity of the jute fabric increases significantly due to the combined effect of the natural dye henna and the biopolymer chitosan. Thus, the findings of the study suggest a potential application of chitosan as a non-toxic, eco-friendly, and multi-functional finish, providing the desired dyeing and antimicrobial properties of jute fiber after treatment with chitosan.
Agarwal, B. J., & Bhattacharya, S. D. (2010). Possibilities of polymer aided dyeing of cotton fabric with reactive dyes at neutral pH. Journal of Applied Polymer Science, 118, 1257–1269.
Ali, S., Hussain, T., & Nawaz, R. (2009). Optimization of alkaline extraction of natural dye from henna leaves and its dyeing on cotton by exhaust method. Journal of Cleaner Production, 17(1), 61–66.
Ali, N. A., Jülich, W. D., Kusnick, C., & Lindequist, U. (2001). Screening of Yemeni medicinal plants for antibacterial and cytotoxic activities. Journal of Ethnopharmacology, 74(2), 173–179.
Arif, D., Niazi, M. B. K., Ul-Haq, N., Anwar, M. N., & Hashmi, E. (2015). Preparation of antibacterial cotton fabric using chitosan-silver nanoparticles. Fibers and Polymers, 16(7), 1519–1526.
Bashar, M. M., & Khan, M. A. (2013). An overview on surface modification of cotton fiber for apparel use. Journal of Polymers and the Environment, 21(1), 181–190.
Bhuiyan, M. R., Shaid, A., Bashar, M. M., & Sarkar, P. (2016a). Investigation on dyeing performance of basic and reactive dyes concerning jute fiber dyeing. Journal of Natural Fibers, 13(4), 492–501.
Bhuiyan, M. R., Rahman, M. M., Shaid, A., Bashar, M. M., & Khan, M. A. (2016b). Scope of reusing and recycling the textile wastewater after treatment with gamma radiation. Journal of Cleaner Production, 112(4), 3063–3071.
Bhuiyan, M. R., Hossain, M. A., Zakaria, M., Islam, M. N., & Uddin, M. Z. (2016c). Chitosan coated cotton fiber: physical and antimicrobial properties for apparel use. Journal of Polymers and the Environment, 1–9. https://doi.org/10.1007/s10924-016-0815-2
Bhuiyan, M. R., Shaid, A., & Khan, M. A. (2014). Cationization of cotton fiber by chitosan and its dyeing with reactive dye without salt. Chemistry and Materials Engineering, 2(4), 96–100.
Bhuiyan, M. R., Shaid, A., Bashar, M. M., Haque, P., & Hannan, M. A. (2013). A novel approach of dyeing jute fiber with reactive dye after treating with chitosan. Open Journal of Organic Polymer Materials, 3(4), 87–91.
Bonjar, S. (2004). Evaluation of antibacterial properties of some medicinal plants used in Iran. Journal of Ethnopharmacology, 94(2), 301–305.
Broadbent, A. D. (2001). Basic principles of textile coloration. West Yorkshire: Society of Dyers and Colourists.
Chung, C., Lee, M., & Choe, E. K. (2004). Characterization of cotton fabric scouring by FT-IR ATR spectroscopy. Carbohydrate Polymers, 58, 417–420.
Davulcu, A., Benli, H., Şen, Y., & Bahtiyari, M. İ. (2014). Dyeing of cotton with thyme and pomegranate peel. Cellulose, 21(6), 4671–4680.
Deo, H. T., & Desai, B. K. (1999). Dyeing of cotton and jute with tea as a natural dye. Coloration Technology, 115(7–8), 224–227.
Dev, V. G., Venugopal, J., Sudha, S., Deepika, G., & Ramakrishna, S. (2009). Dyeing and antimicrobial characteristics of chitosan treated wool fabrics with henna dye. Carbohydrate Polymers, 75(4), 646–650.
Eliopoulos, G. M., Maragakis, L. L., & Perl, T. M. (2008). Acinetobacter baumannii: epidemiology, antimicrobial resistance, and treatment options. Clinical Infectious Diseases, 46(8), 1254–1263.
Ferrero, F., & Periolatto, M. (2012). Antimicrobial finish of textiles by chitosan UV-curing. Journal of Nanoscience and Nanotechnology, 12(6), 4803–4810.
Ghosh, P., Samanta, A. K., & Basu, G. (2004). Effect of selective chemical treatments of jute fibre on textile-related properties and processibility. Indian Journal of Fibre & Textile Research, 29, 85–99.
Gulrajani, M. L., Gupta, D. B., Aggarwal, V., & Jain, M. (1992). Some studies on natural yellow dyes, Part III: Quinones: Henna, Dolu. The Indian Textile Journal, 102(3), 76–83.
Houshyar, S., & Amirshahi, S. H. (2002). Treatment of cotton with chitosan and its effect on dyeability with reactive dyes. Iranian Polymer Journal, 11(5), 295–302.
Islam, S., Bhuiyan, M. R., & Islam, M. N. (2016). Chitin and chitosan: structure, properties and application in biomedical engineering. Journal of Polymers and the Environment, 1–13. https://doi.org/10.1007/s10924-016-0865-5
Kitkulnumchai, Y., Ajavakom, A., & Sukwattanasinitt, M. (2008). Treatment of oxidized cellulose fabric with chitosan and its surface activity towards anionic reactive dyes. Cellulose, 15(4), 599–608.
Klaykruayat, B., Siralertmukul, K., & Srikulkit, K. (2010). Chemical modification of chitosan with cationic hyperbranched dendritic polyamidoamine and its antimicrobial activity on cotton fabric. Carbohydrate Polymers, 80(1), 197–207.
Kluytmans, J., Van Belkum, A., & Verbrugh, H. (1997). Nasal carriage of Staphylococcus aureus: epidemiology, underlying mechanisms, and associated risks. Clinical Microbiology Reviews, 10(3), 505–520.
Kuehni, R.G. (2003). Color space and its divisions: color order from antiquity to the present. New Jersey: John Wiley & Sons.
Lee, S. H., Kim, M. J., & Park, H. (2010). Characteristics of cotton fabrics treated with epichlorohydrin and chitosan. Journal of Applied Polymer Science, 117(2), 623–628.
Lim, S. H., & Hudson, S. M. (2004). Synthesis and antimicrobial activity of a water-soluble chitosan derivative with a fiber-reactive group. Carbohydrate Research, 339(2), 313–319.
Lewin, M. (Ed.) (2006). Handbook of fiber chemistry. New York: CRC Press.
Muhammad, H. S., & Muhammad, S. (2005). The use of Lawsonia inermis Linn. (henna) in the management of burn wound infections. African Journal of Biotechnology, 4(9), 934–937.
Nayak, B. S., Isitor, G., Davis, E. M., & Pillai, G. K. (2007). The evidence based wound healing activity of Lawsonia inermis Linn. Phytotherapy Research, 21(9), 827–831.
Nobbs, J. H. (1985). Kubelka—Munk theory and the prediction of reflectance. Review of Progress in Coloration and Related Topics, 15(1), 66–75.
Omer, K. A., Tao, Z., & Seedahmed, A. I. (2015). New approach for dyeing and UV protection properties of cotton fabric using natural dye extracted from henna leaves. Fibres & Textiles in Eastern Europe, 5(113), 60–65.
Pan, N. C., Chattopadhyay, S. N., & Day, A. (2003). Dyeing of jute with natural dyes. Indian Journal of Fibre & Textile Research, 28(3), 339–342.
Rehman, F. U., Adeel, S., Qaiser, S., Bhatti, I. A., Shahid, M., & Zuber, M. (2012). Dyeing behaviour of gamma irradiated cotton fabric using Lawson dye extracted from henna leaves (Lawsonia inermis). Radiation Physics and Chemistry, 81(11), 1752–1756.
Samanta, A. K., & Agarwal, P. (2009). Application of natural dyes on textiles. Indian Journal of Fibre & Textile Research, 34(4), 384–399.
Saville, B. P. (1999). Physical testing of textiles. Cambridge: Woodhead Publishing Limited.
Saxena, S., & Raja, A. S. M. (2014). Natural dyes: sources, chemistry, application and sustainability issues. In Roadmap to Sustainable Textiles and Clothing (pp. 37–80). Singapore: Springer.
Siva, R. (2007). Status of natural dyes and dye-yielding plants in India. Current Science Bangalore, 92(7), 916–925.
Sukanya, S. L., Sudisha, J., Hariprasad, P., Niranjana, S. R., Prakash, H. S., & Fathima, S. K. (2009). Antimicrobial activity of leaf extracts of Indian medicinal plants against clinical and phytopathogenic bacteria. African Journal of Biotechnology, 8(23), 6677–6682.
Tang, R., Yu, Z., Zhang, Y., & Qi, C. (2016). Synthesis, characterization, and properties of antibacterial dye based on chitosan. Cellulose, 23(3), 1741–1749.
Wang, W. M., Cai, Z. S., & Yu, J. Y. (2008). Study on the chemical modification process of jute fiber. Journal of Engineered Fibers and Fabrics, 3(2), 1–11.
Weese, J. S., & van Duijkeren, E. (2010). Methicillin-resistant Staphylococcus aureus and Staphylococcus pseudintermedius in veterinary medicine. Veterinary Microbiology, 140(3), 418–429.
Yusuf, M., Ahmad, A., Shahid, M., Khan, M. I., Khan, S. A., Manzoor, N., & Mohammad, F. (2012). Assessment of colorimetric, antibacterial and antifungal properties of woollen yarn dyed with the extract of the leaves of henna (Lawsonia inermis). Journal of Cleaner Production, 27, 42–50.
Zivanovic, S., Basurto, C. C., Chi, S., Davidson, P. M., & Weiss, J. (2004). Molecular weight of chitosan influences antimicrobial activity in oil-in-water emulsions. Journal of Food Protection, 67(5), 952–959.
Zheng, L. Y., & Zhu, J. F. (2003). Study on antimicrobial activity of chitosan with different molecular weights. Carbohydrate Polymers, 54(4), 527–530.
MAR conceived of the study, designed the experiment and drafted the manuscript. AI and KN coordinated experimental analysis and manuscript submission. SI and AH interpreted the data and drafted the manuscript. All authors read and approved the final manuscript submission.
Department of Textile Engineering, Dhaka University of Engineering and Technology, Gazipur, Bangladesh
Muhammad Abdur Rahman Bhuiyan, Ariful Islam & Shafiqul Islam
Department of Textile Engineering, Primeasia University, Dhaka, Bangladesh
Anowar Hossain
Department of Microbiology, Primeasia University, Dhaka, Bangladesh
Kamrun Nahar
Correspondence to Muhammad Abdur Rahman Bhuiyan.
Bhuiyan, M.A.R., Islam, A., Islam, S. et al. Improving dyeability and antibacterial activity of Lawsonia inermis L on jute fabrics by chitosan pretreatment. Text Cloth Sustain 3, 1 (2017). https://doi.org/10.1186/s40689-016-0023-4
Henna dye
Dyeability
Antimicrobial properties | CommonCrawl |
The influence of riverine barriers, climate, and topography on the biogeographic regionalization of Amazonian anurans
Marcela Brasil de Castro Godinho &
Fernando Rodrigues da Silva ORCID: orcid.org/0000-0002-0983-3207
Macroecology
We evaluated five non-mutually exclusive hypotheses driving the biogeographic regions of anuran species in the Amazonia. We overlaid extent-of-occurrence maps for anurans onto 50 × 50 km cells to generate a presence–absence matrix. This matrix was subjected to a cluster analysis to identify the pattern and number of biogeographic regions for the dataset. Then, we used multinomial logistic regression models and deviance partitioning to explore the relative importance of contemporary and historical climate variables, topographic complexity, riverine barriers and vegetation structure in explaining the biogeographic regions identified. We found seven biogeographic regions for anurans in the Amazonia. The major rivers in the Amazonia made the largest contribution to explaining the variability in anuran biogeographic regions, followed by climate variables and topography. The barrier effect seems to be strong for some rivers, such as the Amazon and Madeira, but other Amazonian rivers appear not to be effective barriers. Furthermore, climate and topographical variables provide an environmental gradient driving the species richness and range-size distributions of anurans. Therefore, our results provide a spatially explicit framework that could be used to address conservation and management issues of anuran diversity for the largest tropical forest in the world.
The spatial patterns of species distributions express many ecological and evolutionary processes and are linked to a complex and historically contingent setting. Since the 19th century, studies have divided large geographic extents into regions of similar faunistic or floristic composition1,2,3,4. This approach, called biogeographical regionalization, has helped us understand whether the processes influencing species distributions are determined by shared evolutionary histories (i.e. speciation, extinction and distribution), past or current climatic oscillations (i.e. precipitation and temperature gradients) and/or physical barriers (i.e. mountains and oceans) that limit species dispersal between areas2,4,5,6,7,8. For example, Holt et al.2 identified 20 distinct zoogeographic regions by combining data on the distributions and phylogenetic relationships of vertebrate species and found that spatial turnover in phylogenetic composition is higher in the Southern than in the Northern Hemisphere. Furthermore, global biogeographical regionalization has been used to evaluate international conservation priorities based on the degradation of natural habitats and ecosystems as a result of human activities3,9. Although large-scale global patterns are relatively well established2,4,9, intracontinental regionalization patterns are still scarce for some Neotropical areas, representing an opportunity for new insights into the processes influencing species distributions8,10.
The Amazonia encompasses more than 6 million km2 across eight countries in South America and is one of the most critical natural environments for both regulating climate and sustaining biodiversity at the global scale11,12. Currently, Amazonia is threatened by several anthropic pressures, such as dam construction, deforestation, and fire, which will cascade onto the patterns of species distribution in the largest and most species-rich tropical forest in the world. Although previous studies have delimited biogeographical regions for mammals and birds in the Amazonia, their results are not congruent. For example, Wallace1, considering primate ranges, identified four regions in the Amazonia. Haffer13, Cracraft14 and Silva et al.15, considering bird ranges, identified six, seven and eight regions in the Amazonia, respectively. Thus, biogeographical regionalization in the Amazonia remains open to debate, and different hypotheses have been proposed to explain the pattern of species distributions in the Amazonia1,16,17,18,19. Among competing hypotheses, the riverine barrier hypothesis states that the major rivers of Amazonia act as geographic barriers to gene flow, promoting the genetic divergence of populations and, therefore, speciation1. The Pleistocene refuge hypothesis states that during the Pleistocene, decreases in temperature and humidity in the Amazonia Basin left relatively small 'islands' of tropical rainforest surrounded by xeric habitats, isolating populations and changing distribution patterns4,16. The orogenic hypothesis states that the uplift of the Andes in the Neogene and its effect on regional climate had a substantial impact on landscape evolution in the Amazonia18. Therefore, the pattern of species distribution in Amazonia will not be explained entirely by any single simple model; rather, it depends on a combination of more realistic, complex scenarios19,20,21.
Biogeographical units are hierarchically arranged, and no single biogeographic framework is optimal for all taxa2,3,6,9. To date, no study has evaluated the importance of multiple scenarios shaping present-day patterns of amphibian species composition across the Amazonia. Amphibians are the most threatened vertebrate group22, with Amazonia harboring the highest species richness in the world23. Moreover, patterns of amphibian species richness are not randomly distributed throughout the Amazonia23,24. Because amphibian species are normally separated into more regions than other vertebrate groups due to their small ranges25 and physiological constraints6,26, we believe that the Amazonia will present more than the eight regions previously proposed for birds14,15. Here, we performed a regionalization scheme for the current original extent of the Amazonia in order to explore how anurans are distributed throughout this complex and biodiverse biome. Our goal is to determine the biogeographical regions for anuran species in the Amazonia by evaluating five non-mutually exclusive hypotheses:
Contemporary Climate hypothesis – present-day climate variables are key environmental determinants of anuran composition because they act as environmental filters influencing which species can inhabit specific areas26,27. Under this hypothesis, it is expected that areas with different climate gradients will harbor distinct species compositions due to specific physiological requirements or life-history traits;
Pleistocene Climate Variation hypothesis – while current patterns of amphibian distributions in Europe28 and the Brazilian Atlantic Forest29 were shaped by past climate changes, there is still no evidence that amphibian distributions in Amazonia have been influenced by Pleistocene climate variation30. However, Amazonia covers a large spatial extent, and areas that have remained climatically stable since the Pleistocene are not randomly distributed in space15. Under this hypothesis, it is expected that areas that maintained similar climatic conditions, but are far apart from each other, will harbor distinct species compositions due to dissimilar rates of speciation, extinction and colonization that delimited different regional species pools31,32 along the Amazonia;
Topography hypothesis – areas with larger elevational ranges increase speciation rates and endemism18,33. Under this hypothesis, it is expected that these areas will harbor small-ranged species with historically limited dispersal capabilities due to physical barriers and/or physiological constraints;
Vegetation Structure hypothesis – the concept of habitat templets argues that habitat provides the templet on which evolution forges animal life-history strategies. Based on this idea, previous studies have found that floristic structure is strongly correlated with the biogeographic regions identified for amphibians in Europe6 and the Brazilian Atlantic Forest8,10. Under this hypothesis, it is expected that the biogeographical regions of anurans will reflect the distribution of vegetation types within the Amazonia;
Riverine Barrier hypothesis – the major rivers of Amazonia act as geographic barriers to the dispersal of organisms and hamper gene flow between populations, increasing speciation rates1,12,34. Under this hypothesis, it is expected that some anuran species cannot traverse the major rivers in the Amazonia, thus creating different species compositions between opposite banks of the major rivers of Amazonia.
We identified seven biogeographic regions in the Amazonia based on anuran species composition with explained dissimilarity values of 92% and a mean silhouette width of 0.33 (Fig. 1, Table 1). From the seven biogeographic regions observed, three biogeographical regions (BR1, BR2, and BR3) are delimited to the north of the Amazon River, three (BR5, BR6, and BR7) are delimited to the south of the Amazon River and one (BR4) is delimited in the western portion of Amazonia (Fig. 1). The grids in BR4 contain the highest values of species richness, while the grids in BR3, BR6, and BR7 contain the lowest values (Fig. 2A). Based on the range size of species distributions, we observed that BR4 contains anuran species with restricted range sizes while the grids in the BR1 and BR7 contain anuran species with wide range sizes (Fig. 2B).
Dendrogram and Amazonia map depicting the regionalization of anuran dissimilarity into seven biogeographical regions (BR) based on recluster.region algorithm68. Colors used in dendrogram and map are identical. Black lines in the map represent the ten major rivers in Amazonia. Map generated using ESRI ArcMap 9.2. https://www.esri.com.
Table 1 Values of the mean silhouette width (Silh) and the explained dissimilarity (ex.diss) for all the clustering solutions.
Gradients of (A) anuran species richness, and (B) mean range-size of anuran species occurring in each grid of the Amazonia. Black lines are delimiting the seven biogeographical regions of anuran identified in Fig. 1. Maps generated using ESRI ArcMap 9.2. https://www.esri.com/.
Among all models of predictor variables, the model without VEGE.PC1 and TOPO.PC2 was the best one for explaining the cluster patterns (∆ AICc > 6.4; Table 2). This model explains 80% of the cluster patterns (Table 2). The partitions of deviance indicated that the independent effect of riverine barriers accounted for 38% of the variability in the anuran biogeographic regions, followed by climate variables with 16% and topography with 3% (Fig. 3). Vegetation structure has a weak association with anuran biogeographic regions in the Amazonia (Fig. 3).
Table 2 The six most parsimonious multinomial logistic regression models used to investigate the influence of current (CURE.PC1 and CURE.PC2) and Pleistocene (HDT and HDP) climate conditions, topography (TOPO.PC1 and TOPO.PC2), riverine barriers (RIVERS) and vegetation structure (VEGE.PC1 and VEGE.PC2) in explaining the biogeographical regions for anurans in the Amazonia.
Partitioning analysis representing the deviance in the biogeographic regions configurations explained by climate (current + historical difference), topography, riverine barriers and vegetation structure of the Amazonia. C = climate, T = topography, V = vegetation structure, R = riverine barriers.
This is the first study showing that multiple factors shape anuran biogeographical regions in Amazonia. We found that the major rivers in Amazonia contributed most strongly to explaining the variability in anuran biogeographic regions, followed by climate and topography variables. We identified seven biogeographic regions that partially overlap with the eight areas of endemism previously proposed for terrestrial vertebrates in Amazonia14,15. To the north of the Amazon River, we found that parts of BR1, BR2, and BR3 are nested in the area corresponding to Guiana, a biogeographic unit identified by Cracraft14. BR4 and BR5 are partially congruent with the areas of Napo and Inambari, identified by Cracraft14, respectively. However, BR6 and BR7, to the south of the Amazon River, differ from the spatial arrangements of the Rondonia, Tapajós, Xingu, and Belém areas of Cracraft14 and Silva et al.15. There is no single biogeographic solution that is optimal for all taxa1,2,4,7,14,15. For example, Rueda et al.6 found substantial variation in the number of regions considering different taxonomic groups in Europe. Naka20 redefined the boundaries of the Guiana region14 for Amazonian birds using different quantitative methods. Therefore, regionalization patterns depend on the taxonomic group of interest or the clustering methods used to delineate biogeographic units7,35. Because previous studies in Amazonia were performed with the spatial distributions of primate1 and bird14,15,20 species, our results provide new information about the factors associated with the spatial patterns of anuran species distribution in Amazonia.
A cluster analysis based on amphibian distributions recognized the boundaries of Amazonia as one of the four biogeographic regions in South America27. We scaled down the analysis and described the effects of riverine barriers and climatic and topographic variables acting inside Amazonia. Our results showed that the Amazon River separates the biogeographical regions in the north (BR1, BR2, and BR3) from those in the south (BR5, BR6 and BR7), while the Madeira River separates the southeastern biogeographical regions (BR6 and BR7) from that in the southwest (BR5). A longstanding debate exists as to whether the riverine barrier hypothesis has played an important role in shaping present-day species distribution patterns in Amazonia1,14,19,21,33,34,36. Wallace1 defined distinct areas based on primate species composition that were separated by the Amazon, Solimões, Negro, and Madeira rivers. Recently, Dias-Terceiro et al.37 and Moraes et al.34 showed that the Madeira and Tapajós rivers, respectively, are barriers to some amphibian lineages in western and eastern Amazonia. In contrast, Gascon et al.33 did not find a relationship between amphibian species composition and the banks of the Juruá River. Taken together, these results indicate that rivers contribute unequally to the observed patterns of amphibian distribution in the Amazonia. Oliveira et al.21 found similar results for bird distributions and showed that some bird species with low dispersal ability were limited by all major Amazonian rivers, while many other species can apparently cross some rivers. Thus, the barrier effect might be strong for some rivers, such as the Amazon and Madeira, but other rivers might not be effective barriers. We still lack a consensus on why different rivers are barriers to some species of mammals, birds, and amphibians but not others. To improve our understanding, we must consider life-history traits, dispersal ability, and phylogenetic relationships, which are undoubtedly important factors related to the patterns of species distributions21,34. However, considering that over 2,000 new species of plants and vertebrates have been described since 199912, much of this information is currently lacking for most species in the Amazonia.
Climate and topographic variables explained the second and third highest percentages of variance in the distribution of biogeographic regions, respectively. This result agrees with previous studies that defined biogeographic regions for amphibians in South America27, Europe6, the Atlantic Forest8 and at a global scale4. Amazonia has a well-defined climate gradient, with southeastern areas presenting a warmer and more seasonal climate than northwestern areas11,18. This pattern is associated with the orography of the northwestern areas, which contain the highest elevations in the Amazonia. We found that most small-ranged anuran species inhabit biogeographic regions with high elevations and humidity. Mountains affect species richness by fostering the diversification of unique lineages and by acting as natural barriers to species with limited dispersal ability38,39. The distribution of amphibian species richness is usually associated with physiological constraints that reflect differences in tolerance to precipitation and temperature38,40,41,42. For example, Da Silva et al.26 found that humidity-related variables are key environmental factors related to both the richness of reproductive modes and anuran phylogenetic diversity in the Brazilian Atlantic Forest. Variation in climatic and orographic variables seems to influence the speciation, extinction, and dispersal rates of anuran species throughout Amazonia43,44. Therefore, in contrast to Ficetola et al.4, who found that continental drift, climate differences, and mountain chains interact to determine the boundaries of biogeographic regions at the global scale, we highlight an important role for climatic and orographic variables in shaping anuran distributions at an intermediate scale.
Previous studies have found that vegetation structure is an important factor related to biogeographical regions for amphibians6,8. In contrast, we found that vegetation types have a weak association with biogeographical regions. According to Charity et al.12, moist forest is the dominant vegetation type in the Amazonia, covering nearly 80 per cent of the biome; other vegetation types include flooded and swamp forests (3.9 per cent), deciduous forest (1.4 per cent), savannah (6.8 per cent) and others (1.1 per cent). At broad scales, this homogenization of vegetation decreases the importance of vegetation structure in explaining the distribution of biogeographical regions. However, this is not the case at finer scales. For example, Gascon et al.33 found that flooded versus upland forest is an important predictor of community similarity in amphibian species composition at the Juruá River. Islands of savannah of varying size occurring within the Amazonia biome are home to unique flora and fauna, including numerous endemics. Nonetheless, Amazonian savannahs are little known, highly threatened, and under-protected45. Thus, vegetation structure might be important for the distribution of biodiversity and for conservation purposes when evaluating biogeographical units at finer resolutions.
For the first time, BR1 appears as a biogeographic region in the central part of Amazonia. One possible explanation for the identification of BR1 is that it is a biogeographical transition zone, representing a geographical area of species overlap, with a gradient of replacement and partial segregation between anuran species from neighboring biogeographic regions creating a distinct species composition46,47. A biogeographical transition zone is an area where historical and ecological changes allow both the mixture and the co-occurrence of species from two or more biogeographical regions46. For example, the boundaries of BR1 are in contact with those of six biogeographic regions. If BR1 shares some anuran species with each of the six neighboring biogeographic regions, its identification as a biogeographic transition zone is valid. However, our knowledge of biodiversity distribution is far from complete, and the geographical distribution of species already described is also fragmentary (i.e. the Wallacean shortfall48). We are aware that the accuracy of amphibian range maps is not without criticism, mainly in megadiverse tropical regions such as Amazonia49. Thus, the identification of BR1 could also be an artefact of limited knowledge about anuran distributions49. For example, Naka20 found a single area of endemism for 85 avian species in the Guiana Shield that coincides with part of our BR1 and BR2 boundaries. This area of endemism is bounded by the Amazon River to the south, the lower Negro River to the south-west, and the Branco River to the west20. The remaining part of BR1 is congruent with the area of Imeri identified by Cracraft14. That area of endemism is bounded by the Negro River to the north-east and the Japurá River to the south-west14. Therefore, future studies with more accurate information on anuran distributions in Amazonia will be able to answer whether BR1 is a valid biogeographic region or an artefact of limited current datasets.
Biogeographical regionalization provides a framework for addressing the evolutionary and ecological processes that underlie present-day distributions, and several studies have used such regions as templates to test areas of endemism, evaluate historical relationships among areas, delimit regional species pools, and investigate macroecological patterns5,7,9,31,47. Understanding the occurrence of different species in particular geographical areas permits the identification of patterns that can be the starting point in conservation biogeography50,51. For example, the frog-killing fungus Batrachochytrium dendrobatidis has been linked to extirpations and extinctions of amphibian species on several continents52, and one of the main hypotheses explaining this decline involves the side effects of climate change53,54. Becker et al.55 found an increase in Batrachochytrium dendrobatidis-positive samples in southwestern Amazonia, coinciding with reported amphibian declines in neighboring high-elevation sites on the Andean slopes of Peru. Considering that the pathogen thrives in cool, moist environments in high-elevation tropical rainforests, our results indicate that anuran species occurring in BR4 would be the most susceptible to Batrachochytrium dendrobatidis expansion, and anuran populations in this region should be carefully monitored.
Currently, the integrity of the Amazonia is under pressure from dam construction, deforestation, climate change and unsustainable economic activities12,56,57. For example, large dam construction could not only block movements that connect anuran populations, but also result in the loss of terrestrial habitats by flooding indigenous lands and conservation units that protect several endemic and undescribed species56,58. Based on the predictions of Latrubesse et al.57, if the planned dams are constructed in the Amazon basin, BR4 and BR5 will be the most impacted biogeographic regions. These regions harbor the highest anuran species richness, with most species showing a restricted range-size distribution. Furthermore, future projections indicate that agricultural expansion and climate variability will change regional precipitation patterns in Amazonia11,59,60. Sorribas et al.60 projected a decrease in river discharge for eastern basins and a decrease in inundation in central and lower Amazonia. These projections are worrisome because most of these changes will occur with the replacement of tropical forest by seasonal forest and tropical savanna59. The likelihood of "savannization" of parts of Amazonia could favor the invasion of these altered areas by anuran species from the Cerrado that are more resistant to desiccation and have more generalized reproductive modes61. Taken together, these pressures could threaten the integrity of the ecosystem and alter the patterns of species distribution.
Species distribution data
We downloaded range maps for all species of anurans recorded in the Amazonia region from the IUCN version 2015.262. Then, we overlaid the range maps onto a grid of 50 × 50 km cells to generate a presence–absence matrix and determine the number of species per grid cell. We considered the extent of the Amazonia region based on the delimitation of Cracraft14, subsequently modified by Silva et al.15. We excluded all species from other biomes (e.g. Cerrado) with marginal occurrences inside the Amazonia region. In the end, a total of 577 anuran species were considered for the regionalization process (see Appendix S1 in Supporting Information). We standardized the nomenclature of anuran species following the Amphibian Species of the World (Frost)63.
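The overlay step can be reproduced with standard spatial tools. The following is a minimal R sketch, not the authors' script: the file names, the projected CRS and the `binomial` species column are assumptions made for illustration.

```r
# Minimal sketch of building the presence-absence matrix (PAM):
# overlay IUCN range polygons on a 50 x 50 km grid covering Amazonia.
library(sf)

ranges <- st_read("anura_amazonia_ranges.shp")   # hypothetical IUCN shapefile
amazon <- st_read("amazonia_limit.shp")          # hypothetical biome boundary

ranges <- st_transform(ranges, 29101)            # a projected CRS in metres (illustrative choice)
amazon <- st_transform(amazon, 29101)

grid <- st_make_grid(amazon, cellsize = 50000)   # 50 x 50 km square cells
grid <- grid[lengths(st_intersects(grid, amazon)) > 0]   # keep cells touching the biome

# Rows = grid cells, columns = species; 1 when a range polygon overlaps the cell
hits    <- st_intersects(grid, ranges)
species <- sort(unique(ranges$binomial))
pam <- t(vapply(hits,
                function(i) as.integer(species %in% ranges$binomial[i]),
                integer(length(species))))
colnames(pam) <- species
dim(pam)   # number of cells x number of species (577 species in the full dataset)
```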
We are aware that biogeographical inferences are affected by incomplete taxonomic and distributional knowledge7,64. Although the IUCN anuran maps might include either over- or underpredictions mainly in megadiverse tropical regions49, range maps have been used to investigate amphibian regionalization across a range of spatial scales4,6,8. Furthermore, from a macroecological perspective, range maps have performed very well at resolutions greater than 50 × 50 km65. However, to understand the effects of anuran species that were described recently or whose range size distribution is underpredicted, we also analyzed three other datasets excluding from the presence–absence matrix the small-ranged species that occurred in only one (501 species remained in the matrix), two (440 species) and three (418 species) grid cells. Biogeographical regions delimited using the 577 anuran species and the three datasets excluding small-ranged species were similar. Therefore, we will present only the results considering the 577 anuran species (see Appendix S2 in Supporting Information for a discussion about the results).
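Once the presence–absence matrix exists, the sensitivity datasets are a simple column filter. The sketch below reuses the hypothetical `pam` object from the previous sketch; the species counts in the comments are those reported in the text, not outputs of this code.

```r
# Sensitivity datasets: drop small-ranged species occupying few grid cells.
occupancy <- colSums(pam)
pam_drop1 <- pam[, occupancy > 1]   # species in >1 cell  (501 species in the paper)
pam_drop2 <- pam[, occupancy > 2]   # species in >2 cells (440 species in the paper)
pam_drop3 <- pam[, occupancy > 3]   # species in >3 cells (418 species in the paper)
```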
Clustering procedures
We used the recluster.region algorithm66,67 available in the recluster R package68 to identify the biogeographic regions in Amazonia with distinct anuran species compositions. This algorithm calculates the dissimilarity of species compositions between each pair of grid cells using the Simpson index (βsim), which is not affected by variations in species richness:
$$\beta_{\mathrm{sim}} = 1 - \frac{\min(b, c)}{a + \min(b, c)},$$
where component a comprises the total number of species shared by the two grid cells; component b comprises the total number of species that occur in the neighboring grid cell but not in the focal one; and component c comprises the total number of species that occur in the focal grid cell but not in the neighboring one. This index is a desirable choice for regionalization because species replacement is largely influenced by vicariance and endemism phenomena7. Then, we used Ward hierarchical clustering to convert dissimilarity matrices into bifurcated dendrograms69. This method performed better at recognizing regionalization patterns in simulations than other hierarchical clustering methods commonly used for biogeographical analyses67. According to Dapporto et al.66, due to a high frequency of ties and zero values produced by beta-diversity turnover indices, the topology and bootstrap support of dendrograms are affected by the order of areas in the original presence–absence matrix. To avoid these problems, the recluster.region algorithm produces n trees (n = 50 by default) by randomly reordering the areas in the original dissimilarity matrix. Next, the function cuts these trees at different k1–kn levels (i.e. the number of regions to be identified), producing n matrices of areas × cluster membership67. We delimited the maximum number of regions at 50 clusters. Lastly, to identify the number of regions, the function provides the explained dissimilarity2 and the mean silhouette width70 for all the clustering solutions. The explained dissimilarity is the ratio between the sum of the mean dissimilarities among members of different clusters and the sum of all dissimilarities in the matrix; this method maximizes the between-cluster variation relative to the within-cluster variation. According to Holt et al.2, clusters that reach the threshold value of 90% are an appropriate choice for establishing a suitable tree cut. The mean silhouette width measures the strength of any of the partitions of objects from a dissimilarity matrix. This index ranges between −1 and +1, with negative values indicating that cells are probably located in incorrect clusters70. Here, we identified biogeographic regions based on the number of clusters that considerably improved the explained dissimilarity and the mean silhouette width together. To do so, we first found the number of clusters that reached the threshold value of 90% proposed by Holt et al.2, and then delimited the number of clusters at which the mean silhouette value stopped increasing.
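The published analysis used the recluster.region algorithm; as a rough illustration of the underlying computation only, the sketch below (again assuming the hypothetical `pam` matrix) builds the pairwise Simpson turnover matrix in its dissimilarity form, min(b, c)/(a + min(b, c)), applies Ward clustering, and reports the mean silhouette width together with a simple between-/total-dissimilarity ratio as a crude stand-in for the explained dissimilarity.

```r
# Sketch of the core regionalization computation (not the recluster.region code).
library(cluster)   # silhouette()

beta_sim <- function(x) {
  a  <- tcrossprod(x)                       # species shared by each pair of cells
  ri <- rowSums(x)                          # richness of each cell
  b  <- matrix(ri, nrow(x), nrow(x)) - a    # species in the focal cell only
  cc <- t(b)                                # species in the other cell only
  as.dist(pmin(b, cc) / (a + pmin(b, cc)))  # Simpson turnover (dissimilarity form)
}

d   <- beta_sim(pam)
tre <- hclust(d, method = "ward.D2")        # Ward hierarchical clustering

diagnose <- function(k) {
  grp <- cutree(tre, k)
  sil <- mean(silhouette(grp, d)[, "sil_width"])
  m   <- as.matrix(d)
  # crude stand-in for Holt et al.'s explained dissimilarity:
  expl <- mean(m[outer(grp, grp, "!=")]) / mean(m[lower.tri(m)])
  c(k = k, mean_silhouette = sil, explained_dissimilarity = expl)
}
t(sapply(2:15, diagnose))                   # inspect candidate numbers of regions
```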
Predictor variables
To test the potential correlates in the anuran cluster patterns, we obtained current and historical climate data, topographic data, riverine barriers and vegetation structure, which are detailed below:
Current climate variables – the selected climate variables were: i) average annual maximum temperature (AMAXTE); ii) average annual minimum temperature (AMINTE); iii) temperature seasonality (TESE); iv) annual precipitation (APRE); v) precipitation range (PRER); and vi) precipitation seasonality (PRSE). These variables were chosen because they describe a central tendency as well as the variation in the descriptors representing physiological limits or dispersal barriers for anurans6,8,25. These data were downloaded from the WorldClim database at a resolution of 5′ arc-minutes71.
Pleistocene climate variables – we downloaded the values of annual precipitation and annual mean temperature from three models of the Last Glacial Maximum (LGM; CCSM4, MIROC-ESM, MPI-ESM-P) available from the WorldClim database (http://www.worldclim.org/downscaling). Following Moura et al.10, we calculated two historical-difference climate variables: i) the historical difference in annual precipitation (HDP), calculated as the difference between current and LGM annual precipitation; and ii) the historical difference in annual mean temperature (HDT), calculated as the difference between current and LGM annual mean temperature. These two measures indicate the historical variation in water availability and energy input, respectively. In order to cope with the variation among the circulation models, we averaged the grid-cell values across them prior to calculating the historical differences10 (see the sketch after this list).
Topographic variables – for each grid cell, we calculated six measures of topographic heterogeneity based on elevation data (~1 × 1 km resolution) available at https://lta.cr.usgs.gov/GTOPO30. These measures were: i) maximum elevation (TOPOMAX); ii) minimum elevation (TOPOMIN); iii) elevational standard deviation (TOPOSTD); iv) slope range (SLOPERAN); v) slope standard deviation (SLOPESTD); and vi) aspect standard deviation (ASPECTSTD).
Riverine barrier – we categorized the grid cells into different regions based on the banks of the largest rivers in the Amazonia in terms of water discharge72 and previous studies14,17: i) Amazon (mean annual discharge: 209000 m3/s), ii) Orinoco (35000 m3/s), iii) Madeira (32000 m3/s), iv) Negro (28400 m3/s), v) Japurá (18600 m3/s), vi) Tapajós (13500 m3/s), vii) Purus (11000 m3/s), viii) Xingu (9700 m3/s), ix) Ucayali (9544 m3/s), x) Putumayo (8760 m3/s), xi) Tocantins (8440 m3/s) and xii) Rio Branco (1462 m3/s) (Fig. 4). These data were downloaded from the database of USGS at https://www.sciencebase.gov/catalog/item/56814fc2e4b0a04ef492213e.
Distribution of predictor variables used to evaluate the anuran biogeographical regions in the Amazonia. Current climate variables - first axis of principal components analyses (PCA) with precipitation and temperature variables (AMAXTE, AMINTE, TESE, APRE, PRER and PRSE); Historical difference precipitation (HDP) - difference between current and Last Glacial Maximum (LGM) annual precipitation; Historical difference temperature (HDT) - difference between current and LGM annual mean temperature; Topographic variables - first axis of PCA with elevation and slope variables (TOPOMAX, TOPOMIN, TOPOSTD, SLOPERAN, SLOPESTD and ASPECTSTD); Vegetation structure – Amazonia ecoregions based on the classification of Olson et al.9; and Riverine barriers - classification of grids based on the banks of ten major rivers in Amazonia. Maps generated using ESRI ArcMap 9.2. https://www.esri.com/.
Vegetation structure – we used the classification of Olson et al.9 to determine the percentage of vegetation type covering each grid (Fig. 4). The main vegetation types observed were moist forest, dry forest, varzea, mangrove and montane.
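As referenced in the Pleistocene climate item above, the derived predictor layers can be illustrated with simple raster arithmetic. The sketch below is an assumption-laden outline rather than the authors' processing chain: file names are hypothetical, the layers are assumed to share grid and extent, and the aggregation factor is only illustrative.

```r
# Sketch of derived predictors: historical climate differences (HDP, HDT)
# and topographic heterogeneity summarised per cell.
library(raster)

## Historical differences: current minus the mean of the three LGM models
cur_prec <- raster("current_annual_prec.tif")
lgm_prec <- stack("ccsm4_prec.tif", "miroc_esm_prec.tif", "mpi_esm_p_prec.tif")
HDP <- cur_prec - mean(lgm_prec)     # assumes identical resolution and extent

cur_temp <- raster("current_annual_mean_temp.tif")
lgm_temp <- stack("ccsm4_temp.tif", "miroc_esm_temp.tif", "mpi_esm_p_temp.tif")
HDT <- cur_temp - mean(lgm_temp)

## Topographic heterogeneity from a ~1 km DEM, summarised to coarser cells
dem    <- raster("gtopo30_amazonia.tif")
slope  <- terrain(dem, opt = "slope")
aspect <- terrain(dem, opt = "aspect")

fact <- 50                           # ~1 km cells -> ~50 km blocks (illustrative factor)
TOPOMAX   <- aggregate(dem,    fact, fun = max)
TOPOMIN   <- aggregate(dem,    fact, fun = min)
TOPOSTD   <- aggregate(dem,    fact, fun = sd)
SLOPERAN  <- aggregate(slope,  fact, fun = max) - aggregate(slope, fact, fun = min)
SLOPESTD  <- aggregate(slope,  fact, fun = sd)
ASPECTSTD <- aggregate(aspect, fact, fun = sd)
```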
Correlates of biogeographical regions
To reduce the dimensionality and number of correlations between variables in our database, we performed three separate principal components analyses (PCA), a first one with the set of current climate variables (AMAXTE, AMINTE, TESE, APRE, PRER and PRSE), a second one with the set of topographic variables (TOPOMAX, TOPOMIN, TOPOSTD, SLOPERAN, SLOPESTD and ASPECTSTD) and a final one with the percentage of each vegetation type. Therefore, for the subsequent analysis, we used nine variables: i) the first two axes from the current climate variables (CURE.PC1 and CURE.PC2), ii) the first two axes from the topographic variables (TOPO.PC1 and TOPO.PC2), iii) the first two axes from the vegetation structure (VEGE.PC1 and VEGE.PC2), iv) two Pleistocene climate variations (HDP and HDT), and v) the classification of grids based on the banks of eight major rivers. We also evaluated the correlation between original environmental variables and the first two axes of the three PCAs using significance tests of Pearson correlation coefficients (see Appendix S3 in Supporting Information).
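A compact way to picture this dimensionality-reduction step is a scaled PCA per variable set. The sketch below assumes a hypothetical data frame `grid_data` holding one row per 50 × 50 km cell, with the raw variables as columns; the vegetation column names are likewise assumptions.

```r
# Sketch of the three separate PCAs (climate, topography, vegetation percentages).
clim_vars <- c("AMAXTE", "AMINTE", "TESE", "APRE", "PRER", "PRSE")
topo_vars <- c("TOPOMAX", "TOPOMIN", "TOPOSTD", "SLOPERAN", "SLOPESTD", "ASPECTSTD")
vege_vars <- c("moist", "dry", "varzea", "mangrove", "montane")   # % cover (assumed names)

pca_clim <- prcomp(grid_data[, clim_vars], scale. = TRUE)
pca_topo <- prcomp(grid_data[, topo_vars], scale. = TRUE)
pca_vege <- prcomp(grid_data[, vege_vars], scale. = TRUE)

grid_data$CURE.PC1 <- pca_clim$x[, 1];  grid_data$CURE.PC2 <- pca_clim$x[, 2]
grid_data$TOPO.PC1 <- pca_topo$x[, 1];  grid_data$TOPO.PC2 <- pca_topo$x[, 2]
grid_data$VEGE.PC1 <- pca_vege$x[, 1];  grid_data$VEGE.PC2 <- pca_vege$x[, 2]

# Correlations of the original variables with the retained axes
cor(grid_data[, clim_vars], pca_clim$x[, 1:2])
```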
We used multinomial logistic regression models to investigate the influence of predictor variables in explaining the anuran biogeographic regions8,10. To determine the optimal model related to biogeographical regions, we started with a full model containing all explanatory variables. Then we generated sub-model sets from the full model using the dredge function implemented in the MuMIn package73. We used Akaike's information criterion corrected for small sample sizes (AICc74) to determine the optimal model. The AICc is calculated for each model from its log-likelihood and the number of parameters, and the model with the lowest AICc is judged to be the best of the candidate models74. Furthermore, to evaluate model selection uncertainty, we used Akaike weights (ω), which express the likelihood of each model given the data and the set of candidate models. Finally, we used variation partitioning analysis75 to partition the total percentage of variation into unique contributions of the sets of predictors of the best model.
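The model-selection step can be outlined with the nnet and MuMIn packages. The following is a sketch under the assumption that `grid_data` now holds the region labels (e.g. from the clustering sketch) and the predictor columns named in the text; it is not a reproduction of the published script.

```r
# Sketch of multinomial model selection by AICc, plus the deviance-partitioning idea.
library(nnet)    # multinom()
library(MuMIn)   # dredge(), AICc(), get.models()

grid_data$region <- factor(cutree(tre, k = 7))   # seven regions from the clustering sketch
grid_data$RIVERS <- factor(grid_data$RIVERS)     # river-bank block of each cell (assumed column)

full <- multinom(region ~ CURE.PC1 + CURE.PC2 + HDP + HDT +
                   TOPO.PC1 + TOPO.PC2 + VEGE.PC1 + VEGE.PC2 + RIVERS,
                 data = grid_data, na.action = na.fail, trace = FALSE)

mods <- dredge(full)                 # all-subsets model set ranked by AICc
best <- get.models(mods, subset = 1)[[1]]
AICc(best)

# Deviance partitioning (conceptually): compare the deviance explained by the best
# model with that of models dropping one predictor block at a time (climate,
# topography, rivers, vegetation) to isolate each block's unique contribution.
```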
All analyses were performed with R 3.2.3 software76.
Data accessibility statement
All data were gathered on public databases that are available on-line.
Wallace, A. R. On the monkeys of the Amazon. Proc. Zool. Soc. Lond. 20, 107–110 (1852).
Holt, B. G. et al. An update of Wallace's zoogeographic regions of the world. Science 339, 74–78, https://doi.org/10.1126/science.1228282 (2013).
Whittaker, R. J., Riddle, B. R., Hawkins, B. A. & Ladle, R. J. The geographical distribution of life and the problem of regionalization: 100 years after Alfred Russel Wallace. J. Biogeogr. 40, 2209–2214, https://doi.org/10.1111/jbi.12235 (2013).
Ficetola, G. F., Mazel, F., & Thuiller, W. Global determinants of zoogeographical boundaries. Nat. Ecol. Evol. 1; https://doi.org/10.1038/s41559-017-0089 (2017).
Mackey, B. G., Berry, S. L. & Brown, T. Reconciling approaches to biogeographical regionalization: a systematic and generic framework examined with a case study of the Australian continent. J. Biogeogr. 35, 213–229, https://doi.org/10.1111/j.1365-2699.2007.01822.x (2008).
Rueda, M., Rodriguez, M. A. & Hawkins, B. A. Towards a biogeographic regionalization of the European biota. J. Biogeogr. 37, 2067–2076, https://doi.org/10.1111/j.1365-2699.2010.02388.x (2010).
Kreft, H. & Jetz, W. A framework for delineating biogeographical regions based on species distribution. J. Biogeogr. 37, 2029–2053, https://doi.org/10.1111/j.1365-2699.2010.02375.x (2010).
Vasconcelos, T. S., Prado, V. H. M., da Silva, F. R. & Haddad, C. F. B. Biogeographic distribution patterns and their correlates in the diverse frog fauna of the Atlantic Forest Hotspot. PlosOne 9, 1–9, https://doi.org/10.1371/journal.pone.0104130 (2014).
Olson, D. M. et al. Terrestrial ecoregions of the world: a new map of life on Earth. BioScience 51, 933–938, https://doi.org/10.1641/0006-3568(2001)051[0933:TEOTWA]2.0.CO;2 (2001).
Moura, M. R., Argôlo, A. J. & Costa, H. C. Historical and contemporary correlates of snake biogeographical subregions in the Atlantic Forest hotspot. J. Biogeogr. 44, 640–650, https://doi.org/10.1111/jbi.12900 (2017).
Davidson, E. A. et al. The Amazon basin in transition. Nature 481, 321–328, https://doi.org/10.1038/nature10717 (2012).
Charity, S., Dudley, N., Oliveira, D. & Stolton, S. Living amazon report 2016: A regional approach to conservation in the Amazon. WWF Living Amazon Initiative, Brasília and Quito (2016).
Haffer, J. Distribution of Amazon birds. Bonn. Zool. Bull. 29, 38–78 (1978).
Cracraft, J. Historical biogeography and patterns of differentiation within the South American avifauna: areas of endemism. Ornithol. Monogr. 36, 49–84 (1985).
Silva, J. M. C., Novaes, F. C. & Oren, D. C. Differentiation of Xiphocolaptes (Dendrocolaptidae) across the river Xingu, Brazilian Amazonia: recognition of a new phylogenetic species and biogeographic implications. Bull. Br. Orn. Club 122, 185–194 (2002).
Haffer, J. Speciation in Amazonian forest birds. Science 165, 131–137 (1969).
Haffer, J. Hypotheses to explain the origin of species in Amazonia. Braz. J. Biol. 68, 917–947 (2008).
Hoorn, C. et al. Amazonia through time: Andean uplift, climate change, landscape evolution, and biodiversity. Science 330, 927–931, https://doi.org/10.1126/science.1194585 (2010).
Smith, B. T. et al. The drivers of tropical speciation. Nature 515, 406–409, https://doi.org/10.1038/nature13687 (2014).
Naka, L. N. Avian distribution patterns in the Guiana Shield: implications for the delimitation of Amazonian areas of endemism. J. Biogeogr. 38, 681–696, https://doi.org/10.1111/j.1365-2699.2010.02443.x (2011).
Oliveira, U., Vasconcelos, M. F., & Santos, A. J. Biogeography of Amazon birds: rivers limit species composition, but not areas of endemism. Sci. Rep. 7, https://doi.org/10.1038/s41598-017-03098-w (2017).
Catenazzi, A. State of the Wold's amphibians. Annu. Rev. Environ. Resour. 40, 91–119, https://doi.org/10.1146/annurev-environ-102014-021358 (2015).
Jenkins, C. N., Pimm, S. L. & Joppa, L. N. Global patterns of terrestrial vertebrate diversity and conservation. Proc. Natl. Acad. Sci. USA 110, E2602–E2061, https://doi.org/10.1073/pnas.1302251110 (2013).
Azevedo-Ramos, C. & Galatti, U. Patterns of amphibian diversity in Brazilian Amazonia: conservation implications. Biol. Conserv. 103, 103–111, https://doi.org/10.1016/S0006-3207(01)00129-X (2002).
da Silva, F. R., Almeida-Neto, M., Prado, V. H. M., Haddad, C. F. B. & Rossa-Feres, D. C. Humidity levels drive reproductive modes and phylogenetic diversity of amphibians in the Brazilian Atlantic Forest. J. Biogeogr. 39, 1720–1732, https://doi.org/10.1111/j.1365-2699.2012.02726.x (2012).
Pimm, S. L. et al. The biodiversity of species and their rates of extinction, distribution and protection. Science 344, 1–10, https://doi.org/10.1126/science.1246752 (2014).
Vasconcelos, T. S., Rodríguez, M. A. & Hawkins, B. A. Biogeographic distribution patterns of South American amphibians: a regionalization based on cluster analysis. Nat. Conservacao 9, 67–72, https://doi.org/10.4322/natcon.2011.008 (2011).
Araújo, M. B. et al. Quaternary climate changes explain diversity among reptiles and amphibians. Ecography 31, 8–15, https://doi.org/10.1111/j.2007.0906-7590.05318.x (2008).
Carnaval, A. C., Hickerson, M. J., Haddad, C. F. B., Rodrigues, M. T. & Moritz, C. Stability predicts genetic diversity in the Brazilian Atlantic Forest hotspot. Science 323, 785–789, https://doi.org/10.1126/science.1166955 (2009).
Antonelli, A. et al. In Amazonia, landscape and species evolution: a look into the past (eds Hoorn, C. & Wesselingh, E. P.) 386–404 (Wiley-Blackwell, 2010).
Cornell, H. V. & Harrison, S. P. What are species pools and when are they important? Annu. Rev. Ecol. Evol. Syst. 45, 45–67, https://doi.org/10.1146/annurev-ecolsys-120213-091759 (2014).
Carstensen, D. W., Lessard, J.-P., Holt, B. G., Borregaard, M. K. & Rahbek, C. Introducing the biogeographic species pool. Ecography 36, 1–9, https://doi.org/10.1111/j.1600-0587.2013.00329.x (2013).
Gascon, C. et al. Riverine barriers and the geographic distribution of Amazonian species. Proc. Natl. Acad. Sci. USA 97, 13672–13677, https://doi.org/10.1073/pnas.230136397 (2000).
MBCG thanks Coordenação de Aperfeiçoamento de Pessoal do Nível Superior (CAPES) for master fellowship. FRdaS thanks São Paulo Research Foundation (FAPESP, 2013/50714-0) for financial support.
Programa de Pós-Graduação em Biologia Animal, Universidade Estadual Paulista Júlio de Mesquita Filho – UNESP, campus de São José do Rio Preto, São Paulo, 15054-000, Brazil
Marcela Brasil de Castro Godinho
Laboratório de Ecologia Teórica: Integrando Tempo, Biologia e Espaço (LET.IT.BE), Departamento de Ciências Ambientais, Universidade Federal de São Carlos, campus Sorocaba, São Paulo, 18052-780, Brazil
Fernando Rodrigues da Silva
F.R.S. conceived the idea; M.B.C.G. gathered the data; M.B.C.G. and F.R.S analysed the data and led the writing.
Correspondence to Fernando Rodrigues da Silva.
Godinho, M.B.d.C., da Silva, F.R. The influence of riverine barriers, climate, and topography on the biogeographic regionalization of Amazonian anurans. Sci Rep 8, 3427 (2018). https://doi.org/10.1038/s41598-018-21879-9
Received: 30 October 2017
Nano Express | Open | Published: 09 May 2019
Comparative Study of the Antimicrobial Effect of Nanocomposites and Composite Based on Poly(butylene adipate-co-terephthalate) Using Cu and Cu/Cu2O Nanoparticles and CuSO4
A. F. Jaramillo1,
S. A. Riquelme2,
G. Sánchez-Sanhueza3,
C. Medina4,
F. Solís-Pomar5,
D. Rojas6,
C. Montalba7,
M. F. Melendrez6 &
E. Pérez-Tijerina5
Nanoscale Research Letters, volume 14, Article number: 158 (2019)
Nanocomposites and a composite based on poly(butylene adipate-co-terephthalate) (PBAT) were synthesized using commercial copper nanoparticles (Cu-NPs), copper/cuprous oxide nanoparticles (Cu|Cu2O-NPs), and copper sulfate (CuSO4), respectively. The Cu|Cu2O-NPs were synthesized using chemical reduction and characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The synthesis of Cu|Cu2O-NPs yielded a mixture of Cu and Cu2O, with metal Cu having a spherical morphology of approximately 40 nm in diameter and Cu2O with a diameter of 150 nm. To prepare the nanocomposites (NCs) and the composite material (MC), the NPs and the CuSO4 salt were incorporated into the PBAT matrix in concentrations of 1, 3, and 5% p/p via an ex situ method. Fourier transform infrared spectroscopy (FTIR), a tensile test, differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and agar diffusion assays were used for structural, thermomechanical, and antimicrobial characterization. Results showed that the reinforcements did not modify the chemical structure of the PBAT and only slightly increased the percentage of crystallization. The mechanical and thermal properties of the PBAT did not change much with the addition of fillers, except for a slight increase in tensile strength and thermal stability, respectively. The agar diffusion antimicrobial assays showed that the NCs and MCs had good inhibitory responses against the nonresistant strains Enterococcus faecalis, Streptococcus mutans, and Staphylococcus aureus. The MCs based on CuSO4 had the highest biocidal effect, even against the resistant bacteria Acinetobacter baumannii.
Most plastic materials are produced from fossil fuels and are practically nondegradable, which raises concerns about economic and environmental sustainability [1, 2]. Thus, the development and synthesis of biodegradable materials from alternative sources has received much attention from the scientific community, with the goal of reducing the production of petroleum-based plastics [3,4,5]. Biodegradable polymers have begun to play a fundamental role in addressing these problems as a promising alternative to fossil-fuel-based plastics, together with a new class of materials known as bionanocomposites, which, through nanotechnology, possess improved properties [6,7,8,9,10].
Bionanocomposites consist of an organic matrix in which inorganic nanomaterials are dispersed [8, 11,12,13]. The different morphologies and sizes of the inorganic components, such as nanoparticles, nanotubes, nanosheets, nanowires, and nanoclay, have a considerable effect on the properties of the polymer matrix. The optical, thermal, mechanical, magnetic, and optoelectronic properties are improved because of the synergy between the surface area, high surface reactivity, excellent thermal stability, and high mechanical strength of the inorganic components and the polymer matrix [14,15,16]. A wide range of innovations in polymer chemistry and micro- and nanofabrication techniques have driven research in polymer bionanocomposites, not only for the production of improved structures, but also for the preparation of new functional materials with interesting properties and highly sophisticated applications [17,18,19]. Several biopolymers of natural or synthetic origin, such as polylactic acid (PLA) [20] and poly(butylene-adipate-co-terephthalate) (PBAT), have been widely studied [21, 22].
One polymer that is currently being used as the matrix in nanocomposites is PBAT [23]. This synthetic biopolymer is a linear aliphatic biodegradable polyester based on the monomers 1,4-butanediol, adipic acid, and terephthalic acid in the polymer chain [24]. Its properties are similar to those of low-density polyethylene because of its high molecular weight and long-chain branched molecular structure, which makes it flexible [24,25,26]. The main limitation of PBAT is its poor mechanical strength; however, this disadvantage can be overcome by adding nanosized loads, endowing the material with multifunctional properties such as improved thermomechanical performance [6, 27].
Currently, there is also an urgent need to develop bionanocomposites that can control or prevent microbial colonization, either by incorporating nanoparticles with known antibacterial activity into the polymer matrix or by enhancing the antibacterial properties the matrix already possesses. In the latter case, the substantial improvement in the biocidal capacity of the polymer matrix has been associated with the synergy between the two components of the bionanocomposite [28, 29]. Therefore, the polymer not only provides a support matrix for the nanoparticles, but can also improve the antibacterial performance and extend the possible applications of the bionanocomposite to meet various requirements for biomedical applications or medical devices such as endotracheal tubes and vascular and urinary catheters [30,31,32]. However, the use of PBAT in medical devices has not been studied extensively; only a few articles have reported the possibility of its use in some clinical applications [1].
Several investigations have reported the use of metal nanoparticles as an antimicrobial agent. The intrinsic biological property of these materials depends on several factors such as the metal involved, particle size, structure, and surface area. All possible combinations of these factors can delay antibacterial resistance [33]. Most antimicrobial studies of nanocomposites have focused on food packaging, and the biocidal activity has always targeted the same bacteria. It is not certain if the bacteria become resistant to the biocidal nanoparticles in the same way they do to drugs. Thus, one of the objectives of this work was to evaluate the antimicrobial activity of nanocomposites containing PBAT with different concentrations of Cu-NPs for potential use in the manufacture of dental implements. In addition, we performed a complete comparative study on the thermomechanical and antimicrobial properties of PBAT-based materials. PBAT nanocomposites were prepared with Cu nanoparticles at three different concentrations. Similarly, nanocomposites were prepared using Cu|Cu2O-NPs as load. Finally, a CuSO4-based composite material was prepared at the same concentrations used to prepare the nanocomposites. The biocidal activity of the nanocomposites and the PBAT composite was evaluated against Staphylococcus aureus, which is responsible for cutaneous infections such as folliculitis, furunculosis, and conjunctivitis; Streptococcus mutans, which is partly responsible for dental plaque and dental biofilm; and Enterococcus faecalis and Acinetobacter baumannii, which can cause infections that compromise humans, especially in the hospital environment.
PBAT (Ecoflex) used for the preparation of nanocomposites was supplied by BASF (Ludwigshafen, Germany). Its molecular structure is shown in Additional file 1: Figure S1 (supplementary material). The 99.99% pure metal Cu nanoparticles (Sigma-Aldrich, St. Louis, MO, USA) were between 100 and 200 nm in diameter. For the synthesis of the Cu|Cu2O-NPs, CuSO4 was used as a precursor, ascorbic acid (C6H8O6) as a reducing agent, and sodium hydroxide (NaOH) as a pH controller. In addition, CuSO4 (Sigma-Aldrich) was used to prepare the composite material.
Synthesis of Nanoparticles by Chemical Reduction
A synthesis method proposed by Khan et al. [34] was used to obtain Cu|Cu2O-NPs. The synthesis started by dissolving CuSO4 × 5H2O in distilled water to obtain 120 mL of 0.1 M solution. Next, the 120 mL of CuSO4 was added to a flask immersed in a propylene glycol bath, followed by rapidly adding 50 mL of C6H8O6 solution. The mixture was vigorously stirred at approximately 390 rpm for 30 min while the temperature was increased to 80 °C, upon which 30 mL of NaOH solution was added dropwise and the solution was continuously agitated for 2 h. The final solution was allowed to settle overnight, and then, the supernatant liquid was removed. The concentrate was centrifuged and washed with distilled water and ethanol. Finally, the particles were dispersed using ultrasound equipment, placed in Petri dishes, and oven-dried at 60 °C overnight (see Additional file 1: Figure S2).
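As a rough check of the reagent quantities implied by this recipe, the following sketch estimates the mass of CuSO4 × 5H2O needed for the 120 mL of 0.1 M precursor solution; the molar mass is a standard literature value, and the calculation is only illustrative of the preparation step described above.

```python
# Illustrative estimate of the CuSO4 x 5H2O mass required for 120 mL of 0.1 M
# precursor solution, as described above. The molar mass is a standard value.

M_CUSO4_5H2O = 249.69   # g/mol, copper(II) sulfate pentahydrate
volume_L = 0.120        # 120 mL of precursor solution
molarity = 0.10         # mol/L

moles = molarity * volume_L
mass_g = moles * M_CUSO4_5H2O
print(f"CuSO4 x 5H2O required: {mass_g:.2f} g")   # ~3.00 g
```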
Nanocomposite Synthesis
To prepare the nanocomposites and the composite material, Cu-NPs, Cu|Cu2O-NPs, and CuSO4 salt were incorporated into the PBAT matrix in concentrations of 1, 3, and 5%. First, the PBAT was melted, and then, the NPs were added and mixed in a torque rheometer (model 835205, Brabender GmbH & Co. KG, Duisburg, Germany) for 7 min at 60 rpm and a work temperature of 140 °C (Additional file 1: Figure S4). The maximum load was 5% because higher loads produced fluorescence effects in the Raman spectra (Additional file 1: Figure S3).
The obtained nanocomposites and composite materials were characterized to study their differences with respect to the PBAT polymer. Likewise, we studied how the different concentrations of Cu-NPs, Cu|Cu2O-NPs, and CuSO4 inside the polymer affected its mechanical, thermal, morphological, structural, and bactericidal properties.
Cu-NPs and Cu|Cu2O-NPs were characterized via X-ray diffraction (XRD) and transmission electron microscopy (TEM). PBAT nanocomposites with Cu-NPs (NCs-PBAT/Cu) and Cu|Cu2O-NPs (NCs-PBAT/Cu|Cu2O) and the PBAT composite material with CuSO4 (MCs-PBAT/CuSO4) were characterized via thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), XRD, tensile tests, and antimicrobial activity assay using agar diffusion. A 100-mm × 100-mm × 1-mm plate-shaped sample of each nanocomposite was prepared so that the samples homogenized in each analysis were the same size. To obtain the plate shape, NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4 were molded using a Labtech hydraulic press (model LP-20B; Labtech Engineering Co., Ltd., Samutprakarn, Thailand) at 160 °C and 110 bars for 5 min. The preheating and cooling times were 15 min and 1 min, respectively (Additional file 1: Figure S4).
Morphological and Structural Properties
To verify the nanometric scale of the nanoparticles and that the synthesized powders were a mixture of Cu and Cu2O nanoparticles, a structural analysis was performed using XRD and a morphological analysis was performed using TEM.
TEM micrographs of Cu|Cu2O-NPs were obtained with a JEM 1200 EX II transmission electron microscope (JEOL, Ltd., Tokyo, Japan) at a voltage of 120 kV. A sample was prepared by placing a drop of nanoparticles diluted in ethanol on a 200-mesh carbon-coated copper grid. In addition, the nanoparticles were analyzed via an electron diffraction pattern.
XRD spectra of the Cu-NPs, Cu|Cu2O-NPs, nanocomposites, and composite material were obtained using a Bruker Endeavor diffractometer (model D4/MAX-B; Bruker, Billerica, MA, USA). The sweep of 2θ was from 4 to 80° with a 0.02° step and counting time of 1 s. The diffractometer was operated at 20 mA and 40 kV with a copper cathode lamp (λ = 1.541 Å).
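To relate the 2θ peak positions reported below to interplanar spacings, Bragg's law can be applied with the Cu Kα wavelength given above (λ = 1.541 Å). The short sketch that follows is only illustrative; the peak list is taken from the values discussed for the Cu|Cu2O-NPs in the Results section.

```python
import math

WAVELENGTH_A = 1.541  # Cu K-alpha wavelength in angstroms, as stated above

def d_spacing(two_theta_deg: float, n: int = 1) -> float:
    """Interplanar spacing from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * WAVELENGTH_A / (2.0 * math.sin(theta))

# Peak positions (degrees 2-theta) reported later for the Cu|Cu2O mixture
for two_theta in (36.3, 42.17, 43.42, 50.63, 61.47, 74.37):
    print(f"2theta = {two_theta:6.2f} deg -> d = {d_spacing(two_theta):.3f} A")
```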
FTIR spectra of the nanocomposites were obtained using a Spectrum Two FTIR spectrometer (× 1720) (PerkinElmer, Waltham, MA, USA) with the attenuated total reflection (ATR) function. Each spectrum was obtained by consecutive scans in the range of 4000–500 cm−1 with a resolution of 1 cm−1.
Mechanical Properties (Tensile Test)
Tensile tests, based on the ASTM D638 standard, were carried out on a smarTens universal testing machine (005 model; Emmeram Karg Industrietechnik, Krailling, Germany) at a test speed of 50 mm/min and a load cell of 1 kN. The V-type specimens were manufactured by compression at molding temperatures of 160 °C. The preheating, pressing, and cooling times were 7, 5, and 1 min, respectively. Five samples of each NC and MC under study were manufactured, and the tensile strength, ultimate elongation percentage, and modulus were obtained.
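A minimal sketch of how the reported tensile quantities can be extracted from raw load–displacement data is given below. The specimen cross-section and gauge length used here are nominal ASTM D638 type V values inserted as placeholders (the 1-mm thickness is an assumption), so the numbers are illustrative rather than the values used in the tests.

```python
import numpy as np

# Nominal ASTM D638 type V geometry used as placeholders; the 1-mm thickness
# is an assumption, not a measured specimen dimension.
cross_section_mm2 = 1.0 * 3.18   # thickness x width of the narrow section, mm^2
gauge_length_mm = 7.62           # nominal gauge length, mm

def tensile_metrics(force_N, displacement_mm):
    """Engineering stress/strain, tensile strength, elongation at break, and a
    modulus estimated by a linear fit over the initial part of the curve."""
    stress = np.asarray(force_N, dtype=float) / cross_section_mm2        # MPa
    strain = np.asarray(displacement_mm, dtype=float) / gauge_length_mm  # dimensionless
    tensile_strength = stress.max()
    elongation_at_break_pct = strain[-1] * 100.0
    mask = strain <= 0.02                       # initial quasi-linear region
    modulus = np.polyfit(strain[mask], stress[mask], 1)[0] if mask.sum() > 2 else float("nan")
    return tensile_strength, elongation_at_break_pct, modulus
```

A routine of this kind, applied to the five specimens tested for each material, would produce averages of the type summarized in Table 2.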
TGA was carried out using a TG 209 FI Iris® thermo-microbalance (NETZSCH-Gerätebau GmbH, Selb, Germany). The samples, ranging from 3 to 10 mg, were placed in aluminum crucibles, which were then loaded into the instrument. The mass change as a function of temperature was measured by heating the samples from 20 to 600 °C at a rate of 10 °C/min under a N2 atmosphere.
DSC analysis was performed using a NETZSCH differential scanning calorimeter (DSC 204 F1 model). Nanocomposite samples (5–10 mg) were placed in sealed aluminum crucibles, which were heated from 25 to 200 °C at a rate of 10 °C/min under a constant N2 flow rate of 20 mL/min. The melting temperature (Tm) was obtained from this DSC analysis.
Antimicrobial Activity Assays of the NCs and MC Using Agar Diffusion
The antibacterial activity of the nanocomposites and composite material based on Cu-NPs, Cu|Cu2O-NPs, and CuSO4 was determined using the diffusion growth kinetics method in agar. The analysis was carried out in two stages following the protocol of Jaramillo et al. [35]. Four strains of bacteria were used: two clinical strains, A. baumannii (ABA 538) isolated from an intrahospital infection and E. faecalis (6.4) from an oral infection, and two collection strains, S. aureus (ATCC) and S. mutans (ATCC 25175).
The first stage consisted of a qualitative evaluation of antibacterial activity to select which of the three concentrations of nanocomposites and composite material to use to perform the quantitative tests to reduce the experimental design because using three load concentrations would be very expensive. After the evaluation tests, the sample with the load percentage that showed the best contact inhibition was selected. To perform the qualitative tests, A. baumannii (ABA 538), E. faecalis (6.4), S. aureus (ATCC), and S. mutans (ATCC 25175) were separately seeded on a trypticase soy agar (TSA) and incubated overnight at 37 °C. After culturing, a well-isolated colony was selected and transferred to a tube containing 4–5 mL of TSA broth using an inoculating loop. The broth was incubated again overnight at 37 °C until it reached or exceeded the turbidity of 0.5 on the McFarland scale. The turbidity of the inoculum then was adjusted with saline solution up to 0.5 on the McFarland scale using a turbidimeter. The prepared suspension contained approximately 1 × 10⁸ CFU/mL, which was diluted to 1:10 to obtain a final inoculum concentration of 10⁷ CFU/mL. TSA plates were seeded uniformly with each inoculum. Then, sheets (10 × 10 mm²) of the nanocomposites and composite material at concentrations of 1, 3, and 5%, plus a PBAT control, were placed on the surface of the TSA plates and checked to make sure that they adhered well. Finally, the plates were placed in an oven and incubated at 37 °C for 24 h to observe the inhibition of the PBAT samples.
The second stage of the growth kinetics method consisted of quantitative tests performed on only those nanocomposites and composite material where contact inhibition was evident in the qualitative test. To maintain sterility, the tests were carried out using a 1200 Series Type A2 Biological Safety Cabinet (ThermoFisher Scientific, Waltham, MA, USA). First, the samples were preconditioned by placing them inside sterile Petri dishes and bringing them to the biosafety cabinet where they were exposed to UV light for 15 min on each side. Next, 24-h bacterial cultures of each strain were adjusted to a turbidity of 0.5 on the McFarland scale to subsequently create six serial dilutions (1, 2, 3, 4, 5, and 6). An initial count was performed on dilutions 4, 5, and 6 (in triplicate) to determine the count at time zero.
Wet chambers, one for each evaluation time (2, 4, 6, and 8 h) and for each strain, were prepared by placing sterile gauze moistened with sterile distilled water into sterile Petri dishes. Then, a sterile slide was placed inside each wet chamber such that the upper side did not touch the wet gauze. Next, three 1 × 1-cm2 sheets of the nanocomposites and composite material, and PBAT sheets as controls, were placed in the chambers with the help of a sterile clamp. Dilution (20 μL) was deposited on each square sheet, and the chambers were incubated at 37 °C for 2, 4, 6, and 8 h.
After incubation, the wet chambers were extracted, and each polymer sheet was deposited inside a Falcon tube with 1 mL of sterile distilled water. The tubes were vortexed for 2–5 min [35]. Three dilutions were made from the product in the Falcon tubes. Petri dishes containing TSA were divided into four parts. Approximately three to five drops (corresponding to 20 μL) of each of the three dilutions and one drop of the undiluted Falcon tube contents were placed in the quadrants. The agar plates had to be completely dry so that the drops were absorbed almost instantaneously. The plates were then incubated at 37 °C for 24 h followed by a colony count with a colony counter. The data obtained were multiplied by the dilution factor used and plotted in graphs using the logarithm function or survival percentage.
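The back-calculation from drop counts to CFU/mL follows directly from the plated volume and the dilution factor; a minimal sketch is given below, in which the colony count and dilution factor are hypothetical example values, while the 20-μL drop volume is the one quoted in the protocol above.

```python
def cfu_per_ml(colonies: int, dilution_factor: int, drop_volume_ul: float = 20.0) -> float:
    """Back-calculate CFU/mL from a drop-plate colony count.

    colonies        : colonies counted in one drop
    dilution_factor : total dilution of the plated suspension (e.g., 10, 100, 1000)
    drop_volume_ul  : plated volume per drop (20 uL in the protocol above)
    """
    colonies_per_ml = colonies * (1000.0 / drop_volume_ul)   # scale one drop to 1 mL
    return colonies_per_ml * dilution_factor

# Hypothetical example: 42 colonies counted in a 20-uL drop of the 1:100 dilution
print(f"{cfu_per_ml(42, 100):.2e} CFU/mL")   # -> 2.10e+05 CFU/mL
```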
Rheometry is used to obtain dynamic measurements of the rheological properties of nanocomposites under conditions close to the actual conditions under which the nanocomposites were processed. For this, measurements were made to control the changes in viscosity during melt mixing. The results of these measurements are shown in Additional file 1: Figure S5. The increase in the motor torque is related to the melting viscosity of the polymer [21, 36], and the values start to be constant after 4 min of mixing. This confirms that the mixing time of 7 min established in this work was enough to achieve complete mixing.
The torque values for the PBAT and NCs-PBAT/Cu 1% matrix were around 19.86 N m. The curves (Additional file 1: Figure S5) indicate that 1% concentration of Cu-NPs had little effect on the mechanical properties of the matrix, but lower equilibrium torque values of 18.4 and 17.4 N m were obtained for NCs-PBAT/Cu 3% and NCs-PBAT/Cu 5%, respectively. These results clearly imply that the processability of NCs-PBAT/Cu was improved with respect to the PBAT matrix [37]. Similar results were obtained with the mixture of NCs-PBAT/Cu|Cu2O, where the equilibrium torque value decreased with the increase in load percentage to 3%, but the 5% load yielded a value very close to that of the 1% load of Cu|Cu2O-NPs. The equilibrium torque values were 19.39, 19.07, and 19.37 Nm for 1, 3, and 5%, respectively. For the MCs-PBAT/CuSO4 mixture, the equilibrium torque values increased as the load of CuSO4 increased, i.e., 18.71 N m for 1%, 19.16 N m for 3%, and 19.79 N m for 5% load. This behavior can be attributed to the size of the CuSO4 crystals. Simultaneously, Additional file 1: Figure S5 shows that the equilibrium torque of all nanocomposites and composite material was stable with increasing mixing time, indicating that thermal decomposition did not occur in the mixer, probably because the nanoparticles decrease the cohesion forces between the polymer chains and most likely perform self-lubrication in the mixing process [37].
First, the nanoparticles obtained by chemical reduction were analyzed. The results of the synthesis of Cu|Cu2O-NPs are shown in Fig. 1b. The TEM micrograph shows a mixture of spherical particles and polyhedral particles. The average diameter of the spherical nanoparticles was 26 nm (Fig. 1c), while the diameter of the polyhedral nanoparticles ranged between 80 and 160 nm. The composition of these nanoparticles was determined by selected area electron diffraction (SAED) (Fig. 1c), which found phases corresponding to metal Cu and Cu2O. This finding was corroborated by the diffractogram shown in Fig. 1a. Six diffraction peaks were clearly observed at 2θ = 36.3°, 42.17°, 43.42°, 50.63°, 61.47°, and 74.37°. Because the nanoparticles were synthesized by chemically reducing CuSO4 to CuO, the diffraction peaks were verified by the data for Cu in the X'Pert HighScore database of X-ray powder diffraction patterns. We observed that the peaks at 2θ = 43.2°, 50.63°, and 74.37° belong to metal Cu diffraction planes (111), (200), and (220). The other three peaks show that the synthesized nanoparticles contained more than one substance, so the diffraction pattern is a combination of both. Wijesundera [38] analyzed thin films of Cu2O using XRD and showed that the planes diffracted at 2θ = 36.3°, 42.17°, and 61.47° correspond to the Miller indexes (111), (200), and (220). These indexes belong to a face-centered cubic structure (FCC) that corresponds to a part of the central area of an antifluorite structure, which agrees with the structure of Cu2O, in accordance with the findings of the SAED analysis.
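The average diameters quoted above come from measuring individual particles on the TEM micrographs and summarizing the resulting size distribution (Fig. 1c, f). The sketch below illustrates that tabulation; the list of diameters is an invented stand-in for the real measurement set.

```python
import statistics

# Invented stand-in for diameters (nm) measured on the TEM micrographs
measured_diameters_nm = [22, 31, 27, 24, 29, 26, 23, 30, 25, 28]

mean_d = statistics.mean(measured_diameters_nm)
stdev_d = statistics.stdev(measured_diameters_nm)
print(f"mean diameter = {mean_d:.1f} nm, standard deviation = {stdev_d:.1f} nm")
```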
a XRD of the synthesized Cu and Cu2O nanoparticles. b, c TEM image, size distribution, and diffraction pattern of the synthesized nanoparticles. d XRD of Cu nanoparticles. e, f TEM image, size distribution, and diffraction pattern of Cu nanoparticles
Wang et al. [39] found that during the synthesis of Cu-NPs by chemical reduction, the size of the particles ranged between 100 and 150 nm. They used C6H8O6 as the reducing agent and poly(vinylpyrrolidone) (PVP) as the surfactant. The phases did not correspond to those of Cu2O because the PVP helped stabilize the growing seeds, thus avoiding their oxidation. However, the objective of our investigation was to synthesize Cu2O NPs, which can be achieved by chemical reduction without the use of a stabilizing agent such as PVP.
The Cu-NPs used in the preparation of the nanocomposite were spherical with a diameter ranging between 100 and 200 nm (Fig. 1e, f). In the XRD pattern for Cu-NPs shown in Fig. 1d, the three peaks clearly observed at 43.60°, 50.72°, and 73.95° correspond to the crystalline planes (111), (200), and (220), respectively. The cubic crystal structure with an Fm3m space group (JCPDS No.85-1326) [55] is in accordance with the structure found by SAED analysis (Fig. 1d).
The metal particles used in our study were obtained by means of a mechanical grinding system, according to the supplier. The disadvantage of this method is that a small percentage of particles (~ 10%) are larger than 500 nm. However, this did not negatively affect the objectives of our investigation. Below, we demonstrate how this dispersion affected the thermomechanical properties of the PBAT matrix. Importantly, mechanical grinding methods do not use precursors or stabilizers, as is the case with wet synthesis methods, which are known as chemical reduction methods. Therefore, the surface of Cu-NPs obtained by grinding is not passivated by the adsorption of molecules from either a stabilizer or a reaction by-product. Thus, these Cu-NPs, while not substantially improving the mechanical properties of the polymer, do not degrade them either. However, the antimicrobial properties must be improved because the migration of Cu2+ is facilitated on nonpassivated surfaces.
Figure 2 presents the XRD spectra of the NCs-PBAT/Cu (Fig. 2a), NCs-PBAT/Cu|Cu2O (Fig. 2b), and MCs-PBAT/CuSO4 (Fig. 2c). Each material was prepared at three concentrations (1, 3, and 5% w/w). These diffractograms were compared with that of the PBAT polymer matrix to demonstrate the effect of the loads on the polymer structure. The PBAT diffractogram showed a diffraction pattern with five diffraction peaks at 2θ = 16.1°, 17.3°, 20.2°, 23.1°, and 25°, corresponding to planes (011), (010), (101), (100), and (111), respectively. This analysis revealed the existence of crystallinity in the polymer matrix. The characterization of PBAT by Arruda et al. [40] using XRD also found the same five diffraction peaks at the same angles as those found in this investigation, corresponding to the same planes.
Diffractogram of PBAT, NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4
The diffractograms of the nanocomposites with Cu-NPs loads are shown in Fig. 2a. The 2θ signals at 43°, 50°, and 74° are characteristic of the planes (111), (200), and (220) of the FCC structure of Cu with an Fm3m space group (JCPDS No.85-1326) [41]. No phases corresponding to CuO or Cu2O were observed in the diffractogram of NCs-PBAT/Cu, so we concluded that the nanoparticles were not oxidized during the synthesis of the nanocomposite. In addition, the diffractograms show that the nanoparticles did not affect or modify the structure of the PBAT and that the intensity of the peaks is directly proportional to the load percentage of the Cu-NPs. The diffractograms of the NCs-PBAT/Cu|Cu2O have six characteristic peaks at 2θ = 36.4°, 43°, 42.4°, 50°, 61.5°, and 74° (Fig. 2b). According to the literature and the analysis of the nanoparticles, only three correspond to metal Cu and the peaks at 36.4°, 42.4°, and 61.5° belong to Cu2O, according to the spectrum of this type of nanoparticle shown in Fig. 1a [35].
The diffraction peaks corresponding to the Cu|Cu2O-NPs reinforcements became more intense as the concentration increased inside the matrix, but the peaks belonging to the crystalline zone of the polymer decreased slightly in intensity with the incorporation of loads. Chivrac et al. [42] reported similar results in a study using loads of nanoclays in PBAT. They suggested that there was no significant transcrystallinity at the load-polymer interface, and therefore, there were no changes in the crystalline structure of the polymer. However, the decrease in the intensity of the diffraction peaks of the PBAT with the increase in the concentration of loads in the matrix indicates a drop in the crystallinity of the PBAT. Therefore, the loads hinder the crystalline growth of the PBAT. This could explain the slight decrease in the diffraction peaks belonging to the PBAT with the increase in Cu|Cu2O-NPs.
Figure 2c shows the XRD spectra of MC-PBAT/CuSO4 for the three concentrations of CuSO4 of 1, 3, and 5%. The addition of the 1% CuSO4 load did not generate changes in the polymer. The 3 and 5% CuSO4 load curves show only a minimum increase in the intensity of the peaks at 2θ = 36.4°, 40.25°, 43.94°, 57.9°, and 75.7°, which belong to the Cu and Cu2O present, indicating that a fraction of the CuSO4 was reduced and oxidized during the mixing process. As for the crystalline zone of the PBAT, the increase in the concentration of the CuSO4 reinforcements decreased the intensity of the diffraction peaks in PBAT, as occurred for the NCs-PBAT/Cu and NCs-PBAT/Cu|Cu2O. Thus, the incorporation of CuSO4 into the polymer matrix decreased its crystallization capacity, probably because CuSO4 hinders the growth of crystallites. Because no additional information on the XRD spectra of CuSO4 in composite materials has been reported, we will have to investigate its behavior in biodegradable polymers. The degree of crystallinity of the matrix was calculated as:
$$ X_{\mathrm{c}} = \frac{I_{\mathrm{c}}}{I_{\mathrm{c}} + I_{\mathrm{a}}} $$
where Ic is the area of the peaks of the crystalline phase and Ic + Ia is the total area under the diffractogram. The degree of crystallinity values for each material is given in Table 1. These results show that the percentage of crystallinity increases as the concentration of Cu-NPs and Cu|Cu2O-NPs increases in the PBAT matrix, which is evident with the increase in the intensity of the peaks in the respective diffractograms.
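As a minimal numerical illustration of this equation, the sketch below computes Xc from integrated areas; in practice Ic and Ia come from fitting the crystalline peaks and the amorphous halo of the diffractogram, so the values used here are placeholders only.

```python
def degree_of_crystallinity(I_c: float, I_a: float) -> float:
    """X_c = I_c / (I_c + I_a), with integrated areas taken from the diffractogram."""
    return I_c / (I_c + I_a)

# Placeholder areas (arbitrary units) standing in for fitted crystalline peaks
# and the amorphous halo.
I_c, I_a = 1250.0, 3950.0
print(f"X_c = {100.0 * degree_of_crystallinity(I_c, I_a):.1f} %")   # -> 24.0 %
```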
Table 1 Percentage of crystallinity of each of the mixtures of PBAT, NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4
On the other hand, the diffractograms show that the nanoparticles did not affect or modify the structure of the PBAT and that the intensity of the peaks is directly proportional to the load percentage of the Cu-NPs and Cu|Cu2O-NPs. Moreover, the addition of the CuSO4 precursor salt decreased the crystallinity of the polymer compared to that of the polymer in its pure state. This occurred because increasing the load concentration in the nanocomposites raised the relative percentage of crystallinity while decreasing the crystallinity of the PBAT itself, which in general was reported as a slight increase in the total percentage of crystallinity. The MCs-PBAT/CuSO4 loads did not present crystalline peaks in their XRD spectra. Therefore, they did not contribute to the increase in crystallinity but caused a decrease in crystallinity in the polymer chain, which explains the decrease in the total percentage of crystallinity in the composite material. Some studies have shown that metal nanoparticles act as centers of nucleation in the orientation of the polymer chains, which in turn increases the crystallinity of the polymer [43].
The FTIR spectra (Additional file 1: Figure S6) show that the characteristic peaks at different load concentrations are at the same frequency but have different intensities. The spectra show that as the concentration of nanoparticles in the polymer matrix increased, the intensity of the peaks corresponding to NCs-PBAT/Cu and NCs-PBAT/Cu|Cu2O increased with respect to the PBAT. Therefore, there was no effective interaction between the chains of the PBAT and the nanoparticles. Had there been interaction, some of the signals in the FTIR spectrum would have been displaced as a result of the interaction of the functional groups of the polymer with the surface of the nanoparticles [40].
To give multifunctionality to biopolymers, nanomaterials that provide special properties to a nanocomposite are usually incorporated. Their inclusion will change the mechanical properties of the material and the intensity of the changes is directly related to the union of the nanostructure with the polymer network [44]. We conducted tensile tests on the nanocomposites and the composite material. The tensile strength and maximum deformation values are summarized in Table 2.
Table 2 Tensile strength of PBAT, NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4
Figure 3 shows the average curves of the tensile tests on the nanocomposites and composite material. As the permanent deformation of the material began, the effect of the concentration of the nanoparticles in the polymer could be distinguished. Figure 3a shows the results for NCs-PBAT/Cu. The results show that the inclusion of nanostructures did not considerably affect the elastic range but there were noticeable changes in the yield strength. As the concentration of the Cu-NPs increased, maximum resistance increased and maximum elongation decreased. These changes clearly indicate that the nanostructures harden the PBAT. At 3% concentration of Cu-NPs, the tensile strength slightly increased but the elongation percentage in the fracture decreased between 30 and 35%.
Stress and strain of PBAT, NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4
Figure 3b shows the results of the tensile tests on the NCs-PBAT/Cu|Cu2O. The 1% load nanocomposite clearly showed an increase in tensile strength and elongation with respect to the PBAT. There was no appreciable effect on the elastic range, but an effect did appear above the yield stress. In addition, the curve for the 3% load NCs-PBAT/Cu|Cu2O shows there was no significant difference with respect to the PBAT. The same behavior is seen with the curve for the 5% load NCs-PBAT/Cu|Cu2O. The curves for MCs-PBAT/CuSO4 (Fig. 3c) show that the yield stress decreased for the three concentrations of CuSO4 with respect to the PBAT.
From the results, we can conclude that the reinforcements did not significantly change the mechanical properties of the PBAT. In contrast, Venkatesan and Rajeswari [45] reported a significant increase in the mechanical properties of a PBAT matrix upon incorporating ZnO nanoparticles. Similar results with some improvements were obtained by Chen and Yang [46], who prepared a PBAT nanocomposite with montmorillonite nanoparticles using melt blending.
Our investigation found that the NCs-PBAT/Cu|Cu2O 3 and 5% and MCs-PBAT/CuSO4 1 and 5% had slightly decreased tensile strength, that is, there were no significant variations in the mechanical properties. However, the NCs-PBAT/Cu|Cu2O 1% and MCs-PBAT/CuSO4 3% had slightly increased tensile strength. Therefore, no reinforcement at any concentration in the matrix caused remarkable variations in the mechanical properties of the PBAT. In addition, as the concentration of Cu-NPs increased, the tensile strength of the PBAT increased but its elongation was not maintained. The results of the tensile tests showed that the commercial Cu nanoparticles improved the tensile strength of the PBAT slightly more than did the Cu|Cu2O nanoparticles and the CuSO4 particles. The difference between the tensile properties found in our investigation and those in the literature could be attributed to load dispersion because the agglomerated particles act as stress concentrators [47]. Finally, the variations in the test values were explained by the preparation conditions of the test samples, the degree of crystallinity of the PBAT, the molecular mass, the degree of interaction at the polymer-reinforcement interface, and the load dispersion, because agglomerates in the matrix could act as stress concentrators.
One of the disadvantages of the PBAT is its low thermal stability because the fusion process can degrade its polymer chains [48]. Therefore, the effect of nanometric and micrometric loads on the decomposition of this biopolymer must be investigated. TGA of NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4 was carried out to observe the changes in the thermal stability of the PBAT caused by the presence of Cu nanoparticles in the matrix. The TGA results are shown in Fig. 4, and the initial (Tdi) and final (Tdf) decomposition temperatures of the analyzed samples are summarized in Table 3. The thermograms show that the polymer without any load had a weight loss of 1% at 420.77 °C, while the nanocomposites NCs-PBAT/Cu 1, 3, and 5% presented a weight loss of around 3% (Fig. 4a). This suggests that the presence of Cu-NPs at concentrations of 3 and 5% slightly increases the thermal stability of the nanocomposites compared to that of the unloaded polymer. After the final thermal decomposition, the degradation percentages, at around 420–427 °C, of the PBAT matrix and nanocomposites NCs-PBAT/Cu 1, 3, and 5% were 98.9, 97.5, 95.4, and 96.8%, respectively. The residues were higher for Cu-NPs-incorporated nanocomposite samples. Similar results have been reported for PBAT nanocomposites with different loads of Ag-NPs [49].
TGA of a PBAT and NCs-PBAT/Cu, b NCs-PBAT/Cu|Cu2O, and c MCs-PBAT/CuSO4, DTG of d PBAT and NCs-PBAT/Cu, e NCs-PBAT/Cu|Cu2O, f MCs-PBAT/CuSO4
Table 3 Degradation temperature of PBAT, NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4
Although no significant change is seen among the curves in Fig. 4b for the NCs-PBAT/Cu|Cu2O, the results show that as the Cu|Cu2O-NPs increased in the polymer structure, Tdi increased and Tdf decreased with respect to the initial and final degradation temperatures of PBAT; in addition, the total mass loss decreased. By calculating the derivative of the mass with respect to the temperature, we obtained the curves in Fig. 4d–f for the indicated peaks of the nanocomposite with Cu|Cu2O-NPs and found that Tdf, at which the maximum decomposition occurs, was between 402 and 403 °C (Table 3).
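The DTG curves mentioned above are simply the derivative of the TGA mass signal with respect to temperature, and the temperature of maximum decomposition rate corresponds to the extremum of that derivative. The sketch below illustrates the procedure on synthetic single-step data; it is not the instrument's own algorithm, and the curve parameters are invented.

```python
import numpy as np

def dtg(temperature_C, mass_pct):
    """Numerical derivative of the TGA mass signal with respect to temperature."""
    return np.gradient(np.asarray(mass_pct, dtype=float),
                       np.asarray(temperature_C, dtype=float))

# Synthetic single-step mass-loss curve, used only to illustrate the procedure
T = np.linspace(250.0, 500.0, 501)
mass = 100.0 - 97.0 / (1.0 + np.exp(-(T - 400.0) / 12.0))   # sigmoidal weight loss, %
rate = dtg(T, mass)
T_peak = T[np.argmin(rate)]   # most negative dm/dT = fastest decomposition
print(f"maximum decomposition rate near {T_peak:.0f} C")
```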
The CuSO4 loads incorporated into the polymer matrix, i.e., MCs-PBAT/CuSO4, yielded the same behavior as that of the NCs-PBAT/Cu|Cu2O, with an increase in Tdi and a decrease in Tdf with respect to the PBAT polymer. The Tdi values of the NCs-PBAT/Cu|Cu2O and the MCs-PBAT/CuSO4 were greater than that of the NCs-PBAT/Cu, but the Tdf and degradation percentage values were less than those of the nanocomposites with Cu-NPs loads.
This enhancement of the thermal stability of the PBAT is attributed to the barrier effect of the loads. The loads were also supposed to have a shielding effect on the matrix to slow the rate of mass loss of the decomposition product [50]. The data obtained by our analysis were compared with published results to verify that the indicated behavior is usual for this type of polymer. Sinha Ray et al. [51] found by thermal analysis of PBAT reinforced with nanoclays that the degradation temperatures of the nanocomposites were greater than or at least equal to that of the PBAT. In general, the reinforcements improve the thermal stability of the polymer matrix because they act as a heat barrier, which improves the total thermal stability of the system. However, the studies of Sinha Ray et al. and this investigation showed that the thermal stability of the nanocomposite and PBAT compounds only slightly improved. To explain the relatively low improvement in the thermal stability of some nanocomposites, Sinha Ray et al. assumed that in the early stages of thermal decomposition, the reinforcements displace the decomposition to higher temperatures, but in a second stage, the clay layers accumulate heat and then act as a source of heat. This heat source, along with the heat flow supplied by the external heat source, promotes the acceleration of decomposition. This could explain the behavior of the reinforcements in the NCs-PBAT/Cu|Cu2O and MCs-PBAT/CuSO4. Thus, we conclude that the thermal properties of the nanocomposites and the composite material slightly improve but not significantly. On the other hand, the results of DSC (Additional file 1: Figure S7 and Table S1) indicated that the addition of reinforcements to the matrix slightly hindered the kinetics and degree of crystallization of the PBAT. The addition of clays increased the crystallization temperature from 1 to 10 °C and the melting temperature from 1 to 5 °C. These phenomena were probably due to an increase in the viscosity of the polymer with the addition of clays, which reduced the mobility of the macromolecular chains against the growth of crystals.
Comparative Evaluation of the Antimicrobial Activity of NCs-PBAT/Cu, NCs-PBAT/Cu|Cu2O, and MCs-PBAT/CuSO4
Qualitative Test
After the experimental procedure was performed, we wanted to observe whether bacterial colonies were inhibited by each PBAT sample, i.e., NCs-PBAT/Cu 1, 3, and 5%; NCs-PBAT/Cu|Cu2O 1, 3, and 5%; and MCs-PBAT/CuSO4 1, 3, and 5%. We decided to use the 3% concentrations because the 1% concentrations did not produce enough bacterial inhibition and the 5% concentration produced behavior similar to that of the 3% concentration, the minimum percentage with activity that avoided toxicity in the polymer.
The study was carried out at different contact times using four bacterial strains and the PBAT samples NCs-PBAT/Cu 3%, NCs-PBAT/Cu|Cu2O 3%, and MCs-PBAT/CuSO4 3%. The times and colony-forming unit counts (CFU/mL) are presented in Table 4, and the bacterial activity and colony count for each Petri dish are shown in Fig. 5. In addition, a graphical analysis is shown in Fig. 6, where images of bacterial growth are also presented. The statistical analysis of the data is summarized in Table 5.
Table 4 Bacterial colonies count corresponding to four incubation times for each sample of NCs-PBAT/Cu-3%, NCs-PBAT/Cu|Cu2O 3%, and MCs-PBAT/CuSO4 3% and PBAT
Bacterial activity and colonization count PBAT, NCs-PBAT/Cu-3%, NCs-PBAT/Cu|Cu2O 3%, and MCs-PBAT/CuSO4 3% for each strain of bacteria. Staphylococcus aureus, Acinetobacter baumanni, Enterococcus faecalis, Streptococcus mutans
Graphical analysis of colony count (CFU/mL) vs time (h) of PBAT, NCs-PBAT/Cu-3% NCs-PBAT/Cu|Cu2O 3%, and MCs-PBAT/CuSO4 3% for each strain of bacteria. Enterococcus faecalis, Acinetobacter baumanni, Streptococcus mutans, Staphylococcus aureus
Table 5 Statistical analysis for each bacterial strain
The study of A. baumannii found that the colonies grew in all periods (2, 4, 6, and 8 h) in the samples containing Cu-NPs, Cu|Cu2O-NPs, and PBAT. High bactericidal activity occurred with the sample containing CuSO4 during exposure times of 4, 6, and 8 h, decreasing from 7 × 10⁵ to 0 CFU/mL. The sample containing Cu-NPs showed a significant increase in the growth of bacterial colonies from 1 × 10⁵ to 6 × 10⁶ CFU/mL, with an average of 2 × 10⁶ CFU/mL. The bacterial colonies in the sample containing Cu|Cu2O-NPs grew from 7 × 10⁵ in time I to 6 × 10⁶ in time IV, with an average growth of 3.19 × 10⁶ CFU/mL. Bacterial growth in the PBAT reached an average of 1.75 × 10⁶ CFU/mL.
The study of E. faecalis found good bactericidal activity by the samples containing Cu-NPs, Cu|Cu2O-NPs, and CuSO4, with average colony growth of 5 × 10², 1 × 10⁴, and 2.2 × 10³ CFU/mL, respectively, while the PBAT did not show bactericidal activity and the colonies grew at all times. Colony growth in the sample containing Cu-NPs was 2 × 10³ CFU/mL at 2 h then dropped to zero at 4, 6, and 8 h, whereas the samples containing Cu|Cu2O-NPs had 0 CFU/mL at times I, II, and III, but 4 × 10⁴ CFU/mL at time IV. Samples containing CuSO4 prevented the growth of bacteria in times I and II with growth activity of 0 CFU/mL, but colonies grew to 4 × 10³ and 5 × 10³ CFU/mL for times III and IV, respectively. PBAT did not show bactericidal activity against E. faecalis.
The study of S. mutans found no colony growth in the samples containing Cu|Cu2O-NPs and CuSO4. The sample containing Cu-NPs showed very good bactericidal activity except at time I, at which colony growth was 4 × 10³ CFU/mL, making the average growth for the four times 8 × 10² CFU/mL. PBAT without reinforcement showed no bactericidal activity against S. mutans. The samples containing Cu-NPs, Cu|Cu2O-NPs, and CuSO4 in contact with S. aureus showed an excellent bactericidal response. They completely inhibited the growth of colonies, while PBAT did not show any bactericidal activity against S. aureus, which grew an average of 6 × 10³ CFU/mL.
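The counts in Table 4 were converted to logarithmic plots or survival percentages (Fig. 6). The sketch below shows one way to perform that conversion, using a hypothetical control/treated pair of counts and an explicit floor for zero counts so that the logarithm remains defined; the floor value is an assumption, not part of the original protocol.

```python
import math

def survival_percent(count_treated: float, count_control: float) -> float:
    """Survival relative to the unloaded PBAT control at the same time point."""
    return 100.0 * count_treated / count_control

def log_reduction(count_treated: float, count_control: float,
                  detection_floor: float = 1.0) -> float:
    """Log10 reduction; zero counts are floored so the logarithm stays defined."""
    return math.log10(count_control / max(count_treated, detection_floor))

# Hypothetical pair of counts (CFU/mL) for one strain at one time point
control, treated = 6e3, 0.0
print(f"survival = {survival_percent(treated, control):.1f} %")          # 0.0 %
print(f"log reduction = {log_reduction(treated, control):.1f} log10")    # 3.8
```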
In general, the antibacterial effectiveness of polymer-and-metal nanocomposites improves with a high surface/volume ratio, which increases the number of ions released from the nanoparticles into the polymer. The mechanism of the corrosion of Cu in aqueous solutions and the resulting Cu species vary with pH. In general, the species Cu2O and CuO are formed and can be dissolved in Cu ions. Elemental metal particles require the presence of water and oxygen molecules to release a small amount of ions. Therefore, retention of water and oxygen within the polymer is crucial for the release of Cu ions. Some properties of polymer-and-metal nanocomposites such as the crystallinity and polarity of the matrix, which constitute a barrier for the diffusion of water molecules and ions during their propagation, can affect the rate of release. Shankar and Rhim [49] prepared films composed of PBAT and Ag nanoparticles (PBAT/Ag-NPs) that showed strong antibacterial activity against E. coli and Listeria monocytogenes compared with that of PBAT films without Ag-NPs. Similar results were obtained by Venkatesan and Rajeswari [45] when they evaluated the antimicrobial activity of ZnO-NPs incorporated in a PBAT matrix. The PBAT compound, which was used as a control matrix, showed no antimicrobial activity compared to the PBAT/ZnO-NPs nanocomposite films. The results showed that the films had high bactericidal activity against the pathogens tested (E. coli and S. aureus), with increased inhibition of bacterial growth as the ZnO load concentration increased from 1 to 10% by weight. This ability of Cu, Zn, and Ag nanoparticles to inhibit bacterial growth is mainly due to the irreparable damage to the membrane of the bacterial cells caused by the interaction between the surface of the bacteria and these oxides and metals [52, 53]. Compared with the works discussed above, our investigation found significant antimicrobial activity against inpatient and oral-resistant strains.
To complement this investigation, we performed water absorption tests using three different media and following point 7.4, "Long-Term Immersion", in ASTM D570-98. The results of these tests are reported in the supplementary material, Additional file 1: Table S2–S4 and Figure S8, with their respective analysis. Analysis showed that sulfate-based composite materials absorb large amounts of water, even in acidic and basic environments. This phenomenon greatly affects the mechanical properties of these materials; however, resistant bacteria, such as A. baumannii, require an immediate release of Cu+ ions to be controlled. This explains the antimicrobial power of CuSO4 within the PBAT matrix.
Using XRD and TEM, we verified that the chemical reduction synthesis used to prepare the loads for the PBAT-based nanocomposites yielded a mixture of metallic Cu and Cu2O nanoparticles, where Cu had a spherical morphology and Cu2O a polyhedral morphology. The structural characterization of the NCs and MCs by FTIR and XRD showed that the Cu-NPs, Cu|Cu2O-NPs, and CuSO4 reinforcements did not modify the structure of the PBAT. However, they did slightly alter the percentage of its crystallinity, which increased with NPs and decreased with CuSO4. On the other hand, the mechanical properties of the PBAT for both the NCs and MCs did not vary significantly with the addition of reinforcements, meaning that the PBAT maintained its mechanical properties. From the thermal tests, we concluded that reinforcing the PBAT did not fundamentally improve its thermal properties; it only increased its thermal stability by a few degrees Celsius, which is not significant. Antimicrobial analyses showed that the Cu|Cu2O-NPs within the PBAT generated antibacterial activity against E. faecalis and S. mutans and excellent bactericidal properties against S. aureus. CuSO4 had a good bactericidal response against A. baumannii, E. faecalis, and S. mutans and an exceptional response against S. aureus. The PBAT without loads did not present bactericidal properties when in contact with the bacterial strains. In general, the addition of loads into the PBAT generates bactericidal activity that the polymer does not possess by itself. The addition of CuSO4 yielded the best antimicrobial response against the four strains used in this investigation. In the search for new applications for bionanocomposites, it will be essential to evaluate their antimicrobial response in food containers, medical devices, packaging, and other products; analyze their biocidal effects against other bacteria against which only NPs have antibacterial characteristics; and justify the expense associated with their synthesis.
Cu|Cu2O-NPs: Copper/cuprous oxide nanoparticles
Cu-NPs: Copper nanoparticles
CuSO4: Copper sulfate
DSC: Differential scanning calorimetry
FTIR: Fourier transform infrared spectroscopy
MC: Composite material
MCs-PBAT/CuSO4: Composite materials of poly(butylene adipate-co-terephthalate) with copper sulfate
NCs: Nanocomposites
NCs-PBAT/Cu: Nanocomposites of poly(butylene adipate-co-terephthalate) with copper nanoparticles
NCs-PBAT/Cu|Cu2O: Nanocomposites of poly(butylene adipate-co-terephthalate) with copper/cuprous oxide nanoparticles
PBAT: Poly(butylene adipate-co-terephthalate)
Fukushima K, Wu MH, Bocchini S et al (2012) PBAT based nanocomposites for medical and industrial applications. Mater Sci Eng C 32:1331–1351. https://doi.org/10.1016/j.msec.2012.04.005
Youssef AM (2013) Polymer nanocomposites as a new trend for packaging applications. Polym Plast Technol Eng 52:635–660. https://doi.org/10.1080/03602559.2012.762673
Siracusa V, Rocculi P, Romani S, Rosa MD (2008) Biodegradable polymers for food packaging: a review. Trends Food Sci Technol 19:634–643. https://doi.org/10.1016/j.tifs.2008.07.003
Peelman N, Ragaert P, De Meulenaer B et al (2013) Application of bioplastics for food packaging. Trends Food Sci Technol 32:128–141. https://doi.org/10.1016/j.tifs.2013.06.003
Azeredo HMC de (2009) Nanocomposites for food packaging applications. Food Res Int 42:1240–1253. https://doi.org/10.1016/j.foodres.2009.03.019
Sorrentino A, Gorrasi G, Vittoria V (2007) Potential perspectives of bio-nanocomposites for food packaging applications. Trends Food Sci Technol 18:84–95. https://doi.org/10.1016/j.tifs.2006.09.004
Xie F, Pollet E, Halley PJ, Avérous L (2013) Starch-based nano-biocomposites. Prog Polym Sci 38:1590–1628. https://doi.org/10.1016/j.progpolymsci.2013.05.002
Othman SH (2014) Bio-nanocomposite materials for food packaging applications: types of biopolymer and nano-sized filler. Agric Agric Sci Procedia 2:296–303. https://doi.org/10.1016/j.aaspro.2014.11.042
Mihindukulasuriya SDF, Lim LT (2014) Nanotechnology development in food packaging: a review. Trends Food Sci Technol 40:149–167. https://doi.org/10.1016/j.tifs.2014.09.009
Duncan TV (2011) Applications of nanotechnology in food packaging and food safety: barrier materials, antimicrobials and sensors. J Colloid Interface Sci 363:1–24. https://doi.org/10.1016/j.jcis.2011.07.017
Bordes P, Pollet E, Avérous L (2009) Nano-biocomposites: biodegradable polyester/nanoclay systems. Prog Polym Sci 34:125–155. https://doi.org/10.1016/j.progpolymsci.2008.10.002
Briassoulis D, Giannoulis A (2018) Evaluation of the functionality of bio-based food packaging films. Polym Test 69:39–51. https://doi.org/10.1016/j.polymertesting.2018.05.003
Rhim J, Park H, Ha C (2013) Bio-nanocomposites for food packaging applications. Prog Polym Sci 38:1629–1652. https://doi.org/10.1016/j.progpolymsci.2013.05.008
Tunç S, Duman O (2010) Preparation and characterization of biodegradable methyl cellulose/montmorillonite nanocomposite films. Appl Clay Sci 48:414–424. https://doi.org/10.1016/j.clay.2010.01.016
Tunç S, Duman O, Polat TG (2016) Effects of montmorillonite on properties of methyl cellulose/carvacrol based active antimicrobial nanocomposites. Carbohydr Polym 150:259–268. https://doi.org/10.1016/j.carbpol.2016.05.019
Ojijo V, Sinha S (2013) Processing strategies in bionanocomposites. Prog Polym Sci 38:1543–1589. https://doi.org/10.1016/j.progpolymsci.2013.05.011
Cruzat Contreras C, Peña O, Meléndrez MF et al (2011) Synthesis, characterization and properties of magnetic colloids supported on chitosan. Colloid Polym Sci 289:21–31. https://doi.org/10.1007/s00396-010-2302-y
Meléndrez Castro MF, Cárdenas Triviño G, Morales J et al (2009) Synthesis and study of photoacoustic properties of (Pd/TiO2)/polystyrene nanocomposites. Polym Bull 62:355–366. https://doi.org/10.1007/s00289-008-0024-9
Silvestre C, Duraccio D, Cimmino S (2011) Food packaging based on polymer nanomaterials. Prog Polym Sci 36:1766–1782. https://doi.org/10.1016/j.progpolymsci.2011.02.003
Lin S, Guo W, Chen C et al (2012) Mechanical properties and morphology of biodegradable poly(lactic acid)/poly(butylene adipate-co-terephthalate) blends compatibilized by transesterification. Mater Des 36:604–608. https://doi.org/10.1016/j.matdes.2011.11.036
Fourati Y, Tarrés Q, Mutjé P, Boufi S (2018) PBAT/thermoplastic starch blends: effect of compatibilizers on the rheological, mechanical and morphological properties. Carbohydr Polym 199:51–57. https://doi.org/10.1016/j.carbpol.2018.07.008
Oliveira TA, Oliveira RR, Barbosa R et al (2017) Effect of reprocessing cycles on the degradation of PP/PBAT-thermoplastic starch blends. Carbohydr Polym 168:52–60. https://doi.org/10.1016/j.carbpol.2017.03.054
Fukushima K, Rasyida A, Yang MC (2013) Characterization, degradation and biocompatibility of PBAT based nanocomposites. Appl Clay Sci 80–81:291–298. https://doi.org/10.1016/j.clay.2013.04.015
Al-Itry R, Lamnawar K, Maazouz A (2014) Reactive extrusion of PLA, PBAT with a multi-functional epoxide: physico-chemical and rheological properties. Eur Polym J 58:90–102. https://doi.org/10.1016/j.eurpolymj.2014.06.013
Weng Y-X, Jin Y-J, Meng Q-Y et al (2013) Biodegradation behavior of poly(butylene adipate-co-terephthalate) (PBAT), poly(lactic acid) (PLA), and their blend under soil conditions. Polym Test 32:918–926. https://doi.org/10.1016/j.polymertesting.2013.05.001
Reddy MM, Mohanty AK, Misra M (2012) Optimization of tensile properties thermoplastic blends from soy and biodegradable polyesters: Taguchi design of experiments approach. J Mater Sci 47:2591–2599. https://doi.org/10.1007/s10853-011-6083-6
Weng Y-X, Jin Y-J, Meng Q-Y, Wang L, Zhanga M, Wang Y-Z. Biodegradation behavior of poly(butylene adipate-co-terephthalate) (PBAT), poly(lactic acid) (PLA), and their blend under soil conditions. Polymer Testing. 2013;32:918-26.
Venkatesan R, Rajeswari N (2016) Preparation, mechanical and antimicrobial properties of SiO 2/ poly(butylene adipate-co-terephthalate) films for active food packaging. Silicon:1–7. https://doi.org/10.1007/s12633-015-9402-8
Wang H, Wei D, Zheng A, Xiao H (2015) Soil burial biodegradation of antimicrobial biodegradable PBAT films. Polym Degrad Stab 116:14–22. https://doi.org/10.1016/j.polymdegradstab.2015.03.007
Ribeiro WA, Claudia A, De PC et al (2015) Poly ( butylene adipate-co-terephthalate )/ hydroxyapatite composite structures for bone tissue recovery. Polym Degrad Stab 120:61–69. https://doi.org/10.1016/j.polymdegradstab.2015.06.009
Qi W, Zhang X, Wang H (2018) Self-assembled polymer nanocomposites for biomedical application. Curr Opin Colloid Interface Sci 35:36–41. https://doi.org/10.1016/j.cocis.2018.01.003
Bheemaneni G, Saravana S, Kandaswamy R (2018) Processing and characterization of poly (butylene adipate-co-terephthalate) / wollastonite biocomposites for medical applications. Mater Today Proc 5:1807–1816. https://doi.org/10.1016/j.matpr.2017.11.279
Rice LB (2018) Antimicrobial stewardship and antimicrobial resistance. Med Clin North Am 102:805–818. https://doi.org/10.1016/j.mcna.2018.04.004
Khan A, Rashid A, Younas R, Chong R (2016) A chemical reduction approach to the synthesis of copper nanoparticles. Int Nano Lett 6:21–26. https://doi.org/10.1007/s40089-015-0163-6
Felipe Jaramillo A, Riquelme S, Montoya LF et al (2018) Influence of the concentration of copper nanoparticles on the thermo-mechanical and antibacterial properties of nanocomposites based on poly(butylene adipate- co -terephthalate). Polym Compos:1–13. https://doi.org/10.1002/pc.24949
Montazer M, Latifi M (2013) Colloids and Surfaces A : Physicochemical and Engineering Aspects Synthesis of nano copper / nylon composite using ascorbic acid and CTAB. Colloids Surfaces A Physicochem Eng Asp 439:167–175. https://doi.org/10.1016/j.colsurfa.2013.03.003
Zhao P, Liu W, Wu Q, Ren J (2010) Preparation, mechanical, and thermal properties of biodegradable polyesters/poly(lactic acid) blends. J Nanomater 2010:1–8. https://doi.org/10.1155/2010/287082
Wijesundera RP (2010) Fabrication of the CuO/Cu 2 O heterojunction using an electrodeposition technique for solar cell applications. Semicond Sci Technol 25:045015. https://doi.org/10.1088/0268-1242/25/4/045015
Wang Y, Chen P, Liu M (2006) Synthesis of well-defined copper nanocubes by a one-pot solution process. Nanotechnology 17:6000–6006. https://doi.org/10.1088/0957-4484/17/24/016
Arruda LC, Magaton M, Bretas RES, Ueki MM (2015) Influence of chain extender on mechanical, thermal and morphological properties of blown films of PLA/PBAT blends. Polym Test 43:27–37. https://doi.org/10.1016/j.polymertesting.2015.02.005
Dung Dang TM, Tuyet Le TT, Fribourg-Blanc E, Chien Dang M (2011) The influence of solvents and surfactants on the preparation of copper nanoparticles by a chemical reduction method. Adv Nat Sci Nanosci Nanotechnol 2:025004. https://doi.org/10.1088/2043-6262/2/2/025004
Chivrac F, Kadlecová Z, Pollet E, Avérous L (2006) Aromatic copolyester-based nano-biocomposites: elaboration, structural characterization and properties. J Polym Environ 14:393–401. https://doi.org/10.1007/s10924-006-0033-4
Miranda D, Sencadas V, Sánchez-Iglesias A et al (2009) Influence of silver nanoparticles concentration on the α - to β -phase transformation and the physical properties of silver nanoparticles doped poly(vinylidene fluoride) nanocomposites. J Nanosci Nanotechnol 9:2910–2916. https://doi.org/10.1166/jnn.2009.208
Medina MC, Rojas D, Flores P et al (2016) Effect of ZnO nanoparticles obtained by arc discharge on thermo-mechanical properties of matrix thermoset nanocomposites. J Appl Polym Sci 133:1–8. https://doi.org/10.1002/app.43631
Venkatesan R, Rajeswari N (2017) ZnO/PBAT nanocomposite films: investigation on the mechanical and biological activity for food packaging. Polym Adv Technol 28:20–27. https://doi.org/10.1002/pat.3847
Chen JH, Yang MC (2015) Preparation and characterization of nanocomposite of maleated poly(butylene adipate-co-terephthalate) with organoclay. Mater Sci Eng C 46:301–308. https://doi.org/10.1016/j.msec.2014.10.045
Evstatiev M, Simeonova S, Friedrich K et al (2013) MFC-structured biodegradable poly(l-lactide)/poly(butylene adipate-co-terephatalate) blends with improved mechanical and barrier properties. J Mater Sci 48:6312–6330. https://doi.org/10.1007/s10853-013-7431-5
Mohanty S, Nayak SK (2012) Biodegradable nanocomposites of poly(butylene adipate-co-terephthalate) (PBAT) and organically modified layered silicates. J Polym Environ 20:195–207. https://doi.org/10.1007/s10924-011-0408-z
Shankar S, Rhim J-W (2016) Tocopherol-mediated synthesis of silver nanoparticles and preparation of antimicrobial PBAT/silver nanoparticles composite films. LWT - Food Sci Technol 72:149–156. https://doi.org/10.1016/j.lwt.2016.04.054
Wei D, Wang H, Xiao H, et al (2012) Morphology and mechanical properties of poly ( butylene adipate-co-terephthalate )/ potato starch blends in the presence of synthesized reactive compatibilizer or modified poly (butylene. Carbohydr Polym 123:275–282 . doi: https://doi.org/10.1016/j.carbpol.2015.01.058
Sinha Ray S, Okamoto M (2003) Polymer/layered silicate nanocomposites: a review from preparation to processing. Prog Polym Sci 28:1539–1641. https://doi.org/10.1016/j.progpolymsci.2003.08.002
Tamayo L, Azócar M, Kogan M et al (2016) Copper-polymer nanocomposites: an excellent and cost-effective biocide for use on antibacterial surfaces. Mater Sci Eng C 69:1391–1409. https://doi.org/10.1016/j.msec.2016.08.041
Palza H (2015) Antimicrobial polymers with metal nanoparticles. Int J Mol Sci 16:2099–2116. https://doi.org/10.3390/ijms16012099
The authors would like to thank the Interdisciplinary Group of Advanced Nanocomposites (Grupo Interdisciplinario de Nanocompuestos Avanzados, GINA) of the Department of Materials Engineering (DIMAT, for its acronym in Spanish) of the School of Engineering, University of Concepción, for its laboratory of nanospectroscopy (LAB-NANOSPECT). AFJ, SR, and LFM would like to thank the National Commission for Scientific and Technological Research (CONICYT) for the scholarships CONICYT-PCHA/National Doctorate 2014-63140015, 2014-21140951, 2015-21150766, for Doctorate in Materials Science and Engineering (UdeC), and Doctorate in Chemical Sciences (UdeC), respectively. AFJ would like to thank the University of La Frontera: PROGRAM OF TRAINING OF RESEARCHERS (UFRO POSTDOCTORALES). MFM would like to thank Valentina Lamilla for her enormous support. The authors thank the "Proyecto CONICYT PIA/APOYO CCTE AFB170007".
National Commission for Scientific and Technological Research (CONICYT): scholarships CONICYT-PCHA/National Doctorate 2014-63140015 and 2014-21140951.
The research data are not shared due to an ongoing patent application.
Department of Mechanical Engineering, Universidad de La Frontera, Francisco Salazar 01145, 4780000, Temuco, Chile
A. F. Jaramillo
Department of Materials Engineering (DIMAT), Faculty of Engineering, University of Concepción, 270 Edmundo Larenas, Box 160-C, 4070409, Concepción, Chile
S. A. Riquelme
Department of Restorative Dentistry, Endodontic Discipline, Faculty of Dentistry, University of Concepción, Concepción, Chile
G. Sánchez-Sanhueza
Department of Mechanical Engineering (DIM), Faculty of Engineering, University of Concepción, 219 Edmundo Larenas, Concepción, Chile
C. Medina
Nanoscience and Nanotechnology Laboratory, Faculty of Physical-Mathematical Sciences, Universidad Autónoma de Nuevo León, 66451, San Nicolas de los Garza, Nuevo León, México
F. Solís-Pomar
& E. Pérez-Tijerina
Advanced Nanocomposites Research Group (GINA), Department of Materials Engineering (DIMAT), Faculty of Engineering, University of Concepción, 270 Edmundo Larenas, Box 160-C, 4070409, Concepción, Chile
D. Rojas
& M. F. Melendrez
Departamento de Tecnologías Industriales, Universidad de Talca, Camino a Los Niches KM 1, Curicó, Chile
C. Montalba
AFJ, MM, and EPT designed the experiments. AFJ, SAR, CM, DJ, and GSS performed the experiments. CM and FSP helped with grammar revision and language checking. AFJ, FSP, MM, and EPT wrote the paper. All authors discussed the results and commented on the manuscript. All authors read and approved the final manuscript.
Correspondence to M. F. Melendrez.
The authors declare that they have no competing interests.
Additional file
Supplementary figures and tables. This file contains supplementary Figures S1–S8 and Tables S1–S4. (DOCX 1716 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
PBAT
Bio-nanocomposite
Antimicrobial activity | CommonCrawl |
Surveys in Geophysics
July 2019, Volume 40, Issue 4, pp 779–801
The Status of Technologies to Measure Forest Biomass and Structural Properties: State of the Art in SAR Tomography of Tropical Forests
Stefano Tebaldini
Dinh Ho Tong Minh
Mauro Mariotti d'Alessandro
Ludovic Villard
Thuy Le Toan
Jerome Chave
Synthetic aperture radar (SAR) tomography (TomoSAR) is an emerging technology to image the 3D structure of the illuminated media. TomoSAR exploits the key feature of microwaves to penetrate into vegetation, snow, and ice, hence providing the possibility to see features that are hidden to optical and hyper-spectral systems. The research on the use of P-band waves, in particular, has been largely propelled since 2007 in experimental studies supporting the future spaceborne Mission BIOMASS, to be launched in 2022 with the aim of mapping forest aboveground biomass (AGB) accurately and globally. The results obtained in the frame of these studies demonstrated that TomoSAR can be used for accurate retrieval of geophysical variables such as forest height and terrain topography and, especially in the case of dense tropical forests, to provide a more direct link to AGB. This paper aims at providing the reader with a comprehensive understanding of TomoSAR and its application for remote sensing of forested areas, with special attention to the case of tropical forests. We will introduce the basic physical principles behind TomoSAR, present the most relevant experimental results of the last decade, and discuss the potentials of BIOMASS tomography.
Remote sensing · Forestry · Synthetic aperture radar (SAR) · Tomography · Microwaves · Aboveground biomass · Forest height · Terrain topography · BIOMASS
Synthetic aperture radar (SAR) imagery is nowadays a most relevant technology for remote sensing of the natural environment, as witnessed by the increasing number of spaceborne SARs and their improving performance (Moreira 2014). Indeed, SAR systems provide a powerful and unique combination of features relevant to remote sensing, such as large spatial coverage, resolution of the order of few meters, and the possibility to operate largely independent of weather conditions and solar illumination (Curlander and McDonough 1991). Another most relevant feature, which is peculiar to microwave systems, is the capability to probe the interior part of the illuminated media. Indeed, microwaves, especially at long wavelengths, can penetrate for meters, or even tens of meters, into natural media that are non-transparent at optical frequencies, as it is the case for vegetation, snow, ice, and sand. This feature makes SAR data sensitive to the vertical structure of those media, hence providing access to features that are hidden to optical and hyper-spectral sensors, see for example Mariotti d'Alessandro et al. (2013), Tebaldini et al. (2016a, b), Rekioua et al. (2017), Paillou et al. (2003). The downside is that microwave scattering from distributed media may be quite complex, involving a number of different mechanisms through which the wave is backscattered. Considering forested areas, which are the focus of this paper, the radar signal is determined by direct scattering from elements within the vegetation canopy, from the underlying terrain, as well as from multiple scattering resulting from the waves bouncing off the ground in the direction of the radar after being scattered downward by the tree canopy and trunks (Treuhaft and Siqueira 2000; Mariotti d'Alessandro et al. 2013; Papathanassiou and Cloude 2001; Smith-Jonforsen et al. 2005; Lin and Sarabandi 1992). As a result, SAR data analysis has traditionally been carried out based on mathematical models that provide the best trade-off between the variety of phenomena captured by the model and the possibility to produce robust estimates of forest parameters through model inversion; see, for example, Freeman (2007), Treuhaft and Siqueira (2000).
The introduction of SAR Tomography (TomoSAR) techniques has opened up the way to a completely new approach to look at SAR data. A TomoSAR survey is typically obtained by repeatedly flying a radar to acquire multiple SAR images of the same area from different trajectories, as shown in the left panel of Fig. 1. These data are then focused via digital signal processing algorithms to produce a collection of voxels that represent the backscattered energy in three dimensions, thus providing the possibility to see how the vertical structure of the vegetation interacts with the wave (Reigber and Moreira 2000; Mariotti d'Alessandro and Tebaldini 2012). An example is shown in the right panel of Fig. 1, where a 3D representation of a tropical forest at the Paracou Ecological Research Station, French Guiana, is provided by displaying four horizontal tomographic sections.
Left: TomoSAR illumination. Right: tomographic sections. Data: TropiSAR, P-band, Paracou Research Station. Data acquisition by ONERA
The use of TomoSAR for investigating forest structure has been under analysis for more than a decade, based on theoretical studies and on the analysis of real data from airborne campaigns. Research on the use of P-band waves (wavelength ≈ 70 cm), in particular, has accelerated since 2007 in view of the future P-band spaceborne Mission BIOMASS, which will be launched by the European Space Agency (ESA) in 2022 with the aim of mapping AGB accurately and globally (ESA 2012). Early research was generally aimed at retrieving information about the vertical structure of the vegetation, to complement traditional radiometric and interferometric measurements (Tebaldini 2009; Mariotti d'Alessandro and Tebaldini 2012; Frey and Meier 2011a, b). Boosted by the first encouraging results, successive studies demonstrated that P-band TomoSAR could effectively be used to derive an accurate characterization of forest structural properties and of the interaction with radar waves. The works in Frey and Meier (2011a), Tebaldini and Rocca (2012), and Mariotti d'Alessandro et al. (2013), for example, first showed the variation in wave polarization inside the vegetation layer, which could then be used as a fingerprint to detect and quantify the occurrence of double-bounce scattering from ground–trunk interactions, see in particular Mariotti d'Alessandro et al. (2013). TomoSAR was also demonstrated to be a most valuable tool to retrieve forest canopy height (Tebaldini and Rocca 2012; Ho Tong Minh et al. 2016) and, by virtue of the penetration capabilities of P-band waves, sub-canopy terrain topography (Tebaldini 2009; Gatti et al. 2011; Mariotti d'Alessandro and Tebaldini 2018a, b). A most important result concerning the application of P-band TomoSAR in tropical forests is the one published in Ho Tong Minh et al. (2014a). In that paper, it was first shown by analyzing a tropical site in French Guiana that tomographic intensity at the height of the 'main canopy' in a tropical forest provides a much higher correlation to forest AGB than traditional 2D SAR intensity. Ho Tong Minh et al. (2016) later showed that model parameterization at one site could be used to predict AGB from TomoSAR at two sites in French Guiana, and below, we extend this to three more forest sites in Gabon.
Along with P-band, research on Tomography was carried out at L-band (wavelength ≈ 25 cm) as well, motivated by the proposal of the L-band bistatic SAR systems Tandem-L and SAOCOM-CS (Moreira et al. 2015; ESA 2015). The most notable results are those concerning separation of ground and volume scattering (Pardini and Papathanassiou 2017; Tebaldini and Rocca 2012), the impact of weather changes (Pardini et al. 2014), and a study on AGB retrieval from vertical structure parameters extracted using TomoSAR (Toraño Caicoya et al. 2015). Most recently, a study has shown that L-band TomoSAR provides greatly improved correlation to forest AGB in boreal forests (Blomberg et al. 2018). It is worth noting that most research at L-band has thus far been focused on temperate and boreal forests, under the general assumption that L-band waves can hardly penetrate to the ground in dense tropical forests. Yet, in two recent studies carried out in Gabon, it was clearly observed that L-band TomoSAR can actually characterize the full vertical structure of tropical forests (La Valle et al. 2017; Pardini et al. 2018).
The aim of this paper is to provide the reader with a comprehensive understanding of TomoSAR and its application for remote sensing of forested areas, by introducing the basic physical principles behind TomoSAR, the most relevant experimental results obtained in the last decade, and the potential for spaceborne applications. In the exposition, we will mostly focus on the case of P-band TomoSAR and tropical forests. This choice is due to two reasons. In the first place, it is in tropical environments that the use of Tomography appears today to make the most significant difference with respect to conventional SAR and interferometric SAR (InSAR) methods. The second reason is the upcoming BIOMASS Mission, which will implement tomographic imaging for the first 14 months of its lifetime, with optimized performance on equatorial areas (ESA 2012).
This paper is structured as follows. The basic principles required to understand tomographic imaging are introduced and discussed in Sect. 2. Section 3 provides a brief introduction to microwave scattering from forested areas. The use of TomoSAR for the remote sensing of forested areas is discussed in Sects. 4, 5, and 6, which focus on imaging of the forest structure, the link to forest biomass, and the retrieval of forest height and terrain topography, respectively. A discussion of the potentials of tomographic imaging using the future spaceborne mission BIOMASS is provided in Sect. 7. Conclusions are drawn in Sect. 8.
2 Synthetic Aperture Radar Tomography
The expression synthetic aperture radar (SAR) tomography (TomoSAR) is generally used to indicate a microwave imaging technology that focuses the illuminated scatterers in 3D space by processing data from multiple SAR acquisitions (Reigber and Moreira 2000). SAR tomographic imaging has received increasing attention in recent years from different research groups, in application fields such as 3D urban scenes, snow, ice sheets, glaciers, and of course forested areas (Reigber and Moreira 2000; Tebaldini 2009; Frey and Meier 2011a; Mariotti d'Alessandro and Tebaldini 2012; Ho Tong Minh et al. 2014a; Banda et al. 2016; Frey et al. 2016; Rekioua et al. 2017; Tebaldini et al. 2016b; Yitayew et al. 2017).
The rationale of TomoSAR is easily understood by considering it as an extension of SAR imaging from 2D to 3D. Conventional SAR systems transmit short pulses and receive the echoes backscattered by the illuminated targets along a single flight line, i.e., a 1D synthetic aperture in the SAR jargon. In this way, the received signal can be focused through digital signal processing techniques to produce a 2D image of the illuminated area, where targets are resolved in the range/azimuth plane (Curlander and McDonough 1991). TomoSAR imaging is based on the collection of multiple flight lines, i.e., a 2D synthetic aperture. This allows focusing the received signal not only in the range/azimuth plane, as in conventional 2D SAR imaging, but also in elevation. A sketch of this concept is shown in Fig. 2, where TomoSAR voxels are represented by the light blue ellipses.
TomoSAR geometry and tomographic voxels in the height/ground range plane. The interferometric baseline bn is defined as the projection of the vector connecting each sensor to the one chosen as common reference onto the normal to the Line of Sight of the radar, i.e., the direction connecting the reference sensor to the target. Baseline aperture bap is the total span of the available interferometric baselines
The geometrical resolution in the range and azimuth directions is the same as in conventional 2D SAR, that is:
$$\Delta r = \frac{c}{2B}$$
$$\Delta x = \frac{\lambda R}{2L_{\text{s}}}$$
where r and x indicate range and azimuth, respectively, c is the wave velocity in vacuum, B is the pulse bandwidth, λ is the carrier wavelength, Ls is the 1D synthetic aperture length (in azimuth), and R is the stand-off distance from the imaged target.
Resolution in elevation depends on the total length of the synthetic aperture in elevation, usually referred to as baseline aperture (Bamler and Hartl 1998), given by the formula:
$$\Delta e = \frac{\lambda R}{2b_{\text{ap}}}$$
where e indicates elevation and bap is the total baseline aperture.
Vertical resolution is then obtained by projecting the voxel along the vertical direction. In the case where range resolution is significantly finer than resolution in elevation (\(\Delta r \ll \Delta e\)), vertical resolution is expressed as
$$\Delta z = \Delta e \cdot \sin(\theta) = \frac{\lambda R}{2b_{\text{ap}}} \cdot \sin(\theta)$$
where θ is the incidence angle (Tebaldini et al. 2016a, b). A real-world example is provided in Fig. 3, which shows two tomographic sections that represent the tropical forest at La Lopé, Gabon, as seen at P-band. In both panels, the color scale is proportional to signal intensity (blue = low, red = high), and the superimposed white line indicates terrain topography from Lidar measurements. The two panels were obtained by processing a stack of ten P-band airborne SAR images acquired by DLR during the ESA campaign AFRISAR (AfriSAR 2017). In this case, vertical resolution ranges from approximately 8–15 m from near to far range.
Two tomographic vertical sections of the tropical forest at La Lopé, Gabon. The small image on the left represents a SAR image of the same area, where the two red lines indicate the position of the two tomographic sections on the right. The color scale is proportional to signal intensity (blue = low, red = high). The superimposed white line indicates terrain topography, as obtained by Lidar measurements. Data: AfriSAR at P-band. Data acquisition by DLR. Tomographic processing by PoliMi
2.1 Models and Algorithms for SAR Tomography
A focused SAR image can be generally linked to the vertical distribution of the illuminated scatterers through expressions that account for the distance between scattering elements and the flight trajectory. Assuming approximately parallel trajectories flown along the x-axis, SAR data can be expressed by the 2D model (Bamler and Hartl 1998; Tebaldini et al. 2016a, b):
$$I_{n} = \int s(y,z) \cdot \exp\left(j\frac{4\pi}{\lambda}R_{n}(y,z)\right)\,{\text{d}}y\,{\text{d}}z$$
where In is a complex-valued pixel from the SAR image acquired along the nth trajectory, s(y,z) represents the individual scattering elements in the height/ground range plane, and Rn is the distance from the scattering element at (y,z) to the radar sensor along the nth trajectory. The integral in Eq. (5) is limited to the circular crown (annulus) of radius R and thickness Δr centered on the nth trajectory, as shown by the red-dashed lines in Fig. 2 (Bamler and Hartl 1998; Tebaldini et al. 2016a, b). A convenient approximation to Eq. (5) is obtained by expanding the expression of Rn at the first order about a reference position, see in particular Tebaldini et al. (2016a, b), yielding:
$$I_{n} = \int P(e) \cdot \exp\left(j\frac{4\pi}{\lambda}\frac{b_{n}}{R} \cdot e\right)\,{\text{d}}e$$
where e is elevation, P(e) is the projection of s(y,z) along the elevation axis, and bn is the interferometric baseline, as shown in Fig. 2. Equation (6) provides immediate insight into the basic principle of SAR Tomography. Indeed, by defining the elevation wavenumber as (Tebaldini et al. 2016a, b):
$$K_{n} = \frac{4\pi }{\lambda R}b_{n}$$
it can immediately be seen that Eq. (6) states that the SAR data In are linked to the distribution of scattering elements along elevation P(e) via a Fourier transform. It then follows that P(e) can be simply retrieved by taking the (inverse) Fourier transform of SAR data with respect to the interferometric baseline, thus adding one dimension w.r.t. conventional SAR imaging (Reigber and Moreira 2000; Tebaldini 2009; Tebaldini et al. 2016a, b).
Following standard arguments from Fourier analysis, Eqs. (6) and (7) also provide an easy way to assess the performance of tomographic imaging depending on the distribution of available interferometric baselines bn. In the first place, resolution in elevation is obtained by taking the inverse of the spanned range of elevation wavenumbers (divided by 2π), leading to Eq. (3). Besides resolution, the other fundamental parameter for assessing the quality of tomographic imaging is the height of ambiguity, zamb, which represents the distance from a target along the vertical direction at which artifacts appear due to finite sampling in the wavenumber domain. By assuming a uniform baseline distribution of the form \(b_{n} = n \cdot \Delta b\), one gets that wavenumbers are sampled by \(\Delta K = \frac{4\pi }{\lambda R}\Delta b\), and hence the ambiguous height interval is
$$z_{\text{amb}} = \frac{\lambda R}{2\Delta b} \cdot \sin (\theta )$$
The fundamental requirement for correct tomographic imaging of forested areas is that the height of ambiguity \(z_{\text{amb}}\) is larger than forest height. In designing a tomographic survey, it is good practice to choose a baseline sampling \(\Delta b\) such that the height of ambiguity is roughly twice the forest height. The number of passes is then easily obtained by taking the ratio between Eqs. (8) and (4), or equivalently between the baseline aperture \(b_{\text{ap}}\) and the baseline sampling \(\Delta b\).
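To make these design rules concrete, the short Python sketch below evaluates Eqs. (3), (4) and (8) and the resulting number of passes for a set of illustrative, hypothetical system parameters (the numerical values are placeholders and do not correspond to any specific campaign or mission).

```python
import numpy as np

def tomo_design(wavelength, distance, incidence_deg, b_ap, db):
    """Evaluate Eqs. (3), (4) and (8) for a uniform baseline distribution.

    wavelength    : carrier wavelength [m]
    distance      : stand-off distance R [m]
    incidence_deg : incidence angle theta [deg]
    b_ap          : total baseline aperture [m]
    db            : baseline sampling (spacing between adjacent tracks) [m]
    """
    theta = np.deg2rad(incidence_deg)
    delta_e = wavelength * distance / (2.0 * b_ap)               # Eq. (3): resolution in elevation
    delta_z = delta_e * np.sin(theta)                            # Eq. (4): vertical resolution
    z_amb = wavelength * distance / (2.0 * db) * np.sin(theta)   # Eq. (8): height of ambiguity
    n_passes = int(round(b_ap / db)) + 1                         # uniform tracks spanning the aperture
    return delta_z, z_amb, n_passes

# Illustrative P-band airborne-like geometry (hypothetical values)
dz, zamb, n = tomo_design(wavelength=0.69, distance=6000.0,
                          incidence_deg=35.0, b_ap=120.0, db=15.0)
print(f"vertical resolution : {dz:5.1f} m")
print(f"height of ambiguity : {zamb:5.1f} m")
print(f"number of passes    : {n}")
```

With these illustrative numbers the height of ambiguity comes out at roughly twice a 40-m forest height, consistent with the rule of thumb given above.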
A large variety of approaches for tomographic processing is found in the literature. The most general and accurate approaches to tomographic imaging are obtained as 3D time domain back projection, which allow taking into account strongly irregular trajectories and very large baselines (Frey et al. 2009; Ponce et al. 2014; Tebaldini et al. 2016a, b). In most cases, however, tomographic imaging can be carried out by decoupling focusing in the range–azimuth plane from focusing in elevation. This approach allows casting tomographic processing in terms of a one-dimensional problem, as depicted in Eq. (6), resulting in a substantial advantage in terms of computational burden and enabling the employment of a large variety of techniques from spectral analysis (Gini et al. 2002; Budillon et al. 2011; Zhu and Bamler 2012; Aguilera et al. 2013; Huang et al. 2017). A most interesting aspect of these techniques is that they provide super-resolution capabilities, that is, the capability to resolve targets at a finer resolution than the one expressed in (4). Unfortunately, super-resolution is achieved at the expense of radiometric accuracy, which prevents the application of super-resolution techniques in a general context (Gini et al. 2002; Pardini and Papathanassiou 2017). Finally, a fundamental requirement to enable tomographic focusing is that the knowledge about the position of the radar along the trajectory in all flights is accurate enough to predict variations in the distance travelled by the wave to within an accuracy much better than the system wavelength. This accuracy is seldom met by current navigational systems concerning the location of one flight line with respect to another. As a result, SAR images are affected by space-varying phase disturbances, commonly referred to as phase screens, which produce blurring (Tebaldini and Monti Guarnieri 2010; Tebaldini et al. 2016a, b). For this reason, a preprocessing phase calibration step is quite often required before tomographic focusing (Tebaldini and Monti Guarnieri 2010; Tebaldini et al. 2016a, b; Azcueta and Tebaldini 2017; Mancon et al. 2017).
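As a concrete illustration of the Fourier relation in Eqs. (6) and (7), the minimal sketch below simulates a single range–azimuth pixel observed from several parallel tracks and recovers its vertical backscatter profile by plain (non-super-resolving) beamforming. It is an idealized example under the assumptions of perfectly known baselines, no noise, no phase screens and no temporal decorrelation; the geometry and the two scatterers are hypothetical values chosen for illustration only.

```python
import numpy as np

wavelength, R, theta = 0.69, 6000.0, np.deg2rad(35.0)    # hypothetical P-band geometry
baselines = np.arange(8) * 15.0                           # 8 parallel tracks, 15 m spacing
k = 4.0 * np.pi * baselines / (wavelength * R)            # elevation wavenumbers, Eq. (7)

# Simulated pixel: two scattering contributions along the vertical direction,
# e.g., a ground-level response and a canopy layer (illustrative values).
z_true = np.array([0.0, 30.0])                            # heights above ground [m]
amp_true = np.array([1.0, 0.7])
e_true = z_true / np.sin(theta)                           # heights projected to elevation

# Forward model, Eq. (6): each track sees the coherent sum of the scatterers.
I = (amp_true[None, :] * np.exp(1j * np.outer(k, e_true))).sum(axis=1)

# Tomographic focusing by beamforming: correlate the data with the steering
# vector of every candidate height (kept below the height of ambiguity).
z_grid = np.linspace(-10.0, 60.0, 281)
steering = np.exp(1j * np.outer(k, z_grid / np.sin(theta)))
profile = np.abs(steering.conj().T @ I) / len(baselines)

# Crude peak picking: the two strongest local maxima of the focused profile.
is_peak = (profile[1:-1] > profile[:-2]) & (profile[1:-1] > profile[2:])
peaks = z_grid[1:-1][is_peak]
print("estimated heights [m]:", np.sort(peaks[np.argsort(profile[1:-1][is_peak])[-2:]]))
```

In this noiseless setting the two recovered peaks fall at the simulated ground and canopy heights; spectral-analysis techniques would replace the plain beamformer in the last step when super-resolution is sought.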
3 An Introduction to Forest Scattering in the Microwave Regime
A most relevant feature of low-frequency microwaves is the capability to penetrate through the whole vegetation layer down to the underlying ground. As the wave travels through the vegetation, it interacts with leaves, branches, tree trunks, and finally the terrain, generating new waves that are scattered in all directions and further interact with the vegetation and the terrain. As a result, the radar signal can be modeled as the result of different scattering mechanisms (SM). Consistent with a large part of the literature, we will assume here four SMs, as pictorially depicted in Fig. 4 (Sarabandi 1992; Ulaby et al. 1998; Cloude and Pottier 1997; Freeman and Durden 1998; Treuhaft and Siqueira 2000; Papathanassiou and Cloude 2001; Smith-Jonforsen et al. 2005).
Pictorial view of the four main scattering mechanisms arising in forested areas. Light green: canopy backscattering. Yellow: terrain backscattering. Red: trunk–ground double-bounce scattering. Blue: canopy–ground double-bounce scattering
(i) Backscattering from the tree canopies. This SM is the result of direct backscattering from woody elements within the vegetation layer. Accordingly, it provides the most direct information about the vertical structure of the vegetation. The resulting signal is depolarized and hence presents with varying intensity in all polarimetric channels.
(ii) Backscattering from the terrain. This SM gives rise to a strongly polarized signal, for which the intensity is much larger in like-polarized returns than in cross-polarized ones. Especially at P-band, terrain scattering is weak compared to forest scattering and can usually be neglected.
(iii) Double-bounce scattering from trunk–ground interactions. This SM occurs as a result of the two specular reflections of the wave onto the tree trunks and the terrain, or vice versa. After the second reflection, the signal is conveyed back to the radar at varying intensity, depending on the tree characteristics and on topographic slope. Intensity is maximal when terrain topography is flat, whereas it tends to vanish on both positive and negative slopes (Smith-Jonforsen et al. 2005). It may be shown through geometrical arguments that this SM may be regarded as an equivalent point-like scatterer located at the tree trunk base. Hence, it appears at the terrain level in tomographic images. Another peculiar trait of trunk–ground scattering is found in the phase difference between HH and VV polarizations, which typically assumes values ranging from about 90° to 180°, depending on the Fresnel coefficients of the terrain and the tree trunks (Freeman and Durden 1998). Assuming flat topography, trunk–ground scattering is not expected to contribute at HV polarization (Freeman and Durden 1998).
(iv) Double-bounce scattering from canopy–ground interactions. This SM results from the waves bouncing off the ground in the direction of the radar after being scattered downward by vegetation elements within the canopy (woody elements at P-band), or vice versa. Canopy–ground scattering appears at the terrain level in tomographic images, as discussed in the case of trunk–ground scattering. The resulting signal is depolarized and hence presents with varying intensity in all polarimetric channels.
4 Sensitivity to Forest Structure
The first tomographic campaign carried out in the frame of BIOMASS studies is BIOSAR 2007, which took place at the Remningstorp forest site, in Southern Sweden (BIOSAR 2008). Prevailing tree species are Norway spruce, Scots pine and birch. The dominant soil type is till with a field layer, when present, of blueberry and narrow-thinned grass. Tree heights are on the order of 20 m, with emergent trees up to 30 m. The topography is fairly flat, terrain elevation above sea level ranging between 120 and 145 m. The acquisition campaign was carried out by DLR from March to May 2007 and comprises 14 fully polarimetric P-band SAR images. The horizontal baseline spacing is approximately 10 m, resulting in a maximum horizontal baseline of approximately 80 m and a vertical resolution ranging from approximately 10 m to 40 m from near to far range.
Two tomographic vertical sections of the Remningstorp forest are shown in the right-hand part of Fig. 5. The two vertical sections refer to the same area, but were obtained by processing different polarimetric channels. In both panels, the color scale is proportional to signal intensity (blue = low, red = high), and the superimposed lines indicate terrain topography (black) and canopy height (green) obtained from Lidar measurements. Looking at the vertical section at HH (top panel), the first feature that immediately catches the eye is that the highest levels of signal intensity (red areas) are found at terrain level, whereas the signal from the forest canopy is nearly undetectable. Consistently with the discussion in Sect. 3, this behavior clearly indicates that the signal in like-polarized channels is dominated by trunk–ground scattering, as also further confirmed by polarimetric analysis in Tebaldini (2009, 2010). The most surprising result, however, is that a similar behavior is observed at HV (bottom panel of Fig. 5), for which the brightest areas are again found at the ground level, and the signal scattered from the forest canopies is only barely detectable, indicating that double-bounce scattering is present at HV as well (Tebaldini 2010).
Tomographic vertical sections of the semi-boreal forest at Remningstorp, Southern Sweden. The color scale is proportional to signal intensity (blue = low, red = high). For visualization purposes, both panels were normalized such that the sum along each column is unitary. The superimposed lines indicate terrain topography and forest height, as obtained by Lidar measurements. Data: BioSAR 2007, P-band. Data acquisition by DLR
The subsequent tomographic campaign on boreal forest was BIOSAR 2008, flown in Northern Sweden at P-band and L-band (BIOSAR 2009). Tomographic data from BIOSAR 2008 confirmed the intuitions of BIOSAR 2007, revealing the presence of double-bounce scattering and allowing its dependence on topographic slope to be studied (Tebaldini and Rocca 2012).
The first tomographic campaign focused on tropical areas was TropiSAR, which was flown in summer 2009 at the two tropical sites of Paracou and Nouragues, French Guiana (TropiSAR 2011). The Paracou site was the first to be investigated using tomography. The forest at this site is classified as a lowland moist forest, with approximately 140–200 tree species per hectare. The tree top height reaches 45 m, with an average canopy height of about 30 m. The acquisition campaign was carried out by ONERA. The tomographic dataset at Paracou comprises six fully polarimetric P-band SAR images. Vertical resolution is approximately 20 m.
The right-hand panels of Fig. 6 report an HH and an HV tomographic vertical section of the forest in Paracou. For visualization purposes, both panels were resampled such that the terrain level is 0. It is immediately apparent how different these sections look compared to those from Remningstorp. Indeed, scattering from the forest canopies is well detectable at both polarizations. At HV, one can even see that the brightest areas are often found at the canopy level, rather than on the ground. Yet, it can also be appreciated that intensity does not vanish at the terrain level, demonstrating that, by and large, the wave is actually penetrating through the whole vegetation layer down to the ground. An in-depth analysis is reported in Mariotti d'Alessandro et al. (2013).
Tomographic vertical sections of the tropical forest at Paracou, French Guiana. The color scale is proportional to signal intensity (blue = low, red = high). For visualization purposes, both panels were normalized such that the sum along each column is unitary and resampled such that terrain level is 0 m. Data: TropiSAR 2009, P-band. Data acquisition by ONERA
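As noted in the captions of Figs. 5 and 6, the vertical sections are normalized for display so that the intensity in each column sums to one. A minimal sketch of this display normalization, assuming the tomographic section is stored as a (height × range) array of non-negative intensities, is given below.

```python
import numpy as np

def normalize_columns(section, eps=1e-12):
    """Normalize a (height x range) intensity section so that each column sums to one."""
    col_sum = section.sum(axis=0, keepdims=True)
    return section / np.maximum(col_sum, eps)   # eps guards against empty columns

# Placeholder data standing in for a real tomographic section (64 height bins x 512 range bins)
section = np.random.rand(64, 512)
section_norm = normalize_columns(section)
print(section_norm.sum(axis=0)[:5])             # each column now sums to ~1
```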
In conclusion, although the results in this section have only been discussed qualitatively, they demonstrate a most important point: SAR tomography is highly sensitive to forest structure. An in-depth assessment of the role of SAR tomography in the remote sensing of forested areas will be presented in the next sections.
5 The Link to Forest Aboveground Biomass
Aboveground biomass (AGB, in Mg of dry matter per hectare or in t/ha) is the mass of vegetation standing aboveground. It is a key quantity as it constitutes an important ecosystem service, but is also a major store of carbon in the biosphere. The first works that studied the link between SAR tomography and AGB in a quantitative manner are those by Ho Tong Minh et al. (2014a, 2016). Both papers are based on an analysis of the correlation between forest AGB available from in situ surveys and the intensity of tomographic horizontal sections corresponding to different heights w.r.t. the terrain. The analyzed datasets are the ones collected by ONERA during the TropiSAR campaign (TropiSAR 2011). The Paracou site was investigated in Ho Tong Minh et al. (2014a), whereas the Nouragues site was studied in Ho Tong Minh et al. (2016).
The Paracou site is located in a lowland tropical rain forest near Sinnamary. Terrain elevation is between 5 and 50 m, and mean annual temperature is 26 °C, with an annual range of 1–1.5 °C. The landscape is characterized by a relatively flat terrain, which is dissected by narrow streams. As mentioned in the last section, the forest in Paracou is classified as a lowland moist forest. The tree flora at Paracou exceeds 550 woody species attaining 2 cm diameter at breast height (DBH), as described in Molino and Sabatier (2001), and a single hectare of forest may harbor 140–200 tree species. Top-of-canopy height reaches up to 45 m with the average value around 30 m. The Nouragues Ecological Research Station is located 120 km south of Cayenne, French Guiana, and was established in 1986. This area is a protected natural reserve characterized by a lowland moist tropical rainforest (Sabatier and Prévost 1988; Van Der Meer and Bongers 1996). Recent floristic censuses have recorded over 660 species of trees above 10 cm in trunk diameter (DBH) in a 12-ha plot. The landscape is a succession of small hills, between 60 and 120 m asl, covered by pristine forest. One prominent feature of the landscape is the presence of a granitic hill, called an inselberg, with no vegetation at the top. Top-of-canopy height reaches up to 55 m with the average value around 35 m.
In situ forest AGB measurements were available in Paracou from 16 permanent plots established since 1984, and in Nouragues from two large, long-term permanent plots established in 1992–1994 and regularly surveyed to the present (Ho Tong Minh et al. 2016). At both test sites, plots were subdivided into 100 × 100 m subplots (1 ha), resulting in 85 plots in Paracou and 22 in Nouragues (Ho Tong Minh et al. 2016).
Tomographic data consisted of fully polarimetric P-band SAR images acquired on 14 August 2009 in Nouragues (5 flight tracks) and 10 days later in Paracou (6 flight tracks). Importantly, the tomographic flight lines were displaced in a vertical plane rather than in a horizontal plane, which helped limit spatial variations in vertical resolution across the scene swath (Dubois-Fernandez et al. 2012). This allowed both forest sites to be imaged at an approximately constant vertical resolution of 20 m without the need for super-resolution imaging techniques, thus preserving radiometric accuracy (Mariotti d'Alessandro et al. 2013; Ho Tong Minh et al. 2016).
A synthesis of the analysis carried out in Ho Tong Minh et al. (2014a, 2016) is presented in Fig. 7. The main results are summarized as follows:
Correlation between SAR and TomoSAR intensities and AGB at Paracou and Nouragues, French Guiana. Top two rows: 2D SAR intensity in dB (leftmost panel) and TomoSAR intensities in dB from ground level, 15 m and 30 m above ground level for Paracou and Nouragues. The same color scale is used for all panels. Polarization is HV in all panels. Bottom row: scatterplots between intensity and AGB (as derived from in situ observations). The size of each plot is approximately 1 ha. Figures taken from Ho Tong Minh et al. (2016)
2D SAR intensity is poorly correlated with AGB.
Tomographic intensity at 0 m is poorly and negatively correlated with AGB.
Tomographic intensity at 15 m is poorly correlated with AGB.
Tomographic intensity at 30 m is highly correlated with AGB. The observed sensitivity is ≈ 50 Mg/ha per dB.
Interestingly, the relation between TomoSAR intensity at 30 m and AGB becomes increasingly accurate by aggregating plots at a larger scale and produces a correlation coefficient of 0.97 for plot sizes of about 6 ha.
Based on these results, an inversion procedure was introduced in Ho Tong Minh et al. (2014a) to retrieve AGB by assuming a linear dependency on HV tomographic intensity at 30 m. The inversion was trained using a few 1-ha samples and validated with the remaining ones, resulting in a final total AGB error (RMSE) lower than 10% at 1 ha resolution. This approach was later extended in Ho Tong Minh et al. (2016), where cross-validation was carried out using training plots from Nouragues and validation plots from Paracou, and vice versa, resulting in a total AGB error at the two sites of around 16–18% at 1 ha resolution, as shown in Fig. 8.
TomoSAR biomass retrieval result based on cross-validations: comparison of retrieved AGB and in situ AGB. Left: Training in Nouragues and validation in Paracou. Right: Training in Paracou and Validation in Nouragues. Figures taken from Ho Tong Minh et al. (2016)
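The inversion described above amounts to a linear regression of plot-level AGB against HV tomographic intensity (in dB) at 30 m, trained on part of the plots and validated on the rest. The sketch below illustrates the procedure on synthetic placeholder numbers generated within the script (they are not the TropiSAR measurements); the assumed slope of about 50 Mg/ha per dB simply mirrors the sensitivity quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for plot data: HV tomographic intensity at 30 m (dB) and
# in situ AGB (Mg/ha). Real values would come from the tomographic processing
# and the field inventories; these are placeholders for illustration only.
intensity_db = rng.uniform(-14.0, -8.0, size=40)
agb = 350.0 + 50.0 * (intensity_db + 11.0) + rng.normal(0.0, 30.0, size=40)

# Train on half of the 1-ha plots, validate on the remaining ones.
train = np.arange(40) < 20
slope, offset = np.polyfit(intensity_db[train], agb[train], deg=1)
agb_pred = slope * intensity_db[~train] + offset

rmse = np.sqrt(np.mean((agb_pred - agb[~train]) ** 2))
rel = 100.0 * rmse / agb[~train].mean()
print(f"estimated sensitivity : {slope:.1f} Mg/ha per dB")
print(f"validation RMSE       : {rmse:.1f} Mg/ha ({rel:.1f}%)")
```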
After TropiSAR, the next campaign focused on tropical forests was AfriSAR, which was carried out in Gabon in 2015 and 2016 (AfriSAR 2017). The campaign was shared between ONERA (dry season, July 2015) and DLR (wet season, February 2016) and included tomographic acquisitions at a vertical resolution of about 10 m to 15 m at the forest sites of Lopé, Mondah, Mabounié, and Rabi. These four forest sites are characterized by different physical forest structure types, biomass levels, growth stages, and different levels and kinds of disturbance. All sites contain ground data (permanent tree inventories) together with aerial Lidar scanning of the regions of interest from which reference biomass data have been generated (Labrière et al. 2018). Lopé is located 250 km east of the Libreville airport, and it is characterized by a mosaic of forests and savannas. Forest AGB ranges between a few tons per hectare (in the case of open woody savannas) and up to 600 t/ha. Two tomographic vertical sections of the Lopé forest site are shown in Sect. 2, Fig. 3. Mondah is located 25 km north of the Libreville airport. It is a relatively young forest with high variability of density, including disturbed areas due to the proximity to the city. Tree height can exceed 50 m. Mabounié is located 180 km southeast of the Libreville airport. The landscape is mostly forested (including swamps and temporarily flooded areas), and most areas are rather hilly (altitude ranges between 25 and 230 m asl). Due to the presence of rare earths, mining exploration took place during the last decades, and many degraded areas are still visible. Rabi is located 260 km south of the Libreville airport. The area of interest contains a 25-ha permanent plot maintained by the Smithsonian Institution, for which extensive ground measurements are available. Next to this plot, an oil extraction area is present around which many degraded areas can be found.
Tomographic data have been generated using data acquired by ONERA at the sites of Lopé, Mondah, and Rabi, and correlated to in situ AGB following the same methodology as in Ho Tong Minh et al. (2014a). The results from AfriSAR are shown in Fig. 9, where we overlay those from TropiSAR for reference. Crucially, the results are observed to be consistent with those obtained in French Guiana, preserving a similar sensitivity and accuracy for AGB values larger than 200 t/ha.
Plot of TomoSAR intensity at 30 m against AGB at the five tropical forest sites in South America and Equatorial Africa
Accordingly, all the results obtained so far at five tropical forest sites in South America and Equatorial Africa clearly indicate that tomographic intensity at 30 m is dramatically more correlated with AGB than 2D SAR intensity. The observed sensitivity was found to be about 50 Mg/ha per dB across the range of AGB values from about 200 to 500 tons/ha.
Understanding the reasons behind this empirical result is of primary relevance to achieving a more complete understanding of forest scattering at P-band. This, in turn, is of crucial importance in the context of the BIOMASS mission. A better understanding is expected to result in better algorithms for AGB retrieval and to limit as much as possible the need for external calibration data. In the remainder of this section, we discuss two possible reasons, of both ecological and electromagnetic nature, that could help explain the physics underlying the observations:
30 m is a biophysically relevant height in dense tropical forests.
Ground scattering acts as a noise factor on 2D SAR intensity, limiting its sensitivity to AGB. This disturbing factor is most efficiently canceled out in tomographic intensity at 30 m, which would explain the dramatic increase in sensitivity.
5.1 The ecological reason: the role of the 30 m layer
Given the vertical resolution of the data considered in this paper, tomographic intensity at 30 m accounts approximately for scatterers in the layer from 20 to 40 m above the terrain. Accordingly, the question is whether there is any biophysical reason connecting this layer to total AGB.
To answer this question, we assume a simple structural model of tropical rain forests, which accounts for five layers: the overstorey, the main canopy, the understory, the shrub layer, and the forest floor. This structure is classically observed in aerial Lidar scanning, and even more precisely using terrestrial Lidar scanning. The overstorey refers to the crowns of emergent trees which are above the rest of the canopy (above 40 m). The canopy is the dense ceiling of closely packed trees and their branches centered at about 30 m, while the understory denotes more widely spaced, smaller tree species and young individuals that form a broken layer below the canopy (below 20 m). The shrub layer is characterized by shrubby species and young trees that grow only 2–6 m off the forest floor.
The canopy layer is the principal site for the interchange of heat, water vapor, and atmospheric gases. Under the canopy, there is little direct sunlight due to the extinction of the light through the canopy layer. For these reasons, it is expected that the layer from 20 to 40 m contains a major part of the leaves and a large proportion of woody elements, including trunks and most of the branches (primary, secondary, and higher order) that contribute to the total AGB. Still, the question remains whether the fraction of biomass contained in the 30-m layer is actually representative of the total AGB.
A first answer to this question was given in Ho Tong Minh et al. (2014a) by assuming a forest structure as derived from the TROLL model (Chave 1999; Maréchaux and Chave 2017), which is a spatially explicit forest growth simulator designed to study structural, successional and spatial patterns in natural tropical forests. The model includes competition for light, treefall gap formation and recruitment, which are the critical phenomena in the morphology of tropical forests. The parameters of the model for the species groups have been determined using field data in French Guiana. As a result, an area of 400 × 400 m2 was generated, from which biomass between 20 and 40 m was extracted and compared to total AGB. Simulations showed that biomass contained in the 20–40 m layer is about 40% of the total AGB, and that it is strongly correlated (rp = 0.92) to total AGB over the whole range of AGB from 250 to 700 t/ha (Ho Tong Minh et al. 2014a).
Most interestingly, this result is confirmed by the recent reanalysis of high-resolution discrete-return airborne Lidar data at nine tropical sites in South America (Meyer 2018), which analyzes the correlation between AGB and the area occupied at different heights by large trees. Correlation (R2) was found to be maximum at a height of 27–30 m at all nine study sites (Meyer 2018). In conclusion, both ecological modeling and empirical Lidar measurements support the idea that the 30-m layer does actually play a special role in tropical forests, as the fraction of biomass included in it provides a reliable proxy to total AGB.
5.2 The EM reason: the role of ground scattering
The behavior of ground scattering in a tropical forest was thoroughly investigated in Mariotti d'Alessandro et al. (2013) based on the tomographic P-band dataset acquired by ONERA at the forest site of Paracou, French Guiana.
The analysis in that paper focused on the variation in backscattered intensity and copolar phase (i.e., the phase between HH and VV) w.r.t. ground range slope, considering both 2D SAR intensity and Tomographic data at ground level, see Fig. 10.
Top row: backscattered intensity, copolar phase, and 2D histogram relating backscattered intensity and copolar phase for the original (i.e., non-tomographic) SLC data. Bottom row: backscattered intensity, copolar phase, and 2D histogram relating backscattered intensity and copolar phase for tomographic data corresponding to ground level. Leftmost panel: ground range slope: positive-valued pixels indicate the terrain is tilted toward the radar. Figure taken from Mariotti d'Alessandro et al. (2013)
Figure 10 shows that both the intensity and the copolar phase are modulated by ground range slope. In particular, in flat areas, intensity is seen to increase and the copolar phase approaches − 180°, which is a clear indication of the occurrence of double-bounce scattering from ground–trunk interactions (Mariotti d'Alessandro et al. 2013). With tomographic data, the variation in intensity associated with double-bounce scattering was accurately estimated as about 5 dB (see the bottom panels of Fig. 10).
Further analysis based on physical optics showed that the characteristic parameter that rules ground–trunk scattering is not only tree height, but also the length of the base available for ground reflections (Mariotti d'Alessandro et al. 2013), which indicates that this phenomenon is connected to the presence of nearby trees, understory, and undulating topography. The immediate conclusion that can be drawn from this analysis is that even in a dense tropical forest, double-bounce scattering from ground–trunk interactions is relevant on flat areas. Another conclusion is that ground–trunk scattering in tropical forests is strongly connected to local topography, and also to local forest features, such as tree density, and density of the understory. Moreover, although not considered in Mariotti d'Alessandro et al. (2013), soil moisture and canopy–ground scattering are expected to play a role, as predicted by EM models (Truong-Loi et al. 2015).
These conclusions are consistent with the results in Ho Tong Minh et al. (2014a, 2016), showing that ground scattering is poorly and negatively correlated to AGB. Indeed, a slight decay of ground scattering with increasing AGB can be explained by assuming that total wave attenuation increases with AGB. However, as noted above, ground scattering is also determined by several other factors, which necessarily weakens the correlation to AGB.
6 Forest Height and Terrain Topography
The retrieval of canopy height using SAR tomography has been considered since the early experiments in 2007. The basic principle enabling the use of SAR tomography for this task is immediately understood by considering tomographic vertical sections, such as those in Figs. 3, 5, and 6. Indeed, wave scattering from forested areas is bound to occur between the terrain and the top of the canopy. Hence, canopy height can be retrieved, at least in principle, by tracing the upper envelope in tomographic sections. In practice, the upper envelope is extracted after interpolating the tomographic sections in ground coordinates with respect to the DTM, that is, by setting the terrain level at z = 0 m, see for example (Tebaldini and Rocca 2012). An advantage of this approach is that it is less computationally expensive than model-based inversion and can be applied in the absence of a specific model of the forest vertical structure. In the recent literature, this approach has been successfully applied on boreal and tropical forests, using data from BIOSAR 2008 and TropiSAR. Performance was assessed based on a pixel-to-pixel comparison against Lidar measurements and resulted in an accuracy of about 3 m at a resolution of 60 × 60 m2 in the case of boreal forests (Tebaldini and Rocca 2012) and about 2.5 m at a resolution of 60 × 60 m2 in the case of tropical forests (Ho Tong Minh et al. 2016). The achievement of a better accuracy in the latter case is most likely due to the better vertical resolution of TropiSAR data w.r.t. BIOSAR 2008. In Fig. 11, we report a recent example of the application of this approach to the case of tropical forests using data from AfriSAR. Performance assessment of the TomoSAR height retrieval algorithm is part of an ongoing study.
Retrieval of forest height at Nouragues and Paracou. Left panels: forest height from Lidar survey. Middle panels: forest height by tracing the envelope of tomographic intensities. Right panels: relative error \(\frac{|H_{\text{Tomo}} - H_{\text{Lidar}}|}{H_{\text{Lidar}}}\). Data: TropiSAR. Data acquisition by ONERA. Figures taken from Ho Tong Minh et al. (2016)
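A simple way to trace the upper envelope described above is to scan each terrain-referenced vertical profile from the top down and take the highest bin at which the backscattered power exceeds a given fraction of the profile peak. The sketch below follows this idea; the −10 dB threshold and the synthetic two-layer profile are illustrative assumptions, not the settings used in the cited studies.

```python
import numpy as np

def canopy_height(power, z_axis, threshold_db=-10.0):
    """Upper-envelope height of one terrain-referenced vertical power profile.

    power        : backscattered power along the vertical axis (linear units)
    z_axis       : heights above the terrain [m], in increasing order
    threshold_db : level below the profile peak that defines the envelope
    """
    thr = power.max() * 10.0 ** (threshold_db / 10.0)
    above = np.nonzero(power >= thr)[0]
    return z_axis[above[-1]] if above.size else 0.0    # topmost bin above threshold

# Illustrative profile: a ground response at 0 m plus a canopy layer around 25 m,
# both modeled as Gaussian lobes to mimic the finite vertical resolution.
z = np.arange(-10.0, 60.0, 1.0)
profile = np.exp(-0.5 * (z / 5.0) ** 2) + 0.6 * np.exp(-0.5 * ((z - 25.0) / 6.0) ** 2)
print(f"estimated canopy height: {canopy_height(profile, z):.1f} m")
```

In practice the same operation would be applied at every range–azimuth position of the terrain-referenced tomographic cube, typically followed by spatial averaging to the validation resolution (e.g., 60 × 60 m2 in the studies cited above).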
In addition to forest height, SAR tomography is also being investigated to retrieve a digital terrain model. This task was first considered for the case of boreal forests in Tebaldini (2010), where terrain topography was retrieved by assuming a parametric model. In that paper, accuracy was assessed to be about 2 m by comparison against Lidar measurements of ground elevation (Tebaldini 2010). In dense tropical areas, terrain topography retrieval was first considered in Mariotti d'Alessandro et al. (2013). In this case, retrieval was obtained by jointly processing tomographic data at different polarizations, according to the procedure introduced in Tebaldini (2009). An accurate assessment of terrain topography was not carried out in Mariotti d'Alessandro and Tebaldini (2012); however, terrain localization in that work was accurate enough to enable the later developments in Mariotti d'Alessandro et al. (2013), Ho Tong Minh et al. (2014a, 2016). A recent assessment of the accuracy achievable in tropical areas based on data from the AfriSAR campaign is reported in Pardini et al. (2018), Mariotti d'Alessandro and Tebaldini (2018a, 2019). The results indicate a final accuracy comparable to that of Lidar systems, supporting the idea that SAR tomography will be useful in the future as an alternative to Lidar mapping. An example is shown in Fig. 12.
Retrieval of sub-canopy terrain topography. Left: DTM by SAR tomography. Right: Digital Terrain Model (DLR) from Lidar survey. Data: TropiSAR. Data acquisition by ONERA
7 BIOMASS Tomography
BIOMASS was selected in 2013 to be ESA's 7th Earth Explorer Core Mission. The mission primary objective is to generate accurate maps of forest biomass and height at a global scale. BIOMASS will be implemented as a fully polarimetric SAR operating at P-band, taking advantage of the understorey penetration capabilities of P-band wavelengths (ESA 2012). Launch date is currently planned in 2022. The satellite will operate in two different observation phases, referred to as tomographic and interferometric phase, respectively. The tomographic phase will operate during the first 14 months of the mission lifetime. In this phase, the satellite will be orbited to provide seven consecutive acquisitions per site from slightly different points of view, hence enabling TomoSAR imaging. Geographical coverage during the tomographic phase will be global, and the seven passes will be acquired with a time lag of 3 days from one another. In the subsequent interferometric phase, the satellite orbits will be modified to achieve faster coverage. This phase will produce three consecutive acquisitions per site at a revisit time of 3 days, enabling AGB and forest height retrieval by SAR Polarimetry and Polarimetric Interferometry (ESA 2012).
The achievement of radiometrically and geometrically accurate tomographic products in the context of BIOMASS is a challenging task. Indeed, one has to account for several potentially damaging factors w.r.t. the airborne case, including degraded signal-to-noise ratio, coarser spatial resolution, propagation through the ionosphere, and temporal decorrelation effects due to the fact that the seven acquisitions needed to implement SAR tomography will be collected in 18 days.
The impact of system range and azimuth resolution has drawn attention since early experiments. Indeed, airborne campaign data provide a resolution on the order of 1 m, whereas BIOMASS will generate data at a resolution of about 10 m in azimuth and only 25 m in range, due to the 6 MHz bandwidth limit imposed by ITU regulations (ESA 2012). This disparity in spatial resolution has stimulated the development of techniques to simulate BIOMASS acquisitions based on campaign data. The feasibility of tomographic imaging at BIOMASS resolution was first demonstrated in Tebaldini and Rocca (2012), where it was shown that forest height in boreal forests could be retrieved to within an accuracy better than 4 m based on simulated BIOMASS data. The next relevant study on this subject is the one published in Ho Tong Minh et al. (2015a), based on simulated BIOMASS acquisitions on tropical forests derived from the TropiSAR campaign. In that paper, the correlation between in situ AGB and tomographic intensity at 30 m was observed to be about 0.84 for plot sizes of about 6 ha when using BIOMASS data, whereas it was 0.97 using airborne data at full resolution (see Sect. 5), hence providing evidence that tomography is expected to play a key role in the context of a spaceborne mission as well. Figure 13 shows an example of a tomographic vertical section obtained from synthetic BIOMASS data derived from airborne data (Mariotti d'Alessandro and Tebaldini 2018b).
Airborne tomography (top panel) and BIOMASS tomography from synthetic data derived from real airborne data (bottom panel). Data: TropiSAR P-band. Data acquisition by ONERA
The impact of temporal decorrelation was thoroughly studied in the frame of the long-term TropiSCAT campaign, a P-band fully polarimetric ground-based experiment installed at the forest site of Paracou, French Guiana, whose goal was to analyze the vertical distribution of temporal decorrelation in tropical forests. The system consists of 20 antennas installed on the 55-m high Guyaflux tower, each of which can be used as a transmitter or a receiver to form an equivalent monostatic vertical array of 15 elements for each polarization. The system was operated to collect tomographic data every 15 min for an overall period of about 1 year (Ho Tong Minh et al. 2013, 2014b). The results published in Ho Tong Minh et al. (2014b, 2015b) indicated that the degradation of tomographic imaging due to temporal decorrelation is acceptable as long as the time lag between two consecutive acquisitions is 4 days or less. A recent study considered the emulation of BIOMASS tomography by mixing acquisitions from TropiSCAT gathered every 3 days, including sunny and rainy days. The results showed a total radiometric error due to temporal decorrelation of 1–1.5 dB, which would entail a biomass retrieval error of about 20% or better at spatial scales on the order of 6 ha (Bai et al. 2018).
8 Conclusions

SAR tomography is an increasingly studied technique to image the 3D structure of natural media, such as snow, ice, and vegetation. In the context of forest remote sensing, a most important finding is that tomographic intensity is significantly better correlated to forest AGB in tropical forests than conventional 2D SAR intensity, as observed in experimental data from two tropical sites in South America and three in equatorial Africa. Two possible reasons, one biophysical and one electromagnetic, were considered to provide physical support to this finding: (i) 30 m is a biophysically relevant height in dense tropical forests, and (ii) ground scattering acts as a disturbing factor that needs to be removed. The following conclusions were drawn:
The hypothesis that 30 m is a biophysically relevant height in dense tropical forests is strongly supported by ecological modeling (Chave 1999) and by a reanalysis of Lidar data across Amazonia as published in Meyer (2018).
Ground scattering (which includes terrain scattering and double scattering from trunk–ground and canopy–ground interactions) appears to be determined by a complex set of factors other than forest biomass, including local topography, tree density, understorey, and soil moisture. For this reason, it appears unlikely that ground scattering can be directly related to AGB in an operational context, at least in the absence of specific knowledge about local terrain and vegetation features.
In summary, tomography appears to bring the most complete information about AGB in tropical forests by virtue of its ability to single out the returns from different layers within the vegetation while rejecting ground scattering.
Besides forest biomass, SAR tomography was also demonstrated to be a powerful tool for mapping canopy height and sub-canopy terrain topography, as shown by studies conducted at both boreal and tropical sites. Based on results from the recent literature, forest height can be retrieved by SAR tomography to within an accuracy better than 3 m in tropical forests, as validated through a pixel-to-pixel comparison against Lidar data. Retrieval of terrain topography under forests has been shown to be possible in both boreal forests and dense tropical forests, supporting the idea that SAR tomography might be used in the near future as an alternative to Lidar mapping.
Tomography analyses of forested areas will be implemented for the first time from space during the tomographic phase of the BIOMASS mission. The tomographic phase of BIOMASS will last for the first 14 months of mission lifetime, providing global geographical coverage and enabling tomographic imaging with seven passes acquired with a time lag of 3 days from one another. Notwithstanding many limitations w.r.t. airborne tomography, mostly arising from the coarser resolution and increased temporal decorrelation, studies based on simulated BIOMASS data derived from airborne campaigns indicate that accurate tomographic imaging is feasible, and support the idea that forest AGB in tropical forests could be retrieved to within 20% accuracy at spatial scales on the order of 6 ha.
Acknowledgements
All of the results presented within this paper were obtained in the frame of studies funded by the European Space Agency (ESA) in support of the BIOMASS mission, and we acknowledge ESA for the support it gave to the research on SAR Tomography over the last decade. This paper stemmed from the most fruitful Forest Properties Workshop organized in November 2017 in Bern (CH) by the International Space Science Institute (ISSI), which we wish to warmly acknowledge for this initiative. We also acknowledge various funding sources including CNES (France, TOSCA) and an "Investissement d'Avenir" program managed by Agence Nationale de la Recherche (CEBA, Ref. ANR-10-LABX-25-01).
References

AFRISAR (2017) Technical assistance for the deployment of airborne SAR and geophysical measurements during the AFRISAR experiment. Final Report to ESA, 2017
Aguilera E, Nannini M, Reigber A (2013) Wavelet-based compressed sensing for SAR tomography of forested areas. IEEE Trans Geosci Remote Sens 51(12):5283–5295
Azcueta M, Tebaldini S (2017) Non-cooperative bistatic SAR clock drift compensation for tomographic acquisitions. Remote Sensing 9(11):1087
Bai Y, Tebaldini S, Minh DHT, Yang W (2018) An empirical study on the impact of changing weather conditions on repeat-pass SAR tomography. IEEE J Sel Top Appl Earth Obs Remote Sens 11(10):3505–3511
Bamler R, Hartl P (1998) Synthetic aperture radar interferometry. Inverse Problems 14:R1–R54
Banda F, Dall J, Tebaldini S (2016) Single and multipolarimetric P-band SAR tomography of subsurface ice structure. IEEE Trans Geosci Remote Sens 54(5):2832–2845
BIOSAR (2008) Technical assistance for the deployment of airborne SAR and geophysical measurements during the BIOSAR 2007 experiment. Final Report to ESA, 2008
BIOSAR (2009) Technical assistance for the deployment of airborne SAR and geophysical measurements during the BIOSAR 2008 experiment. Final Report to ESA, 2009
Blomberg E, Ferro-Famil L, Soja MJ, Ulander LMH, Tebaldini S (2018) Forest biomass retrieval from L-band SAR using tomographic ground backscatter removal. IEEE Geosci Remote Sens Lett. https://doi.org/10.1109/lgrs.2018.2819884
Budillon A, Evangelista A, Schirinzi G (2011) Three-dimensional SAR focusing from multipass signals using compressive sampling. IEEE Trans Geosci Remote Sens 49(1):488–499
Chave J (1999) Study of structural, successional and spatial patterns in tropical rain forests using TROLL, a spatially explicit forest model. Ecol Modell 124(2–3):233–254
Cloude SR, Pottier E (1997) An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans Geosci Remote Sens 35(1):68–78
Curlander JC, McDonough RN (1991) Synthetic aperture radar. Wiley Interscience, New York
Dubois-Fernandez PC et al (2012) The TropiSAR airborne campaign in French Guiana: objectives, description, and observed temporal behavior of the backscatter signal. IEEE Trans Geosci Remote Sens 50(8):3228–3241. https://doi.org/10.1109/TGRS.2011.2180728
ESA (2012) Report for mission selection: biomass. ESA SP-1324/1 (3 volume series), European Space Agency, Noordwijk
ESA (2015) SAOCOM-CS mission science document, EOP-SM/2764. ESA, Noordwijk
Freeman A (2007) Fitting a two-component scattering model to polarimetric SAR data from forests. IEEE Trans Geosci Remote Sens 45(8):2583–2592
Freeman A, Durden SL (1998) A three-component scattering model for polarimetric SAR data. IEEE Trans Geosci Remote Sens 36(3):963–973
Frey O, Meier E (2011a) Analyzing tomographic SAR data of a forest with respect to frequency, polarization, and focusing technique. IEEE Trans Geosci Remote Sens 49(10):3648–3659
Frey O, Meier E (2011b) 3-D time-domain SAR imaging of a forest using airborne multibaseline data at L- and P-bands. IEEE Trans Geosci Remote Sens 49(10):3660–3664
Frey O, Magnard C, Ruegg M, Meier E (2009) Focusing of airborne synthetic aperture radar data from highly nonlinear flight tracks. IEEE Trans Geosci Remote Sens 47(6):1844–1858
Frey O, Werner CL, Caduff R, Wiesmann A (2016) A time series of SAR tomographic profiles of a snowpack. In: Proceedings of EUSAR 2016: 11th European conference on synthetic aperture radar, Hamburg, Germany, 2016, pp 1–5
Gatti G, Tebaldini S, Mariotti d'Alessandro M, Rocca F (2011) ALGAE: a fast algebraic estimation of interferogram phase offsets in space-varying geometries. IEEE Trans Geosci Remote Sens 49(6):2343–2353
Gini F, Lombardini F, Montanari M (2002) Layover solution in multibaseline SAR interferometry. IEEE Trans Aerosp Electron Syst 38(4):1344–1356
Ho Tong Minh D, Tebaldini S, Rocca F, Koleck T, Borderies P, Albinet C, Villard L, Hamadi A, Le Toan T (2013) Ground-based array for tomographic imaging of the tropical forest in P-band. IEEE Trans Geosci Remote Sens 51(8):4460–4472
Ho Tong Minh D, Le Toan T, Rocca F, Tebaldini S, D'Alessandro MM, Villard L (2014a) Relating P-band synthetic aperture radar tomography to tropical forest biomass. IEEE Trans Geosci Remote Sens 52(2):967–979
Ho Tong Minh D, Tebaldini S, Rocca F, Le Toan T, Borderies P, Koleck T, Albinet C, Hamadi A, Villard L (2014b) Vertical structure of P-band temporal decorrelation at the Paracou forest: results from TropiSCAT. IEEE Geosci Remote Sens Lett 11(8):1438–1442
Ho Tong Minh D, Tebaldini S, Rocca F, Le Toan T, Villard L, Dubois-Fernandez PC (2015a) Capabilities of BIOMASS tomography for investigating tropical forests. IEEE Trans Geosci Remote Sens 53(2):965–975
Ho Tong Minh D, Tebaldini S, Rocca F, Le Toan T (2015b) The impact of temporal decorrelation on BIOMASS tomography of tropical forests. IEEE Geosci Remote Sens Lett 12(6):1297–1301
Ho Tong Minh D, Le Toan T, Rocca F, Tebaldini S, Villard L, Réjou-Méchain M, Phillips OL, Feldpausch TR, Dubois-Fernandez P, Scipal K, Chave J (2016) SAR tomography for the retrieval of forest biomass and height: cross-validation at two tropical forest sites in French Guiana. Remote Sens Environ 175:138–147
Huang Y, Levy-Vehel J, Ferro-Famil L, Reigber A (2017) Three-dimensional imaging of objects concealed below a forest canopy using SAR tomography at L-band and wavelet-based sparse estimation. IEEE Geosci Remote Sens Lett 14(9):1454–1458
Labrière N, Tao S, Chave J, Scipal K, Le Toan T, Abernethy K, Alonso A, Barbier N, Bissiengou P, Casal T, Davies SJ, Ferraz A, Hérault B, Jaouen G, Jeffery KJ, Kenfack D, Korte L, Lewis SL, Malhi Y, Memiaghe HR, Poulsen JR, Réjou-Méchain M, Villard L, Vincent G, White LJT, Saatchi S (2018) In situ reference datasets from the TropiSAR and AfriSAR campaigns in support of upcoming spaceborne biomass missions. JSTARS 2018 (in press)
Lavalle M, Hawkins B, Hensley S (2017) Tomographic imaging with UAVSAR: current status and new results from the 2016 AfriSAR campaign. In: 2017 IEEE international geoscience and remote sensing symposium (IGARSS)
Lin Y-C, Sarabandi K (1992) Electromagnetic scattering model for a tree trunk above a tilted ground plane. IEEE Trans Geosci Remote Sens 33(4):1063–1070
Mancon S, Monti Guarnieri A, Giudici D, Tebaldini S (2017) On the phase calibration by multisquint analysis in TOPSAR and stripmap interferometry. IEEE Trans Geosci Remote Sens 55(1):134–147
Maréchaux I, Chave J (2017) An individual-based forest model to jointly simulate carbon and tree diversity in Amazonia: description and applications. Ecol Monogr 87:632–664. https://doi.org/10.1002/ecm.1271
Mariotti d'Alessandro M, Tebaldini S (2018a) Retrieval of terrain topography in tropical forests using P-band SAR tomography. In: IGARSS 2018, Valencia, 2018
Mariotti d'Alessandro M, Tebaldini S (2018b) Cross sensor simulation of tomographic SAR data. In: IGARSS 2018–2018 IEEE international geoscience and remote sensing symposium, Valencia, 2018, pp 8695–8698
Mariotti d'Alessandro M, Tebaldini S (2019) Digital terrain model retrieval in tropical forests through P-band SAR tomography. IEEE Trans Geosci Remote Sens (Early Access)
Mariotti d'Alessandro M, Tebaldini S (2012) Phenomenology of P-band scattering from a tropical forest through three-dimensional SAR tomography. IEEE Geosci Remote Sens Lett 9(3):442–446
Mariotti d'Alessandro M, Tebaldini S, Rocca F (2013) Phenomenology of ground scattering in a tropical forest through polarimetric synthetic aperture radar tomography. IEEE Trans Geosci Remote Sens 51(8):4430–4437
Meyer V et al (2018) Canopy area of large trees explains aboveground biomass variations across nine neotropical forest landscapes. Biogeosci Discuss. https://doi.org/10.5194/bg-2017-547. https://www.biogeosciences-discuss.net/bg-2017-547/bg-2017-547.pdf
Molino JF, Sabatier D (2001) Tree diversity in tropical rain forests: a validation of the intermediate disturbance hypothesis. Science 294:1702–1704
Moreira A (2014) A golden age for spaceborne SAR systems. In: 2014 20th international conference on microwaves, radar and wireless communications (MIKON), Gdansk, 2014, pp 1–4. https://doi.org/10.1109/MIKON.2014.6899903
Moreira A et al (2015) Tandem-L: a highly innovative bistatic SAR mission for global observation of dynamic processes on the earth's surface. IEEE Geosci Remote Sens Mag 3(2):8–23
Paillou P, Grandjean G, Baghdadi N, Heggy E, August-Bernex Th, Achache J (2003) Sub-surface imaging in central-southern Egypt using low frequency radar: Bir Safsaf revisited. IEEE Trans Geosci Remote Sens 41(7):1672–1684
Papathanassiou KP, Cloude SR (2001) Single baseline polarimetric SAR interferometry. IEEE Trans Geosci Remote Sens 39(11):2352–2363
Pardini M, Papathanassiou K (2017) On the estimation of ground and volume polarimetric covariances in forest scenarios with SAR tomography. IEEE Geosci Remote Sens Lett 14(10):1860–1864
Pardini M, Cantini A, Lombardini F, Papathanassiou K (2014) 3-D structure of forests: first analysis of tomogram changes due to weather and seasonal effects at L-band. In: EUSAR 2014: 10th European conference on synthetic aperture radar, Berlin, Germany, 2014, pp 1–4
Pardini M, Tello M, Cazcarra-Bes V, Papathanassiou KP, Hajnsek I (2018) L- and P-band 3-D SAR reflectivity profiles versus Lidar waveforms: the AfriSAR case. IEEE J Sel Top Appl Earth Obs Remote Sens 11(10):3386–3401
Ponce O, Prats-Iraola P, Scheiber R, Reigber A, Moreira A, Aguilera E (2014) Polarimetric 3-D reconstruction from multicircular SAR at P-band. IEEE Geosci Remote Sens Lett 11(4):803–807
Reigber A, Moreira A (2000) First demonstration of airborne SAR tomography using multibaseline L-band data. IEEE Trans Geosci Remote Sens 38(5):2142–2152
Rekioua B, Davy M, Ferro-Famil L, Tebaldini S (2017) Snowpack permittivity profile retrieval from tomographic SAR data. C R Phys 18(1):57–65
Sabatier D, Prévost MF (1988) Quelques données sur la composition floristique et la diversité des peuplements forestiers de Guyane Française. Bois et Forêts des Tropiques 219:31–55
Sarabandi K (1992) Scattering from dielectric structures above impedance surfaces and resistive sheets. IEEE Trans Antennas Propag 40(1):67–78
Smith-Jonforsen G, Ulander L, Luo X (2005) Low VHF-band backscatter from coniferous forests on sloping terrain. IEEE Trans Geosci Remote Sens 43(10):2246–2260
Tebaldini S (2009) Algebraic synthesis of forest scenarios from multibaseline PolInSAR data. IEEE Trans Geosci Remote Sens 47(12):4132–4142
Tebaldini S (2010) Single and multipolarimetric SAR tomography of forested areas: a parametric approach. IEEE Trans Geosci Remote Sens 48(5):2375–2387
Tebaldini S, Monti Guarnieri A (2010) On the role of phase stability in SAR multibaseline applications. IEEE Trans Geosci Remote Sens 48(7):2953–2966
Tebaldini S, Rocca F (2012) Multibaseline polarimetric SAR tomography of a boreal forest at P- and L-bands. IEEE Trans Geosci Remote Sens 50(1):232–246
Tebaldini S, Rocca F, Mariotti d'Alessandro M, Ferro-Famil L (2016a) Phase calibration of airborne tomographic SAR data via phase center double localization. IEEE Trans Geosci Remote Sens 54(3):1775–1792
Tebaldini S, Nagler T, Rott H, Heilig A (2016b) Imaging the internal structure of an alpine glacier via L-band airborne SAR tomography. IEEE Trans Geosci Remote Sens 54(12):7197–7209
Toraño Caicoya A, Pardini M, Hajnsek I, Papathanassiou K (2015) Forest above-ground biomass estimation from vertical reflectivity profiles at L-band. IEEE Geosci Remote Sens Lett 12(12):2379–2383
Treuhaft RN, Siqueira PR (2000) Vertical structure of vegetated land surfaces from interferometric and polarimetric radar. Radio Sci 35(1):141–177. https://doi.org/10.1029/1999RS900108
TROPISAR (2011) Technical assistance for the deployment of airborne SAR and geophysical measurements during the TROPISAR 2009 experiment. Final Report to ESA, 2011
Truong-Loï ML, Saatchi S, Jaruwatanadilok S (2015) Soil moisture estimation under tropical forests using UHF radar polarimetry. IEEE Trans Geosci Remote Sens 53(4):1718–1727
Ulaby F, McDonald K, Sarabandi K, Dobson M (1988) Michigan microwave canopy scattering models (MIMICS). In: International geoscience and remote sensing symposium, IGARSS'88. Remote sensing: moving toward the 21st century, vol 2, pp 1009–1009
Van Der Meer PJ, Bongers F (1996) Patterns of tree-fall and branch-fall in a tropical rain forest in French Guiana. J Ecol 84(1):19–29
Yitayew TG, Ferro-Famil L, Eltoft T, Tebaldini S (2017) Lake and fjord ice imaging using a multifrequency ground-based tomographic SAR system. IEEE J Sel Top Appl Earth Obs Remote Sens 10(10):4457–4468
Zhu XX, Bamler R (2012) Super-resolution power and robustness of compressive sensing for spectral estimation with application to spaceborne tomographic SAR. IEEE Trans Geosci Remote Sens 50(1):247–258
1. Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
2. UMR TETIS, IRSTEA, University of Montpellier, Montpellier, France
3. Centre d'Etudes Spatiales de la Biosphère, CNRS-CNES-Université Paul Sabatier-IRD, Toulouse Cedex 9, France
4. Laboratoire Evolution et Diversité Biologique, UMR5174, CNRS, Université Paul Sabatier, IRD, Toulouse Cedex 9, France
Tebaldini, S., Ho Tong Minh, D., Mariotti d'Alessandro, M. et al. Surv Geophys (2019) 40: 779. https://doi.org/10.1007/s10712-019-09539-7
Received 19 June 2018 | CommonCrawl |
Article | Open | Published: 09 May 2019
Mortality causes universal changes in microbial community composition
Clare I. Abreu¹, Jonathan Friedman² (ORCID: 0000-0001-8476-8030), Vilhelm L. Andersen Woltz¹ & Jeff Gore¹ (ORCID: 0000-0003-4583-8555)
Nature Communications, volume 10, Article number: 2120 (2019)
Subjects: Bacterial systems biology; Microbial ecology
All organisms are sensitive to the abiotic environment, and a deteriorating environment can cause extinction. However, survival in a multispecies community depends upon interactions, and some species may even be favored by a harsh environment that impairs others, leading to potentially surprising community transitions as environments deteriorate. Here we combine theory and laboratory microcosms to predict how simple microbial communities will change under added mortality, controlled by varying dilution. We find that in a two-species coculture, increasing mortality favors the faster grower, confirming a theoretical prediction. Furthermore, if the slower grower dominates under low mortality, the outcome can reverse as mortality increases. We find that this tradeoff between growth and competitive ability is prevalent at low dilution, causing outcomes to shift dramatically as dilution increases, and that these two-species shifts propagate to simple multispecies communities. Our results argue that a bottom-up approach can provide insight into how communities change under stress.
Ecological communities are defined by their structure, which includes species composition, diversity, and interactions1. All such properties are sensitive to the abiotic environment, which influences both the growth of individual species and the interactions between them. The structure of multispecies communities can thus vary in complex ways across environmental gradients2,3,4,5,6,7. A major challenge is therefore to predict how a changing environment affects competition outcomes and alters community structure. In particular, environmental deterioration can radically change community structure. Instances of such deterioration include antibiotic use on gut microbiota8, ocean warming in reef communities9, overfishing in marine ecosystems10, and habitat loss in human-modified landscapes11. Such disturbances can affect community structure in several ways, such as allowing for the spread of invasive species12, causing biodiversity loss and mass extinction13,14, or altering the interactions between the remaining community members15,16. For example, a stable ecosystem can be greatly disrupted by the removal of a single keystone species, potentially affecting species with which it does not directly interact17,18,19.
A common form of environmental deterioration is increased mortality, which can be implemented in the laboratory in a simple way. In fact, the standard method of cultivating and coculturing bacteria involves periodic dilution into fresh media, a process that necessarily discards cells from the population. The magnitude of the dilution factor determines the fraction of cells discarded and therefore the added mortality rate, making environmental harshness easy to tune experimentally.
The choice of dilution factor often receives little attention, yet theoretical models predict that an increased mortality rate experienced equally by all species in the community can have dramatic effects on community composition. In particular, it is predicted that such a global mortality rate will favor the faster-growing species in pairwise coculture, potentially reversing outcomes from dominance of the slow grower to dominance of the fast grower1,20,21 as mortality increases. Indeed, there is some experimental support for such reversals in chemostat experiments with microbial species with different growth rates22,23,24. A less-explored prediction is that if a high mortality rate causes a competitive reversal, the coculture will also result in either coexistence or bistability (where the winner depends on the starting fraction) at some range of intermediate mortality25,26,27. Missing from the literature is a systematic study that probes both of these predictions with an array of pairwise coculture experiments across a range of dilution rates. In addition, little is known about how mortality will alter the composition of multispecies communities.
In this paper, we report experimental results that expand upon the prior literature regarding the effect of dilution on pairwise outcomes, and we use the pairwise outcomes to develop a predictive understanding of how multispecies community composition changes with increased dilution. First, pairwise coculture experiments with five bacterial species confirmed that (1) increased mortality favors the fast grower and can reverse the winner, or the only remaining species at the end of the experiment, from slow grower to fast grower, and (2) at intermediate dilution rates, either coexistence or bistability occurs, where from many starting fractions, the two species' abundances either converge to a stable fraction or diverge to either species winning, respectively. We measure species' growth rates by growing cells from low density in monoculture; fast growers reach a threshold density more quickly than slow growers. We define the competitive ability of a species as its average fraction after being cocultured in pairs with each of the other species for multiple dilution cycles. Interestingly, we find that a pervasive tradeoff between growth rate and competitive ability in our system favors slow growers in high-density, low-dilution environments, leading to striking changes in outcomes as mortality increases. Second, to bridge the pairwise results to three- and four-species communities, we employ simple predictive pairwise assembly rules28, where we find that the pairwise outcomes such as coexistence and bistability propagate up to the multispecies communities. Our results highlight that the seemingly complicated states a community adopts across a mortality gradient can be traced back to a predictable pattern in the outcomes of its constituent pairs.
Three-species community exhibits wide range of stable states
To probe how a changing environment affects community composition, we employed an experimentally tractable system of soil bacteria coculture experiments subject to daily growth/dilution cycles across six dilution factors (Fig. 1a). We selected five species of soil bacteria: Enterobacter aerogenes (Ea), Pseudomonas aurantiaca (Pa), Pseudomonas citronellolis (Pci), Pseudomonas putida (Pp), and Pseudomonas veronii (Pv) (Supplementary Fig. 10). These species have been used in previous experiments by our group, which did not vary dilution factor28,29. All five species grow well in our defined media containing glucose as the primary carbon source (see "Methods") and have distinct colony morphology that allows for measuring species abundance by plating and colony counting on agar.
Increasing dilution causes striking shifts in a three-species community. a To probe how added mortality changes community composition, we cocultured three soil bacteria over a range of dilution factors. Cells were inoculated and allowed to grow for 24 h before being diluted into fresh media. This process was continued for 7 days, until a stable equilibrium was reached. The magnitude of the dilution factor (10–10⁶) determines the fraction of cells discarded, and thus the amount of added mortality. b We began with a three-species community (Enterobacter aerogenes (Ea), Pseudomonas citronellolis (Pci), and Pseudomonas veronii (Pv)), initialized from four starting fractions at each dilution factor. The outcomes of two of the starting fractions are shown (see Supplementary Fig. 8b for remaining starting fractions), along with a subway map, where survival of species is represented with colors assigned to each species. Species Pv dominates at the lowest dilution factor, and Ea dominates at the highest dilution factors. The grouping of two colors represents coexistence of two species, whereas the two levels at dilution factor 10³ indicate bistability, where both coexisting states, Ea–Pv and Ea–Pci, are stable and the starting fraction determines which stable state the community reaches. Error bars are the SD of the beta distribution with Bayes' prior probability (see "Methods"). Source data are provided as a Source Data file
We began by competing three of the five species, Ea, Pci, and Pv, for seven 24-h cycles under six different dilution factor regimes. To assay for alternative stable states, each dilution factor condition was initialized by four different starting fractions (equal abundance as well as prevalence of one species in a 90–5–5% split). Despite the simplicity of the community and the experimental perturbation, we observed five qualitatively different outcomes corresponding to different combinations of the species surviving at equilibrium (Fig. 1b). At the highest and lowest dilution factors, one species excludes the others at all starting fractions (Pv at low dilution, Ea at high dilution). Two coexisting states (Ea–Pv and Ea–Pci) occur at medium low (10²) and medium high (10⁴) dilution factors, again independent of the starting fractions of the species. However, at intermediate dilution factor (10³), we found that the surviving species depended upon the initial abundances of the species. At this experimental condition, the system displays bistability between the two different coexisting states (Ea–Pv and Ea–Pci) that were present at neighboring dilution factors. These three species therefore display a surprisingly wide range of community compositions as the mortality rate is varied.
Two-species model predicts that mortality favors faster grower
To make sense of these transitions in community composition, we decided to first focus on two-species competitions, not only because they should be simpler but also because prior work from our group gives reason to believe that pairwise outcomes are sufficient for predicting multispecies states28. Accordingly, we used a simple two-species Lotka–Volterra (LV) competition model with an added mortality term δNi experienced equally by both species21:
$$\dot N_i = r_i N_i \left( 1 - N_i - \alpha_{ij} N_j \right) - \delta N_i$$
where Ni is the density of species i (normalized to its carrying capacity), ri is the maximum growth rate of species i, and the competition coefficient αij is a dimensionless constant reflecting how strongly species i is inhibited by species j (Fig. 2). This model can be re-parameterized into the LV model with no added mortality, where the new competition coefficients \(\tilde \alpha _{ij}\) now depend upon ri and δ (Supplementary Note 1, Supplementary Fig. 11):
$$\dot{\tilde N}_i = \tilde r_i \tilde N_i \left( 1 - \tilde N_i - \tilde \alpha_{ij} \tilde N_j \right)$$
$$\tilde \alpha_{ij} = \alpha_{ij} \, \frac{1 - \delta / r_j}{1 - \delta / r_i}$$
An increasing global mortality rate is predicted to favor the fast grower. a, b Here we illustrate the parameters of the Lotka–Volterra (LV) interspecific competition model with added mortality: population density N, growth r, death δ, and the strengths of inhibition αsf and αfs (subscript f for fast grower and s for slow grower). Here we assume a continuous death rate, but in the model, the outcome is the same for a discrete process, such as our daily dilution factor (Supplementary Note 2). The width of arrows in a corresponds to an interesting case that we observe experimentally, in which the fast grower is a relatively weak competitor. c The outcomes of the LV model without mortality depend solely upon the competition coefficients α, and the phase space is divided into one quadrant per outcome. If the slow grower is a strong competitor, it can exclude the fast grower. Imposing a uniform mortality rate δ on the system, however, favors the faster grower by making the re-parameterized competition coefficients \(\tilde \alpha\) depend on r and δ. Given that a slow grower dominates at low or no added death, the model predicts that coexistence or bistability will occur at intermediate added death rates before the outcome transitions to dominance of the fast grower at high added death (Supplementary Note 1). Two numerical examples show that the values of α (in the absence of added mortality) determine whether the trajectory crosses the bistability or coexistence region as mortality increases
The outcome of competition—dominance, coexistence, or bistability—simply depends upon whether each of the \(\tilde \alpha\) are greater or less than one, as in the basic LV competition model21. Stable coexistence occurs when both \(\tilde \alpha\) coefficients are less than one, bistability when both are greater than one, and dominance/exclusion when only one coefficient is greater than one.
In this model, it is possible for a slow grower (Ns) to outcompete a fast grower (Nf) if the slow grower is a strong competitor (αfs > 1) and the fast grower is a weak competitor (αsf < 1) (Fig. 2). However, the competition coefficients change with increasing mortality δ in a way that favors the fast grower: \(\tilde \alpha _{{\mathrm{fs}}}\) shrinks and \(\tilde \alpha _{{\mathrm{sf}}}\) grows, eventually leading the fast grower to outcompete the slow grower. A powerful way to visualize this change is to plot the outcomes as determined by the competition coefficients (Fig. 2c); increasing mortality causes the outcome to traverse a 45° trajectory through the phase space, leading to the fast grower winning at high mortality. At intermediate mortality, the model predicts that the two species will either coexist or be bistable. This model therefore makes very clear predictions regarding how pairwise competition will change under increased mortality, given the aforementioned slow grower advantage at low mortality.
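As a minimal numerical illustration of Eqs. (1)–(3) (all parameter values below are invented for the example and are not fitted to the experiments), the following sketch computes the effective coefficients \(\tilde \alpha_{ij}\) as the mortality rate δ grows and reports which quadrant of the phase space of Fig. 2c the pair occupies.

```python
import numpy as np

def lv_outcome(alpha_sf, alpha_fs, r_s, r_f, delta):
    """Classify the Lotka-Volterra outcome under a uniform mortality rate delta.

    alpha_sf : inhibition of the slow grower (s) by the fast grower (f)
    alpha_fs : inhibition of the fast grower by the slow grower
    r_s, r_f : maximum growth rates, with r_s < r_f; delta must stay below r_s.
    """
    # Eq. (3): effective coefficients after absorbing the death term
    a_sf = alpha_sf * (1 - delta / r_f) / (1 - delta / r_s)   # grows with delta
    a_fs = alpha_fs * (1 - delta / r_s) / (1 - delta / r_f)   # shrinks with delta
    if a_sf < 1 and a_fs < 1:
        return "coexistence"
    if a_sf > 1 and a_fs > 1:
        return "bistability"
    return "slow grower wins" if a_fs > 1 else "fast grower wins"

# Invented parameters: a slow but strongly competing species (alpha_fs > 1)
# facing a fast but weakly competing one (alpha_sf < 1).
r_s, r_f = 0.3, 0.6            # hypothetical growth rates (1/h)
alpha_sf, alpha_fs = 0.6, 1.5  # hypothetical interaction strengths
for delta in np.linspace(0.0, 0.28, 8):
    print(f"delta = {delta:.2f}: {lv_outcome(alpha_sf, alpha_fs, r_s, r_f, delta)}")
```

With these made-up numbers the pair moves from dominance of the slow grower, through coexistence, to dominance of the fast grower, which is the trajectory sketched in Fig. 2c; choosing both bare coefficients greater than one would instead route the pair through the bistable quadrant.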
Dilution experiments confirm predictions about mortality
To test these predictions in the laboratory, we performed all pairwise coculture experiments at multiple dilution factors and starting fractions of our five bacterial species: Pp, Ea, Pci, Pa, Pv (listed in order from fastest- to slowest-growing species). We find that these pairwise outcomes change as expected from the LV model, where increased dilution favors the fast grower (Supplementary Fig. 1). For example, in Ea–Pv competition we find that Pv, despite being the slower grower, is able to exclude Ea at low dilution rates (Fig. 3b, left panel). From the standpoint of the LV model, Pv is a strong competitor despite being a slow grower in this environment. However, as predicted by the model, at high dilution rates the slow-growing Pv is excluded by the fast-growing Ea (Fig. 3b, right panel). Importantly, Pv is competitively excluded at a dilution factor of 10⁴, an experimental condition at which it could have survived in the absence of a competitor. Finally, and again consistent with the model, at intermediate dilution rates we find that the Ea–Pv pair crosses a region of coexistence, where the two species reach a stable fraction over time that is not a function of the starting fraction (Fig. 3b, middle panel). The Ea–Pv pair therefore displays the transitions through the LV phase space in the order predicted by our model (Fig. 3a–d).
In pairwise coculture experiments, increasing dilution favors the faster grower. a Experimental results are shown from a coculture experiment with Pv (blue) and Ea (pink). b Left panel: Despite its slow growth rate, Pv excludes faster grower Ea at the lowest dilution factor. Middle panel: Increasing death rate causes the outcomes to traverse the coexistence region of the phase space. Right panel: As predicted, fast-growing Ea dominates at high dilution factor. Error bars are the SD of the beta distribution with Bayes' prior probability (see "Methods"). c An experimental bifurcation diagram shows stable points with a solid line and unstable points with a dashed line. The stable fraction of coexistence shifts in favor of the fast grower as dilution increases. Gray arrows show experimentally measured time trajectories, beginning at the starting fraction and ending at the final fraction. d A "subway map" denotes survival/extinction of a species at a particular dilution factor with presence/absence of the species color. e, f Pv outcompeted another fast grower Pci (yellow) at low dilution factors, but the pair became bistable instead of coexisting as dilution increased; the unstable fraction can be seen to shift in favor of the fast grower (g). h Two levels in the subway map show bistability. Source data are provided as a Source Data file
The LV model predicts that other pairs will cross a region of bistability rather than coexistence, and indeed this is what we observe experimentally with the Pci–Pv pair (Fig. 3e–h). Once again, the slow-growing Pv dominates at low dilution factor yet is excluded at high dilution factor. However, at intermediate dilution factors this pair crosses a region of bistability, in which the final outcome depends upon the starting fractions of the species. The LV model with added mortality therefore provides powerful insight into how real microbial species compete, despite the many complexities of the growth and interaction that are necessarily neglected in a simple phenomenological model.
Indeed, a closer examination of the trajectory through the LV phase space of the Pci–Pv pair reveals a violation of the simple outcomes allowed within the LV model. In particular, at dilution factor 10² we find that, when competition is initiated from high initial fractions of Pci, Pv persists at low fraction over time (Fig. 3g). This outcome, a bistability of coexistence and exclusion (rather than of exclusion and exclusion), is not an allowed outcome within the LV model (modifications to the LV model can give rise to it, as shown by ref. 30). This subtlety highlights that the transitions (e.g., bifurcation diagrams in Fig. 3c, g) can be more complex than what occurs in the LV model but that nonetheless the transitions within the LV model represent a baseline to which quantitative experiments can be compared.
Tradeoff between growth rate and competitive ability observed
The model predicts that mortality will reverse coculture outcomes if and only if a slow grower excludes a fast grower at low or no added death, exhibiting a tradeoff between growth and competitive ability. Changes in outcome are therefore most dramatic when a strongly competing slow grower causes the trajectory to begin in the upper left quadrant of the phase space (Fig. 3a, e), allowing it to move through other quadrants as mortality increases. Indeed, in the pairwise experiments described above, the slowest-growing species, Pv, is a strong competitor at low dilution factor. To probe this potential tradeoff more extensively, we compared the growth rates of our five species in monoculture (Supplementary Figs. 3, 4, and 5) to their competitive performance at low dilution factor. In seven of the ten pairs, the slower grower excluded the faster grower, and the other three pairs coexisted (Supplementary Fig. 1). We therefore find that our five species display a pervasive tradeoff between growth rate and competitive ability, possibly because the slower-growing species fare better in high-density environments that reach saturation.
To visualize how competitive success changes with dilution factor, we defined the competitive score of each species to be its mean fraction after reaching equilibrium in all pairs in which it competed. The aforementioned tradeoff can be seen as an inverse relationship between growth rate and competitive score at the lowest dilution factor (Fig. 4a). As predicted, the performance of the fast-growing species increases monotonically with increasing dilution factors (Fig. 4b). Competitive superiority of the slowest grower (Pv) at low dilution rates transitions to the next slowest (Pa) at intermediate rates, before giving rise to dominance of the fastest growers (Pci, Ea, Pp) at maximum rates (Fig. 4b–d). We therefore find that the mortality rate largely determines the importance of a species' growth rate to competitive performance in coculture experiments.
Tradeoff between growth and competitive ability leads to dependence of experimental outcome on dilution factor. The LV model predicts that increasing dilution will favor faster-growing species over slower-growing ones. If fast growers dominate at low dilution factors, though, no changes in outcome will be expected. Changes in outcome are therefore most dramatic when slow growers are strong competitors at low dilution, exhibiting a tradeoff between growth rate and competitive ability. a This tradeoff was pervasive in our system: slower growth rates resulted in higher competitive scores at the lowest dilution factor. Growth rate was calculated with OD600 measurements of the time taken for monocultures to reach a threshold density within the exponential phase; error bars represent the SEM of replicates (n = 21, per species) (Supplementary Fig. 3). Competitive score was calculated by averaging fraction of a given species across all pairwise competitive outcomes; error bars were calculated by bootstrapping, where replicates of mean experimental outcomes of a given pair were sampled 5000 times with replacement (n = 34, per species, per dilution factor). b The competitive scores in a are extended to all dilution factors. The slowest grower's score monotonically decreases with dilution, while the fast growers' scores increase, and an intermediate grower peaks at intermediate dilution factor. A similar pattern was seen in data from experiments in a complex growth medium (Supplementary Fig. 7). c At high dilution factors, the order of scores is reversed. d At low dilution factors 10 and 10², competitive ability is negatively correlated with growth rate; the correlation becomes positive above dilution factor 10³. Error bars are the standard error coefficients given by the linear regression function lm in R. Source data are provided as a Source Data file
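A minimal sketch of the bookkeeping behind the competitive score and its bootstrap error bars is given below (the replicate fractions are placeholders rather than measured values; the 5000 resamples follow the caption above, and the function name is ours).

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_score(fractions, n_boot=5000):
    """Mean equilibrium fraction of a species across its pairwise cocultures,
    with a bootstrap standard error (resampling replicates with replacement)."""
    fractions = np.asarray(fractions, dtype=float)
    boot = [rng.choice(fractions, size=fractions.size, replace=True).mean()
            for _ in range(n_boot)]
    return fractions.mean(), float(np.std(boot))

# Placeholder equilibrium fractions of one species in its pairwise cocultures
# (replicates pooled), at a single dilution factor.
score, err = competitive_score([1.0, 1.0, 0.55, 0.98, 0.60, 1.0])
print(f"competitive score = {score:.2f} +/- {err:.2f}")
```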
Pairwise outcomes predict multispecies states
Now that we have an understanding of how pairwise outcomes shift in response to increased mortality, we return to the seemingly complicated set of outcomes observed in our original three-species community (Fig. 1). In a previous study28, we developed community assembly rules that allow for prediction of species survival in multispecies communities from the corresponding pairwise outcomes. These rules state that in a multispecies coculture, a species will survive if and only if it coexists with all other surviving species in pairwise coculture. If one or more bistable pairs are involved in a multispecies community, the assembly rules allow for either of the stable states. We see that the seemingly complicated trio outcomes follow from these assembly rules applied to our corresponding pairwise outcomes at all dilution factors (Fig. 5). For example, at the lowest dilution factor (10), Ea–Pci coexist, but each of these species is excluded by Pv in pairwise coculture, thus leading to the (accurate) prediction that only Pv will survive in the three-species coculture experiment. In addition, we observe that the bistability of Pci–Pv at dilution factor 10³ propagates up to lead to bistability in the trio but with each stable state corresponding to coexistence of two species. The only trio outcome not successfully predicted by the rules is the extinction of Pci at a dilution factor of 10⁵ (Fig. 5d, Supplementary Fig. 8). Our analysis of pairwise shifts under increased mortality therefore provides a predictive understanding of the complex shifts observed within a simple three-species bacterial community.
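The qualitative rule can be read as a small fixed-point computation: starting from all species present, repeatedly drop any species that is excluded in pairwise coculture by a species still present, until nothing more changes. The sketch below is one possible reading of that rule; the dictionary of pairwise outcomes is invented for illustration, and bistable pairs (which admit more than one allowed state) are not handled.

```python
def predict_survivors(species, pairwise):
    """Pairwise assembly rule: a species survives in the community only if it
    coexists with (i.e., is not excluded by) every other surviving species.

    pairwise[(a, b)] is either 'coexist' or the name of the winning species.
    Bistable pairs are not handled in this sketch.
    """
    survivors = set(species)
    changed = True
    while changed:
        changed = False
        for a in list(survivors):
            for b in list(survivors):
                if a == b:
                    continue
                outcome = pairwise.get((a, b)) or pairwise.get((b, a))
                if outcome not in ("coexist", a):     # a is excluded by b
                    survivors.discard(a)
                    changed = True
                    break
    return survivors

# Hypothetical pairwise outcomes at one dilution factor (illustration only):
# Ea and Pci coexist, and Pv excludes each of them, so only Pv should survive.
pairwise = {("Ea", "Pci"): "coexist", ("Ea", "Pv"): "Pv", ("Pci", "Pv"): "Pv"}
print(predict_survivors(["Ea", "Pci", "Pv"], pairwise))   # -> {'Pv'}
```

The quantitative generalization used below to predict equilibrium fractions rather than mere presence or absence (see "Methods") is not reproduced here.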
Coexistence and bistability propagate from pair to trio, as predicted by assembly rules. a–c Subway maps show pairwise outcome trajectories across changing dilution factor (DF), as explained in Figs. 1 and 3. The fast grower's line is always plotted above the slow grower's line. Of the three pairs that make up the community Ea–Pci–Pv, two are coexisting (a, b) and one is bistable (c). d The pairwise assembly rules state that a species will survive in a community if it survives in all corresponding pairs. At DF 10, Ea and Pci coexist, but both are excluded by Pv. The rules correctly predict that Pv will dominate in the trio. Because both species can be excluded in a bistable pair, a bistable pairwise outcome propagates to the trio as more than one allowed state. Each of the bistable species can be seen separately coexisting with Ea at DF 10³, as they do in pairs. The assembly rules failed at DF 10⁵ for three out of four starting conditions: Pci usually goes extinct when it should coexist with Ea. e Three-species competition results are shown in simplex plots. Arrows begin and end at initial and final fractions, respectively. Edges represent pairwise results, and black dots represent trio results
To determine whether our analysis of community shifts under mortality is more broadly applicable, we combined our five species into various three- and four-species subsets, similar to the Ea–Pci–Pv competition (Fig. 5). In total, we competed five three-species communities and three four-species communities at all six dilution factors (see Supplementary Fig. 9 for examples, as well as for result of five-species coculture). Overall, a quantitative generalization of our assembly rules (see "Methods") predicted the equilibrium fractions with an error of 14%, significantly better than the 41% error that results from predictions obtained from monoculture carrying capacity (Table 1, Supplementary Fig. 2). Assembly rule prediction error does increase with increasing community size, however, particularly in the case of the five-species community (Supplementary Table 1, Supplementary Fig. 9), which may be due to slow equilibration or infrequent coexistence of more than two species. These results indicate that pairwise outcomes are good predictors of simple multispecies states in the presence of increased mortality.
Table 1 Errors of pairwise assembly rules are much lower than monoculture prediction errors
The question of how community composition will change in a deteriorating environment is essential, as climate change, ocean acidification, and deforestation infringe upon many organisms' habitats, increasing mortality either directly, by decimating populations, or indirectly, by making the environment less hospitable to them. We used an experimentally tractable microbial microcosm to tune mortality through dilution rate and found a pervasive tradeoff between growth rate and competitive ability (Fig. 4). This tradeoff causes slow growers to outcompete fast growers in high-density, low-dilution environments. Increasing mortality favors fast growers, in line with model predictions. We observed coexistence and bistability at intermediate dilution factors in pairwise experiments (Fig. 3) and found that such coexistence and bistability propagated up to three- and four-species communities (Fig. 5). Coexistence was more common than bistability, which is in line with expectations of optimal foraging theory1. We were able to explain seemingly complicated three-species states (Fig. 1) with pairwise results, which traversed all possible outcomes allowed by the two-species model.
The success of the simple pairwise assembly rules28 in predicting the states of three- and four-species communities (Table 1, Supplementary Fig. 2) is in line with recent microbial experiments suggesting that pairwise interactions play a key role in determining multispecies community assembly28,31 and community-level metabolic rates32. In contrast, some theory and empirical evidence supports the notion of pervasive and strong higher-order interactions33,34,35,36. Our results provide support for a bottom–up approach to simple multispecies communities and show that pairwise interactions alone can generate multispecies states that appear nontrivial. Prediction errors do increase with increasing community size, however, as can be seen in the case of the five-species community (Supplementary Table 1, Supplementary Fig. 9).
The aforementioned tradeoff made for striking transitions in the communities that we studied. Without the tradeoff, the model would be less useful—if a fast grower outcompetes a slow grower at low dilution rates, the model predicts no change in outcome at higher dilution rates. Our results at low dilution are consistent with previous experimental evidence of a tradeoff between growth and competitive ability among different mutants of the same bacterial strain37 and between different species of protists38,39. Other examples of this tradeoff include antibiotic resistance, which imposes a fitness cost on bacteria despite its clear competitive benefit40, and seed size in plants; plants that produce larger seeds necessarily produce fewer of them but were found to be more competitive in seedling establishment41,42. The high competitive score of slow growers in our system in low-dilution environments, together with the result that increasing dilution favors fast growers, provides a case study for how a unimodal diversity–disturbance relationship can occur in a microbial community, a phenomenon that has previously been observed43,44,45.
The exact mechanism for the competitive ability of the slow growers in our system cannot be fully explained. Monoculture pH levels were similar for all species (~6.2–6.5), ruling out the possibility that slow growers move the pH to a level not hospitable to the fast growers. Supernatant experiments, in which we grew each species in the filtered spent media of other species, showed inhibition of some fast growers (Pp, Pci) by some slow growers (Pa, Pv) (Supplementary Fig. 6b, d), which explains potentially three of the seven cases of slow grower dominance at low dilution factor. We also hypothesized that the tradeoff might be caused by the slow growers having relatively faster growth rates at low resource concentration (as explained below), but this hypothesis was not confirmed when tested (Supplementary Fig. 6f). In addition, in monocultures the slow growers exhibited higher lag times than the fast growers (Supplementary Fig. 5f), which would seem to be disadvantageous in low-dilution, high-density conditions where resources could be quickly consumed by a competitor with a shorter lag46. The reason for the tradeoff, as well as its frequency in other systems, is a matter worthy of further investigation, in particular because natural microbial systems, such as soil communities or the gut microbiome, are characterized as having a low dilution rate47,48.
Here we found that the LV model with added mortality provided useful guidance for how experimental competition would shift under increased dilution, but resource-explicit models may in some cases provide additional mechanistic insight49,50. In particular, various resource-explicit models can recapitulate the qualitative changes predicted by the LV model with added mortality. For example, the R* rule states the species that can survive on the lowest equilibrium resource concentration will dominate other species1. The equilibrium concentration increases with the dilution rate, thus favoring the species with the highest maximal growth rate (Supplementary Note 4, Supplementary Figs. 12 and 13). However, a species with a low maximal rate may dominate under low dilution if it can grow more efficiently at low resource concentrations. As mentioned, this hypothesis could not explain the tradeoff in our system (Supplementary Fig. 6f). Moreover, while we consider dilution to be essentially an added death rate because cells are discarded, the LV model does not include effects of the dilution process that could differentiate it from mere mortality. Previous experimental work has shown that dilution can modulate concentrations of oxygen51,52 and phosphate45 in the environment, leading to changes in microbial community composition. Further work is necessary to explore the circumstances in which phenomenological or resource-explicit models should be used53,54,55 in describing serial dilution experiments.
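To make the R* argument concrete, the following sketch assumes Monod growth in a chemostat-like setting, where at steady state each species' growth rate balances the dilution (mortality) rate δ, giving \(R_i^* = K_i\,\delta/(r_i - \delta)\); the species with the lowest R* excludes the others. The parameter values are invented and serve only to show how a species with a lower maximal rate but higher affinity at low resource (smaller K) can win at low δ and lose at high δ; this is the hypothesis that, as noted above, was not confirmed for our species.

```python
def r_star(r_max, K, delta):
    """Equilibrium resource level R* for Monod growth r_max * R / (K + R)
    balanced against a dilution (mortality) rate delta."""
    if delta >= r_max:
        return float("inf")            # washed out: cannot grow fast enough
    return K * delta / (r_max - delta)

# Hypothetical traits: slow grower with high affinity vs fast grower with low affinity
species = {"slow": (0.4, 0.05), "fast": (0.8, 0.30)}   # (r_max in 1/h, K in mM)

for delta in (0.05, 0.20, 0.35):
    levels = {name: round(r_star(*traits, delta), 3) for name, traits in species.items()}
    winner = min(levels, key=levels.get)
    print(f"delta = {delta}: R* = {levels} -> '{winner}' excludes the other")
```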
It is also important to note that not all deteriorating environments will cause such simple and uniform increases in mortality. Antibiotics, and in particular β-lactam antibiotics, might selectively attack fast growers over slow growers56. Overfishing might target certain species of fish. In such cases of species-specific mortality rates, the pairwise LV model still predicts that outcomes will move along the same 45° line through the phase space but in a direction dependent on the differing rates (Supplementary Note 1). Climate change might affect growth rate rather than death rate by increasing temperature, which usually increases growth rates57; in this case, it is not certain whether environmental deterioration in the form of warming would favor slow growers or fast growers. An important direction for future research is to determine whether changes to the environment other than mortality/dilution will have predictable consequences for the composition of microbial communities. In this study, we have seen how a simple prediction about a simple perturbation in pairwise competition—increased mortality will favor the faster-growing species—allowed us to interpret seemingly nontrivial outcomes in simple multispecies communities.
Species and media
The soil bacterial species used in this study were Enterobacter aerogenes (Ea, ATCC#13048), Pseudomonas aurantiaca (Pa, ATCC#33663), Pseudomonas citronellolis (Pci, ATCC#13674), Pseudomonas putida (Pp, ATCC#12633), and Pseudomonas veronii (Pv, ATCC#700474). All species were obtained from ATCC. Two types of growth media were used: one was complex and undefined, while the other was minimal and defined. All results presented in the main text are from the defined media. All species grew in monoculture in both media. The complex medium was 0.1× LB broth (diluted in water). The minimal medium was S medium, supplemented with glucose and ammonium chloride. It contains 100 mM sodium chloride, 5.7 mM dipotassium phosphate, 44.1 mM monopotassium phosphate, 5 mg/l cholesterol, 10 mM potassium citrate pH 6 (1 mM citric acid monohydrate, 10 mM tri-potassium citrate monohydrate), 3 mM calcium chloride, 3 mM magnesium sulfate, and trace metals' solution (0.05 mM disodium EDTA, 0.02 mM iron sulfate heptahydrate, 0.01 mM manganese chloride tetrahydrate, 0.01 mM zinc sulfate heptahydrate, 0.01 mM copper sulfate pentahydrate), 0.93 mM ammonium chloride, and 10 mM glucose. 1× LB broth was used for initial inoculation of colonies. For competitions involving more than two species, plating was done on 10 cm circular Petri dishes containing 25 ml of nutrient agar (nutrient broth (0.3% yeast extract, 0.5% peptone) with 1.5% agar added). For pairwise competitions, plating was done on rectangular Petri dishes containing 45 ml of nutrient agar, onto which diluted 96-well plates were pipetted at 10 μl per well.
Growth rate measurements
Growth curves were captured by measuring the optical density of monocultures (OD 600 nm) in 15-min intervals over a period of ~50 h (Fig. S3). Before these measurements, species were grown in 1× LB broth overnight and then transferred to the experimental medium for 24 h. The OD of all species was then equalized. The resulting cultures were diluted into fresh medium at factors of 10−8 to 10−3 of the equalized OD. Growth rates were measured by assuming exponential growth to a threshold of OD 0.1 and averaging across many starting densities and replicates (n = 19 for Pci, n = 22 for all other species). This time-to-threshold measurement implicitly incorporates lag times, because a species with a time lag will take longer to reach the threshold OD than another species with the same exponential rate but no lag time. We also estimated lag times and exponential rates explicitly (Fig. S4). We used these measurements to develop an alternative to the time-to-threshold rates, which also incorporated lag time. To estimate this effective growth rate, we multiplied the exponential rate by a factor depending on lag time and time between daily dilutions (Supplementary Fig. 5b and Supplementary Note 3). This method does change growth rate estimates slightly but does not change the order of growth rates among the five species and thus the qualitative predictions of the model (Supplementary Fig. 5a, b). For this reason, we preferred to use the time-to-threshold method, because it involved only one measurement, rather than two, and had a lower error.
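As a point of reference, the time-to-threshold estimate amounts to the following calculation; the numbers shown are placeholders rather than measured values, and the equalized OD of 1.0 is an assumption made only for illustration.

from math import log

def time_to_threshold_rate(dilution_factor, t_threshold_hr, od_threshold=0.1, od_equalized=1.0):
    # assumes exponential growth from the diluted starting density up to the threshold OD;
    # od_equalized is a hypothetical OD to which cultures were equalized before dilution
    od_start = od_equalized * dilution_factor
    return log(od_threshold / od_start) / t_threshold_hr   # growth rate per hour

print(time_to_threshold_rate(dilution_factor=1e-6, t_threshold_hr=25.0))   # ~0.46 per hour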
Competition experiments
Frozen stocks of individual species were streaked out on nutrient agar Petri dishes, grown at room temperature for 48 h, and then stored at 4 °C for up to 2 weeks. Before competition experiments, single colonies were picked and each species was grown separately in 50 ml Falcon tubes, first in 5 ml LB broth for 24 h and next in 5 ml of the experimental media for 24 h. During the competition experiments, cultures were grown in 500 μl 96-well plates (BD Biosciences), with each well containing a 200-μl culture. Plates were incubated at 25 °C and shaken at 400 rpm and were covered with an AeraSeal film (Sigma-Aldrich). For each growth–dilution cycle, the cultures were incubated for 24 h and then serially diluted into fresh growth media. Initial cultures were prepared by equalizing OD to the lowest density measured among competing species, mixing by volume to the desired species composition, and then diluting mixtures by the factor to which they would be diluted daily (except for dilution factor 10−6, which began at 10−5 on Day 0, to avoid causing stochastic extinction of any species). Relative abundances were measured by plating on nutrient agar plates. Each culture was diluted in phosphate-buffered saline prior to plating. For competitions involving more than two species, plating was done on 10 cm circular Petri dishes. For pairwise competitions, plating was done on 96-well-plate-sized rectangular Petri dishes containing 45 ml of nutrient agar, onto which diluted 96-well plates were pipetted at 10 μl per well. Multiple replicates of the latter dishes were used to ensure that enough colonies could be counted. Colonies were counted after 48 h incubation at room temperature. The mean number of colonies counted, per plating, per experimental condition, was 42. During competition experiments, we also plated monocultures to determine whether each species could survive each dilution factor in the absence of other species. Pv went extinct in the highest two dilution factors, while Pa went extinct in the highest dilution factor; all other species survived all dilution factors (Fig. 4).
Assembly rule predictions and accuracy
In order to make predictions about three- and four-species states, we used the qualitative and quantitative outcomes of pairwise competition. The two types of pairwise outcomes allowed for two types of predictions. First, the qualitative outcomes (dominance/exclusion, coexistence, or bistability) of the pairs were used to predict whether a species would be present or absent from a community. These outcomes are shown in "subway maps" (Supplementary Fig. 1), where the presence of a species is noted by the presence of its assigned color. Coexistence is shown by two stacked colors, and bistability is shown by two separated colors. The qualitative error rate is the percentage of species, out of the total number of species (three for trios, four for quads), that are incorrectly predicted to be present or absent (Table 1, Supplementary Fig. 2a, b). The qualitative success rate is the percentage of species that are correctly predicted as present or absent (Supplementary Fig. 2d).
Second, the quantitative outcomes of the pairs were used to predict the quantitative outcomes of three- and four-species communities. These outcomes are shown in relative fraction plots (Supplementary Fig. 1), where equilibrium points are indicated by the black dots. When two or more species coexist in pairs, the assembly rule predicts that they will coexist in multispecies communities, provided that an additional species does not exclude them. The predicted equilibrium coexisting fraction of two species is the same in a community as it is in a pair, while the fractions of more than two coexisting species are predicted with the weighted geometric mean of pairwise coexisting fractions. For example, in a three-species coexisting community, the fraction of species 1 depends on its coexisting fractions with the other two species in pairs:
$$f_1 = \left( {f_{12}^{w_2}f_{13}^{w_3}} \right)^{\frac{1}{{w_2 + w_3}}}$$
where f12 is the fraction of species 1 after reaching equilibrium in competition with species 2, \(w_2 = \sqrt {f_{21}f_{23}}\) and \(w_3 = \sqrt {f_{31}f_{32}}\). Finally, these predictions are normalized by setting \(f_1^ \ast = \frac{{f_1}}{{f_1 + f_2 + f_3}}\). The quantitative error of a particular community outcome is the distance of the predicted fractions from the observed community fractions, measured with the L2 norm. The maximum error, for any number of species, is \(\sqrt 2\), which occurs when a species that was predicted to go extinct in fact dominates:
$$\sqrt {{\sum} {\left( {\left( {1,0, \ldots ,0} \right) - \left( {0,1, \ldots ,0} \right)} \right)} ^2 }= \sqrt 2.$$
To calculate the overall quantitative errors (Table 1, Supplementary Fig. 2c, Supplementary Table 1), we divided each error by \(\sqrt 2\) and took the mean.
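To make the prediction and error calculation concrete, a short sketch follows; the pairwise fractions used are invented numbers for illustration, not data from this study.

import numpy as np

def predict_trio(f):
    # f[i][j] is the equilibrium fraction of species i when competed in a pair against species j
    w = {i: np.sqrt(f[i][j] * f[i][k]) for i, j, k in [(1, 2, 3), (2, 1, 3), (3, 1, 2)]}
    raw = {i: (f[i][j] ** w[j] * f[i][k] ** w[k]) ** (1.0 / (w[j] + w[k]))
           for i, j, k in [(1, 2, 3), (2, 1, 3), (3, 1, 2)]}
    total = sum(raw.values())
    return np.array([raw[1], raw[2], raw[3]]) / total        # normalized predicted fractions

def normalized_error(predicted, observed):
    return np.linalg.norm(predicted - observed) / np.sqrt(2)  # 0 = perfect, 1 = worst case

# hypothetical pairwise equilibrium fractions (f[i][j] + f[j][i] = 1 for each pair)
f = {1: {2: 0.6, 3: 0.7}, 2: {1: 0.4, 3: 0.5}, 3: {1: 0.3, 2: 0.5}}
predicted = predict_trio(f)
print(predicted, normalized_error(predicted, np.array([0.5, 0.3, 0.2])))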
Finally, we also predicted multispecies states using carrying capacities as measured in monocultures through colony counting (Supplementary Fig. 5c, d). We assumed that, in competition, each species would grow to a density proportionate to its carrying capacity. In other words, the monoculture prediction assumes that all species always coexist. The error from the prediction to the observed data was calculated with the L2 norm, as above.
The p values given in Supplementary Figs. 3 and 5 were obtained using two-tailed t tests. The error bars shown in the time-series plots in Figs. 1 and 3 and Supplementary Fig. 8 are the SD of the beta distribution with Bayes' prior probability:
$$\sigma = \sqrt {\frac{{\left( {\alpha + 1} \right)\left( {\beta + 1} \right)}}{{\left( {\alpha + \beta + 2} \right)^2\left( {\alpha + \beta + 3} \right)}}}.$$
Here α and β are the number of colonies of two different species. In the case of more than two species, α and β are the number of colonies of a given species and the number of all other species' colonies, respectively.
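In code, this error bar is simply the following; the colony counts in the example call are arbitrary.

from math import sqrt

def beta_sd(alpha, beta):
    # SD of the beta posterior for a species' relative fraction, where alpha is the number of
    # colonies of the focal species and beta is the number of colonies of all other species
    return sqrt((alpha + 1) * (beta + 1) / ((alpha + beta + 2) ** 2 * (alpha + beta + 3)))

print(beta_sd(30, 12))   # ~0.068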
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
The source data underlying Figs. 1b, 3b, f, and 4a–c, and Supplementary Figs. 1, 2, 5, and 7 are provided as a Source Data file. Access to the data is also publicly available at https://figshare.com/projects/Added_mortality_causes_universal_changes_in_microbial_community_composition/58304. A reporting summary for this article is available as a Supplementary Information file.
The code used for analyzing data is available from the first author upon request.
Journal peer review information: Nature Communications thanks Sean Gibbons, Wenying Shou, and Sara Mitri for their contribution to the peer review of this work. Peer reviewer reports are available.
Tilman, D. Resource Competition and Community Structure (Princeton University Press, Princeton, NJ, 1982).
Wellborn, G. A., Skelly, D. K. & Werner, E. E. Mechanisms creating community structure across a freshwater habitat gradient. Annu. Rev. Ecol. Syst. 27, 337–363 (1996).
Díez, I., Secilla, A., Santolaria, A. & Gorostiaga, J. M. Phytobenthic intertidal community structure along an environmental pollution gradient. Mar. Pollut. Bull. 38, 463–472 (1999).
Yergeau, E. et al. Size and structure of bacterial, fungal and nematode communities along an Antarctic environmental gradient. FEMS Microbiol. Ecol. 59, 436–451 (2007).
Lessard, J.-P., Sackett, T. E., Reynolds, W. N., Fowler, D. A. & Sanders, N. J. Determinants of the detrital arthropod community structure: the effects of temperature and resources along an environmental gradient. Oikos 120, 333–343 (2011).
Cornwell, W. K. & Ackerly, D. D. Community assembly and shifts in plant trait distributions across an environmental gradient in coastal California. Ecol. Monogr. 79, 109–126 (2009).
Mykrä, H., Tolkkinen, M. & Heino, J. Environmental degradation results in contrasting changes in the assembly processes of stream bacterial and fungal communities. Oikos 126, 1291–1298 (2017).
Dethlefsen, L. & Relman, D. A. Incomplete recovery and individualized responses of the human distal gut microbiota to repeated antibiotic perturbation. Proc. Natl Acad. Sci. 108, 4554–4561 (2011).
Wernberg, T. et al. Climate-driven regime shift of a temperate marine ecosystem. Science 353, 169–172 (2016).
Daskalov, G. M., Grishin, A. N., Rodionov, S. & Mihneva, V. Trophic cascades triggered by overfishing reveal possible mechanisms of ecosystem regime shifts. Proc. Natl Acad. Sci. 104, 10518–10523 (2007).
Larsen, T. H., Williams, N. M. & Kremen, C. Extinction order and altered community structure rapidly disrupt ecosystem functioning. Ecol. Lett. 8, 538–547 (2005).
Chapin, F. S. III et al. Consequences of changing biodiversity. Nature 405, 234–242 (2000).
Thomas, C. D. et al. Extinction risk from climate change. Nature 427, 145–148 (2004).
Bálint, M. et al. Cryptic biodiversity loss linked to global climate change. Nat. Clim. Change 1, 313–318 (2011).
Harrington, R., Woiwod, I. & Sparks, T. Climate change and trophic interactions. Trends Ecol. Evol. 14, 146–150 (1999).
Ockendon, N. et al. Mechanisms underpinning climatic impacts on natural populations: altered species interactions are more important than direct effects. Glob. Change Biol. 20, 2221–2229 (2014).
Paine, R. T. The pisaster-tegula interaction: prey patches, predator food preference, and intertidal community structure. Ecology 50, 950–961 (1969).
Bond, W. J. in Biodiversity and Ecosystem Function pp. 237–253 (Springer, Berlin, Heidelberg, 1994).
Banerjee, S., Schlaeppi, K. & van der Heijden, M. G. A. Keystone taxa as drivers of microbiome structure and functioning. Nat. Rev. Microbiol. 16, 567–576 (2018).
Stewart, F. M. & Levin, B. R. Partitioning of resources and the outcome of interspecific competition: a model and some general considerations. Am. Nat. 107, 171–198 (1973).
Hastings, A. Population Biology: Concepts and Models (Springer Science & Business Media, New York, 2013).
Meers, J. L. Effect of dilution rate on the outcome of chemostat mixed culture experiments. Microbiology 67, 359–361 (1971).
Sommer, U. Phytoplankton competition along a gradient of dilution rates. Oecologia 68, 503–506 (1986).
Spijkerman, E. & Coesel, P. F. M. Competition for phosphorus among planktonic desmid species in continuous-flow culture. J. Phycol. 32, 939–948 (1996).
Gause, G. F. The Struggle for Existence (Courier Corporation, North Chelmsford, MA, 2003).
Slobodkin, L. B. Experimental populations of hydrida. J. Anim. Ecol. 33, 131–148 (1964).
Slobodkin, L. B. Growth and Regulation of Animal Populations (Holt, Rinehart and Winston, New York, 1980).
Friedman, J., Higgins, L. M. & Gore, J. Community structure follows simple assembly rules in microbial microcosms. Nat. Ecol. Evol. 1, 0109 (2017).
Celiker, H. & Gore, J. Clustering in community structure across replicate ecosystems following a long-term bacterial evolution experiment. Nat. Commun. 5, 4643 (2014).
Vet, S. et al. Bistability in a system of two species interacting through mutualism as well as competition: chemostat vs. Lotka-Volterra equations. PLOS ONE 13, e0197462 (2018).
Venturelli, O. S. et al. Deciphering microbial interactions in synthetic human gut microbiome communities. Mol. Syst. Biol. 14, e8157 (2018).
Guo, X. & Boedicker, J. Q. The contribution of high-order metabolic interactions to the global activity of a four-species microbial community. PLOS Comput. Biol. 12, e1005079 (2016).
Billick, I. & Case, T. J. Higher order interactions in ecological communities: what are they and how can they be detected? Ecology 75, 1529–1543 (1994).
Bairey, E., Kelsic, E. D. & Kishony, R. High-order species interactions shape ecosystem diversity. Nat. Commun. 7, 12285 (2016).
Grilli, J., Barabás, G., Michalska-Smith, M. J. & Allesina, S. Higher-order interactions stabilize dynamics in competitive network models. Nature 548, 210–213 (2017).
Mayfield, M. M. & Stouffer, D. B. Higher-order interactions capture unexplained complexity in diverse communities. Nat. Ecol. Evol. 1, 0062 (2017).
Kurihara, Y., Shikano, S. & Toda, M. Trade-off between interspecific competitive ability and growth rate in bacteria. Ecology 71, 645–650 (1990).
Luckinbill, L. S. Selection and the r/K continuum in experimental populations of protozoa. Am. Nat. 113, 427–437 (1979).
Violle, C., Pu, Z. & Jiang, L. Experimental demonstration of the importance of competition under disturbance. Proc. Natl Acad. Sci. 107, 12925–12929 (2010).
Andersson, D. I. & Levin, B. R. The biological cost of antibiotic resistance. Curr. Opin. Microbiol. 2, 489–493 (1999).
Gross, K. L. Effects of seed size and growth form on seedling establishment of six monocarpic perennial plants. J. Ecol. 72, 369–387 (1984).
Geritz, S. A. H., van der Meijden, E. & Metz, J. A. J. Evolutionary dynamics of seed size and seedling competitive ability. Theor. Popul. Biol. 55, 324–343 (1999).
Sousa, W. P. Disturbance in marine intertidal boulder fields: the nonequilibrium maintenance of species diversity. Ecology 60, 1225–1239 (1979).
Flöder, S. & Sommer, U. Diversity in planktonic communities: an experimental test of the intermediate disturbance hypothesis. Limnol. Oceanogr. 44, 1114–1119 (1999).
Gibbons, S. M. et al. Disturbance regimes predictably alter diversity in an ecologically complex bacterial system. mBio 7, e01372–16 (2016).
Manhart, M., Adkar, B. V. & Shakhnovich, E. I. Trade-offs between microbial growth phases lead to frequency-dependent and non-transitive selection. Proc. R. Soc. B 285, 20172459 (2018).
Venema, K. & van den Abbeele, P. Experimental models of the gut microbiome. Best. Pract. Res. Clin. Gastroenterol. 27, 115–126 (2013).
Avrani, S., Bolotin, E., Katz, S. & Hershberg, R. Rapid genetic adaptation during the first four months of survival under resource exhaustion. Mol. Biol. Evol. 34, 1758–1769 (2017).
Goldford, J. E. et al. Emergent simplicity in microbial community assembly. Science 361, 469–474 (2018).
Niehaus, L. et al. Microbial coexistence through chemical-mediated interactions. bioRxiv Preprint at: https://www.biorxiv.org/content/10.1101/358481v1 (2018).
Buckling, A., Kassen, R., Bell, G. & Rainey, P. B. Disturbance and diversity in experimental microcosms. Nature 408, 961–964 (2000).
Rainey, P. B. & Rainey, K. Evolution of cooperation and conflict in experimental bacterial populations. Nature 425, 72–74 (2003).
Fox, J. W. The intermediate disturbance hypothesis should be abandoned. Trends Ecol. Evol. 28, 86–92 (2013).
Chesson, P. & Huntly, N. The roles of harsh and fluctuating conditions in the dynamics of ecological communities. Am. Nat. 150, 519–553 (1997).
Hsu, S.-B. & Zhao, X.-Q. A Lotka–Volterra competition model with seasonal succession. J. Math. Biol. 64, 109–130 (2012).
Tresse, O., Jouenne, T. & Junter, G.-A. The role of oxygen limitation in the resistance of agar-entrapped, sessile-like Escherichia coli to aminoglycoside and β-lactam antibiotics. J. Antimicrob. Chemother. 36, 521–526 (1995).
Ratkowsky, D. A., Olley, J., McMeekin, T. A. & Ball, A. Relationship between temperature and growth rate of bacterial cultures. J. Bacteriol. 149, 1–5 (1982).
We thank the members of the Gore Laboratory for critical discussions and comments on the manuscript.
Department of Physics, Massachusetts Institute of Technology, Cambridge, 02139, MA, USA
Clare I. Abreu, Vilhelm L. Andersen Woltz & Jeff Gore
Department of Plant Pathology and Microbiology, The Hebrew University of Jerusalem, Rehovot, 7610001, Israel
Jonathan Friedman
All the authors designed the study, discussed and interpreted the results, and wrote the manuscript. C.I.A. and V.L.A.W. carried out the experiments and performed the analysis.
Correspondence to Jeff Gore.
Tutorial #18: Parsing II: WCFGs, the inside algorithm, and weighted parsing
Authors: A. Kádár, S. Prince
In Part I of this tutorial, we introduced context-free grammars (CFGs) and how to convert them to Chomsky normal form. We presented the CYK algorithm for the recognition problem. This algorithm evaluates whether a sentence is valid under a grammar, and can be easily adapted to return the valid parse trees.
In this blog, we will introduce weighted context-free grammars or WCFGs. These assign a non-negative weight to each rule in the grammar. From here, we can assign a weight to any parse tree by multiplying the weights of its component rules together. We present two variations of the CYK algorithm that apply to WCFGs. (i) The inside algorithm computes the sum of the weights of all possible analyses (parse trees) for a sentence. (ii) The weighted parsing algorithm finds the parse tree with the highest weight.
In Part III of this tutorial, we introduce probabilistic context-free grammars. These are a special case of WCFGs where the weights of all rules with the same left-hand side sum to one. We then discuss how to learn these weights from a corpus of text. We will see that the inside algorithm is a critical part of this process.
Context-free grammars and CYK recognition
Before we start our discussion, let's briefly review what we learned about context-free grammars and the CYK recognition algorithm in part I of this tutorial. Recall that we defined a context-free grammar as the tuple $\langle S, \mathcal{V}, \Sigma, \mathcal{R}\rangle$ with a start symbol $S$, non-terminals $\mathcal{V}$, terminals $\Sigma$ and finally the rules $\mathcal{R}$.
In our examples, the non-terminals are a set $\mathcal{V}=\{\mbox{VP, PP, NP, DT, NN, }\ldots\}$ containing sub-clauses (e.g., verb-phrase $\mbox{VP}$) and parts of speech (e.g., noun $\mbox{NN}$). The terminals contain the words. We will consider grammars in Chomsky Normal Form, where the rules either map one non-terminal to two other non-terminals (e.g., $\text{VP} \rightarrow \text{V} \; \text{NP}$) or to a single terminal symbol (e.g., $\text{V} \rightarrow$ eats).
The CYK recognition algorithm takes a sentence and a grammar in Chomsky Normal Form and determines if the sentence is valid under the grammar. With minor changes, it can also return the set of valid parse trees. It constructs a chart where each position in the chart corresponds to a sub-sequence of words (figure 1). At each position, there is a binary array with one entry per non-terminal, where this entry is set to true if that non-terminal can validly derive the associated sub-sequence.
Figure 1. Chart construction for CYK algorithm. The original sentence is below the chart. Each element of the chart corresponds to a sub-sequence so that position (l, p) is the sub-sequence that starts at position $p$ and has length $l$. For the $l^{th}$ row of the chart, there are $l-1$ ways of dividing the sub-sequence into two parts. For example, the string in the gray box at position (4,2) can be split in $4-1=3$ ways that correspond to the blue, green and red shaded boxes and these splits are indexed by the variable $s$.
Figure 2. CYK recognition algorithm. a) We first consider sub-sequences of length $l=1$. For each position $p$, we consider whether there is a rule that generates the word and set the appropriate element of the chart to TRUE. For example, at position (1,3) we set NP to TRUE as we have the rule $\mbox{NP}\rightarrow\mbox{him}$. b) We then consider sub-sequences of length $l=2$ and set elements at a position to be true if there is a binary non-terminal rule that explains this sub-sequence. For example, at position (2,2) we set VP to TRUE as we have the rule $\mbox{VP}\rightarrow\mbox{VBD NP}$. c) We continue in this way, working through longer and longer sub-sequences until we reach the top-left position of the chart, which represents the whole string. If we can set $S$ to TRUE in this position, then the sentence can be generated by the grammar. For more details, see Part I of this tutorial.
The CYK algorithm works by first finding valid unary rules that map pre-terminals representing parts of speech to terminals representing words (e.g., DT$\rightarrow$ the). Then it considers sub-sequences of increasing length and identifies applicable binary non-terminal rules (e.g., $\mbox{NP}\rightarrow \mbox{DT NN})$. The rule is applicable if there are two sub-trees lower down in the chart whose roots match its right hand side. If the algorithm can place the start symbol in the top-left of the chart, then the overall sentence is valid. The pseudo-code is given by:
0 # Initialize data structure
1 chart[1...n, 1...n, 1...V] := FALSE
3 # Use unary rules to find possible parts of speech at pre-terminals
4 for p := 1 to n # start position
5 for each unary rule A -> w_p
6 chart[1, p, A] := TRUE
8 # Main parsing loop
9 for l := 2 to n # sub-sequence length
10 for p := 1 to n-l+1 # start position
11 for s := 1 to l-1 # split width
12 for each binary rule A -> B C
13 chart[l, p, A] = chart[l, p, A] OR
(chart[s, p, B] AND chart[l-s,p+s, C])
15 return chart[n, 1, S]
For a much more detailed discussion of this algorithm, consult Part I of this blog.
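To make the pseudocode concrete, here is a minimal Python sketch of the same procedure. The toy grammar is a hypothetical one (not the grammar used in the figures), and the chart is stored as a dictionary keyed by (length, start position).

unary_rules  = {("DT", "the"), ("NN", "dog"), ("NN", "cat"), ("VBD", "saw"), ("NP", "him")}
binary_rules = {("S", "NP", "VP"), ("NP", "DT", "NN"), ("VP", "VBD", "NP")}

def cyk_recognise(words):
    n = len(words)
    # chart[(l, p)] holds the set of non-terminals that can derive the
    # sub-sequence of length l starting at (1-based) position p
    chart = {(l, p): set() for l in range(1, n + 1) for p in range(1, n + 1)}
    for p, w in enumerate(words, start=1):            # lines 4-6: unary rules
        for A, word in unary_rules:
            if word == w:
                chart[(1, p)].add(A)
    for l in range(2, n + 1):                         # lines 9-13: binary rules
        for p in range(1, n - l + 2):
            for s in range(1, l):
                for A, B, C in binary_rules:
                    if B in chart[(s, p)] and C in chart[(l - s, p + s)]:
                        chart[(l, p)].add(A)
    return "S" in chart[(n, 1)]

print(cyk_recognise("the dog saw him".split()))       # True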
Weighted context-free grammars
Weighted context-free grammars (WCFGs) are context-free grammars which have a non-negative weight associated with each rule. More precisely, we add the function $g: \mathcal{R} \mapsto \mathbb{R}_{\geq 0}$ that maps each rule to a non-negative number. The weight of a full derivation tree $T$ is then the product of the weights of each rule $T_t$:
\begin{equation}\label{eq:weighted_tree_from_rules}
\mbox{G}[T] = \prod_{t \in T} g[T_t]. \tag{1}
\end{equation}
Context-free grammars generate strings, whereas weighted context free grammars generate strings with an associated weight.
We will interpret the weight $g[T_t]$ as the degree to which we favor a rule, and so, we "prefer" parse trees $T$ with higher overall weights $\mbox{G}[T]$. Ultimately, we will learn these weights in such a way that real observed sentences have high weights and ungrammatical sentences have lower weights. From this viewpoint, the weights can be viewed as parameters of the model.
Since the tree weights $G[T]$ are non-negative, they can be interpreted as un-normalized probabilities. To create a valid probability distribution over possible parse trees, we must normalize by the total weight $Z$ of all tree derivations:
\begin{eqnarray}
Z &=& \sum_{T \in \mathcal{T}[\mathbf{w}]} \mbox{G}[T] \nonumber \\
&=& \sum_{T \in \mathcal{T}[\mathbf{w}]} \prod_{t \in T} \mbox{g}[T_t], \tag{2}
\end{eqnarray}
where $\mathcal{T}[\mathbf{w}]$ represents the set of all possible parse trees from which the observed words $\mathbf{w}=[x_{1},x_{2},\ldots x_{L}]$ can be derived. We'll refer to the normalizing constant $Z$ as the partition function. The conditional distribution of a possible derivation $T$ given the observed words $\mathbf{w}$ is then:
\begin{equation}
Pr(T|\mathbf{w}) = \frac{\mbox{G}[T]}{Z}. \tag{3}
\end{equation}
Computing the partition function
We defined the partition function $Z$ as the sum of the weights of all the trees $\mathcal{T}[\mathbf{w}]$ from which the observed words $\mathbf{w}$ can be derived. However, in Part I of this tutorial we saw that the number of possible binary parse trees increases very rapidly with the sentence length.
The CYK recognition algorithm used dynamic programming to search this huge space of possible trees in polynomial time and determine whether there is at least one valid tree. To compute the partition function, we will use a similar trick to search through all possible trees and sum their weights simultaneously. This is known as the inside algorithm.
Semirings
Before we present the inside algorithm, we need to introduce the semiring. This abstract algebraic structure will help us adapt the CYK algorithm to compute different quantities. A semiring is a set $\mathbb{A}$ on which we have defined two binary operators:
1. $\oplus$ is a commutative operation with identity element 0, which behaves like the addition $+$:
$x \oplus y = y \oplus x$
$(x \oplus y) \oplus z = x \oplus (y \oplus z)$
$x \oplus 0 = 0 \oplus x = x$
2. $\otimes$ is an associative operation that (right) distributes over $\oplus$ just like multiplication $\times$. It has the identity element 1 and absorbing element 0:
$(x \otimes y) \otimes z = x \otimes (y \otimes z)$
$x \otimes (y \oplus z) = (x \otimes y) \oplus (x \otimes z)$
$x \otimes 1 = 1 \otimes x = x$
$x \otimes 0 = 0 \otimes x = 0$
Similarly to grammars we will just denote semirings as tuples: $\langle\mathbb{A}, \oplus, \otimes, 0, 1\rangle$. You can think of the semiring as generalizing the notions of addition and multiplication.1
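One way to make this abstraction concrete in code is to bundle the two operations with their identity elements. The container and the names below are purely illustrative (they are not from any particular library), but they capture the three semirings used in this tutorial.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    zero: Any                         # identity of "plus", absorbing element of "times"
    one: Any                          # identity of "times"
    plus: Callable[[Any, Any], Any]
    times: Callable[[Any, Any], Any]

BOOLEAN     = Semiring(False, True, lambda x, y: x or y, lambda x, y: x and y)   # recognition
SUM_PRODUCT = Semiring(0.0, 1.0, lambda x, y: x + y, lambda x, y: x * y)         # inside algorithm
MAX_PRODUCT = Semiring(0.0, 1.0, max, lambda x, y: x * y)                        # weighted parsing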
Inside algorithm for computing $Z$
Computing the partition function $Z$ for the conditional distribution $Pr(T|\mathbf{w})$ might appear difficult, because it sums over the large space of possible derivations for the sentence $\mathbf{w}$. However, we've already seen how the CYK recognition algorithm accepts or rejects a sentence in polynomial time, while sweeping though all possible derivations. The inside algorithm uses a variation of the same trick to compute the partition function.
When used for recognition, the $\texttt{chart}$ holds values of $\texttt{TRUE}$ and $\texttt{FALSE}$ and the computation was based on two logical operators OR and AND, and we can think of these as being part of the semiring $\langle\{\texttt{TRUE}, \texttt{FALSE}\}, OR, AND, \texttt{FALSE}, \texttt{TRUE}\rangle$.
The inside algorithm replaces this semiring with the sum-product semiring $\langle\mathbb{R}_{\geq 0} \cup \{+\infty\} , +, \times, 0, 1\rangle$ to get the following procedure:
1 chart[1...n, 1...n, 1...|V|] := 0
6 chart[1, p, A] := g[A-> w_p]
13 chart[l, p, A] = chart[l, p, A] +
(g[A -> B C] x chart[s, p, B] x chart[l-s,p+s, C] )
where we have highlighted the differences from the recognition algorithm in green.
As in the CYK recognition algorithm, each position $(l,p)$ in the $\texttt{chart}$ represents the sub-sequence that starts at position $p$ and is of length $l$ (figure 1). In the inside algorithm, every position in the chart holds a length $|V|$ vector where the $v^{th}$ entry corresponds to the $v^{th}$ non-terminal. The value held in this vector is the sum of the weights of all sub-trees for which the $v^{th}$ non-terminal is the root.
The intuition for the update rule in line 13 is simple. The additional weight for adding rule $A\rightarrow BC$ into the chart is the weight $g[A\rightarrow BC]$ for this rule times the sum of weights of all possible left sub-trees rooted in B times the sum of weights of all possible right sub-trees rooted in C. As before, there may be multiple possible rules that place non-terminal $A$ in a position corresponding to different splits of the sub-sequence and here we perform this computation for each rule and sum the results together.
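A Python sketch of the inside algorithm, in the same style as the recognition sketch above, is shown below. The rule weights are illustrative placeholders rather than the values used in the worked example that follows.

unary_w  = {("DT", "the"): 1.0, ("NN", "dog"): 1.2, ("VBD", "saw"): 1.4, ("NP", "him"): 1.5}
binary_w = {("S", "NP", "VP"): 1.0, ("NP", "DT", "NN"): 0.8, ("VP", "VBD", "NP"): 1.6}

def inside(words):
    n = len(words)
    # chart[(l, p)] maps each non-terminal to the summed weight of all of its sub-trees
    chart = {(l, p): {} for l in range(1, n + 1) for p in range(1, n + 1)}
    for p, w in enumerate(words, start=1):
        for (A, word), g in unary_w.items():
            if word == w:
                chart[(1, p)][A] = chart[(1, p)].get(A, 0.0) + g
    for l in range(2, n + 1):
        for p in range(1, n - l + 2):
            for s in range(1, l):
                for (A, B, C), g in binary_w.items():
                    left  = chart[(s, p)].get(B, 0.0)
                    right = chart[(l - s, p + s)].get(C, 0.0)
                    if left and right:
                        chart[(l, p)][A] = chart[(l, p)].get(A, 0.0) + g * left * right
    return chart[(n, 1)].get("S", 0.0)   # the partition function Z

print(inside("the dog saw him".split()))   # ~3.2256 (only one parse exists for this toy sentence)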
Worked example
In figures 3 and 4 we show a worked example of the inside algorithm for the same sentence as we used for the CYK recognition algorithm. Figure 3a corresponds to lines 4-6 of the algorithm where we are initializing the first row of the chart based on the unary rule weights. Figure 3b corresponds to the main loop in lines 9-13 for sub-sequence length $l=2$. Here we assign binary non-terminal rules and compute their weights as (cost of rule $\times$ weight of left branch $\times$ weight of right branch).
Figure 3. Inside algorithm worked example 1. a) The weight (number above part of speech) for sub-sequences of length $l=1$ is determined by the associated rule weight (top right). For example, at position (1,3), we assign a weight of 1.5 because the rule "$\mbox{NP}\rightarrow\mbox{him}$" which has weight 1.5 is applicable. b) The weight for sub-sequences of length 2 is calculated as the weight of the rule, multiplied by the weight of the left branch, multiplied by the weight of the right branch. For example, at position (2,2) the weight is 3.36 based on multiplying the weight 1.6 of the rule "$\mbox{VP}\rightarrow\mbox{VBD NP}$" times the weight 1.4 of the left branch times the weight 1.5 of the right branch.
Figure 4a corresponds to the main loop in lines 9-13 for sub-sequence length $l=5$. At position (5,2), there are two possible rules that apply, both of which result in the same non-terminal. We calculate the weights for each rule as before, and add the results so that the final weight at this position sums over all sub-trees. Figure 4b shows the final result of the algorithm. The weight associated with the start symbol $S$ at position (6,1) is the partition function.
Figure 4. Inside algorithm worked example 2 a) When we reach position (5,2), there are two possible rules which both assign the non-terminal $VP$ to this position in the chart (red and blue splits). We calculate the weight of each rule as before, and add them together to generate the new weight. b) When we reach the top-left corner, the weight 441.21 associated with the start symbol is the partition function, Z. If we keep track of the paths we took to reach this point, we can reconstruct all of the trees that contributed to this value.
Inside weights and anchored non-terminals
Our discussion so far does not make it clear why the method for computing the partition function is known as the inside algorithm. This is because the $\texttt{chart}$ holds the inside-weights for each anchored non-terminal. By "anchored" we mean that a non-terminal $A_i^k$ (pronounced "A from i to k") is anchored to a span in the sentence (i.e., a sub-string). It yields the string $A_i^k \Rightarrow w_i, \ldots, w_k$.
An anchored rule then has the form $A_i^k \rightarrow B_i^j C_j^k$. With this notation in hand, we can provide the recursive definition of the inside weight of anchored non-terminals:
\begin{equation}\label{eq:inside_update}
\alpha[A_i^k] = \sum_{B, C}\sum_{j=i+1}^k \mbox{g}[A \rightarrow B C] \times \alpha[B_i^j] \times \alpha[C_j^k]. \tag{4}
\end{equation}
The inside-weight $\alpha[A_i^k]$ is the total weight of all sub-trees rooted in $A$ that span words $i$ to $k$: it sums, over all possible split points $j$ and all possible non-terminals B and C, the rule weight times the weights of the left and right sub-trees (figure 5).
Figure 5. Interpretation of inside weights. The inside weights $\alpha[B_i^j]$ and $\alpha[C_j^k]$ correspond to the total weights of all the sub-trees that feed upwards into non-terminals $B$ and $C$ respectively (shaded regions). These are used to update the inside weights $\alpha[A_i^k]$ for non-terminal A (see equation 4).
Weighted Parsing
In the previous section, we saw that we could transform the CYK recognition algorithm into the inside algorithm, by just changing the underlying semiring. With this small adjustment, we showed that we can compute the partition function (sum of weights of all tree derivations) in polynomial time. In this section, we apply a similar trick to weighted parsing.
Recall that the partition function $Z$ was defined as the sum of all possible derivations:
\begin{equation}
Z = \sum_{T \in \mathcal{T}[\mathbf{w}]} \mbox{G}[T] = \sum_{T \in \mathcal{T}[\mathbf{w}]} \prod_{t \in T} \mbox{g}[T_t]. \tag{5}
\end{equation}
In contrast, weighted parsing aims to find the derivation $T^{*}$ with the highest weight among all possible derivations:
\begin{eqnarray}
T^{*} &=& \underset{T \in \mathcal{T}[\mathbf{w}]}{\text{arg} \, \text{max}} \; \left[\mbox{G}[T]\right] \nonumber \\
&=& \underset{T \in \mathcal{T}[\mathbf{w}]}{\text{arg} \, \text{max}} \left[\prod_{t \in T} \mbox{g}[T_t]\right], \tag{6}
\end{eqnarray}
where $\mbox{G}[T]$ is the weight of a derivation tree which is computed by taking the product of the weights $\mbox{g}[T_t]$ of the rules.
Once again we will modify the semiring in the CYK algorithm to perform the task. Let us replace the sum-product semiring $\langle\mathbb{R}_{\geq 0} \cup \{+\infty\} , +, \times, 0, 1\rangle$ with the max-product semiring $\langle\mathbb{R}_{\geq 0} \cup \{+\infty\} , \max[\bullet], \times, 0, 1\rangle$ to find the score of the "best" derivation. This gives us the following algorithm:
6 chart[1, p, A] := g[A -> w_p]
13 chart[l, p, A] = max[chart[l, p, A],
(g[A -> B C] x chart[s, p, B] x chart[l-s,p+s, C])]
The differences from the CYK recognition algorithm are colored in green, and the single difference from both the inside algorithm and the CYK recognition algorithm is colored in orange.
Once more, each position $(l,p)$ in the $\texttt{chart}$ represents the sub-sequence that starts at position $p$ and is of length $l$. In the inside algorithm, each position contained a vector with one entry for each of the $|V|$ non-terminals. Each element of this vector contained the sum of the weights of all of the sub-trees which feed into this anchored non-terminal. In this variation, each element contains the maximum weight among all the sub-trees that feed into this anchored non-terminal. Position (n,1) represents the whole string, and so the value $\texttt{chart[n, 1, S]}$ is the maximum weight among all valid parse trees. If this is zero, then there is no valid derivation.
The update rule at line 13 for the weight at $\texttt{chart[l, p, A]}$ now has the following interpretation. For each rule $\texttt{A -> B C}$ and for each possible split $\texttt{s}$ of the data, we multiply the rule weight $\texttt{g[A -> B C]}$ by the two weights $\texttt{chart[s, p, B]}$ and $\texttt{chart[l-s, p+s, C]}$ associated with the two child sub-sequences. If the result is larger than the current highest value, then we update it. If we are interested in the parse tree itself, then we can store back-pointers indicating which split yielded the maximum value at each position, and traverse backwards to retrieve the best tree.
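The corresponding Python sketch below reuses the toy weighted grammar (unary_w and binary_w) from the inside-algorithm sketch and stores one back-pointer per chart entry so that the best tree can be rebuilt at the end; it is an illustration, not code from the original post.

# uses the unary_w and binary_w dictionaries from the inside-algorithm sketch above
def weighted_parse(words):
    n = len(words)
    best = {(l, p): {} for l in range(1, n + 1) for p in range(1, n + 1)}
    back = {}                                          # back-pointers for tree recovery
    for p, w in enumerate(words, start=1):
        for (A, word), g in unary_w.items():
            if word == w and g > best[(1, p)].get(A, 0.0):
                best[(1, p)][A] = g
                back[(1, p, A)] = w
    for l in range(2, n + 1):
        for p in range(1, n - l + 2):
            for s in range(1, l):
                for (A, B, C), g in binary_w.items():
                    score = g * best[(s, p)].get(B, 0.0) * best[(l - s, p + s)].get(C, 0.0)
                    if score > best[(l, p)].get(A, 0.0):
                        best[(l, p)][A] = score
                        back[(l, p, A)] = (s, B, C)

    def build(l, p, A):                                # follow back-pointers to rebuild the tree
        entry = back[(l, p, A)]
        if isinstance(entry, str):                     # pre-terminal: entry is the word itself
            return (A, entry)
        s, B, C = entry
        return (A, build(s, p, B), build(l - s, p + s, C))

    if "S" not in best[(n, 1)]:
        return 0.0, None                               # no valid derivation
    return best[(n, 1)]["S"], build(n, 1, "S")

score, tree = weighted_parse("the dog saw him".split())
print(score)   # ~3.2256 (same as Z here, because only one parse exists)
print(tree)    # ('S', ('NP', ('DT', 'the'), ('NN', 'dog')), ('VP', ('VBD', 'saw'), ('NP', 'him')))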
In figure 6 we illustrate a worked example of weighted parsing. The algorithm starts by assigning weights to pre-terminals exactly as in figure 3a. The computation of the weights for sub-sequences of length $l=2$ is also exactly as in figure 3b, and the algorithm also proceeds identically for $l=3$ and $l=4$.
The sole difference occurs for the sub-sequence of length $l=5$ at position $p=2$ (figure 6). There are two possible rules that both assign the non-terminal VP to the chart at this position. In the inside algorithm, we calculated the weights of these rules and summed them. In weighted parsing, we store the largest of these weights, and this operation corresponds to the $\mbox{max}[\bullet,\bullet]$ function on line 13 of the algorithm.
Figure 6. Weighted parsing worked example. a) When we get to string length $l=5$, position $p=2$ there are two possible rules that both assign the non-terminal VP to this position, which correspond to the red and blue sub-trees respectively. We calculate the weight for each as normal (rule weight $\times$ left tree weight $\times$ right tree weight), but then assign the maximum of these to the current position. b) Upon completion, the value associated with the start symbol S in the top-left corner is the maximum weight of all the parse trees. By keeping track of which sub-trees contributed to this (i.e, the blue sub-tree rather than the red one), we can retrieve this maximum weighted tree which is the "best" parse of the sentence according to the weighted context-free grammar.
At the end of the procedure, the weight associated with the start symbol at position (6,1) corresponds to the tree with the maximum weight and so is considered the "best". By keeping track of which sub-tree yielded the maximum weight at each split, we can retrieve this tree which corresponds to our best guess at parsing the sentence.
We've seen that we can add weights to CFGs and replace the $AND, OR$ semiring with $+, \times$ to find the total weight of all possible derivations (i.e. compute the partition function with the inside algorithm). Furthermore, we can use $\max, \times$ instead to find the parse tree with the highest weight.
The semirings allow us to unify the CYK recognition, inside, and weighted parsing algorithms by recursively defining the chart entries as:
\begin{equation}
\texttt{chart}[A_i^k] = \bigoplus_{B, C, j} \mbox{g}[A \rightarrow B C] \otimes \texttt{chart}[B_i^j] \otimes \texttt{chart}[C_j^k], \tag{7}
\end{equation}
where for recognition $\mbox{g}[A \rightarrow B C]$ just returns $\texttt{TRUE}$ for all existing rules.
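Using the Semiring container sketched earlier, all three algorithms can be written as a single chart-filling function: passing BOOLEAN (with every rule weight set to True) gives recognition, SUM_PRODUCT gives the partition function, and MAX_PRODUCT gives the weight of the best parse. This is a sketch under those assumptions, reusing the toy unary_w and binary_w grammar from above, not code from the original post.

# assumes the Semiring container defined earlier and the toy unary_w / binary_w grammar
def chart_parse(words, unary, binary, sr):
    n = len(words)
    chart = {(l, p): {} for l in range(1, n + 1) for p in range(1, n + 1)}

    def val(cell, A):                                  # missing entries behave as the semiring zero
        return cell.get(A, sr.zero)

    for p, w in enumerate(words, start=1):
        for (A, word), g in unary.items():
            if word == w:
                chart[(1, p)][A] = sr.plus(val(chart[(1, p)], A), g)
    for l in range(2, n + 1):
        for p in range(1, n - l + 2):
            for s in range(1, l):
                for (A, B, C), g in binary.items():
                    contrib = sr.times(sr.times(g, val(chart[(s, p)], B)),
                                       val(chart[(l - s, p + s)], C))
                    chart[(l, p)][A] = sr.plus(val(chart[(l, p)], A), contrib)
    return val(chart[(n, 1)], "S")

words = "the dog saw him".split()
print(chart_parse(words, unary_w, binary_w, SUM_PRODUCT))   # partition function Z
print(chart_parse(words, unary_w, binary_w, MAX_PRODUCT))   # weight of the best parse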
Readers familiar with graphical models will no doubt have noticed the similarity between these methods and sum-product and max-product belief propagation. Indeed, we could alternatively have presented this entire argument in terms of graphical models, but the semiring formulation is more concise.
In the final part of this blog, we will consider probabilistic context-free grammars, which are a special case of weighted-context free grammars. We'll develop algorithms to learn the weights from (i) a corpus of sentences with known parse trees and (ii) just the sentences. The latter case will lead to a discussion of the famous inside-outside algorithm.
1. If you are wondering why it is only a "semi"ring: a full ring additionally has an additive inverse for each element, $x \oplus (-x) = 0$.
A. Kádár
S. Prince
Tutorial #19: Parsing III: PCFGs and the inside-outside algorithm
Tutorial #15: Parsing I: context-free grammars and the CYK algorithm
The Rapid ASKAP Continuum Survey I: Design and first results
Australian SKA Pathfinder
D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier
Journal: Publications of the Astronomical Society of Australia / Volume 37 / 2020
The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz.
Deactivation of SARS-CoV-2 with pulsed-xenon ultraviolet light: Implications for environmental COVID-19 control
Sarah E. Simmons, Ricardo Carrion, Kendra J. Alfson, Hilary M. Staples, Chetan Jinadatha, William R. Jarvis, Priya Sampathkumar, Roy F. Chemaly, Fareed Khawaja, Mark Povroznik, Stephanie Jackson, Keith S. Kaye, Robert M. Rodriguez, Mark A. Stibich
Journal: Infection Control & Hospital Epidemiology, First View
Published online by Cambridge University Press: 03 August 2020, pp. 1-4
Prolonged survival of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on environmental surfaces and personal protective equipment may lead to these surfaces transmitting this pathogen to others. We sought to determine the effectiveness of a pulsed-xenon ultraviolet (PX-UV) disinfection system in reducing the load of SARS-CoV-2 on hard surfaces and N95 respirators.
Chamber slides and N95 respirator material were directly inoculated with SARS-CoV-2 and were exposed to different durations of PX-UV.
For hard surfaces, disinfection for 1, 2, and 5 minutes resulted in 3.53 log10, >4.54 log10, and >4.12 log10 reductions in viral load, respectively. For N95 respirators, disinfection for 5 minutes resulted in >4.79 log10 reduction in viral load. PX-UV significantly reduced SARS-CoV-2 on hard surfaces and N95 respirators.
With the potential to rapidly disinfect environmental surfaces and N95 respirators, PX-UV devices are a promising technology to reduce environmental and personal protective equipment bioburden and to enhance both healthcare worker and patient safety by reducing the risk of exposure to SARS-CoV-2.
Impact of diet on CVD and diabetes mortality in Latin America and the Caribbean: a comparative risk assessment analysis
Ivan Sisa, Enrique Abeyá-Gilardon, Regina M Fisberg, Maria D Jackson, Guadalupe L Mangialavori, Rosely Sichieri, Frederick Cudhea, Raveendhara R Bannuru, Robin Ruthazer, Dariush Mozaffarian, Gitanjali M Singh
Journal: Public Health Nutrition, First View
Published online by Cambridge University Press: 03 June 2020, pp. 1-15
To quantify diet-related burdens of cardiometabolic diseases (CMD) by country, age and sex in Latin America and the Caribbean (LAC).
Intakes of eleven key dietary factors were obtained from the Global Dietary Database Consortium. Aetiologic effects of dietary factors on CMD outcomes were obtained from meta-analyses. We combined these inputs with cause-specific mortality data to compute country-, age- and sex-specific absolute and proportional CMD mortality of eleven dietary factors in 1990 and 2010.
Thirty-two countries in LAC.
Adults aged 25 years and older.
In 2010, an estimated 513 371 (95 % uncertainty interval (UI) 423 286–547 841; 53·8 %) cardiometabolic deaths were related to suboptimal diet. Largest diet-related CMD burdens were related to low intake of nuts/seeds (109 831 deaths (95 % UI 71 920–121 079); 11·5 %), low fruit intake (106 285 deaths (95 % UI 94 904–112 320); 11·1 %) and high processed meat consumption (89 381 deaths (95 % UI 82 984–97 196); 9·4 %). Among countries, highest CMD burdens (deaths per million adults) attributable to diet were in Trinidad and Tobago (1779) and Guyana (1700) and the lowest were in Peru (492) and The Bahamas (504). Between 1990 and 2010, greatest decline (35 %) in diet-attributable CMD mortality was related to greater consumption of fruit, while greatest increase (7·2 %) was related to increased intakes of sugar-sweetened beverages.
Suboptimal intakes of commonly consumed foods were associated with substantial CMD mortality in LAC with significant heterogeneity across countries. Improved access to healthful foods, such as nuts and fruits, and limits in availability of unhealthful factors, such as processed foods, would reduce diet-related burdens of CMD in LAC.
The GLEAM 4-Jy (G4Jy) Sample: I. Definition and the catalogue
Sarah V. White, Thomas M. O Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, Bi-Qing For, B. M. Gaensler, Melanie Johnston-Hollitt, André Offringa, Lister Staveley-Smith
Published online by Cambridge University Press: 01 June 2020, e018
The Murchison Widefield Array (MWA) has observed the entire southern sky (Declination, $\delta< 30^{\circ}$ ) at low radio frequencies, over the range 72–231MHz. These observations constitute the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we use the extragalactic catalogue (EGC) (Galactic latitude, $|b| >10^{\circ}$ ) to define the GLEAM 4-Jy (G4Jy) Sample. This is a complete sample of the 'brightest' radio sources ( $S_{\textrm{151\,MHz}}>4\,\text{Jy}$ ), the majority of which are active galactic nuclei with powerful radio jets. Crucially, low-frequency observations allow the selection of such sources in an orientation-independent way (i.e. minimising the bias caused by Doppler boosting, inherent in high-frequency surveys). We then use higher-resolution radio images, and information at other wavelengths, to morphologically classify the brightest components in GLEAM. We also conduct cross-checks against the literature and perform internal matching, in order to improve sample completeness (which is estimated to be $>95.5$ %). This results in a catalogue of 1863 sources, making the G4Jy Sample over 10 times larger than that of the revised Third Cambridge Catalogue of Radio Sources (3CRR; $S_{\textrm{178\,MHz}}>10.9\,\text{Jy}$ ). Of these G4Jy sources, 78 are resolved by the MWA (Phase-I) synthesised beam ( $\sim2$ arcmin at 200MHz), and we label 67% of the sample as 'single', 26% as 'double', 4% as 'triple', and 3% as having 'complex' morphology at $\sim1\,\text{GHz}$ (45 arcsec resolution). We characterise the spectral behaviour of these objects in the radio and find that the median spectral index is $\alpha=-0.740 \pm 0.012$ between 151 and 843MHz, and $\alpha=-0.786 \pm 0.006$ between 151MHz and 1400MHz (assuming a power-law description, $S_{\nu} \propto \nu^{\alpha}$ ), compared to $\alpha=-0.829 \pm 0.006$ within the GLEAM band. Alongside this, our value-added catalogue provides mid-infrared source associations (subject to 6" resolution at 3.4 $\mu$ m) for the radio emission, as identified through visual inspection and thorough checks against the literature. As such, the G4Jy Sample can be used as a reliable training set for cross-identification via machine-learning algorithms. We also estimate the angular size of the sources, based on their associated components at $\sim1\,\text{GHz}$ , and perform a flux density comparison for 67 G4Jy sources that overlap with 3CRR. Analysis of multi-wavelength data, and spectral curvature between 72MHz and 20GHz, will be presented in subsequent papers, and details for accessing all G4Jy overlays are provided at https://github.com/svw26/G4Jy.
The GLEAM 4-Jy (G4Jy) Sample: II. Host galaxy identification for individual sources
Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, B. M. Gaensler, Melanie Johnston–Hollitt, André Offringa, Lister Staveley–Smith
The entire southern sky (Declination, $\delta< 30^{\circ}$ ) has been observed using the Murchison Widefield Array (MWA), which provides radio imaging of $\sim$ 2 arcmin resolution at low frequencies (72–231 MHz). This is the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we have previously used a combination of visual inspection, cross-checks against the literature, and internal matching to identify the 'brightest' radio-sources ( $S_{\mathrm{151\,MHz}}>4$ Jy) in the extragalactic catalogue (Galactic latitude, $|b| >10^{\circ}$ ). We refer to these 1 863 sources as the GLEAM 4-Jy (G4Jy) Sample, and use radio images (of ${\leq}45$ arcsec resolution), and multi-wavelength information, to assess their morphology and identify the galaxy that is hosting the radio emission (where appropriate). Details of how to access all of the overlays used for this work are available at https://github.com/svw26/G4Jy. Alongside this we conduct further checks against the literature, which we document here for individual sources. Whilst the vast majority of the G4Jy Sample are active galactic nuclei with powerful radio-jets, we highlight that it also contains a nebula, two nearby, star-forming galaxies, a cluster relic, and a cluster halo. There are also three extended sources for which we are unable to infer the mechanism that gives rise to the low-frequency emission. In the G4Jy catalogue we provide mid-infrared identifications for 86% of the sources, and flag the remainder as: having an uncertain identification (129 sources), having a faint/uncharacterised mid-infrared host (126 sources), or it being inappropriate to specify a host (2 sources). For the subset of 129 sources, there is ambiguity concerning candidate host-galaxies, and this includes four sources (B0424–728, B0703–451, 3C 198, and 3C 403.1) where we question the existing identification.
EPA-1395 – Psychopathological Predictors of Antipsychotic Medication Use in Childhood Autism Spectrum Disorders
J. Downs, M. Hotopf, R.G. Jackson, H. Shetty, T. Ford, R. Stewart, R.D. Hayes
Journal: European Psychiatry / Volume 29 / Issue S1 / 2014
Published online by Cambridge University Press: 15 April 2020, p. 1
Autism Spectrum Disorders (ASDs) affect 1% of children and are associated with lifelong psychosocial impairments. The majority of children with ASD will experience co-occurring psychiatric disorders. In the UK, antipsychotics remain unlicensed for use in ASDs, however 10% of children with ASD receive antipsychotic treatment; the co-occurring disorders being targeted by these medications remains unclear.
To examine rates of antipsychotic medication use and identify associated co-occurring disorders among children with ASD receiving psychiatric care.
The sample consisted of 2844 children aged 2 to 17 with a NHS clinician recorded ICD-10 diagnoses for ASD between 2008–2013. Clinical variables extracted from their anonymised electronic patient records included disorder severity, medication use, co-occurring ICD-10 diagnoses, family characteristics, demographics and antipsychotic use.
Of the 2844 children (79% male), the majority (57%) had co-occurring psychiatric diagnoses. 313 (11%) received antipsychotic medication. The proportion of children aged 13 to 17 years and 6 to 12 years prescribed antipsychotics was 19% and 7% respectively. After controlling for socio-demographic factors, disorder severity, specialist treatment, inpatient duration, risk of self harm, violence to others, self injurious behaviour, maltreatment history, parental mental illness, caregiver anxiety, and neighbourhood deprivation, multivariate regression analysis revealed only hyperactivity disorders (O.R 1.94, 95%C.I. 1.32–2.86), psychotic disorders (O.R 5.12 95% C.I. 2.6–10.1), mood disorders (O.R 2.02, 95%C.I. 1.04–3.92) and intellectual disability (O.R 2.89 95% C.I. 1.89–4.71) were associated with anti-psychotic use.
The prescription of antipsychotic medications in this UK ASD clinical sample is strongly associated with specific co-occurring psychiatric disorders and intellectual disability.
Mood instability and clinical outcomes in mental health disorders: A natural language processing (NLP) study
R. Patel, T. Lloyd, R. Jackson, M. Ball, H. Shetty, M. Broadbent, J.R. Geddes, R. Stewart, P. McGuire, M. Taylor
Journal: European Psychiatry / Volume 33 / Issue S1 / March 2016
Published online by Cambridge University Press: 23 March 2020, p. s224
Mood instability is an important problem but has received relatively little research attention. Natural language processing (NLP) is a novel method that can be used to automatically extract clinical data from electronic health records (EHRs).
To extract mood instability data from EHRs and investigate its impact on people with mental health disorders.
Data on mood instability were extracted using NLP from 27,704 adults receiving care from the South London and Maudsley NHS Foundation Trust (SLaM) for affective, personality or psychotic disorders. These data were used to investigate the association of mood instability with different mental disorders and with hospitalisation and treatment outcomes.
Mood instability was documented in 12.1% of people included in the study. It was most frequently documented in people with bipolar disorder (22.6%), but was also common in personality disorder (17.8%) and schizophrenia (15.5%). It was associated with a greater number of days spent in hospital (B coefficient 18.5, 95% CI 12.1–24.8), greater frequency of hospitalisation (incidence rate ratio 1.95, 1.75–2.17), and an increased likelihood of prescription of antipsychotics (2.03, 1.75–2.35).
Using NLP, it was possible to identify mood instability in a large number of people, which would otherwise not have been possible by manually reading clinical records. Mood instability occurs in a wide range of mental disorders. It is generally associated with poor clinical outcomes. These findings suggest that clinicians should screen for mood instability across all common mental health disorders. The data also highlight the utility of NLP for clinical research.
The authors have not supplied their declaration of competing interest.
Delays to diagnosis and treatment in patients presenting to mental health services with bipolar disorder
R. Patel, H. Shetty, R. Jackson, M. Broadbent, R. Stewart, J. Boydell, P. McGuire, M. Taylor
Published online by Cambridge University Press: 23 March 2020, p. S75
There are often substantial delays before diagnosis and initiation of treatment in people with bipolar disorder. Increased delays are a source of considerable morbidity among affected individuals.
To investigate the factors associated with delays to diagnosis and treatment in people with bipolar disorder.
Retrospective cohort study using electronic health record data from the South London and Maudsley NHS Foundation Trust (SLaM) from 1364 adults diagnosed with bipolar disorder. The following predictor variables were analysed in a multivariable Cox regression analysis on diagnostic delay and treatment delay from first presentation to SLaM: age, gender, ethnicity, compulsory admission to hospital under the UK Mental Health Act, marital status and other diagnoses prior to bipolar disorder.
The median diagnostic delay was 62 days (interquartile range: 17–243) and median treatment delay was 31 days (4–122). Compulsory hospital admission was associated with a significant reduction in both diagnostic delay (hazard ratio 2.58, 95% CI 2.18–3.06) and treatment delay (4.40, 3.63–5.62). Prior diagnoses of other psychiatric disorders were associated with increased diagnostic delay, particularly alcohol (0.48, 0.33–0.41) and substance misuse disorders (0.44, 0.31–0.61). Prior diagnosis of schizophrenia and psychotic depression were associated with reduced treatment delay.
Some individuals experience a significant delay in diagnosis and treatment of bipolar disorder, particularly those with alcohol/substance misuse disorders. These findings highlight a need to better identify the symptoms of bipolar disorder and offer appropriate treatment sooner in order to facilitate improved clinical outcomes. This may include the development of specialist early intervention services.
Novel psychoactive substances: An investigation of temporal trends in social media and electronic health records
A. Kolliakou, M. Ball, L. Derczynski, D. Chandran, G. Gkotsis, P. Deluca, R. Jackson, H. Shetty, R. Stewart
Journal: European Psychiatry / Volume 38 / October 2016
Published online by Cambridge University Press: 23 March 2020, pp. 15-21
Public health monitoring is commonly undertaken in social media but has never been combined with data analysis from electronic health records. This study aimed to investigate the relationship between the emergence of novel psychoactive substances (NPS) in social media and their appearance in a large mental health database.
Insufficient numbers of mentions of other NPS in case records meant that the study focused on mephedrone. Data were extracted on the number of mephedrone (i) references in the clinical record at the South London and Maudsley NHS Trust, London, UK, (ii) mentions in Twitter, (iii) related searches in Google and (iv) visits in Wikipedia. The characteristics of current mephedrone users in the clinical record were also established.
Increased activity related to mephedrone searches in Google and visits in Wikipedia preceded a peak in mephedrone-related references in the clinical record followed by a spike in the other 3 data sources in early 2010, when mephedrone was assigned a 'class B' status. Features of current mephedrone users widely matched those from community studies.
Combined analysis of information from social media and data from mental health records may assist public health and clinical surveillance for certain substance-related events of interest. There exists potential for early warning systems for health-care practitioners.
Determination of variability in serum low density lipoprotein cholesterol response to the replacement of dietary saturated fat with unsaturated fat, in the Reading, Imperial, Surrey Saturated fat Cholesterol Intervention ('RISSCI') project
A. Koutsos, R. Antoni, E. Ozen, G. Wong, L. Sellem, L. Jin, H. Ayyad, N. Jackson, B. A. Fielding, M. D. Robertson, K. G. Jackson, J. A. Lovegrove, B. A. Griffin
Journal: Proceedings of the Nutrition Society / Volume 79 / Issue OCE1 / 2020
Published online by Cambridge University Press: 22 January 2020, E6
Dietary pattern analysis reveals key food groups contributing to the successful exchange of saturated with unsaturated fatty acids in healthy men
L. Sellem, R. Antoni, A. Koutsos, M. Weech, E. Ozen, G. Wong, B. Fielding, M.D. Robertson, K. G. Jackson, B. A. Griffin, J. A. Lovegrove
Published online by Cambridge University Press: 19 October 2020, E772
A dietary exchange model to achieve target nutrient intakes in diets high and lower in saturated fatty acids
R. Antoni, L. Sellem, A. Koutsos, M. Weech, M.D. Robertson, G. Wong, E. Ozen, X. Zhong, K.G. Jackson, B. Fielding, J.A. Lovegrove, B.A. Griffin
Source counts and confusion at 72–231 MHz in the MWA GLEAM survey
T. M. O. Franzen, T. Vernstrom, C. A. Jackson, N. Hurley-Walker, R. D. Ekers, G. Heald, N. Seymour, S. V. White
Published online by Cambridge University Press: 11 February 2019, e004
The GaLactic and Extragalactic All-sky Murchison Widefield Array survey is a radio continuum survey at 72–231 MHz of the whole sky south of declination +30°, carried out with the Murchison Widefield Array. In this paper, we derive source counts from the GaLactic and Extragalactic All-sky Murchison data at 200, 154, 118, and 88 MHz, to a flux density limit of 50, 80, 120, and 290 mJy respectively, correcting for ionospheric smearing, incompleteness and source blending. These counts are more accurate than other counts in the literature at similar frequencies as a result of the large area of sky covered and this survey's sensitivity to extended emission missed by other surveys. At S_154MHz > 0.5 Jy, there is no evidence of flattening in the average spectral index (α ≈ −0.8, where S ∝ ν^α) towards the lower frequencies. We demonstrate that the Square Kilometre Array Design Study model by Wilman et al. significantly underpredicts the observed 154-MHz GaLactic and Extragalactic All-sky Murchison counts, particularly at the bright end. Using deeper Low-Frequency Array counts and the Square Kilometre Array Design Study model, we find that sidelobe confusion dominates the thermal noise and classical confusion at ν ≳ 100 MHz due to both the limited CLEANing depth and the undeconvolved sources outside the field-of-view. We show that we can approach the theoretical noise limit using a more efficient and automated CLEAN algorithm.
Late Quaternary vegetation, climate, and fire history of the Southeast Atlantic Coastal Plain based on a 30,000-yr multi-proxy record from White Pond, South Carolina, USA
Teresa R. Krause, James M. Russell, Rui Zhang, John W. Williams, Stephen T. Jackson
Journal: Quaternary Research / Volume 91 / Issue 2 / March 2019
Print publication: March 2019
The patterns and drivers of late Quaternary vegetation dynamics in the southeastern United States are poorly understood due to low site density, problematic chronologies, and a paucity of independent paleoclimate proxy records. We present a well-dated (15 accelerator mass spectrometry 14C dates) 30,000-yr record from White Pond, South Carolina that consists of high-resolution analyses of fossil pollen, macroscopic charcoal, and Sporormiella spores, and an independent paleotemperature reconstruction based on branched glycerol dialkyl tetraethers. Between 30,000 and 20,000 cal yr BP, open Pinus-Picea forest grew under cold and dry conditions; elevated Quercus before 26,000 cal yr BP, however, suggest warmer conditions in the Southeast before the last glacial maximum, possibly corresponding to regionally warmer conditions associated with Heinrich event H2. Warming between 19,700 and 10,400 cal yr BP was accompanied by a transition from conifer-dominated to mesic hardwood forest. Sporormiella spores were not detected and charcoal was low during the late glacial period, suggesting megaherbivore grazers and fire were not locally important agents of vegetation change. Pinus returned to dominance during the Holocene, with step-like increases in Pinus at 10,400 and 6400 cal yr BP, while charcoal abundance increased tenfold, likely due to increased biomass burning associated with warmer conditions. Low-intensity surface fires increased after 1200 cal yr BP, possibly related to the establishment of the Mississippian culture in the Southeast.
A dietary exchange model to study inter-individual variation in serum low-density lipoprotein cholesterol response to dietary saturated fat intake
R. Antoni, L. Sellem, A. Koutsos, M. Weech, X Zhong, G. Wong, E. Ozen, K Kade, M.D. Robertson, K.G. Jackson, B. Fielding, J.A. Lovegrove, B.A. Griffin
Published online by Cambridge University Press: 07 March 2019, E9
SPICA—A Large Cryogenic Infrared Space Telescope: Unveiling the Obscured Universe
Exploring Astronomical Evolution with SPICA
P. R. Roelfsema, H. Shibai, L. Armus, D. Arrazola, M. Audard, M. D. Audley, C.M. Bradford, I. Charles, P. Dieleman, Y. Doi, L. Duband, M. Eggens, J. Evers, I. Funaki, J. R. Gao, M. Giard, A. di Giorgio, L. M. González Fernández, M. Griffin, F. P. Helmich, R. Hijmering, R. Huisman, D. Ishihara, N. Isobe, B. Jackson, H. Jacobs, W. Jellema, I. Kamp, H. Kaneda, M. Kawada, F. Kemper, F. Kerschbaum, P. Khosropanah, K. Kohno, P. P. Kooijman, O. Krause, J. van der Kuur, J. Kwon, W. M. Laauwen, G. de Lange, B. Larsson, D. van Loon, S. C. Madden, H. Matsuhara, F. Najarro, T. Nakagawa, D. Naylor, H. Ogawa, T. Onaka, S. Oyabu, A. Poglitsch, V. Reveret, L. Rodriguez, L. Spinoglio, I. Sakon, Y. Sato, K. Shinozaki, R. Shipman, H. Sugita, T. Suzuki, F. F. S. van der Tak, J. Torres Redondo, T. Wada, S. Y. Wang, C. K. Wafelbakker, H. van Weers, S. Withington, B. Vandenbussche, T. Yamada, I. Yamamura
Published online by Cambridge University Press: 28 August 2018, e030
Measurements in the infrared wavelength domain allow direct assessment of the physical state and energy balance of cool matter in space, enabling the detailed study of the processes that govern the formation and evolution of stars and planetary systems in galaxies over cosmic time. Previous infrared missions revealed a great deal about the obscured Universe, but were hampered by limited sensitivity.
SPICA takes the next step in infrared observational capability by combining a large 2.5-meter diameter telescope, cooled to below 8 K, with instruments employing ultra-sensitive detectors. A combination of passive cooling and mechanical coolers will be used to cool both the telescope and the instruments. With mechanical coolers the mission lifetime is not limited by the supply of cryogen. With the combination of low telescope background and instruments with state-of-the-art detectors SPICA provides a huge advance on the capabilities of previous missions.
SPICA instruments offer spectral resolving power ranging from R ~50 through 11 000 in the 17–230 μm domain and R ~28 000 spectroscopy between 12 and 18 μm. SPICA will provide efficient 30–37 μm broad-band mapping, and small-field spectroscopic and polarimetric imaging at 100, 200 and 350 μm. SPICA will provide infrared spectroscopy with an unprecedented sensitivity of ~5 × 10^−20 W m^−2 (5σ/1 h), over two orders of magnitude better than earlier missions. This exceptional performance leap will open entirely new domains in infrared astronomy: galaxy evolution and metal production over cosmic time, dust formation and evolution from very early epochs onwards, and the formation history of planetary systems.
Chapter 52 - Disability in Pregnancy
By Gwinnett M. Ladson, Kimberly R. Looney, Sonia Jackson
Edited by Martin Olsen, East Tennessee State University
Book: Obstetric Care
Multistate outbreak of Listeria monocytogenes infections linked to whole apples used in commercially produced, prepackaged caramel apples: United States, 2014–2015
K. M. ANGELO, A. R. CONRAD, A. SAUPE, H. DRAGOO, N. WEST, A. SORENSON, A. BARNES, M. DOYLE, J. BEAL, K. A. JACKSON, S. STROIKA, C. TARR, Z. KUCEROVA, S. LANCE, L. H. GOULD, M. WISE, B. R. JACKSON
Journal: Epidemiology & Infection / Volume 145 / Issue 5 / April 2017
Whole apples have not been previously implicated in outbreaks of foodborne bacterial illness. We investigated a nationwide listeriosis outbreak associated with caramel apples. We defined an outbreak-associated case as an infection with one or both of two outbreak strains of Listeria monocytogenes highly related by whole-genome multilocus sequence typing (wgMLST) from 1 October 2014 to 1 February 2015. Single-interviewer open-ended interviews identified the source. Outbreak-associated cases were compared with non-outbreak-associated cases and traceback and environmental investigations were performed. We identified 35 outbreak-associated cases in 12 states; 34 (97%) were hospitalized and seven (20%) died. Outbreak-associated ill persons were more likely to have eaten commercially produced, prepackaged caramel apples (odds ratio 326·7, 95% confidence interval 32·2–3314). Environmental samples from the grower's packing facility and distribution-chain whole apples yielded isolates highly related to outbreak isolates by wgMLST. This outbreak highlights the importance of minimizing produce contamination with L. monocytogenes. Investigators should perform single-interviewer open-ended interviews when a food is not readily identified.
A study of wrist-worn activity measurement as a potential real-world biomarker for late-life depression
J. T. O'Brien, P. Gallagher, D. Stow, N. Hammerla, T. Ploetz, M. Firbank, C. Ladha, K. Ladha, D. Jackson, R. McNaney, I. N. Ferrier, P. Olivier
Journal: Psychological Medicine / Volume 47 / Issue 1 / January 2017
Published online by Cambridge University Press: 26 September 2016, pp. 93-102
Print publication: January 2017
Late-life depression (LLD) is associated with a decline in physical activity. Typically this is assessed by self-report questionnaires and, more recently, with actigraphy. We sought to explore the utility of a bespoke activity monitor to characterize activity profiles in LLD more precisely.
The activity monitor was worn for 7 days by 29 adults with LLD and 30 healthy controls. Subjects underwent neuropsychological assessment and quality of life (QoL) (36-item Short-Form Health Survey) and activities of daily living (ADL) scales (Instrumental Activities of Daily Living Scale) were administered.
Physical activity was significantly reduced in LLD compared with controls (t = 3.63, p < 0.001), primarily in the morning. LLD subjects showed slower fine motor movements (t = 3.49, p < 0.001). In LLD patients, activity reductions were related to reduced ADL (r = 0.61, p < 0.001), lower QoL (r = 0.65, p < 0.001), associative learning (r = 0.40, p = 0.036), and higher Montgomery–Åsberg Depression Rating Scale score (r = −0.37, p < 0.05).
Patients with LLD had a significant reduction in general physical activity compared with healthy controls. Assessment of specific activity parameters further revealed the correlates of impairments associated with LLD. Our study suggests that novel wearable technology has the potential to provide an objective way of monitoring real-world function.
The Australian Square Kilometre Array Pathfinder: Performance of the Boolardy Engineering Test Array
D. McConnell, J. R. Allison, K. Bannister, M. E. Bell, H. E. Bignall, A. P. Chippendale, P. G. Edwards, L. Harvey-Smith, S. Hegarty, I. Heywood, A. W. Hotan, B. T. Indermuehle, E. Lenc, J. Marvil, A. Popping, W. Raja, J. E. Reynolds, R. J. Sault, P. Serra, M. A. Voronkov, M. Whiting, S. W. Amy, P. Axtens, L. Ball, T. J. Bateman, D. C.-J. Bock, R. Bolton, D. Brodrick, M. Brothers, A. J. Brown, J. D. Bunton, W. Cheng, T. Cornwell, D. DeBoer, I. Feain, R. Gough, N. Gupta, J. C. Guzman, G. A. Hampson, S. Hay, D. B. Hayman, S. Hoyle, B. Humphreys, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, J. Joseph, B. S. Koribalski, M. Leach, E. S. Lensson, A. MacLeod, S. Mackay, M. Marquarding, N. M. McClure-Griffiths, P. Mirtschin, D. Mitchell, S. Neuhold, A. Ng, R. Norris, S. Pearce, R. Y. Qiao, A. E. T. Schinckel, M. Shields, T. W. Shimwell, M. Storey, E. Troup, B. Turner, J. Tuthill, A. Tzioumis, R. M. Wark, T. Westmeier, C. Wilson, T. Wilson
Published online by Cambridge University Press: 09 September 2016, e042
We describe the performance of the Boolardy Engineering Test Array, the prototype for the Australian Square Kilometre Array Pathfinder telescope. Boolardy Engineering Test Array is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations that have been made to the traditional calibration and imaging procedures in order to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of Boolardy Engineering Test Array's performance: sensitivity, beam characteristics, polarimetric properties, and image quality. We summarise the astronomical science that it has produced and draw lessons from operating Boolardy Engineering Test Array that will be relevant to the commissioning and operation of the final Australian Square Kilometre Array Pathfinder telescope.
Present climate and climate change over North America as simulated by the fifth-generation Canadian regional climate model
Leo Šeparović, Adelina Alexandru, René Laprise, Andrey Martynov, Laxmi Sushama, Katja Winger, Kossivi Tete & Michel Valin
Climate Dynamics volume 41, pages 3167–3201 (2013)
The fifth-generation Canadian Regional Climate Model (CRCM5) was used to dynamically downscale two Coupled Global Climate Model (CGCM) simulations of the transient climate change for the period 1950–2100, over North America, following the CORDEX protocol. The CRCM5 was driven by data from the CanESM2 and MPI-ESM-LR CGCM simulations, based on the historical (1850–2005) and future (2006–2100) RCP4.5 radiative forcing scenario. The results show that the CRCM5 simulations reproduce relatively well the current-climate North American regional climatic features, such as the temperature and precipitation multiannual means, annual cycles and temporal variability at daily scale. A cold bias was noted during the winter season over western and southern portions of the continent. CRCM5-simulated precipitation accumulations at daily temporal scale are much more realistic when compared with its driving CGCM simulations, especially in summer when small-scale driven convective precipitation has a large contribution over land. The CRCM5 climate projections imply a general warming over the continent in the 21st century, especially over the northern regions in winter. The winter warming is mostly contributed by the lower percentiles of daily temperatures, implying a reduction in the frequency and intensity of cold waves. A precipitation decrease is projected over Central America and an increase over the rest of the continent. For the average precipitation change in summer however there is little consensus between the simulations. Some of these differences can be attributed to the uncertainties in CGCM-projected changes in the position and strength of the Pacific Ocean subtropical high pressure.
Coupled Global Climate Models (CGCMs) comprised of an atmospheric general circulation model coupled with the ocean, sea ice and land surface, forced with scenarios of the evolution of concentrations of anthropogenically affected greenhouse gases (GHG) and aerosols, are the most comprehensive tools for climate studies. However, because of their high complexity and the need to perform very long simulations to stabilize the deep ocean, CGCM simulations are very demanding in computational resources and are performed at relatively coarse horizontal resolution. Development of the adaptation and mitigation strategies requires information on spatial scales finer than those provided by CGCMs. One-way nested Regional Climate Models (RCMs) have been increasingly employed as a "magnifying glass" to dynamically downscale coarse-resolution global fields over a region of interest. In this paradigm, information derived from CGCM simulations or objective analyses provide the atmospheric lateral boundary conditions (LBC) and Sea Surface Temperature (SST) and Sea-Ice Concentration (SIC) for the integration of atmospheric and land-surface variables over a limited area of the globe using high-resolution computational grids (e.g., McGregor 1997; Giorgi and Mearns 1999; Wang et al. 2004; Laprise 2008; Rummukainen 2010).
When an RCM is forced by a CGCM, the RCM simulations are affected by the combination effect of its own structural biases and of the imperfect boundary conditions. RCM structural biases can be assessed comparing reanalysis-driven RCM simulations with some observational database. The effect of the imperfect boundaries on a RCM simulation can be assessed comparing the CGCM-driven RCM simulations with reanalysis-driven RCM simulations (e.g., Sushama et al. 2006; de Elía et al. 2008; Monette et al. 2012).
In principle, the structural biases of RCM are expected to be smaller than those of CGCMs, due to the higher resolution of RCM and the fact that they are driven by (nearly) perfect reanalysis boundary conditions. When the errors transmitted from the driving CGCMs are considered, the one-way nested RCMs are not intended to considerably change or improve the large-scale atmospheric driving fields imposed as the lateral boundary conditions since large inconsistencies would then arise at the perimeter of the lateral boundaries (von Storch et al. 2000). Further, the RCMs' performance considerably depends on the CGCM skill to reproduce the observed average SST and SIC, as these variables are prescribed as the lower boundary conditions in RCM simulations. The selection of CGCMs for regional downscaling is thus critical for the quality of RCM simulations and is usually based on the quality of CGCM simulations in the region of interest (e.g., Pierce et al. 2009).
Climate-change signal is obtained from RCM simulations by taking the difference between the projected future climate and the simulated current climate considering, for example, statistics computed over 30 years. The credibility of such climate-change signal is of course conditional to the skill of the RCM in faithfully reproducing the current climate. In that respect, RCM structural biases and errors transmitted from the driving CGCM fields via boundary conditions should be both small. If they are of the opposite sign but similar magnitude, they may cancel one another, leading to an apparently high RCM skill in reproducing the current climate, for rather wrong reasons; the cancelation of errors may not necessarily occur in a future climate, thus contaminating the climate-change signal with errors.
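To make the construction explicit (the notation here is ours, not the authors'): for a field $X$, the projected change is $\Delta X = \overline{X}_{\mathrm{fut}} - \overline{X}_{\mathrm{cur}}$, where the overbars denote, for example, 30-year climatological statistics taken from the same CGCM-driven RCM simulation. The concern about compensating errors is that the current-climate bias may be small only because its two contributions nearly cancel, $\overline{X}_{\mathrm{cur}} - \overline{X}_{\mathrm{obs}} = B_{\mathrm{struct}} + B_{\mathrm{lbc}} \approx 0$ with $B_{\mathrm{struct}} \approx -B_{\mathrm{lbc}} \neq 0$; there is no guarantee that the same cancelation holds for $\overline{X}_{\mathrm{fut}}$, so $\Delta X$ can still be contaminated.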
Comparing a CGCM-driven RCM simulation with the driving CGCM simulation provides a measure of the "added value" afforded by dynamical downscaling with an RCM. The "added value" may be studied under current climate conditions, for future climate and for the climate-change signal (e.g., Castro et al. 2005; Feser 2006; Laprise 2005; Laprise et al. 2008; Winterfeldt and Weisse 2009; Prömmel et al. 2010; De Sales and Xue, 2011; Di Luca et al. 2012a, b, c).
In order to compare the performance of RCMs and address the uncertainties in RCM climate projections and thus provide valuable high-resolution climate-change information for further impact and adaptation studies, the need of international coordination between RCM downscaling efforts has been early recognized (e.g., PIRCS, Takle et al. 1999; PRUDENCE, Christensen et al. 2007a, b; NARCCAP, Mearns et al. 2009). In 2009, a new World Climate Research Programme (WCRP) initiative—the COordinated Regional climate Downscaling EXperiment (CORDEX, Giorgi et al. 2009) was launched to provide a consistent framework for characterizing the uncertainties underlying regional climate-change projections within the timeline of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). Within the CORDEX framework, the Coupled Model Intercomparison Project—Phase 5 (CMIP5, Taylor et al. 2012) CGCM simulations are downscaled over specified continent-scale regional domains, using specific timeframes for RCM integration (1950–2100) and validation purposes (20 years).
In this manuscript we present an analysis of the two transient climate-change RCM downscaling experiments over the North American CORDEX domain based on the historical and representative future GHGs and aerosol concentrations. These experiments are performed using the fifth-generation Canadian Regional Climate Model (CRCM5) driven at the lateral boundaries and ocean surface by the output from two different CMIP5 CGCMs' simulations. In addition, a reanalysis-driven CRCM5 simulation is performed in order to assess the CRCM5 own structural biases.
The skill of the reanalysis-driven CRCM5 simulation over the North American CORDEX domain in reproducing the observed precipitation and near-surface temperatures is analysed in detail in Martynov et al. (2013). The authors showed that the reanalysis-driven CRCM5 simulation has a comparably high skill in realistically reproducing some key synoptic and mesoscale climatic features of North American climate that were underlined in the IPCC AR4 (Christensen et al. 2007a), such as the North American Monsoon, Great Plains Low-Level Jet and its influence on the precipitation diurnal cycle in summer. In this paper, we first evaluate the ability of the CGCM-driven CRCM5 simulations to realistically reproduce the observed spatiotemporal variability of near-surface temperatures and precipitation, and then we present the projected changes for the 21st century. Recently, the CRCM5 simulations have also been performed over the CORDEX-Africa domain. The skill of the reanalysis-driven CRCM5 simulation at reproducing the key climatic features over Africa is discussed in Hernández-Díaz et al. (2012), and CGCM-driven CRCM5 simulations and climate projections over Africa are analyzed in Laprise et al. (2013).
The paper is organized as follows. A brief description of the CRCM5, driving CGCMs and the experiment design is given in Sect. 2. In Sect. 3 we discuss the uncertainty in the observed climate by considering multiple observation and reanalysis products. Sections 4, 5, 6 discuss the CRCM5 performance in reproducing different aspects of the current climate. Finally, Sect. 7 provides the projected climate changes. Summary and conclusions are presented in Sect. 8.
CRCM5 configuration
The CRCM5 (Zadra et al. 2008) is a limited-area version of the Environment Canada Numerical Weather Prediction Global Environmental Multiscale model (GEM, Côté et al. 1998; Yeh et al. 2002). It is a grid-point model based on a two-time-level semi-Lagrangian, (quasi) fully implicit time discretization scheme. The model includes a terrain-following vertical coordinate based on hydrostatic pressure (Laprise 1992) and the horizontal discretization on a rotated latitude-longitude, Arakawa C grid (Arakawa and Lamb 1977). The nesting technique employed in CRCM5 is derived from Davies (1976); it includes a 10-point wide halo zone along the lateral boundaries for the semi-Lagrangian interpolation and a 10-point sponge zone for a gradual relaxation of all prognostic atmospheric variables toward the driving data along the lateral boundaries. A detailed description of the CRCM5 model used here can be found in Hernández-Díaz et al. (2012) and Martynov et al. (2013).
In the present configuration, the CRCM5 employs Kain-Fritsch deep convection parameterization (Kain and Fritsch 1990), Kuo-transient shallow convection (Kuo 1965; Bélair et al. 2005), Sundqvist resolved-scale condensation (Sundqvist et al. 1989), correlated-K solar and terrestrial radiations (Li and Barker 2005), and subgrid-scale orographic gravity-wave drag (McFarlane 1987), the low-level orographic blocking parameterization (Zadra et al. 2003) with recent modifications described in Zadra et al. (2012), and the planetary boundary layer parameterization (Benoit et al. 1989; Delage and Girard 1992; Delage 1997) modified to introduce turbulent hysteresis as described in Zadra et al. (2012). Some important modifications were introduced to the physical parameterization of the model in order to improve its performance for regional climate. This includes a change to the planetary boundary layer parameterization to suppress turbulent vertical fluxes under very stable conditions. The interactively coupled one-dimensional lake model (Flake, Mironov et al. 2010) has been introduced and tested in the CRCM5 (Martynov et al. 2012), for both the resolved- and subgrid-scale lakes following a land-surface type aggregation approach.
The CRCM uses the Canadian Land-Surface Scheme, version 3.5 (CLASS3.5, Verseghy 1991, 2009). The CLASS was set to 26 soil layers, with the maximum depth of 60 m. The ECOCLIMAP bare soil albedo (Masson et al. 2003) is used instead of the default values in CLASS3.5 and the Sturm et al. (1997) parameterization is used for snow thermal conductivity. The geophysical fields representing the distribution and characteristics of vegetation have been modified in order to better reproduce the real vegetation; 50 % of the bare soil fraction has been filled with surrounding vegetation or short grass and forbs and 30 % of bare soil was added in boreal forest and north of it to the following vegetation types: needleleafs, deciduous broadleafs, deciduous shrubs, mixed wood forests. Further, 30 % of "crops" have been converted to "short grass and forbs". Although no organic soils were used in the simulation, peatlands were introduced as a separate soil type.
CGCMs
The CRCM5 simulations use data from two CMIP5 CGCMs. The second-generation Canadian Earth System Model (CanESM2) has evolved from CanESM1 (Arora et al. 2009, 2011). It consists of the fourth-generation atmospheric general circulation model CanAM4 coupled with the physical ocean component OGCM4 developed from the NCAR CSM Ocean Model (NCOM; Gent et al. 1998), the Canadian Model of Ocean Carbon (CMOC; Christian et al. 2010) and Canadian Terrestrial Ecosystem Model (CTEM; Arora and Boer 2010). The CanAM4 evolved from CanAM3, described in detail in von Salzen et al. (2005) and Scinocca et al. (2008) by introducing substantial improvements in the radiative transfer and cloud microphysics parameterizations and adding a prognostic bulk aerosol scheme with a full sulphur cycle, along with organic and black carbon, mineral dust and sea salt. The CanAM4 is a spectral model employing T63 triangular truncation with physical tendencies calculated on a 2.81° linear grid and 35 levels in the vertical (Arora et al. 2011). The OGCM4 horizontal coordinates are spherical with grid spacings approximately 1.41° in longitude and 0.94° in latitude.
The Max Planck Institute for Meteorology's Earth System Model (MPI-ESM) consists of the atmospheric global circulation model ECHAM version 6 (Roeckner et al. 2003; Giorgetta et al. 2012) that includes an advanced treatment of terrestrial biosphere using a dynamical land vegetation model (JSBACH; Brovkin et al. 2009). The ECHAM6 is coupled with the global ocean/sea ice model MPI-OM (Marsland et al. 2003) without any flux adjustment (Jungclaus et al. 2006) and the Hamburg Ocean Carbon Cycle model (HAMOCC; Wetzel et al. 2005). In its low-resolution version MPI-ESM-LR, the atmospheric component of ECHAM6 operates at spectral truncation T63, on a 1.87° quadratic Gaussian grid with 47 levels in the vertical, while the MPI-OM component operates on a 1.5° grid with 40 levels.
Two CGCM simulations (one member simulation from the CanESM2 and the other from MPI-ESM-LR) are used to drive the CRCM5. These CGCM simulations cover the historical 1850–2005 period, during which they are forced by GHGs, aerosols and land cover, as well as by natural variability due to solar variability and explosive volcanoes. The continuations in the 2006–2100 period are forced with the Representative Concentration Pathway 4.5 future scenario (RCP4.5; Meinshausen et al. 2011).
CRCM5 simulation setup
Following the CORDEX recommendations, the CRCM5 simulations are performed on a grid mesh of 0.44°; at this resolution CRCM5 uses a 20-min timestep. The integration domain was slightly larger than the minimal one suggested by CORDEX for North America (see for example Fig. 1), consisting of 172 × 160 grid points, excluding the halo and sponge zone. In the vertical, 56 hybrid levels were used, with the top level near 10 hPa. Also following the CORDEX recommendations, the simulations were driven at the lateral boundaries only, with no nudging in the interior of the domain; thus the large-scale spectral nudging option was turned off in all simulations reported here.
Fig. 1 ERA-Interim 1989–2008 average temperatures and deviations of the CRU and UDEL gridded analyses of observations from ERA-Interim temperatures for (a) DJF and (b) JJA
Three CRCM5 simulations were carried out. The first simulation spanned a 50-year period and was driven by the ERA40 reanalysis and AMIP II SST and SIC (Kanamitsu et al. 2002) for 1959–1988 and by the ERA-Interim reanalysis data during the period 1989–2008 for atmospheric and ocean surface conditions. Air temperature, horizontal wind components and specific humidity lateral boundary conditions on pressure levels were used for driving this simulation.
In order to spin up the CLASS for the CGCM-driven CRCM5 integrations, the soil temperature profiles are first taken from Stevens et al. (2008); they were obtained by forward modelling with a simple soil model using forcing data from a millennial CGCM integration. Next, these profiles were used as an input to a 300-year long CRCM5 integration on a grid mesh of 1° over North America, driven with the ERAINT reanalysis for a selected representative year. The final soil temperature profiles from this integration served as the initial profiles for the CGCM-driven CRCM5 simulations.
The two continuous CGCM-forced CRCM integrations were carried out for the period 1950–2100, driven from the lateral boundaries and ocean surface by the data from CanESM2 and MPI-ESM-LR, and forced with the historical and representative future GHG and aerosol concentrations from the RCP4.5. For these two simulations, the CGCM-derived lateral boundary conditions were interpolated on the model levels, with the same driving variables, except in the case of MPI-ESM-driven simulation where the available cloud data were also prescribed at the lateral boundaries. When unavailable in the driving CGCM due to the different land-use definitions arising from the very different model resolutions, the SST and SIC fields on the CRCM5 grid were derived using the linear and nearest-neighbour extrapolation, respectively. For diagnostic analysis the simulated fields were interpolated to 22 pressure levels. Most variables were archived at three hourly intervals, except for precipitation that was accumulated and archived at hourly intervals.
In what follows we will use the acronyms CRCM-ERA, CRCM-Can and CRCM-MPI for the reanalysis-, CanESM2- and MPI-ESM-LR-driven CRCM5 simulations. Prior to analysing these simulation results, we briefly discuss the current-climate near-surface temperature and precipitation over North America. We will compare various observation-based gridded datasets and reanalysis in order to assess the uncertainty in the observed climate and select the datasets for model validation.
Observed present-day climate
Figure 1 shows the 1989–2008 climatological-average 2 m temperatures from ERA-Interim (ERAINT) reanalysis for winter (DJF, Fig. 1a) and summer (JJA, Fig. 1b), interpolated on the CRCM5 grid. In addition, the central and right columns in Fig. 1 display the deviations from ERAINT values of corresponding fields from two other observational datasets that are only available over land: the University of East Anglia Climate Research Unit (CRU, version TS3.1; Mitchell and Jones 2005) and the University of Delaware (UDEL, version 2.01; Willmott and Matsuura 1995). It can be seen in Fig. 1 that, with the exception of Greenland and northern parts of the Canadian Archipelago, the differences among these datasets over central and eastern parts of the continent are generally not large. The CRU values tend to be somewhat cooler than ERAINT while UDEL values tend to be warmer; the absolute differences are, however, mostly confined to ±1 °C. Over the western part of the continent and Mexico, characterized with complex topography, there is somewhat less agreement between the three datasets, giving rise to differences locally as large as ±4 °C. Part of these differences might arise because of a somewhat coarser resolution of ERAINT reanalysis. It is produced with an assimilation system operating on a 0.75° reduced Gaussian grid with spectral truncation T255 (Dee et al. 2011) but the publicly available ERAINT 2 m temperatures are provided on the 1.5° latitude-longitude grid, which could result in some smoothing of the original data. The UDEL and CRU datasets (0.5°), might more accurately represent the local differences in elevation. On the other hand, the latter two datasets might suffer problems related to the localization of station data (valleys and mountains).
The first column of Fig. 2 shows the climatological-average precipitation for 2001–2008 in winter (Fig. 2a) and summer (Fig. 2b) from the Global Precipitation Climatology Project global daily merged precipitation analysis (GPCP, 1DD; 1°; Huffman et al. 2001). The other three columns on Fig. 2 display seasonal-average deviations of CRU, UDEL and Tropical Rainfall Measuring Mission (TRMM, 3B42, 0.25°, 1998–2009; Huffman et al. 2007) datasets, respectively. The TRMM dataset is defined over land and oceans, but only for latitudes below 50°N and for a shorter time frame. Very large deviations from the other three sets having been noted in the TRMM seasonal means in the period 1998–2000 in the 40°N–50°N range (not shown), we decided to exclude these years and to use only 2001–2008. The same period is used in Fig. 2 in order to compare the four datasets.
Fig. 2 GPCP 2001–2008 average observed precipitation and the deviations of the CRU, UDEL and TRMM mean precipitation from the GPCP observations in (a) DJF and (b) JJA
The deviations of each of the three datasets in Fig. 2 are normalized with the arithmetic mean between that dataset and GPCP. In winter (Fig. 2a), CRU and UDEL have considerable dry deviations over Alaska and northern Pacific Coast. Another important feature of these sets is a dry deviation in the US central plains. A careful examination of this feature shows that the gradient of precipitation difference closely follows the US-Canada border. This cross-border discontinuity in winter precipitation has been attributed to snowdrift treatment and differences in catch characteristics between the national gauges (Yang et al. 2005). It is present in both CRU and UDEL datasets that are purely based on ground observations. On the other hand, GPCP and TRMM combine satellite and gauge data, which likely diminishes the cross-border discontinuity in winter. In summer, there is no such discontinuity and, in general, the relative differences among different datasets become considerably smaller. It is also worth noting that TRMM dataset exhibits a general dry deviation with respect to GPCP in both summer and winter, especially in the western-most regions of the continent and over the Pacific Ocean, locally as large as 100 %, which implies three times lower values in TRMM than in GPCP. A more thorough discussion of the TRMM bias and other observation uncertainties can be found in Nikulin et al. (2012) for CORDEX-Africa domain.
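To spell out this normalization (the convention is as stated above; the algebra is ours): the plotted relative deviation of a dataset $X$ from GPCP ($G$) is $\Delta = 100\,\% \times (X - G)/\tfrac{1}{2}(X + G)$. A value of $\Delta = -100\,\%$ gives $2(X - G) = -(X + G)$, i.e. $X = G/3$, which is the factor-of-three lower TRMM value referred to above.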
For validation of CRCM5-simulated spatially averaged precipitation, such as when evaluating the precipitation annual cycles over aggregated regions, the 1° GPCP set will be used as a reference in order to avoid the systematic differences associated with the snowdrift treatment. However, for grid-point validations of the CRCM5 precipitation the 0.5° CRU set will be used instead, because its higher spatial resolution matches more closely that of CRCM5. It can be seen in Fig. 2a that in winter over Mountainous West there are relatively large local differences between the GPCP and the CRU or UDEL sets. The latter two yield more precipitation on the western slopes of mountains, exposed to the westerly flow, but less over eastern slopes in the lee of mountains (such as the Okanagan Valley and Alberta Foothills). These local differences are likely due to the coarseness of the GPCP dataset. Finally, for the purpose of comparison of CRCM5 precipitation at higher temporal resolution, such as daily accumulation time series, we decided to utilize the high-resolution TRMM set (0.25°) since it can potentially better represent heavy precipitation events; for a comparison of GPCP and TRMM daily precipitation distributions, see Martynov et al. (2013).
For validation of CRCM5 seasonal 2 m temperatures we will use CRU data. Finally, for comparison of CRCM5 daily temperature time series ERAINT reanalysis will be utilized. Daily temperatures are not so fine-scale dominated as precipitation and we do not expect the choice of the reference dataset to have a large impact on the assessment of CRCM5 skill in reproducing daily temperature distributions, except in regions with complex topography, such as the Pacific Coast or Mountainous West, where deviations of the ERAINT reanalysis from UDEL and CRU datasets were noted (Fig. 1).
In the next section we begin the evaluation of CRCM5 simulations of the present climate by first considering seasonal-average variables.
Evaluation of CRCM5 seasonal averages
Among the CGCM variables used to force the CRCM5 simulations it appears that the SSTs have a large impact on the CRCM5 skill in reproducing the present climate. Figure 3 shows the DJF- and JJA-average SST biases in the CRCM-Can and CRCM-MPI simulations. The SSTs shown in Fig. 3 are identical to those of the corresponding CGCM simulations, except in regions where they are not defined and hence needed to be extrapolated, such as, in the case of CanESM2, the Canadian Archipelago and the Gulf of California. It can be seen in Fig. 3 that both CRCM-Can and CRCM-MPI exhibit a cold bias of 2–6 °C off the mid-latitude Pacific Coast in winter and a warm bias off the subtropical Pacific Coast in all seasons. This warm bias is exceptionally large in CRCM-MPI in summer, when it reaches 6 °C and also extends farther northward. Both models also have considerable SST biases in the Atlantic: a warm bias off the East Coast and a strong cold bias in the north-central Atlantic, implying that the Gulf Stream is not well represented. It is worth noting that these biases are quite a bit larger than the interannual variability; the standard deviation of seasonal average SSTs is mainly confined to 1–2 °C (not shown).
Fig. 3 Deviation of the 1989–2008 average SST in the CRCM-Can (a, b) and CRCM-MPI (c, d) from the ERA-Interim 1989–2008 mean, for DJF (a, c) and JJA (b, d)
The biases of CGCM-driven CRCM5 simulations can be thought of as originating from: (1) the CRCM5's own structural errors that are present even when driven with perfect lateral and lower boundary conditions, and (2) the effect of errors in the lateral boundary conditions and lower boundary forcing over ocean (SST and SIC) that are "inherited" from the driving CGCM, as well as due to the internal variability of the CGCM. Upon assuming that the reanalysis and observation errors are negligible, the CRCM5 structural bias (denoted as SB) can be quantified as the deviation of the reanalysis-driven simulation from observations. The lateral and lower boundary conditions effect (denoted as LLBCE) can then be assessed as the deviation of a CGCM-driven CRCM5 simulation from the reanalysis-driven CRCM5 simulation.
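Written out explicitly (our notation): with $O$ the observations, $R_{\mathrm{ERA}}$ the reanalysis-driven run and $R_{\mathrm{GCM}}$ a CGCM-driven run, $\mathrm{SB} = R_{\mathrm{ERA}} - O$ and $\mathrm{LLBCE} = R_{\mathrm{GCM}} - R_{\mathrm{ERA}}$, so the total bias decomposes exactly as $R_{\mathrm{GCM}} - O = \mathrm{SB} + \mathrm{LLBCE}$. In terms of the figure described next, the total bias in panel a (respectively b) of Fig. 4 is therefore, grid point by grid point, the sum of panel c and panel d (respectively e).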
Figure 4a, b show the 1989–2008 DJF-average 2 m-temperature biases in the CRCM-Can and CRCM-MPI simulations with respect to CRU observations. These biases are each decomposed into (1) the CRCM5 SB, quantified by the CRCM-ERA deviation from CRU, which is displayed in Fig. 4c and is common to both CRCM-Can and CRCM-MPI simulations, and (2) the LLBCE, displayed in Fig. 4d, e for CRCM-Can and CRCM-MPI, respectively. Finally, for the purpose of the comparison, we show the CGCMs' own DJF 2 m-temperature structural biases in Fig. 4f, g for the CanESM2 and MPI-ESM-LR simulations, respectively.
Fig. 4 Differences between (a) CRCM-Can, (b) CRCM-MPI, (c) CRCM-ERA, (f) CanESM2, (g) MPI-ESM-LR and CRU 1989–2008 DJF-mean 2 m temperatures; (d) difference between CRCM-Can and CRCM-ERA 1989–2008 DJF-mean 2 m temperatures; (e) the same as in (d) but between CRCM-MPI and CRCM-ERA
It can be seen in Fig. 4a, b that both CGCM-driven runs exhibit moderate to strong cold biases of −2 to −8 °C in DJF over most of the continent, except in the northeastern parts where the biases are near zero in CRCM-MPI and about +2 to +4 °C in CRCM-Can. Inspection of Fig. 4c shows that the cold bias over the western US and Mexico is also present in CRCM-ERA, though with somewhat smaller magnitude. It is however still much larger than the interannual standard deviation of DJF 2 m temperatures in these regions, which takes values around 1–2 °C over the western US and smaller than 1 °C over Mexico (not shown). It appears that the cold bias over these regions in the CGCM-driven runs is to a large degree due to the CRCM5's own structural errors. However, Fig. 4d shows that over the southwestern part of the continent the LLBCE also contributes to the cold bias when the CRCM5 is forced with the CanESM2. On the other hand, the warm bias over eastern Canada in CRCM-Can in Fig. 4a is mostly due to the LLBCE (Fig. 4d) since it is absent in CRCM-ERA (Fig. 4c). When the CRCM5 is forced with the MPI-ESM-LR (Fig. 4e), the LLBCE has a considerable contribution to the cold bias over the entire western and central part of the continent; this is very likely due to the cold SST bias in the Northern Pacific in MPI-ESM-LR (see Fig. 3c). Finally, Fig. 4f shows that the winter temperature bias pattern in the CanESM2 is similar to that in CRCM-Can, although CanESM2 tends to be warmer by a few degrees. The MPI-ESM-LR appears to have the best overall skill in reproducing winter temperatures over North America (Fig. 4g), with the exception of a strong cold bias over the Pacific Northwest, which it has in common with the CRCM-MPI (Fig. 4b).
Figure 5 displays the corresponding analysis for summer. In general, the CRCM5 performs better in summer. The CRCM-Can summer temperatures (Fig. 5a) exhibit a relatively uniform warm bias of up to 4 °C in the interior of the continent. The exception is Mexico, where there is a cold bias of similar magnitude. There is also a narrow region stretching over the northern-most Pacific Coast with strong cold biases of magnitude as large as −8 °C. Comparison of Fig. 5a with Fig. 5c shows similar patterns in CRCM-ERA over the northern-most Pacific Coast as well as over Mexico, implying that these features are due to the CRCM5 SB. The LLBCE in CRCM-Can (Fig. 5d) is considerable over northern Canada, where it reaches 2–4 °C. It is worth noting here that the standard deviation of JJA average 2 m temperatures is confined to 1 °C over most of the continent. This implies that the summer bias, despite being smaller in absolute terms than the bias in winter, is still large with respect to the interannual variability. CRCM-MPI summer temperatures (Fig. 5b) are, in general, quite close to the observations, with the exception of a cold bias over the West Coast and Mexico that is due to the CRCM5 SB (Fig. 5c). Comparison of Fig. 5c, e shows that the relatively high skill of CRCM-MPI over the central parts of the continent (Fig. 5b) is a consequence of the cancelation of the CRCM5 SB and LLBCE; the CRCM5 SB and LLBCE in CRCM-MPI summer temperatures are of similar magnitude but of the opposite sign. A negative LLBCE in Fig. 5e might be partly due to the cold SST biases over the northern Pacific and mid-latitude Atlantic in summer (Fig. 3d). Figure 5f shows that the CanESM2 has a very strong warm bias over the central part of the continent, with values as large as 10 °C. As can be seen in Fig. 5a, CRCM5 substantially improves summer CanESM2 2 m temperatures. On the other hand, CRCM-MPI has biases roughly similar to those in MPI-ESM-LR, but each of these has less than half of the amplitude of those found with CanESM2.
Fig. 5 Same as in Fig. 4 but for JJA 2 m temperatures
Next we consider seasonal precipitation. Figure 6 displays the bias for 1989–2008 winter precipitation using the CRU data as a reference. The biases are normalized with the arithmetic average between the model and observed precipitation, and are expressed as a percentage. Figure 6a–c as well as 6f and g show that all simulations exhibit a wet bias of 50–100 % over the Great Plains, south of the US-Canada border. Similar biases are present over Alaska and the Arctic Archipelago. As discussed earlier, despite being large these biases are of the order of magnitude of the differences among the observation sets (see Fig. 2) and for this reason will not be pursued further. In other regions the CRCM simulations exhibit relatively small differences with respect to CRU observations. However, as can be seen in Fig. 6a, b, the exception is Mexico, where the CRCM-Can and CRCM-MPI winter precipitation is strongly overestimated. Figure 6d, e show that the wet bias over central and western Mexico is mainly due to the LLBCE, since it has no counterpart in the CRCM-ERA simulation (Fig. 6c). The warm SST bias over the subtropical Pacific in the CRCM-Can and CRCM-MPI (Fig. 3c, d) may contribute to this wet bias. Figure 6d also shows that the LLBCE in the CRCM-Can winter precipitation contributes to the wet bias over the southern and eastern coastal regions of the continent, likely due to the warm SST biases off the coast in these regions in the CRCM-Can simulation (Fig. 3b). Comparison of Fig. 6f, g with Fig. 6a, b shows that the CGCMs' bias patterns are relatively similar to those in the corresponding CRCM simulations, except over the mountainous regions over the western parts of the continent. Both CGCM simulations exhibit common strong biases, locally larger than 100 % in magnitude, which can be associated with the poorly resolved topography in the CGCMs' simulations. The most notable feature in Fig. 6f, g is a long stretch of positive bias in lower basins between the Rocky Mountains and Coastal Range. This bias is however absent in the corresponding CRCM-Can and CRCM-MPI simulations (Fig. 6a, b), demonstrating the CRCM added value in the simulated winter precipitation due to a better resolved topography.
Fig. 6 Same as in Fig. 4 but for 1989–2008 DJF-average precipitation, with CRU observations as reference
We complete this section with the corresponding analysis for summer. CRCM-Can summer precipitation (Fig. 7a) exhibits relatively good agreement with the observations over the northern parts of North America. Over the Central Plains and the Rocky Mountains there is a dry bias from 25 to 75 %. Similar bias patterns are found in the CRCM-ERA precipitation (Fig. 7c), implying that they are mainly due to the CRCM5 SB. Further, CRCM-Can precipitation exhibits a strong dry bias over the Pacific Coast, stretching from Mexico to Southern California, as well as over the US Southwest. It is also worth noting that there is a strong dry bias over the Greater Antilles and the northern Gulf of Mexico. These features partly originate in the CRCM5 SB (Fig. 7c) and the LLBCE (Fig. 7d). The CRCM-MPI summer precipitation (Fig. 7b) is quite close to observations over most of the continent. It is worth noting however that the CRCM5 SB (Fig. 7c) is negative over the Central Plains while the LLBCE has a positive contribution there (Fig. 7e), yielding a cancelation of errors and a good skill of CRCM-MPI in reproducing summer precipitation, as was the case for summer temperatures. The largest positive deviation of CRCM-MPI summer precipitation from CRU occurs in the North American monsoon region, from the southern tip of Baja California, northward, into northwest Mexico and the US Southwest (Fig. 7b). The position of this pattern corresponds very well with the LLBCE displayed in Fig. 7e, implying that it is "inherited" from the driving MPI simulation. It is however very difficult to understand the nature of this wet bias in the CRCM-MPI simulation since the monsoon precipitation results from opposing effects. There is a strong positive SST bias of up to 4 °C in the driving MPI simulation off the coast of this region (Fig. 3d). The SST bias may have enhanced the evaporation and hence increased the precipitation over the adjacent coastal regions, yielding a wet bias in CRCM-MPI. On the other hand, the warm SST bias also implies a smaller land-sea temperature contrast and may weaken the monsoon; negative correlations between the SST anomalies off northern Baja California and monsoon precipitation have been documented in the literature (e.g., Vera et al. 2006). Other LLBC effects may include the moisture flow via the synoptic-scale circulation as well as the soil moisture. The CRCM-MPI simulation exhibits a wet bias over the US Southwest and Mexico in winter (Fig. 6b).
Fig. 7 Same as in Fig. 6 but for JJA precipitation
Now we proceed to a more detailed evaluation of CRCM5 2 m temperature and precipitation by first considering the annual cycles of monthly means and then the daily time series distributions. For this purpose the NARCCAP regions of North America, proposed in Bukovsky (2011), will be used. In any regionalization there is a trade-off between selecting either smaller, quasi-homogeneous or larger, aggregated regions; we decided to use the latter approach. The ten Bukovsky regions that will be used here are displayed in Fig. 8. Following Martynov et al. (2013), we introduced two additional regions situated in the US Southwest, in order to analyze the precipitation related to the North American monsoon. These two regions are denoted as CORE and Arizona-New Mexico (AZNM) in Fig. 8.
Fig. 8 Map of regionalization adopted from Bukovsky (2011)
Evaluation of annual cycles
For the sake of brevity we will evaluate the annual cycle of CRCM5 precipitation over selected regions; we omit the temperature as the evaluation of 2 m-temperature annual cycles in the reanalysis-driven CRCM-ERA simulation can be found in Martynov et al. (2013). The authors showed that the annual cycle of 2 m-temperature was in most cases generally well reproduced by the model, as was the interannual variability of this variable. Figure 9 displays the annual cycles of regional-average 1997–2008 monthly-mean precipitation in each of the 12 regions, for CRCM-ERA, CRCM-Can and CRCM-MPI. The GPCP precipitation is used as the reference. We also show the precipitation simulated by the two driving CGCMs: CanESM2 and MPI-ESM-LR.
Annual cycles of 1997–2008 monthly mean precipitation for the GPCP observations (green), CRCM-ERA (black-full), CRCM-Can (cyan-full), CanESM2 (cyan-dashed), CRCM-MPI (pink-full) and MPI-ESM-LR simulations (pink-dashed line)
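As an aside, the computation behind such curves is straightforward; the following is a minimal sketch of how the regional annual cycles could be assembled (our own illustration in Python/NumPy with synthetic stand-in data; the function name, array shapes and the assumption that the monthly record starts in January are ours and not part of the study's processing chain):

```python
import numpy as np

def annual_cycle(monthly_precip, region_mask):
    """Regional-average annual cycle of monthly-mean precipitation.

    monthly_precip : array (nyears * 12, ny, nx) of monthly means (mm/day),
                     assumed to start in January
    region_mask    : boolean array (ny, nx), True inside the region
    Returns a 12-element array with the multi-year mean for each calendar month.
    """
    # Spatial average over the region for every month in the record ...
    regional = monthly_precip[:, region_mask].mean(axis=1)
    # ... then average the same calendar month over all years.
    return regional.reshape(-1, 12).mean(axis=0)

# Toy example: 12 years of synthetic monthly means on a small grid.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=1.5, size=(12 * 12, 30, 50))
mask = np.zeros((30, 50), dtype=bool)
mask[5:20, 10:40] = True
print(np.round(annual_cycle(precip, mask), 2))  # Jan..Dec climatology, mm/day
```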
The first row in Fig. 9 displays the results for regions that are characterized by cold-season minimum and warm-season maximum precipitation (Arctic Land, Boreal and Central). Over Arctic Land all simulations tend to somewhat overestimate precipitation in all seasons but, at the same time, they represent the annual cycle relatively well. The reanalysis-driven simulation CRCM-ERA has the strongest wet bias in late summer (Aug–Oct) of about 0.5 mm/day. The maximum precipitation in CRCM-ERA is in August, which is in accord with the GPCP observations. On the other hand, in both CRCM-Can and CRCM-MPI the maximum is shifted to September; this is also the case with the CanESM2 precipitation. Over Arctic Land the MPI-ESM-LR has a strong wet bias in spring but the CRCM5 simulation forced with MPI-ESM-LR tends to be much closer to the GPCP values in this season. Over the Boreal forest region the GPCP precipitation has a maximum in July due to the peak in convection and another maximum in September. The CRCM-ERA reproduces this feature, although the convective maximum occurs too early, in June. The CRCM-Can precipitation has the same behaviour as the CRCM-ERA. On the other hand, the CRCM-MPI has some wet bias over the Boreal forest region in summer, as is the case in MPI-ESM-LR, and in addition exhibits a single maximum in August. The observed precipitation cycle in the Central region is characterized by a single peak in June and a minimum in January. The CRCM-ERA run performs quite well in Oct–May but not in Jun–Sep, when it has a pattern quite a bit different from the observations; instead of a June maximum, the CRCM-ERA precipitation decreases from May, having a minimum in July, when it has a dry bias of about 1 mm/day, and then increases until October, when it again gets close to the observations. The same holds for CRCM-Can and CanESM2 precipitation, the latter also having a dry bias in all seasons. The best results are obtained with CRCM-MPI, which accurately reproduces the precipitation annual cycle over the Central region. However, as noted when discussing CRCM-MPI summer precipitation over this region (Fig. 7), the CRCM5 SB is balanced by the LLBCE, resulting in the cancelation of the two errors and a small CRCM-MPI bias. Interestingly, the same holds for the CRCM-MPI precipitation annual cycle.
The second row in Fig. 9 displays the precipitation annual cycles for the Great Lakes, East and South regions, which are characterized by a more uniform precipitation throughout the year. Over the Great Lakes the GPCP curve shows an increase in precipitation in May–Sep, with respect to other months. In the CRCM-ERA this increase occurs much earlier and, in disaccord with the observations, the precipitation rate decreases in mid-summer. This also characterizes the CRCM-Can and CanESM2. However, CRCM-Can improves quite a bit the precipitation of its driving CGCM. CRCM-MPI more closely follows the GPCP curve, with some overestimation in early summer; this simulation also appears to improve on its driving CGCM, which has a strong wet bias in summer over the Great Lakes. Next, over the East region, the CRCM-ERA precipitation is relatively close to GPCP, though there is some wet bias of up to 0.5 mm/day in almost all seasons. The CRCM-Can and CRCM-MPI both overestimate precipitation in the East and this is likely due to the warm SST bias off the East Coast in the two simulations (see Fig. 3a–d). In the South region, the GPCP shows multiple maxima and minima. The CRCM-ERA captures the May and October minima very well, but not the minimum in August, when it overestimates the precipitation by about 0.5 mm/day. It is however close to the observed values during the June and September maxima. CRCM-MPI variations are quite close to the GPCP in summer and autumn months, while in winter and spring its variations do not agree with the observations. The CRCM-Can annual cycle over the South deviates the most from the GPCP values, having a more pronounced annual variation with a maximum in winter and a minimum in summer. The CanESM2 and MPI-ESM-LR also have too pronounced annual variations, the former with a dry bias in summer and the latter with a dry bias in winter; their CRCM5 counterparts are still closer to the observations in the South.
Next we consider the Pacific NW, Pacific SW and Mountainous West (Mt West) regions, characterized by a summer minimum and winter maximum in the precipitation annual cycle (third row in Fig. 9). In the Pacific NW all CRCM5 simulations display an annual cycle similar to the observed one but tend to be too wet, by a few mm/day, especially in early winter. The driving CGCMs appear to have better results, especially the CanESM2, whose native region is the Pacific NW. Over the Pacific SW, both the CRCM-ERA and CRCM-Can are able to reproduce the annual cycle of precipitation. The CRCM-MPI, however, is too wet by 2 mm/day in winter-spring and produces considerable precipitation in Aug–Sep, while in the GPCP there is almost no precipitation in these months. This implies that the North American monsoon propagates to the southern-most portions of the Pacific SW region (see also Fig. 7b, e), which does not happen in nature (e.g., Adams and Comrie 1997). Note also a strong warm SST bias in the CRCM-MPI simulation in the subtropical Pacific (Fig. 3c, d). Interestingly, the driving MPI-ESM-LR simulation produces no such wet bias in summer. In the Mt West the GPCP shows a weak annual variation with a general minimum in summer and two maxima in January and June. None of the models represents this behaviour well; they all tend to have a too pronounced annual variation, with a dry bias in summer and a wet bias in winter, the latter being especially strong in the CRCM-MPI and MPI-ESM-LR.
Finally we turn our attention to the Desert region (the bottom row in Fig. 9) and the two joint subregions AZNM and CORE, where the summer precipitation is governed by the North American Monsoon regime. In the Desert and CORE regions, the CRCM-ERA closely follows the GPCP curve in the Sep–Jun period, but it does not represent well the Jul–Aug maximum; it is too dry in summer and the maximum is lagged towards Aug–Sep. The CRCM-Can exhibits a similar behaviour but has a stronger dry bias in summer and also a wet bias in winter. The CRCM-MPI simulation strongly overestimates precipitation in all seasons. In the northern-most part of the Desert region (AZNM), the CRCM-ERA and CRCM-Can represent the summer precipitation somewhat better, being able to reproduce the correct timing of the monsoon-related maximum in August. They still have, however, a dry summer bias and some wet bias in winter-spring in AZNM.
In summary of Fig. 9, the two CGCM-driven CRCM5 simulations reproduce the most general features of the regional precipitation annual regimes but disagree with the observations in finer details. Some of these differences could be generated by the CGCM natural variability. Apart from errors in the annual variation, the RCM simulations tend to overestimate precipitation in all seasons over the western parts of the continent and Mexico, especially the CRCM-MPI, which has a too-warm subtropical Pacific SST. The CGCM-forced CRCM5 simulations also exhibit a relatively high skill in the monsoon timing but do not reproduce the precipitation amounts as accurately as the reanalysis-driven simulation CRCM-ERA.
Evaluation of spatiotemporal distributions
We now move to the investigation of the spatiotemporal distributions of temperature and precipitation. Spatiotemporal distributions were obtained by treating each archival time and grid point within a region as an individual datum; these data are pooled into a single large set, which is then used to assess the empirical distribution of a climate variable for that region.
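To make the procedure concrete, here is a minimal sketch of how such a pooled spatiotemporal distribution and its statistics might be computed (our own Python/NumPy illustration with synthetic stand-in data; the function name, array names and shapes are assumptions, not the study's actual processing chain):

```python
import numpy as np

def pooled_stats(daily_values, region_mask):
    """Pool daily values from all grid points inside a region into one
    sample and return its mean, 5th and 95th percentiles.

    daily_values : array (ntime, ny, nx), e.g. daily-mean 2 m temperature
    region_mask  : boolean array (ny, nx), True inside the region
    """
    # Each (time, grid point) pair inside the region is treated as one datum.
    pooled = daily_values[:, region_mask].ravel()
    return pooled.mean(), np.percentile(pooled, 5), np.percentile(pooled, 95)

# Toy example with synthetic data standing in for a 20-year daily series.
rng = np.random.default_rng(0)
t2m = rng.normal(loc=-10.0, scale=8.0, size=(7300, 40, 60))   # ntime, ny, nx
mask = np.zeros((40, 60), dtype=bool)
mask[10:25, 20:45] = True                                      # a rectangular "region"
mean, p5, p95 = pooled_stats(t2m, mask)
print(f"mean={mean:.1f} C, 5th={p5:.1f} C, 95th={p95:.1f} C")
```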
Figure 10 summarizes the results for 1989–2008 daily-mean 2 m-temperature series for ERAINT, the three CRCM5 simulations and two driving CGCM simulations, for each of the ten Bukovsky's regions. Panels a-c display the distribution mean, the 5th and 95th percentile, respectively, as a function of region for winter, and panels d-f show the same for summer. We begin the discussion by analysing the winter mean temperature (Fig. 10a). In Arctic Land, Boreal, Great Lakes and East regions, the reanalysis-driven CRCM-ERA (black circles) has a high skill in reproducing the ERAINT (green circles). In these regions the MPI-ESM-LR simulation (pink diamonds) is also very close to ERAINT, while the CanESM2 (cyan diamonds) has a warm bias of 2–6 °C. Clearly, the two CGCM-forced CRCM5 simulations (CRCM-Can as cyan squares and CRCM-MPI as pink squares) have smaller biases than the corresponding CGCM runs in these regions. In the other regions (Central, South, Pacific NW and SW, Mt West and Desert) the CRCM5 has a cold structural bias (SB), measured by the deviation of the CRCM-ERA from ERAINT, of 2–4 °C. Due to this CRCM5 SB, winter-average temperatures are generally underestimated in CRCM-Can and CRCM-MPI in these regions.
The mean (a, d), 5th (b, e) and 95th percentiles (c, f) of daily-mean temperatures for 1989–2008, as a function of Bukovsky's regions, in a–c DJF and d–f JJA; ERAINT (green circles), CRCM-ERA (black circles), CRCM-Can (cyan squares), CanESM2 (cyan diamonds), CRCM-MPI (pink squares), MPI-ESM-LR (pink diamonds)
Next we consider the 5th percentile of daily-average temperatures in winter (Fig. 10b). It can be seen that the results are very similar to those obtained when the mean was considered. Again, in Arctic Land, Boreal, Great Lakes and East regions, the reanalysis-driven CRCM-ERA is very close to the reference ERAINT. The CGCM-driven CRCM5 simulations produce generally substantially better results than the driving CGCM runs. Note also that over Arctic Land, all models appear to perform very well when compared to ERAINT. However, ERAINT data are obtained in a process by which model information and observations are combined to produce consistent global parameters (Dee et al. 2011). Since the observations are very sparse over Arctic Land, ERAINT data are less constrained by the observations and rely more on model information. Thus it is possible that ERAINT shares common biases with the present CRCM5 and CGCM simulations. As for the rest of the regions, it can be seen that the biases of the 5th percentile (Fig. 10b) tend to be of the same sign but of a somewhat larger magnitude when compared to the biases in the mean (Fig. 10a). The largest deviations from ERAINT are found for CRCM-MPI and MPI-ESM-LR over the Pacific NW, where they both have a cold bias of almost 10 °C. It is worth recalling the cold bias in CRCM-MPI SST over the North Pacific (Fig. 3c).
When the 95th percentile of winter daily 2 m-temperature is considered (Fig. 10c), it can be seen that the three CRCM5 simulations tend to show better performance than for the 5th percentile. This implies that the models generally have more difficulty reproducing the observed left tail of the daily-temperature distribution. One exception is the Central region, where the cold bias in the 95th percentile is similar to that in the 5th percentile and the mean, implying that the entire distribution is shifted to lower temperatures.
We now turn our attention to the corresponding summer data, summarized in Fig. 10d–f. When the mean is considered, it can be seen that, with the exception of Arctic Land and Desert, the CRCM-ERA summer temperatures are very close to ERAINT, implying that there is little SB in the CRCM5. At the same time the CGCMs' biases are quite large in some regions, especially in the case of the CanESM2. For example, in the Central region, the CanESM2 summer mean is warmer than ERAINT by more than 10 °C. However, the CRCM-Can is also close to ERAINT. This, along with the fact that there is no considerable SB, implies that the improvement of summer temperatures in the CRCM-Can simulation relative to CanESM2 is achieved for good reasons and not as a result of a simple cancelation of biases. On the other hand, MPI-ESM-LR temperatures tend to be somewhat colder than ERAINT, but are in general rather good in both the mean and the percentiles; this is also the case for the CRCM-MPI temperatures. The largest biases of the CRCM5 simulations are found in the Pacific NW in the 5th percentile (Fig. 10e), where the underestimation is about 6–8 °C. Contrary to that, for the 95th percentile (Fig. 10f) the CRCM5 temperatures are in accord with ERAINT. It is possible that part of the apparent cold bias in the 5th percentile in the Pacific NW originates from the coarser resolution of the ERAINT data. In this topographically complex region the CRCM5 grid points can consequently lie at a higher elevation than the ERAINT grid points, allowing lower temperatures to enter the spatiotemporal distribution in summer. However, the difference in the resolution of the model (0.44°) and ERAINT (1.5°) is likely too small to explain such large differences.
Comparison of Fig. 10d–f shows that in Arctic Land there is a cold bias in excess of 5 °C in the three CRCM5 simulations in the 5th percentile, while there is almost no bias in the 95th percentile. This implies that the cold bias in the mean is largely due to an underestimation of temperatures during cold days. To put it simply, cold days are too cold in the CRCM5, while warm days are well reproduced. This can be interpreted as a leftward shift of the temperature distribution's left tail. We found a similar situation in the Boreal region, though the cold bias in the 5th percentile is smaller there. On the other hand, in the Desert region in summer, there is a cold bias of a similar magnitude in the mean, 5th and 95th percentiles, implying that the entire CRCM-ERA temperature distribution is left-shifted with respect to that of ERAINT.
In summary, it should be noted that, with the exception of a few cases, the reanalysis- and CGCM-driven CRCM5 simulations exhibit a relatively good skill in reproducing regional near-surface temperature means. This skill is not considerably deteriorated in the limit of lower and higher percentiles of the distribution, which is a necessary condition for a realistic representation of the natural variability of daily temperatures in the present-day climate.
We now move to spatiotemporal distributions of daily-mean precipitation. Evaluation of precipitation distributions is conducted using the regridded TRMM daily means. Since the TRMM data are defined at latitudes below 50°N, we modified the Bukovsky regions (Fig. 8) to fit within this constraint; Arctic Land and Boreal are excluded from consideration, while the Pacific NW, Mt West and Central regions are reduced to southward of 50°N. Note also that the CRCM5 grid mesh is coarser by about a factor of two than that of the TRMM data. The effect of the CRCM5's lower spatial resolution is to potentially shift distributions towards smaller intensities, since the local heavy precipitation events that might occur in TRMM would be smoothed in the CRCM5. However, averaging in time acts in the same way, by reducing differences due to the spatial resolution (e.g., Di Luca et al. 2012a). Using daily averages is expected to reduce the differences caused by the different spatial resolutions of TRMM and CRCM5. This, however, may not be the case with the CGCM simulations, because their spatial resolution differs from that of TRMM by a much larger factor.
The frequency-intensity precipitation distributions are obtained by pooling the 2001–2008 gridded seasonal time series of daily means from every grid point within a region into a single large set, treating the value at each grid point and day as an individual datum. We then computed the relative frequency of values smaller than 0.1 mm/day in this large set; this frequency is interpreted as the relative frequency of dry days. The values above this threshold are sorted and binned over intervals bounded by 0.1, 1 and \(2^{n}\) mm, where n = 1, 2, etc. Finally, the sum of accumulations falling into each individual bin is normalized by the sum of accumulations over all bins, i.e., the total 2001–2008 accumulated precipitation. The resulting normalized distribution will be referred to as the relative daily-accumulations distribution (RDAD). Because of the normalization of the accumulations collected in individual intensity ranges (bins) by the total accumulation, a deviation of the simulated total accumulation over a region from the observed value has no effect on the RDAD; the RDAD only quantifies the portion of the total precipitation over a region that is collected in each daily intensity range. The bias in the mean, as well as the frequency of wet/dry days, is thus to be considered separately.
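The binning and normalization just described can be summarized in a few lines of code; the sketch below is our own illustration (Python/NumPy with synthetic, gamma-distributed daily precipitation; the bin edges follow the description above, but the function name and data are assumptions, not the study's code):

```python
import numpy as np

def rdad(daily_precip, wet_threshold=0.1):
    """Relative daily-accumulations distribution (RDAD).

    daily_precip : 1-D array of pooled daily precipitation values (mm/day)
                   for one region and season.
    Returns the dry-day frequency and, for each intensity bin, the fraction
    of the total accumulation collected in that intensity range.
    """
    dry_freq = np.mean(daily_precip < wet_threshold)

    # Bin edges: 0.1, 1, 2, 4, 8, ... mm/day (roughly doubling intensities).
    edges = np.array([0.1, 1.0] + [2.0 ** n for n in range(1, 9)])

    wet = daily_precip[daily_precip >= wet_threshold]
    idx = np.digitize(wet, edges) - 1          # bin index of each wet day
    accum_per_bin = np.array([wet[idx == i].sum() for i in range(len(edges))])

    # Normalize by the total accumulation so the values sum to 1 (i.e. 100 %).
    return dry_freq, accum_per_bin / accum_per_bin.sum()

# Toy example with synthetic daily precipitation.
rng = np.random.default_rng(1)
precip = rng.gamma(shape=0.4, scale=6.0, size=50000)
dry, rel_accum = rdad(precip)
print(f"dry-day frequency: {dry:.2f}")
print("relative accumulation per bin:", np.round(rel_accum, 3))
```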
Figure 11 shows the RDADs, for DJF 2001–2008, from CRCM-Can, CRCM-MPI, CanESM2 and MPI-ESM-LR simulations in the 10 regions. For every set (model or observations) the RDADs at each intensity range are expressed in percentage of the total accumulation within that set. Also printed are the relative frequency of dry days, spatiotemporal average and maximum daily precipitation. Note that this figure should really be shown as histograms; it is shown as curves for ease of comparing different datasets. It can be seen that in Central, Great Lakes, East, South and Mt West, all simulations depart from the TRMM observations in a quite similar manner by having: (1) a wet bias in the mean, which can be seen by inspecting the printed values of the regional averages, and (2) a leftward displacement of the RDAD, especially in the left tail, implying that the accumulations at lower precipitation rates have a too large contribution to the total accumulation. The overestimation of the accumulations at lower rates comes at the expense of dry days, which are strongly underestimated in these regions. For example, over the Great Lakes, in TRMM on average 84 % of all days in DJF 2001–2008 are dry days, while their frequency is only 23–35 % in all simulations, including both the CRCM5 and the two CGCM simulations.
The relative daily accumulation distributions (RDAD) for 2001–2008 DJF daily precipitation series; TRMM observations (green), CRCM-ERA (black-full), CRCM-Can (cyan-full), CanESM2 (cyan-dashed), CRCM-MPI (pink-full) and MPI-ESM-LR simulations (pink-dashed line)
In the case of the Pacific NW and SW, there is a strong wet bias in the average in all simulations (see printed values), but at the same time, the RDADs in the three CRCM5 simulations are quite close to the TRMM. Despite the bias in the mean, the partitioning of accumulations is still close to the observations. In Pacific SW and NW, however, the CGCM simulations have less bias in the total accumulations but exhibit some leftward shift in their RDADs (toward lower-intensity bins), especially for the coarser-resolution CanESM2. The CGCM-forced CRCM5 simulations improve the RDADs of the driving CGCMs, possibly because they better represent the topography than the coarser-resolution CGCM simulations. Finally, over the southernmost regions Desert, AZNM and CORE, the CRCM-ERA and CRCM-Can RDADs are quite close to the TRMM, while the MPI-ESM-LR and especially the CRCM-MPI exhibit a shift towards higher intensities, which can be attributed to the warm SST bias in the vicinity of these regions (Fig. 3). In summary, the differences between CGCM and CRCM5 simulations in winter are not very large, possibly because winter precipitation is dominantly of the large-scale grid-resolved stratiform type that is adequately represented in CGCMs.
In summer, on the other hand, the subgrid convective precipitation has a dominant contribution over land and CGCM simulations underestimate it, which is the most striking feature of summer RDADs, displayed in Fig. 12. The exception is the Southwestern regions (Pacific SW, AZNM) where the MPI-ESM-LR produces larger relative accumulations than observed in the heavy precipitation range. The summer precipitation RDADs of the three CRCM5 simulations in Fig. 12 are in most of the regions quite close to the TRMM observations. The exceptions are the Central, Great Lakes and South regions where the CRCM5-simulated RDADs are slightly shifted towards the lower intensities. Over the Pacific SW, the CRCM-ERA and CRCM-Can do capture the specific shape of the TRMM RDAD but have a considerable shift towards the large intensities. Over all southern and western regions the CRCM-MPI systematically overestimates the average and exhibits a strong rightward shift in RDAD, thus strongly overestimating the relative contribution of heavy precipitation events in the total. Finally, in the North American Monsoon regions, the CRCM-ERA is quite close to the TRMM data in terms of the average, dry days and distribution of relative accumulations, especially in AZNM. In these regions CRCM-Can is also quite good, while the CRCM-MPI exhibits strong wet biases. The large SST biases inherited from the driving MPI-ESM-LR simulation (Fig. 3) coincide with a considerably deteriorated performance of the corresponding CRCM5 simulation, not only in terms of bias in the mean precipitation, but also in terms of the partition of the accumulations over the intensity ranges.
Same as in Fig. 11 but for JJA precipitation
The analysis of the RDADs shows that, at regional and daily temporal scale, both the reanalysis- and CGCM-driven CRCM5 simulations exhibit a quite high skill at partitioning the simulated total precipitation accumulations across the range of intensities. This holds despite the fact that there are considerable biases in the total precipitation over regions and large biases in the frequency of wet and dry days. A similar conclusion was found in Leung et al. (2003) for a reanalysis-driven RCM simulation over the western U.S. This is not the case for CGCM simulations that cannot adequately represent the partition of accumulations in the range of heavy precipitation, especially in summer, when the convective precipitation has a large contribution over land.
In summary, the CRCM5 simulations are satisfactory in reproducing the 2 m temperature and precipitation climatology. The exception is the CRCM-MPI simulation, which has large biases over the western and southern parts of the continent, especially in summer precipitation, which can be attributed to large SST biases in the Pacific Ocean in the MPI-ESM-LR. At the same time, however, the CRCM-MPI performs quite well over central and eastern North America, partly due to the cancelation of the CRCM5 SB and the LLBCE in the CRCM-MPI.
This concludes the discussion of the CRCM5 simulations' skill in reproducing the present climate. We next move to examine the projected climate changes over North America.
Climate projections
Climate projections will be discussed starting with 2 m temperatures. Figure 13 shows the projected changes in the mean 2 m temperatures between periods 2071–2100 and 1981–2010, for DJF (Fig. 13 a–d) and JJA (e–h), in the CRCM5 simulations (left column) and the corresponding driving CGCM simulations (right column). In winter, the CRCM-Can and CanESM2 simulations project large temperature increases, reaching 13 and 16 °C, respectively, over parts of the Arctic Ocean. On the other hand, the CRCM-MPI and MPI-ESM-LR both display quite a bit smaller values, especially over the northern regions where the projected change is limited to 7 °C over the Arctic. All four simulations do agree on a strong south-to-north gradient of the temperature increase over land in winter, with the smallest temperature increase over the US Southeast, by about 1 °C. The patterns of the projected changes in the CRCM5 simulations and their driving CGCM simulations are generally very similar in winter. The exception is the higher-elevation regions in the western parts of the continent where the CRCM5 simulations add to the climate-change signal some fine-scale details that are absent in the CGCMs. For higher elevations, such as the Cascades and the Rocky Mountains in the Pacific NW and British Columbia, the CRCM simulations indicate less warming than the corresponding CGCM simulations. On the other hand, over lower elevations such as the Columbia Basin, the CRCM shows warming of similar magnitude as the CGCM simulations. Salathe et al. (2008) obtained fairly similar results and attributed these warming differences to the fact that the snow-albedo feedback is more realistically represented in RCMs due to better-resolved elevations of the regional topography and mesoscale distribution of snow.
Change in the a–d DJF and e–h JJA average 2 m temperature in the period 2071–2100 compared to 1981–2010, for CRCM-Can (a, e), CanESM2 (b, f), CRCM-MPI (c, g) and MPI-ESM-LR (d, h)
In summer (Fig. 13 e–h), the projected changes vary from 1 °C over the central Arctic to about 2–4 °C over most of North and Central America and locally up to 6 °C over the Canadian Prairies and Pacific NW. The projected summer changes in the CRCM-MPI and MPI-ESM-LR are somewhat smaller than in CRCM-Can and CanESM2 simulations. While the latter two simulations are quite similar, the former two simulations exhibit more differences; the projected changes of summer temperature over the Canadian Archipelago are up to 5 °C in the CRCM-MPI and below 3 °C in MPI-ESM-LR simulation. A disagreement between the two simulations is also present over the Southern Great Plains, where the CRCM-MPI projects a warming smaller by about 1–2 °C than the MPI-ESM-LR.
It is worth noting that the CRCM-Can projected warming in both summer and winter, despite being large, appears not to exceed the 75th percentile of the multi-model distribution when compared to the IPCC AR4 projected temperature changes over different regions under the A1B scenario (Christensen et al. 2007a). When compared to the same reference, the CRCM-MPI projected warming rather lies in the lower percentiles.
We now move to examine the transient temperature change in detail, using the Bukovsky regionalization. The changes in the spatiotemporal variability of 2 m temperatures for the periods 2011–2040, 2041–2070 and 2071–2100, relative to 1981–2010, are summarized in Fig. 14 for winter (panels a–c) and summer (d–f), where we show the change in the mean 2 m temperatures for the three periods (a and d), as well as the 5th (b, e) and 95th (c, f) percentiles of daily-average temperatures. As before, the spatiotemporal distributions are obtained by treating each element in a time series of daily averages at every grid point within a region as an individual datum. In Fig. 14 squares represent the CRCM5 simulations and diamonds the CGCM simulations.
Change in the mean (a, d), 5th (b, e) and 95th percentiles (c, f) of daily-averaged 2 m temperatures, as a function of region, for a–c DJF and d–f JJA, in 2011–2040 (blue, brown), 2041–2070 (cyan, pink) and 2071–2100 (green, yellow) with respect to 1981–2010; CRCM-Can (blue, cyan, green squares), CanESM2 (blue, cyan, green diamonds), CRCM-MPI (brown, pink, yellow squares), MPI-ESM-LR (brown, pink, yellow diamonds). Legend is split in two parts in order to fit the space
First we consider the regional temperature change in winter (Fig. 14a). When the CanESM2 simulation is considered, it can be seen that the temperature increases from 2 °C in the South to 6 °C in the Arctic Land region in 2071–2100. This change occurs in increments of about 1–2 °C for 2011–2040, then by an additional 1 °C in the southern regions and quite sharply, by 3–4 °C, in the northern regions for 2041–2070 and, eventually, by less than 1 °C in all regions for 2071–2100. This pattern reflects the RCP 4.5 total radiative forcing, which increases until around 2070 and then stabilizes. The temperature change in the MPI-ESM-LR for 2011–2040 is similar to or slightly smaller than that in the CanESM2 simulation. However, for the other two periods the MPI-ESM-LR temperature generally increases by smaller increments, eventually giving rise to considerably smaller total projected changes for 2071–2100; interestingly, in the South, East and Great Lakes regions, the MPI-ESM-LR increments for 2071–2100 are larger than those for 2041–2070. Both of the CRCM5 simulations closely follow the projected changes in the corresponding CGCM simulations.
Next, we move to the 5th percentile in winter (Fig. 14b), where we can see that in all simulations the projected temperature changes are considerably larger than for the mean (Fig. 14a); the exception is the southernmost regions, where the change in the 5th percentile is similar to that in the mean. The maximum temperature increase in the 5th percentile is shifted southward: while in the case of the mean the maximum is in the Arctic Land region, in the case of the 5th percentile it is in the Boreal region, with somewhat smaller values in Arctic Land, Pacific NW, Mt West, Central, Great Lakes and East regions. It is also worth noting that, along with the higher climate sensitivity in the 5th percentile than in the mean, the differences among the simulations in projected changes are also larger in the 5th percentile than in the mean, including the differences between the CRCM5 and their corresponding CGCM simulations. For example, in the Pacific NW region the projected warming in the CRCM-Can and CRCM-MPI simulations for 2071–2100 is about 2 °C smaller than in their driving CGCM simulations.
When the 95th percentile in winter daily temperatures is considered (Fig. 14c), it can be seen that the projected change is generally smaller by a few degrees than in the 5th percentile and also smaller than in the mean. A smaller warming on the higher end (warm days) than warming on the lower end (cold days) by 1–2 °C was also found in Leung et al. (2004) in winter daily temperature distributions over the western US. The exception to this rule is the southern most regions (Desert and South) where the projected change is rather uniform with respect to the three statistics. In the 95th percentile, by the end of the 21st century the temperature increases by less then 4 °C over Arctic Land and less than 3 °C in all other regions in all simulations.
Unlike in winter, for summer regional near-surface temperatures (Fig. 14d–f) we find an almost uniform climate-change signal with respect to the mean, 5th and 95th percentiles. For a given CGCM simulation, the projected change is also quite uniform across the regions. As in winter, the MPI-ESM-LR and CRCM-MPI simulations display smaller projected temperature changes than CanESM2 and CRCM-Can. In the case of CanESM2 the temperature increases mostly by 3–4 °C in 2071–2100, and by 2–3 °C in the case of MPI-ESM-LR. Note also that in most of the cases the CRCM-Can projected changes tend to be smaller than in CanESM2, implying that the CRCM5's own sensitivity to the RCP4.5 radiative forcing is smaller than that of CanESM2. This also holds for winter.
We now consider projected changes in precipitation. Figure 15 displays the projected changes in the mean precipitation between the periods 2071–2100 and 1981–2010, for DJF (panels a–d) and JJA (e–h), in the CRCM5 simulations (left) and the corresponding CGCM simulations (right column). The changes are presented as a percentage of the 1981–2010 seasonal-mean precipitation. In winter, the CRCM-Can and CanESM2 simulations display a strong south-to-north gradient in the projected relative change of precipitation, with an increase of 10–20 % over most of the continent, including the Greater Antilles, and up to 50 % over the Arctic. On the other hand, over Mexico (especially its Pacific coastal regions) a decrease of winter precipitation by 50 % is locally found in these simulations. Although the CRCM-Can and CanESM2 display similar large-scale precipitation change patterns, there are also considerable fine-scale differences, particularly over the complex topography. For example, over the lower-elevation basins of central British Columbia and the U.S. Pacific Northwest, the CRCM-Can indicates an increase of DJF precipitation of locally more than 40 %, while the CanESM2 indicates almost no change. The two simulations also display different trends over southeastern California; in the CRCM-Can the projected decrease of precipitation over the Pacific coast of Mexico extends further north into California than in CanESM2. The CRCM-MPI and MPI-ESM-LR simulations display a smaller south-north gradient of winter relative precipitation change; over most of North America the signal is much more uniform than in the CRCM-Can and CanESM2. In these two simulations, the precipitation increase over the Arctic and the decrease over Central America are, in general, both of considerably smaller magnitude than in the CRCM-Can and CanESM2, being confined to the range of 0–20 %, including the Arctic, although locally, such as over Alaska, the CRCM-MPI projected increase may be larger. Over the central US all four simulations produce a small climate-change signal of similar magnitude in winter. These results appear to be well inside the range of the AR4 CGCMs for 2080–2099 under the A1B scenario (Christensen et al. 2007a). It is also worth noting that over the Columbia Basin in the Pacific Northwest the CRCM-MPI indicates an increase of 20 % while the MPI-ESM-LR shows no trend in this region. Interestingly, similar patterns of differences were also noted above when the CRCM-Can and CanESM2 were compared in the Pacific NW region, which is likely due to better-resolved orographic effects in the CRCM5.
Same as in Fig. 13 but for precipitation
In summer (Fig. 15 e–h), there is much less agreement between the CanESM2 and MPI-ESM-LR and the corresponding CRCM5 simulations on precipitation change. CRCM-Can and CanESM2 project an increase of summer precipitation by about 20 % over the Arctic and 10 % over the US southeast. Both the CRCM-Can and CanESM2 projections display a relative increase of summer precipitation over parts of the Rocky Mountains and parts of California, locally as large as 80 %. However, these areas receive small amounts of precipitation in summer, so this increase is not large in absolute terms. The two simulations also have in common the reduction of precipitation over the northern Pacific Coast, the Pacific Coast of Mexico and the Greater Antilles. The most important difference between the CRCM-Can and its driving simulation CanESM2 in summer is the reduction of precipitation over the Prairies by 10–40 % in the CRCM-Can. The two models use different deep-convection parameterizations; the CanESM2 uses a mass flux scheme (Zhang and McFarlane 1995) to model the precipitation associated with deep convection, while the Kain-Fritsch scheme is used in the CRCM-Can. Using the third-generation CRCM, Plummer et al. (2006) examined the difference in projected precipitation change due to a change in the physics package and obtained rather small differences in projections over the Northern Plains. However, the two physics packages used the same deep-convection parameterization.
The CRCM-MPI and MPI-ESM-LR projected changes in summer precipitation have generally smaller magnitude than those found in the CanESM2 and CRCM-Can simulations. There is an increase of 10–30 % in summer over the Arctic and some drying in the southern portions of the domain. The CRCM-MPI also produces a highly spatially variable but mainly increasing summer precipitation over the California coastal regions, locally as large as 80 %. This feature appears to be restricted to the ocean in the MPI-ESM-LR, not reaching the coastal regions of California. Recall that in this region the MPI-ESM-LR, and also the CRCM-MPI simulation, present large biases in the present-climate 2 m temperature and precipitation, as well as a large warm SST bias over the subtropical Pacific. It is thus not surprising that the projected changes also differ substantially. We will approach this issue in more detail when we consider the projected changes of daily-mean precipitation distributions over these regions.
Figure 16 summarizes by Bukovsky's regions the projected transient change of average precipitation for 2011–2040, 2041–2070 and 2071–2100 with respect to 1981–2010. In winter (Fig. 16a), a monotonic precipitation increase for the three periods is projected in Arctic Land and Boreal region, eventually giving rise to changes of almost 30 % in CanESM2 and 15–20 % in MPI-ESM-LR for 2071–2100. The CRCM5 tends to somewhat increase the trends of the CanESM2 simulation in these regions. In Central, Great Lakes and East regions, the CanESM2 and MPI-ESM-LR projected changes are smaller, reaching about 15 % in 2071–2100. In these regions the CanESM2 and MPI-ESM-LR closely agree in projected changes, but both the CRCM-Can and CRCM-MPI simulations tend to somewhat reduce the changes projected by their driving CGCMs by about 5 %. Over South, Pacific NW and AZNM regions there is a slight projected increase of precipitation but, for example, in the Pacific NW the MPI-ESM-LR and CRCM-MPI projected changes are the largest for 2011–2040 and afterwards the precipitation decreases. The signal might be too small to be distinguished from possible residuals of natural variability in the 30-year mean. Over Pacific SW and Mt West regions the projected changes of precipitation are positive, reaching 20 % for 2071–2100 in the CanESM2 and 10–15 % in the MPI-ESM-LR, while the CRCM5 follows these values closely. Finally, over Desert and especially CORE region, the CanESM2 and CRCM-Can simulations display a relatively strong drying trend, while the MPI-ESM-LR and CRCM-MPI show no significant changes.
The change in the spatiotemporal average precipitation, as a function of region, for a DJF and b JJA, in 2011–2040 (blue, brown), 2041–2070 (cyan, pink) and 2071–2100 (green, yellow) with respect to 1981–2010; CRCM-Can (blue, cyan, green squares), CanESM2 (blue, cyan, green diamonds), CRCM-MPI (brown, pink, yellow squares), MPI-ESM-LR (brown, pink, yellow diamonds). The top and bottom rows show the same, except that they display results for different regions
In summer (Fig. 16b), the average precipitation change signal is generally quite small, there is disagreement among the simulations on both the magnitude and the sign of the signal, and the projected changes are not monotonic with respect to the three 30-year slices. It is likely that the differences among the simulations, as well as those among the 30-year slices, are more a result of internal model dynamics (interdecadal variability) than of the GHG forcing. The lack of statistical significance has been noted in studies of projected summer precipitation changes (e.g., Duffy et al. 2006). Only over the Arctic Land region do all simulations agree on an increase of average precipitation by about 10 %. On the other hand, in the Mt West region the MPI-ESM-LR simulation projects first an increase in precipitation by 10 % in 2011–2040 and then a precipitation decrease. The CRCM-MPI simulation follows this pattern. The CanESM2 however projects a gradual increase, eventually reaching 20 % in 2071–2100, while the CRCM-Can simulation projects almost no change for any of the three periods. Note in Fig. 15 that in the case of CRCM-MPI and MPI-ESM-LR the regions of projected precipitation decrease (south) and increase (north) appear to be rather well separated. It is noted in Christensen et al. (2007a) that the separating line between the projected precipitation increase and decrease moves north with increasing GHG concentrations. For regions near this separating line, such as the Mt West region, precipitation would first increase while the region is still to the north of the line, and then precipitation would decrease as the line moves northward.
In summary, the CRCM5-projected average precipitation changes exhibit quite large and spatially variable regional deviations from the corresponding changes in the driving CGCM simulations. The presence of considerable deviations of the CRCM5-projected changes from those in the corresponding CGCM simulations should be interpreted as the CRCM5's potential to add value to the driving CGCM simulations, due to the higher-resolution representation of the land-surface forcing and of the atmospheric dynamics and physics in the CRCM5. It is not surprising that the deviations in projected precipitation changes are larger in summer since, as we saw in Fig. 12, the temporal variability of summer daily precipitation is much more realistically represented in the CRCM5 simulations.
In order to complete the discussion of climate projections, we now examine the projected change in the spatiotemporal distribution of daily-average precipitation over the Bukovsky regions. The spatiotemporal distributions are obtained by pooling the 30-year daily precipitation time series at each grid point within a region into a single large dataset. The change in the distribution is quantified as the change in the RDAD, discussed in Sect. 6, defined as follows:
$$ \delta P_{i} = \frac{H_{i}^{(f)} - H_{i}^{(p)}}{\sum_{i} H_{i}^{(p)}} , $$
where \( H_{i}^{(p)} \) and \( H_{i}^{(f)} \) are the total accumulations in the intensity bin \( i \) over a region, in the periods 1981–2010 and 2071–2100, respectively. Note that upon summing the relative accumulations \( \delta P_{i} \) over all bins \( i \), we obtain the relative change in the spatiotemporal average precipitation over a region, which was shown in Fig. 16. In other words, \( \delta P_{i} \) partitions the projected relative change in the regional time-average mean precipitation into the contribution of every intensity range. Also note that, in principle, the changes in individual intensity bins may be large in magnitude, but if they have opposite signs, they may cancel in the process of summing over all intensity bins, giving rise to a negligible change in the mean.
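As a quick numerical illustration of Eq. (1) — the per-bin accumulations below are purely hypothetical values invented for the example, not output from the simulations — the per-bin changes and their sum can be computed as follows:

```python
import numpy as np

def rdad_change(accum_present, accum_future):
    """Projected change in the RDAD (Eq. 1): per-bin change in accumulation,
    normalized by the present-climate total accumulation, so that the sum
    over bins equals the relative change in the regional mean precipitation.

    accum_present, accum_future : total accumulation per intensity bin (mm)
    """
    return (accum_future - accum_present) / accum_present.sum()

# Toy example: a hypothetical future shift of accumulation towards heavier bins.
h_present = np.array([5., 40., 80., 120., 90., 40., 10.])
h_future = np.array([4., 35., 78., 125., 100., 55., 18.])
dP = rdad_change(h_present, h_future)
print(np.round(100 * dP, 1))                    # per-bin change in % of present total
print(f"mean change: {100 * dP.sum():.1f} %")   # equals relative change in the mean
```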
Figure 17 displays the projected regional RDAD changes for the 2071–2100 interval in winter. Except in the Desert, AZNM and CORE regions, all simulations generally agree in that the projected mean change is mostly due to increased accumulations in the range of moderate to heavy precipitation. The increase is the most uniform with respect to intensity ranges in the Arctic Land region. Moving south, the higher intensities tend to have a more important relative contribution. In the Pacific NW, Pacific SW, South and East regions, there is almost no change in the range from 1 to 16 mm/day, while in the range 32–128 mm/day the accumulations increase by 5–10 %; this is of course a consequence of a projected increase in the frequency of events in this range. At the same time, the change in the relative frequency of dry days is not projected to be very large; there is a slight decrease of about 5 % in the frequency of dry days in the northern regions and an increase by the same amount in the southernmost regions. No change in dry days was found in the midlatitude regions, implying that the increase in the winter mean precipitation in these regions is due to an increase of the frequency of heavy precipitation events at the expense of the frequency of light precipitation events. When the CRCM simulations are compared to the CGCM simulations in the Pacific NW and Pacific SW regions, the CRCM appears to push the corresponding CGCM-simulated RDAD changes towards higher bins. Likewise, in the South and East regions, the CRCM reduces the change in the lower RDAD bins. This might be a result of better-resolved orographic effects and precipitation mesoscale systems such as the so-called "atmospheric rivers" described by Dettinger et al. (2012).
Projected change in regional relative daily accumulation distributions (RDAD; Eq. 1) for DJF 2071–2100 with respect to 1981–2010 in percentage; CRCM-Can (cyan-full), CanESM2 (cyan-dashed), CRCM-MPI (pink-full) and MPI-ESM-LR simulations (pink-dashed line)
Figure 18 displays the same for summer. The inspection of the printed values shows that the relative frequency of dry days is generally not projected to change significantly in summer. As for the distributions themselves, the only region where the four simulations mostly agree on projected changes is the Arctic Land. In general, in the range of heavy precipitation the CRCM5 projected changes tend to be larger, as the CGCMs do not adequately represent distributions in this range. The projected changes in distributions are relatively small and the CGCM and CRCM5 simulations tend to disagree. The exception is the regions in the western part of the continent (Pacific NW and SW, Mt West and AZNM) where the relative changes are of a larger magnitude but they tend to have the opposite sign in the CanESM2 and CRCM-Can with respect to the MPI-ESM-LR and CRCM-MPI, indicating that the distributions are controlled by the driving CGCMs.
Same as in Fig. 17 but for JJA daily precipitation
In order to further examine this issue, we display in Fig. 19 the present-climate summer-average sea-level pressure (SLP) and the projected summer-average changes in the four simulations for the period 2071–2100. When the CanESM2 and CRCM-Can projected changes (Fig. 19b, c) are compared with the present-climate SLP patterns (Fig. 19a) over the Pacific Ocean and the West Coast, it can be seen that the projected changes indicate a weakening and northward shift of the Pacific subtropical high pressure, and a pressure increase over the US Southwest. Consequently, this implies a weakening of the subsidence off the West Coast, a decreased zonal pressure gradient over the coastal regions of California and a reduction of the flow of dry air masses from the Pacific high, eventually allowing for the penetration of the moist air masses from the tropical Pacific farther north. Accordingly, the CRCM-Can and CanESM2 projected changes in summer precipitation over the Pacific SW, AZNM and Mt West regions are positive. However, the projected changes in the RDADs for these regions (Fig. 18) show that the mean increases only due to higher accumulations at lower intensities. At the same time, the CRCM-Can and CanESM2 projected SLP changes over the Pacific Ocean imply a northward shift in the storm tracks, which results in the projected decrease in summer precipitation over the Pacific NW region in these simulations (Fig. 18). The projected summer SLP changes in the CRCM-MPI and MPI-ESM-LR (Fig. 19d, e) are quite different: the SLP is projected to increase over the US southwest but also over the adjacent regions of the Pacific Ocean and to decrease over Alaska. Thus, in these simulations the subtropical anticyclone strengthens over the US Southwest and the Pacific coast of northern Mexico, implying more subsidence and resulting in a projected decrease in summer precipitation over these regions (Fig. 18; Desert, AZNM and CORE regions). At the same time, the patterns of projected change of SLP over the northern Pacific Ocean in these simulations imply an intensification of the westerly flow over the Pacific NW region, resulting in an increase in summer precipitation in this region. This increase is projected to be mostly contributed by accumulations in the heavy precipitation range, above 16 and up to 256 mm/day (Fig. 18; Pacific NW).
ERAINT 1989–2008 JJA-average sea-level pressure (a) and projected changes for the period 2071–2100 to 1981–2010 in: b CRCM-Can, c CanESM2, d CRCM-MPI and e MPI-ESM-LR
The purpose of this study was to investigate the present climate and projected climate change as simulated by the CRCM5 in order to contribute to the CORDEX project. Three CRCM5 simulations were performed: a control reanalysis-driven simulation for the period 1959–2008 and two CGCM-driven transient climate-change simulations for the period 1950–2100 forced with CanESM2 and MPI-ESM-LR; the present-day portions of these simulations used historical GHG and aerosol concentrations and the future climate simulations were based on the RCP 4.5 radiative forcing scenario. The reanalysis-driven simulation was used to quantify the CRCM5 structural biases, when it is driven with nearly perfect atmospheric lateral boundary and ocean surface conditions. In addition, this simulation was used to separate the structural biases of the CRCM5 from those transmitted from the driving CGCM simulations.
At continental scale, the CRCM5 simulations reproduce relatively well the near-surface temperature and precipitation over North America in the current climate. Temperature biases are mainly limited to ±2 °C, with the exception of a stronger cold bias during the winter season over the western and southern portions of the continent. This bias is also found in the reanalysis-driven run, implying that it originates from the CRCM5's own structural errors. Precipitation biases are relatively small over land, being mainly confined to within about 25 % of the observed values, which is not much larger than the observational uncertainties. However, over coastal regions, especially over California and northern Mexico, larger precipitation biases are found, coinciding with the CGCMs' SST biases. The reanalysis-driven CRCM5 simulation generally performs better, but there are exceptions to this rule due to the possible cancellation of CRCM5 structural biases and those transmitted from the CGCMs; this happens in the MPI-ESM-LR-driven simulation over the central and eastern parts of North America in summer.
The examination of annual cycles of monthly-average regional-average precipitation, based on the regionalization proposed by Bukovsky (2011), shows, upon neglecting some systematic biases, that the reanalysis-driven simulation is quite close to the observations, both in the most general features of the precipitation annual cycle and in reproducing finer details such as, for example over the Boreal region, the small-scale driven convective precipitation maximum in June and the large-scale driven stratiform precipitation maximum in September. The CGCM-driven simulations are somewhat less skilful at reproducing finer details in annual precipitation patterns and have larger biases, especially in the coastal regions. The timing of the summer precipitation maximum related to the North American monsoon in the US southwest and northern Mexico is correctly simulated in all CRCM5 simulations, although the model has some difficulties in reproducing the correct absolute amounts. In most of the regions, the CGCM-driven CRCM5 simulations more skilfully reproduce the observed annual patterns than the driving CGCMs, although the MPI-ESM-LR also has generally very good performance.
The 5th and 95th percentile of CRCM5-simulated daily temperature distributions over Bukovsky's regions are also rather well reproduced, with biases not considerably larger than in the case of multiannual seasonal means. In addition, in most of the cases the biases in the 5th and 95th percentile of CRCM5 temperatures were smaller than those in the driving CGCMs, implying that the variability of daily temperatures is better represented in the CRCM5.
As discussed in Sect. 6, at regional and daily temporal scales both the reanalysis- and CGCM-driven CRCM5 simulations partition the simulated total precipitation accumulations across the range of intensities with a quite high skill, despite considerable biases in the regional total precipitation and in the frequency of wet and dry days; a similar conclusion was reached by Leung et al. (2003) for a reanalysis-driven RCM simulation over the western U.S. The CGCM simulations, by contrast, cannot adequately represent the partition of accumulations in the range of heavy precipitation, especially in summer, when the convective precipitation has a large contribution over land. The difference between CGCM and CRCM5 summer precipitation distributions emphasizes the need for RCM downscaling. Due to their higher resolution, the RCM-simulated precipitation accumulations at daily temporal scale are much more realistic, which is necessary for studying the projected changes in heavy precipitation events.
The projected climate changes were assessed as the difference between the three 30-year statistics for the periods 2011–2040, 2041–2070 and 2071–2100 with respect to 1981–2010. The projected changes in mean temperatures and precipitation fall within the range of the IPCC AR4 projected changes for North America based on the SRES A1B emission scenario. The CRCM5-projected changes are very similar to those obtained in the driving CGCMs, with some fine-scale details added by the CRCM5 due to a higher resolution representation of the topography and land-surface forcing.
For the 2071–2100 winter-average temperature changes, the largest warming of more than 10 °C is found in the CanESM2-driven simulation over the northernmost parts of the domain. The temperature climate-change signal is however much smaller in the simulation driven with the MPI-ESM-LR. In both cases, the projected warming is larger over land than over the ocean and increases with latitude over land, being only 1–2 °C over the southeastern US and much larger over northern Canada. In summer, the south-north warming gradient disappears. In the CanESM2-driven simulation the maximum warming of up to 5 °C is projected over the Northern Plains and the Pacific Northwest, while in the MPI-ESM-LR-driven simulation there is a more uniform warming pattern of about 2–3 °C over land.
All simulations agree in projecting considerably larger warming in the 5th percentile than in the multiannual mean of daily average temperatures in winter, especially over the northern and central regions of the continent. This feature can be related to the fact that on average, the Arctic regions warm up the most. The cold waves over the central parts of the continent in winter are mostly due to the intrusions of the Arctic air masses; these cold waves are likely to become milder due to the large warming in their source region, resulting in a large increase in the temperatures' 5th percentile over the central parts of North America. In addition, the increase of the 95th percentile of winter daily temperatures in the northern parts of the continent is found to be smaller than the increase of the multi-annual mean, which is likely related to the fact that over low latitudes as well as over the Pacific and Atlantic Ocean, the projected mean temperature change in winter is relatively small. Warm periods over the Arctic and subarctic regions in winter are mostly due to the advection of warm air masses originating from lower latitudes and oceans. On the other hand, the projected changes in the 5th and 95th percentiles of summer temperatures are found to closely follow the change in the mean in all simulations.
The projected changes in average precipitation in winter for 2071–2100 are not large; an increase of precipitation of about 0–20 % is projected over most of the continent, except Central America where precipitation is projected to decrease. The CRCM5 and CGCM simulations all agree in this general pattern, although the CRCM5 simulations display important mesoscale differences with respect to their driving CGCMs. The increase of winter precipitation over the western, southern and eastern coastal regions, as well as over the Great Lakes, is found in all simulations to be mainly due to an increase in the frequency of days with heavy precipitation. This might be due to an intensification or an increase in frequency of winter storms, but this topic is beyond the scope of this paper. In summer, the projected precipitation changes are rather small and very uncertain; only over the northernmost regions of the continent do the simulations agree on an increase of precipitation of about 10 %. In other regions, large differences are found between the two CRCM5 simulations, especially over the western half of the continent, where the simulations disagree on both the magnitude and the sign of the projected changes in summer precipitation. The uncertainties in the CGCM-projected changes in the synoptic-scale circulation over the Pacific Ocean, such as the position and strength of the subtropical high pressure, are likely to be the main cause of the large uncertainties in the CRCM5-projected changes in summer precipitation over western North America.
This research was funded by the Canadian Foundation for Climate and Atmospheric Sciences (CFCAS), the Québec's Ministère du Développement Économique, Innovation et Exportation (MDEIE), the Natural Sciences and Engineering Research Council of Canada (NSERC), Hydro-Québec, the Ouranos Consortium on Regional Climatology and Adaptation to Climate Change, the Mathematics of Information Technology and Complex Systems (MITACS) Network of Centres of Excellence, and the Canada Research Chairs programme. The calculations were made possible through the CLUMEQ Consortium, on the Colosse and Guillimin high-performance computing platforms; CLUMEQ is part of the Compute Canada national HPC platform and a member of the Calcul Québec regional HPC platform. The authors thank Mr Georges Huard and Mrs Nadjet Labassi for maintaining an efficient and user-friendly local computing facility. The authors are also grateful to the following collaborators at Environment Canada: Mr Michel Desgagné for his work in developing a nested version of GEM, Dr Diana Verseghy for allowing to use the code of CLASS 3.5, Mr Richard Harvey for helping with CLASS, and specially Dr Bernard Dugas for his unwavering support on developing CRCM5 since the beginning of this work more than a decade ago. This study would not have been possible without the access to valuable data from ERA-Interim, CRU, UDEL, GPCP and TRMM, as well as the outputs from CanESM2 and MPI-ESM-LR models.
Centre ESCER (Étude et Simulation du Climat à l'Échelle Régionale), Montréal, QC, Canada
Leo Šeparović, Adelina Alexandru, René Laprise, Andrey Martynov, Laxmi Sushama, Katja Winger, Kossivi Tete & Michel Valin
Département des Sciences de la Terre et de l'Atmosphère, Université du Québec à Montréal (UQAM), C.P. 8888, Succ. Centre-ville, Montréal, QC, H3C 3P8, Canada
Canada Research Chair in Regional Climate Modelling, UQAM, Montréal, QC, Canada
Laxmi Sushama
Correspondence to Leo Šeparović.
Šeparović, L., Alexandru, A., Laprise, R. et al. Present climate and climate change over North America as simulated by the fifth-generation Canadian regional climate model. Clim Dyn 41, 3167–3201 (2013). https://doi.org/10.1007/s00382-013-1737-5
Crisis hazard assessment for snow-related lahars from an unforeseen new vent eruption: the 2018 eruption of Kusatsu-Shirane volcano, Japan
Kyoko S. Kataoka (ORCID: orcid.org/0000-0003-2266-8485), Kae Tsunematsu (ORCID: orcid.org/0000-0002-1481-6276), Takane Matsumoto, Atsushi Urabe & Katsuhisa Kawashima
Earth, Planets and Space volume 73, Article number: 220 (2021)
Two-thirds of the 111 active volcanoes in Japan are covered with snow for several months during winter and demonstrate high hazard and risk potentials associated with snow-related lahars during and after eruptions. On 23 January 2018, a sudden phreatic eruption occurred at the ski field on Kusatsu-Shirane (Mt. Motoshirane) volcano, Japan. This new vent eruption from the snow-clad pyroclastic cone required forecasting of future snow-related lahars and crisis hazard zonation of downslope areas including Kusatsu town, a popular tourist site for skiing and hot springs. In order to achieve a prompt hazard assessment for snow-related lahars, a multidisciplinary approach was carried out involving characterization of proximal tephra deposits, snow surveys, and numerical lahar flow simulations using the Titan2D model. To determine the input parameters for the flow model, consideration of the snow water equivalent (SWE) immediately after the eruption (on 29 January) and in the post-eruptive period (on 12 March) was essential. In the case of Kusatsu-Shirane volcano during the winter of 2018, linear relationships between altitude and SWE, obtained at different elevations, were used to estimate the snow volume around the new vents. Several scenarios incorporating snow and snowmelt (water), with or without the occurrence of a new eruption, were simulated for the prediction of future lahars. Three lahar scenarios were simulated: (A) rain-on-snow triggered, (B) ice/snow slurry, and (C) full snowmelt triggered by a new eruption; the simulations indicated the flow paths (inundation areas) and travel distances, which were useful for lahar hazard zonation and identification of potential high-risk areas. Since the input parameters required for the Titan2D flow model can be determined relatively easily, the model was suitable for the 2018 eruption at Motoshirane, where historical and geological lahar records are not available for calibration. The procedure used in this study will enable rapid lahar prediction and hazard zonation at snow-clad volcanoes. Further work is needed to simulate a cohesive-type flow, which is suggested by the primary deposits containing large amounts of clay minerals but cannot be represented in the Titan2D flow model.
Two-thirds of the 111 active volcanoes in Japan are seasonally snow-covered during winter (occasionally up to 6 months) and demonstrate high hazard and risk potentials associated with snow and snowmelt during and after an eruption (Tada and Tsuya 1927; Waythomas 2014; Kataoka et al. 2018). In Japan, a snowmelt lahar, triggered by a hydrothermal eruption and associated flank collapse at Tokachi-dake, Hokkaido in 1924 caused 144 fatalities (Tada and Tsuya 1927; Uesawa 2014). Based on lessons learned from this event and from a snowmelt lahar tragedy at Nevado del Ruiz, Columbia in 1985 (Pierson et al. 1990), most snow-related lahar risk assessments and hazard maps in Japan are based on a single scenario in which snowmelt is directly triggered by a magmatic and pyroclastic eruption, such as that at Nevado del Ruiz. However, the types and triggering mechanisms for snow/ice-related mass flows vary and include volcanic mixed avalanches triggered by pyroclastic flows sweeping across snow (Pierson and Janda 1994), and ice slurry lahars initiated by phreatic eruptions (Kilgour et al. 2010), as reported worldwide. The trigger, size, and type of mixed flows (including lahars) of volcanic sediments and rocks, snow, and/or water can vary with the timing of the event, whether it occurs during the rainy season, snow season, or snowmelt season. The mechanism by which contrasting lahars occur at seasonally snow-clad volcanoes has been evaluated by a study conducted on lahar deposits and via river monitoring, after the September 2014 phreatic eruption at Ontake volcano, central Japan (Kataoka et al. 2018). At Ontake, a rain-triggered lahar occurred on 5 October 2014, causing the formation of clay-rich debris flow deposits (clay content of 10–20 wt%). In contrast, a lahar flow that occurred in April 2015 under rain-on-snow (hereafter ROS: Kattelmann 1997; Sui and Koehler 2001) conditions during snowmelt season was reportedly more water-rich and erosive, resulting in the formation of a clay-poor hyperconcentrated flow deposit (Kataoka et al. 2018). Such different flow types (e.g., cohesive or non-cohesive; debris flow or hyperconcentrated flow) may exhibit different characteristic travel distances, travel times, inundation areas, and flow transformations. Because lahars at snow-clad volcanoes can have these many variations, hazard assessment and zonation of lahars should be achieved by considering several lahar scenarios for different (i.e., snow, snowmelt, and rainy) seasons.
On 23 January 2018, a sudden phreatic eruption occurred at the ski field on Kusatsu-Shirane (Mt. Motoshirane) volcano in Honshu, Japan (Ogawa et al. 2018). The eruption originated from new vents located near the Kagamiike-kita cone at Motoshirane (Figs. 1 and 2), which had been considered dormant (Terada 2018; Ishizaki et al. 2020). Therefore, no volcanic hazard maps (including lahar hazards) based on an eruption from Motoshirane had been published prior to the 2018 eruption. Previous geophysical and geochemical studies had focused intensively on another peak (Mt. Shirane) of Kusatsu-Shirane volcano, around the Yugama crater in Fig. 1 (e.g., Nurhasan et al. 2006; Terada 2018; Ohba et al. 2019; Tseng et al. 2020). Kusatsu, a town located on the downslopes of the volcano, is extremely popular for sightseeing, skiing, and onsen (hot spring bathing), which brings people close to a volcanic hazard-prone area. Because volcanic risk has remained elevated since the January 2018 eruption, the hazards associated with future eruptions and with lahars during both the snow and snowmelt seasons have become major concerns for the safety of the town, local residents, and communities.
Oblique Google Earth® view of Kusatsu-Shirane volcano, including Mts. Motoshirane (middle) and Shirane (right), looking toward the eastern side of the volcanic edifice and the town of Kusatsu (downslope). Red circles indicate high-risk areas in the 2018 eruption crisis identified by the lahar flow simulations in this study
The river system east of Kusatsu-Shirane (Mts. Shirane and Motoshirane) volcano. a Key locations of sampling, observation, and monitoring. The topographic map is based on open-source data provided by the Geospatial Information Authority of Japan (GSI). The tephra distribution and isopleths, the dark gray-colored areas, and the limit of the gray-colored areas indicating ash-covered surfaces are based on a preliminary report by the Geological Survey of Japan (released on 26 January 2018) and the Joint Research Team for ash fall in the Kusatsu-Shirane 2018 eruption (released on 6 February 2018). New vent locations after GSI data (https://www.gsi.go.jp/BOUSAI/kusatsushirane-index.html). b Schematic longitudinal profiles of the Furiko-zawa and Shimizu-sawa creeks and the Yazawa-gawa River
Most lahar hazard maps reported worldwide are background (long-term) maps prepared before eruption-associated unrest. Lahar hazards on glaciated stratovolcanoes in North and South America have been well researched using historical and geological evidence (e.g., Mothes 1992; Pierson 1999; Delgado-Granados et al. 2015). In contrast, mapping of lahar hazards associated with sudden eruptions of seasonally snow-covered volcanoes such as Motoshirane is extremely rare.
Titan2D, a numerical flow simulation tool (Patra et al. 2005; Pitman and Le 2005), has been used mostly for hindcasting past lahar events (e.g., Williams et al. 2008; Córdoba et al. 2015). It has also been used for forecasting, prior to the 2007 Crater Lake outburst lahar at Ruapehu, New Zealand (Procter et al. 2012), and during the 2012 Tongariro eruption crisis in New Zealand, for which significantly less historical or geological data were available (Leonard et al. 2014). The present study applied Titan2D to lahar hazard forecasting at Motoshirane, where historical and geological records of past lahars are lacking. Snow-related lahar mass flows were simulated by considering the implications of snow conditions and seasonality. To constrain the parameter values for the flow simulations, we characterized primary tephra deposits, which can be a source material for future lahars, and conducted snow surveys to identify snow conditions and accumulations. A scenario-based hazard zonation considered potential lahar triggers, such as the type of future eruptions, and anticipated rain, snow, and snowmelt conditions. The process described here for defining scenarios and estimating parameter values will be helpful for forecasting snow-related lahars, without substantial reference geological and historical lahar data, during a volcanic crisis caused by an unforeseen new vent eruption on a seasonally snow-clad volcano.
Regional setting of Kusatsu-Shirane volcano and the 2018 phreatic eruption
Kusatsu-Shirane volcano (2160 m above sea level (a.s.l.) at Mt. Shirane, and 2171 m a.s.l. at Mt. Motoshirane) is located on the border of Nagano and Gunma prefectures on Honshu Island. The summit of the volcano is predominantly composed of two clusters of young pyroclastic cones and craters, the Shirane group and the Motoshirane group (Terada 2018; Ishizaki et al. 2020; Figs. 1 and 2), some of which are occupied by crater lakes. Greater active volcanism has been observed in the Shirane group with at least 10 phreatic eruptions reported during the twentieth century. Motoshirane was dormant until the phreatic eruption in 2018. Geological records of past volcanic activity at Motoshirane suggest that the magmatic eruptions that led to the formation of the Kagamiike-kita cone occurred approximately 1500 years ago, while the Kagamiike cone was formed approximately 4800 cal yr BP (Ishizaki et al. 2020).
On 23 January 2018 (10:02 a.m. Japanese Standard Time), a phreatic eruption suddenly occurred at vents near the Kagamiike-kita cone (ca. 2050 m a.s.l.), part of Mt. Motoshirane. Tremors and tilting were recognized immediately before and after the eruption (Japan Meteorological Agency 2018; Terada et al. 2021; Yamada et al. 2021). Several new vents were opened, ejecting tephra and ballistic projectiles estimated to total between 30,000 and 50,000 metric tons (The Joint Research Team for ash fall in Kusatsu-Shirane 2018 eruption 2018; Kametani et al. 2021). Fumaroles were observed until the end of February at vents located north of the Kagamiike-kita crater.
Proximal tephra, including ballistic projectiles, was reported as the direct cause of one fatality and of injuries to 11 people who were skiing or riding cable cars at the Kusatsu-Kokusai (now renamed Kusatsu-Onsen) ski field (Fire and Disaster Management Agency 2018). The medial to distal tephra distribution was investigated by several research groups immediately after the eruption (The Joint Research Team for ash fall in Kusatsu-Shirane 2018 eruption 2018; Kametani et al. 2021). Airborne and ground surveys indicated that the tephra was distributed to the northeast of the vents (Fig. 2).
Creeks and rivers where potential lahars could occur after the 2018 eruption were identified in this study (Figs. 1 and 2). The head of Furiko-zawa Creek is situated on the northern slope of the Kagamiike-kita cone. This creek runs through a valley formed at the edge of a lobe of Furikozawa lava (Hayakawa 1983; Uto et al. 1983) and was the main component of a ski course. Shimizu-sawa Creek, a part of which was also used as a ski course, begins about 500 m east of the Kagamiike-kita cone, flows in the ENE direction, and joins Furiko-zawa Creek in the middle of the ski field. After the confluence, Furiko-zawa Creek flows down the lower ski slope and finally joins the Yazawa-gawa River, the upper catchment of which is located on the eastern slope of the Yugama crater of Mt. Shirane (Figs. 1 and 2). An existing lahar hazard map assumes lahar generation from Yugama as an event with a 1-in-200-year probability; this map was prepared before the 2018 eruption, when lahars originating from an eruption at Motoshirane were not anticipated.
Glaciers are absent, but the Kusatsu-Shirane volcanic region is seasonally snow-clad from November to May. The climatological normals for 1981–2010 over three winter months (December, January, and February) at the Kusatsu weather station (1223 m a.s.l., in center of Kusatsu town) operated by Japan Meteorological Agency showed a mean air temperature of − 3.1 °C and precipitation of 183.5 mm. For the three spring months (March, April, and May) mean temperature is 5.5 °C and rainfall is 346.3 mm. During the rainy season from mid-June to mid-July, this station receives, on average, 370.8 mm of rainfall. Before the 2018 eruption, the maximum daily precipitation recorded at the Kusatsu weather station was 222 mm in 1982; however, it was surpassed by the record of 250 mm in 2019.
Procedure and methods for lahar hazard assessment
A crisis (short-term) hazard map for potential lahar hazards and risks after the 2018 eruption was generated using a stepwise procedure (Fig. 3). As the eruption occurred in the middle of the snow season, consideration of various conditions of snowfall and snowmelt, which can affect lahar generation and flow behavior, was essential (Kilgour et al. 2010; Córdoba et al. 2015; Kataoka et al. 2018). The approach (Fig. 3) consists of (1) characterization of eruption deposits (petrography, grain size, clay mineralogy, and physical properties); (2) meteorological observations and snow surveys, and (3) flow simulations using actual values for parameters obtained via (1) and (2). Additionally, (4) real-time river monitoring was conducted for the hazard-potential Furiko-zawa Creek and Yazawa-gawa River (Fig. 2) to obtain data on flow features and to estimate the travel times in the event when a lahar is generated during the post-eruption period (Appendix 1).
Approach and methodology taken for lahar hazards and risk assessment in the 2018 Kusatsu-Shirane eruption crisis
Characteristics of the 2018 eruption deposits
Proximal tephra deposits are considered the most important information source for prediction of lahar flow type. The primary tephra deposits were sampled on 24 January 2018 (one day after the 23 January eruption) from no. 29 cable car at the Kusatsu-Kokusai ski field. The cable car was operational and was located ca. 440 m NNE of the new vents during the eruption (Location A; Fig. 2); it was damaged by ballistic ejecta (Fig. 4a). Owing to the proximity of the car to the vents, it could be ascertained whether the transport process of tephra occurred by fall, pyroclastic density current, or both. The tephra deposits that covered the seats in the car and clung to its window rims (Fig. 4b) were collected, and the following analyses were performed.
Photographs of a the cable car (no. 29) damaged by ballistic ejecta, and b primary tephra deposits covering the seat in the car
Petrography, grain size, clay mineralogy, and physical property of the eruption deposits
For petrography, the tephra samples were observed under binocular and petrographic microscopes. The samples were then subjected to washing over a sieve with a 63-µm mesh size to remove the mud fractions. Grain-size analysis was performed as follows: the tephra samples were dried, weighed, soaked in distilled water, and then wet-sieved. Grains larger than 63 μm (sand and gravel) were sieved on a phi scale, whereas grains smaller than 63 μm (i.e., silt and clay) were analyzed using a laser grain-size analyzer (Malvern Mastersizer 3000) at Niigata University. For the particle analyzer, samples were ultrasonically stirred for 2 min, and five randomly collected portions of the sample were measured five times. The duration of each measurement was set at 15 s. The mean values of all measurement were considered for grain-size distribution.
The gray-colored eruption deposits observed in no. 29 cable car were 7–9 mm thick and composed of armored lapilli-like aggregates (i.e., individual grains coated with mud) up to a few millimeters in size (Fig. 5a). Pebble- to cobble-sized volcanic rocks were scattered on the floor of the car. The deposits mainly consist of white hydrothermally altered rock fragments, quartz, feldspar, and extremely fine pyrite grains (Fig. 5b). Glassy rock fragments are rarely found. Visually, although the deposits seemed to be moderately sorted, grain-size distributions (Fig. 6) indicate poor sorting; however, this is probably attributable to the mud content. The mud population (< 0.063 mm) occupies 42–48 wt% of the deposits and 45–50 wt% of finer fractions (< 2 mm). Clay-sized (< 4 µm) particles, mostly derived from hydrothermally altered materials are ~ 10 wt%.
Photomicrograph (binocular) of primary tephra (KSs-02); a aggregates of mineral grains and lithic fragments coated by mud, b washed samples (mud fraction removed) consisting of mineral grains and lithic (some hydrothermally altered) fragments. Inset figure shows fine grained pyrite coherent with a white-colored lithic fragment
Grain-size distribution of tephra samples. The KSw-01 sample was deposited on the window rim outside the no 29 cable car, and the KSs-02 covered the seat
X-ray diffraction (XRD) analysis (Rigaku Ultima IV at Niigata University) was performed to understand the mineral composition and to determine the clay mineralogy. After separation by hydraulic settling and centrifugation-based separation, bulk and fine fraction (clay mineral) analyses were conducted. For clay minerals, analysis was performed as follows: (1) orientation, (2) treatment with ethylene glycol, (3) heating at 400 °C, and (4) heating at 550 °C for identification of the minerals with overlapping of specific cell parameters. The measurement conditions included exposure to CuKα radiation, 40 kV and 40 mA, and a scanning speed of 2°/min.
The mineral assemblage, identified via XRD analysis, consists of quartz, plagioclase, cristobalite, pyroxene, pyrite, alunite, pyrophyllite, illite, kaolin group minerals (7 Å), and chlorite (Fig. 7). The predominance of quartz was recognized by its peak intensity.
XRD spectrum of tephra sample (KSs-04) from no. 29 cable car, showing bulk sample and clay matrix data (inset)
For the numerical flow simulations (described in later sections), the internal friction angle of the flows is necessary. For lahar flow simulations, the angle of repose of deposited materials was used as a surrogate for the internal friction angle (e.g., Procter et al. 2010). In this study, the angle of repose (mean of four measurements) of tephra deposits, snow grains (1–2 mm in size), and a mixture of tephra with snow was measured using a protractor in a cold room (below 0 °C) at the Research Institute for Natural Hazards, Niigata University. The angles of repose for dried tephra and snow grains were 33.6° and 39.3°, respectively. The angle of repose for a mixture of ca. 20 wt% snow and 80 wt% tephra was 36.3°.
Snow survey
The snow water equivalent (SWE: amount of the liquid water that would be released upon complete melting of the snowpack) around the new vents was estimated, based on the relationship between altitude and SWE at various sites on the slopes of Kusatsu-Shirane volcano, because in situ measurements of SWE around the vents immediately after the eruption were not possible. Field measurements of SWE were conducted at four sites (from 1059 to 1512 m a.s.l.) on the eastern side and at four sites (from 1150 to 1977 m a.s.l.) on the western side of the volcano on January 29, and again at five selected sites among the above-mentioned sites on March 12. A snow sampler (a metal tube with a cutter) was used to obtain the snow core samples, and the samples were then weighed on an electric balance.
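For reference, SWE follows directly from the mass of a full-depth core and the cross-sectional area of the sampler (1 kg m−2 of snow corresponds to 1 mm of water equivalent). The short Python sketch below illustrates this conversion; the tube diameter and core mass are hypothetical example values, not measurements from this survey.

```python
import math

def swe_from_core(core_mass_kg, tube_diameter_m):
    """SWE in mm water equivalent from a full-depth snow core:
    mass per unit area in kg/m^2 equals mm of water equivalent."""
    tube_area_m2 = math.pi * (tube_diameter_m / 2.0) ** 2
    return core_mass_kg / tube_area_m2

# Hypothetical example: a 3.2-kg core taken with a 70-mm diameter sampler
print(round(swe_from_core(3.2, 0.07)))  # ~830 mm w.e.
```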
SWE assessment can help provide values for the input parameter of thickness (i.e., volume) of snow and snowmelt for numerical simulations. In the case of Kusatsu-Shirane volcano during the winter of 2018 (Fig. 8), the relationships between altitude H (m) and SWE (mm) on January 29 and March 12 can be described as follows:
Snow water equivalent on 29 January and 12 March 2018 showing accumulation above tephra layer
January 29: SWE = 0.489 H – 481 (r2 = 0.92),
March 12: SWE = 0.735 H – 778 (r2 = 0.93).
Using these relationships, the SWE values around the new vents (2050 m a.s.l.) at Motoshirane on January 29 and March 12 were estimated as 521 mm and 729 mm, respectively.
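As a quick check, these regressions can be evaluated directly; the small Python sketch below (not part of the original workflow) reproduces the SWE estimates at the vent elevation.

```python
def swe_jan29(altitude_m):
    """SWE (mm) on 29 January 2018 from the altitude regression."""
    return 0.489 * altitude_m - 481.0

def swe_mar12(altitude_m):
    """SWE (mm) on 12 March 2018 from the altitude regression."""
    return 0.735 * altitude_m - 778.0

vent_altitude = 2050.0  # m a.s.l., near the new vents
print(round(swe_jan29(vent_altitude)))  # ~521 mm
print(round(swe_mar12(vent_altitude)))  # ~729 mm
```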
During the snowmelt season, snow pit observations of snow stratigraphy, grain shape and size, density, hardness, and liquid-water content were performed on 10–11 April 2018 along National Highway 292. At Location B (Fig. 2; 1838 m a.s.l.), on the proximal slope NE of Motoshirane, a snowpack of ~ 0.6 m thickness above the 2018 tephra layer and a total snow thickness of 2.75 m were observed (Fig. 9a). A 10-cm-thick ice layer was observed immediately above the 2018 eruption deposits (depicted in dark gray, 2 cm thick), which, in turn, was overlain by a water-saturated snow layer (Fig. 9b). The tephra and ice layers were impermeable, diminishing the infiltration capacity and enhancing the anticipated surface runoff from the snow layer above the impermeable basal boundary when it became saturated with water from melting snow and heavy rain. These observations were made after the initial lahar hazard zonation because of restrictions on visiting high-risk areas; however, the results confirmed the parameter settings in the scenario models for snow-related lahars.
Snowpack with an intercalated 2018 tephra fall deposit on the proximal northern slope near Motoshirane (1838 m a.s.l.) on 10 April 2018, during the snowmelt season. a A 2.75-m-deep snowpack with patchy tephra exposure on the snow surface, and b a 10-cm-thick ice layer developed above the 2018 eruption deposits (dark gray, 2 cm thick), overlain by a water-saturated snow layer. The partly dark-colored zone below the ashfall deposits is due to wet snow (not ash infiltration); it became darker when water flowed from the overlying layer while the pit was being dug
Flow models of Titan2D
Titan2D is a numerical model (Patra et al. 2005) used for studying dry granular flows and is based on the Savage–Hutter model (Savage and Hutter 1989). A fundamental assumption of the Savage–Hutter model is that the flowing material obeys a Coulomb friction law. The depth-averaged mass and momentum conservation laws are therefore written as two-dimensional equations. The global equations are as follows:
$$\frac{\partial }{\partial t}\left( {\vec{U}} \right) + \frac{\partial }{\partial x}\left( {\vec{F}\left( {\vec{U}} \right)} \right) + \frac{\partial }{\partial y}\left( {\vec{G}\left( {\vec{U}} \right)} \right) = \vec{S}\left( {\vec{U}} \right),$$
where vectors \(\overrightarrow{U}\), \(\overrightarrow{F}\left(\overrightarrow{U}\right)\) and \(\overrightarrow{G}\left(\overrightarrow{U}\right)\) are expressed as follows:
$$\vec{U} = \left[ {\begin{array}{*{20}l} h \\ {hV_{x} } \\ {hV_{y} } \\ \end{array} } \right], \vec{F}\left( {\vec{U}} \right) = \left[ {\begin{array}{*{20}l} {hV_{x} } \\ {hV_{x}^{2} + C_{0} gh^{2} } \\ {hV_{x} V_{y} } \\ \end{array} } \right] {\text{ and }}\, \vec{G}\left( {\vec{U}} \right) = \left[ {\begin{array}{*{20}l} {hV_{y} } \\ {hV_{x} V_{y} } \\ {hV_{y}^{2} + C_{0} gh^{2} } \\ \end{array} } \right],$$
$$\vec{S}\left(\vec{U}\right) = \left[ \begin{array}{l} 0 \\ g_{x} h - h k_{ap} \operatorname{sign}\!\left( \frac{\partial V_{x}}{\partial y} \right) \frac{\partial \left( gh \right)}{\partial y} \sin \phi_{int} - \frac{V_{x}}{\sqrt{V_{x}^{2} + V_{y}^{2}}} \max\!\left( g + \frac{V_{x}^{2}}{r_{x}},\, 0 \right) h \tan \phi_{bed} \\ g_{y} h - h k_{ap} \operatorname{sign}\!\left( \frac{\partial V_{y}}{\partial x} \right) \frac{\partial \left( gh \right)}{\partial x} \sin \phi_{int} - \frac{V_{y}}{\sqrt{V_{x}^{2} + V_{y}^{2}}} \max\!\left( g + \frac{V_{y}^{2}}{r_{y}},\, 0 \right) h \tan \phi_{bed} \end{array} \right],$$
where \(h\) is the flow depth, \({V}_{x}\) and \({V}_{y}\) represent the flow velocities in the x and y directions, \({g}_{x}\) and \({g}_{y}\) represent the components of gravitational acceleration in the x and y directions, \(k_{ap}\) is the active/passive earth-pressure coefficient, \(r_{x}\) and \(r_{y}\) are the radii of curvature of the basal surface, and \(\phi_{int}\) and \(\phi_{bed}\) represent the internal and bed (basal) friction angles.
Titan2D provides the option of using a two-phase flow model developed by Pitman and Le (2005). This model is also based on the Savage–Hutter model, but considers both solid and fluid phases. In practice, the two-phase model requires the volume fraction of the solid phase as an additional input parameter. The two-fluid system of equations is based on that of Anderson and Jackson (1967), and the mass conservation equations for the two phases are as follows:
$$\text{Solid:}\quad \frac{\partial \rho_{s} \varphi}{\partial t} + \nabla \cdot \left( \rho_{s} \varphi {\varvec{v}} \right) = 0,$$
$$\text{Fluid:}\quad \frac{\partial \rho_{f} \left( 1 - \varphi \right)}{\partial t} + \nabla \cdot \left( \rho_{f} \left( 1 - \varphi \right) {\varvec{u}} \right) = 0,$$
where \({\rho }_{s}\) and \({\rho }_{f}\) represent the specific densities of the granular material (solid) and interstitial fluid, respectively, and \(\varphi\) is the solid volume fraction. \({\varvec{u}}\) and \({\varvec{v}}\) represent the velocities of the fluid and solid phases, respectively. The momentum conservation equations are expressed as:
$$\text{Solid}{:}\, \rho_{s} \varphi \left( {\frac{{\partial {\varvec{v}}}}{\partial t} + \left( {{\varvec{v}} \cdot \nabla } \right){\varvec{v}}} \right) = - \nabla \cdot T^{s} - \varphi \nabla \cdot T^{f} + {\varvec{f}} + \rho_{s} \varphi {\varvec{g}},$$
$$\text{Fluid}{:}\, \rho_{f} \left( {1 - \varphi } \right)\left( {\frac{{\partial {\varvec{u}}}}{\partial t} + \left( {{\varvec{u}} \cdot \nabla } \right){\varvec{u}}} \right) = - \left( {1 - \varphi } \right)\nabla \cdot T^{f} - {\varvec{f}} + \rho_{f} \left( {1 - \varphi } \right){\varvec{g}},$$
where \({T}^{s}\) and \({T}^{f}\) represent the solid and fluid stress tensors, respectively, and \( {\varvec{f}}\) represents the interaction force, which is expressed as:
$${\varvec{f}} = \left( {1 - \varphi } \right)\beta \left( {{\varvec{u}} - {\varvec{v}}} \right).$$
The phenomenological function \(\beta\) is based on the experimental results of Richardson and Zaki (1954) expressed as:
$$\beta = \frac{{\left( {\rho_{s} - \rho_{f} } \right)\varphi g}}{{v^{T} \left( {1 - \varphi } \right)^{m} }},$$
where \({v}^{T}\) is the terminal velocity of a typical solid particle falling in the fluid under gravity, \(g\) is the gravitational acceleration, and m is the Richardson and Zaki exponent, which is related to the Reynolds number of the flow. The solid is assumed to be a Mohr–Coulomb incompressible granular fluid, as modeled by Savage and Hutter (1989). The model then assumes a frictional boundary condition on the smooth basal surface, \(b(x,y)\). The fluid is assumed to be inviscid; however, it exerts a drag at the basal surface as a Navier slip at b, which accounts for fluid dissipation. After depth-averaging and some simplifying assumptions, the entire set of equations with model parameters and their notations is as listed in Appendices 2 and 3.
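To illustrate the drag closure, the sketch below evaluates β from the expression above; the densities, terminal velocity, exponent, and solid fraction are hypothetical example values, not parameters taken from this study.

```python
def interphase_drag(rho_s, rho_f, phi, v_t, m, g=9.81):
    """Interphase drag coefficient beta following the Richardson and
    Zaki (1954) relation used by Pitman and Le (2005)."""
    return (rho_s - rho_f) * phi * g / (v_t * (1.0 - phi) ** m)

# Hypothetical values: dense grains in water, 65 % solids by volume
beta = interphase_drag(rho_s=2700.0, rho_f=1000.0, phi=0.65, v_t=0.1, m=3.0)
print(f"beta = {beta:.3g}")  # order-of-magnitude illustration only
```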
A Titan2D simulation starts from an initial pile that collapses spontaneously owing to its morphology and its instability as a granular pile, and the simulation ends when it reaches the maximum number of iterations (10,000 iterations were used in the initial phase of this study). The maximum number of iterations was later set to 20,000–110,000, as this rendered the simulation time close to the estimated flow duration (Table 1). In this study, we used a cylindrical pile because the initial radius of the cylinder is easily calculated from the initially estimated volume, as sketched below. The input parameters of the Titan2D simulations include the internal friction angle, bed friction angle, initial pile height, initial pile radii, and volume fraction of the solid phase. A digital elevation model (DEM) is also necessary to calculate the granular surface flow.
Table 1 Observed and estimated parameter values of tephra, snow, water, and time duration for lahar simulations
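The conversion from an estimated source volume and pile height to the cylinder radius mentioned above is simple geometry; a minimal Python sketch with hypothetical numbers follows.

```python
import math

def cylinder_radius(volume_m3, height_m):
    """Radius of a cylindrical initial pile with the given volume and height."""
    return math.sqrt(volume_m3 / (math.pi * height_m))

# Hypothetical pile: 20,000 m^3 of tephra-snow-water mixture, 1.2 m high
print(round(cylinder_radius(20000.0, 1.2), 1))  # ~72.8 m
```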
The DEM was provided by the Geospatial Information Authority of Japan (GSI). The original DEM has a 1-m grid; however, it was resampled to a 5-m resolution for the simulations, because runs on the 1-m grid are not feasible owing to the computational complexity and time required for completion. This resolution is reasonable for wider applications of the flow models in other volcanic areas, because 5-m resolution DEMs are freely available from GSI for most Japanese mountainous and volcanic terrains.
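Resampling of this kind can be done with standard GIS tooling; the sketch below uses the rasterio library as one possible approach (the file name is a placeholder) and is not necessarily the procedure used by the authors.

```python
import rasterio
from rasterio.enums import Resampling

# Read a 1-m DEM and downsample it to a 5-m grid by averaging.
with rasterio.open("dem_1m.tif") as src:  # placeholder file name
    scale = src.res[0] / 5.0              # e.g. 1 m -> 5 m cells
    dem_5m = src.read(
        1,
        out_shape=(int(src.height * scale), int(src.width * scale)),
        resampling=Resampling.average,
    )
```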
Scenarios and input parameters for flow simulations
In this study, field and laboratory data were used to determine the values of the input parameters for the simulations (Fig. 3). Historical and geological records of past lahars at Motoshirane as well as data on the distributions, depositional characteristics (thickness, grain size, cohesiveness, and so on), and triggering mechanisms necessary for the calibration of the simulations were not available. Therefore, several scenarios were considered, and each involved the triggering of a lahar due to snow and snowmelt interactions (Fig. 10, Tables 1 and 2).
Scenario models for predicted lahars after the 2018 eruption at Motoshirane
Table 2 Input parameters for each scenario
The two-phase model used in the Titan2D simulations requires the initial pile height and volume (i.e., area), the solid fraction, the internal friction angle, and the bed friction angle. In the simulated flows, the initial pile height and volume are equivalent to the entrained materials at the starting point of the flow, including tephra, snow, and water (rain). In this study, snow was treated as either a solid (particles) or a liquid/fluid (after melting) depending on the conditions and lahar triggers (discussed below). Tephra volumes in the proximal areas, which can be source material for a lahar, were considered based on a preliminary investigation reported on February 6, released by The Joint Research Team for ash fall in the Kusatsu-Shirane eruption (2018), consisting of groups belonging to the Geological Survey of Japan (GSJ), Earthquake Research Institute, the University of Tokyo (ERI), and National Research Institute for Earth Science and Disaster Resilience (NIED). The tephra distribution and volume estimates were taken from individual isopachs/areas of 1.5-m-thick (GSJ) or 0.51-m-thick (ERI) tephra deposition over an area of 100 m × 100 m, and of > 1 m thickness over an area of 70 m × 40 m (NIED) near the vents. For snow accumulation, the snow survey helped determine volumes above the 2018 tephra layer (Figs. 8 and 9).
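For orientation, the proximal source volumes implied by these reported footprints are obtained by simple area × thickness arithmetic, as sketched below; the figures are illustrative only and do not replace the values adopted in Table 1.

```python
# Proximal tephra volume implied by each preliminary report (area x thickness)
estimates_m3 = {
    "GSJ  (1.5 m over 100 m x 100 m)":  100 * 100 * 1.5,
    "ERI  (0.51 m over 100 m x 100 m)": 100 * 100 * 0.51,
    "NIED (>1 m over 70 m x 40 m)":     70 * 40 * 1.0,
}
for label, volume in estimates_m3.items():
    print(f"{label}: {volume:,.0f} m^3")
# 15,000 / 5,100 / 2,800 m^3, respectively
```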
During the snowmelt season, a rain-triggered lahar can occur; therefore, the rain condition for the ROS lahar after the 2014 eruption at Ontake (3067 m a.s.l.), which recorded a total input of ~ 400 mm (300 mm rain + 100 mm snowmelt) in 44 h (~ 2 days) at the Tanohara weather station (2195 m a.s.l.; Kataoka et al. 2018), was used as a reference to set a maximum input value. In addition, the recorded maximum daily precipitation in Kusatsu town (1223 m) before the 2018 eruption was 222 mm. In near-vent areas (2050 m), more rainfall can be expected than at lower elevations.
Internal friction values were based on laboratory measurements of the angles of repose of the tephra and of tephra–snow mixtures. Procter et al. (2010) reported that internal friction angles might affect the lateral movement of the flowing mass, and their results (between 30° and 35°) accorded with the representative angle of repose of debris flow deposits. In addition, the actual values of the internal friction angles in the Titan2D model are not very sensitive parameters for the simulation results and may only subtly affect the lateral confinement of the flow (Sheridan et al. 2005; Williams et al. 2008; Procter et al. 2010). The bed friction angle was variable and was adjusted through repeated simulation runs, guided by previous lahar and snow avalanche simulations using Titan2D (Williams et al. 2008; Procter et al. 2010, 2012; Takeuchi et al. 2018).
Three different scenarios were selected for the lahar simulations (Fig. 10, Table 1). Scenario A is based on the assumption of a non-eruption trigger (i.e., post-2018 eruption), whereas Scenario C is based on the flow triggered by a future eruption event. The intermediate setting, scenario B, is based on the assumption of both post-2018 and future (new) eruptive events.
Scenario A: A rain-triggered situation, at the end of the snow season and without further eruption, was considered. It was based on the assumption of an ROS condition with an input of approximately 400 mm per event. The SWE (Fig. 8) suggested that ca. 0.21 m of meltwater, above the tephra layer, was generated at 2050 m a.s.l. Considering that flow generation could occur widely, the representative value was set as 0.15 m. As a result, the solid fraction contained only tephra. The initial pile was set on the northern slope of the Kagamiike-kita cone overlapping the upper catchment of Furiko-zawa Creek, where proximal tephra deposits of approximately 1 m thickness were thought to be concentrated.
Scenario B: This scenario, for the middle to the end of the snow season, was based on the assumption of an ice/snow slurry triggered by melting of snow under warm temperatures and/or a small phreatic/hydrothermal eruption (e.g., Kilgour et al. 2010). The hypothesized slides were generated from the boundary above the tephra layer, and this assumption was validated later by snow pit observation (Fig. 9b). These slides can occur over a relatively wide area on the northern slope near the vents, resulting in the formation of the largest initial pile among the three scenarios. During the snowmelt season, patchy exposure of tephra and/or snow-covered slopes occurs (Fig. 9a) in areas proximal to the vents. Surface albedo modification by dark-colored ash cover may enhance snow and ice melt (Driedger 1981; Manville et al. 2000; Richardson and Brook 2010); however, thicker tephra layers may reduce ablation (Mattson et al. 1993; Richardson and Brook 2010) as the insulation reduces melting (Mattson et al. 1993; Brock et al. 2007). Therefore, the thickness of tephra was set as 0.5 m for this scenario. Water input was estimated to be ca. 0.21 m based on the infiltration of snowmelt water from the surroundings and upper catchment above the tephra layer (calculated by SWE at 2050 m a.s.l.; Fig. 8) and was augmented by a 100-mm input (assuming the intensive rainfall). In this scenario, snow can behave as a solid in the ice/snow slurry. The snow proportion was estimated by considering the same SWE, which was converted to snow depth (multiplied by 2.8, when snow density is ~ 360 kg/m3).
Scenario C: This scenario was based on the assumption of a "full" snowmelt lahar generated by a minor magmatic, pyroclastic eruption during the middle to the end of the snow season, considering an event with proximal tephra fall volumes similar to those of the 2018 phreatic eruption. The lahar was sourced at the eruption center; thus, a narrow area was adopted for the initial pile. The tephra thickness contributing to the total pile height was taken as 1.5 m (the maximum thickness suggested by GSJ). All snowpacks, including that around the tephra layer near the vents, were assumed to melt, resulting in 0.73 m of snowmelt water at 2050 m a.s.l. In this scenario, there was no additional rainfall input, and only melted snow contributed to the fluid phase of the simulated flows.
The solid fraction for each scenario was > 0.6 (Table 1), implying that the lahars are of debris-flow type according to a definition based on sediment concentration (Scott et al. 1995; Pierson 2005). As a consequence of the pile height estimation, a solid fraction of approximately 0.8 was used to simulate the snow slurry of Scenario B. However, the plausibility of this value should be assessed further because, empirically, a non-volcanic slushflow (Hestnes 1998) exhibits more watery characteristics, whereas the hydrothermal eruption-triggered ice/snow slurry lahar at Ruapehu carried a volume of snow at least 60 times greater than the volume of water expelled from the Crater Lake (Lube et al. 2009).
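As a rough check on these values, the solid fraction can be approximated from the layer thicknesses quoted in the scenario descriptions above; the sketch below does this for Scenarios B and C (Scenario A follows analogously), and Table 1 remains the authoritative source where the adopted values differ.

```python
def solid_fraction(solid_thickness_m, fluid_thickness_m):
    """Solid volume fraction of the initial pile approximated from the
    thicknesses of the layers assigned to the solid and fluid phases."""
    return solid_thickness_m / (solid_thickness_m + fluid_thickness_m)

# Scenario B: 0.5 m tephra + snow (0.21 m SWE x 2.8) as solids;
#             0.21 m snowmelt + 0.10 m rain as fluid
phi_b = solid_fraction(0.5 + 0.21 * 2.8, 0.21 + 0.10)
# Scenario C: 1.5 m tephra as solid; 0.73 m snowmelt as fluid
phi_c = solid_fraction(1.5, 0.73)
print(round(phi_b, 2), round(phi_c, 2))  # ~0.78 and ~0.67, both above 0.6
```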
For comparison, simulations were also conducted using the single-phase Coulomb model, which fundamentally applies to dry, single-phase granular flows such as snow avalanches and (dry) debris avalanches (Manville et al. 2013), although it has previously been applied to lahar hazard assessments (Procter et al. 2004, 2012). For simplification, the sum of the solid and fluid volumes in each scenario was used as the initial volume for the single-phase model. Additionally, the peak flow discharge and duration of the hypothesized lahars in each scenario (Table 1) were estimated using the empirical methods described by Pierson (1998). The calculated durations were used to determine the maximum number of iterations for the simulations.
Comparison of the results
The simulated flow distributions are illustrated in Figs. 11, 12 and 13. Regarding the bed friction angle, several two-phase flow simulations suggested that a value of 25° was representative of the present situation. A flow simulated using the single-phase Coulomb model with the same bed friction angle covered a markedly shorter distance (Fig. 11b). Two-phase simulations were also attempted with lower bed friction angles of around 10°, as used in previous lahar simulations (Williams et al. 2008; Procter et al. 2010, 2012); however, these runs showed numerical instability in the simulated flows. Other cases were calculated using the single-phase Coulomb method, with minimum bed friction angles of 5° for Scenario A and 8° for Scenarios B and C. These simulations adopted minimum bed friction angles in order to estimate the maximum runout distances of the lahars. The minimum angles were defined by decreasing the values in 1° steps until the flow results showed instability, including flows progressing towards highly elevated zones. The single-phase simulations were then run with the identified minimum bed friction angle plus 1°.
Simulated flow distribution for Scenario A (rain-on-snow): a two-phase model with a bed friction angle of 25°, b single-phase model with a bed friction angle of 25°, and c single-phase model with a bed friction angle of 5°
Simulated flow distribution for Scenario B (snow slurry): a two-phase model with a bed friction angle of 25° and b single-phase model with a bed friction angle of 8°
Simulated flow distribution for Scenario C (full snowmelt): a two-phase model with a bed friction angle of 25° and b single-phase model with a bed friction angle of 8°
In Scenario A (an ROS condition), the simulated lahars flowed down the Furiko-zawa Creek to the cable car terminal, with a maximum flow thickness of ~ 0.5 m, and eventually reached the Yazawa-gawa River (Fig. 11a). The lahar flow simulated using the single-phase model (Fig. 11c) was more valley-confined and exhibited a maximum flow depth of ~ 1 m, even in the Yazawa-gawa. In Scenario B (snow slurry), the lahars bifurcated and reached the Furiko-zawa Creek and a valley south of the Furiko-zawa. The two branches converged in the middle of the ski field and finally flowed into the Yazawa-gawa River (Fig. 12). The maximum flow thickness was almost steady at approximately 0.5–1 m from upstream to downstream. Around the cable car terminal, the topography split the lahar in two: one branch flowed further eastward, reaching Mononuguno-ike pond, and the other flowed through a narrow valley to the Yazawa-gawa River. The flow paths of Scenario C (a snowmelt lahar triggered by a small-scale magmatic eruption) under the two-phase model were similar to those of Scenario B, with a maximum flow thickness of approximately 0.5 m, whereas the flow simulated using the single-phase model was similar to that observed in Scenario A (Fig. 13).
Most lahar flow simulations (Figs. 11, 12 and 13) exhibited similar flow paths, irrespective of the scenarios or flow assumptions. The most hazardous areas included: (1) the region where the Furiko-zawa Creek meets National Highway 292; (2) the location where a valley south of Furiko-zawa Creek crosses the highway; and (3) the sites near the cable car terminal on the lower slope of the ski field (Figs. 1, 2, 11a, c, 12 and 13). These locations lie within ski areas during winter and are near the major transportation route during other seasons. Additionally, all numerical simulations conducted with variable input parameters suggest that there is a low probability of lahars directly reaching Kusatsu town and/or spilling from the Yazawa-gawa River (which flows in a deep gorge) and finally inundating the town.
The graphs of travel distance versus time (Fig. 14) show breakpoints in the lines, which indicate changes in the flow velocity at certain times. Therefore, the flow velocity was calculated separately for the segments before and after the breakpoint (Table 3). The breakpoints occur ca. 2500 m downstream, reflecting the influence of topography: around this location the valley widens and the slope gradient changes.
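The segment-wise velocities reported in Table 3 can be obtained directly from the simulated distance-time series. The sketch below uses invented placeholder numbers, not the actual Titan2D output, purely to show the bookkeeping around the ~2500 m breakpoint.

```python
import numpy as np

# Illustrative placeholder data: simulated flow-front position versus time.
time_s = np.array([0, 60, 120, 180, 240, 300, 360])            # s
distance_m = np.array([0, 900, 1800, 2500, 2900, 3300, 3700])  # m

breakpoint_m = 2500.0
idx = int(np.argmax(distance_m >= breakpoint_m))  # first sample at/after the breakpoint

v_upper = (distance_m[idx] - distance_m[0]) / (time_s[idx] - time_s[0])
v_lower = (distance_m[-1] - distance_m[idx]) / (time_s[-1] - time_s[idx])

print(f"mean velocity above the breakpoint: {v_upper:.1f} m/s")  # steeper upper reach
print(f"mean velocity below the breakpoint: {v_lower:.1f} m/s")  # gentler lower reach
```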
Simulated travel distances of Scenarios A, B and C, showing a breakpoint in flow velocity at around 2500 m distance. The velocity is estimated from the distance between the flow front and the starting point, without considering the sinuosity of the rivers and valleys
Table 3 Results of the numerical simulations
Snow-related lahar hazard assessment without the availability of geological and historical data
The 2018 eruption occurred at new unforeseen vents on the snow-clad Motoshirane. Geological and historical lahar data for probabilistic assessment of snow-related lahar hazards were unavailable. Therefore, a deterministic approach, using the characteristics of proximal tephra deposits and snow conditions, was adopted and flow was simulated using a numerical model.
Flow type of the predicted lahars after the 2018 phreatic eruption
Prediction of the flow type (cohesive, non-cohesive, debris flow, hyperconcentrated flow, and so on) of potential future lahars, even at qualitative levels, is important because it affects flow energy, flow transformation, travel distance, inundation areas, and so on, and these affect the outcomes of hazard mitigation strategies implemented, including evacuation planning. Syn- and post-eruption lahars tend to exhibit the characteristics of eruption deposits. For example, a lahar derived from magmatic activity can contain fresh magmatic materials such as pumice, volcanic glass shards and mineral grains. Lahar deposits associated with magmatic pyroclastic eruptions include gravel and sand-sized particles (e.g., Pierson et al. 1996) and can vary from matrix-rich (not clay-rich but silt- and sand-rich) to clast-rich, depending on the eruption style, size, and explosiveness (e.g., Mothes and Vallance 2015). In contrast, phreatic and magmatic hydrothermal activity through extant hydrothermal systems can produce clay-rich ejecta and hence clay-rich, syn- and post-eruptive lahars (Kataoka et al. 2018; Kataoka and Nagahashi 2019). Also, gravitational collapse of hydrothermally altered parts of the volcanic edifice can produce clay-rich lahars (Vallance and Scott 1997; Vallance 2005). Therefore, immediately after a phreatic/hydrothermal eruption, clay mineral assemblages and clay content (a cohesive lahar can be defined by the lahar deposit matrix containing > 3–5 wt% of clay: Scott et al. 1995) should be evaluated to predict the cohesiveness and its effect on the flow type of possible future lahars. The 2018 eruption deposits at Motoshirane contain ~ 10 wt% clay-sized particles, together with a mineral assemblage (Fig. 7) that suggests they were sourced from hydrothermally altered zones. The clay content in these primary ashfall deposits is lower than that in the deposits of the 2014 Ontake phreatic eruption (> 30 wt%), which produced cohesive lahars containing 10–20% matrix clay early in the eruption aftermath (Kataoka et al. 2018). The primary tephra characteristics of the 2018 phreatic eruption at Motoshirane suggested that an extremely cohesive flow following the eruption was unlikely. However, the presence of kaolin group minerals and clay content in the 2018 eruption deposits (Figs. 6 and 7) indicates that future lahars, derived from the tephra deposits, may be cohesive as such clay minerals can increase the yield strength of the flow (Hampton 1975; Pierson 2005). It is also predicted that the winnowing of fine particles from primary deposits (e.g., Cutler et al. 2021) via background pluvial, snowmelt, aeolian, and fluvial processes, may change features of the possible lahar types.
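As a minimal illustration of the clay-content-based distinction quoted above (a cohesive lahar defined by > 3–5 wt% matrix clay, after Scott et al. 1995), the snippet below applies such a threshold; the exact cut-off within that range, and the use of the bulk clay content of the primary ash as a proxy for lahar matrix clay, are assumptions of this sketch.

```python
# Illustrative only: flag a lahar source as potentially cohesive from its clay content.
# The 5 wt% cut-off is one choice within the 3-5 wt% range cited above (Scott et al. 1995).

def potentially_cohesive(clay_wt_percent: float, threshold_wt_percent: float = 5.0) -> bool:
    return clay_wt_percent > threshold_wt_percent

print(potentially_cohesive(10.0))  # ~10 wt% clay-sized particles in the 2018 Motoshirane ash -> True
print(potentially_cohesive(2.0))   # a clay-poor, sandy source -> False
```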
Constraining snow and snowmelt volumes for snow-related lahar simulation
This study considered the SWE in near-vent areas to determine the values of the input parameters for the Titan2D flow simulations (Table 1). In Japan, no governmental agency performs either manual or automatic SWE measurements. At weather stations operated by the Japan Meteorological Agency, snow depth is automatically measured in snowy regions, in addition to other meteorological variables such as air temperature and precipitation. However, as in many other countries, most stations are installed at low altitudes (the Kusatsu weather station at 1223 m a.s.l. in Kusatsu town is the third highest station in Japan at which snow depth is measured (Suzuki 2018), even though the area of the new vents at Motoshirane is located at ~ 2050 m a.s.l.). No other reliable data on SWE and/or snow depth are available, and thus it is impossible to directly assess SWE and snow depth in the near-vent area. Snow-related reports from ski resorts located close to the volcanoes can be obtained; however, the information on snow depth is confined to middle altitudes and reflects the prevailing local conditions. In this study, using the snow survey data, we estimated the SWE in the near-vent area based on the relationship between SWE and altitude in areas of different elevations around Kusatsu-Shirane volcano (Fig. 8). It is widely known that a linear relationship between SWE and altitude can be observed in many mountain areas in Japan, including active volcanoes such as Daisetsu (Yamada et al. 1979), Ontake (Iyobe et al. 2016), and Adatara and Azuma (Matsumoto et al. 2019). These studies also highlight the fact that these relationships vary with season and slope orientation (relative to the prevailing wind direction in winter). Thus, it is important to understand the characteristics of the spatial distribution of SWE on individual snow-clad volcanoes for snow-related assessment. The SWE can also be measured during a background period, i.e., before volcanic unrest, and can be used to constrain the values of the input parameters.
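The linear SWE-altitude relationship described above translates into a one-line least-squares fit. In the sketch below the survey values are invented placeholders (the real fit uses the snow-survey data of Fig. 8), and 2050 m a.s.l. is the approximate vent altitude.

```python
import numpy as np

# Placeholder snow-survey points (altitude, SWE); the real values come from Fig. 8.
altitude_m = np.array([1200.0, 1400.0, 1600.0, 1800.0])
swe_m = np.array([0.15, 0.30, 0.45, 0.60])   # snow water equivalent in metres of water

slope, intercept = np.polyfit(altitude_m, swe_m, 1)   # linear SWE-altitude relationship
swe_at_vent = slope * 2050.0 + intercept              # extrapolate to the near-vent altitude
print(f"estimated SWE at 2050 m a.s.l.: {swe_at_vent:.2f} m")
```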
Recommendations and limitations of the Titan2D flow simulation tool
Lahar flow models have been widely applied using a variety of methodologies (Manville et al. 2013). A popular empirical method such as LAHARZ (Iverson et al. 1998), used to predict the inundation area of lahar flows, was not suitable for assessment of the 2018 eruption at Kusatsu-Shirane (Motoshirane) because the spatial distributions of previous lahar deposits, which would be needed for calibration, are poorly defined. In contrast, application of the Titan2D (Pitman et al. 2003; Pitman and Le 2005) simulation tool depends on input parameters such as the initial flow volume, the solid fraction in the flow, the internal friction angle, and the bed friction angle. These parameters could be readily estimated for the Motoshirane case through field survey and laboratory work. Furthermore, the simulation tool requires only a normally specified desktop computer (Procter et al. 2012) and can compute the results in a relatively short time. This enabled the authors to conduct numerous simulations under different conditions (such as a wide range of bed friction angles) and under several scenario settings. Therefore, the Titan2D tool can be widely applied in forecasting snow-related lahars, even without the availability of historical and geological data for calibration.
The estimated flow velocities (Table 3) are higher than the reported or calculated values (e.g., Pierson 1986; Pierson and Costa 1987) for moderate to large lahars. Williams et al. (2008) reported velocities of 3.1–7.0 m/s for observed and estimated small-scale lahars at Volcán Tungurahua with volumes (54,000 m3) of the same order as the lahars hypothesized for the 2018 eruption at Motoshirane (Table 1). The overestimated flow velocities are possibly attributable to the steep topography in the upper reach, which enhanced the simulated flow acceleration. Procter et al. (2012) also reported several difficulties in simulating flow travel times in proximal sections of a catchment using the Titan2D model, owing to topography, although flow paths were appropriately defined by the model. The single-phase and two-phase models used in the Titan2D lahar simulations give different results (Figs. 11, 12 and 13). Simulated lahars using the two-phase model travelled further than those using the single-phase model under the same bed friction angle (Fig. 11a, b). The two-phase flow simulations with lower bed friction angles occasionally exhibited numerical instabilities, indicating that caution is needed when assessing lahar hazards and risks even with plausible input values. In contrast, single-phase simulations remained stable for flows progressing through the valleys. Using lower values of bed friction in the single-phase model, as indicated in the present study, can aid the simulation of more fluidal and far-reaching flows. For wider applications, calibration of bed friction angles using past lahar deposits on other volcanoes will be helpful in constraining the models. The cohesiveness of the flow cannot be precisely expressed in the Titan2D model, although it is an important parameter for clay-rich lahar flows. As most previous studies on lahar flow simulations using the Titan2D flow model have dealt with sandy and gravelly (i.e., less cohesive) lahars for hindcasting/forecasting (Williams et al. 2008; Córdoba et al. 2015; Procter et al. 2010, 2012), verification of the model for cohesive flows, such as those in the Motoshirane case, as determined from the clay content and mineralogy of the lahar source material, will be necessary in future studies.
The consequences of the 2018 eruption and suggestions for future crisis hazard assessment
A prompt hazard assessment, for the benefit of residents, communities, and emergency planners, is required in terms of information on flow paths and travel distances (inundation areas), rather than accurate simulation of the physics within a model. Lahar hazard zonation using the Titan2D flow models can be derived from the flow paths, inundation areas, and travel distances of lahars under several scenarios using both single-phase and two-phase models (Figs. 11, 12 and 13). The information on the key high-risk locations in the ski areas and near National Highway 292, as identified by the lahar flow simulations, was shared with the emergency planning board at the end of March 2018. At the time, it was recommended to close the gates on the highway whenever further volcanic unrest or rapid changes in weather, with heavy rain and/or snowmelt, were detected.
The monitoring system in Furiko-zawa Creek and the Yazawa-gawa River (Fig. 2 and Appendix 1) did not detect any major lahar events (debris flows and hyperconcentrated flows) from early March 2018 to mid-June 2019, although stage and turbidity changes in the rivers were recorded when heavy rains occurred. The data obtained by the monitoring systems were shared among scientists and emergency planners in Kusatsu town (Fig. 3). An internet-based remote camera, which acquires images every 10 min (and videos when certain movements are detected), transmits data to an internet cloud server; this is useful for understanding river and snow conditions. The absence of a lahar after the 2018 eruption was probably due to the relatively small volume of initial tephra deposited on the upper slopes (30,000–50,000 tons: The Joint Research Team for ash fall in Kusatsu-Shirane eruption 2018) and the absence of torrential rain during the early post-eruption period.
The findings of this study do not imply that Kusatsu-Shirane (Motoshirane) volcano and its river systems have not experienced lahars in the past. It remains important to search for and study past lahar deposits sourced from both Mts. Motoshirane and Shirane in the drainage system. Future lahars and the resultant hazards at Motoshirane depend on the eruption magnitude and type and on the interaction with snow and rain events. The crisis hazard maps for snow-related lahars created in this study (Figs. 11, 12 and 13) will be helpful as background hazard maps for hazard zonation in the assessment of future Motoshirane eruptions. Revising the parameter values to match the tephra and snow conditions at the time of a future eruption, together with the bed friction angles considered in this study, will allow effective crisis hazard assessment for any eruption. Even if future eruptions occur from new vents, without the availability of historical and geological lahar data, the procedure used in this study (Fig. 3) will enable rapid lahar prediction and hazard zonation.
Several previous studies that applied numerical simulations to lahar hazard zonation have hindcast past lahar events or forecast future lahars by calibrating against historical and geological lahar records (e.g., Williams et al. 2008) prior to the next volcanic unrest. In contrast, the present study demonstrated the use of a deterministic approach to forecast snow-related lahars during the 2018 eruption crisis that originated at unforeseen vents at Motoshirane. Even without historical and geological lahar records for the volcano, characterization of proximal tephra deposits, snow surveys, and numerical lahar flow simulations using the Titan2D model enabled the prediction of flow type and flow paths, and hazard assessment and zonation, within a relatively short timeframe.
The characterization of the primary tephra deposited proximal to the vents indicated a non-juvenile eruption that originated from hydrothermally altered zones. The tephra deposits contain approximately 10 wt% clay-sized particles and clay minerals, which would contribute to the cohesiveness of the predicted lahar flows.
Several scenarios were applied in the numerical flow simulations, incorporating snow and snowmelt to predict future lahars with or without a new eruption. To determine the input parameters for the flow model, consideration of the SWE immediately after the eruption and during the post-eruptive period effectively helped to estimate the snow volume in the high-altitude region near the vents.
The present study applied not only the two-phase model of Titan2D, commonly used for lahar assessment, but also the single-phase Coulomb model, which is numerically more stable. Aspects of the results, such as the apparent overestimation of flow velocities, highlight the limitations of the models. Nevertheless, the flow paths (inundation areas) and travel distances, calculated in repeated runs with variable scenario settings, are useful for lahar hazard assessment and for identifying potential high-risk areas that should be monitored closely whenever further volcanic unrest or abrupt changes in weather conditions are detected. The procedures used in the present study provide insights for the forecasting of lahars sourced from new-vent eruptions, as well as for the appropriate use of the latest eruption- and snow-related information.
Anderson TB, Jackson R (1967) A fluid mechanical description of fluidized beds: equations of motion. Ind Eng Chem Fundam 6:527–539
Brock B, Rivera A, Casassa G, Bown F, Acuña C (2007) The surface energy balance of an active ice-covered volcano: Villarrica Volcano, southern Chile. Ann Glaciol 45:104–114
Córdoba G, Villarosa G, Sheridan MF, Viramonte JG, Beigt D, Salmuni G (2015) Secondary lahar hazard assessment for Villa la Angostura, Argentina, using Two-Phase-Titan modelling code during 2011 Cordón Caulle eruption. Nat Hazards Earth Syst Sci 15:757–766
Cutler NA, Streeter RT, Dugmore AJ, Sear ER (2021) How do the grain size characteristics of a tephra deposit change over time? Bull Volcanol 83:45
Delgado-Granados H, Julio-Miranda P, Carrasco-Núñez G, Pulgarín-Alzate B, Mothes P, Moreno-Roa H, Cáceres-Correa BE, Cortés-Ramos J (2015) Hazards at Ice-Clad volcanoes: phenomena, processes, and examples from Mexico, Colombia, Ecuador, and Chile. In: Shroder JF, Haeberli W, Whiteman C (eds) Snow and ice-related hazards, risks and disasters. Academic Press, Cambridge, pp 607–646
Driedger CL (1981) Effect of ash thickness on snow ablation. In: Lippman PW, Millineaux DR (eds) The 1980 Eruption of Mount St. Helens, USGS Professional Paper, 1250 Washington, pp 757–760
Fire and Disaster Management Agency (2018) Damages caused by volcanic activity at Mt. Motoshirane and the response of fire departments (the 9th report) (in Japanese) https://www.fdma.go.jp/disaster/info/assets/post872.pdf. Accessed 21 Aug 2021
Hampton MA (1975) Competence of fine-grained debris flows. J Sediment Petrol 45:834–844
Hayakawa Y (1983) Geology of Kusatsu-Shirane Volcano. J Geol Soc Japan 89:511–525 (in Japanese with English abstract)
Hestnes E (1998) Slushflow hazard - where, why and when? 25 years of experience with slushflow consulting and research. Ann Glaciol 26:370–376
Ishizaki Y, Nigorikawa A, Kametani N, Yoshimoto M, Terada A (2020) Geology and eruption history of the Motoshirane Pyroclastic Cone Group, Kusatsu-Shirane Volcano, central Japan. J Geol Soc Japan 126:473–491 (in Japanese with English abstract)
Iverson RM, Schilling SP, Vallance JW (1998) Objective delineation of lahar-inundation hazard zones. Geol Soc Am Bull 110:972–984
Iyobe T, Matsumoto T, Kawashima K, Sasaki A, Suzuki K (2016) Temporal and spatial variabilities of snow water equivalent on snow covered volcano: case study of Ontake volcano. Proc Cold Region Technol Confer 32:27–32 (in Japanese)
Japan Meteorological Agency (2018) Volcanic Activity of Kusatsu-Shiranesan Volcano (February 2018 – May 2018), issued in the 129th Coordinating Committee for the Prediction of Volcanic Eruption (CCPVE) (in Japanese). https://www.data.jma.go.jp/svd/vois/data/tokyo/STOCK/kaisetsu/CCPVE/Report/129/kaiho_129_07.pdf. Accessed 21 Aug 2021
Kametani N, Ishizaki Y, Yoshimoto M, Maeno F, Terada A, Furukawa R, Honda R, Ishizuka Y, Komori J, Nagai M, Takarada S (2021) Total mass estimate of the January 23, 2018, phreatic eruption of Kusatsu-Shirane Volcano, central Japan. Earth Planets Space 73:141. https://doi.org/10.1186/s40623-021-01468-3
Kataoka KS, Nagahashi Y (2019) From sink to volcanic source: Unravelling missing terrestrial eruption records by characterization and high-resolution chronology of lacustrine volcanic density flow deposits, Lake Inawashiro-ko, Fukushima, Japan. Sedimentology 66:2784–2827. https://doi.org/10.1111/sed.12629
Kataoka KS, Matsumoto T, Saito T, Kawashima K, Nagahashi Y, Iyobe T, Sasaki A, Suzuki K (2018) Lahar characteristics as a function of triggering mechanism at a seasonally snow-clad volcano: contrasting lahars following the 2014 phreatic eruption of Ontake Volcano, Japan. Earth Planets Space 70:113. https://doi.org/10.1186/s40623-018-0873-x
Kattelmann R (1997) Flooding from rain-on-snow events in the Sierra Nevada. IAHS Publ 239:59–65
Kilgour G, Manville V, Della Pasqua F, Graettinger A, Hodgson KA, Jolly GE (2010) The 25 September 2007 eruption of Mount Ruapehu, New Zealand: directed ballistics, surtseyan jets, and ice-slurry lahars. J Volcanol Geotherm Res 191:1–14
Leonard GS, Stewart C, Wilson TM, Procter JN, Scott BJ, Keys HJ, Jolly GE, Wardman JB, Cronin SJ, McBride SK (2014) Integrating multidisciplinary science, modelling and impact data into evolving, syn-event volcanic hazard mapping and communication: a case study from the 2012 Tongariro eruption crisis, New Zealand. J Volcanol Geotherm Res 286:208–232
Lube G, Cronin SJ, Procter JN (2009) Explaining the extreme mobility of volcanic ice-slurry flows, Ruapehu volcano, New Zealand. Geology 37:15–18
Manville V, Hodgson KA, Houghton BF, Keys JR, White JDL (2000) Tephra, snow and water: complex sedimentary responses at an active snow-capped stratovolcano, Ruapehu, New Zealand. Bull Volcanol 62:278–293
Manville V, Major J, Fagents S (2013) Modeling lahar behavior and hazards. In: Fagents S, Gregg T, Lopes R (eds) Modeling volcanic processes: the physics and mathematics of volcanism. Cambridge University Press, Cambridge, pp 300–330
Matsumoto T, Kawashima K, Kataoka K, Iyobe T (2019) Accumulation and ablation of seasonal snow cover around Azuma and Adatara Volcanoes. JSSI & JSSE Joint Conference on Snow and Ice Research - 2019 in Yamagata: 286 (in Japanese). https://doi.org/10.14851/jcsir.2019.0_286
Mattson LE, Gardner JS, Young GJ (1993) Ablation on debris covered glaciers: an example from the Rakhiot Glacier, Panjab, Himalaya. IAHS Publ 218:289–296
Mothes PA, Vallance JW (2015) Lahars at Cotopaxi and Tungurahua Volcanoes, Ecuador: Highlights from stratigraphy and observational records and related downstream hazards. In: Shroder JF, Papale P (eds) Volcanic hazards, risks and disasters. Elsevier, Amsterdam, pp 141–168
Mothes PA (1992) Lahars of Cotopaxi Volcano, Ecuador: hazard and risk evaluation. In: McCall GJH, Laming DJC, Scott SC (eds) Geohazards. AGID Report Series (The Geosciences in International Development). Springer, Dordrecht. https://doi.org/10.1007/978-94-009-0381-4_7
Nurhasan, Ogawa Y, Ujihara N, Tank SB, Honkura Y, Onizawa S, Mori T, Makino M (2006) Two electrical conductors beneath Kusatsu-Shirane volcano, Japan, imaging by audiomagnetotellurics and their implications for hydrothermal system. Earth Planet Space 58:1053–1059. https://doi.org/10.1186/BF03352610
Ogawa Y, Aoyama H, Yamamoto M, Tsutsui T, Terada A, Ohkura T, Kanda W, Koyama T, Kaneko T, Ominato T, Ishizaki Y, Yoshimoto M, Ishimine Y, Nogami K, Mori T, Kikawada Y, Kataoka K, Matsumoto T, Kamiishi I, Yamaguchi S, Ito Y, Tsunematsu K (2018) Comprehensive survey of 2018 Kusatsu-Shirane Eruption. Proc Symp Nat Disaster Sci 55:25–30 (in Japanese)
Ohba T, Yaguchi M, Nishino K, Numanami N, Tsunogai U, Ito M, Shingubara R (2019) Time variation in the chemical and isotopic composition of fumarolic gasses at Kusatsu-Shirane Volcano Japan. Front Earth Sci 7:249. https://doi.org/10.3389/feart.2019.00249
Patra AK, Bauer AC, Nichita CC, Pitman EB, Sheridan MF, Bursik M, Rupp B, Webber A, Stinton AJ, Namikawa LM, Renschler CS (2005) Parallel adaptive numerical simulation of dry avalanches over natural terrain. J Volcanol Geotherm Res 139:1–21
Pierson TC (1986) Flow behavior of channelized debris flows, Mount St. Helens, Washington. In: Abrahams AD (ed) Hillslope Processes, Allen & Unwin, Boston, pp. 269–296
Pierson TC (1998) An empirical method for estimating travel times for wet volcanic mass flow. Bull Volcanol 60:98–109
Pierson TC (ed) (1999) Hydrologic consequences of hot rock–snowpack interactions at Mount St. Helens Volcano, Washington, 1982–84. US Geological Survey Professional Paper 1586, 117 p
Pierson TC (2005) Hyperconcentrated flow—transitional process between water flow and debris flow. In: Jacob M, Hungr O (eds) Debris-flow hazards and related phenomena. Springer, Berlin, pp 159–202
Pierson TC, Costa JE (1987) A rheologic classification of subaerial sediment-water flow. In: Costa JE, Wieczorek GF (eds) Debris flows/Avalanches: process, recognition, and mitigation, Geological Society of America, Reviews in Engineering Geology, vol 7. The Geological Society of America, Boulder, pp 1–12
Pierson TC, Janda RJ (1994) Volcanic mixed avalanches: a distinct eruption-triggered mass-flow process at snow-clad volcanoes. Geol Soc Am Bull 106:1351–1358
Pierson TC, Janda RJ, Thouret JC, Borrero CA (1990) Perturbation and melting of snow and ice by the 13 November 1985 eruption of Nevado del Ruiz, Colombia, and consequent mobilization, flow and deposition of lahars. J Volcanol Geotherm Res 41:17–66
Pierson TC, Daag AS, Delos Reyes PJ, Regalado MTM, Solidum RU, Tubianosa BS (1996) Flow and deposition of posteruption hot lahars on the east side of Mount Pinatubo, July-October 1991. In: Newhall, CG and Punongbayan, RS (eds). Fire and mud: eruptions and lahars of Mount Pinatubo, Philippines: Philippine Institute of Volcanology and Seismology, Quezon City and University of Washington Press, Seattle, pp 921–950
Pitman EB, Le L (2005) A two-fluid model for avalanche and debris flows. Phil Trans R Soc A 363:1573–160
Pitman EB, Nichita CC, Patra A, Bauer A, Sheridan M, Bursik M (2003) Computing granular avalanches and landslides. Phys Fluids 15:3638–3646
Procter JN, Cronin SJ, Fuller IC, Sheridan M, Neall VE, Keys H (2010) Lahar hazard assessment using Titan2D for an alluvial fan with rapidly changing geomorphology: Whangaehu River. Mt Ruapehu Geomorphology 116:162–174
Procter JN, Cronin SJ, Sheridan MF (2012) Evaluation of Titan2D modelling forecasts for the 2007 Crater Lake break-out lahar, Mt. Ruapehu, New Zealand. Geomorphology 136:95–105
Procter J, Cronin S, Sheridan M, Patra A, (2004) Application of titan2D mass-flow modelling to assessing hazards from a potential lake-breakout lahar at Ruapehu volcano, New Zealand. Abstract S11a pth 031 in proceedings IAVCEI General Assembly, Pucon, Chile.
Richardson JM, Brook MS (2010) Ablation of debris-covered ice: some effects of the 25 September 2007 Mt Ruapehu eruption. J Royal Soc NZ 40:45–55
Richardson JF, Zaki WN (1954) Sedimentation and fluidization: part I. Trans Inst Chem Eng 32:35–53
Savage S, Hutter K (1989) The motion of a finite mass of granular material down a rough incline. J Fluid Mech 199:177–215
Scott KM, Vallance JW, Pringle PT (1995) Sedimentology, behavior, and hazards of debris flows at Mount Rainier, Washington. U.S. Geol Surv Professional Paper 1547: 56p.
Sheridan MF, Stinton AJ, Patra A, Pitman EB, Bauer A, Nichita CC (2005) Evaluating Titan2D mass-flow model using the 1963 Little Tahoma Peak avalanches, Mount Rainier, Washington. J Volcanol Geotherm Res 139:89–102
Sui J, Koehler G (2001) Rain-on-snow induced flood events in Southern Germany. J Hydrol 252:205–220
Suzuki K (2018) Importance of hydro-meteorological observation in the mountainous area. Jpn J Mt Res 1:1–11 (in Japanese with English abstract)
Tada F, Tsuya H (1927) The eruption of the Tokachidake Volcano, Hokkaido, on May 24th, 1926. Bull Earthq Res Inst Univ Tokyo 2:49–84 (in Japanese with English abstract)
Takeuchi Y, Nishimura K, Patra A (2018) Observations and numerical simulations of the braking effect of forests on large-scale avalanches. Ann Glaciol 59:50–58
Terada A (2018) Kusatsu-Shirane volcano as a site of phreatic eruptions. J Geol Soc Japan 124:251–270. https://doi.org/10.5575/geosoc.2017.0060 (in Japanese with English abstract)
Terada A, Kanda W, Ogawa Y, Yamada T, Yamamoto M, Ohkura T, Aoyama H, Tsutsui T, Onizawa S (2021) The 2018 phreatic eruption at Mt. Motoshirane of Kusatsu-Shirane volcano, Japan: eruption and intrusion of hydrothermal fluid observed by a borehole tiltmeter network. Earth Planets Space 73:157. https://doi.org/10.1186/s40623-021-01475-4
Tseng KH, Ogawa Y, Nurhasan, Tank SB, Ujihara N, Honkura Y, Terada A, Usui Y, Kanda W (2020) Anatomy of active volcanic edifice at the Kusatsu-Shirane volcano, Japan, by magnetotellurics: hydrothermal implications for volcanic unrests. Earth Planet Space 72:161. https://doi.org/10.1186/s40623-020-01283-2
The Joint Research Team for ash fall in Kusatsu-shirane 2018 eruption (2018) Ash fall distribution of Jan. 23, 2018 eruption in Kusatsu-Shirane Volcano, issued in the 140th Coordinating Committee for the Prediction of Volcanic Eruption (CCPVE) (in Japanese). https://www.data.jma.go.jp/svd/vois/data/tokyo/STOCK/kaisetsu/CCPVE/shiryo/140/140_01-1-1.pdf. Accessed 21 Aug 2021
Uesawa S (2014) A study of the Taisho lahar generated by the 1926 eruption of Tokachidake Volcano, central Hokkaido, Japan, and implications for the generation of cohesive lahars. J Volcanol Geotherm Res 270:23–34
Uto K, Hayakawa Y, Aramaki S, Ossaka J (1983) Geological map of Kusatsu-Shirane Volcano. Geological Map of Volcanoes 3 (1:25000) (in Japanese).
Vallance JW (2005) Volcanic debris flows. In: Jacob M, Hungr O (eds) Debris-flow hazards and related phenomena. Springer, Berlin, pp 159–202
Vallance JW, Scott KM (1997) The Osceola mudflow from Mount Rainier: sedimentology and hazard implications of a huge-clay-rich debris flow. Geol Soc Am Bull 109:143–163
Waythomas CF (2014) Water, ice and mud: lahars and lahar hazards at ice- and snow-clad volcanoes. Geol Today 30:34–39
Williams R, Stinton AJ, Sheridan MF (2008) Evaluation of the Titan2D two-phase flow model using an actual event: Case study of the 2005 Vazcún Valley Lahar. J Volcanol Geotherm Res 177:760–766
Yamada T, Nishimura H, Suizu S, Wakahama G (1979) Distribution and process of accumulation and ablation of snow on the west slope of Mt. Asahidake, Hokkaido. Low Temperature Science 37A:1–12 (in Japanese with English abstract)
Yamada T, Kurokawa AK, Terada A, Kanda W, Ueda H, Aoyama H, Ohkura T, Ogawa Y, Tanada T (2021) Locating hydrothermal fluid injection of the 2018 phreatic eruption at Kusatsu-Shirane volcano with volcanic tremor amplitude. Earth Planet Space 73:14. https://doi.org/10.1186/s40623-020-01349-1
The authors thank Yoshitaka Nagahashi for his comments on an early version of the manuscript, Takuma Katori for XRD analysis and mineral identification, and Kanae Watabe for her help with grain-size analysis. Deployment of the river monitoring system was assisted by Ryoko Nishii and Shun Watabe. Akihiko Terada and Yasuo Ogawa kindly provided information about Kusatsu-Shirane volcano which helped the fieldwork and monitoring. The authors also appreciate the officers (disaster management section) of Kusatsu town hall, the rangers and officers of the Ministry of the Environment and the Forestry Agency, and the officers of the Ministry of Land, Infrastructure, Transport and Tourism for supporting the submission of documents for permits. The DEM data (1-m resolution) were provided by the Geospatial Information Authority of Japan. Christopher Gomez, two anonymous reviewers, and handling editor Yasuo Ogawa are thanked for their critical reviews and comments which greatly improved the manuscript.
This research is fully supported by the JSPS (Japan Society for the Promotion of Science) Grant-in-Aid for Special Purposes no. 17K20141 "Comprehensive Survey of 2018 Kusatsu-Shirane Eruption" (PI: Yasuo Ogawa, Co-I: Kyoko Kataoka).
Research Institute for Natural Hazards and Disaster Recovery, Niigata University, Ikarashi 2-8050, Nishi-ku, Niigata, 950-2181, Japan
Kyoko S. Kataoka, Takane Matsumoto, Atsushi Urabe & Katsuhisa Kawashima
Faculty of Science, Yamagata University, 1-4-12 Kojirakawa-machi, Yamagata, 990-8560, Japan
Kae Tsunematsu
Kyoko S. Kataoka
Takane Matsumoto
Atsushi Urabe
Katsuhisa Kawashima
KSK, KT, and TM contributed to writing the main part of the paper, discussing parameters for flow simulation and total risk assessments for potential lahar hazards. KT performed numerical simulations for lahars using Titan2D. TM and KK carried out snow survey and meteorological interpretations. KSK and AU did geological survey, tephra sampling, and its laboratory analyses, and river monitoring. All authors read and approved the final manuscript.
Correspondence to Kyoko S. Kataoka.
There are no competing interests in relation to the present research.
Appendix 1: River monitoring equipment and onsite data
The authors deployed river monitoring systems consisting of stage gauges and a remote camera from early March 2018 to mid-June 2019 (Fig. 2). Three sites were selected: one in Furiko-zawa Creek (Cam 1: 2.7 km downstream of the head of the river, close to the new vents) and two in the Yazawa-gawa River (Cam 2: 4.8 km and Cam 3: 7.7 km). The visual recording helped in understanding changes in river discharge, turbidity, and geomorphology. The IoT-based remote camera (HykeCam SP 4G, Hyke Inc., Japan) captures still images every 10 min (the time interval is variable) during daytime and at night using an infrared sensor, sending the image data to an internet cloud server. Videos were also captured automatically when movement was detected. Two types of stage gauges, ultrasonic and water-pressure, were also deployed. On-site manual measurement of the acidity (pH) of the river water was also carried out at the same locations.
Appendix 2: Two-phase equations of Pitman and Le (2005) model
(1) Mass conservation for whole system
$${{\partial }_{t}h+\partial }_{x}\left(h(\varphi {v}^{x}+\left(1-\varphi \right){u}^{x})\right)+{\partial }_{y}\left(h(\varphi {v}^{y}+(1-\varphi ){u}^{y})\right)=0.$$
(2) Mass conservation of solid phase
$${\partial }_{t}\left(h\varphi \right)+{\partial }_{x}\left(h\varphi {v}^{x}\right)+{\partial }_{y}\left(h\varphi {v}^{y}\right)=0.$$
(3) Momentum conservation of solid phase in x direction
$$\begin{aligned} & {{\partial }_{t}}\left( h\varphi {{v}^{x}} \right)+{{\partial }_{x}}\left( h\varphi {{v}^{x2}} \right)+{{\partial }_{y}}\left( h\varphi {{v}^{x}}{{v}^{y}} \right) \\ & \quad =-\frac{1}{2}\epsilon \left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right){{\partial }_{x}}\left( {{\alpha }_{xx}}{{h}^{2}}\varphi \left( -{{g}^{z}} \right) \right)-\frac{1}{2}\epsilon \left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right){{\partial }_{y}}\left( {{\alpha }_{xy}}{{h}^{2}}\varphi \left( -{{g}^{z}} \right) \right) \\ & \quad \quad -\frac{1}{2}\epsilon \frac{{{\rho }^{f}}}{{{\rho }^{s}}}\varphi {{\partial }_{x}}\left( {{h}^{2}}\left( -{{g}^{z}} \right) \right)+\left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right)\left( -\epsilon {{\alpha }_{xx}}{{\partial }_{x}}b-\epsilon {{\alpha }_{xy}}{{\partial }_{y}}b+{{\alpha }_{xz}} \right)h\varphi \left( -{{g}^{z}} \right) \\ & \quad \quad -\epsilon \frac{{{\rho }^{f}}}{{{\rho }^{s}}}h\varphi \left( -{{g}^{z}} \right){{\partial }_{x}}b+\left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right)\frac{h\left( 1-\varphi \right)\varphi }{{{v}^{T}}{{\left( 1-\varphi \right)}^{m}}}\left( {{u}^{x}}-{{v}^{x}} \right)+h\varphi {{g}^{x}} \\ \end{aligned}$$
(4) Momentum conservation of solid phase in y direction
$$\begin{aligned} & {{\partial }_{t}}\left( h\varphi {{v}^{y}} \right)+{{\partial }_{x}}\left( h\varphi {{v}^{x}}{{v}^{y}} \right)+{{\partial }_{y}}\left( h\varphi {{v}^{y2}} \right) \\ & \quad =-\frac{1}{2}\epsilon \left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right){{\partial }_{x}}\left( {{\alpha }_{xy}}{{h}^{2}}\varphi \left( -{{g}^{z}} \right) \right) \\ & \quad \quad -\frac{1}{2}\epsilon \left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right){{\partial }_{y}}\left( {{\alpha }_{yy}}{{h}^{2}}\varphi \left( -{{g}^{z}} \right) \right)-\frac{1}{2}\epsilon \frac{{{\rho }^{f}}}{{{\rho }^{s}}}\varphi {{\partial }_{y}}\left( {{h}^{2}}\left( -{{g}^{z}} \right) \right) \\ & \quad \quad +\left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right)\left( -\epsilon {{\alpha }_{xy}}{{\partial }_{x}}b-\epsilon {{\alpha }_{yy}}{{\partial }_{y}}b+{{\alpha }_{yz}} \right)h\varphi \left( -{{g}^{z}} \right) \\ & \quad \quad -\epsilon \frac{{{\rho }^{f}}}{{{\rho }^{s}}}h\varphi \left( -{{g}^{z}} \right){{\partial }_{y}}b+\left( 1-\frac{{{\rho }^{f}}}{{{\rho }^{s}}} \right)\frac{h\left( 1-\varphi \right)\varphi }{{{v}^{T}}{{\left( 1-\varphi \right)}^{m}}}\left( {{u}^{y}}-{{v}^{y}} \right)+h\varphi {{g}^{y}} \\ \end{aligned}$$
(5) Momentum conservation of fluid phase in x direction
$$\partial_{t} \left( {hu^{x} } \right) + \partial_{x} \left( {hu^{x} u^{x} } \right) + \partial_{y} \left( {hu^{x} u^{y} } \right) = - \frac{1}{2}\epsilon\partial_{x} h^{2} \left( { - g^{z} } \right) - \left( {\frac{{1 - \frac{{\rho^{f} }}{{\rho^{s} }}}}{{\frac{{\rho^{f} }}{{\rho^{s} }}}}} \right)\frac{h\varphi }{{v^{T} \left( {1 - \varphi } \right)^{m} }}\left( {u^{x} - v^{x} } \right) + hg^{x} .$$
(6) Momentum conservation of fluid phase in y direction
$$\partial_{t} \left( {hu^{y} } \right) + \partial_{x} \left( {hu^{x} u^{y} } \right) + \partial_{y} \left( {hu^{y} u^{y} } \right) = - \frac{1}{2}\epsilon\partial_{y} h^{2} \left( { - g^{z} } \right) - \left( {\frac{{1 - \frac{{\rho^{f} }}{{\rho^{s} }}}}{{\frac{{\rho^{f} }}{{\rho^{s} }}}}} \right)\frac{h\varphi }{{v^{T} \left( {1 - \varphi } \right)^{m} }}\left( {u^{y} - v^{y} } \right) + hg^{y} .$$
(7) Momentum conservation of solid phase in z direction
$$T^{szz} = \left( {1 - \frac{{\rho^{f} }}{{\rho^{s} }}} \right)\varphi \left( { - g^{z} } \right)\frac{h}{2}.$$
(8) Momentum conservation of fluid phase in z direction
$$T^{fzz} = \left( { - g^{z} } \right)\frac{h}{2}.$$
Appendix 3: Model parameters and their notations
\(h\) Depth averaged thickness
\(\varphi\) Solid volume fraction
\(\rho^{s}\), \(\rho^{f}\) Constant specific density for solid and fluid phase, respectively
\(v^{T}\) Terminal velocity of a typical solid particle falling in the fluid under gravity
\(g^{z}\) Gravity
\(\epsilon = \frac{H}{L}\) \(H\): typical flow thickness, \(L\): horizontal length scale
\(v = (v^{x}\) , \(v^{y} )\) Depth averaged velocity of solid components in x and y direction, respectively
\(u = (u^{x}\) , \(u^{y}\) ) Depth averaged velocity of fluid components in x and y direction, respectively
\(b\) Fixed basal surface
\(\alpha_{*z}\) Basal shear stresses \(T^{s*z}\) and the normal stress \(T^{szz}\)
\({{T}^{s*z}}=-\frac{{{v}^{*}}}{\left\| v \right\|}\text{tan}\left( {{\phi }_{bed}} \right){{T}^{szz}}\equiv {{\alpha }_{*z}}{{T}^{szz}}\)
where * can be either x or y, and the velocity ratio determines the force opposing the motion in the *-direction
\(\alpha_{**}\) The diagonal stresses \(T^{s**}\) and the normal solid stress \(T^{szz}\)
\(T^{s**} = k_{ap} T^{szz} = \alpha_{**} T^{szz}\)
where the same index x or y is used in both *s
\(\alpha_{xy}\) \(T^{sxy} = - {\text{sgn}}\left( {\partial_{y} v^{x} } \right){\text{sin}}\left( {\phi_{int} } \right)k_{ap} T^{szz} = \alpha_{xy} T^{szz}\)
where the sgn function ensures that friction opposes straining in the (x,y)-plane
\(k_{ap}\) The earth pressure coefficient, which is defined as
\(k_{ap} = 2\frac{{1 \pm \left[ {1 - {\text{cos}}^{2} \left( {\phi_{int} } \right)\left[ {1 + {\text{tan}}^{2} \left( {\phi_{bed} } \right)} \right]} \right]^{1/2} }}{{{\text{cos}}^{2} \left( {\phi_{int} } \right)}} - 1\)
\(\phi_{int}\), \(\phi_{bed}\) Internal friction angle and bed friction angle
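For a quick numerical feel for the earth pressure coefficient defined above, the sketch below evaluates $k_{ap}$ for a bed friction angle of 25° (the value used in the two-phase runs) and an assumed internal friction angle of 35°; the internal friction angle here is an illustrative assumption, not a value quoted from Table 1.

```python
import numpy as np

# Evaluate the earth pressure coefficient k_ap from Appendix 3. The +/- sign selects the
# passive (+) or active (-) branch. Note the square root is only real when the internal
# friction angle is at least as large as the bed friction angle.

def earth_pressure_coefficient(phi_int_deg, phi_bed_deg, passive=True):
    phi_int = np.radians(phi_int_deg)
    phi_bed = np.radians(phi_bed_deg)
    sign = 1.0 if passive else -1.0
    root = np.sqrt(1.0 - np.cos(phi_int) ** 2 * (1.0 + np.tan(phi_bed) ** 2))
    return 2.0 * (1.0 + sign * root) / np.cos(phi_int) ** 2 - 1.0

# Assumed angles: phi_int = 35 deg (illustrative), phi_bed = 25 deg (two-phase runs above).
print(earth_pressure_coefficient(35.0, 25.0, passive=True))   # ~3.3
print(earth_pressure_coefficient(35.0, 25.0, passive=False))  # ~0.7
```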
Kataoka, K.S., Tsunematsu, K., Matsumoto, T. et al. Crisis hazard assessment for snow-related lahars from an unforeseen new vent eruption: the 2018 eruption of Kusatsu-Shirane volcano, Japan. Earth Planets Space 73, 220 (2021). https://doi.org/10.1186/s40623-021-01522-0
Lahar
Flow simulation
Titan2D
Snow-clad volcano
Crisis hazard map
Kusatsu-Shirane volcano
5. Volcanology
Understanding phreatic eruptions - recent observations of Kusatsu-Shirane volcano and equivalents | CommonCrawl |
Jakob Schwichtenberg in Miscellaneous | 4. March 2018
Why there is rarely only one viable explanation
"Nature is a collective idea, and, though its essence exist in each individual of the species, can never in its perfection inhabit a single object." ―Henry Fuseli
I recently came across a WIRED story titled "There's no one way to explain how flying works". The author published a video in which he explained how airplanes fly. Afterward, he got attacked in the comments because he didn't mention "Bernoulli's principle", which is the conventional way to explain how flying works.
Was his explanation wrong? No, as he emphasizes himself in the follow-up article mentioned above.
So is the conventional "Bernoulli's principle" explanation wrong? Again, the answer is no.
It's not just for flying that there are lots of absolutely equally valid ways to explain something. In fact, such a situation is more common than otherwise.
The futility of psychology in economics
Another good example is economics. Economists try to produce theories that describe the behavior of large groups of people. In this case, the individual humans are the fundamental building blocks and a more fundamental theory would explain economic phenomena in terms of how humans act in certain situations.
An economic phenomenon that we can observe is that stock prices move randomly most of the time. How can we explain this?
So let's say I'm an economist and I propose a model that explains the random behavior of stock prices. My model is stunningly simple: humans are crazy and unpredictable. Everyone does what he feels is right. Some buy because they feel the price is cheap. Others buy because they think the same price is quite high. Humans act randomly and this is why stock prices are random. I call my fundamental model that explains economic phenomena in terms of individual random behavior the theory of the "Homo randomicus".
This hypothesis certainly makes sense and we can easily test it in experiments. There are numerous experiments that exemplify how irrational humans act most of the time. A famous one is the following "loss aversion" experiment:
Participants were given \$50. Then they were asked if they would rather keep \$30 or flip a coin to decide if they can keep all \$50 or lose it all. The majority decided to avoid gambling and simply keep the \$30.
However, the experimenters then changed the setup a bit. Again the participants were given \$50, but this time they were asked if they would rather lose \$20 or flip a coin to decide if they could keep all \$50 or lose it all. This time the majority decided to gamble.
This behavior certainly makes no sense. The rules are exactly the same but only framed differently. The experiment, therefore, proves that humans act irrationally.
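As a quick sanity check (our own arithmetic, not part of the original experiment write-up), the two framings offer numerically identical choices: keeping \$30 of the \$50 is the same outcome as losing \$20 of it, and the gamble is the same coin flip in both cases.

```python
# Both framings of the experiment above describe the same pair of options.
endowment = 50
safe_frame_1 = 30                      # "keep $30"
safe_frame_2 = endowment - 20          # "lose $20" -> also $30
gamble_expected = 0.5 * 50 + 0.5 * 0   # coin flip: keep all $50 or lose it all

print(safe_frame_1 == safe_frame_2)    # True: the sure option is identical in both framings
print(gamble_expected)                 # 25.0: the gamble is identical (and worth less on average)
```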
So my model makes sense and is backed up by experiments. End of the story right?
Not so fast. Shortly after my proposal another economist comes around and argues that he has a much better model. He argues that humans act perfectly rationally all the time and use all the available information to make a decision. In other words, humans act as "Homo oeconomicus". With a bit of thought it is easy to deduce from this model that stock prices move randomly.
This line of thought was first proposed by Louis Bachelier and you can read a nice excerpt that explains it from the book "The Physics of Wall Street" by James Owen Weatherall by clicking on the box below.
Why stocks move randomly even though people act rational
But why would you ever assume that markets move randomly? Prices go up on good news; they go down on bad news. There's nothing random about it. Bachelier's basic assumption, that the likelihood of the price ticking up at a given instant is always equal to the likelihood of its ticking down, is pure bunk. This thought was not lost on Bachelier. As someone intimately familiar with the workings of the Paris exchange, Bachelier knew just how strong an effect information could have on the prices of securities. And looking backward from any instant in time, it is easy to point to good news or bad news and use it to explain how the market moves. But Bachelier was interested in understanding the probabilities of future prices, where you don't know what the news is going to be. Some future news might be predictable based on things that are already known. After all, gamblers are very good at setting odds on things like sports events and political elections — these can be thought of as predictions of the likelihoods of various outcomes to these chancy events. But how does this predictability factor into market behavior?
Bachelier reasoned that any predictable events would already be reflected in the current price of a stock or bond. In other words, if you had reason to think that something would happen in the future that would ultimately make a share of Microsoft worth more — say, that Microsoft would invent a new kind of computer, or would win a major lawsuit — you should be willing to pay more for that Microsoft stock now than someone who didn't think good things would happen to Microsoft, since you have reason to expect the stock to go up. Information that makes positive future events seem likely pushes prices up now; information that makes negative future events seem likely pushes prices down now.
But if this reasoning is right, Bachelier argued, then stock prices must be random. Think of what happens when a trade is executed at a given price. This is where the rubber hits the road for a market. A trade means that two people — a buyer and a seller — were able to agree on a price. Both buyer and seller have looked at the available information and have decided how much they think the stock is worth to them, but with an important caveat: the buyer, at least according to Bachelier's logic, is buying the stock at that price because he or she thinks that in the future the price is likely to go up. The seller, meanwhile, is selling at that price because he or she thinks the price is more likely to go down. Taking this argument one step further, if you have a market consisting of many informed investors who are constantly agreeing on the prices at which trades should occur, the current price of a stock can be interpreted as the price that takes into account all possible information. It is the price at which there are just as many informed people willing to bet that the price will go up as are willing to bet that the price will go down. In other words, at any moment, the current price is the price at which all available information suggests that the probability of the stock ticking up and the probability of the stock ticking down are both 50%. If markets work the way Bachelier argued they must, then the random walk hypothesis isn't crazy at all. It's a necessary part of what makes markets run.
– Quote from "The Physics of Wall Street" by James Owen Weatherall
Certainly, it wouldn't take long until a third economist comes along and proposes yet another model. Maybe in his model humans act rationally 50% of the time and randomly 50% of the time. He could argue that just like photons sometimes act like particles and sometimes like waves, humans sometimes act as a "Homo oeconomicus" and sometimes as a "Homo randomicus". A fitting name for his model would be the theory of the "Homo quantumicus".
Which model is correct?
Before tackling this question it is instructive to talk about yet another example. Maybe it's just that flying is so extremely complicated and that humans are so strange that we end up in the situation where we have multiple equally valid explanations for the same phenomenon?
The futility of microscopic theories that explain the ideal gas law
Another great example is the empirical law that the pressure of an ideal gas is inversely proportional to the volume:
$$ P \propto \frac{1}{V} $$
This means that if we have a gas like air in some bottle and then make the bottle smaller, the pressure inside the bottle increases. Conversely, if we have a bottle and increase the pressure, the gas will expand the volume if possible. It's important that the relationship is exactly as written above and not something like $ P \propto \frac{1}{V^2}$ or $ P \propto \frac{1}{V^{1.3}}$. How can we explain this?
It turns out there are lots of equally valid explanation.
The first one was provided by Boyle (1660), who compared the air particles to coiled-up balls of wool or springs. These naturally resist compression and expand if they are given more space. Newton quantified this idea and proposed a repelling force between nearest neighbors whose strength is inversely proportional to the distance between them. He was able to show that this explains the experimental observation $ P \propto \frac{1}{V} $ nicely.
However, some time afterward he showed that the same law can be explained if we consider air as a swarm of almost free particles, which only attract each other when they come extremely close to each other. Formulated differently, he explained $ P \propto \frac{1}{V} $ by proposing an attractive short-ranged force. This is almost exactly the opposite of the explanation above, where he proposed a repulsive force as an explanation.
Afterwards other famous physicists started to explain $ P \propto \frac{1}{V} $. For example, Bernoulli proposed a model where air consists of hard spheres that collide elastically all the time. Maxwell proposed a model with an inverse power law, similar to Newton's first proposal above, but preferred a fifth power law instead of a first power law.
The story continues. In 1931 Lennard–Jones took the now established quantum–mechanical electrical structure of orbitals into account and proposed a seventh-power attractive law.
Science isn't about opinions. We do experiments and test our hypotheses. That's how we find out which hypothesis is favored over a competing one. While we can never achieve 100% certainty, it's possible to get an extremely high, quantifiable confidence in a hypothesis. So how can it be that there are multiple equally valid explanations for the same phenomenon?
Renormalization
There is a great reason why and it has to do with the following law of nature:
Details become less important if we zoom out and look at something from a distance.
For the laws of ideal gases this means not only that there are lots of possible explanations, but that almost any microscopic model works. You can use an attractive force, you can use a repulsive force or even no force at all (= particles that only collide with the container walls). You can use a power law or an exponential law. It really doesn't matter.
Your microscopic model doesn't really matter as long as we are only interested in something macroscopic like air. If we zoom in, all these microscopic models look completely different. The individual air particles move and collide completely differently. But if we zoom out and only look at the properties of the whole set of air particles as a gas, these microscopic details become unimportant.
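To make this concrete, here is a small, self-contained numerical check (our own illustration, not from the original post) using the crudest model mentioned above: particles that feel no forces and only bounce elastically off the container walls. Even this model reproduces $P V \approx \text{const}$.

```python
import numpy as np

# Toy check: non-interacting particles bouncing between two walls. We count the momentum
# transferred to one wall over a fixed time for two container volumes and compare P*V.

rng = np.random.default_rng(1)
n, mass, t_total, dt = 2000, 1.0, 50.0, 0.01

def pressure(box_side):
    x = rng.uniform(0, box_side, n)   # x-coordinates only; walls at x=0 and x=box_side
    v = rng.normal(0, 1.0, n)         # same velocity distribution for every volume
    impulse = 0.0
    for _ in range(int(t_total / dt)):
        x += v * dt
        hit_right = x > box_side
        hit_left = x < 0.0
        impulse += 2.0 * mass * np.abs(v[hit_right]).sum()   # momentum given to the right wall
        v[hit_right | hit_left] *= -1.0                      # elastic bounce
        x = np.clip(x, 0.0, box_side)
    area, volume = box_side ** 2, box_side ** 3
    return impulse / (t_total * area), volume

for side in (5.0, 10.0):
    p, vol = pressure(side)
    print(f"V = {vol:7.1f}  P = {p:.4f}  P*V = {p * vol:.1f}")   # P*V roughly constant
```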
The law $ P \propto \frac{1}{V} $ is not the result of some microscopic model. None of the models mentioned above is the correct one. Instead, $ P \propto \frac{1}{V} $ is a generic macroscopic expression of certain conservation laws and therefore of symmetries.
Analogously it is impossible to incorporate the individual psychology of each human into an economic theory. When we describe the behavior of large groups of people we must gloss over many details. As a result, things that we observe in economics can be explained by many equally valid "microscopic" models.
You can start with the "Homo oeconomicus", the "Homo randomicus" or something in between. It really doesn't matter since we always end up with the same result: stock markets move randomly. Most importantly, the pursuit of the one correct more fundamental theory is doomed to fail, since all the microscopic details get lost anyway when we zoom out.
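As an illustration of this point (our own toy example, not from the original post), two very different "microscopic" trader models produce macroscopically indistinguishable price behavior: in both cases the price increments are essentially uncorrelated, i.e. a random walk.

```python
import numpy as np

# Model A ("Homo randomicus"): each trader buys (+1) or sells (-1) at random.
# Model B ("Homo oeconomicus"): all traders react rationally to an unpredictable news shock;
# the news itself is what is random. Both give random-walk-like prices.

rng = np.random.default_rng(0)
n_steps, n_traders = 5000, 200

orders_a = rng.choice([-1, 1], size=(n_steps, n_traders))
price_a = np.cumsum(orders_a.sum(axis=1))

news = rng.normal(size=n_steps)
price_b = np.cumsum(n_traders * news)

def lag1_autocorr(price):
    dp = np.diff(price)
    return np.corrcoef(dp[:-1], dp[1:])[0, 1]

print(lag1_autocorr(price_a))  # ~0: increments are unpredictable
print(lag1_autocorr(price_b))  # ~0: increments are unpredictable
```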
This realization has important implications for many parts of science and especially for physics.
What makes theoretical physics difficult?
The technical term for the process of "zooming out" is renormalization. We start with a microscopic theory and zoom out by renormalizing it.
The set of transformations which describe the "zooming out" process are called the renormalization group.
Now the crux is that this renormalization group is not really a group, but a semi-group. The difference between a group and a semi-group is that semi-group elements do not necessarily have a unique inverse. So while we can start with a microscopic theory and zoom out using the renormalization group, we can't do the opposite. We can't start with a macroscopic theory and zoom in to get the correct microscopic theory. In general, there are many, if not infinitely many, theories that yield exactly the same macroscopic theory.
This is what makes physics so difficult and why physics is currently in a crisis.
We have a nice model that explains the behavior of elementary particles and their interactions. This model is called the "standard model". However, there are lots of things left unexplained by it. For example, we would like to understand what dark matter is. In addition, we would like to understand why the standard model is the way it is. Why aren't the fundamental interactions described by different equations?
Unfortunately, there are infinitely many microscopic models that yield the standard model as a "macroscopic" theory, i.e. when we zoom out. There are infinitely many ways to add one or several new particles to the standard model which explain dark matter, but become invisible at present-day colliders like the LHC. There are infinitely many Grand Unified Theories, that explain why the interactions are the way they are.
We simply can't decide which one is correct without help from experiments.
The futility of arguing over fundamental models
Every time we try to explain something in terms of more fundamental building block, we must be prepared that there are many equally valid models and ideas.
The moral of the whole story is that explanations in terms of a more fundamental model are often not really important. It makes no sense to argue about competing models if you can't differentiate between them when you zoom out. Instead, we should focus on the universal features that survive the "zooming out" procedure. For each scale (think: planets, humans, atoms, quarks, …) there is a perfect theory that describes what we observe. However, there is no unique more fundamental theory that explains this theory. While we can perform experiments to check which of the many fundamental theories is more likely to be correct, this doesn't help us that much with our more macroscopic theory which remains valid. For example, a perfect theory of human behavior will not give us a perfect theory of economics. Analogously, the standard model will remain valid, even when the correct theory of quantum gravity will be found.
The search for the one correct fundamental model can turn into a disappointing endeavor, not only in physics but everywhere, and it often doesn't make sense to argue about more fundamental models that explain what we observe.
PS: An awesome book to learn more about renormalization is "The Devil in the Details" by Robert Batterman. A great and free course to learn more about it in a broader context (computer science, sociology, etc.) is "Introduction to Renormalization" by Simon DeDeo.
Jabr Alnoaimi
This is a great article. I am really inspired and feel liberated by it in some way. Many times I felt uncomfortable about many explanations in the humanities and in the soft sciences which look rational and sensible; I thought it was possible to present different explanations, but uncomfortably accepted what was given to me because it is accepted by the experts in the field. With your Boyle's law example (I had never known of all those different attempts at microscopic explanations) you have shown me how I can take many of the unverified explanations with an open mind.
We need to distinguish between two sets of explanations; first, macroscopic phenomena explained in terms of macroscopic concepts such as the flight of airplanes and second, macroscopic phenomena explained in terms of microscopic concepts such as Boyle's law in terms of the laws governing the gas molecules.
For the first kind of explanations, we need to distinguish between the phenomenon such as the boiling of a liquid and the process which leads to the phenomenon. For example, the same phenomenon or effect can be produced by two different processes such as one can boil a liquid by raising its temperature while keeping the pressure constant or by decreasing the pressure and keeping the temperature constant or by both means simultaneously.
It is also possible to explain the same phenomenon and process in different equivalent ways such as a body sliding down a frictionless inclined plane. It can be looked at in terms of the effect of forces on the body or the conservation of mechanical energy.
In the article, the flight of airplanes is explained as being caused by two different processes, namely by thrust or by Bernoulli's principle. Now the two are not equivalent mechanically. But we must have one right explanation for a process, otherwise we will not have one true classical mechanics or one true model, namely Newton's laws of motion. I believe that the two processes are simultaneously participating in the phenomenon of the flight of an airplane (as in the example of boiling a liquid above) with each contributing differently at different stages of the flight. My conjecture is that thrust would contribute more at the take off and less when the plane is flying smoothly in a straight line.
Finally, in my opinion, it remains a useful endeavour to try and guess the kind of microscopic behaviour that leads to macroscopic behaviour even if many possible microscopic behaviours lead to the same results because, 1- may be at intermediate scales or at certain critical conditions, the various models would predict different behaviours, 2- the process also encourages research in the fine structure of the system for distinguishing between the explanations, and 3- the exercise is also good for training students. There could also be more good reasons.
1. the sentence "Conversely, if we have a bottle and increase the pressure, the gas will expand the volume if possible" may be confusing for the non-physicist by giving the impression that the volume of a gas will increase if the pressure is increased.
2. The only model Maxwell proposed for the ideal gas, as far as I know, was freely moving particles colliding elastically with the walls of a container. Was his model of an inverse power law, with the fifth power of the distance between two particles, one of his first attempts at explaining Boyle's law?
Beginner's Guide to Decision Trees for Supervised Machine Learning
In this article we are going to consider a statistical machine learning method known as a Decision Tree. Decision Trees (DTs) are a supervised learning technique that predicts values of responses by learning decision rules derived from features. They can be used in both a regression and a classification context. For this reason they are sometimes also referred to as Classification And Regression Trees (CART).
DT/CART models are an example of a more general area of machine learning known as adaptive basis function models. These models learn the basis functions directly from the data, rather than having them prespecified, as in some other basis expansions. However, unlike linear regression, these models are not linear in the parameters and so we are only able to compute a locally optimal maximum likelihood estimate (MLE) for the parameters[1].
DT/CART models work by partitioning the feature space into a number of simple rectangular regions, divided up by axis parallel splits. In order to obtain a prediction for a particular observation, the mean or mode of the training observations' responses, within the partition that the new observation belongs to, is used.
One of the primary benefits of using a DT/CART is that, by construction, it produces interpretable if-then-else decision rulesets, which are akin to graphical flowcharts.
Their main disadvantage lies in the fact that they are often uncompetitive with other supervised techniques such as support vector machines or deep neural networks in terms of prediction accuracy.
However they can become extremely competitive when used in an ensemble method such as with bootstrap aggregation ("bagging"), Random Forests or boosting.
In quantitative finance ensembles of DT/CART models are used in forecasting, either future asset prices/directions or liquidity of certain instruments. In future articles we will build trading strategies based off these methods.
Mathematical Overview
Under a probabilistic adaptive basis function specification the model $f({\bf x})$ is given by[1]:
\begin{eqnarray} f({\bf x}) = \mathbb{E}(y \mid {\bf x}) = \sum^{M}_{m=1} w_m \phi({\bf x}; {\bf v}_m) \end{eqnarray}
Where $w_m$ is the mean response in a particular region, $R_m$, and ${\bf v}_m$ represents how each variable is split at a particular threshold value. These splits divide the feature space, a subset of $\mathbb{R}^p$, into $M$ separate "hyperblock" regions.
Decision Trees for Regression
Let us consider an abstract example of a regression problem with two feature variables ($X_1$, $X_2$) and a numerical response $y$. This will allow us to easily visualise the nature of the partitioning carried out by the tree.
In the following figure we can see a pre-grown tree for this particular example:
A Decision Tree with six separate regions
How does this correspond to a partitioning of the feature space? The following figure depicts a subset of $\mathbb{R}^2$ that contains our example data. Notice how the domain is partitioned using axis-parallel splits. That is, every split of the domain is aligned with one of the feature axes:
The resulting partition of the subset of $\mathbb{R}^2$ into six regional "blocks"
The concept of axis parallel splitting generalises straightforwardly to dimensions greater than two. For a feature space of size $p$, a subset of $\mathbb{R}^p$, the space is divided into $M$ regions, $R_m$, each of which is a $p$-dimensional "hyperblock".
We have yet to discuss how such a tree is "grown" or "trained". The following section outlines the algorithm for carrying this out.
Creating a Regression Tree and Making Predictions
The basic heuristic for creating a DT is as follows:
Given $p$ features, partition the $p$-dimensional feature space (a subset of $\mathbb{R}^p$) into $M$ mutually distinct regions that fully cover the subset of feature space and do not overlap. These regions are given by $R_1,...,R_M$.
Any new observation that falls into a particular partition $R_m$ has the estimated response given by the mean of all training observations within that partition, denoted by $w_m$.
However, this process doesn't actually describe how to form the partition in an algorithmic manner! For that we need to use a technique known as Recursive Binary Splitting (RBS)[2].
Recursive Binary Splitting
Our goal for this algorithm is to minimise some form of error criterion. In this particular instance we wish to minimise the Residual Sum of Squares (RSS), an error measure also used in linear regression settings. The RSS, in the case of a partitioned feature space with $M$ partitions is given by:
\begin{eqnarray} \text{RSS} = \sum^{M}_{m=1} \sum_{i \in R_m} ( y_i - \hat{y}_{R_m} )^2 \end{eqnarray}
First we sum across all of the partitions of the feature space (the first summation sign) and then we sum across all training observations (indexed by $i$) that fall in a particular partition (the second summation sign). We then take the squared difference between the response $y_i$ of a particular observation and the mean response $\hat{y}_{R_m}$ of the training observations within partition $m$.
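To make the formula concrete, here is a minimal NumPy sketch (illustrative only; the array names and the integer `region_ids` encoding of the partition are assumptions, not part of any particular library) that evaluates the RSS for a given assignment of training observations to regions:

```python
import numpy as np

def region_rss(y, region_ids):
    """RSS of a piecewise-constant fit.

    y          : 1-D array of training responses
    region_ids : 1-D integer array; region_ids[i] is the index of the
                 partition R_m that observation i falls into
    """
    rss = 0.0
    for m in np.unique(region_ids):
        y_m = y[region_ids == m]                 # responses inside region R_m
        rss += np.sum((y_m - y_m.mean()) ** 2)   # squared deviations from the region mean
    return rss
```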
Unfortunately it is too computationally expensive to consider all possible partitions of the feature space into $M$ rectangles (in fact the problem is NP-complete). Hence we must use a less computationally intensive, but more sophisticated search approach. This is where RBS comes in.
RBS approaches the problem by beginning at the top of the tree and splitting it into two branches, which creates a partition of two spaces. It considers every feature and every candidate threshold for this first split at the top of the tree and chooses the combination that minimises the (current) RSS.
At this point the tree creates a new branch in a particular partition and carries out the same procedure, that is, evaluates the RSS at each split of the partition and chooses the best.
This makes it a greedy algorithm, meaning that it carries out the evaluation for each iteration of the recursion, rather than "looking ahead" and continuing to branch before making the evaluations. It is this "greedy" nature of the algorithm that makes it computationally feasible and thus practical for use[1], [2].
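As a rough illustration of the greedy step itself (a sketch, assuming a NumPy feature matrix `X` and response vector `y`; production implementations sort each feature once and update the sums incrementally rather than recomputing the RSS from scratch), the search for the single best axis-parallel split looks like this:

```python
import numpy as np

def best_split(X, y):
    """Exhaustive search for the axis-parallel split with the lowest RSS."""
    best_rss, best_feature, best_threshold = np.inf, None, None
    for j in range(X.shape[1]):                  # every feature...
        for t in np.unique(X[:, j]):             # ...and every candidate threshold
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue                         # reject degenerate splits
            rss = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
            if rss < best_rss:
                best_rss, best_feature, best_threshold = rss, j, t
    return best_rss, best_feature, best_threshold
```

Growing the tree then amounts to applying `best_split` recursively to the two resulting subsets until a stopping criterion is met.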
At this stage we haven't outlined when this procedure actually terminates. There are a few criteria that we could consider, including limiting the maximum depth of the tree, ensuring sufficient training examples in each region and/or ensuring that the regions are sufficiently homogeneous such that the tree is relatively "balanced".
However, as with all supervised machine learning methods, we need to constantly be aware of overfitting. This motivates the concept of "pruning" the tree.
Pruning The Tree
Because of the ever-present worry of overfitting and the bias-variance tradeoff we need a means of adjusting the tree splitting process such that it can generalise well to test sets.
Since it is too costly to use cross-validation directly on every possible sub-tree combination while growing the tree, we need an alternative approach that still provides a good test error rate.
The usual approach is to grow the full tree to a prespecified depth and then carry out a procedure known as "pruning". One approach is called cost-complexity pruning and is described in detail in [2] and [3]. The basic idea is to introduce an additional tuning parameter, denoted by $\alpha$ that balances the depth of the tree and its goodness of fit to the training data. The approach used is similar to the LASSO technique developed by Tibshirani.
The details of the tree pruning will not concern us here as we can make use of Scikit-Learn to help us with this aspect.
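As a preview, the snippet below sketches cost-complexity pruning with Scikit-Learn's `ccp_alpha` parameter (available in recent versions of the library); the synthetic data and the simple cross-validated choice of $\alpha$ are illustrative assumptions rather than a recommended workflow:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 2))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=200)      # noisy synthetic response

# Candidate values of alpha along the cost-complexity pruning path of the full tree
path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X, y)

# Choose the alpha with the best cross-validated score, then refit the pruned tree
scores = [cross_val_score(DecisionTreeRegressor(ccp_alpha=a, random_state=0), X, y).mean()
          for a in path.ccp_alphas]
best_alpha = path.ccp_alphas[int(np.argmax(scores))]
pruned_tree = DecisionTreeRegressor(ccp_alpha=best_alpha, random_state=0).fit(X, y)
```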
Decision Trees for Classification
In this article we have concentrated almost exclusively on the regression case, but decision trees work equally well for classification, hence the "C" in CART models!
The only difference, as with all classification regimes, is that we are now predicting a categorical, rather than continuous, response value. In order to actually make a prediction for a categorical class we have to instead use the mode of the training region to which an observation belongs, rather than the mean value. That is, we take the most commonly occurring class value and assign it as the response of the observation.
In addition we need to consider alternative criteria for splitting the trees as the usual RSS score isn't applicable in the categorical setting. There are three that we will consider, which include the "hit rate", the Gini Index and Cross-Entropy[1], [2], [3].
Classification Error Rate/Hit Rate
Rather than seeing how far a numerical response is away from the mean value, as in the regression setting, we can instead define the "hit rate" as the fraction of training observations in a particular region that don't belong to the most widely occurring class. That is, the error is given by[1], [2]:
\begin{eqnarray} E = 1 - \max_{c} (\hat{\pi}_{mc}) \end{eqnarray}
Where $\hat{\pi}_{mc}$ represents the fraction of training data in region $R_m$ that belong to class $c$.
Gini Index
The Gini Index is an alternative error metric that is designed to show how "pure" a region is. "Purity" in this case means how much of the training data in a particular region belongs to a single class. If a region $R_m$ contains data that is mostly from a single class $c$ then the Gini Index value will be small:
\begin{eqnarray} G = \sum_{c=1}^C \hat{\pi}_{mc} (1 - \hat{\pi}_{mc}) \end{eqnarray}
Cross-Entropy/Deviance
A third alternative, which is similar to the Gini Index, is known as the Cross-Entropy or Deviance:
\begin{eqnarray} D = - \sum_{c=1}^C \hat{\pi}_{mc} \text{log} \hat{\pi}_{mc} \end{eqnarray}
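Written directly from the three formulas above, a small sketch of the impurity measures for a single region might look as follows (the argument `p` is the vector of class proportions $\hat{\pi}_{mc}$; the function names are illustrative):

```python
import numpy as np

def hit_rate(p):
    """Classification error rate: one minus the largest class proportion."""
    return 1.0 - np.max(p)

def gini(p):
    """Gini index: small when a single class dominates the region."""
    return np.sum(p * (1.0 - p))

def cross_entropy(p):
    """Deviance: small when the region is nearly pure (0*log 0 treated as 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Example: a region whose training data is 90% class A and 10% class B
p = np.array([0.9, 0.1])
print(hit_rate(p), gini(p), cross_entropy(p))
```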
This motivates the question as to which error metric to use when growing a classification tree. I will state here that the Gini Index and Deviance are used more often than the Hit Rate, in order to maximise prediction accuracy. We won't dwell on the reasons for this, but a good discussion can be found in the books provided in the References section below.
In future articles we will utilise the Scikit-Learn library to perform classification tasks and assess these error measures in order to determine how effective our predictions are on unseen data.
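For readers who want a preview, a classification tree can already be grown and scored in a few lines; the iris dataset below is only a stand-in for the quant finance data considered later:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# criterion="gini" (the default) or "entropy" selects the splitting measure discussed above
clf = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```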
Advantages and Disadvantages of Decision Trees
As with all machine learning methods there are pros and cons to using DT/CARTs over other models:
Advantages:
DT/CART models are easy to interpret, as "if-else" rules
The models can handle categorical and continuous features in the same data set
The method of construction for DT/CART models means that feature variables are automatically selected, rather than having to use subset selection or similar
The models are able to scale effectively on large datasets
Disadvantages:
Poor relative prediction performance compared to other ML models
DT/CART models suffer from instability, which means they are very sensitive to small changes in the feature space. In the language of the bias-variance trade-off, they are high variance estimators.
While DT/CART models themselves suffer from poor prediction performance they are extremely competitive when utilised in an ensemble setting, via bootstrap aggregation ("bagging"), Random Forests or boosting.
In subsequent articles we will use the Decision Tree module of the Python scikit-learn library for classification and regression purposes on some quant finance datasets.
In addition we will show how ensembles of DT/CART models can perform extremely well for certain quant finance datasets.
A gentle introduction to tree-based methods can be found in James et al (2013), which covers the basics of both DTs and their associated ensemble methods. A more rigorous account, pitched at the late undergraduate/early graduate mathematics/statistics level, can be found in Hastie et al (2009). Murphy (2012) provides a discussion of Adaptive Basis Function Models, of which DT/CART models are a subset. The book covers both the frequentist and Bayesian approach to these models. For the practitioner working on "real world" data (such as quants like us!), Kuhn et al (2013) is an appropriate text pitched at a simpler level.
[1] Murphy, K.P. (2012) Machine Learning A Probabilistic Perspective, MIT Press
[2] James, G., Witten, D., Hastie, T., Tibshirani, R. (2013) An Introduction to Statistical Learning, Springer
[3] Hastie, T., Tibshirani, R., Friedman, J. (2009) The Elements of Statistical Learning, Springer
[4] Kuhn, M., Johnson, K. (2013) Applied Predictive Modeling, Springer
Tactile illusions
Vincent Hayward (2015), Scholarpedia, 10(3):8245. doi:10.4249/scholarpedia.8245
Curator: Vincent Hayward
Tactile illusions are found when the perception of a quality of an object through the sense of touch does not seem to be in agreement with the physical stimulus. They can arise in numerous circumstances and can provide insights into the mechanisms subserving haptic sensations. Many of them can be exploited, or avoided, in order to create efficient haptic display systems or to study the nervous system.
All senses, including touch, are subject to illusions
It is sometimes assumed that vision is the main source of perceptual illusions and that, in contrast, touch is not subject to surprising perceptual phenomena. This belief is ancient. George Berkeley (1685-1753), Étienne Bonnot de Condillac (1714-1780), and others of this era frequently referred to touch as the provider of the 'truth' to the other senses. While this observation appears to be borne out frequently in everyday life, touch is similarly subject to ambiguous or conflicting sources of information, which, as in vision, audition, and other sensory inputs, provide circumstances in which touch-based illusions can arise. It could be that tactile illusions simply go unnoticed more frequently.
Like the systems subserving audition, vision, vestibular inputs, and taste/olfaction (all of which are subject to illusions), the somatosensory system has evolved to solve perceptual problems quickly and reliably, subject to constraints that are physical (i.e., skin mechanoreceptors cannot be at a small distance from the surface relative to their size), physiological (i.e., neural computation is powerful but limited; mechanoreceptors must have a refractory period), and metabolic (i.e., only a proportion of afferent fibers from the periphery to the brain can be myelinated) in origin.
What is an illusion, visual, tactile, or otherwise? Gregory (1997) wrote that illusions are difficult to define. The commonly adopted definition of an illusion is that it is a discrepancy between perception and reality. This definition rapidly leads to the unsatisfactory conclusion that all percepts are illusions since a percept---which is a brain state---is always discrepant with a stimulus---which is a physical object. In fact these two notions cannot even be compared. For this reason, a simple operational definition which has the advantage of comparing things of the same nature was proposed (Hayward, 2008a). An illusion is a percept that arises from a stimulus combining two separable components. One component is fixed and the observer attends to it. What makes it an illusion is that the perception of this component is strongly contingent on the variation of a second component, perplexing the person made aware of the unchanging component of the stimulus. For instance, the moon disk appears larger when viewed close to the horizon than up in the sky. The fixed component is its angular size and the variable component is its elevation. The variable component can also be an internal state of the brain. The Necker cube illusion and other rivalry-based illusions are percepts that change through time according to brain states variations, although the stimulus is invariant.
Several surveys on tactile and haptic illusions were published recently (Hayward, 2008a; Bresciani et al., 2008; Lederman and Jones, 2011), describing several dozens of categories. The word 'haptic' is often used to refer to touch sensations that involve a motor component. According to Grunwald and John (2008) the term 'haptic' was coined by Max Dessoir (1867-1947), who was in need of a counterpart term for the terms 'optic' and 'acoustic'. In the past few years, the rate of discovery of new tactile and haptic illusions has increased greatly, indicating renewed interest in the subject, with more and better informed web-based resources than in the recent past, although much catching up remains to be done.
A particular aspect of haptic perception in humans, like in most animals, is that the whole body is a mechanically sensitive system. Of course, some species have developed specialized sensing organs: whiskers for rodents, pinnipeds, and other mammalians; dextrous fingers for primates, procyonids, and other families; scales for reptiles, crocodilians, and others; antennae, cuticles for insects and arthropods; spines in echinoids; and so on. In many cases these sensory organs are appendages with motor capabilities. Nevertheless, the whole body of an animal is, to some extent, mechanosensitive. Superficial mechanoreceptors are found in hair follicles, skin, scales, lips, cuticles; and deep receptors are found in muscles, tendons, ligaments, as well as other connective tissues, providing a great diversity of overlapping sensing options. What these sensing options have in common is that they all inform the brain of the mechanical state of the tissues in which they are embedded, but never provide direct information about the contacting objects. This information is always mediated by the laws of mechanics that govern the change of the mechanical state of tissues subject to internal and external loads.
Haptics and tactile sensing is thus the province of mechanics where the assumption of rigidity, which for simplification we are so easily inclined to adopt, is misleading even if we assume for the sake of analysis that the stimulated tissues take a quasi-static state (Hayward, 2011). The consequence is that we instinctively adopt finite-dimensional notions such as 'force' (which has meaning only for ideal 'point masses') or 'pressure' (which can be sensed only by compressible organs; since tissues are incompressible, the somatosensory system cannot sense pressure). These abstract notions are certainly convenient for helping our understanding of the tactile functions but have probably no significance for the brain. If we abandon these simplifications then the occurrence of haptic and tactile illusions---or surprising perceptual behaviors as defined earlier---can be expected. For example, a given load, which we normally express as a force, or a given displacement of a solid, which we normally express as a distance, do not correspond univocally to mechanical states of the body.
What follows is a selection of tactile perceptual phenomena that undoubtedly merit the status of being an illusion and that teach something specific about how the mechanical properties of objects are perceived by humans coarsely organized according to their likely attribution from more central to more peripheral haptic neural processing.
Haptic perception interacts with other senses
When discussing haptic and tactile illusions it must always be kept in mind that senses rarely operate in isolation and that they all interact with each other in the formation of perceptual estimates and judgements. Many studies have shown that touch interacts with taste, vision, and audition, and thus specific perceptual effects are elicited when these interactions are strong. Here is a very classic and powerful example of such interactions which can easily be demonstrated in a classroom or elsewhere. Procure two graspable boxes of similar appearance but of different sizes as illustrated in Figure 1 and arrange them so they have the same mass. When asked to judge the relative heaviness of the two blocks, most people will be convinced that the smaller is heavier than the larger block. This effect has been known for more than a century and is frequently termed the Charpentier illusion (Charpentier, 1891), or more commonly the 'size-weight illusion'. The effect is not small (it can be of 20% or more of difference in judgement of heaviness) and has been, and continues to be, the subject of a very large number of studies that appear at a rate that does not seem to subside. Despite numerous attempts, a principled explanatory mechanism for its occurrence remains to be found.
Figure 1: A convenient setting to demonstrate the Charpentier Illusion.
Figure 1 shows two easily graspable boxes or wood blocks arranged to have the same mass, viz. 200 g. Here they differ by one dimension only (say, 30 mm versus 90 mm), which makes it possible to show, using the same blocks in the dark, that the two objects do cause a similar sensation of heaviness if the grasp is carefully executed in order to conceal information about their size difference. Conversely, the same blocks can be used to show that, in the absence of vision, the haptic finger-span size estimation method gives similar information to the brain as vision, which results in a similar illusion.
An uncontroversial aspect of the 'size-weight illusion' is the role played by prior experience. The majority of the available, numerous explanations that have been discussed for now a century give a central role to expectation based on prior information (Ross, 1966, Buckingham, 2014), an hypothesis that has received strong support with the demonstration that the effect can be inverted after sufficiently long practice (Flanagan et al., 2008).
There are numerous other haptic illusions that can arise from interactions between vision and touch, and this is also true of touch and audition. It is worth describing a representative demonstration of such interactions, as one example among many. Because frictional interactions between solids are generally accompanied by acoustic emissions that can be heard and because the vibrations of the source of emission can also be felt, audition and touch are in a position to collaboratively determine the mechanical characteristics of surfaces sliding against one another. Specifically, the glabrous skin of our hands---the skin inside the hands that we use to interact with objects---is covered by a layer of keratin, a material that has strong affinity with water. The mechanical properties of keratin change profoundly with hydration and so do its frictional properties (Johnson et al., 1993; Adams et al., 2013). As a result, the frictional sound made by rubbing hands is a direct function of their moisture content. If the sound emitted by rubbing hands is artificially modified, then the sensation of hand dryness is also modified (Jousmäki and Hari, 1998).
Figure 2: Equipment needed to observe audio-tactile interactions.
The set-up shown in Figure 2 can be used to demonstrate interactions between audition and touch. One needs a microphone (directional) to pick up an auditory scene, such as rubbing hands; a frequency equalizer (analog or digital); headphones (closed) to reproduce the modified scene. The high frequencies characteristic of frictional sounds can be enhanced or attenuated, affecting tactile perception. The perception of other frictional interactions will be affected similarly (Guest et al., 2002), most notably chalk against a blackboard, etc.
In this subsection, we have seen two examples, selected from many, where sensory information supplied by different senses interfered sufficiently to give the resulting percept an illusory quality, suggesting that a fundamental type of brain mechanism is the fusion of sensory information to extract a single object property such as weight, size, distance, numerosity, movement, mobility, wetness, softness, smoothness, and so on.
Similarities of certain illusions across senses
In some cases of illusory perceptual phenomena there is a remarkable analogy between perceptual effects across senses, suggesting that certain brain mechanisms, even neural circuits, are shared by the senses, sometimes in surprising ways (Konkle et al., 2009). In vision there are many well-known effects arising from viewing certain line drawings (e.g. Delboeuf, Bourdon, Ebbinghaus, Müller-Lyer, Poggendorff, or Ponzo illusions). Interestingly, most of these visual illusions also operate in haptics when the figure, represented as a raised drawing, is explored with the finger (Suzuki and Arashida, 1992).
The interpretations of these visual illusions frequently appeal to brain mechanisms engaged in resolving ambiguities introduced by optical projections (Howe and Purves, 2005, Wolfe et al., 2005). It is therefore surprising that these illusions also operate in touch (albeit not always as stably), since visual projections arise from the laws of optics and haptic projections come from self-generated movements (Hartcher-O'Brien et al., 2014). In contrast, explanations based on the anisotropy of fundamental sensory discrimination thresholds could apply in the two modalities (Heller et al., 1997; Mamassian and de Montalembert, 2010).
Figure 3: Geometrical visual illusions operate with touch.
The so-called visual vertical-horizontal illusion exemplified in the Figure 3 is a good representative example. For most people, the vertical segment appears to be longer than the horizontal one. They have the same length. Next, procure a page-size cardboard sheet and glue two 200 mm sticks on it, as indicated (chopsticks cut at length will do). Blindfolded exploration of the sticks will cause most people to feel, similarly to vision, that the vertical stick is longer than the horizontal one.
To mention another class of illusions that is common to all three non-chemical modalities, the so-called "tau effect" stands out. If two stimuli localized in time and in space are attended to, in all modalities: in visual space (Benussi, 1913), in auditory tonal space (Cohen et al., 1954), in auditory physical space (Sarrazin et al., 2007), on the skin (Gelb, 1914; Helson, 1930), the perceived distance between those stimuli depends on their temporal separation. A shorter time separation corresponds to a smaller perceived spatial separation. The reverse is also true and is called the "kappa effect" (Cohen et al., 1953). Numerous studies have been conducted about these and related phenomena, and the most commonly adopted approach to explain them is to evoke brain mechanisms aimed at coping with moving sources of stimulation in the presence of uncertainty (Goldreich, 2007). If the reader is interested in replicating any of these effects with electronically controlled stimuli, it is strongly advised to avoid employing the type of vibrator employed in consumer devices, particularly those based on eccentric motors, because their poor temporal resolution precludes the production of sufficiently brief stimuli.
Lateral inhibition is another neural computational principle that is shared by all senses and that can be invoked to explain universal interactions between intensity and proximity (von Békésy, 1959). Thus, apparent motion, which is tightly connected to the latter interaction, operates in touch as in other sensory modalities (Wertheimer, 1912; Bregman, 1990; Gjerdingen, 1994) by modulating the relative intensity of simultaneous stimuli that are separated in space (von Békésy, 1959). In the same vein, the permutability of amplitude and duration of short stimuli seems to be a general phenomenon (Bochereau et al., 2014). Perceptual rivalry can likewise be demonstrated in all three sensory modalities (Carter et al., 2008), as can the phenomenon of capture, where the localization of a stimulus in space by one sensory modality is modified by synchronous inputs from other sensory modalities (Caclin et al., 2002), as well as the family of attentional and change blindness phenomena (Gallace et al., 2006).
The types of tactile and haptic illusions discussed so far (namely, interactions between sensory modalities, geometrical illusions, or space time interactions) share the quality of being classical in the sense that they have been known for a century or so. In the foregoing, haptic illusions that have been described more recently are described.
Order of differences: the particular multi-scale nature of touch
Sensory processes must deal with scale differences because auditory, visual, and haptic scenes can be examined at different spatial and temporal scales. For example, when looking at a tree, the details of the venation of its leaves need not to be considered in assessing the shape of the whole tree. Visual information also frequently has a self-similar character when the scale varies. For example the fundamental process of the extraction of illumination discontinuities in an image is similar when examining leaf venation or the tree branch patterns. Visual objects are also self-similar when viewed from different distances. In audition, a musical melody exists independently from the timber of the sounds of each note. Sounds also often have a self-similar character in their spectral characteristics (Voss and Clark, 1975). The situation is more complex in touch because, unlike the other senses, the physics at play differs fundamentally according to the scale at which haptic interaction is considered, even though certain self-similarity characteristics can also be observed (Wiertlewski et al., 2011). Tactile mechanics begin at the molecular scale since touch clearly depends on friction-related phenomena that depend on microscopic-scale physics, and it ends at the scales covered during ambulation.
At the macroscopic scale the multi-scale character of haptic perception can be demonstrated by the following illusion. If a flat plate is made to roll on the fingertip, that is, if the observer is provided with no other information than the orientation of the direction of the normal to a solid object while exploring it as depicted by Figure 4a, then the resulting percept is comparable to that of exploring a real slippery object where the observer is given displacement, orientation, and curvature information as shown in Figure 4b (Dostmohamed and Hayward, 2005). Provided that appropriate precautions are taken such as averting vision and ensuring that the observer is not aware of the mechanical details of the stimulation, then observers feel as if they were touching a curved object.
Figure 4: Bent plate illusion.
Figure 4c shows a cam mechanism capable of generating the sensation of exploring a virtual object with two fingers obtained by combining two stimuli as in Figure 4a. This effect can be achieved by assembling two of the mechanisms described in (Hayward, 2008a) in mirror opposition as in Figure 4d. During exploration, the two fingers remain at a constant distance from each other, as indicated in Figure 4c by the two thin lines, but the sensation is that of exploring a round object.
Figure 5: Human curvature discrimination performance model (with permission of the IEEE).
The relationship of this illusion with the notion of scale can be established assuming that one of the fundamental haptic perceptual tasks is to assess the local curvature of solid objects. It may be accepted without proof that in the simplified case of a profile of constant curvature the measurement of three points on this profile is the minimum information required to determine its curvature. Figure 5a illustrates this necessity. Measurements are necessarily corrupted by errors which translate to discrimination thresholds. It can be intuitively seen that, ceteris paribus, the greater is the portion of the profile that is considered, parametrized by the length of the chord, $d$, the more accurate is the measurement of curvature (Wijntjes et al., 2009). Assuming the existence of an osculating circle to a shape, the estimation of its curvature requires the measurement of the relative position of at least three points (circles). For a given scale, $d$, the displacement, $h$, the slope, $\phi$, or the curvature, $c$, are all potential sensory cues. Measurement errors can be represented either by the relative change in height, $\Delta h$, by the relative change in slope, $\Delta \phi$, at its opposite ends, or by the relative change in curvature, $\Delta c$, everywhere. Figure 5a is an abstraction of the curvature sensing problem. Zero-order error ($\Delta h$), first-order error ($\Delta \phi$), and second order error ($\Delta c$) can be related to each other with simple algebra. Figure 5b shows the results of a weak fusion cue combination model where the weights attributed to each sensory cue increase according to the reliability of the corresponding cue (Wijntjes et al., 2009). Given the known discrimination thresholds for these quantities, the model predicts that in the small scales (approx. $d < 1.0$ cm) curvature is the most reliable quantity to be sensed, in the intermediate scales (approx. $1.0~\text{cm} < d < 75~\text{cm}$) slope has this role, and in the large scales (approx. $d > 75$ cm) it is displacement, as corroborated by numerous psychophysical studies.
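For a circular profile, the "simple algebra" relating the three quantities can be written out explicitly (a sketch under a small-angle approximation; the factor-of-two convention for the slope, measured at one end relative to the chord, is an assumption about the geometry of Figure 5a):

\begin{eqnarray} h \approx \frac{c\, d^2}{8}, \qquad \phi \approx \frac{c\, d}{2} \end{eqnarray}

so that a discrimination threshold on any one of $\Delta h$, $\Delta \phi$, or $\Delta c$ can be converted into an equivalent curvature error at a given scale $d$ and the three cues compared, which is how the crossover scales quoted above are obtained.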
It was thus found that in the range of scales comprised between the size of a finger and the size of an arm, first-order information---that is, orientation---dominates over the other sources. These numbers suggest that the anatomical sizes of the human haptic appendages impose strict limits on the type of features that can be felt. These quantities correspond to orders of differences of displacement: zero, one, and second order; reflecting physiological constraints which in turn reflect the scale at which processing is performed. Of course one could speculate that higher derivatives could be leveraged to discriminate smaller scale features. The change of curvature over space would then be characteristic of a surface with asperities where curvature changes over very small length scales, viz. 1.0 mm and less.
On contact mechanics
One source of tactile illusions is clearly derived from contact mechanics effects. As alluded to earlier, extracting the attributes of a touched object from partial knowledge of one's own tissue deformation, is a noisy and ambiguous process. It occurs under the influence of internal and external loads, and is at the root of all effects described thus far. Contact mechanics, or the analysis of the deformation of solids in contact, is thus of immediate relevance in the perception of small-scale attributes such as surface details. Nakatani et al. (2006) described an intriguing effect where strips with different small-scale mechanical properties are juxtaposed to form a flush surface. When explored actively, such surfaces cause the sensation that they have raised or recessed geometries.
Figure 6: Fishbone illusion and variants.
In its original form, Figure 6a, the stimulus is a rigid surface textured as shown. A raised pattern (0.1 mm thick) has a 3 mm wide central spine with orthogonal processes extending on each side with a 2 mm spatial period. When rubbing the finger on the spine, it is perceived as a recessed feature compared to the sides. Variants of this stimulus can be realized by juxtaposing strips of different materials having different roughnesses, different frictional properties (such as metal and rubber), or even different mobilities (Nakatani et al., 2008). Figure 6b shows a variant that can be easily realized by drilling holes in a plastic or metal plate.
A rough explanation for this illusion involves the observation that, during sliding, surfaces with different frictional or mobility properties create different boundary conditions that cause a complex tissue deformation field to propagate inside the finger. Since the tactile system is by necessity capable of reporting a highly simplified version of the actual deformation field of the finger tissues, then peripheral or central neural processes provide their best guess of what the boundary condition could be. The difficulty of the inverse problem involved has been recognized by roboticists who noticed the inherent ambiguous nature of the corresponding computational problem (Ricker and Ellis, 1993; de Rossi et al., 1991).
In vision, it was found that the brain had preferences for certain solutions to ambiguous perceptual problems. As one instance among many, it is well known that the visual system prefers to accept motion over deformation to explain the raw visual inputs (Wallach and O'Connell, 1953). So we could conclude that the tactile system prefers to assign the possible cause of an effect to variations of geometry over variations of surface frictional properties or other factors that could affect an unknown boundary condition. This conclusion is supported by a number of related effects that are only briefly mentioned here (Wang and Hayward, 2008; Kikuuwe et al., 2005; Hayward and Cruz-Hernandez, 2000; Smith et al., 2009; Robles-De-La-Torre and Hayward, 2001) but which all point to the same conclusion.
Mechanical regularities
It may be surmised that the mechanical world is considerably more complicated than the optical or the acoustic world. This argument rests on the observation that the diversity of mechanical phenomena that can take place is truly great for the reason, as alluded to earlier, that different physics apply at different scales. Moreover, a variety of nonlinear and complex mechanical behaviors take place when objects come into contact, slide on each other, are compressed, are collided with, and so on. Only a small subset of objects we interact with are simple, smooth, solid objects. Most other solid objects are aggregations of small scale structures like fabrics, soil, wood, or have multi-stable mechanics like retractable ball pens or keyboards, and so on, multiplying the possible mechanical behaviors ad infinitum. Yet, universal, environmentally driven regularities must exist that the brain can initially extract and later rely upon. In vision, instances of such regularities include the celebrated convexity, light-from-above, or object rigidity assumptions (Ramachandran, 1988, Gregory, 1980, Ullman, 1979). Surely, similar notions must exist in touch and haptics.
Crushing things. Many surfaces on which one steps are made of complex, inhomogeneous, aggregated materials. These include carpets, gravel, soils, underbrush, snow, which have a broadband mechanical response due to the nonlinear mechanics at play. Despite their variety, these materials all share the property of a stronger response when they are crushed faster. If this regularity is artificially reproduced by vibrating a rigid tile with a random signal modulated in amplitude, one experiences the strong sensation that the tile gives under the foot, as shown by Visell et al. (2011). A related effect was demonstrated by Kildal et al. (2010) when pressing on a rigid surface with a vibrating pen.
Gravity. An omnipresent regularity that the brain should have internalized is the movement of objects under the influence of gravity (McIntyre et al., 2001). Balls rolling down a slope of inclination, $\alpha$, accelerate according to $0.7 \sin(\alpha)$ (in units of the gravitational acceleration), no matter what their size is and what substance they are made of. (This regularity was discovered by Galileo circa 1638 in one of the most far-reaching experiments in the history of science (Settle, 1961)). If one holds a stick made to vibrate with an amplitude $f(t)\propto g[7.0 \iint \sin(\alpha(t)) \mathrm{d} t]$, where $g$ is a periodic function and $\alpha$ is the stick inclination angle, then the person holding the stick spontaneously experiences the irrepressible sensation that a ball is rolling inside the stick (Yao and Hayward, 2006). The coefficient 7.0 is the acceleration of gravity corrected to account for the rolling movement of a ball. Different functions $g$ give different levels of realism but the effect is highly robust. The perceptual problem is to determine the ball displacement, $x(t)$, knowing $f(t)$, a type of inverse problem that the brain solves effortlessly despite the fact that $g$ is unknown but periodic.
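A rough numerical sketch of how such a stimulus can be synthesized is given below (illustrative only; the sampling rate, the inclination profile, the "texture" period of the periodic function $g$, and the absence of end stops on the virtual ball are all arbitrary assumptions):

```python
import numpy as np

fs = 1000.0                                   # sample rate in Hz (assumed)
t = np.arange(0.0, 5.0, 1.0 / fs)             # 5 s of stimulation
alpha = 0.2 * np.sin(2 * np.pi * 0.5 * t)     # slowly varying stick inclination (rad)

# Double integration of the ball acceleration 7.0*sin(alpha) gives its position x(t)
accel = 7.0 * np.sin(alpha)
vel = np.cumsum(accel) / fs
x = np.cumsum(vel) / fs

# A periodic function g of the ball position drives the vibration amplitude f(t)
g = lambda u: np.sin(2 * np.pi * u / 0.01)    # assumed 10 mm spatial period
f = g(x)                                      # signal sent to the vibrator
```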
Contact mechanics. Another example of a regularity which is linked to what our body experiences when pushing against a stiff surface. Almost all solid objects in contact obey to a Hertzian law which states that the area of the surface of contact between the bodies increases with the load. The rate of increase is a function of the relative geometry of the two bodies but also of their material properties (Hayward, 2008b). Thus, softer materials correspond to a lower rate of increase of the contact area. If an apparatus is constructed to modify the finger contact surface as a function of the pressing force independently of the finger displacement, then the modification of the rate of increase of the area of the contact surface can induce an illusory sensation of finger motion (Moscatelli et al., 2014). A related effect appealing to similar principles is the sensation of heaviness induced by the lateral deformation of the fingertips in the absence of net loading (Minamizawa et al., 2007).
Absence of slip. The notion of mechanical regularity can be exploited in the opposite manner. What would the brain make of stimuli which, precisely, do not contain the regularities that can normally be relied upon? Here is an example of an illusory effect that could be interpreted in this light. The so-called 'velvet hand illusion' (Mochiyama et al., 2005) occurs when one moves the two hands in contact with each other without slip but with an interposed network of wires or thin rods in-between. It is a conflicting stimulus since, normally, moving the hands together in mutual contact does not generate any significant tactile sensation, but here, the thin objects sliding between the two hands do cause a powerful tactile input. To this violation of the aforementioned regularity, the brain responds by 'feeling' a film interposed between the two hands (Kawabe et al., 2010).
The nonlinear nature of small scale mechanics. There are very few natural mechanical phenomena of relevance to touch that could be said to have a "linear character". Moreover, there is no indication that linearity is a useful concept in the mechano-transduction to tactile inputs (see for instance Lamoré et al. (1986)). Thus it comes as no surprise that complex signals used to drive somatosensation may create surprising effects if they deviate in specific ways from the natural signals that the somatosensory system has evolved to process. The somatosensory system has been shown to have evolved to optimize the detection of fast rate stimuli differently from slow rate stimuli (Iggo and Ogawa, 1977, Edin and Vallbo, 1990). It is otherwise known that if a signal detection system exhibits this property, then periodic excitatory signals having an odd symmetry will cause the output of the detector to undergo a DC drift (a ratcheting behavior). There are many examples in biology of such behaviors including, for instance, the pupillary reflex or heart rate regulation (Clynes, 1962). In touch, odd-symmetrical stimuli do cause a sensation of a persisting external load on the limb or the finger (Amemiya et al., 2005; Amemiya and Gomi, 2014).
In this note, only a small subset of known tactile and haptic illusions was discussed. They were used to point out the similarities and the differences of the putative perceptual mechanisms in other sensory modalities. In sum, touch exhibits a number of similarities to other perceptual systems, but touch has idiosyncrasies which can be understood from the observation that certain perceptual problems that touch faces cannot be related to those faced by other modalities.
It would be natural to ask whether tactile illusions are the expression of imperfections of the somatosensory system or if illusions are a necessity. In the opening paragraphs, the impossibility for the brain to gain perfect knowledge of the mechanical state of the body that it inhabits, let alone of the external objects that perturb its state, was made clear. Evolution has found methods able to expedite the resolution of these problems at speeds and accuracies that are compatible with the survival of the organism, such as quickly grabbing and evaluating the mass of an object, whether it is a 20 kg suitcase or a flimsy paper cup. These solutions are sometimes surprising and we call them illusions. So the answer to the question of the imperfection of the somatosensory system is rather a question of whether it could be improved. The answer is emphatically yes, through perceptual learning and other skill-based mechanisms.
Adams, M et al. (2013). Finger pad friction and its role in grip and touch. Journal of the Royal Society Interface 10(80): 20120467.
Amemiya, T and Gomi, H (2014). Distinct pseudo-attraction force sensation by a thumb-sized vibrator that oscillates asymmetrically. In: M Auvray and C Duriez (Eds.), Haptics: Neuroscience, Devices, Modeling, and Applications, Part-II (pp. 88-95).
Amemiya, T; Ando, H and Maeda, T (2005). Virtual force display: Direction guidance using asymmetric acceleration via periodic translational motion. Proceedings of the World Haptics Conference (pp. 619-622).
Benussi, V (1913). Psychologie der Zeitauffassung. Heidelberg: Carl Winter's Universitätsbuchhandlung.
Bochereau, S; Terekhov, A V and Hayward, V (2014). Amplitude and duration interdependence in the perceived intensity of complex tactile signals. In: M Auvray and C Duriez (Eds.), Haptics: Neuroscience, Devices, Modeling, and Applications, Part I (pp. 93-100).
Bregman, A S (1990). Auditory scene analysis. Cambridge, MA: The MIT Press.
Bresciani, J P; Drewing, K and Ernst, M O (2008). Human haptic perception and the design of haptic-enhanced virtual environments. In: The Sense of Touch and its Rendering (pp. 61-106). Berlin Heidelberg: Springer.
Buckingham, G (2014). Getting a grip on heaviness perception: a review of weight illusions and their probable causes. Experimental Brain Research 232: 1623-1629.
Caclin, A; Soto-Faraco, S; Kingstone, A and Spence, C (2002). Tactile "capture" of audition. Perception & Psychophysics, 64(4): 616-630.
Carter, O; Konkle, T; Wang, Q; Hayward, V and Moore, C I (2008). Tactile rivalry demonstrated with an ambiguous apparent-motion quartet. Current Biology 18(14): 1050-1054.
Charpentier, A (1891). Analyse expérimentale de quelques éléments de la sensation de poids. Archives de Physiologie Normale et Pathologique 3: 122-135.
Clynes, M (1962). The non-linear biological dynamics of unidirectional rate sensitivity illustrated by analog computer analysis, pupillary reflex to light and sound, and heart rate behavior. Annals of the New York Academy of Sciences 98(4): 806-845.
Cohen, J; Hansel, C E M and Sylvester, J D (1953). A new phenomenon in time judgment. Nature 172: 901.
Cohen, J; Hansel, C E M and Sylvester, J D (1954). Interdependence of temporal and auditory judgments. Nature 174: 642-644.
De Rossi, D; Caiti, A; Bianchi, R and Canepa, G (1991). Fine-form tactile discrimination through inversion of data from a skin-like sensor. In: Proceedings of the IEEE International Conference on Robotics and Automation (pp. 398-403).
Dostmohamed, H and Hayward, V (2005). Trajectory of contact region on the fingerpad gives the illusion of haptic shape. Experimental Brain Research 164(3): 387-394.
Edin, B. B. and Vallbo, A. B. (1990). Dynamic response of human muscle spindle afferents to stretch. Journal of Neurophysiology 63(6): 1297-1306.
Flanagan, J R; Bittner, J P and Johansson, R S (2008). Experience can change distinct size-weight priors engaged in lifting objects and judging their weights. Current Biology 18: 1742-1747.
Gallace, A; Tan, H Z and Spence, C (2006). The failure to detect tactile change: A tactile analogue of visual change blindness. Journal Psychonomic Bulletin & Review 13(2): 300-303.
Gelb, A (1914). Versuche auf dem Gebiete der Zeit- und Raumanschauung, Bericht uber der VI. Kongress fur Experimentelle Psychologie (pp. 36-42).
Gjerdingen, R O (1994). Apparent motion in music? Music Perception 11(4): 335-370.
Goldreich, D (2007). A bayesian perceptual model replicates the cutaneous rabbit and other spatiotemporal illusions. PLoS ONE 2(3): e333.
Gregory, R L (1980). Perceptions as hypotheses. Philosophical Transactions of the Royal Society of London B: Biological Sciences 290(1038): 181-197.
Gregory, R L (1997). Knowledge in perception and illusion. Philosophical Transactions of the Royal Society of London B: Biological Sciences 352(1358): 1121-1127.
Grunwald, M and John, M (2008). German pioneers of research into human haptic perception. In: Human Haptic Perception: Basics and Applications (pp. 15-39). Basel: Birkhäuser.
Guest, S; Catmur, C; Lloyd, D and Spence, C (2002). Audiotactile interactions in roughness perception. Experimental Brain Research 146: 161-171.
Hartcher-O'Brien, J; Terekhov, A V; Auvray, M and Hayward, V (2014). Haptic shape constancy across distance. In: M Auvray and C Duriez (Eds.), Haptics: Neuroscience, Devices, Modeling, and Applications, Part I (pp. 77-84).
Hayward, V (2008a). A brief taxonomy of tactile illusions and demonstrations that can be done in a hardware store. Brain Research Bulletin 75(6): 742-752.
Hayward, V (2008b). Haptic shape cues, invariants, priors, and interface design. In: M Grunwald (Ed.), Human Haptic Perception---Basics and Applications (pp. 381-392). Birkhauser Verlag.
Hayward, V (2011). Is there a 'plenhaptic' function? Philosophical Transactions of the Royal Society B: Biological Sciences 366(1581): 3115-3122.
Muc5b overexpression causes mucociliary dysfunction and enhances lung fibrosis in mice
Laura A. Hancock1, Corinne E. Hennessy1, George M. Solomon2, Evgenia Dobrinskikh1, Alani Estrella1, Naoko Hara1, David B. Hill (ORCID: orcid.org/0000-0002-9270-777X)3,4, William J. Kissner3, Matthew R. Markovetz (ORCID: orcid.org/0000-0003-3931-0594)3, Diane E. Grove Villalon5, Matthew E. Voss5, Guillermo J. Tearney6, Kate S. Carroll7, Yunlong Shi7, Marvin I. Schwarz1, William R. Thelin5, Steven M. Rowe2, Ivana V. Yang1, Christopher M. Evans1,8 & David A. Schwartz (ORCID: orcid.org/0000-0001-6743-8443)1,8
Nature Communications volume 9, Article number: 5363 (2018)
Subjects: Biopolymers in vivo, Glycobiology, Mucosal immunology, Respiratory tract diseases
The gain-of-function MUC5B promoter variant rs35705950 is the dominant risk factor for developing idiopathic pulmonary fibrosis (IPF). Here we show in humans that MUC5B, a mucin thought to be restricted to conducting airways, is co-expressed with surfactant protein C (SFTPC) in type 2 alveolar epithelia and in epithelial cells lining honeycomb cysts, indicating that cell types involved in lung fibrosis in the distal airspace express MUC5B. In mice, we demonstrate that Muc5b concentration in bronchoalveolar epithelia is related to impaired mucociliary clearance (MCC) and to the extent and persistence of bleomycin-induced lung fibrosis. We also establish the ability of the mucolytic agent P-2119 to restore MCC and to suppress bleomycin-induced lung fibrosis in the setting of Muc5b overexpression. Our findings suggest that mucociliary dysfunction might play a causative role in bleomycin-induced pulmonary fibrosis in mice overexpressing Muc5b, and that MUC5B in distal airspaces is a potential therapeutic target in humans with IPF.
Idiopathic pulmonary fibrosis (IPF) is a progressive fibrotic lung disease with a median survival of 3–5 years1 that worsens despite treatment2,3. A common gain-of-function MUC5B promoter variant rs35705950 is the dominant risk factor, either genetic or environmental, for the development of IPF4,5. MUC5B is a major gel-forming mucin in the lung that plays a key role in mucociliary clearance (MCC) and host defense5 and that is secreted from proximal submucosal glands and distal airway secretory cells6,7,8. The MUC5B promoter variant is associated with enhanced expression of the MUC5B transcript in lung tissue from unaffected subjects and patients with IPF4,9. In patients with IPF, excess MUC5B protein is especially observed in epithelial cells in the respiratory bronchiole and honeycomb cyst7,8, regions of lung involved in lung fibrosis. However, it remains unclear how MUC5B leads to the development of IPF.
Here we show that MUC5B is produced in cells lining distal airways and honeycomb cysts in human IPF. We also show that Muc5b overproduction in the distal lungs of mice is associated with MCC dysfunction and exaggerates the development of fibrosis, and that this can be prevented by treatment with a mucolytic agent. Our findings show a causative, dose-dependent role for Muc5b in murine lung fibrosis, and thus support the development of mucolytic intervention strategies for human disease.
MUC5B/Muc5b in the distal lung and effects on fibrosis
In human IPF lung tissue, we found that MUC5B is co-expressed with surfactant protein C in columnar epithelial cells lining honeycomb cysts (Fig. 1a) and in type 2 alveolar epithelia (Fig. 1b), indicating that cell types involved in lung fibrosis in the distal airspace also express MUC5B. Accordingly, to understand the effect of increased Muc5b expression, such as that associated with the MUC5B promoter variant, we generated two lines of C57BL/6 mice that overexpress full-length murine Muc5b genomic transgenes. Tg(Scgb1a1-Muc5b) mice overproduce Muc5b under the control of a mouse secretoglobin 1a1 promoter fragment10, and Tg(SFTPC-Muc5b) mice overproduce Muc5b under the control of a human surfactant protein C promoter fragment (Supplementary Fig. 1a,b). These non-targeted transgenic lines are referred to as Scgb1a1-Muc5bTg and SFTPC-Muc5bTg, and constitutively overexpress Muc5b in specific lung locations in addition to endogenous Muc5b (Supplementary Fig. 1c-f). Notably, Scgb1a1-Muc5bTg mice produce Muc5b throughout the conducting airways, whereas SFTPC-Muc5bTg mice produce Muc5b in the distal airways and alveoli (Fig. 1c and Supplementary Fig. 1c). Muc5b gene-deficient mice on a congenic C57BL/6J background (Muc5b−/−) were included to assess the effect of the absence of Muc5b10.
Muc5b overexpression in the distal lung is associated with greater fibrogenesis following bleomycin. a, b In situ hybridization of human lung specimens from control and IPF subjects with MUC5B variant rs35705950. Arrowheads depict cells co-expressing SFTPC (red) and MUC5B (blue) in control bronchioles and IPF bronchiolar structures a and in control type 2 alveolar epithelia and IPF type 2 alveolar structures b. c Transgenic mice expressing Muc5b under control of the mouse Scgb1a1 and the human SFTPC promoters demonstrate Muc5b protein in conducting airways and in type 2 alveolar epithelia. d, e After repeated doses of bleomycin (2.5 U/kg, IT on day 0, 1.5 U/kg on days 14 and 28), SFTPC-Muc5bTg mice demonstrated worse survival d and tissue injury e relative to wild type (+) littermates. Survival data in d were evaluated by χ2 statistic using 17 Muc5b−/−, 16 Muc5b+/+, 15 Scgb1a1-Muc5bTg, 23 Scgb1a1-Muc5b+(wt), 26 SFTPC-Muc5bTg, and 25 SFTPC-Muc5b+(wt) mice. f–i Scgb1a1-Muc5bTg, Muc5b−/−, and control mice were treated IT with bleomycin as in d. To induce similar levels of fibrosis while limiting survivor bias, SFTPC-Muc5bTg and controls received 2.0 U/kg bleomycin on day 0, followed by 1.0 U/kg on days 14 and 28. HP content increased in Scgb1a1-Muc5bTg and SFTPC-Muc5bTg but decreased in Muc5b−/− mice compared to wild type (+) controls for each strain. g–i Fibrillar collagen (magenta in g) was assessed in peripheral lung tissues by confocal/multi-photon fluorescence microscopy with SHG. h, i Fibrillar collagen volumes following bleomycin were increased per mouse h and showed heterogeneous distributions i in Tg mice compared to controls. Data in i were analyzed by t-test (n = 105 Scgb1a1-Muc5bTg and 105 Scgb1a1-Muc5b+(wt) images, and n = 120 SFTPC-Muc5bTg and 120 SFTPC-Muc5b+(wt) littermate images). Scale bars, 10 μm a, b, c, 250 μm e, and 100 μm g. In f and h, data are means ± sem, numbers in italics indicate n animals used per experiment, and p-values indicate differences determined by ANOVA with Holm-Sidak's multiple comparisons test. In i, * indicates statistical significance (p < 10⁻⁵)
Collectively, these mouse lines allowed for robust gain-of-function and loss-of-function analyses of Muc5b production and its effects in mouse models of pulmonary fibrosis.
Initially, mice were challenged with three intratracheal (IT) doses of bleomycin (2.5 U/kg, 1.5 U/kg, and 1.5 U/kg) over 7 weeks to better simulate the temporal heterogeneity and progressive nature of human IPF11. SFTPC-Muc5bTg mice express substantially higher concentrations of Muc5b than Scgb1a1-Muc5bTg mice (Supplementary Fig. 1d-f). Furthermore, following challenge with bleomycin, SFTPC-Muc5bTg mice experience significantly worse survival than Scgb1a1-Muc5bTg, Muc5b−/−, and wild-type control mice for each strain (Fig. 1d). To minimize survivor bias, we used a lower dose of bleomycin (2.0 U/kg, 1.0 U/kg, and 1.0 U/kg) in SFTPC-Muc5bTg mice for subsequent studies.
Following bleomycin challenge, both Scgb1a1-Muc5bTg and SFTPC-Muc5bTg mice demonstrate substantial lung injury that is less prominent in Muc5b−/− mice (Fig. 1e). We therefore assessed lung fibrosis biochemically by measuring hydroxyproline (HP) content, a marker of collagen deposition. Bleomycin-challenged Scgb1a1-Muc5bTg and SFTPC-Muc5bTg mice have elevated amounts of lung HP compared to their transgene-negative littermate controls (Fig. 1f). In contrast, mice lacking Muc5b (Muc5b−/−) demonstrate decreased HP compared to wild-type littermates (Fig. 1f). This difference in collagen was quantified using confocal/multiphoton-excitation fluorescence microscopy with second harmonic generation (SHG, Fig. 1g). Quantification of SHG images demonstrates that Scgb1a1-Muc5bTg and SFTPC-Muc5bTg mice have significantly more collagen than non-challenged and transgene-negative littermate controls following challenge with bleomycin (Fig. 1h, i). Thus, Muc5b expression correlates with fibrosis detected biochemically and histologically, suggesting that the presence of excess Muc5b enhances the fibrotic response to bleomycin and that the absence of Muc5b diminishes this response.
Effects of Muc5b in distal airspaces on MCC
To explore a mechanism involved in the Muc5b-associated fibrotic response to bleomycin, we considered the effect of enhanced expression of Muc5b on MCC12,13. Examination of the tracheal mucus layer by micro-optical coherence tomography (μOCT)14,15,16 shows distinct differences in the mucociliary transport apparatus between SFTPC-Muc5bTg mice and transgene-negative littermates (Fig. 2a and Supplementary videos 1 and 2). Quantitatively, the mucus depth is significantly greater (Fig. 2b) in SFTPC-Muc5bTg mice compared to transgene-negative littermate control mice, but this has a minimal effect on the periciliary layer depth (Fig. 2c), suggesting that osmotic forces transmitted by increased mucus concentration are not substantial12. However, both ciliary beat frequency (Fig. 2d) and mucociliary transport rate (Fig. 2e) are disrupted in SFTPC-Muc5bTg mice. Mucin overexpression in Scgb1a1-Muc5bTg mice induces similar μOCT results (Supplementary Fig. 2, Supplementary videos 3 and 4), although both the mucus layer increase vs. respective littermate controls (7.0 in Scgb1a1-Muc5bTg vs. 9.5 in SFTPC-Muc5bTg mice) and the relative mucociliary transport decrement (49.5% and 70.6%, respectively) are less pronounced in this model, consistent with bleomycin sensitivity (Fig. 1d–i).
Muc5b overexpression in SFTPC-Muc5bTg mice is associated with impaired mucociliary transport. a Representative μOCT images of excised SFTPC-Muc5bTg mouse tracheas in comparison to tracheas of wild-type (+) littermate controls. b–e Quantitative metrics from image analysis reveal increased mucus layer depth b without significant alteration of periciliary layer (PCL) depth c in Tg mice compared to + controls. Functional analysis demonstrated reduced ciliary beat frequency d and diminished mucociliary transport rates e in Muc5b-overexpressing mice. Data in b–e are means ± sem and were analyzed by Mann–Whitney test
Effects of mucolytic treatment on MCC and fibrosis
The increase in fibrosis and decrease in mucociliary transport associated with elevated Muc5b levels led us to investigate whether therapeutic agents predicted to accelerate mucus clearance would be effective in improving outcomes in the SFTPC-Muc5bTg mouse model of lung fibrosis. Therefore, we tested the effects of a mucolytic compound, P-211917 (Parion Sciences, Durham, NC), that is designed to hydrolyze disulfide bonds more efficiently than existing reducing agents, e.g. N-acetylcysteine (NAC). P-2119 yields faster disulfide bond reduction at lower concentrations than NAC in assays utilizing model disulfide bond substrates: 5,5′-dithiobis-2-nitrobenzoic acid (Fig. 3a, b) and MUC5B purified from human saliva (Fig. 3c). Additionally, in concentrated human airway mucus the ability of P-2119 to reduce MUC5B (Fig. 3d) causes improvements in viscosity and elasticity that are consistent with mucus alterations predicted to facilitate clearance (Fig. 3e, f and Supplementary Fig. 3) and are directly related to decreased MUC5B molecular mass determined by multi-angle laser light scattering spectroscopy (Fig. 3g). With these studies demonstrating the effectiveness of a mucolytic agent in vitro, we next sought to assess the effectiveness of P-2119 in vivo. In WT C57BL/6J mice challenged with LPS to induce Muc5b expression and acute lung injury, P-2119 inhalation effectively depolymerizes Muc5b, and the mucolytic effect of P-2119 persists for 120 min (Fig. 3h). Having established dose and time course parameters in WT mice, we next tested the effects of P-2119 in SFTPC-Muc5bTg mice treated with a single IT dose of bleomycin. In these animals, P-2119 treatment also results in smaller mucin polymers detected in lung lavage fluid (Fig. 3i). Mucolytic treatment results in acute clearance of inflammatory cells from the lungs, which is demonstrated by a significant and rapid decrease in lung lavage leukocyte numbers (Fig. 3j and Supplementary Fig. 4) concurrent with Muc5b depolymerization. In aggregate, these results suggest that P-2119, a mucolytic agent, may favorably influence mucus properties in the airspace, thereby improving MCC and consequently minimizing fibrosis following bleomycin-induced lung injury in the context of Muc5b overproduction.
P-2119 effectively cleaves mucus in vitro and in vivo, enhancing the acute clearance of inflammatory cells. a, b P-2119 hydrolyzed DTNB disulfide bonds more quickly than n-acetylcysteine (NAC) a, and at pH 6 P-2119 cleaved more bonds than NAC b. c In human saliva, P-2119 reduced MUC5B in salivary mucus at lower concentrations than NAC. d–g In concentrated normal human bronchial epithelial cell culture mucus (5% solids), P-2119 reduced MUC5B at a potency similar to that seen in saliva in c. Reduction of MUC5B by P-2119 dose dependently lowered mucus viscosity as demonstrated by enhanced mean square displacement (MSD) of fluorescent microspheres e, and as shown by improved macrorheological complex viscosity f that was strongly correlated with reduced molecular mass g. Data in e represent 900 technical and three biological replicates; data in f represent three technical and two biological replicates. Cyan symbols, vehicle; magenta symbols, P-2119. h–j In vivo effects of aerosolized P-2119 (68–135 mM for 60 min). h Wild-type mice challenged with LPS (20 μg, IT) 48 h prior to P-2119 aerosol. P-2119 decreased Muc5b mass detected by immunoblot of lung lavage fluid over a 120 min period. i SFTPC-Muc5bTg mice were challenged with bleomycin (2.5 U/kg, IT) 7 d prior to P-2119 treatment (Tx). P-2119 caused Muc5b reduction detected by immunoblot 120 min post initiation of P-2119 aerosol. j The effect of P-2119-induced mucolysis on MCC was assessed by quantifying the acute elimination of leukocytes in lung lavage fluid obtained from bleomycin treated SFTPC-Muc5bTg mice (n = 9 vehicle and 12 P-2119 treated) and wild type (+) controls (n = 12 vehicle and 13 P-2119 treated). Total cells were significantly lower in P-2119 treated mice compared to vehicle treated animals, reflecting decreases in all leukocyte subtypes in bleomycin-injured lungs. Data in a, e–g, and j are means ± sem. Data in a were analyzed between P-2119 and NAC treated groups using t-tests. Data in e–g were analyzed statistically on biological replicates: ANOVA of results between 0 mM vehicle and 0.1–10 mM P-2119 treatment groups e, f and linear regression of complex viscosity vs mass g
To pursue this further, we focused on the SFTPC-Muc5bTg mice, which produced the highest concentrations of Muc5b (see Supplementary Fig. 1d,f), were most responsive to bleomycin (Fig. 1), and had more pronounced changes in mucociliary transport (see Fig. 2 and Supplementary Fig. 2). SFTPC-Muc5bTg mice were challenged with 2.5 U/kg of bleomycin IT, were allowed to respond to bleomycin for 1 or 8 weeks, and were then exposed to 2 weeks of aerosolized P-2119. Hence, mice were observed at 3- or 10-week endpoints post-bleomycin (Fig. 4a). This comprehensive approach allowed us to test the ability of P-2119 to treat both short- and long-term fibrotic lung responses induced by bleomycin, while leaving the effects of bleomycin on lung injury intact18. Treatment with P-2119 reduces the severity of chronic inflammation (Fig. 4b, Supplementary Fig. 5), as well as mortality associated with bleomycin-induced lung injury (Supplementary Fig. 6). These protective effects of P-2119 on lung injury are not associated with changes in redox balance (Supplementary Fig. 7) or inflammatory cytokine/chemokine expression (Supplementary Table 1).
P-2119 treatment results in reduced collagen deposition in the setting of excess Muc5b. a SFTPC-Muc5bTg mice were subjected to bleomycin challenge with a 2 wk P-2119 intervention protocol, with daily P-2119 treatments (Tx) or saline vehicle (Veh) exposures starting 7 or 56 d post bleomycin challenge and ending 24 h prior to tissue harvest. b–h Injury and fibrosis induced in bleomycin-treated mice exposed to saline vehicle were decreased in P-2119 treated animals. Compared to vehicle treated animals, P-2119 treated mice had significantly fewer leukocytes in lung lavage b, less hydroxyproline in lung homogenates at 10 wks c, and decreased peripheral lung fibrillar collagen d–h. In b–d, data are means ± sem, numbers in italics indicate n animals used per experiment, and p-values indicate differences determined by t-test with Welch's correction for unequal variances. In e and f, * indicates statistical significance by t-test (p < 0.00001). Histograms in e are from 15 Veh mice (225 images) and from 12 Tx mice (165 images); histograms in f are from 9 Veh mice (180 images) and from 6 Tx mice (120 images). Images in g and h show greater amounts of fibrillar collagen (red) in Veh vs. Tx groups. Scale bars, 100 μm g, h
To assess the anti-fibrotic effects of mucolytic treatment, collagen levels in mouse lung tissues were quantified using biochemical and histological assays. Although we do not detect a difference in HP content in lung homogenates between P-2119 and saline in short-term studies (3 weeks after bleomycin), persistent HP increases are observed in SFTPC-Muc5bTg mice over a longer period (10 weeks), and these changes are significantly reduced by P-2119 treatment (Fig. 4c). Importantly, histologically detectable fibrillar collagen detected by SHG imaging decreases in the lungs of SFTPC-Muc5bTg mice treated with P-2119, and this is observed in both the 3-week and 10-week models of bleomycin-induced lung fibrosis (Fig. 4d–h). Strikingly, in the absence of P-2119 treatment, HP values increase in SFTPC-Muc5bTg mice during prolonged responses to bleomycin (Fig. 4c). However, the concentration of fibrillar collagen decreases over time (Fig. 4d). This discrepancy could reflect differences in the types of collagen detected since, unlike SHG, HP does not distinguish fibrillar vs. non-fibrillar types of collagen. Also, SHG image analyses did not include central airways and vasculature, but were instead conducted on peripheral lung tissues (the location of pulmonary fibrosis in humans). Importantly though, for both HP and SHG analyses, P-2119 significantly reduces acute and protracted lung fibrosis. Taken together, these findings highlight the importance of short- and long-term models of lung fibrosis and comprehensive assessment of outcomes, including secondary endpoints, such as survival, along with the critical primary endpoint of fibrosis in parenchymal lung tissues.
Mucociliary dysfunction is an emerging paradigm in lung diseases19,20. Mucociliary dysfunction was previously considered specific to obstructive diseases such as asthma and chronic obstructive pulmonary disease and to genetic diseases such as primary ciliary dyskinesia and cystic fibrosis, but the importance of mucins, mucus, and mucociliary interactions has now surfaced in diseases of the lung periphery, such as adenocarcinoma and IPF4,5. Unlike obstructive diseases, which involve central conducting airways and rapid defense against ubiquitous environmental challenges, mucus dysfunction in the lung periphery appears to involve injury-repair processes predominantly5,10,21. In this vein, infection, mucous metaplasia, and bronchiectasis that occur with mucociliary dysfunction in conducting airways contrast with pathologic changes, such as bronchioloalveolar epithelial and mesenchymal remodeling or proliferation, that occur with mucin overexpression in the lung periphery21,22,23,24,25,26.
Our results demonstrate that elevated concentrations of Muc5b in the distal lung are directly related to the fibroproliferative response to bleomycin in mice, and MCC dysfunction might play an important role in this response. Previous findings in knockout mice showed MCC failure in the absence of Muc5b10, so dysfunctional MCC due to high concentrations of Muc5b may seem counterintuitive. However, the distribution of mucous and ciliated cells, the anatomical location of MUC5B/Muc5b expression, and the homeostatic control of mucus hydration are factors that coordinately affect mucus viscoelasticity and transport5. Accordingly, ectopic overproduction of Muc5b in the lung periphery in SFTPC-Muc5bTg mice (see Fig. 1c and Supplementary Fig. 1c) produces more severely defective mucociliary transport than what is observed in Scgb1a1-Muc5bTg mice (see Fig. 2e and Supplementary Fig. 2e), where Muc5b is overproduced by club cells in airways in which it is also normally expressed27,28,29. These findings suggest that overexpression of MUC5B in distal airspaces, which is known to occur in IPF4,9, disrupts the equilibrium necessary to sustain effective mucociliary transport13,30, thereby impairing mucus function12,26,31.
One potential consequence of mucociliary dysfunction is retention of inhaled substances (air pollutants, cigarette smoke, microorganisms, etc.) and endogenous inflammatory debris that, over time, results in temporally and spatially distinct areas of microscopic scarring and progressive fibroproliferation in the lung, leading to the development of IPF1. Alternatively, reduced clearance or enhanced viscosity of MUC5B may initiate a reactive or regenerative fibrotic response localized to the bronchoalveolar region of the lung that eventually leads to the development of IPF. As we observed that MUC5B is co-expressed with surfactant protein C in type 2 alveolar epithelia and cells lining honeycomb cysts in human IPF (Fig. 1a, b), we postulate that the cells involved in MUC5B overexpression are involved in the lung remodeling that is characteristic of IPF.
While there is no true equivalent animal model of IPF, intratracheal bleomycin challenge is a widely used model of lung fibrosis in mice. Nonetheless, both bleomycin challenge as a trigger and the mouse as a species have strengths and weaknesses32. Bleomycin causes an acute increase in collagen production that can resolve over time, but bleomycin can induce prolonged lung fibrosis under some conditions, including with repeated dosing11,33 as we have done here (Fig. 1c–i). In addition, mouse lungs are anatomically different from human lungs. Mice lack respiratory bronchioles and demonstrate little or no mucin expression in the lung periphery27,28,29. By using SFTPC-Muc5bTg mice, we help to significantly narrow this species gap. Importantly, SFTPC-Muc5bTg mice developed prolonged fibrosis in a single bleomycin challenge setting (Fig. 4). Thus, even in an acute injury setting that differs from the chronic development of disease in humans, this Muc5b-overexpressing model provides useful insight into an IPF-related setting. Other aspects such as microscopic honeycombing and proliferative repair vs. remodeling need to be explored11,33. The latter has been implicated in disease progression, with much of the focus on expansion of fibroblasts and alveolar remodeling suggestive of an active disease process. Our findings support a role for mucociliary dysfunction in active disease, and they support a long-term goal in the field of identifying ways to slow or reverse the development of IPF at early and/or preclinical stages.
The potential role for mucociliary dysfunction as a driver of IPF pathology is supported by unique gene expression signatures in IPF. In patients with IPF, the coordinated overexpression of MUC5B and cilium genes is associated with microscopic honeycombing, a pathognomonic feature of IPF34. Furthermore, cilium and MUC5B gene expression are associated with the concentration of MMP7, a metalloproteinase that is a known biomarker in IPF35 and is also known to amplify airway ciliated cell differentiation36. Adequate MCC requires a balance between the concentrations of water, ions, and macromolecules in the mucus gel. Excessive production of MUC5B presents a challenge to proper mucus hydration and cilia function12. In equilibrium, airway surface liquid homeostasis maintains a low-friction state within the periciliary layer (PCL). In diseases such as cystic fibrosis, a strong transcellular osmotic gradient causes hyperabsorption of ions and water by airway epithelial cells, generating mucus dehydration, reduced cilia motility, and in extreme cases cilia collapse. This phenomenon can also be caused by the presence of high concentrations of mucins on the apical surfaces of airways, generating dehydrating forces similar to those observed in CF; excessive mucin could result in an osmotic gradient favoring water movement out of the PCL and towards the airway lumen12,37. In both cases, the loss of a grafted gel-on-brush conformation results in excessive mucus aggregation and impaired MCC. Finally, when mucin expression is uncoupled from CFTR-dependent anion secretion, abnormally viscous mucus could ensue beyond the effects of airway dehydration, precipitating abnormal host defense15. Although the findings here do not support PCL depletion as a mechanism for mucociliary dysfunction in Muc5b-overproducing mice at baseline (see Fig. 2c and Supplementary Fig. 2c), it is possible that PCL depletion may be detectable in injured mice. Additional studies will identify how these biophysical properties are regulated and alter mucociliary function, as well as the extent to which genetic factors such as the MUC5B promoter variant rs35705950 impact airway epithelial cell biology.
The relationship between MUC5B overproduction and IPF is complex. Although Muc5b transgenic mice do not appear to spontaneously develop pulmonary fibrosis, they are more responsive to bleomycin (Fig. 1f–i). Likewise, the human gain-of-function MUC5B promoter variant that causes overexpression of MUC5B in the distal airspace also does not appear to be sufficient to cause pulmonary fibrosis. Although ~20% of non-Hispanic Whites have at least one copy of the MUC5B promoter variant4, IPF is a rare disease occurring in less than 0.1% of the population1. Despite the caveat that IPF is likely underdiagnosed38,39,40, it remains clear that the MUC5B promoter variant represents a low penetrance allele. It is thus likely that other gene variants and/or environmental exposures interact with the MUC5B promoter polymorphism to cause IPF in individuals with this disease-associated genetic variant. These observations lead us to postulate that the etiology of IPF will best be understood by identifying the genes, transcripts, and environmental exposures that interact with MUC5B and contribute to the development of IPF. While future studies in large populations of patients with IPF may reveal the critical biological mechanisms that interact with the gain-of-function MUC5B promoter variant to cause IPF, our Muc5b transgenic mice may prove critically important in identifying the genes and/or environmental exposures that contribute to this complex disease.
In a broader context, these findings suggest that by identifying those at risk, patients could be diagnosed prior to the development of permanent and extensive lung parenchymal scarring. Genetic risk factors, particularly the MUC5B promoter variant4, have been shown to identify individuals with preclinical interstitial changes on chest CT scan38,39,40 that progress to clinical significance and are associated with reduced survival39. Given the irreversible nature of IPF, even approved treatments (pirfenidone2 and nintedanib3) only modestly slow progression and have not been shown to alter the median 3–5 year survival. Patients with preclinical stages of pulmonary fibrosis may be ideal candidates for early intervention focused on avoiding the development of irreversible lung remodeling. Our findings suggest that targeting MUC5B in the terminal airways of patients with preclinical stages of interstitial lung disease represents a rational strategy to prevent the progression of preclinical pulmonary fibrosis.
Human lung specimens
Lung specimens from patients with idiopathic pulmonary fibrosis (IPF) and unaffected controls were obtained from the NHLBI Lung Tissue Research Consortium (LTRC; https://ltrcpublic.com/). All protocols were performed in compliance with all relevant ethical regulations approved by local Institutional Review Boards, and individuals gave written informed consent to participate.
Mouse husbandry
Studies using animals complied with all relevant ethical regulations. Mice were housed in accordance with the Institutional Animal Care and Use Committee of the University of Colorado and University of Alabama at Birmingham and kept in specific-pathogen free housing areas that were monitored by institutional animal care staff. Mice were maintained on a 12 h light/dark cycle and fed ad libitum a normal diet of water and irradiated chow (Harlan Teklad). Mice were assigned non-descriptive numbers at weaning by attachment of an ear tag. Moribund mice were identified by observing changes in body weight and behavior. In accordance with the veterinary care procedures at University of Colorado, a loss of 15% of body weight without recovery, hunching, fur ruffling and lethargy were used as criteria for determining moribundity. Animals were killed by intraperitoneal injection of sodium pentobarbital followed by exsanguination.
Genetically engineered mice
Muc5b+/− mice were generated previously10, and heterozygous males and females were bred to obtain male Muc5b−/− and Muc5b+/+ littermates for experiments. Scgb1a1-Muc5bTg mice were generated previously1 and were maintained by continuous hemizygous outcrosses with C57BL/6J male mice purchased from the Jackson Laboratory. SFTPC-Muc5bTg mice were created by insertion of the full-length 34-kb genomic coding region of the mouse Muc5b gene into a transgenic targeting cassette containing 3.7 kb of the human SFTPC gene 5′-flanking region and an IRES-mCherry reporter. Founders were generated by injecting the targeting vector into C57BL/6N pronuclei at the National Jewish Health transgenic mouse core. After confirmation of Muc5b overexpression by western blot, SFTPC-Muc5bTg mice were maintained by continuous hemizygous outcrosses with C57BL/6J male mice purchased from the Jackson Laboratory.
Bleomycin exposure
All mice began bleomycin treatment between 8 and 12 weeks of age, and only male mice were used due to their known sensitivity to bleomycin41,42. Mice were anesthetized with inhalational isoflurane (MWI Veterinary Supply Company, Boise, ID) and tracheas were directly visualized with a rodent laryngoscope (Penn Century, Wyndmoor, PA). A 22 g gavage needle was used to instill 50 µl of saline or bleomycin solution (APP Pharmaceuticals, Schaumburg, IL) directly into the trachea. Studies were conducted following repeated bleomycin challenges (Fig. 1) or following a single bleomycin challenge (Figs. 3, 4 and Supplementary Fig. 4). For repeated bleomycin challenge studies shown in Fig. 1d, e, animals received 2.5 U/kg bleomycin on d 0, followed by 1.5 U/kg on d 14 and d 28. For studies shown in Fig. 1f–i, SFTPC-Muc5bTg mice received 2.0 U/kg on d 0, followed by 1.0 U/kg on d 14 and d 28. All repeat challenge animals were collected on d 49. For single bleomycin challenge studies, SFTPC-Muc5bTg mice received 2.5 U/kg on d 0 and were studied on d 7 (Fig. 3), d 21 (Fig. 4), or d 70 (Fig. 4).
Bronchoalveolar lavage
Immediately following euthanasia, mouse tracheas were cannulated and the lungs lavaged three times with 0.5 ml of PBS containing 0.6 mM EDTA. Cells were counted using a hemacytometer and spun on to slides using a Cytospin 4 (Thermo Fisher Scientific, Waltham, MA). The slides were stained with the Hema 3 kit (Thermo Fisher) and used for differential counts of macrophages, lymphocytes, and neutrophils. Remaining lavage fluid was divided into two aliquots. One portion was centrifuged at 300×g for 10 min at 4 °C for subsequent studies of cytokines and other soluble factors, and the other portion was stored without centrifugation to preserve high molecular weight mucins that sediment at low centrifugation speeds. Both were frozen on dry ice and stored at −80 °C.
Hydroxyproline assay
For repeat bleomycin studies, the entire right lung was removed, added to 550 μl PBS and homogenized using Lysing Matrix D and a FastPrep-24 bead beater (MP Biomedicals, Santa Ana, CA). Samples were then snap frozen and stored at −80 °C. Thawed lung homogenates were hydrolyzed overnight with a 1:1 volume of 12 N HCl at 100 °C. Afterwards, 5 μl samples of hydrolyzed lung and hydroxyproline standards were plated in duplicate in 96-well plates and incubated for 20 min in 100 μl of 0.06 M chloramine T in citrate-acetate buffer, pH 6. Ehrlich's solution (100 μl; 1.2 M dimethylaminobenzaldehyde in 22% perchloric acid-n-propanol) was then added to each sample. After a 20-min incubation at 65 °C, plates were analyzed in a Synergy H1 plate reader (Biotek, Winooski, VT) at 550 nm for colorimetric analysis. The concentration of each sample was determined by interpolation along a standard curve.
For P-2119 studies17, the right upper, lower, and accessory lobes were homogenized in 500 μl PBS and then processed as above.
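The final interpolation step can be sketched as follows; the standard concentrations, absorbance readings, and dilution handling below are hypothetical placeholders used only to illustrate the calculation, not values taken from the protocol above.

```python
import numpy as np

# Hypothetical hydroxyproline (HP) standard curve: plated standard
# concentrations (ug/ml) and their measured absorbances at 550 nm.
std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0, 40.0])     # ug/ml
std_a550 = np.array([0.02, 0.10, 0.19, 0.37, 0.74, 1.45])  # A550 (illustrative)

# Linear fit of absorbance vs. concentration through the standards.
slope, intercept = np.polyfit(std_conc, std_a550, 1)

def hp_micrograms_per_ml(a550, dilution_factor=1.0):
    """Interpolate a hydrolysate's HP concentration from its A550 along the standard curve."""
    return (a550 - intercept) / slope * dilution_factor

# Duplicate wells of one hydrolyzed lung sample (hypothetical readings).
print(hp_micrograms_per_ml(np.mean([0.52, 0.55])))
```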
Histochemistry and immunohistochemistry
At harvest, the left lung was inflated with 4% paraformaldehyde (PFA) at a pressure of 20 cm H2O for 5 min. The lung was then removed and placed in fresh 4% PFA overnight for fixation. Left lungs were cut into uniform slices, and volumes were recorded using Cavalieri imaging calculations. Lungs were then embedded in paraffin, cut into 5 µm sections, and collected on positively charged glass slides. For hematoxylin and eosin staining, tissues were stained using standard reagents at the University of Colorado Cancer Center Pathology Core.
For immunohistochemistry, tissues were heated in citrate buffer for antigen retrieval (20 min boiling), and tissues were then incubated overnight in goat primary antisera against Muc5b (1:1000, Everest Biotech, Upper Heyford, UK) or with rabbit primary anti-mouse Muc5b antisera (1:20,000)10. Secondary anti-goat antibody diluted 1:1000 tagged with AlexaFluor488 (Thermo Fisher) or ImmPRESS anti-rabbit conjugated with horseradish peroxidase (Vector Laboratories) was applied for 1 h at room temperature. Immunofluorescence slides were coverslipped with VECTASHIELD HardSet mounting medium with DAPI (Vector). DAB-stained slides were counterstained with hematoxylin and permanently mounted. Samples were visualized using an Olympus BX63 microscope (Olympus, Tokyo, Japan).
RNAScope detection was used to perform in situ hybridization according to the manufacturer's protocol (Advanced Cell Diagnostics, Hayward, CA). Briefly, formalin-fixed paraffin embedded human lungs were cut into 5 µm thick tissue sections. Slides were deparaffinized in xylene, followed by rehydration in a series of ethanol/water washes. Following citrate buffer (Advanced Cell Diagnostics) antigen retrieval, slides were rinsed in deionized water, and immediately treated with protease (Advanced Cell Diagnostics) at 40 °C for 30 min in a HybEZ hybridization oven (Advanced Cell Diagnostics). Probes directed against MUC5B and SFTPC mRNA and control probes were applied at 40 °C in the following order: target probes, preamplifier, amplifier, and label probe for 15 min. After each hybridization step, slides were washed two times at room temperature. Chromogenic detection was performed followed by counterstaining with hematoxylin (American MasterTech Scientific, Lodi, CA). Staining was visualized using an Olympus BX63 microscope with a ×60 oil immersion lens, z-stack imaging, and extended focal image processing (Olympus, Tokyo, Japan).
Second harmonic generation (SHG)
Autofluorescence and SHG signals were acquired using a Zeiss 780 laser-scanning confocal/multiphoton-excitation fluorescence microscope with a 34-channel GaAsP QUASAR detection unit and non-descanned detectors for two-photon fluorescence (Zeiss, Thornwood, NY). The imaging settings were initially defined empirically to maximize the signal-to-noise ratio and to avoid saturation, and they were kept constant for all measurements for comparative imaging and results. Seven percent of a two-photon Chameleon laser tuned to 800 nm was used for excitation, and emission signals corresponding to the autofluorescence and SHG signals were detected simultaneously through non-descanned detectors. Image processing was performed using Zeiss ZEN 2012 software. Fifteen images were obtained for each lung using standardized uniform random sampling43. The series of images were analyzed in ImageJ software. The percentage of area covered by fibrillar collagens was quantified for each slide, and collagen fractions were normalized to lung volumes. Histograms for cumulative data for each group were created using GraphPad Prism (GraphPad, La Jolla, CA).
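A minimal sketch of the per-image quantification described above is given below. The Otsu threshold and the volume scaling are assumptions made for illustration; the actual thresholding was performed in ImageJ, and the exact normalization to lung volume is not specified here.

```python
import numpy as np
from skimage import io, filters

def fibrillar_collagen_percent(image_path):
    """Percent of one single-channel SHG image covered by collagen-positive pixels."""
    img = io.imread(image_path).astype(float)
    cutoff = filters.threshold_otsu(img)   # assumed threshold choice for illustration
    return 100.0 * np.mean(img > cutoff)

def collagen_volume(image_paths, lung_volume):
    """Mean SHG-positive fraction over the sampled images, scaled by the Cavalieri lung volume."""
    mean_percent = np.mean([fibrillar_collagen_percent(p) for p in image_paths])
    return mean_percent / 100.0 * lung_volume
```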
Micro-optical coherence tomography
Adult SFTPC-Muc5bTg and Scgb1a1-Muc5bTg mice at 8–12 weeks of age and their age-matched littermates were killed using ketamine and xylazine. Tracheas were removed, separated from the distal airways and lung tissue, and placed in HEPES-buffered DMEM. Tracheal tissue was then dissected along the transverse axis to expose the luminal epithelial cell surface and incubated for 30 min under physiologic conditions (37 °C, 5% CO2, 100% humidity) to allow the epithelial surface to equilibrate.
Functional assessments of the tracheal tissue explants were performed using µOCT, with acquisition speed set at 20,480 Hz line rate to yield 100 frames per second at 256 lines per frame. Images were recorded in 8–10 ROI per trachea by an imaging specialist blinded to genotype. Images were recorded at randomly chosen intervals on the mucosal surface with the optical beam scanned along the longitudinal direction16,44.
Several metrics were simultaneously quantified from µOCT-recorded images using ImageJ and Matlab. Airway surface liquid (ASL) and periciliary layer (PCL) depths were measured directly44. Mucociliary transport rate was calculated based on the slope of mucus particulate displacement over several frames in the ASL region up to 50 µm above the epithelial cell surface. Ciliary beat frequency (CBF) was determined using Fourier transform analysis of the reflectance of beating cilia. Imaging analysis was performed in a blinded fashion with respect to genotype.
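For orientation, the two functional read-outs might be computed along the following lines; the drift cutoff, peak-picking, and input formats are illustrative assumptions rather than the exact Matlab/ImageJ routines used.

```python
import numpy as np

FRAME_RATE = 100.0  # frames per second, as described above

def ciliary_beat_frequency(roi_intensity):
    """Dominant frequency (Hz) of a cilia-region reflectance time series (one value per frame)."""
    x = np.asarray(roi_intensity, dtype=float)
    x = x - x.mean()                                 # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / FRAME_RATE)
    band = freqs > 1.0                               # ignore slow drift before peak-picking
    return freqs[band][np.argmax(power[band])]

def mucociliary_transport_rate(particle_position_um):
    """Transport rate (um/s) as the slope of particle position vs. time over several frames."""
    t = np.arange(len(particle_position_um)) / FRAME_RATE
    slope, _ = np.polyfit(t, particle_position_um, 1)
    return slope
```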
In vitro mucolytic testing
The rate of thiol-disulfide exchange was assayed for NAC and P-2119 utilizing Ellman's reagent (5,5′-dithiobis-2-nitrobenzoic acid or DTNB) and methods adapted from Han and Han45. Briefly, NAC or P-2119 was added to a final concentration of 22.5 µM in an excess of DTNB (45 µM) and formation of the colored DTNB cleavage product (2-nitro-5-thiobenzoate) was monitored over time at 412 nm with a spectrophotometer.
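As a sketch of how such kinetic readings might be reduced, the snippet below converts 412 nm absorbances to product concentration and fits a single-exponential rate constant; the TNB extinction coefficient and the exponential model are assumptions for illustration and are not taken from the protocol above.

```python
import numpy as np
from scipy.optimize import curve_fit

EXT_COEFF_TNB = 14150.0   # assumed molar extinction coefficient of TNB at 412 nm (M^-1 cm^-1)

def tnb_molar(a412, path_length_cm=1.0):
    """Beer-Lambert conversion of A412 to concentration of the cleaved TNB product."""
    return a412 / (EXT_COEFF_TNB * path_length_cm)

def exchange_rate_constant(time_s, a412):
    """Fit A(t) = A_max * (1 - exp(-k t)); larger k indicates faster thiol-disulfide exchange."""
    model = lambda t, a_max, k: a_max * (1.0 - np.exp(-k * t))
    (a_max, k), _ = curve_fit(model, time_s, a412, p0=(np.max(a412), 1e-2))
    return k
```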
MUC5B was analyzed using whole saliva or mucus purified from primary human bronchial epithelial cell cultures and concentrated to 5% solids (w/v)46. Briefly, mucus samples were treated with the reducing agents NAC or P-2119 (0.03–30 mM) for 1 h at room temperature. Mucin polymer reduction was assessed by agarose-gel western blot (see Supplementary Fig. 8 for uncropped scans). The mucus samples were alkylated with N-ethylmaleimide (100 mM final concentration) and mucins were separated by 1% (w/v) agarose-gel electrophoresis, vacuum-blotted onto a PVDF membrane, and detected by western blotting with MUC5B antibodies47.
Mucus microrheology
Microbead rheology was performed by tracking the thermally driven motion of embedded 1 µm diameter carboxylated microspheres (FluoSpheres, Fisher Scientific)37,48,49. Briefly, microspheres were added to mucus and allowed to mix while rotating overnight at 4 °C. After reduction, fifteen 30 s movies were collected at 60 frames per second on a Nikon Eclipse TE 2000 microscope at ×40 with a Flea 3 camera (FLIR Machine Vision, Richmond BC, Canada). Particle trajectories were subsequently tracked using TrackPy (v2.4, https://doi.org/10.5281/zenodo.12255). The track positions were corrected in Matlab (The MathWorks, Natick, MA) to account for linear drift, and the mean squared displacement (MSD) was calculated for each bead according to Equation 1
$$\Delta r^2(\tau) = \frac{1}{N-\tau}\sum_{i=1}^{N-\tau}\left[\left(x(t_i+\tau)-x(t_i)\right)^2 + \left(y(t_i+\tau)-y(t_i)\right)^2\right],$$
where N = 1800 total frames and τ is the time-lag.
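A direct translation of this equation for one drift-corrected track might look as follows; tracking packages such as TrackPy provide equivalent routines, so this sketch is only meant to make the formula concrete.

```python
import numpy as np

def mean_squared_displacement(x, y, max_lag=None):
    """MSD of one drift-corrected bead track, evaluated exactly as in the equation above.

    x, y: positions per frame (N = 1800 frames here); returns msd[tau] for tau = 1..max_lag.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    max_lag = max_lag or n - 1
    out = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        dx = x[tau:] - x[:-tau]
        dy = y[tau:] - y[:-tau]
        out[tau - 1] = np.mean(dx * dx + dy * dy)   # average over the N - tau frame pairs
    return out
```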
Mucus macrorheology
Mucus was collected from human bronchial epithelium (HBE) cell culture models48,50,51 and prepared to 5% solids to mimic the concentration seen in obstructive airways disease. Mucus was treated with concentrations of compound from 0.1 mM to 10 mM and allowed to incubate for 1 h at 37 °C. The rheological properties were determined by analyzing the linear regime of a fixed-frequency stress sweep50,51. All assays were performed on a Discovery Hybrid 3 rheometer with a 20 mm diameter, 1° cone with solvent trap (TA Instruments, New Castle, Delaware). Data were analyzed using custom Matlab scripts (MathWorks). All treatments were performed by mixing P-2119 in PBS at pH 7 to a concentration 100-fold higher than the desired final concentration, and 10 µl of compound was added to 90 µl of HBE mucus and incubated at 37 °C for 1 h. All assays were performed at 23 °C to prevent evaporation.
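A minimal sketch of how complex viscosity might be extracted from the linear regime of such a stress sweep is shown below; the 5% linearity criterion and the data layout are assumptions, not the actual Matlab analysis.

```python
import numpy as np

def complex_viscosity(g_storage, g_loss, frequency_hz):
    """|eta*| in Pa.s from the storage (G') and loss (G'') moduli at a fixed frequency."""
    omega = 2.0 * np.pi * frequency_hz
    return np.sqrt(np.asarray(g_storage) ** 2 + np.asarray(g_loss) ** 2) / omega

def linear_regime_viscosity(g_storage, g_loss, frequency_hz, tolerance=0.05):
    """Average |eta*| over the low-stress points where G' stays within 5% of its initial value."""
    g_storage = np.asarray(g_storage, float)
    in_linear_regime = np.abs(g_storage - g_storage[0]) / g_storage[0] < tolerance
    return float(np.mean(complex_viscosity(g_storage[in_linear_regime],
                                           np.asarray(g_loss)[in_linear_regime],
                                           frequency_hz)))
```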
Multi-angle laser light scattering spectrometry
Molecular weight determination was performed on an aliquot of the same mucus that was used for rheology. Ten microliters of mucus was added to 40 μl of 6 M guanidinium HCl. Samples were then further diluted 1:20 into light scattering buffer for high-pressure liquid chromatography (HPLC) on a CL2B column to separate large molecular weight mucins from other small proteins. HPLC was run in combination with refractometry (TRex, Wyatt) and multi-angle laser light scattering (MALLS, DAWN Heleos II, Wyatt Technologies) to determine molecular weight (shown) and mucin concentration (unchanged by reduction).
Aerosol exposure
Solutions were nebulized with an Aeroneb nebulizer. Aerosolized P-211917 was delivered to a 24-port nose-only inhalation chamber (In-Tox Products, Inc., Albuquerque, NM) operated at ~10 l/min. Exposure monitoring was conducted by collection of air samples from the test atmosphere onto 47 mm Teflon membrane filters (TEFLO, Pall-Gelman) that were weighed before and after sample collection. The aerosol was withdrawn directly from the exposure chamber atmosphere with a flow rate of ~1 liter/minute. Particle size was ~1 μm with a geometric standard deviation of ~2.0.
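The gravimetric exposure-monitoring arithmetic amounts to dividing the collected filter mass by the sampled air volume; a sketch with hypothetical filter weights follows.

```python
def chamber_concentration_mg_per_m3(filter_pre_mg, filter_post_mg,
                                    sample_flow_l_per_min, sample_minutes):
    """Gravimetric aerosol concentration from a pre/post-weighed filter sample."""
    sampled_volume_m3 = sample_flow_l_per_min * sample_minutes / 1000.0  # litres -> cubic metres
    return (filter_post_mg - filter_pre_mg) / sampled_volume_m3

# Hypothetical example: 0.9 mg collected over 60 min at ~1 l/min gives ~15 mg/m^3.
print(chamber_concentration_mg_per_m3(101.3, 102.2, 1.0, 60))
```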
In vivo mucolytic testing
To test mucolytic activity, mice were tested under two conditions. First, wild-type C57BL/6J mice were challenged IT with 20 μg of LPS (E. coli 055:B5) administered in a 50 μl volume of saline. Mice were then treated 2 d post LPS challenge with P-2119 (135 mM) or vehicle (normal saline) for 60 min. Mice were studied 15, 30, 60, and 120 min post P-2119 treatment (68 mM aerosol) to evaluate the potency and kinetics of mucolysis induced by P-2119. Second, a separate group of SFTPC-Muc5bTg mice was then analyzed following a single IT bleomycin challenge. Bleomycin treated mice were then treated with P-2119 (135 mM) or vehicle (normal saline) by aerosol for 60 min on d 7 post challenge, a peak point of bleomycin-induced inflammation. Mice were studied immediately after withdrawal from P-2119 or vehicle challenge or 1 h after P-2119 (60 and 120 min timepoints post initiation of treatments, respectively). Mucolysis was assessed in un-centrifuged lung lavage fluid that was treated with 1 M iodoacetamide to quench drug activity and alkylate thiols liberated by disulfide reduction. Western blotting was performed as described above for human MUC5B, in this case with rabbit-anti-mouse Muc5b antisera.
We further evaluated the functional consequences of mucolytic treatments in mouse lungs. The same LPS (d 2), bleomycin (d 7), and mucolytic treatment groups described above for assessing mucolysis biochemically were used to assess mucolytic effects on MCC and mucus obstruction. To test MCC, we developed an acute endogenous clearance (AEC) measurement. Leukocyte numbers in lung lavage fluid were enumerated in P-2119 and vehicle control groups. AEC was determined by quantifying an acute decrease in leukocyte numbers in lavage fluid. In addition, to quantify mucus elimination, distal airspace Muc5b was examined immunohistochemically by fixing air-inflated lungs via immersion in methacarn21,52. Fractions of randomly selected ventral alveolar airspaces occupied by Muc5b were assessed by point counting21.
Cytokine and chemokine assessment
Chemokines and cytokines were assessed using a 19-plex MSD Multi-spot Assay System in bronchoalveolar lavage fluid supernatant obtained from mice exposed to saline (untreated) or P-2119 treatment (treated). Data are presented in pg/ml of each analyte (Supplementary Table 1). Statistical analysis was performed using a Mann–Whitney test.
Redox balance testing
Levels of glutathione (GSH) and oxidized glutathione (GSSG) were assessed using a GSH-Glo™ Glutathione Assay (Promega, Madison, WI). The protocol (384-well format) was modified from the manufacturer's protocol (96-well format, Promega V6911). In brief, 2.5 µl of unclarified lung lavage samples (in duplicate) were treated with 2.5 µl GSH-Glo reagent with or without TCEP (final concentration = 1 mM). Following a 30 min incubation at room temperature, luciferin detection reagent (5 µl) was added, and luminescence signals were recorded on a plate reader (PerkinElmer EnVision 2104, 700 nm, 0.5 s exposure) 15 min later. For reactive oxygen species (ROS), samples were tested using a ROS-Glo Assay (Promega). Lung lavage samples were prepared as described above for GSH-Glo assays. Following a 60 min incubation at room temperature, luciferin detection reagent (5 µl) was added, and luminescence signals were recorded on a plate reader (PerkinElmer EnVision 2104, 700 nm, 0.5 s exposure) 20 min later.
Statistical analysis
Hydroxyproline, redox, inflammation, and gene expression data were analyzed by t-test or non-parametric analyses for non-normally distributed data, and by parametric or non-parametric ANOVAs for multiple comparisons (GraphPad, La Jolla, CA). SHG percentage area histograms were fitted with a Gaussian model using a least-χ2 fit to determine the number of peaks in each experimental group in IgorPro (WaveMetrics, Lake Oswego, OR), and each pair was analyzed using a t-test. Degrees of freedom were over 100 for each distribution, giving a rejection region of 3.09 to ∞ for critical values (t) at a 0.1% level of significance.
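As a point of reference, the quoted critical value of 3.09 corresponds to the large-degrees-of-freedom (normal) limit of the one-sided t distribution at the 0.1% level; a quick check is sketched below.

```python
from scipy import stats

# One-sided critical t values at the 0.1% significance level.
for df in (100, 500, 10_000):
    print(df, stats.t.ppf(0.999, df))       # decreases toward the large-sample limit below

print("normal limit:", stats.norm.ppf(0.999))  # ~3.09, the value quoted above
```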
Data availability
The data that support the findings of this study are available from the authors on reasonable request; see author contributions for specific data sets.
Raghu, G. et al. Diagnosis of idiopathic pulmonary fibrosis. An official ATS/ERS/JRS/ALAT Clinical Practice Guideline. Am. J. Respir. Crit. Care. Med. 198, e44–e68 (2018).
King, T. E. Jr. et al. A phase 3 trial of pirfenidone in patients with idiopathic pulmonary fibrosis. N. Engl. J. Med. 370, 2083–2092 (2014).
Richeldi, L. et al. Design of the INPULSIS trials: two phase 3 trials of nintedanib in patients with idiopathic pulmonary fibrosis. Respir. Med. 108, 1023–1030 (2014).
Seibold, M. A. et al. A common MUC5B promoter polymorphism and pulmonary fibrosis. N. Engl. J. Med. 364, 1503–1512 (2011).
Evans, C. M. et al. Idiopathic pulmonary fibrosis: a genetic disease that involves mucociliary dysfunction of the peripheral airways. Physiol. Rev. 96, 1567–1591 (2016).
Rose, M. C. & Voynow, J. A. Respiratory tract mucin genes and mucin glycoproteins in health and disease. Physiol. Rev. 86, 245–278 (2006).
Seibold, M. A. et al. The idiopathic pulmonary fibrosis honeycomb cyst contains a mucocilary pseudostratified epithelium. PLoS ONE 8, e58658 (2013).
Nakano, Y. et al. MUC5B promoter variant rs35705950 affects MUC5B expression in the distal airways in idiopathic pulmonary fibrosis. Am. J. Respir. Crit. Care. Med. 193, 464–466 (2016).
Helling, B. A. et al. Regulation of MUC5B expression in idiopathic pulmonary fibrosis. Am. J. Respir. Cell Mol. Biol. 57, 91–99 (2017).
Roy, M. G. et al. Muc5b is required for airway defence. Nature 505, 412–416 (2014).
Degryse, A. L. et al. Repetitive intratracheal bleomycin models several features of idiopathic pulmonary fibrosis. Am. J. Physiol. Lung Cell Mol. Physiol. 299, L442–L452 (2010).
Button, B. et al. A periciliary brush promotes the lung health by separating the mucus layer from airway epithelia. Science 337, 937–941 (2012).
Sloane, P. A. et al. A pharmacologic approach to acquired cystic fibrosis transmembrane conductance regulator dysfunction in smoking related lung disease. PLoS ONE 7, e39809 (2012).
Chu, K. K. et al. In vivo imaging of airway cilia and mucus clearance with micro-optical coherence tomography. Biomed. Opt. Express 7, 2494–2505 (2016).
Solomon, G. M. et al. Assessment of ciliary phenotype in primary ciliary dyskinesia by micro-optical coherence tomography. JCI Insight 2, e91702 (2017).
Birket, S. E. et al. A functional anatomic defect of the cystic fibrosis airway. Am. J. Respir. Crit. Care. Med. 190, 421–432 (2014).
Johnson, M. R. & Thelin, W. R. Novel monothiol mucolytic agents. Patent WO2016123335A1 (2016).
Moeller, A., Ask, K., Warburton, D., Gauldie, J. & Kolb, M. The bleomycin animal model: a useful tool to investigate treatment options for idiopathic pulmonary fibrosis? Int. J. Biochem. Cell Biol. 40, 362–382 (2008).
Fahy, J. V. & Dickey, B. F. Airway mucus function and dysfunction. N. Engl. J. Med. 363, 2233–2247 (2010).
Button, B., Anderson, W. H. & Boucher, R. C. Mucus hyperconcentration as a unifying aspect of the chronic bronchitic phenotype. Ann. Am. Thorac. Soc. 13, S156–S162 (2016).
Evans, C. M. et al. The polymeric mucin Muc5ac is required for allergic airway hyperreactivity. Nat. Commun. 6, 6281 (2015).
Peljto, A. L. et al. Association between the MUC5B promoter polymorphism and survival in patients with idiopathic pulmonary fibrosis. JAMA 309, 2232–2239 (2013).
Molyneaux, P. L. et al. The role of bacteria in the pathogenesis and progression of idiopathic pulmonary fibrosis. Am. J. Respir. Crit. Care Med. 190, 906–913 (2014).
Fernandez-Blanco, J. A. et al. Attached stratified mucus separates bacteria from the epithelial cells in COPD lungs. JCI Insight 3, 906–913 (2018).
Bauer, A. K., et al. Requirement for MUC5AC in KRAS-dependent lung carcinogenesis. JCI Insight 3, 120941 (2018).
Livraghi-Butrico, A. et al. Contribution of mucus concentration and secreted mucins Muc5ac and Muc5b to the pathogenesis of muco-obstructive lung disease. Mucosal Immunol. 10, 829 (2017).
Evans, C. M. et al. Mucin is produced by clara cells in the proximal airways of antigen-challenged mice. Am. J. Respir. Cell Mol. Biol. 31, 382–394 (2004).
Zhu, Y. et al. Munc13-2-/- baseline secretion defect reveals source of oligomeric mucins in mouse airways. J. Physiol. 586, 1977–1992 (2008).
Reader, J. R. et al. Pathogenesis of mucous cell metaplasia in a murine asthma model. Am. J. Pathol. 162, 2069–2078 (2003).
Raju, S. V. et al. The cystic fibrosis transmembrane conductance regulator potentiator ivacaftor augments mucociliary clearance abrogating cystic fibrosis transmembrane conductance regulator inhibition by cigarette smoke. Am. J. Respir. Cell Mol. Biol. 56, 99–108 (2017).
Boucher, R. C. Idiopathic pulmonary fibrosis–a sticky business. N. Engl. J. Med. 364, 1560–1561 (2011).
Williamson, J. D., Sadofsky, L. R. & Hart, S. P. The pathogenesis of bleomycin-induced lung injury in animals and its applicability to human idiopathic pulmonary fibrosis. Exp. Lung Res. 41, 57–73 (2015).
Peng, R. et al. Bleomycin induces molecular changes directly relevant to idiopathic pulmonary fibrosis: a model for "active" disease. PLoS ONE 8, e59348 (2013).
Yang, I. V. et al. Expression of cilium-associated genes defines novel molecular subtypes of idiopathic pulmonary fibrosis. Thorax 68, 1114–1121 (2013).
Pardo, A. et al. Up-regulation and profibrotic role of osteopontin in human idiopathic pulmonary fibrosis. PLoS Med. 2, e251 (2005).
Gharib, S. A. et al. Matrix metalloproteinase-7 coordinates airway epithelial injury response and differentiation of ciliated cells. Am. J. Respir. Cell Mol. Biol. 48, 390–396 (2013).
Anderson, W. H. et al. The relationship of mucus concentration (Hydration) to mucus osmotic pressure and transport in chronic bronchitis. Am. J. Respir. Crit. Care. Med. 192, 182–190 (2015).
Hunninghake, G. M. et al. MUC5B promoter polymorphism and interstitial lung abnormalities. N. Engl. J. Med. 368, 2192–2200 (2013).
Araki, T. et al. Development and progression of interstitial lung abnormalities in the Framingham Heart Study. Am. J. Respir. Crit. Care Med. 194, 1514–1522 (2016).
Kropski, J. A. et al. Extensive phenotyping of individuals at risk for familial interstitial pneumonia reveals clues to the pathogenesis of interstitial lung disease. Am. J. Respir. Crit. Care Med. 191, 417–426 (2015).
Voltz, J. W. et al. Male sex hormones exacerbate lung function impairment after bleomycin-induced pulmonary fibrosis. Am. J. Respir. Cell Mol. Biol. 39, 45–52 (2008).
Redente, E. F. et al. Age and sex dimorphisms contribute to the severity of bleomycin-induced lung injury and fibrosis. Am. J. Physiol. Lung Cell Mol. Physiol. 301, L510–L518 (2011).
Ochs, M. & Muhlfeld, C. Quantitative microscopy of the lung: a problem-based approach. Part 1: basic principles of lung stereology. Am. J. Physiol. Lung Cell Mol. Physiol. 305, L15–L22 (2013).
Liu, L. et al. Method for quantitative study of airway functional microanatomy using micro-optical coherence tomography. PLoS ONE 8, e54473 (2013).
Han, J. C. & Han, G. Y. A procedure for quantitative determination of tris(2-carboxyethyl)phosphine, an odorless reducing agent more stable and effective than dithiothreitol. Anal. Biochem. 220, 5–10 (1994).
Kirkham, S., Sheehan, J. K., Knight, D., Richardson, P. S. & Thornton, D. J. Heterogeneity of airways mucus: variations in the amounts and glycoforms of the major oligomeric mucins MUC5AC and MUC5B. Biochem. J. 361, 537–546 (2002).
Thornton, D. J., Carlstedt, I. & Sheehan, J. K. Identification of glycoproteins on nitrocellulose membranes and gels. Methods Mol. Biol. 32, 119–128 (1994).
Duncan, G. A. et al. Microstructural alterations of sputum in cystic fibrosis lung disease. JCI Insight 1, e88198 (2016).
Hill, D. B. et al. A biophysical basis for mucus solids concentration as a candidate biomarker for airways disease. PLoS ONE 9, e87681 (2014).
ADS Article Google Scholar
Seagrave, J., Albrecht, H. H., Hill, D. B., Rogers, D. F. & Solomon, G. Effects of guaifenesin, N-acetylcysteine, and ambroxol on MUC5AC and mucociliary transport in primary differentiated human tracheal-bronchial cells. Respir. Res. 13, 98 (2012).
Matsui, H. et al. A physical linkage between cystic fibrosis airway surface dehydration and Pseudomonas aeruginosa biofilms. Proc. Natl Acad. Sci. USA 103, 18131–18136 (2006).
Johansson, M. E. et al. The inner of the two Muc2 mucin-dependent mucus layers in colon is devoid of bacteria. Proc. Natl Acad. Sci. USA 105, 15064–15069 (2008).
The authors thank Dr. Jennifer Matsuda at National Jewish Health for generating Muc5b transgenic mice, Melanie Sawyer at Parion Sciences for assistance with in vitro mucolytic testing, and Jake McDonald at Lovelace Respiratory Research Institute for support with aerosol delivery. This research was supported by the National Heart, Lung and Blood Institute (UH2/3-HL123442, R01-HL097163, R01-HL080396, R01-HL130938, R21/R33-HL120770, R35-HL135816, and P01-HL092870, P01-HL108808), National Institute of Diabetes and Digestive and Kidney Diseases (P30-DK072482, P30DK065988), Department of Defense (W81XWH-17-0597), Cystic Fibrosis Foundation HILL16XXO and BOUCHER15RO, and Parion Sciences, Inc.
These authors contributed equally: Laura A. Hancock, Corinne E. Hennessy.
Department of Medicine, University of Colorado Denver, School of Medicine, Aurora, CO, 80045, USA
Laura A. Hancock, Corinne E. Hennessy, Evgenia Dobrinskikh, Alani Estrella, Naoko Hara, Marvin I. Schwarz, Ivana V. Yang, Christopher M. Evans & David A. Schwartz
Department of Medicine, University of Alabama at Birmingham, School of Medicine, Birmingham, AL, 35294, USA
George M. Solomon & Steven M. Rowe
Marsico Lung Institute, University of North Carolina, Chapel Hill, NC, 27599, USA
David B. Hill, William J. Kissner & Matthew R. Markovetz
Physics and Astronomy, University of North Carolina, Chapel Hill, NC, 27599, USA
David B. Hill
Parion Sciences, Inc, Durham, NC, 27713, USA
Diane E. Grove Villalon, Matthew E. Voss & William R. Thelin
Wellman Center for Photomedicine and Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts; Department of Pathology, Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts, USA
Guillermo J. Tearney
Department of Chemistry, The Scripps Research Institute, Jupiter, FL, 33458, USA
Kate S. Carroll & Yunlong Shi
Department of Immunology, University of Colorado Denver, School of Medicine, Aurora, CO, 80045, USA
Christopher M. Evans & David A. Schwartz
L.A.H., G.M.S., W.R.T., I.V.Y., C.M.E., and D.A.S. designed the study; L.A.H., C.E.H., E.D., A.E., N.H., and M.I.S. designed, performed, and analyzed experiments. G.M.S., G.J.T., and S.M.R. designed, performed, and analyzed µOCT experiments. D.B.H., W.J.K., and M.R.M. designed, performed, and analyzed mucus biophysical and mucin molecular weight assays. D.E.G.V. M.E.V. and W.R.T. synthesized and analyzed P-2119. K.S.C. and Y.S. performed analyses of redox balance. L.A.H. and D.A.S. wrote the manuscript with help from all coauthors. C.E.H. and D.A.S. supervised the manuscript preparation.
Correspondence to David A. Schwartz.
D.A.S. is the founder and chief scientific officer of Eleven P15, a company focused on the early diagnosis and treatment of pulmonary fibrosis. D.A.S. has patents awarded (US Patent no: 8,673,565) and submitted (US Patent application no: 62/250,390, US Patent application no: 62/525,087, and US Patent application no: 62/525,088) for the treatment and diagnosis of fibrotic lung disease. D.E.G.V., M.E.V. and W.R.T. are employees of Parion Sciences. The remaining authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Hancock, L.A., Hennessy, C.E., Solomon, G.M. et al. Muc5b overexpression causes mucociliary dysfunction and enhances lung fibrosis in mice. Nat Commun 9, 5363 (2018). https://doi.org/10.1038/s41467-018-07768-9
A mathematical study of diffusive logistic equations with mixed type boundary conditions
Kazuaki Taira
Institute of Mathematics, University of Tsukuba, Tsukuba 305–8571, Japan
Dedicated to the memory of Professor Rosella Mininni (1963–2020)
Received: August 2021; Early access: December 2021
The purpose of this paper is to provide a careful and accessible exposition of static bifurcation theory for a class of mixed type boundary value problems for diffusive logistic equations with indefinite weights, which model population dynamics in environments with spatial heterogeneity. We discuss the changes that occur in the structure of the positive solutions as a parameter varies near the first eigenvalue of the linearized problem, and prove that the most favorable situations will occur if there is a relatively large favorable region (with good resources and without crowding effects) located some distance away from the boundary of the environment. A biological interpretation of the main theorem is that an initial population will grow exponentially until limited by lack of available resources if the diffusion rate is below some critical value; this idea is generally credited to the English economist T. R. Malthus. On the other hand, if the diffusion rate is above this critical value, then the model obeys the logistic equation introduced by the Belgian mathematical biologist P. F. Verhulst. The approach in this paper is distinguished by the extensive use of the ideas and techniques characteristic of recent developments in partial differential equations.
Keywords: Diffusive logistic equation, spatial heterogeneity, mixed type boundary condition, population dynamics, bifurcation theory, positive solution, super-subsolution method.
Mathematics Subject Classification: Primary: 35J65; Secondary: 35P30, 35J25, 92D25.
Citation: Kazuaki Taira. A mathematical study of diffusive logistic equations with mixed type boundary conditions. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021166
Figure 1. The bounded domain $ D $ and the unit outward normal $ \mathbf{n} $ to $ \partial D $
Figure 2. The boundary portion $ M $ is deadly and its complement $ {\partial D} \setminus M $ is a barrier
Figure 3. The structural condition (Z.1) on the function $ h(x) $
Figure 4. The bifurcation diagram of Theorem 1.5
Figure 5. The bifurcation diagram of Theorem 1.5: Malthus versus Verhulst
Figure 6. The positive solution curve (16) for $ \lambda > \lambda_{1}(m) $ under condition (Z.3) via the Semenov approximation
Figure 7. The bifurcation diagram of Remark 1.4 under condition (Z.3) (Verhulst theory)
Figure 8. Conditions (b) and (d) in Theorem 3.3
Figure 9. The bifurcation curves $ \varGamma_{1} $ and $ \varGamma_{2} $ of the nonlinear equation (20) in Theorem 3.3
Figure 10. The point $ \left(1/ \mathop{\mathrm{spr}}(B), 0\right) $ is a bifurcation point of the nonlinear equation (21) to the trivial solution in Theorem 3.4
Figure 11. The mapping properties of the resolvent $ R_{c} $ in the spaces $ C(\overline{D}) $, $ W^{2,p}(D) $ and $ C^{1}_{B}(\overline{D}) $
Figure 12. The mapping properties of the resolvent $ R_{c} = \left(- \varDelta + c(x)\right)^{-1} $ in the spaces $ C(\overline{D}) $, $ C_{e}(\overline{D}) $ and $ C^{1}_{B}(\overline{D}) $
Figure 13. The first eigenvalues $ \mu_{1}(\lambda) = \gamma_{1}(\lambda) - \lambda $, $ \mu_{1}(0) = \gamma_{1} $ and $ \mu_{1}\left(\lambda_{1}(m)\right) = 0 $
Figure 14. The first eigenvalues $ \mu_{D}(\lambda) $, $ \mu_{N}(\lambda) $ and $ \mu_{1}(\lambda) $ in the case $ \int_{D}m(x)\,dx < 0 $
Figure 15. The first eigenvalues $ \mu_{D}(\lambda) $, $ \mu_{N}(\lambda) $ and $ \mu_{1}(\lambda) $ in the case $ \int_{D}m(x)\,dx = 0 $
Figure 16. The first eigenvalues $ \mu_{D}(\lambda) $, $ \mu_{N}(\lambda) $ and $ \mu_{1}(\lambda) $ in the case $ \int_{D}m(x)\,dx > 0 $
Figure 17. A flowchart of proof of Theorem 1.5, part (i)
Figure 18. A flowchart of proof of Lemma 7.2
Figure 19. The set of solutions of the semilinear problem (1) consists of a pitchfork near $ \lambda = \lambda_{1}(m) $
Figure 20. The critical value $ \overline{\lambda}(h) $ of the positive bifurcation solution curve $ \mathcal{C} = \{(\lambda, u(\lambda))\} $
Figure 21. The mapping properties of the resolvent $ R_{\lambda} = (\lambda I - \varDelta)^{-1} $ in the spaces $ C(\overline{D}) $, $ C_{e}(\overline{D}) $ and $ C^{1}_{B}(\overline{D}) $
Figure 22. A positive bifurcation solution curve $ (\lambda, u(\lambda)) $ of the nonlinear operator equation $ u = H(\lambda,u) $ can be continued beyond the point $ (\lambda^{\ast}, u^{\ast}) $ via the implicit function theorem (Theorem 3.1)
Figure 23. The mapping properties of the negative Laplacian $ - \varDelta $ and the resolvent $ R_{0} = \left(- \varDelta\right)^{-1} $
Figure 24. A flowchart of proof of Theorem 1.5, part (ii)
Figure 25. The bifurcation diagram of Theorem 8.1 (the Dirichlet case)
Figure 26. The bifurcation diagram of Theorem 8.2 in the case $ \int_{D}m(x)\,dx < 0 $ and $ \nu_{1}(m) > 0 $ (the Neumann case)
Figure 27. The bifurcation diagram of Theorem 8.2 in the case $ \int_{D}m(x)\,dx = 0 $ and $ \nu_{1}(m) = 0 $ (the Neumann case)
Figure 28. The bifurcation diagram of Theorem 8.2 in the case $ \int_{D}m(x)\,dx > 0 $ and $ \nu_{1}(m) < 0 $ (the Neumann case)
Figure 29. The open subset $ D^{+} $ with boundary $ \partial D^{+} $
Figure 30. The bifurcation diagrams of Theorem 8.3 in the case $ \int_{D}m(x)\,dx < 0 $ and $ \nu_{1}(m) > 0 $
Figure 31. The bifurcation diagrams of Theorem 8.3 in the case $ \int_{D}m(x)\,dx = 0 $ and $ \nu_{1}(m) = 0 $
Figure 32. The bifurcation diagrams of Theorem 8.3 in the case $ \int_{D}m(x)\,dx > 0 $ and $ \nu_{1}(m) < 0 $
Table 1. A biological meaning of each term
Term | Biological interpretation
D | Terrain
x | Location of the terrain
u(x) | Population density of a species inhabiting the terrain
∆ | A member of the population moves about the terrain via the type of random walks occurring in Brownian motion
$\frac{1}{\lambda}$ | Rate of diffusive dispersal
m(x) | Intrinsic growth rate
h(x) | Coefficient of intraspecific competition
Table 2. A biological meaning of boundary conditions
Boundary condition | Biological interpretation
Dirichlet case (a(x') ≡ 0, b(x') ≡ 1) | Completely hostile (deadly) exterior
Neumann case (a(x') ≡ 1, b(x') ≡ 0) | Barrier
Robin or mixed-type case (a(x') + b(x') > 0) | Hostile but not completely deadly exterior
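For orientation, the class of problems summarized in Tables 1 and 2 can be written, in the form commonly used for diffusive logistic equations with indefinite weights (a sketch only; the precise normalization and hypotheses are those stated in the paper), as
$$ \begin{cases} -\Delta u = \lambda \left( m(x)\,u - h(x)\,u^{2} \right) & \text{in } D, \\ a(x')\,\dfrac{\partial u}{\partial \mathbf{n}} + b(x')\,u = 0 & \text{on } \partial D, \end{cases} $$
where $ \lambda > 0 $ is the reciprocal of the diffusion rate, $ m(x) $ is the indefinite weight (intrinsic growth rate), $ h(x) $ is the coefficient of intraspecific competition, and $ a(x'), b(x') \geq 0 $ with $ a(x') + b(x') > 0 $ on $ \partial D $.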
Table 3. An overview of theorems for eigenvalue problems with indefinite weights
Problems | Conditions | Theorems
(mixed type case) | (M.1), (H.1), (H.2) | Theorem 1.3 for λ1(m)
(Dirichlet case) | (M.1) | Theorem 6.1 for γ1(m)
(Neumann case) | (M.2) | Theorem 6.2 for ν1(m)
(56), (59), (61) | (M.1), (M.2), (H.1), (H.2) | Theorem 6.3 for µD(λ), µN(λ), µ1(λ)
Table 4. An overview of existence theorems for diffusive logistic problems
Problems | Conditions | Theorems
(mixed type case) | (M.1), (Z.1), (Z.2), (H.1), (H.2) | Theorem 1.5 for u(λ)
(Dirichlet case) | (M.1), (Z.1), (Z.2) | Theorem 8.1 for v(λ)
(Neumann case) | (M.2), (Z.1), (Z.2) | Theorem 8.2 for w(λ)
(1), (91), (93) | (M.1), (M.2), (H.1), (H.2) | Theorem 8.3 for v(λ), w(λ), u(λ)
(mixed type case) (M.1), (Z.3)
Efficient pretreatment of lignocellulosic biomass with high recovery of solid lignin and fermentable sugars using Fenton reaction in a mixed solvent
Hui-Tse Yu1,2,
Bo-Yu Chen1,
Bing-Yi Li1,
Mei-Chun Tseng1,
Chien-Chung Han2 &
Shin-Guang Shyu1
Biotechnology for Biofuels volume 11, Article number: 287 (2018)
Pretreatment of biomass to maximize the recovery of fermentable sugars as well as to minimize the amount of enzyme inhibitors formed during the pretreatment is a challenge in biofuel processing. We develop a modified Fenton pretreatment in a mixed solvent (water/DMSO) to combine the advantages of organosolv and Fenton pretreatments. The hemicellulose and cellulose in corncob were effectively degraded into xylose, glucose, and soluble glucose oligomers in a few hours. This saccharide solution, separated from the solid lignin simply by filtration, can be directly applied to the subsequent enzymatic hydrolysis and ethanol fermentation.
After the pretreatment, 94% of the carbohydrates were recovered as soluble monosaccharides (xylose and glucose) and glucose oligomers in the filtrates, and 87% of the solid lignin was recovered as the filter residue. The filtrates were directly applied to enzymatic hydrolysis, and 92% of the raw corncob glucose was recovered. The hydrolysates containing the glucose and xylose from the enzymatic hydrolysis were directly applied to ethanol fermentation with an ethanol yield equal to 79% of the theoretical yield. The pretreatment conditions (130 °C, 1.5 bar; 30 min to 4 h) are mild, and the pretreatment reagents (H2O2, FeCl3, and solvent) have a low environmental impact. Using ferrimagnetic Fe3O4 resulted in similar pretreatment efficiency, and the Fe3O4 could be removed by filtration.
A modified Fenton pretreatment of corncob in DMSO/water was developed. Up to 94% of the carbohydrate content of corncob was recovered as a saccharide solution simply by filtration. This filtrate was directly applied to the subsequent enzymatic hydrolysis, where 92% of the corncob glucose content was obtained. The hydrolysate so obtained was directly applied to ethanol fermentation with good fermentability. The pretreatment method is simple, and the additives and solvents used have a low impact on the environment. This method provides the opportunity to substantially maximize the carbohydrate and solid lignin recovery of biomass with a comparatively green process, such that the efficiency of biorefining as well as of the bioethanol production process can be improved. The pretreatment is still relatively energy intensive and expensive, and further optimization of the process is required for large-scale operation.
Fermentation of sugar from biomass to ethanol is one of the most important bioenergy technologies [1,2,3]. Biomass, a lignocellulosic material, has three major components: lignin, cellulose, and hemicellulose [4]. Hemicellulose and cellulose are the feedstock of fermentable sugar. Among many methods to hydrolyze these polysaccharides to fermentable sugar, enzymatic hydrolysis is the most commonly used method in the present bioethanol industry [5, 6].
Enzymatic hydrolysis (cellulose saccharification) cannot be directly applied to biomass, because cellulose in biomass is protected by lignin and hemicellulose [7]. Thus, pretreatment of biomass is needed to enhance the accessibility of enzymes to cellulose, to maximize the recovery of cellulose, and, at the same time, to minimize the amount of enzyme inhibitors that form during the pretreatment process [8].
Recently, enlightened by the white-rot or brown-rot fungi lignin degradation via in vivo Fenton chemistry [9], Fenton oxidation was applied to the pretreatment of lignocellulosic biomass [10,11,12,13,14,15,16,17,18] and degradation of cellulose [19, 20]. Combination of Fenton oxidation with the other pretreatment methods was reported to improve the pretreatment, so that subsequent enzymatic hydrolysis and fermentation steps could be optimized [15,16,17,18]. Despite the mild reaction conditions (low pressure and temperature) [10,11,12,13,14,15,16,17,18,19,20], environmentally benign reagents [21] and improved cellulose/lignin ratio of the pretreated biomass [16], a non-negligible amount of cellulose (up to 65% of glucan), sugar content (up to 60% of carbohydrate), and lignin (up to 60%) of the raw biomass were lost after Fenton pretreatment [13, 18].
A detailed molecular mechanism underlying the Fenton pretreatment process has yet to be described. It is generally accepted that the HO· (hydroxyl) and HOO· (perhydroxyl) radicals generated by the Fenton reaction degrade the lignocellulosic structure [10]. To improve the efficiency of the Fenton pretreatment, it is critical to enhance the accessibility of these free radicals to the lignocellulosic structure. Organosolv pretreatment allows the penetration of the pretreatment solution (organic solvent) into the lignocellulosic structure and leads to separation of high-purity cellulose by dissolving most lignin and hemicellulose [8]. Adding organic solvent into the Fenton pretreatment may help to dissolve some of the lignin and hemicellulose. This would then allow the free radicals from the Fenton reaction to penetrate deeper into the interior framework of the lignocellulosic structure. The aim of the present study is to develop a modified Fenton pretreatment in a mixed solvent (water and organic solvent) system which can maximize the carbohydrate and lignin recovery of the biomass.
We choose corncob as our pretreatment substrate as corn is grown at scale and large amounts of agricultural waste (corn residue) are generated. Maize residue is also one of the most abundant raw materials for biocompatible products along with bagasse, rice straw, wheat straw, and other lignocellulose substrates [22]. Corncob, a by-product of the sweet corn processing industry, is available in sufficient quantity. Pretreatments of corncob for biofuels have been of increasing interest in recent years [23,24,25,26,27,28,29,30,31,32,33], because corncob contains large proportions of glucose and xylose. In addition to maximizing glucose recovery, an efficient pretreatment method for corncob should provide a high xylose yield, because xylose has many broad applications [34, 35].
Herein, we report a modified Fenton pretreatment which combines the advantages of organosolv and Fenton pretreatments. The hemicellulose and cellulose in corncob were effectively degraded into xylose, glucose, and soluble glucose oligomers in the pretreatment, and the saccharide solution was separated from the solid lignin simply by filtration. Up to 94% of the carbohydrate content of corncob was recovered, and the saccharide solution was directly applied to the subsequent hydrolysis by cellulase. Up to 92% of the glucose content of the corncob was recovered. The time of hydrolysis as well as the amount of cellulase needed for digestion were greatly reduced. Furthermore, the hydrolysates containing the glucose and xylose from the enzymatic hydrolysis can be directly applied to ethanol fermentation.
Lignocellulose biomass and chemicals
Corncobs were obtained domestically (Tainan, Taiwan) and were washed with deionized water. After drying at 105 °C, corncobs were mechanically grinded into particles and sieved through 40 mesh sieves (particle size smaller than 0.49 mm). All chemical reagents were purchased from commercial sources and used without further purification. Iron (III) chloride, o-phenanthroline, and dimethyl sulfoxide (DMSO) were purchased from Aldrich and J.T. Baker, respectively. Glucose and gluconic acid were purchased from Alfa Aesar. Hydrogen peroxide solution (35 wt% in H2O), α-cellulose, and cellulase from Trichoderma reesei were purchased from Sigma-Aldrich. S. cerevisiae for fermentation was purchased from Algist Bruggeman.
Biomass composition and characterization
The composition of the corncob particles was determined by following the standard protocol of the National Renewable Energy Laboratory [36]. The amount of xylose, glucose, and arabinose were determined by high-performance liquid chromatography (HPLC) on a Waters (1525 pump) with a 25 cm × 4.6 mm Shodex Asahipak NH2P-50 4E column using acetonitrile/water (4:1) as an eluent at a flow rate of 1.0 mL/min at 35 °C or with a 25 cm × 4.6 mm Benson BP-800H+ column using 5.0 mM H2SO4 aqueous solution as an eluent at a flow rate of 0.5 mL/min at 85 °C. The quantification of HMF, furfural, and gluconic acid were performed by Bruker Advance UHPLC system coupled to a Bruker EVOQ EliteTM triple quadrupole mass (Bremen, Germany) equipped with an atmospheric pressure chemical ionization (APCI) and electrospray (ESI) interfaces [37]. Chromatographic separations were performed on a Waters Acquity UPLC BEH C18 column (2.1 × 100 mm, 1.7 μm) using an isocratic mixture of 0.01 mmol/L acetic acid in 0.2% aqueous solution of formic acid for HMF and furfural, and on a Merck ZIC-HILIC column (2.1 × 150 mm, 3.5 μm) using mobile phase A (acetonitrile modified with 0.1% (v/v) formic acid) and mobile phase B (5.0 mmol/L ammonium acetate modified with 0.1% (v/v) formic acid) with gradient profile 10% B to 90% B in 19 min for glucose and gluconic acid. Both analyses were performed at a flow rate of 0.30 mL/min. The total carbohydrates content was determined by the phenol–sulfuric acid method [38]. Mineral contents were determined by following the standard protocol of the National Renewable Energy Laboratory [36].
Pretreatment method
The pretreatment reagent solution was prepared by dissolving FeCl3 (7.5 × 10−3 mmol) and H2O2 (3.0 mmol, 0.26 mL, 35 wt% in H2O) in the solvent (2.0 mL, DMSO/H2O = 1:6) in a Pyrex tube with a Teflon screw cap. The solution was then stirred at 130 °C for 10–15 min before use.
Corncob powder (0.200 g, particle size smaller than 0.49 mm) was added into the reagent solution and stirred at 130 °C for 30 min in a Pyrex tube with a Teflon screw cap. The slurry was then filtered, and a light brown powder and a brown filtrate were obtained. The amount of glucose, xylose, arabinose, and total carbohydrates in the filtrate were determined by quantitative HPLC and phenol–sulfuric acid method, respectively.
The light brown powder obtained in the above step (0.084 g) was added into a fresh pretreatment reagent solution, and the mixture was stirred at 130 °C for 4 h in a Pyrex tube with a Teflon screw cap. The mixture was then filtered. A light brown powder and a brown filtrate were obtained. The amount of glucose, xylose, arabinose, and total carbohydrates in the filtrate were determined by quantitative HPLC and phenol–sulfuric acid method, respectively. TGA analysis of the light brown powder (dried at 80 °C for 12 h before TGA analysis) indicates that the powder contains lignin (Additional file 1: Figure S1) [39]. The pretreatment flow chart is shown in Fig. 1.
Fig. 1 Flowchart and results of the corncob pretreatment
Product yields of the pretreatment were calculated as follows:
$$\text{Xylose yield} = \frac{\text{amount of xylose produced}}{\text{amount of xylose in the feedstock}} \times 100\%$$
$$\text{Glucose yield} = \frac{\text{amount of glucose produced}}{\text{amount of glucose in the feedstock}} \times 100\%$$
$$\text{Arabinose yield} = \frac{\text{amount of arabinose produced}}{\text{amount of arabinose in the feedstock}} \times 100\%$$
$$\text{Lignin yield} = \frac{\text{amount of lignin produced}}{\text{amount of lignin in the feedstock}} \times 100\%$$
$$\text{Total carbohydrates yield} = \frac{\text{amount of total carbohydrates produced}}{\text{amount of total carbohydrates in the feedstock}} \times 100\%$$
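The yield definitions above can be applied directly to the measured masses. The short Python sketch below is illustrative only (it is not part of the original protocol); the mass fractions and recovered amounts are the corncob values reported in the Results section.

```python
def yield_percent(amount_produced_g, amount_in_feedstock_g):
    """Product yield as a percentage of the corresponding feedstock content."""
    return 100.0 * amount_produced_g / amount_in_feedstock_g

feedstock_g = 0.200                      # corncob charged per pretreatment
mass_fraction = {                        # composition of the raw corncob
    "total carbohydrates": 0.662,
    "glucose": 0.373,
    "xylose": 0.251,
}
recovered_g = {                          # combined first- and second-round filtrates
    "total carbohydrates": 0.124,
    "glucose": 0.068,                    # free glucose plus glucose in soluble oligomers
    "xylose": 0.046,
}

for component, fraction in mass_fraction.items():
    y = yield_percent(recovered_g[component], feedstock_g * fraction)
    print(f"{component}: {y:.0f}%")
# Prints roughly 94% (total carbohydrates), 91% (glucose), and 92% (xylose),
# consistent with the recoveries reported in the Results section.
```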
Quantitative Fe(II) analysis before and during pretreatment
The concentration of the Fe(II) was determined by o-phenanthroline-based detection technique using Agilent Technologies Cary 8454 UV–Vis spectrometer at room temperature [40]. The ε (molar absorptivity at 508 nm) of Fe(II)(Phen)3Cl2 in DMSO/water (1:6) is 0.048 M−1 cm−1.
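The Fe(II) concentration follows from the Beer–Lambert law applied to the 508 nm absorbance (a standard relation; the 1 cm path length assumed below is not stated explicitly above):
$$ A_{508} = \varepsilon \, \ell \, c_{\mathrm{Fe(II)}} \quad\Longrightarrow\quad c_{\mathrm{Fe(II)}} = \frac{A_{508}}{\varepsilon \, \ell}, \qquad \ell \approx 1\ \mathrm{cm}. $$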
Enzymatic hydrolysis
Enzymatic hydrolysis of the glucose oligomers in the total filtrate (combination of filtrates from the first and the second rounds of the pretreatment) was carried out using cellulase from Trichoderma reesei. The pH of the filtrate was adjusted to 4.5–4.8 by adding 0.15 g calcium carbonate to 4.0 mL filtrate. Cellulase (44 mg) was added into the tube, and the solution was stirred at 180 rpm for 12 h at 50 °C. The enzymatic hydrolysis was terminated by boiling the reaction mixtures at 85 °C for 5 min. The amount of glucose and xylose in the ultimate enzymatic hydrolysate was determined by HPLC.
Two control groups, each containing an amount of glucose similar to that in the glucose oligomers of the filtrate (0.038 g), were hydrolyzed in a citrate buffer (4.0 mL; 50 mmol/L, pH 4.8) with 44 mg of cellulase under the same hydrolysis conditions. One control was 0.038 g of commercial cellulose; the other was 0.103 g of corncob powder. The time course of the glucose yield of the enzymatic hydrolysis is shown in Fig. 4. The time course of the total glucose yield in the filtrate (both the glucose obtained in the enzymatic hydrolysis and the glucose produced from the pretreatment of 0.200 g corncob) and the glucose yield of raw corncob (0.200 g) after enzymatic hydrolysis are shown in Fig. 3.
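As a quick consistency check (simple arithmetic using the corncob composition reported in the Results section), the corncob control indeed carries about the same glucose content as the soluble oligomers:
$$ 0.103\ \mathrm{g\ corncob} \times 0.373 \approx 0.038\ \mathrm{g\ glucose}. $$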
Fermentability of the recovered hydrolyzates
Ethanol fermentation of the hydrolyzates from the enzymatic hydrolysis of the pretreatment filtrate was conducted using S. cerevisiae [15]. Fermentation of the hydrolyzates (containing 11.5 g/L glucose) was performed in a rotatory shaker at 37 °C and 150 rpm for 64 h. After fermentation, the amount of ethanol was determined by GC analyses with isopropanol as an internal standard on an Agilent 6890 Gas Chromatograph equipped with DB-5MS column (30 m × 0.25 mm internal diameter and 0.25 mm film thickness) and an FID detector. The amount of glucose was determined by high-performance liquid chromatography (HPLC).
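For reference, the theoretical ethanol yield used as the benchmark follows from the fermentation stoichiometry (a back-of-the-envelope estimate, assuming only the glucose in the hydrolysate is fermented):
$$ \mathrm{C_6H_{12}O_6 \longrightarrow 2\,C_2H_5OH + 2\,CO_2}, \qquad Y_{\mathrm{max}} = \frac{2 \times 46.07}{180.16} \approx 0.51\ \mathrm{g\ ethanol\ per\ g\ glucose}, $$
so a hydrolysate containing 11.5 g/L glucose corresponds to a theoretical maximum of about 5.9 g/L ethanol, and the reported 79% of theoretical yield to roughly 4.6 g/L.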
Pretreatment general procedure and its performance
The composition of the corncob powder was determined as follows: 66.2% of total carbohydrates, 37.3% of glucose, 25.1% of xylose, 3.7% of arabinose, 14.9% of lignin, and 3.9% of ash. The corncob powder was added into the pretreatment reagent solution [Fe(II) concentration 0.38 mM; pH 2.0 at room temperature] in a screw-capped Pyrex tube and the slurry was stirred for 30 min at 130 °C. During the pretreatment process, the Fe(II) concentration rose to 2.24 mM after 30 min, and the pH value of the mixture remained similar (2.0). The reaction was monitored by a pressure gauge, and the maximum pressure observed was 1.5 bar. After filtration, a light brown solid and a brown filtrate were obtained. The filtrate was analyzed for xylose, glucose, arabinose, and total carbohydrate contents. The contents of the filtrate were determined as follows: 0.099 g of total carbohydrates, 0.014 g of glucose, 0.045 g of xylose, 0.007 g of arabinose, 0.032 g of glucose oligomer, and other sugars. The brown residue was applied to the second-round pretreatment. After filtration, solid lignin was obtained. The contents of the second filtrate were determined as follows: 0.025 g of total carbohydrates, 0.016 g of glucose, 0.001 g of xylose, 0.008 g of glucose oligomer, and other sugars. The total filtrate (combination of filtrates from the first and the second rounds) was subjected to enzymatic hydrolysis. The flowchart of a typical corncob pretreatment is shown in Fig. 1.
The total carbohydrates in the two filtrates were 0.124 g corresponding to 94% of carbohydrates in the corncob. The total xylose and glucose content (both glucose and its glucose oligomers) in the two filtrates were 0.046 g and 0.068 g, respectively. This indicates that 92% of xylose and 92% of glucose in the corncob were recovered in the filtrate. TGA analysis of the residues (Additional file 1: Figure S1) obtained in the second pretreatment cycle indicated that the residue contains 87% of lignin in the corncob.
Despite the pretreatment temperature being higher than that of the traditional Fenton pretreatments, the pretreatment conditions are still relatively mild (130 °C and 1.5 bar) as compared to other pretreatments which usually require higher pressure and temperature [8, 33, 41]. In addition, the pretreatment has several unique properties: First, the concentration of FeCl3 is much lower (3.3 mmol/L) than that of other Fenton and metal salt pretreatments (0.02 mol/L to 0.2 mol/L) [18, 41]. A low FeCl3 concentration can reduce the environmental impact and the negative influence of the FeCl3 in the subsequent enzymatic hydrolysis [42, 43]. Second, hydrogen peroxide is a comparatively green oxidant, and the concentration of H2O2 (1.3 mol/L) is lower than that of the other reported Fenton pretreatments (1.5 mol/L to 2.5 mol/L) [13, 18]. A lower concentration of H2O2 can reduce the loss of lignin and cellulose content [18]. Finally, the amount of DMSO, a green solvent [44], used in the solvent system is only 14.3 vol%. This DMSO concentration does not inhibit enzyme activity in many enzyme processes [45], so that the hydrolysate, containing 94% of the carbohydrate content of corncob, can be directly applied to the subsequent enzymatic hydrolysis of the glucose oligomers.
The role of metal salt and hydrogen peroxide in the pretreatment
Because corncob contains large amounts of xylose, we used the yield of xylose as a guide post to evaluate the relationship between individual components and the performance of the pretreatment. The results are summarized in Table 1.
Table 1 Conditions and products of the first-round corncob pretreatment
For a short reaction time (30 min), at 130 °C and with low FeCl3 concentration (3.3 mmol/L), 92% of xylose was obtained after pretreatment (Table 1, entry 3). When hydrogen peroxide was removed from the system, xylose yields dropped from 92 to 5% (Table 1, entries 3 and 2). When FeCl3 was removed, xylose yields dropped to 6% (Table 1, entry 1). These observations indicate that both FeCl3 and hydrogen peroxide are essential and have a synergistic effect on the pretreatment. This synergistic effect is due to the Fenton reaction in which the Fe cation induces the decomposition of hydrogen peroxide to produce the HO· (hydroxyl) and HOO· (perhydroxyl) radicals. These free radicals cleaved the ether bonds in cellulose and hemicellulose to produce xylose, glucose, arabinose, and soluble glucose oligomers under the pretreatment conditions, leaving the lignin mostly intact. In the traditional Fenton pretreatment and its combinations with other pretreatment methods, a substantial amount of lignin (up to 60%) and cellulose (up to 65% of glucan) [18] were lost although the pretreated biomass had a higher cellulose/lignin ratio after pretreatment [16]. In our case, the destruction of cellulose and hemicellulose was extensive, such that almost 94% of carbohydrate in the corncob was dissolved into the pretreatment solution as monosaccharides and glucose oligomers. In addition, lignin was not extensively degraded in the pretreatment and was recovered as a solid after filtration (87% recovery). These observations indicate that the HO· (hydroxyl) and HOO· (perhydroxyl) radicals generated in our pretreatment have a high selectivity towards the destruction of hemicellulose and cellulose in the lignocellulosic structure of corncob. The reason for this divergence is the effect of the mixed solvent used in our pretreatment.
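This synergy is consistent with the textbook Fenton cycle, in which iron shuttles between its two oxidation states while decomposing hydrogen peroxide (shown here in its standard form; the actual iron speciation in the acidic DMSO/water medium may be more complex):
$$ \mathrm{Fe^{2+} + H_2O_2 \longrightarrow Fe^{3+} + OH^- + HO\cdot}, \qquad \mathrm{Fe^{3+} + H_2O_2 \longrightarrow Fe^{2+} + H^+ + HOO\cdot} $$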
Evaluation of solvent influence
To evaluate the effect of DMSO on the pretreatment process, pure water and pure DMSO were used as solvents under similar pretreatment conditions (Table 1, entries 7 and 8). The yields of xylose dropped from 94% to 10% when treated with pure water and dropped to 15% when treated with pure DMSO, indicating that water and DMSO have a synergistic effect in the pretreatment process.
Increasing the DMSO concentration from 14.2 vol% to 25 vol% and 75 vol% in the pretreatment reduced the xylose yield from 92% to 85% and 66% (Table 1, entries 5 and 4), respectively. These results indicate that the DMSO/water ratio in the mixed solvent must be tuned empirically to maximize the efficiency of the pretreatment. In the organosolv pretreatment, the mixed solvent (organic solvent/water) can penetrate more effectively into the structure of biomass than pure water or organic solvent alone [46]. Our DMSO/water mixed solvent may have a similar penetration ability, such that the Fe cation and hydrogen peroxide, along with the HO· and HOO· radicals generated from the Fenton reaction, can enter the corncob structure more easily through the penetration of the DMSO/water solution. Replacing the mixed solvent with pure water or DMSO in the pretreatment resulted in poor pretreatment efficiency, supporting the above argument. Our pretreatment temperature is slightly lower than the usual organosolv pretreatment temperature [8]. In addition, in our pretreatment, only 14.2 vol% DMSO is required, which is much lower than the organic solvent content in organosolv pretreatments (which require 50% or more organic solvent) [8].
Evaluation of hydrogen peroxide in the pretreatment
Hydrogen peroxide can remove lignin and hemicellulose from biomass, because the hydroxyl radicals produced weaken the bonding between lignin and hemicellulose [47]. Adding FeCl3 to the hydrogen peroxide solution promotes the production of hydroxyl radicals through the Fenton reaction [10], which may cause the synergistic effect of FeCl3 and hydrogen peroxide observed in our pretreatment. Increasing the amount of hydrogen peroxide may enhance the production of hydroxyl radicals, and thus the pretreatment efficiency. However, under such conditions, glucose can be oxidized to gluconic acid [48]. To evaluate these factors, the pretreatment process was carried out under different H2O2 concentrations. The yields of monosaccharides remained steady when the amount of H2O2 changed from 3.0 to 9.0 mmol (Table 2, entries 1 to 3), indicating that the hydrogen peroxide to FeCl3 concentration ratio did not alter pretreatment results in the range examined. In addition, gluconic acid was not detected under these pretreatment conditions (Additional file 2). To confirm the absence of gluconic acid in the pretreatment, glucose was treated with the pretreatment solution for 5 h under the pretreatment conditions, and gluconic acid was not detected (Additional file 3). These observations indicate that glucose cannot be oxidized to gluconic acid by the pretreatment reagent under the pretreatment conditions. The results are shown in Table 2.
Table 2 Effects of hydrogen peroxide and amount of corncob on the first-round pretreatment
Evaluation of pretreatment temperature and the pretreatment time
Based on the results of the H2O2 evaluation described above, we used 3.0 mmol of hydrogen peroxide for the evaluation of pretreatment temperature. The influence of pretreatment temperature on the monosaccharide yields is shown in Fig. 2a.
Fig. 2 Effects of (a) temperature on the pretreatment of 0.200 g corncob using FeCl3 (7.5 × 10−3 mmol) with H2O2 (35 wt% in H2O, 3.0 mmol, 0.260 mL) in DMSO/water (1:6, 2.0 mL) for 30 min and (b) reaction time on the pretreatment of 0.200 g corncob using FeCl3 (7.5 × 10−3 mmol) with H2O2 (35 wt% in H2O, 3.0 mmol, 0.260 mL) in DMSO/water (1:6, 2.0 mL) at 130 °C
Generally speaking, increasing the pretreatment temperature enhances the monosaccharide yield. However, at higher temperatures, the amounts of HMF and furfural, inhibitors of enzymatic hydrolysis, also increased [49]. The higher amounts of HMF and furfural may be due to the higher conversion of glucose to HMF and furfural at higher temperatures (Additional file 4: Table S3; Additional file 5). To optimize the pretreatment efficiency (higher carbohydrate recovery and higher yield for monosaccharides) and considering the subsequent enzymatic hydrolysis (fewer inhibitors are favored), the pretreatment temperature was set at 130 °C. Both xylose and glucose yields remained high from 30 to 70 min. Longer pretreatment times did not increase the xylose and glucose yields significantly. However, for longer pretreatment times, the amounts of the inhibitors furfural and HMF increased by 20% and 50%, respectively (Additional file 6: Table S4). Gluconic acid was not detected under any of these pretreatment conditions (Additional files 4 and 6: Table S3 and S4). These results are shown in Fig. 2b.
Optimization of biomass amount
Based on our evaluations of the influence of FeCl3, solvent effect, hydrogen peroxide, pretreatment temperature, and time on the pretreatment outcome, the optimized pretreatment conditions for corncob were set as follows: FeCl3 (7.5 × 10−3 mmol, 3.3 mmol/L), H2O2 (0.26 mL, 35 wt% in H2O), solvent: 2 mL DMSO/water, temperature: 130 °C, and time: 30 min. The pretreatment was carried out in a 40 mL Pyrex tube with a Teflon screw cap.
To determine the maximum amount of corncob that can be used in each pretreatment, we tested a wide range of corncob inputs while maintaining the same pretreatment conditions. The results are shown in Table 2, entries 4–6. The amount of corncob in each pretreatment can be increased threefold to 0.6 g/2.26 mL (corresponding to 265.5 g/L). The concentrations of sugars in the resulting filtrate from the first round of pretreatment were 17.2 g/L glucose, 51.5 g/L xylose, and 54.2 g/L glucose oligomers and other sugar oligomers.
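As a quick consistency check on the solids loading quoted above (an illustration added here, not part of the original article), the 265.5 g/L figure follows from 0.6 g of corncob in 2.26 mL of liquid (2.0 mL of DMSO/water plus 0.26 mL of H2O2 solution):

```python
# Solids-loading check: 0.6 g corncob in 2.0 mL DMSO/water + 0.26 mL H2O2 solution
corncob_g = 0.6
liquid_mL = 2.0 + 0.26                         # total liquid volume per batch
loading_g_per_L = corncob_g / (liquid_mL / 1000.0)
print(round(loading_g_per_L, 1))               # 265.5 g/L
```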
Enzymatic hydrolysis of glucose oligomers in the pretreatment filtrate
The total filtrate (from both the first- and second-round pretreatment of 0.200 g corncob) contained 0.124 g of carbohydrates, of which 0.076 g was xylose (0.046 g) and glucose (0.030 g). The remaining 0.041 g of carbohydrate consisted of glucose oligomers and other sugar oligomers (Fig. 1). The filtrate was subjected to enzymatic hydrolysis to convert the soluble glucose oligomers to glucose. Raw corncob was hydrolyzed under the same conditions for comparison. Cellulase from Trichoderma reesei was used, and the results are shown in Fig. 3.
Time course of total glucose yields of the enzymatic hydrolysis of the pretreated (filtrate from pretreatment) and the untreated (corncob powder) feedstocks
After 12 h, 0.068 g of glucose, corresponding to 92% of the glucose content of the raw corncob, was obtained. Considering the amount of glucose (0.030 g) already present in the filtrate before hydrolysis, 0.038 g of glucose was obtained through the enzymatic hydrolysis of the glucose oligomers in the filtrate. This hydrolysate has a glucose concentration of 15.0 g/L and can be used directly as a feedstock for the subsequent fermentation to produce ethanol. For the hydrolysis of raw corncob, only 21 wt% of its glucose content was obtained.
To show that the glucose oligomers in the filtrate can be hydrolyzed more effectively than crystalline cellulose and the cellulose in raw corncob, commercial cellulose (0.038 g) and corncob (0.103 g) containing the same amount of glucose as the soluble glucose oligomers (0.038 g) in the filtrate were subjected to enzymatic hydrolysis for comparison. The time courses of the enzymatic hydrolyses are summarized in Fig. 4.
Time course of glucose yield of the enzymatic hydrolysis of the filtrate containing 0.038 g soluble glucose oligomers, α-cellulose (0.038 g) and raw corncob (0.103 g) with similar glucose content (0.038 g glucose)
After 6 h, 98 wt% of the glucose oligomers in the filtrate had been hydrolyzed to glucose, whereas only 30 wt% of the corresponding amount of commercial cellulose was hydrolyzed to glucose under similar conditions. For the raw corncob, 20 wt% of its glucose content was hydrolyzed to glucose. These observations indicate that the filtrate, which contains FeCl3, DMSO, and trace amounts of inhibitors (HMF and furfural), can be applied directly to enzymatic hydrolysis. The higher hydrolysis efficiency of the glucose oligomers is understandable, because the soluble glucose oligomers obtained after the pretreatment are more accessible to cellulase than crystalline cellulose or the cellulose in raw corncob, which is protected by lignin and hemicellulose [50].
Fermentability of the pretreatment hydrolysate
The hydrolysate obtained from the enzymatic hydrolysis of the pretreatment filtrate was applied directly to ethanol fermentation using S. cerevisiae [15]. The time course of the ethanol fermentation of the hydrolysate, which contained 11.5 g/L glucose and 8.3 g/L xylose, is shown in Fig. 5.
Time courses for the ethanol fermentation of hydrolyzate containing glucose
After 24 h, the concentration of glucose had decreased to about 60% of its initial value (from 11.5 to 6.7 g/L), and the concentration of ethanol was 1.96 g/L. After 64 h, the concentrations of ethanol and glucose reached 4.6 g/L and 0.2 g/L, respectively. The ethanol yield was 0.41 g (ethanol)/g (glucose), which corresponds to 79% of the theoretical yield. These results indicate that the DMSO and FeCl3 in the hydrolysate have little effect on the fermentation of glucose to ethanol.
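As an illustrative check (added here, not from the original article), the reported yield can be reproduced from the quoted concentrations, taking the usual theoretical ethanol yield of 0.511 g ethanol per g glucose from the fermentation stoichiometry C6H12O6 → 2 C2H5OH + 2 CO2:

```python
# Ethanol-yield check from the reported fermentation data (illustrative only)
glucose_initial = 11.5   # g/L
glucose_final = 0.2      # g/L
ethanol_final = 4.6      # g/L

glucose_consumed = glucose_initial - glucose_final
yield_g_per_g = ethanol_final / glucose_consumed   # ≈ 0.41 g ethanol per g glucose
theoretical = 2 * 46.07 / 180.16                    # ≈ 0.511 g/g for C6H12O6 -> 2 C2H5OH + 2 CO2
print(round(yield_g_per_g, 2))                      # 0.41
print(round(100 * yield_g_per_g / theoretical, 1))  # ≈ 79.6, i.e. the reported 79% to within rounding
```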
Pretreatment by iron oxide
Because FeCl3 can be converted to iron oxide by hydrolysis in aqueous solution [51], iron oxide may form during the pretreatment. It is therefore interesting to evaluate whether iron oxide can replace FeCl3 in the pretreatment (Additional file 7: Scheme S1) [52].
When iron oxide was used, the total carbohydrates obtained in the pretreatment filtrate amounted to 0.082 g (62% of the total carbohydrates in the corncob), including 0.040 g of xylose, corresponding to 80% of the xylose in the corncob. The residue obtained in the first round of pretreatment was subjected to subsequent rounds of pretreatment for a total of 8 h, and 0.048 g of carbohydrates, corresponding to 36.3% of the carbohydrates in the corncob, was obtained. The total carbohydrates recovered in the filtrates were 0.130 g, corresponding to 98.2% of the total carbohydrates in the corncob. The efficiency of pretreatment using Fe3O4 is comparable to that using FeCl3. Moreover, Fe3O4 can be removed from the filtrate more easily and effectively by filtration.
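The recovery figures in the iron oxide experiment are internally consistent, as the following short check shows (added for illustration, not part of the original article):

```python
# Carbohydrate recovery with Fe3O4 (values quoted in the paragraph above)
first_round_g = 0.082    # carbohydrates in the first-round filtrate (62% of corncob carbohydrates)
later_rounds_g = 0.048   # carbohydrates from the subsequent rounds (36.3%)
print(round(first_round_g + later_rounds_g, 3))   # 0.13 g total
print(62 + 36.3)                                  # 98.3, matching the reported 98.2% to within rounding
```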
There are several advantages of using iron oxide in the process. First, it is environmentally benign. Second, it can be recovered in the filtration step. Third, it can be removed from the residues by magnetic separation, because Fe3O4 is ferrimagnetic.
A modified Fenton pretreatment of corncob using a low concentration of FeCl3 (3.3 mmol/L) and hydrogen peroxide in a mixed solvent (DMSO/water) at 130 °C was developed. The pretreatment process is simple and efficient, with 94% of the carbohydrates recovered in the filtrate as soluble monosaccharides (92% of the xylose and 40% of the glucose) and glucose oligomers. This filtrate was applied directly to the subsequent enzymatic hydrolysis, in which 92% of the corncob glucose content was obtained. The resulting hydrolysate was applied directly to ethanol fermentation with good fermentability. The pretreatment conditions are mild (130 °C, 1.5 bar), and the additives and solvents used in this pretreatment method have a low impact on the environment. We also show that, in this method, FeCl3 can be replaced by ferrimagnetic Fe3O4 with slightly lower efficiency. This method provides the opportunity to substantially maximize the carbohydrate and solid lignin recovery from biomass in a comparatively green process, so that the efficiency of biorefining as well as of the bioethanol production process can be improved. The present pretreatment is still relatively energy intensive and expensive (drying, grinding, a comparatively expensive organic solvent, etc.), and further optimization of the process, such as open sun drying and less grinding (larger particle size), may be required for large-scale operation.
Ragauskas AJ, Williams CK, Davison BH, Britovsek G, Cairney J, Eckert CA, Frederick WJ, Hallett JP, Leak DJ, Liotta CL, Mielenz JR, Murphy R, Templer R, Tschaplinski T. The path forward for biofuels and biomaterials. Science. 2006;311:484–9.
Baeyens J, Kang Q, Appels L, Dewil R, Lv Y, Tan T. Challenges and opportunities in improving the production of bio-ethanol. Prog Energy Combust Sci. 2015;47:60–88.
Sarkar N, Ghosh SK, Bannerjee S, Aikat K. Bioethanol production from agricultural wastes: an overview. Renew Energy. 2012;37:19–27.
Ilnicka A, Lukaszewicz JP. Discussion remarks on the role of wood and chitin constituents during carbonization. Front Mater. 2015;2:1–5.
Ma R, Xu Y, Zhang X. Catalytic oxidation of biorefinery lignin to value-added chemicals to support sustainable biofuel production. Chemsuschem. 2015;8:24–51.
Ravindran R, Jaiswal AK. A comprehensive review on pre-treatment strategy for lignocellulosic food industry waste: challenges and opportunities. Bioresour Technol. 2016;199:92–102.
Sun Y, Cheng JY. Hydrolysis of lignocellulosic materials for ethanol production: a review. Bioresour Technol. 2002;83:1–11.
Zhang K, Pei Z, Wang D. Organic solvent pretreatment of lignocellulosic biomass for biofuels and biochemicals: a review. Bioresour Technol. 2016;199:21–33.
Arantes V, Jellison J, Goodell B. Peculiarities of brown-rot fungi and biochemical Fenton reaction with regard to their potential as a model for bioprocessing biomass. Appl Microbiol Biotechnol. 2012;94:323–38.
Jain P, Vigneshwaran N. Effect of Fenton's pretreatment on cotton cellulosic substrates to enhance its enzymatic hydrolysis response. Bioresour Technol. 2012;103:219–26.
Kato DM, Elia N, Flythe M, Lynn BC. Pretreatment of lignocellulosic biomass using Fenton chemistry. Bioresour Technol. 2014;162:273–8.
Xie Y, Xiao Z, Mai C. Degradation of chemically modified Scots pine (Pinus sylvestris L.) with Fenton reagent. Holzforschung. 2015;69:153–61.
Jung YH, Kim HK, Park HM, Park YC, Park K, Seo JH, Kim KH. Mimicking the Fenton reaction-induced wood decay by fungi for pretreatment of lignocellulose. Bioresour Technol. 2015;179:467–72.
Bhange VP, William SPMP, Sharma A, Gabhane J, Vaidya AN, Wate SR. Pretreatment of garden biomass using Fenton's reagent: influence of Fe2+ and H2O2 concentrations on lignocellulose degradation. J Environ Health Sci. 2015;13:12–9.
He YC, Ding Y, Xue YF, Yang B, Liu F, Wang C, Zhu ZZ, Qing Q, Wu H, Zhu C, Tao ZC, Zhang DP. Enhancement of enzymatic saccharification of corn stover with sequential Fenton pretreatment and dilute NaOH extraction. Bioresour Technol. 2015;193:324–30.
Jeong SY, Lee JW. Sequential Fenton oxidation and hydrothermal treatment to improve the effect of pretreatment and enzymatic hydrolysis on mixed hardwood. Bioresour Technol. 2016;200:121–7.
Wu K, Ying W, Shi Z, Yang H, Zheng Z, Zhang J, Yang J. Fenton reaction–oxidized bamboo lignin surface and structural modification to reduce nonproductive cellulase binding and improve enzyme digestion of cellulose. ACS Sustain Chem Eng. 2018;6:3853–61.
Zhang KJ, Si MY, Liu D, Zhuo SN, Liu MR, Liu H, Yan X, Shi Y. A bionic system with Fenton reaction and bacteria as a model for bioprocessing lignocellulosic biomass. Biotechnol Biofuels. 2018;11:31–45.
Halliwell G. Catalytic Decomposition of cellulose under biological conditions. Biochem J. 1965;95:35–40.
Zhang MF, Qin YH, Ma JY, Yang L, Wu ZK, Wang TL, Wang WG, Wang CW. Depolymerization of microcrystalline cellulose by the combination of ultrasound and Fenton reagent. Ultrason Sonochem. 2016;31:404–8.
Banerjee G, Car S, Scott-Craig JS, Hodge DB, Walton JD. Alkaline peroxide pretreatment of corn stover: effects of biomass, peroxide, and enzyme loading and composition on yields of glucose and xylose. Biotechnol Biofuels. 2011;4:16.
Sommer SG, Hamelin L, Olesen JE, Montes F, Jia W, Chen Q, Triolo JM. Agricultural waste biomass. In: Iakovou E, Bochtis D, Vlachos D, Aidonis D, editors. Supply chain management for sustainable food networks. Wiley; 2016, p. 67–106.
Procentese A, Johnson E, Orr V, Garruto Campanile A, Wood JA, Marzocchella A, Rehmann L. Deep eutectic solvent pretreatment and subsequent saccharification of corncob. Bioresour Technol. 2015;192:31–6.
Vedrenne M, Vasquez-Medrano R, Pedraza-Segura L, Toribio-Cuaya H, Ortiz-Estrada CH. Reducing furfural-toxicity of a corncob lignocellulosic prehydrolyzate liquid for Saccharomyces cerevisiae with the photo-fenton reaction. J Biobased Mater Bioenergy. 2015;9:476–85.
Wang S, Ouyang X, Wang W, Yuan Q, Yan A. Comparison of ultrasound-assisted Fenton reaction and dilute acid-catalysed steam explosion pretreatment of corncobs: cellulose characteristics and enzymatic saccharification. RSC Adv. 2016;6:76848–54.
Kawee-Ai A, Srisuwun A, Tantiwa N, Nontaman W, Boonchuay P, Kuntiya A, Chaiyaso T, Seesuriyachan P. Eco-friendly processing in enzymatic xylooligosaccharides production from corncob: influence of pretreatment with sonocatalytic-synergistic Fenton reaction and its antioxidant potentials. Ultrason Sonochem. 2016;31:184–92.
Zhang CW, Xia SQ, Ma PS. Facile pretreatment of lignocellulosic biomass using deep eutectic solvents. Bioresour Technol. 2016;219:1–5.
Lou H, Yuan L, Qiu X, Qiu K, Fu J, Pang Y, Huang J. Enhancing enzymatic hydrolysis of xylan by adding sodium lignosulfonate and long-chain fatty alcohols. Bioresour Technol. 2016;200:48–54.
Xing Y, Bu L, Zheng T, Liu S, Jiang J. Enhancement of high-solids enzymatic hydrolysis of corncob residues by bisulfite pretreatment for biorefinery. Bioresour Technol. 2016;221:461–8.
Liu Y, Guo L, Wang L, Zhan W, Zhou H. Irradiation pretreatment facilitates the achievement of high total sugars concentration from lignocellulose biomass. Bioresour Technol. 2017;232:270–7.
Seesuriyachan P, Kawee-Ai A, Chaiyaso T. Green and chemical-free process of enzymatic xylooligosaccharide production from corncob: enhancement of the yields using a strategy of lignocellulosic destructuration by ultra-high pressure pretreatment. Bioresour Technol. 2017;241:537–44.
Zhang H, Xu Y, Yu S. Co-production of functional xylooligosaccharides and fermentable sugars from corncob with effective acetic acid prehydrolysis. Bioresour Technol. 2017;234:343–9.
Zhang X, Yuan Q, Cheng G. Deconstruction of corncob by steam explosion pretreatment: correlations between sugar conversion and recalcitrant structures. Carbohydr Polym. 2017;156:351–6.
Ur-Rehman S, Mushtaq Z, Zahoor T, Jamil A, Murtaza MA. Xylitol: a review on bioproduction, application, health benefits, and related safety issues. Crit Rev Food Sci Nutr. 2015;55:1514–28.
Sun JF, Wang J, Tian KM, Dong ZX, Liu XG, Permaul K, Singh S, Prior BA, Wang ZX. A novel strategy for production of ethanol and recovery of xylose from simulated corncob hydrolysate. Biotechnol Lett. 2018;40:781–8.
Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D. Determination of structural carbohydrates and lignin in biomass. National Renewable Energy Laboratory 2011; NREL/TP-510-42618.
Simoes J, Domingues P, Reis A, Nunes FM, Coimbra MA, Domingues RM. Identification of anomeric configuration of underivatized reducing glucopyranosyl–glucose disaccharides by tandem mass spectrometry and multivariate analysis. Anal Chem. 2007;79:5896–905.
Nielsen SS. Phenol-sulfuric acid method for total carbohydrates. In: Heidelberg D, editor. Food analysis laboratory manual. 2nd ed. New York: Springer; 2010. p. 47–55.
Worasuwannarak N, Sonobe T, Tanthapanichakoon W. Pyrolysis behaviors of rice straw, rice husk, and corncob by TG-MS technique. J Anal Appl Pyrol. 2007;78:265–71.
Fortune WB, Mellon MG. Determination of iron with o-phenanthroline—a spectrophotometric study. Ind Eng Chem. 1938;10:60–4.
Loow Y-L, Wu TY, Lim YS, Tan KA, Siow LF, Md. Jahim J, Mohammad AW. Improvement of xylose recovery from the stalks of oil palm fronds using inorganic salt and oxidative agent. Energy Conversion Manag. 2017;138:248–60.
Achinas S, Euverink GJW. Consolidated briefing of biochemical ethanol production from lignocellulosic biomass. Electron J Biotechnol. 2016;23:44–53.
Valette N, Perrot T, Sormani R, Gelhaye E, Morel-Rouhier M. Antifungal activities of wood extractives. Fungal Biol Rev. 2017;31:113–23.
Alfonsi K, Colberg J, Dunn PJ, Fevig T, Jennings S, Johnson TA, Kleine HP, Knight C, Nagy MA, Perry DA, Stefaniak M. Green chemistry tools to influence a medicinal chemistry and research chemistry based organisation. Green Chem. 2008;10:31–6.
Wiggers HJ, Cheleski J, Zottis A, Oliva G, Andricopulo AD, Montanari CA. Effects of organic solvents on the enzyme activity of Trypanosoma cruzi glyceraldehyde-3-phosphate dehydrogenase in calorimetric assays. Anal Biochem. 2007;370:107–14.
Jiang Z, Zhao P, Hu C. Controlling the cleavage of the inter- and intra-molecular linkages in lignocellulosic biomass for further biorefining: a review. Bioresour Technol. 2018;256:466–77.
Peng F, Peng P, Xu F, Sun RC. Fractional purification and bioconversion of hemicelluloses. Biotechnol Adv. 2012;30:879–903.
Rinsant D, Chatel G, Jérôme F. Efficient and selective oxidation of d-glucose into gluconic acid under low-frequency ultrasonic irradiation. ChemCatChem. 2014;6:3355–9.
Liu L, Sun J, Cai C, Wang S, Pei H, Zhang J. Corn stover pretreatment by inorganic salts and its effects on hemicellulose and cellulose degradation. Bioresour Technol. 2009;100:5865–71.
McMillan JD. Pretreatment of lignocellulosic biomass. Am Chem Soc. 1994;566:292–324.
Voigt B, Göbler A. Formation of pure haematite by hydrolysis of iron (III) salt solutions under hydrothermal conditions. Cryst Res Technol. 1986;21:1177–83.
Zhang S, Zhao X, Niu H, Shi Y, Cai Y, Jiang G. Superparamagnetic Fe3O4 nanoparticles as catalysts for the catalytic oxidation of phenolic and aniline compounds. J Hazard Mater. 2009;16:7560–6.
SS and HY developed the idea for the study and prepared the manuscript. HY, BB, and BL performed the experiments. MT performed the analytical work. CH helped to revise the manuscript. All authors read and approved the final manuscript.
This research is funded by the Ministry of Science and Technology, Republic of China and Academia Sinica.
The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
This research was funded by Academia Sinica and the Ministry of Science and Technology.
Institute of Chemistry, Academia Sinica, Taipei, 11529, Taiwan
Hui-Tse Yu, Bo-Yu Chen, Bing-Yi Li, Mei-Chun Tseng & Shin-Guang Shyu
Department of Chemistry, National Tsing Hua University, Hsinchu, 30013, Taiwan
Hui-Tse Yu & Chien-Chung Han
Correspondence to Chien-Chung Han or Shin-Guang Shyu.
Additional file 1. Spectra for (a) TGA and (b) DTG of raw and pretreated corncob.
Additional file 2. Quantitative analysis of glucose and gluconic acid by LC–MS on the corncob pretreatment filtrate.
Additional file 3. Quantitative analysis of gluconic acid and glucose by LC–MS on pretreatment reagent-treated glucose.
Additional file 4: Table S3. Amount of inhibitors produced at different temperatures.
Additional file 5. Amount of HMF and furfural produced in the corncob pretreatments under different conditions.
Additional file 6: Table S4. Amount of inhibitors produced at different reaction times.
Additional file 7: Scheme S1. Corncob pretreatment using Fe3O4.
Yu, HT., Chen, BY., Li, BY. et al. Efficient pretreatment of lignocellulosic biomass with high recovery of solid lignin and fermentable sugars using Fenton reaction in a mixed solvent. Biotechnol Biofuels 11, 287 (2018). https://doi.org/10.1186/s13068-018-1288-4
Fenton reaction
Corncob
Lignin | CommonCrawl |
July 2016, 15(4): 1419-1449. doi: 10.3934/cpaa.2016.15.1419
On the viscous Cahn-Hilliard-Navier-Stokes equations with dynamic boundary conditions
Laurence Cherfils 1, and Madalina Petcu 2,
Université de La Rochelle, Laboratoire de Mathématiques Images et Applications EA 3165, Avenue Michel Crépeau, 17042 La Rochelle Cedex 1
Laboratoire de Mathématiques et Applications UMR CNRS 6086, Université de Poitiers, Téléport 2 - BP 30179, Boulevard Marie et Pierre Curie, 86962 Futuroscope Chasseneuil
Received December 2015 Revised January 2016 Published April 2016
In the present article we study the viscous Cahn-Hilliard-Navier-Stokes model, endowed with dynamic boundary conditions, from the theoretical and numerical point of view. We start by deducing results on the existence, uniqueness and regularity of the solutions for the continuous problem. Then we propose a space semi-discrete finite element approximation of the model and we study the convergence of the approximate scheme. We also prove the stability and convergence of a fully discretized scheme, obtained using the semi-implicit Euler scheme applied to the space semi-discretization proposed previously. Numerical simulations are also presented to illustrate the theoretical results.
Keywords: viscous Cahn-Hilliard equations, Navier-Stokes equations, finite element method, well-posedness, error estimates, backward Euler scheme.
Mathematics Subject Classification: 65M60, 65M1.
Citation: Laurence Cherfils, Madalina Petcu. On the viscous Cahn-Hilliard-Navier-Stokes equations with dynamic boundary conditions. Communications on Pure & Applied Analysis, 2016, 15 (4) : 1419-1449. doi: 10.3934/cpaa.2016.15.1419
Addition of inert gas at "constant volume"?
My textbook states that
"The addition of inert gases at constant volume does not affect the equilibrium state of the reactants and products contained in that volume."
Although I am not questioning the truth behind this statement, I am very confused by it.
Let us consider a homogeneous gaseous equilibrium system contained within a closed cylinder. The cylinder has a very small hole on its side, through which an inert gas can be pumped in. This is how I visualize the addition of an inert gas to the system. Now, upon introducing a given amount of inert gas into the system, how does the volume remain constant in any way?
While deriving Van der Waals equation for real gases, I learned that the term $V$ in the ideal gas equation stands for the empty space that is available to the gas molecules for movement. Therefore, when we introduce an inert gas into the system, aren't we essentially decreasing this amount of space that is available to the reactant and product molecules for movement?
I am very confused. Please share your insights for it would be tremendously helpful for me. Thanks ever so much in advance :) Regards.
equilibrium gas-laws
The inert gas that you add does not act like a piston, denying part of the volume to the other molecules in the mixture. For a constant volume system comprised of an ideal gas mixture, the partial pressures of the reactants and products do not change when you add the inert gas. It only increases the total pressure, and that change is only because of its own partial pressure. However, if you are not operating in the ideal gas regime, the addition of the inert gas will affect the equilibrium.
Chet Miller
$\begingroup$ OK, so what you're saying is that according to the assumptions of KTG for ideal gases, the volume of the reaction mixture will remain the volume of the container since the molecules of the inert gas have negligible volumes themselves, yeah? $\endgroup$ – user33789 Aug 31 '16 at 13:48
$\begingroup$ Additionally, if the reaction mixture is at constant pressure instead of constant volume, these inert gas molecules will exert pressure on the piston, moving it up and hence, increasing the volume, yeah? While studying Le Chatelier's principle for changes in volume, I didn't relate the volume change to pressure change. Instead, I imagined that on decreasing the volume, the equilibrium shifted in that direction which produces lesser number of molecules so that the volume would increase and not so pressure would decrease. I am wrong to think of it like this, right? $\endgroup$ – user33789 Aug 31 '16 at 13:53
$\begingroup$ I agree to your first comment. I don't understand your second comment. $\endgroup$ – Chet Miller Aug 31 '16 at 19:48
You should not :)
"does not affect the equilibrium state" means that the thermodynamic constant is still the same. If you add an inert gas $\ce{G}$ to your system, it will not react with the other chemical compounds.
For example, at the beginning you have $\ce{A} \rightleftharpoons \ce{B}$ and after you add your inert gas you have $\ce{A + G} \rightleftharpoons \ce{B + G}$. So it does not affect the value of $K_\text{eq}$, because $K_\text{eq}$ depends only on the temperature $T$.
If you use a solid container the volume will not change but the pressure will.
Also we have $$\mu_i(T,P_i)=\mu_i^\circ(T)+RT\ln\frac{P_i}{P^\circ}$$
Then $$\left(\frac{\partial\Delta_r G}{\partial n_\text{inert}}\right)_{T,V}=0$$
If you prefer calculus.
Curt F.
Hexacoordinate-C
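To make the ideal-gas argument in the answers above concrete, here is a minimal numerical sketch (my own illustration, not part of the original answers). It assumes ideal-gas behaviour and uses a hypothetical N2O4 ⇌ 2 NO2 mixture with made-up amounts: adding argon at constant volume and temperature leaves the partial pressures of the reacting species, and hence the reaction quotient, unchanged, whereas adding it at constant total pressure expands the volume and does change the quotient.

```python
# Ideal-gas illustration for N2O4 <=> 2 NO2 (hypothetical amounts, illustrative only)
R = 0.082057   # L·atm/(mol·K)
T = 298.15     # K
V = 10.0       # L
n_N2O4, n_NO2 = 0.10, 0.05   # mol of the reacting species

def Q_p(n_N2O4, n_NO2, V):
    """Reaction quotient from partial pressures (in atm, relative to 1 atm)."""
    p_N2O4 = n_N2O4 * R * T / V
    p_NO2 = n_NO2 * R * T / V
    return p_NO2**2 / p_N2O4

Q_initial = Q_p(n_N2O4, n_NO2, V)

# Add 1 mol of argon at constant V and T: p(N2O4) and p(NO2) are unchanged.
Q_constant_V = Q_p(n_N2O4, n_NO2, V)

# Add 1 mol of argon at constant total pressure instead: the volume expands,
# the partial pressures of the reacting species drop, and Q changes.
n_before = n_N2O4 + n_NO2
V_expanded = V * (n_before + 1.0) / n_before
Q_constant_P = Q_p(n_N2O4, n_NO2, V_expanded)

print(Q_initial, Q_constant_V, Q_constant_P)   # first two are equal, the third is smaller
```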
doi: 10.3934/dcdsb.2021253
The existence of time-dependent attractor for wave equation with fractional damping and lower regular forcing term
Xudong Luo and Qiaozhen Ma ,
School of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, China
Received April 2021 Revised August 2021 Early access October 2021
Fund Project: Luo is supported by NSF grant (11961059) and the "Innovation Star" program of the Gansu Provincial Department of Education (2021CXZX-206)
We investigate the well-posedness and longtime dynamics of a wave equation with fractional damping whose coefficient $ \varepsilon $ depends explicitly on time. First, when $ 1\leq p\leq p^{\ast\ast} = \frac{N+2}{N-2}\; (N\geq3) $, we obtain the existence of solutions for the fractional damping wave equation with time-dependent decay coefficient in $ H_{0}^{1}(\Omega)\times L^{2}(\Omega) $. Furthermore, when $ 1\leq p<p^{*} = \frac{N+4\alpha}{N-2} $, $ u_{t} $ is proved to have higher regularity in $ H^{1-\alpha}\; (t>\tau) $, and the solution is shown to be quasi-stable in the weaker space $ H^{1-\alpha}\times H^{-\alpha} $. Finally, we obtain the existence and regularity of the time-dependent attractor.
Keywords: Wave equation, critical exponent, well-posedness, time-dependent attractor.
Mathematics Subject Classification: Primary: 35B41, 37L30; Secondary: 35L05.
Citation: Xudong Luo, Qiaozhen Ma. The existence of time-dependent attractor for wave equation with fractional damping and lower regular forcing term. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021253
2010 Mathematics Subject Classification: Primary: 14-XX [MSN][ZBL]
A scheme is a ringed space that is locally isomorphic to an affine scheme. More precisely, a scheme consists of a topological space $X$ (the underlying space of the scheme) and a sheaf $\def\cO{ {\mathcal O}}\cO_X$ of commutative rings with a unit on $X$ (the structure sheaf of the scheme); moreover, an open covering $(X_i)_{i\in I}$ of $X$ must exist such that $(X_i,\cO_X|_{X_i})$ is isomorphic to the affine scheme $\def\Spec{ {\rm Spec}\;}\def\G{\Gamma} \Spec\G(X_i,\cO_X)$ of the ring of sections of $\cO$ over $X_i$. A scheme is a generalization of the concept of an algebraic variety. For the history of the concept of a scheme, see [Di], [Sh], [Do].
1 Basic concepts and properties.
2 Cohomology of schemes.
3 Construction of schemes.
Basic concepts and properties.
Let $(X,\cO_X)$ be a scheme. For every point $x\in X$, the stalk $\cO_{X,x}$ at $x$ of the sheaf is a local ring; the residue field of this ring is denoted by $k(x)$ and is called the residue field of the point $x$. The topological properties of the scheme are understood to be the properties of the underlying space $X$ (for example, quasi-compactness, connectedness, irreducibility). If $P$ is a property of affine schemes (i.e. a property of rings), then one says that a scheme has property $P$ locally if every point has an open affine neighbourhood with this property. The property of being locally Noetherian is an example of this (see Noetherian scheme). A scheme is regular if all its local rings are regular (cf. Regular ring (in commutative algebra)). Other schemes defined in the same way include normal and reduced schemes, as well as Cohen–Macaulay schemes.
A morphism of schemes is a morphism between them as locally ringed spaces. In other words, a morphism $f$ of a scheme $X$ into a scheme $Y$ consists of a continuous mapping $f:X\to Y$ and a homomorphism of the sheaves of rings $f^* : \cO_Y\to f_*\cO_X$ such that, for any point $x\in X$, the induced homomorphism of local rings $\cO_{Y,f(x)}\to \cO_{X,x}$ maps the maximal ideal into the maximal ideal. For any ring $A$, the morphisms of $X$ into $\Spec A$ are in bijective correspondence with the ring homomorphisms $A\to\G(X,\cO_X)$. For any point $x\in X$, its imbedding in $X$ can also be considered as a morphism of schemes $\Spec k(x)\to X$. An important property is the existence in the category of schemes of direct and fibre products (cf. Fibre product of objects in a category), which generalize the concept of the tensor product of rings. The underlying topological space of the product of two schemes $X$ and $Y$ differs, generally speaking, from the product of the underlying spaces of $X$ and $Y$.
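As an illustration of the last two statements (a standard fact recalled here for convenience, not part of the original article): for affine schemes the fibre product is computed by the tensor product of rings,
$$\Spec A\times_{\Spec R}\Spec B\;\cong\;\Spec(A\otimes_R B).$$
Already the simplest examples show that the underlying space of a product need not be the product of the underlying spaces: $\Spec\mathbb{C}\times_{\Spec\mathbb{R}}\Spec\mathbb{C}\cong\Spec(\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C})\cong\Spec(\mathbb{C}\times\mathbb{C})$ consists of two points, although each factor has a one-point underlying space.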
A scheme $X$ endowed with a morphism into a scheme $S$ is called an $S$-scheme, or a scheme over $S$. A morphism $h:X\to Y$ is called a morphism of $S$-schemes $f:X\to S$ and $g:Y\to S$ if $f=g\circ h$. Any scheme can be seen as a scheme over $\Spec \Z$. A base change morphism $S'\to S$ permits a transition from the $S$-scheme $X$ to the $S'$-scheme $X_{S'} = X\times_S S'$ — the fibre product of $X$ and $S'$. If the base scheme $S$ is the spectrum of a ring $k$, then one also speaks of a $k$-scheme. A $k$-scheme $X$ is called a $k$-scheme of finite type if a finite affine covering $(X_i)_{i\in I}$ of $X$ exists such that the $k$-algebras $\G(X_i,\cO_X)$ are generated by a finite number of elements. A scheme of finite type over a field, sometimes with the additional requirements of separatedness and completeness, is usually called an algebraic variety. A morphism of $k$-schemes $\Spec k\to X$ is called a rational point of the $k$-scheme $X$; the set of such points is denoted by $X(k)$.
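For instance (a standard example added for clarity, not taken from the original text), if $X$ is the affine $k$-scheme
$$X=\Spec k[T_1,\dots,T_n]/(f_1,\dots,f_m),$$
then a rational point of $X$ amounts to a $k$-algebra homomorphism $k[T_1,\dots,T_n]/(f_1,\dots,f_m)\to k$, i.e. to a common zero of the $f_j$ with coordinates in $k$, so that
$$X(k)=\{(a_1,\dots,a_n)\in k^n : f_1(a)=\dots=f_m(a)=0\}.$$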
For an $S$-scheme $f:X\to S$ and a point $s\in S$, the $k(s)$-scheme $f^{-1}(s) = X_s$, obtained from $X$ by the base change $\Spec k(s) \to S$, is called a fibre (or stalk) of the morphism $f$ over $s$. If, instead of the field $k(s)$, one takes its algebraic closure in this definition, then the concept of a geometric fibre is obtained. Thereby, the $S$-scheme $X$ can be considered as a family of schemes $X_s$ parametrized by the scheme $S$. Often, when speaking of families, it is also required that the morphism $f$ be flat (cf. Flat morphism).
Concepts relating to schemes over $S$ are often said to be relative, as opposed to the absolute concepts relating to schemes. In fact, for every concept that is used for schemes there is a relative variant. For example, an $S$-scheme $X$ is said to be separated if the diagonal imbedding $X\to X\times_S X$ is closed; a morphism $f:Z\to S$ is said to be smooth if it is flat and all its geometric fibres are regular. Other morphisms defined in the same way include affine, projective, proper, finite, étale, non-ramified, finite-type, etc. A property of a morphism is said to be universal if it is preserved under any base change.
Cohomology of schemes.
Studies of schemes and related algebraic-geometric objects can often be divided into two problems — local and global. Local problems are usually linearized and their data are described by some coherent sheaf or by sheaf complexes. For example, in the study of the local structure of a morphism $X\to S$, the sheaves $\def\O{\Omega}\O_{X/S}^P$ of relative differential forms (cf. Differential form) are of some importance. The global part is usually related to the cohomology of these sheaves (see, for example, deformation of an algebraic variety). Finiteness theorems are useful here, as are theorems on the vanishing of the cohomology spaces (see Kodaira theorem), duality, the Künneth formula, the Riemann–Roch theorem, etc.
A scheme of finite type over the field $\C$ can also be considered as a complex analytic space. Using transcendental methods, it is possible to calculate the cohomology of coherent sheaves; it is more important, however, that it is possible to speak of the complex, or strong, topology on $X(\C)$, the fundamental group, the Betti numbers, etc. The desire to find something similar for arbitrary schemes, together with the far-reaching arithmetical hypotheses put forward (see Zeta-function in algebraic geometry), has led to the construction of different topologies in the category of schemes, the best known of which is the étale topology (cf. Etale topology). This has made it possible to define the fundamental group of a scheme, other homotopy invariants, cohomology spaces with values in discrete sheaves, Betti numbers, etc. (see $l$-adic cohomology; Weil cohomology; Motives, theory of).
Construction of schemes.
In the construction of a concrete scheme one most frequently uses the concepts of an affine or projective spectrum (see Affine morphism; Projective scheme), including the definition of a subscheme by a sheaf of ideals. The construction of a projective spectrum makes it possible, in particular, to construct a monoidal transformation of schemes. Fibre products and glueing are also used in the construction of schemes. Less elementary constructions rely on the concept of a representable functor. By having at one's disposal a good concept of a family of objects parametrized by schemes, and by juxtaposing every scheme $S$ with a set $F(S)$ of families parametrized by $S$, a contravariant functor $F$ is obtained from the category of schemes into the category of sets (possibly with an additional structure). If the functor $F$ is representable, i.e. if a scheme $X$ exists such that $F(S)={\rm Hom}(S,X)$ for any $S$, then a universal family of objects parametrized by $X$ is obtained. The Picard scheme and Hilbert scheme are constructed in this way (see also Algebraic space; Moduli theory).
One other method of generating new schemes is transition to a quotient space by means of an equivalence relation on a scheme. As a rule, this quotient space exists as an algebraic space. A particular instance of this construction is the scheme of orbits $X/G$ under the action of a group scheme $G$ on a scheme $X$ (see Invariants, theory of).
One of the generalizations of the concept of a scheme is a formal scheme, which may be understood to be the inductive limit of schemes with one and the same underlying topological space.
In earlier terminology, e.g. the fundamental original book [GrDi], the phrase pre-scheme was used for a scheme as defined above; and scheme referred to a separated scheme, i.e. a scheme such that the diagonal $X\to X\times X$ is closed.
There are a large number of conditions, especially finiteness conditions, on morphisms between schemes that are considered. Some of these are as follows.
A morphism of schemes $f:X\to Y$ is a compact morphism (also called quasi-compact morphism) if there is an open covering of $Y$ by affine sets $V_i$ such that $f^{-1}(V_i)$ is compact for all $i$.
A morphism of schemes $f:X\to Y$ is a quasi-finite morphism if for every $y\in Y$, $f^{-1}(y)$ is a finite set.
A morphism $f:X\to Y$ is a quasi-separated morphism if the diagonal morphism $X\to X\times_Y X$ is compact.
A morphism $f:X\to Y$ is a morphism locally of finite type if there exists a covering of $Y$ by open affine sets $V_i=\Spec(B_i)$ such that for each $i$, $f^{-1}(V_i)$ can be covered by open affine sets $U_{ij} = \Spec(A_{ij})$ such that each $A_{ij}$ is a finitely-generated $B_i$-algebra. If, in addition, finitely many $U_{ij}$ suffice (for each $i$), then $f$ is a morphism of finite type.
A morphism $f:X\to Y$ is a finite morphism if there exists a covering of $Y$ by open affine sets $V_i=\Spec(B_i)$ such that each $f^{-1}(V_i)$ is affine, say $f^{-1}(V_i) = \Spec(A_i)$, and $A_i$ is a $B_i$-algebra which is finitely generated as a $B_i$-module.
Let $B$ be an algebra over a ring $R$. The algebra $B$ is said to be finitely presentable over $R$ if it is isomorphic to a quotient $R[T_1,\dots,T_n]/\def\fa{ {\mathfrak a}}\fa$, where $\fa$ is a finitely-generated ideal in $R[T_1,\dots,T_n]$. If $R$ is Noetherian, $B$ is finitely presentable if and only if $B$ is of finite type (i.e. finitely generated as an algebra over $R$).
Let $f:X\to Y$ be a morphism of (pre-) schemes, and $x\in X$, $y=f(x)$. Then $f$ is said to be finitely presentable in $x$ if there exists an open affine set $V\ni y$ and an open affine set $U\ni x$ such that $f(U)\subset V$ and such that the ring $A(U)$ is a finitely-presentable $A(V)$-algebra. The morphism $f$ is said to be locally finitely presentable if it is finitely presentable in each point $x$. If $Y$ is locally Noetherian, a morphism $f:X\to Y$ is locally finitely presentable if and only if it is locally of finite type. A morphism $f$ is finitely presentable if it is locally finitely presentable, quasi-compact and quasi-separated.
For some more important special conditions on morphisms of schemes and pre-schemes cf. Affine morphism; Smooth morphism (of schemes); Quasi-affine scheme; Separable mapping; Etale morphism; Proper morphism.
If $X\to Y$ is a morphism of such-and-such-a-type, then one often says that $X$ is a scheme of such-and-such-a-type over $Y$.
[Di] J. Dieudonné, "Cours de géométrie algébrique", I, Presses Univ. France (1974) Zbl 1092.14500 Zbl 1085.14500
[Do] I.V. Dolgachev, "Abstract algebraic geometry" J. Soviet Math., 2 : 3 (1974) pp. 264–303 Itogi Nauk. i Tekhn. Algebra Topol. Geom., 10 (1972) pp. 47–112 Zbl 1068.14059
[GrDi] A. Grothendieck, J. Dieudonné, "Eléments de géometrie algébrique", I. Le langage des schémes, Springer (1971) MR0217085 Zbl 0203.23301
[Ha] R. Hartshorne, "Algebraic geometry", Springer (1977) MR0463157 Zbl 0367.14001
[Sh] I.R. Shafarevich, "Basic algebraic geometry", Springer (1977) (Translated from Russian) MR0447223 Zbl 0362.14001
Scheme. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Scheme&oldid=30762
This article was adapted from an original article by V.I. Danilov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Cytochrome P450 gene CYP6BQ8 mediates terpinen-4-ol susceptibility in the red flour beetle, Tribolium castaneum (Herbst) (Coleoptera: Tenebrionidae)
Shanshan Gao, Xinlong Guo, Shumei Liu, Siying Li, Jiahao Zhang, Shuang Xue, Qingbo Tang, Kunpeng Zhang, Ruimin Li
Journal: Bulletin of Entomological Research , First View
Published online by Cambridge University Press: 13 January 2023, pp. 1-11
Cytochrome P450 proteins (CYPs) in insects can encode various detoxification enzymes and catabolize heterologous substances, conferring tolerance to insecticides. This study describes the identification of a P450 gene (CYP6BQ8) from Tribolium castaneum (Herbst) and investigation of its spatiotemporal expression profile and potential role in the detoxification of terpinen-4-ol, a component of plant essential oils. The developmental expression profile showed that TcCYP6BQ8 expression was relatively higher in early- and late-larval stages of T. castaneum compared with other developmental stages. Tissue expression profiles showed that TcCYP6BQ8 was mainly expressed in the head and integument of both larvae and adults. The expression profiling of TcCYP6BQ8 in developmental stages and tissues is closely related to the detoxification of heterologous substances. TcCYP6BQ8 expression was significantly induced after exposure to terpinen-4-ol, and RNA interference against TcCYP6BQ8 increased terpinen-4-ol-induced larval mortality from 47.78 to 66.67%. This indicates that TcCYP6BQ8 may be involved in T. castaneum's metabolism of terpinen-4-ol. Correlation investigation between the CYP6BQ8 gene and terpinen-4-ol resistance in T. castaneum revealed that the TcCYP6BQ8 gene was one of the factors behind T. castaneum's resistance to terpinen-4-ol. This discovery may provide a new theoretical foundation for future regulation of T. castaneum.
Impact of Subsidiary TMT Network Attention on Innovation: The Moderating Role of Subsidiary Autonomy
Shi-quan Wang, Shuang Zhang, Guo-yin Shang
Journal: Management and Organization Review / Volume 18 / Issue 6 / December 2022
This article takes group subsidiaries listed in the A-share market of the Shanghai and Shenzhen Stock Exchanges in China from 2012 to 2017 as its research subject and explores the impact of subsidiary TMT (top management team) attention to different networks on subsidiary innovation, considering the dual network embedding characteristics and autonomy of subsidiaries. Results show that subsidiary TMT group network attention inhibits subsidiary innovation, while external network attention promotes it; after industry category factors are included, the effect changes accordingly, but the moderating effect of subsidiary autonomy on the relationship between subsidiary TMT attention to different networks and subsidiary innovation remains significant. The identification of subsidiary TMT attention not only supplements the current literature's narrow focus on the impact of group parent-company attention on subsidiary behaviors, but also broadens theoretical understanding of the driving factors of subsidiary innovation behavior. By expounding on the moderating role of subsidiary autonomy, this article clarifies the boundary conditions of the impact of subsidiary TMT attention on subsidiary innovation and provides operable guidance for subsidiary TMTs to allocate and utilize their attention to promote the development of subsidiary innovation behaviors.
Effect of black soldier fly (Hermetia illucens) larvae meal on lipid and glucose metabolism of Pacific white shrimp Litopenaeus vannamei
Yongkang Chen, Shuyan Chi, Shuang Zhang, Xiaohui Dong, Qihui Yang, Hongyu Liu, Beiping Tan, Shiwei Xie
Journal: British Journal of Nutrition / Volume 128 / Issue 9 / 14 November 2022
Published online by Cambridge University Press: 24 November 2021, pp. 1674-1688
Print publication: 14 November 2022
The present study investigated the effect of black soldier fly (Hermetia illucens) larvae meal (BSF) on haemolymph biochemical indicators, muscle metabolites as well as the lipid and glucose metabolism of Pacific white shrimp Litopenaeus vannamei. Four diets were formulated in which the control diet contained 25 % of fishmeal (FM) and 10 % (BSF10), 20 % (BSF20), and 30 % (BSF30) of FM protein were replaced with BSF. Four hundred and eighty shrimp (0·88 ± 0·00 g) were distributed to four groups of three replicates and fed for 7 weeks. Results showed that growth performance of shrimp fed BSF30 significantly decreased compared with those fed FM, but there was no significant difference in survival among groups. The whole shrimp crude lipid content, haemolymph TAG and total cholesterol were decreased with the increasing BSF inclusion. The results of metabolomics showed that the metabolite patterns of shrimp fed different diets were altered, with significant changes in metabolites related to lipid metabolism, glucose metabolism as well as TCA cycle. The mRNA expressions of hk, pfk, pk, pepck, ampk, mcd, cpt-1 and scd1 in hepatopancreas were downregulated in shrimp fed BSF30, but mRNA expression of acc1 was upregulated. Unlike BSF30, the mRNA expressions of fas, cpt-1, fbp and 6pgd in hepatopancreas were upregulated in shrimp fed BSF20. This study indicates that BSF20 diet promoted lipid synthesis and lipolysis, while BSF30 diet weakened β-oxidation and glycolysis as well as affected the unsaturated fatty acids synthesis, which may affect the growth performance and body composition of shrimp.
dl-Methionine supplementation in a low-fishmeal diet affects the TOR/S6K pathway by stimulating ASCT2 amino acid transporter and insulin-like growth factor-I in the dorsal muscle of juvenile cobia (Rachycentron canadum) – CORRIGENDUM
Yuanfa He, Shuyan Chi, Beiping Tan, Xiaohui Dong, Qihui Yang, Hongyu Liu, Shuang Zhang, Fenglu Han, Di Liu
Journal: British Journal of Nutrition / Volume 127 / Issue 1 / 14 January 2022
Published online by Cambridge University Press: 10 September 2021, p. 150
Print publication: 14 January 2022
An application of three different field methods to monitor changes in Urumqi Glacier No. 1, Chinese Tien Shan, during 2012–18
Hongliang Li, Puyu Wang, Zhongqin Li, Shuang Jin, Chunhai Xu, Shuangshuang Liu, Zhengyong Zhang, Liping Xu
Journal: Journal of Glaciology / Volume 68 / Issue 267 / February 2022
Published online by Cambridge University Press: 24 June 2021, pp. 41-53
Print publication: February 2022
This study deploys RTK-GNSS in 2012, TLS in 2015 and UAV in 2018 to monitor the changes of Urumqi Glacier No. 1 (UG1), eastern Tien Shan, and analyzes the feasibility of the three technologies for monitoring mountain glaciers. DEM differencing shows that UG1 has experienced a pronounced thinning and mass loss for the period of 2012–18. A glacier surface elevation change of −0.83 ± 0.57 m w.e. a⁻¹ has been recorded for 2012–15, whereas the changes of glacier tongue surface elevation in 2015–18 and 2012–18 were −2.03 ± 0.95 and −1.34 ± 0.88 m w.e. a⁻¹, respectively. The glacier area shrank by 0.07 ± 0.07 × 10⁻³ km² and the terminus retreat rate was 6.28 ± 0.83 m a⁻¹ during 2012–18. The good agreement between the glaciological and geodetic specific mass balances is promising, showing that the combination of the three technologies is suitable for monitoring glacier mass change. We recommend applying the three technologies to assess each other at different locations on the glacier, e.g. RTK-GNSS base stations, ground control points, glacier tongue and terminus, in order to avoid the inherent limitations of each technology and to provide reliable data for future studies of mountain glacier changes in western China.
PP265 Application Of A Case-Mix Method For Medical Consumables Management In Anhui Province, China Using Healthcare Big Data
Lin Tong, Qing-hua Xu, Hong Ye, Shuang Zhang, Chen Cao
Journal: International Journal of Technology Assessment in Health Care / Volume 36 / Issue S1 / December 2020
Published online by Cambridge University Press: 28 December 2020, pp. 23-24
The case-mix method involves combining cases with similar complexities and medical services. The process of treating one episode of the disease and receiving treatment is the research unit, thus achieving different medical units. The feasibility of the calculation method is verified by calculating the public hospital consumption ratio, medical income, health materials expenditure indicators, and the differences between the various types of surgical combinations. A decision-making basis can then be provided for the creation of government indicator standards.
Medical records and data on the expenditure of medical consumables for the first and fourth quarters of 2017 were collected from seven third-class provincial hospitals. The medical consumption ratio for different diseases and surgical methods was calculated for the case-mix groups using a weighting method. Data were analyzed by descriptive statistics and the independent samples t-test.
There were significant differences in the proportions of combined use for different types of diseases. The same combination also had significant differences between different hospitals. In the fourth quarter of 2017, the operating group's consumption ratio was significantly lower than in the first quarter (p = 0.000).
It is reasonable to calculate the proportion of consumption by combined weighted analysis, which is also fairer for hospitals with better technical levels. This calculation method can be used by governments to manage the use and cost of medical consumables in hospitals.
Perturbations of gut microbiota in gestational diabetes mellitus patients induce hyperglycemia in germ-free mice
2019 DOHaD 11th World Congress
Yu Liu, Shengtang Qin, Ye Feng, Yilin Song, Na Lv, Fei Liu, Xiaoming Zhang, Shuxian Wang, Yumei Wei, Shuang Li, Shiping Su, Wanyi Zhang, Yong Xue, Yanan Hao, Baoli Zhu, Jingmei Ma, Huixia Yang
Journal: Journal of Developmental Origins of Health and Disease / Volume 11 / Issue 6 / December 2020
Published online by Cambridge University Press: 14 September 2020, pp. 580-588
Shifts in the maternal gut microbiota have been implicated in the development of gestational diabetes mellitus (GDM). Understanding the interaction between gut microbiota and host glucose metabolism will provide a new target of prediction and treatment. In this nested case-control study, we aimed to investigate the causal effects of gut microbiota from GDM patients on the glucose metabolism of germ-free (GF) mice. Stool and peripheral blood samples, as well as clinical information, were collected from 45 GDM patients and 45 healthy controls (matched by age and prepregnancy body mass index (BMI)) in the first and second trimester. Gut microbiota profiles were explored by next-generation sequencing of the 16S rRNA gene, and inflammatory factors in peripheral blood were analyzed by enzyme-linked immunosorbent assay. Fecal samples from GDM and non-GDM donors were transferred to GF mice. The gut microbiota of women with GDM showed reduced richness, specifically decreased Bacteroides and Akkermansia, as well as increased Faecalibacterium. The relative abundance of Akkermansia was negatively associated with blood glucose levels, and the relative abundance of Faecalibacterium was positively related to inflammatory factor concentrations. The transfer of fecal microbiota from GDM and non-GDM donors to GF mice resulted in different gut microbiota colonization patterns, and hyperglycemia was induced in mice that received GDM donor microbiota. These results suggested that the shifting pattern of gut microbiota in GDM patients contributed to disease pathogenesis.
Crustal growth event in the Cathaysia Block at 2.5 Ga: evidence from chronology and geochemistry of captured zircons in Jurassic acidic dykes
Shuang-Lian Li, Jian-Qing Lai, Wen-Zhou Xiao, Elena A. Belousova, Tracy Rushmer, Le-Jun Zhang, Quan Ou, Chao-Yun Liu
Journal: Geological Magazine / Volume 158 / Issue 4 / April 2021
Six acidic dykes were discovered surrounding the Laiziling pluton, Xianghualing area, in the western Cathaysia Block, South China. A number of captured zircons are found in two of these acidic dykes. By detailed U–Pb dating, Lu–Hf isotope and trace-element analysis, we find that these zircons have ages clustered at c. 2.5 Ga. Two acidic dyke samples yielded upper intercept ²⁰⁶Pb/²³⁸U ages of 2505 ± 42 Ma and 2533 ± 22 Ma, and weighted mean ²⁰⁷Pb/²⁰⁶Pb ages of 2500 ± 30 Ma and 2535 ± 16 Ma. The majority of these zircons have high (Sm/La)N, Th/U and low Ce/Ce* ratios, indicating a magmatic origin, but some grains were altered by later hydrothermal fluid. Additionally, the magmatic zircons have high Y, U, heavy rare earth element, Nb and Ta contents, indicating that their host rocks were mainly mafic rocks or the trondhjemite–tonalite–granodiorite rock series. Equally, their moderate Y, Yb, Th, Gd and Er contents also indicate a mafic source formed in a continental volcanic-arc environment. These zircons have positive εHf(t) values (2.5–6.9) close to zircons from the depleted mantle, with TDM (2565–2741 Ma) and TDM2 (2608–2864 Ma) ages close to their formation ages, indicating that these zircons originated directly from depleted-mantle magma, or from juvenile crust derived from the depleted mantle within a very short period. We therefore infer that the Cathaysia Block experienced a crustal growth event at c. 2.5 Ga.
Design, synthesis, and characterization of glycyrrhetinic acid-mediated multifunctional liver-targeting polymeric carrier materials
Qingxia Guan, Xue Zhang, Yue Zhang, Xin Yu, Weibing Zhang, Liping Wang, Shuang Sun, Xiuyan Li, Yanhong Wang, Shaowa Lv, Yongji Li
Journal: Journal of Materials Research / Volume 35 / Issue 10 / 28 May 2020
Print publication: 28 May 2020
The purpose of this study was to construct a glycyrrhetinic acid (GA)-mediated, breakable, intracellular, nanoscale drug-delivery carrier via amide and esterification reactions. The structures were identified by Fourier-transform infrared (FTIR) and ¹H-nuclear magnetic resonance (¹H-NMR) spectrophotometry. The compatibility and safety of the carrier were evaluated using hemolysis and cytotoxicity tests. The GA-copolymer micelle was prepared using the solvent evaporation method. FTIR and ¹H-NMR detection demonstrated the successful construction of the polymer. No hemolysis occurred at any concentration of polymer within 3 h, and the hemolysis rate was less than 5%. 3-(4,5-dimethyl-thiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) experimental results showed that the novel polymer reduced the cell survival rate and had significant cytotoxic effects. The blank nanoparticles were liquid with light blue opalescence. Transmission electron microscopy revealed that the empty micelles were uniform spheres, with an average size of 62 nm and a zeta potential of −13 mV. The novel GA-mediated polymeric carrier material developed here has the potential to effectively kill human SMMC-7721 cancer cells within 3 days when the dose is above 500 µg/mL.
The role of parathyroid hormone during pregnancy on the relationship between maternal vitamin D deficiency and fetal growth restriction: a prospective birth cohort study
Deng-Hong Meng, Ying Zhang, Shuang-Shuang Ma, Hong-Lin Hu, Jing-Jing Li, Wan-Jun Yin, Rui-Xue Tao, Peng Zhu
Journal: British Journal of Nutrition / Volume 124 / Issue 4 / 28 August 2020
Print publication: 28 August 2020
Previous studies have shown conflicting findings regarding the relationship between maternal vitamin D deficiency (VDD) and fetal growth restriction (FGR). We hypothesised that parathyroid hormone (PTH) may be an underlying factor relevant to this potential association. In a prospective birth cohort study, descriptive statistics were evaluated for the demographic characteristics of 3407 pregnancies in the second trimester from three antenatal clinics in Hefei, China. The association of the combined status of vitamin D and PTH with birth weight and the risk of small for gestational age (SGA) was assessed by a multivariate linear and binary logistic regression. We found that declined status of 25-hydroxyvitamin D is associated with lower birth weight (for moderate VDD: adjusted β = −49·4 g, 95 % CI −91·1, −7·8, P < 0·05; for severe VDD: adjusted β = −79·8 g, 95 % CI −127·2, −32·5, P < 0·01), as well as ascended levels of PTH (for elevated PTH: adjusted β = −44·5 g, 95 % CI −82·6, −6·4, P < 0·05). Compared with the non-VDD group with non-elevated PTH, pregnancies with severe VDD and elevated PTH had the lowest neonatal birth weight (adjusted β = −124·7 g, 95 % CI −194·6, −54·8, P < 0·001) and the highest risk of SGA (adjusted risk ratio (RR) = 3·36, 95 % CI 1·41, 8·03, P < 0·01). Notably, the highest risk of less Ca supplementation was founded in severe VDD group with elevated PTH (adjusted RR = 4·67, 95 % CI 2·78, 7·85, P < 0·001). In conclusion, elevated PTH induced by less Ca supplementation would further aggravate the risk of FGR in pregnancies with severe VDD through impaired maternal Ca metabolism homoeostasis.
Prevalence and determinants of metabolic syndrome based on three definitions in rural northeast China
Zhi Du, Liying Xing, Shuang Liu, Li Jing, Yuanmeng Tian, Boqiang Zhang, Han Yan, Min Lin, Shiwen Yu, Yingxian Sun
Journal: Public Health Nutrition / Volume 23 / Issue 18 / December 2020
To gain a more comprehensive understanding of metabolic syndrome (Mets) in the general Chinese population.
Cross-sectional study. Mets was defined by three widely accepted definitions including modified Adults Treatment Panel (ATP) III criteria, International Diabetes Federation (IDF) criteria and harmonized definition. Risk factors were evaluated by using multivariate logistic regression.
Nineteen rural villages in northeast China.
The survey was conducted in September 2017 and May 2018 on 10 926 individuals.
According to modified ATP III criteria, IDF criteria and harmonised definition, the overall prevalence of Mets was 41·3 % (95 % CI 40·3, 42·2), 34·2 % (95 % CI 33·2, 35·1) and 44·1 % (95 % CI 43·1, 45·1), respectively. Females had a higher prevalence, and elevated blood pressure was the most frequent. Age, female sex, non-peasant worker, higher BMI and lower-annual income were independent risk factors of Mets in all three definitions (all ps < 0·05). Based on modified ATP III criteria and harmonised definition, heavy drinking was positively correlated with Mets. In contrast, former drinking was inversely associated with Mets.
Mets is highly prevalent in rural areas of northeast China. Its independent risk factors include higher age, female sex, non-peasantry worker, higher BMI and lower-annual income. Modified ATP III criteria and harmonised definition may be superior definitions of Mets.
Novel Tm³⁺/Yb³⁺–co-doped Bi₂MoO₆: Synthesis, characterization, and enhanced photocatalytic activity under visible-light irradiation
Zuowei Zhang, Hongshun Hao, Shanshan Jin, Yunxia Hou, Hongman Hou, Gongliang Zhang, Jingran Bi, Shuang Yan, Guishan Liu, Wenyuan Gao
Journal: Journal of Materials Research / Volume 35 / Issue 3 / 14 February 2020
Published online by Cambridge University Press: 03 February 2020, pp. 312-320
Print publication: 14 February 2020
A novel photocatalyst, Tm³⁺/Yb³⁺ co-doped bismuth molybdate (Bi₂MoO₆), was synthesized via the hydrothermal method. The samples were characterized through X-ray diffraction, field emission scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy (XPS), UV-vis diffuse reflectance spectra, and photoluminescence. XPS characterization confirmed the doped rare earth elements. Analysis of the optical properties explained the up-conversion process and its effect on the photocatalytic performance. The as-synthesized samples were employed to decompose Rhodamine B to evaluate their photocatalytic activities under visible light irradiation. The doped samples showed enhanced photocatalytic activity compared with the bare Bi₂MoO₆. When the ratio of Tm³⁺ to Yb³⁺ was 0.5:5, the degradation efficiency reached its highest value of 96.1% within 25 min, which was higher than that (74.9%) of pure Bi₂MoO₆. Moreover, the photocatalytic mechanism underlying the improved photocatalytic properties was discussed. In addition, the sample showed superior stability in photocatalytic activity. A novel catalyst for industrial pollutant degradation was proposed.
Prenatal diagnosis of anomalous origin of one pulmonary artery branch by two-dimensional echocardiography: summary of 12 cases
Li Wenxiu, Zhang Yuan, Huang Chaoning, Geng Bin, Wu Jiang, Yang Shuang
Journal: Cardiology in the Young / Volume 30 / Issue 1 / January 2020
To improve the prenatal diagnosis for anomalous origin of pulmonary artery branches by comparing and analyzing different types of fetal echocardiography features.
Between June 2012 and December 2018, fetal echocardiographic features were analyzed retrospectively from fetuses with a prenatal diagnosis of anomalous origin of pulmonary artery branch. The main points of identification were summarized.
A total of 12 fetuses were diagnosed, including anomalous origin of a pulmonary artery branch from the innominate artery and six cases with unilateral absence of pulmonary artery. The shared characteristic sonographic finding was the lack of confluence at the bifurcation of the main pulmonary artery. The differences between the two conditions are highlighted by the origin of the anomalous vessel. In fetuses with anomalous origin of one pulmonary artery branch, the affected pulmonary artery arose from the posterior wall of the ascending aorta as noted on three vessels and trachea view as well as the long axis of the left ventricular outflow tract. This is in contrast to fetuses with unilateral absence of pulmonary artery, where the origin of affected pulmonary artery arises from the base of the innominate artery via the ipsilateral patent arterial duct as evident on the three vessels and trachea view and the coronal view of innominate (brachiocephalic) artery.
(1) The main similarity is an absence of a confluence at the bifurcation of the main pulmonary artery. (2) The main distinguishing feature is the origin of the anomalous vessel from either the subclavian or directly from the aorta.
dl-Methionine supplementation in a low-fishmeal diet affects the TOR/S6K pathway by stimulating ASCT2 amino acid transporter and insulin-like growth factor-I in the dorsal muscle of juvenile cobia (Rachycentron canadum)
An 8-week feeding experiment was conducted to investigate the effects of dl-methionine (Met) supplementation in a low-fishmeal diet on growth, key gene expressions of amino acid transporters and target of rapamycin (TOR) pathway in juvenile cobia, Rachycentron canadum. Seven isonitrogenous and isolipidic diets were formulated, containing 0·72, 0·90, 1·00, 1·24, 1·41, 1·63 and 1·86 % Met. Weight gain and specific growth rates increased gradually with Met levels of up to 1·24 % and then decreased gradually. In dorsal muscle, mRNA levels of ASCT2 in the 1·00 % Met group were significantly up-regulated compared with 0·72, 1·63, and 1·86 %. The insulin-like growth factor-I (IGF-I) mRNA levels in the dorsal muscle of fish fed 1·00 and 1·24 % Met were higher than those in fish fed other Met levels. In addition, fish fed 1·24 % Met showed the highest mRNA levels of TOR and phosphorylation of TOR on Ser2448. The phosphorylation of ribosomal p70-S6 kinase (S6K) on Ser371 in the dorsal muscle of fish fed 1·86 % Met was higher than those in the 0·72 % group. In conclusion, straight broken-line analysis of weight gain rate against dietary Met level indicates that the optimal Met requirement for juvenile cobia is 1·24 % (of DM, or 2·71 % dietary protein). Met supplementation in a low-fishmeal diet increased cobia growth via a mechanism that can partly be attributed to Met's ability to affect the TOR/S6K signalling pathway by enhancing ASCT2 and IGF-I transcription in cobia dorsal muscle.
The possible origin of high frequency quasi-periodic oscillations in low mass X-ray binaries
Chang Sheng Shi, Shuang Nan Zhang, Xiang Dong Li
Journal: Proceedings of the International Astronomical Union / Volume 14 / Issue S346 / August 2018
Published online by Cambridge University Press: 30 December 2019, pp. 277-280
Print publication: August 2018
We summarize our model that high frequency quasi-periodic oscillations (QPOs) both in the neutron star low mass X-ray binaries (NS-LMXBs) and black hole LMXBs may originate from magnetohydrodynamic (MHD) waves. Based on the MHD model in NS-LMXBs, the explanation of the parallel tracks is presented. The slowly varying effective surface magnetic field of a NS leads to the shift of parallel tracks of QPOs in NS-LMXBs. In the study of kilohertz (kHz) QPOs in NS-LMXBs, we obtain a simple power-law relation between the kHz QPO frequencies and the combined parameter of accretion rate and the effective surface magnetic field. Based on the MHD model in BH-LMXBs, we suggest that two stable modes of the Alfvén waves in the accretion disks with a toroidal magnetic field may lead to the double high frequency QPOs. This model, in which the effect of the general relativity in BH-LMXBs is considered, naturally accounts for the 3:2 relation for the upper and lower frequencies of the QPOs and the relation between the BH mass and QPO frequency.
Mechanism of dexamethasone in the context of Toxoplasma gondii infection
JING ZHANG, XIN QIN, YU ZHU, SHUANG ZHANG, XUE-WEI ZHANG, HE LU
Journal: Parasitology / Volume 144 / Issue 11 / September 2017
Toxoplasmosis is a serious, opportunistic zoonotic disease that can be life-threatening. Dexamethasone (DEX) is widely used in the clinic for treatment of inflammatory and autoimmune diseases. However, long-term use of DEX can easily lead to acute toxoplasmosis in patients, and the potential molecular mechanism is still not very clear. The aims of this study were to investigate the effect of DEX on the proliferation of Toxoplasma and its molecular mechanisms, and to establish the corresponding control measures. All the results showed that dexamethasone could enhance the proliferation of Toxoplasma gondii tachyzoites. After 72 h of DEX treatment, 566 (±7) tachyzoites were found in 100 host cells, while only 86 (±8) tachyzoites were counted from the non-treated control cells (P < 0·01). Gas chromatography (GC) analysis showed changes in the level and composition of fatty acids in DEX-treated host cells and T. gondii. Fish oil was added as a modulator of lipid metabolism in experimental mice. It was found that mice fed with fish oil did not develop the disease after infection with T. gondii, and the structure of fatty acids in plasma changed significantly. The metabolism of fatty acids in the parasites was limited, and the desaturase gene expression was downregulated. These results indicate that the molecular mechanism by which dexamethasone promotes the proliferation of T. gondii may be that it induces changes in the fatty acid composition of tachyzoites and host cells. Therefore, we recommend fatty acid supplementation in immunosuppressed and immunocompromised patients in order to inhibit toxoplasmosis.
Strict Comparison of Positive Elements in Multiplier Algebras
Victor Kaftal, Ping Wong Ng, Shuang Zhang
Journal: Canadian Journal of Mathematics / Volume 69 / Issue 2 / 01 April 2017
Published online by Cambridge University Press: 20 November 2018, pp. 373-407
Print publication: 01 April 2017
Main result: If a $C^*$-algebra $\mathcal{A}$ is simple, $\sigma$-unital, has finitely many extremal traces, and has strict comparison of positive elements by traces, then its multiplier algebra $\mathcal{M}(\mathcal{A})$ also has strict comparison of positive elements by traces. The same result holds if "finitely many extremal traces" is replaced by "quasicontinuous scale". A key ingredient in the proof is that every positive element in the multiplier algebra of an arbitrary $\sigma$-unital $C^*$-algebra can be approximated by a bi-diagonal series. As an application of strict comparison, if $\mathcal{A}$ is a simple separable stable $C^*$-algebra with real rank zero, stable rank one, and strict comparison of positive elements by traces, then whether a positive element is a positive linear combination of projections is determined by the trace values of its range projection.
Strategies for Improving Efficiency and Stability of Perovskite Solar Cells
Xiaoli Zheng, Yang Bai, Shuang Xiao, Xiangyue Meng, Teng Zhang, Shihe Yang
Journal: MRS Advances / Volume 2 / Issue 53 / 2017
Print publication: 2017
Perovskite solar cells (PSCs) based on organometal halide light absorbers have hit the limelight in recent years owing to their low temperature solution processability, material abundance and rapidly rising efficiency. To rival the leading photovoltaic technologies, efficiency and long-term stability of PSCs represent two prime facets of the challenges currently facing the research community. Herein we summarize the strategies for improving efficiency and stability of PSCs by drawing on our recent work. Emphasis is given to the importance of perovskite film growth, electron/hole transport materials and interface materials in cell performance. We also discuss possible degradation mechanisms of PSCs.
Asparagine reduces the mRNA expression of muscle atrophy markers via regulating protein kinase B (Akt), AMP-activated protein kinase α, toll-like receptor 4 and nucleotide-binding oligomerisation domain protein signalling in weaning piglets after lipopolysaccharide challenge
Xiuying Wang, Yulan Liu, Shuhui Wang, Dingan Pi, Weibo Leng, Huiling Zhu, Jing Zhang, Haifeng Shi, Shuang Li, Xi Lin, Jack Odle
Published online by Cambridge University Press: 30 August 2016, pp. 1188-1198
Pro-inflammatory cytokines are critical in mechanisms of muscle atrophy. In addition, asparagine (Asn) is necessary for protein synthesis in mammalian cells. We hypothesised that Asn could attenuate lipopolysaccharide (LPS)-induced muscle atrophy in a piglet model. Piglets were allotted to four treatments (non-challenged control, LPS-challenged control, LPS+0·5 % Asn and LPS+1·0 % Asn). On day 21, the piglets were injected with LPS or saline. At 4 h post injection, piglet blood and muscle samples were collected. Asn increased protein and RNA content in muscles, and decreased mRNA expression of muscle atrophy F-box (MAFbx) and muscle RING finger 1 (MuRF1). However, Asn had no effect on the protein abundance of MAFbx and MuRF1. In addition, Asn decreased muscle AMP-activated protein kinase (AMPK) α phosphorylation, but increased muscle protein kinase B (Akt) and Forkhead Box O (FOXO) 1 phosphorylation. Moreover, Asn decreased the concentrations of TNF-α, cortisol and glucagon in plasma, and TNF-α mRNA expression in muscles. Finally, Asn decreased mRNA abundance of muscle toll-like receptor (TLR) 4 and nucleotide-binding oligomerisation domain protein (NOD) signalling-related genes, and regulated their negative regulators. The beneficial effects of Asn on muscle atrophy may be associated with the following: (1) inhibiting muscle protein degradation via activating Akt and inactivating AMPKα and FOXO1; and (2) decreasing the expression of muscle pro-inflammatory cytokines via inhibiting TLR4 and NOD signalling pathways by modulation of their negative regulators.
Effects of side subsurface defects induced by CNC machine on the gain spatial distribution in neodymium phosphate glass
Bingyan Wang, Junyong Zhang, Shuang Shi, Kewei You, Jianqiang Zhu
Journal: High Power Laser Science and Engineering / Volume 4 / 2016
Published online by Cambridge University Press: 23 March 2016, e9
The processing method applied to the side surface is different from the method applied to the light pass surface in neodymium phosphate glass (Nd:glass), and thus subsurface defects remain after processing. The subsurface defects in the side surface influence the gain uniformity of Nd:glass, which is a key factor to evaluate the performance of amplifiers. The scattering characteristics of side subsurface defects were simulated by finite difference time domain (FDTD) Solutions software. The scattering powers of the glass fabricated by a computer numerical control (CNC) machine without cladding were tested at different incident angles. The trend of the curve was similar to the simulated result, while the smallest point was different with the complex true morphology. The simulation showed that the equivalent residual reflectivity of the cladding glass can be more than 0.1% when the number of defects in a single gridding is greater than 50. | CommonCrawl |
Problem G
On an $n \times n$ chessboard, the Prince and the Princess play a game. The squares on the chessboard are numbered $1, 2, 3, \ldots , n^2$, as shown in Figure 1:
Figure 1: The chessboard.
The Prince stands in square $1$, makes $p$ jumps and finally reaches square $n^2$. He enters a square at most once. So if we use $x_ i$ to denote the $i$:th square he enters, then $x_1, x_2, \ldots , x_{p+1}$ are all different. Note that $x_{1}=1$ and $x_{p+1}=n^2$.
The Princess does essentially the same thing – stands in square $1$, makes $q$ jumps and finally reaches square $n^2$. We use $y_{1}, y_{2}, \ldots , y_{q+1}$ to denote the sequence, and all $q+1$ numbers are different.
Figure 2 below shows a $3 \times 3$ square, a possible route for the Prince and a different route for the Princess.
The Prince moves along the sequence: $1 \rightarrow 7 \rightarrow 5 \rightarrow 4 \rightarrow 8 \rightarrow 3 \rightarrow 9$ (Black arrows), while the Princess moves along this sequence: $1 \rightarrow 4 \rightarrow 3 \rightarrow 5 \rightarrow 6 \rightarrow 2 \rightarrow 8 \rightarrow 9$ (White arrows).
The King, their father, has just come. "Why move separately? You are brother and sister!", said the King, "Ignore some jumps and make sure that you're always together."
For example, if the Prince ignores his 2:nd, 3:rd, and 6:th jump, he'll follow the route: $1 \rightarrow 4 \rightarrow 8 \rightarrow 9$. If the Princess ignores her 3:rd, 4:th, 5:th, 6:th jump, she'll follow the same route: $1 \rightarrow 4 \rightarrow 8 \rightarrow 9$, (The common route is shown in Figure 3) thus satisfying the King. The King wants to know the longest route they can move together. Could you tell him?
The first line of the input contains a single integer $t$, the number of test cases to follow. For each case, the first line contains three integers $n, p, q$ ($2 \le n \le 250, 1 \le p, q < n^2$). The second line contains $p+1$ different integers in the range $[1 \ldots n^2]$, the sequence of the Prince. The third line contains $q+1$ different integers in the range $[1 \ldots n^2]$, the sequence of the Princess.
For each test case, print the case number and the length of longest route they can move together. Look at the output for sample input for details.
Case 1: 4
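One standard way to compute this length exploits the fact that each route visits every square at most once: map each square to its position in the Prince's route and take the longest increasing subsequence of the Princess's route rewritten in those positions. A minimal Python sketch of this idea (input parsing omitted; the function and variable names are illustrative only):

```python
import bisect

def longest_common_route(prince, princess):
    # Position of each square in the Prince's route (each square appears at most once).
    pos = {square: i for i, square in enumerate(prince)}
    # Rewrite the Princess's route as positions in the Prince's route,
    # skipping squares the Prince never visits.
    seq = [pos[square] for square in princess if square in pos]
    # The longest increasing subsequence of `seq` has the same length as the
    # longest common route, because both routes consist of distinct squares.
    tails = []  # tails[k] = smallest possible last element of an increasing subsequence of length k+1
    for x in seq:
        k = bisect.bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

# Sample from the statement (answer: 4, e.g. the route 1 -> 4 -> 8 -> 9).
print(longest_common_route([1, 7, 5, 4, 8, 3, 9],
                           [1, 4, 3, 5, 6, 2, 8, 9]))
```

This runs in $O((p+q)\log p)$ time per test case, comfortably within the limit for $n \le 250$.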
Problem ID: princeandprincess
CPU Time limit: 1 second
Author: Rujia Liu | CommonCrawl |
Complex Analysis and Dynamics Seminar
Sep 8: Vincent Delecroix (LaBRI, France)
Counting Triangulations: From Elementary Combinatorics to the Geometries of Moduli Spaces of Complex Curves
How many triangulations of the sphere are there with $n$ vertices such that all vertices have degree at most $6$? It appears that there is a formula for these numbers due to P. Engel and P. Smillie in terms of the function $\sigma_9(n)$ (= the sum of the $9$-th powers of the divisors of the integer $n$). Even more interesting is that this sequence has polynomial asymptotics whose leading coefficient is a Thurston (or Deligne-Mostow) volume of the moduli space $\mathcal{M}_{0,12}$ of configurations of points on the sphere. In this talk I will discuss several relations between combinatorial problems of graphs on surfaces (such as triangulations and meanders) and the geometries of moduli spaces of curves (intersection theory, Thurston volumes and Masur-Veech volumes).
Part of the material from this talk comes from a collaboration with E. Goujard, P. Zograf and A. Zorich.
Oct 6: Jun Hu (Brooklyn College and Graduate Center of CUNY)
Dynamics of Regularly Ramified Rational Maps in Some One-Parameter Families
\( \newcommand{\EC}{\widehat{\mathbb{C}}} \) Let $\EC$ be the Riemann sphere. A rational map $f:\EC \rightarrow \EC$ is said to be regularly ramified if for every point $q\in \EC$, all pre-images of $q$ under $f$ have equal indices (meaning that $f$ has the same local degree at all pre-images of $q$). Up to conjugacy by a Mobius transformation, any regularly ramified rational map $f$ can be written as a quotient map of a finite Kleinian group post-composed with a Mobius transformation, and $f$ can have only two or three critical values. In this paper, we classify the Julia sets of such maps in some one-parameter families $f_{\lambda }$, where $\lambda $ is a complex parameter. The maps in these families have a common super attracting fixed point of order $=2$ or $>2$. We show they have classifications similar to the classifications of the Julia sets of maps in the families $f_n^{c}(z)=z^n+c/z^n$, where $n$ is a positive integer $=2$ or $>2$ and $c$ is a complex number. A new type of Julia set is also presented, which has not appeared in the literature and which is called an exploded McMullen necklace. We first prove that none of the maps in these families can have Herman rings in their Fatou sets. Then we prove: if the super attracting fixed point of $f_{\lambda }$ has order greater than $2$, then the Julia set $J(f_{\lambda })$ is either connected, a Cantor set, or a McMullen necklace (either exploded or not); if the super attracting fixed point has order equal to $2$, then $J(f_{\lambda })$ is either connected or a Cantor set. This is joint work with Oleg Muzician and Yingqing Xiao.
Oct 13: Steven Bradlow (University of Illinois at Urbana-Champaign)
Exotic Components of Surface Group Representation Varieties, and Their Higgs Bundle Avatars
Moduli spaces of Higgs bundles on a Riemann surface correspond to representation varieties for the surface fundamental group. For representations into complex semisimple Lie groups, the components of these spaces are labeled by obvious topological invariants. This is no longer true if one restricts to real forms of the complex groups. Factors other than the obvious invariants lead to the existence of extra `exotic' components which can have special significance. Formerly, all known instances of such exotic components were attributable to one of two distinct mechanisms. Recent Higgs bundle results for the groups $SO(p,q)$ shed new light on this dichotomy and reveal new examples outside the scope of the two known mechanisms. This talk will survey what is known about the exotic components and, without assuming familiarity with Higgs bundles, describe the new $SO(p,q)$ results.
Oct 20: Stephen Preston (Brooklyn College and Graduate Center of CUNY)
Global Existence and Blowup for Geodesics in Universal Teichmuller Spaces
I will discuss the Riemannian approach to universal Teichmuller space and the universal Teichmuller curve, inspired by Tromba and Yamada, together with the Weil-Petersson metric. The universal Teichmuller space can be viewed as coming from the space of hyperbolic metrics on the upper half-plane modulo diffeomorphisms that fix the boundary, and we end up with the Weil-Petersson metric on the diffeomorphism group of the real line. Meanwhile the universal Teichmuller curve, as described by Teo, has a natural metric called the Velling-Kirillov metric. I will discuss how one can view this as coming from the space of Euclidean metrics on the upper half-plane.
Both situations yield a right-invariant metric on the diffeomorphism group of the reals. Geodesics on such groups can be written as an Euler-Arnold equation on the Lie algebra, and this equation may be studied using PDE techniques. In particular I will show that all initially-smooth geodesics on the universal Teichmuller curve end in finite time, while all initially-smooth geodesics in the universal Teichmuller space remain smooth for all time. This is joint work with my student, Pearce Washabaugh.
Nov 3: Felipe Ramírez (Wesleyan University)
Counterexamples and Questions in Inhomogeneous Approximation
Khintchine's Theorem (1924) states that almost all (respectively, almost no) real numbers can be approximated by rationals at a given rate, provided that the rate is monotonic and corresponds to a divergent (resp. convergent) series. In 1941, Duffin and Schaeffer showed by way of example that the monotonicity condition cannot be removed. They formulated their famous and resistant Duffin—Schaeffer Conjecture in response to this example. I will discuss an analogue of this situation for inhomogeneous approximations. From the point of view of dynamics, this talk is about toral translations.
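For reference, one standard way to state the theorem: for a non-increasing function $\psi:\mathbb{N}\to[0,\infty)$, almost every real number $x$ admits infinitely many rationals $p/q$ with $|x - p/q| < \psi(q)/q$ when $\sum_{q \ge 1} \psi(q)$ diverges, while almost every $x$ admits only finitely many such approximations when the sum converges.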
Nov 17: Chenxi Wu (Rutgers University, New Brunswick)
An Upper Bound on the Asymptotic Translation Length on the Curve Complex
A pseudo-Anosov map induces a map from the curve graph to itself, and a basic question is to study the asymptotic translation length which is known to be a non-zero rational number. I will give an introduction on prior works on the study of this asymptotic translation length, and present an improved upper bound for the asymptotic translation length for pseudo-Anosov maps inside a fibered cone, which generalizes the previous result on sequences with small translation length on curve graphs by Kin and Shin. This is a joint work by Hyungryul Baik and Hyunshik Shik.
Dec 1: Martin Bridgeman (Boston College)
Schwarzian Derivatives, Projective Structures, and the Weil-Petersson Gradient Flow for Renormalized Volume
We consider complex projective structures and their associated locally convex pleated surface. We relate their geometry in terms of the $L^2$ and $L^{\infty}$ norms the Schwarzian derivative. We show that these give a unifying approach that generalizes a number of well-known results for convex cocompact hyperbolic structures including bounds on the Lipschitz constant for the retract and the length of the bending lamination. We then use these bounds to study the Weil-Petersson gradient flow of renormalized volume on the space CC(N) of convex cocompact hyperbolic structures on a compact manifold N with incompressible boundary. This leads to a proof of the conjecture that the renormalized volume has infimum given by one-half the simplicial volume of DN, the double of N. Joint work with Jeffrey Brock and Kenneth Bromberg.
Dec 8: Zeno Huang (College of Staten Island and Graduate Center of CUNY)
On Weil-Petersson Geometry on Universal Teichmuller Space
I will describe Hilbert structure on the universal Teichmuller space where the Weil-Petersson metric is well defined (via Takhatajan-Teo). It has been shown that the WP metric is negatively curved and Einstein. Instead of curvature tensor, we consider its curvature operator and study some fundamental properties of this operator. In particular we show the operator is non-positive, bounded, and noncompact. This is based on joint work with Y. Wu. | CommonCrawl |
This post discusses highlights of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
I attended AAAI 2019 in Honolulu, Hawaii last week. Overall, I was particularly surprised by the interest in natural language processing at the conference. There were 15 sessions on NLP (most standing-room only) with ≈10 papers each (oral and spotlight presentations), so around 150 NLP papers (out of 1,150 accepted papers overall). I also really enjoyed the diversity of invited speakers who discussed topics from AI for social good, to adversarial learning and imperfect-information games (videos of all invited talks are available here). Another cool thing was the Oxford style debate, which required debaters to take controversial positions. This was a nice change of pace from panel discussions, which tend to converge to a uniform opinion.
Question answering
AI for social good
Adversarial learning
Imperfect-information games
Inductive biases
Word embeddings
In his talk at the Reasoning and Learning for Human-Machine Dialogues workshop, Phil Cohen argued that chatbots are an attempt to avoid solving the hard problems of dialogue. They provide the illusion of having a dialogue but in fact do not have a clue what we are saying or meaning. What we should rather do is recognize intents via semantic parsing. We should then reason about the speech acts, infer a user's plan, and help them to succeed. You can find more information about his views in this position paper.
During the panel discussion, Imed Zitouni highlighted that the limitations of current dialogue models affect user behaviour. 75-80% of the time users only employ 4 skills: "play music", "set a timer", "set a reminder", and "what is the weather". Phil argued that we should not have to learn how to talk, how to make an offer, etc. all over again for each domain. We can often build simple dialogue agents for new domains "overnight".
At the Workshop on Reproducible AI, Joel Grus argued that Jupyter notebooks are bad for reproducibility. As an alternative, he recommended to adopt higher-level abstractions and declarative configurations. Another good resource for reproducibility is the ML reproducibility checklist by Joelle Pineau, which provides a list of items for algorithms, theory, and empirical results to enforce reproducibility.
Unit tests for AI experiments recommended by Joel Grus
A team from Facebook reported on their experiments reproducing AlphaZero in their ELF framework, training a model using 2,000 GPUs in 2 weeks. Reproducing an on-policy, distributed RL system such as AlphaZero is particularly challenging as it does not have a fixed dataset and optimization is dependent on the distributed environment. Training smaller versions and scaling up is key. For reproducibility, the random seed, the git commit number, and the logs should be stored.
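As a rough illustration of that last point, a small helper along the following lines (the file layout and field names are placeholders, not anything prescribed in the talk; it assumes the code runs inside a git checkout) can snapshot the seed, commit, and log location at the start of a run:

```python
import json
import random
import subprocess
import time

import numpy as np

def snapshot_run_metadata(path="run_metadata.json", seed=0):
    """Record what is needed to re-run an experiment: seed, code version, log location."""
    random.seed(seed)
    np.random.seed(seed)
    # Assumes this is executed from inside a git repository.
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip()
    metadata = {
        "seed": seed,
        "git_commit": commit,
        "log_file": f"logs/run_{int(time.time())}.log",
        "started_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2)
    return metadata
```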
During the panel discussion, Odd Erik Gundersen argued that reproducibility should be defined as the ability of an independent research team to produce the same results using the same AI method, based on the documentation provided by the original authors. Degrees of reproducibility can be measured based on the availability of different types of documentation, such as the method description, data, and code.
Pascal van Hentenryck argued that reproducibility could be made part of the peer review process, such as in the Mathematical Programming Computation journal where each submission requires an executable file (which does not need to be public). He also pointed out that—empirically—papers with supplementary materials are more likely to be accepted.
At the Reasoning and Complex QA Workshop, Ken Forbus discussed an analogical training method for QA that adapts a general-purpose semantic parser to a new domain with few examples. At the end of his talk, Ken argued that the train/test method in ML is holding us back. Our learning systems should use rich relational representations, gather their own data, and evaluate progress.
Ashish Sabharwal discussed the OpenBookQA dataset presented at EMNLP 2018 during his talk. The open book setting is situated between reading comprehension and open-ended QA on the textual QA spectrum (see below).
The textual QA spectrum
It is designed to probe a deeper understanding rather than memorization skills and requires applying core principles to new situations. He also argued that while entailment is recognized as a core NLP task with many applications, it is still lacking a convincing application to an end-task. This is mainly due to multi-sentence entailment being a lot harder, as irrelevant sentences often have significant textual overlap.
Furthermore, he discussed the design of leaderboards, which have to make tradeoffs along multiple competing axes with respect to the host, the submitters, and the community. A particular deficit of current leaderboards is that they make it difficult to share and build upon successful techniques. For an extensive discussion of the pros and cons of leaderboards, check out this recent NLP Highlights podcast.
The first part of the final panel discussion focused on important outstanding technical challenges for question answering. Michael Witbrock emphasized techniques to create datasets that cannot easily be exploited by neural networks, such as the adversarial filtering in SWAG. Ken argued that models should come up with answers and explanations rather than performing multiple choice question answering, while Ashish noted that such explanations need to be automatically validated.
Eduard Hovy suggested that one way towards a system that can perform more complex QA could consist of the following steps:
Build a symbolic numerical reasoner that leverages relations from an existing KB, such as Geobase, which contains geography facts.
Look at the subset of questions in existing natural language datasets, which require reasoning that is possible with the reasoner.
Annotate these questions with semantic parses and train a semantic parsing model to convert the questions to logical forms. These can then be provided to the reasoner to produce an answer.
Augment the reasoner with another reasoning component and repeat steps 2-3.
The panel members noted that such reasoners exist, but lack a common API.
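To make the proposed loop a little more concrete, here is a deliberately tiny sketch of steps 1–3, with made-up geography facts and a hand-written logical form standing in for the output of a trained semantic parser:

```python
# Hypothetical Geobase-style facts; the entries and numbers are illustrative, not real data.
FACTS = {
    ("capital", "texas"): "austin",
    ("capital", "oregon"): "salem",
    ("population", "austin"): 964_000,
    ("population", "salem"): 178_000,
}

def execute(logical_form):
    """Evaluate a nested (relation, argument) logical form against the fact table."""
    relation, arg = logical_form
    if isinstance(arg, tuple):   # recurse on nested sub-queries first
        arg = execute(arg)
    return FACTS[(relation, arg)]

# "What is the population of the capital of Texas?"
# A semantic parser (step 3) would map the question to this logical form.
print(execute(("population", ("capital", "texas"))))  # -> 964000
```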
Finally, here are a few papers on question answering that I enjoyed:
COALA: A Neural Coverage-Based Approach for Long Answer Selection with Small Data: An approach that ranks answers based on how many of the question aspects they cover. They incorporate syntactic information via dependency parses and find that this improves performance.
Multi-Task Learning with Multi-View Attention for Answer Selection and Knowledge Base Question Answering: Answer selection and knowledge base QA are learned jointly via multi-task learning. Attention is performed on different views of the data.
QUAREL: A Dataset and Models for Answering Questions about Qualitative Relationships: A challenging new QA dataset of 2,771 story questions that require knowledge about qualitative relationships pertaining to 19 quantities such as smoothness, friction, speed, heat, and distance.
During his invited talk, Milind Tambe looked back on 10 years of research in AI and multiagent systems for social good (video available here; slides available here). Milind discussed his research on using game theory to optimize security resources such as patrols at airports, air marshal assignments on flights, coast guard patrols, and ranger patrols in African national parks to protect against poachers. Overall, his talk was a striking reminder of the positive effects AI can have if it is employed for social good.
An overview of an ML approach for predicting poacher behaviour in an African national park
The Oxford style debate focused on the proposition "The AI community today should continue to focus mostly on ML methods" (video available here). It pitted Jennifer Neville and Peter Stone on the 'pro' side against Michael Littman and Oren Etzioni on the 'against' side, with Kevin Leyton-Brown as moderator. Overall, the debate was entertaining and engaging to watch.
The debater panel (from left to right): Peter Stone, Jennifer Neville, Kevin Leyton-Brown (moderator), Michael Littman, Oren Etzioni
Here are some representative remarks from each of the debaters that stuck with me:
"The unique strength of the AI community is that we focus on the problems that need to be solved." – Jennifer Neville
"We are in the middle of one of the most amazing paradigm shifts in all of science, certainly computer science." – Oren Etzioni
"If you want to have an impact, don't follow the bandwagon. Keep alive other areas." – Peter Stone
"Scientists in the natural sciences are actually very excited about ML as much of their research relies on expensive computations, which can be approximated with neural networks." – Michael Littman
There were some important observations and ultimately a general consensus that ML alone is not enough and we need to integrate other methods with ML. Yonatan Belinkov also live tweeted, while I tweeted some remarks that elicited laughs.
During his invited talk (video available here), Ian Goodfellow discussed a multiplicity of areas to which adversarial learning has been applied. Among many advances, Ian mentioned that he was impressed by the performance and flexibility of attention masks for GANs, particularly that they are not restricted to circular masks.
He discussed adversarial examples, which are a consequence of moving away from i.i.d. data: attackers are able to confuse the model by showing unusual data from a different distribution such as graffiti on stop signs. He also argued—contrary to the prevalent opinion—that deep models that are more robust are more interpretable than linear models. The main reason is that the latent space of a linear model is totally unintuitive, while a more robust model is more inspectable (as can be seen below).
Traversing the latent space of a linear model (left) vs. a deep, more robust model (right) between different MNIST labels starting from "9"
Semi-supervised learning with GANs can allow models to be more sample-efficient. What is interesting about such applications is that they focus on the discriminator (which is normally discarded) rather than the generator where the discriminator is extended to classify n+1 classes. Regarding leveraging GANs for NLP, Ian conceded that we currently have not found a good way to deal with the large action space required to generate sentences with RL.
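The n+1-class idea can be sketched roughly as follows (a toy PyTorch version in the spirit of the usual semi-supervised GAN recipe; the architecture, constants, and loss weighting are placeholders rather than anything specific from the talk):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10  # e.g. digit classes; index NUM_CLASSES is reserved for "fake"

discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_CLASSES + 1),  # n + 1 outputs: n real classes plus one fake class
)

def discriminator_loss(labeled_x, labels, unlabeled_x, fake_x):
    # Labeled real images: predict their true class.
    supervised = F.cross_entropy(discriminator(labeled_x), labels)
    # Unlabeled real images: should fall into any of the n real classes, i.e. not "fake".
    p_fake = F.softmax(discriminator(unlabeled_x), dim=1)[:, NUM_CLASSES]
    unsupervised_real = -torch.log(1.0 - p_fake + 1e-8).mean()
    # Generated images: should be assigned to the extra fake class.
    fake_targets = torch.full((fake_x.size(0),), NUM_CLASSES, dtype=torch.long)
    unsupervised_fake = F.cross_entropy(discriminator(fake_x), fake_targets)
    return supervised + unsupervised_real + unsupervised_fake
```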
In his invited talk (video available here), Tuomas Sandholm—whose AI Libratus was the first AI to beat top Heads-Up No-Limit Texas Hold'em professionals in January 2017—discussed new results for solving imperfect-information games. He stressed that only game-theoretically sound techniques yield strategies that are robust against all opponents in imperfect-information games. Other advantages of a game-theoretic approach are a) that even if humans have access to the entire history of plays of the AI, they still can't find holes in its strategy; and b) it requires no data, just the rules of the game.
Most real-world applications are imperfect-information games
For solving such games, the quality of the solution depends on the quality of the abstraction. Developing better abstractions is thus important, which also applies to modelling such games. In imperfect-information games, planning is important. In real-time planning, we must consider how the opponent can adapt to changes in the policy. In contrast to perfect-information games, states do not have well-defined values.
There were several papers that incorporated different inductive biases into existing models:
Document Informed Neural Autoregressive Topic Models with Distributional Prior: An extension of the DocNADE topic model using word embedding vectors as prior. The model is evaluated on 15 datasets.
Syntax-aware Neural Semantic Role Labeling: The authors incorporate various syntax features into a semantic role labelling model. In contrast to common practice, which often tries to incorporate syntax via a TreeLSTM, they find that shortest dependency path and tree position features perform best.
Relation Structure-Aware Heterogeneous Information Network Embedding: A network embedding model that treats different relations differently: For affiliation relations ("papers are published in conferences") Euclidean distance is used, while for interaction relations ("authors write papers") a translation-based distance is used.
Gaussian Transformer: a Lightweight Approach for Natural Language Inference: A Transformer with a Gaussian prior for the self-attention that encourages focusing on neighbouring tokens (a rough sketch of this idea appears after this list).
Gradient-based Inference for Networks with Output Constraints: A method to incorporate output constraints, e.g. matching number of brackets for syntactic parsing, agreement with parse spans for SRL, etc. into the model via gradient-based inference at test-time. The method is extensively evaluated and also performs well on out-of-domain data.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning: A collection of 300k textual descriptions focusing on if-then relations with variables. Multi-task models that exploit the hierarchical structure of the data perform better.
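As promised above, here is a rough sketch of the Gaussian-prior idea from the Gaussian Transformer entry, in which a distance-dependent penalty is added to the attention logits so that nearby tokens receive more weight (general idea only; the paper's exact parameterisation may differ):

```python
import math
import torch

def gaussian_biased_attention(q, k, v, sigma=1.0):
    """Single-head scaled dot-product attention with a Gaussian locality prior.

    q, k, v: tensors of shape (seq_len, d); sigma controls how local the prior is.
    """
    seq_len, d = q.shape
    scores = q @ k.T / math.sqrt(d)
    pos = torch.arange(seq_len, dtype=torch.float)
    # Penalty grows with the squared distance between query and key positions.
    dist_sq = (pos[:, None] - pos[None, :]) ** 2
    scores = scores - dist_sq / (2 * sigma ** 2)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v
```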
Papers on transfer learning ranged from multi-task learning and semi-supervised learning to sequential and zero-shot transfer:
Transfer Learning for Sequence Labeling using Source Model and Target Data: Extension of fine-tuning techniques for NER for the case where the target task includes labels from the source domain (as well as new labels). 1) Output layer is extended with embeddings for new labels. 2) A BiLSTM takes the features of the source model as input and feeds its output to the target model.
A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks: A hierarchical model that jointly learns coreference resolution, relation extraction, entity mention detection, and NER. It achieves state of the art on 3/4 tasks. (Disclaimer: I'm a co-author of this paper.)
Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function: A combination of entropy minimization, adversarial and virtual adversarial training with a simple 1-layer BiLSTM achieves state-of-the-art results on multiple text classification datasets.
Zero-shot Neural Transfer for Cross-lingual Entity Linking: A cross-lingual entity linking model that trains a character-based entity similarity encoder on a bilingual lexicon of entities. Conceptually similar to cross-lingual word embedding models. For languages that do not share the same script, words are transcribed to phonemes.
Zero-Shot Adaptive Transfer for Conversational Language Understanding: A model that performs zero-shot slot tagging by embedding the slot description and fine-tuning a pretrained model on the target domain.
Unsupervised Transfer learning for Spoken Language Understanding in Intelligent Agents: A more light-weight ELMo model that pretrains a shared BiLSTM layer for intent classification and entity tagging and fine-tunes it with ULMFiT techniques.
Latent Multi-task Architecture Learning: A multi-task learning architecture that enables more flexible parameter sharing between tasks and generalizes existing transfer and multi-task learning architectures. (Disclaimer: I'm a co-author of this paper.)
GIRNet: Interleaved Multi-Task Recurrent State Sequence Models: A multi-task learning model that leverages the output from auxiliary models based on position-dependent gates. The model is applied to sentiment analysis and POS tagging of code-switched data and target-dependent sentiment analysis.
A Generalized Language Model in Tensor Space: A higher-order language model that builds a representation based on the tensor product of word vectors. The model achieves strong results on PTB and WikiText.
Naturally there were also a number of papers that provided new methods for learning word embeddings:
Unsupervised Post-processing of Word Vectors via Conceptor Negation: A post-processing method that uses conceptors (a linear transformation) to dampen directions where a word vector has high variances. Post-processed embeddings not only improve on word similarity, but also on dialogue state tracking.
Enriching Word Embeddings with a Regressor Instead of Labeled Corpora: A method that enriches word embeddings during training with sentiment information based on a regressor trained on valence information from a sentiment lexicon. The enriched embeddings improve performance on sentiment and non-sentiment tasks.
Learning Semantic Representations for Novel Words: Leveraging Both Form and Context: A model that learns representations for novel words both from the surface form and the context—in contrast to previous models that only leverage one of the sources.
Finally, here are some papers that I enjoyed that do not fit into any of the above categories:
What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models: A supervised method to extract relevant neurons with regard to a task (by correlating neurons with the target property) and an unsupervised method to extract salient neurons with regard to the model (by correlating neurons across models). Techniques are evaluated on NMT and language modelling.
What Should I Learn First: Introducing LectureBank for NLP Education and Prerequisite Chain Learning: A dataset containing 1,352 NLP lecture files classified according to a taxonomy with 208 prerequisite relation topics. A model is trained to learn prerequisite relations to answer "what should one learn first".
Cover image: AAAI-19 Opening Reception
http://ruder.io/4-biggest-open-problems-in-nlp/ (15 Jan 2019)
This post discusses 4 major open problems in NLP based on an expert survey and a panel discussion at the Deep Learning Indaba.
This is the second blog post in a two-part series. The series expands on the Frontiers of Natural Language Processing session organized by Herman Kamper, Stephan Gouws, and me at the Deep Learning Indaba 2018. Slides of the entire session can be found here. The first post discussed major recent advances in NLP focusing on neural network-based methods. This post discusses major open problems in NLP. You can find a recording of the panel discussion this post is based on here.
In the weeks leading up to the Indaba, we asked NLP experts a number of simple but big questions. Based on the responses, we identified the four problems that were mentioned most often:
NLP for low-resource scenarios
Reasoning about large or multiple documents
Datasets, problems, and evaluation
We discussed these problems during a panel discussion. This article is mostly based on the responses from our experts (which are well worth reading) and thoughts of my fellow panel members Jade Abbott, Stephan Gouws, Omoju Miller, and Bernardt Duvenhage. I will aim to provide context around some of the arguments, for anyone interested in learning more.
I think the biggest open problems are all related to natural language
understanding. [...] we should develop systems that read and
understand text the way a person does, by forming a representation of
the world of the text, with the agents, objects, settings, and the
relationships, goals, desires, and beliefs of the agents, and everything else
that humans create to understand a piece of text. Until we can do that, all of our progress is in improving our systems' ability to do pattern matching.
– Kevin Gimpel
Many experts in our survey argued that the problem of natural language understanding (NLU) is central as it is a prerequisite for many tasks such as natural language generation (NLG). The consensus was that none of our current models exhibit 'real' understanding of natural language.
Innate biases vs. learning from scratch A key question is what biases and structure should we build explicitly into our models to get closer to NLU. Similar ideas were discussed at the Generalization workshop at NAACL 2018, which Ana Marasovic reviewed for The Gradient and I reviewed here. Many responses in our survey mentioned that models should incorporate common sense. In addition, dialogue systems (and chat bots) were mentioned several times.
On the other hand, for reinforcement learning, David Silver argued that you would ultimately want the model to learn everything by itself, including the algorithm, features, and predictions. Many of our experts took the opposite view, arguing that you should actually build in some understanding in your model. What should be learned and what should be hard-wired into the model was also explored in the debate between Yann LeCun and Christopher Manning in February 2018.
Program synthesis Omoju argued that incorporating understanding is difficult as long as we do not understand the mechanisms that actually underlie NLU and how to evaluate them. She argued that we might want to take ideas from program synthesis and automatically learn programs based on high-level specifications instead. Ideas like this are related to neural module networks and neural programmer-interpreters.
She also suggested we should look back to approaches and frameworks that were originally developed in the 80s and 90s, such as FrameNet and merge these with statistical approaches. This should help us infer common sense-properties of objects, such as whether a car is a vehicle, has handles, etc. Inferring such common sense knowledge has also been a focus of recent datasets in NLP.
Embodied learning Stephan argued that we should use the information in available structured sources and knowledge bases such as Wikidata. He noted that humans learn language through experience and interaction, by being embodied in an environment. One could argue that there exists a single learning algorithm that, if used with an agent embedded in a sufficiently rich environment with an appropriate reward structure, could learn NLU from the ground up. However, the compute for such an environment would be tremendous. For comparison, AlphaGo required a huge infrastructure to solve a well-defined board game. The creation of a general-purpose algorithm that can continue to learn is related to lifelong learning and to general problem solvers.
While many people think that we are headed in the direction of embodied learning, we should thus not underestimate the infrastructure and compute that would be required for a full embodied agent. In light of this, waiting for a full-fledged embodied agent to learn language seems ill-advised. However, we can take steps that will bring us closer to this extreme, such as grounded language learning in simulated environments, incorporating interaction, or leveraging multimodal data.
Emotion Towards the end of the session, Omoju argued that it will be very difficult to incorporate a human element relating to emotion into embodied agents. Emotion, however, is very relevant to a deeper understanding of language. On the other hand, we might not need agents that actually possess human emotions. Stephan stated that the Turing test, after all, is defined as mimicry and sociopaths—while having no emotions—can fool people into thinking they do. We should thus be able to find solutions that do not need to be embodied and do not have emotions, but understand the emotions of people and help us solve our problems. Indeed, sensor-based emotion recognition systems have continuously improved—and we have also seen improvements in textual emotion detection systems.
Cognitive and neuroscience An audience member asked how much knowledge of neuroscience and cognitive science are we leveraging and building into our models. Knowledge of neuroscience and cognitive science can be great for inspiration and used as a guideline to shape your thinking. As an example, several models have sought to imitate humans' ability to think fast and slow. AI and neuroscience are complementary in many directions, as Surya Ganguli illustrates in this post.
Omoju recommended to take inspiration from theories of cognitive science, such as the cognitive development theories by Piaget and Vygotsky. She also urged everyone to pursue interdisciplinary work. This sentiment was echoed by other experts. For instance, Felix Hill recommended to go to cognitive science conferences.
Dealing with low-data settings (low-resource languages, dialects (including social media text "dialects"), domains, etc.). This is not a completely "open" problem in that there are already a lot of promising ideas out there; but we still don't have a universal solution to this universal problem.
– Karen Livescu
The second topic we explored was generalisation beyond the training data in low-resource scenarios. Given the setting of the Indaba, a natural focus was low-resource languages. The first question focused on whether it is necessary to develop specialised NLP tools for specific languages, or it is enough to work on general NLP.
Universal language model Bernardt argued that there are universal commonalities between languages that could be exploited by a universal language model. The challenge then is to obtain enough data and compute to train such a language model. This is closely related to recent efforts to train a cross-lingual Transformer language model and cross-lingual sentence embeddings.
Cross-lingual representations Stephan remarked that not enough people are working on low-resource languages. There are 1,250-2,100 languages in Africa alone, most of which have received scarce attention from the NLP community. The question of specialized tools also depends on the NLP task that is being tackled. The main issue with current models is sample efficiency. Cross-lingual word embeddings are sample-efficient as they only require word translation pairs or even only monolingual data. They align word embedding spaces sufficiently well to do coarse-grained tasks like topic classification, but don't allow for more fine-grained tasks such as machine translation. Recent efforts nevertheless show that these embeddings form an important building block for unsupervised machine translation.
More complex models for higher-level tasks such as question answering, on the other hand, require thousands of training examples for learning. Transferring tasks that require actual natural language understanding from high-resource to low-resource languages is still very challenging. With the development of cross-lingual datasets for such tasks, such as XNLI, the development of strong cross-lingual models for more reasoning tasks should hopefully become easier.
Benefits and impact Another question enquired—given that there is inherently only small amounts of text available for under-resourced languages—whether the benefits of NLP in such settings will also be limited. Stephan vehemently disagreed, reminding us that as ML and NLP practitioners, we typically tend to view problems in an information theoretic way, e.g. as maximizing the likelihood of our data or improving a benchmark. Taking a step back, the actual reason we work on NLP problems is to build systems that break down barriers. We want to build models that enable people to read news that was not written in their language, ask questions about their health when they don't have access to a doctor, etc.
Given the potential impact, building systems for low-resource languages is in fact one of the most important areas to work on. While one low-resource language may not have a lot of data, there is a long tail of low-resource languages; most people on this planet in fact speak a language that is in the low-resource regime. We thus really need to find a way to get our systems to work in this setting.
Jade opined that it is almost ironic that as a community we have been focusing on languages with a lot of data as these are the languages that are well taught around the world. The languages we should really focus on are the low-resource languages where not much data is available. The great thing about the Indaba is that people are working and making progress on such low-resource languages. Given the scarcity of data, even simple systems such as bag-of-words will have a large real-world impact. Etienne Barnard, one of the audience members, noted that he observed a different effect in real-world speech processing: Users were often more motivated to use a system in English if it works for their dialect compared to using a system in their own language.
Incentives and skills Another audience member remarked that people are incentivized to work on highly visible benchmarks, such as English-to-German machine translation, but incentives are missing for working on low-resource languages. Stephan suggested that incentives exist in the form of unsolved problems. However, skills are not available in the right demographics to address these problems. What we should focus on is to teach skills like machine translation in order to empower people to solve these problems. Academic progress unfortunately doesn't necessarily relate to low-resource languages. However, if cross-lingual benchmarks become more pervasive, then this should also lead to more progress on low-resource languages.
Data availability Jade finally argued that a big issue is that there are no datasets available for low-resource languages, such as languages spoken in Africa. If we create datasets and make them easily available, such as hosting them on openAFRICA, that would incentivize people and lower the barrier to entry. It is often sufficient to make available test data in multiple languages, as this will allow us to evaluate cross-lingual models and track progress. Another data source is the South African Centre for Digital Language Resources (SADiLaR), which provides resources for many of the languages spoken in South Africa.
Representing large contexts efficiently. Our current models are mostly based on recurrent neural networks, which cannot represent longer contexts well. [...] The stream of work on graph-inspired RNNs is potentially promising, though has only seen modest improvements and has not been widely adopted due to them being much less straight-forward to train than a vanilla RNN.
– Isabelle Augenstein
Another big open problem is reasoning about large or multiple documents. The recent NarrativeQA dataset is a good example of a benchmark for this setting. Reasoning with large contexts is closely related to NLU and requires scaling up our current systems dramatically, until they can read entire books and movie scripts. A key question here—that we did not have time to discuss during the session—is whether we need better models or just train on more data.
Endeavours such as OpenAI Five show that current models can do a lot if they are scaled up to work with a lot more data and a lot more compute. With sufficient amounts of data, our current models might similarly do better with larger contexts. The problem is that supervision with large documents is scarce and expensive to obtain. Similar to language modelling and skip-thoughts, we could imagine a document-level unsupervised task that requires predicting the next paragraph or chapter of a book or deciding which chapter comes next. However, this objective is likely too sample-inefficient to enable learning of useful representations.
A more useful direction thus seems to be to develop methods that can represent context more effectively and are better able to keep track of relevant information while reading a document. Multi-document summarization and multi-document question answering are steps in this direction. Similarly, we can build on language models with improved memory and lifelong learning capabilities.
Perhaps the biggest problem is to properly define the problems themselves. And by properly defining a problem, I mean building datasets and evaluation procedures that are appropriate to measure our progress towards concrete goals. Things would be easier if we could reduce everything to Kaggle style competitions!
– Mikel Artetxe
We did not have much time to discuss problems with our current benchmarks and evaluation settings but you will find many relevant responses in our survey. The final question asked what the most important NLP problems are that should be tackled for societies in Africa. Jade replied that the most important issue is to solve the low-resource problem. Particularly being able to use translation in education to enable people to access whatever they want to know in their own language is tremendously important.
The session concluded with general advice from our experts on other questions that we had asked them, such as "What, if anything, has led the field in the wrong direction?" and "What advice would you give a postgraduate student in NLP starting their project now?" You can find responses to all questions in the survey.
Deep Learning Indaba 2019
If you are interested in working on low-resource languages, consider attending the Deep Learning Indaba 2019, which takes place in Nairobi, Kenya from 25-31 August 2019.
Credit: Title image text is from the NarrativeQA dataset. The image is from the slides of the NLP session.
http://ruder.io/10-exciting-ideas-of-2018-in-nlp/ (19 Dec 2018)
This post gathers 10 ideas that I found exciting and impactful this year—and that we'll likely see more of in the future.
For each idea, I will highlight 1-2 papers that execute them well. I tried to keep the list succinct, so apologies if I did not cover all relevant work. The list is necessarily subjective and covers ideas mainly related to transfer learning and generalization. Most of these (with some exceptions) are not trends (but I suspect that some might become more 'trendy' in 2019). Finally, I would love to read about your highlights in the comments or see highlights posts about other areas.
1) Unsupervised MT
There were two unsupervised MT papers at ICLR 2018. They were surprising in that they worked at all, but results were still low compared to supervised systems. At EMNLP 2018, unsupervised MT hit its stride with two papers from the same two groups that significantly improve upon their previous methods. My highlight:
Phrase-Based & Neural Unsupervised Machine Translation (EMNLP 2018): The paper does a nice job in distilling the three key requirements for unsupervised MT: a good initialization, language modelling, and modelling the inverse task (via back-translation). All three are also beneficial in other unsupervised scenarios, as we will see below. Modelling the inverse task enforces cyclical consistency, which has been employed in different approaches—most prominently in CycleGAN. The paper performs extensive experiments and evaluates even on two low-resource language pairs, English-Urdu and English-Romanian. We will hopefully see more work on low-resource languages in the future.
Toy illustration of the three principles of unsupervised MT. A) Two monolingual datasets. B) Initialization. C) Language modelling. D) Back-translation (Lample et al., 2018).
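To make the interplay of these principles more tangible, here is a toy sketch of one training step (the `model` interface with `translate` and `train_step`, the language tags, and the `noise` function are all placeholders; the good initialization, e.g. from cross-lingual embeddings or joint subword vocabularies, is assumed to have happened before training):

```python
def unsupervised_mt_step(model, batch_src, batch_tgt, noise):
    """One step of the denoising + back-translation recipe (illustrative only).

    `model` is assumed to be a shared multilingual seq2seq exposing
    translate(batch, src_lang, tgt_lang) and
    train_step(inputs, targets, src_lang, tgt_lang);
    `noise` corrupts a sentence (word dropout, local shuffling).
    """
    # 1) Language modelling via denoising auto-encoding in each language:
    #    reconstruct the original sentence from its corrupted version.
    model.train_step(noise(batch_src), batch_src, "src", "src")
    model.train_step(noise(batch_tgt), batch_tgt, "tgt", "tgt")

    # 2) Back-translation (modelling the inverse task): translate monolingual
    #    data with the current model, then learn to map the synthetic
    #    translation back to the original sentence.
    synthetic_tgt = model.translate(batch_src, "src", "tgt")
    synthetic_src = model.translate(batch_tgt, "tgt", "src")
    model.train_step(synthetic_tgt, batch_src, "tgt", "src")
    model.train_step(synthetic_src, batch_tgt, "src", "tgt")
```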
2) Pretrained language models
Using pretrained language models is probably the most significant NLP trend this year, so I won't spend much time on it here. There has been a slew of memorable approaches: ELMo, ULMFiT, OpenAI Transformer, and BERT. My highlight:
Deep contextualized word representations (NAACL-HLT 2018): The paper that introduced ELMo has been much lauded. Besides the impressive empirical results, where it shines is the careful analysis section that teases out the impact of various factors and analyses the information captured in the representations. The word sense disambiguation (WSD) analysis by itself (below on the left) is well executed. Both demonstrate that a LM on its own provides WSD and POS tagging performance close to the state-of-the-art.
Word sense disambiguation (left) and POS tagging (right) results of first and second layer bidirectional language model compared to baselines (Peters et al., 2018).
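As a quick reminder of how such representations are consumed downstream: the task-specific ELMo vector is a learned, softmax-normalised weighted sum of the biLM layers, scaled by a scalar. A toy re-implementation of that weighting (layer count and dimensions are arbitrary here):

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighting of the biLM layers, as in the ELMo formulation."""
    def __init__(self, num_layers):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_outputs):
        # layer_outputs: (num_layers, seq_len, dim) hidden states of the biLM
        weights = torch.softmax(self.layer_weights, dim=0)
        mixed = (weights.view(-1, 1, 1) * layer_outputs).sum(dim=0)
        return self.gamma * mixed

# Toy usage: 3 layers, 7 tokens, 1024-dimensional states
elmo_vectors = ScalarMix(num_layers=3)(torch.randn(3, 7, 1024))
```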
3) Common sense inference datasets
Incorporating common sense into our models is one of the most important directions moving forward. However, creating good datasets is not easy and even popular ones show large biases. This year, there have been some well-executed datasets that seek to teach models some common sense such as Event2Mind and SWAG, both from the University of Washington. SWAG was solved unexpectedly quickly. My highlight:
Visual Commonsense Reasoning (arXiv 2018): This is the first visual QA dataset that includes a rationale (an explanation) with each answer. In addition, questions require complex reasoning. The creators go to great lengths to address possible bias by ensuring that every answer's prior probability of being correct is 25% (every answer appears 4 times in the entire dataset, 3 times as an incorrect answer and 1 time as the correct answer); this requires solving a constrained optimization problem using models that compute relevance and similarity. Hopefully preventing possible bias will become a common component when creating datasets. Finally, just look at the gorgeous presentation of the data 👇.
VCR: Given an image, a list of regions, and a question, a model must answer the question and provide a rationale explaining why its answer is right (Zellers et al., 2018).
4) Meta-learning
Meta-learning has seen much use in few-shot learning, reinforcement learning, and robotics—the most prominent example: model-agnostic meta-learning (MAML)—but successful applications in NLP have been rare. Meta-learning is most useful for problems with a limited number of training examples. My highlight:
Meta-Learning for Low-Resource Neural Machine Translation (EMNLP 2018): The authors use MAML to learn a good initialization for translation, treating each language pair as a separate meta-task. Adapting to low-resource languages is probably the most useful setting for meta-learning in NLP. In particular, combining multilingual transfer learning (such as multilingual BERT), unsupervised learning, and meta-learning is a promising direction.
The difference between transfer learning, multilingual transfer learning, and meta-learning. Solid lines: learning of the initialization. Dashed lines: Path of fine-tuning (Gu et al., 2018).
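For readers unfamiliar with MAML, the following is a stripped-down, first-order sketch of the meta-update (toy code, not the authors' implementation; in the NMT setting each "task" would be a language pair and the model a full translation system rather than the linear toy model used here):

```python
import copy
import torch
import torch.nn as nn

def first_order_maml_step(model, tasks, loss_fn, inner_lr=1e-2, meta_lr=1e-3):
    """One first-order MAML meta-update over a batch of tasks.

    Each task is ((x_support, y_support), (x_query, y_query)).
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]

    for (x_s, y_s), (x_q, y_q) in tasks:
        learner = copy.deepcopy(model)                    # task-specific copy
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)

        # Inner loop: adapt the copy on the task's support set.
        inner_opt.zero_grad()
        loss_fn(learner(x_s), y_s).backward()
        inner_opt.step()

        # Outer objective: loss of the adapted learner on the query set.
        query_loss = loss_fn(learner(x_q), y_q)
        grads = torch.autograd.grad(query_loss, learner.parameters())
        for acc, g in zip(meta_grads, grads):
            acc += g                                      # first-order approximation

    # Meta-update: move the shared initialization with the averaged gradient.
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            p -= meta_lr * g / len(tasks)

# Toy usage with a linear "model" and two synthetic tasks
model = nn.Linear(4, 2)
make_task = lambda: ((torch.randn(8, 4), torch.randint(0, 2, (8,))),
                     (torch.randn(8, 4), torch.randint(0, 2, (8,))))
first_order_maml_step(model, [make_task(), make_task()],
                      nn.functional.cross_entropy)
```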
5) Robust unsupervised methods
This year, we and others have observed that unsupervised cross-lingual word embedding methods break down when languages are dissimilar. This is a common phenomenon in transfer learning where a discrepancy between source and target settings (e.g. domains in domain adaptation, tasks in continual learning and multi-task learning) leads to deterioration or failure of the model. Making models more robust to such changes is thus important. My highlight:
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings (ACL 2018): Instead of meta-learning an initialization, the authors use their understanding of the problem to craft a better one. In particular, they pair words across the two languages whose distributions of similarity scores to the other words of their own language look alike. This is a great example of using domain expertise and insights from an analysis to make a model more robust.
The similarity distributions of three words. Equivalent translations ('two' and 'due') have more similar distributions than non-related words ('two' and 'cane'—meaning 'dog'; Artetxe et al., 2018).
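The core of that initialization can be sketched in a few lines of numpy (a simplified toy version; the real method additionally normalises the signatures, uses CSLS-style retrieval, and wraps everything in an iterative self-learning loop):

```python
import numpy as np

def similarity_signature(emb):
    """Each word's sorted distribution of similarities to all other words
    in its *own* language; sorted rows are comparable across languages."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return np.sort(emb @ emb.T, axis=1)

def initial_dictionary(src_emb, tgt_emb):
    """Pair each source word with the target word whose similarity signature
    is closest (toy version; assumes equally sized vocabularies)."""
    src_sig = similarity_signature(src_emb)
    tgt_sig = similarity_signature(tgt_emb)
    dists = ((src_sig[:, None, :] - tgt_sig[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)   # best target index for every source word

# Toy usage with random 100-word "vocabularies"
src, tgt = np.random.randn(100, 50), np.random.randn(100, 50)
seed_pairs = initial_dictionary(src, tgt)
```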
6) Understanding representations
There have been a lot of efforts in better understanding representations. In particular, 'diagnostic classifiers' (tasks that aim to measure if learned representations can predict certain attributes) have become quite common. My highlight:
Dissecting Contextual Word Embeddings: Architecture and Representation (EMNLP 2018): This paper does a great job of better understanding pretrained language model representations. They extensively study learned word and span representations on carefully designed unsupervised and supervised tasks. The resulting finding: Pretrained representations learn tasks related to low-level morphological and syntactic tasks at lower layers and longer range semantics at higher layers. To me this really shows that pretrained language models indeed capture similar properties as computer vision models pretrained on ImageNet.
Per-layer performance of BiLSTM and Transformer pretrained representations on (from left to right) POS tagging, constituency parsing, and unsupervised coreference resolution (Peters et al., 2018).
7) Clever auxiliary tasks
In many settings, we have seen an increasing usage of multi-task learning with carefully chosen auxiliary tasks. For a good auxiliary task, data must be easily accessible. One of the most prominent examples is BERT, which uses next-sentence prediction (that has been used in Skip-thoughts and more recently in Quick-thoughts) to great effect. My highlights:
Syntactic Scaffolds for Semantic Structures (EMNLP 2018): This paper proposes an auxiliary task that pretrains span representations by predicting for each span the corresponding syntactic constituent type. Despite being conceptually simple, the auxiliary task leads to large improvements on span-level prediction tasks such as semantic role labelling and coreference resolution. This paper shows that specialised representations learned at the level required by the target task (here: spans) are immensely beneficial.
pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference (arXiv 2018): In a similar vein, this paper pretrains word pair representations by maximizing the pointwise mutual information of pairs of words with their context. This encourages the model to learn more meaningful representations of word pairs than with more general objectives, such as language modelling. The pretrained representations are effective in tasks such as SQuAD and MultiNLI that require cross-sentence inference. We can expect to see more pretraining tasks that capture properties particularly suited to certain downstream tasks and are complementary to more general-purpose tasks like language modelling.
Syntactic, PropBank and coreference annotations from OntoNotes. PropBank SRL arguments and coreference mentions are annotated on top of syntactic constituents. Almost every argument is related to a syntactic constituent (Swayamdipta et al., 2018).
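In its most generic form, such an auxiliary objective is just an extra classification head and a weighted extra loss term (hedged sketch; the span representations, label set, and weighting are illustrative placeholders rather than the authors' setup):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaffoldHead(nn.Module):
    """Auxiliary classifier predicting a constituent type (or 'none')
    for each candidate span representation."""
    def __init__(self, span_dim, num_constituent_types):
        super().__init__()
        self.classifier = nn.Linear(span_dim, num_constituent_types)

    def forward(self, span_reps):
        return self.classifier(span_reps)

def scaffolded_loss(main_loss, scaffold, span_reps, constituent_labels, lam=0.5):
    # Multi-task objective: target-task loss plus the weighted scaffold loss.
    aux_loss = F.cross_entropy(scaffold(span_reps), constituent_labels)
    return main_loss + lam * aux_loss

# Toy usage: 20 candidate spans of dimension 128, 8 constituent types
scaffold = ScaffoldHead(span_dim=128, num_constituent_types=8)
loss = scaffolded_loss(torch.tensor(1.3), scaffold,
                       torch.randn(20, 128), torch.randint(0, 8, (20,)))
```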
8) Combining semi-supervised learning with transfer learning
With the recent advances in transfer learning, we should not forget more explicit ways of using target task-specific data. In fact, pretrained representations are complementary with many forms of semi-supervised learning. We have explored self-labelling approaches, a particular category of semi-supervised learning. My highlight:
Semi-Supervised Sequence Modeling with Cross-View Training (EMNLP 2018): This paper shows that a conceptually very simple idea, making sure that the predictions on different views of the input agree with the prediction of the main model, can lead to gains on a diverse set of tasks. The idea is similar to word dropout but allows leveraging unlabelled data to make the model more robust. Compared to other self-ensembling models such as mean teacher, it is specifically designed for particular NLP tasks. With much work on implicit semi-supervised learning, we will hopefully see more work that explicitly tries to model the target predictions going forward.
Inputs seen by auxiliary prediction modules: Auxiliary 1: They traveled to __________________. Auxiliary 2: They traveled to Washington _______. Auxiliary 3: _____________ Washington by plane. Auxiliary 4: ________________________ by plane (Clark et al., 2018).
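The core of cross-view training on unlabelled data is a simple agreement loss (toy sketch with hypothetical names; the full method uses sequence-labelling modules and specific input views such as those shown above): the full-view module acts as a teacher and each restricted-view module is pushed towards its predictions.

```python
import torch
import torch.nn.functional as F

def cvt_consistency_loss(main_logits, aux_logits_list):
    """Match restricted-view predictions to the full-view model's predictions.

    main_logits: (batch, num_classes) from the primary module; treated as a
    fixed teacher, so gradients are not propagated through it.
    aux_logits_list: list of (batch, num_classes) outputs from auxiliary
    modules that only see part of the input.
    """
    teacher = F.softmax(main_logits.detach(), dim=-1)
    loss = 0.0
    for aux_logits in aux_logits_list:
        student_log_probs = F.log_softmax(aux_logits, dim=-1)
        loss = loss + F.kl_div(student_log_probs, teacher, reduction="batchmean")
    return loss / len(aux_logits_list)

# Toy usage: one teacher output and two restricted views
loss = cvt_consistency_loss(torch.randn(16, 5),
                            [torch.randn(16, 5), torch.randn(16, 5)])
```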
9) QA and reasoning with large documents
There have been a lot of developments in question answering (QA), with an array of new QA datasets. Besides conversational QA and performing multi-step reasoning, the most challenging aspect of QA is to synthesize narratives and large bodies of information. My highlight:
The NarrativeQA Reading Comprehension Challenge (TACL 2018): This paper proposes a challenging new QA dataset based on answering questions about entire movie scripts and books. While this task is still out of reach for current methods, models are provided the option of using a summary (rather than the entire book) as context, of selecting the answer (rather than generate it), and of using the output from an IR model. These variants make the task more feasible and enable models to gradually scale up to the full setting. We need more datasets like this that present ambitious problems, but still manage to make them accessible.
Comparison of QA datasets (Kočiský et al., 2018).
10) Inductive bias
Inductive biases such as convolutions in a CNN, regularization, dropout, and other mechanisms are core parts of neural network models that act as a regularizer and make models more sample-efficient. However, coming up with a broadly useful inductive bias and incorporating it into a model is challenging. My highlights:
Sequence classification with human attention (CoNLL 2018): This paper proposes to use human attention from eye-tracking corpora to regularize attention in RNNs. Given that many current models such as Transformers use attention, finding ways to train it more efficiently is an important direction. It is also great to see another example that human language learning can help improve our computational models.
Linguistically-Informed Self-Attention for Semantic Role Labeling (EMNLP 2018): This paper has a lot to like: a Transformer trained jointly on both syntactic and semantic tasks; the ability to inject high-quality parses at test time; and out-of-domain evaluation. It also regularizes the Transformer's multi-head attention to be more sensitive to syntax by training one attention head to attend to the syntactic parents of each token. We will likely see more examples of Transformer attention heads used as auxiliary predictors focusing on particular aspects of the input.
10 years of PropBank semantic role labeling. Comparison of Linguistically-Informed Self-Attention (LISA) with other methods on out-of-domain data (Strubell et al., 2018).
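Both papers boil down to supervising attention with an extra loss term. One way to sketch this (hypothetical shapes and names; LISA uses gold syntactic parents as hard targets, while for human gaze a soft target distribution and a KL term would be the natural variant) is a cross-entropy between one head's attention distribution and a target position for each token:

```python
import torch
import torch.nn.functional as F

def attention_supervision_loss(attn_probs, parent_positions):
    """Cross-entropy between one attention head and gold target positions.

    attn_probs: (batch, seq_len, seq_len), rows summing to 1, the attention
    of a single designated head.
    parent_positions: (batch, seq_len), index of e.g. each token's syntactic
    parent. Added to the main task loss as an auxiliary term.
    """
    log_attn = torch.log(attn_probs + 1e-9)
    return F.nll_loss(log_attn.flatten(0, 1), parent_positions.flatten())

# Toy usage: batch of 2 sentences with 6 tokens each
attn = torch.softmax(torch.randn(2, 6, 6), dim=-1)
loss = attention_supervision_loss(attn, torch.randint(0, 6, (2, 6)))
```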
http://ruder.io/emnlp-2018-highlights/ (6 Nov 2018)
The post discusses highlights of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018).
This post originally appeared at the AYLIEN blog.
You can find past highlights of conferences here. You can find all 549 accepted papers in the EMNLP proceedings. In this review, I will focus on papers that relate to the following topics:
Inductive bias
Cross-lingual learning
Latent variable models
The inductive bias of a machine learning algorithm is the set of assumptions that the model makes in order to generalize to new inputs. For instance, the inductive bias obtained through multi-task learning encourages the model to prefer hypotheses (sets of parameters) that explain more than one task.
Inductive bias was the main theme during Max Welling's keynote at CoNLL 2018. The two key takeaways from his talk are:
Lesson 1: If there is symmetry in the input space, exploit it.
The most canonical example for exploiting such symmetry are convolutional neural networks, which are translation invariant. Invariance in general means that an object is recognized as an object even if its appearance varies in some way. Group equivariant convolutional networks and Steerable CNNs similarly are rotation invariant (see below). Given the success of CNNs in computer vision, it is a compelling research direction to think of what types of invariance are possible in language and how these can be implemented in neural networks.
Translation and rotation invariance in computer vision (Source: Matt Krause)
Lesson 2: When you know the generative process, you should exploit it.
For many problems the generative process is known but the inverse process of reconstructing the original input is not. Examples of such inverse problems are MRI, image denoising and super-resolution, but also audio-to-speech decoding and machine translation. The Recurrent Inference Machine (RIM) uses an RNN to iteratively generate an incremental update to the input until a sufficiently good estimate of the true signal has been reached, which can be seen for MRI below. This can be seen as similar to producing text via editing, rewriting, and iterative refining.
Inference process of an RIM for MRI (left: generated image; middle: reference; right: error; Source: CIFAR)
A popular way to obtain certain types of invariance in current NLP approaches is via adversarial examples. To this end, Alzantot et al. use a black-box population-based optimization algorithm to generate semantic and syntactic adversarial examples.
Minervini and Riedel propose to incorporate logic to generate adversarial examples. In particular, they use a combination of combinatorial optimization and a language model to generate examples that maximally violate such logic constraints for natural language inference.
Another form of inductive bias can be induced via regularization. In particular, Barrett et al. received the special paper award at CoNLL for showing that human attention provides a good inductive bias for attention in neural networks. The human attention is derived from eye-tracking corpora, which—importantly—can be disjoint from the training data.
For another beneficial inductive bias for attention, in one of the best papers of the conference, Strubell et al. encourage one attention head to attend to the syntactic parents for each token in multi-head attention. They additionally use multi-task learning and allow the injection of a syntactic parse at test time.
Many NLP tasks such as entailment and semantic similarity compute some sort of alignment between two sequences, but this alignment is either at the word or sentence level. Liu et al. propose to incorporate a structural bias by using structured alignments, which match spans in both sequences to each other.
Tree-based models have been popular in NLP and encode the bias that knowledge of syntax is beneficial. Shi et al. analyze a phenomenon that runs counter to this, which is that trivial trees with no syntactic information often achieve better results than syntactic trees. Their key insight is that in well-performing trees, crucial words are closer to the final representation, which helps in mitigating RNNs' sequential recency bias.
For aspect-based sentiment analysis, sentence representations are typically computed separately from aspect representations. Huang and Carley propose a nice way to condition the sentence representation on the aspect by using the aspect representation as the parameters of the filters or gates in a CNN. Allowing encoded representations to directly parameterize other parts of a neural network might be useful for other applications, too.
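A minimal sketch of that kind of conditioning (hypothetical shapes and names, not the authors' implementation): a small linear layer maps the aspect embedding to the gate that modulates the convolutional feature maps.

```python
import torch
import torch.nn as nn

class AspectGatedConv(nn.Module):
    """1D convolution whose output gate is parameterized by an aspect vector."""
    def __init__(self, emb_dim, aspect_dim, num_filters, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size, padding=1)
        self.aspect_to_gate = nn.Linear(aspect_dim, num_filters)

    def forward(self, token_embs, aspect_emb):
        # token_embs: (batch, seq_len, emb_dim); aspect_emb: (batch, aspect_dim)
        features = self.conv(token_embs.transpose(1, 2))       # (batch, filters, seq_len)
        gate = torch.sigmoid(self.aspect_to_gate(aspect_emb))  # (batch, filters)
        return features * gate.unsqueeze(-1)                   # aspect-dependent gating

# Toy usage
layer = AspectGatedConv(emb_dim=50, aspect_dim=20, num_filters=16)
out = layer(torch.randn(2, 7, 50), torch.randn(2, 20))
```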
There are roughly 6,500 languages spoken around the world. Despite this, the predominant focus of research is on English. This seems to be changing perceptibly, as more papers investigate cross-lingual settings.
In her CoNLL keynote, Asifa Majid gives an insightful overview of how culture and language can shape our internal representation of concepts. A common example of this is Scottish having 421 terms for snow. This phenomenon not only applies to our environment, but also to how we talk about ourselves and our bodies.
Languages vary surprisingly in the parts of the body they single out for naming. Variations in part of the lexicon can have knock-on effects for other parts.
If you ask speakers of different languages to color in different body parts in a picture, the body parts that are associated with each term depend on the language. In Dutch, the hand is often considered to be part of the term 'arm', whereas in Japanese, the arm is more clearly delimited. Indonesian, lacking an everyday term that corresponds to 'hand', associates 'arm' with both the hand and the arm as can be seen below.
Composite images for 'arm' in Dutch, Japanese, and Indonesian (Majid & van Staden, 2015)
The representations we obtain from language influence every form of perception. Hans Henning claimed that "olfactory (i.e. related to smell) abstraction is impossible". Most languages lack terms describing specific scents and odours. In contrast, the Jahai, a people of hunter-gatherers in Malaysia, have half a dozen terms for different qualities of smell, which allow them to identify smells much more precisely (Majid et al., 2018).
There was a surprising amount of work on cross-lingual word embeddings at the conference. Building on Asifa's talk, it will be interesting to incorporate insights from psycholinguistics into how we model words across languages and different cultures, as cross-lingual embeddings have mostly focused on word-to-word alignment and so far have not even considered polysemy.
For cross-lingual word embeddings, Kementchedjhieva et al. show that mapping the languages onto a third, latent space (the mean of the monolingual embedding spaces) rather than directly onto each other, makes it easier to learn an alignment. This approach also naturally enables the integration of supporting languages in low resource scenarios. (Note: I'm a co-author on this paper.)
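The flavour of the idea can be sketched with ordinary Procrustes solutions toward a shared mean space (simplified numpy code that assumes the two embedding matrices are already row-aligned by a seed dictionary; the actual method differs in its details):

```python
import numpy as np

def procrustes(a, b):
    """Orthogonal map W minimising ||a @ W - b||_F."""
    u, _, vt = np.linalg.svd(a.T @ b)
    return u @ vt

def map_to_mean(x, y, n_iters=10):
    """Map two row-aligned embedding spaces onto their latent mean space."""
    wx, wy = np.eye(x.shape[1]), np.eye(y.shape[1])
    for _ in range(n_iters):
        mean = (x @ wx + y @ wy) / 2      # current estimate of the third space
        wx = procrustes(x, mean)          # re-fit each language to the mean
        wy = procrustes(y, mean)
    return x @ wx, y @ wy

# Toy usage with two random 1,000 x 300 embedding matrices
x, y = np.random.randn(1000, 300), np.random.randn(1000, 300)
x_mapped, y_mapped = map_to_mean(x, y)
```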
With a similar goal in mind, Doval et al. propose to move each word vector towards the mean between its current representation and the representation of the translation in a separate refinement step.
Similar to using multilingual support, Chen and Cardie propose to jointly learn cross-lingual embeddings between multiple languages by modeling the relations between all language pairs.
Hartmann et al. analyze an observation of our ACL 2018 paper: Aligning embedding spaces induced with different algorithms does not work. They show, however, that a linear transformation still exists and hypothesize that the optimization problem of learning this transformation might be complicated by the algorithms' different inductive biases.
Not only word embedding spaces induced by different algorithms, but also word embedding spaces in different languages have different structures, especially for distant languages. Nakashole proposes to learn a transformation that is sensitive to the local neighborhood, which is particularly beneficial for distant languages.
For the same problem, Hoshen and Wolf propose to first align the second moment of the word distributions and then iteratively refine the alignment.
Alvarez-Melis and Jaakkola offer a different perspective on word-to-word translation by viewing it as an optimal transport problem. They use the Gromov-Wasserstein distance to measure similarities between pairs of words across languages.
Xu et al. instead propose to minimize the Sinkhorn distance between the source and target distributions.
Huang et al. go beyond word alignment with their approach. They introduce multiple cluster-level alignments and additionally enforce the clusters to be consistently distributed across multiple languages.
In one of the best papers of the conference, Lample et al. propose an unsupervised phrase-based machine translation model, which works particularly well for low-resource languages. On Urdu-English, it outperforms a supervised phrase-based model trained on 800,000 noisy and out-of-domain parallel sentences.
Artetxe et al. propose a similar phrase-based approach to unsupervised machine translation.
Besides cross-lingual word embeddings, there was naturally also work investigating and improving word embeddings, but this seemed to be a lot less pervasive than in past years.
Zhuang et al. propose to use second-order co-occurrence relations to train word embeddings via a newly designed metric.
Zhao et al. propose to learn word embeddings for out-of-vocabulary words by viewing words as bags of character n-grams.
Bosc and Vincent learn word embeddings by reconstructing dictionary definitions.
Zhao et al. learn gender-neutral word embeddings rather than removing the bias from trained embeddings. Their approach allocates certain dimensions of the embedding to gender information, while it keeps the remaining dimensions gender-neutral.
Latent variable models are slowly emerging as a useful tool to express a structural inductive bias and to model the linguistic structure of words and sentences.
Kim et al. provided an excellent tutorial of deep latent variable models. The slides can be found here.
In his talk at the BlackBox NLP workshop, Graham Neubig highlighted latent variables as a way to model the latent linguistic structure of text with neural networks. In particular, he discussed multi-space variational encoder-decoders and tree-structured variational auto-encoders, two semi-supervised learning models that leverage latent variables to take advantage of unlabeled data.
In our paper, we showed how cross-lingual embedding methods can be seen as latent variable models. We can use this insight to derive an EM algorithm and learn a better alignment between words.
Dou et al. similarly propose a latent variable model based on a variational auto-encoder for unsupervised bilingual lexicon induction.
In the model by Zhang et al., sentences are viewed as latent variables for summarization. Sentences with activated variables are extracted and directly used to infer gold summaries.
There were also papers that proposed methods for more general applications. Xu and Durrett propose to use a different distribution in variational auto-encoders that mitigates the common failure mode of a collapsing KL divergence.
Niculae et al. propose a new approach to build dynamic computation graphs with latent structure through sparsity.
Language models are becoming more commonplace in NLP and many papers investigated different architectures and properties of such models.
In an insightful paper, Peters et al. show that LSTMs, CNNs, and self-attention language models all learn high-quality representations. They additionally show that the representations vary with network depth: morphological information is encoded at the word embedding layer; local syntax is captured at lower layers and longer-range semantics are encoded at the upper layers.
Tran et al. show that LSTMs generalize hierarchical structure better than self-attention. This hints at possible limitations of the Transformer architecture and suggests that we might need different encoding architectures for different tasks.
Tang et al. find that the Transformer and CNNs are not better than RNNs at modeling long-distance agreement. However, models relying on self-attention excel at word sense disambiguation.
Other papers look at different properties of language models. Amrami and Goldberg show that language models can achieve state-of-the-art results for unsupervised word sense induction. Importantly, rather than just providing the left and right context to the word, they find that appending "and" provides more natural and better results. It will be interesting to see what other clever uses we will find for LMs.
Krishna et al. show that ELMo performs better than logic rules on sentiment analysis tasks. They also demonstrate that language models can implicitly learn logic rules.
In the best paper at the BlackBoxNLP workshop, Giulianelli et al. use diagnostic classifiers to keep track and improve number agreement in language models.
In another BlackBoxNLP paper, Wilcox et al. show that RNN language models can represent filler-gap dependencies and learn a particular subset of restrictions known as island constraints.
Many new tasks and datasets were presented at the conference, many of which propose more challenging settings.
Grounded common sense inference: SWAG contains 113k multiple choice questions about a rich spectrum of grounded situations.
Coreference resolution: PreCo contains 38k documents and 12.5M words, which are mostly from the vocabulary of English-speaking preschoolers.
Document grounded dialogue: The dataset by Zhou et al. contains 4112 conversations with an average of 21.43 turns per conversation.
Automatic story generation from videos: VideoStory contains 20k social media videos amounting to 396 hours of video with 123k sentences, temporally aligned to the video.
Sequential open-domain question answering: QBLink contains 18k question sequences, with each sequence consisting of three naturally occurring human-authored questions.
Multimodal reading comprehension: RecipeQA consists of 20k instructional recipes with multiple modalities such as titles, descriptions, and aligned sets of images, as well as 36k automatically generated question-answer pairs.
Word similarity: CARD-660 consists of 660 manually selected rare words with manually selected paired words and expert annotations.
Cloze style question answering: CLOTH consists of 7,131 passages and 99,433 questions used in middle-school and high-school language exams.
Multi-hop question answering: HotpotQA contains 113k Wikipedia-based question-answer pairs.
Open book question answering: OpenBookQA consists of 6,000 questions and 1,326 elementary level science facts.
Semantic parsing and text-to-SQL: Spider contains 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables covering 138 different domains.
Few-shot relation classification: FewRel consists of 70k sentences on 100 relations derived from Wikipedia.
Natural language inference: MedNLI consists of 14k sentence pairs in the clinical domain.
Multilingual natural language inference: XNLI extends the MultiNLI dataset to 15 languages.
Task-oriented dialogue modeling: MultiWOZ, which won the best resource paper award, is a Wizard-of-Oz style dataset consisting of 10k human-human written conversations spanning over multiple domains and topics.
Papers also continued the trend of ACL 2018 of analyzing the limitations of existing datasets and metrics.
Text simplification: Sulem et al. show that BLEU is not a good evaluation metric for sentence splitting, the most common operation in text simplification.
Text-to-SQL: Yavuz et al. show what it takes to achieve 100% accuracy on the WikiSQL benchmark.
Reading comprehension: Kaushik and Lipton show in the best short paper that models that only rely on the passage or the last sentence for prediction do well on many reading comprehension tasks.
These are papers that provide a refreshing take or tackle an unusual problem, but do not fit any of the above categories.
Stanovsky and Hopkins propose a novel way to test word representations. Their approach uses Odd-Man-Out puzzles, which consists of 5 (or more) words and require the system to choose the word that does not belong with the others. They show that such a simple setup can reveal various properties of different representations.
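One straightforward baseline for such puzzles (a toy sketch; `embed` is a placeholder lookup, and the paper's own setup may differ): pick the word with the lowest average cosine similarity to the others.

```python
import numpy as np

def odd_man_out(words, embed):
    """Return the puzzle word least similar (on average) to the rest.

    `embed` is assumed to map a word to a 1-D numpy vector.
    """
    vecs = np.stack([embed(w) for w in words])
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T                  # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)
    avg_sim = sims.sum(axis=1) / (len(words) - 1)
    return words[int(np.argmin(avg_sim))]

# e.g. odd_man_out(["cat", "dog", "horse", "piano", "rabbit"], embed)
```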
A similarly playful way to test the associative properties of word embeddings is proposed by Shen et al. They use a simplified version of the popular game Codenames. In their setting, a speaker has to select an adjective to refer to two out of three nouns, which then need to be identified by the listener.
Causal understanding is important for many real-world applications, but causal inference has so far not found much adoption in NLP. Wood-Doughty et al. demonstrate how causal analyses can be conducted for text classification and discuss opportunities and challenges for future work.
Gender bias and equal opportunity are big issues in STEM. Schluter argues that a glass ceiling exists in NLP, which prevents high achieving women and minorities from obtaining equal access to opportunities. While the field of NLP has been consistently ~33% female, Schluter analyzes 52 years of NLP publication data consisting of 15.6k gender-labeled authors and observes that the growth level of female seniority standing (indicated by last-authorship on a paper) falls significantly below that of the male population, with a gap that is widening.
Shillingford and Jones tackle both an interesting problem and utilize a refreshing approach. They seek to recover missing characters for long vowels and glottal stops in Old Hawaiian Writing, which are important for reading comprehension and pronunciation. They propose to compose a finite-state transducer—which incorporates domain information—with a neural network. Importantly, their approach only requires modern Hawaiian texts for training.
You might also find these other reviews of EMNLP 2018 helpful:
Claudia Hauff lists 28 papers that stood out to her, focusing on dataset papers or papers with a strong IR component.
Patrick Lewis provides a comprehensive overview of the conference, covering many of the tutorials and workshops as well as highlights from each session.
A group of PhD students and postdocs at the University of Copenhagen wrote another very comprehensive overview of the conference.
http://ruder.io/hackernoon-interview/ (2 Oct 2018)
This post is an interview by fast.ai fellow Sanyam Bhutani with me.
This post originally appeared at HackerNoon with a different introduction.
I had the honour to be interviewed by Sanyam Bhutani, a Deep Learning and Computer Vision practitioner and fast.ai fellow who's been doing a series interviewing people that inspire him. To be honest, it feels surreal to be the one being interviewed. I hope my answers may be interesting or useful to some of you.
Sanyam: Hello Sebastian, Thank you for taking the time to do this.
Sebastian: Thanks for having me.
Sanyam: You're working as a research scientist today at AYLIEN, and you're a Ph.D. student at Insight Research Centre for Data Analytics. Could you tell the readers about how you got started? What got you interested in NLP and Deep Learning?
Sebastian: I was really into maths and languages when I was in high school and took part in competitions. For my studies, I wanted to combine the logic of maths with the creativity of language somehow but didn't know if such a field existed. That's when I came across Computational Linguistics, which seemed to be a perfect fit at the intersection of computer science and linguistics. I then did my Bachelor's in Computational Linguistics at the University of Heidelberg in Germany, one of my favourite places in Europe. During my Bachelor's, I got most excited by machine learning, so I tried to get as much exposure to ML as possible via internships and online courses. I only heard about word2vec as I was finishing my undergrad in 2015; as I learned more about Deep Learning at the start of my Ph.D. later that year, it seemed to be the most exciting direction, so I decided to focus on it.
Sanyam: You started your research right after graduation. What made you pick research as a career path instead of the industry?
Sebastian: After graduating, I was planning to get some industry experience first by working in a startup. A PhD was always something I had dreamed of, but I hadn't seriously considered it at that point. When I discussed working with the Dublin-based NLP startup Aylien, they told me about the Employment-based Postgraduate Programme, a PhD programme that is hosted jointly by a university and a company, which seemed like the perfect fit for me. Combining research and industry work can be challenging at times, but has been rewarding for me overall. Most importantly, there should be a fit with the company.
Sanyam: You've been working as a researcher for 3 years now. What has been your favorite project during these years?
Sebastian: In terms of learning, delving into a new area where I don't know much, reading papers, and getting to collaborate with great people. In this vein, my project working on multi-task learning at the University of Copenhagen was a great and very stimulating experience. In terms of impact, being able to work with Jeremy, interacting with the fastai community, and seeing that people find our work on language models useful.
Sanyam: Natural Language Processing has arguably lagged behind Computer Vision. What are your thoughts about the current scenario? Is it a good time to get started as an NLP Practitioner?
Sebastian: I think now is a great time to get started with NLP. Compared to a couple of years ago, we're at a point of maturity where you're not limited to just using word embeddings or off-the-shelf models, but you can compose your model from a wide array of components, such as different layers, pretrained representations, auxiliary losses, etc. There also seems to be a growing feeling in the community that many of the canonical problems (POS tagging and dependency parsing on the Penn Treebank, sentiment analysis on movie reviews, etc.) are close to being solved, so we really want to make progress on more challenging problems, such as "real" natural language understanding and creating models that truly generalize. For these problems, I think we can really benefit from people with new perspectives and ideas. In addition, as we can now train models for many useful tasks such as classification or sequence labelling with good accuracy, there are a lot of opportunities for applying and adapting these models to other languages. If you're a speaker of another language, you can make a big difference by creating datasets others can use for evaluation and training models for that language.
Sanyam: For the readers and the beginners who are interested in working on Natural Language Processing, what would be your best advice?
Sebastian: Find a task you're interested in for instance by browsing the tasks on NLP-progress. If you're interested in doing research, try to choose a particular subproblem not everyone is working on. For instance, for sentiment analysis, don't work on movie reviews but conversations. For summarization, summarize biomedical papers rather than news articles. Read papers related to the task and try to understand what the state-of-the-art does. Prefer tasks that have open-source implementations available that you can run. Once you have a good handle of how something works, for research, reflect if you were surprised by any choices in the paper. Try to understand what kind of errors the model makes and if you can think of any information that could be used to mitigate them. Doing error and ablation analyses or using synthetic tasks that gauge if a model captures a certain kind of information are great ways to do this.
If you have an idea how to make the task more challenging or realistic, try to create a dataset and apply the existing model to that task. Try to recreate the dataset in your language and see if the model performs equally well.
Sanyam: Many job boards (For DL/ML) require the applicants to be post-grads or have research experience. For the readers who want to take up Machine Learning as a Career path, do you feel having research experience is a necessity?
Sebastian: I think research experience can be a good indicator that you're proficient with certain models and creative and innovative enough to come up with new solutions. You don't need to do a Ph.D. or a research fellowship to learn these skills, though. Being proactive, learning about and working on a problem that you're excited about, trying to improve the model, and writing about your experience is a good way to get started and demonstrate similar skills. In most applied ML settings, you won't be required to come up with totally new ways to solve a task. Doing ML and data science competitions can thus similarly help you demonstrate that you know how to apply ML models in practice.
Sanyam: Given the explosive growth rates in research, how do you stay up to date with the cutting edge?
Sebastian: I've been going through the arXiv daily update, adding relevant papers to my reading list, and reading them in batches. Jeff Dean recently said during a talk at the Deep Learning Indaba that he thinks it's better to read ten abstracts than one paper in-depth as you can always go back and read one of the papers in-depth. I agree with him. I think you want to read widely about as many ideas as possible, which you can catalogue and use for inspiration later. Having a good paper management system is key. I've been using Mendeley. Lately, I've been relying more on Arxiv Sanity Preserver to surface relevant papers.
Sanyam: You also maintain a great blog, which I'm a great fan of. Could you share some tips on effectively writing technical articles?
Sebastian: I've had the best experience writing a blog when I started out writing it for myself to understand a particular topic better. If you ever find yourself having to put in a lot of work to build intuition or do a lot of research to grasp a subject, consider writing a post about it so you can accelerate everyone else's learning in the future. In research papers, there's usually not enough space to properly contextualize a work, highlight motivations, and intuitions, etc. Blog posts are a great way to make technical content more accessible and approachable.
The great thing about a blog is that it doesn't need to be perfect. You can use it to improve your communication skills as well as obtain feedback on your ideas and things you might have missed. In terms of writing, I think the most important thing I have learned is to be biased towards clarity. Try to be as unambiguous as possible. Remove sentences that don't add much value. Remove vague adjectives. Write only about what the data shows and if you speculate, clearly say so.
Get feedback on your draft from your friends and colleagues. Don't try to make something 100% perfect, but get it to a point where you're happy with it. Feeling anxiety when clicking that 'Publish' button is totally normal and doesn't go away. Publishing something will always be worth it in the long-term.
Sanyam: Do you feel Machine Learning has been overhyped?
Sebastian: No.
Sanyam: Before we conclude, any tips for the beginners who are afraid to get started because of the idea that Deep Learning is an advanced field?
Sebastian: Don't let anyone tell you that you can't do this. Do online courses to build your understanding. Once you're comfortable with the fundamentals, read papers for inspiration when you have time. Choose something you're excited about, choose a library, and work on it. Don't think you need massive compute to work on meaningful problems. Particularly in NLP, there are a lot of problems with a small number of labelled examples. Write about what you're doing and learning. Reach out to people with similar interests and focus areas. Engage with the community, e.g. the fastai community is awesome. Get on Twitter. Twitter has a great ML community and you can often get replies from top experts in the field way faster than via email. Find a mentor. If you write to someone for advice, be mindful of their time. Be respectful and try to help others. Be generous with praise and cautious with criticism.
Sanyam: Thank you so much for doing this interview.
The cover image for this post was generated based on the content of the post using wordclouds.
http://ruder.io/a-review-of-the-recent-history-of-nlp/ (Mon, 01 Oct 2018)
This is the first blog post in a two-part series. The series expands on the Frontiers of Natural Language Processing session organized by Herman Kamper and me at the Deep Learning Indaba 2018. Slides of the entire session can be found here. This post discusses major recent advances in NLP focusing on neural network-based methods. The second post discusses open problems in NLP. You can find a recording of the talk this post is based on here.
Disclaimer: This post tries to condense ~15 years' worth of work into eight milestones that are the most relevant today and thus omits many relevant and important developments. In particular, it is heavily skewed towards current neural approaches, which may give the false impression that no other methods were influential during this period. More importantly, many of the neural network models presented in this post build on non-neural milestones of the same era. In the final section of this post, we highlight such influential work that laid the foundations for later methods.
2001 - Neural language models
2008 - Multi-task learning
2013 - Word embeddings
2013 - Neural networks for NLP
2014 - Sequence-to-sequence models
2015 - Attention
2015 - Memory-based networks
2018 - Pretrained language models
Other milestones
Non-neural milestones
Language modelling is the task of predicting the next word in a text given the previous words. It is probably the simplest language processing task with concrete practical applications such as intelligent keyboards, email response suggestion (Kannan et al., 2016), spelling autocorrection, etc. Unsurprisingly, language modelling has a rich history. Classic approaches are based on n-grams and employ smoothing to deal with unseen n-grams (Kneser & Ney, 1995).
The first neural language model, a feed-forward neural network, was proposed in 2001 by Bengio et al. and is shown in Figure 1 below.
Figure 1: A feed-forward neural network language model (Bengio et al., 2001; 2003)
This model takes as input vector representations of the \(n\) previous words, which are looked up in a table \(C\). Nowadays, such vectors are known as word embeddings. These word embeddings are concatenated and fed into a hidden layer, whose output is then provided to a softmax layer. For more information about the model, have a look at this post.
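To make the architecture concrete, here is a minimal sketch of such a feed-forward language model in PyTorch. The hyperparameters (embedding size, context size, hidden dimension) and the class name are illustrative assumptions rather than the original setup of Bengio et al.

```python
# A minimal sketch of a feed-forward neural language model (hypothetical
# hyperparameters; not the authors' original implementation).
import torch
import torch.nn as nn

class FeedForwardLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, context_size=4, hidden_dim=256):
        super().__init__()
        # The look-up table C: one embedding vector per word in the vocabulary
        self.embeddings = nn.Embedding(vocab_size, emb_dim)
        # Hidden layer operating on the concatenated context embeddings
        self.hidden = nn.Linear(context_size * emb_dim, hidden_dim)
        # Output layer followed by a softmax over the vocabulary
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids):
        # context_ids: (batch, context_size) indices of the n previous words
        embeds = self.embeddings(context_ids)          # (batch, context, emb_dim)
        embeds = embeds.view(context_ids.size(0), -1)  # concatenate the context
        h = torch.tanh(self.hidden(embeds))
        return torch.log_softmax(self.out(h), dim=-1)  # log-probability of the next word

model = FeedForwardLM(vocab_size=10000)
log_probs = model(torch.randint(0, 10000, (8, 4)))     # (8, 10000)
```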
More recently, feed-forward neural networks have been replaced with recurrent neural networks (RNNs; Mikolov et al., 2010) and long short-term memory networks (LSTMs; Graves, 2013) for language modelling. Many new language models that extend the classic LSTM have been proposed in recent years (have a look at this page for an overview). Despite these developments, the classic LSTM remains a strong baseline (Melis et al., 2018). Even Bengio et al.'s classic feed-forward neural network is in some settings competitive with more sophisticated models as these typically only learn to consider the most recent words (Daniluk et al., 2017). Consequently, better understanding what information such language models capture is an active research area (Kuncoro et al., 2018; Blevins et al., 2018).
Language modelling is typically the training ground of choice when applying RNNs and has succeeded at capturing the imagination, with many getting their first exposure via Andrej's blog post. Language modelling is a form of unsupervised learning, which Yann LeCun also calls predictive learning and cites as a prerequisite to acquiring common sense (see here for his Cake slide from NIPS 2016). Probably the most remarkable aspect about language modelling is that despite its simplicity, it is core to many of the later advances discussed in this post:
Word embeddings: The objective of word2vec is a simplification of language modelling.
Sequence-to-sequence models: Such models generate an output sequence by predicting one word at a time.
Pretrained language models: These methods use representations from language models for transfer learning.
This conversely means that many of the most important recent advances in NLP reduce to a form of language modelling. In order to do "real" natural language understanding, just learning from the raw form of text likely will not be enough and we will need new methods and models.
Multi-task learning is a general method for sharing parameters between models that are trained on multiple tasks. In neural networks, this can be done easily by tying the weights of different layers. The idea of multi-task learning was first proposed in 1993 by Rich Caruana and was applied to road-following and pneumonia prediction (Caruana, 1998). Intuitively, multi-task learning encourages the models to learn representations that are useful for many tasks. This is particularly useful for learning general, low-level representations, to focus a model's attention or in settings with limited amounts of training data. For a more comprehensive overview of multi-task learning, have a look at this post.
Multi-task learning was first applied to neural networks for NLP in 2008 by Collobert and Weston. In their model, the look-up tables (or word embedding matrices) are shared between two models trained on different tasks, as depicted in Figure 2 below.
Figure 2: Sharing of word embedding matrices (Collobert & Weston, 2008; Collobert et al., 2011)
Sharing the word embeddings enables the models to collaborate and share general low-level information in the word embedding matrix, which typically makes up the largest number of parameters in a model. The 2008 paper by Collobert and Weston proved influential beyond its use of multi-task learning. It spearheaded ideas such as pretraining word embeddings and using convolutional neural networks (CNNs) for text that have only been widely adopted in the last years. It won the test-of-time award at ICML 2018 (see the test-of-time award talk contextualizing the paper here).
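The following is a minimal sketch of this kind of hard parameter sharing: two task-specific heads share a single word embedding matrix. The tasks, layer sizes, and names are illustrative assumptions, not the original Collobert & Weston architecture.

```python
# A minimal sketch of hard parameter sharing across two tasks via a shared
# embedding matrix (illustrative tasks and dimensions).
import torch
import torch.nn as nn

shared_embeddings = nn.Embedding(num_embeddings=10000, embedding_dim=100)

class TaskModel(nn.Module):
    def __init__(self, embeddings, num_labels):
        super().__init__()
        self.embeddings = embeddings                  # shared across tasks
        self.classifier = nn.Linear(100, num_labels)  # task-specific head

    def forward(self, token_ids):
        # Average the word embeddings as a simple sentence representation
        sent = self.embeddings(token_ids).mean(dim=1)
        return self.classifier(sent)

pos_model = TaskModel(shared_embeddings, num_labels=45)  # e.g. POS tagging
ner_model = TaskModel(shared_embeddings, num_labels=9)   # e.g. NER
# Gradients from both tasks update the same embedding parameters.
```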
Multi-task learning is now used across a wide range of NLP tasks and leveraging existing or "artificial" tasks has become a useful tool in the NLP repertoire. For an overview of different auxiliary tasks, have a look at this post. While the sharing of parameters is typically predefined, different sharing patterns can also be learned during the optimization process (Ruder et al., 2017). As models are being increasingly evaluated on multiple tasks to gauge their generalization ability, multi-task learning is gaining in importance and dedicated benchmarks for multi-task learning have been proposed recently (Wang et al., 2018; McCann et al., 2018).
Sparse vector representations of text, the so-called bag-of-words model, have a long history in NLP. Dense vector representations of words or word embeddings have been used as early as 2001 as we have seen above. The main innovation that was proposed in 2013 by Mikolov et al. was to make the training of these word embeddings more efficient by removing the hidden layer and approximating the objective. While these changes were simple in nature, they enabled---together with the efficient word2vec implementation---large-scale training of word embeddings.
Word2vec comes in two flavours that can be seen in Figure 3 below: continuous bag-of-words (CBOW) and skip-gram. They differ in their objective: one predicts the centre word based on the surrounding words, while the other does the opposite.
Figure 3: Continuous bag-of-words and skip-gram architectures (Mikolov et al., 2013a; 2013b)
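As a rough illustration of the skip-gram objective with negative sampling, the sketch below computes the loss for a batch of (centre word, context word) pairs with sampled negative words. It is a simplified stand-in; in practice, an optimized implementation such as the original word2vec tool or gensim would typically be used.

```python
# A minimal sketch of skip-gram with negative sampling (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGram(nn.Module):
    def __init__(self, vocab_size, dim=100):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, dim)   # centre-word vectors
        self.out_embed = nn.Embedding(vocab_size, dim)  # context-word vectors

    def loss(self, centre, context, negatives):
        # centre: (batch,), context: (batch,), negatives: (batch, k)
        v = self.in_embed(centre)                        # (batch, dim)
        u_pos = self.out_embed(context)                  # (batch, dim)
        u_neg = self.out_embed(negatives)                # (batch, k, dim)
        pos_score = torch.sum(v * u_pos, dim=-1)         # dot product with true context
        neg_score = torch.bmm(u_neg, v.unsqueeze(-1)).squeeze(-1)
        # Maximise log sigmoid(pos) + sum of log sigmoid(-neg)
        return -(F.logsigmoid(pos_score) + F.logsigmoid(-neg_score).sum(dim=-1)).mean()

model = SkipGram(vocab_size=10000)
loss = model.loss(torch.randint(0, 10000, (16,)),
                  torch.randint(0, 10000, (16,)),
                  torch.randint(0, 10000, (16, 5)))
```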
While these embeddings are no different conceptually than the ones learned with a feed-forward neural network, training on a very large corpus enables them to approximate certain relations between words such as gender, verb tense, and country-capital relations, which can be seen in Figure 4 below.
Figure 4: Relations captured by word2vec (Mikolov et al., 2013a; 2013b)
These relations and the meaning behind them sparked initial interest in word embeddings and many studies have investigated the origin of these linear relationships (Arora et al., 2016; Mimno & Thompson, 2017; Antoniak & Mimno, 2018; Wendlandt et al., 2018). However, later studies showed that the learned relations are not without bias (Bolukbasi et al., 2016). What cemented word embeddings as a mainstay in current NLP was that using pretrained embeddings as initialization was shown to improve performance across a wide range of downstream tasks.
While the relations word2vec captured had an intuitive and almost magical quality to them, later studies showed that there is nothing inherently special about word2vec: Word embeddings can also be learned via matrix factorization (Pennington et al, 2014; Levy & Goldberg, 2014) and with proper tuning, classic matrix factorization approaches like SVD and LSA achieve similar results (Levy et al., 2015).
Since then, a lot of work has gone into exploring different facets of word embeddings (as indicated by the staggering number of citations of the original paper). Have a look at this post for some trends and future directions. Despite many developments, word2vec is still a popular choice and widely used today. Word2vec's reach has even extended beyond the word level: skip-gram with negative sampling, a convenient objective for learning embeddings based on local context, has been applied to learn representations for sentences (Le & Mikolov, 2014; Kiros et al., 2015)---and even going beyond NLP---to networks (Grover & Leskovec, 2016) and biological sequences (Asgari & Mofrad, 2015), among others.
One particularly exciting direction is to project word embeddings of different languages into the same space to enable (zero-shot) cross-lingual transfer. It is becoming increasingly possible to learn a good projection in a completely unsupervised way (at least for similar languages) (Conneau et al., 2018; Artetxe et al., 2018; Søgaard et al., 2018), which opens applications for low-resource languages and unsupervised machine translation (Lample et al., 2018; Artetxe et al., 2018). Have a look at (Ruder et al., 2018) for an overview.
2013 and 2014 marked the time when neural network models started to get adopted in NLP. Three main types of neural networks became the most widely used: recurrent neural networks, convolutional neural networks, and recursive neural networks.
Recurrent neural networks Recurrent neural networks (RNNs) are an obvious choice to deal with the dynamic input sequences ubiquitous in NLP. Vanilla RNNs (Elman, 1990) were quickly replaced with the classic long short-term memory networks (Hochreiter & Schmidhuber, 1997), which proved more resilient to the vanishing and exploding gradient problem. Before 2013, RNNs were still thought to be difficult to train; Ilya Sutskever's PhD thesis was a key example on the way to changing this reputation. A visualization of an LSTM cell can be seen in Figure 5 below. A bidirectional LSTM (Graves et al., 2013) is typically used to deal with both left and right context.
Figure 5: An LSTM network (Source: Chris Olah)
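In code, obtaining contextual token representations with a bidirectional LSTM is only a few lines in a modern framework; the dimensions below are arbitrary assumptions.

```python
# A minimal sketch of a bidirectional LSTM encoder for token sequences.
import torch
import torch.nn as nn

embed = nn.Embedding(10000, 100)
bilstm = nn.LSTM(input_size=100, hidden_size=128,
                 batch_first=True, bidirectional=True)

tokens = torch.randint(0, 10000, (8, 20))    # a batch of 8 sentences, 20 tokens each
outputs, (h_n, c_n) = bilstm(embed(tokens))  # outputs: (8, 20, 2 * 128)
# Each position now has a representation of both its left and right context.
```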
Convolutional neural networks With convolutional neural networks (CNNs) being widely used in computer vision, they also started to get applied to language (Kalchbrenner et al., 2014; Kim et al., 2014). A convolutional neural network for text only operates in two dimensions, with the filters only needing to be moved along the temporal dimension. Figure 6 below shows a typical CNN as used in NLP.
Figure 6: A convolutional neural network for text (Kim, 2014)
An advantage of convolutional neural networks is that they are more parallelizable than RNNs, as the state at every timestep only depends on the local context (via the convolution operation) rather than all past states as in the RNN. CNNs can be extended with wider receptive fields using dilated convolutions to capture a wider context (Kalchbrenner et al., 2016). CNNs and LSTMs can also be combined and stacked (Wang et al., 2016) and convolutions can be used to speed up an LSTM (Bradbury et al., 2017).
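A minimal sketch of such a text CNN with max-over-time pooling, roughly in the spirit of Kim (2014), might look as follows; the filter widths and dimensions are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a 1D CNN text classifier with max-over-time pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One 1D convolution per filter width, moved along the time dimension
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, k) for k in kernel_sizes])
        self.out = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)        # (batch, emb_dim, time)
        # Convolve, apply a non-linearity, then max-pool over time
        pooled = [F.relu(conv(x)).max(dim=-1).values for conv in self.convs]
        return self.out(torch.cat(pooled, dim=-1))

model = TextCNN(vocab_size=10000)
logits = model(torch.randint(0, 10000, (8, 30)))         # (8, 2)
```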
Recursive neural networks RNNs and CNNs both treat the language as a sequence. From a linguistic perspective, however, language is inherently hierarchical: Words are composed into higher-order phrases and clauses, which can themselves be recursively combined according to a set of production rules. The linguistically inspired idea of treating sentences as trees rather than as a sequence gives rise to recursive neural networks (Socher et al., 2013), which can be seen in Figure 7 below.
Figure 7: A recursive neural network (Socher et al., 2013)
Recursive neural networks build the representation of a sequence from the bottom up in contrast to RNNs, which process the sentence left-to-right or right-to-left. At every node of the tree, a new representation is computed by composing the representations of the child nodes. As a tree can also be seen as imposing a different processing order on an RNN, LSTMs have naturally been extended to trees (Tai et al., 2015).
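The core idea of composing child representations bottom-up can be sketched in a few lines; the composition function below is a single tanh layer and is purely illustrative, whereas the models of Socher et al. use richer composition functions and add task-specific scoring at every node.

```python
# A minimal sketch of recursive bottom-up composition over a binary tree.
import torch
import torch.nn as nn

dim = 50
compose = nn.Linear(2 * dim, dim)        # combines a pair of child vectors

def encode(tree, word_vectors):
    # tree is either a word (leaf) or a (left, right) pair of subtrees
    if isinstance(tree, str):
        return word_vectors[tree]
    left, right = tree
    children = torch.cat([encode(left, word_vectors), encode(right, word_vectors)])
    return torch.tanh(compose(children))

vecs = {w: torch.randn(dim) for w in ["the", "cat", "sat"]}
root = encode((("the", "cat"), "sat"), vecs)   # bottom-up composition, shape (50,)
```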
RNNs and LSTMs are not the only models that can be extended to work with hierarchical structures. Word embeddings can be learned based not only on local but on grammatical context (Levy & Goldberg, 2014); language models can generate words based on a syntactic stack (Dyer et al., 2016); and graph-convolutional neural networks can operate over a tree (Bastings et al., 2017).
In 2014, Sutskever et al. proposed sequence-to-sequence learning, a general framework for mapping one sequence to another one using a neural network. In the framework, an encoder neural network processes a sentence symbol by symbol and compresses it into a vector representation; a decoder neural network then predicts the output symbol by symbol based on the encoder state, taking as input at every step the previously predicted symbol as can be seen in Figure 8 below.
Figure 8: A sequence-to-sequence model (Sutskever et al., 2014)
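The sketch below shows a bare-bones encoder-decoder with greedy decoding to make the framework concrete. The vocabulary sizes, dimensions, and special token id are illustrative assumptions; real systems add teacher forcing during training, beam search, and attention (discussed below).

```python
# A minimal sketch of an encoder-decoder (sequence-to-sequence) model with
# greedy decoding (illustrative dimensions and vocabulary).
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=128):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, max_len=10, bos_id=1):
        # Compress the source sentence into the encoder's final state
        _, state = self.encoder(self.src_embed(src_ids))
        token = torch.full((src_ids.size(0), 1), bos_id, dtype=torch.long)
        outputs = []
        for _ in range(max_len):
            # Predict one symbol at a time, feeding back the previous prediction
            dec_out, state = self.decoder(self.tgt_embed(token), state)
            logits = self.out(dec_out[:, -1])
            token = logits.argmax(dim=-1, keepdim=True)
            outputs.append(token)
        return torch.cat(outputs, dim=1)

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
translation = model(torch.randint(0, 8000, (4, 12)))     # (4, 10) token ids
```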
Machine translation turned out to be the killer application of this framework. In 2016, Google announced that it was starting to replace its monolithic phrase-based MT models with neural MT models (Wu et al., 2016). According to Jeff Dean, this meant replacing 500,000 lines of phrase-based MT code with a 500-line neural network model.
Due to its flexibility, this framework is now the go-to framework for natural language generation tasks, with different models taking on the role of the encoder and the decoder. Importantly, the decoder model can not only be conditioned on a sequence, but on arbitrary representations. This enables for instance generating a caption based on an image (Vinyals et al., 2015) (as can be seen in Figure 9 below), text based on a table (Lebret et al., 2016), and a description based on source code changes (Loyola et al., 2017), among many other applications.
Figure 9: Generating a caption based on an image (Vinyals et al., 2015)
Sequence-to-sequence learning can even be applied to structured prediction tasks common in NLP where the output has a particular structure. For simplicity, the output is linearized as can be seen for constituency parsing in Figure 10 below. Neural networks have demonstrated the ability to directly learn to produce such a linearized output given a sufficient amount of training data, for constituency parsing (Vinyals et al., 2015) and named entity recognition (Gillick et al., 2016), among others.
Figure 10: Linearizing a constituency parse tree (Vinyals et al., 2015)
Encoders for sequences and decoders are typically based on RNNs but other model types can be used. New architectures mainly emerge from work in MT, which acts as a Petri dish for sequence-to-sequence architectures. Recent models are deep LSTMs (Wu et al., 2016), convolutional encoders (Kalchbrenner et al., 2016; Gehring et al., 2017), the Transformer (Vaswani et al., 2017), which will be discussed in the next section, and a combination of an LSTM and a Transformer (Chen et al., 2018).
Attention (Bahdanau et al., 2015) is one of the core innovations in neural MT (NMT) and the key idea that enabled NMT models to outperform classic phrase-based MT systems. The main bottleneck of sequence-to-sequence learning is that it requires compressing the entire content of the source sequence into a fixed-size vector. Attention alleviates this by allowing the decoder to look back at the source sequence hidden states, which are then provided as a weighted average as additional input to the decoder as can be seen in Figure 11 below.
Figure 11: Attention (Bahdanau et al., 2015)
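The following sketch implements additive attention of this kind: a score is computed between the current decoder state and every encoder state, the scores are normalized with a softmax, and the encoder states are averaged with the resulting weights to form a context vector. Dimensions are illustrative assumptions.

```python
# A minimal sketch of additive (Bahdanau-style) attention over encoder states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_dec = nn.Linear(dim, dim)
        self.w_enc = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1)

    def forward(self, decoder_state, encoder_states):
        # decoder_state: (batch, dim); encoder_states: (batch, src_len, dim)
        scores = self.v(torch.tanh(
            self.w_dec(decoder_state).unsqueeze(1) + self.w_enc(encoder_states)))
        weights = F.softmax(scores.squeeze(-1), dim=-1)   # (batch, src_len)
        # Weighted average of the encoder hidden states: the context vector
        context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
        return context, weights

attn = AdditiveAttention(dim=128)
context, weights = attn(torch.randn(4, 128), torch.randn(4, 12, 128))
```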
Different forms of attention are available (Luong et al., 2015). Have a look here for a brief overview. Attention is widely applicable and potentially useful for any task that requires making decisions based on certain parts of the input. It has been applied to constituency parsing (Vinyals et al., 2015), reading comprehension (Hermann et al., 2015), and one-shot learning (Vinyals et al., 2016), among many others. The input does not even need to be a sequence, but can consist of other representations as in the case of image captioning (Xu et al., 2015), which can be seen in Figure 12 below. A useful side-effect of attention is that it provides a rare---if only superficial---glimpse into the inner workings of the model by inspecting which parts of the input are relevant for a particular output based on the attention weights.
Figure 12: Visual attention in an image captioning model indicating what the model is attending to when generating the word "frisbee". (Xu et al., 2015)
Attention is also not restricted to just looking at the input sequence; self-attention can be used to look at the surrounding words in a sentence or document to obtain more contextually sensitive word representations. Multiple layers of self-attention are at the core of the Transformer architecture (Vaswani et al., 2017), the current state-of-the-art model for NMT.
Attention can be seen as a form of fuzzy memory where the memory consists of the past hidden states of the model, with the model choosing what to retrieve from memory. For a more detailed overview of attention and its connection to memory, have a look at this post. Many models with a more explicit memory have been proposed. They come in different variants such as Neural Turing Machines (Graves et al., 2014), Memory Networks (Weston et al., 2015) and End-to-end Memory Networks (Sukhbaatar et al., 2015), Dynamic Memory Networks (Kumar et al., 2015), the Differentiable Neural Computer (Graves et al., 2016), and the Recurrent Entity Network (Henaff et al., 2017).
Similarly to attention, memory is often accessed based on similarity to the current state and can typically be written to and read from. Models differ in how they implement and leverage the memory. For instance, End-to-end Memory Networks process the input multiple times and update the memory to enable multiple steps of inference. Neural Turing Machines also have a location-based addressing, which allows them to learn simple computer programs like sorting. Memory-based models are typically applied to tasks where retaining information over longer time spans should be useful, such as language modelling and reading comprehension. The concept of a memory is very versatile: A knowledge base or table can function as a memory, while a memory can also be populated based on the entire input or particular parts of it.
Pretrained word embeddings are context-agnostic and only used to initialize the first layer in our models. In recent months, a range of supervised tasks has been used to pretrain neural networks (Conneau et al., 2017; McCann et al., 2017; Subramanian et al., 2018). In contrast, language models only require unlabelled text; training can thus scale to billions of tokens, new domains, and new languages. Pretrained language models were first proposed in 2015 (Dai & Le, 2015); only recently were they shown to be beneficial across a diverse range of tasks. Language model embeddings can be used as features in a target model (Peters et al., 2018) or a language model can be fine-tuned on target task data (Ramachandran et al., 2017; Howard & Ruder, 2018). Adding language model embeddings gives a large improvement over the state-of-the-art across many different tasks as can be seen in Figure 13 below.
Figure 13: Improvements with language model embeddings over the state-of-the-art (Peters et al., 2018)
Pretrained language models have been shown to enable learning with significantly less data. As language models only require unlabelled data, they are particularly beneficial for low-resource languages where labelled data is scarce. For more information about the potential of pretrained language models, refer to this article.
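Conceptually, the feature-based use of a pretrained language model looks like the sketch below: the language model's hidden states are frozen and fed into a small task-specific head. The stand-in LM, dimensions, and tag set are assumptions for illustration; an actual pretrained model (e.g. ELMo, or a fine-tuned model as in ULMFiT) would replace the toy LSTM.

```python
# A minimal sketch of using a (stand-in) pretrained language model's hidden
# states as contextual features for a downstream tagger.
import torch
import torch.nn as nn

class PretrainedLM(nn.Module):
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return hidden                                   # contextual representations

lm = PretrainedLM()
for p in lm.parameters():
    p.requires_grad = False                             # freeze: use as features

tagger_head = nn.Linear(256, 17)                        # e.g. 17 POS tags
tokens = torch.randint(0, 10000, (8, 20))
tag_logits = tagger_head(lm(tokens))                    # (8, 20, 17)
# Alternatively, leave the LM parameters trainable and fine-tune the whole
# model on the target task.
```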
Some other developments are less pervasive than the ones mentioned above, but still have wide-ranging impact.
Character-based representations Using a CNN or an LSTM over characters to obtain a character-based word representation is now fairly common, particularly for morphologically rich languages and tasks where morphological information is important or that have many unknown words. Character-based representations were first used for part-of-speech tagging and language modeling (Ling et al., 2015) and dependency parsing (Ballesteros et al., 2015). They later became a core component of models for sequence labeling (Lample et al., 2016; Plank et al., 2016) and language modeling (Kim et al., 2016). Character-based representations alleviate the need to deal with a fixed vocabulary, at increased computational cost, and enable applications such as fully character-based NMT (Ling et al., 2016; Lee et al., 2017).
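A character-level word representation can be sketched as follows: a small BiLSTM runs over the characters of a word and its final forward and backward states are concatenated into a word vector. The dimensions and character vocabulary size are illustrative assumptions.

```python
# A minimal sketch of a character-level BiLSTM word representation.
import torch
import torch.nn as nn

char_embed = nn.Embedding(num_embeddings=100, embedding_dim=30)  # ~100 characters
char_lstm = nn.LSTM(30, 50, batch_first=True, bidirectional=True)

char_ids = torch.randint(0, 100, (1, 7))            # one word of 7 characters
_, (h_n, _) = char_lstm(char_embed(char_ids))
# Concatenate the final forward and backward states into one word vector
word_vector = torch.cat([h_n[0], h_n[1]], dim=-1)   # (1, 100)
# Unknown or rare words still receive a representation built from their characters.
```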
Adversarial learning Adversarial methods have taken the field of ML by storm and have also been used in different forms in NLP. Adversarial examples are becoming increasingly widely used not only as a tool to probe models and understand their failure cases, but also to make them more robust (Jia & Liang, 2017). (Virtual) adversarial training, that is, worst-case perturbations (Miyato et al., 2017; Yasunaga et al., 2018) and domain-adversarial losses (Ganin et al., 2016; Kim et al., 2017) are useful forms of regularization that can equally make models more robust. Generative adversarial networks (GANs) are not yet too effective for natural language generation (Semeniuta et al., 2018), but are useful for instance when matching distributions (Conneau et al., 2018).
Reinforcement learning Reinforcement learning has been shown to be useful for tasks with a temporal dependency such as selecting data during training (Fang et al., 2017; Wu et al., 2018) and modelling dialogue (Liu et al., 2018). RL is also effective for directly optimizing a non-differentiable end metric such as ROUGE or BLEU instead of optimizing a surrogate loss such as cross-entropy in summarization (Paulus et al, 2018; Celikyilmaz et al., 2018) and machine translation (Ranzato et al., 2016). Similarly, inverse reinforcement learning can be useful in settings where the reward is too complex to be specified such as visual storytelling (Wang et al., 2018).
In 1998 and over the following years, the FrameNet project was introduced (Baker et al., 1998), which led to the task of semantic role labelling, a form of shallow semantic parsing that is still actively researched today. In the early 2000s, the shared tasks organized together with the Conference on Natural Language Learning (CoNLL) catalyzed research in core NLP tasks such as chunking (Tjong Kim Sang et al., 2000), named entity recognition (Tjong Kim Sang et al., 2003), and dependency parsing (Buchholz et al., 2006), among others. Many of the CoNLL shared task datasets are still the standard for evaluation today.
In 2001, conditional random fields (CRF; Lafferty et al., 2001), one of the most influential classes of sequence labelling methods, were introduced, which won the Test-of-time award at ICML 2011. A CRF layer is a core part of current state-of-the-art models for sequence labelling problems with label interdependencies such as named entity recognition (Lample et al., 2016).
In 2002, the bilingual evaluation understudy (BLEU; Papineni et al., 2002) metric was proposed, which enabled MT systems to scale up and is still the standard metric for MT evaluation these days. In the same year, the structured perceptron (Collins, 2002) was introduced, which laid the foundation for work in structured prediction. At the same conference, sentiment analysis, one of the most popular and widely studied NLP tasks, was introduced (Pang et al., 2002). All three papers won the Test-of-time award at NAACL 2018. In addition, the linguistic resource PropBank (Kingsbury & Palmer, 2002) was introduced in the same year. PropBank is similar to FrameNet, but focuses on verbs. It is frequently used in semantic role labelling.
2003 saw the introduction of latent Dirichlet allocation (LDA; Blei et al., 2003), one of the most widely used techniques in machine learning, which is still the standard way to do topic modelling. In 2004, novel max-margin models were proposed that are better suited for capturing correlations in structured data than SVMs (Taskar et al., 2004a; 2004b).
In 2006, OntoNotes (Hovy et al., 2006), a large multilingual corpus with multiple annotations and high interannotator agreement was introduced. OntoNotes has been used for the training and evaluation of a variety of tasks such as dependency parsing and coreference resolution. Milne and Witten (2008) described in 2008 how Wikipedia can be used to enrich machine learning methods. To this date, Wikipedia is one of the most useful resources for training ML methods, whether for entity linking and disambiguation, language modelling, as a knowledge base, or a variety of other tasks.
In 2009, the idea of distant supervision (Mintz et al., 2009) was proposed. Distant supervision leverages information from heuristics or existing knowledge bases to generate noisy patterns that can be used to automatically extract examples from large corpora. Distant supervision has been used extensively and is a common technique in relation extraction, information extraction, and sentiment analysis, among other tasks.
In 2016, Universal Dependencies v1 (Nivre et al., 2016), a multilingual collection of treebanks, was introduced. The Universal Dependencies project is an open community effort that aims to create consistent dependency-based annotations across many languages. As of January 2019, Universal Dependencies v2 comprises more than 100 treebanks in over 70 languages.
This post is also available in Italian (translated by Luca Palmieri).
Thanks to Djamé Seddah, Daniel Khashabi, Shyam Upadhyay, Chris Dyer, and Michael Roth for providing pointers (see the Twitter thread).
Kneser, R., & Ney, H. (1995, May). Improved backing-off for m-gram language modeling. In icassp (Vol. 1, p. 181e4).
Kannan, A., Kurach, K., Ravi, S., Kaufmann, T., Tomkins, A., Miklos, B., ... & Ramavajjala, V. (2016, August). Smart reply: Automated response suggestion for email. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 955-964). ACM.
Bengio, Y., Ducharme, R., & Vincent, P. (2001). A Neural Probabilistic Language Model. In Proceedings of NIPS 2001.
Mikolov, T., Karafiát, M., Burget, L., Černocký, J., & Khudanpur, S. (2010). Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
Graves, A. (2013). Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
Melis, G., Dyer, C., & Blunsom, P. (2018). On the State of the Art of Evaluation in Neural Language Models. In Proceedings of ICLR 2018.
Daniluk, M., Rocktäschel, T., Weibl, J., & Riedel, S. (2017). Frustratingly Short Attention Spans in Neural Language Modeling. In Proceedings of ICLR 2017.
Caruana, R. (1993). Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning.
Collobert, R., & Weston, J. (2008). A unified architecture for natural language processing. In Proceedings of the 25th International Conference on Machine Learning (pp. 160–167).
Caruana, R. (1998). Multitask Learning. Autonomous Agents and Multi-Agent Systems, 27(1), 95–133.
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural Language Processing (almost) from Scratch. Journal of Machine Learning Research, 12(Aug), 2493–2537. Retrieved from http://arxiv.org/abs/1103.0398.
Ruder, S., Bingel, J., Augenstein, I., & Søgaard, A. (2017). Learning what to share between loosely related tasks. ArXiv Preprint ArXiv:1705.08142. Retrieved from http://arxiv.org/abs/1705.08142
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems.
Mikolov, T., Corrado, G., Chen, K., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. Proceedings of the International Conference on Learning Representations (ICLR 2013).
Arora, S., Li, Y., Liang, Y., Ma, T., & Risteski, A. (2016). A Latent Variable Model Approach to PMI-based Word Embeddings. TACL, 4, 385–399.
Mimno, D., & Thompson, L. (2017). The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2863–2868).
Antoniak, M., & Mimno, D. (2018). Evaluating the Stability of Embedding-based Word Similarities. Transactions of the Association for Computational Linguistics, 6, 107–119.
Wendlandt, L., Kummerfeld, J. K., & Mihalcea, R. (2018). Factors Influencing the Surprising Instability of Word Embeddings. In Proceedings of NAACL-HLT 2018.
Kim, Y. (2014). Convolutional Neural Networks for Sentence Classification. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 1746–1751. Retrieved from http://arxiv.org/abs/1408.5882
Pennington, J., Socher, R., & Manning, C. D. (2014). Glove: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 1532–1543.
Levy, O., & Goldberg, Y. (2014). Neural Word Embedding as Implicit Matrix Factorization. Advances in Neural Information Processing Systems (NIPS), 2177–2185. Retrieved from http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization
Levy, O., Goldberg, Y., & Dagan, I. (2015). Improving Distributional Similarity with Lessons Learned from Word Embeddings. Transactions of the Association for Computational Linguistics, 3, 211–225. Retrieved from https://tacl2013.cs.columbia.edu/ojs/index.php/tacl/article/view/570
Le, Q. V., & Mikolov, T. (2014). Distributed Representations of Sentences and Documents. International Conference on Machine Learning - ICML 2014, 32, 1188–1196. Retrieved from http://arxiv.org/abs/1405.4053
Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R. S., Torralba, A., Urtasun, R., & Fidler, S. (2015). Skip-Thought Vectors. In Proceedings of NIPS 2015. Retrieved from http://arxiv.org/abs/1506.06726
Grover, A., & Leskovec, J. (2016, August). node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 855-864). ACM.
Asgari, E., & Mofrad, M. R. (2015). Continuous distributed representation of biological sequences for deep proteomics and genomics. PloS one, 10(11), e0141287.
Conneau, A., Lample, G., Ranzato, M., Denoyer, L., & Jégou, H. (2018). Word Translation Without Parallel Data. In Proceedings of ICLR 2018. Retrieved from http://arxiv.org/abs/1710.04087
Artetxe, M., Labaka, G., & Agirre, E. (2018). A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of ACL 2018.
Søgaard, A., Ruder, S., & Vulić, I. (2018). On the Limitations of Unsupervised Bilingual Dictionary Induction. In Proceedings of ACL 2018.
Ruder, S., Vulić, I., & Søgaard, A. (2018). A Survey of Cross-lingual Word Embedding Models. To be published in Journal of Artificial Intelligence Research. Retrieved from http://arxiv.org/abs/1706.04902
Elman, J. L. (1990). Finding structure in time. Cognitive science, 14(2), 179-211.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780.
Kalchbrenner, N., Grefenstette, E., & Blunsom, P. (2014). A Convolutional Neural Network for Modelling Sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (pp. 655–665). Retrieved from http://arxiv.org/abs/1404.2188
Kalchbrenner, N., Espeholt, L., Simonyan, K., Oord, A. van den, Graves, A., & Kavukcuoglu, K. (2016). Neural Machine Translation in Linear Time. ArXiv Preprint ArXiv:1610.10099. Retrieved from http://arxiv.org/abs/1610.10099
Wang, J., Yu, L., Lai, K. R., & Zhang, X. (2016). Dimensional Sentiment Analysis Using a Regional CNN-LSTM Model. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), 225–230.
Bradbury, J., Merity, S., Xiong, C., & Socher, R. (2017). Quasi-Recurrent Neural Networks. In ICLR 2017. Retrieved from http://arxiv.org/abs/1611.01576
Socher, R., Perelygin, A., & Wu, J. (2013). Recursive deep models for semantic compositionality over a sentiment treebank. Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1631–1642.
Tai, K. S., Socher, R., & Manning, C. D. (2015). Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. Acl-2015, 1556–1566.
Levy, O., & Goldberg, Y. (2014). Dependency-Based Word Embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers) (pp. 302–308). https://doi.org/10.3115/v1/P14-2050
Dyer, C., Kuncoro, A., Ballesteros, M., & Smith, N. A. (2016). Recurrent Neural Network Grammars. In NAACL. Retrieved from http://arxiv.org/abs/1602.07776
Bastings, J., Titov, I., Aziz, W., Marcheggiani, D., & Sima'an, K. (2017). Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2015). Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3156-3164).
Lebret, R., Grangier, D., & Auli, M. (2016). Generating Text from Structured Data with Application to the Biography Domain. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Retrieved from http://arxiv.org/abs/1603.07771
Loyola, P., Marrese-Taylor, E., & Matsuo, Y. (2017). A Neural Architecture for Generating Natural Language Descriptions from Source Code Changes. In ACL 2017. Retrieved from http://arxiv.org/abs/1704.04856
Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I., & Hinton, G. (2015). Grammar as a Foreign Language. Advances in Neural Information Processing Systems.
Gillick, D., Brunk, C., Vinyals, O., & Subramanya, A. (2016). Multilingual Language Processing From Bytes. In NAACL (pp. 1296–1306). Retrieved from http://arxiv.org/abs/1512.00103
Wu, Y., Schuster, M., Chen, Z., Le, Q. V, Norouzi, M., Macherey, W., … Dean, J. (2016). Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. ArXiv Preprint ArXiv:1609.08144.
Gehring, J., Auli, M., Grangier, D., Yarats, D., & Dauphin, Y. N. (2017). Convolutional Sequence to Sequence Learning. ArXiv Preprint ArXiv:1705.03122. Retrieved from http://arxiv.org/abs/1705.03122
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention Is All You Need. In Advances in Neural Information Processing Systems.
Chen, M. X., Foster, G., & Parmar, N. (2018). The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation. In Proceedings of ACL 2018.
Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR 2015.
Luong, M.-T., Pham, H., & Manning, C. D. (2015). Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of EMNLP 2015. Retrieved from http://arxiv.org/abs/1508.04025
Hermann, K. M., Kočiský, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., & Blunsom, P. (2015). Teaching Machines to Read and Comprehend. Advances in Neural Information Processing Systems. Retrieved from http://arxiv.org/abs/1506.03340v1
Xu, K., Courville, A., Zemel, R. S., & Bengio, Y. (2015). Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In Proceedings of ICML 2015.
Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching Networks for One Shot Learning. In Advances in Neural Information Processing Systems 29 (NIPS 2016). Retrieved from http://arxiv.org/abs/1606.04080
Graves, A., Wayne, G., & Danihelka, I. (2014). Neural turing machines. arXiv preprint arXiv:1410.5401.
Weston, J., Chopra, S., & Bordes, A. (2015). Memory Networks. In Proceedings of ICLR 2015.
Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). End-To-End Memory Networks. In Proceedings of NIPS 2015. Retrieved from http://arxiv.org/abs/1503.08895
Kumar, A., Irsoy, O., Ondruska, P., Iyyer, M., Bradbury, J., Gulrajani, I., ... & Socher, R. (2016, June). Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning (pp. 1378-1387).
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., … Hassabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature.
Henaff, M., Weston, J., Szlam, A., Bordes, A., & LeCun, Y. (2017). Tracking the World State with Recurrent Entity Networks. In Proceedings of ICLR 2017.
Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems.
McCann, B., Bradbury, J., Xiong, C., & Socher, R. (2017). Learned in Translation: Contextualized Word Vectors. In Advances in Neural Information Processing Systems.
Conneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
Subramanian, S., Trischler, A., Bengio, Y., & Pal, C. J. (2018). Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning. In Proceedings of ICLR 2018.
Dai, A. M., & Le, Q. V. (2015). Semi-supervised Sequence Learning. Advances in Neural Information Processing Systems (NIPS '15). Retrieved from http://arxiv.org/abs/1511.01432
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. In Proceedings of NAACL-HLT 2018.
Howard, J., & Ruder, S. (2018). Universal Language Model Fine-tuning for Text Classification. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1801.06146
Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., & Dyer, C. (2016). Neural Architectures for Named Entity Recognition. In NAACL-HLT 2016.
Plank, B., Søgaard, A., & Goldberg, Y. (2016). Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Ling, W., Trancoso, I., Dyer, C., & Black, A. (2016). Character-based Neural Machine Translation. In ICLR. Retrieved from http://arxiv.org/abs/1511.04586
Lee, J., Cho, K., & Bengio, Y. (2017). Fully Character-Level Neural Machine Translation without Explicit Segmentation. In Transactions of the Association for Computational Linguistics.
Jia, R., & Liang, P. (2017). Adversarial Examples for Evaluating Reading Comprehension Systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
Miyato, T., Dai, A. M., & Goodfellow, I. (2017). Adversarial Training Methods for Semi-supervised Text Classification. In Proceedings of ICLR 2017.
Yasunaga, M., Kasai, J., & Radev, D. (2018). Robust Multilingual Part-of-Speech Tagging via Adversarial Training. In Proceedings of NAACL 2018. Retrieved from http://arxiv.org/abs/1711.04903
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., … Lempitsky, V. (2016). Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research, 17.
Kim, Y., Stratos, K., & Kim, D. (2017). Adversarial Adaptation of Synthetic or Stale Data. In Proceedings of ACL (pp. 1297–1307).
Semeniuta, S., Severyn, A., & Gelly, S. (2018). On Accurate Evaluation of GANs for Language Generation. Retrieved from http://arxiv.org/abs/1806.04936
Fang, M., Li, Y., & Cohn, T. (2017). Learning how to Active Learn: A Deep Reinforcement Learning Approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Retrieved from https://arxiv.org/pdf/1708.02383v1.pdf
Wu, J., Li, L., & Wang, W. Y. (2018). Reinforced Co-Training. In Proceedings of NAACL-HLT 2018.
Paulus, R., Xiong, C., & Socher, R. (2018). A deep reinforced model for abstractive summarization. In Proceedings of ICLR 2018.
Celikyilmaz, A., Bosselut, A., He, X., & Choi, Y. (2018). Deep communicating agents for abstractive summarization. In Proceedings of NAACL-HLT 2018.
Ranzato, M. A., Chopra, S., Auli, M., & Zaremba, W. (2016). Sequence level training with recurrent neural networks. In Proceedings of ICLR 2016.
Wang, X., Chen, W., Wang, Y.-F., & Wang, W. Y. (2018). No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1804.09160
Liu, B., Tür, G., Hakkani-Tür, D., Shah, P., & Heck, L. (2018). Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems. In Proceedings of NAACL-HLT 2018.
Kuncoro, A., Dyer, C., Hale, J., Yogatama, D., Clark, S., & Blunsom, P. (2018). LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better. In Proceedings of ACL 2018 (pp. 1–11). Retrieved from http://aclweb.org/anthology/P18-1132
Blevins, T., Levy, O., & Zettlemoyer, L. (2018). Deep RNNs Encode Soft Hierarchical Syntax. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1805.04218
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. R. (2018). GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.
McCann, B., Keskar, N. S., Xiong, C., & Socher, R. (2018). The Natural Language Decathlon: Multitask Learning as Question Answering.
Lample, G., Denoyer, L., & Ranzato, M. (2018). Unsupervised Machine Translation Using Monolingual Corpora Only. In Proceedings of ICLR 2018.
Artetxe, M., Labaka, G., Agirre, E., & Cho, K. (2018). Unsupervised Neural Machine Translation. In Proceedings of ICLR 2018. Retrieved from http://arxiv.org/abs/1710.11041
Graves, A., Jaitly, N., & Mohamed, A. R. (2013, December). Hybrid speech recognition with deep bidirectional LSTM. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on (pp. 273-278). IEEE.
Ramachandran, P., Liu, P. J., & Le, Q. V. (2017). Unsupervised Pretraining for Sequence to Sequence Learning. In Proceedings of EMNLP 2017.
Baker, C. F., Fillmore, C. J., & Lowe, J. B. (1998, August). The berkeley framenet project. In Proceedings of the 17th international conference on Computational linguistics-Volume 1 (pp. 86-90). Association for Computational Linguistics.
Tjong Kim Sang, E. F., & Buchholz, S. (2000, September). Introduction to the CoNLL-2000 shared task: Chunking. In Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational natural language learning-Volume 7 (pp. 127-132). Association for Computational Linguistics.
Tjong Kim Sang, E. F., & De Meulder, F. (2003, May). Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4 (pp. 142-147). Association for Computational Linguistics.
Buchholz, S., & Marsi, E. (2006, June). CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the tenth conference on computational natural language learning (pp. 149-164). Association for Computational Linguistics.
Lafferty, J., McCallum, A., & Pereira, F. C. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002, July). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics (pp. 311-318). Association for Computational Linguistics.
Collins, M. (2002, July). Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10 (pp. 1-8). Association for Computational Linguistics.
Pang, B., Lee, L., & Vaithyanathan, S. (2002, July). Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10 (pp. 79-86). Association for Computational Linguistics.
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of machine Learning research, 3(Jan), 993-1022.
Taskar, B., Guestrin, C., & Koller, D. (2004). Max-margin Markov networks. In Advances in neural information processing systems (pp. 25-32).
Taskar, B., Klein, D., Collins, M., Koller, D., & Manning, C. (2004). Max-margin parsing. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.
Hovy, E., Marcus, M., Palmer, M., Ramshaw, L., & Weischedel, R. (2006, June). OntoNotes: the 90% solution. In Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers (pp. 57-60). Association for Computational Linguistics.
Milne, D., & Witten, I. H. (2008, October). Learning to link with wikipedia. In Proceedings of the 17th ACM conference on Information and knowledge management (pp. 509-518). ACM.
Mintz, M., Bills, S., Snow, R., & Jurafsky, D. (2009, August). Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2 (pp. 1003-1011). Association for Computational Linguistics.
Ling, W., Luis, T., Marujo, L., Astudillo, R. F., Amir, S., Dyer, C., … Trancoso, I. (2015). Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. In Proceedings of EMNLP 2015 (pp. 1520–1530).
Ballesteros, M., Dyer, C., & Smith, N. A. (2015). Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs. In Proceedings of EMNLP 2015.
Kim, Y., Jernite, Y., Sontag, D., & Rush, A. M. (2016). Character-Aware Neural Language Models. In Proceedings of AAAI 2016
Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In 30th Conference on Neural Information Processing Systems (NIPS 2016).
Kingsbury, P., & Palmer, M. (2002, May). From TreeBank to PropBank. In LREC (pp. 1989-1993).
Nivre, J., De Marneffe, M. C., Ginter, F., Goldberg, Y., Hajic, J., Manning, C. D., ... & Tsarfaty, R. (2016, May). Universal Dependencies v1: A Multilingual Treebank Collection. In LREC.
http://ruder.io/acl-2018-highlights/ (Thu, 26 Jul 2018)
This post discusses highlights of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018).
I attended the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018) in Melbourne, Australia from July 15-20, 2018 and presented three papers. It is foolhardy to try to condense an entire conference into one topic; however, in retrospect, certain themes appear particularly pronounced. In 2015 and 2016, NLP conferences were dominated by word embeddings and some people were musing that Embedding Methods in Natural Language Processing was a more appropriate name for the Conference on Empirical Methods in Natural Language Processing, one of the top conferences in the field.
According to Chris Manning, 2017 was the year of the BiLSTM with attention. While BiLSTMs, optionally with attention, are still ubiquitous, the main themes of this conference for me were to gain a better understanding of what the representations of such models capture and to expose them to more challenging settings. In my review, I will mainly focus on contributions that touch on these themes but will also discuss other themes that I found of interest.
Understanding Representations
Probing models
It was very refreshing to see that rather than introducing ever shinier new models, many papers methodically investigated existing models and what they capture. This was most commonly done by automatically creating a dataset that focuses on one particular aspect of the generalization behaviour and evaluating different trained models on this dataset:
Conneau et al. for instance evaluate different sentence embedding methods on ten datasets designed to capture certain linguistic features, such as predicting the length of a sentence, recovering word content, sensitivity to bigram shift, etc. They find that different encoder architectures can result in embeddings with different characteristics and that bag-of-embeddings is surprisingly good at capturing sentence-level properties, among other results.
Zhu et al. evaluate sentence embeddings by observing the change in similarity of generated triplets of sentences that differ in a certain semantic or syntactic aspect. They find---among other things---that SkipThought and InferSent can distinguish negation from synonymy, while InferSent is better at identifying semantic equivalence and dealing with quantifiers.
Pezzelle et al. focus specifically on quantifiers and test different CNN and LSTM models on their ability to predict quantifiers in single-sentence and multi-sentence contexts. They find that in single-sentence context, models outperform humans, while humans are slightly better in multi-sentence contexts.
Kuncoro et al. evaluate LSTMs on modeling subject-verb agreement. They find that with enough capacity, LSTMs can model subject-verb agreement, but that more syntax-sensitive models such as recurrent neural network grammars do even better.
Blevins et al. evaluate models pretrained on different tasks whether they capture a hierarchical notion of syntax. Specifically, they train the models to predict POS tags as well as constituent labels at different depths of a parse tree. They find that all models indeed encode a significant amount of syntax and---in particular---that language models learn some syntax.
Khandelwal et al. show that LSTM language models use about 200 tokens of context on average. Word order is only relevant within the most recent sentence.
Another interesting result regarding the generalization ability of language models is due to Lau et al. who find that a language model trained on a sonnet corpus captures meter implicitly at human-level performance.
Language models, however, also have their limitations. Spithourakis and Riedel observe that language models are bad at modelling numerals and propose several strategies to improve them.
At the Repl4NLP workshop, Liu et al. show that LSTMs trained on natural language data are able to recall tokens from much longer sequences than models trained on non-language data.
In particular, I think better understanding what information LSTMs and language models capture will become more important, as they seem to be a key driver of progress in NLP going forward, as evidenced by our ACL paper on language model fine-tuning and related approaches.
Understanding state-of-the-art models
While the above studies try to understand a specific aspect of the generalization ability of a particular model class, several papers focus on better understanding state-of-the-art models for a particular task:
Glockner et al. focused on the task of natural language inference. They created a dataset with sentences that differ by at most one word from sentences in the training data in order to probe if models can deal with simple lexical inferences. They find that current state-of-the-art models fail to capture many simple inferences.
Mudrakarta et al. analyse state-of-the-art QA models across different modalities and find that the models often ignore key question terms. They then perturb questions to craft adversarial examples that significantly lower models' accuracy.
I found many of the papers probing different aspects of models stimulating. I hope that the generation of such probing datasets will become a standard tool in the toolkit of every NLP researcher, so that we will not only see more of such papers in the future but that such an analysis may also become part of the standard model evaluation, besides error and ablation analyses.
Analyzing the inductive bias
Another way to gain a better understanding of a model is to analyze its inductive bias. The Workshop on Relevance of Linguistic Structure in Neural Architectures for NLP (RELNLP) sought to explore how useful it is to incorporate linguistic structure into our models. One of the key points of Chris Dyer's talk during the workshop was whether RNNs have a useful inductive bias for NLP. In particular, he argued that there are several pieces of evidence indicating that RNNs prefer sequential recency, namely:
Gradients become attenuated across time. LSTMs or GRUs may help with this, but they also forget.
People have used training regimes like reversing the input sequence for machine translation.
People have used enhancements like attention to have direct connections back in time.
For modeling subject-verb agreement, the error rate increases with the number of attractors.
According to Chomsky, sequential recency is not the right bias for learning human language. RNNs thus don't seem to have the right bias for modeling language, which in practice can lead to statistical inefficiency and poor generalization behaviour. Recurrent neural network grammars, a class of models that generates both a tree and a sequence sequentially by compressing a sentence into its constituents, instead have a bias for syntactic (rather than sequential) recency.
However, it can often be hard to identify whether a model has a useful inductive bias. For identifying subject-verb agreement, Chris hypothesizes that LSTM language models learn a non-structural "first noun" heuristic that relies on matching the verb to the first noun in the sentence. In general, perplexity (and other aggregate metrics) correlates with syntactic/structural competence, but is not particularly sensitive for distinguishing structurally sensitive models from models that rely on a simpler heuristic.
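To make the hypothesized heuristic concrete, here is a small Python sketch; the part-of-speech and number annotations are assumed to come from a tagger, and the example sentence is made up.

```python
def first_noun_heuristic(tagged_tokens):
    """Predict the verb's number from the first noun in the sentence,
    ignoring syntactic structure entirely."""
    for word, pos, number in tagged_tokens:
        if pos == "NOUN":
            return number
    return "singular"  # arbitrary fallback if the sentence contains no noun

# "Near the cabinets, the key ...": the first noun is the plural attractor
# "cabinets", so the heuristic wrongly predicts a plural verb even though the
# true subject "key" is singular; a structurally sensitive model should not.
sentence = [("near", "ADP", None), ("the", "DET", None), ("cabinets", "NOUN", "plural"),
            ("the", "DET", None), ("key", "NOUN", "singular")]
print(first_noun_heuristic(sentence))  # -> "plural"
```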
Using Deep Learning to understand language
In his talk at the workshop, Mark Johnson opined that while Deep Learning has revolutionized NLP, its primary benefit is economic: complex component pipelines have been replaced with end-to-end models and target accuracy can often be achieved more quickly and cheaply. Deep Learning has not changed our understanding of language. Its main contribution in this regard is to demonstrate that a neural network aka a computational model can perform certain NLP tasks, which shows that these tasks are not indicators of intelligence. While DL methods can pattern match and perform perceptual tasks really well, they struggle with tasks relying on deliberate reflection and conscious thought.
Incorporating linguistic structure
Jason Eisner questioned in his talk whether linguistic structures and categories actually exist or whether "scientists just like to organize data into piles" given that a linguistics-free approach works surprisingly well for MT. He finds that even "arbitrarily defined" categories such as the difference between the /b/ and /p/ phonemes can become hardened and accrue meaning. However, neural models are pretty good sponges to soak up whatever isn't modeled explicitly.
He outlines four common ways to introduce linguistic information into models: a) via a pipeline-based approach, where linguistic categories are used as features; b) via data augmentation, where the data is augmented with linguistic categories; c) via multi-task learning; d) via structured modeling such as using a transition-based parser, a recurrent neural network grammar, or even classes that depend on each other such as BIO notation.
In her talk at the workshop, Emily Bender questioned the premise of linguistics-free learning altogether: Even if you had a huge corpus in a language that you knew nothing about, without any other priors, e.g. what function words are, you would not be able to learn sentence structure or meaning. She also pointedly called out many ML papers that describe their approach as similar to how babies learn, without citing any actual developmental psychology or language acquisition literature. Babies in fact learn in situated, joint, emotional context, which carries a lot of signal and meaning.
Understanding the failure modes of LSTMs
Better understanding representations was also a theme at the Representation Learning for NLP workshop. During his talk, Yoav Goldberg detailed some of the efforts of his group to better understand representations of RNNs. In particular, he discussed recent work on extracting a finite state automaton from an RNN in order to better understand what the model has learned. He also reminded the audience that LSTM representations, even though they have been trained on one task, are not task-specific. They are often predictive of unintended aspects such as demographics in the data. Even when a model has been trained using a domain-adversarial loss to produce representations that are invariant of a certain aspect, the representations will still be slightly predictive of said attribute. It can thus be a challenge to completely remove unwanted information from encoded language data and even seemingly perfect LSTM models may have hidden failure modes.
On the topic of failure modes of LSTMs, a statement that also fits well in this theme was uttered by this year's recipient of the ACL lifetime achievement award, Mark Steedman. He asked "LSTMs work in practice, but can they work in theory?".
Evaluation in more challenging settings
Adversarial examples
A theme that is closely interlinked with gaining a better understanding of the limitations of state-of-the-art models is to propose ways in which they can be improved. In particular, similar to the adversarial examples paper mentioned above, several papers tried to make models more robust to adversarial examples:
Cheng et al. propose to make both the encoder and decoder in NMT models more robust against input perturbations.
Ebrahimi et al. propose white-box adversarial examples to trick a character-level neural classifier by swapping a few tokens.
Ribeiro et al. improve upon the previous method with semantic-preserving perturbations that induce changes in the model's predictions, which they generalize to rules that induce adversaries on many instances.
Bose et al. incorporate adversarial examples into noise contrastive estimation using an adversarially learned sampler. The sampler finds harder negative examples, which forces the model to learn better representations.
Learning robust and fair representations
Tim Baldwin discussed different ways to make models more robust to domain shift during his talk at the RepL4NLP workshop. The slides can be found here. For the setting with a single source domain, he discussed a method that linguistically perturbs training instances based on different types of syntactic and semantic noise. In the setting with multiple source domains, he proposed to train an adversarial model on the source domains. Finally, he discussed a method for learning robust and privacy-preserving text representations.
Margaret Mitchell focused on fair and privacy-preserving representations during her talk at the workshop. In particular, she highlighted the difference between a descriptive and a normative view of the world. ML models learn representations that reflect a descriptive view of the data they're trained on. The data represents "the world as people talk about it". Research in fairness conversely seeks to create representations that reflect a normative view of the world, which captures our values and seeks to instill them in the representations.
Improving evaluation methodology
Besides making models more robust, several papers sought to improve the way we evaluate our models:
Finegan-Dollak et al. identify limitations and propose improvements to current evaluations of text-to-SQL systems. They show that the current train-test split and practice of anonymization of variables are flawed and release standardized and improved versions of seven datasets to mitigate these.
Dror et al. focus on a practice that should be commonplace, but is often not done or done poorly: statistical significance testing. In particular, they survey recent empirical papers in ACL and TACL 2017 finding that statistical significance testing is often ignored or misused and propose a simple protocol for statistical significance test selection for NLP tasks.
Chaganty et al. investigate the bias of automatic metrics such as BLEU and ROUGE and find that even an unbiased estimator only achieves a comparatively low error reduction. This highlights the need to both improve the correlation of automatic metrics and reduce the variance of human annotation.
Strong baselines
Another way to improve model evaluation is to compare new models against stronger baselines, in order to make sure that improvements are actually significant. Some papers focused on this line of research:
Shen et al. systematically compare simple word embedding-based methods with pooling to more complex models such as LSTMs and CNNs. They find that for most datasets, word embedding-based methods exhibit competitive or even superior performance.
Ethayarajh proposes a strong baseline for sentence embedding models at the RepL4NLP workshop.
In a similar vein, Ruder and Plank find that classic bootstrapping algorithms such as tri-training make for strong baselines for semi-supervised learning and even outperform recent state-of-the-art methods.
In the above paper, we also emphasize the importance of evaluating in more challenging settings, such as on out-of-distribution data and on different tasks. Our findings would have been different if we had just focused on a single task or only on in-domain data. We need to test our models under such adverse conditions to get a better sense of their robustness and how well they can actually generalize.
Creating harder datasets
In order to evaluate under such settings, more challenging datasets need to be created. Yejin Choi argued during the RepL4NLP panel discussion (a summary can be found here) that the community pays a lot of attention to easier tasks such as SQuAD or bAbI, which are close to solved. Yoav Goldberg even went so far as to say that "SQuAD is the MNIST of NLP". Instead, we should focus on solving harder tasks and develop more datasets with increasing levels of difficulty. If a dataset is too hard, people don't work on it. In particular, the community should not work on datasets for too long as datasets are getting solved very fast these days; creating novel and more challenging datasets is thus even more important. Two datasets that seek to go beyond SQuAD for reading comprehension were presented at the conference:
QAngaroo focuses on reading comprehension that requires gathering several pieces of information via multiple steps of inference.
NarrativeQA requires understanding of an underlying narrative by asking the reader to answer questions about stories by reading entire books or movie scripts.
Richard Socher also stressed the importance of training and evaluating a model across multiple tasks during his talk at the Machine Reading for Question Answering workshop. In particular, he argues that NLP requires many types of reasoning, e.g. logical, linguistic, emotional, etc., which cannot all be satisfied by a single task.
Evaluation on multiple and low-resource languages
Another facet of this is to evaluate our models on multiple languages. Emily Bender surveyed 50 NAACL 2018 papers in her talk mentioned above and found that 42 papers evaluate on an unnamed mystery language (i.e. English). She emphasizes that it is important to name the language you work on as languages have different linguistic structures; not mentioning the language obfuscates this fact.
If our methods are designed to be cross-lingual, then we should additionally evaluate them on the more challenging setting of low-resource languages. For instance, both of the following two papers observe that current unsupervised bilingual dictionary induction methods fail if the target language is dissimilar to the source language, as is the case for Estonian or Finnish:
Søgaard et al. probe the limitations of current methods further and highlight that such methods also fail when embeddings are trained on different domains or using different algorithms. They finally propose a metric to quantify the potential of such methods.
Artetxe et al. propose a new unsupervised self-training method that employs a better initialization to steer the optimization process and is particularly robust for dissimilar language pairs.
Several other papers also evaluate their approaches on low-resource languages:
Riley and Gildea propose to use orthographic features for bilingual lexicon induction. Though these mostly help for related languages, they also evaluate on the dissimilar language pair English-Finnish.
Ren et al. finally propose to leverage another resource-rich language for translation into a resource-poor language. They find that their model significantly improves the translation quality of rare languages.
Currey and Heafield propose an unsupervised tree-to-sequence model for NMT by adapting the Gumbel tree-LSTM. Their model proves particularly useful for low-resource languages.
Progress in NLP
Another theme during the conference for me was that the field is visibly making progress. Marti Hearst, president of the ACL, echoed this sentiment during her presidential address. She used the example of Stanley Kubrick's HAL 9000 (seen below) to demonstrate what our models can and can't do. In recent years, this has become a less useful exercise as our models have learned to perform tasks that seemed previously decades away such as recognizing and producing human speech or lipreading[1]. Naturally, we are still far away from tasks that require deep language understanding and reasoning such as having an argument; nevertheless, this progress is remarkable.
Hal 9000. (Source: CC BY 3.0, Wikimedia)
Marti also paraphrased NLP and IR pioneer Karen Spärck Jones saying that research is not going around in circles, but climbing a spiral or---maybe more fittingly---different staircases that are not necessarily linked but go in the same direction. She also expressed a sentiment that seems to resonate with a lot of people: In the 1980s and 90s, with only a few papers to read, it was definitely easier to keep track of the state of the art. To make this easier, I have recently created a document to collect the state of the art across different NLP tasks.
With the community growing, she encouraged people to participate and volunteer and announced an ACL Distinguished Service Award for the most dedicated members. ACL 2018 also saw the launch (after EACL in 1982 and NAACL in 2000) of its third chapter, AACL, the Asia-Pacific Chapter of the Association for Computational Linguistics.
The business meeting during the conference focused on measures to address a particular challenge of the growing community: the escalating number of submissions and the need for more reviewers. We can expect to see new efforts to deal with the large number of submissions at the conferences next year.
Back in 2016, it seemed as though reinforcement learning (RL) was finding its footing in NLP and being applied to more and more tasks. These days, it seems that the dynamic nature of RL makes it most useful for tasks that intrinsically have some temporal dependency such as selecting data during training[1][1] and modelling dialogue, while supervised learning seems to be better suited for most other tasks. Another important application of RL is to optimize the end metric such as ROUGE or BLEU directly instead of optimizing a surrogate loss such as cross-entropy. Successful applications of this are summarization[1][1] and machine translation[1].
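As a sketch of how such direct metric optimization typically works, the snippet below uses a REINFORCE-style loss that scales the log-likelihood of a sampled output sequence by its (non-differentiable) reward; the tensors are placeholders for quantities that would come from the decoder and the evaluation metric, and in practice a baseline is usually subtracted from the reward to reduce variance.

```python
import torch

def reinforce_loss(token_log_probs: torch.Tensor, reward: float) -> torch.Tensor:
    """Scale the log-likelihood of a sampled sequence by its reward (e.g.
    sentence-level BLEU or ROUGE), so gradients favour higher-scoring samples."""
    return -(reward * token_log_probs.sum())

# Hypothetical usage: token_log_probs would come from sampling a sequence from
# the decoder, and reward from scoring that sample against the reference.
token_log_probs = torch.tensor([-0.7, -1.2, -0.3], requires_grad=True)
reward = 0.42  # e.g. sentence-level BLEU of the sampled output
loss = reinforce_loss(token_log_probs, reward)
loss.backward()
```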
Inverse reinforcement learning can be valuable in settings where the reward is too complex to be specified. A successful application of this is visual storytelling[1]. RL is particularly promising for sequential decision making problems in NLP such as playing text-based games, navigating webpages, and completing tasks. The Deep Reinforcement Learning for NLP tutorial provided a comprehensive overview of the space.
There were other great tutorials as well. I particularly enjoyed the Variational Inference and Deep Generative Models tutorial. The tutorials on Semantic Parsing and about "100 things you always wanted to know about semantics & pragmatics" also seemed really worthwhile. A complete list of the tutorials can be found here.
Cover image: View from the conference venue.
Thanks to Isabelle Augenstein for some paper suggestions.
Chung, J. S., & Zisserman, A. (2016, November). Lip reading in the wild. In Asian Conference on Computer Vision (pp. 87-103). Springer, Cham.
Glockner, M., Shwartz, V., & Goldberg, Y. (2018). Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1805.02266.
Zhu, X., Li, T., & De Melo, G. (2018). Exploring Semantic Properties of Sentence Embeddings. In Proceedings of ACL 2018 (pp. 1–6). Retrieved from http://aclweb.org/anthology/P18-2100
Pezzelle, S., Steinert-Threlkeld, S., Bernardi, R., & Szymanik, J. (2018). Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1806.00354
Conneau, A., Kruszewski, G., Lample, G., Barrault, L., & Baroni, M. (2018). What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1805.01070
Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Semantically Equivalent Adversarial Rules for Debugging NLP Models. In Proceedings of ACL 2018.
Dror, R., Baumer, G., Shlomov, S., & Reichart, R. (2018). The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing. In Proceedings of ACL 2018. Retrieved from https://ie.technion.ac.il/~roiri/papers/ACL-2018-sig-cr.pdf
Riley, P., & Gildea, D. (2018). Orthographic Features for Bilingual Lexicon Induction. In Proceedings of ACL 2018.
Lau, J. H., Cohn, T., Baldwin, T., Brooke, J., & Hammond, A. (2018). Deep-speare: A Joint Neural Model of Poetic Language, Meter and Rhyme. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1807.03491
Chaganty, A. T., Mussman, S., & Liang, P. (2018). The price of debiasing automatic metrics in natural language evaluation. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1807.02202
Finegan-Dollak, C., Kummerfeld, J. K., Zhang, L., Ramanathan, K., Sadasivam, S., Zhang, R., & Radev, D. (2018). Improving Text-to-SQL Evaluation Methodology. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1806.09029
Shen, D., Wang, G., Wang, W., Min, M. R., Su, Q., Zhang, Y., … Carin, L. (2018). Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms. In Proceedings of ACL 2018.
Ren, S., Chen, W., Liu, S., Li, M., Zhou, M., & Ma, S. (2018). Triangular Architecture for Rare Language Translation. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1805.04813
Cheng, Y., Tu, Z., Meng, F., Zhai, J., & Liu, Y. (2018). Towards Robust Neural Machine Translation. In Proceedings of ACL 2018.
Mudrakarta, P. K., Taly, A., Brain, G., Sundararajan, M., & Google, K. D. (2018). Did the Model Understand the Question? In Proceedings of ACL 2018. Retrieved from https://arxiv.org/pdf/1805.05492.pdf
Spithourakis, G. P., & Riedel, S. (2018). Numeracy for Language Models: Evaluating and Improving their Ability to Predict Numbers. In Proceedings of ACL 2018.
Ruder, S., & Plank, B. (2018). Strong Baselines for Neural Semi-supervised Learning under Domain Shift. In Proceedings of ACL 2018.
Ebrahimi, J., Rao, A., Lowd, D., & Dou, D. (2018). HotFlip: White-Box Adversarial Examples for Text Classification. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1712.06751
Linzen, T., Dupoux, E., & Goldberg, Y. (2016). Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Proceedings of TACL 2016. Retrieved from http://arxiv.org/abs/1611.01368
Currey, A., & Heafield, K. (2018). Unsupervised Source Hierarchies for Low-Resource Neural Machine Translation. In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP (pp. 1–7).
Li, Y., Cohn, T., & Baldwin, T. (2017). Robust Training under Linguistic Adversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (Vol. 2, pp. 21–27).
Li, Y., Baldwin, T., & Cohn, T. (2018). Learning Domain-Robust Text Representations using Adversarial Training. In Proceedings of NAACL-HLT 2018.
Li, Y., Baldwin, T., & Cohn, T. (2018). Towards Robust and Privacy-preserving Text Representations. In Proceedings of ACL 2018.
Weiss, G., Goldberg, Y., & Yahav, E. (2018). Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples. In Proceedings of ICML 2018. Retrieved from http://arxiv.org/abs/1711.09576
Ethayarajh, K. (2018). Unsupervised Random Walk Sentence Embeddings: A Strong but Simple Baseline. In Proceedings of the 3rd Workshop on Representation Learning for NLP. Retrieved from http://www.aclweb.org/anthology/W18-3012
Liu, N. F., Levy, O., Schwartz, R., Tan, C., & Smith, N. A. (2018). LSTMs Exploit Linguistic Attributes of Data. In Proceedings of the 3rd Workshop on Representation Learning for NLP. Retrieved from http://arxiv.org/abs/1805.11653
Welbl, J., Stenetorp, P., & Riedel, S. (2018). Constructing Datasets for Multi-hop Reading Comprehension Across Documents. In Transactions of the Association for Computational Linguistics. Retrieved from http://arxiv.org/abs/1710.06481
Kočiský, T., Schwarz, J., Blunsom, P., Dyer, C., Hermann, K. M., Melis, G., & Grefenstette, E. (2018). The NarrativeQA Reading Comprehension Challenge. Transactions of the Association for Computational Linguistics. Retrieved from http://arxiv.org/abs/1712.07040
Bose, A., Ling, H., & Cao, Y. (2018). Adversarial Contrastive Estimation. In Proceedings of ACL 2018.
Khandelwal, U., He, H., Qi, P., & Jurafsky, D. (2018). Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context. In Proceedings of ACL 2018.
http://ruder.io/nlp-imagenet/ (Thu, 12 Jul 2018)
This post discusses pretrained language models, one of the most exciting directions in contemporary NLP.
This post originally appeared at TheGradient and was edited by Andrey Kurenkov, Eric Wang, and Aditya Ganesh.
Big changes are underway in the world of Natural Language Processing (NLP). The long reign of word vectors as NLP's core representation technique has seen an exciting new line of challengers emerge: ELMo[1], ULMFiT[2], and the OpenAI transformer[3]. These works made headlines by demonstrating that pretrained language models can be used to achieve state-of-the-art results on a wide range of NLP tasks. Such methods herald a watershed moment: they may have the same wide-ranging impact on NLP as pretrained ImageNet models had on computer vision.
From Shallow to Deep Pre-Training
Pretrained word vectors have brought NLP a long way. Proposed in 2013 as an approximation to language modeling, word2vec[4] found adoption through its efficiency and ease of use in a time when hardware was a lot slower and deep learning models were not widely supported. Since then, the standard way of conducting NLP projects has largely remained unchanged: word embeddings pretrained on large amounts of unlabeled data via algorithms such as word2vec and GloVe[5] are used to initialize the first layer of a neural network, the rest of which is then trained on data of a particular task. On most tasks with limited amounts of training data, this led to a boost of two to three percentage points[6]. Though these pretrained word embeddings have been immensely influential, they have a major limitation: they only incorporate previous knowledge in the first layer of the model---the rest of the network still needs to be trained from scratch.
Relations captured by word2vec. (Source: TensorFlow tutorial)
Word2vec and related methods are shallow approaches that trade expressivity for efficiency. Using word embeddings is like initializing a computer vision model with pretrained representations that only encode edges: they will be helpful for many tasks, but they fail to capture higher-level information that might be even more useful. A model initialized with word embeddings needs to learn from scratch not only to disambiguate words, but also to derive meaning from a sequence of words. This is the core aspect of language understanding, and it requires modeling complex language phenomena such as compositionality, polysemy, anaphora, long-term dependencies, agreement, negation, and many more. It should thus come as no surprise that NLP models initialized with these shallow representations still require a huge number of examples to achieve good performance.
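To make this standard recipe concrete, here is a minimal PyTorch sketch of "shallow" pretraining; the random matrix stands in for actual word2vec or GloVe vectors, and the model sizes are made up for the example.

```python
import torch
import torch.nn as nn

# Stand-in for pretrained word2vec/GloVe vectors; in practice these would be
# loaded from disk rather than sampled randomly.
vocab_size, embedding_dim = 20_000, 300
pretrained_vectors = torch.randn(vocab_size, embedding_dim)

# Only the first layer benefits from pretraining ...
embedding_layer = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)

# ... while everything above it is randomly initialized and trained from scratch
# on the data of the particular task.
encoder = nn.LSTM(embedding_dim, 256, batch_first=True)
classifier = nn.Linear(256, 2)
```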
At the core of the recent advances of ULMFiT, ELMo, and the OpenAI transformer is one key paradigm shift: going from just initializing the first layer of our models to pretraining the entire model with hierarchical representations. If learning word vectors is like only learning edges, these approaches are like learning the full hierarchy of features, from edges to shapes to high-level semantic concepts.
Interestingly, pretraining entire models to learn both low and high level features has been practiced for years by the computer vision (CV) community. Most often, this is done by learning to classify images on the large ImageNet dataset. ULMFiT, ELMo, and the OpenAI transformer have now brought the NLP community close to having an "ImageNet for language"---that is, a task that enables models to learn higher-level nuances of language, similarly to how ImageNet has enabled training of CV models that learn general-purpose features of images. In the rest of this piece, we'll unpack just why these approaches seem so promising by extending and building on this analogy to ImageNet.
ImageNet
The ImageNet Large Scale Visual Recognition Challenge. (Source: Xavier Giro-i-Nieto)
ImageNet's impact on the course of machine learning research can hardly be overstated. The dataset was originally published in 2009 and quickly evolved into the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). In 2012, the deep neural network[7] submitted by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton performed 41% better than the next best competitor, demonstrating that deep learning was a viable strategy for machine learning and arguably triggering the explosion of deep learning in ML research.
The success of ImageNet highlighted that in the era of deep learning, data was at least as important as algorithms. Not only did the ImageNet dataset enable that very important 2012 demonstration of the power of deep learning, but it also allowed a breakthrough of similar importance in transfer learning: researchers soon realized that the weights learned in state of the art models for ImageNet could be used to initialize models for completely other datasets and improve performance significantly. This "fine-tuning" approach allowed achieving good performance with as little as one positive example per category (Donahue et al., 2014)[8].
Features trained on ILSVRC-2012 generalize to the SUN-397 dataset. (Source: Donahue et al., 2014)
Pretrained ImageNet models have been used to achieve state-of-the-art results in tasks such as object detection[9], semantic segmentation[10], human pose estimation[11], and video recognition[12]. At the same time, they have enabled the application of CV to domains where the number of training examples is small and annotation is expensive. Transfer learning via pretraining on ImageNet is in fact so effective in CV that not using it is now considered foolhardy (Mahajan et al., 2018)[13].
What's in an ImageNet?
In order to determine what an ImageNet for language might look like, we first have to identify what makes ImageNet good for transfer learning. Previous studies[14] have only shed partial light on this question: reducing the number of examples per class or the number of classes only results in a small performance drop, while fine-grained classes and more data are not always better.
Rather than looking at the data directly, it is more prudent to probe what the models trained on the data learn. It is common knowledge that features of deep neural networks trained on ImageNet transition from general to task-specific from the first to the last layer[15]: lower layers learn to model low-level features such as edges, while higher layers model higher-level concepts such as patterns and entire parts or objects[16] as can be seen in the figure below. Importantly, knowledge of edges, structures, and the visual composition of objects is relevant for many CV tasks, which sheds light on why these layers are transferred. A key property of an ImageNet-like dataset is thus to encourage a model to learn features that will likely generalize to new tasks in the problem domain.
Visualization of the information captured by features across different layers in GoogLeNet trained on ImageNet. (Source: Distill)
Beyond this, it is difficult to make further generalizations about why transfer from ImageNet works quite so well. For instance, another possible advantage of the ImageNet dataset is the quality of the data. ImageNet's creators went to great lengths to ensure reliable and consistent annotations. However, work in distant supervision serves as a counterpoint, indicating that large amounts of weakly labelled data might often be sufficient. In fact, recently researchers at Facebook showed that they could pre-train a model by predicting hashtags on billions of social media images to state-of-the-art accuracy on ImageNet.
Without any more concrete insights, we are left with two key desiderata:
An ImageNet-like dataset should be sufficiently large, i.e. on the order of millions of training examples.
It should be representative of the problem space of the discipline.
An ImageNet for language
In NLP, models are typically a lot shallower than their CV counterparts. Analysis of features has thus mostly focused on the first embedding layer, and little work has investigated the properties of higher layers for transfer learning. Let us consider the datasets that are large enough, fulfilling desideratum #1. Given the current state of NLP, there are several contenders[17].
Reading comprehension is the task of answering a natural language question about a paragraph. The most popular dataset for this task is the Stanford Question Answering Dataset (SQuAD)[18], which contains more than 100,000 question-answer pairs and asks models to answer a question by highlighting a span in the paragraph as can be seen below.
Question-answer pairs for a sample passage in the SQuAD dataset (Rajpurkar et al., 2016)
Natural language inference is the task of identifying the relation (entailment, contradiction, and neutral) that holds between a piece of text and a hypothesis. The most popular dataset for this task, the Stanford Natural Language Inference (SNLI) Corpus[19], contains 570k human-written English sentence pairs. Examples of the dataset can be seen below.
Examples from the SNLI dataset. (Bowman et al., 2015)
Machine translation, translating text in one language to text in another language, is one of the most studied tasks in NLP, and---over the years---has accumulated vast amounts of training data for popular language pairs, e.g. 40M English-French sentence pairs in WMT 2014. See below for two example translation pairs.
French-to-English translations from newstest2014 (Artetxe et al., 2018)
Constituency parsing seeks to extract the syntactic structure of a sentence in the form of a (linearized) constituency parse tree as can be seen below. In the past, millions of weakly labelled parses[20] have been used for training sequence-to-sequence models for this task.
A parse tree and its linearization. (Vinyals et al., 2015)
Language modeling (LM) aims to predict the next word given its previous word. Existing benchmark datasets consist of up to 1B words, but as the task is unsupervised, any number of words can be used for training. See below for examples from the popular WikiText-2 dataset consisting of Wikipedia articles.
Examples from the WikiText-2 language modeling dataset. (Source: Salesforce)
All of these tasks provide---or would allow the collection of---a sufficient number of examples for training. Indeed, the[21] above[22] tasks[23] (and many others such as sentiment analysis[24], constituency parsing [20:1], skip-thoughts[25], and autoencoding[26]) have been used to pretrain representations in recent months.
While any data contains some bias[27], human annotators may inadvertently introduce additional signals that a model can exploit. Recent[28] studies[29] reveal that state-of-the-art models for tasks such as reading comprehension and natural language inference do not in fact exhibit deep natural language understanding but pick up on such cues to perform superficial pattern matching. For instance, Gururangan et al. (2018)[29:1] show that annotators tend to produce entailment examples simply by removing gender or number information and generate contradictions by introducing negations. A model that simply exploits these cues can correctly classify the hypothesis without looking at the premise in about 67% of the SNLI dataset.
The more difficult question thus is: Which task is most representative of the space of NLP problems? In other words, which task allows us to learn most of the knowledge or relations required for understanding natural language?
The case for language modelling
In order to predict the most probable next word in a sentence, a model is required not only to be able to express syntax (the grammatical form of the predicted word must match its modifier or verb) but also model semantics. Even more, the most accurate models must incorporate what could be considered world knowledge or common sense. Consider the incomplete sentence "The service was poor, but the food was". In order to predict the succeeding word such as "yummy" or "delicious", the model must not only memorize what attributes are used to describe food, but also be able to identify that the conjunction "but" introduces a contrast, so that the new attribute has the opposing sentiment of "poor".
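As a toy illustration of this objective in code, the following PyTorch sketch scores candidate next words for the example sentence and computes the cross-entropy loss used to train language models; the vocabulary, model sizes, and the choice of "delicious" as the target are all made up.

```python
import torch
import torch.nn as nn

# Toy vocabulary and context for "The service was poor, but the food was ...".
vocab = ["<unk>", "the", "service", "was", "poor", "but", "food", "delicious", "terrible"]
stoi = {w: i for i, w in enumerate(vocab)}

context = torch.tensor([[stoi[w] for w in ["the", "service", "was", "poor", "but", "the", "food", "was"]]])
target = torch.tensor([stoi["delicious"]])  # the word the model should rank highly

# A minimal next-word predictor: embed the context, run an LSTM, project the
# last hidden state onto the vocabulary, and train with cross-entropy.
embedding = nn.Embedding(len(vocab), 32)
lstm = nn.LSTM(32, 64, batch_first=True)
output_layer = nn.Linear(64, len(vocab))

hidden_states, _ = lstm(embedding(context))
logits = output_layer(hidden_states[:, -1])         # scores over the next word
loss = nn.functional.cross_entropy(logits, target)  # the language modeling objective
loss.backward()
```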
Language modelling, the last approach mentioned, has been shown to capture many facets of language relevant for downstream tasks, such as long-term dependencies[30], hierarchical relations[31], and sentiment[32]. Compared to related unsupervised tasks such as skip-thoughts and autoencoding, language modelling performs better on syntactic tasks even with less training data.
Among the biggest benefits of language modelling is that training data comes for free with any text corpus and that potentially unlimited amounts of training data are available. This is particularly significant, as NLP deals not only with the English language. More than 4,500 languages around the world are spoken by more than 1,000 speakers each. Language modeling as a pretraining task opens the door to developing models for previously underserved languages. For very low-resource languages where even unlabeled data is scarce, multilingual language models may be trained on multiple related languages at once, analogous to work on cross-lingual embeddings[33].
The different stages of ULMFiT. (Howard and Ruder, 2018)
So far, our argument for language modeling as a pretraining task has been purely conceptual. Pretraining a language model was first proposed in 2015 [26:1], but it remained unclear whether a single pretrained language model was useful for many tasks. In recent months, we finally obtained overwhelming empirical proof: Embeddings from Language Models (ELMo), Universal Language Model Fine-tuning (ULMFiT), and the OpenAI Transformer have empirically demonstrated how language modeling can be used for pretraining, as shown by the above figure from ULMFiT. All three methods employed pretrained language models to achieve state-of-the-art on a diverse range of tasks in Natural Language Processing, including text classification, question answering, natural language inference, coreference resolution, sequence labeling, and many others.
In many cases, such as with ELMo in the figure below, these improvements ranged from 10-20% better than the state-of-the-art on widely studied benchmarks, all with the single core method of leveraging a pretrained language model. ELMo furthermore won the best paper award at NAACL-HLT 2018, one of the top conferences in the field. Finally, these models have been shown to be extremely sample-efficient, achieving good performance with only hundreds of examples and are even able to perform zero-shot learning.
The improvements ELMo achieved on a wide range of NLP tasks. (Source: Matthew Peters)
In light of this step change, it is very likely that in a year's time NLP practitioners will download pretrained language models rather than pretrained word embeddings for use in their own models, similarly to how pre-trained ImageNet models are the starting point for most CV projects nowadays.
However, similar to word2vec, the task of language modeling naturally has its own limitations: It is only a proxy to true language understanding, and a single monolithic model is ill-equipped to capture the required information for certain downstream tasks. For instance, in order to answer questions about or follow the trajectory of characters in a story, a model needs to learn to perform anaphora or coreference resolution. In addition, language models can only capture what they have seen. Certain types of information, such as most common sense knowledge, are difficult to learn from text alone[34] and require incorporating external information.
One outstanding question is how to transfer the information from a pre-trained language model to a downstream task. The two main paradigms for this are whether to use the pre-trained language model as a fixed feature extractor and incorporate its representation as features into a randomly initialized model as used in ELMo, or whether to fine-tune the entire language model as done by ULMFiT. The latter fine-tuning approach is what is typically done in CV where either the top-most or several of the top layers are fine-tuned. While NLP models are typically more shallow and thus require different fine-tuning techniques than their vision counterparts, recent pretrained models are getting deeper. The next months will show the impact of each of the core components of transfer learning for NLP: an expressive language model encoder such as a deep BiLSTM or the Transformer, the amount and nature of the data used for pretraining, and the method used to fine-tune the pretrained model.
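The contrast between the two paradigms is easy to see in code. Below is a minimal PyTorch sketch rather than the actual ELMo or ULMFiT implementations: the LSTM stands in for a pretrained encoder loaded from a checkpoint, and the learning rates are purely illustrative.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained language model encoder; in practice this would be
# a deep BiLSTM or Transformer whose weights are loaded from a checkpoint.
pretrained_encoder = nn.LSTM(input_size=300, hidden_size=512, num_layers=2, batch_first=True)
task_head = nn.Linear(512, 5)  # randomly initialized task-specific classifier

# Paradigm 1: fixed feature extractor (as in ELMo) -- freeze the pretrained
# encoder and only train the task model on top of its representations.
for param in pretrained_encoder.parameters():
    param.requires_grad = False
feature_optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)

# Paradigm 2: fine-tuning (as in ULMFiT) -- update the encoder and the head
# jointly, typically with a smaller learning rate for the pretrained layers.
for param in pretrained_encoder.parameters():
    param.requires_grad = True
finetune_optimizer = torch.optim.Adam(
    [{"params": pretrained_encoder.parameters(), "lr": 1e-4},
     {"params": task_head.parameters(), "lr": 1e-3}]
)
```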
But where's the theory?
Our analysis thus far has been mostly conceptual and empirical, as it is still poorly understood why models trained on ImageNet---and consequently on language modeling---transfer so well. One way to think about the generalization behaviour of pretrained models more formally is under a model of bias learning (Baxter, 2000)[35]. Assume our problem domain covers all permutations of tasks in a particular discipline, e.g. computer vision, which forms our environment. We are provided with a number of datasets that allow us to induce a family of hypothesis spaces $\mathrm{H} = \{\mathcal{H}\}$. Our goal in bias learning is to find a bias, i.e. a hypothesis space $\mathcal{H} \in \mathrm{H}$ that maximizes performance on the entire (potentially infinite) environment.
Empirical and theoretical results in multi-task learning (Caruana, 1997; Baxter, 2000)[36][35:1] indicate that a bias that is learned on sufficiently many tasks is likely to generalize to unseen tasks drawn from the same environment. Viewed through the lens of multi-task learning, a model trained on ImageNet learns a large number of binary classification tasks (one for each class). These tasks, all drawn from the space of natural, real-world images, are likely to be representative of many other CV tasks. In the same vein, a language model---by learning a large number of classification tasks, one for each word---induces representations that are likely helpful for many other tasks in the realm of natural language. Still, much more research is necessary to gain a better theoretical understanding why language modeling seems to work so well for transfer learning.
The ImageNet moment
The time is ripe for practical transfer learning to make inroads into NLP. In light of the impressive empirical results of ELMo, ULMFiT, and OpenAI it only seems to be a question of time until pretrained word embeddings will be dethroned and replaced by pretrained language models in the toolbox of every NLP practitioner. This will likely open many new applications for NLP in settings with limited amounts of labeled data. The king is dead, long live the king!
For further reading, check out the discussion on HackerNews.
Cover image due to Matthew Peters.
Peters, Matthew E., et al. "Deep contextualized word representations." Proceedings of NAACL-HLT (2018). ↩︎
Howard, Jeremy, and Sebastian Ruder. "Fine-tuned Language Models for Text Classification." Proceedings of ACL (2018). ↩︎
Radford, Alec, et al. "Improving Language Understanding by Generative Pre-Training." ↩︎
Mikolov, Tomas, et al. "Distributed representations of words and phrases and their compositionality." Advances in neural information processing systems. 2013. ↩︎
Pennington, Jeffrey, Richard Socher, and Christopher Manning. "Glove: Global vectors for word representation." Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 2014. ↩︎
Kim, Yoon. "Convolutional neural networks for sentence classification." Proceedings of EMNLP (2014). ↩︎
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012. ↩︎
Donahue, Jeff, et al. "Decaf: A deep convolutional activation feature for generic visual recognition." International conference on machine learning. 2014. ↩︎
He, Kaiming, et al. "Mask r-cnn." Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017. ↩︎
Zhao, Hengshuang, et al. "Pyramid scene parsing network." IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2017. ↩︎
Papandreou, George, et al. "Towards accurate multi-person pose estimation in the wild." CVPR. Vol. 3. No. 4. 2017. ↩︎
Carreira, Joao, and Andrew Zisserman. "Quo vadis, action recognition? a new model and the kinetics dataset." Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017. ↩︎
Mahajan, Dhruv, et al. "Exploring the Limits of Weakly Supervised Pretraining." arXiv preprint arXiv:1805.00932 (2018). ↩︎
Huh, Minyoung, Pulkit Agrawal, and Alexei A. Efros. "What makes ImageNet good for transfer learning?." arXiv preprint arXiv:1608.08614 (2016). ↩︎
Yosinski, Jason, et al. "How transferable are features in deep neural networks?." Advances in neural information processing systems. 2014. ↩︎
Olah, Chris, Alexander Mordvintsev, and Ludwig Schubert. "Feature visualization." Distill 2.11 (2017): e7. ↩︎
For a comprehensive overview of progress in NLP tasks, you can refer to this GitHub repository. ↩︎
Rajpurkar, Pranav, et al. "Squad: 100,000+ questions for machine comprehension of text." Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP, 2016). ↩︎
Bowman, Samuel R., et al. "A large annotated corpus for learning natural language inference." Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP, 2015). ↩︎
Vinyals, Oriol, et al. "Grammar as a foreign language." Advances in Neural Information Processing Systems. 2015. ↩︎ ↩︎
Conneau, Alexis, et al. "Supervised learning of universal sentence representations from natural language inference data." Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP, 2017). ↩︎
Subramanian, Sandeep, et al. "Learning general purpose distributed sentence representations via large scale multi-task learning." Proceedings of the International Conference on Learning Representations, ICLR (2018). ↩︎
McCann, Bryan, et al. "Learned in translation: Contextualized word vectors." Advances in Neural Information Processing Systems. 2017. ↩︎
Felbo, Bjarke, et al. "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm." Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP, 2017). ↩︎
Kiros, Ryan, et al. "Skip-thought vectors." Advances in neural information processing systems. 2015. ↩︎
Dai, Andrew M., and Quoc V. Le. "Semi-supervised sequence learning." Advances in neural information processing systems. 2015. ↩︎
Bolukbasi, Tolga, et al. "Man is to computer programmer as woman is to homemaker? debiasing word embeddings." Advances in Neural Information Processing Systems. 2016. ↩︎
Chen, Danqi, Jason Bolton, and Christopher D. Manning. "A thorough examination of the cnn/daily mail reading comprehension task." Proceedings of the Meeting of the Association for Computational Linguistics (2016). ↩︎
Gururangan, Suchin, et al. "Annotation artifacts in natural language inference data." Proceedings of NAACL-HLT (2018). ↩︎ ↩︎
Linzen, T., Dupoux, E., & Goldberg, Y. (2016). Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Proceedings of TACL 2016. ↩︎
Gulordava, K., Bojanowski, P., Grave, E., Linzen, T., & Baroni, M. (2018). Colorless green recurrent networks dream hierarchically. In Proceedings of NAACL-HLT 2018. ↩︎
Radford, A., Jozefowicz, R., & Sutskever, I. (2017). Learning to Generate Reviews and Discovering Sentiment. arXiv preprint arXiv:1704.01444. ↩︎
Ruder, Sebastian, Ivan Vulic, and Anders Søgaard. "A Survey of Cross-lingual Word Embedding Models." Journal of Artificial Intelligence Research (2018). ↩︎
Lucy, Li, and Jon Gauthier. "Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning." Proceedings of the First Workshop on Language Grounding for Robotics (2017). ↩︎
Jonathan Baxter. 2000. A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research 12:149–198. ↩︎ ↩︎
Rich Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning. ↩︎
http://ruder.io/tracking-progress-nlp/ (Fri, 22 Jun 2018)
This post introduces a resource to track the progress and state-of-the-art across many tasks in NLP.
Go directly to the document tracking the progress in NLP.
Research in Machine Learning and in Natural Language Processing (NLP) is moving so fast these days, it is hard to keep up. This is an issue for people in the field, but it is an even bigger obstacle for people wanting to get into NLP and those seeking to make the leap from tutorials to reproducing papers and conducting their own research. Without expert guidance and prior knowledge, it can be a painstaking process to identify the most common datasets and the current state-of-the-art for your task of interest.
A number of resources exist that could help with this process, but each has deficits: The Association for Computational Linguistics (ACL) has a wiki page tracking the state-of-the-art, but the page is not maintained and contributing is not straightforward. The Electronic Frontier Foundation and the AI Index try to do something similar for all of AI but only cover a few language tasks. The Language Resources and Evaluation (LRE) Map collects language resources presented at LREC and other conferences, but does not allow breaking them out by task or popularity. Similarly, the International Workshop on Semantic Evaluation (SemEval) hosts a small number of tasks each year, which provide new datasets that typically have not been widely studied before. There are also resources that focus on computer vision and speech recognition as well as this repo, which focuses on all of ML.
As an alternative, I have created a GitHub repository that keeps track of the datasets and the current state-of-the-art for the most common tasks in NLP. The repository is kept as simple as possible to make maintenance and contribution easy. If I missed your favourite task or dataset or your new state-of-the-art result or if I made any error, you can simply submit a pull request.
The aim is to have a comprehensive and up-to-date resource where everyone can see at a glance the state-of-the-art for the tasks they care about. Datasets that already do a great job of tracking this via a public leaderboard, such as SQuAD or SNLI, will simply be referenced instead.
My hope is that such a resource will give a broader sense of progress in the field than results in individual papers. It might also make it easier to identify tasks or areas where progress has been lacking. Another benefit is that such a resource may encourage serendipity: chancing upon an interesting new task or method. Finally, a positive by-product of having the state-of-the-art for each task easily accessible may be that it will be harder to justify (accidentally) comparing to weak baselines. For instance, the perplexity of the best baseline on the Penn Treebank varied dramatically across 10 language modeling papers submitted to ICLR 2018 (see below).
Figure 1: Comparison of perplexity (PPL) of proposed model vs. PPL of best baseline across 10 language modeling papers submitted to ICLR 2018 (credit: @AaronJaech)
Credit for the cover image is due to the Electronic Frontier Foundation.
http://ruder.io/highlights-naacl-2018/ (Tue, 12 Jun 2018)
This post discusses highlights of NAACL-HLT 2018.
I attended NAACL-HLT 2018 in New Orleans last week. I didn't manage to catch as many talks and posters this time around (there were just too many inspiring people to talk to!), so my highlights and the trends I observed mainly focus on invited talks and workshops.
Specifically, my highlights concentrate on three topics, which were prominent throughout the conference: Generalization, the Test-of-Time awards, and Dialogue Systems. For more information about other topics, you can check out the conference handbook and the proceedings.
First of all, there were four quotes from the conference that particularly resonated with me (some of them are paraphrased):
People worked on MT before the BLEU score. - Kevin Knight
It's natural to work on tasks where evaluation is easy. Instead, we should encourage more people to tackle hard problems that are not easy to evaluate. These are often the most rewarding to work on.
BLEU is an understudy. It was never meant to replace human judgement and never expected to last this long. - Kishore Papineni, co-creator of BLEU, the most commonly used metric for machine translation.
No approach is perfect. Even the authors of landmark papers were aware their methods had flaws. The best we can do is provide a fair evaluation of our technique.
We decided to sample an equal number of positive and negative reviews---was that a good idea? - Bo Pang, first author of one of the first papers on sentiment analysis (7k+ citations).
In addition to being aware of the flaws of our method, we should be explicit about the assumptions we make so that future work can either build upon them or discard them if they prove unhelpful or turn out to be false.
I pose the following challenge to the community: we should evaluate on out-of-distribution data or on a new task. - Percy Liang.
We never know how well our model truly generalizes if we just test it on data of the same distribution. In order to develop models that can be applied to the real world, we need to evaluate on out-of-distribution data or on a new task. Percy Liang's quote ties into one of the topics that received increasing attention at the conference: how can we train models that are less brittle and that generalize better?
Over the last years, much of the research within the NLP community focused on improving LSTM-based models on benchmark tasks. At NAACL, it seemed that increasingly people were thinking about how to get models to generalize beyond the conditions during training, reflecting a similar sentiment in the Deep Learning community in general (a paper on generalization won the best paper award at ICLR 2017).
One aspect is generalizing from few examples, which is difficult with the current generation of neural network-based methods. Charles Yang, professor of Linguistics, Computer Science and Psychology at the University of Pennsylvania, put this in a cognitive science context.
Machine learning and NLP researchers in the neural network era frequently like to motivate their work by referencing the remarkable generalization ability of young children. One piece of information that is often elided, however, is that generalization in young children is also not without its errors, because it requires learning a rule and accounting for exceptions. For instance, when learning to count, young children still frequently make mistakes, as they have to balance rule-based generalization (for regular numbers such as sixteen, seventeen, eighteen, etc.) with memorizing exceptions (numbers such as fifteen, twenty, thirty, etc.).
However, once a child can flawlessly count to 72, it can generalize to any new numbers. This magic number, 72, is given by the so-called tolerance principle, which prescribes that in order for a generalization rule to be productive, there can be at most N/ln(N) exceptions, where N is the total number of examples as can be seen in the Figure below. For counting, 72/ln(72) ≈ 17, which is exactly the number of exceptions until 72.
Hitchhiker's Guide to the Galaxy fans, however, need not be disappointed: in Chinese, the magic number is 42. Chinese only has 11 exceptions. As 42/ln(42) ≈ 11, Chinese children typically only need to learn to count up to 42 in order to generalize, which explains why Chinese children usually learn to count faster.
Figure 1: The Tolerance Principle
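As a quick sanity check on the arithmetic, the threshold is easy to compute; the snippet below simply evaluates N/ln(N) for the English and Chinese counting cases discussed above.

```python
import math

def tolerance_threshold(n: int) -> float:
    """Maximum number of exceptions a rule over n examples can tolerate
    while remaining productive (Yang's tolerance principle)."""
    return n / math.log(n)

print(round(tolerance_threshold(72), 1))  # 16.8 -> matches the ~17 irregular English number words
print(round(tolerance_threshold(42), 1))  # 11.2 -> matches the 11 exceptions in Chinese
```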
It is also interesting to note that even though young children can count up to a certain number, they can't tell you, for example, which number comes after 24. Only after they've learned the rule can they actually apply it productively.
The tolerance principle implies that if the total number of examples covered by the rule is smaller, it is easier to incorporate comparatively more exceptions. While children can productively learn language from few examples, this indicates that for few-shot learning (at least in the cognitive process of language acquisition), big data may actually be harmful. Insights from cognitive science may thus help us in developing models that generalize better.
The choice of the best paper of the conference, Deep contextualized word representations, also demonstrates an increasing interest in generalization. Embeddings from Language Models (ELMo) showed significant improvements over the state-of-the-art on a wide range of tasks as can be seen below. This---together with better ways to fine-tune pre-trained language models---reveals the potential of transfer learning for NLP.
Figure 2: Improvements by ELMo on six diverse NLP tasks
Generalization was also the topic of the Workshop on New Forms of Generalization in Deep Learning and Natural Language Processing, which sought to analyze the failings of brittle models and propose new ways to evaluate and new models that would enable better generalization. Throughout the workshop, the room (seen below) was packed most of the time, which is both a testament to the prestige of the speakers and the community interest in the topic.
Figure 3: The packed venue of the Gen-Deep Workshop
In the first talk of the workshop, Yejin Choi argued that natural language understanding (NLU) does not generalize to natural language generation (NLG): while pre-Deep Learning NLG models often started with NLU output, post-DL NLG seems less dependent on NLU. However, current neural NLG heavily depends on language models and neural NLG can be brittle; in many cases, baselines based on templates can actually work better.
Despite advances in NLG, generating a coherent paragraph still does not work well and models end up generating generic, contentless, bland text full of repetitions and contradictions. But if we feed our models natural language input, why do they produce unnatural language output?
Yejin identified two limitations of language models in this context. Language models are passive learners: in the real world, one can't learn to write just by reading; and similarly, she argued, even RNNs need to "practice" writing. Secondly, language models are surface learners: they need "world" models and must be sensitive to the "latent process" behind language. In reality, people don't write to maximize the probability of the next token, but rather seek to fulfill certain communicative goals.
To address this, Yejin and collaborators proposed in an upcoming ACL 2018 paper to augment the generator with multiple discriminative models that grade the output along different dimensions inspired by Grice's maxims.
Yejin also sought to explain why there are significant performance gaps between different NLU tasks such as machine translation and dialogue. For Type 1 or shallow NLU tasks, there is a strong alignment between input and output and models can often match surface patterns. For Type 2 or deep NLU tasks, the alignment between input and output is weaker; in order to perform well, a model needs to be able to abstract and reason, and requires certain types of knowledge, especially commonsense knowledge. Commonsense knowledge in particular has somewhat fallen out of favour; past approaches, which were mostly proposed in the 80s, did not have access to a lot of computing power and were mostly done by non-NLP people. Overall, NLU traditionally focuses on understanding only "natural" language, while NLG also requires understanding machine language, which may be unnatural.
Devi Parikh discussed generalization "opportunities" in visual question answering (VQA) and illustrated successes and failures of VQA models. In particular, VQA models are not very good at generalizing to novel instances; the distance of a test image from the k-nearest neighbours seen during training can predict the success or failure of a model with about 67% accuracy.
Devi also showed that in many cases, VQA models do not even consider the entire question: in 50% of cases, only half the question is read. In particular, certain prefixes demonstrate the power of language priors: if the question begins with "Is there a clock…?", the answer is "yes" 98% of the time; if the question begins with "Is the man wearing glasses…?", the answer is "yes" 94% of the time. In order to counteract these biases, Devi and her collaborators introduced a new dataset of complimentary scenes, which are very similar but differ in their answers. They also proposed a new setting for VQA where for every question type, train and test sets have different prior distributions of answers.
The final discussion with senior panelists (seen below) was arguably the highlight of the workshop; a summary can be found here. The main takeaways are that we need to develop models with inductive biases and that we need to do a better job of educating people on how to design experiments and identify biases in datasets.
Figure 4: Panel discussion at the Generalization workshop (from left to right: Chris Manning, Percy Liang, Sam Bowman, Yejin Choi, Devi Parikh)
Test-of-time awards
Another highlight of the conference was the test-of-time awards session, which honored people and papers that had a huge impact on the field. At the beginning of the session, Aravind Joshi (see below), NLP pioneer and professor of Computer Science at the University of Pennsylvania, who passed away on December 31, 2017, was remembered in touching tributes by close friends and people who knew him. The commemoration was a powerful reminder that research is about more than the papers we publish; it is about the people we help and the lives we touch.
Figure 5: Aravind Joshi
Afterwards, three papers (all published in 2002) were honored with test-of-time awards. For each paper, one of the original authors presented the paper and reflected on its impact. The first paper presented was BLEU: a Method for Automatic Evaluation of Machine Translation, which introduced the original BLEU metric, now commonplace in machine translation (MT). Kishore Papineni recounted that the name was inspired by Candide, an experimental MT system at IBM in the early 1990s and by IBM's nickname, Big Blue, as all authors were at IBM at that time.
Before BLEU, machine translation evaluation was cumbersome, expensive, and thought to be as difficult as training an MT model. Despite its huge impact, BLEU's creators were apprehensive before its initial publication. Once published, BLEU seemed to split the community in two camps: those who loved it, and those who hated it; the authors hadn't expected such a strong reaction.
BLEU is still criticized today. It was meant as a corpus-level metric; individual sentence errors should be averaged out across the entire corpus. Kishore conceded that in hindsight, they made a few mistakes: they should have included smoothing and statistical significance testing; an initial version was also case insensitive, which caused confusion.
In summary, BLEU has many known limitations and inspired many colorful variants. On the whole, however, it is an understudy (as the acronym BiLingual Evaluation Understudy implies): it was never meant to replace human judgement and---notably---was never expected to last this long.
The second honored paper was Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms by Michael Collins, which introduced the Structured Perceptron, one of the foundational and easiest to understand algorithms for general structured prediction.
Lastly, Bo Pang looked back on her paper Thumbs up? Sentiment Classification using Machine Learning Techniques, which was the first paper of her PhD and one of the first papers on sentiment analysis, now an active research area in the NLP community. Prior to the paper, people had worked on classifying the subjectivity of sentences and the semantic orientation (polarity) of adjectives; sentiment classification was thus a natural progression.
Over the years, the paper has accumulated more than 7,000 citations. One reason why the paper was so impactful was that the authors decided to release a dataset with it. Bo was critical of the sampling choices they made that "messed" with the natural distribution of the data: they capped the number of reviews of prolific authors, which was probably a good idea. However, they sampled an equal number of positive and negative reviews, which set the standard that many approaches later followed and is still the norm for sentiment analysis today. A better idea might have been to stay true to the natural distribution of the data.
I found the test-of-time award session both insightful and humbling: we can derive many insights from traditional approaches, and combining traditional with more recent approaches is often a useful direction; at the same time, even the authors of landmark papers are critical of their own work and aware of its flaws.
Dialogue systems
Another pervasive topic at the conference was dialogue systems. On the first day, researchers from PolyAI gave an excellent tutorial on Conversational AI. On the second day, Mari Ostendorf, professor of Electrical Engineering at the University of Washington and faculty advisor to the Sounding Board team, which won the inaugural Alexa Prize competition, shared some of the secrets to their winning socialbot. A socialbot in this context is a bot with which you can have a conversation, in contrast to a personal assistant that is designed to accomplish user-specified goals.
A good socialbot should have the same qualities as someone you enjoy talking to at a cocktail party: it should have something interesting to say and show interest in the conversation partner. To illustrate this, an example conversation with the Sounding Board bot can be seen below.
Figure 6: An example conversation with the Sounding Board bot
With regard to saying something interesting, the team found that users react positively to learning something new but negatively to old or unpleasant news; a challenge here is to filter what to present to people. In addition, users lose interest when they receive too much content that they do not care about.
Regarding expressing interest, users appreciate an acknowledgement of their reactions and requests. While some users need encouragement to express opinions, some prompts can be annoying ("The article mentioned Google. Have you heard of Google?").
They furthermore found that modeling prosody is important. Prosody deals with modeling the patterns of stress and intonation in speech. Instead of sounding monotonous, a bot that incorporates prosody seems more engaging, can better articulate intent, communicate sentiment or sarcasm, express empathy or enthusiasm, or change the topic. In some cases, prosody is also essential for avoiding---often hilarious---misunderstandings: for instance, Alexa's default voice pattern for 'Sounding Board' sounds like 'Sounding Bored'.
Using a knowledge graph, the bot can have deeper conversations by staying "sort of on topic without totally staying on topic". Mari also shared four key lessons that they learned working with 10M conversations:
Lesson #1: ASR is imperfect
While automatic speech recognition (ASR) has reached lower and lower word error rates in recent years, ASR is far from being solved. In particular, ASR in dialogue agents is tuned for short commands, not conversational speech. Whereas it can accurately identify for instance many obscure music groups, it struggles with more conversational or informal input. In addition, current development platforms provide developers with an impoverished representation of speech that does not contain segmentation or prosody and often misses certain intents and affects. Problems caused by this missing information can be seen below.
Figure 7: Problems caused by missing prosody information and missing punctuation
In practice, the Sounding Board team found it helpful for the socialbot to behave similarly to attendees at a cocktail party: if the intent of the entire utterance is not completely clear, the bot responds to whatever part it understood, e.g. to a somewhat unintelligible "cause does that you're gonna state that's cool" it might respond "I'm happy you like that." They also found that often, asking the conversation partner to repeat an utterance will not yield a much better result; instead, it is often better just to change the topic.
Lesson #2: Users vary
The team discovered something that might seem obvious but has wide-ranging consequences in practice: users vary a lot across different dimensions. Users have different interests, different opinions on issues, and a different sense of humor. Interaction styles, similarly, can range from terse to verbose (seen below), from polite to rude. Users can also interact with the bot in pursuit of widely different goals: they may seek information, intend to share opinions, try to get to know the bot, or seek to explore the limitations of the bot. Modeling the user involves both determining what to say and listening to what the user says.
Figure 8: Interaction styles of talkative and terse users
Lesson #3: It's a wild world
There is a lot of problematic content and many issues that need to be navigated. Problematic content can consist of offensive or controversial material or sensitive and depressing topics. In addition, users might act adversarially and deliberately try to get the bot to produce such content. Examples of such behaviour can be seen below. Other users (such as those suffering from a mental illness) are in turn risky to deal with. Overall, filtering content is a hard problem. As one example, Mari mentioned that early in the project, the bot made the following witty observation / joke: "You know what I realized the other day? Santa Claus is the most elaborate lie ever told".
Figure 9: Adversarial user examples
Lesson #4: Shallow conversations
As the goal of the Alexa Prize was to maintain a conversation of 20 minutes, in light of the current limited understanding and generation capabilities, the team focused on a dialog strategy of shallow conversations. Even for shallow conversations, however, switching to related topics can still be fragile due to word sense ambiguities. It is notable that among the top 3 Alexa Prize teams, Deep Learning was only used by one team and only for reranking.
Overall, a competition such as the Alexa Prize that brings academia and industry together is useful, as it allows researchers to access data from real users at a large scale, which impacts the problems they choose to solve and the resulting solutions. It also teaches students about complete problems and real-world challenges and provides funding support for students. The team found it particularly beneficial to have someone from industry available to support the partnership, provide advice on tools, and give feedback on progress.
On the other hand, privacy-preserving access to user data, such as prosody info for spoken language and speaker/author demographics for text and speech still needs work. For spoken dialog systems, richer speech interfaces are furthermore necessary. Finally, while competitions are great kickstarters, they nevertheless require a substantial engineering effort.
Finally, in her keynote address on dialogue models, Dilek Hakkani-Tür, Research Scientist at Google Research, argued that over the recent years, chitchat systems and task-oriented dialogue systems have been converging. However, current production systems are essentially a walled garden of domains and only allow directed dialogue and limited personalization. Models have to be learned from developers using a limited set of APIs and tools and are hard to scale. At the same time, conversation is a skill even for humans. It is thus important to learn from real users, not from developers.
In order to learn about users, we can leverage personal knowledge graphs learned from user assertions, such as "show me directions to my daughter's school". Semantic recall, i.e. remembering entities from previous user interactions, e.g. "Do you remember the restaurant we ordered Asian food from?" is important. Personalized natural language understanding can also leverage data from user's device (in a privacy-preserving manner) and employ user modeling for dialogue generation.
For learning from users, actions can be learned from user demonstration and/or explanation or from experience and feedback (mostly using RL for dialogue systems). In both cases, transcription and annotation are bottlenecks. A user can't be expected to transcribe or annotate data; on the other hand, it is easier to give feedback after the system repeats an utterance.
Generally, task-oriented dialogue can be treated as a game between two parties (see below): the seeker has a goal, which is fixed or flexible, while the provider has access to APIs to perform the task. The dialogue policy of the seeker decides the next seeker action. This is typically determined using "user simulators", which are often sequence-to-sequence models. Most recently, hierarchical sequence-to-sequence models have been used with a focus on following the user goals and generating diverse responses.
Figure 10: Task-oriented dialogue as a game
The dialogue policy of the provider similarly determines the next provider action, which is realized either via supervised or reinforcement learning (RL). For RL, reward estimation and policy shaping are important. Recent approaches jointly learn seeker and provider policies. End-to-end dialogue models with deep RL are critical for learning from user feedback, while component-wise training benefits from additional data for each component.
In practice, a combination of supervised and reinforcement learning is best and outperforms both purely supervised learning and supervised learning with policy-only RL as can be seen below.
Figure 11: Supervised and reinforcement learning for dialogue modeling
Overall, the conference was a great opportunity to see fantastic research and meet great people. See you all at ACL 2018!
This post discusses semi-supervised learning algorithms that learn from proxy labels assigned to unlabelled data.
Note: Parts of this post are based on my ACL 2018 paper Strong Baselines for Neural Semi-supervised Learning under Domain Shift with Barbara Plank.
Multi-view training
Co-training
Democratic Co-learning
Tri-training
Tri-training with disagreement
Asymmetric tri-training
Multi-task tri-training
Self-ensembling
Ladder networks
Virtual Adversarial Training
\(\Pi\) model
Temporal Ensembling
Mean Teacher
Related methods and areas
Learning from weak supervision
Learning with noisy labels
Data augmentation
Ensembling a single model
Unsupervised learning constitutes one of the main challenges for current machine learning models and one of the key elements that is missing for general artificial intelligence. While unsupervised learning on its own is still elusive, researchers have made a lot of progress in combining unsupervised learning with supervised learning. This branch of machine learning research is called semi-supervised learning.
Semi-supervised learning has a long history. For a (slightly outdated) overview, refer to Zhu (2005) [1] and Chapelle et al. (2006) [2]. Particularly recently, semi-supervised learning has seen some success, considerably reducing the error rate on important benchmarks. Semi-supervised learning also makes an appearance in Amazon's annual letter to shareholders where it is credited with reducing the amount of labelled data needed to achieve the same accuracy improvement by \(40\times\).
In this blog post, I will focus on a particular class of semi-supervised learning algorithms that produce proxy labels on unlabelled data, which are used as targets together with the labelled data. These proxy labels are produced by the model itself or variants of it without any additional supervision; they thus do not reflect the ground truth but might still provide some signal for learning. In a sense, these labels can be considered noisy or weak. I will highlight the connection to learning from noisy labels, weak supervision as well as other related topics in the end of this post.
This class of models is of particular interest in my opinion, as a) deep neural networks have been shown to be good at dealing with noisy labels and b) these models have achieved state-of-the-art in semi-supervised learning for computer vision. Note that many of these ideas are not new and many related methods have been developed in the past. In one half of this post, I will thus cover classic methods and discuss their relevance for current approaches; in the other half, I will discuss techniques that have recently achieved state-of-the-art performance. Some of the following approaches have been referred to as self-teaching or bootstrapping algorithms; I am not aware of a term that captures all of them, so I will simply refer to them as proxy-label methods.
I will divide these methods in three groups, which I will discuss in the following: 1) self-training, which uses a model's own predictions as proxy labels; 2) multi-view learning, which uses the predictions of models trained with different views of the data; and 3) self-ensembling, which ensembles variations of a model's own predictions and uses these as feedback for learning. I will show pseudo-code for the most important algorithms. You can find the LaTeX source here.
There are many interesting and equally important directions for semi-supervised learning that I will not cover in this post, e.g. graph-convolutional neural networks [3].
Self-training (Yarowsky, 1995; McClosky et al., 2006) [4] [5] is one of the earliest and simplest approaches to semi-supervised learning and the most straightforward example of how a model's own predictions can be incorporated into training. As the name implies, self-training leverages a model's own predictions on unlabelled data in order to obtain additional information that can be used during training. Typically the most confident predictions are taken at face value, as detailed next.
Formally, self-training trains a model \(m\) on a labeled training set \(L\) and an unlabeled data set \(U\). At each iteration, the model provides predictions \(m(x)\) in the form of a probability distribution over the \(C\) classes for all unlabeled examples \(x\) in \(U\). If the probability assigned to the most likely class is higher than a predetermined threshold \(\tau\), \(x\) is added to the labeled examples with \(\DeclareMathOperator*{\argmax}{argmax} p(x) = \argmax m(x)\) as pseudo-label. This process is generally repeated for a fixed number of iterations or until no more predictions on unlabelled examples are confident. This instantiation is the most widely used and shown in Algorithm 1.
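To make the loop concrete, here is a minimal self-training sketch in Python; it follows the description above rather than Algorithm 1 verbatim, and assumes a scikit-learn style classifier with fit and predict_proba:

import numpy as np

def self_train(model, X_labelled, y_labelled, X_unlabelled, tau=0.9, max_iter=10):
    # Minimal self-training loop (sketch): repeatedly add confident predictions
    # on unlabelled data as pseudo-labels. `model` is any scikit-learn style
    # classifier; threshold and iteration count are illustrative.
    X_l, y_l, X_u = X_labelled, y_labelled, X_unlabelled
    for _ in range(max_iter):
        model.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        probs = model.predict_proba(X_u)              # (n_unlabelled, C) class probabilities
        confident = probs.max(axis=1) > tau           # most likely class above the threshold
        if not confident.any():                       # stop when no prediction is confident
            break
        pseudo_labels = model.classes_[probs[confident].argmax(axis=1)]
        X_l = np.concatenate([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, pseudo_labels])
        X_u = X_u[~confident]
    return model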
Classic self-training has shown mixed success. In parsing it proved successful with small datasets (Reichart and Rappoport, 2007; Huang and Harper, 2009) [6] [7] or when a generative component is used together with a reranker when more data is available (McClosky et al., 2006; Suzuki and Isozaki, 2008) [8]. Some success was achieved with careful task-specific data selection (Petrov and McDonald, 2012) [9], while others report limited success on a variety of NLP tasks (He and Zhou, 2011; Plank, 2011; Van Asch and Daelemans, 2016; van der Goot et al., 2017) [10] [11] [12] [13].
The main downside of self-training is that the model is unable to correct its own mistakes. If the model's predictions on unlabelled data are confident but wrong, the erroneous data is nevertheless incorporated into training and the model's errors are amplified. This effect is exacerbated if the domain of the unlabelled data is different from that of the labelled data; in this case, the model's confidence will be a poor predictor of its performance.
Multi-view training aims to train different models with different views of the data. Ideally, these views complement each other and the models can collaborate in improving each other's performance. These views can differ in different ways such as in the features they use, in the architectures of the models, or in the data on which the models are trained.
Co-training Co-training (Blum and Mitchell, 1998) [14] is a classic multi-view training method, which makes comparatively strong assumptions. It requires that the data \(L\) can be represented using two conditionally independent feature sets \(L^1\) and \(L^2\) and that each feature set is sufficient to train a good model. After the initial models \(m_1\) and \(m_2\) are trained on their respective feature sets, at each iteration, only inputs that are confident (i.e. have a probability higher than a threshold \(\tau\)) according to exactly one of the two models are moved to the training set of the other model. One model thus provides the labels to the inputs on which the other model is uncertain. Co-training can be seen in Algorithm 2.
In the original co-training paper (Blum and Mitchell, 1998), co-training is used to classify web pages using the text on the page as one view and the anchor text of hyperlinks on other pages pointing to the page as the other view. As two conditionally independent views are not always available, Chen et al. (2011) [15] propose pseudo-multiview regularization (Chen et al., 2011) in order to split the features into two mutually exclusive views so that co-training is effective. To this end, pseudo-multiview regularization constrains the models so that at least one of them has a zero weight for each feature. This is similar to the orthogonality constraint recently used in domain adaptation to encourage shared and private spaces (Bousmalis et al., 2016) [16]. A second constraint requires the models to be confident on different subsets of \(U\). Chen et al. (2011) [17] use pseudo-multiview regularization to adapt co-training to domain adaptation.
Democratic Co-learning Rather than treating different feature sets as views, democratic co-learning (Zhou and Goldman, 2004) [18] employs models with different inductive biases. These can be different network architectures in the case of neural networks or completely different learning algorithms. Democratic co-learning first trains each model separately on the complete labelled data \(L\). The models then make predictions on the unlabelled data \(U\). If a majority of models confidently agree on the label of an example, the example is added to the labelled dataset. Confidence is measured in the original formulation by measuring if the sum of the mean confidence intervals \(w\) of the models, which agreed on the label is larger than the sum of the models that disagreed. This process is repeated until no more examples are added. The final prediction is made with a majority vote weighted with the confidence intervals of the models. The full algorithm can be seen below. \(M\) is the set of all models that predict the same label \(j\) for an example \(x\).
Tri-training Tri-training (Zhou and Li, 2005) [19] is one of the best known multi-view training methods. It can be seen as an instantiation of democratic co-learning, which leverages the agreement of three independently trained models to reduce the bias of predictions on unlabeled data. The main requirement for tri-training is that the initial models are diverse. This can be achieved using different model architectures as in democratic co-learning. The most common way to obtain diversity for tri-training, however, is to obtain different variations \(S_i\) of the original training data \(L\) using bootstrap sampling. The three models \(m_1\), \(m_2\), and \(m_3\) are then trained on these bootstrap samples, as depicted in Algorithm 4. An unlabeled data point is added to the training set of a model \(m_i\) if the other two models \(m_j\) and \(m_k\) agree on its label. Training stops when the classifiers do not change anymore.
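As a rough illustration (the full procedure is given in Algorithm 4), a single simplified tri-training round for scikit-learn style classifiers could look like the following sketch:

import numpy as np
from sklearn.utils import resample

def tri_train_round(models, X_l, y_l, X_u):
    # One simplified tri-training round (sketch, not Algorithm 4 verbatim):
    # train each of the three classifiers on a bootstrap sample, then extend
    # model i's training set with unlabelled points on which the other two agree.
    for m in models:
        X_b, y_b = resample(X_l, y_l)                 # bootstrap sample for diversity
        m.fit(X_b, y_b)
    preds = [m.predict(X_u) for m in models]
    new_sets = []
    for i in range(3):
        j, k = [idx for idx in range(3) if idx != i]
        agree = preds[j] == preds[k]                  # agreement of the other two models
        X_i = np.concatenate([X_l, X_u[agree]])
        y_i = np.concatenate([y_l, preds[j][agree]])
        new_sets.append((X_i, y_i))
    for m, (X_i, y_i) in zip(models, new_sets):       # retrain on the extended sets
        m.fit(X_i, y_i)
    return models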
Despite tri-training having been proposed more than 10 years ago, before the advent of Deep Learning, we found in a recent paper (Ruder and Plank, 2018) [20] that classic tri-training is a strong baseline for neural semi-supervised learning with and without domain shift for NLP and that it outperforms even recent state-of-the-art methods.
Tri-training with disagreement Tri-training with disagreement (Søgaard, 2010) [21] is based on the intuition that a model should only be strengthened in its weak points and that the labeled data should not be skewed by easy data points. In order to achieve this, it adds a simple modification to the original algorithm (altering line 8 in Algorithm 4), requiring that for an unlabeled data point on which \(m_j\) and \(m_k\) agree, the other model \(m_i\) disagrees on the prediction. Tri-training with disagreement is more data-efficient than tri-training and has achieved competitive results on part-of-speech tagging (Søgaard, 2010).
Asymmetric tri-training Asymmetric tri-training (Saito et al., 2017) [22] is a recently proposed extension of tri-training that achieved state-of-the-art results for unsupervised domain adaptation in computer vision. For unsupervised domain adaptation, the test data and unlabeled data are from a different domain than the labelled examples. To adapt tri-training to this shift, asymmetric tri-training learns one of the models only on proxy labels and not on labelled examples (a change to line 10 in Algorithm 4) and uses only this model to classify target domain examples at test time. In addition, all three models share the same feature extractor.
Multi-task tri-training Tri-training typically relies on training separate models on bootstrap samples of a potentially large amount of training data, which is expensive. Multi-task tri-training (MT-Tri) (Ruder and Plank, 2018) aims to reduce both the time and space complexity of tri-training by leveraging insights from multi-task learning (MTL) (Caruana, 1993) [23] to share knowledge across models and accelerate training. Rather than storing and training each model separately, MT-Tri shares the parameters of the models and trains them jointly using MTL. Note that the model only performs pseudo-MTL, as all three models effectively perform the same task.
The output softmax layers are model-specific and are only updated for the input of the respective model. As the models leverage a joint representation, diversity is even more crucial. We need to ensure that the features used for prediction in the softmax layers of the different models are as diverse as possible, so that the models can still learn from each other's predictions. In contrast, if the parameters in all output softmax layers were the same, the method would degenerate to self-training. Similar to pseudo-multiview regularization, we thus use an orthogonality constraint (Bousmalis et al., 2016) on two of the three softmax output layers as an additional loss term.
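The orthogonality term itself is simply the squared Frobenius norm of the product of the two softmax weight matrices; a minimal TensorFlow sketch (variable names are ours, purely for illustration) could be:

import tensorflow as tf

def orthogonality_penalty(W_1, W_2):
    # Squared Frobenius norm of W_1^T W_2; encourages the two softmax layers
    # to use different directions of the shared representation.
    # W_1, W_2: weight matrices of shape (hidden_dim, num_classes).
    product = tf.matmul(W_1, W_2, transpose_a=True)   # shape (num_classes, num_classes)
    return tf.reduce_sum(tf.square(product))

# e.g. added to the task losses with a small weight gamma (names are placeholders):
# loss = loss_m1 + loss_m2 + loss_m3 + gamma * orthogonality_penalty(W_m1, W_m2)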
The pseudo-code can be seen below. In contrast to classic tri-training, we can train the multi-task model with its three model-specific outputs jointly and without bootstrap sampling on the labeled source domain data until convergence, as the orthogonality constraint enforces different representations between models \(m_1\) and \(m_2\). From this point, we can leverage the pair-wise agreement of two output layers to add pseudo-labeled examples as training data to the third model. We train the third output layer \(m_3\) only on pseudo-labeled target instances in order to make tri-training more robust to a domain shift. For the final prediction, we use majority voting of all three output layers. For more information about multi-task tri-training, self-training, other tri-training variants, you can refer to our recent ACL 2018 paper.
Self-ensembling methods are very similar to multi-view learning approaches in that they combine different variants of a model. Multi-task tri-training, for instance, can also be seen as a self-ensembling method where different variations of a model are used to create a stronger ensemble prediction. In contrast to multi-view learning, diversity is not a key concern. Self-ensembling approaches mostly use a single model under different configurations in order to make the model's predictions more robust. Most of the following methods are very recent and several have achieved state-of-the-art results in computer vision.
Ladder networks The \(\Gamma\) (gamma) version of Ladder Networks (Rasmus et al., 2015) [24] aims to make a model more robust to noise. For each unlabelled example, it uses the model's prediction on the clean example as a proxy label for prediction on a perturbed version of the example. This way, the model learns to develop features that are invariant to noise and predictive of the labels on the labelled training data. Ladder networks have been mostly used in computer vision where many forms of perturbation and data augmentation are available.
Virtual Adversarial Training If perturbing the original sample is not possible or desired, we can instead perturb the example in feature space. Rather than randomly perturbing it by e.g. adding dropout, we can apply the worst possible perturbation for the model, which transforms the input into an adversarial sample. While adversarial training requires access to the labels to perform these perturbations, virtual adversarial training (Miyato et al., 2017) [25] requires no labels and is thus suitable for semi-supervised learning. Virtual adversarial training effectively seeks to make the model robust to perturbations in directions to which it is most sensitive and has achieved good results on text classification datasets.
\(\Pi\) model Rather than treating clean predictions as proxy labels, the \(\Pi\) (pi) model (Laine and Aila, 2017) [26] ensembles the predictions of the model under two different perturbations of the input data and two different dropout conditions \(z\) and \(\tilde{z}\). The full pseudo-code can be seen in Algorithm 6 below. \(g(x)\) is the stochastic input augmentation function. The first loss term encourages the predictions under the two different noise settings to be consistent, with \(\lambda\) determining the contribution, while the second loss term is the standard cross-entropy loss \(H\) with respect to the label \(y\). In contrast to the models we encountered before, we apply the unsupervised loss component to both unlabelled and labelled examples.
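In loss terms, one training example contributes something like the following sketch (plain numpy, with the consistency term written as a squared difference; this mirrors the description above rather than the exact formulation in the paper):

import numpy as np

def pi_model_loss(z, z_tilde, y=None, lam=1.0, eps=1e-8):
    # Per-example loss of the Pi model (sketch). z and z_tilde are the class
    # probability vectors from two stochastic forward passes (different input
    # augmentation and dropout masks); y is the integer label, or None if the
    # example is unlabelled.
    consistency = np.sum((z - z_tilde) ** 2)          # squared difference of the two predictions
    if y is None:
        return lam * consistency                      # unlabelled examples: consistency term only
    cross_entropy = -np.log(z[y] + eps)               # supervised cross-entropy term
    return lam * consistency + cross_entropy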
Temporal Ensembling Instead of ensembling over the same model under different noise configurations, we can ensemble over different models. As training separate models is expensive, we can instead ensemble the predictions of a model at different timesteps. We can save the ensembled proxy labels \(Z\) as an exponential moving average of the model's past predictions on all examples as depicted below in order to save space. As we initialize the proxy labels as a zero vector, they are biased towards \(0\). We can correct this bias similar to Adam (Kingma and Ba, 2015) [27] based on the current epoch \(t\) to obtain bias-corrected target vectors \(\tilde{z}\). We then update the model similar to the \(\Pi\) model.
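The target update itself only takes two lines; the following sketch uses the momentum term \(\alpha\) and epoch \(t\) as described above (the hyperparameter value is illustrative):

import numpy as np

def update_ensemble_targets(Z, z_epoch, t, alpha=0.6):
    # Temporal ensembling target update (sketch). Z is the running ensemble of
    # predictions (initialised to zeros), z_epoch the model's predictions in the
    # current epoch, t the current epoch (1-indexed), alpha the EMA momentum.
    Z = alpha * Z + (1 - alpha) * z_epoch             # exponential moving average of predictions
    z_tilde = Z / (1 - alpha ** t)                    # bias correction, as in Adam
    return Z, z_tilde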
Mean Teacher Finally, instead of averaging the predictions of our model over training time, we can average the model weights. Mean teacher (Tarvainen and Valpola, 2017) [28] stores an exponential moving average of the model parameters. For every example, this mean teacher model is then used to obtain proxy labels \(\tilde{z}\). The consistency loss and supervised loss are computed as in temporal ensembling.
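A minimal sketch of the teacher update (plain Python over a list of parameter arrays; the decay value is illustrative):

def update_teacher(teacher_params, student_params, alpha=0.99):
    # Mean teacher update (sketch): the teacher's weights are an exponential
    # moving average of the student's weights after every training step.
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_params, student_params)]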
Mean teacher has achieved state-of-the-art results for semi-supervised learning for computer vision. For reference, on ImageNet with 10% of the labels, it achieves an error rate of \(9.11\), compared to an error rate of \(3.79\) using all labels with the state-of-the-art. For more information about self-ensembling methods, have a look at this intuitive blog post by the Curious AI company. We have run experiments with temporal ensembling for NLP tasks, but did not manage to obtain consistent results. My assumption is that the unsupervised consistency loss is more suitable for continuous inputs. Mean teacher might work better, as averaging weights aka Polyak averaging (Polyak and Juditsky, 1992) [29] is a tried method for accelerating optimization.
Very recently, Oliver et al. (2018) [30] raise some questions regarding the true applicability of these methods: They find that the performance difference to a properly tuned supervised baseline is smaller than typically reported, that transfer learning from a labelled dataset (e.g. ImageNet) outperforms the presented methods, and that performance degrades severely under a domain shift. In order to deal with the latter, algorithms such as asymmetric or multi-task tri-training learn different representations for the target distribution. It remains to be seen if these insights translate to other domains; a combination of transfer learning and semi-supervised adaptation to the target domain seems particularly promising.
Distillation Proxy-label approaches can be seen as different forms of distillation (Hinton et al., 2015) [31]. Distillation was originally conceived as a method to compress the information of a large model or an ensemble into a smaller model. In the standard setup, a typically large and fully trained teacher model provides proxy targets for a student model, which is generally smaller and faster. Self-learning is akin to distillation without a teacher, where the student is left to learn by itself, with no-one to correct its mistakes. For multi-view learning, different models work together to teach each other, alternately acting as both teachers and students. Self-ensembling, finally, has one model assuming the dual role of teacher and student: as a teacher, it generates new targets, which are then incorporated by itself as a student for learning.
Learning from weak supervision Learning from weak supervision, as the name implies, can be seen as a weaker form of supervised learning or alternatively as a stronger form of semi-supervised learning: While supervised learning provides us with labels that we know to be correct and semi-supervised learning only provides us with a small set of labelled examples, weak supervision allows us to obtain labels that we know to be noisy for the unlabelled data as a further signal for learning. Typically, the weak annotator is an unsupervised method that is very different from the model we use for learning the task. For sentiment analysis, this could be a simple lexicon-based method [32]. Many of the presented methods could be extended to the weak supervision setting by incorporating the weak labels as feedback. Self-ensembling methods, for instance, might employ another teacher model that gauges the quality of weakly annotated examples similar to Dehghani et al. (2018) [33]. For an overview of weak supervision, have a look at this blog post by Stanford's Hazy Research group.
Learning with noisy labels Learning with noisy labels is similar to learning from weak supervision. In both cases, labels are available that cannot be completely trusted. For learning with noisy labels, labels are typically assumed to be permuted with a fixed random permutation. While proxy-label approaches supply the noisy labels themselves, when learning with noisy labels, the labels are part of the data. Similar to learning from weak supervision, we can try to model the noise to assess the quality of the labels (Sukhbaatar et al., 2015) [34]. Similar to self-ensembling methods, we can enforce consistency between the model's predictions and the proxy labels (Reed et al., 2015) [35].
Data augmentation Several self-ensembling methods employ data augmentation to enforce consistency between model predictions under different noise settings. Data augmentation is mostly used in computer vision, but noise in the form of different dropout masks can also be applied to the model parameters as in the \(\Pi\) model and has also been used in LSTMs (Zolna et al., 2018) [36]. While regularization in the form of dropout, batch normalization, etc. can be used when labels are available in order to make predictions more robust, a consistency loss is required in the case without labels. For supervised learning, adversarial training can be employed to obtain adversarial examples and has been used successfully e.g. for part-of-speech tagging (Yasunaga et al., 2018) [37].
Ensembling a single model The discussed self-ensembling methods all employ ensemble predictions not just to make predictions more robust, but as feedback to improve the model itself during training in a self-reinforcing loop. In the supervised setting, this feedback might not be necessary; ensembling a single model is still useful, however, to save time compared to training multiple models. Two methods that have been proposed to ensemble a model from a single training run are checkpoint ensembles and snapshot ensembles. Checkpoint ensembles (Sennrich et al., 2016) [38] ensemble the last \(n\) checkpoints of a single training run and have been used to achieve state-of-the-art in machine translation. Snapshot ensembles (Huang et al., 2017) [39] ensemble models converged to different minima during a training run and have been used to achieve state-of-the-art in object recognition.
I hope this post was able to give you an insight into a part of the semi-supervised learning landscape that seems to be particularly useful to improve the performance of current models. While learning completely without labelled data is unrealistic at this point, semi-supervised learning enables us to augment our small labelled datasets with large amounts of available unlabelled data. Most of the discussed methods are promising in that they treat the model as a black box and can thus be used with any existing supervised learning model. As always, if you have any questions or noticed any mistakes, feel free to write a comment in the comments section below.
Zhu, X. (2005). Semi-Supervised Learning Literature Survey. ↩︎
Chapelle, O., Schölkopf, B., & Zien, A. (2006). Semi-Supervised Learning. Interdisciplinary sciences computational life sciences (Vol. 1). http://doi.org/10.1007/s12539-009-0016-2 ↩︎
Kipf, T. N., & Welling, M. (2017). Semi-Supervised Classification with Graph Convolutional Networks. Proceedings of ICLR 2017. ↩︎
Yarowsky, D. (1995). Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics (pp. 189-196). Association for Computational Linguistics. ↩︎
McClosky, D., Charniak, E., & Johnson, M. (2006). Effective self-training for parsing. Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, 152–159. ↩︎
Reichart, R., & Rappoport, A. (2007). Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (pp. 616-623) ↩︎
Huang, Z., & Harper, M. (2009). Self-training PCFG grammars with latent annotations across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2 (pp. 832-841). Association for Computational Linguistics. ↩︎
Suzuki, J., & Isozaki, H. (2008). Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. Proceedings of ACL-08: HLT, 665-673. ↩︎
Petrov, S., & McDonald, R. (2012). Overview of the 2012 shared task on parsing the web. In Notes of the first workshop on syntactic analysis of non-canonical language (sancl) (Vol. 59). ↩︎
He, Y., & Zhou, D. (2011). Self-training from labeled features for sentiment analysis. Information Processing & Management, 47(4), 606-616. ↩︎
Plank, B. (2011). Domain adaptation for parsing. University of Groningen. ↩︎
Van Asch, V., & Daelemans, W. (2016). Predicting the Effectiveness of Self-Training: Application to Sentiment Classification. arXiv preprint arXiv:1601.03288. ↩︎
van der Goot, R., Plank, B., & Nissim, M. (2017). To normalize, or not to normalize: The impact of normalization on part-of-speech tagging. arXiv preprint arXiv:1707.05116. ↩︎
Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory (pp. 92-100). ACM. ↩︎
Chen, M., Weinberger, K. Q., & Chen, Y. (2011). Automatic Feature Decomposition for Single View Co-training. Proceedings of the 28th International Conference on Machine Learning (ICML-11), 953–960. ↩︎
Bousmalis, K., Trigeorgis, G., Silberman, N., Krishnan, D., & Erhan, D. (2016). Domain Separation Networks. In Advances in Neural Information Processing Systems. ↩︎
Chen, M., Weinberger, K. Q., & Blitzer, J. C. (2011). Co-Training for Domain Adaptation. In Advances in Neural Information Processing Systems. ↩︎
Zhou, Y., & Goldman, S. (2004). Democratic Co-Learning. In 16th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2004. ↩︎
Zhou, Z.-H., & Li, M. (2005). Tri-Training: Exploiting Unlabeled Data Using Three Classifiers. IEEE Transactions on Knowledge and Data Engineering, 17(11), 1529–1541. http://doi.org/10.1109/TKDE.2005.186 ↩︎
Ruder, S., & Plank, B. (2018). Strong Baselines for Neural Semi-supervised Learning under Domain Shift. In Proceedings of ACL 2018. ↩︎
Søgaard, A. (2010). Simple semi-supervised training of part-of-speech taggers. Proceedings of the ACL 2010 Conference Short Papers. ↩︎
Saito, K., Ushiku, Y., & Harada, T. (2017). Asymmetric Tri-training for Unsupervised Domain Adaptation. In ICML 2017. Retrieved from http://arxiv.org/abs/1702.08400 ↩︎
Caruana, R. (1993). Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning. ↩︎
Rasmus, A., Valpola, H., Honkala, M., Berglund, M., & Raiko, T. (2015). Semi-Supervised Learning with Ladder Network. arXiv Preprint arXiv:1507.02672. Retrieved from http://arxiv.org/abs/1507.02672 ↩︎
Miyato, T., Dai, A. M., & Goodfellow, I. (2017). Adversarial Training Methods for Semi-supervised Text Classification. In Proceedings of ICLR 2017. ↩︎
Laine, S., & Aila, T. (2017). Temporal Ensembling for Semi-Supervised Learning. In Proceedings of ICLR 2017. ↩︎
Kingma, D. P., & Ba, J. L. (2015). Adam: a Method for Stochastic Optimization. International Conference on Learning Representations. ↩︎
Tarvainen, A., & Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems. Retrieved from http://arxiv.org/abs/1703.01780 ↩︎
Polyak, B. T., & Juditsky, A. B. (1992). Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4), 838-855. ↩︎
Oliver, A., Odena, A., Raffel, C., Cubuk, E. D., & Goodfellow, I. J. (2018). Realistic Evaluation of Semi-Supervised Learning Algorithms. arXiv preprint arXiv:1804.09170. ↩︎
Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network. arXiv Preprint arXiv:1503.02531. https://doi.org/10.1063/1.4931082 ↩︎
Kiritchenko, S., Zhu, X., & Mohammad, S. M. (2014). Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research, 50, 723-762. ↩︎
Dehghani, M., Mehrjou, A., Gouws, S., Kamps, J., & Schölkopf, B. (2018). Fidelity-Weighted Learning. In Proceedings of ICLR 2018. Retrieved from http://arxiv.org/abs/1711.02799 ↩︎
Sukhbaatar, S., Bruna, J., Paluri, M., Bourdev, L., & Fergus, R. (2015). Training Convolutional Networks with Noisy Labels. Workshop Track - ICLR 2015. Retrieved from http://arxiv.org/abs/1406.2080 ↩︎
Reed, S., Lee, H., Anguelov, D., Szegedy, C., Erhan, D., & Rabinovich, A. (2015). Training Deep Neural Networks on Noisy Labels with Bootstrapping. ICLR 2015 Workshop Track. Retrieved from http://arxiv.org/abs/1412.6596 ↩︎
Zolna, K., Arpit, D., Suhubdy, D., & Bengio, Y. (2018). Fraternal Dropout. In Proceedings of ICLR 2018. Retrieved from http://arxiv.org/abs/1711.00066 ↩︎
Yasunaga, M., Kasai, J., & Radev, D. (2018). Robust Multilingual Part-of-Speech Tagging via Adversarial Training. In Proceedings of NAACL 2018. Retrieved from http://arxiv.org/abs/1711.04903 ↩︎
Sennrich, R., Haddow, B., & Birch, A. (2016). Edinburgh neural machine translation systems for WMT 16. arXiv preprint arXiv:1606.02891. ↩︎
Huang, G., Li, Y., Pleiss, G., Liu, Z., Hopcroft, J. E., & Weinberger, K. Q. (2017). Snapshot Ensembles: Train 1, get M for free. In Proceedings of ICLR 2017. ↩︎
This post is a tutorial on how to use TensorFlow Estimators for text classification.
Note: This post was written together with the awesome Julian Eisenschlos and was originally published on the TensorFlow blog.
Hello there! Throughout this post we will show you how to classify text using Estimators in TensorFlow. Here's the outline of what we'll cover:
Loading data using Datasets.
Building baselines using pre-canned estimators.
Using word embeddings.
Building custom estimators with convolution and LSTM layers.
Loading pre-trained word vectors.
Evaluating and comparing models using TensorBoard.
Welcome to Part 4 of a blog series that introduces TensorFlow Datasets and Estimators. You don't need to read all of the previous material, but take a look if you want to refresh any of the following concepts. Part 1 focused on pre-made Estimators, Part 2 discussed feature columns, and Part 3 how to create custom Estimators.
Here in Part 4, we will build on top of all the above to tackle a different family of problems in Natural Language Processing (NLP). In particular, this article demonstrates how to solve a text classification task using custom TensorFlow estimators, embeddings, and the tf.layers module. Along the way, we'll learn about word2vec and transfer learning as a technique to bootstrap model performance when labeled data is a scarce resource.
We will show you relevant code snippets. Here's the complete Jupyter Notebook that you can run locally or on Google Colaboratory. The plain .py source file is also available here. Note that the code was written to demonstrate how Estimators work functionally and was not optimized for maximum performance.
The task
The dataset we will be using is the IMDB Large Movie Review Dataset, which consists of 25,000 highly polar movie reviews for training, and 25,000 for testing. We will use this dataset to train a binary classification model, able to predict whether a review is positive or negative.
For illustration, here's a piece of a negative review (with 2 stars) in the dataset:
Now, I LOVE Italian horror films. The cheesier they are, the better. However, this is not cheesy Italian. This is week-old spaghetti sauce with rotting meatballs. It is amateur hour on every level. There is no suspense, no horror, with just a few drops of blood scattered around to remind you that you are in fact watching a horror film.
Keras provides a convenient handler for importing the dataset, which is also available as a serialized numpy array .npz file to download here. For text classification, it is standard to limit the size of the vocabulary to prevent the dataset from becoming too sparse and high dimensional, causing potential overfitting. For this reason, each review consists of a series of word indexes that go from 4 (the most frequent word in the dataset, "the") to 4999, which corresponds to "orange". Index 1 represents the beginning of the sentence and index 2 is assigned to all unknown (also known as out-of-vocabulary or OOV) tokens. These indexes have been obtained by pre-processing the text data in a pipeline that cleans, normalizes and tokenizes each sentence first and then builds a dictionary indexing each of the tokens by frequency.
After we've loaded the data in memory we pad each of the sentences with $0$ so that we have two $25000 \times 200$ arrays for training and testing respectively.
# Imports assumed from the linked notebook (Keras data utilities)
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

vocab_size = 5000
sentence_size = 200
(x_train_variable, y_train), (x_test_variable, y_test) = imdb.load_data(num_words=vocab_size)
x_train = sequence.pad_sequences(
    x_train_variable,
    maxlen=sentence_size,
    padding='post',
    value=0)
x_test = sequence.pad_sequences(
    x_test_variable,
    maxlen=sentence_size,
    padding='post',
    value=0)
Input Functions
The Estimator framework uses input functions to split the data pipeline from the model itself. Several helper methods are available to create them, whether your data is in a .csv file, or in a pandas.DataFrame, whether it fits in memory or not. In our case, we can use Dataset.from_tensor_slices for both the train and test sets.
x_len_train = np.array([min(len(x), sentence_size) for x in x_train_variable])
x_len_test = np.array([min(len(x), sentence_size) for x in x_test_variable])

def parser(x, length, y):
    features = {"x": x, "len": length}
    return features, y

def train_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices((x_train, x_len_train, y_train))
    dataset = dataset.shuffle(buffer_size=len(x_train_variable))
    dataset = dataset.batch(100)
    dataset = dataset.map(parser)
    dataset = dataset.repeat()
    iterator = dataset.make_one_shot_iterator()
    return iterator.get_next()

def eval_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices((x_test, x_len_test, y_test))
    # Single pass over the test data: batch and parse, but no shuffling or repeating
    dataset = dataset.batch(100)
    dataset = dataset.map(parser)
    iterator = dataset.make_one_shot_iterator()
    return iterator.get_next()
We shuffle the training data and do not predefine the number of epochs we want to train, while we only need one epoch of the test data for evaluation. We also add an additional "len" key that captures the length of the original, unpadded sequence, which we will use later.
Building a baseline
It's good practice to start any machine learning project by trying basic baselines. The simpler the better: having a simple and robust baseline is key to understanding exactly how much performance we gain by adding extra complexity. It may very well be the case that a simple solution is good enough for our requirements.
With that in mind, let us start by trying out one of the simplest models for text classification. That would be a sparse linear model that gives a weight to each token and adds up all of the results, regardless of the order. As this model does not care about the order of words in a sentence, we normally refer to it as a Bag-of-Words approach. Let's see how we can implement this model using an Estimator.
We start out by defining the feature column that is used as input to our classifier. As we have seen in Part 2, categorical_column_with_identity is the right choice for this pre-processed text input. If we were feeding raw text tokens other feature_columns could do a lot of the pre-processing for us. We can now use the pre-made LinearClassifier.
column = tf.feature_column.categorical_column_with_identity('x', vocab_size)
classifier = tf.estimator.LinearClassifier(
feature_columns=[column],
model_dir=os.path.join(model_dir, 'bow_sparse'))
Finally, we create a simple function that trains the classifier and additionally creates a precision-recall curve. As we do not aim to maximize performance in this blog post, we only train our models for 25,000 steps.
def train_and_evaluate(classifier):
    classifier.train(input_fn=train_input_fn, steps=25000)
    eval_results = classifier.evaluate(input_fn=eval_input_fn)
    predictions = np.array([p['logistic'][0] for p in classifier.predict(input_fn=eval_input_fn)])
    tf.reset_default_graph()
    # Add a PR summary in addition to the summaries that the classifier writes
    pr = summary_lib.pr_curve('precision_recall', predictions=predictions, labels=y_test.astype(bool), num_thresholds=21)
    with tf.Session() as sess:
        writer = tf.summary.FileWriter(os.path.join(classifier.model_dir, 'eval'), sess.graph)
        writer.add_summary(sess.run(pr), global_step=0)
        writer.close()
train_and_evaluate(classifier)
One of the benefits of choosing a simple model is that it is much more interpretable. The more complex a model, the harder it is to inspect and the more it tends to work like a black box. In this example, we can load the weights from our model's last checkpoint and take a look at what tokens correspond to the biggest weights in absolute value. The results look like what we would expect.
# Load the tensor with the model weights
weights = classifier.get_variable_value('linear/linear_model/x/weights').flatten()
# Find biggest weights in absolute value
sorted_indexes = np.argsort(weights)  # assumed helper line: token indexes sorted from most negative to most positive weight
extremes = np.concatenate((sorted_indexes[-8:], sorted_indexes[:8]))
# word_inverted_index is a dictionary that maps from indexes back to tokens
extreme_weights = sorted(
[(weights[i], word_inverted_index[i - index_offset]) for i in extremes])
# Create plot
y_pos = np.arange(len(extreme_weights))
plt.bar(y_pos, [pair[0] for pair in extreme_weights], align='center', alpha=0.5)
plt.xticks(y_pos, [pair[1] for pair in extreme_weights], rotation=45, ha='right')
plt.ylabel('Weight')
plt.title('Most significant tokens')
As we can see, tokens with the most positive weight such as 'refreshing' are clearly associated with positive sentiment, while tokens that have a large negative weight unarguably evoke negative emotions. A simple but powerful modification that one can do to improve this model is weighting the tokens by their tf-idf scores.
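One way to wire such a modification into the feature-column API is a weighted categorical column. The snippet below is only a sketch: it assumes the input function additionally returns a 'tf_idf' feature holding one pre-computed weight per token position, which the pipeline above does not yet produce.

# Sketch only: weight each token index by a pre-computed tf-idf score.
weighted_column = tf.feature_column.weighted_categorical_column(
    categorical_column=tf.feature_column.categorical_column_with_identity('x', vocab_size),
    weight_feature_key='tf_idf')
classifier = tf.estimator.LinearClassifier(
    feature_columns=[weighted_column],
    model_dir=os.path.join(model_dir, 'bow_sparse_tfidf'))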
The next step of complexity we can add are word embeddings. Embeddings are a dense low-dimensional representation of sparse high-dimensional data. This allows our model to learn a more meaningful representation of each token, rather than just an index. While an individual dimension is not meaningful, the low-dimensional space—when learned from a large enough corpus—has been shown to capture relations such as tense, plural, gender, thematic relatedness, and many more. We can add word embeddings by converting our existing feature column into an embedding_column. The representation seen by the model is the mean of the embeddings for each token (see the combiner argument in the docs). We can plug in the embedded features into a pre-canned DNNClassifier.
A note for the keen observer: an embedding_column is just an efficient way of applying a fully connected layer to the sparse binary feature vector of tokens, which is multiplied by a constant depending on the chosen combiner. A direct consequence of this is that it wouldn't make sense to use an embedding_column directly in a LinearClassifier, because two consecutive linear layers without non-linearities in between add no prediction power to the model, unless of course the embeddings are pre-trained.
embedding_size = 50
word_embedding_column = tf.feature_column.embedding_column(
column, dimension=embedding_size)
classifier = tf.estimator.DNNClassifier(
hidden_units=[100],
feature_columns=[word_embedding_column],
model_dir=os.path.join(model_dir, 'bow_embeddings'))
We can use TensorBoard to visualize our $50$-dimensional word vectors projected into $\mathbb{R}^3$ using t-SNE. We expect similar words to be close to each other. This can be a useful way to inspect our model weights and find unexpected behaviours.
Convolutions
At this point, one possible approach would be to go deeper, adding more fully connected layers and playing around with layer sizes and training functions. However, by doing that we would add extra complexity and ignore important structure in our sentences. Words do not live in a vacuum and meaning is compositional, formed by a word and its neighbors.
Convolutions are one way to take advantage of this structure, similar to how we can model salient clusters of pixels for image classification. The intuition is that certain sequences of words, or n-grams, usually have the same meaning regardless of their overall position in the sentence. Introducing a structural prior via the convolution operation allows us to model the interaction between neighboring words and consequently gives us a better way to represent such meaning.
The following image shows how a filter matrix $F \in \mathbb{R}^{d\times m}$ slides over each tri-gram window of tokens to build a new feature map. Afterwards, a pooling layer is usually applied to combine adjacent results.
Source: Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks by Severyn et al. [2015]
Let us look at the full model architecture. Dropout layers are used as a regularization technique that makes the model less likely to overfit.
Embedding Layer
Convolution1D
GlobalMaxPooling1D
Hidden Dense Layer
Output Layer
Creating a custom estimator
As seen in previous blog posts, the tf.estimator framework provides a high-level API for training machine learning models, defining train(), evaluate() and predict() operations, handling checkpointing, loading, initializing, serving, building the graph and the session out of the box. There is a small family of pre-made estimators, like the ones we used earlier, but it's most likely that you will need to build your own.
Writing a custom estimator means writing a model_fn(features, labels, mode, params) that returns an EstimatorSpec. The first step will be mapping the features into our embedding layer:
input_layer = tf.contrib.layers.embed_sequence(
features['x'],
vocab_size,
embedding_size,
initializer=params['embedding_initializer'])
Then we use tf.layers to process each output sequentially.
training = (mode == tf.estimator.ModeKeys.TRAIN)
dropout_emb = tf.layers.dropout(inputs=input_layer,
rate=0.2,
training=training)
conv = tf.layers.conv1d(
inputs=dropout_emb,
filters=32,
kernel_size=3,
padding="same",
activation=tf.nn.relu)
pool = tf.reduce_max(input_tensor=conv, axis=1)
hidden = tf.layers.dense(inputs=pool, units=250, activation=tf.nn.relu)
dropout_hidden = tf.layers.dropout(inputs=hidden, rate=0.2, training=training)
logits = tf.layers.dense(inputs=dropout_hidden, units=1)
Finally, we will use a Head to simplify the writing of our last part of the model_fn. The head already knows how to compute predictions, loss, train_op, metrics and export outputs, and can be reused across models. This is also used in the pre-made estimators and provides us with the benefit of a uniform evaluation function across all of our models. We will use binary_classification_head, which is a head for single label binary classification that uses sigmoid_cross_entropy_with_logits as the loss function under the hood.
head = tf.contrib.estimator.binary_classification_head()
optimizer = tf.train.AdamOptimizer()
def _train_op_fn(loss):
    tf.summary.scalar('loss', loss)
    return optimizer.minimize(
        loss=loss,
        global_step=tf.train.get_global_step())
return head.create_estimator_spec(
features=features,
labels=labels,
mode=mode,
logits=logits,
train_op_fn=_train_op_fn)
Running this model is just as easy as before:
initializer = tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0)
params = {'embedding_initializer': initializer}
cnn_classifier = tf.estimator.Estimator(model_fn=model_fn,
model_dir=os.path.join(model_dir, 'cnn'),
params=params)
train_and_evaluate(cnn_classifier)
LSTM Networks
Using the Estimator API and the same model head, we can also create a classifier that uses a Long Short-Term Memory (LSTM) cell instead of convolutions. Recurrent models such as this are some of the most successful building blocks for NLP applications. An LSTM processes the entire document sequentially, recursing over the sequence with its cell while storing the current state of the sequence in its memory.
One of the drawbacks of recurrent models compared to CNNs is that, because of the nature of recursion, models turn out deeper and more complex, which usually produces slower training time and worse convergence. LSTMs (and RNNs in general) can suffer from convergence issues like vanishing or exploding gradients; that said, with sufficient tuning they can obtain state-of-the-art results for many problems. As a rule of thumb, CNNs are good at feature extraction, while RNNs excel at tasks that depend on the meaning of the whole sentence, like question answering or machine translation.
Each cell processes one token embedding at a time updating its internal state based on a differentiable computation that depends on both the embedding vector $x_t$ and the previous state $h_{t-1}$. In order to get a better understanding of how LSTMs work, you can refer to Chris Olah's blog post.
Source: Understanding LSTM Networks by Chris Olah
The complete LSTM model can be expressed by the following simple flowchart:
In the beginning of this post, we padded all documents up to 200 tokens, which is necessary to build a proper tensor. However, when a document contains fewer than 200 words, we don't want the LSTM to continue processing padding tokens as it does not add information and degrades performance. For this reason, we additionally want to provide our network with the length of the original sequence before it was padded. Internally, the model then copies the last state through to the sequence's end. We can do this by using the "len" feature in our input functions. We can now use the same logic as above and simply replace the convolutional, pooling, and flatten layers with our LSTM cell.
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(100)
_, final_states = tf.nn.dynamic_rnn(
lstm_cell, inputs, sequence_length=features['len'], dtype=tf.float32)
logits = tf.layers.dense(inputs=final_states.h, units=1)
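The input functions used throughout the post already supply this "len" feature. Purely as an illustration (the array names here are assumptions, not names from the original pipeline), such a training input function could look like this:
# Hedged sketch of an input_fn feeding both the padded token matrix ('x') and the
# original, pre-padding document lengths ('len'); x_train_padded, x_len_train and
# y_train are assumed NumPy arrays.
length_aware_train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': x_train_padded, 'len': x_len_train},
    y=y_train,
    batch_size=100,
    num_epochs=None,
    shuffle=True)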
Pre-trained vectors
Most of the models that we have shown before rely on word embeddings as a first layer. So far, we have initialized this embedding layer randomly. However, much previous work has shown that using embeddings pre-trained on a large unlabeled corpus as initialization is beneficial, particularly when training on only a small number of labeled examples. The most popular pre-trained embedding is word2vec. Leveraging knowledge from unlabeled data via pre-trained embeddings is an instance of transfer learning.
To this end, we will show you how to use them in an Estimator. We will use the pre-trained vectors from another popular model, GloVe.
embeddings = {}
with open('glove.6B.50d.txt', 'r', encoding='utf-8') as f:
    for line in f:
        values = line.strip().split()
        w = values[0]
        vectors = np.asarray(values[1:], dtype='float32')
        embeddings[w] = vectors
After loading the vectors into memory from a file we store them as a numpy.array using the same indexes as our vocabulary. The created array is of shape (5000, 50). At every row index, it contains the 50-dimensional vector representing the word at the same index in our vocabulary.
embedding_matrix = np.random.uniform(-1, 1, size=(vocab_size, embedding_size))
for w, i in word_index.items():
    v = embeddings.get(w)
    if v is not None and i < vocab_size:
        embedding_matrix[i] = v
Finally, we can use a custom initializer function and pass it in the params object to our cnn_model_fn, without any other modifications.
def my_initializer(shape=None, dtype=tf.float32, partition_info=None):
    assert dtype is tf.float32
    return embedding_matrix
params = {'embedding_initializer': my_initializer}
cnn_pretrained_classifier = tf.estimator.Estimator(
    model_fn=cnn_model_fn,
    model_dir=os.path.join(model_dir, 'cnn_pretrained'),
    params=params)
train_and_evaluate(cnn_pretrained_classifier)
Running TensorBoard
Now we can launch TensorBoard and see how the different models we've trained compare against each other in terms of training time and performance.
In a terminal, we run
> tensorboard --logdir={model_dir}
We can visualize many metrics collected while training and testing, including the loss function values of each model at each training step, and the precision-recall curves. This is of course most useful to select which model works best for our use-case as well as how to choose classification thresholds.
Getting Predictions
To obtain predictions on new sentences, we can use the predict method of the Estimator instances, which will load the latest checkpoint for each model and evaluate on the unseen examples. But before passing the data into the model, we have to clean it up, tokenize it, and map each token to the corresponding index, as we see below.
def text_to_index(sentence):
    # Remove punctuation characters except for the apostrophe
    translator = str.maketrans('', '', string.punctuation.replace("'", ''))
    tokens = sentence.translate(translator).lower().split()
    return np.array([1] + [word_index[t] + index_offset if t in word_index else 2 for t in tokens])

def print_predictions(sentences, classifier):
    indexes = [text_to_index(sentence) for sentence in sentences]
    # Pad at the end (not the front) so that the 'len' feature lines up with the real tokens
    x = sequence.pad_sequences(indexes,
                               maxlen=sentence_size,
                               padding='post',
                               value=-1)
    length = np.array([min(len(ind), sentence_size) for ind in indexes])
    predict_input_fn = tf.estimator.inputs.numpy_input_fn(x={"x": x, "len": length}, shuffle=False)
    predictions = [p['logistic'][0] for p in classifier.predict(input_fn=predict_input_fn)]
    print(predictions)
It is worth noting that the checkpoint itself is not sufficient to make predictions; the actual code used to build the estimator is necessary as well in order to map the saved weights to the corresponding tensors. It's a good practice to associate saved checkpoints with the branch of code with which they were created.
If you are interested in exporting the models to disk in a fully recoverable way, you might want to look into the SavedModel class, which is especially useful for serving your model through an API using TensorFlow Serving.
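As a hedged sketch (not code from the original post), exporting one of the estimators could look roughly as follows; the placeholder names and the export directory are assumptions:
# Hypothetical export sketch: a serving input receiver that accepts already tokenized,
# padded index sequences plus their lengths, followed by a SavedModel export that can be
# served with TensorFlow Serving.
def serving_input_receiver_fn():
    tokens = tf.placeholder(dtype=tf.int64, shape=[None, sentence_size], name='tokens')
    length = tf.placeholder(dtype=tf.int64, shape=[None], name='length')
    features = {'x': tokens, 'len': length}
    return tf.estimator.export.ServingInputReceiver(features, features)

export_dir = cnn_classifier.export_savedmodel(
    os.path.join(model_dir, 'exported'), serving_input_receiver_fn)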
In this blog post, we explored how to use estimators for text classification, in particular for the IMDB Reviews Dataset. We trained and visualized our own embeddings, as well as loaded pre-trained ones. We started from a simple baseline and made our way to convolutional neural networks and LSTMs.
For more details, be sure to check out:
A Jupyter notebook that can run locally, or on Colaboratory.
The complete source code for this blog post.
The TensorFlow Embedding guide.
The TensorFlow Vector Representation of Words tutorial.
The NLTK Processing Raw Text chapter on how to design language pipelines.
http://ruder.io/requests-for-research/ (Sun, 04 Mar 2018 15:00:00 GMT)
This post aims to provide inspiration and ideas for research directions to junior researchers and those trying to get into research.
Task-independent data augmentation for NLP
Few-shot learning for NLP
Transfer learning for NLP
Multi-task learning
Task-independent architecture improvements
It can be hard to find compelling topics to work on and know what questions are interesting to ask when you are just starting as a researcher in a new field. Machine learning research in particular moves so fast these days that it is difficult to find an opening.
This post aims to provide inspiration and ideas for research directions to junior researchers and those trying to get into research. It gathers a collection of research topics that are interesting to me, with a focus on NLP and transfer learning. As such, they might obviously not be of interest to everyone. If you are interested in Reinforcement Learning, OpenAI provides a selection of interesting RL-focused research topics. In case you'd like to collaborate with others or are interested in a broader range of topics, have a look at the Artificial Intelligence Open Network.
Most of these topics are not thoroughly thought out yet; in many cases, the general description is quite vague and subjective and many directions are possible. In addition, most of these are not low-hanging fruit, so serious effort is necessary to come up with a solution. I am happy to provide feedback with regard to any of these, but will not have time to provide more detailed guidance unless you have a working proof-of-concept. I will update this post periodically with new research directions and advances in already listed ones. Note that this collection does not attempt to review the extensive literature but only aims to give a glimpse of a topic; consequently, the references won't be comprehensive.
I hope that this collection will pique your interest and serve as inspiration for your own research agenda.
Data augmentation aims to create additional training data by producing variations of existing training examples through transformations, which can mirror those encountered in the real world. In Computer Vision (CV), common augmentation techniques are mirroring, random cropping, shearing, etc. Data augmentation is super useful in CV. For instance, it has been used to great effect in AlexNet (Krizhevsky et al., 2012) [1] to combat overfitting and in most state-of-the-art models since. In addition, data augmentation makes intuitive sense as it makes the training data more diverse and should thus increase a model's generalization ability.
However, in NLP, data augmentation is not widely used. In my mind, this is for two reasons:
Data in NLP is discrete. This prevents us from applying simple transformations directly to the input data. Most recently proposed augmentation methods in CV focus on such transformations, e.g. domain randomization (Tobin et al., 2017) [2].
Small perturbations may change the meaning. Deleting a negation may change a sentence's sentiment, while modifying a word in a paragraph might inadvertently change the answer to a question about that paragraph. This is not the case in CV where perturbing individual pixels does not change whether an image is a cat or dog and even stark changes such as interpolation of different images can be useful (Zhang et al., 2017) [3].
Existing approaches that I am aware of are either rule-based (Li et al., 2017) [4] or task-specific, e.g. for parsing (Wang and Eisner, 2016) [5] or zero-pronoun resolution (Liu et al., 2017) [6]. Xie et al. (2017) [7] replace words with samples from different distributions for language modelling and Machine Translation. Recent work focuses on creating adversarial examples either by replacing words or characters (Samanta and Mehta, 2017; Ebrahimi et al., 2017) [8] [9], concatenation (Jia and Liang, 2017) [10], or adding adversarial perturbations (Yasunaga et al., 2017) [11]. An adversarial setup is also used by Li et al. (2017) [12] who train a system to produce sequences that are indistinguishable from human-generated dialogue utterances.
Back-translation (Sennrich et al., 2015; Sennrich et al., 2016) [13] [14] is a common data augmentation method in Machine Translation (MT) that allows us to incorporate monolingual training data. For instance, when training a EN\(\rightarrow\)FR system, monolingual French text is translated to English using an FR\(\rightarrow\)EN system; the synthetic parallel data can then be used for training. Back-translation can also be used for paraphrasing (Mallinson et al., 2017) [15]. Paraphrasing has been used for data augmentation for QA (Dong et al., 2017) [16], but I am not aware of its use for other tasks.
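Schematically, back-translation for an EN\(\rightarrow\)FR system can be sketched as follows; translate_fr_to_en is a hypothetical stand-in for any trained FR\(\rightarrow\)EN model:
# Schematic sketch of back-translation as data augmentation for EN->FR training.
def back_translate(monolingual_fr_sentences, translate_fr_to_en):
    synthetic_pairs = []
    for fr in monolingual_fr_sentences:
        en = translate_fr_to_en(fr)       # synthetic English source sentence
        synthetic_pairs.append((en, fr))  # (source, target) pair for the EN->FR system
    return synthetic_pairs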
Another method that is close to paraphrasing is generating sentences from a continuous space using a variational autoencoder (Bowman et al., 2016; Guu et al., 2017) [17] [18]. If the representations are disentangled as in (Hu et al., 2017) [19], then we are also not too far from style transfer (Shen et al., 2017) [20].
There are a few research directions that would be interesting to pursue:
Evaluation study: Evaluate a range of existing data augmentation methods as well as techniques that have not been widely used for augmentation such as paraphrasing and style transfer on a diverse range of tasks including text classification and sequence labelling. Identify what types of data augmentation are robust across task and which are task-specific. This could be packaged as a software library to make future benchmarking easier (think CleverHans for NLP).
Data augmentation with style transfer: Investigate if style transfer can be used to modify various attributes of training examples for more robust learning.
Learn the augmentation: Similar to Dong et al. (2017) we could learn either to paraphrase or to generate transformations for a particular task.
Learn a word embedding space for data augmentation: A typical word embedding space clusters synonyms and antonyms together; using nearest neighbours in this space for replacement is thus infeasible. Inspired by recent work (Mrkšić et al., 2017) [21], we could specialize the word embedding space to make it more suitable for data augmentation.
Adversarial data augmentation: Related to recent work in interpretability (Ribeiro et al., 2016) [22], we could change the most salient words in an example, i.e. those that a model depends on for a prediction. This still requires a semantics-preserving replacement method, however.
Zero-shot, one-shot and few-shot learning are one of the most interesting recent research directions IMO. Following the key insight from Vinyals et al. (2016) [23] that a few-shot learning model should be explicitly trained to perform few-shot learning, we have seen several recent advances (Ravi and Larochelle, 2017; Snell et al., 2017) [24] [25].
Learning from few labeled samples is one of the hardest problems IMO and one of the core capabilities that separates the current generation of ML models from more generally applicable systems. Zero-shot learning has only been investigated in the context of learning word embeddings for unknown words AFAIK. Dataless classification (Song and Roth, 2014; Song et al., 2016) [26] [27] is an interesting related direction that embeds labels and documents in a joint space, but requires interpretable labels with good descriptions.
Potential research directions are the following:
Standardized benchmarks: Create standardized benchmarks for few-shot learning for NLP. Vinyals et al. (2016) introduce a one-shot language modelling task for the Penn Treebank. The task, while useful, is dwarfed by the extensive evaluation on CV benchmarks and has not seen much use AFAIK. A few-shot learning benchmark for NLP should contain a large number of classes and provide a standardized split for reproducibility. Good candidate tasks would be topic classification or fine-grained entity recognition.
Evaluation study: After creating such a benchmark, the next step would be to evaluate how well existing few-shot learning models from CV perform for NLP.
Novel methods for NLP: Given a dataset for benchmarking and an empirical evaluation study, we could then start developing novel methods that can perform few-shot learning for NLP.
Transfer learning has had a large impact on computer vision (CV) and has greatly lowered the entry threshold for people wanting to apply CV algorithms to their own problems. CV practitioners are no longer required to perform extensive feature-engineering for every new task, but can simply fine-tune a model pretrained on a large dataset with a small number of examples.
In NLP, however, we have so far only been pretraining the first layer of our models via pretrained embeddings. Recent approaches (Peters et al., 2017, 2018) [28] [29] add pretrained language model embeddings, but these still require custom architectures for every task. In my opinion, in order to unlock the true potential of transfer learning for NLP, we need to pretrain the entire model and fine-tune it on the target task, akin to fine-tuning ImageNet models. Language modelling, for instance, is a great task for pretraining and could be to NLP what ImageNet classification is to CV (Howard and Ruder, 2018) [30].
Here are some potential research directions in this context:
Identify useful pretraining tasks: The choice of the pretraining task is very important as even fine-tuning a model on a related task might only provide limited success (Mou et al., 2016) [31]. Other tasks such as those explored in recent work on learning general-purpose sentence embeddings (Conneau et al., 2017; Subramanian et al., 2018; Nie et al., 2017) [32] [33] [34] might be complementary to language model pretraining or suitable for other target tasks.
Fine-tuning of complex architectures: Pretraining is most useful when a model can be applied to many target tasks. However, it is still unclear how to pretrain more complex architectures, such as those used for pairwise classification tasks (Augenstein et al., 2018) or reasoning tasks such as QA or reading comprehension.
Multi-task learning (MTL) has become more commonly used in NLP. See here for a general overview of multi-task learning and here for MTL objectives for NLP. However, there is still much we don't understand about multi-task learning in general.
The main questions regarding MTL give rise to many interesting research directions:
Identify effective auxiliary tasks: One of the main questions is which tasks are useful for multi-task learning. Label entropy has been shown to be a predictor of MTL success (Alonso and Plank, 2017) [35], but this does not tell the whole story. In recent work (Augenstein et al., 2018) [36], we have found that auxiliary tasks with more data and more fine-grained labels are more useful. It would be useful if future MTL papers would not only propose a new model or auxiliary task, but also try to understand why a certain auxiliary task might be better than another closely related one.
Alternatives to hard parameter sharing: Hard parameter sharing is still the default modus operandi for MTL, but places a strong constraint on the model to compress knowledge pertaining to different tasks with the same parameters, which often makes learning difficult. We need better ways of doing MTL that are easy to use and work reliably across many tasks. Recently proposed methods such as cross-stitch units (Misra et al., 2017; Ruder et al., 2017) [37] [38] and a label embedding layer (Augenstein et al., 2018) are promising steps in this direction.
Artificial auxiliary tasks: The best auxiliary tasks are those, which are tailored to the target task and do not require any additional data. I have outlined a list of potential artificial auxiliary tasks here. However, it is not clear which of these work reliably across a number of diverse tasks or what variations or task-specific modifications are useful.
Creating models that perform well across languages and that can transfer knowledge from resource-rich to resource-poor languages is one of the most important research directions IMO. There has been much progress in learning cross-lingual representations that project different languages into a shared embedding space. Refer to Ruder et al. (2017) [39] for a survey.
Cross-lingual representations are commonly evaluated either intrinsically on similarity benchmarks or extrinsically on downstream tasks, such as text classification. While recent methods have advanced the state-of-the-art for many of these settings, we do not have a good understanding of the tasks or languages for which these methods fail and how to mitigate these failures in a task-independent manner, e.g. by injecting task-specific constraints (Mrkšić et al., 2017).
Novel architectures that outperform the current state-of-the-art and are tailored to specific tasks are regularly introduced, superseding the previous architecture. I have outlined best practices for different NLP tasks before, but without comparing such architectures on different tasks, it is often hard to gain insights from specialized architectures and tell which components would also be useful in other settings.
A particularly promising recent model is the Transformer (Vaswani et al., 2017) [40]. While the complete model might not be appropriate for every task, components such as multi-head attention or position-based encoding could be building blocks that are generally useful for many NLP tasks.
I hope you've found this collection of research directions useful. If you have suggestions on how to tackle some of these problems or ideas for related research topics, feel free to comment below.
Cover image is from Tobin et al. (2017).
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105). ↩︎
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., & Abbeel, P. (2017). Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World. arXiv Preprint arXiv:1703.06907. Retrieved from http://arxiv.org/abs/1703.06907 ↩︎
Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2017). mixup: Beyond Empirical Risk Minimization, 1–11. Retrieved from http://arxiv.org/abs/1710.09412 ↩︎
Li, Y., Cohn, T., & Baldwin, T. (2017). Robust Training under Linguistic Adversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (Vol. 2, pp. 21–27). ↩︎
Wang, D., & Eisner, J. (2016). The Galactic Dependencies Treebanks: Getting More Data by Synthesizing New Languages. Tacl, 4, 491–505. Retrieved from https://www.transacl.org/ojs/index.php/tacl/article/viewFile/917/212 https://transacl.org/ojs/index.php/tacl/article/view/917 ↩︎
Liu, T., Cui, Y., Yin, Q., Zhang, W., Wang, S., & Hu, G. (2017). Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (pp. 102–111). ↩︎
Xie, Z., Wang, S. I., Li, J., Levy, D., Nie, A., Jurafsky, D., & Ng, A. Y. (2017). Data Noising as Smoothing in Neural Network Language Models. In Proceedings of ICLR 2017. ↩︎
Samanta, S., & Mehta, S. (2017). Towards Crafting Text Adversarial Samples. arXiv preprint arXiv:1707.02812. ↩︎
Ebrahimi, J., Rao, A., Lowd, D., & Dou, D. (2017). HotFlip: White-Box Adversarial Examples for NLP. Retrieved from http://arxiv.org/abs/1712.06751 ↩︎
Jia, R., & Liang, P. (2017). Adversarial Examples for Evaluating Reading Comprehension Systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. ↩︎
Li, J., Monroe, W., Shi, T., Ritter, A., & Jurafsky, D. (2017). Adversarial Learning for Neural Dialogue Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Retrieved from http://arxiv.org/abs/1701.06547 ↩︎
Sennrich, R., Haddow, B., & Birch, A. (2015). Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. ↩︎
Mallinson, J., Sennrich, R., & Lapata, M. (2017). Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers (Vol. 1, pp. 881-893). ↩︎
Dong, L., Mallinson, J., Reddy, S., & Lapata, M. (2017). Learning to Paraphrase for Question Answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. ↩︎
Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., & Bengio, S. (2016). Generating Sentences from a Continuous Space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL). Retrieved from http://arxiv.org/abs/1511.06349 ↩︎
Guu, K., Hashimoto, T. B., Oren, Y., & Liang, P. (2017). Generating Sentences by Editing Prototypes. ↩︎
Hu, Z., Yang, Z., Liang, X., Salakhutdinov, R., & Xing, E. P. (2017). Toward Controlled Generation of Text. In Proceedings of the 34th International Conference on Machine Learning. ↩︎
Shen, T., Lei, T., Barzilay, R., & Jaakkola, T. (2017). Style Transfer from Non-Parallel Text by Cross-Alignment. In Advances in Neural Information Processing Systems. Retrieved from http://arxiv.org/abs/1705.09655 ↩︎
Mrkšić, N., Vulić, I., Séaghdha, D. Ó., Leviant, I., Reichart, R., Gašić, M., … Young, S. (2017). Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints. TACL. Retrieved from http://arxiv.org/abs/1706.00374 ↩︎
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM. ↩︎
Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching Networks for One Shot Learning. NIPS 2016. Retrieved from http://arxiv.org/abs/1606.04080 ↩︎
Ravi, S., & Larochelle, H. (2017). Optimization as a Model for Few-Shot Learning. In ICLR 2017. ↩︎
Snell, J., Swersky, K., & Zemel, R. S. (2017). Prototypical Networks for Few-shot Learning. In Advances in Neural Information Processing Systems. ↩︎
Song, Y., & Roth, D. (2014). On dataless hierarchical text classification. Proceedings of AAAI, 1579–1585. Retrieved from http://cogcomp.cs.illinois.edu/papers/SongSoRo14.pdf ↩︎
Song, Y., Upadhyay, S., Peng, H., & Roth, D. (2016). Cross-Lingual Dataless Classification for Many Languages. Ijcai, 2901–2907. ↩︎
Peters, M. E., Ammar, W., Bhagavatula, C., & Power, R. (2017). Semi-supervised sequence tagging with bidirectional language models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017). ↩︎
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. Proceedings of NAACL. ↩︎
Howard, J., & Ruder, S. (2018). Fine-tuned Language Models for Text Classification. arXiv preprint arXiv:1801.06146. ↩︎
Mou, L., Meng, Z., Yan, R., Li, G., Xu, Y., Zhang, L., & Jin, Z. (2016). How Transferable are Neural Networks in NLP Applications? Proceedings of 2016 Conference on Empirical Methods in Natural Language Processing. ↩︎
Conneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. ↩︎
Subramanian, S., Trischler, A., Bengio, Y., & Pal, C. J. (2018). Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning. In Proceedings of ICLR 2018. ↩︎
Nie, A., Bennett, E. D., & Goodman, N. D. (2017). DisSent: Sentence Representation Learning from Explicit Discourse Relations. arXiv Preprint arXiv:1710.04334. Retrieved from http://arxiv.org/abs/1710.04334 ↩︎
Alonso, H. M., & Plank, B. (2017). When is multitask learning effective? Multitask learning for semantic sequence prediction under varying data conditions. In EACL. Retrieved from http://arxiv.org/abs/1612.02251 ↩︎
Augenstein, I., Ruder, S., & Søgaard, A. (2018). Multi-task Learning of Pairwise Sequence Classification Tasks Over Disparate Label Spaces. In Proceedings of NAACL 2018. ↩︎
Misra, I., Shrivastava, A., Gupta, A., & Hebert, M. (2016). Cross-stitch Networks for Multi-task Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. http://doi.org/10.1109/CVPR.2016.433 ↩︎
Ruder, S., Bingel, J., Augenstein, I., & Søgaard, A. (2017). Sluice networks: Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142. ↩︎
Ruder, S., Vulić, I., & Søgaard, A. (2017). A Survey of Cross-lingual Word Embedding Models. arXiv Preprint arXiv:1706.04902. Retrieved from http://arxiv.org/abs/1706.04902 ↩︎
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention Is All You Need. In Advances in Neural Information Processing Systems. ↩︎
http://ruder.io/deep-learning-optimization-2017/ (Sun, 03 Dec 2017 15:36:00 GMT)
This post discusses the most exciting highlights and most promising directions in optimization for Deep Learning.
Improving Adam
Decoupling weight decay
Fixing the exponential moving average
Tuning the learning rate
Warm restarts
SGD with restarts
Snapshot ensembles
Adam with restarts
Learning to optimize
Understanding generalization
Deep Learning ultimately is about finding a minimum that generalizes well -- with bonus points for finding one fast and reliably. Our workhorse, stochastic gradient descent (SGD), is a 60-year-old algorithm (Robbins and Monro, 1951) [1] that is as essential to the current generation of Deep Learning algorithms as back-propagation.
Different optimization algorithms have been proposed in recent years, which use different equations to update a model's parameters. Adam (Kingma and Ba, 2015) [2] was introduced in 2015 and is arguably today still the most commonly used one of these algorithms. This indicates that from the Machine Learning practitioner's perspective, best practices for optimization for Deep Learning have largely remained the same.
New ideas, however, have been developed over the course of this year, which may shape the way we will optimize our models in the future. In this blog post, I will touch on the most exciting highlights and most promising directions in optimization for Deep Learning in my opinion. Note that this blog post assumes a familiarity with SGD and with adaptive learning rate methods such as Adam. To get up to speed, refer to this blog post for an overview of existing gradient descent optimization algorithms.
Despite the apparent supremacy of adaptive learning rate methods such as Adam, state-of-the-art results for many tasks in computer vision and NLP such as object recognition (Huang et al., 2017) [3] or machine translation (Wu et al., 2016) [4] have still been achieved by plain old SGD with momentum. Recent theory (Wilson et al., 2017) [5] provides some justification for this, suggesting that adaptive learning rate methods converge to different (and less optimal) minima than SGD with momentum. It is empirically shown that the minima found by adaptive learning rate methods perform generally worse compared to those found by SGD with momentum on object recognition, character-level language modeling, and constituency parsing. This seems counter-intuitive given that Adam comes with nice convergence guarantees and that its adaptive learning rate should give it an edge over the regular SGD. However, Adam and other adaptive learning rate methods are not without their own flaws.
One factor that partially accounts for Adam's poor generalization ability compared with SGD with momentum on some datasets is weight decay. Weight decay is most commonly used in image classification problems and decays the weights \(\theta_t\) after every parameter update by multiplying them by a decay rate \(w_t\) that is slightly less than \(1\):
\(\theta_{t+1} = w_t \, \theta_t \)
This prevents the weights from growing too large. As such, weight decay can also be understood as an \(\ell_2\) regularization term that depends on the weight decay rate \(w_t\) added to the loss:
\(\mathcal{L}_\text{reg} = \dfrac{w_t}{2} \|\theta_t \|^2_2 \)
Weight decay is commonly implemented in many neural network libraries either as the regularization term above or by directly modifying the gradient. As the gradient is modified in both the momentum and Adam update equations (via multiplication with other decay terms), weight decay no longer equals \(\ell_2\) regularization. Loshchilov and Hutter (2017) [6] thus propose to decouple weight decay from the gradient update by adding it after the parameter update, as in the original definition.
The SGD with momentum and weight decay (SGDW) update then looks like the following:
\(
\begin{split}
v_t &= \gamma v_{t-1} + \eta g_t \\
\theta_{t+1} &= \theta_t - v_t - \eta w_t \theta_t
\end{split}
\)
where \(\eta\) is the learning rate and the third term in the second equation is the decoupled weight decay. Similarly, for Adam with weight decay (AdamW) we obtain:
\(
\begin{split}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
\hat{m}_t &= \dfrac{m_t}{1 - \beta^t_1} \\
\hat{v}_t &= \dfrac{v_t}{1 - \beta^t_2} \\
\theta_{t+1} &= \theta_{t} - \dfrac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \hat{m}_t - \eta w_t \theta_t
\end{split}
\)
where \(m_t\) and \(\hat{m}_t\) and \(v_t\) and \(\hat{v}_t\) are the biased and bias-corrected estimates of the first and second moments respectively and \(\beta_1\) and \(\beta_2\) are their decay rates, with the same weight decay term added to it. The authors show that this substantially improves Adam's generalization performance and allows it to compete with SGD with momentum on image classification datasets.
In addition, it decouples the choice of the learning rate from the choice of the weight decay, which enables better hyperparameter optimization as the hyperparameters no longer depend on each other. It also separates the implementation of the optimizer from the implementation of the weight decay, which contributes to cleaner and more reusable code (see e.g. the fast.ai AdamW/SGDW implementation).
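As a minimal sketch of what the decoupled update amounts to in code (names and default values are illustrative, not taken from any particular library):
import numpy as np

# One SGDW step following the update above: the momentum step uses only the gradient,
# and weight decay is applied to the parameters separately, after the gradient-based update.
def sgdw_step(theta, grad, velocity, lr=0.1, momentum=0.9, weight_decay=1e-4):
    velocity = momentum * velocity + lr * grad
    theta = theta - velocity - lr * weight_decay * theta
    return theta, velocity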
Several recent papers (Dozat and Manning, 2017; Laine and Aila, 2017) [7],[8] empirically find that a lower \(\beta_2\) value, which controls the contribution of the exponential moving average of past squared gradients in Adam, e.g. \(0.99\) or \(0.9\) vs. the default \(0.999\) worked better in their respective applications, indicating that there might be an issue with the exponential moving average.
An ICLR 2018 submission formalizes this issue and pinpoints the exponential moving average of past squared gradients as another reason for the poor generalization behaviour of adaptive learning rate methods. Updating the parameters via an exponential moving average of past squared gradients is at the heart of adaptive learning rate methods such as Adadelta, RMSprop, and Adam. The contribution of the exponential average is well-motivated: It should prevent the learning rates from becoming infinitesimally small as training progresses, the key flaw of the Adagrad algorithm. However, this short-term memory of the gradients becomes an obstacle in other scenarios.
In settings where Adam converges to a suboptimal solution, it has been observed that some minibatches provide large and informative gradients, but as these minibatches only occur rarely, exponential averaging diminishes their influence, which leads to poor convergence. The authors provide an example for a simple convex optimization problem where the same behaviour can be observed for Adam.
To fix this behaviour, the authors propose a new algorithm, AMSGrad that uses the maximum of past squared gradients rather than the exponential average to update the parameters. The full AMSGrad update without bias-corrected estimates can be seen below:
\(
\begin{split}
\hat{v}_t &= \text{max}(\hat{v}_{t-1}, v_t) \\
\theta_{t+1} &= \theta_{t} - \dfrac{\eta}{\sqrt{\hat{v}_t} + \epsilon} m_t
\end{split}
\)
The authors observe improved performance compared to Adam on small datasets and on CIFAR-10.
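Continuing the NumPy sketch from above, the corresponding AMSGrad step (bias correction omitted, as in the update shown) could look like this; the hyperparameter defaults are illustrative:
# One AMSGrad step: keep the element-wise maximum of all past second-moment estimates
# and use it, instead of the current exponential average, to scale the update.
def amsgrad_step(theta, grad, m, v, v_hat, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, m, v, v_hat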
In many cases, it is not our models that require improvement and tuning, but our hyperparameters. Recent examples for language modelling demonstrate that tuning LSTM parameters (Melis et al., 2017) [9] and regularization parameters (Merity et al., 2017) [10] can yield state-of-the-art results compared to more complex models.
An important hyperparameter for optimization in Deep Learning is the learning rate \(\eta\). In fact, SGD has been shown to require a learning rate annealing schedule to converge to a good minimum in the first place. It is often thought that adaptive learning rate methods such as Adam are more robust to different learning rates, as they update the learning rate themselves. Even for these methods, however, there can be a large difference between a good and the optimal learning rate (psst... it's \(3e-4\)).
Zhang et al. (2017) [11] show that SGD with a tuned learning rate annealing schedule and momentum parameter is not only competitive with Adam, but also converges faster. On the other hand, while we might think that the adaptivity of Adam's learning rates might mimic learning rate annealing, an explicit annealing schedule can still be beneficial: If we add SGD-style learning rate annealing to Adam, it converges faster and outperforms SGD on Machine Translation (Denkowski and Neubig, 2017) [12].
In fact, learning rate annealing schedule engineering seems to be the new feature engineering as we can often find highly-tuned learning rate annealing schedules that improve the final convergence behaviour of our model. An interesting example of this is Vaswani et al. (2017) [13]. While it is usual to see a model's hyperparameters being subjected to large-scale hyperparameter optimization, it is interesting to see a learning rate annealing schedule as the focus of the same attention to detail: The authors use Adam with \(\beta_1=0.9\), a non-default \(\beta_2=0.98\), \(\epsilon = 10^{-9}\), and arguably one of the most elaborate annealing schedules for the learning rate \(\eta\):
\(\eta = d_\text{model}^{-0.5} \cdot \min(step\text{_}num^{-0.5}, step\text{_}num \cdot warmup\text{_}steps^{-1.5}) \)
where \(d_\text{model}\) is the model's hidden dimensionality (\(512\) for the base Transformer) and \(warmup\text{_}steps = 4000\).
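Written out as a small function (with \(d_\text{model} = 512\) and \(warmup\text{_}steps = 4000\) as defaults), the schedule is simply:
# The Transformer learning rate schedule from above; step_num starts at 1.
def transformer_lr(step_num, d_model=512, warmup_steps=4000):
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)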
Another recent paper by Smith et al. (2017) [14] demonstrates an interesting connection between the learning rate and the batch size, two hyperparameters that are typically thought to be independent of each other: They show that decaying the learning rate is equivalent to increasing the batch size, while the latter allows for increased parallelism. Conversely, we can reduce the number of model updates and thus speed up training by increasing the learning rate and scaling the batch size. This has ramifications for large-scale Deep Learning, which can now repurpose existing training schedules with no hyperparameter tuning.
Another effective recent development is SGDR (Loshchilov and Hutter, 2017) [15], an SGD alternative that uses warm restarts instead of learning rate annealing. In each restart, the learning rate is initialized to some value and is scheduled to decrease. Importantly, the restart is warm as the optimization does not start from scratch but from the parameters to which the model converged during the last step. The key factor is that the learning rate is decreased with an aggressive cosine annealing schedule, which rapidly lowers the learning rate and looks like the following:
\(\eta_t = \eta_{min}^i + \dfrac{1}{2}(\eta_{max}^i - \eta_{min}^i)(1 + \text{cos}(\dfrac{T_{cur}}{T_i}\pi)) \)
where \(\eta_{min}^i\) and \(\eta_{max}^i\) are ranges for the learning rate during the \(i\)-th run, \(T_{cur}\) indicates how many epochs passed since the last restart, and \(T_i\) specifies the epoch of the next restart. The warm restart schedules for \(T_i=50\), \(T_i=100\), and \(T_i=200\) compared with regular learning rate annealing are shown in Figure 1.
Figure 1: Learning rate schedules with warm restarts (Loshchilov and Hutter, 2017)
The high initial learning rate after a restart is used to essentially catapult the parameters out of the minimum to which they previously converged and to a different area of the loss surface. The aggressive annealing then enables the model to rapidly converge to a new and better solution. The authors empirically find that SGD with warm restarts requires 2 to 4 times fewer epochs than learning rate annealing and achieves comparable or better performance.
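The cosine annealing schedule above translates into a one-line function; here eta_min and eta_max are the per-run learning rate range, t_cur counts epochs since the last restart, and T_i is the length of the current run in epochs:
import math

# Cosine annealing with warm restarts (SGDR): the learning rate for the current epoch.
def sgdr_learning_rate(t_cur, T_i, eta_min, eta_max):
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / T_i))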
Learning rate annealing with warm restarts is also known as cyclical learning rates and has been originally proposed by Smith (2017) [16]. Two more articles by students of fast.ai (which has recently started to teach this method) that discuss warm restarts and cyclical learning rates can be found here and here.
Snapshot ensembles (Huang et al., 2017) [17] are a clever, recent technique that uses warm restarts to assemble an ensemble essentially for free when training a single model. The method trains a single model until convergence with the cosine annealing schedule that we have seen above. It then saves the model parameters, performs a warm restart, and then repeats these steps \(M\) times. In the end, all saved model snapshots are ensembled. The common SGD optimization behaviour on an error surface compared to the behaviour of snapshot ensembling can be seen in Figure 2.
Figure 2: SGD vs. snapshot ensemble (Huang et al., 2017)
The success of ensembling in general relies on the diversity of the individual models in the ensemble. Snapshot ensembling thus relies on the cosine annealing schedule's ability to enable the model to converge to a different local optimum after every restart. The authors demonstrate that this holds in practice, achieving state-of-the-art results on CIFAR-10, CIFAR-100, and SVHN.
Warm restarts did not work originally with Adam due to its dysfunctional weight decay, which we have seen before. After fixing weight decay, Loshchilov and Hutter (2017) similarly extend Adam to work with warm restarts. They set \(\eta_{min}^i=0\) and \(\eta_{max}^i=1\), which yields:
\(\eta_t = 0.5 + 0.5 \, \text{cos}(\dfrac{T_{cur}}{T_i}\pi)\)
They recommend to start with an initially small \(T_i\) (between \(1\%\) and \(10\%\) of the total number of epochs) and multiply it by a factor of \(T_{mult}\) (e.g. \(T_{mult}=2\)) at every restart.
One of the most interesting papers of last year (and reddit's "Best paper name of 2016" winner) was a paper by Andrychowicz et al. (2016) [18] where they train an LSTM optimizer to provide the updates to the main model during training. Unfortunately, learning a separate LSTM optimizer or even using a pre-trained LSTM optimizer for optimization greatly increases the complexity of model training.
Another very influential learning-to-learn paper from this year uses an LSTM to generate model architectures in a domain-specific language (Zoph and Quoc, 2017) [19]. While the search process requires vast amounts of resources, the discovered architectures can be used as-is to replace their existing counterparts. This search process has proved effective and found architectures that achieve state-of-the-art results on language modeling and results competitive with the state-of-the-art on CIFAR-10.
The same search principle can be applied to any other domain where key processes have been previously defined by hand. One such domain are optimization algorithms for Deep Learning. As we have seen before, optimization algorithms are more similar than they seem: All of them use a combination of an exponential moving average of past gradients (as in momentum) and of an exponential moving average of past squared gradients (as in Adadelta, RMSprop, and Adam) (Ruder, 2016) [20].
Bello et al. (2017) [21] define a domain-specific language that consists of primitives useful for optimization such as these exponential moving averages. They then sample an update rule from the space of possible update rules, use this update rule to train a model, and update the RNN controller based on the performance of the trained model on the test set. The full procedure can be seen in Figure 3.
Figure 3: Neural Optimizer Search (Bello et al., 2017)
In particular, they discover two update equations, PowerSign and AddSign. The update equation for PowerSign is the following:
\( \theta_{t+1} = \theta_{t} - \alpha^{f(t) * \text{sign}(g_t) * \text{sign}(m_t)} * g_t \)
where \(\alpha\) is a hyperparameter that is often set to \(e\) or \(2\), \(f(t)\) is either \(1\) or a decay function that performs linear, cyclical or decay with restarts based on time step \(t\), and \(m_t\) is the moving average of past gradients. The common configuration uses \(\alpha=e\) and no decay. We can observe that the update scales the gradient by \(\alpha^{f(t)}\) or \(1/\alpha^{f(t)}\) depending on whether the direction of the gradient and its moving average agree. This indicates that this momentum-like agreement between past gradients and the current one is a key piece of information for optimizing Deep Learning models.
AddSign in turn is defined as follows:
\( \theta_{t+1} = \theta_{t} - (\alpha + f(t) * \text{sign}(g_t) * \text{sign}(m_t)) * g_t\)
with \(\alpha\) often set to \(1\) or \(2\). Similar to the above, this time the update scales the gradient by \(\alpha + f(t)\) or \(\alpha - f(t)\), again depending on the agreement of the direction of the gradients. The authors show that PowerSign and AddSign outperform Adam, RMSprop, and SGD with momentum on CIFAR-10 and transfer well to other tasks such as ImageNet classification and machine translation.
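With \(f(t) = 1\) (no decay), the two discovered updates can be written as the following illustrative NumPy one-liners, where m is the moving average of past gradients:
import numpy as np

# PowerSign and AddSign parameter updates (f(t) = 1, i.e. no decay applied).
def powersign_update(theta, g, m, alpha=np.e):
    return theta - alpha ** (np.sign(g) * np.sign(m)) * g

def addsign_update(theta, g, m, alpha=1.0):
    return theta - (alpha + np.sign(g) * np.sign(m)) * g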
Optimization is closely tied to generalization as the minimum to which a model converges defines how well the model generalizes. Advances in optimization are thus closely correlated with theoretical advances in understanding the generalization behaviour of such minima and more generally of gaining a deeper understanding of generalization in Deep Learning.
However, our understanding of the generalization behaviour of deep neural networks is still very shallow. Recent work showed that the number of possible local minima grows exponentially with the number of parameters (Kawaguchi, 2016) [22]. Given the huge number of parameters of current Deep Learning architectures, it still seems almost magical that such models converge to solutions that generalize well, in particular given that they can completely memorize random inputs (Zhang et al., 2017) [23].
Keskar et al. (2017) [24] identify the sharpness of a minimum as a source for poor generalization: In particular, they show that sharp minima found by batch gradient descent have high generalization error. This makes intuitive sense, as we generally would like our functions to be smooth and a sharp minimum indicates a high irregularity in the corresponding error surface. However, more recent work suggests that sharpness may not be such a good indicator after all by showing that local minima that generalize well can be made arbitrarily sharp (Dinh et al., 2017) [25]. A Quora answer by Eric Jang also discusses these articles.
An ICLR 2018 submission demonstrates through a series of ablation analyses that a model's reliance on single directions in activation space, i.e. the activation of single units or feature maps is a good predictor of its generalization performance. They show that this holds across models trained on different datasets and for different degrees of label corruption. They find that dropout does not help to resolve this, while batch normalization discourages single direction reliance.
While these findings indicate that there is still much we do not know in terms of Optimization for Deep Learning, it is important to remember that convergence guarantees and a large body of work exists for convex optimization and that existing ideas and insights can also be applied to non-convex optimization to some extent. The large-scale optimization tutorial at NIPS 2016 provides an excellent overview of more theoretical work in this area (see the slides part 1, part 2, and the video).
I hope that I was able to provide an impression of some of the compelling developments in optimization for Deep Learning over the past year. I've undoubtedly failed to mention many other approaches that are equally important and noteworthy. Please let me know in the comments below what I missed, where I made a mistake or misrepresented a method, or which aspect of optimization for Deep Learning you find particularly exciting or underexplored.
You can find the discussion of this post on HN here.
Robbins, H., & Monro, S. (1951). A stochastic approximation method. The annals of mathematical statistics, 400-407. ↩︎
Huang, G., Liu, Z., Weinberger, K. Q., & van der Maaten, L. (2017). Densely Connected Convolutional Networks. In Proceedings of CVPR 2017. ↩︎
Wu, Y., Schuster, M., Chen, Z., Le, Q. V, Norouzi, M., Macherey, W., … Dean, J. (2016). Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv Preprint arXiv:1609.08144. ↩︎
Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., & Recht, B. (2017). The Marginal Value of Adaptive Gradient Methods in Machine Learning. arXiv Preprint arXiv:1705.08292. Retrieved from http://arxiv.org/abs/1705.08292 ↩︎
Loshchilov, I., & Hutter, F. (2017). Fixing Weight Decay Regularization in Adam. arXiv Preprint arXiv:1711.05101. Retrieved from http://arxiv.org/abs/1711.05101 ↩︎
Dozat, T., & Manning, C. D. (2017). Deep Biaffine Attention for Neural Dependency Parsing. In ICLR 2017. Retrieved from http://arxiv.org/abs/1611.01734 ↩︎
Melis, G., Dyer, C., & Blunsom, P. (2017). On the State of the Art of Evaluation in Neural Language Models. In arXiv preprint arXiv:1707.05589. ↩︎
Merity, S., Shirish Keskar, N., & Socher, R. (2017). Regularizing and Optimizing LSTM Language Models. arXiv Preprint arXiv:1708.02182. Retrieved from https://arxiv.org/pdf/1708.02182.pdf ↩︎
Zhang, J., Mitliagkas, I., & Ré, C. (2017). YellowFin and the Art of Momentum Tuning. In arXiv preprint arXiv:1706.03471. ↩︎
Denkowski, M., & Neubig, G. (2017). Stronger Baselines for Trustable Results in Neural Machine Translation. In Workshop on Neural Machine Translation (WNMT). Retrieved from https://arxiv.org/abs/1706.09733 ↩︎
Smith, S. L., Kindermans, P.-J., & Le, Q. V. (2017). Don't Decay the Learning Rate, Increase the Batch Size. In arXiv preprint arXiv:1711.00489. Retrieved from http://arxiv.org/abs/1711.00489 ↩︎
Loshchilov, I., & Hutter, F. (2017). SGDR: Stochastic Gradient Descent with Warm Restarts. In Proceedings of ICLR 2017. https://doi.org/10.1002/fut ↩︎
Smith, Leslie N. "Cyclical learning rates for training neural networks." In Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on, pp. 464-472. IEEE, 2017. ↩︎
Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., & de Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems. Retrieved from http://arxiv.org/abs/1606.04474 ↩︎
Zoph, B., & Le, Q. V. (2017). Neural Architecture Search with Reinforcement Learning. In ICLR 2017. ↩︎
Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv Preprint arXiv:1609.04747. ↩︎
Bello, I., Zoph, B., Vasudevan, V., & Le, Q. V. (2017). Neural Optimizer Search with Reinforcement Learning. In Proceedings of the 34th International Conference on Machine Learning. ↩︎
Kawaguchi, K. (2016). Deep Learning without Poor Local Minima. In Advances in Neural Information Processing Systems 29 (NIPS 2016). Retrieved from http://arxiv.org/abs/1605.07110 ↩︎
Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2017). Understanding deep learning requires rethinking generalization. In Proceedings of ICLR 2017. ↩︎
Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., & Tang, P. T. P. (2017). On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. In Proceedings of ICLR 2017. Retrieved from http://arxiv.org/abs/1609.04836 ↩︎
Dinh, L., Pascanu, R., Bengio, S., & Bengio, Y. (2017). Sharp Minima Can Generalize For Deep Nets. In Proceedings of the 34th International Conference on Machine Learning. ↩︎
http://ruder.io/word-embeddings-2017/ (Sat, 21 Oct 2017 09:00:00 GMT)
This post discusses the deficiencies of word embeddings and how recent approaches have tried to resolve them.
Subword-level embeddings
OOV handling
Multi-sense embeddings
Beyond words as points
Phrases and multi-word expressions
Temporal dimension
Lack of theoretical understanding
Task and domain-specific embeddings
Embeddings for multiple languages
Embeddings based on other contexts
The word2vec method based on skip-gram with negative sampling (Mikolov et al., 2013) [1] was published in 2013 and had a large impact on the field, mainly through its accompanying software package, which enabled efficient training of dense word representations and a straightforward integration into downstream models. In some respects, we have come far since then: Word embeddings have established themselves as an integral part of Natural Language Processing (NLP) models. In other aspects, we might as well be in 2013 as we have not found ways to pre-train word embeddings that have managed to supersede the original word2vec.
This post will focus on the deficiencies of word embeddings and how recent approaches have tried to resolve them. If not otherwise stated, this post discusses pre-trained word embeddings, i.e. word representations that have been learned on a large corpus using word2vec and its variants. Pre-trained word embeddings are most effective when millions of training examples are not available (and thus transferring knowledge from a large unlabelled corpus is useful), which is true for most tasks in NLP. For an introduction to word embeddings, refer to this blog post.
Word embeddings have been augmented with subword-level information for many applications such as named entity recognition (Lample et al., 2016) [2], part-of-speech tagging (Plank et al., 2016) [3], dependency parsing (Ballesteros et al., 2015; Yu & Vu, 2017) [4], [5], and language modelling (Kim et al., 2016) [6]. Most of these models employ a CNN or a BiLSTM that takes as input the characters of a word and outputs a character-based word representation.
For incorporating character information into pre-trained embeddings, however, character n-gram features have been shown to be more powerful than composition functions over individual characters (Wieting et al., 2016; Bojanowski et al., 2017) [7], [8]. Character n-grams -- by no means a novel feature for text categorization (Cavnar et al., 1994) [9] -- are particularly efficient and also form the basis of Facebook's fastText classifier (Joulin et al., 2016) [10]. Embeddings learned using fastText are available in 294 languages.
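To make the composition idea concrete, here is a minimal sketch of building a fastText-style word vector from character n-grams. It is a simplification: real fastText hashes n-grams into buckets and learns the n-gram vectors during training, whereas the `vectors` table and the word "where" below are just hypothetical stand-ins.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word, padded with boundary symbols."""
    padded = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

def subword_vector(word, vectors, dim=100):
    """Compose a word vector as the average of its known n-gram vectors."""
    known = [vectors[g] for g in char_ngrams(word) if g in vectors]
    if not known:
        return np.zeros(dim)          # nothing known about this word at all
    return np.mean(known, axis=0)

# Hypothetical pre-trained n-gram table; in fastText these come from training.
rng = np.random.default_rng(0)
vectors = {g: rng.normal(size=100) for g in char_ngrams("where")}
print(subword_vector("where", vectors).shape)   # (100,)
```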
Subword units based on byte-pair encoding have been found to be particularly useful for machine translation (Sennrich et al., 2016) [11] where they have replaced words as the standard input units. They are also useful for tasks with many unknown words such as entity typing (Heinzerling & Strube, 2017) [12], but have not been shown to be helpful yet for standard NLP tasks, where this is not a major concern. While they can be learned easily, it is difficult to see their advantage over character-based representations for most tasks (Vania & Lopez, 2017) [13].
Another choice for using pre-trained embeddings that integrate character information is to leverage a state-of-the-art language model (Jozefowicz et al., 2016) [14] trained on a large in-domain corpus, e.g. the 1 Billion Word Benchmark (a pre-trained Tensorflow model can be found here). While language modelling has been found to be useful for different tasks as auxiliary objective (Rei, 2017) [15], pre-trained language model embeddings have also been used to augment word embeddings (Peters et al., 2017) [16]. As we start to better understand how to pre-train and initialize our models, pre-trained language model embeddings are poised to become more effective. They might even supersede word2vec as the go-to choice for initializing word embeddings by virtue of having become more expressive and easier to train due to better frameworks and more computational resources over the last years.
One of the main problems of using pre-trained word embeddings is that they are unable to deal with out-of-vocabulary (OOV) words, i.e. words that have not been seen during training. Typically, such words are set to the UNK token and are assigned the same vector, which is an ineffective choice if the number of OOV words is large. Subword-level embeddings as discussed in the last section are one way to mitigate this issue. Another way, which is effective for reading comprehension (Dhingra et al., 2017) [17] is to assign OOV words their pre-trained word embedding, if one is available.
Recently, different approaches have been proposed for generating embeddings for OOV words on-the-fly. Herbelot and Baroni (2017) [18] initialize the embedding of OOV words as the sum of their context words and then rapidly refine only the OOV embedding with a high learning rate. Their approach is successful for a dataset that explicitly requires to model nonce words, but it is unclear if it can be scaled up to work reliably for more typical NLP tasks. Another interesting approach for generating OOV word embeddings is to train a character-based model to explicitly re-create pre-trained embeddings (Pinter et al., 2017) [19]. This is particularly useful in low-resource scenarios, where a large corpus is inaccessible and only pre-trained embeddings are available.
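As a rough illustration of the additive initialization behind these on-the-fly approaches, the sketch below builds a vector for an unseen word from the pre-trained vectors of its context words. The vocabulary, the example sentence and the choice of summing (without the subsequent high-learning-rate refinement that Herbelot and Baroni apply) are all simplifying assumptions.

```python
import numpy as np

def init_oov_embedding(context_words, pretrained, dim=100):
    """Initialize an OOV word's vector as the sum of its context words' vectors."""
    vecs = [pretrained[w] for w in context_words if w in pretrained]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

# Hypothetical pre-trained embeddings and a context for the nonce word.
rng = np.random.default_rng(1)
pretrained = {w: rng.normal(size=100) for w in ["small", "furry", "animal", "tree"]}
context = ["a", "small", "furry", "animal", "living", "in", "a", "tree"]
print(init_oov_embedding(context, pretrained).shape)   # (100,)
```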
Evaluation of pre-trained embeddings has been a contentious issue since their inception as the commonly used evaluation via word similarity or analogy datasets has been shown to only correlate weakly with downstream performance (Tsvetkov et al., 2015) [20]. The RepEval Workshop at ACL 2016 exclusively focused on better ways to evaluate pre-trained embeddings. As it stands, the consensus seems to be that -- while pre-trained embeddings can be evaluated on intrinsic tasks such as word similarity for comparison against previous approaches -- the best way to evaluate them is extrinsic evaluation on downstream tasks.
A commonly cited criticism of word embeddings is that they are unable to capture polysemy. A tutorial at ACL 2016 outlined the work in recent years that focused on learning separate embeddings for multiple senses of a word (Neelakantan et al., 2014; Iacobacci et al., 2015; Pilehvar & Collier, 2016) [21], [22], [23]. However, most existing approaches for learning multi-sense embeddings solely evaluate on word similarity. Pilehvar et al. (2017) [24] are one of the first to show results on topic categorization as a downstream task; while multi-sense embeddings outperform randomly initialized word embeddings in their experiments, they are outperformed by pre-trained word embeddings.
Given the stellar results Neural Machine Translation systems using word embeddings have achieved in recent years (Johnson et al., 2016) [25], it seems that the current generation of models is expressive enough to contextualize and disambiguate words in context without having to rely on a dedicated disambiguation pipeline or multi-sense embeddings. However, we still need better ways to understand whether our models are actually able to sufficiently disambiguate words and how to improve this disambiguation behaviour if necessary.
While we might not need separate embeddings for every sense of each word for good downstream performance, reducing each word to a point in a vector space is unarguably overly simplistic and causes us to miss out on nuances that might be useful for downstream tasks. An interesting direction is thus to employ other representations that are better able to capture these facets. Vilnis & McCallum (2015) [26] propose to model each word as a probability distribution rather than a point vector, which allows us to represent probability mass and uncertainty across certain dimensions. Athiwaratkun & Wilson (2017) [27] extend this approach to a multimodal distribution that allows to deal with polysemy, entailment, uncertainty, and enhances interpretability.
Rather than altering the representation, the embedding space can also be changed to better represent certain features. Nickel and Kiela (2017) [28], for instance, embed words in a hyperbolic space, to learn hierarchical representations. Finding other ways to represent words that incorporate linguistic assumptions or better deal with the characteristics of downstream tasks is a compelling research direction.
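For reference, the distance Nickel and Kiela optimize in the Poincaré ball model of hyperbolic space is (up to their exact conventions)

\[ d(u, v) = \operatorname{arcosh}\Bigg(1 + 2\,\frac{\lVert u - v \rVert^2}{(1 - \lVert u \rVert^2)(1 - \lVert v \rVert^2)}\Bigg). \]

Points near the boundary of the ball are very far from each other under this metric, which is what makes the space a natural fit for trees and other hierarchies.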
In addition to not being able to capture multiple senses of words, word embeddings also fail to capture the meanings of phrases and multi-word expressions, which can be a function of the meaning of their constituent words, or have an entirely new meaning. Phrase embeddings have been proposed already in the original word2vec paper (Mikolov et al., 2013) [29] and there has been consistent work on learning better compositional and non-compositional phrase embeddings (Yu & Dredze, 2015; Hashimoto & Tsuruoka, 2016) [30], [31]. However, similar to multi-sense embeddings, explicitly modelling phrases has so far not shown significant improvements on downstream tasks that would justify the additional complexity. Analogously, a better understanding of how phrases are modelled in neural networks would pave the way to methods that augment the capabilities of our models to capture compositionality and non-compositionality of expressions.
Bias in our models is becoming a larger issue and we are only starting to understand its implications for training and evaluating our models. Even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent (Bolukbasi et al., 2016) [32]. Understanding what other biases word embeddings capture and finding better ways to remove these biases will be key to developing fair algorithms for natural language processing.
Words are a mirror of the zeitgeist and their meanings are subject to continuous change; current representations of words might differ substantially from the way these words were used in the past and will be used in the future. An interesting direction is thus to take into account the temporal dimension and the diachronic nature of words. This can allow us to reveal laws of semantic change (Hamilton et al., 2016; Bamler & Mandt, 2017; Dubossarsky et al., 2017) [33], [34], [35], to model temporal word analogy or relatedness (Szymanski, 2017; Rosin et al., 2017) [36], [37], or to capture the dynamics of semantic relations (Kutuzov et al., 2017) [37:1].
Besides the insight that word2vec with skip-gram negative sampling implicitly factorizes a PMI matrix (Levy & Goldberg, 2014) [38], there has been comparatively little work on gaining a better theoretical understanding of the word embedding space and its properties, e.g. that summation captures analogy relations. Arora et al. (2016) [39] propose a new generative model for word embeddings, which treats corpus generation as a random walk of a discourse vector and establishes some theoretical motivations regarding the analogy behaviour. Gittens et al. (2017) [40] provide a more thorough theoretical justification of additive compositionality and show that skip-gram word vectors are optimal in an information-theoretic sense. Mimno & Thompson (2017) [41] furthermore reveal an interesting relation between word embeddings and the embeddings of context words, i.e. that they are not evenly dispersed across the vector space, but occupy a narrow cone that is diametrically opposite to the context word embeddings. Despite these additional insights, our understanding regarding the location and properties of word embeddings is still lacking and more theoretical work is necessary.
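To restate the factorization result mentioned at the start of this section: Levy and Goldberg show that, for skip-gram with k negative samples and ignoring the low-rank constraint, the objective is optimized when every word vector \(\vec{w}\) and context vector \(\vec{c}\) satisfy

\[ \vec{w} \cdot \vec{c} = \text{PMI}(w, c) - \log k, \]

i.e. SGNS implicitly factorizes the PMI matrix shifted by \(\log k\).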
One of the major downsides of using pre-trained embeddings is that the news data used for training them is often very different from the data on which we would like to use them. In most cases, however, we do not have access to millions of unlabelled documents in our target domain that would allow for pre-training good embeddings from scratch. We would thus like to be able to adapt embeddings pre-trained on large news corpora, so that they capture the characteristics of our target domain, but still retain all relevant existing knowledge. Lu & Zheng (2017) [42] proposed a regularized skip-gram model for learning such cross-domain embeddings. In the future, we will need even better ways to adapt pre-trained embeddings to new domains or to incorporate the knowledge from multiple relevant domains.
Rather than adapting to a new domain, we can also use existing knowledge encoded in semantic lexicons to augment pre-trained embeddings with information that is relevant for our task. An effective way to inject such relations into the embedding space is retro-fitting (Faruqui et al., 2015) [43], which has been expanded to other resources such as ConceptNet (Speer et al., 2017) [44] and extended with an intelligent selection of positive and negative examples (Mrkšić et al., 2017) [45]. Injecting additional prior knowledge into word embeddings such as monotonicity (You et al., 2017) [46], word similarity (Niebler et al., 2017) [47], task-related grading or intensity, or logical relations is an important research direction that will allow to make our models more robust.
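As a sketch of what retro-fitting actually does (notation simplified from Faruqui et al., 2015): given pre-trained vectors \(\hat{q}_i\) and a lexicon whose relations form edges \(E\), retro-fitting minimizes

\[ \Psi(Q) = \sum_{i} \Big[ \alpha_i \lVert q_i - \hat{q}_i \rVert^2 + \sum_{(i,j) \in E} \beta_{ij} \lVert q_i - q_j \rVert^2 \Big], \]

which can be solved by iterating the closed-form update

\[ q_i \leftarrow \frac{\sum_{j:(i,j) \in E} \beta_{ij} q_j + \alpha_i \hat{q}_i}{\sum_{j:(i,j) \in E} \beta_{ij} + \alpha_i}, \]

so each vector is pulled towards both its original position and its neighbours in the lexicon.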
Word embeddings are useful for a wide variety of applications beyond NLP such as information retrieval, recommendation, and link prediction in knowledge bases, which all have their own task-specific approaches. Wu et al. (2017) [48] propose a general-purpose model that is compatible with many of these applications and can serve as a strong baseline.
Rather than adapting word embeddings to any particular task, recent work has sought to create contextualized word vectors by augmenting word embeddings with embeddings based on the hidden states of models pre-trained for certain tasks, such as machine translation (McCann et al., 2017) [49] or language modelling (Peters et al., 2018) [50]. Together with fine-tuning pre-trained models (Howard and Ruder, 2018) [51], this is one of the most promising research directions.
As NLP models are being increasingly employed and evaluated on multiple languages, creating multilingual word embeddings is becoming a more important issue and has received increased interest over recent years. A promising direction is to develop methods that learn cross-lingual representations with as little parallel data as possible, so that they can be easily applied to learn representations even for low-resource languages. For a recent survey in this area, refer to Ruder et al. (2017) [52].
Word embeddings are typically learned only based on the window of surrounding context words. Levy & Goldberg (2014) [53] have shown that dependency structures can be used as context to capture more syntactic word relations; Köhn (2015) [54] finds that such dependency-based embeddings perform best for a particular multilingual evaluation method that clusters embeddings along different syntactic features.
Melamud et al. (2016) [55] observe that different context types work well for different downstream tasks and that simple concatenation of word embeddings learned with different context types can yield further performance gains. Given the recent success of incorporating graph structures into neural models for different tasks as -- for instance -- exhibited by graph-convolutional neural networks (Bastings et al., 2017; Marcheggiani & Titov, 2017) [56] [57], we can conjecture that incorporating such structures for learning embeddings for downstream tasks may also be beneficial.
Besides selecting context words differently, additional context may also be used in other ways: Tissier et al. (2017) [58] incorporate co-occurrence information from dictionary definitions into the negative sampling process to move related words closer together and prevent them from being used as negative samples. We can think of topical or relatedness information derived from other contexts such as article headlines or Wikipedia intro paragraphs that could similarly be used to make the representations more applicable to a particular downstream task.
It is nice to see that as a community we are progressing from applying word embeddings to every possible problem to gaining a more principled, nuanced, and practical understanding of them. This post was meant to highlight some of the current trends and future directions for learning word embeddings that I found most compelling. I've undoubtedly failed to mention many other areas that are equally important and noteworthy. Please let me know in the comments below what I missed, where I made a mistake or misrepresented a method, or just which aspect of word embeddings you find particularly exciting or unexplored.
Refer to the discussion on Hacker News for some more insights on word embeddings.
Other blog posts on word embeddings
If you want to learn more about word embeddings, these other blog posts on word embeddings are also available:
On word embeddings - Part 1
On word embeddings - Part 2: Approximating the softmax
On word embeddings - Part 3: The secret ingredients of word2vec
Unofficial Part 4: A survey of cross-lingual embedding models
Cover image credit: Hamilton et al. (2016)
Mikolov, T., Corrado, G., Chen, K., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. Proceedings of the International Conference on Learning Representations (ICLR 2013), 1–12. ↩︎
Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., & Dyer, C. (2016). Neural Architectures for Named Entity Recognition. In NAACL-HLT 2016. ↩︎
Plank, B., Søgaard, A., & Goldberg, Y. (2016). Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. ↩︎
Ballesteros, M., Dyer, C., & Smith, N. A. (2015). Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs. In Proceedings of EMNLP 2015. https://doi.org/10.18653/v1/D15-1041 ↩︎
Yu, X., & Vu, N. T. (2017). Character Composition Model with Convolutional Neural Networks for Dependency Parsing on Morphologically Rich Languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (pp. 672–678). ↩︎
Kim, Y., Jernite, Y., Sontag, D., & Rush, A. M. (2016). Character-Aware Neural Language Models. AAAI. Retrieved from http://arxiv.org/abs/1508.06615 ↩︎
Wieting, J., Bansal, M., Gimpel, K., & Livescu, K. (2016). Charagram: Embedding Words and Sentences via Character n-grams. Retrieved from http://arxiv.org/abs/1607.02789 ↩︎
Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2017). Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics. Retrieved from http://arxiv.org/abs/1607.04606 ↩︎
Cavnar, W. B., Trenkle, J. M., & Mi, A. A. (1994). N-Gram-Based Text Categorization. Ann Arbor MI 48113.2, 161–175. https://doi.org/10.1.1.53.9367 ↩︎
Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of Tricks for Efficient Text Classification. arXiv Preprint arXiv:1607.01759. Retrieved from http://arxiv.org/abs/1607.01759 ↩︎
Sennrich, R., Haddow, B., & Birch, A. (2016). Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Retrieved from http://arxiv.org/abs/1508.07909 ↩︎
Heinzerling, B., & Strube, M. (2017). BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages. Retrieved from http://arxiv.org/abs/1710.02187 ↩︎
Vania, C., & Lopez, A. (2017). From Characters to Words to in Between: Do We Capture Morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (pp. 2016–2027). ↩︎
Jozefowicz, R., Vinyals, O., Schuster, M., Shazeer, N., & Wu, Y. (2016). Exploring the Limits of Language Modeling. arXiv Preprint arXiv:1602.02410. Retrieved from http://arxiv.org/abs/1602.02410 ↩︎
Rei, M. (2017). Semi-supervised Multitask Learning for Sequence Labeling. In Proceedings of ACL 2017. ↩︎
Peters, M. E., Ammar, W., Bhagavatula, C., & Power, R. (2017). Semi-supervised sequence tagging with bidirectional language models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (pp. 1756–1765). ↩︎
Dhingra, B., Liu, H., Salakhutdinov, R., & Cohen, W. W. (2017). A Comparative Study of Word Embeddings for Reading Comprehension. arXiv preprint arXiv:1703.00993. ↩︎
Herbelot, A., & Baroni, M. (2017). High-risk learning: acquiring new word vectors from tiny data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. ↩︎
Pinter, Y., Guthrie, R., & Eisenstein, J. (2017). Mimicking Word Embeddings using Subword RNNs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Retrieved from http://arxiv.org/abs/1707.06961 ↩︎
Tsvetkov, Y., Faruqui, M., Ling, W., Lample, G., & Dyer, C. (2015). Evaluation of Word Vector Representations by Subspace Alignment. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17-21 September 2015, 2049–2054. ↩︎
Neelakantan, A., Shankar, J., Passos, A., & Mccallum, A. (2014). Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space. In Proceedings of EMNLP 2014 (pp. 1059–1069). ↩︎
Iacobacci, I., Pilehvar, M. T., & Navigli, R. (2015). SensEmbed: Learning Sense Embeddings for Word and Relational Similarity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (pp. 95–105). ↩︎
Pilehvar, M. T., & Collier, N. (2016). De-Conflated Semantic Representations. In Proceedings of EMNLP. ↩︎
Pilehvar, M. T., Camacho-Collados, J., Navigli, R., & Collier, N. (2017). Towards a Seamless Integration of Word Senses into Downstream NLP Applications. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 1857–1869). https://doi.org/10.18653/v1/P17-1170 ↩︎
Johnson, M., Schuster, M., Le, Q. V, Krikun, M., Wu, Y., Chen, Z., … Dean, J. (2016). Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. arXiv Preprint arXiv:1611.04558. ↩︎
Vilnis, L., & McCallum, A. (2015). Word Representations via Gaussian Embedding. ICLR. Retrieved from http://arxiv.org/abs/1412.6623 ↩︎
Athiwaratkun, B., & Wilson, A. G. (2017). Multimodal Word Distributions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017). ↩︎
Nickel, M., & Kiela, D. (2017). Poincaré Embeddings for Learning Hierarchical Representations. arXiv Preprint arXiv:1705.08039. Retrieved from http://arxiv.org/abs/1705.08039 ↩︎
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality. NIPS. ↩︎
Yu, M., & Dredze, M. (2015). Learning Composition Models for Phrase Embeddings. Transactions of the ACL, 3, 227–242. ↩︎
Hashimoto, K., & Tsuruoka, Y. (2016). Adaptive Joint Learning of Compositional and Non-Compositional Phrase Embeddings. ACL, 205–215. Retrieved from http://arxiv.org/abs/1603.06067 ↩︎
Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In 30th Conference on Neural Information Processing Systems (NIPS 2016). Retrieved from http://arxiv.org/abs/1607.06520 ↩︎
Hamilton, W. L., Leskovec, J., & Jurafsky, D. (2016). Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (pp. 1489–1501). ↩︎
Bamler, R., & Mandt, S. (2017). Dynamic Word Embeddings via Skip-Gram Filtering. In Proceedings of ICML 2017. Retrieved from http://arxiv.org/abs/1702.08359 ↩︎
Dubossarsky, H., Grossman, E., & Weinshall, D. (2017). Outta Control: Laws of Semantic Change and Inherent Biases in Word Representation Models. In Conference on Empirical Methods in Natural Language Processing (pp. 1147–1156). Retrieved from http://aclweb.org/anthology/D17-1119 ↩︎
Szymanski, T. (2017). Temporal Word Analogies : Identifying Lexical Replacement with Diachronic Word Embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (pp. 448–453). ↩︎
Rosin, G., Radinsky, K., & Adar, E. (2017). Learning Word Relatedness over Time. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Retrieved from https://arxiv.org/pdf/1707.08081.pdf ↩︎ ↩︎
Levy, O., & Goldberg, Y. (2014). Neural Word Embedding as Implicit Matrix Factorization. Advances in Neural Information Processing Systems (NIPS), 2177–2185. Retrieved from http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization ↩︎
Arora, S., Li, Y., Liang, Y., Ma, T., & Risteski, A. (2016). A Latent Variable Model Approach to PMI-based Word Embeddings. TACL, 4, 385–399. Retrieved from https://transacl.org/ojs/index.php/tacl/article/viewFile/742/204 ↩︎
Gittens, A., Achlioptas, D., & Mahoney, M. W. (2017). Skip-Gram – Zipf + Uniform = Vector Additivity. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (pp. 69–76). https://doi.org/10.18653/v1/P17-1007 ↩︎
Mimno, D., & Thompson, L. (2017). The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2863–2868). ↩︎
Lu, W., & Zheng, V. W. (2017). A Simple Regularization-based Algorithm for Learning Cross-Domain Word Embeddings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 2888–2894). ↩︎
Faruqui, M., Dodge, J., Jauhar, S. K., Dyer, C., Hovy, E., & Smith, N. A. (2015). Retrofitting Word Vectors to Semantic Lexicons. In NAACL 2015. ↩︎
Speer, R., Chin, J., & Havasi, C. (2017). ConceptNet 5.5: An Open Multilingual Graph of General Knowledge. In AAAI 31 (pp. 4444–4451). Retrieved from http://arxiv.org/abs/1612.03975 ↩︎
You, S., Ding, D., Canini, K., Pfeifer, J., & Gupta, M. (2017). Deep Lattice Networks and Partial Monotonic Functions. In 31st Conference on Neural Information Processing Systems (NIPS 2017). Retrieved from http://arxiv.org/abs/1709.06680 ↩︎
Niebler, T., Becker, M., Pölitz, C., & Hotho, A. (2017). Learning Semantic Relatedness From Human Feedback Using Metric Learning. In Proceedings of ISWC 2017. Retrieved from http://arxiv.org/abs/1705.07425 ↩︎
Wu, L., Fisch, A., Chopra, S., Adams, K., Bordes, A., & Weston, J. (2017). StarSpace: Embed All The Things! arXiv preprint arXiv:1709.03856. ↩︎
Mccann, B., Bradbury, J., Xiong, C., & Socher, R. (2017). Learned in Translation: Contextualized Word Vectors. In Advances in Neural Information Processing Systems. ↩︎
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. Proceedings of NAACL-HLT 2018. ↩︎
Howard, J., & Ruder, S. (2018). Universal Language Model Fine-tuning for Text Classification. In Proceedings of ACL 2018. Retrieved from http://arxiv.org/abs/1801.06146 ↩︎
Ruder, S., Vulić, I., & Søgaard, A. (2017). A Survey of Cross-lingual Word Embedding Models. arXiv preprint arXiv:1706.04902. Retrieved from http://arxiv.org/abs/1706.04902 ↩︎
Levy, O., & Goldberg, Y. (2014). Dependency-Based Word Embeddings. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), 302–308. https://doi.org/10.3115/v1/P14-2050 ↩︎
Köhn, A. (2015). What's in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17-21 September 2015, (2014), 2067–2073. ↩︎
Melamud, O., McClosky, D., Patwardhan, S., & Bansal, M. (2016). The Role of Context Types and Dimensionality in Learning Word Embeddings. In Proceedings of NAACL-HLT 2016 (pp. 1030–1040). Retrieved from http://arxiv.org/abs/1601.00893 ↩︎
Bastings, J., Titov, I., Aziz, W., Marcheggiani, D., & Sima'an, K. (2017). Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. ↩︎
Marcheggiani, D., & Titov, I. (2017). Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. ↩︎
Tissier, J., Gravier, C., & Habrard, A. (2017). Dict2Vec : Learning Word Embeddings using Lexical Dictionaries. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Retrieved from http://aclweb.org/anthology/D17-1024 ↩︎ | CommonCrawl |
These pages provide run throughs of the Labcoat Leni boxes in Discovering Statistics Using IBM SPSS Statistics (5th edition).
Is Friday 13th unlucky?
Let's begin with accidents and poisoning on Friday the 6th. First, arrange the scores in ascending order: 1, 1, 4, 6, 9, 9.
The median will be the (n + 1)/2th score. There are 6 scores, so this will be the 7/2 = 3.5th. The 3.5th score in our ordered list is half way between the 3rd and 4th scores which is (4+6)/2= 5 accidents.
The mean is 5 accidents:
\[ \begin{align} \bar{X} &= \frac{\sum_{i = 1}^{n}x_i}{n} \\ &= \frac{1 + 1 + 4 + 6 + 9 + 9}{6} \\ &= \frac{30}{6} \\ &= 5 \end{align} \]
The lower quartile is the median of the lower half of scores. If we split the data in half, there will be 3 scores in the bottom half (lowest scores) and 3 in the top half (highest scores). The median of the bottom half will be the (3+1)/2 = 2nd score below the mean. Therefore, the lower quartile is 1 accident.
The upper quartile is the median of the upper half of scores. If we again split the data in half and take the highest 3 scores, the median will be the (3+1)/2 = 2nd score above the mean. Therefore, the upper quartile is 9 accidents.
The interquartile range is the difference between the upper and lower quartiles: 9 − 1 = 8 accidents.
To calculate the sum of squares, first take the mean from each score, then square this difference, and finally, add up these squared values:
Score   Error (Score − Mean)   Error Squared
1       −4                     16
1       −4                     16
4       −1                     1
6       1                      1
9       4                      16
9       4                      16
So, the sum of squared errors is: 16 + 16 + 1 + 1 + 16 + 16 = 66.
The variance is the sum of squared errors divided by the degrees of freedom (N − 1):
\[ s^{2} = \frac{\text{sum of squares}}{N- 1} = \frac{66}{5} = 13.20 \]
The standard deviation is the square root of the variance:
\[ s = \sqrt{\text{variance}} = \sqrt{13.20} = 3.63 \]
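If you want to check these hand calculations (and the ones for the other three sets of scores below), here is a short Python sketch that mirrors the median-of-halves rule used for the quartiles. Note that SPSS and NumPy's default percentile rule can give slightly different quartiles because several conventions exist.

```python
from statistics import median, mean

def describe(scores):
    """Descriptive statistics using the median-of-halves rule for quartiles."""
    s = sorted(scores)
    n = len(s)
    lower, upper = s[:n // 2], s[(n + 1) // 2:]   # halves either side of the median
    m = mean(s)
    ss = sum((x - m) ** 2 for x in s)             # sum of squared errors
    variance = ss / (n - 1)
    return {"median": median(s), "mean": m,
            "LQ": median(lower), "UQ": median(upper),
            "IQR": median(upper) - median(lower),
            "SS": ss, "variance": variance, "SD": variance ** 0.5}

print(describe([1, 1, 4, 6, 9, 9]))
# median 5, mean 5, LQ 1, UQ 9, IQR 8, SS 66, variance 13.2, SD ≈ 3.63
```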
Next let's look at accidents and poisoning on Friday the 13th. First, arrange the scores in ascending order: 5, 5, 6, 6, 7, 7.
The median will be the (n + 1)/2th score. There are 6 scores, so this will be the 7/2 = 3.5th. The 3.5th score in our ordered list is half way between the 3rd and 4th scores which is (6+6)/2 = 6 accidents.
\[ \begin{align} \bar{X} &= \frac{\sum_{i = 1}^{n}x_{i}}{n} \\ &= \frac{5 + 5 + 6 + 6 + 7 + 7}{6} \\ &= \frac{36}{6} \\ &= 6 \\ \end{align} \]
The lower quartile is the median of the lower half of scores. If we split the data in half, there will be 3 scores in the bottom half (lowest scores) and 3 in the top half (highest scores). The median of the bottom half will be the (3+1)/2 = 2nd score below the mean. Therefore, the lower quartile is 5 accidents.
To calculate the sum of squares, first take the mean from each score, then square this difference, and finally, add up these squared values:

Score   Error (Score − Mean)   Error Squared
5       −1                     1
5       −1                     1
6       0                      0
6       0                      0
7       1                      1
7       1                      1
So, the sum of squared errors is: 1 + 0 + 1 + 1 + 1 + 0 = 4.
\[ s^{2} = \frac{\text{sum of squares}}{N - 1} = \frac{4}{5} = 0.8 \]
\[ s = \sqrt{\text{variance}} = \sqrt{0.8} = 0.894 \]
Next, let's look at traffic accidents on Friday the 6th. First, arrange the scores in ascending order: 3, 5, 6, 9, 11, 11.
The median will be the (n + 1)/2th score. There are 6 scores, so this will be the 7/2 = 3.5th. The 3.5th score in our ordered list is half way between the 3rd and 4th scores. The 3rd score is 6 and the 4th score is 9. Therefore the 3.5th score is (6+9)/2 = 7.5 accidents.
The mean is 7.5 accidents:
\[ \begin{align} \bar{X} &= \frac{\sum_{i = 1}^{n}x_{i}}{n} \\ &= \frac{3 + 5 + 6 + 9 + 11 + 11}{6} \\ &= \frac{45}{6} \\ &= 7.5 \end{align} \]
The lower quartile is the median of the lower half of scores. If we split the data in half, there will be 3 scores in the bottom half (lowest scores) and 3 in the top half (highest scores). The median of the bottom half will be the (3+1)/2 = 2nd score below the mean. Therefore, the lower quartile is 5 accidents.
The upper quartile is the median of the upper half of scores. If we again split the data in half and take the highest 3 scores, the median will be the (3+1)/2 = 2nd score above the mean. Therefore, the upper quartile is 11 accidents.
The interquartile range is the difference between the upper and lower quartiles: 11 − 5 = 6 accidents.
To calculate the sum of squares, first take the mean from each score, then square this difference, and finally, add up these squared values:

Score   Error (Score − Mean)   Error Squared
3       −4.5                   20.25
5       −2.5                   6.25
6       −1.5                   2.25
9       1.5                    2.25
11      3.5                    12.25
11      3.5                    12.25
So, the sum of squared errors is: 2.25 + 2.25 + 12.25 + 12.25 + 20.25 + 6.25 = 55.5.
\[ s^{2} = \frac{\text{sum of squares}}{N - 1} = \frac{55.5}{5} = 11.10 \]
The standard deviation is the square root of the variance:
\[ s = \sqrt{\text{variance}} = \sqrt{11.10} = 3.33 \]
Finally, let's look at traffic accidents on Friday the 13th. First, arrange the scores in ascending order: 4, 10, 12, 12, 13, 14.
The median will be the (n + 1)/2th score. There are 6 scores, so this will be the 7/2 = 3.5th. The 3.5th score in our ordered list is half way between the 3rd and 4th scores. The 3rd score is 12 and the 4th score is 12. Therefore the 3.5th score is (12+12)/2= 12 accidents.
The mean is 10.83 accidents:
\[ \begin{align} \bar{X} &= \frac{\sum_{i = 1}^{n}x_{i}}{n} \\ &= \frac{4 + 10 + 12 + 12 + 13 + 14}{6} \\ &= \frac{65}{6} \\ &= 10.83 \end{align} \]
The lower quartile is the median of the lower half of scores. If we split the data in half, there will be 3 scores in the bottom half (lowest scores) and 3 in the top half (highest scores). The median of the bottom half will be the (3+1)/2 = 2nd score below the mean. Therefore, the lower quartile is 10 accidents.
The upper quartile is the median of the upper half of scores. If we again split the data in half and take the highest 3 scores, the median will be the (3+1)/2 = 2nd score above the mean. Therefore, the upper quartile is 13 accidents.
The interquartile range is the difference between the upper and lower quartiles: 13 − 10 = 3 accidents.
To calculate the sum of squares, first take the mean from each score, then square this difference, and finally, add up these squared values:

Score   Error (Score − Mean)   Error Squared
4       −6.83                  46.65
10      −0.83                  0.69
12      1.17                   1.37
12      1.17                   1.37
13      2.17                   4.71
14      3.17                   10.05
So, the sum of squared errors is: 46.65 + 0.69 + 1.37 + 1.37 + 4.71 + 10.05 = 64.84.
\[ s^{2} = \frac{\text{sum of squares}}{N- 1} = \frac{64.84}{5} = 12.97 \]
\[ s = \sqrt{\text{variance}} = \sqrt{12.97} = 3.6 \]
No Labcoat Leni in this chapter.
Researcher degrees of freedom: a sting in the tale
No solution required.
Gonna be a rock 'n' roll singer
Using a task from experimental economics called the ultimatum game, individuals are assigned the role of either proposer or responder and paired randomly. Proposers were allocated $10 from which they had to make a financial offer to the responder (e.g., $2). The responder can accept or reject this offer. If the offer is rejected neither party gets any money, but if the offer is accepted the responder keeps the offered amount (e.g., $2), and the proposer keeps the original amount minus what they offered (e.g., $8). For half of the participants the song 'It's a long way to the top' sung by Bon Scott was playing in the background; for the remainder 'Shoot to thrill' sung by Brian Johnson was playing. Oxoby measured the offers made by proposers, and the minimum accepted by responders (called the minimum acceptable offer). He reasoned that people would accept lower offers and propose higher offers when listening to something they like (because of the 'feel-good factor' the music creates). Therefore, by comparing the value of offers made and the minimum acceptable offers in the two groups he could see whether people have more of a feel-good factor when listening to Bon or Brian. These data are estimated from Figures 1 and 2 in the paper because I couldn't get hold of the author to get the original data files. The offers made (in dollars) are as follows (there were 18 people per group):
Bon Scott group: 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5
Brian Johnson group: 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5
Enter these data into the IBM SPSS Statistics data editor, remembering to include value labels, to set the measure property, to give each variable a proper label, and to set the appropriate number of decimal places. This file can be found in oxoby_2008_offers.sav and should look like this:
Completed data editor
Or with the value labels off, like this:
Gonna be a rock 'n' roll singer (again!)
First, let's produce a population pyramid for the minimum acceptable offer data. To do this, open the file oxoby_2008_mao.sav, access Graphs > Chart Builder … and then select Histogram in the list labelled Choose from to bring up the gallery. This gallery has four icons representing different types of histogram, and you should select the appropriate one either by double-clicking on it, or by dragging it onto the canvas in the Chart Builder. Click on the population pyramid icon (see the book chapter) to display the template for this graph on the canvas. Then from the variable list select the variable representing the minimum acceptable offer and drag it to the drop zone that sets it as the variable that you want to plot. Then drag the variable representing background music to the drop zone that sets it as the variable for which you want to plot different distributions. Click OK to produce the graph. The resulting population pyramid is shown below.
Population pyramid of minimum acceptable offers
We can compare the resulting population pyramid above with Figure 2 from the original article (below). Both graphs show that MAOs were higher when participants heard the music of Bon Scott. This suggests that more offers would be rejected when listening to Bon Scott than when listening to Brian Johnson.
Oxoby (2008) Figure 2
Next we want to produce a population pyramid for number of offers made. To do this, open the file oxoby_2008_offers.sav, access Graphs > Chart Builder … and then select Histogram in the list labelled Choose from to bring up the gallery. This gallery has four icons representing different types of histogram, and you should select the appropriate one either by double-clicking on it, or by dragging it onto the canvas in the Chart Builder. Click on the population pyramid icon (see the book chapter) to display the template for this graph on the canvas. Then drag the variable representing offers made to the drop zone that sets it as the variable that you want to plot. Next, drag the variable representing background music to the drop zone that sets it as the variable for which you want to plot different distributions. Click OK to produce the graph. The resulting population pyramid is shown below.
Population pyramid of offers made
We can compare the resulting population pyramid above with Figure 1 from the original article (below). Both graphs show that offers made were lower when participants heard the music of Bon Scott.
Select Graphs > Chart Builder … and then select a simple bar chart. The y-axis needs to be the dependent variable, or the thing you've measured, or more simply the thing for which you want to display the mean. In this case it would be the four different colours (pale pink, light pink, dark pink and red). So select all of these colours from the variable list and drag them into the y-axis drop zone:
Completed dialog box
A dialog box should pop up (see below) informing you that the values from your variables will be used to summarize your data:
This is fine, so click OK. To add error bars to your graph select Display error bars and make sure you have selected Mean from the statistics drop-down list:
Click OK to produce the graph:
The mean ratings for all colours are fairly similar, suggesting that men don't prefer the colour red. In fact, the colour red has the lowest mean rating, suggesting that men liked the red genitalia the least. The light pink genital colour had the highest mean rating, but don't read anything into that: the means are all very similar.
No Labcoat Lenis in this chapter.
Having a quail of a time?
To run a Wilcoxon test you need to follow the general procedure outlined in the book chapter. First, select Analyze > Nonparametric Tests > Related Samples …. In the Objective tab select Customize analysis. In the Fields tab you will see all of the variables in the data editor listed in the box labelled Fields. If you assigned roles for the variables in the data editor Use predefined roles will be selected and SPSS Statistics will have automatically assigned your variables. Otherwise Use custom field assignments will be selected and you'll need to assign variables yourself. Drag both dependent variables (select Signaled Male then, holding down Ctrl (⌘ on a Mac), click Control Male) to the box labelled Test Fields. The completed dialog box is shown below.
In the Settings tab select Choose Tests. To do a Wilcoxon test select Customize tests and Wilcoxon matched-pair signed rank (2 samples) and click Run.
The summary table in the output tells you that the significance of the test was .022 and suggests that we reject the null hypothesis. Double-click on this table to enter the model viewer. Notice that we have different coloured bars: the brown bars represent positive differences (these are females that produced fewer eggs fertilized by the male in his signalled chamber than the male in his control chamber) and the blue bars negative differences (these are females that produced more eggs fertilized by the male in his signalled chamber than the male in his control chamber). We can see that the bars are predominantly blue. The legend of the graph confirms that there were 3 positive differences, 10 negative differences and 1 tie. This means that for 10 of the 14 quails, the number of eggs fertilized by the male in his signalled chamber was greater than for the male in his control chamber, indicating an adaptive benefit to learning that a chamber signalled reproductive opportunity. The one tied rank tells us that there was one female who produced an equal number of fertilized eggs for both males.
There is a table below the histogram that tells us the test statistic (13.50), its standard error (13.92), and the corresponding z-score (−2.30). The p-value associated with the z-score is .022, which means that there's a probability of .022 that we would get a value of z at least as large as the one we have if there were no effect in the population; because this value is less than the critical value of .05 we should conclude that there were a greater number of fertilized eggs from males mating in their signalled context, z = −2.30, p < .05. In other words, conditioning (as a learning mechanism) provides some adaptive benefit in that it makes it more likely that you will pass on your genes.
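If you wanted to check this result outside SPSS, SciPy's `wilcoxon` function runs the same signed-rank test on the two paired columns. The numbers below are made-up placeholders for the 14 females' counts, not the real data, so only the shape of the call matters.

```python
from scipy.stats import wilcoxon

# Hypothetical paired counts (one pair per female); replace with the real data.
signalled = [4, 3, 5, 2, 4, 3, 1, 2, 3, 4, 2, 1, 3, 2]
control   = [1, 2, 2, 1, 1, 1, 0, 0, 2, 1, 1, 0, 2, 1]

stat, p = wilcoxon(signalled, control)
print(f"T = {stat:.2f}, p = {p:.3f}")
```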
The authors concluded as follows (p. 760):
Of the 78 eggs laid by the test females, 39 eggs were fertilized. Genetic analysis indicated that 28 of these (72%) were fertilized by the signalled males, and 11 were fertilized by the control males. Ten of the 14 females in the experiment produced more eggs fertilized by the signalled male than by the control male (see Fig. 1; Wilcoxon signed-ranks test, T = 13.5, p < .05). These effects were independent of the order in which the 2 males copulated with the female. Of the 39 fertilized eggs, 20 were sired by the 1st male and 19 were sired by the 2nd male. The present findings show that when 2 males copulated with the same female in succession, the male that received a Pavlovian CS signalling copulatory opportunity fertilized more of the female's eggs. Thus, Pavlovian conditioning increased reproductive fitness in the context of sperm competition.
Eggs-traordinary
To run a Kruskal–Wallis test, follow the general procedure outlined in the book chapter. First, select Analyze > Nonparametric Tests > Independent Samples …. In the Objective tab select Customize analysis. In the Fields tab you will see all of the variables in the data editor listed in the box labelled Fields. If you assigned roles for the variables in the data editor Use predefined roles will be selected and SPSS Statistics will have automatically assigned your variables. Otherwise Use custom field assignments will be selected and you'll need to assign variables yourself. Drag both dependent variables (select Percentage of Eggs Fertilised then, holding down Ctrl (⌘ on a Mac), click Time taken to initiate copulation) to the box labelled Test Fields. Next, drag the grouping variable, in this case Group, to the box labelled Groups. The completed dialog box is shown below.
In the Settings tab select Choose Tests. To do a Kruskal–Wallis test select Customize tests and Kruskal-Wallis 1-way ANOVA (k samples). Below this option there is a drop-down list labelled Multiple comparisons; select All pairwise and be on your way by clicking Run.
The summary table tells us for both outcome variables that there was a significant effect, and we are given a little message of advice to reject the null hypotheses. How helpful.
Double-click on the first row of the summary table to open up the model viewer window, which shows the results of whether the percentage of eggs fertilized was different across groups in more detail (see below). Here we can see the test statistic, H, for the Kruskal–Wallis (11.955), its associated degrees of freedom (2) and the significance. The significance value of .003 is less than .05, so we could conclude that the percentage of eggs fertilized was significantly different across the three groups.
Next, double-click on the second row of the summary table to open up the model viewer window, which displays the results of the test of whether the time taken to initiate copulation was different across groups in more detail. In this window we can see the test statistic, H, for the Kruskal–Wallis (32.244), its associated degrees of freedom (2) and the significance, which is .000; because this value is less than .05 we could conclude that the time taken to initiate copulation differed significantly across the three groups.
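For readers who prefer to double-check outside SPSS, the equivalent test in Python is `scipy.stats.kruskal`. The three lists below are placeholders for the fertilization percentages of the three groups, not the values from the data file.

```python
from scipy.stats import kruskal

# Hypothetical group data; substitute the real percentages from the data file.
control = [30, 35, 40, 38, 33, 36]
non_fetishistic = [32, 37, 41, 39, 35]
fetishistic = [55, 60, 58, 62, 57]

h, p = kruskal(control, non_fetishistic, fetishistic)
print(f"H = {h:.2f}, p = {p:.4f}")
```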
We know that there are differences between the groups but we don't know where these differences lie. One way to see which groups differ is to look at boxplots. SPSS produces boxplots for us (see the outputs above). If we look at the boxplot in the first output (percentage of eggs fertilized), using the control as our baseline, the medians for the non-fetishistic male quail and the control group were similar, indicating that the non-fetishistic males yielded similar rates of fertilization to the control group. However, the median of the fetishistic males is higher than the other two groups, suggesting that the fetishistic male quail yielded higher rates of fertilization than both the non-fetishistic male quail and the control male quail.
If we now look at the boxplot for the time taken to initiate copulation, the medians suggest that non-fetishistic males had shorter copulatory latencies than both the fetishistic male quail and the control male. However, these conclusions are subjective. What we really need are some follow-up analyses.
The output of the follow-up tests won't be immediately visible in the model viewer window. The right-hand side of the model viewer window shows the main output by default (labelled the Independent Samples Test View), but we can change what is visible in the right-hand panel by using the drop-down list at the bottom of the window labelled View. By clicking on this drop-down list you'll see several options including Pairwise Comparisons (because we selected All pairwise when we ran the analysis). Selecting this option displays the output for the follow-up analysis in the right-hand panel of the model viewer, and to switch back to the main output you would use the same drop-down list but select Independent Samples Test View.
Let's look at the pairwise comparisons first for the percentage of eggs fertilized first (see output below). The diagram at the top shows the average rank within each group: so, for example, the average rank in the fetishistic group was 41.82, and in the non-fetishistic group it was 26.97. This diagram will also highlight differences between groups by using a different coloured line to connect them. In the current example, there are significant differences between the fetishistic group and the control group, and also between the fetishistic group and the non-fetishistic group, which is why these connecting lines are in yellow. There was no significant difference between the control group and the non-fetishistic group, which is why there is no connecting line. The table underneath shows all of the possible comparisons. The column labelled Adj.Sig. contains the adjusted p-values and it is this column that we need to interpret (no matter how tempted we are to interpret the one labelled Sig.). Looking at this column, we can see that significant differences were found between the control group and the fetishistic group, p = .002, and between the fetishistic group and the non-fetishistic group, p = .039. However, the non-fetishistic group and the control group did not differ significantly, p = 1. We know by looking at the boxplot and the ranks that the fetishistic males yielded significantly higher rates of fertilization than both the non-fetishistic male quail and the control male quail.
Let's now look at the pairwise comparisons for the time taken to initiate copulation (see output below). The diagram highlights differences between groups by using a different coloured line to connect them. In the current example, there was not a significant difference between the fetishistic group and the control group, as indicated by the absence of a connecting line. However, there were significant differences between the fetishistic group and the non-fetishistic group, and between the non-fetishistic group and the control, which is why they are connected with a yellow line. The table underneath shows all of the possible comparisons. Interpret the column labelled Adj.Sig. which contains the p-values adjusted for the number of comparisons. Significant differences were found between the control group and the non-fetishistic group, p = .000, and between the fetishistic group and the non-fetishistic group, p = .000. However, the fetishistic group and the control group did not differ significantly, p = .743. We know by looking at the boxplot and the ranks that the non-fetishistic males yielded significantly shorter latencies to initiate copulation than the fetishistic males and the controls.
The authors reported as follows (p. 429):
Kruskal–Wallis analysis of variance (ANOVA) confirmed that female quail partnered with the different types of male quail produced different percentages of fertilized eggs, \(\chi^{2}\)(2, N = 59) =11.95, p < .05, \(\eta^{2}\) = 0.20. Subsequent pairwise comparisons with the Mann–Whitney U test (with the Bonferroni correction) indicated that fetishistic male quail yielded higher rates of fertilization than both the nonfetishistic male quail (U = 56.00, N1 = 17, N2 = 15, effect size = 8.98, p < .05) and the control male quail (U = 100.00, N1 = 17, N2 = 27, effect size = 12.42, p < .05). However, the nonfetishistic group was not significantly different from the control group (U = 176.50, N1 = 15, N2 = 27, effect size = 2.69, p > .05).
For the latency data they reported as follows:
A Kruskal–Wallis analysis indicated significant group differences,\(\ \chi^{2}\)(2, N = 59) = 32.24, p < .05, \(\eta^{2}\) = 0.56. Pairwise comparisons with the Mann–Whitney U test (with the Bonferroni correction) showed that the nonfetishistic males had significantly shorter copulatory latencies than both the fetishistic male quail (U = 0.00, N1 = 17, N2 = 15, effect size = 16.00, p < .05) and the control male quail (U = 12.00, N1 = 15, N2 = 27, effect size = 19.76, p < .05). However, the fetishistic group was not significantly different from the control group (U = 161.00, N1 = 17, N2 = 27, effect size = 6.57, p > .05). (p. 430)
These results support the authors' theory that fetishist behaviour may have evolved because it offers some adaptive function (such as preparing for the real thing).
Why do you like your lecturers?
We can run this analysis by loading the file and just pretty much selecting everything in the variable list and running a Pearson correlation. The dialog box will look like this:
The resulting output will look like this:
This looks pretty horrendous, but there are a lot of correlations that we don't need. First, the table is symmetrical around the diagonal so we can first ignore either the top diagonal or the bottom (the values are the same). The second thing is that we're interested only in the correlations between students' personality and what they want in lecturers. We're not interested in how their own five personality traits correlate with each other (i.e. if a student is neurotic are they conscientious too?). I have shaded out all of the correlations that we can ignore so that we can focus on the top right quadrant, which replicated the values reported in the original research paper (part of the authors' table is below so you can see how they reported these values – match these values to the values in your output):
As for what we can conclude, well, neurotic students tend to want agreeable lecturers, r = .10, p = .041; extroverted students tend to want extroverted lecturers, r = .15, p = .010; students who are open to experience tend to want lecturers who are open to experience, r = .20, p < .001, and don't want agreeable lecturers, r = −.16, p < .001; agreeable students want every sort of lecturer apart from neurotic. Finally, conscientious students tend to want conscientious lecturers, r = .22, p < .001, and extroverted ones, r = .10, p = .09 (note that the authors report the one-tailed p-value), but don't want neurotic ones, r = −.14, p = .005.
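The same correlation matrix can be produced in a couple of lines of pandas if you want to work with the values programmatically. The file name and variable names below are assumptions (the student names are taken from the regression syntax further down this page), so adjust them to match your copy of the data file.

```python
import pandas as pd

# pandas.read_spss needs the pyreadstat package installed.
df = pd.read_spss("chamorro_premuzic.sav")   # hypothetical file name

students = ["studentN", "studentE", "studentO", "studentA", "studentC"]
lecturers = ["lecturN", "lecturE", "lecturO", "lecturA", "lecturC"]   # assumed names

# Correlations between student traits (rows) and desired lecturer traits (columns)
print(df[students + lecturers].corr().loc[students, lecturers].round(2))
```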
I want to be loved (on Facebook)
The first linear model looks at whether narcissism predicts, above and beyond the other variables, the frequency of status updates. To do this, drag the outcome variable Frequency of changing status per week to the Dependent box, then define the three blocks as follows. In the first block put Age, Gender and Grade:
In the second block, put extraversion (NEO_FFI):
And in the third block put narcissism (NPQC_R):
Set the options as in the book chapter. The main output is as follows:
So basically, Ong et al.'s prediction was supported in that after adjusting for age, grade and gender, narcissism significantly predicted the frequency of Facebook status updates over and above extroversion. The positive standardized beta value (.21) indicates a positive relationship between frequency of Facebook updates and narcissism, in that more narcissistic adolescents updated their Facebook status more frequently than their less narcissistic peers did. Compare these results to the results reported in Ong et al. (2011). The Table 2 from their paper is reproduced at the end of this task below.
OK, now let's fit the second model to investigate whether narcissism predicts, above and beyond the other variables, the Facebook profile picture ratings. Drag the outcome variable Sum of Profile picture ratings to the Dependent box, then define the three blocks as follows. In the first block put Age, Gender and Grade:
The main output is as follows:
These results show that after adjusting for age, grade and gender, narcissism significantly predicted the Facebook profile picture ratings over and above extroversion. The positive beta value (.37) indicates a positive relationship between profile picture ratings and narcissism, in that more narcissistic adolescents rated their Facebook profile pictures more positively than their less narcissistic peers did. Compare these results to the results reported in Table 2 of Ong et al. (2011) below.
Table 2 from Ong et al. (2011)
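If you wanted to reproduce this hierarchical structure outside SPSS, one option is to fit the three nested models with statsmodels and look at the change in R² at each block. The file name and column names here are assumptions based on the description of the variables, not the names in the actual data file.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_spss("ong_2011.sav")   # hypothetical file name

m1 = smf.ols("status_updates ~ age + gender + grade", data=df).fit()
m2 = smf.ols("status_updates ~ age + gender + grade + extraversion", data=df).fit()
m3 = smf.ols("status_updates ~ age + gender + grade + extraversion + narcissism",
             data=df).fit()

# R-squared change at each block, mirroring SPSS's hierarchical output
print(m1.rsquared, m2.rsquared - m1.rsquared, m3.rsquared - m2.rsquared)
print(m3.summary())
```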
Lecturer neuroticism
The first model we'll fit predicts whether students want lecturers to be neurotic. Drag the outcome variable (LectureN) to the box labelled Dependent:. Then define the predictors in the two blocks as follows. In the first block put Age and Sex:
In the second block, put all of the student personality variables (five variables in all):
Set the options as in the book chapter.
The main output (I haven't reproduced it all) is as follows:
So basically, age, openness and conscientiousness were significant predictors of wanting a neurotic lecturer (note that for openness and conscientiousness the relationship is negative, i.e. the more a student scored on these characteristics, the less they wanted a neurotic lecturer).
Lecturer extroversion
The second variable we want to predict is lecturer extroversion. You can follow the steps of the first example but drag the outcome variable of LectureN out of the box labelled Dependent: and in its place drag LecturE. Alternatively run the following syntax:
REGRESSION
/DESCRIPTIVES MEAN STDDEV CORR SIG N
/MISSING LISTWISE
/STATISTICS COEFF OUTS CI(95) R ANOVA CHANGE ZPP
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT lecturE
/METHOD=ENTER Age Sex
/METHOD=ENTER studentN studentE studentO studentA studentC
/PARTIALPLOT ALL
/SCATTERPLOT=(*ZPRED ,*ZRESID)
/RESIDUALS HISTOGRAM(ZRESID) NORMPROB(ZRESID)
/CASEWISE PLOT(ZRESID) OUTLIERS(3).
You should find that student extroversion was the only significant predictor of wanting an extrovert lecturer; the model overall did not explain a significant amount of the variance in wanting an extroverted lecturer.
Lecturer openness to experience
You can follow the steps of the first example but drag the outcome variable of LectureN out of the box labelled Dependent: and in its place drag LecturO. Alternatively run the following syntax:
/DEPENDENT lecturO
You should find that student openness to experience was the most significant predictor of wanting a lecturer who is open to experience, but student agreeableness predicted this also.
Lecturer agreeableness
The fourth variable we want to predict is lecturer agreeableness. You can follow the steps of the first example but drag the outcome variable of LectureN out of the box labelled Dependent: and in its place drag LecturA. Alternatively run the following syntax:
/DEPENDENT lecturA
You should find that Age, student openness to experience and student neuroticism significantly predicted wanting a lecturer who is agreeable. Age and openness to experience had negative relationships (the older and more open to experience you are, the less you want an agreeable lecturer), whereas as student neuroticism increases so does the desire for an agreeable lecturer (not surprisingly, because neurotics will lack confidence and probably feel more able to ask an agreeable lecturer questions).
Lecturer conscientiousness
The final variable we want to predict is lecturer conscientiousness. You can follow the steps of the first example but drag the outcome variable of LectureN out of the box labelled Dependent: and in its place drag LecturC. Alternatively run the following syntax:
/DEPENDENT lecturC
Student agreeableness and conscientiousness both significantly predict wanting a lecturer who is conscientious. Note also that gender predicted this in the first step, but its b became non-significant (p = .07) when the student personality variables were forced in as well. However, gender is probably a variable that should be explored further within this context.
Compare all of your results to Table 4 in the actual article (shown below); our five analyses are represented by the columns labelled N, E, O, A and C.
Table 4 from Chamorro-Premuzic et al. (2008)
You don't have to be mad here, but it helps
The data look like this:
The columns represent the following:
Outcome: A string variable that tells us which personality disorder the numbers in each row relate to.
X1: Mean of the managers group.
X2: Mean of the psychopaths group.
sd1: Standard deviation of the managers group.
sd2: Standard deviation of the psychopaths group.
n1: The number of managers tested.
n2: The number of psychopaths tested.
The syntax file looks like this:
The syntax editor
We can run the syntax by selecting Run > All. The output looks like this:
We can report that managers scored significantly higher than psychopaths on histrionic personality disorder, t(354) = 7.18, p < .001, d = 1.22. There were no significant differences between groups on narcissistic personality disorder, t(354) = 1.41, p = .160, d = 0.24, or compulsive personality disorder, t(354) = 0.77, p = .442, d = 0.13. On all other measures, psychopaths scored significantly higher than managers: antisocial personality disorder, t(354) = −5.23, p < .001, d = −0.89; borderline personality disorder, t(354) = −10.01, p < .001, d = −1.70; dependent personality disorder, t(354) = −9.80, p < .001, d = −1.67; passive-aggressive personality disorder, t(354) = −3.83, p < .001, d = −0.65; paranoid personality disorder, t(354) = −8.73, p < .001, d = −1.48; schizotypal personality disorder, t(354) = −10.76, p < .001, d = −1.83; schizoid personality disorder, t(354) = −8.18, p < .001, d = −1.39; avoidant personality disorder, t(354) = −6.31, p < .001, d = −1.07.
The results show the presence of elements of PD in the senior business manager sample, especially those most associated with psychopathic PD. The senior business manager group showed significantly higher levels of traits associated with histrionic PD than psychopaths. They also did not significantly differ from psychopaths in narcissistic and compulsive PD traits. These null findings could reflect a lack of power (i.e., effects exist in the population but were not detected). The effect sizes d can help us out here, and these are quite small (0.24 and 0.13), which can give us confidence that there really isn't a difference between psychopaths and managers on these traits. Broad and Fritzon (2005) conclude that:
'At a descriptive level this translates to: superficial charm, insincerity, egocentricity, manipulativeness (histrionic), grandiosity, lack of empathy, exploitativeness, independence (narcissistic), perfectionism, excessive devotion to work, rigidity, stubbornness, and dictatorial tendencies (compulsive). Conversely, the senior business manager group is less likely to demonstrate physical aggression, consistent irresponsibility with work and finances, lack of remorse (antisocial), impulsivity, suicidal gestures, affective instability (borderline), mistrust (paranoid), and hostile defiance alternated with contrition (passive/aggressive.)'.
Remember, these people are in charge of large companies. Suddenly a lot of things make sense.
We will conduct an independent samples t-test on these data because there were different participants in each of the two groups (independent design). Your completed dialog box should look like this:
Looking at the means in the Group Statistics table below, we can see that on average more participants in the High Urgency group (M = 4.5) chose the large financial reward for which they would wait longer than participants in the Low Urgency group (M = 3.8). Looking at the Independent Samples Test table, we can see that this difference was significant, p = .03.
To calculate the effect size r, all we need is the value of t and the df from the Independent Samples Test table:
\[ \begin{align} r &= \sqrt{\frac{{2.203}^{2}}{{2.203}^{2} + 100}} \\ &= \sqrt{\frac{4.853}{104.853}} \\ &= 0.215 \end{align} \]
Thinking back to our benchmarks for effect sizes, this represents a small to medium effect (it is between .1, a small effect, and .3, a medium effect).
In this example the Independent Samples Test table tells us that the value of t was 2.20, that this was based on 100 degrees of freedom, and that it was significant at p = .03. We can also see the means for each group. We could write this as:
On average, participants who had full bladders (M = 4.5, SD = 1.59) were more likely to choose the large financial reward for which they would wait longer than participants who had relatively empty bladders (M = 3.8, SD = 1.49), t(100) = 2.20, p = .03.
The beautiful people
We need to run a paired samples t-test on these data because the researchers recorded the number of daughters and sons for each participant (repeated-measures design). Your completed dialog should look like this:
Looking at the output below, we can see that there was a non-significant difference between the number of sons and daughters produced by the 'beautiful' celebrities.
We are going to calculate Cohen's d as our effect size. To do this we first need to get some descriptive statistics for these data – the means and standard deviations (see above).
We can now compute Cohen's d using the two means (.68 and .62) and the standard deviation of the control group (it doesn't matter which one you choose here, but I have chosen to use the sons):
\[ \begin{align} \widehat{d} &= \frac{{\overline{X}}_{\text{Daughters}} - {\overline{X}}_{\text{Sons}}}{s_{\text{Sons}}} \\ &= \frac{0.62 - 0.68}{0.901} \\ &= - 0.07 \end{align} \]
This means that there is −0.07 of a standard deviation difference between the number of sons and daughters produced by the celebrities, which is a very small effect.
In this example the SPSS Statistics output tells us that the value of t was 0.81, that this was based on 253 degrees of freedom, and that it was non-significant, p = .420. We also calculated the means for each group. We could write this as follows:
There was no significant difference between the number of daughters (M = 0.62, SE = 0.06) produced by the 'beautiful' celebrities and the number of sons (M = 0.68, SE = 0.06), t(253) = 0.81, p = .420, d = −0.07.
I heard that Jane has a boil and kissed a tramp
Solution using Baron and Kenny's method
Baron and Kenny suggested that mediation is tested through three linear models:
A linear model predicting the outcome (Gossip) from the predictor variable (Age).
A linear model predicting the mediator (Mate_Value) from the predictor variable (Age).
A linear model predicting the outcome (Gossip) from both the predictor variable (Age) and the mediator (Mate_Value).
These models test the four conditions of mediation: (1) the predictor variable (Age) must significantly predict the outcome variable (Gossip) in model 1; (2) the predictor variable (Age) must significantly predict the mediator (Mate_Value) in model 2; (3) the mediator (Mate_Value) must significantly predict the outcome (Gossip) variable in model 3; and (4) the predictor variable (Age) must predict the outcome variable (Gossip) less strongly in model 3 than in model 1.
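Before we look at each model in turn, note that the three models can also be run from syntax. This is a sketch using the variable names Age, Mate_Value and Gossip from the data file:

REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS CI(95) R ANOVA
/DEPENDENT Gossip
/METHOD=ENTER Age.
REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS CI(95) R ANOVA
/DEPENDENT Mate_Value
/METHOD=ENTER Age.
REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS CI(95) R ANOVA
/DEPENDENT Gossip
/METHOD=ENTER Age Mate_Value.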
Model 1: Predicting Gossip from Age
Model 1 indicates that the first condition of mediation was met, in that participant age was a significant predictor of the tendency to gossip, t(80) = −2.59, p = .011.
Model 2: Predicting Mate_Value from Age
Model 2 shows that the second condition of mediation was met: participant age was a significant predictor of mate value, t(79) = −3.67, p < .001.
Model 3: Predicting Gossip from Age and Mate_Value
Model 3 shows that the third condition of mediation has been met: mate value significantly predicted the tendency to gossip while adjusting for participant age, t(78) = 3.59, p < .001. The fourth condition of mediation has also been met: the standardized coefficient between participant age and tendency to gossip decreased substantially when adjusting for mate value; in fact it was no longer significant, t(78) = −1.28, p = .21. Therefore, we can conclude that the author's prediction is supported, and the relationship between participant age and tendency to gossip is mediated by mate value.
Diagram of the mediation model, taken from Massar et al. (2011)
Solution using PROCESS
The first output shows that age significantly predicts mate value, b = −0.03, t = −3.67, p < .001. The R2 value tells us that age explains 14.6% of the variance in mate value, and the fact that the b is negative tells us that the relationship is negative also: as age increases, mate value declines (and vice versa).
The next output shows the results of the model predicting tendency to gossip from both age and mate value. We can see that while age does not significantly predict tendency to gossip with mate value in the model, b = −0.01, t = −1.28, p = .21, mate value does significantly predict tendency to gossip, b = 0.45, t = 3.59, p = .0006. The R2 value tells us that the model explains 21.3% of the variance in tendency to gossip. The negative b for age tells us that as age increases, tendency to gossip declines (and vice versa), but the positive b for mate value indicates that as mate value increases, tendency to gossip increases also. These relationships are in the predicted direction.
The next output shows the total effect of age on tendency to gossip (outcome). You will get this bit of the output only if you selected Total effect model. The total effect is the effect of the predictor on the outcome when the mediator is not present in the model. When mate value is not in the model, age significantly predicts tendency to gossip, b = −0.02, t = −2.67, p = .009. The R2 value tells us that the model explains 8.27% of the variance in tendency to gossip. Therefore, when mate value is not included in the model, age has a significant negative relationship with tendency to gossip (as shown by the negative b value).
The next output shows the bootstrapped model parameters. For example, the total effect of age that we just discussed had a b = –0.0234 with a 95% confidence interval from –0.0408 to –0.0059. The estimates based on bootstrapping are b = –0.0266 [–0.0411, –0.0124]. Similarly, if we look at the model that also included the effect of Mate_Value, the parameter [and 95% CI] is b = 0.4546 [0.2027, 0.7066] and the bootstrap estimates (below) are b = 0.4546 [0.1673, 0.7389]. Remember that the bootstrap estimates are robust.
The next output displays the results for the indirect effect of age on gossip (i.e., the effect via mate value). We're told the effect of age on gossip when mate value is included as a predictor as well (the direct effect). The first bit of new information is the Indirect effect of X on Y, which in this case is the indirect effect of age on gossip. We're given an estimate of this effect (b = −0.012) as well as a bootstrapped standard error and confidence interval. As we have seen many times before, 95% confidence intervals contain the true value of a parameter in 95% of samples. Therefore, we tend to assume that our sample isn't one of the 5% that does not contain the true value and use them to infer the population value of an effect. In this case, assuming our sample is one of the 95% that 'hits' the true value, we know that the true b-value for the indirect effect falls between −0.0252 and −0.0031. This range does not include zero (although both values are not much bigger than zero), and remember that b = 0 would mean 'no effect whatsoever'; therefore, the fact that the confidence interval does not contain zero means that there is likely to be a genuine indirect effect. Put another way, mate value is a mediator of the relationship between age and tendency to gossip.
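As a quick sanity check, the indirect effect in a simple mediation model like this is just the product of the a path (age predicting mate value) and the b path (mate value predicting gossip). Using the rounded coefficients reported above:

\[ ab \approx (-0.03) \times 0.45 = -0.0135 \]

which matches the reported indirect effect of −0.012 once you allow for the fact that the a path is only reported to two decimal places.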
Scraping the barrel?
Let's do the graph first. There are two variables in the data editor: Phallus (the independent variable that has three levels: no ridge, minimal ridge and normal ridge) and Displacement (the dependent variable, the percentage of sperm displaced). The graph should therefore plot Phallus on the x-axis and Displacement on the y-axis. The completed dialog box should look like this:
The graph shows that having a coronal ridge results in more sperm displacement than not having one. The size of ridge made very little difference:
We can fit the model using Analyze > Compare Means > One-Way ANOVA …. The main dialog box should look like this:
To test our hypotheses we need to enter the following codes for the contrasts:
|                     | No Ridge | Minimal ridge | Coronal ridge |
|---------------------|----------|---------------|---------------|
| No ridge vs. ridge  | -2       | 1             | 1             |
| Minimal vs. coronal | 0        | -1            | 1             |
Contrast 1 tests hypothesis 1: that having a bell-end will displace more sperm than not. To test this we compare the two conditions with a ridge against the control condition (no ridge). So we compare chunk 1 (no ridge) to chunk 2 (minimal ridge, coronal ridge). The numbers assigned to the groups are the number of groups in the opposite chunk, and then we randomly assigned one chunk to be a negative value (the codes 2, −1, −1 would work fine as well). We enter these codes into SPSS Statistics using the Contrasts dialog box:
Contrast 2 tests hypothesis 2: the phallus with the larger coronal ridge will displace more sperm than the phallus with the minimal coronal ridge. First we get rid of the control phallus by assigning a code of 0; next we compare chunk 1 (minimal ridge) to chunk 2 (coronal ridge). The numbers assigned to the groups are the number of groups in the opposite chunk, and then we randomly assigned one chunk to be a negative value (the codes 0, 1, −1 would work fine as well). We enter these codes into SPSS Statistics using the Contrasts dialog box:
We should also ask for corrections for heteroscedasticity using the Options dialog box:
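Alternatively, the contrasts, linear trend and robust tests can all be requested in one syntax command. This is a sketch using the variable names Displacement and Phallus from the data file:

ONEWAY Displacement BY Phallus
/POLYNOMIAL=1
/CONTRAST=-2 1 1
/CONTRAST=0 -1 1
/STATISTICS DESCRIPTIVES WELCH BROWNFORSYTHE
/MISSING ANALYSIS.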
The main output tells us that there was a significant effect of the type of phallus, F(2, 12) = 41.56, p < .001. (This is exactly the same result as reported in the paper on page 280.) There is also a significant linear trend, F(1, 12) = 62.47, p < .001, indicating that more sperm was displaced as the ridge increased (however, note from the graph that this effect reflects the increase in displacement as we go from no ridge to having a ridge; there is no extra increase from 'minimal ridge' to 'coronal ridge'). Note that using robust F-tests that correct for lack of homogeneity the effect is still highly significant (p = .001 using Welch's F, and p < .001 using Brown-Forsythe's F).
The next output firstly tells us that we entered our weights correctly. Next the table labelled Contrast Tests shows that hypothesis 1 is supported (contrast 1): having some kind of ridge led to greater sperm displacement than not having a ridge, t(12) = 9.12, p < .001. Contrast 2 shows that hypothesis 2 is not supported: the amount of sperm displaced by the normal coronal ridge was not significantly different from the amount displaced by a minimal coronal ridge, t(12) = −0.02, p = .99.
To run this analysis we need to access the main dialog box by selecting Analyze > General Linear Model > Univariate …. Drag Interpretational_Bias to the box labelled Dependent Variable:. Drag Training (i.e., the type of training that the child had) to the box labelled Fixed Factor(s):, and then select Gender, Age and SCARED by holding down Ctrl (⌘ on a mac) while you click on these variables and drag them to the box labelled Covariate(s):. The finished dialog box should look like this:
In the chapter we looked at how to select contrasts, but because our main predictor variable (the type of training) has only two levels (positive or negative) we don't need contrasts: the main effect of this variable can only reflect differences between the two types of training. We can ask for adjusted means and parameter estimates though:
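For reference, a syntax sketch of this ANCOVA (using the variable names above) is:

UNIANOVA Interpretational_Bias BY Training WITH Age Gender SCARED
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/EMMEANS=TABLES(Training) WITH(Age=MEAN Gender=MEAN SCARED=MEAN)
/PRINT=PARAMETER
/CRITERIA=ALPHA(.05)
/DESIGN=Age Gender SCARED Training.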
In the main table, we can see that even after partialling out the effects of age, gender and natural anxiety, the training had a significant effect on the subsequent bias score, F(1, 65) = 13.43, p < .001.
The adjusted means tell us that interpretational biases were stronger (higher) after negative training (adjusting for age, gender and SCARED). This result is as expected. It seems then that giving children feedback that tells them to interpret ambiguous situations negatively does induce an interpretational bias that persists into everyday situations, which is an important step towards understanding how these biases develop.
In terms of the covariates, age did not have a significant influence on the acquisition of interpretational biases. However, anxiety and gender did. If we look at the Parameter Estimates table, we can use the b-values to interpret these effects. For anxiety (SCARED), b = 2.01, which reflects a positive relationship. Therefore, as anxiety increases, the interpretational bias increases also (this is what you would expect, because anxious children would be more likely to naturally interpret ambiguous situations in a negative way). If you draw a scatterplot of the relationship between SCARED and Interpretational_Bias you'll see a very nice positive relationship. For Gender, b = 26.12, which again is positive, but to interpret this we need to know how the children were coded in the data editor. Boys were coded as 1 and girls as 2. Therefore, as a child 'changes' (not literally) from a boy to a girl, their interpretational biases increase. In other words, girls show a stronger natural tendency to interpret ambiguous situations negatively. This is consistent with the anxiety literature, which shows that females are more likely to have anxiety disorders.
One important thing to remember is that although anxiety and gender naturally affected whether children interpreted ambiguous situations negatively, the training (the experiences on the alien planet) had an effect adjusting for these natural tendencies (in other words, the effects of training cannot be explained by gender or natural anxiety levels in the sample).
Have a look at the original article to see how Muris et al. reported the results of this analysis – this can help you to see how you can report your own data from an ANCOVA. (One bit of good practice that you should note is that they report effect sizes from their analysis – as you will see from the book chapter, this is an excellent thing to do.)
Going out on the pierce
To do an error bar chart for means that are independent (i.e., have come from different groups) double-click on the clustered error bar chart icon in the Chart Builder (see the book chapter) and drag our variables into the appropriate drop zones. Drag Alcohol into the y-axis drop zone, drag Group into the x-axis drop zone and drag Sex into the cluster (colour) drop zone. This will mean that error bars representing males and females will be displayed in different colours. The completed dialog box should look like this:
The error bar graph shows that in each group the men had consumed more alcohol than the women (the blue bars are taller than the red bars for all groups); this suggests that there may be a significant main effect of Sex. There is a steady increase in the volume of alcohol consumed as we move along the Group variable – the no tattoos, no piercing group consumed the least amount of alcohol and the tattoos and piercings group consumed the largest amount of alcohol – suggesting that there may be a significant main effect of Group. This trend appears to be the same for both men and women, suggesting that the interaction effect of Sex and Group is unlikely to be significant.
We need to conduct a 4 (experimental group) × 2 (gender) two-way independent ANOVA on the mass of alcohol per litre of exhaled breath. Select Analyze > General Linear Model > Univariate …. In the main dialog box, drag the dependent variable Alcohol from the variable list to the space labelled Dependent Variable:. Select Group and Sex simultaneously by holding down Ctrl (⌘ on a Mac) while clicking on the variables and drag them to the Fixed Factor(s): box:
Let's ask for some LSD post hoc tests (to mimic the article):
Finally, we'll ask for effect sizes:
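Alternatively, a syntax sketch for this model (LSD post hoc tests on Group, plus partial eta squared) is:

UNIANOVA Alcohol BY Group Sex
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/POSTHOC=Group(LSD)
/PRINT=ETASQ DESCRIPTIVE
/CRITERIA=ALPHA(.05)
/DESIGN=Group Sex Group*Sex.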
The main ANOVA shows a significant main effect of sex, F(1, 1957) = 16.44, p < .001, \(\eta_p^2 = .01\). Men (M = 0.19, SD = 0.15) consumed a significantly higher mass of alcohol than women (M = 0.15, SD = 0.11).
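If you want to check a partial eta squared by hand, it can be recovered from the F-ratio and its degrees of freedom (because \(\eta_p^2 = SS_M/(SS_M + SS_R)\)). For the main effect of sex, for example:

\[ \eta_p^2 = \frac{F \times df_M}{F \times df_M + df_R} = \frac{16.44 \times 1}{16.44 \times 1 + 1957} = .008 \approx .01 \]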
There was also a significant main effect of group, F(3, 1957) = 26.88, p < .001, \(\eta_p^2 = .04\). Post hoc tests (Output 3) revealed that participants who had only piercings (M = 0.22) consumed a significantly greater mass of alcohol than those who only had tattoos (M = 0.17) (least significant difference (LSD) test, p < .001) and those who had no tattoos and no piercings (M = 0.15) (LSD test, p < .001). Participants who had both tattoos and piercings (M = 0.25) consumed a significantly greater mass of alcohol than those who only had tattoos (M = 0.17) (LSD test, p < .001), and those who had no tattoos and no piercings (M = 0.15) (LSD test, p < .001). However, they did not consume a significantly greater mass than those who only had piercings (M = 0.22) (LSD test, p = .05).
This effect of group was not significantly moderated by the biological sex of the participant, F(3, 1957) = 1.62, p = .182, \(\eta_p^2 = .002\). This nonsignificant interaction implies that the effects we just described were comparable for males and females (especially given the large sample size).
Don't forget your toothbrush
To do an error bar chart for means that are independent (i.e., have come from different groups) double-click on the clustered error bar chart icon in the Chart Builder (see the book chapter) and drag our variables into the appropriate drop zones. Drag Checks into the y-axis drop zone, drag Mood into the x-axis drop zone and drag Stop_Rule into the cluster (colour) drop zone. This will mean that error bars representing people using different stop rules will be displayed in different colours. The completed dialog box should look like this:
The error bar graph shows that when in a negative mood people performed more checks when using an 'as many as can' stop rule than when using a 'feel like continuing' stop rule. In a positive mood the opposite was true, and in neutral moods the number of checks was very similar in the two stop rule conditions.
Select Analyze > General Linear Model > Univariate …. In the main dialog box, drag the dependent variable Checks from the variable list to the space labelled Dependent Variable:. Select Mood and Stop_Rule simultaneously by holding down Ctrl (⌘ on a Mac) while clicking on the variables and drag them to the Fixed Factor(s): box:
The resulting output can be interpreted as follows. The main effect of mood was not significant, F(2, 54) = 0.68, p = .51, indicating that the number of checks (when we ignore the stop rule adopted) was roughly the same regardless of whether the person was in a positive, negative or neutral mood. Similarly, the main effect of stop rule was not significant, F(1, 54) = 2.09, p = .15, indicating that the number of checks (when we ignore the mood induced) was roughly the same regardless of whether the person used an 'as many as can' or a 'feel like continuing' stop rule. The mood × stop rule interaction was significant, F(2, 54) = 6.35, p = .003, indicating that the mood combined with the stop rule significantly affected checking behaviour. Looking at the graph, a negative mood in combination with an 'as many as can' stop rule increased checking, as did the combination of a 'feel like continuing' stop rule and a positive mood, just as Davey et al. predicted.
Are splattered cadavers distracting?
Select Analyze > General Linear Model > Repeated Measures …. In the define factors dialog box supply a name for the first within-subject (repeated-measures) variable. The first repeated-measures variable we're going to enter is the type of sound (quiet, liked or disliked), so replace the word factor1 with the word Sound. Next, specify how many levels there were (i.e., how many experimental conditions there were). In this case, there were three types of sound, so enter the number 3 into the box labelled Number of Levels:. Click Add to add this variable to the list of repeated-measures variables. This variable will now appear as Sound(3). Repeat this process for the second independent variable, the position of the letter in the list, by entering the word Position into the space labelled Within-Subject Factor Name: and then, because there were eight levels of this variable, enter the number 8 into the space labelled Number of Levels:. Again click Add and this variable will appear as Position(8). The finished dialog box is shown below.
Once you are in the main dialog box (Figure 2) you are required to replace the question marks with variables from the list on the left-hand side of the dialog box. In this design, if we look at the first variable, Sound, there were three conditions, like, dislike and quiet. The quiet condition is the control condition, therefore for this variable we might want to compare the like and dislike conditions with the quiet condition. In terms of conducting contrasts, it is therefore essential that the quiet condition be entered as either the first or last level of the independent variable Sound (because you can't specify the middle level as the reference category in a simple contrast). I have coded quiet = level 1, liked = level 2 and disliked = level 3.
Now, let's think about the second factor Position. This variable doesn't have a control category and so it makes sense for us to just code level 1 as position 1, level 2 as position 2 and so on for ease of interpretation. Coincidentally, this order is the order in which variables are listed in the data editor. Actually it's not a coincidence: I thought ahead about what contrasts would be done, and then entered variables in the appropriate order:
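For reference, pasted syntax for this model looks roughly like the following. The 24 column names are placeholders (I'm assuming the columns sit next to each other in the data editor in the order quiet_p1 … quiet_p8, like_p1 … like_p8, dislike_p1 … dislike_p8), so substitute whatever names your file actually uses:

GLM quiet_p1 TO dislike_p8
/WSFACTOR=Sound 3 Simple(1) Position 8 Polynomial
/METHOD=SSTYPE(3)
/PLOT=PROFILE(Position*Sound)
/EMMEANS=TABLES(Sound) COMPARE ADJ(LSD)
/PRINT=DESCRIPTIVE
/WSDESIGN=Sound Position Sound*Position.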
In the Estimated Marginal Means dialog box drag all of the effects to the box labelled Display Means for:, select Compare main effects and choose an appropriate correction (I chose LSD(none), which isn't an appropriate correction but there you go …). These tests are interesting only if the interaction effect is not significant.
The plots dialog box is a convenient way to plot the means for each level of the factors (although really you should do some proper graphs before the analysis). Drag Position to the space labelled Horizontal Axis and Sound to the space labelled Separate Lines and click Add. I also selected the option to include error bars.
The resulting plot displays the estimated marginal means of letters recalled in each of the positions of the lists when no music was played (blue line), when liked music was played (red line) and when disliked music was played (green line). The chart shows that the typical serial curve was elicited for all sound conditions (participants' memory was best for letters towards the beginning of the list and at the end of the list, and poorest for letters in the middle of the list) and that performance was best in the quiet condition, poorer in the disliked music condition and poorest in the liked music condition.
Mauchly's test shows that the assumption of sphericity has been broken for both of the independent variables and also for the interaction. In the book I advise you to routinely interpret the Greenhouse-Geisser corrected values for the main model anyway, but for these data this is certainly a good idea.
The main ANOVA summary table (which, as I explain in the book, I have edited to show only the Greenhouse-Geisser correct values) shows a significant main effect of the type of sound on memory performance F(1.62, 38.90) = 9.46, p = .001. Looking at the earlier graph, we can see that performance was best in the quiet condition, poorer in the disliked music condition and poorest in the liked music condition. However, we cannot tell where the significant differences lie without looking at some contrasts or post hoc tests. There was also a significant main effect of position, F(3.83, 91.92) = 41.43, p < 0.001, but no significant position by sound interaction, F(6.39, 153.39) = 1.44, p = 0.201.
The main effect of position was significant because of the production of the typical serial curve, so post hoc analyses were not conducted. However, we did conduct post hoc least significant difference (LSD) comparisons on the main effect of sound. These post hoc tests revealed that performance in the quiet condition (level 1) was significantly better than both the liked condition (level 2), p = .001, and the disliked condition (level 3), p = .022. Performance in the disliked condition (level 3) was significantly better than in the liked condition (level 2), p = 0.020. We can conclude that liked music interferes more with performance on a memory task than disliked music.
The objection of desire
There are two repeated-measures variables: whether the target picture was of a male or female (let's call this TargetGender) and whether the target picture was upright or inverted (let's call this variable TargetLocation). The resulting model will be a 2 (TargetGender: male or female) × 2 (TargetLocation: upright or inverted) × 2 (Gender: male or female) three-way mixed ANOVA with repeated measures on the first two variables. Select Analyze > General Linear Model > Repeated Measures … and complete the initial dialog box as follows:
Next, we need to define these variables that we just created (TargetGender and TargetLocation) by specifying the columns in the data editor that relate to the different combinations of the gender and orientation of the picture:
You could also ask for an interaction graph for the three-way interaction:
You can set other options as in the book chapter.
The plot for the two-way interaction between target gender and target location for female participants shows that when the target was of a female (i.e., when Target Gender = 1) female participants correctly recognized a similar number of inverted (blue line) and upright (red line) targets, indicating that there was no inversion effect for female pictures. We can tell this because the dots are very close together. However, when the target was of a male (Target Gender = 2), the female participants' recognition of inverted male targets was very poor compared with their recognition of upright male targets (the dots are very far apart), indicating that the inversion effect was present for pictures of males.
The plot for the two-way interaction between target gender and target location for male participants shows that there appears to be a similar pattern of results as for the female participants: when the target was of a female (i.e., when Target Gender = 1) male participants correctly recognized a fairly similar number of inverted (blue line) and upright (red line) targets, indicating no inversion effect for the female target pictures. We can tell this because the dots are reasonably close together. However, when the target was of a male (Target Gender = 2), the male participants' recognition of inverted male targets was very poor compared with their recognition of upright male targets (the dots are very far apart), indicating the presence of the inversion effect for male target pictures. The fact that the pattern of results was very similar for male and female participants suggests that there may not be a significant three-way interaction between target gender, target location and participant gender.
Because both of our repeated-measures variables have only two levels, we do not need to worry about sphericity. As such I have edited the main summary table to show the effects when sphericity is assumed (see the book for how to do this). We could report these effects as follows:
There was a significant interaction between target gender and target location, F(1, 75) = 15.07, p < .001, η2 = .167, indicating that if we ignore whether the participant was male or female, the relationship between recognition of upright and inverted targets was different for pictures depicting men and women. The two-way interaction between target location and participant gender was not significant, F(1, 75) = .96, p = .331, η2 = .013, indicating that if we ignore whether the target depicted a picture of a man or a woman, male and female participants did not significantly differ in their recognition of inverted and upright targets. There was also no significant three-way interaction between target gender, target location and participant gender, F(1, 75) = .02, p = .904, η2 = .000, indicating that the relationship between target location (whether the target picture was upright or inverted) and target gender (whether the target was of a male or female) was not significantly different in male and female participants.
The next part of the question asks us to follow up the analysis with t-tests looking at inversion and gender effects. To do this, we need to conduct four paired-samples t-tests. Once you have the Paired-Samples T-Test dialog box open, transfer pairs of variables from the left-hand side to the box labelled Paired Variables. The first pair I am going to compare is Upright Female vs. Inverted Female, to look at the inversion effect for female pictures. The next pair will be Upright Male vs. Inverted Male, and this comparison will investigate the inversion effect for male pictures. To look at the gender effect for upright pictures we need to compare Upright Female vs. Upright Male. Finally, to look at the gender effect for inverted pictures we need to compare the variables Inverted Female and Inverted Male. Your completed dialog box should look like this:
The results of the paired samples t-tests show that people recognized upright males (M = 0.85, SD = 0.17) significantly better than inverted males (M = 0.73, SD = 0.17), t(77) = 6.29, p < .001, but this pattern did not emerge for females, t(77) = 1.38, p = .171. Additionally, participants recognized inverted females (M = 0.83, SD = 0.16) significantly better than inverted males (M = 0.73, SD = 0.17), t(77) = 5.42, p < .001. This effect was not found for upright males and females, t(77) = 0.54, p = .59. Note: the sign of the t-statistic will depend on which way round you entered the variables in the Paired-Samples T Test dialog box.
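If you prefer syntax, the four paired t-tests can be run in one command. The variable names below (Upright_Female, Inverted_Female, Upright_Male, Inverted_Male) are placeholders, so substitute whatever the four columns are actually called in your data editor:

T-TEST PAIRS=Upright_Female Upright_Male Upright_Female Inverted_Female
    WITH Inverted_Female Inverted_Male Upright_Male Inverted_Male (PAIRED)
/CRITERIA=CI(.9500)
/MISSING=ANALYSIS.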
Consistent with the authors' hypothesis, the results showed that the inversion effect emerged only when participants saw sexualized males. This suggests that, at a basic cognitive level, sexualized men were perceived as people, whereas sexualized women were perceived as objects.
Keep the faith(ful)?
We want to run these analyses on men and women separately. An efficient way to do this is to split the file by the variable Gender (see the book):
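In syntax this is just two commands (assuming the grouping variable is called Gender, as above):

SORT CASES BY Gender.
SPLIT FILE SEPARATE BY Gender.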
For the main model there are two repeated-measures variables: whether the sentence was a distractor or a target (let's call this Sentence_Type) and whether the distractor used on a trial was neutral, indicated sexual infidelity or emotional infidelity (let's call this variable Distracter_Type). The resulting model will be a 2 (relationship: with partner or not) × 2 (sentence type: distractor or target) × 3 (distractor type: neutral, emotional infidelity or sexual infidelity) three-way mixed ANOVA with repeated measures on the last two variables. First, we must define our two repeated-measures variables. Select Analyze > General Linear Model > Repeated Measures … and complete the initial dialog box as follows:
Next, we need to define these variables by specifying the columns in the data editor that relate to the different combinations of the type of sentence and the type of trial. As you can see in the figure below, because we specified Sentence_Type first we have all of the variables relating to distractors specified before those for targets. For each type of sentence there are three different variants, depending on whether the distractor used was neutral, emotional or sexual. Note that we have used the same order for both types of sentence (neutral, emotional, sexual) and that we have put neutral distractors as the first category so that we can look at some contrasts (neutral distractors are the control).
Use the Contrasts dialog box to select some simple contrasts comparing everything to the first category:
Specify a graph for the three-way interaction with error bars:
Set other options as in the book chapter.
The sphericity tests are all non-significant, which means we can assume sphericity. In the book I recommend ignoring this test and routinely interpreting the Greenhouse-Geisser corrected values, but that's not what the authors did, so in keeping with their approach I have simplified the main output to show only the sphericity-assumed tests (you can find out how to do this in the book):
We could report these effects as follows:
A three-way ANOVA with current relationship status as the between-subjects factor and men's recall of sentence type (targets vs. distractors) and distractor type (neutral, emotional infidelity and sexual infidelity) as the within-subjects factors yielded a significant main effect of sentence type, F(1, 37) = 53.97, p < .001, and a significant interaction between current relationship status and distractor content, F(2, 74) = 3.92, p = .024. More important, the three-way interaction was also significant, F(2, 74) = 3.79, p = .027. The remaining main effects and interactions were not significant, Fs < 2, ps > .17.
To pick apart the three-way interaction we can look at the table of contrasts:
The contrasts for the three way interaction in this table tell us that the effect of whether or not you are in a relationship and whether you were remembering a distractor or target was similar in trials in which an emotional infidelity distractor was used compared to when a neutral distractor was used, F(1, 37) = .005, p = .95 (level 2 vs. level 1 in the table). However, as predicted, there is a difference in trials in which a sexual infidelity distractor was used compared to those in which a neutral distractor was used, F(1, 37) = 5.39, p = .026 (level 3 vs. level 1).
To further see what these contrasts tell us, look at the graphs below. First off, those without partners remember many more targets than they do distractors, and this is true for all types of trials. In other words, it doesn't matter whether the distractor is neutral, emotional or sexual; these people remember more targets than distractors. The same pattern is seen in those with partners except for distractors that indicate sexual infidelity (the green line). For these, the number of targets remembered is reduced. Put another way, the slopes of the red and blue lines are more or less the same for those in and out of relationships (compare graphs) and the slopes are more or less the same as each other (compare red with blue). The only difference is for the green line, which is comparable to the red and blue lines for those not in relationships, but is much shallower for those in relationships. They remember fewer targets that were preceded by a sexual infidelity distractor. This supports the predictions of the author: men in relationships have an attentional bias such that their attention is consumed by cues indicative of sexual infidelity.
Let's now look at the women's output. Sphericity tests are all non-significant and I've (again) simplified the main output to show only the sphericity assumed tests.
A three-way ANOVA with current relationship status as the between-subject factor and men's recall of sentence type (targets vs. distractors) and distractor type (neutral, emotional infidelity and sexual infidelity) as the within-subject factors yielded a significant main effect of sentence type, F(1, 39) = 39.68, p < .001, and distractor type, F(2, 78) = 4.24, p = .018. Additionally, significant interactions were found between sentence type and distractor type, F(2, 78) = 4.63, p = .013, and, most important, sentence type × distractor type × relationship, F(2, 78) = 5.33, p = .007. The remaining main effect and interactions were not significant, F < 1.2, p > .29.
To pick apart the three-way interaction we can look at the contrasts. These tell us that the effect of whether or not you are in a relationship and whether you were remembering a distractor or target was significantly different in trials in which an emotional infidelity distractor was used compared to when a neutral distractor was used, F(1, 39) = 7.56, p = .009 (level 2 vs. level 1 in the table). However, there was not a significant difference in trials in which a sexual infidelity distractor was used compared to those in which a neutral distractor was used, F(1, 39) = 0.31, p = .58 (level 3 vs. level 1).
The graphs we requested for the 3-way interaction illustrate what these contrasts tell us. As for the men, women without partners remember many more targets than they do distractors, and this is true for all types of trials (although it's less true for the sexual infidelity trials because this line has a shallower slope). The same pattern is seen in those with partners except for distractors that indicate emotional infidelity (the red line). For these, the number of targets remembered is reduced. Put another way, the slopes of the green and blue lines are more or less the same for those in and out of relationships (compare graphs). The only difference is for the red line, which is much shallower for those in relationships. They remember fewer targets that were preceded by an emotional infidelity distractor. This supports the predictions of the author: women in relationships have an attentional bias such that their attention is consumed by cues indicative of emotional infidelity.
A lot of hot air!
To do the graph select Graphs > Chart Builder …, choose a clustered error bar chart and drag Mood to the x-axis drop zone. Next, select all of the dependent variables (click on change in anxiety, then hold Shift down and click on change in contempt and all six should become highlighted) and drag them (simultaneously) into the y-axis drop zone. This will have the effect that the different moods will be displayed by different-coloured bars:
So far, so good, but we have another variable, the type of induction, that we want to plot. We can display this variable too. First, click on the Groups/Point ID tab and select Rows panel variable. Checking this option activates a new drop zone (called Panel?) on the bottom right of the canvas. Drag the type of induction into that zone as shown:
The completed graph shows that the neutral mood induction (regardless of the way in which it was induced) didn't really affect mood too much (the changes are all quite small). For the disgust mood induction, disgust always increased quite a lot (the yellow bars) regardless of how disgust was induced. Similarly, the anxiety induction raised anxiety (predominantly). Happiness decreased for both anxiety and disgust mood inductions.
To run the MANOVA, select Analyze > General Linear Model > Multivariate …. The main dialog box should look like this:
You can set whatever options you like based on the chapter. The main multivariate statistics are shown below. A main effect of mood was found, F(12, 334) = 21.91, p < .001, showing that the changes for some mood inductions were bigger than for others overall (looking at the graph, this finding probably reflects that the disgust mood induction had the greatest effect overall – mainly because it produced such huge changes in disgust).
There was no significant main effect of the type of mood induction, F(12, 334) = 1.12, p = .340, showing that whether videos, memory, tapes, etc., were used did not affect the changes in mood. Also, the type of mood × type of induction interaction was not significant, F(24, 676) = 1.22, p = .215, showing that the type of induction did not influence the main effect of mood. In other words, the fact that the disgust induction seemed to have the biggest effect on mood (overall) was not influenced by how disgust was induced.
The univariate effects for type of mood (which was the only significant multivariate effect) show that the type of mood induced had a significant effect on all six mood change scores (in other words, for all six moods there were significant differences across the anxiety, disgust and neutral conditions).
You could produce a graph that collapses across the way that mood was induced (video, music, etc.) because this effect was not significant. (You can create this by going back to the chart builder and deselecting Rows panel variable.) We should do more tests, but just looking at the graph shows that the change score in anxiety (blue bars) is highest for the anxiety induction, around 0 (i.e. there was no change) for the disgust induction, and negative for the neutral induction (i.e., anxiety went down). For disgust, the change was largest after the disgust induction, close to zero for the neutral condition and slightly positive for the anxiety induction. For happiness, the change scores are strongly negative (i.e. happiness decreased) after both anxiety and disgust inductions, but the change score was close to zero after the neutral induction (i.e. happiness didn't change).
World wide addiction?
To get the descriptive statistics I would use Analyze > Descriptive Statistics > Frequencies …. Select all of the questionnaire items but just ask for means and standard deviations:
The table of means and standard deviations shows that the items with the lowest values are IAS-23 (I see my friends less often because of the time that I spend on the Internet) and IAS-34 (When I use the Internet, I experience a buzz or a high).
To get a table of correlations select Analyze > Correlate > Bivariate ….. Select all of the variables:
To help interpret the resulting table you could use the Style dialog box to set a rule that highlights correlations that are small. For example, below I have set it to highlight correlations between –0.3 and 0.3:
We know that the authors eliminated three items for having low correlations. Because we asked SPSS to highlight cells with low correlations (–0.3 to 0.3) we're looking for variables that have a lot of highlighted cells. The three items that stand out are IAS-13 (I have felt a persistent desire to cut down or control my use of the internet), IAS-22 (I have neglected things which are important and need doing), and IAS-32 (I find myself thinking/longing about when I will go on the internet again.). As such these variables will also be excluded from the factor analysis.
To do the principal component analysis select Analyze > Dimension Reduction > Factor …. Choose all of the variables except for the five that we have excluded:
We can set the following options to replicate what the authors did:
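In syntax the analysis looks roughly like this. The item names IAS_01 to IAS_36 are placeholders (check the names in your file), and you would delete the five excluded items from the /VARIABLES list before running:

FACTOR
/VARIABLES IAS_01 TO IAS_36
/MISSING LISTWISE
/PRINT INITIAL KMO EXTRACTION
/PLOT EIGEN
/CRITERIA FACTORS(1) ITERATE(25)
/EXTRACTION PC
/ROTATION NOROTATE.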
Sample size: When communalities after extraction are above .5, a sample size between 100 and 200 can be adequate, and even when communalities are below .5, a sample size of 500 should be sufficient (MacCallum, Widaman, Zhang, & Hong, 1999). We have a sample size of 207 with only one communality below .5, and so the sample size should be adequate. However, the KMO measure of sampling adequacy is .942, which is above Kaiser's (1974) recommendation of .5. As such, the evidence suggests that the sample size is adequate to yield distinct and reliable factors.
Bartlett's test: This test is significant, χ2(465) = 4238.98, p < .001, indicating that the correlations within the R-matrix are sufficiently different from zero to warrant factor analysis.
Extraction: Note in the diagrams I forced SPSS Statistics to extract only 1 factor. By default it would have extracted five factors based on Kaiser's criterion of retaining factors with eigenvalues greater than 1. Is this warranted? Kaiser's criterion is accurate when there are less than 30 variables and the communalities after extraction are greater than .7, or when the sample size exceeds 250 and the average communality is greater than .6. For these data the sample size is 207, there are 31 variables and the mean communality is .64, so extracting five factors is probably not warranted. The scree plot (Output 6) shows a clear one-factor solution. This is the solution that the authors adopted and is the reason I forced a one-factor solution.
Because we are retaining only one factor there won't be a rotated factor solution so we can look at the unrotated component matrix. This shows that all items have a high loading on the one factor we extracted.
The authors reported their analysis as follows (p. 382):
We conducted principal-components analyses on the log transformed scores of the IAS (see above). On the basis of the scree test (Cattell, 1978) and the percentage of variance accounted for by each factor, we judged a one-factor solution to be most appropriate. This component accounted for a total of 46.50% of the variance. A value for loadings of .30 (Floyd & Widaman, 1995) was used as a cut-off for items that did not relate to a component.
All 31 items loaded on this component, which was interpreted to represent aspects of a general factor relating to Internet addiction reflecting the negative consequences of excessive Internet use.
The impact of sexualized images on women's self-evaluations
Because the frequency data have been entered rather than raw data, we must tell SPSS Statistics that the variable Self_Evaluation represents the number of cases that fell into a particular combination of categories. To do this, access the Weight Cases dialog box (Data > Weight Cases …). Drag Self_Evaluation to the box labelled Frequency variable:. Your completed dialog box should look like this:
Next, select Analyze > Descriptive Statistics > Crosstabs …. Drag Type of Picture to the area labelled Row(s): and drag Was Theme Present or Absent in what participant wrote to the box labelled Column(s):
The Statistics dialog box is used to specify various statistical tests. Select the chi-square test, the contingency coefficient, phi and lambda:
The Cells dialog box is used to specify the information displayed in the crosstabulation table. It is important that you ask for expected counts because this is how we check the assumptions about the expected frequencies. It is also useful to have a look at the row, column and total percentages because these values are usually more easily interpreted than the actual frequencies and provide some idea of the origin of any significant effects. There are two other options that are useful for breaking down a significant effect (should we get one): (1) we can select a z-test to compare cell counts across columns of the contingency table, and if we do we should use a Bonferroni correction; and (2) select standardized residuals:
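Putting all of that together, syntax for the analysis looks roughly like this. Self_Evaluation is the weighting variable from the data file; Picture and Theme are placeholders for the row and column variables, so substitute whatever they are actually called (the BPROP keyword gives the Bonferroni-corrected column proportions z-tests mentioned above):

WEIGHT BY Self_Evaluation.
CROSSTABS
/TABLES=Picture BY Theme
/STATISTICS=CHISQ CC PHI LAMBDA
/CELLS=COUNT EXPECTED ROW COLUMN TOTAL SRESID BPROP
/COUNT ROUND CELL.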
Let's check that the expected frequencies assumption has been met. We have a 2 × 2 table, so all expected frequencies need to be greater than 5. If you look at the expected counts in the contingency table, we see that the smallest expected count is 34.6 (for women who saw pictures of performance athletes and did self-evaluate). This value exceeds 5 and so the assumption has been met.
The other thing to note about this table is that because we selected Compare column proportions our counts have subscript letters. For example, in the row labelled Performance Athletes the count of 97 has a subscript letter a and the count of 20 has a subscript letter b. These subscripts tell us the results of the z-test that we asked for: columns with different subscripts have significantly different column proportions. We need to look within rows of the table. So, for Performance Athletes the columns have different subscripts as I just explained, which means that proportions within the column variable (i.e., Was the theme present or absent in what they wrote?) are significantly different. The z-test compares the proportion of the total frequency of the first column that falls into the first row against the proportion of the total frequency of the second column that falls into the first row. So, of all the women who did self-evaluate (theme present), 26.3% saw pictures of performance athletes, and of all the women who didn't self-evaluate (theme absent), 53.6% saw pictures of performance athletes. The different subscripts tell us that these proportions are significantly different. Put another way, the proportion of women who self-evaluated after seeing pictures of performance athletes was significantly less than the proportion who didn't self-evaluate after seeing pictures of performance athletes.
If we move on to the row labelled Sexualized Athletes, the count of 84 has a subscript letter a and the count of 56 has a subscript letter b; as before, the fact they have different letters tells us that the column proportions are significantly different. The proportion of women who self-evaluated after seeing sexualized pictures of female athletes (73.7%) was significantly greater than the proportion who didn't self-evaluate after seeing sexualized pictures of female athletes (46.4%).
As we saw earlier, Pearson's chi-square test examines whether there is an association between two categorical variables (in this case the type of picture and whether the women self-evaluated or not). The value of the chi-square statistic is 16.057. This value is highly significant (p < .001), indicating that the type of picture used had a significant effect on whether women self-evaluated.
Underneath the chi-square table there are several footnotes relating to the assumption that expected counts should be greater than 5. If you forgot to check this assumption yourself, SPSS kindly gives a summary of the number of expected counts below 5. In this case, there were no expected frequencies less than 5, so we know that the chi-square statistic should be accurate.
The highly significant result indicates that there is an association between the type of picture and whether women self-evaluated or not. In other words, the pattern of responses (i.e., the proportion of women who self-evaluated to the proportion who did not) in the two picture conditions is significantly different. Below is an excerpt from Daniels's (2012) conclusions:
Is the Black American happy?
Are Black Americans happy?
Let's run the analysis on the first question. First we must remember to tell SPSS Statistics which variable contains the frequencies by using Data > Weight Cases …. In the resulting dialog box drag Happy to the box labelled Frequency variable:
Next, select Analyze > Descriptive Statistics > Crosstabs …. Drag Profession to the area labelled Row(s): and drag Response to the box labelled Column(s):
The chi-square test is highly significant, χ2(7) = 936.14, p < .001. This indicates that the profile of yes and no responses differed across the professions. Looking at the standardized residuals, the only profession for which these are non-significant is housewives, who showed a fairly even split of whether they thought Black Americans were happy (40%) or not (60%). Within the other professions all of the standardized residuals are much higher than 1.96, so how can we make sense of the data? What's interesting is to look at the direction of these residuals (i.e., whether they are positive or negative). For the following professions the residual for 'no' was positive but for 'yes' was negative; these are therefore people who responded more than we would expect that Black Americans were not happy and less than expected that Black Americans were happy: college students, preachers and lawyers. The remaining professions (labourers, physicians, school teachers and musicians) show the opposite pattern: the residual for 'no' was negative but for 'yes' was positive; these are, therefore, people who responded less than we would expect that Black Americans were not happy and more than expected that Black Americans were happy.
Are they Happy as Black Americans?
We run this analysis in exactly the same way except that we now have to weight the cases by the variable You_Happy. Select Data > Weight Cases …. Assuming you're following up the previous analysis, click to place Happy back into the variable list, then drag You_Happy into the box labelled Frequency variable:
Next, select Analyze > Descriptive Statistics > Crosstabs … and use the exact same options as before (if you're following up the previous analysis, everything will be set up already and you can simply click ).
The chi-square test is highly significant, χ2(7) = 1390.74, p < .001. This indicates that the profile of yes and no responses differed across the professions. Looking at the standardized residuals, these are significant in most cells with a few exceptions: physicians, lawyers and school teachers saying 'yes'. Within the other cells all of the standardized residuals are much higher than 1.96. Again, we can look at the direction of these residuals (i.e., whether they are positive or negative). For labourers, housewives, school teachers and musicians the residual for 'no' was positive but for 'yes' was negative; these are, therefore, people who responded more than we would expect that they were not happy as Black Americans and less than expected that they were happy as Black Americans. The remaining professions (college students, physicians, preachers and lawyers) show the opposite pattern: the residual for 'no' was negative but for 'yes' was positive; these are, therefore, people who responded less than we would expect that they were not happy as Black Americans and more than expected that they were happy as Black Americans. Essentially, the former group are in low-paid jobs in which conditions would have been very hard (especially in the social context of the time). The latter group are in much more respected (and probably better-paid) professions. Therefore, the responses to this question could say more about the professions of the people asked than their views of being Black Americans.
Should Black Americans be happy?
We run this analysis in exactly the same way except that we now have to weight the cases by the variable Should_Be_Happy. Select Data > Weight Cases …. Assuming you're following up the previous analysis, click to place You_Happy back into the variable list, then drag Should_Be_Happy into the box labelled Frequency variable:
The chi-square test is highly significant, χ2(7) = 1784.23, p < .001. This indicates that the profile of yes and no responses differed across the professions. Looking at the standardized residuals, these are nearly all significant. Again, we can look at the direction of these residuals (i.e., whether they are positive or negative). For college students and lawyers the residual for 'no' was positive but for 'yes' was negative; these are, therefore, people who responded more than we would expect that they thought that Black Americans should not be happy and less than expected that they thought Black Americans should be happy. The remaining professions show the opposite pattern: the residual for 'no' was negative but for 'yes' was positive; these are, therefore, people who responded less than we would expect that they did not think that Black Americans should be happy and more than expected that they thought that Black Americans should be happy.
What is interesting here and in the first question is that college students and lawyers are in vocations in which they are expected to be critical about the world. Lawyers may well have defended Black Americans who had been the subject of injustice and discrimination or racial abuse, and college students would likely be applying their critically trained minds to the immense social injustice that prevailed at the time. Therefore, these groups can see that their racial group should not be happy and should strive for the equitable and just society to which they are entitled. People in the other professions perhaps adopt a different social comparison.
It's also possible for this final question that the groups interpreted the question differently: perhaps the lawyers and students interpreted the question as 'should they be happy given the political and social conditions of the time?', while the others interpreted the question as 'do they deserve happiness?'
It might seem strange to have picked a piece of research from so long ago to illustrate the chi-square test, but what I wanted to demonstrate is that simple research can sometimes be incredibly illuminating. This study asked three simple questions, yet the data are utterly fascinating. It raises further hypotheses that could be tested, it unearths very different views in different professions, and it illuminates a very important social and psychological issue. There are other studies that use the most elegant paradigms and highly complex methodologies, but the questions they address are utterly meaningless for the real world. They miss the big picture. Albert Beckham was a remarkable man, trying to understand important and big real-world issues that mattered to hundreds of thousands of people.
Mandatory suicide?
The main analysis is fairly simple to specify because we're just forcing all predictors in at the same time. Therefore, the completed main dialog box should look like the figure below. (Note that I have ordered the predictors as suggested by Labcoat Leni, and that you won't see all of them in the dialog box because the list is too long!)
We also need to specify our categorical variables (we have only one, Marital_Status) using the Categorical dialog box. I have chosen an indicator contrast with the first category (Together) as the reference category. It actually doesn't matter whether you select first or last because there are only two categories. However, it will affect the sign of the beta coefficient. I have chosen the first category as the reference category purely because it gives us a positive beta as in Lacourse et al.'s table. If you choose 'last' (the default) the resulting coefficient will be the same magnitude but a negative value instead.
We can also use the Options dialog box to specify some options. You can select whatever other options you see fit based on the chapter but the CI for exp(B) option will need to be selected to get the same output as below.
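If you prefer a syntax-based route (or want to check the SPSS output elsewhere), the same forced-entry logistic regression can be specified with a formula interface. The sketch below uses Python's statsmodels; the file name and column names are assumptions chosen to mirror the predictors described above, not the variable names in the actual data file, and the point is only to show the structure of the model.

```python
# Minimal sketch of a forced-entry logistic regression (all predictors at once).
# File name and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("suicide_risk.csv")

formula = ("Suicide_Risk ~ Age + C(Marital_Status) + Mother_Negligence + "
           "Father_Negligence + Self_Estrangement + Social_Isolation + "
           "Normlessness + Meaninglessness + Drug_Use + Metal + "
           "Worshipping + Vicarious_Listening")

result = smf.logit(formula, data=data).fit()
print(result.summary())
print(np.exp(result.params))   # exp(B), i.e. the odds ratios
```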
We can present these results in the following table:
| Predictor | B | SE | 95% CI lower | Exp(B) | 95% CI upper |
| --- | --- | --- | --- | --- | --- |
| Constant | 6.21 | 6.21 | | | |
| Age | 0.69* | 0.32 | 1.06 | 2.00 | 3.77 |
| Marital status | 0.18 | 0.68 | 0.32 | 1.20 | 4.53 |
| Mother negligence | −0.02 | 0.05 | 0.88 | 0.98 | 1.09 |
| Father negligence | 0.09* | 0.05 | 0.99 | 1.09 | 1.20 |
| Self-estrangement/powerlessness | 0.15* | 0.06 | 1.03 | 1.17 | 1.33 |
| Social isolation | −0.01 | 0.08 | 0.86 | 0.99 | 1.15 |
| Normlessness | 0.19* | 0.11 | 0.98 | 1.21 | 1.50 |
| Meaninglessness | −0.07 | 0.06 | 0.83 | 0.94 | 1.05 |
| Drug use | 0.32** | 0.10 | 1.12 | 1.37 | 1.68 |
| Metal | 0.14 | 0.09 | 0.96 | 1.15 | 1.37 |
| Worshipping | 0.16* | 0.13 | 0.91 | 1.17 | 1.51 |
| Vicarious listening | −0.34 | 0.20 | 0.48 | 0.71 | 1.04 |

*p < .05, **p < .01; one-tailed. (The confidence limits are for Exp(B).)
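The last three columns of the table are simple transformations of B and its standard error: the odds ratio is exp(B) and the 95% confidence limits are exp(B ± 1.96 × SE). A quick check of the drug-use row (small discrepancies are just rounding, because the tabled B and SE are themselves rounded):

```python
import math

b, se = 0.32, 0.10                      # drug use: B and its standard error
odds_ratio = math.exp(b)                # ≈ 1.38 (table: 1.37)
lower = math.exp(b - 1.96 * se)         # ≈ 1.13 (table: 1.12)
upper = math.exp(b + 1.96 * se)         # ≈ 1.68 (table: 1.68)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```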
I've reported one-tailed significances (because Lacourse et al. do and it makes it easier to compare our results to Table 3 in their paper). We can conclude that listening to heavy metal did not significantly predict suicide risk in women (of course not; anyone I've ever met who likes metal does not conform to the stereotype). However, in case you're interested, listening to country music apparently does (Stack & Gundlach, 1992)3. The factors that did predict suicide risk were age (risk increased with age), father negligence (although this was significant only one-tailed, it showed that as negligence increased so did suicide risk), self-estrangement (basically low self-esteem predicted suicide risk, as you might expect), normlessness (again, only one-tailed), drug use (the more drugs used, the more likely a person was to be in the at-risk category), and worshipping (the more the person showed signs of worshipping bands, the more likely they were to be in the at-risk group).
The most significant predictor was drug use. So, this shows you that, for girls, listening to metal was not a risk factor for suicide, but drug use was. To find out what happens for boys, you'll just have to read the article! This is scientific proof that metal isn't bad for your health, so download some Deathspell Omega and enjoy!
A fertile gesture
Select Analyze > Mixed Models > Linear … to access the main dialog box. In this example, multiple scores or shifts are nested within each dancer. Therefore, the level 2 variable is the participant (the dancer) and this variable is represented by the variable labelled ID. Drag this variable to the box labelled Subjects and click to access the main dialog box.
In the main dialog box we need to set up our predictors and outcome. The outcome was the value of tips earned, so drag Tips to the box labelled Dependent variable:. We have two predictors: Cyclephase and Contraceptive. Drag both of these to the box labelled Factor(s):. We use the Factor(s) box because both variables are categorical.
To add these fixed effects to our model click on to access the Fixed Effects dialog box. To specify both main effects and the interaction term, select both predictors (click on Cyclephase and then, while holding down Ctrl (⌘ on a mac), click on Contraceptive), then select , and then click . You should find that both main effects and the interaction term are transferred to the Model: box. Click to return to the main dialog box.
In the model that Miller et al. fitted, they did not assume that there would be random slopes (i.e., the relationship between each predictor and tips was not assumed to vary within lap dancers). This decision is appropriate for Contraceptive because this variable didn't vary at level 2 (the lap dancer was either taking contraceptives or not, so this could not be set up as a random effect because it doesn't vary over our level 2 variable of participant). Also, because Cyclephase is a categorical variable with three unordered categories we could not expect a linear relationship with tips: we expect tips to vary over categories but the categories themselves have no meaningful order. However, we might expect tips to vary over participants (some lap dancers will naturally get more money than others) and we can factor this variability in by allowing the intercept to be random. As such, we're fitting a random intercept model to the data.
To do this, click on in the main dialog box to access the Random Effects dialog box. The first thing we need to do is to specify our contextual variable. We do this by selecting it from the list of contextual variables that we have already specified. These appear in the section labelled Subjects. Because we specified only one variable, there is only one variable in the list, ID.
Drag this variable to the area labelled Combinations. We want to specify that only the intercept is random, and we do this by selecting . Notice that this dialog box includes a drop-down list used to specify the type of covariance ( ). For a random intercept model this default option is fine. Click to return to the main dialog box.
The authors report in the paper that they used restricted maximum-likelihood estimation (REML), so click on and select this option. Finally, click and select Parameter estimates and Tests for covariance parameters. Click to return to the main dialog box. To fit the model click .
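For readers who like to cross-check point-and-click analyses with syntax, the same random-intercept model can be sketched outside SPSS. The example below uses Python's statsmodels with REML estimation and assumes a long-format data frame (one row per shift) with hypothetical column names Tips, Cyclephase, Contraceptive and ID; it illustrates the structure of the model rather than reproducing Miller et al.'s exact output.

```python
# Minimal sketch of a random-intercept model for the lap-dancer data.
# File name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("lap_dance.csv")

model = smf.mixedlm("Tips ~ C(Cyclephase) * C(Contraceptive)",
                    data=data,
                    groups=data["ID"])      # random intercept per dancer
result = model.fit(reml=True)               # REML, as in the original paper
print(result.summary())
```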
The first output tells us our fixed effects. As you can see they are all significant. Miller et al. reported these results as follows: 'Main effects of cycle phase [F(2, 236) = 27.46, p < .001] and contraception use [F(1, 17) = 6.76, p = .019] were moderated by an interaction between cycle phase and pill use [F(2, 236) = 5.32, p = .005]' (p. 378). Hopefully you can see where these values come from in the table (they rounded the df off to whole numbers).
Basically this shows that the phase of the dancer's cycle significantly predicted tip income, and this interacted with whether or not the dancer was having natural cycles or was on the contraceptive pill. However, we don't know which groups differed. We can use the parameter estimates to tell us.
I coded Cyclephase in a way that would be most useful for interpretation, which was to code the group of interest (fertile period) as the last category (2), and the other phases as 1 (Menstrual) and 0 (Luteal). The parameter estimates for this variable, therefore, compare each category against the last category, and because I made the last category the fertile phase this means we get a comparison of the fertile phase against the other two. Therefore, we could say (because the b is negative) that tips were significantly higher in the fertile phase than in the luteal phase, b = –100.41, t(235.21) = –6.11, p < .001, and in the menstrual phase, b = –170.86, t(234.92) = –9.84, p < .001. The beta, as in regression, tells us the change in tips as we shift from one group to another, so during the fertile phase dancers earned roughly $100 more per shift than in the luteal phase and roughly $171 more than in the menstrual phase.
These effects don't factor in the contraceptive use. To look at this we need to look at the contrasts for the interaction term. The first of these tells us the following: if we worked out the relative difference in tips between the fertile phase and the luteal phase, how much more do those in their natural cycle earn compared to those on contraceptive pills? The answer is about $86. In other words, there is a combined effect of being in a natural cycle (relative to being on the pill) and being in the fertile phase (relative to the luteal phase), and this is significant, b = 86.09, t(237) = 2.86, p = .005. The second contrast tells us the following: if we worked out the relative difference in tips between the fertile phase and the menstrual phase, how much more do those in their natural cycle earn compared to those on contraceptive pills? The answer is about $90 (the b). In other words, there is a combined effect of being in a natural cycle and being in the fertile phase compared to the menstrual phase, and this is significant, b = 89.94, t(236.80) = 2.63, p = .009.
The final table is not central to the hypotheses, but it does tell us about the random intercept. In other words, it tells us whether tips (in general) varied from dancer to dancer. The variance in tips across dancers was 3571.12, and this is significant, z = 2.37, p < .05. In other words, the average tip per dancer varied significantly. This confirms that we were justified in treating the intercept as a random variable.
To conclude, then, this study showed that the 'estrus-hidden' hypothesis is wrong: men did find women more attractive (as indexed by how many lap dances a woman performed and therefore how much she earned) during the fertile phase of their cycle compared to the other phases.
Remember that because of the nature of bootstrapping you will get slightly different values in your output.↩︎
Stack, S., & Gundlach, J. (1992). The effect of country music on suicide. Social Forces, 71(1), 211–218.↩︎ | CommonCrawl |
doi: 10.3934/jcd.2021021
Simulating deformable objects for computer animation: A numerical perspective
Uri M. Ascher, Egor Larionov, Seung Heon Sheen and Dinesh K. Pai
Dept. Computer Science, University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
Received March 2021; Revised October 2021; Early access December 2021
Fund Project: The first and last authors are supported by NSERC Discovery grants 84306 and RGPIN/2017-04604 respectively. Pai's research was also supported by a Canada Research Chair and an NSERC Idea-to-Innovation grant co-sponsored by Vital Mechanics
We examine a variety of numerical methods that arise when considering dynamical systems in the context of physics-based simulations of deformable objects. Such problems arise in various applications, including animation, robotics, control and fabrication. The goals and merits of suitable numerical algorithms for these applications are different from those of typical numerical analysis research in dynamical systems. Here the mathematical model is not fixed a priori but must be adjusted as necessary to capture the desired behaviour, with an emphasis on effectively producing lively animations of objects with complex geometries. Results are often judged by how realistic they appear to observers (by the "eye-norm") as well as by the efficacy of the numerical procedures employed. And yet, we show that with an adjusted view numerical analysis and applied mathematics can contribute significantly to the development of appropriate methods and their analysis in a variety of areas including finite element methods, stiff and highly oscillatory ODEs, model reduction, and constrained optimization.
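As a toy illustration of the stiffness issue the paper is concerned with (this is not the authors' algorithm, just a minimal sketch), consider a single very stiff undamped spring. With a time step that is large relative to the oscillation period, forward Euler blows up, while backward Euler stays stable but introduces strong artificial damping; much of the work surveyed in the paper is about navigating this trade-off.

```python
# Toy stiffness demo: x'' = -(k/m) x integrated with forward and backward Euler.
import numpy as np

k, m, h, steps = 1.0e6, 1.0, 1.0e-2, 200    # stiff spring, large time step
omega2 = k / m

# Forward (explicit) Euler: right-hand side uses the old x and v -> unstable here.
x, v = 1.0, 0.0
for _ in range(steps):
    x, v = x + h * v, v - h * omega2 * x
print("explicit Euler |x|:", abs(x))        # grows by roughly a factor 10 per step

# Backward (implicit) Euler: solve a 2x2 linear system each step -> stable, damped.
x, v = 1.0, 0.0
A = np.array([[1.0, -h], [h * omega2, 1.0]])
for _ in range(steps):
    x, v = np.linalg.solve(A, np.array([x, v]))
print("implicit Euler |x|:", abs(x))        # decays toward zero (numerical damping)
```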
Keywords: Physically-based simulation, deformable object, time integration, stiffness, nonlinear constitutive material.
Mathematics Subject Classification: Primary: 65D18, 68U05; Secondary: 65P99.
Citation: Uri M. Ascher, Egor Larionov, Seung Heon Sheen, Dinesh K. Pai. Simulating deformable objects for computer animation: A numerical perspective. Journal of Computational Dynamics, doi: 10.3934/jcd.2021021
A. H. Al-Mohy and N. J. Higham, Computing the action of the matrix exponential, with an application to exponential integrators, SIAM J. Sci. Comput., 33 (2011), 488–511. doi: 10.1137/100788860. Google Scholar
R. Alexander, Diagonally implicit runge-kutta methods for stiff ode's, SIAM J. Numer. Anal., 14 (1977), 1006-1021. doi: 10.1137/0714068. Google Scholar
U. Ascher, Numerical Methods for Evolutionary Differential Equations, Computational Science & Engineering, 5. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2008. doi: 10.1137/1.9780898718911. Google Scholar
U. Ascher and L. Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1998. Google Scholar
J. Awrejcewicz, D. Grzelczyk and Y. Pyryev, A novel dry friction modeling and its impact on differential equations computation and lyapunov exponents estimation, Journal of Vibroengineering, 10 (2008). Google Scholar
D. Baraff and A. Witkin, Large steps in cloth simulation, Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, (1998), 43–54. doi: 10.1145/280814.280821. Google Scholar
J. Barbic and D. James, Real-time subspace integration for st. venant-kirchhoff deformable models, ACM Trans. Graphics, 24 (2005), 982-990. doi: 10.1145/1186822.1073300. Google Scholar
E. Boxerman and U. Ascher, Decomposing cloth, Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Eurographics Association, (2004), 153–161. doi: 10.1145/1028523.1028543. Google Scholar
J. C. Butcher and D. J. L. Chen, A new type of singly-implicit runge-kutta method, Appl. Numer. Math., 34 (2000), 179-188. doi: 10.1016/S0168-9274(99)00126-9. Google Scholar
D. Chen, D. I. W. Levin, W. Matusik and D. M. Kaufman, Dynamics-aware numerical coarsening for fabrication design, ACM Trans. Graph., 36 (2017), 1-15. doi: 10.1145/3072959.3073669. Google Scholar
Y. J. Chen, U. Ascher and D. K. Pai, Exponential rosenbrock-euler integrators for elastodynamic simulation, IEEE Transactions on Visualization and Computer Graphics, 24 (2018), 2702-2713. doi: 10.1109/TVCG.2017.2768532. Google Scholar
Y. J. (Edwin) Chen, D. I. W. Levin, D. M. Kaufman, U. M. Ascher and D. K. Pai, Eigenfit for consistent elastodynamic simulation across mesh resolution, Proceedings SCA, (2019), Article No. 5, 1–13. doi: 10.1145/3309486.3340248. Google Scholar
Y. J. (Edwin) Chen, S. H. Sheen, U. M. Ascher and D. K. Pai, Siere: A hybrid semi-implicit exponential integrator for efficiently simulating stiff deformable objects, ACM Transactions on Graphics (TOG), 40 (2020), 1–12. doi: 10.1145/3410527. Google Scholar
J. Chung and G. M. Hulbert, A time integration algorithm for structural dynamics with improved numerical dissipation: The generalized-$\alpha$ method, J. Applied Mech., 60 (1993), 371-375. doi: 10.1115/1.2900803. Google Scholar
P. G. Ciarlet, Three-Dimensional Elasticity, Recherches en Mathématiques Appliquées [Research in Applied Mathematics], 1. Masson, Paris, 1986. Google Scholar
G. Daviet, F. Bertails-Descoubes and L. Boissieux, A hybrid iterative solver for robustly capturing coulomb friction in hair dynamics, ACM Trans. Graph., 30 (2011), 1-12. doi: 10.1145/2024156.2024173. Google Scholar
G. De Saxcé and Z. Q. Feng, The bipotential method: A constructive approach to design the complete contact law with friction and improved numerical algorithms, Math. Comput. Modelling, 28 (1998), 225-245. doi: 10.1016/S0895-7177(98)00119-8. Google Scholar
K. Erleben, Rigid body contact problems using proximal operators, Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, (2017), Article No. 13, 1–12. doi: 10.1145/3099564.3099575. Google Scholar
Z. Ferguson, M. Li, T. Schneider, F. Gil-Ureta, T. Langlois, C. Jiang, D. Zorin, D. M. Kaufman and D. Panozzo, Intersection-free rigid body dynamics, ACM Transactions on Graphics (SIGGRAPH), 40 (2021). Google Scholar
T. F. Gast, C. Schroeder, A. Stomakhin, C. Jiang and J. M. Teran, Optimization integrator for large time steps, IEEE Trans Visualization and Computer Graphics, 21 (2015), 1103-1115. doi: 10.1109/TVCG.2015.2459687. Google Scholar
M. Geilinger, D. Hahn, J. Zehnder, M. Bacher, B. Thomaszewski and S. Coros, Add: Analytically differentiable dynamics for multi-body systems with frictional contact, ACM Transactions on Graphics (TOG), 39 (2020), 1-15. doi: 10.1145/3414685.3417766. Google Scholar
E. Hairer, C. Lubich and G. Wanner, Geometric Numerical Integration, Springer Series in Computational Mathematics, 31. Springer-Verlag, Berlin, 2002. doi: 10.1007/978-3-662-05018-7. Google Scholar
E. Hairer and G. Wanner, Solving Ordinary Differential Equations Ⅱ: Stiff and Differential-Algebraic Problems, 2$^nd$ edition, Springer Series in Computational Mathematics, 14. Springer-Verlag, Berlin, 1996. doi: 10.1007/978-3-642-05221-7. Google Scholar
C. Kane, J. Marsden, M. Ortiz and M. West, Variational integrators and the Newmark algorithm for conservative and dissipative mechanical systems, Internat. J. Numer. Methods Engrg., 49 (2000), 1295-1325. doi: 10.1002/1097-0207(20001210)49:10<1295::AID-NME993>3.0.CO;2-W. Google Scholar
D. M. Kaufman, S. Sueda, D. L. James and D. K. Pai, Staggered projections for frictional contact in multibody systems, ACM Transactions on Graphics (SIGGRAPH Asia 2008), 27 (2008), 1-11. doi: 10.1145/1457515.1409117. Google Scholar
R. Kikuuwe, N. Takesue, A. Sano, H. Mochiyama and H. Fujimoto, Fixed-step friction simulation: From classical Coulomb model to modern continuous models, 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, (2005), 1009–1016. doi: 10.1109/IROS.2005.1545579. Google Scholar
E. Larionov, Y. Fan and D. K Pai, Frictional Contact on Smooth Elastic Solids, ACM Transactions on Graphics, (2021), 1–17. doi: 10.1145/3446663. Google Scholar
R. J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, 2007. doi: 10.1137/1.9780898717839. Google Scholar
M. Li, Z. Ferguson, T. Schneider, T. Langlois, D. Zorin, D. Panozzo, C. Jiang and D. M. Kaufman, Incremental potential contact: Intersection- and inversion-free, large-deformation dynamics, ACM Transactions on Graphics (TOG), 39 (2020), 1-20. doi: 10.1145/3386569.3392425. Google Scholar
M. Li, D. M. Kaufman and C. Jiang, Codimensional Incremental Potential Contact, arXiv: 2012.04457, [cs], 2021. Google Scholar
A. Longva, F. Löschner, T. Kugelstadt, J. A. Fernández-Fernández and J. Bender, Higher-order finite elements for embedded simulation, ACM Transactions on Graphics, 39 (2020), Article No. 181, 1–14. doi: 10.1145/3414685.3417853. Google Scholar
F. Loschner, A. Longva, S. Jeske, T. Kugelstadt and J. Bender, Higher order time integration for deformable solids, Computer Graphics Forum, 39 (2020), 157-169. doi: 10.1111/cgf.14110. Google Scholar
P. Lotstedt, Mechanical systems of rigid bodies subject to unilateral constraints, SIAM J. Appl. Math., 42 (1982), 281-296. doi: 10.1137/0142022. Google Scholar
P. Lotstedt and L. Petzold, Numerical solution of nonlinear differential equations with algebraic constraints i: Convergence results for backward differentiation formulas, Math. Comp., 46 (1986), 491-516. doi: 10.2307/2007989. Google Scholar
D. L. Michels, V. T. Luan and M. Tokman, A stiffly accurate integrator for elastodynamic problems, ACM Transactions on Graphics (TOG), 36 (2017), Article No.: 116, 1–14. doi: 10.1145/3072959.3073706. Google Scholar
D. L. Michels and J. P. T. Mueller, Discrete computational mechanics for stiff phenomena, SIGGRAPH ASIA 2016 Courses, (2016), Article No.: 13, 1–13. doi: 10.1145/2988458.2988464. Google Scholar
C. Moler and C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later, SIAM Rev., 45 (2003), 3-49. doi: 10.1137/S00361445024180. Google Scholar
J. Niesen and W. M. Wright, Algorithm 919: A Krylov subspace algorithm for evaluating the $\phi$-functions appearing in exponential integrators, ACM Trans. Math. Software (TOMS), 38 (2012), Art. 22, 19pp. doi: 10.1145/2168773.2168781. Google Scholar
J. Nocedal and S. Wright, Numerical Optimization, Springer Series in Operations Research. Springer-Verlag, New York, 1999. doi: 10.1007/b98874. Google Scholar
D. K. Pai, K. van den Doel, D. L. James, J. Lang, J. E. Lloyd, J. L. Richmond and S. H. Yau, Scanning physical interaction behavior of 3D objects, Computer Graphics (ACM SIGGRAPH 2001 Conference Proceedings), (2001), 87–96. Google Scholar
D. K. Pai, A. Rothwell, P. Wyder-Hodge, A. Wick, Y. Fan, E. Larionov, D. Harrison, D. R. Neog and C. Shing, The human touch: Measuring contact with real human soft tissues, ACM Transactions on Graphics (TOG), 37 (2018), Article No.: 58, 1–12. doi: 10.1145/3197517.3201296. Google Scholar
E. Sifakis and J. Barbic, FEM simulation of 3D deformable solids: A practitioner's guide to theory, discretization and model reduction, ACM SIGGRAPH 2012 Courses, (2012), Article No.: 20, 1–50 doi: 10.1145/2343483.2343501. Google Scholar
B. Smith, F. de Goes and T. Kim, Stable neo-hookean flesh simulation, ACM Trans. Graph., 37 (2018), Article No.: 12, 1–15. doi: 10.1145/3180491. Google Scholar
O. Sorkine and M. Alexa, As-rigid-as-possible surface modeling, Eurographics Symposium on Geometry Processing, 4 (2007), 109-116. Google Scholar
M. Verschoor and A. C. Jalba, Efficient and accurate collision response for elastically deformable models, ACM Trans. Graph., 38 (2019), Article No.: 17, 1–20. doi: 10.1145/3209887. Google Scholar
B. Wang, L. Wu, K. Yin, U. Ascher, L. Liu and H. Huang, Deformation capture and modelling of soft objects, ACM trans. on Graphics (SIGGRAPH), 34 (2015). Google Scholar
J. Wojewoda, A. Stefański, M. Wiercigroch and T. Kapitaniak, Hysteretic effects of dry friction: Modelling and experimental studies, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 336 (2008), 747-765. doi: 10.1098/rsta.2007.2125. Google Scholar
H. Xu and J. Barbic, Example-based damping design, ACM Trans. Graphics, 36 (2017), Article No.: 53, 1–14. doi: 10.1145/3072959.3073631. Google Scholar
Figure 1. Deformable articulated objects: a swaying tree and a constrained jelly brick; cf. [13]
Figure 2. Moving tetrahedral FEM mesh for position coordinates $ {\bf q}(t) $
Figure 3. Damping curves for the SDIRK method (solid line), TR-BDF2 (dashed) and BDF2 (dash-dot). The two DIRK methods behave similarly, and they differ significantly from BDF2
Figure 4. Computational costs for a swinging armadillo simulation [13]. The cost of exponential integrators including ERE becomes prohibitive as the stiffness parameter increases. By contrast, the cost of SIERE does not grow significantly with stiffness
Figure 5. Plot of the first 1000 eigenvalues of a soft body problem
Figure 6. Potential energy plots for different integrators applied to a soft object: BE (thick solid line), SIERE with $ s = 10 $ (thin solid line), STR-SBDF2ERE with $ s = 10 $ (dash-dot), TR-BDF2 (dotted), and SDIRK (dashed). A soft beam is fixed at its ends and is subjected to gravity. Notice that the TR-BDF2 and SDIRK energies do not decay by much, whereas BE dissipates energy quickly. SIERE is less damping than BE but still much more damping than STR-SBDF2ERE, which in turn is still more damping than the two DIRK methods
Figure 7. With large enough time steps, velocity based contact constraints may reject plausible steps. If a vertex in blue is constrained to have a strictly positive velocity with respect to the convex gray contact surface, then plausibly valid next-step configurations (right) may be erroneously rejected
Figure 8. Barrier function $ b = b(x; \delta) $ for different values of $ \delta $. It is used in (30) to approximate the contact force
Figure 9. Plot of the smoothing function $ s = s(x;\epsilon) $ for different values of $ \epsilon $. It is used in (34) through (32) to approximate the Coulomb friction force
Displacement and Distance with Examples
Category : Straight Line Motion
The motion of objects in one dimension is described in a problem-solution based approach.
Definition of displacement and distance
Simple examples of displacement and distance
Displacement in two and three dimension
More examples with detailed answer
Displacement is a vector quantity that describes the change in position of an object, that is, how far the object is displaced from its initial position, and is given by the formula below
\[ \Delta \vec x = x_f - x_i \]
Where the starting and ending positions are denoted by $x_i$ and $x_f$, respectively.
Distance is a scalar quantity indicating the total path traveled by a moving object.
In both cases a length between two points is measured (one as the shortest line between them, the other as the total traveled path), so the SI unit of both displacement and distance is the meter.
The illustration below shows the difference between them clearly.
Now with this brief definition, we can go further and explain those concepts more concisely with numerous examples.
Consider a boy running around a rectangular track. He starts from one corner ($i$) and ends at the same one. Since his initial and final points are the same, by definition his displacement is zero, which is in sharp contrast with this concept in everyday language. On the other hand, the total distance traveled by him is the perimeter of that rectangular shape. In this way, distance traveled (in physics) is consistent with the concept of displacement in everyday language.
Or consider a biker who rides along a horizontal circular loop of radius $10$ meters and covers three-fourths of it, as follows. What are the displacement and distance traveled by him?
As mentioned previously, displacement is a quantity which depends only on the positions of the initial and final points. So the length of the straight line between those points, which gives the magnitude of the displacement vector, is computed by the Pythagorean theorem as
\begin{eqnarray*}
D^{2} &=& r^{2} + r^{2} \\
&=& 2\, r^{2} \\
\Rightarrow D &=& r\, \sqrt{2}
\end{eqnarray*}
where the magnitude of displacement denoted by $D$. Putting the values gives, $D\, = \, 10\sqrt{2}\, (\rm m)$.
Distance is simply the circumference of three-fourths of the circle, so
\begin{eqnarray*}
\text{distance} &=& \frac{3}{4} \, {\text{perimeter}} \\
&=& \frac{3}{4} \left(2\pi\,r \right) \\
&=& \frac{3 \times 2\times \pi \times 10}{4} \\
&=& 15\,\pi \, (\rm m)
\end{eqnarray*}
Another example with more mathematical detail is as follows:
Let's explain this concept with an example. We want to study a car's motion along a straight line. Let the car be a point particle that moves in one dimension. To specify the location of the particle in one dimension we need only one axis, which we call $x$ and which lies along the straight-line path. First, we must define an important quantity from which the other kinematic quantities are built: displacement. To describe the motion of the car, we must know its position and how that position changes with time. The change in the car's position from initial position $x_i$ to final position $x_f$ is called displacement, $\Delta \vec x= x_f-x_i$ (in physics we use the Greek letter $\Delta$ to indicate the change in a quantity). This quantity is a vector pointing from $A$ to $B$ and in $1$-D is denoted by $\Delta \vec x=x_B-x_A$. In the figure below, the car moves from point $A$ at $x=2\, {\rm m}$, reaches $x=9\,{\rm m}$, then returns and stops at position $x=6\,{\rm m}$ at point $B$. Therefore, the car's displacement is $\Delta x=6-2=+4\,{\rm m}$.
Another quantity which is sometimes confused with displacement is the distance traveled (or simply distance), defined as the overall length of the path covered by the particle. In our example, the distance is computed as follows: first, calculate the distance from the initial position to the turning point, $d_1=x_C-x_A=9-2=7\,{\rm m}$, and then from that point ($x_C$) to the final point $x_B$, i.e. $d_2=x_B-x_C=6-9=-3\,{\rm m}$. We take the absolute value of the latter, since distance is a scalar quantity and a negative value makes no sense for it. Therefore, the total distance covered by our car is $d_{tot}=d_1+|d_2|=7+|-3|=7+3=10\,{\rm m}$. In fact, if there are several turning points along a straight path, or if the motion takes place in a plane or in three dimensions, one should divide the overall path into segments without any turning point, calculate the difference between the initial and final points of each segment, and then add the absolute values of these differences to obtain the distance traveled by the particle along that specific path (see examples below).
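The bookkeeping described above is easy to automate: given the positions of the start, the turning points, and the end, the displacement is just the final position minus the initial one, while the distance adds up the absolute lengths of the segments. A short Python sketch reproducing the car example:

```python
# Positions (in meters) of the start, turning point and end: A -> C -> B.
positions = [2, 9, 6]

displacement = positions[-1] - positions[0]                            # +4 m
distance = sum(abs(b - a) for a, b in zip(positions, positions[1:]))   # 10 m
print(displacement, distance)
```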
In more than one dimension the computations are a bit more involved, and we need to be armed with additional concepts. In this section we learn how vectors can be used to describe the position of an object and how, by manipulating them, we can characterize the displacement and other related kinematical quantities (like velocity and acceleration).
In a coordinate system, the position of an object is described by a so-called position vector, which extends from the reference origin $O$ to the location of the object $P$ and is denoted by $\vec{r}=\vec{OP}$. In a Cartesian coordinate system (or other related coordinates), this vector can be expressed as a linear combination of the unit vectors $\hat{i},\hat{j},\hat{k}$ (the ones with unit length) as
\[ \vec{r}=r_x\, \hat{i}+r_y\, \hat{j} +r_z\, \hat{k} \]
where the number of components equals the dimension $n$ of the problem, i.e. $n=2,3$ in two and three dimensions, respectively (in two dimensions the $\hat{k}$ term is absent). $r_x , r_y$ and $r_z$ are called the components of the vector $\vec{r}$.
Now the only thing that remains is adding or subtracting these vectors, known as vector algebra, to obtain the kinematical quantities. To do this, simply add or subtract the components along each axis separately (as below). Consider adding two vectors $\vec{a}$ and $\vec{b}$ in two dimensions,
\begin{eqnarray*}
\textbf{a}+\textbf{b}&=&\left(a_x \hat{i}+a_y \hat{j} \right)+\left(b_x \hat{i}+b_y \hat{j}\right)\\
&=&\left(a_x+b_x\right)\hat{i}+\left(a_y+b_y\right)\hat{j}\\
&=&c_x\, \hat{i}+c_y\, \hat{j}
\end{eqnarray*}
In the last line, the components of the final vector (or resultant vector) are denoted by $c_x$ and $c_y$.
The magnitude and direction of the obtained vector are represented by the following relations
\begin{eqnarray*}
|\textbf{c}| &=& \sqrt{\left(a_x+b_x\right)^2 +\left(a_y+b_y\right)^2 } \qquad \text{magnitude}\\
\theta &=& \tan^{-1} \left(\frac{a_y+b_y}{a_x+b_x}\right) \qquad \text{direction}
\end{eqnarray*}
where $\theta$ is the angle with respect to the $x$ axis.
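These component formulas translate directly into a few lines of code. The sketch below (in Python) adds two vectors and reports the magnitude and direction of the resultant; using atan2 instead of a bare arctangent puts the angle in the correct quadrant automatically, a point we return to in Example 1.

```python
import math

def add_vectors(a, b):
    """Resultant of two 2-D vectors given as (x, y) component pairs."""
    cx, cy = a[0] + b[0], a[1] + b[1]
    magnitude = math.hypot(cx, cy)
    direction = math.degrees(math.atan2(cy, cx))   # angle from the +x axis
    return (cx, cy), magnitude, direction

# Example: a = 3i + 4j and b = -1i + 2j
print(add_vectors((3, 4), (-1, 2)))   # ((2, 6), 6.32..., 71.57...)
```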
We encounter two types of problems on the topic of displacement. In the first case, the initial and final coordinates (positions) of an object are given. Write position vectors for each point. The vector which extends from the tip of the initial position vector to the tip of the final position vector is the displacement vector, and it is computed as the difference of those vectors, i.e. $\vec{c}=\vec{b}-\vec{a}$.
In the second case, the overall path of the object between the initial and final points is given as consecutive vectors, as in the figure below. Here, one should decompose each vector with respect to its own origin, then add the components along the $x$ and $y$ axes separately. The displacement vector is the one that points from the tail of the first vector to the tip of the last vector, and it is the vector sum of those vectors, i.e. $\vec{d}=\vec{a}+\vec{b}+\vec{c}$.
More examples with detailed answer:
Example $1$:
A moving object is displaced from $A(2,-1)$ to $B(-5,3)$ in a two-dimensional plane. What is the displacement vector of this object?
First, note that this is the first type of problem mentioned above. So construct the position vectors of points $A$ and $B$ as below
\begin{eqnarray*}
\overrightarrow{OA}&=&2\,\hat{i}+(-1)\,\hat{j}\\
\overrightarrow{OB}&=&-5\,\hat{i}+3\,\hat{j}
\end{eqnarray*}
Now, by definition, the difference of the final and initial position vectors gives the displacement vector $\vec{d}$ as
\begin{eqnarray*}
\vec{d} &=& \overrightarrow{OB}-\overrightarrow{OA}\\
&=& \left(-5\,\hat{i}+3\,\hat{j}\right)-\left(2\,\hat{i}+(-1)\,\hat{j}\right)\\
&=& -7\,\hat{i}+4\,\hat{j}
\end{eqnarray*}
Its magnitude and direction are also obtained as follows
\begin{eqnarray*}
|\vec{d} |&=&\sqrt{\left(-7\right)^2 +\left(4\right)^2 }\\
&=&\sqrt{49+16}\approx 8.06\,{\rm m}
\end{eqnarray*}
\[\theta = \tan^{-1} \left(\frac{d_y}{d_x}\right)=\tan^{-1}\left(\frac{4}{-7}\right)\]
The angle $\theta$ may be $-29.74^\circ$ or $150.26^\circ$, but since $d_x$ is negative and $d_y$ is positive the resultant vector lies in the second quadrant of the coordinate system. Therefore, the desired angle with the $x$-axis is $150.26^\circ$.
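Using atan2 removes this ambiguity, because it takes the signs of both components into account. A quick numerical check of this example:

```python
import math

A, B = (2, -1), (-5, 3)
dx, dy = B[0] - A[0], B[1] - A[1]          # components of d: (-7, 4)
magnitude = math.hypot(dx, dy)             # ≈ 8.06
angle = math.degrees(math.atan2(dy, dx))   # ≈ 150.26°, second quadrant
print(dx, dy, round(magnitude, 2), round(angle, 2))
```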
An airplane flies $276.9\,{\rm km}$ $\left[{\rm W}\, 76.70^\circ\, {\rm S}\right]$ from Edmonton to Calgary and then continues $675.1\,{\rm km}$ $\left[{\rm W}\, 11.45^\circ\,{\rm S}\right]$ from Calgary to Vancouver. Using components, calculate the plane's total displacement. (Nelson 12, p. 27).
In these problems there is a new element that appears in many textbooks: the compact notation for direction stated in brackets. $\left[{\rm W}\, 76.70^\circ\, {\rm S}\right]$ can be read as "point west, and then turn $76.70^\circ$ toward the south".
To solve such practices, first sketch a diagram of all vectors, decompose them and next using vector algebra, explained above, compute the desired quantity (here $ \vec{d}$).
The two successive paths are denoted by vectors $\vec{d_1}$ and $\vec{d_2}$ and in terms of components read as
\begin{eqnarray*}
\vec{d_1} &=& |\vec{d_1}|\,\cos \theta \, (-\hat{i})+|\vec{d_1}|\,\sin \theta \, (-\hat{j})\\
\vec{d_2} &=& |\vec{d_2}|\,\cos \alpha \, (-\hat{i})+|\vec{d_2}|\,\sin \alpha \, (-\hat{j})
\end{eqnarray*}
Putting numbers in the above, one obtains
\begin{eqnarray*}
\vec{d_1} &=& 276.9\,\cos 76.70^{\circ} \, (-\hat{i})+276.9\,\sin 76.7^{\circ} \, (-\hat{j})\\
&=& 63.700\,\left(-\hat{i}\right)+269.47\,\left(-\hat{j}\right) \qquad [{\rm km}]\\
\vec{d_2} &=& 675.1\,\cos 11.45^{\circ} \, (-\hat{i})+675.1\,\sin 11.45^{\circ} \, (-\hat{j})\\
&=& 661.664\,\left(-\hat{i}\right)+134.016\,\left(-\hat{j}\right) \qquad [{\rm km}]
\end{eqnarray*}
Total displacement is drawn from the tail of $\vec{d_1}$ to the tip of $\vec{d_2}$. In the language of vector addition $\vec{d}=\vec{d_2}+\vec{d_1}$, so
\begin{eqnarray*}
\vec{d} &=& \vec{d_2}+\vec{d_1}\\
&=& \left(661.664+63.700 \right)\,\left(-\hat{i}\right)+\left(134.016+269.47 \right)\,\left(-\hat{j}\right)\\
&=& \left(725.364 \right)\,\left(-\hat{i}\right)+\left(403.486 \right)\,\left(-\hat{j}\right) \qquad [{\rm km}]
\end{eqnarray*}
Therefore, the magnitude of the airplane's total displacement from Edmonton to Vancouver and its direction with respect to the $x$-axis are
\begin{eqnarray*}
|\vec{d}| &=& \sqrt{\left(725.364 \right)^2 +\left(403.486 \right)^2 }\\
&=& 830.032\,{\rm km}\\
\gamma &=& \tan^{-1} \left(\frac{d_y}{d_x}\right)\\
&=& \tan^{-1} \left(\frac{403.486}{725.364}\right)\\
&=& 29.09^\circ
\end{eqnarray*}
As one can see, the resultant vector points to the south west or $\left[{\rm W}\,29.09^{\circ}\,{\rm S}\right]$
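The same component bookkeeping can be checked numerically. In the sketch below, west and south are taken as the negative $x$ and $y$ directions, matching the decomposition above:

```python
import math

def leg(length_km, angle_deg):
    """Components of a displacement with heading [W angle S]."""
    a = math.radians(angle_deg)
    return -length_km * math.cos(a), -length_km * math.sin(a)

d1x, d1y = leg(276.9, 76.70)
d2x, d2y = leg(675.1, 11.45)
dx, dy = d1x + d2x, d1y + d2y
print(math.hypot(dx, dy))                           # ≈ 830 km
print(math.degrees(math.atan2(abs(dy), abs(dx))))   # ≈ 29.1° south of west
```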
A moving particle moves over the surface of a solid cube in such a way that passes through $A$ to $B$. What is the magnitude of displacement vector in this change of location of the particle?
In three-dimensional cases, as in $2$-D ones, we only need to know the location (coordinates) of the object; then, using the following relations, one can obtain the displacement of a moving particle in any number of dimensions.
Points $A$ and $B$ lie on the $x$-$z$ plane and the $y$ axis, respectively, so their coordinates are $(10,0,10)$ and $(0,10,0)$, where the parentheses denote $(x,y,z)$. This is the first type of problem.
\begin{eqnarray*}
\overrightarrow{OA}&=& 10\,\hat{i}+0\,\hat{j}+10\,\hat{k}\\
\overrightarrow{OB}&=& 0\,\hat{i}+10\,\hat{j}+0\,\hat{k}
\end{eqnarray*}
\begin{eqnarray*}
\vec{d} &=&\overrightarrow{OB}-\overrightarrow{OA}\\
&=& -10\,\hat{i}+10\,\hat{j}-10\,\hat{k}
\end{eqnarray*}
Therefore, the desired vector in terms of its components is computed as above. Its magnitude is the square root of the sum of the squares of its components,
\[
|\vec{d}|=\sqrt{(-10)^2 +(-10)^2 + (10)^2 }=10\sqrt{3}.
\]
A car moves around a circle of radius of $20\,{\rm m}$ and returns to its starting point. What is the distance and displacement of the car? ($\pi = 3$)
As mentioned above, displacement depends only on the initial and final points of the motion. Since the car returns to its initial position, no displacement is made by the car. But the distance traveled is simply the perimeter of the circle (since this scalar quantity depends on the form of the path). So $d=2 \pi r=2 \times 3 \times 20 =120\,{\rm m}$, where $r$ is the radius of the circle.
A moving object covers a square path (with one side left open) as shown in the figure below. What are the displacement and distance traveled between the specified points? Point $p$ lies in the middle of $BC$.
Displacement is the shortest and straightest line between initial and final points. So using Pythagorean theorem, we get
\begin{eqnarray*}
D^{2} &=& \left(iB\right)^{2} + \left(Bf\right)^{2} \\
&=& 5^{2} + (2.5)^{2} \\
\Rightarrow D &=& \sqrt{25 + 6.25} \approx 5.6\, (\rm m)
\end{eqnarray*}
The direction of the displacement vector is obtained as
\begin{eqnarray*}
\tan \theta &=& \frac{Bp}{iB} \\
&=& \frac{2.5}{5} \\
\Rightarrow \theta &=& \arctan \frac{2.5}{5} \\
&=& 26.5 ^\circ\, \left[\text{South east}\right]
\end{eqnarray*}
Distance is simply the length of the traveled path, so
\begin{eqnarray*}
\text{distance} &=& 5 + 2.5 \\
&=& 7.5\, (\rm m)
\end{eqnarray*}
In summary, displacement is a vector that depends only on the initial and final positions of the particle and not on the details of the motion and path. Vector quantities require both a magnitude and a direction to be specified. In contrast, distance is a quantity characterized only by a single value (a scalar) and is path dependent. In general, the distance traveled and the magnitude of the displacement vector between two points are not the same. If the moving object changes its direction in the course of travel, then the total distance traveled is greater than the magnitude of the displacement between those points. The SI unit of both quantities is the meter.
Ali Nemati
Tags : Velocity and Acceleration with Examples, Examples of Kinematic in one dimension , Examples of displacement , Definition of distance and displacement in physics | CommonCrawl |
August 2020, 40(8): 4705-4765. doi: 10.3934/dcds.2020199
Representation formula for symmetrical symplectic capacity and applications
Rongrong Jin and Guangcun Lu
School of Mathematical Sciences, Beijing Normal University, Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, China
* Corresponding author: Guangcun Lu
Received June 2019; Revised February 2020; Published May 2020
Fund Project: The second author is partially supported by the NNSF 11271044 of China
This is the second installment in a series of papers aimed at generalizing symplectic capacities and homologies. We study symmetric versions of symplectic capacities for real symplectic manifolds, and obtain results corresponding to those of the first paper [19] in this series (such as a representation formula, a theorem by Evgeni Neduv, a Brunn-Minkowski type inequality, and the Minkowski billiard trajectories proposed by Artstein-Avidan-Ostrover).
Keywords: Representation formula, symmetrical Ekeland-Hofer symplectic capacity, symmetrical Hofer-Zehnder symplectic capacity, Brunn-Minkowski type inequality, Minkowski billiard trajectories.
Mathematics Subject Classification: Primary: 53D35, 53C23; Secondary: 70H05, 37J05, 57R17.
Citation: Rongrong Jin, Guangcun Lu. Representation formula for symmetrical symplectic capacity and applications. Discrete & Continuous Dynamical Systems - A, 2020, 40 (8) : 4705-4765. doi: 10.3934/dcds.2020199
P. Albers and U. Frauenfelder, The space of linear anti-symplectic involutions is a homogenous space, Arch. Math. (Basel), 99 (2012), 531-536. doi: 10.1007/s00013-012-0461-4. Google Scholar
S. Artstein-Avidan, R. Karasev and Y. Ostrover, From symplectic measurements to the Mahler conjecture, Duke Math. J., 163 (2014), 2003-2022. doi: 10.1215/00127094-2794999. Google Scholar
S. Artstein-Avidan and Y. Ostrover, A Brunn-Minkowski inequality for symplectic capacities of convex domains, Int. Math. Res. Not. IMRN 2008, (2008), Art. ID rnn044, 31 pp. doi: 10.1093/imrn/rnn044. Google Scholar
S. Artstein-Avidan and Y. Ostrover, Bounds for Minkowski billiard trajectories in convex bodies, Int. Math. Res. Not. IMRN 2014, (2014), 165–193. doi: 10.1093/imrn/rns216. Google Scholar
S. M. Bates, Some simple continuity properties of symplectic capacities, The Floer Memorial Volume, Progr. Math., Birkhäuser, Basel, 133 (1995), 185-193. Google Scholar
S. M. Bates, A capacity representation theorem for some non-convex domains, Math. Z., 227 (1998), 571-581. doi: 10.1007/PL00004394. Google Scholar
J. Blot, On the almost everywhere continuity, http://arXiv.org/abs/1411.3582v1[math.OC]. Google Scholar
J. Bourgain, J. Lindenstrauss and V. D. Milman, Minkowski sums and symmetrizations, Geometric Aspects of Functional Analysis (1986/87), Lecture Notes in Math., Springer, Berlin, 1317 (1988), 44-66. doi: 10.1007/BFb0081735. Google Scholar
H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equation, Universitext. Springer, New York, 2011. Google Scholar
F. H. Clarke, A classical variational principle for periodic Hamiltonian trajectories, Proc. Amer. Math. Soc., 76 (1979), 186-188. doi: 10.2307/2042942. Google Scholar
F. H. Clarke, Optimization and Nonsmooth Analysis, A Wiley-Interscience Publication, John Wiley & Sons, Inc., New York, 1983. Google Scholar
I. Ekeland, Convexity Methods in Hamiltonian Mechanics, Ergebnisse der Mathematik und Ihrer Grenzgebiete (3), 19. Springer-Verlag, Berlin, 1990. doi: 10.1007/978-3-642-74331-3. Google Scholar
I. Ekeland and H. Hofer, Symplectic topology and Hamiltonian dynamics, Math. Z., 200 (1989), 355-378. doi: 10.1007/BF01215653. Google Scholar
A. Figalli, J. Palmer and Á. Pelayo, Symplectic $G$-capacities and integrable systems, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 18 (2018), 65-103. Google Scholar
M. Ghomi, Shortest periodic billiard trajectories in convex bodies, Geometric and Functional Analysis, 14 (2004), 295-302. doi: 10.1007/s00039-004-0458-7. Google Scholar
H. Hofer and E. Zehnder, A new capacity for symplectic manifolds, Analysis et Cetera, Academic Press, Boston, MA, (1990), 405–427. Google Scholar
H. Hofer and E. Zehnder, Symplectic Invariants and Hamiltonian Dynamics, Birkhäuser Advanced Texts: Basler Lehrbúcher., Birkhäuser Verlag, Basel, 1994. Google Scholar
K. Irie, Periodic billiard trajectories and Morse theory on loop spaces, Comment. Math. Helv., 90 (2015), 225-254. doi: 10.4171/CMH/352. Google Scholar
R. R. Jin and G. C. Lu, Generalizations of Ekeland-Hofer and Hofer-Zehnder symplectic capacities and applications, (2019), arXiv: 1903.01116v2[math.SG]. Google Scholar
S. G. Krantz, Convex Analysis, Textbooks in Mathematics, CRC Press, Boca Raton, FL, 2015. Google Scholar
A. F. Künzle, Singular Hamiltonian systems and symplectic capacities, Singularities and Differential Equations, Banach Center Publications, Polish Acad. Sci. Inst. Math., Warsaw, 33 (1996), 171-187. Google Scholar
S. Lisi and A. Rieser, Coisotropic Hofer-Zehnder capacities and non-squeezing for relative embeddings, arXiv: 1312.7334[math.SG]. Google Scholar
C. G. Liu and Q. Wang, Symmetrical symplectic capacity with applications, Discrete Contin. Dyn. Syst., 32 (2012), 2253-2270. doi: 10.3934/dcds.2012.32.2253. Google Scholar
J. Moser and E. J. Zehnder, Notes on Dynamical Systems, Courant Lecture Notes in Mathematics, 12. New York University, Courant Institute of Mathematical Sciences, New York, American Mathematical Society, Providence, RI, 2005. doi: 10.1090/cln/012. Google Scholar
E. Neduv, Prescribed minimal period problems for convex Hamiltonian systems via Hofer-Zehnder symplectic capacity, Math. Z., 236 (2001), 99-112. doi: 10.1007/PL00004828. Google Scholar
R. S. Palais, The principle of symmetric criticality, Commun. Math. Phys., 69 (1979), 19-30. doi: 10.1007/BF01941322. Google Scholar
A. Rieser, Lagrangian blow-ups, blow-downs, and applications to real packing, Journal of Symplectic Geometry, 12 (2014), 725-789. doi: 10.4310/JSG.2014.v12.n4.a4. Google Scholar
R. T. Rockafellar, Convex Analysis, Princeton Mathematical Series, No. 28 Princeton University Press, Princeton, N.J., 1970. Google Scholar
R. Schneider, Convex Bodies: The Brunn-Minkowski Theory, Encyclopedia of Mathematics and its Applications, 44. Cambridge University Press, Cambridge, 1993. doi: 10.1017/CBO9780511526282. Google Scholar
R. Schneider, Stability for some extremal properties of the simplex, Journal of Geometry, 96 (2009), 135-148. doi: 10.1007/s00022-010-0028-0. Google Scholar
J.-C. Sikorav, Systémes Hamiltoniens et Topologie Symplectique, Dipartimento di Matematica dell'Universitá di Pisa, 1990. Google Scholar
C. Viterbo, Symplectic real algebraic geometry, preprint, (1999). Google Scholar
Y. C. Xu, Linear Algebra and Matrix Theory, Higher Education Press, Beijing, 1992. Google Scholar
F. C. Yang and Z. Wei, Generalized Euler identity for subdifferentials of homogeneous functions and applications, J. Math. Anal. Appl., 337 (2008), 516-523. doi: 10.1016/j.jmaa.2007.04.008. Google Scholar
HTML views (93)
Rongrong Jin Guangcun Lu | CommonCrawl |
Will The Dog Catch The Duck?
Feb. 12, 2016, at 8:00 AM
Mull it over on your commute, dissect it on your lunch break, and argue about it with your friends and lovers. When you're ready, submit your answer using the form at the bottom. I'll reveal the solution next week, and a correct submission (chosen at random) will earn a shoutout in this column. Important small print: To be eligible for the shoutout, I need to receive your correct answer before 11:59 p.m. EST on Sunday — have a great weekend!
Before we get to the new puzzle, let's return to last week's. Congratulations to 👏 Ian Rhile 👏 of Wyomissing, Pa., our big winner. You can find a solution to the previous Riddler at the bottom of this post.
Now, here's this week's Riddler. In a few days, at Madison Square Garden here in New York, a new king or queen of the canine world will be crowned at the Westminster Kennel Club Dog Show. As a tip of the ol' deerstalker, here's a classic problem I reworked to feature a very clever dog. Fair warning: This one is tougher than a brand new chew toy.
There is a duck paddling in a perfectly circular pond, and a Nova Scotia duck tolling retriever prowling on the pond's banks. The retriever very much wants to catch the duck. The dog can't swim, and this particular duck can't take flight from water. But the duck does want to fly away, which means it very much wants to make it safely to land without meeting the dog there. Assume the dog and the duck are both very smart and capable of acting perfectly in their best interests — they're both expert strategists and tacticians. Say the duck starts in the center of the pond, and the dog somewhere right on the pond's edge. To ensure that the duck could never escape safely, how many times faster would the dog have to be able to run relative to how fast the duck can swim? (Assume the dog is a good boy, yes he is, and thus very patient. The duck can't simply wait him out. Also assume both animals can change direction without losing speed, and that the duck can take flight safely immediately after reaching shore if the dog isn't there.)
And here's the solution to last week's Riddler, concerning vehicular traffic on a very long highway, which came to us from Django Wexler. I've adapted this solution from an excellent submission from reader Aaron Montgomery.
The average number of groups of cars, given a total of N cars, is 1+1/2+1/3+…+1/N. Let's prove it.
Call the number of groups of cars \(X\). Suppose that the cars are added to the road — placed somewhere randomly on it — one by one in reverse order of speed, with the slowest car added first and the fastest car added last. After each car is added, we wait until all the cars settle into their new positions before adding the next one. Let's consider what happens when there are \(n\) cars already on the road in some order, in \(X_n\) groups, where \(n\) is a number less than \(N\). Now we add another car, which will be faster than all the cars already on the road. This new car will create a new group if and only if it is the lead car. (It will either speed off in the lead, or it will get stuck behind some necessarily slower car.) It will be the lead car with probability \(1/(n+1)\). Thus,
$$X_{n+1} = \begin{cases} X_n + 1 &\mbox{with probability} \frac{1}{n+1} \\ X_n & \mbox{with probability } \frac{n}{n+1} \end{cases}$$
Let \(E[X_n]\) denote the expected number of groups of cars, given n cars. From above, we now know that
\begin{equation} \begin{split} E[X_{n+1}] &= (E[X_n]+1)\cdot \frac{1}{n+1} + E[X_n]\cdot \frac{n}{n+1} \\& = E[X_n]+\frac{1}{n+1} \end{split} \end{equation}
Combine this with the fact that we know that if there is just one car (\(n=1\)), then there must necessarily be just one group (\(E[X_1]=1\)), and we're done. If there's one car there's one group. If there are two cars there are 1 + 1/2 groups on average. If there are three cars there are 1+1/2+1/3 groups on average, and so on. This series of the average number of groups of cars is known as the harmonic series. Visually, the expected number of groups looks like this, as the number of cars on the road increases:
While it looks like it might plateau at some number of groups of cars, this series never converges! As the number of cars increases to infinity, the expected number of groups increases to infinity as well — just very, very slowly.
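If you'd rather check the math than trust it, here's a quick Monte Carlo sketch in Python (the speeds are drawn at random; any continuous distribution gives the same answer, since only their ordering matters). It counts groups by scanning from the front of the road and compares the average against the harmonic sum:

```python
import random

def simulate_groups(n_cars, trials=20000):
    """Estimate the expected number of groups when n_cars get random speeds."""
    total = 0
    for _ in range(trials):
        # speeds[0] is the car at the front of the road
        speeds = [random.random() for _ in range(n_cars)]
        groups = 0
        slowest_ahead = float("inf")
        for s in speeds:
            if s < slowest_ahead:
                # slower than everyone ahead: this car leads a new group
                groups += 1
                slowest_ahead = s
            # otherwise it eventually piles up behind the group ahead
        total += groups
    return total / trials

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (1, 2, 3, 10, 50):
    print(n, round(simulate_groups(n), 3), round(harmonic(n), 3))
```

The simulated averages hug the harmonic numbers, just as the argument above says they should.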
Competition for the 🏆 Coolest Riddler Extension Award 🏆 was fierce this week. After a series of sleepless nights, and a lengthy, pricey and messy consultation with the haruspex who works above my bodega, I've decided that the award goes to [drumroll…] Nick Stanisha. Nick proposed adding a second lane — a "fast lane" — to the highway, and then analyzed optimal driver behavior. His main finding: "The cars can proceed down the highway with the fastest expected speed if the left lane is reserved exclusively for people who travel faster than average." I thank Nick for his public service, and wish the jamoke on I-278 last weekend had understood how fast lanes are supposed to work. Here is a graphic summary of Nick's finding:
Three honorable mentions for coolest extension:
Dan Schlauch proposed, "Every hour, the leader of each group of cars looks into the mirror. Upon noticing the traffic built up behind them, they pull over and let the group pass them before continuing on the road. How many hours will need to pass before the n cars are all unobstructed?" Suffice to say his findings made me happy that I live near a subway station.
Meanwhile, some readers got a bit wild. Marty Nguyen proposed teleportation portals on the highway, a la "Portal."
And Cyrus Hettle emailed to suggest the addition of highway monsters. His email is worth quoting at some length:
We now introduce two powerful creatures, a gnome and a demon, who care deeply about this particular stretch of highway. The gnome is a benevolent being, who wishes speedy travel to all, whereas the demon wishes to inflict as much unpleasant gridlock as he can on this small group of unfortunate drivers. Each has the power to, after the cars are on the road, remove one driver from the road, according to whatever suits his respective wishes (the gnome wants to minimize the average time it takes the remaining drivers to reach their destination, while the demon wants to maximize it).
Cyrus found that interference by the demon led to a slight uptick in average travel time, but interference by the gnome led to a much more significant decrease. Makes sense to me.
And with that, have a lovely weekend! | CommonCrawl |
QSAR Models for Predicting Additive and Synergistic Toxicities of Binary Pesticide Mixtures on Scenedesmus Obliquus
Ling-Yun MO, Bai-Kang YUAN, Jie ZHU, Li-Tang QIN, Jun-Feng DAI
Citation: Ling-Yun MO, Bai-Kang YUAN, Jie ZHU, Li-Tang QIN, Jun-Feng DAI. QSAR Models for Predicting Additive and Synergistic Toxicities of Binary Pesticide Mixtures on Scenedesmus Obliquus[J]. Chinese Journal of Structural Chemistry, 2022, 41(3): 2203166-2203177. doi: 10.14102/j.cnki.0254-5861.2011-3306
Ling-Yun MO a, b, c
Bai-Kang YUAN a
Jie ZHU a
Li-Tang QIN a, b
Jun-Feng DAI a, d
College of Environmental Science and Engineering, Guilin University of Technology, Guilin 541004, China
Guangxi Key Laboratory of Environmental Pollution Control Theory and Technology, Guilin University of Technology, Guilin, Guangxi 541004, China
Technical Innovation Center of Mine Geological Environmental Restoration Engineering in Southern Karst Area, MNR, Nanning 530023, China
Guangxi Collaborative Innovation Center for Water Pollution Control and Water Safety in Karst Area, Guilin University of Technology, Guilin 541004, China
Corresponding authors: Li-Tang QIN, [email protected]; Jun-Feng DAI, [email protected]
Accepted Date: 13 October 2021
the National Key Research and Development Program of China 2019YFC0507502
Guangxi Science and Technology Major Special Project Guike-AA2016004
Natural Science Foundation of Guangxi Province 2018GXNSFAA281156
Guilin Scientific Research and Technology Development Program 20180107-5
Abstract: Pesticides released into the environment may pose potential risks to ecological systems and human health. However, toxicity data on pesticide mixtures are still lacking, especially regarding the toxic interactions within the mixtures. This study aimed to determine the toxic interactions of binary mixtures of pesticides on Scenedesmus obliquus (S. obliquus) and to build quantitative structure-activity relationship (QSAR) models for predicting the mixture toxicities. Binary mixtures of five pesticides (linuron, dimethoate, dichlorvos, trichlorfon and metribuzin) were designed by the direct equipartition ray method, and the toxicities of the single pesticides and their mixtures to S. obliquus were tested by microplate toxicity analysis. QSAR models were built for the combined toxicity of the binary pesticide mixtures at the half-maximal effective concentration (EC50), 30% maximal effective concentration (EC30) and 10% maximal effective concentration (EC10). The results showed that the single toxicity followed the order: metribuzin > linuron > dichlorvos > trichlorfon > dimethoate. The mixtures of linuron and trichlorfon, dichlorvos and metribuzin, and dimethoate and metribuzin induced synergistic effects, while the remaining binary mixtures were additive. The developed QSAR models were internally validated using leave-one-out cross-validation (LOO), leave-many-out cross-validation (LMO), bootstrapping, and the y-randomization test, and externally validated with test sets. All three QSAR models agreed well with the experimental values for all mixture toxicities and presented high internal (R2 and Q2 > 0.85) and external (QF12, QF22, and QF32 > 0.80) predictive power. The developed QSAR models could accurately predict the toxicity values of EC50, EC30 and EC10 and were superior to the concentration addition (CA) model. Compared with the additive reference, the QSAR models could more accurately predict the toxicities of binary pesticide mixtures with synergistic effects.
Keywords: QSAR / toxicity prediction / binary mixture / algae
In recent years, with the progress of agricultural science and technology and the increase of agricultural inputs, pesticides have played a great role in crop cultivation, for example by improving crop yield[1]. However, the threats they pose to the environmental quality of agricultural soils and surface water cannot be ignored. Studies have shown that nonpoint source pollution is the leading cause of surface water pollution globally, with agriculture as the largest contributor. Pesticide loss and residual pollution caused by heavy pesticide application, which contaminate soil and water in agricultural areas, have become an increasingly prominent ecological problem[2, 3]. For example, Zheng et al.[4] detected 82 pesticide residues in the Chiu-lung River of Fujian Province, China. Ccanccapa et al.[5] found high detection rates of chlorpyrifos, diazinon, and carbendazim in the Turia and Júcar Rivers (Spain) from 2010 to 2013. Xu et al.[6] reported pesticide residue monitoring results from 55 main water sources in 12 urban areas of Yantai City: organophosphorus and pyrethroid pesticides were detected in all water sources at concentrations ranging from 103.0 to 345.7 ng/L. Therefore, it is of great significance to study the pollution and ecological risk caused by pesticides released into the environment.
Algae play an essential role in aquatic and soil ecosystems. Algae carry out photosynthesis, and the characteristics of their species can directly affect the structure and function of aquatic ecosystems, so they have been used as indicator organisms in ecotoxicological research in recent years[7]. Many research results have been obtained. Tien et al.[8] studied the single toxic effects of chlorpyrifos, terbufos and methamidophos on diatoms, cyanobacteria and green algae. The results showed wide variation in sensitivity among algae to different pesticides, with green algae being the most tolerant (EC50 of 1.29~41.16 mg/L). Wan et al.[9] investigated the toxicity of trichlorfon to the freshwater alga Chlamydomonas reinhardtii, with an EC50 of 200 mg/L; their study showed that Chlamydomonas reinhardtii has a high tolerance to trichlorfon and is promising for removing trichlorfon from natural water environments. Liu et al.[10] studied the single and combined toxicities of six pesticides (simetryn, bromacil, hexazinone, dodine, propoxur, and metalaxyl) on Chlorella pyrenoidosa; with concentration addition as the additive reference model, four kinds of binary mixture rays exhibited antagonistic effects on Chlorella pyrenoidosa. It is therefore important to use algae in toxicity research on pesticide pollutants, and evaluating the combined toxicity of pesticide pollutants is of great significance for ecological risk assessment.
There are many varieties of pesticides, and compound (multi-component) pesticides account for a large proportion of them. Mixed use, indiscriminate use or abuse of multiple pesticides leads to the prominent phenomenon of multiple pesticide residues in the environment. As a result, pesticides in the environment mix in many and varied forms, producing complex mixture systems[11]. The components of these complex mixtures may produce toxic interactions with each other or become enriched in living organisms, causing more serious environmental pollution problems. Therefore, the investigation of the combined toxic effects of pesticides has become a focus of researchers in recent years. Tien et al.[8] studied the combined toxicity of chlorpyrifos, terbufos and methamidophos on diatoms, cyanobacteria and green algae. The results indicated that the pesticide mixtures exhibited antagonism and synergism toward algae, and that pesticide mixtures are more likely to induce detoxification mechanisms than single pesticides. Du et al.[12] used plasma metabolomics to evaluate the toxic effects of four organophosphorus pesticides (dichlorvos, acephate, dimethoate and phorate) on male Wistar rats. The results showed that the combination of the four pesticides could produce a combined effect at individually non-damaging exposure levels, causing oxidative stress and liver and kidney dysfunction.
Quantitative structure-activity relationship (QSAR) modeling is a method that can effectively predict the physicochemical properties, environmental behavior, and toxicity parameters of organic pollutants[13]. The qualitative identification of the combined effects of mixtures has achieved remarkable results: qualitative research can determine whether the combined effect of a mixture is antagonistic, synergistic or additive[14, 15], and has been widely used. However, such qualitative discrimination cannot quantitatively determine the intensity of toxicity, so research on combined toxicity must move from qualitative to quantitative approaches. QSAR is one of the commonly used methods for studying the combined toxicity of mixtures quantitatively, and it is also a frontier and emphasis of current structure-activity relationship research[16]. In this study, we selected the green alga Scenedesmus obliquus (S. obliquus) as the target organism and determined the toxicities of five pesticides, including two herbicides (linuron and metribuzin) and three insecticides (dimethoate, dichlorvos and trichlorfon). Based on the single toxicities of the five pesticides and the combined toxicities of seven binary mixture systems designed by the direct equipartition ray (EquRay) method[17], and by screening the optimal descriptors describing the contributions of these five pesticides to the combined toxic effect, QSAR models that can accurately predict the toxicity values of the binary mixtures were developed. The models may therefore serve as a theoretical basis for predicting the toxicities of binary pesticide mixtures.
2.1 Test organism and culture
Insecticides and herbicides are used in agricultural areas to protect crops and control pests and weeds. Linuron and metribuzin are herbicides, while dimethoate, dichlorvos and trichlorfon are insecticides; all are widely used in agricultural production[18]. However, in practical use their actual utilization efficiency is very low, and a large amount of the residual herbicides and insecticides enters ecosystems such as soil and water bodies[19], posing a serious threat to the ecological environment and even to human health. Researchers have paid great attention to the environmental pollution caused by herbicides and insecticides[20], but there are few reports on the toxic effects of their mixtures. Five pesticides, linuron (CAS 330-55-2, purity > 99.44%), dimethoate (CAS 60-51-5, purity > 99.30%), dichlorvos (CAS 62-73-7, purity > 99.62%), trichlorfon (CAS 52-68-6, purity > 97.20%), and metribuzin (CAS 21087-64-9, purity > 99.82%), were purchased from Dr Ehrenstorfer GmbH Co. The S. obliquus was purchased from the Freshwater Algae Culture Collection at the Institute of Hydrobiology (FACHB), numbered FACHB-5.
2.2 Toxicity test and concentration-response curve fitting
A modified procedure of the microplate toxicity analysis (MTA) based on algal growth inhibition was used to determine the toxicity of a pesticide or a mixture to S. obliquus[21]. In the MTA, the peripheral wells were filled with 200 μL of water to minimize edge effects. A total of 24 wells in the second, sixth, seventh, and eleventh columns of a 96-well microplate were used as blank controls, and the remaining 36 wells held twelve concentration gradients, each in three parallels, obtained by serial dilution. A total of 100 μL of algal suspension was added to each well, giving a total volume of 200 μL. Three replicate plates were prepared and incubated at 25 ℃ with a light:dark ratio of 12:12. The optical density (OD) of S. obliquus was determined on a Power Wave microplate spectrophotometer (BIO-TEK, USA) after 96 h at a wavelength of 690 nm. Toxicity is expressed in terms of the inhibition rate (E), as given in Eq. (1).
$ E=\left(1-\frac{OD}{{OD}_{0}}\right)\times 100\% $
where OD was the average optical density of the experimental group and OD0 was that of the control group.
To obtain the effect values and effect concentrations for the individual pesticides and their mixtures, Logit (Eq. 2) and Weibull (Eq. 3) functions were used to fit the concentration-response curves (CRC) measured by MTA with the nonlinear least-squares method. In statistics, the ratio of the number of data points (i.e., the number of samples) to the number of model parameters should be greater than 5:1. There were 12 concentration points in the MTA, so two-parameter equations (Logit and Weibull) were used to fit the curves in this study, which effectively reduces over-fitting. The coefficient of determination (R2) is an evaluation index of the final regression fit. For each CRC, the Logit and Weibull functions were both fitted and their R2 values compared; the closer R2 is to 1, the better the fit.
$ E=1/(1+\exp (-\alpha -\beta \log c)) $
$ E=1-\exp (-\exp (\alpha +\beta \log c)) $
where E is the toxic effect value, c the concentration dose, α the positional parameter, and β the slope parameter.
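As an illustration of how Eqs. (2) and (3) can be fitted in practice, the following Python sketch uses nonlinear least squares (scipy.optimize.curve_fit). The base-10 logarithm and the example data are assumptions chosen only for demonstration; they are not the measured S. obliquus data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logit_crc(c, alpha, beta):
    """Logit concentration-response function, Eq. (2); log taken as base 10."""
    return 1.0 / (1.0 + np.exp(-alpha - beta * np.log10(c)))

def weibull_crc(c, alpha, beta):
    """Weibull concentration-response function, Eq. (3)."""
    return 1.0 - np.exp(-np.exp(alpha + beta * np.log10(c)))

# Hypothetical example: 12 concentrations (dilution series) and observed inhibition rates
conc = np.logspace(-5.5, -2.5, 12)
rng = np.random.default_rng(1)
effect = np.clip(weibull_crc(conc, 6.3, 2.2) + rng.normal(0, 0.02, 12), 0, 1)

for name, func in (("Logit", logit_crc), ("Weibull", weibull_crc)):
    (alpha, beta), _ = curve_fit(func, conc, effect, p0=(5.0, 2.0), maxfev=10000)
    pred = func(conc, alpha, beta)
    r2 = 1 - np.sum((effect - pred) ** 2) / np.sum((effect - effect.mean()) ** 2)
    rmse = np.sqrt(np.mean((effect - pred) ** 2))
    print(f"{name}: alpha={alpha:.2f} beta={beta:.2f} R2={r2:.4f} RMSE={rmse:.4f}")
```

The function with the higher R2 would then be retained for that curve, as described above.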
2.3 Mixture design
To systematically examine the variation of toxicity in the mixtures, five mixture rays with different concentration ratios (pi, i = 1, 2, 3, 4, 5) were designed by the direct equipartition ray design (EquRay) for every mixture group (Table 2), and 12 concentration points were arranged for each ray. The concentration ratio (pi) of a component (pesticide) is defined as the ratio of the concentration of the component in a mixture to the sum of the concentrations of all components in the mixture. There are seven groups of binary mixtures: B1 consists of linuron and dimethoate, B2 of linuron and dichlorvos, B3 of linuron and trichlorfon, B4 of dichlorvos and trichlorfon, B5 of dichlorvos and metribuzin, B6 of trichlorfon and metribuzin and B7 of dimethoate and metribuzin.
2.4 Toxic interaction analysis of the mixtures
The concentration addition (CA) model was used as the additive reference model to analyze and compare the toxic interactions in different concentration ranges from CRCs of mixtures. The formula of the CA model is expressed as in the following equation[22]:
$ ECx_{mix}={\left(\sum\limits_{i=1}^{n}\frac{p_{i}}{ECx_{i}}\right)}^{-1} $
where ECxmix is the mixture concentration in x% combined effect, n is the number of mixture components, ECx, i represents the concentration at which the effect (x%) produced in the presence of the ith compound alone in the mixture, and pi is the concentration ratio of the ith component in the mixture.
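A minimal sketch of the CA calculation in Eq. (4) is given below. The 1:1 concentration ratio and the use of the Table 1 EC50 values of dichlorvos and trichlorfon are only an illustrative example, not one of the designed rays; concentrations are in the same units as Table 1.

```python
def ca_ecx_mix(ecx_single, ratios):
    """Concentration addition (CA) prediction of the mixture ECx, Eq. (4).

    ecx_single : ECx values of the individual components at the same effect level x
    ratios     : concentration ratios p_i of the components (must sum to 1)
    """
    assert abs(sum(ratios) - 1.0) < 1e-9, "concentration ratios must sum to 1"
    return 1.0 / sum(p / ecx for p, ecx in zip(ratios, ecx_single))

# Illustrative 1:1 mixture of dichlorvos and trichlorfon using the EC50 values in Table 1
print(f"CA-predicted EC50: {ca_ecx_mix([7.252e-4, 7.811e-4], [0.5, 0.5]):.3e}")
```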
The combined toxic interaction of a mixture was qualitatively distinguished by comparing the experimental CRC with the CRC predicted by CA over the whole effect range[23]. Because of the experimental error in the toxicity tests and the fitting error in the nonlinear CRC regression, the 95% observed confidence intervals (OCI) of the experimental CRC must be considered when making the comparison. If the CA-predicted CRC lies almost entirely between the upper and lower limits of the 95% OCI, the toxicity of the mixture is additive. If the CA-predicted CRC deviates significantly from the experimental CRC and lies above the 95% OCI, the interaction is antagonistic; if it lies below the 95% OCI, the interaction is synergistic.
2.5 Characterization of molecular structure of mixtures
Firstly, the 3D molecular structures of the five single pesticides and their binary mixtures were pre-optimized for energy minimization in Gaussian09 software[24]. Secondly, density functional theory (DFT) at the B3LYP/6-31G(d, p) level in Gaussian09 was employed to optimize the molecular structures of the single and binary mixed pesticides to the best transition state[24]. Sixteen descriptors were extracted, including polarizability, single-point energy, dipole moment, the most positive charge, the most negative charge, EHOMO, ELUMO, zero-point vibration energy, enthalpy of formation, Gibbs free energy, thermal correction value, constant-volume molar heat capacity, entropy, absolute hardness, softness, and chemical potential. Thirdly, the AM1 method in the MOPAC program was used to calculate eight descriptors of the molecular structures of the single and binary mixed pesticides, including the final heat of formation, total energy, electronic energy, core repulsion, COSMO surface area, COSMO volume, ionization potential and molecular weight[25]. Finally, Dragon software[26] can only calculate molecular descriptors for a single pesticide, including the topological index, connectivity index, RDF descriptors, 3D-MoRSE descriptors, and edge adjacency index. A total of 5270 molecular structure descriptors of the single pesticides were preliminarily screened in Dragon, and 43 qualified descriptors were retained.
If the binary mixture conforms to the concentration addition model, it can be considered that the mixture has no interaction, which can be characterized by the mixture descriptor (xmix) as follows[27]:
$ x_{mix} = p_1x_1 + p_2x_2 + \cdots + p_nx_n $
where xmix is mixture descriptors, xi represents a descriptor for component, and pi is the concentration ratio of the ith component in the mixture.
If the components contained in the mixture are considered together as a whole, and molecular structure descriptors of all components in the mixture are calculated simultaneously by Guassian09 and MOPAC software, a mixture descriptor with interaction information can be obtained as below:
$ x_{mix} = (p_1x_1 + p_2x_2 + \cdots + p_nx_n) - nx_m $
where xmix is the mixture descriptors, xi represents a descriptor for component, pi is the concentration ratio of the ith component in the mixture, n is the number of components of the mixture and xm is the descriptor obtained when the mixture is calculated as a whole.
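The two descriptor-combination rules can be written as one small helper function; the numerical values in the example are hypothetical and only illustrate the arithmetic.

```python
import numpy as np

def mixture_descriptor(component_descriptors, ratios, whole_mixture_descriptor=None):
    """Combine single-component descriptors into a mixture descriptor.

    Eq. (5): weighted sum of p_i * x_i, used when no interaction is assumed.
    Eq. (6): the weighted sum minus n * x_m, where x_m is the descriptor value
    obtained when the whole mixture is calculated as a single system.
    """
    x = np.asarray(component_descriptors, dtype=float)
    p = np.asarray(ratios, dtype=float)
    x_mix = float(np.dot(p, x))                      # Eq. (5)
    if whole_mixture_descriptor is not None:         # Eq. (6)
        x_mix -= len(x) * whole_mixture_descriptor
    return x_mix

# Hypothetical dipole moments of two components mixed at a 0.7 : 0.3 ratio
print(mixture_descriptor([3.1, 4.6], [0.7, 0.3]))
print(mixture_descriptor([3.1, 4.6], [0.7, 0.3], whole_mixture_descriptor=1.5))
```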
2.6 Establishment and validation of the QSAR model
The overall data set was randomly divided into a training set (70%) and a test set (30%) in the QSARINS software[28, 29]. The test set was used for external validation of the model. The best descriptors were selected by a genetic algorithm in QSARINS, and the QSAR models were built on the training set by multiple linear regression (MLR). To ensure the reliability and validity of the regression models, leave-one-out cross-validation (LOO), leave-many-out cross-validation (LMO), the y-randomization test, and the bootstrapping method were used for internal validation.
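For readers who want to reproduce the basic regression statistics outside QSARINS, the numpy sketch below shows how the MLR coefficients, R2 and the leave-one-out Q2 can be computed. The genetic-algorithm descriptor selection performed in QSARINS is not reproduced, and the random data are placeholders.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares MLR; returns coefficients for [intercept, x1, ..., xk]."""
    Xb = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def r2_score(y, y_pred):
    return 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

def q2_loo(X, y):
    """Leave-one-out cross-validated Q2 of an MLR model."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        coef = fit_mlr(X[mask], y[mask])
        preds[i] = coef[0] + X[i] @ coef[1:]
    return 1.0 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)

# Placeholder example: 25 training mixtures described by 4 descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(25, 4))
y = X @ np.array([0.5, -0.2, 0.1, 0.8]) + rng.normal(0, 0.1, size=25)
coef = fit_mlr(X, y)
print("R2 =", round(r2_score(y, coef[0] + X @ coef[1:]), 3),
      "Q2_LOO =", round(q2_loo(X, y), 3))
```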
The application domain (AD) of a QSAR model is a theoretical region in the space defined by the descriptors used in the model; predictions made by an established QSAR model are considered reliable only within this region. The AD of the QSAR model is defined with the leverage value hi, which is calculated as[30]:
$ h_{i} = x_{i}(X^{T}X)^{-1}x_{i}^{T} $ (7)
where xi represents the original variable of molecular descriptor for a compound and X is the descriptor matrix of the compounds of the training set.
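The leverage calculation of Eq. (7) is straightforward to implement, as sketched below. The threshold h* = 3(k + 1)/n used in the sketch is the conventional choice; the value of 0.4800 used in this work was obtained from the QSARINS program.

```python
import numpy as np

def leverages(X_train, X_query=None):
    """Leverage values h_i = x_i (X^T X)^{-1} x_i^T over the training descriptor matrix."""
    if X_query is None:
        X_query = X_train
    xtx_inv = np.linalg.inv(X_train.T @ X_train)
    return np.einsum("ij,jk,ik->i", X_query, xtx_inv, X_query)

def leverage_threshold(n_train, n_descriptors):
    """Conventional warning leverage h* = 3(k + 1)/n."""
    return 3.0 * (n_descriptors + 1) / n_train
```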
3.1 Toxicity of five pesticides to S. obliquus
The fitted CRC parameters (α and β), fitting statistics (R2 and RMSE), and EC(10, 30, 50) of the five pesticides are listed in Table 1. From Table 1, apart from the CRCs of dimethoate and dichlorvos, which were described by the Weibull function, the CRCs of the other three pesticides to S. obliquus were well described by the Logit function (R2 > 0.9704 and RMSE < 0.0783). The experimental concentration-toxicity (inhibition percent) data points and fitted CRCs are shown in Fig. 1. Taking the EC50 value as the toxicity index, the toxicity order of the five pesticides to S. obliquus was metribuzin > linuron > dichlorvos > trichlorfon > dimethoate. The freshwater green alga S. obliquus was more sensitive to the two herbicides (linuron and metribuzin) than to the insecticides. The structures of dimethoate, dichlorvos, and trichlorfon share a similar framework and differ only in their side-chain substituents, and the toxicities of dichlorvos and trichlorfon, which contain chlorine substituents, were slightly greater than that of dimethoate, which has none. Therefore, side-chain chlorine substituents may enhance pesticide toxicity. In addition, metribuzin, like linuron, has a cyclic structure, and its toxicity is higher than that of dimethoate, dichlorvos, and trichlorfon, so a cyclic structure may also increase pesticide toxicity.
Table 1. Fitted CRC Parameters (α and β), Fitted Statistics (R2 and RMSE), and EC(10, 30, 50) of Five Pesticides
Pesticide CAS RN F α β RMSE R2 EC50 EC30 EC10
Linuron 330-55-2 L 27.33 3.90 0.0607 0.9899 9.824E-08 5.957E-08 2.685E-08
Dimethoate 60-51-5 W 6.39 2.28 0.0605 0.9808 9.304E-04 5.562E-04 1.623E-04
Dichlorvos 62-73-7 W 6.26 2.16 0.0528 0.9704 7.252E-04 4.213E-04 1.148E-04
Trichlorfon 52-68-6 L 14.48 4.66 0.0783 0.9713 7.811E-04 5.139E-04 2.638E-04
Metribuzin 21087-64-9 L 29.87 4.17 0.0524 0.9809 6.870E-08 4.303E-08 2.042E-08
F: fitted functions where L refers to Logit function and W to Weibull function; α and β are fitting function parameters;
CAS RN is the chemical abstracts service register number; R2 and RMSE are the fitting function statistic.
Figure 1. Concentration-response curves of five pesticides to S. obliquus
3.2 Combined toxicity of pesticide mixtures
The toxicities of the 35 binary pesticide mixture rays to S. obliquus were determined by the MTA method; the experimental concentration-toxicity (inhibition percent) data points and fitted CRCs are shown in Fig. 2. The CRCs of two of the 35 mixture rays (B1-R1 and B3-R5) were well described by the Logit (L) function, and the other 33 rays by the Weibull (W) function (Table 1S). The fitted regression coefficients (α and β), the fitting statistics (determination coefficient R2 and root mean squared error RMSE) and the pEC(50, 30, 10) values are listed in Table 1S. All 35 mixture rays showed good statistical quality, with determination coefficients > 0.93 and root mean squared errors < 0.1254, indicating that all pesticide mixtures have a good concentration-response relationship.
Figure 2. Concentration-response curves of 35 rays of pesticide binary mixture to S. obliquus
3.3 Toxicity interaction between pesticide mixtures
With CA selected as the additive reference model, a mixture exhibits synergistic, additive, or antagonistic interaction when the experimentally fitted effect is higher than, equal to, or lower than the CA-predicted effect, respectively. As shown in Fig. 2, at different concentration ratios the fitted CRCs deviated from the CA-predicted line to different degrees, i.e., the interactions differed.
The CA-predicted CRCs of five binary mixture rays (B3-R1, B3-R2, B7-R2, B7-R3 and B7-R5) deviated significantly from the experimental CRCs, lying below the 95% OCI, and thus displayed synergistic interactions. The CA-predicted CRCs of ten binary mixture rays (B3-R3, B3-R4, B3-R5, B5-R1, B5-R2, B5-R3, B5-R4, B5-R5, B7-R1, and B7-R4) clearly deviated from the experimental CRCs in the high-concentration region (including EC50 and EC30), lying below the 95% OCI and showing synergism at high concentrations. The CA-predicted CRC of B6-R1 deviated from the experimental CRC in the medium-concentration region (including EC10), lying below the 95% OCI and exhibiting synergism at medium concentrations. The CA-predicted CRC of B2-R1 deviated from the experimental CRC in the low-concentration region, lying below the 95% OCI and demonstrating synergism at low concentrations. For the remaining 18 of the 35 mixture rays, the predicted CRCs were located almost entirely between the upper and lower limits of the 95% OCI, indicating that the toxicities of these mixtures are additive. Overall, 16, 15, and 10 mixture rays displayed synergism at the EC50, EC30 and EC10 effect levels, respectively.
The CA model was used to evaluate the toxicity interaction (synergism or additive) of binary mixtures of pesticides. The results showed that the proportions of rays with synergistic effect were 46%, 43% and 29% at EC50, EC30 and EC10 concentrations, respectively. Therefore, the variation of concentration level may change the toxicity interaction of binary mixtures. Besides, from EC10 to EC50, the synergistic effect of pesticides on S. obliquus was enhanced with the increase of concentration level of a binary mixture. As previous studies have shown, synergy was enhanced at the high concentration level of binary mixtures such as pesticides and metals[31, 32].
3.4 QSAR model of pesticide mixtures
For genetic algorithm parameters, we selected the maximum population size (1000 chromosomes), the minimum mutation rate (0.05), the maximum model size (4 variables) and 500 as the number of generations. By selecting four descriptors as the best independent variables and using the pEC50, pEC30 and pEC10 (the negative logarithm of EC50, EC30 and EC10) values as dependent variables, the QSAR models of pesticide mixtures were established, including model pEC50, model pEC30 and model pEC10. The model pEC50 was shown as follows:
$ pEC_{50} = –(1681.399 ± 370.9857) + (0.0003 ± 0.0002)·{\bf{TE}} – (0.0049 ± 0.0012)·{\bf{COA}}\\ \;\;\;\;– (0.0357 ± 0.0102)·{\bf{DM}} + (3499.740 ± 772.6334)·{\bf{GGI4}} \\ {R^2} = 0.9403,RMSE = 0.1033,Q_{{\rm{LOO}}}^2 = 0.8921,Q_{{\rm{LMO}}}^2 = 0.8760,R_{{\rm{bstr}}}^2 = 0.9316,Q_{{\rm{bstr}}}^2 = 0.8858,R_{{\rm{yrand}}}^2 = 0.1070\\Q_{{\rm{yrand}}}^2 = - 0.4114,RMS{P_{{\rm{ext}}}} = 0.1420,Q_{{\rm{F}}1}^2 = 0.8890,Q_{{\rm{F}}2}^2 = 0.8291,Q_{{\rm{F}}3}^2 = 0.8872,CCC = 0.8985,F = 78.7920 $
where, TE is the total energy, COA the cosmo area, DM the dipole moment, GGI4 the topological charge index of order 4, RMSE the root mean square error of the model, F the Fischer's statistic, R2 the coefficient of multiple determination, QLOO2 the correlation coefficient of leave-one-out cross-validation, and QLMO2 the correlation coefficient of leave-many-out cross-validation, Rbstr2 and Qbstr2 are the average correlation coefficient of bootstrapping method, Ryrand2 and Qyrand2 the maximum R2 and Q2 the values of the y-randomization tests, RMSPext is the root mean square error of test set, QF12, QF22 and QF32 are the external validation coefficient of the model, and CCC is the concordance correlation coefficient of the model. The values before and after "±" represent the regression coefficient of the model and its corresponding standard deviation.
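Once the four mixture descriptors have been obtained as described in Section 2.5, applying the fitted pEC50 model is a single calculation. The sketch below merely restates the regression equation in code and assumes the descriptor values are supplied in the same units used when the model was fitted.

```python
def predict_pec50(te, coa, dm, ggi4):
    """pEC50 of a binary pesticide mixture from the fitted MLR model (central coefficients).

    te   : total energy (TE) of the mixture
    coa  : COSMO area (COA)
    dm   : dipole moment (DM)
    ggi4 : topological charge index of order 4 (GGI4)
    """
    return -1681.399 + 0.0003 * te - 0.0049 * coa - 0.0357 * dm + 3499.740 * ggi4
```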
The model pEC30 equation is as follows:
$pEC_{30} = –(1726.8388 ± 432.7641) + (0.003 ± 0.0002)·{\bf{TE}} – (0.0056 ± 0.0015)·{\bf{COA}} \\\;\;\;\;\; – (0.0366 ± 0.0123)·{\bf{DM}} + (3593.603 ± 901.8907)·{\bf{GGI4}} \\ {R^2} = 0.9298,RMSE = 0.1196,Q_{{\rm{LOO}}}^2 = 0.8780,Q_{{\rm{LMO}}}^2 = 0.8731,R_{{\rm{bstr}}}^2 = 0.9196,Q_{{\rm{bstr}}}^2 = 0.8630,R_{{\rm{yrand}}}^2 = 0.1051 \\ Q_{{\rm{yrand}}}^2 = - 0.4184,RMS{P_{{\rm{ext}}}} = 0.1440,Q_{{\rm{F}}1}^2 = 0.8361,Q_{{\rm{F}}2}^2 = 0.8159,Q_{{\rm{F}}3}^2 = 0.8906,CCC = 0.9177,F = 66.2390 $
where, TE is the total energy, COA the cosmo area, DM the dipole moment, and GGI4 the topological charge index of order 4.
The model pEC10 is shown as below:
$ pEC_{10} = –(1914.501 ± 565.2238) + (0.0044 ± 0.0016)·{\bf{COA}} – (0.5269 ± 0.2981)·{\bf{MW}}\\ \;\;\;\;– (5.8414 ± 3.8905)·E_{HOMO} + (3985.4347 ± 1178.3972)·{\bf{GGI4}} \\{R^2} = 0.8564,RMSE = 0.1759,Q_{{\rm{LOO}}}^2 = 0.7615,Q_{{\rm{LMO}}}^2 = 0.7484,R_{{\rm{bstr}}}^2 = 0.8588,Q_{{\rm{bstr}}}^2 = 0.7679,R_{{\rm{yrand}}}^2 = 0.1272\\ Q_{{\rm{yrand}}}^2 = - 0.3807,RMS{P_{{\rm{ext}}}} = 0.1781,Q_{{\rm{F}}1}^2 = 0.8327,Q_{{\rm{F}}2}^2 = 0.8324,Q_{{\rm{F}}3}^2 = 0.8281,CCC = 0.9019,F = 29.8160 $
where, COA is the cosmo area, MW the molecular weight, EHOMO the highest molecular orbital energy, and GGI4 the topological charge index of order 4.
The statistic is used to evaluate the internal and external predictive ability of QSAR models. If the statistic values are satisfied (thus the models are robust), the models are acceptable. Q2 greater than 0.5 considered the model to be good, and greater than 0.9 showed the model excellent[33]. Tropsha et al.[34] suggested that both R2 and Q2 are greater than 0.6. There was no significant difference between correlation coefficients of models pEC50 (R2 = 0.9403 and QLOO2 = 0.8321), pEC30 (R2 = 0.9298 and QLOO2 = 0.8780) and pEC10 (R2 = 0.8564 and QLOO2 = 0.7615), which shows these models have strong robustness. In addition, R2 – QLOO2 < 0.1[35], so the models have no over-fitting. Furthermore, the statistic values of QLMO2 and QLOO2 were close to each other, and the result of bootstrapping method[36] is as below: Rbstr2 > 0.6, Qbstr2 > 0.5, indicating that the models all have good robustness. The result of y-randomization test was Ryrand2 > Qyrand2[37], which demonstrates no chance correlation between the independent and dependent variables of the models. The external predictive ability of the model can be assessed with statistics of QF12, QF22 and QF32. All relevant statistical parameters of the external validation of the three models meet the conditions suggested by Chirico and Gramatica[38], in which all QF12, QF22, and QF32 shall be greater than 0.6 and the concordance correlation coefficient (CCC) should be greater than 0.85. It could be indicated that the three models have a good external predictive ability.
Williams plot is the relationship between LOO standard residual and leverage value[39], and the leverage threshold (h*) of models pEC50, pEC30 and pEC10 was 0.4800. The leverage threshold h* = 0.4800 was derived according to QSARMLR program, and the standard residual is the value obtained by dividing the residual by its standard deviation, setting at ± 2.5[40]. If the absolute value of the LOO standard residual of a sample is greater than 2.5 and greater than h*, the sample is abnormal[41]. Moreover, it could be seen from Fig. 3 that the calculated values of the training set and the predicted values of the test set are all within the application domain, indicating that the model has no abnormal samples and the data of all binary mixture rays are reliable.
Figure 3. Williams plot based on standardized residuals for three models
3.5 Interpretation of the descriptors
GGI4 is defined as the sum of the corresponding charge terms absolute values over a pair of vertices (nonhydrogen atoms) with all topological distances equal to 4 in the molecule. Its value is mainly determined by the number of nonhydrogen atoms, the charge, and the topological distance. GGI4 can be calculated by establishing the compound matrix (distance matrix D, coulomb matrix T, matrix M = C⋅T, where C is the adjacency matrix). The mathematical model can be expressed as[42]:
$ G_k = \sum\limits_{i=1}^{N-1}\sum\limits_{j=1}^{N}\left| g_{ij} \right|\delta_{k, d_{ij}} $
where element dij of matrix D is the topological distance between atoms i and j, gij = mij – mji (mij is an element of matrix M) denotes the difference in net charge between atoms i and j, N is the number of nonhydrogen atoms in the molecule, k is an indicator of the topological distance between atoms i and j, taking values from 1 to 10 (k = 4 in this descriptor), and δ is the Kronecker delta[43], which equals 1 when its two arguments are equal and 0 otherwise.
The cosmo area is used in the cosmo model to characterize the size of the area where solvent molecules surround the molecules in solution[44]. Total energy includes electronic energy and nuclear repulsion energy, and its value is mainly related to the number of atoms in the molecule and the charge the atom itself carries. Dipole moment belongs to the geometric descriptor[45, 46], a physical quantity that describes the situation of charge distribution in a molecule. It usually contains information about the property and location of all atoms and bonds in the molecule, representing properties such as geometry, type of chemical bonds and so on of the molecule.
Molecular weight is the sum of the relative atomic masses of all atoms in a molecule. EHOMO represents the energy of the highest occupied molecular orbital. As the atomic number increases, the effective nuclear charge and the principal quantum number also increase, moving electrons towards orbitals of higher energy, so the molecular orbital energy is affected by the types of atoms, their number and their spatial distances[47].
3.6 Model comparison between QSAR and CA
The QSAR model quantitatively analyzes the toxicity of mixtures based on their molecular structure information, while the CA model qualitatively analyzes mixture toxicity based on the toxicities of the single compounds[48]. Their abilities to predict the toxicity of binary mixtures therefore differ greatly. As can be seen from Fig. 4, the QSAR model fits the data much more closely than the CA model. The RMSE of the QSAR models at EC50, EC30 and EC10 were 0.1033, 0.1196 and 0.1759, respectively, versus 0.4273, 0.4182 and 0.4818 for the CA model. It can be concluded that the CA predictions deviate greatly from the experimental values, and that the QSAR model has a higher predictive ability than the CA model at all three effect levels. In addition, models pEC50 and pEC30 had higher R2 (0.9403 and 0.9298) and lower RMSPext (0.142 and 0.144), so their predictive ability and robustness were clearly better than those of model pEC10 (R2 = 0.8564 and RMSPext = 0.1781). Combining this with the analysis in section 3.3, 16, 15 and 10 binary mixture rays with synergistic effects were included in models pEC50, pEC30 and pEC10, respectively, which indicates that the predictive ability of the QSAR models improves as the synergistic effect of the mixtures increases. Thus, the QSAR model can predict the synergistic toxicity of binary mixtures more accurately.
Figure 4. Plot of the observed versus predicted pEC50, pEC30 and pEC10 resulted from QSAR model (A, B, C) and CA model (D, E, F)
The concentration-response curves of the five pesticides and their 35 binary mixture rays for S. obliquus could all be well fitted by the Logit or Weibull function. The toxicity order of the five pesticides was metribuzin > linuron > dichlorvos > trichlorfon > dimethoate. The freshwater green alga S. obliquus was more sensitive to the two herbicides (linuron and metribuzin) than to the insecticides. The toxicity of a single pesticide may be related to its cyclic structure: metribuzin and linuron, which have cyclic structures, were more toxic than the other pesticides.
The mixtures of linuron and trichlorfon, dichlorvos and metribuzin, and dimethoate and metribuzin exhibited synergistic effects, while the remaining binary mixtures exhibited additive effects. With CA as the additive reference model for evaluating the toxicity interactions of the binary pesticide mixtures, it was found that 46%, 43% and 29% of the binary mixtures showed synergistic effects at the EC50, EC30 and EC10 levels, respectively, indicating that the interaction mode of chemicals may change with the mixture concentration level or concentration ratio, and that the higher the concentration level, the greater the likelihood of a synergistic effect.
Mixture structure descriptors can effectively characterize the overall molecular structure of a mixture. The QSAR models established at the EC50, EC30 and EC10 effect concentrations had good internal robustness (both R2 and Q2 > 0.8) and external predictive ability (QF2 > 0.8, CCC > 0.85). The model quality followed the order pEC50 > pEC30 > pEC10, and the predictive ability of each model was better than that of the CA model at the same effect level. Therefore, the three models may be useful for predicting unknown toxicity values of binary pesticide mixtures. In addition, compared with the additive reference, the three models can more accurately predict the toxicity of binary pesticide mixtures with synergistic effects.
Papa, E.; Battaini, F.; Gramatica, P. Ranking of aquatic toxicity of esters modelled by QSAR. Chemosphere 2005, 58, 559−570. doi: 10.1016/j.chemosphere.2004.08.003
Park, J. A.; Abd El-Aty, A. M.; Zheng, W.; Kim, S. K.; Cho, S. H.; Choi, J. M.; Hacimuftuo, A.; Jeong, J. H.; Wang, J.; Shim, J. H.; Shin, H. C. Simultaneous determination of clanobutin, dichlorvos, and naftazone in pork, beef, chicken, milk, and egg using liquid chromatography-tandem mass spectrometry. Food Chem. 2018, 252, 40−48. doi: 10.1016/j.foodchem.2018.01.085
Yu, Y.; Hu, S.; Yang, Y.; Zhao, X.; Xue, J.; Zhang, J.; Gao, S.; Yang A. Successive monitoring surveys of selected banned and restricted pesticide residues in vegetables from the northwest region of China from 2011 to 2013. Bmc. Pub. Hea. 2018, 18, 91−100. doi: 10.1186/s12889-017-4632-x
Zheng, S.; Chen, B.; Qiu, X.; Chen, M.; Ma, Z.; Yu, X. Distribution and risk assessment of 82 pesticides in jiulong river and estuary in south China. Chemosphere 2016, 144, 1177−1192. doi: 10.1016/j.chemosphere.2015.09.050
Ccanccapa, A.; Masiá, A.; Andreu, V. Spatio-temporal patterns of pesticide residues in the turia and júcar rivers (Spain). Sci. Total Environ. 2016, 540, 200−210. doi: 10.1016/j.scitotenv.2015.06.063
Xu, Y. C.; Wang, S. S.; Xu, J. J.; Yu, G. M. Monitoring pesticide residues in source of drinking water in rural area, Yantai. Mod. Prev. Med. 2015, 42, 1704−1707.
Chen, C.; Zou, W. B.; Cui, G. L.; Tian, J. C.; Wang, Y. C.; Ma, L. M. Ecological risk assessment of current-use pesticides in an aquatic system of Shanghai, China. Chemosphere 2020, 257, 127−222.
Tien, C. J.; Chen, C. S. Assessing the toxicity of organophosphorous pesticides to indigenous algae with implication for their ecotoxicological impact to aquatic ecosystems. J. Envir. Sci. Hea. Part. B 2012, 47, 901−912. doi: 10.1080/03601234.2012.693870
Wan, L.; Wu, Y.; Ding, H.; Zhang, W. Toxicity, biodegradation, and metabolic fate of organophosphorus pesticide trichlorfon on the freshwater algae chlamydomonas reinhardtii. J. Agric. Food Chem. 2020, 68, 1645−1653. doi: 10.1021/acs.jafc.9b05765
Liu, S. S.; Wang, C. L.; Zhang, J.; Zhu, X. W.; Li, W. Y. Combined toxicity of pesticide mixtures on green algae and photobacteria. Ecotoxicol. Environ. Saf. 2013, 95, 98−103. doi: 10.1016/j.ecoenv.2013.05.018
Tao, Y.; Jia, C.; Jing, J.; Zhang, J.; Yu, P.; He, M.; Wu, J.; Chen, L.; Zhao, E. Occurrence and dietary risk assessment of 37 pesticides in wheat fields in the suburbs of Beijing, China. Food Chem. 2021, 350, 129245. doi: 10.1016/j.foodchem.2021.129245
Du, L.; Li, S.; Qi, L.; Hou, Y.; Yan, Z.; Wei, X.; Wang, H.; Zhao, X.; Sun, C. Metabonomic analysis of the joint toxic action of long-term low-level exposure to a mixture of four organophosphate pesticides in rat plasma. Mol. Biosyst. 2014, 10, 1153−1161. doi: 10.1039/C4MB00044G
Qin, L. T.; Chen, Y. H.; Zhang, X.; Mo, L. Y.; Zeng, H. H.; Liang, Y. P. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide. Chemosphere 2018, 198, 122−129. doi: 10.1016/j.chemosphere.2018.01.142
Mo, L. Y.; Zhao, D. N.; Qin, M.; Qin, L. T.; Zeng, H. H.; Liang, Y. P. Joint toxicity of six common heavy metals to chlorella pyrenoidosa. Environ. Sci. Pollut. Res. 2017, 26, 30554−30560.
Mo, L. Y.; Liu, J.; Qin, L. T.; Zeng, H. H.; Liang, Y. P. Two-stage prediction on effects of mixtures containing phenolic compounds and heavy metals on Vibrio qinghaiensis sp. Q67. Bull. Environ. Contam. Toxicol. 2017, 99, 17−22. doi: 10.1007/s00128-017-2099-1
Zhang, S. N.; Su, L. M.; Zhang, X. J.; Li, C.; Qin, W. C.; Zhang, D. M.; Liang, X. X.; Zhao, Y. H. Combined toxicity of nitro-substituted benzenes and zinc to photobacterium phosphoreum: evaluation and QSAR analysis. Int. J. Env. Res. Pub. He. 2019, 16, 1041−1052. doi: 10.3390/ijerph16061041
Dou, R. N.; Liu, S. S.; Mo, L. Y.; Liu, H. L.; Deng, F. C. A novel direct equipartition ray design (EquRay) procedure for toxicity interaction between ionic liquid and dichlorvos. Environ. Sci. Pollut. Res. 2011, 18, 734−742. doi: 10.1007/s11356-010-0419-7
Zhou, B.; Xin, L. The monitoring of chemical pesticides pollution on ecological environment by GIS. Environ. Technol. Inno. 2021, 34, 101506.
Calvo, S.; Romo, S.; Soria, J. Pesticide contamination in water and sediment of the aquatic systems of the natural park of the albufera of valencia (spain) during the rice cultivation period. Sci. Total Environ. 2021, 774, 145009. doi: 10.1016/j.scitotenv.2021.145009
Thomas, M. C.; Flores, F.; Kaserzon, S. Toxicity of ten herbicides to the tropical marine microalgae Rhodomonas salina. Sci. Rep. 2020, 10, 7612. doi: 10.1038/s41598-020-64116-y
Ge, H. L.; Liu, S. S.; Su, B. X. Predicting joint toxicity of organophosphorus and triazine pesticides on green algae using the generalized concentration addition model. J. Environ. Sci. 2014, 34, 2413−2419.
Owczarek, K.; Kudak, B. J.; Simeonov, V.; Mazerska, Z.; Namienik, J. Binary mixtures of selected bisphenols in the environment: their toxicity in relationship to individual constituents. Molecules 2018, 23, 3226−3239. doi: 10.3390/molecules23123226
Liu, S. S.; Zhang, J.; Zhang, Y. H.; Qin, L. T. APTox: assessment and prediction on toxicity of chemical mixtures. Acta Chim. Sin. 2012, 70, 1511−1517. doi: 10.6023/A12050175
Qu, R. J.; Liu, H. X.; Feng, M. B.; Yang, X.; Wang, Z. Y. Investigation on intramolecular hydrogen bond and some thermodynamic properties of polyhydroxylated anthraquinones. J. Chem. Eng. Data 2012, 57, 2442−2455. doi: 10.1021/je300407g
Shi, J. Q.; Qu, R. J.; Feng, M. B.; Wang, X. G.; Wang, L. S.; Yang, S. G.; Wang, Z. Y. Oxidative degradation of decabromodiphenyl ether (BDE 209) by potassium permanganate: reaction pathways, kinetics, and mechanisms assisted by density functional theory calculations. Environ. Sci. Technol. 2015, 49, 4209−4217. doi: 10.1021/es505111r
Andrea, M.; Viviana, C.; Manuela, P.; Roberto, T. Dragon software: an easy approach to molecular descriptor calculations. Match-Commun. Math. Co. 2006, 56, 237−248.
Chatterjee, M.; Roy, K. Prediction of aquatic toxicity of chemical mixtures by the QSAR approach using 2D structural descriptors. J. Hazard. Mater. 2021, 408, 124936. doi: 10.1016/j.jhazmat.2020.124936
Kurt, B. Z.; Gazioglu, I.; Sonmez, F.; Kucukislamoglu, M. Synthesis, antioxidant and anticholinesterase activities of novel coumarylthiazole derivatives. Bioorg. Chem. 2015, 59, 80−90. doi: 10.1016/j.bioorg.2015.02.002
Gramatica, P.; Chirico, N.; Papa, E.; Cassani, S.; Kovarich, S. QSARINS: a new software for the development, analysis, and validation of QSAR MLR models. J. Comput. Chem. 2013, 34, 2121−2132. doi: 10.1002/jcc.23361
Kunal, R.; Supratik, K.; Pravin, A. On a simple approach for determining applicability domain of QSAR models. Chemometr. Intell. Lab. Syst. 2015, 145, 22−29. doi: 10.1016/j.chemolab.2015.04.013
Figure 1 Concentration-response curves of five pesticides to S. obliquus
Figure 2 Concentration-response curves of 35 rays of pesticide binary mixture to S. obliquus
Figure 3 Williams plot based on standardized residuals for three models
Figure 4 Plot of the observed versus predicted pEC50, pEC30 and pEC10 resulted from QSAR model (A, B, C) and CA model (D, E, F) | CommonCrawl |
by Mike Sondalini
Vibration and its Control
Vibration in equipment is the result of unbalanced forces.
Out-of-balance is corrected by adding or removing material so that when the equipment is operating the unbalance is controlled to an acceptable level.
Keywords: spring stiffness, damping, center of rotation, center of mass, natural frequency, isolation mount, counterbalance, out-of-phase.
The transference of unbalanced forces through equipment into neighboring structures causing them to shake is vibration.
The motion of a body is limited by what connects it to a machine and the walls in which it moves. Every time there is a change of direction unbalanced forces produce a shock.
This shock travels throughout the machine and is transmitted to all connected items.
Mostly a spring is used to isolate vibration movement or a damper is used to absorb the movement. A full explanation of vibration control requires calculus and is beyond the scope of this article.
The spring-mass-damper system sketch below is a simplified representation of a vibration control system.
Spring force and damper pressure control the mass' movement.
The damper piston moves and so absorbs the vibration. Whereas the spring flexes and isolates the movement from its attachment.
The rate of vibration is called the frequency.
It is measured in cycles per second and has the units Hertz. A four-pole electric motor rotates at about 1500 RPM. This is 25 cycles per second or 25 Hertz.
Vibration caused by an external applied force is known as a forced vibration because the mass oscillates at the frequency of the external force.
An example is the shake produced by the moving pistons and crankshaft in a car engine.
The equation for the natural frequency of an undamped spring-mass system moving in one direction is
$$ {{f}_{n}}=\frac{1}{2\pi }\sqrt{\frac{K}{M}} $$
where K is the spring stiffness and M the mass.
This equation lets us find the resonance frequency for a mass–spring system.
Such a system can be represented in the drawing above by removing the damper.
Wild gyrations develop when the forced frequency nears the system natural frequency. Every system has a natural frequency and will shake to destruction if it is forced to move at that rate.
This phenomenon is known as resonance.
An example would be the shattered wine glass caused by an opera singer's voice or vibrations in long, thin shafts that start and then stop as the shaft speed goes through its natural frequency speed.
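For readers who want to try the numbers, the short Python sketch below evaluates the natural frequency formula and compares it with the forcing frequency of a four-pole motor. The spring stiffness and mass values are assumptions chosen for illustration, not figures from this article.

```python
# Natural frequency of an undamped spring-mass system: f_n = (1/(2*pi)) * sqrt(K/M).
# K and M below are illustrative values only.
import math

def natural_frequency(k_n_per_m, mass_kg):
    return math.sqrt(k_n_per_m / mass_kg) / (2 * math.pi)

K = 4.0e5                                # spring stiffness in N/m (assumed)
M = 250.0                                # supported mass in kg (assumed)
f_n = natural_frequency(K, M)            # about 6.4 Hz

f_forcing = 1500 / 60                    # four-pole motor: 1500 RPM = 25 Hz
print(f"f_n = {f_n:.1f} Hz, forcing/natural ratio = {f_forcing / f_n:.2f}")
# A ratio near 1.0 means the machine is driven at resonance and will shake violently.
```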
The four methods of vibration control are listed below.
Reduce or eliminate the exciting force by balance or removal.
Use sufficient damping to limit amplitude.
Isolate the vibration source from the surrounds by using spring mounts of appropriate stiffness.
Introduce a counterbalancing force opposite in phase to the exciting force.
Most importantly every moving mass must be balanced about its center of rotation.
The topic of rotary machine balancing was introduced in the article "When Spinning Equipment is Unbalanced". The article indicated that rotating masses must be balanced to an acceptable standard.
Out–of-balance rotors cause vibration because the center of mass of the rotor is eccentric (not running true) to its center of rotation.
The spinning off-center mass is continually being flung outward. The machine's bearings hold the mass in place and react against the developed forces.
Vibration results as first the mass is on one side of the bearings and then it is on the other.
Balancing aims to distribute the mass evenly about the running center.
The drawing below shows eccentricity between the center of mass and the center of rotation.
Materials such as rubber dampen shaking.
The rubber flexes and absorbs the movement within itself.
The sketch below is a simple rubber vibration damper. Because rubber cannot compress much to accommodate movement, rubber dampers are normally used for low amplitude, high-frequency vibration where noise transference is a problem.
Shock absorbers are used for large amplitude, low-frequency situations where springs alone would produce bouncing. An example is in car suspensions.
The natural frequency can be moved away from the forcing frequency by changing the weight of the system.
Making the frame of a vibrating machine heavier will lower the assembly's natural frequency.
Concrete added to the base frames of lightweight fans reduces vibration by lowering the natural frequency and moving it away from the forcing frequency produced by the rotating blades.
A vibrating mass can also be isolated from its surroundings by springs.
The springs deflect under the shaking body. Installing isolation springs makes the spring's natural frequency the governing frequency for vibration transfer. Altering the spring stiffness allows us to select the desired amount of isolation.
Spring stiffness controls the amount of vibration transferred to the attachment.
Too stiff will transmit vibration, while insufficient stiffness will cause bounce.
The correct spring stiffness can be found using charts available from specialist vibration control companies. The isolation springs must not have a natural frequency near the forcing frequency of the isolated equipment.
In such a case the system would start to resonate and jump about. The sketch below shows an air compressor supported on springs.
Stiffer springs are at the heavier end of the machine to both keep the machine level and to prevent resonance developing as the mass increases.
The drawing also indicates that vibration is usually in more than one direction.
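To make the effect of spring stiffness more concrete, the sketch below evaluates the standard transmissibility formula for a damped single-degree-of-freedom isolator at three spring stiffnesses. The machine mass, damping ratio and stiffness values are assumptions for illustration, not values from this article.

```python
# Transmissibility of a damped spring isolator:
# T = sqrt((1 + (2*z*r)^2) / ((1 - r^2)^2 + (2*z*r)^2)), r = f_forcing / f_natural.
import math

def transmissibility(f_forcing, f_natural, z=0.05):
    r = f_forcing / f_natural
    return math.sqrt((1 + (2 * z * r) ** 2) / ((1 - r ** 2) ** 2 + (2 * z * r) ** 2))

f_forcing = 25.0                                   # Hz, e.g. a four-pole motor
for k in (2.0e5, 8.0e5, 1.2e7):                    # soft, medium and stiff springs in N/m
    f_n = math.sqrt(k / 500.0) / (2 * math.pi)     # 500 kg machine (assumed)
    print(f"K = {k:.0e} N/m -> f_n = {f_n:5.1f} Hz, T = {transmissibility(f_forcing, f_n):.2f}")
# The soft springs transmit only a few percent of the motion; the stiffest set puts the
# natural frequency almost at the forcing frequency and amplifies the vibration instead.
```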
An out-of-phase mass is a method not often used to control vibration.
It is possible to use a weight with an opposite vibration pattern to negate the out-of-balance forces.
This method has been used in motor car engines where a shaft with an eccentric mass is spun in the opposite direction to the crankshaft.
Mike Sondalini – Maintenance Engineer
Ian Johanson says
Thanks for your explanation on how vibration is controlled. It is the most detailed article I have seen. I just have a question about the springs used to control vibration. How long does it take for spring stiffness to be compromised in a vibrating motor? Is it a commonly replaced part?
Jenson says
A four-pole electric motor rotates at about 1500 RPM. This is 25 cycles per second or 25 Hertz.
By equation N (R.P.M) equals to 120 f (Hz)/P ( no. of poles).
Calculating gets 50 Hz. | CommonCrawl |
THEMIS: Towards a Decentralized Ad Platform with Reporting Integrity (Part 1)
June 25, 2020 | Announcements
This post describes the work done by Gonçalo Pestana, Research Engineer, Iñigo Querejeta-Azurmendi, Cryptography Engineer, Dr. Panagiotis Papadopoulos, Security Researcher, and Dr. Ben Livshits, Chief Scientist; this post is also part of a series that focuses on further progressive decentralization for Brave ads.
Note: THEMIS is primarily a research effort for now and does not constitute a commitment regarding product plans around Brave Rewards.
The whitepaper introducing the Basic Attention Token (BAT) [1] was released mid 2017 and, since then, BAT has been used by millions of users, advertisers, and publishers, each using and earning BAT through the Brave Browser (Figure 1) [2]. It has been a long ride since 2017 and we're very proud that BAT is acknowledged as one of the most successful use cases for decentralized ledgers and utility tokens.
The BAT token powers the BAT-based advertising ecosystem. The main goal of the BAT-based ad ecosystem is to provide the choice for users to value their attention, while keeping full control over their data and personal privacy. The main tenets of the BAT-based advertising ecosystem are to provide privacy by default, to restore control to users over their data, and to provide a decentralized marketplace where Brave Browser users are incentivized to watch ads and to contribute to creators. Through these principles, Brave's vision is to fix the current online advertising industry [1], and get rid of widespread fraud schemes [3] [3.1], [4], market fragmentation [5] [6] and privacy issues [7] [8].
In line with these goals, Brave's research team has been working on a decentralized and privacy-by-design protocol that further improves upon the current BAT-based ad ecosystem. In this first post in a series of blog posts, we present THEMIS: a novel privacy-by-design ad platform that requires zero trust from both users and advertisers alike. THEMIS provides auditability to all participants, rewards users for interacting with ads, and allows advertisers to verify the performance and billing reports of their ad campaigns. In this blog series, we describe the THEMIS protocol and its building blocks. In the next post, we will present a preliminary scalability evaluation of THEMIS in a deployment environment.
Figure 1. Example of an ad notification delivered through the Browser for Brave Ads users.
The current web advertising ecosystem
Digital advertising is the most popular way of funding websites. However, web advertising has fundamental flaws such as market fragmentation, rampant fraud, and unprecedented invasion of privacy. Further, web users are increasingly opting out of web advertising, costing publishers millions of dollars in ad revenues every year. A growing number of users (47% of internet users globally, as of today [13]) use ad-blockers.
Academia and industry have responded by designing new monetization systems. These systems generally emphasize properties such as user choice, privacy protection, fraud prevention, and performance improvements. Privad [11], and Adnostic [12] are examples of academic projects that focus on privacy-friendly advertising. Despite the contributions of these systems, they have significant shortcomings that have limited their adoption. These systems either (i) do not scale, (ii) require the user to trust central authorities within the system to process ad transactions, or (iii) do not allow advertisers to accurately gauge campaign performance.
To make matters worse, current advertising systems lack proper auditability: The ad network exclusively determines how much advertisers will be charged, as well as the revenue share that the publishers may get. Malicious ad networks can overcharge advertisers or underpay publishers. Another issue is non-repudiation, as ad networks do not generally prove that the claimed ad views/clicks occurred in reality.
Figure 1. A high-level visual overview of THEMIS. Ad distribution and ad interaction reporting activities. Users are rewarded for interacting with ads. In THEMIS, a campaign manager and advertisers agree on ad campaigns, which are encoded in a smart contract running on a side-chain. Using Brave Browser, users request rewards from a smart contract, which implements a cryptographic protocol that moves us towards decentralization, transparency, and privacy.
Our Approach: THEMIS
In this blog post series, the Brave Research team presents THEMIS (Figure 1), a private-by-design ad platform that makes a significant step towards decentralizing the ad ecosystem by leveraging a side-chain and smart contracts to eliminate centralized ad network management. We believe in progressive decentralization, which means that the system presented in the first blog post is not yet fully decentralized; subsequent blog posts will discuss further decentralization steps.
The current implementation of Brave Ads protects user privacy and anonymity through the use of privacy-preserving cryptographic protocols, client-side ad matching, and other anonymization techniques. For example, Brave servers cannot determine which ads a user has interacted with, and they do not receive any data concerning a specific user's interests or browsing habits.
The THEMIS protocol provides the same strong anonymity properties as Brave Ads, while making an important step toward progressive decentralization of the Brave Ads ecosystem. THEMIS is highly relevant to the BAT Apollo mission [14]. As discussed in a BAT Community-run AMA [15], the main goals of the BAT Apollo mission are to improve transparency, to decrease transaction costs, and to further decentralize Brave Ads.
By combining the strong privacy properties with decentralization, THEMIS:
Effectively addresses the auditability and non-repudiation issues of the current ecosystem by requiring all participants to generate cryptographic proofs of correct behaviour. Every participant can verify that everybody is following the protocol correctly;
And provides the advertisers with the necessary feedback regarding the performance of their ad campaigns without compromising the end-user privacy. By guaranteeing the computational integrity of this reporting, advertisers can accurately learn how many users viewed and interacted with their ads without learning exactly which of them.
In this section, we sketch a brief technical background regarding the mechanisms and building blocks used by THEMIS; we also describe why and how THEMIS leverages them.
Permissioned Blockchains
THEMIS relies on a blockchain with smart contract functionality to provide a decentralized ad platform. Smart contracts enable the business logic and payments to be performed without relying on a central authority. THEMIS could, for example, run on the Ethereum Mainnet. However, due to Ethereum's low transaction throughput, the high gas costs, and the current scalability issues, THEMIS relies on a Permissioned Blockchain instead, more concretely on a Proof-of-Authority (PoA) blockchain.
A PoA blockchain consists of a distributed ledger that relies on consensus achieved by a permissioned pool of validator nodes. PoA validators can rely on fast consensus protocols such as IBFT/IBFT2.0 and Clique, which result in faster minted blocks and thus PoA can reach higher transaction throughput than traditional PoW based blockchains.
As opposed to traditional, permissionless blockchains (such as Bitcoin and Ethereum), the number of nodes participating in the consensus is relatively small and all nodes are authenticated. In our case publishers, and other industry entities, are potential participants of the pool of validators.
Cryptographic Tools
THEMIS uses an additively homomorphic encryption scheme to calculate the ads payouts for each user, while keeping the user behavior (e.g. ad clicks) private. Given a public-private key-pair [[(\sk, \pk)]], the encryption scheme is defined by four functions:
Encryption: first, the encryption function, where given a public key and a message, outputs a ciphertext, [[\ctxt = \enc(\pk, \message)]];
Decryption: secondly, the decryption function, that given a ciphertext and a private key, outputs a decrypted message, [[\message = \dec(\sk, \ctxt)]];
Sign: next, the signing function, where given a message and a secret key, outputs a signature on the message, [[\signature = \sign(\sk, \message)]].
Verify: finally, the signature verification function, where given a signature and a public key, outputs [[\bot, \top]] if the signature fails or validates respectively, [[\signverify(\signature, \pk)\in\{\bot, \top\}]].
The additive homomorphic property guarantees that the addition of two ciphertexts,
$$ \ctxt_{1} = \enc(\pk, \message_{1}), \ctxt_{2} = \enc(\pk, \message_{2}) $$
encrypted under the same key, results in an encryption of the sum of the underlying messages; more precisely:
$$ \ctxt_{1} + \ctxt_{2} = \enc(\pk, \message_{1} + \message_{2}) $$
Some examples of such encryption algorithms are ElGamal [9] or Paillier [10] encryption schemes.
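To make the additive property tangible, here is a minimal Paillier sketch in Python. It is for illustration only (tiny primes, no padding, not constant-time), and THEMIS could equally use another additively homomorphic scheme; the point is only that multiplying two ciphertexts yields an encryption of the sum of the plaintexts.

```python
# Minimal, insecure Paillier sketch demonstrating Enc(m1) (+) Enc(m2) = Enc(m1 + m2).
import math
import random

def keygen(p=293, q=433):
    n = p * q
    g = n + 1                                            # standard Paillier generator choice
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)    # lcm(p-1, q-1)
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)       # mu = L(g^lam mod n^2)^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)                           # in practice r must be coprime to n
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 3), encrypt(pk, 4)
# Multiplying ciphertexts adds the plaintexts: Enc(3) (+) Enc(4) decrypts to 7.
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 7
```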
To prove correct decryption, THEMIS leverages Zero Knowledge Proofs (ZKP) which allow an entity (i.e. the prover) to convince a different entity (i.e. the verifier) that a certain statement is true over a private input without disclosing any other information from that input other than whether the statement is true or not. We denote proofs with \(\Pi\), and their verification as \(\verify(\Pi)\in\{\bot, \top\}\).
Distribution of trust
THEMIS distributes trust to generate a public-private key-pair for each ad campaign, under which the sensitive information is encrypted. For this, it uses a distributed key generation (DKG) protocol to share the knowledge of the secret. This allows a group of players to distributively generate the key-pair, [[(\sk_T, \pk_T)]], where each player has a share of the private key, [[\sk_{T_{i}}]], and no player ever gains knowledge of the full private key, [[\sk_{T}]].
Moreover, the resulting key-pair is a threshold key-pair which requires at least a well-defined number of participants – out of the peers that distributively generated the key – to interact during the decryption or signing operations.
We follow a similar DKG protocol to the one presented by Schindler et al. [11].
In order to choose this selected group of key generation players in a distributed way, THEMIS leverages Verifiable Random Functions (VRFs). In general, VRFs enable users to generate a random number and prove its randomness. In THEMIS, we use VRFs to select a random pool of users and generate the distributed keys. Given a public-private key-pair, [[(\VRFsk, \VRFpk)]], VRFs are defined by a function which outputs a random number and a zero knowledge proof of correct generation.
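To illustrate only the t-of-n threshold idea behind the consensus pool's shared key, the sketch below uses plain Shamir secret sharing with a trusted dealer. This is an assumption for illustration and is not the protocol used by THEMIS, which relies on a dealerless DKG following Schindler et al.; the sketch merely shows why any t of the n participants can reconstruct the secret while fewer than t cannot.

```python
# Shamir t-of-n secret sharing over a prime field (illustration of the threshold idea only).
import random

P = 2**127 - 1                       # prime field modulus (demo value)

def share(secret, t, n):
    # random polynomial of degree t-1 with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

sk = random.randrange(P)
shares = share(sk, t=3, n=5)
assert reconstruct(shares[:3]) == sk       # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == sk
```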
System Properties and Guarantees
The main properties we focused on while designing THEMIS included privacy, accountability, reporting integrity, and decentralization:
In the context of a sustainable ad ecosystem, we define privacy as the ability for users and advertisers to use our system without disclosing any critical information about themselves and their business:
For the user, privacy means being able to interact with ads without revealing their interests/preferences to advertisers, other protocol participants or eavesdroppers. In THEMIS, we preserve the privacy of the user not only when they are interacting with ads but also when they claim the corresponding rewards for these ads.
Brave Ads currently protects advertiser privacy. For advertisers, privacy means that they are able to set up ad campaigns without revealing any policies (i.e. what is the reward of each of their ads) to the prying eyes of their competitors. THEMIS keeps these ad policies confidential throughout the whole process, while enabling users to claim rewards based on ad policies.
Decentralization and auditability
Existing works require a central authority to manage and orchestrate the proper execution of the protocol, either in terms of user privacy or billing. What if this (considered as trusted) entity censors users by denying or transferring an incorrect amount of rewards? What if it attempts to charge advertisers more than what they should pay based on users' ad interactions? What if the advertising policies are not applied as agreed with the advertisers when setting up ad campaigns?
One of the primary goals of our system is to be decentralized and transparent. To achieve this, THEMIS leverages a permissioned blockchain with smart contract functionality.
Ad platforms need to be able to scale seamlessly and serve millions of users. However, many previously proposed systems fail to achieve this. We consider scalability an important aspect affecting the practicability of the system. THEMIS needs to not only serve ads in a privacy-preserving way to millions of users but also finalize the payments related to their ad rewards as timely as possible.
Contrary to existing works, THEMIS does not rely on a trusted central authority. Therefore, it needs to provide both the users and the advertisers with mechanisms to verify the authenticity of the statements and the performed operations. Achieving such integrity guarantees requires the use of zero-knowledge proofs to ensure every participant can prove and verify the correctness and validity of billing and reporting.
System Overview – A Strawman Approach
The remainder of this blog post will be dedicated to outline a straw-man approach to describe the basic principles and steps of THEMIS. In an upcoming blog post, we build on the straw-man approach and introduce the decentralization into the system.
Our straw-man approach is the first step towards a privacy-preserving and decentralized online advertising system. Our goal at this stage is to provide a mechanism for advertisers to create ad campaigns and to be correctly charged, based on the user's interactions with their ads. In addition, the system aims at keeping track of the ads viewed by users, so that (i) advertisers can have feedback about their ad campaigns and (ii) users can be rewarded for interacting with ads. All these goals should be achieved while preserving the privacy of the ad policies and the user behaviour.
We assume three different roles in this straw-man approach: (i) the users, (ii) the advertisers, and (iii) an ad Campaigns Manager (CM). The users are incentivized to view and interact with ads created by the advertisers. The CM is responsible (a) for orchestrating the protocol, (b) for handling the ad views reporting and finally (c) for calculating the rewards that need to be paid to users according to the policies defined by the advertisers.
Note that the straw-man approach assumes a semi-trusted Campaign Manager. This role will be removed in the full THEMIS protocol, which is described in the next blogpost. For the sake of this initial introduction to THEMIS, relying on a CM entity allows us to simplify the explanation.
Privacy-preserving Ad Matching
In THEMIS – as in the current Brave Rewards architecture – the user downloads an updated version of the ad catalog, which includes ads and their metadata from all active ad-campaigns. The CM maintains and provides the ad catalog for users to download periodically.
The ad-matching happens locally based on a pre-trained model and the user's interests extracted from their web browsing history in a similar way as in Brave Rewards. In order to serve and match ads to the user interests, no data leaves the user's device. This creates a walled garden of browsing data that is used for recommending the best matching ad while user privacy is guaranteed.
Incentives for Ad-viewing
User incentives to interact with ads are at the core of THEMIS. Each viewed/clicked ad yields an amount of BAT rewards. Different ads may provide different amounts of reward to the users. This amount is agreed by the corresponding ad creator (i.e. the advertiser) and the Campaign Manager. The user can claim rewards periodically (e.g. every week or every month). In Figure 4, we present an overview of the reward request generation and the steps to claim the ad rewards in the straw-man approach.
The straw-man approach
We now outline the different phases of the straw-man version of THEMIS.
Phase 1: Defining Ad Rewards
In order for an advertiser to have their ad campaign included in the next version of the ad catalog, they first need to agree with the CM on the policies of the given campaign (i.e. rewards per ad, ad impressions per user, etc.) (step 1 in Figure 4).
Once the advertiser agrees off-band with the CM on the ads that will be part of the campaign and respective payouts, the CM encodes the agreed policy as a vector, [[\policyvector]], where each index corresponds to the amount of tokens that an ad yields when viewed/clicked (e.g. Ad1: 0.4 BAT, Ad2: 2 BAT, Ad3: 1.2 BAT). The CM stores this vector privately and the advertiser needs to trust that the policies are respected (this will be addressed in the full THEMIS protocol – see next blog post). The indices used in the policy vector maintain the same order as the corresponding indices of its ads in the ad catalog.
Figure 4. High-level overview of the user rewards claiming procedure of our straw-man approach. Advertisers can set how much they reward each ad click without disclosing that to competitors. The user can claim rewards without exposing which ads they interacted with.
In addition to agreeing with the CM on the ads policies for the campaign, the advertiser also transfers to an escrow account the necessary funds to cover the campaign. At the end of the campaign, unused funds (i.e. when users have not clicked/interacted with enough ads to use up all the escrowed funds), are released back to the advertisers.
For the sake of simplicity, throughout this section, we consider one advertiser who participates in our ad platform and runs multiple ad campaigns. In a real world scenario many advertisers can participate running many ad campaigns simultaneously. We also consider as agreed policies the amount of tokens an ad provides as reward to a clicking user.
Phase 2: Claiming Ad Rewards
The user locally generates an interaction vector, which keeps track of the number of times each ad of the catalog was viewed/clicked (e.g. Ad1 was viewed 3 times, Ad2 was viewed 0 times, Ad3 was viewed 2 times).
In every payout period, the user encrypts the state of the interaction vector. More technically, let [[\adclicks]] (ac in Figure 4) be the interaction vector containing the number of views/clicks of users with each ad, where element [[i]] of vector [[\adclicks]] represents the number of times [[\ad_i]] was viewed/clicked. On every payout period, the user generates a new ephemeral key pair [[\sk, \pk]], to ensure the unlinkability of the payout requests. The user then encrypts each entry of [[\adclicks]] with the newly generated public key:
$$ \encryptedvector = \left[\enc(\pk, \nrinteractions_1)\ldots, \enc(\pk, \nrinteractions_{\nrads})\right] $$
where [[\nrinteractions_i]] is the number of interactions for ad [[i]], and [[\nrads]] is the total number of ads. The user then sends [[\encryptedvector]] to the Campaign Manager (step 2a in Figure 4).
Note that the CM cannot decrypt the received vector and thus cannot learn the user's ad interactions (and consequently their interests). Instead, they leverage the additive homomorphic property of the underlying encryption scheme (as described in the Background Section) to calculate the sum of all payouts based on the interactions encoded in the encrypted vector [[\encryptedvector]] (step 2b in Figure 4).
More formally, the CM computes the aggregate payout for the user as follows:
$$ \aggrresult = \sum_{i=1}^{\nrads} \policyvector[i]\cdot\encryptedvector[i] $$
where [[\policyvector[i]]] is the ad policy associated with the ad in the position [[i]] of the vector. Then CM signs the computed aggregate result:
$$ \signreward = \sign(\aggrresult, \sk_{CM}) $$
and sends the 2-tuple [[(\aggrresult, \signreward)]] back to the user.
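A minimal sketch of this CM-side aggregation is given below, reusing the keygen/encrypt/decrypt helpers from the earlier Paillier snippet. Rewards are expressed in integer "milli-BAT" so that everything stays in the integers, and all concrete values are illustrative; the signing step is omitted.

```python
# Homomorphic inner product between the (public to the CM) policy vector and the
# user's encrypted interaction vector; only the user holds the decryption key.
policy = [400, 2000, 1200]            # 0.4, 2.0 and 1.2 BAT per interaction, in milli-BAT
clicks = [3, 0, 2]                    # the user's private interaction vector

pk, sk = keygen()                     # in THEMIS the *user* holds sk; the CM only sees pk
n2 = pk[0] ** 2
enc_clicks = [encrypt(pk, c) for c in clicks]          # the encrypted vector sent to the CM

# Scaling a plaintext by k corresponds to raising the ciphertext to the power k;
# multiplying ciphertexts adds the plaintexts.
aggr = encrypt(pk, 0)
for k, c in zip(policy, enc_clicks):
    aggr = (aggr * pow(c, k, n2)) % n2

# Only the key owner (the user) can decrypt the aggregate reward; the CM would sign it.
assert decrypt(pk, sk, aggr) == sum(k * x for k, x in zip(policy, clicks))   # 3600 milli-BAT
```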
Upon receiving this tuple (step 2c in Figure 4), the user verifies the signature of the result: [[\signverify(\aggrresult, \signreward)]] and proceeds with decrypting the result of the aggregate:
$$ \decryptedaggr = \dec(\sk, \aggrresult) $$
As a final step, the user proves the correctness of the decryption by creating a zero knowledge proof of correct decryption: [[\proofresult]] (i.e. proving that the decryption is, in fact, associated with the encrypted aggregate).
Phase 3: Payment Request
Finally, the user generates the payment request and sends the following 4-tuple to the CM (step 3a in Figure 4):
$$ (\decryptedaggr, \aggrresult, \signreward,\proofresult) $$
As a next step (step 3b in Figure 4), the CM verifies that the payment request is valid. More specifically, CM will reject the payment request of the user if
$$ \signverify(\pk_{CM}, \signreward, \aggrresult) = \bot $$
or
$$ \verify(\proofresult) = \bot $$
Otherwise, it proceeds with transferring the proper amount (equal to [[\decryptedaggr]]) of reward to the user.
Reporting to Advertisers
THEMIS aims at providing feedback about the ad campaigns to the advertisers. During billing procedure the advertisers need to be able to verify the integrity of the reported statistics by the Campaign Manager regarding the number of times an ad was viewed/clicked by the users.
To achieve this, whenever a new version of the ad catalog is published and retrieved by the users, a new key-pair, [[\pk_{T}]], is generated. This key is used to encrypt a copy of the ad-clicks vector that is sent to the CM (cf. step 2a in Figure 4).
The key used in this step, [[\pk_{T}]], is a public threshold key generated in a distributed way. To generate such a key, a pool of multiple participating users, the consensus pool, is created (users are incentivized to participate in this pool; details on how to orchestrate the incentives, as well as on how the pool is formed, will be discussed in the next blog post). The consensus pool runs a distributed key generation algorithm, which results in a shared public key [[\pk_{T}]] and each consensus pool participant owning a private key share [[\sk_{T,i}]]. The public key [[\pk_{T}]] is sent to the CM, so that it can be shared with all users.
Hence, apart from the [[\encryptedvector]] each user also sends [[\encryptedvector']] to the CM, where:
$$ \encryptedvector' = \left[\enc(\pk_{T}, \nrinteractions_{1}), \ldots, \enc(\pk_{T}, \nrinteractions_{\nrads})\right] $$
When the ads campaign is over, all the [[\encryptedvector']] generated by the users will be processed to calculate how many rewards were paid per advertiser. By using the same additively homomorphic properties used to calculate the payouts for the users, the CM can also calculate the payout per advertiser using all [[\encryptedvector']]. Thus, considering all the [[\encryptedvector']] of the campaign, the encrypted amount of ads payout for the ad in position [[i]], can be calculated by the CM in the following way:
$$ \encadspayout_i = \sum_{u=1}^{\nrusers}\encryptedvector'_{u}[i] = \encryptedvector'_{1}[i] + \cdots + \encryptedvector'_{\nrusers}[i] $$
where [[\nrusers]] is the number of users. Each of the [[\encadspayout_{i}]] can then be decrypted using the threshold public-private key-pair, which requires a minimum number of pool participants to cooperate in the decryption. The decrypted values are shared with the advertisers, which then allows them to verify whether the funds used by the CM to pay the users are the correct ones, based on the users' interactions with the ad campaign.
In this first blog post, we presented the motivation and goals for THEMIS, a novel privacy-by-design ad platform designed and implemented by Brave's Research team. Similarly to Brave Ads, THEMIS provides strong anonymity to users. In addition, it is decentralized and requires zero trust from users and advertisers. The THEMIS core protocol (i) provides auditability to all participants, (ii) rewards users for interacting with ads, and (iii) allows advertisers to verify the performance and billing reports of their ad campaigns.
In addition to introducing and motivating THEMIS, we outlined a simplified straw-man design of the core protocol, which guarantees that:
The user receives rewards they earned by interacting with ads. The same property holds as with Brave Ads: THEMIS does not disclose which ads users have interacted with to Brave or advertisers.
The campaign manager is able to correctly apply the pricing policy of each ad without disclosing any information to users or potential competitors of the advertiser.
However, the straw-man approach does not cover all the properties we would like to achieve for THEMIS, particularly in terms of trust. In the straw-man approach, the campaign manager is responsible for orchestrating the protocol: it handles the user request for payouts and calculates the rewards. In addition, the CM stores the ad policies privately and both users and the advertisers need to trust that the policies are respected when the payouts are calculated. Finally, the straw-man system does not address the privacy-preserving payment mechanism for rewards.
In the upcoming blog post, we improve the simplified straw-man approach and present the end-to-end THEMIS protocol; we will also present a scalability evaluation, which shows how THEMIS operates at scale.
[1] BAT whitepaper
[2] Brave Rewards Stats & Token Activity
[3] N. Kshetri, "The Economics of Click Fraud," in IEEE Security & Privacy, vol. 8, no. 3, pp. 45-53, May-June 2010.
[3.1] The Dark Alleys of Madison Avenue: Understanding Malicious Advertisements
[4] Kumari, Shilpa, et al. "Demystifying ad fraud." 2017 IEEE Frontiers in Education Conference (FIE). IEEE, 2017.
[5] Bashir, Arshad, et.al "Tracing Information Flows Between Ad Exchanges Using Retargeted Ads". 25th USENIX Security Symposium (USENIX Security 16)
[6] Papadopoulos, Kourtellis and Markatos "Cookie Synchronization: Everything You Always Wanted to Know But Were Afraid to Ask"
[7] Speicher, T., Ali, M., Venkatadri, et. al. (2018) "Potential for Discrimination in Online Targeted Advertising". Proceedings of the 1st Conference on Fairness, Accountability and Transparency
[8] Venkatadri, Athanasios, et. al. (2018). Privacy Risks with Facebook's PII-Based Targeting: Auditing a Data Broker's Advertising Interface.
[9] El Gamal Encryption
[10] Paillier Cryptosystem
[11] Privad: practical privacy in online advertising
[12] Adnostic: Privacy Preserving Targeted Advertising
[13] Global Ad-Blocking Behaviors In 2019 – Stats & Consumer Trends (infographic)
[14] BAT roadmap
[15] BAT Apollo AMA with Marshall Rose
High-cycle fatigue behaviour of a timber-to-timber connection with self-tapping screws under lateral loading
Peter Niebuhr (ORCID: orcid.org/0000-0003-4496-8091) and Mike Sieder (ORCID: orcid.org/0000-0003-3371-6841)
European Journal of Wood and Wood Products, volume 79, pages 785–796 (2021)
The high-cycle fatigue behaviour of a timber-to-timber connection with self-tapping screws is examined with the fasteners under bending due to their alignment lateral to the load direction. The cyclic tests were carried out with a sinusoidal non-reversed load (\(R=0.1\)) with a loading frequency of 5 Hz. The examined connection is designed for the quasistatic failure mechanism with two plastic hinges per shear plane according to Johansen's theory (European Yield Model), which is mirrored in the observed fatigue failure. Based on 30 cyclic tests on four nominal stress levels \(\left( S=\left\{ 0.47,\;0.41,\;0.31,\;0.20 \right\} \right) \) in the finite-life regime the respective Wöhler-curve is obtained, showing high conformity with the test data due to a consideration of the specific density of the individual specimens. It is shown that the examined fasteners show a superior fatigue behaviour under bending compared to axial loading. A simple safe-side approach for the application of Wöhler-curves for axial loading of threaded fasteners to the present case of fastener bending is proposed, extending the field of possible applications for the results of existing and future studies of the behaviour under axial loading.
In the light of a growing societal and political demand for more sustainable solutions in the construction sector, timber constructions are increasingly considered for high/performance structures that are subjected to repeated loading, for example, towers for wind energy plants (cf. Röhm et al. 2015; Schröder 2015; Christian and Aicher 2016; Gräfe et al. 2017; Sieder and Schröder 2019), heavy duty road bridges (cf. Rantakokko and Salokangas 2000; Flach and Frenette 2004; Meyer et al. 2005; Lefebvre and Richard 2014), or elevator shafts (cf. Abrahamsen and Malo 2014; Malo et al. 2016; Abrahamsen 2017). Constituting an essential advancement in timber construction fastener technology, self-tapping screws can make a crucial contribution to the fulfillment of the structural challenges that arise with the new application fields.
However, little is known about the behaviour of connections with these fasteners in the high-cycle fatigue regime, which is a decisive limit state in the design of the described structures. Previous studies on this subject are highly limited and consider either small conventional timber and furniture screws that are not suitable for high-performance structural engineering applications (Burmester and Hoffmann 1970; Trübswetter 1973; Bröker and Krause 1991), or consider the fastener itself under axial loading (Ringhofer 2017; Ringhofer et al. 2019; Niebuhr and Sieder 2020) or connections with primarily axial loading of the fasteners (Stamatopoulos and Malo 2017).
Connections with self-tapping screws are predominantly realised with an inclined orientation of the fasteners with regard to the load direction to achieve mainly axial loading in the screws. This configuration yields higher strength and stiffness values than a lateral orientation of the fasteners which subjects them predominantly to bending. If, however, fatigue failure is considered, threaded metallic fasteners under bending generally show a behaviour that is superior to that under axial tension (cf., e.g., Schaumann and Marten 2009) which could privilege the lateral orientation of the fastener in cases where material fatigue needs consideration. In this contribution, a timber-to-timber connection with self-tapping screws under mainly lateral loading will be examined to form a first empirical basis for the comparison of the fatigue performance of timber connections with lateral and axial loading of the fasteners.
A previous study (Niebuhr and Sieder 2020) that examined the same fasteners under axial loading will later be complemented by an ongoing examination of the withdrawal behaviour of the fasteners in spruce wood. That way, all possible failure mechanisms of connections with an inclined orientation of the fasteners have been examined, allowing for a full characterisation of these connections, which can then be compared to connections with lateral loading as examined in this contribution.
The examined fasteners are fully threaded self-tapping screws with a nominal outer thread diameter of \(d_n=6\) mm and length \(l=120\) mm with a cylindric head according to ETA-12/0114 (2017). In a sample with 23 specimens, the mean values of outer thread diameter, core diameter, flank inclination angle, head diameter and pitch have been determined as \(d=5.96\) mm, \(d_c=4.01\) mm, \(\nu =39.59^\circ \), \(d_k=8.15\) mm, and \(p=3.60\) mm. The tensile strength of the material was determined as \(\sigma _{u,mean}=1236.9\) N/mm\(^2\) with \(CV_{\sigma u}=1.36\%\) in a previous study (Niebuhr and Sieder 2020).
The wooden specimens were manufactured from kiln dried spruce wood (Picea abies). Prior to specimen manufacturing, the material was stored in standard climate 20/65 (DIN 50014:2018-08 2018) until mass equilibrium was reached, and was stored in the same climate between manufacturing and testing. All specimens with visible imperfections such as knots, checks, excessive slope of grain etc. were neglected so that small clear specimens can be assumed. All tests were performed in standard climate 23/50 (DIN 50014:2018-08 2018). The mean density of the specimens was determined as \(\rho _{N,mean}=473.9\) kg/m\(^3\) with \(CV_{\rho }=11.06\%\). Figure 1 shows the distribution of \(\rho _N\) for all specimens, divided into the test series samples.
Distribution of the specimen density at standard climate 20/65 (DIN 50014:2018-08 2018)
The examined connection is a timber-to-timber connection with mainly lateral loading of the screws, i.e. the screws are oriented perpendicular to the load direction; the test setup is shown in Fig. 2.
Test setup, specimen geometry and location of displacement sensors. Note that each holding plate for the displacement sensors is fastened with only one screw either in a side member or in the middle member to measure the relative displacement in the shear planes
The load is introduced as a compression force on the middle member and the outer members are supported vertically. To prevent unwanted axial forces in the fasteners due to the excentric load introduction into the side members, steel profiles have been set out to act as horizontal supports for the outer members, shown as ideal horizontal supports in Fig. 2. To ensure that the fasteners bear as much of the outer load as possible, the surface between middle and outer members has been lined with a thin PTFE-layer (\(t=0.20\) mm) to minimise friction. Thin plywood battens (also lined with PTFE) were set out in the lower part of the specimens to prevent a rotation of the members around the fastener axis. The displacement in the shear planes was measured with four inductive displacement sensors, one on each side of each shear plane, see Fig. 2. Any mentioned displacement value is a mean value of these four points.
Spacings and edge distances of the fasteners fulfill the requirements in ETA-12/0114 (2017); the thickness of the outer members and the penetration depth in the middle member were chosen in accordance with Eq. (NA.110) of the German National Annex to EN 1995-1-1:2004+A1:2008 (2008) (DIN EN 1995-1-1/NA:2013-08 2013) to ensure two plastic hinges per shear plane according to Johansen's theory (European Yield Model, cf. Johansen 1949):
$$\begin{aligned} t_{req}=1.15\cdot 3.414\cdot \sqrt{\frac{M_{y,R}}{f_{h}\cdot d}}\approx 40\,\mathrm{mm} \end{aligned}$$
Based on the mean density of the specimen material, the embedment strength was assumed as \(f_{h,mean}=26.7\) N/mm\(^2\) according to Blaß et al. (2006). The fastener yield moment was assumed as the characteristic value in ETA-12/0114 (2017), \(M_{y,Rk}=16{,}000\) Nmm, because a mean value was not available. As the chosen test setup is planned to be used in a later study with inclined screws, the penetration depth in the middle member was increased by \(1.17\cdot d=7\) mm in accordance with Pirnbacher et al. (2009), who proposed considering the influence of the fastener tip on the withdrawal behaviour of self-tapping screws with a reduction factor \(k_{length}=1.17\cdot d\). This is not assumed to influence the behaviour under lateral loading considerably and was done to ensure maximal comparability between the tests with fasteners aligned perpendicular to the load direction and later tests with inclined fasteners.
Preliminary to the cyclic tests, the quasistatic capacity of the specimens was determined in 13 ramp tests according to EN 26891:1991-07 (1991) (Series Lstat).
Experimental programme
The fatigue behaviour of the considered connection was examined in three series of cyclic tests as shown in Table 1. The stress level \(S=F_{max}/F_{ult}\) was determined with the quasistatic ultimate capacity \(F_{ult,mean}\) from Series Lstat. The ratio between minimal and maximal loading (stress ratio) was set to \(R=0.1\), which is close to the most damaging non-reversing loading for timber structures (cf. empirical data by, e.g., Sterr 1963; Tsai and Ansell 1990; Bonfield and Ansell 1991; Bond and Ansell 1998 or general assessments by Kreuzinger and Mohr 1994 and Smith et al. 2003). Reversed loading was omitted to keep the load introduction simple. The chosen loading frequency is \(f=5\,Hz\) in all cyclic tests. Because the load cycle numbers at failure in the first tests were smaller than the commonly assumed threshold for high-cycle fatigue phenomena of metallic specimens (\(N<1.0E4\), see e.g., Collins 1993), the stress level of the subsequent tests in that series was adjusted and the first tests are separately considered as Series Ldyn,0. All cyclic tests were performed as force-controlled tests with sinusoidal loading at MFPA Leipzig GmbH on a walter + bai servo-hydraulic test rig (type LFV-5) with a maximum capacity of 7.5 kN. In compliance with the quasistatic reference tests, the chosen termination criterion is a displacement of \(u=15\) mm.
Table 1 Experimental programme
Quasistatic tests
Figure 3 shows the array of \(F-u\)- curves of the quasistatic tests. Note, that the discontinuity at ca. 5.5 kN is the transition from force-controlled testing to displacement-controlled testing. In all tests, the displacement threshold of \(u=15\) mm was decisive for the determination of the ultimate capacity, which was determined as
$$\begin{aligned} F_{ult,mean}=6.836\,\mathrm{kN}\;\;\;\text {with}\;\;\;CV_{Fult}=15.28\%\text {.} \end{aligned}$$
The individual results of the quasistatic tests in Series Lstat are given in Table 2. An evaluation of the opened specimens after testing showed plastic deformation in the fasteners at 47 of the 52 expected plastic hinge locations (ca. 90%). However, as the tests were not aborted at \(u=15\) mm but were continued up to \(u\approx 20\) mm, a proportion of these observed plastic hinges might have formed after reaching the respective ultimate capacity.
Table 2 Individual results of Series \(L_{stat}\)
Array of \(F-u\)- curves of the quasistatic preliminary tests
Influence of \(\rho _{N}\) on \(F_{ult}\) (\(\rho _{N,mean,i}\) acc. to Eq. 3)
As shown in Fig. 4, the observed ultimate capacity shows a distinct dependence on the density of the individual specimen, which can satisfactorily be explained by the influence of the specimen density on the embedment strength. In order to estimate the individual quasistatic capacity of each specimen in the cyclic test series, a linear regression was performed on the results from Series Lstat, yielding the following relation between specimen density and quasistatic capacity (\(R^2=0.9147\)):
$$\begin{aligned} F_{ult,i}\,\mathrm{[kN]}=0.022\cdot \rho _{N,mean,i}\,\left[ \frac{\mathrm{kg}}{\mathrm{m}^3}\right] -3.506 \end{aligned}$$
Here, a mean value for the density of the individual specimens (each comprised of three wooden members: two side members [sm] and one middle member [mm]) was assumed. First, the individual geometric mean density for both shear planes was calculated analogous to the consideration of different densities when determining \(K_{ser}\), cf. EN 1995-1-1:2004+A1:2008 (2008) 7.1 (2); then, the arithmetic mean of the values for both shear planes was determined:
$$\begin{aligned} \rho _{N,mean,i}=\frac{\sqrt{\rho _{N,sm1,i}\cdot \rho _{N,mm,i}}+\sqrt{\rho _{N,sm2,i}\cdot \rho _{N,mm,i}}}{2} \end{aligned}$$
In the subsequent description of the results of the cyclic tests, the stress level will be individually evaluated for each test, determining the quasistatic capacity with Eq. 2 and the individual mean density of the respective specimen. These individually determined stress levels are indicated with an asterisk \(S^*\).
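For illustration, the short sketch below combines Eqs. 2 and 3 to obtain the individual capacity and stress level of a specimen; the member densities and the applied maximum load are assumed values, not entries of Table 3.

```python
# Individual quasistatic capacity and stress level of a specimen (Eqs. 2 and 3).
import math

def rho_n_mean(rho_sm1, rho_mm, rho_sm2):
    # Eq. 3: arithmetic mean of the geometric mean densities of both shear planes
    return (math.sqrt(rho_sm1 * rho_mm) + math.sqrt(rho_sm2 * rho_mm)) / 2

def f_ult(rho):
    # Eq. 2: linear regression of the quasistatic capacity on the specimen density [kN]
    return 0.022 * rho - 3.506

rho = rho_n_mean(455.0, 480.0, 470.0)   # side member 1, middle member, side member 2 (kg/m^3, assumed)
F_ult_i = f_ult(rho)                    # about 6.9 kN
F_max = 2.1                             # maximum load of the cyclic test in kN (assumed)
print(f"rho_N,mean,i = {rho:.1f} kg/m^3, F_ult,i = {F_ult_i:.2f} kN, S* = {F_max / F_ult_i:.2f}")
```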
Cyclic tests
In all cases, ultimate failure in the cyclic tests occurred through tear off of the screws due to the cyclic bending. Failure was observed at two points in each screw, one on each side of the shear plane, analogous to the desired pairs of plastic hinges according to Johansen's theory under ultimate quasistatic load, see Fig. 5.
Failure through tear off at two locations in each fastener (specimen L16 Series Ldyn,3 \(N=1.06\text{E}6\) \(\vert \) \(S^*=0.20\)). The dislocated pieces of the fasteners are not included in the picture
While in most cases full separation was observed, in a few cases failure occured only as a distinct crack on the tension side of the screw. With decreasing stress level, the location of the points of failure was observed to be closer to the shear plane, see Fig. 6.
Observed distance between location of failure and shear plane, theoretical location of \(M_{max}\)
Figure 7 shows the progression of the deformations at maximum load during cyclic testing. To compensate for the different magnitudes of load cycle numbers and deformations, the graphs of all tests are normalised with regard to both axes. The load cycle numbers are displayed relative to the individual ultimate load cycle number \(N_{i}/N\) and the deformations are displayed relative to the mean deformation \(u_{mean}\) between \(0.15\cdot N\) and \(0.75\cdot N\) of every test during which the deformations are quite stable. The individual values of \(u_{mean}\) are given in Table 3.
Progression of maximal deformation during cyclic testing—note the different scaling on the individual ordinates
Table 3 Individual results of the cyclic test series
Load cycle numbers
The load cycle numbers at failure of the individual tests are given in Table 3 and displayed in Fig. 8. In Fig. 8, the four tested load levels (cf. Table 1) cannot readily be identified, because the ordinate displays the individual stress levels \(S^*\), determined with \(F_{ult,i}\) according to Eq. 2. This yields an individual effective stress level for each specimen, even though the absolute loading is the same within each series Ldyn,i.
Observed load cycle numbers and linear regression of double-logarithmic data (\(P_A=50 \%\)); Wöhler-curve for connections with nails in EN 1995-2:2004 (2004); Wöhler-curve derived from axial tests (Eq. 17)
Since failure essentially occurs in the metallic fasteners, assuming a linear relation between the logarithmic values of loading and load cycle numbers was identified as the most promising approach for a continuous description of the test results. A linear regression was performed with the logarithmic \(S^*-N\)-data, yielding the following relation for a failure probability of \(P_A=50\,\%\) (\(R^2=0.939\)):
$$\begin{aligned} \log N=2.1830-5.1309\cdot \log S^* \end{aligned}$$
Although the linear regression is based on the double-logarithmic data, a linear scale is chosen for the ordinate (\(S^*\)) in Fig. 8 to comply with the common form of \(S-N\)-curves in timber construction. The chosen \(S^*-N\)-relation also accounts for a possible influence of the fatigue behaviour of the embedment in the wood. Beyond that, an evaluation of the stress amplitude in the fasteners (the ordinary approach in the evaluation of metal fatigue) is only possible with limited reliability. Both of these aspects will be further discussed in Sect. 4. Analogous to the pearl string method in DIN 50100:2016-12 (2016), the standard deviation of the logarithmic load cycle numbers at failure over all tests was determined as \({\tilde{s}}_{logN,corr}=0.1922\) (corrected according to Martin et al. 2011 to compensate for the limited sample size). This approach is based on the assumption that the respective standard deviation is constant in the finite-life regime. A summary of the different testing and evaluation approaches in DIN 50100:2016-12 (2016) and the underlying assumptions has been given by Masendorf and Müller (2018).
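For convenience, the regression line of Eq. 4 can be evaluated directly; the sketch below returns the predicted load cycle numbers (at 50 % failure probability) for the four nominal stress levels of the test programme.

```python
# Evaluation of the Woehler curve of Eq. 4 (P_A = 50 %).
import math

def cycles_to_failure(s_star):
    # Eq. 4: log N = 2.1830 - 5.1309 * log S*, returned as N
    return 10 ** (2.1830 - 5.1309 * math.log10(s_star))

for s in (0.47, 0.41, 0.31, 0.20):
    print(f"S* = {s:.2f} -> N(50%) ~ {cycles_to_failure(s):.2e}")
# S* = 0.20, for example, yields roughly 6e5 cycles at 50 % failure probability.
```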
To allow for a comparison of the given test results in bending and the results in axial tension from an earlier study (Niebuhr and Sieder 2020), the axial bending stresses in the fasteners shall be assessed. The analytical determination of the cross-sectional screw properties is not trivial and has been described extensively by Ringhofer (2017). Figure 9 shows the chosen coordinates and the considered cross-section. For the fastener given here (see Sect. 2), the moments of inertia have been determined as
$$\begin{aligned} I_y=19.3\,\mathrm{mm}^4 \text { and } I_z=13.0\,\mathrm{mm}^4\text {.} \end{aligned}$$
With the location of the cross-section's local centre of gravity (\(y_s=0/z_s=-0.22\,mm\)), the section moduli are
$$\begin{aligned} W_{y}= {\left\{ \begin{array}{ll} 8.7\,\mathrm{mm}^3 \\ -7.0\,\mathrm{mm}^3 \end{array}\right. } \text { and } W_{z}=\pm 6.5\,\mathrm{mm}^3\text {.} \end{aligned}$$
Considered cross-section and coordinates for the determination of cross-sectional properties (adapted from Ringhofer 2017)
To estimate the bending moment in the fasteners during the cyclic tests, a simple numerical beam-on-springs-model was used (implemented in the software package DLUBAL RFEM). The fasteners were modelled as cylindric beams with linear elastic behaviour (\(E=210\) GPa) and diameter \(d=d_c=4.01\) mm. The embedment was modelled with discrete nonlinear spring elements (spaced 1 mm apart), estimating the load-slip behaviour with the following equation:
$$\begin{aligned} \sigma _{h}(w)=f_{h}\cdot \left[ 1.029-0.01\cdot w'\right] \cdot \left[ 1-e^{-\frac{w'}{0.8}}\right] \le f_{h} \end{aligned}$$
Equation 7 is taken from Blaß et al. (2006) for the chosen geometric conditions. Initial slip \(w_s=0.022\) mm is considered in \(w'=w-w_s\). To account for the pronounced influence of the specimen density on \(f_h\), the specimens were grouped into five classes with respect to their mean density \(\rho _{N,mean,i}\). For each class, a nominal embedment strength and respective load-slip diagram was estimated and considered in the numerical determination of the bending moments:
$$\begin{aligned} f_h=\left\{ 20.0,\;23.0,\;26.0,\;29.0,\;32.0\right\} \,\mathrm{N/mm}^2 \end{aligned}$$
Figure 10 shows the assumed load-slip diagrams.
Assumed load-slip behaviour in the beam-on-springs-model for \(f_h=\left\{ 20.0,\;23.0,\;26.0,\;29.0,\;32.0\right\} \) N/mm\(^2\)
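The following sketch evaluates Eq. 7 for the five embedment strength classes used in the model; it only reproduces the assumed load-slip relation of the nonlinear springs, not the full beam-on-springs simulation.

```python
# Embedment load-slip relation of Eq. 7 (after Blass et al. 2006); w in mm, stress in N/mm^2.
import math

W_S = 0.022   # initial slip in mm, as stated above

def embedment_stress(w, f_h):
    # sigma_h(w) = f_h * [1.029 - 0.01*w'] * [1 - exp(-w'/0.8)] <= f_h, with w' = w - w_s
    w_eff = max(w - W_S, 0.0)
    sigma = f_h * (1.029 - 0.01 * w_eff) * (1.0 - math.exp(-w_eff / 0.8))
    return min(sigma, f_h)

# one curve per density class used in the model (cf. the f_h classes above)
for f_h in (20.0, 23.0, 26.0, 29.0, 32.0):
    print(f"f_h = {f_h:4.1f} N/mm^2 -> sigma_h(2 mm) = {embedment_stress(2.0, f_h):5.2f} N/mm^2")
```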
The determined location of the maximal bending moment in the screw is shown in Fig. 6. It can be seen that the distance between the shear plane and the theoretical location of the maximal bending moment increases with increasing stress level \(S^*\). However, the observed distance between location of failure and shear plane is generally smaller than between the shear plane and the numerically determined location of the maximal bending moment, see also Fig. 5. Possible reasons for this discrepancy will be discussed in Sect. 4.
Progression of deformation
As seen in Fig. 7, the deformations at \(F_{max}\) of most specimens are stable for most of the fatigue life and a considerable rise in deformations is observed only shortly before ultimate failure. In a small number of tests, however, an abrupt rise of deformations is seen considerably before ultimate failure, followed by a stabilisation on a higher deformation level. Obviously, considerable damage has occurred before ultimate failure in these tests, with ultimate failure probably only deferred by the redundant nature of the chosen test setup with two fasteners.
A possible approach to considering this phenomenon is to choose a different termination criterion in the fatigue tests, for example, a certain deformation increment between load cycles or a smaller absolute deformation. However, because the stress level in the fatigue tests is defined by the quasistatic ultimate capacity determined in Series Lstat, compliance between the failure criteria in the static and dynamic test series is prioritised, which is why the original failure criterion (\(u=15\) mm) was maintained in the fatigue tests. As the described premature damage occurred only in a small fraction of the tests, the effective influence on the overall evaluation is assumed to be small.
Location of failure
Qualitatively, the observed shift of the location of failure towards the shear plane with decreasing stress level \(S^*\) as shown in Fig. 6 is plausible, given that the embedment stiffness decreases with increasing embedment stress (cf. Fig. 10). For smaller stress levels the embedment stiffness is higher in relation to the bending stiffness of the fastener, which is assumed to be constant in the linear elastic regime. With increasing embedment stiffness, the location of the maximal bending moment moves closer to the shear plane, ultimately yielding the ideal bearing-type connection assumed for bolted steel connections, where no bending in the fastener is considered (Petersen 2013).
As mentioned, a discrepancy between the observed location of failure and the theoretical location of the maximal bending moment in the fastener was observed (cf. Figs. 5, 6 ). On average, the failure location is \({\varDelta }_{u\vert M}=3.72\) mm closer towards the shear plane than the numerically estimated maximal bending moment. Considering the individual values of \({\varDelta }_{u\vert M}\) for all fatigue tests, given in Table 3 as a mean value of the four failure locations per specimen, the discrepancy tends to be bigger for smaller stress levels \(S^*\). While the reason(s) for the observed discrepancy could not definitively be found, some possible explanations will be discussed:
Deficiencies in the numerical simulation: Obviously, the chosen numerical beam-on-springs-model is a gross simplification of reality, especially with regard to the fatigue behaviour of the embedment of the fastener in the wood. However, given that the standard deviations for the load-slip-curve parameters given by Blaß et al. (2006) are small, the estimation of this behaviour is thought to be sufficiently accurate.
Gaping shear plane: Due to imperfect manufacturing and possibly also due to unexpected (and unnoticed) lateral deformations during testing, an open gap between the side members and the middle member of the specimens (in the shear plane) cannot be ruled out. At a minimum, the members are kept at a certain distance by the PTFE layer, although this alone (\(t=0.20\) mm) is not enough to justify the observed discrepancy \({\varDelta }_{u\vert M}\). However, considering any gaping joint yields a maximal bending moment that lies closer to the shear plane, so a partial explanation can be assigned to this aspect.
Random scatter of crack initiation nuclei in the fastener: The location of a fatigue crack in metals is bound to the existence of a crack initiation nucleus in the material such as dislocations and vacancies (Forsyth 1969). A certain proportion of these points is assumed to be randomly distributed along the idealised longitudinal axis of the fastener (disregarding geometrical effects such as notches formed by the thread and other effects such as work hardening). In a previous study of the fatigue behaviour of the given fasteners under axial tensile loading (Niebuhr and Sieder 2020), failure was observed to occur anywhere between the 2nd and 22nd thread (counting from head to tip), which is attributed to the natural scatter of crack initiation nuclei. The same can be applied here, certainly explaining some of the scatter of the location of failure. Additionally, with increasing deformation, an increasing proportion of the external load is borne as axial forces in the fasteners due to the rope effect, reducing the relative influence of the bending stresses in the fastener due to superposition of the stresses. However, as the observed failure locations are generally on one side of the theoretical location, natural scatter alone is an insufficient explanation for the discrepancy.
Influence of geometric and metallurgical notches: Given that the surface geometry of the specimen has a major influence on the crack initiation, a deviation of the failure locations in the magnitude of the thread pitch \(p=3.60\) mm would be plausible, as this is the interval in which the geometric shape of the fastener repeats along the axis. However, the scatter range is about three times bigger than that (cf. Fig. 6); moreover, the deviation is only to one side of the theoretical location, see above.
Accumulated deformation in the wood close to the shear plane: Smith et al. (2003) describe that under compressive fatigue loading parallel to grain an accumulation of deformations can be observed in spruce wood (findings based on Gong 2000). Assuming that this is also valid for the given embedment in the wood (the load is oriented parallel to the grain as well), the aforementioned gaping shear plane effect might be considerably intensified by local fatigue damage of the wood fibres close to the shear plane (cf. Fig. 11), effectively leading to the described gaping shear plane without an externally visible gap. This effect would assign a non-negligible influence on the behaviour of the connection to the fatigue behaviour of the wood, as it governs the alteration of the fastener loading during the progression of the test.
Possible local embedment failure near the shear plane due to wood fatigue resulting in altered loading of the fasteners (top: no damage in wood; bottom: damage near the shear plane)
Given the described discrepancy between the location of failure and the theoretical location of the maximum bending moment, the determination of axial bending stresses in the fastener must be discussed. Due to the uncertainties described above, the exact magnitude of the bending moment in the fastener at the location of failure is unknown. Additionally, the appropriate section moduli of the screw are unknown as the orientation of the screw (with regard to rotation around its longitudinal axis) is unspecified; Eq. 6 gives only the extreme values. Hence, the axial bending stresses in the fastener at the location of failure are estimated as a limit value consideration, determining the minimal and maximal realistic magnitude of the stress amplitudes for every specimen.
The maximum limit value of the bending moment in the fastener \(M_{max}\) is assumed as the maximum value from the simulation, regardless of its location. The minimum limit value \(M_{loc}\) is assumed as the value from the simulation at the point where failure was observed in the test. Both values are shown in Fig. 5. As to the section moduli, the minimal and maximal values from Eq. 6, \(W_y\) and \(W_z\), are used, corresponding to the most favourable and most unfavourable axial orientation of the fastener (cf. Fig. 9). With these quantities, the minimum limit value of the axial bending stress at maximum external force \(F_{max}\) is determined with the minimal bending moment and the maximal section modulus:
$$\begin{aligned} \text {min}\,\sigma (F=F_{max})=\frac{M_{loc}(F=F_{max})}{W_y} \end{aligned}$$
Accordingly, the maximum limit value at maximum external force \(F_{max}\) is determined with the maximal bending moment and the minimal section modulus:
$$\begin{aligned} \text {max}\,\sigma (F=F_{max})=\frac{M_{max}(F=F_{max})}{W_z} \end{aligned}$$
All of these considerations were undertaken for the stresses under the maximal external force in the load cycle \(F_{max}\). Neglecting some of the aforementioned nonlinearities, the stresses at \(F_{min}=F_{max}\cdot R\) are assumed as
$$\begin{aligned} \begin{aligned}&\text {min}\,\sigma (F=F_{min})=\text {min}\,\sigma (F=F_{max})\cdot R \\&\text {max}\,\sigma (F=F_{min})=\text {max}\,\sigma (F=F_{max})\cdot R \end{aligned} \end{aligned}$$
which yields
$$\begin{aligned} \begin{aligned}&\text {min}\,\sigma _a=\frac{1}{2}\cdot \frac{M_{loc}(F=F_{max})}{W_y}\cdot (1-R) \\&\text {max}\,\sigma _a=\frac{1}{2}\cdot \frac{M_{max}(F=F_{max})}{W_z}\cdot (1-R) \end{aligned} \end{aligned}$$
for the limit values of the stress amplitude. Equation 11 was used to determine the axial bending stress amplitudes in Fig. 12, showing the minimal and maximal values as a range of uncertainty. A linear regression of the logarithmic \(\sigma _a-N\)-data yielded the following relations for the minimum \((R^2=0.955)\) and maximum \((R^2=0.945)\) limit values:
$$\begin{aligned} \begin{aligned}&\log N=15.853-4.430\cdot \log \left( \text {min}\,\sigma _a\right) \\&\log N=16.949-4.717\cdot \log \left( \text {max}\,\sigma _a\right) \end{aligned} \end{aligned}$$
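For illustration only (this is not part of the original evaluation), the limit-value consideration of Eq. 11 and the fitted lifetime relations of Eq. 12 can be sketched numerically as follows; the moment values, the maximal section modulus and the stress ratio used below are placeholder assumptions, not data from Table 3 or Eq. 6.

```python
# Hedged sketch of Eqs. 11 and 12; all numerical inputs are placeholders and
# would in practice come from the simulation (M_loc, M_max), Eq. 6 (W_y, W_z)
# and the test programme (stress ratio R).
import math

def stress_amplitude_limits(M_loc, M_max, W_y, W_z, R):
    """Eq. 11: lower and upper limit of the axial bending stress amplitude."""
    sigma_a_min = 0.5 * (M_loc / W_y) * (1.0 - R)   # minimal moment, maximal modulus
    sigma_a_max = 0.5 * (M_max / W_z) * (1.0 - R)   # maximal moment, minimal modulus
    return sigma_a_min, sigma_a_max

def cycles_to_failure(sigma_a_min, sigma_a_max):
    """Eq. 12: regression lines for the minimum and maximum limit values."""
    logN_min = 15.853 - 4.430 * math.log10(sigma_a_min)
    logN_max = 16.949 - 4.717 * math.log10(sigma_a_max)
    return 10.0 ** logN_min, 10.0 ** logN_max

# Example with assumed values (moments in Nmm, section moduli in mm^3):
s_min, s_max = stress_amplitude_limits(M_loc=9000.0, M_max=11000.0,
                                        W_y=8.0, W_z=6.5, R=0.1)
print(cycles_to_failure(s_min, s_max))
```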
Observed load cycle numbers at failure in relation to the axial stress due to bending (Eqs. 11, 12), comparison with Wöhler-curve for axial tension (axial data taken from Niebuhr and Sieder, 2020)
Comparison of axial tension and bending
Figure 12 shows the derived \(\sigma _a - N\)-Wöhler-curves in the classical double-logarithmic form for metals, displaying the stress amplitude on the ordinate. The results are compared to the corresponding Wöhler-curve for the tested screw under axial tension from an earlier study (Niebuhr and Sieder 2020). It is obvious that the observed behaviour under bending is superior to that under comparable axial tension. This complies with existing findings on the behaviour of steel bolts that have been extensively examined for applications with combined loading such as bolted ring flange joints in towers for wind energy plants (cf. Agatonovic 1973; Frank 1980; Kampf 1997; Seidel 2001; Alt 2005; Berger et al. 2008; Schaumann and Marten 2009; Eichstädt 2019). As stated by the named authors, and shown in Fig. 12, the fatigue behaviour of threaded fasteners under bending and under combined axial and bending loading can be characterised on the safe side using a Wöhler-curve for pure axial loading, as long as for combined loading the additional stress due to the bending component is considered in the determination of the stress in the fastener. This approach will be used in the next section.
Verification and design
First, the Wöhler-curve given for nailed connections in EN 1995-1-1:2004+A1:2008 (2008) is examined, as it might be considered appropriate for the verification of connections with self-tapping screws. As seen in Fig. 8, the EC-curve lies on the unsafe side of the obtained test results, clearly ruling it out for a verification of the tested fastener. This can satisfactorily be explained by the negative influence of the geometric notch that is formed by the threads as well as metallurgical notches formed during manufacturing of the fasteners. Hence, a dedicated design curve for self-tapping screws or threaded fasteners in general is necessary.
Obviously, for the specific examined fastener, Eq. 4 yields the most accurate verification of timber-to-timber connections with lateral loading of the fastener. However, as described in Sect. 1, empirical data on the fatigue behaviour of threaded fasteners in timber construction is primarily available, and more easily obtainable, for axial loading. Hence, a simple safe-side approach for the verification of lateral fastener loading using axial Wöhler-curves is presented to allow for a utilisation of this more readily available type of data if connections with other fasteners are to be assessed.
As mentioned in the previous section, an axial Wöhler-curve is sufficient for a safe-side estimation of the fatigue behaviour of the fastener under bending. These curves generally describe a \(\sigma _a-N\)-relation which will be transferred into an \(S-N\)-form in order to comply with the common form in timber construction. The stress amplitude from Eq. 11 is rearranged and multiplied with the minimum value of the section modulus to derive a respective maximum bending moment in the description of the material behaviour:
$$\begin{aligned} M(F=F_{max})=\frac{2\cdot \sigma _a}{1-R}\cdot W_z \end{aligned}$$
For a simplified estimation of the bending moment in a fastener under a given external force, a linear approach is proposed. It is based on the ultimate capacity \(F_{ult}\) (determined with a method of choice, e.g., according to EN 1995-1-1:2004+A1:2008, 2008) and the yield moment of the fastener \(M_{y,R}\) (e.g., as given in the respective technical assessment). The bending moment in the fastener is assumed to decrease linearly with decreasing external loading which yields
$$\begin{aligned} M(F=F_{max})=M_{y,R}\cdot \frac{F_{max}}{F_{ult}}=M_{y,R}\cdot S \end{aligned}$$
for the maximum force in a load cycle, yielding
$$\begin{aligned} S=\frac{M(F=F_{max})}{M_{y,R}} \end{aligned}$$
and thus (Eq. 13\(\rightarrow \)14)
$$\begin{aligned} S=\frac{2\cdot \sigma _a}{1-R}\cdot \frac{W_z}{M_{y,R}}\text {.} \end{aligned}$$
Assuming \(M_{y,Rk}=16{,}000\) Nmm for the examined fastener, this approach reproduces the numerically determined bending moment at the failure location \(M_{loc}\) with a mean error of 2.0% (\(CV=10.59\%\)). Considering the extent of the conservativeness of the \(\sigma _{a,axial}\)-curve compared to the \(\sigma _{a,lateral}\)-curve in Fig. 12, this approach can be seen as reasonable.
Exemplarily, the proposed approach will be shown for the examined fastener, as the Wöhler-curve for axial loading is available from an earlier study (Niebuhr and Sieder 2020). There, the \(\sigma _a-N\)-relation for failure in the threaded part of the fastener was determined as
$$\begin{aligned} \log N=12.38-3.40\cdot \log \sigma _a\text {.} \end{aligned}$$
Considering Eq. 15 and \(W_{z}=\pm 6.5\) mm\(^3\) (Eq. 6), Eq. 16 can be written as
$$\begin{aligned} \log N=2.02-3.40\cdot \log S\text {.} \end{aligned}$$
Figure 8 shows the Wöhler-curve given by Eq. 17. Clearly, the simplified approach yields a highly conservative estimation of the behaviour under lateral loading. However, if no other empirical data is available for a specific fastener, but a lateral orientation is desirable (e.g., because of space limitation in the members), the proposed approach enables a safe-side estimation of the fatigue behaviour.
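A short numerical sketch of this conversion (Eqs. 15–17) is given below; \(W_z\) and \(M_{y,Rk}\) are the values quoted above, whereas the stress ratio \(R\) is an assumption of this sketch (with \(R=0.1\) the computed constant, about 2.03, agrees with the 2.02 of Eq. 17 up to rounding of the published coefficients).

```python
# Sketch of the simplified safe-side conversion from the axial sigma_a-N curve
# (Eq. 16) to an S-N form (Eq. 17) via Eq. 15.  The stress ratio R is an
# assumption of this sketch; W_Z and M_Y_RK are the values quoted in the text.
import math

A_AXIAL, B_AXIAL = 12.38, 3.40    # Eq. 16: log N = A - B * log(sigma_a)
W_Z = 6.5                         # mm^3, minimal section modulus (Eq. 6)
M_Y_RK = 16000.0                  # Nmm, assumed characteristic yield moment
R = 0.1                           # stress ratio (assumption)

def sigma_a_from_S(S):
    """Invert Eq. 15: stress amplitude belonging to a stress level S."""
    return S * (1.0 - R) * M_Y_RK / (2.0 * W_Z)

def logN_from_S(S):
    """Eq. 16 combined with Eq. 15; compare with Eq. 17."""
    return A_AXIAL - B_AXIAL * math.log10(sigma_a_from_S(S))

# Constant term of the resulting S-N line (about 2.03 here vs. 2.02 in Eq. 17;
# the small difference stems from rounding of the published coefficients).
print(round(logN_from_S(1.0), 2))
```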
The fatigue behaviour of a timber-to-timber connection with self-tapping screws under lateral loading was examined in the finite-life regime. The design of the connection was successfully aimed at a decisive quasistatic failure mode with two plastic hinges per shear plane according to Johansen's theory, which was mirrored in fatigue failure. The Wöhler-curve for the examined connection was obtained on the basis of 30 fatigue tests with load cycle numbers ranging from 5.00E3 to 2.97E6, whereby the consideration of the individual density of the specimens enabled a continuous description of the results with a high accuracy.
An inspection of the obtained results with regard to the stress in the fastener enabled a comparison with the behaviour of the fastener under axial loading. The general understanding that the fatigue behaviour of threaded fasteners under bending is superior to that under axial loading could be confirmed for the examined fastener. On this basis, a simple safe-side approach for the utilisation of axial Wöhler-curves for connections with lateral loading of the fastener was proposed, allowing to use these results for a wider range of applications.
As described in Sect. 1, a full comparison between connections with lateral and axial loading of the examined fastener has to consider results from an ongoing examination of the withdrawal behaviour of the screw. This will allow for a qualitative and quantitative assessment of the superiority of either configuration with regard to fatigue failure.
Apart from this, it has to be emphasised that the obtained results are strictly valid only for the specific screw that was examined. Differences in geometry, material, manufacturing and other aspects (cf. Ringhofer 2017) certainly influence the behaviour of different fasteners. For axial loading, this assumption has been confirmed by Niebuhr and Sieder (2020), comparing their test data with similar data from Ringhofer et al. (2019), although a certain affinity of the behaviour of the considered screws was apparent. An extensive experimental study of a wide variety of screws might enable a general description of this type of fastener, similar to the detail categories in EN 1993-1-9:2005 + AC:2009 (2009) or the respective general consideration of nailed connections and connections with dowels in EN 1995-1-1:2004+A1:2008 (2008). The results given here are only one of many contributions that are necessary for this extensive study.
Abrahamsen RB (2017) Mjøstårnet—construction of an 81 m tall timber building. In: Proc 23. int Holzbau-forum 2017
Abrahamsen RB, Malo KA (2014) Structural design and assembly of Treet—a 14-storey timber residential building in Norway. In: WCTE 2014 conf proc
Agatonovic P (1973) Verhalten von Schraubenverbindungen bei zusammengesetzter Betriebsbeanspruchung [Behaviour of bolted connections under combined fatigue loading] (in German). PhD thesis, Technische Universität Berlin
Alt A (2005) Dauerfestigkeitsprüfung und Dauerfestigkeit von Schraube-Mutter-Verbindungen unter kombinierter Zug- und Biegebelastung [Fatigue limit and fatigue limit evaluation of bolt and nut connections under combined tension and bending] (in German). PhD thesis, Technische Universität Berlin
Berger C, Schaumann P, Stolle C, Marten F (2008) Experimentelle Ermittlung von Wöhlerlinien großer Schrauben [Fatigue strength of high strength bolts with large diameters] (in German): Report ZP 52-5-16.125-1231/06. DIBt, Berlin
Blaß HJ, Bejtka I, Uibel T (2006) Tragfähigkeit von Verbindungen mit selbstbohrenden Holzschrauben mit Vollgewinde [Strength of connections with self-tapping full-threaded timber screws] (in German). Univ.-Verl. Karlsruhe
Bond IP, Ansell MP (1998) Fatigue properties of jointed wood composites: part I statistical analysis, fatigue master curves and constant life diagrams. J Mater Sci 33(11):2751–2762. https://doi.org/10.1023/A:1017565215274
Bonfield PW, Ansell MP (1991) Fatigue properties of wood in tension, compression and shear. J Mater Sci 26(17):4765–4773. https://doi.org/10.1007/BF00612416
Bröker FW, Krause HA (1991) Orientierende Untersuchungen über das Haltevermögen dynamisch beanspruchter Holzschrauben [Orienting examination of the holding capacity of timber screws under dynamic loading] (in German). Holz Roh Werkst 49(10):381–384. https://doi.org/10.1007/BF02608920
Burmester A, Hoffmann A (1970) Schraubenhaltevermögen von Kiefern- und Fichtenholz unter langdauernder statischer und dynamischer Belastung [Pull-out capacity of pine and spruce under static and dynamic loading] (in German). Die Holzbearbeitung HOB 17(5):9–11
Christian Z, Aicher S (2016) Fatigue behaviour of timber composites and connections for ultra high wooden towers. In: WCTE 2016 conf proc
Collins JA (1993) Failure of materials in mechanical design: analysis, prediction, prevention, 2nd edn. Wiley, New York
DIN 50014:2018-08 (2018) Normalklimate für Vorbehandlung und/oder Prüfung—Festlegungen [Standard atmospheres for conditioning and/or testing—specifications] (in German)
DIN 50100:2016-12 (2016) Load controlled fatigue testing—execution and evaluation of cyclic tests at constant load amplitudes on metallic specimens and components (in German)
DIN EN 1995-1-1/NA:2013-08 (2013) National Annex—Nationally determined parameters—Eurocode 5: design of timber structures—Part 1-1: general—common rules and rules for buildings (in German)
Eichstädt R (2019) Fatigue assessment of large-size bolting assemblies for wind turbine support structures. PhD thesis, Universität Hannover. https://doi.org/10.15488/5157
EN 1993-1-9:2005 + AC:2009 (2009) Eurocode 3: design of steel structures—part 1-9: fatigue
EN 1995-1-1:2004+A1:2008 (2008) Eurocode 5: design of timber structures—part 1-1: general—common rules and rules for buildings
EN 1995-2:2004 (2004) Eurocode 5: design of timber structures—part 2: bridges
EN 26891:1991-07 (1991) Timber structures; joints made with mechanical fasteners; general principles for the determination of strength and deformation characteristics (ISO 6891:1983)
ETA-12/0114 (2017) European Technical Assessment: SPAX self-tapping screws
Flach M, Frenette CD (2004) Wood-concrete-composite-technology in bridge construction. In: WCTE 2004 conf proc, pp 289–294
Forsyth PJE (1969) The physical basis of metal fatigue. Blackie & Son, London
Frank KH (1980) Fatigue strength of anchor bolts. J Struct Div 106(6):1279–1293
Gong M (2000) Failure of spruce under compressive low-cycle fatigue loading parallel to grain. PhD thesis, University of New Brunswick
Gräfe M, Bert C, Winter S (2017) Prestressed CLT wind-turbine towers. Bautech 94(11):804–811. https://doi.org/10.1002/bate.201700080
Johansen KW (1949) Theory of timber connections. IABSE Publ 9:249–262
Kampf M (1997) Dauerhaltbarkeit von Schrauben unter kombinierter Zug- und Biegebelastung [Fatigue limit of bolts under combined tension and bending] (in German). PhD thesis, Technische Universität Berlin
Kreuzinger H, Mohr B (1994) Holz und Holzverbindungen unter nicht vorwiegend ruhenden Einwirkungen [Timber and timber connections under repeated loading] (in German): Report. Technische Universität München
Lefebvre D, Richard G (2014) Design and construction of a 160-metre-long wood bridge in Mistissini, Québec. In: Proc 20 int Holzbau-forum 2014
Malo KA, Abrahamsen RB, Bjertnæs MA (2016) Some structural design issues of the 14-storey timber framed building Treet in Norway. Eur J Wood Wood Prod 74(3):407–424. https://doi.org/10.1007/s00107-016-1022-5
Martin A, Hinkelmann K, Esderts A (2011) Zur Auswertung von Schwingfestigkeitsversuchen im Zeitfestigkeitsbereich—Teil 2 [On the evaluation of fatigue tests in the finite-life regime—part 2] (in German). Mater Test 53(9):513–521. https://doi.org/10.3139/120.110256
Masendorf R, Müller C (2018) Execution and evaluation of cyclic tests at constant load amplitudes—DIN 50100:2016. Mater Test 60(10):961–968. https://doi.org/10.3139/120.111238
Meyer L, Morzier C, Tissot JB (2005) Holz-Beton-Verbundbrücken für den 40t-Verkehr im Kanton Freiburg (Schweiz) [40-ton wood-concrete composite bridges in Canton Fribourg (Switzerland)] (in German). In: Proc 11. int Holzbau-forum 2005
Niebuhr P, Sieder M (2020) High-cycle fatigue behavior of a self-tapping timber screw under axial tensile loading. J Fail Anal Prev 20(2):580–589. https://doi.org/10.1007/s11668-020-00863-4
Petersen C (2013) Stahlbau [Steel construction] (in German). Springer, Wiesbaden
Pirnbacher G, Brandner R, Schickhofer G (2009) Base parameters of self-tapping screws: CIB-W18: paper 42-7-1. Dübendorf
Rantakokko T, Salokangas L (2000) Design of the Vihantasalmi Bridge, Finland. Struct Eng Int 10(3):150–152. https://doi.org/10.2749/101686600780481590
Ringhofer A (2017) Axially loaded self-tapping screws in solid timber and laminated timber products. PhD thesis, TU Graz
Ringhofer A, Augustin M, Schickhofer G (2019) Basic steel properties of self-tapping timber screws exposed to cyclic axial loading. Constr Build Mater 211:207–216
Röhm J, Brand S, Kunz F (2015) Hoher Züblin-Windkraft-Turm aus Holz [Züblin wooden wind power tower] (in German). In: Proc 21 int Holzbau-forum 2015
Schaumann P, Marten F (2009) Fatigue resistance of high strength bolts with large diameters. In: Proc int symp for steel struct
Schröder C (2015) Ermüdungsbeanspruchte Brettsperrholz-Bauteile am Beispiel eines Turms für Windkraftanlagen [Wooden and wood hybrid towers for wind turbines in the multi-megawatt class] (in German). In: Proceeding 21. int Holzbau-forum
Seidel M (2001) Zur Bemessung geschraubter Ringflanschverbindungen von Windenergieanlagen [On the verification of bolted ring flange connections in wind energy plants] (in German). PhD thesis, Universität Hannover
Sieder M, Schröder C (2019) TimberTower: Erfahrungen aus Konstruktion und Betrieb der ersten Windkraftanlage mit Holzturm [TimberTower: Experience from construction and operation of the first wind energy plant with wooden tower] (in German). In: Proc 25. Int Holzbau-Forum 2019, pp 259–272
Smith I, Landis E, Gong M (2003) Fracture and fatigue in wood. Wiley, Chichester
Stamatopoulos H, Malo KA (2017) Fatigue strength of axially loaded threaded rods embedded in glulam at \(45^\circ \) to the grain. In: Proc int conf on timber bridges 2017
Sterr R (1963) Untersuchungen zur Dauerfestigkeit von Schichtholzbalken [Investigations on the fatigue resistance of laminated wood beams] (in German). Holz Roh Werkst 21(2):47–61. https://doi.org/10.1007/BF02609715
Trübswetter T (1973) Klammern als Holzverbinder bei wechselnden Lasten [Tacks as timber fasteners under alternating loading] (in German). HK Holz- und Kunststoffverarbeitung 7(5)
Tsai KT, Ansell MP (1990) The fatigue properties of wood in flexure. J Mater Sci 25(2):865–878. https://doi.org/10.1007/BF03372174
The authors wish to thank SPAX International GmbH & Co. KG for the provision of the tested fasteners.
Open Access funding enabled and organized by Projekt DEAL. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—397985109. The tested fasteners were provided by SPAX International GmbH & Co. KG.
iBHolz Technische Universität Braunschweig, Brunswick, Germany
Peter Niebuhr & Mike Sieder
Correspondence to Peter Niebuhr.
The authors declare that they have no conflict of interest.
Niebuhr, P., Sieder, M. High-cycle fatigue behaviour of a timber-to-timber connection with self-tapping screws under lateral loading. Eur. J. Wood Prod. 79, 785–796 (2021). https://doi.org/10.1007/s00107-021-01699-x
DOI: https://doi.org/10.1007/s00107-021-01699-x | CommonCrawl |
Hailey used 4 identical sticks to form a square as shown below. She then formed a pattern using more of the sticks.
(a) How many sticks are used to form 13 squares?
(b) How many squares are formed using 100 sticks?
(a) Let n = number of squares
Number of sticks = $(n-1) \times 3 + 4 = 3n + 1$
= $3 \times 13 + 1 = 40$
(b) $3n + 1 = 100$
$3n = 100 – 1 = 99$
$n = 99 \div 3 = 33$
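A quick computational check of both answers (a sketch, assuming the stick-count rule $3n + 1$ used above):

```python
# Check of the stick-count rule sticks(n) = 3*n + 1 for a row of n squares.
def sticks(n):
    return 3 * n + 1

print(sticks(13))                                            # (a) 40 sticks
print(next(n for n in range(1, 200) if sticks(n) == 100))    # (b) 33 squares
```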
Michael uses identical shaded and unshaded triangles to form figures that follow a pattern as shown below. \begin{array}{|c|c|c | c |} \hline \mbox{Figure No.} & \mbox{No. of shaded triangles} & \mbox{No. of unshaded triangles} & \mbox{Total No. of shaded and unshaded triangles} \\ \hline 1 & 4 & 3 & 7 \\ \hline 2 & 9 & 5 & 14 \\ \hline 3 & 16 & 7 & 23 \\ \hline \end{array} (a) A figure in the pattern has a total of 529 shaded triangles. What is the Figure Number?
(b) Another figure in the pattern has a total of 63 unshaded triangles. What is the total number of shaded and unshaded triangles in this figure?
(a) Figure Number 22 (since $23 \times 23 = 529$)
(b) $63 = 2 \times 31 + 1 \rightarrow$ Figure 31; total number of triangles $= 32 \times 32 + 63 = 1087$
Two types of square-shaped tiles, tile 1 and tile 2 are available to make a larger pattern on the floor. The pattern of each square-shaped tile is shown below.
Tile 1 is made up of 3 white squares and 1 black square.
Figure 1 shows a floor laid with Tile 1 and Tile 2 in a repeated pattern.
(a) 90 pieces of Tile 1 were used to cover part of the floor in the room in the pattern shown in figure 1. Find the total number of tiles needed to tile the floor in figure 1.
(b) What percentage of the floor in figure 1 was covered with black squares?
(b) 10%
(a) 1 row $\rightarrow$ 2 pieces of Tile 1 and 3 pieces of Tile 2
Number of rows $\rightarrow$ $90 \div 2 = 45$
Number of tiles $\rightarrow$ $45 \times 5 = 225$
225 tiles were needed.
(b) $8 \div (4 \times 5) \times 100\% = 40\%$
3. Zach used some white and gray tiles to form some patterns. The first four patterns are shown below. The table below shows the number of white and gray tiles used to form the patterns. \begin{array}{|c|c|c | C |} \hline \mbox{Pattern Number} & \mbox {No. of Grey tiles} & \mbox{No. of white tiles} & \mbox{Total No. of tiles} \\ \hline 1 & 2 & 2 & 4 \\ \hline 2 & 5 & 4 & 9 \\ \hline 3 & 8 & 8 & 16 \\ \hline 4 & 13 & 12 & 25 \\ \hline 5 & & & \\ \hline \end{array}
(a) How many tiles were used to form pattern 80?
(b) How many grey tiles were used to form pattern 120?
(a) 6561 $\rightarrow$ tiles $= 81 \times 81 = 6561$
(b) 7321 $\rightarrow$ grey tiles $= (121 \times 121 + 1) \div 2 = 7321$
How many circles are there in Pattern 25?
Study the following pattern.
(a) In which column will the number 80 appear?
(b) What number will appear in Row 99 Column D?
(a) Column C
(a) Column D
(a) Column E
(a) Column F
(b) $99 – 1 = 98$
$98 \div 2 = 49$
$49 \times 7 = 343$
$343 + 2 = 345$
A repeated pattern is formed using the 4 letters A, B, C and D. The first 26 letters are shown in below figure.
How many 'D' are there in the first 125 letters?
Farah uses black and white buttons to form figure that follow a pattern. The first four figure are shown below.
(a) A figure in the pattern has a total of 176 black and white buttons What is the Figure Number? \begin{array}{|c|c|c | c | c |} \hline \mbox{Figure Number} & 1 & 2 & 3 & 4\\ \hline \mbox{Number of black buttons} & 0 & 1 & 3 & 6 \\ \hline \mbox{Number of white buttons} & 1 & 4 & 9 & 16 \\ \hline \mbox{Total number of buttons}& 1 & 5 & 12 & 22 \\ \hline \end{array}
(b) A figure in the pattern has 784 white buttons. How many black buttons are there in that figure?
The pattern below shows a series of hexagons which are made using beads and strings. Study the patterns and answer the questions that follow. (a) How many beads are there in Pattern 5?
(b) Which patterns will have 253 beads?
(c) Ahmad wants to make a pattern consisting of 43 hexagons. He has 151 beads. How many more beads does he need?
(a) $5 – 1 = 4$
$4 \times 4 + 5 = 21$
(b) $253 – 5 = 248$
$248 \div 4 = 62$
$62 + 1 = 63$
(c) $43 - 1 = 42$
$42 \times 4 + 5 = 173$
$173 – 151 = 22$
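The bead rule used above, beads in Pattern $n = 4(n-1)+5$, can be checked quickly (a sketch, not part of the original solution):

```python
# Check of the hexagon-bead rule beads(n) = 4*(n - 1) + 5.
def beads(n):
    return 4 * (n - 1) + 5

print(beads(5))                                              # (a) 21 beads
print(next(n for n in range(1, 500) if beads(n) == 253))     # (b) Pattern 63
print(beads(43) - 151)                                       # (c) 22 more beads
```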
Haoming made patterns using triangles, circles and sticks and recorded the pattern in the table shown below. \begin{array}{|c|c|c |} \hline \mbox{Figure Number} & \mbox{Number of Circles} & \mbox{Number of Sticks} \\ \hline 1 & 3 & 3 \\ \hline 2 & 4 & 5 \\ \hline 3 & 5 & 7\\ \hline 4 & 6 & 9 \\ \hline \cdots & \cdots & \cdots \\ \hline 20 & (a) & (b) \\ \hline \cdots & \cdots & \cdots \\ \hline (c) & \cdots & 115 \\ \hline \end{array}
(a) How many circles are needed for Figure 20?
(b) How many sticks are needed for Figure 20?
(c) Which Figure needed a total of 115 sticks?
(a) 22
(a) No. of circles in Figure 20 $\rightarrow$ $20 \times 1 + 2 = 22$
(b) No. of sticks in Figure 20 $\rightarrow$ $20 \times 2 + 1 = 41$
(c) Figure with 115 sticks $\rightarrow$ $(115 - 1) \div 2 = 57$
Study the pattern below and answer the questions, showing your working clearly whenever possible. \begin{array}{|c|c|c | c |} \hline \mbox{Figure No.} & \mbox{No. of rows of square grids} & \mbox{No. of columns of square grids} & \mbox{Area of shaded triangle} \\ \hline 1 & 2 & 3 & 2 \\ \hline 2 & 4 & 5 & 8 \\ \hline 3 & 6 & 7 & 18 \\ \hline 4 & 8 & 9 & ? \\ \hline \end{array} (a) What is the area of the shaded triangle in Figure 4?
(b) What is the number of columns of square grids in Figure 20?
(c) In which figure would the area of the shaded triangle be 2312 square units?
(a) $\frac{1}{2} \times 8 \times 9 = 36$
$36 – 4 = 32$
(b) 41
(c) 34
The following pattern is formed using letters A, B, C, D.
What is the $160^{th}$ letter?
Jeremy arranges 5 letters to form a pattern. The first 4 rows are as shown below. \begin{array}{|c|c|c |} \hline \mbox{Row 1} & AB & CDE\\ \hline \mbox{Row 2} & BA & ECD \\ \hline \mbox{Row 3} & AB & DEC \\ \hline \mbox{Row 4} & BA & CDE\\ \hline \vdots & \vdots & \vdots \\ \hline \end{array} Write the arrangement of the 5 letters in Row 83.
BACDE
ABECD
BAECD
Row 5 AB ECD
Row 6 BA DEC
Row 7 AB CDE
1 set = 6 Rows
$83 \div 6 = 13$ R $5$
ANS: ABECD
John used black and white tiles to create the pattern shown in below figure. Use the patterns that he has created to answer the following questions.
(a) How many tiles will there be in Pattern 15?
(b) Which pattern will be made up of 176 tiles?
(a) 50
(b) 57
(a) $15 - 1 = 14$
$14 \times 3 = 42$
$42 + 8 = 50$
(b) $(176 - 8) \div 3 = 56$
$56 + 1 = 57$
Hatta formed some figures that followed a pattern using squares and circles as shown in figure below.
The table shows the number of squares and circles for the first four figures. \begin{array}{|c|c|c | c | c |} \hline \mbox{Figure Number} & 1 & 2 & 3 & 4 \\ \hline \mbox{Number of squares} & 1 & 4 & 9 & 16 \\ \hline \mbox{Number of circles} & 2 & 3 & 4 & 5 \\ \hline \mbox{No. of squares divided by No. of circles} & 0R1 & 1R1 & 2R1 & 3R1 \\ \hline \end{array} Note: "R" denotes remainder in the above columns.
(a) A Figure has 3481 squares. Find the answer when its number of squares is divided by its number of circles.
(b) In a certain Figure number, 99 R1 is obtained when its number of squares is divided by its number of circles. Find the total number of squares and circles in that Figure number.
(a) 58 R 1
(b) 10101
(a) $\sqrt{3481} = 59$
$3481 \div 60 = 58$ R $1$
(b) $99 + 1 = 100$
$100 \times 100 = 10000$
$100 + 1 = 101$
$10000 + 101 = 10101$ squares and circles in total
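The division pattern can be verified directly (a sketch assuming squares $= n^2$ and circles $= n + 1$ for Figure $n$, as in the table):

```python
# Check of the squares-divided-by-circles pattern: n*n = (n - 1)*(n + 1) + 1.
def divide(n):
    squares, circles = n * n, n + 1
    return divmod(squares, circles)          # (quotient, remainder)

print(divide(59))                            # (a) 3481 squares -> (58, 1), i.e. 58 R 1
n = 100                                      # (b) quotient 99 -> Figure 100
print(n * n + (n + 1))                       # 10000 squares + 101 circles = 10101
```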
A table with 4 columns is filled with numbers in a certain pattern. The first 4 rows of the table are shown in below image. In which row and columns will the number 295 appear?
The following figures are made up of unit cubes stacked at a corner of a room and painted. The first three figures are shown below. \begin{array}{|c|c|c |} \hline \mbox{Figure No.} & \mbox{No. of cubes} & \mbox{No. of faces of the cubes that are painted} \\ \hline 1 & 1 & 3 \\ \hline 2 & 4 & 9 \\ \hline 3 & 10 & 18\\ \hline 4 & (i) & (ii) \\ \hline \end{array} (a) Find the number of cubes and the number of painted faces of the cubes for Figure 4.
(b) In which figure number would 165 faces of the cubes be painted?
(a) (i) 15
The following figures are made up of small squares and dots. Look at the figures below and answer the following questions. \begin{array}{|c|c|c |} \hline \mbox{Figure No.} & \mbox{No. of small squares} & \mbox{No. of dots} \\ \hline 1 & 1 & 4 \\ \hline 2 & 4 & 9 \\ \hline 3 & 9 & 16 \\ \hline \end{array}
(a) Calculate the number of small squares for figure 4.
(b) Calculate the number of dots for figure 10.
(c) 9
(a) $4 \times 4 = 16$
(b) $11 \times 11 = 121$
(c) $\sqrt{256}=16$
Helen uses some toothpicks to form the pattern below.
\begin{array}{|c|c|} \hline \mbox{Pattern} & \mbox{Number of toothpick} \\ \hline 1 & 6\\ \hline 2 & 15\\ \hline 3 & 25\\ \hline 4 & 35\\ \hline 5 & \\ \hline \end{array}
(a) How many toothpicks will she need to form Pattern 5?
(b) How many toothpick will she need to form Pattern 40?
(c) Helen uses 4955 toothpicks to form a Pattern. Which Pattern is it?
(c) 496
(b) 395 $\rightarrow$ Pattern 40 $= 40 \times 10 - 5 = 395$
(c) 496 $\rightarrow$ Pattern number $= (4955 + 5) \div 10 = 496$
Study the number pattern below.
$3 \times 37 = 111$
$\vdots \qquad \vdots$
$G \times 37 = 888$
Find the value that G represents.
The figure which are made up of shaded and unshaded squares follow a pattern as shown below.
(a) Find the number of shaded and unshaded squares in Figure 5. \begin{array}{|c|c|c |} \hline \mbox{Figure Number} & \mbox{Number of shaded squares} & \mbox{Number of unshaded squares} \\ \hline 1 & 2 & 2\\ \hline 2 & 3 & 6\\ \hline 3 & 4 & 12\\ \hline 4 & 5 & 20\\ \hline 5 & (i) & (ii) \\ \hline \end{array}
(b) In which figure is there a total of 256 squares?
(c) A figure in the pattern has a total of 529 shaded and unshaded squares. What is the number of shaded squares in the figure?
(a) i = 2
ii = 10
(b) Figure 9
(b) Figure 12
(a) i =86
(b) $\sqrt{256} = 16$
$16 - 1 = 15 \rightarrow$ Figure 15
(c) $\sqrt{529} = 23$
Study the number pattern below
12, 15, 18, $\cdots\cdots\cdots$, 93, 96, 99.
The pattern is made up of all the 2-digit multiples of 3 written in increasing order.
(a) Find the sum of all the numbers in the pattern.
(b) How many numbers in the pattern do not contain the digit 3?
(a) $12 + 99 = 111$
$15 \times 111 = 1665$
Oliver used identical cubes to build some structures. The first four structures are shown below. For each structure, he first stacked the cubes together and then painted some of the faces of each structure. The shaded faces shown are the faces he painted. The table below shows the number of cubes and the number of faces painted in each structure. \begin{array}{|c|c|c |} \hline \mbox{Structure Number} & \mbox{Number of cubes} & \mbox{Number of faces painted} \\ \hline 1 & 1 & 1 \\ \hline 2 & 10 & 4 \\ \hline 3 & 35 & 9\\ \hline 4 & 84 & 16 \\ \hline 5 & (i) & (ii) \\ \hline \end{array}
(a) Find the number of cubes and Number of faces painted for figure 5
(b) How many cubes do not have any of its faces painted in structure 10?
(a) (i) 165
(b) 1230
(b) $1165 + 165 = 1330$
$1330 – 100 = 1230$
Study the pattern below. The first four figures are shown below.
The table below shows the number of sticks and dots used to form each figure. \begin{array}{|c|c|c |} \hline \mbox{Figure} & \mbox{No. of sticks} & \mbox{No. of dots} \\ \hline 1 & 6 & 5 \\ \hline 2 & 11 & 10 \\ \hline 3 & 16 & 20\\ \hline 4 & 21 & 25 \\ \hline 5 & (i) & (ii) \\ \hline \end{array}
(a) How many dots are used to form figure 12?
(b) Which figure has 612 sticks?
(a) $12 + 1 = 13$
$13 \times 13 = 169$
(b) $2(2n + 2)$ = $2n(n + 1)$
$2n(n + 1) = 612$
$n(n + 1) = 306$, so $n = 17$ (since $17 \times 18 = 306$)
Kenny and Dylan each used some letters to make a set of patterns on rectangular cards as shown below. They make repeated patterns with the cards created.
Which letter will first appear in the same position in both patterns?
Ali uses rods to form figures that follow a pattern. The first five figures are shown in the image below.
(a) The table shown the number of rods used and the number of triangles found in each figure. Complete table for figure 6. \begin{array}{|c|c|c |} \hline \mbox{Figure No.} & \mbox{No. of rods used} & \mbox{No. of triangles} \\ \hline 1 & 6 & 4 \\ \hline 2 & 9 & 4 \\ \hline 3 & 16 & 12\\ \hline 4 & 17 & 8 \\ \hline 5 & 26 & 20 \\ \hline 6 & 25 & \\ \hline \end{array}
(b) How many rods would he use in figure 7?
(c) How many rods would he use in figure 30?
(b) $26 + 10 = 36$
(c) $30 \div 2 = 15$
Study the pattern below. The pattern is made up of Identical triangle tiles.
If the pattern continues, which figure will have a total of 162 triangular tiles?
A table can seat 6 people as shown in figure A. Following the pattern shown below, how many such tables are needed to seat 42 people?
The pattern below is made up of circles and triangles. Study the pattern carefully and answer the questions below.
(a) How many circles are needed to form pattern 5?
(b) How many triangles are needed to form pattern 10?
(c) The number of circles used in pattern X is exactly the same triangles used to form pattern 32. What is X?
(a) $5 + 4 = 9$
(b) $9 \times 9 = 81$
(c) $31 \times 31 = 961$
$962 \div 2 = 481$
Isaac drew some dots and triangles (of different sizes) in a certain pattern. The first four figures are shown below.
(a) Study the below figure and complete the table for figure 5. \begin{array}{|c|c|c |} \hline \mbox{Figure No.} & \mbox{No. of dots} & \mbox{No. of non-overlapping triangles} \\ \hline 1 & 6 & 5 \\ \hline 2 & 11 & 10 \\ \hline 3 & 16 & 20\\ \hline 4 & 21 & 25 \\ \hline 5 & (i) & (ii) \\ \hline \end{array}
(b) In which figure number will there be 230 non-overlapping triangles?
$225 \div 150 = 15$
$15 \times 2 + 1 = 31$
Roy uses the four letters, C, A, R, E to form a pattern. The first 16 letters are shown below. Which letter is in the $59^{th}$ position?
Study the below pattern and find the number of cans in figure 9.
The square of pattern is formed with squares. The first patterns are shown below. How many squares are needed in Pattern 10?
Azlinda formed the pattern below using white grey tiles. Study the pattern carefully.
How many white tiles would Azlinda use to build Pattern 7?
Fabian Used identical square tiles to form a sequence of patterns. The first four patterns are shown in figure below. The vertical height of Pattern 1 is 3cm.
What is the vertical height of Pattern 50?
$25 \times 3 = 75 $
$75 + 1.5 = 76.5cm$
Look at the pattern in figure below.
(a) Complete the table below by finding the total number of tiles for pattern 4. \begin{array}{|c|c|c | c |} \hline \mbox{Pattern No.} & \mbox{No. of unshaded tiles} & \mbox{No. of shaded tiles} & \mbox{Total No. of tiles} \\ \hline 1 & 0 & 1 & 1 \\ \hline 2 & 1 & 2 & 3 \\ \hline 3 & 3 & 3 & 6 \\ \hline 4 & 6 & 4 & ? \\ \hline \end{array}
(b) How many unshaded tiles will there be in pattern number 15?
(a) $6 + 4 = 10$
(b) $1 + 2 + 3 + \cdots + 13 + 14 = 15 \times 7 = 105$
Numbers are written in order beginning from 1 as shown in the below image.
(a) Find the number represented by the letter N.
(b) Find the greatest number in Row 8.
(c) Find the number in the middle of Row 12.
(a) $6 \times 5 + 1 = 31$
(b) Middle number of row 8 $\rightarrow$ $8 \times 7 + 1 = 57$
$8 – 1 = 7$
(c) $12 \times 11 + 1 = 133$
The figure is made up of identical triangles.
(a) Complete the table for layers 5 and 10.
\begin{array}{|c|c|} \hline \\ \mbox{Layer} & \mbox{Number of Triangles} \\ \hline 1 & 1\\ \hline 2 & 3 \\ \hline 3 & 5 \\ \hline 4 & 7 \\ \hline 5 & (i) \\ \hline \vdots & \vdots \\ \hline \\ 10 & (ii) \\ \hline \end{array}
(b) Each small triangle has a base of 4 cm and a perpendicular height of 3 cm. Find the area of all the triangles at the $30^{th}$ layer.
(a) (i) 3
(b) 339cm$^2$
(a) (i) 11
(b) $\frac{1}{2} \times 4 \times 3 = 6$
$2 \times 30 = 60$
$59 \times 6 = 354cm^2$
The pattern below are made up of identical shaded and unshaded squares.
(a) Find the total number of squares in Pattern 4.
(b) Find the total number of shaded squares in Pattern 10.
(c) Find the total number of unshaded squares in Pattern 43.
(a) 81 $\rightarrow$ P1 $\rightarrow$ total: 9, $1 + 2 = 3$
$3 \times 3 = 9$
P4 $\rightarrow$ Total: ?
$4 + 5 = 9$
$9 \times 9 = 81$
(b) 41 $\rightarrow$ $(10 \times 4) + 1 = 41$
(c) 7396 $\rightarrow$ Shaded $43 \times 4 + 1 = 173$
Total $\rightarrow$ $43 + 44 = 87$
$87 \times 87 = 7569$
$7569 - 173 = 7396$
The Structure below are formed using identical solids stacked on top of each other. The height of Figure 1 is 26cm when the solids are stacked two levels high. It is 35cm when the solids are stacked three levels high.
(a) How many levels must the solids be stacked in order for the structure to reach a height of 89cm?
(b) How many solids are needed to form the structure of height 89cm?
$89 – 17 = 72$
Levels above 1 $\rightarrow$ $72 \div 9 = 8$
(b) $1 + 2 + 3 + 4 + \cdots\cdots\cdots 9 = 45$ | CommonCrawl |
Behavior of compressed plasmas in magnetic fields
Gurudas Ganguli, Chris Crabtree (ORCID: 0000-0002-6682-9992), Alex Fletcher & Bill Amatucci
Reviews of Modern Plasma Physics volume 4, Article number: 12 (2020)
Plasma in the earth's magnetosphere is subjected to compression during geomagnetically active periods and relaxation in subsequent quiet times. Repeated compression and relaxation is the origin of much of the plasma dynamics and intermittency in the near-earth environment. An observable manifestation of compression is the thinning of the plasma sheet resulting in magnetic reconnection when the solar wind mass, energy, and momentum floods into the magnetosphere culminating in the spectacular auroral display. This phenomenon is rich in physics at all scale sizes, which are causally interconnected. This poses a formidable challenge in accurately modeling the physics. The large-scale processes are fluid-like and are reasonably well captured in the global magnetohydrodynamic (MHD) models, but those in the smaller scales responsible for dissipation and relaxation that feed back to the larger scale dynamics are often in the kinetic regime. The self-consistent generation of the small-scale processes and their feedback to the global plasma dynamics remains to be fully explored. Plasma compression can lead to the generation of electromagnetic fields that distort the particle orbits and introduce new features beyond the purview of the MHD framework, such as ambipolar electric fields, unequal plasma drifts and currents among species, strong spatial and velocity gradients in gyroscale layers separating plasmas of different characteristics, etc. These boundary layers are regions of intense activity characterized by emissions that are measurable. We study the behavior of such compressed plasmas and discuss the relaxation mechanisms to understand their measurable signatures as well as their feedback to influence the global scale plasma evolution.
The holy grail of much of modern science is the comprehensive knowledge of the coupling between the micro, meso, and macro scale processes that characterize physical phenomena. This is particularly important in magnetized plasmas which typically have a very large degree of freedom at all scale sizes. The statistically likely state involves a complex interdependence among all of the scales. In the geospace plasma undergoing global compression during geomagnetically active periods the multiplicity of spatio-temporal scale sizes is astoundingly large. The statistically likely state has mostly been addressed by global magnetohydrodynamic (MHD) or fluid models, which ignore the contributions from the small-scale processes that can be locally dominant. This was understandable in the past when the early space probes could hardly resolve smaller scale features. Also, single point measurements from a moving platform made in evolving plasma are not ideal for resolving the small-scale details of a fast time scale process. Statistical ensembles generated through measurements from repeated satellite visits in a dynamic plasma washes out many small-scale features that evolve rapidly. Therefore, the need for understanding the contributions from the small-scale processes was not urgent.
However, there are pitfalls in relying on global fluid models alone for an accurate assessment of satellite measurements that essentially represent the local physics. These models ignore the kinetic physics, which often operate at faster time scales at the local level and are necessary for dissipation, which is important for relaxation and feedback to form a steady state that satellites measure. For example, the large-scale MHD models cannot account for the ambipolar effects and hence they are inadequate for the physics at ion and electron gyroscales, which are now being resolved by multi-point measurements from modern space probe clusters, such as NASA's Magnetospheric Multi-Scale Satellite (MMS) (Burch et al. 2016), the Time History of Events and Macroscale Interactions during Substorms (THEMIS) mission Angelopoulos (2008), and the European Space Agency's Cluster mission (Escoubet et al. 1997).
Global scale kinetic simulations that can resolve gyroscales are still not practical. These simulations suffer perennial issues such as insufficient mass ratios, insufficient particles per cell, or use implicit algorithms that ignore the small scale features. Thus, these simulations are incapable of accurately resolving the gyroscales for capturing ambipolar effects, which as we show in Sect. 2, can be critical to the comprehensive understanding of the physics necessary for interpreting satellite observations. With technological breakthroughs in the future, resolution of gyroscales in global models will become possible. It is, therefore, necessary to assess the origin of small-scale processes responsible for relaxation and their feedback mechanisms for a deeper understanding and also to motivate future space missions with improved instrumentation to search for them in nature. The objective of this article is to highlight the fundamental role of plasma compression in the inter-connectedness of physical processes at local and global levels in general, and in particular in the earth's immediate plasma environment through specific examples.
Although the large-scale models are not yet suitable for addressing the smaller scale physics, they are necessary for understanding the global morphology and global transport of mass, energy, and momentum that creates the compressed plasma layers when plasmas of different characteristics interface. In the near term, before first principles kinetic global models become practical, the large-scale fluid models should be extended to include small scale (sub-grid) kinetic physics that is discussed in this article so that the effects of natural saturation and dissipation of compression can be accounted for on a larger scale. Clearly, therefore, the knowledge of large and small scale processes are like the proverbial two sides of a coin, both equally necessary for a comprehensive understanding of the salient physics. Since the role of smaller scale processes was not central to most previous studies we focus our analysis here to their self-consistent origin and their contributions to the overall plasma dynamics. Arguably, small-scale structures will be increasingly resolved by future technologically-advanced space probes, so there is now a need to accurately understand their cause and effect.
Equilibrium structure of compressed plasma layers
To understand the physics of compressed layers it is best to consider specific examples of such layers that arise naturally. Weak compressions, which are characterized by scale sizes much larger than an ion gyrodiameter and affect both the ions and electrons similarly, are not of interest here. Large-scale models can address them. The focus of this article is on stronger compressions, characterized by scale sizes comparable to an ion gyrodiameter or less, which affect ions and electrons differently and lead to ambipolar effects that are beyond the scope of electron-MHD (eMHD) frameworks (Gordeev et al. 1994). To address such conditions, we construct the equilibrium plasma distribution function within the compressed layers and analyze the field and flow structures they support in the metastable equilibrium with self-consistent electric and magnetic fields as well as their inherent spatial and velocity gradients. This specifies the background plasma condition, which can then be used as the basis to study their stability, evolution, and feedback to establish steady-state structures. Such small-scale structures, with scale sizes comparable to ion and electron gyroscales, are being resolved with modern space probes, e.g., Fu et al. (2012).
We use relevant constants of motion to construct the appropriate distribution function subject to Vlasov–Poisson or Vlasov–Maxwell constraints as necessary. Given the background parameters the solutions provide the self-consistent electrostatic and vector potentials, which then fully specify the equilibrium distribution function, \(f_{0}(\mathbf {v},\varPhi _{0}(x),\mathbf {A}(x))\) where \(\varPhi _{0}(x)\) and \(\mathbf {A}(x)\) are electrostatic and vector potentials. In effect, the potentials are Bernstein–Green–Kruskal (BGK) (Bernstein et al. 1957) or Grad–Shafranov (Grad and Rubin 1958; Shafranov 1966) like solutions. With the distribution function fully specified, its moments readily provide the static background plasma features and their spatial profiles. As input parameters, i.e., boundary conditions, we can use the output from global models if they can accurately produce them. But since these layers are on the order of ion gyroradii and smaller, which the current generation global models cannot accurately resolve, we rely on high-resolution in situ observations to obtain the input parameters. Given the boundary conditions we allow the density and the potential to freely develop subject to no constraints except quasi-neutrality. This provides the self-consistent distribution function, as was demonstrated for plasma sheaths by Sestero (1964).
Fig. 1 a Model profile of density at the plasma sheet-lobe interface. b Particle flux data from ISEE 1 (March 31, 1979) versus UT for two energy channels (2 keV and 6 keV), reproduced from Fig. 1 of Romero et al. (1990)
Vlasov–Poisson system: plasma sheet-lobe interface
Consider the compressed plasma layer that is observed at the interface of the plasma sheet and the lobe in the earth's magnetotail region (Romero et al. 1990) as sketched in Fig. 1a. The plasma sheet boundary layer is one of the primary regions of transport in the magnetosphere (Eastman et al. 1984). This layer separates the hot (thermal energies > 1 keV) and dense (density \(\sim 1\hbox {cm}^{-3}\)) plasma of the plasma sheet, which is embedded in closed magnetic field lines of the earth, from the cold (thermal energy \(\sim\) 10's of eV) and tenuous (density \(\sim\) 0.01 cm\(^{-3}\)) plasma in open field lines in the lobe. During geomagnetically active periods, known as substorms, when the coupling of the solar wind energy and momentum to the magnetosphere is strong for southward interplanetary magnetic field, the quantity of magnetic flux and the field strength in the tail lobes increases (Stern 2013; Lui 2013). As the tail lobes grow, increasing stress is transmitted to the near earth plasma sheet and the boundary layer becomes narrow, approaching gyroscales. The narrow boundary layer is characterized by intense broadband emissions (Grabbe and Eastman 1984). Figure 1b shows an example observed by the ISEE satellite in which the layer was around half of an ion gyroradius wide and the density drops by two orders of magnitude across it (Romero et al. 1990).
Derivation of the equilibrium distribution function
To obtain the equilibrium distribution function of such boundary layers we consider the region (see inset in Fig. 1a) where the magnetic field lines are assumed to be in the z direction and the pressure gradient is normal to the magnetic field in the x direction. (Note that this is not the Geocentric Solar Magnetospheric (GSM) coordinate system.) We consider a small region such that the magnetic field lines are nearly straight and the curvature that exists close to the equatorial plane can be neglected. The neglect of the curvature may be justified because its scale size, \(L_{\Vert }\), is much larger than the gradient scale size, \(L_{\perp }\), across the magnetic field, i.e., \(L_{\perp }\sim \rho _i \ll L_{||}\), and \(L_{\Vert }=(\partial \log (B)/\partial s)^{-1}\) where s is the position along the magnetic field line and \(\rho _i\) is the ion gyroradius. Equivalently, the particle gyromotion transverse to the magnetic field is much faster than the motion along the magnetic field (either bounce motion on a closed field line or free streaming time-scale on open field lines). This simplifies the problem by reducing it to essentially one dimension across the magnetic field in the x-direction in which the spatial variation is much stronger than it is along the magnetic field.
To represent the pressure gradient in the x-direction we construct a distribution function using the relevant constants of motion, which are the guiding center position, \(X_g=x+v_y/\varOmega _\alpha\), and the Hamiltonian, \(H_\alpha (x)=m_\alpha v^2/2+q_\alpha \varPhi _{0} (x)\), \(\varOmega _{\alpha }=q_{\alpha } B/(m_{\alpha } c)\) is the cyclotron frequency where the subscript \(\alpha\) represents the species, \(m_\alpha\) is the mass, \(q_\alpha\) is the charge and \(\varPhi _{0}(x)\) is the electrostatic potential, so that it is approximately a Maxwellian far away from the boundary layer on either side:
$$\begin{aligned} f_{0\alpha }(X_{g\alpha },H_{\alpha }(x)) = \frac{N_{0\alpha }}{(\pi v_{t\alpha }^2)^{3/2}} Q(X_{g\alpha }) \exp \left( -\frac{H_{\alpha }(x)}{T_{\alpha }}\right) . \end{aligned}$$
The electron and ion thermal velocity is given by \(v_{t\alpha }\), \(T_{\alpha }=m_{\alpha } v_{t\alpha }^2/2\) is the temperature away from the layer, and \(Q_\alpha\) is the distribution of guiding centers, the shape of which is motivated by the observed density structures across the layer and is given by
$$\begin{aligned} Q_{\alpha }(X_{g\alpha }) = \left\{ \begin{array}{lc} R_{\alpha } &{} X_{g\alpha }< X_{g1\alpha } \\ R_{\alpha } + (S_{\alpha }-R_{\alpha })\left( \frac{X_{g\alpha }-X_{g1\alpha }}{X_{g2\alpha }-X_{g1\alpha }}\right) &{} X_{g1\alpha }<X_{g\alpha }<X_{g2\alpha } \\ S_{\alpha } &{} X_{g\alpha } > X_{g2\alpha }. \end{array} \right. \end{aligned}$$
\(N_{0\alpha } R_\alpha\) and \(N_{0\alpha } S_\alpha\) are the densities in the asymptotic high (plasma sheet) and low-pressure (lobe) regions respectively, but in the transition layer the density and its spatial profile is determined self-consistently. The quantity \(|S_{\alpha }-R_{\alpha } |\) is proportional to the pressure difference between the asymptotic regions and \(|X_{g2\alpha }-X_{g1\alpha } |\) represents the distance over which the pressure changes. These quantities determine the magnitude and the scale-size of the electrostatic potential, which in turn determines the characteristics of the emissions that are excited at the boundary, as elaborated in Sect. 3. Different values of the parameters \(X_{g1\alpha }\) and \(X_{g2\alpha }\) may be chosen to reproduce the observed density profile. Hence, the values of the parameters \(R_{\alpha }\), \(S_\alpha\), \(X_{g1\alpha }\), and \(X_{g2\alpha }\) are model inputs determined from observations. These parameters reflect the global plasma condition, i.e., the compression. Hence, they causally connect the small scale processes to the larger scale dynamics.
The density structure within the boundary layer is obtained in terms of the electrostatic potential as the zeroth moment of the distribution function, Eq. (1):
$$\begin{aligned} n_{0\alpha }(x)\equiv \int f_{0\alpha }(\mathbf {v},\varPhi _{0}(x)) \mathrm{d}^3\mathbf {v} = N_{0\alpha } \frac{(R_{\alpha }+S_{\alpha })}{2}\exp \left( -\frac{e\varPhi _{0}(x)}{T_{\alpha }}\right) I_{\alpha }(x) \end{aligned}$$
$$\begin{aligned} I_{\alpha }(x)= & {} 1\pm \left( \frac{R_{\alpha }-S_{\alpha }}{R_{\alpha }+S_{\alpha }}\right) \left( \frac{1}{\xi _{1\alpha }-\xi _{2\alpha }}\right) \nonumber \\&\times \left\{ \left[ \xi _{2\alpha }\,\text {erf}(\xi _{2\alpha }) -\xi _{1\alpha }\,\text {erf}(\xi _{1\alpha })\right] +\frac{1}{\sqrt{\pi }} \left[ \exp (-\xi _{2\alpha }^2) - \exp (-\xi _{1\alpha }^2)\right] \right\} \end{aligned}$$
\(\text {erf}\) is the error function, \(\xi _{1,2\alpha }=\varOmega _{\alpha } (x-X_{g1,2\alpha })/v_{t\alpha }\), and ± refers to the species charge. The quasi-neutrality, \(\sum _{\alpha } q_{\alpha } n_{0\alpha }(x,\varPhi _{0}(x))=0\) , then determines \(\varPhi _{0} (x)\), which in the limit that the Debye length is smaller than the plasma scale length (which is well satisfied here) is equivalent to solving Poisson's equation. The existence of the transverse electric field reflects the strong spatial variability and nonlocal interactions that exist across the magnetic field due to the difference in the electron and ion distributions with their characteristic spatial variations. With \(\varPhi _{0}\) determined the distribution function is fully specified and higher moments can be obtained. This distribution function satisfies the Vlasov–Poisson system and is similar to the BGK class of solutions.
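As an illustration of how the quasi-neutrality constraint is used in practice, the sketch below (not taken from the original work) solves it numerically for \(\varPhi _{0}(x)\) with a single population per species. Units are normalized (\(T_i=e=B=c=m_i=1\)), the boundary parameters are chosen similar to the single-temperature case listed in the caption of Fig. 2, and the function names and the simple bisection root finder are choices of this sketch only.

```python
# Minimal sketch (assumptions noted above) of solving quasi-neutrality,
# n_i(x, Phi) = n_e(x, Phi), for the electrostatic potential Phi_0(x).
import math

R, S = 1.0, 0.01             # asymptotic guiding-center densities (sheet / lobe)
Xg1, Xg2 = 0.0, 0.2          # guiding-center transition interval (normalized units)
temp = {'i': 1.0, 'e': 1.0}
mass = {'i': 1.0, 'e': 1.0 / 1836.0}
charge = {'i': +1.0, 'e': -1.0}

def I_alpha(x, sp):
    """Spatial factor I_alpha(x) of Eq. (4)."""
    vt = math.sqrt(2.0 * temp[sp] / mass[sp])
    Omega = charge[sp] / mass[sp]                 # q B / (m c) with B = c = 1
    xi1 = Omega * (x - Xg1) / vt
    xi2 = Omega * (x - Xg2) / vt
    bracket = (xi2 * math.erf(xi2) - xi1 * math.erf(xi1)
               + (math.exp(-xi2**2) - math.exp(-xi1**2)) / math.sqrt(math.pi))
    sign = 1.0 if charge[sp] > 0.0 else -1.0      # the +/- of Eq. (4)
    return 1.0 + sign * (R - S) / (R + S) * bracket / (xi1 - xi2)

def density(x, phi, sp):
    """Eq. (3) with N_0alpha = 1 and the Boltzmann factor exp(-q Phi / T)."""
    return 0.5 * (R + S) * math.exp(-charge[sp] * phi / temp[sp]) * I_alpha(x, sp)

def phi0(x, lo=-20.0, hi=20.0, tol=1e-10):
    """Bisection root of the residual n_i - n_e at position x."""
    f = lambda p: density(x, p, 'i') - density(x, p, 'e')
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Self-consistent potential profile across the layer (x in normalized units):
profile = [(x / 10.0, phi0(x / 10.0)) for x in range(-20, 31)]
```

The ambipolar electric field then follows from \(E_x=-\partial \varPhi _0/\partial x\), for example by finite differencing the computed profile.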
As in the previous studies (Romero et al. 1990; Ganguli et al. 1994), the temperature variation across the layer is ignored in the above. However, there is a temperature gradient between the plasma sheet and the lobe that can affect the static background properties. The effects of the temperature gradient can be accounted for by considering two different types of plasma population characterized by their respective temperature and density in the asymptotic regions of the plasma sheet and the lobe, assuming isothermal conditions in both regions away from the boundary layer. While the plasma sheet population, denoted by subscript ps, goes to zero in the lobe, achieved by setting \(R_{\alpha ,\mathrm{ps}}=1\) and \(S_{\alpha ,\mathrm{ps}}=0\), the lobe population, denoted by subscript l, does just the opposite in the same interval \(|X_{g2\alpha }-X_{g1\alpha } |\) by setting \(R_{\alpha ,l}=0\) and \(S_{\alpha ,l}=1\).
To obtain \(\varPhi _{0}\) quasi-neutrality must be maintained between all populations, that is
$$\begin{aligned} \sum _{\alpha } q_{\alpha } \left( n_{0\alpha ,\mathrm{ps}}(x,\varPhi _{0}(x)) + n_{0\alpha ,l}(x,\varPhi _{0}(x))\right) =0. \end{aligned}$$
The assumption that the transition in both density and temperature takes place in the same interval is for simplicity and can be relaxed. If the intervals differ somewhat, then the details of the spatial variation in the potential profile can be affected. However, these are higher level details and may not be observable due to averaging by the waves that are spontaneously generated by the highly non-Maxwellian distribution functions that develop as we elaborate in Sect. 3.
In addition to the transverse electric field, the interface between the plasma sheet and the lobe is also characterized by ion and electron bi-directional beams along the magnetic field (Takahashi and Hones 1988). In Sect. 2.2.3, we argue that the origin of these beams could be related to the curvature in the magnetic field around the equatorial region, which we ignored here, and not necessarily due to the reconnection process as is usually assumed.
Comparison between an equilibrium with a single uniform temperature, labeled 1 in the figure, and an equilibrium with a uniform temperature to the left of the layer and a different uniform temperature to the right of the layer, labeled 2 in the figure. a Density of two models. b Temperatures across the layer for model 2. c Electrostatic potential for both models. d Pressures across the layer for both models. For both models the parameters are as follows \(X_{g1i,e}=0,0\), \(X_{g2i,e}/\rho_{i}=0.2,0.2\), \(R_{i,e}=1.0,1.0\), \(S_{i,e}=0.01,0.01\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\) and for the two temperature model the temperature to the right of the layer is \(T_{e,i1}/T_{e,i}=0.1,0.1\)
Equilibrium features
To understand the effects of a temperature gradient in the boundary layer we first consider a case where the density and the temperature gradients are in the same direction and then in the opposite direction. Figure 2 is a comparison of the attributes for an equilibrium with only one temperature as was analyzed in Romero et al. (1990) and the two temperature model as described in Sect. 2.1.1, i.e., different populations in the lobe and the plasma sheet each characterized by their respective temperature and density. The temperature gradient of both populations is in the same direction as the density gradient. This example underscores the kinetic origin of the equilibrium electric field. In the two temperature model, the temperature reduces by a factor of 50 going from the high density side to the low density side, thus the total pressure drop from plasma sheet to lobe is larger. From a fluid (eMHD) perspective one would expect that the larger pressure gradient must induce a larger electric field to maintain the pressure gradient; however, as one sees in panel (c), this is not the case. The electrostatic potential and the magnitude of the electric field are reduced. This is because the ambipolar effect, which scales as \((\rho _{i}-\rho _{e})\) averaged over the distribution, has been reduced by the decrease in the temperature, as ambipolar effects vanish with temperature. The x-axis of both plots is normalized to the constant thermal ion gyroradius calculated to the left of the layer. However, in the two temperature model the actual thermal ion gyroradius decreases by a factor of \(\sqrt{T_{l}/T_{\mathrm{ps}}}\simeq 0.2\), where \(T_{l}\) is the temperature of the lobe plasma and \(T_\mathrm{ps}\) is the temperature of the plasma sheet. This means that the ratio of the ion to electron gyroradius has decreased and thus the kinetic source of the electrostatic potential has reduced.
Generation of an electric field by a temperature gradient and no imposed density gradient. a Density and Temperatures across the layer. b Electrostatic potential. c Flow velocities normalized to the ion thermal velocity defined to the left of the layer. The parameters are as follows \(X_{g1i,e}=0,0\), \(X_{g2i,e}/\rho_i=0.2,0.2\), \(R_{i,e}=1,1\), \(S_{i,e}=1,1\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\) and the temperatures to the right of the layer is \(T_{e,i1}/T_{e,i}=0.1,0.1\)
As further illustration of the ambipolar effect we show an extreme case in Fig. 3, where we have chosen the asymptotic density to be the same on either side of the layer by choosing the distribution of the guiding centers, \(Q(X_{g})\), to be a constant but have allowed the temperature to fall from \(T_{e0}\) to \(0.05T_{e0}\) across the layer. We can see that the temperature gradient creates a change in the difference between the ion and electron gyroradius which generates the ambipolar electric field, and the density in the layer adjusts to accommodate the ambipolar potential even though the guiding center distribution is constant. We note that in this case there is a clear electron flow channel within the layer mostly due to \(E\times B\) drift and sheared flow in both the ions and electrons that can be the source of instabilities as discussed in Sect. 3. This also implies that for the temperature gradient driven modes (Rudakov and Sagdeev 1961; Pogutse 1967; Coppi et al. 1967) the effect of the self-consistent electric field must be examined.
Bulk plasma flows in narrow layers
It is important to understand the origin and nature of the flows and currents in the compressed plasma layers because they are the sources of free energy for waves that determine the nonlinear evolution of the layers. The bulk flow characteristics change as the layer widths become less than an ion gyrodiameter. The flows are associated with the density and temperature gradients and the ambipolar electric field that develop in the layer as a consequence of the compression. The resulting \(E\times B\) drift may not be identical for the electrons and the ions as we elaborate in the following.
From the Vlasov equation we can calculate the equilibrium momentum balance and using the geometry of our equilibrium we can solve for the fluid (or bulk) flow in the y direction as
$$\begin{aligned} V_{\alpha } = \frac{-cE_x}{B} + \frac{c}{B}\frac{1}{q_{\alpha } n_{\alpha }}\frac{\mathrm{d} P_{\alpha xx}}{\mathrm{d}x} \end{aligned}$$
where the first term is the \(E\times B\) drift, \(V_\mathrm{E}\), and the second term is the diamagnetic drift, \(V_{\nabla \mathrm{p}}\). While this relationship is completely general for this geometry and applies to fluid and kinetic plasmas, the relative strength of each drift may vary between fluid and kinetic approaches. This is because individual particle orbits are important in the kinetic approach but not in the fluid approach. It is especially important in narrow layers when the particle orbits become species dependent (Sect. 3) and the ambipolar effects dominate the physics. This leads to unique static background conditions, which influence the dynamics and hence the observable signatures, as we shall see in Sects. 3 and 4.
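For any set of equilibrium profiles (whether from the model above or from measurements), the two contributions in Eq. 6 can be evaluated by straightforward numerical differentiation. The short sketch below is schematic: the tanh/Gaussian profiles are placeholders rather than self-consistent solutions, and a uniform magnetic field in Gaussian-like normalized units is assumed.

```python
import numpy as np

def drifts_y(x, n, phi, p_xx, q, B, c=1.0):
    """Evaluate the two terms of Eq. (6) on a one-dimensional grid.

    x, n, phi, p_xx : arrays of position, density, potential, and P_xx
    q, B            : species charge and (uniform) magnetic field
    Returns (V_ExB, V_diamagnetic)."""
    Ex = -np.gradient(phi, x)                           # E_x = -dPhi_0/dx
    V_E = -c * Ex / B                                   # E x B drift
    V_dp = (c / (q * B)) * np.gradient(p_xx, x) / n     # diamagnetic drift
    return V_E, V_dp

# Illustrative profiles only (placeholders for the self-consistent ones)
x = np.linspace(-2.0, 2.0, 400)
n = 0.5 * (1.01 - 0.99 * np.tanh(x / 0.2))
phi = 0.3 * np.exp(-(x / 0.3) ** 2)
V_E, V_dp = drifts_y(x, n, phi, p_xx=n * 1.0, q=1.0, B=1.0)
V_total = V_E + V_dp                                    # net fluid flow of the species
```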
In Fig. 4, we show the fluid flows in panel (a), the electron drift components in panel (b), and the ion drift components in panel (c) for the case presented in Fig. 2 with no temperature gradient. The layer width is larger than the electron gyroradius but smaller than the ion gyroradius. Note that the fluid velocity of the electrons is far larger than that of the ions. In addition, the electron \(E\times B\) drift and the diamagnetic drift are in the same direction within the layer whereas for the ions these drifts are in the opposite direction. When the ion drift components are combined they mostly cancel within the layer and the net ion fluid flow becomes negligible compared to that of the electrons. Thus the Hall current is mostly generated by the electron flows and localized over electron scales. This can be understood in the following way. The ions have a large gyroradius compared to the scale size of the electric field and the density gradient. Therefore, the orbit-averaged \(E\times B\) drift experienced by the ions is a fraction of what is expected from the zero gyroradius limit. This shows up in a fluid representation as in Eq. 6, by the development of a fluid diamagnetic drift component in the opposite direction to reduce the net ion flow. Note that for broader layer widths larger than an ion gyrodiameter the ambipolar electric field will be negligible and the net current will be due to electron and ion diamagnetic drifts in the opposite directions.
Comparison of fluid flows and drift velocities. a Electron and ion flows, b electron drifts, c ion drifts. The parameters are as follows \(X_{g1i,e}=0,0\), \(X_{g2i,e}/\rho_i=0.2,0.2\), \(R_{i,e}=1.0,1.0\), \(S_{i,e}=0.01,0.01\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\)
The kinetic origin of the electric field arising from compression of a plasma is shown in Fig. 5 for narrow layers of widths comparable to the ion gyroradius but larger than an electron gyroradius. In this figure, we keep all parameters of the equilibrium the same but vary the width of the layer \(\delta x=X_{g1}-X_{g2}\), over which the density changes by a factor of 100. As we decrease the layer width the maximum electric field seen in the layer increases (as one would expect from fluid theory) until the layer width gets below the ion gyroradius and then saturates asymptotically. The ambipolar electric field becomes strong when the density gradient scale size, \(L_n\), becomes less than an ion gyrodiameter. Consequently, on average there are insufficient electrons, with much smaller gyroradii, to charge neutralize the ions over their large gyro-orbit. As a result, a charge imbalance is generated proportional to (\(\rho _i-\rho _e\)) averaged over the distribution, which leads to the electric field. As \(\delta x\) reduces, this imbalance increases because there are fewer electrons that can overlap the larger extent of the ion orbit. When \(\delta x\) falls below an ion gyroradius then there are hardly any electrons that can do the job and, as a result, the value of the averaged (\(\rho _i-\rho _e\)) reaches saturation asymptotically. Hence, the electric field saturates and its scale size, L, becomes independent of \(L_{n}\). In contrast, in a fluid model (e.g., eMHD) \(L/L_n=1\) remains valid throughout the layer, even as \(L_n\rightarrow 0\), because the electric field is directly proportional to the density gradient for constant temperature. The proportionality of the electric field with the pressure gradient breaks down as the ambipolar electric field saturates for gradient scales smaller than an ion gyroradius.
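The orbit-averaging argument can be illustrated with a few lines of code: an ion with gyroradius \(\rho _i\) responds to the electric field averaged over its gyro-orbit, so once the field scale L shrinks below \(\rho _i\) the effective field felt by the ion grows much more slowly than the local peak field. The Gaussian field profile and parameter values below are assumptions made purely for illustration.

```python
import numpy as np

def gyro_averaged(E, X, rho, n_phase=256):
    """Leading-order average of E_x over a circular orbit of radius rho centered at X."""
    phases = 2.0 * np.pi * np.arange(n_phase) / n_phase
    return np.mean(E(X + rho * np.cos(phases)))

rho_i = 1.0
for L in [2.0, 1.0, 0.5, 0.2, 0.1]:
    E = lambda x, L=L: np.exp(-(x / L) ** 2)            # assumed field, peak normalized to 1
    print(L, gyro_averaged(E, X=0.0, rho=rho_i))        # effective field weakens once L < rho_i
```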
Maximum electric field as a function of the layer width normalized to the ion gyroradius. The ion and electron layer locations are the same. The parameters are as follows \(R_{i,e}=1.0,1.0\), \(S_{i,e}=0.01,0.01\), \(T_e/T_i=1\), and \(m_i/m_e=1836.0\)
In Fig. 6, we consider the case in which the temperature and the density gradients in the transition layer are in the opposite directions. We model this by two plasma populations on either side of the layer with characteristic density and temperatures. While the guiding center density (i.e., \(Q(X_g)\)) of the low temperature population on the left of the transition region drops by a factor of two across a layer that has a width of \(\delta x=0.2\rho _{i}\), the guiding center density of the high temperature population on the right of the layer rises by a factor of 2 in the same interval. The pressures are the same in the asymptotic regions to the left and the right of the layer. Quasi-neutrality determines the details of the spatial variation of the density and temperature of each species in the layer. Panel (a) shows the electron and ion pressures and the densities. One can see that the ion pressure falls across the layer, while the electron pressure rises. This can be understood in the following way. Since the layer width is much larger than the electron gyroradius the populations on the left and right effectively mix only within the layer. While the electron temperature increases across the layer, the density falls. However, the density does not fall as much as the guiding center density because the reduction is partly compensated by the ambipolar electric field. Consequently, the electron pressure inside the layer rises. Since the ion gyroradius is much larger than the layer width the ions effectively mix on a scale larger than the layer width. So the ion temperature change across the layer is much smaller than that of the electrons. However, quasi-neutrality forces the ion density to be identical to that of the electrons, which decreases across the layer from left to right. The combination of these two effects lowers the ion pressure in the layer. Panel (b) shows that the net electron fluid flow dominates the net ion flow. The individual drift components are plotted in panels (c) and (d). For both the ions and the electrons, the \(E\times B\) and diamagnetic drifts are in opposite directions. In contrast, Fig. 4 showed that in the absence of a temperature gradient the electron \(E\times B\) and diamagnetic drifts were in the same direction. This was because both the ions and electrons experienced the identical pressure gradient within the layer. In this case, from panel (a) in Fig. 6, we see that the electron and ion pressure gradients are in the opposite directions within the layer even though asymptotically the pressure is constant on either side of the layer.
Equilibrium where the guiding center density falls by a factor of two from the left to the right and the temperature in the right asymptotic region is twice as high as the temperature in the left asymptotic region. a Density and Pressures. b Density and Temperatures. c Electron and ion fluid velocities. d Electron drift velocities normalized to the electron thermal velocity defined to the left of the layer. e Ion drift velocities normalized to the ion thermal velocity defined to the left of the layer. The parameters are as follows \(X_{g1i,e}=0,0\), \(X_{g2i,e}/\rho_{i}=0.2,0.2\), \(R_{i,e}=1,1\), \(S_{i,e}=0.5,0.5\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\) and the temperatures to the right of the layer is \(T_{e,i1}/T_{e,i}=2.0,2.0\)
Equilibrium where the guiding center density falls by a factor of two and the ion temperature in the right asymptotic region is twice as high as the temperature in the left asymptotic region while the electron temperature differs only by a factor of 1.5. a Density and Pressures. b Density and temperatures. c Electron and ion fluid velocities. d Electron drift velocities normalized to the electron thermal velocity defined to the left of the layer. e Ion drift velocities normalized to the ion thermal velocity defined to the left of the layer. The parameters are as follows \(X_{g1i,e}=0,0\), \(X_{g2i,e}/\rho_{i}=0.2,0.2\), \(R_{i,e}=1,1\), \(S_{i,e}=0.5,0.5\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\) and the temperatures to the right of the layer are \(T_{e,i1}/T_{e,i}=2.0,1.5\)
While setting the asymptotic pressure to be equal on either side of the layer was not a sufficient condition to avoid the production of a pressure gradient in the layer, by reducing the asymptotic electron temperature (i.e., pressure) on one side it is possible to create a region where the electron pressure is almost constant across the layer. We illustrate this in Fig. 7. In this case the electrons have only a small diamagnetic drift, as can be seen in panel (c), even though, asymptotically, there is a pressure difference. From panel (d) we see that the ion \(\mathbf {E}\times \mathbf {B}\) and the diamagnetic drift cancel each other leading to negligible net ion flow as seen in panel (b). Thus, the net flow within the layer is primarily due to the electron \(\mathbf {E}\times \mathbf {B}\) drift. This shows that depending on the boundary condition, as in this case with different pressures in the asymptotic regions, it is possible to generate a layer with no diamagnetic current but an electron Hall current. This is typically the situation in dipolarization fronts, as we shall discuss in Sect. 2.2 (see also Fu et al. 2012). Also, as we will see in Sect. 3, this condition can lead to waves around the lower hybrid frequency driven by the gradient in the electron \(\mathbf {E}\times \mathbf {B}\) flow that can be misinterpreted as the lower hybrid drift instability but that result in a different, measurable nonlinear state. Interestingly, the eMHD description of such layers with a negligible pressure gradient would predict a stable condition. This underscores the importance of the kinetic details of compressed plasma layers for accurately analyzing satellite data and assessing the salient physics. Satellites measure the local physics that operates in the layers, where the fluid concept does not hold.
Vlasov–Maxwell system: dipolarization fronts
In Sect. 2.1, we considered compressed plasmas in which electromagnetic corrections could be ignored. This may not be possible for all compressed plasma systems, especially when the ratio of the plasma kinetic pressure to the magnetic pressure, \(\beta\), is large, such as in a dipolarization front (DF) (Nakamura et al. 2002a, 2009; Runov et al. 2009). The typical geometry of a DF is sketched in Fig. 8. DFs are observationally characterized by a rapid rise in the northward component of the magnetic field, a large earthward flow velocity, a sharp drop in the plasma density, and the onset of broadband wave activity (Deng et al. 2010). These changes in plasma parameters are due to a flux tube rapidly propagating past the observing spacecraft. DFs are often observed during bursty bulk flow (BBF) events (Angelopoulos et al. 1992; Runov et al. 2009), during which large-scale magnetic flux tubes that have been depleted of plasma by some event (likely transient reconnection) propagate rapidly towards the Earth so that the quantity \(pV^{5/3}\) (Chen and Wolf 1993) is equalized with that of the plasma surrounding the transported flux tube, where p is the plasma thermal pressure and V is the flux tube volume. Flux tubes that have been depleted more than neighboring flux tubes will have a larger earthward velocity, leading to a compression of the plasma at the edge as the faster moving flux tube overtakes the slower moving flux tube (see Fig. 9). This compression maintains the plasma gradients in a narrow layer with widths comparable to an ion gyroradius or smaller as the flux tube propagates Earthward. A kinetic equilibrium solution to the Vlasov–Maxwell system is necessary since the change in the magnetic field by compression in DFs can be sufficiently large, especially in high \(\beta\) plasmas (Fletcher et al. 2019).
Equatorial dipolarization front geometry
Profile of \(PV^{5/3}\) in typical magnetotail. Some event depletes flux tubes with some maximum depletion. The earthward speed of the DF is proportional to \(\varDelta PV^{5/3}\) which causes the front to steepen as it propagates
To address such conditions the model discussed in Sect. 2.1 can be generalized to include the electromagnetic effects by considering the Vlasov-Maxwell set of equations instead of the Vlasov–Poisson system of Sect. 2.1 as shown below:
$$\begin{aligned}&\mathbf {v}\cdot \mathbf {\nabla }_{r} f_{\alpha }(\mathbf {r},\mathbf {v}) + \frac{q_{\alpha }}{m_{\alpha }}\left( \mathbf {E} + \frac{\mathbf {v}\times \mathbf {B}}{c}\right) \cdot \mathbf {\nabla }_{v} f_{\alpha }(\mathbf {r},\mathbf {v}) = 0, \nonumber \\&\mathbf {\nabla }\cdot \mathbf {E} = \sum _{\alpha } 4\pi q_{\alpha } \int \mathrm{d}^3 \mathbf {v} f_{\alpha }(\mathbf {r},\mathbf {v}), \nonumber \\&\mathbf {\nabla }\times \mathbf {B} = \frac{4\pi }{c} \sum _{\alpha } q_{\alpha } \int \mathrm{d}^3 \mathbf {v} \,\mathbf {v} f_{\alpha }(\mathbf {r},\mathbf {v}), \end{aligned}$$
In the frame of the DF propagating towards the Earth, the variation in the normal direction (with scale size of an ion gyroradius) is orders of magnitude stronger than in the orthogonal directions. Hence, for small scale physics, it becomes essentially a one-dimensional model, similar to the plasma sheet lobe interface discussed in Sect. 2.1. The local magnetic field is in the z direction and varies in the x direction, i.e. \(\mathbf {B}=B(x)\mathbf {e}_z\), while a nonuniform electric field also varies in the x direction, i.e., \(E_{x}(x)\), as sketched in Fig. 8. We introduce a vector potential, \(\mathbf {A}\), where \(\mathbf {B}=\mathbf {\nabla }\times \mathbf {A}\) and \(\mathbf {A}=A(x)\mathbf {e}_y\). The Hamiltonian is
$$\begin{aligned} H_{\alpha }(x) = \frac{p_x^2}{2m_{\alpha }} +\frac{1}{2m_{\alpha }}\left[ p_y - \frac{q_{\alpha }}{c}A(x)\right] ^2 + \frac{p_z^2}{2m_{\alpha }} + q_{\alpha }\varPhi (x) \end{aligned}$$
where \(p_x\), \(p_y\), and \(p_z\) are the canonical momenta. The Hamiltonian only depends on x and is independent of t, y, and z so H, \(p_y\), and \(p_z\) are constants of motion, where \(p_y=m_{\alpha }v_y+m_{\alpha }\varOmega _{\alpha } a(x)\). Since the system has only one degree of freedom, the dynamics are completely integrable. With \(a(x)=A(x)/B_0\), where \(B_0\) is the upstream background magnetic field, it follows that the guiding center position:
$$\begin{aligned} X_{g\alpha }=\frac{p_y}{m_{\alpha }\varOmega _{\alpha }} = a(x)+\frac{v_y}{\varOmega _{\alpha }} \end{aligned}$$
is a constant of motion as well.
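It is straightforward to verify this invariant numerically for test-particle motion in fields of this geometry. The sketch below uses assumed field profiles and normalized units (\(q=m=c=B_0=1\), so \(\varOmega =1\)); the specific functional forms are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Verify that X_g = a(x) + v_y/Omega is conserved for motion in B = B_z(x) e_z, E = E_x(x) e_x.
Bz = lambda x: 1.0 + 0.3 * np.tanh(x)           # assumed nonuniform magnetic field
a  = lambda x: x + 0.3 * np.log(np.cosh(x))     # vector potential a(x) with da/dx = B_z/B_0
Ex = lambda x: 0.2 * np.exp(-x ** 2)            # assumed nonuniform electric field

def eom(t, y):                                  # equations of motion for (x, v_x, v_y), Omega = 1
    x, vx, vy = y
    return [vx, Ex(x) + vy * Bz(x), -vx * Bz(x)]

sol = solve_ivp(eom, (0.0, 200.0), [0.0, 0.0, 1.0], max_step=0.01, rtol=1e-10, atol=1e-10)
x, vx, vy = sol.y
Xg = a(x) + vy                                  # v_y/Omega with Omega = 1
print(Xg.min(), Xg.max())                       # constant to within the integration error
```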
The construction of the distribution function is similar to that described in Sect. 2.1.1, except that we now obtain the moments as a function of a(x) and then solve a(x) as a function of x to obtain the spatial profiles of the parameters of interest (Fletcher et al. 2019). Similarly, the constants \(X_{g1,2\alpha}\) become \(a_{1,2\alpha}\). The moments of the distribution provide the physical attributes of the equilibrium configuration, in particular their spatial variations. The zeroth moment (density) is
$$\begin{aligned} n_{\alpha }(a) \equiv \left\langle f_{0\alpha }\right\rangle = \int \mathrm{d}^{3}\mathbf {v}\,f_{0\alpha }(\mathbf {v},\varPhi _0(a)) = N_{0\alpha }\frac{(R_{\alpha }+S_{\alpha })}{2} \exp \left( -\frac{q_{\alpha }\varPhi _{0}(a)}{T_{\alpha }}\right) I_{\alpha }(a) \end{aligned}$$
Note the dependence of various quantities on a(x) in Eq. 10, instead of just x as in Sect. 2.1; a(x) will be determined from the first moment (i.e., the current density). The electrostatic potential is found via quasineutrality, \(n_e\simeq n_i\), as before:
$$\begin{aligned} \varPhi _{0}(a) = \frac{T_e T_i}{q_e T_i - q_i T_e} \log \left[ \frac{N_{0e}(R_e+S_e)I_e}{N_{0i}(R_i+S_i)I_i}\right] \end{aligned}$$
Because \(\nabla n\ne 0\), \(\nabla B\ne 0\), and the electric field, \(\mathbf {E}=-\nabla \varPhi _0(a)\), all point in the x direction, the only nonzero component of the flow is in the y direction. The flow is
$$\begin{aligned} u_{y\alpha }(a)\equiv \left\langle v_y f_{0\alpha } \right\rangle /n_{\alpha }= & {} \frac{1}{n_{\alpha }} \int \mathrm{d}^{3}\mathbf {v}\, v_y f_{0\alpha }(\mathbf {v},\varPhi _0(a)) \nonumber \\= & {} \pm \frac{ \exp \left( \frac{-2q_{\alpha }\phi (a)}{m_{\alpha } v_{t\alpha }^2}\right) N_{0\alpha } (R_{\alpha }-S_{\alpha }) v_{t\alpha } \left[ \text {erf}(\xi _{2\alpha })-\text {erf}(\xi _{1\alpha })\right] }{4n_{\alpha } (\xi _{1\alpha }-\xi _{2\alpha })} \end{aligned}$$
and includes the diamagnetic drift, \(\nabla B\) drift and \(\mathbf {E}\times \mathbf {B}\) drift.
The magnetic field produced by the current density inherent in the equilibrium distribution function is found from Ampere's law:
$$\begin{aligned} \frac{\mathrm{d}B_z}{\mathrm{d}x}=-\frac{4\pi }{c} j_y, \end{aligned}$$
where \(j_y=\sum _{\alpha }q_{\alpha } n_{\alpha } u_{y\alpha }\) is the current density. With \(B_z\), the vector potential is found via
$$\begin{aligned} \frac{\mathrm{d}a}{\mathrm{d}x}=\frac{B_z}{B_0}. \end{aligned}$$
with appropriate initial conditions. Equations 13 and 14 effectively form the Grad–Shafranov equation and may not have a readily apparent closed-form solution but can be integrated numerically. The current density in Ampere's law can be written explicitly as a function of the vector potential a(x). Thus we can numerically solve Eqs. 13 and 14 for the function a(x), which then provides a mapping to x. All plasma parameters that have been determined as a function of a(x) can now be found as a function of x. An electrostatic approximation is equivalent to specifying a(x) explicitly (e.g., for a uniform magnetic field, \(a(x)=x\)).
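A minimal integration sketch for Eqs. 13 and 14 is shown below. The function jy_of_a is a placeholder standing in for the current density assembled from the moments (Eqs. 10 and 12); normalized units with \(B_0=1\) and \(4\pi /c=1\) are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

def jy_of_a(a):
    """Placeholder for the current density j_y(a) built from the moments of f_0."""
    return -0.1 * np.tanh(a)                    # illustrative shape only

def rhs(x, y):
    a, Bz = y
    return [Bz,                                 # Eq. (14): da/dx = B_z/B_0
            -jy_of_a(a)]                        # Eq. (13): dB_z/dx = -(4*pi/c) j_y

# Integrate from one side of the layer with chosen reference values a(0) and B_z(0)
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0], max_step=0.01, dense_output=True)
a_of_x, Bz_of_x = sol.y                         # a(x) provides the mapping back to x
```

Once a(x) is known, any quantity tabulated as a function of a (density, potential, flow) is mapped to x by simple composition.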
We can continue and consider higher order moments. For the pressure tensor all off-diagonal terms vanish and \(p_{\alpha xx}= p_{\alpha zz}=n_{\alpha } T_{\alpha }\). The remaining component, \(p_{\alpha yy}\), which we do not repeat here, involves an integral over \(v_y\) and can be performed in a manner similar to Eq. 12.
Electromagnetic effects on equilibrium. a Magnetic field for different values of \(\beta _{e}\). b Density. c Maximum electric field seen over the layer as a function of \(\beta _{e}\). d Vector potential as a function of position. The legend in panel (d) refers to panels (a, b, and d). The parameters are as follows \(a_{1i,e}=0,0\), \(a_{2i,e}/\rho_{i}=0.2,0.2\), \(R_{i,e}=1.0,1.0\), \(S_{i,e}=0.01,0.01\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\)
Electromagnetic correction to the equilibrium distribution function
Figure 10 shows the electromagnetic effects on the static background structure. To illustrate the difference we choose the input parameters to be the same as in Fig. 2 but we increase \(\beta _{e}\). As seen from panels (a, c), the electric and magnetic fields increase with \(\beta _{e}\). Panel (b) indicates that the density gradient steepens with increasing \(\beta _{e}\), which explains the increase in the electric field. Panel (d) shows that as long as \(\beta _{e}\) is less than unity the electromagnetic effects on static structures are minimal. Hence, the use of the simpler electrostatic model of Sect. 2.1.1 to understand the static background features is sufficient. However, in dipolarization fronts higher \(\beta _{e}\) is typical. Ganguli et al. (2018) and Fletcher et al. (2019) have analyzed the MMS data in detail and illustrated the difference between the electrostatic and electromagnetic models for a specific observation.
Effects of magnetic field curvature: generation of parallel electric field
Geometry along the magnetic field line of a DF. In a typical DF the variation of plasma parameters across the magnetic field is stronger than the variation along the magnetic field which reduces the problem to 1D. Since the plasma parameters (T,|B|) are different at the two points the electrostatic potential assumes different values, which leads to a potential difference (\(\varPhi _{0,2} -\varPhi _{0,1}\)) along the magnetic field causing the parallel electric field
In the above discussion of the equilibrium structure of a DF, we considered the stronger variation normal to the magnetic field and ignored the slower variation along the field. For a typical DF the transverse electric field is strongest at a particular point; for example marked \(P_1\) in Fig. 11. As we move from this point along the magnetic field, to point \(P_2\), the x and z coordinates rotate by an angle \(\theta\) as indicated in Fig. 11. Since the local values of the magnetic field, temperature, density, etc. are different at positions \(P_1\) and \(P_2\) along the magnetic field, the electrostatic potential will vary, giving rise to an electric field along the magnetic field direction proportional to the potential difference between the two positions, \(\varPhi _{02}-\varPhi _{01}\). Since \(\varPhi _0\simeq \varPhi _0 (B(s))\), the parallel electric field is \(E_{\Vert } (s)\equiv -\partial \varPhi _0 (B(s))/\partial s=(x/L_{\Vert })E_x (x)\). Figure 3c of Ganguli et al. (2018) shows that \(E_{\Vert }\) peaks in the electron layer and varies in x for a typical DF. Non-thermal plasma particles subjected to \(E_{\Vert }\) will be accelerated along the magnetic field to form inhomogeneous beams or flows. The generation of the beam along the field line by this process provides the physical basis for a non-reconnection origin of the observed beams and its causal connection to the global compression.
Existence of \(E_{\Vert }\) indicates that the off-diagonal terms of the pressure tensor, \(\mathbf {P}_{\alpha }=m_{\alpha }\int (\mathbf {v}-\mathbf {u})(\mathbf {v} -\mathbf {u})f_{0\alpha }\mathrm{d}^3\mathbf {v}\), are non-zero and are necessary to balance it in equilibrium, that is
$$\begin{aligned} en(x)E_{\Vert }=-(\mathbf {\nabla }\cdot \mathbf {P}_{\alpha }(x)) \cdot \mathbf {s} =-(\partial _x p_{xx} \mathbf {b}_x +\partial _x p_{xz} \mathbf {b}_z ), \end{aligned}$$
where \(\mathbf {b}_x=\sin (\theta )\) and \(\mathbf {b}_{z}=\cos (\theta )\), and to leading order \(\partial /\partial y=\partial /\partial z\rightarrow 0\), because the spatial variation is strongest in the x direction at a given location along the magnetic field. These equilibrium features along the magnetic field can also be important to the dynamics of the compressed plasma layers and affect the measurable quantities such as spectral character of the emissions and particle energization. This is discussed in Sects. 3.3.2 and 4.3.
Vlasov–Maxwell system: field reversed geometry in the magnetotail
While the electromagnetic effects of compression are important in DFs, especially when the plasma \(\beta\) is large, electromagnetic effects are essential for the magnetic field reversal geometry and current sheets. Current sheets are important in magnetic fusion experiments and magnetospheric, solar, and astrophysical dynamics because the reversed magnetic field geometry can lead to magnetic reconnection and thus a large-scale reconfiguration of the system. The formation of the current sheet is the result of a global compression on a plasma layer. When this layer includes opposing magnetic fields it can lead to magnetic reconnection, which is often further driven by compression of a large fluid scale current sheet down to kinetic scales (Schindler and Birn 1993; Sitnov et al. 2006; Nakamura et al. 2002b; Artemyev et al. 2019). Tokamak and space plasma researchers have made extensive studies on a related problem, namely forced magnetic reconnection (Hahm and Kulsrud 1985; Vekstein and Kusano 2017). In this idealized problem (the "Taylor problem"), an equilibrium current sheet is perturbed at the boundary and the fluctuations induce magnetic reconnection often in the MHD context. In this section, we focus instead on the kinetic equilibrium that may arise due to global compression just prior to reconnection and not the forced reconnection process itself.
We extend the boundary layer methodology described in Sects. 2.1 and 2.2 to the case of a current sheet with magnetic field reversal (Crabtree et al. 2020) to investigate the effects of an inhomogeneous ambipolar electric field resulting from global compression that cannot be transformed away. Traditionally the field reversed case has been addressed by the Harris equilibrium (Harris 1957, 1962), which is restrictive because it is a specialized distribution designed, through a transformation to a uniformly drifting frame (described below), to produce density and potential gradients such that there is no net electric field. As a result, this distribution is inflexible and unable to account for the observed spatially localized structures, such as embedded (McComas et al. 1986; Sergeev et al. 1993; Sanny et al. 1994) and bifurcated current sheets (Hoshino et al. 1996; Asano et al. 2004; Runov et al. 2004; Schindler and Hesse 2008), that develop during active periods when the plasma sheet thins due to large scale compression. We remove this inflexibility by constructing a solution to the Vlasov equation that generalizes the Harris equilibrium (1962) through the inclusion of a non-uniform guiding-center distribution: \(Q_{\alpha }(x_{g\alpha })\),
$$\begin{aligned} f_{0\alpha }(x,\mathbf {v}) = \frac{N_{0\alpha }}{\left( \pi v_{t\alpha }^2\right) ^{3/2}} Q_{\alpha }(x_{g\alpha }) \exp \left( -\frac{E_{\alpha } - U_{\alpha } p_y+ \frac{1}{2}m_{\alpha }U_{\alpha }^2}{T_{\alpha }}\right) , \end{aligned}$$
where the definitions of the various quantities are as before. For \(Q_{\alpha }\rightarrow 1\) Eq. 16 reduces to the Harris distribution while for \(U_{\alpha }\rightarrow 0\) it reduces to the compressed layer distribution discussed in Sects. 2.1 and 2.2. The inclusion of the inhomogeneous guiding center distribution allows the Harris equilibrium the freedom to develop inhomogeneous structures, such as localized current sheets, as a response to external compression. As in Sects. 2.1 and 2.2, we specify only the global compression level through the choice of \(X_{g1,2\alpha }\) (or equivalently \(a_{1,2\alpha}\)) and allow the system to develop the density, flows, current, and temperature structures self-consistently.
We can compute the density of each species:
$$\begin{aligned} n_{\alpha }&= \int \mathrm{d}^3\mathbf {v} f_{0\alpha }(x,\mathbf {v}) \nonumber \\&= N_{0\alpha } \exp \left( -\frac{q_{\alpha }\phi }{T_{\alpha }} - \frac{U_{\alpha }m_{\alpha }\varOmega _{\alpha } a }{T_{\alpha }}\right) I_{\alpha }(a) \end{aligned}$$
$$\begin{aligned} I_{\alpha }(a) = \frac{1}{\left( \pi v_{t\alpha }^2\right) ^{1/2}} \int \mathrm{d}v_y Q_{\alpha }\left( a+\frac{v_y}{\varOmega _{\alpha }}\right) \exp \left( -\frac{(v_y-U_{\alpha })^2 }{v_{t\alpha }^2}\right) . \end{aligned}$$
As in the Harris equilibrium (1962) we choose \(U_e/v_{te}=-U_i/v_{ti}(\rho _e/\rho _i)\) by transforming to the frame where this is satisfied, and use quasi-neutrality to solve for the electrostatic potential. Interestingly, the potential does not depend on \(U_\alpha\) and has a similar form to the cases considered for the plasma sheet-lobe interface and for the dipolarization front:
$$\begin{aligned} \frac{e\phi }{T_{e}} = \frac{1}{1+\frac{T_e}{T_i}} \log \left( \frac{N_{0i}I_{i}(a) }{N_{0e} I_{e}(a) } \right) . \end{aligned}$$
In the Harris equilibrium, the choice of transformation to a uniformly drifting frame is typically made so that quasi-neutrality may be satisfied without an electrostatic potential. This choice corresponds to a uniform drift where the inhomogeneity in the \(\mathbf {E}\times \mathbf {B}\) drift is balanced by the inhomogeneity in the diamagnetic drift so that this transformation can be done globally. While the mathematical simplicity and elegance of the transformation is appealing, it constrains the system from developing substructures as the current sheet thins due to global compression. Introduction of the guiding center distribution, \(Q_{\alpha }\), relaxes this constraint and allows for nonuniform flows to develop in response to global compression. Nevertheless the transformation still can be made to simplify the expressions.
Next, we calculate the current density using the first moment as
$$\begin{aligned} j_{y\alpha } = q_{\alpha } \int \mathrm{d} v_y\, v_y f_{0\alpha } = q_{\alpha } N_{0\alpha } v_{t\alpha } \exp \left( -\frac{q_{\alpha }\phi }{T_{\alpha }} - \frac{U_{\alpha }m_{\alpha }\varOmega _{\alpha } a }{T_{\alpha }}\right) J_{\alpha }(a) \end{aligned}$$
$$\begin{aligned} J_{\alpha }(a) = \frac{1}{(\pi v_{t\alpha }^2)^{1/2}} \int dv_y \,\frac{v_y}{v_{t\alpha }} Q_{\alpha }\left( a+\frac{v_y}{\varOmega _{\alpha }}\right) \exp \left( -\frac{(v_y-U_{\alpha })^2 }{v_{t\alpha }^2}\right) . \end{aligned}$$
Considering a single ion species and electrons we can write down from Ampere's law the equation:
$$\begin{aligned} \rho _{i0}\frac{\mathrm{d}^2 a}{\mathrm{d} x^2}&= \beta _i \left[ \exp \left( -\frac{e\phi }{T_{i}} \right) J_{i}(a(x)) \right. \nonumber \\&\quad \left. - \frac{N_{0e} v_{te}}{N_{0i} v_{ti}} \exp \left( \frac{e\phi }{T_{e}} \right) J_{e}(a(x)) \right] \exp \left( - \frac{U_{i} 2a(x) }{v_{ti}\rho _{i0}}\right) \end{aligned}$$
where \(\beta _i=8\pi N_{0i} T_i/B_0^2\), \(\rho _{i0}=v_{ti}/\varOmega _{i0}\), and \(\varOmega _{i0}=|e|B_0/(m_i c)\). \(B_0\) is a reference magnetic field value, which, in the following, takes the value of the magnetic field in the asymptotic limit away from the layer for \(Q_{\alpha }=1\) in the Harris limit. Unlike the potential, the density and current depend on \(U_{\alpha }\). We note that Eq. 22 has the form of an equation of motion, where x plays the role of the time variable and a that of the position variable. With the solution of Eq. 22 (using Eq. 19) the equilibrium is fully specified. In the limit of constant guiding center distribution, \(\phi =0\), \(N_{0i}=N_{0e}\), \(J_i=U_{i}/v_{ti}\) and \(J_e=U_{e}/v_{te}\), and Ampere's law becomes
$$\begin{aligned} \frac{\mathrm{d}^2 a}{\mathrm{d}x^2} = \frac{\beta _i}{L_\mathrm{H}}\left[ 1+\frac{T_e}{T_i}\right] \exp \left( -\frac{2 a(x)}{L_\mathrm{H}}\right) , \end{aligned}$$
where \(L_\mathrm{H}=\rho _{i0} v_{ti}/U_i\) is the single scale size associated with the Harris equilibrium (1962). Equation 23 has the solution \(a(x) = L_\mathrm{H} \log (\cosh (x/L_\mathrm{H}))+(L_\mathrm{H}/2)\log (\beta _i+\beta _e)\). This is the usual Harris sheet vector potential (Harris 1962). Because the Harris sheet has only one length scale, \(L_\mathrm{H}\), it is unable to develop substructures in response to the compression. Introduction of another scale, L, associated with \(Q_{\alpha }\), in the generalized Harris equilibrium, Eq. (16), removes this limitation. L is dependent on the compression through the parameters \(x_{g1,2\alpha }\) as discussed in Sects. 2.1 and 2.2. This makes the generalized Harris equilibrium a more accurate representation of reality.
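As a consistency check, one can confirm numerically that the quoted vector potential satisfies Eq. 23, using \(\beta _e/\beta _i=T_e/T_i\) (equal asymptotic densities). A minimal finite-difference check with arbitrarily chosen parameter values:

```python
import numpy as np

def harris_check(LH=1.3, beta_i=0.7, te_ti=2.0, x=0.37, h=1e-4):
    beta_e = beta_i * te_ti                                         # beta_e/beta_i = T_e/T_i
    a = lambda s: LH * np.log(np.cosh(s / LH)) + 0.5 * LH * np.log(beta_i + beta_e)
    lhs = (a(x + h) - 2.0 * a(x) + a(x - h)) / h ** 2               # d^2 a / dx^2
    rhs = (beta_i / LH) * (1.0 + te_ti) * np.exp(-2.0 * a(x) / LH)  # right side of Eq. (23)
    return lhs, rhs                                                 # the two agree to ~1e-8

print(harris_check())
```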
Using the same linear ramp functions \(Q_{\alpha }(x_{g\alpha })\) as used in Sects. 2.1 and 2.2 we can calculate explicitly the functions \(I_{\alpha }\) and \(J_{\alpha }\) for the generalized Harris equilibrium
$$\begin{aligned} I_{\alpha }(a)&= \frac{1}{2}(R_{\alpha }+S_{\alpha }) \nonumber \\&\quad +\frac{b_{\alpha }(R_{\alpha }-S_{\alpha })}{2|b_{\alpha }|(\xi _{1\alpha }-\xi _{2\alpha })} \left[ \frac{1}{\sqrt{\pi }}\left( e^{-\xi _{1\alpha }^2}-e^{-\xi _{2\alpha }^2}\right) + \xi _{1\alpha } \text {Erf}(\xi _{1\alpha }) - \xi _{2\alpha }\text {Erf}(\xi _{2\alpha })\right] \nonumber \\ J_{\alpha }(a)&= \frac{u_{\alpha }}{2}(R_{\alpha }+S_{\alpha }) \nonumber \\&\quad +\frac{b_{\alpha }(R_{\alpha }-S_{\alpha })}{2|b_{\alpha }|(\xi _{1\alpha }-\xi _{2\alpha })}\left[ \frac{u_{\alpha }}{\sqrt{\pi }}\left( e^{-\xi _{1\alpha }^2}-e^{-\xi _{2\alpha }^2}\right) \right. \nonumber \\&\qquad \qquad \qquad \qquad \left. -\frac{1}{2}\left( 1-2u_{\alpha }\xi _{1\alpha }\right) \text {Erf}(\xi _{1\alpha }) +\frac{1}{2}\left( 1-2u_{\alpha }\xi _{2\alpha }\right) \text {Erf}(\xi _{2\alpha }) \right] \end{aligned}$$
where we have normalized distances by \(\rho _{i0}\) so that \(a_{i\alpha }=x_{gi\alpha }/\rho _{i0}\) and we have defined \(\xi _{i\alpha }=(-b_{\alpha }u_{\alpha }-a/\rho _{i0}+a_{i\alpha })/b_{\alpha }\), where \(u_{\alpha }=U_{\alpha }/v_{t\alpha }\) and \(b_{\alpha }=\text {sign}(q_{\alpha })\rho _{\alpha }/\rho _{i0}\) is negative for electrons.
There are two general cases of the differential equation where the effects of the non-uniform flow are important. Both are achieved by choosing \(a_{1\alpha },a_{2\alpha }\) such that the guiding center distribution changes on a scale comparable to the ion gyroradius. This leads to a current due to an ambipolar electric field drift, which corresponds to a global compression on the current sheet, in addition to the current that supports the current sheet in the Harris equilibrium due to the drift \(U_{\alpha }\) in the distribution functions. There are two cases to consider: (1) when this additional current is in the same direction as the Harris current, or (2) when it is in the opposite direction to the Harris current. In this paper, we only review the case when these currents are aligned. For the alternative case see Crabtree et al. (2020).
Phase plane analysis for the case when the current due to the density layer is in the same direction as the Harris current. For this case \(a_{1i,e}=1.1,0.9\), \(a_{2i,e}=0.3,0.6\), \(R_{i,e}=0.1,0.1\), \(S_{i,e}=1.0,1.0\), \(U_{i}/v_{ti}=0.2\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\)
In this case, the possible categories of equilibria can be examined through a phase-plane analysis of Eq. 22. We do this by solving the differential equation numerically and plotting \(da/dx=B_z/B_0\) vs \(a/\rho _{0}\). In Fig. 12 we show the phase-plane figure for the case when the currents are in the same direction. In this case, we find three different kinds of equilibria that are determined by the choice of initial conditions for \(B_z/B_0\) and \(a/\rho _{0}\). The choice of the initial point, e.g., the value of a at \(B_z=0\), is in general arbitrary. In nature, all initial values are possible. The choice of a particular one depends on the global condition, which is beyond the purview of this model but may be obtained from a global model. However, once the initial condition is determined our model can predict the resulting sub-structures of the current sheet corresponding to the level of the global compression. This level is represented by both the initial point and the choice of parameters \(a_{1,i,e}\) and \(a_{2,i,e}\) in the guiding center density function \(Q_{\alpha }\). The particular choices of the \(a_{1,i,e}\) and \(a_{2,i,e}\) are indicated by vertical lines in the figure. The first type of solution (in black) is a Harris-like equilibrium because the solutions remain in the asymptotic regime of the guiding center distribution (i.e., where \(Q_{\alpha }\simeq {const.}\)) so there is no significant additional current. The second type of solution (in blue) reaches its turning point at \(B_z=0\) within the guiding center distribution gradient and has solutions that are flattened in the phase plane. The third type of solution (in red) completely traverses the gradient region and becomes elongated in the phase plane.
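The construction of such phase-plane portraits is easy to sketch numerically. In the snippet below the bracketed current terms of Eq. 22 are replaced by a placeholder with a Harris-like part plus a localized contribution mimicking the guiding-center gradient; the full expressions for \(J_{\alpha }\) and the potential given above would be used in practice, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta_i, L_H, L_layer, a_c = 0.5, 5.0, 0.5, 0.7       # assumed, illustrative values

def d2a_dx2(a):
    """Placeholder right-hand side of Eq. (22): Harris-like term plus a localized layer current."""
    harris = (beta_i / L_H) * 2.0 * np.exp(-2.0 * a / L_H)
    layer = 0.5 * beta_i * np.exp(-((a - a_c) / L_layer) ** 2)
    return harris + layer

def rhs(x, y):
    a, dadx = y                                       # dadx = B_z/B_0; x plays the role of time
    return [dadx, d2a_dx2(a)]

# Launch solutions from B_z = 0 at different initial a to trace the classes of equilibria
portrait = {}
for a0 in [0.2, 0.7, 1.5]:
    sol = solve_ivp(rhs, (0.0, 30.0), [a0, 0.0], max_step=0.05)
    portrait[a0] = sol.y                              # plot row 1 (B_z/B_0) vs row 0 (a) as in Fig. 12
```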
Embedded thin current sheet. a Vector potential, \(a/\rho _{i}\), b density, c potential, and d electron current density across layer. In all panels the blue curve corresponds to the case with a density gradient achieved by setting \(R_{i,e}=0.1,0.1\) and the orange curve shows the Harris sheet achieved by setting \(R_{i,e}=1\) and the rest of the parameters are as follows \(a_{1i,e}=1.0,0.94\), \(a_{2i,e}=0.3,0.56\), \(S_{i,e}=1,1\), \(U_{i}/v_{ti}=0.2\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\)
In Fig. 13, we show the equilibrium attributes corresponding to the blue region of curves in Fig. 12. For reference, we added the Harris solution in orange. The density gradient scale is comparable to the ion gyroradius and is self-consistently determined. This generates an ambipolar electrostatic potential that cannot be transformed away [panel (c)]. The small dip in density (as opposed to a peaked density) is necessary to create the electric field in the proper direction (away from the current sheet) to generate a current that is in addition to the Harris current. Also note that around \(x = 0\), where the magnetic field vanishes and hence magnetic confinement of the particles becomes weak, the electrostatic potential peaks. Consequently, around this point the particles can be electrostatically confined. As a result, the velocity profile peaks around the null point, which is midway between the turning points of the electrostatic potential (Fig. 14). This creates an ideal situation in which the velocity gradient driven waves (Sect. 3) can originate in the vicinity of the null region and contribute to anomalous resistivity (Romero and Ganguli 1993) necessary for the magnetic reconnection process. Further details are discussed in Crabtree et al. (2020). The case without a density gradient, i.e. the Harris case, is shown in orange in the figure and correspondingly has no electrostatic potential. In panel (d) we show that the current density across the layer consists of a thin central current sheet, of scale size \(\sim L\), due to the electron Hall current, embedded in a broader current sheet of scale size \(\sim L_{\rm H}\) due to the bulk drifting component of the distribution function (the \(U_{\alpha }\) drift). This solution resembles the embedded thin current sheets that are commonly observed in situ by spacecraft (McComas et al. 1986; Sergeev et al. 1993; Sanny et al. 1994). In Fig. 14 we show the individual drift components. The electrons have a small gyro-orbit compared to the electric field scale size and thus have a standard \(E\times B\) drift in the ambipolar electric field. The ions have a larger orbit and thus the orbit averaged electric field sampled is smaller, so the total flow of the ions is reduced. This is the source of the additional current.
The existence and the magnitude of the electrostatic potential around the magnetic null (Fig. 13c) leads to another interesting question, i.e., how does the electrostatic potential affect the individual particle orbits around the magnetic null? For the 1D equilibria considered here, the particle orbits are all integrable and the details of how the figure eight orbits (Speiser 1965) are modified by the electric field are discussed in Crabtree et al. (2020). An open question remains with the addition of a \(B_x\) (north-south component in our coordinates), so that the magnetic field becomes approximately parabolic. Will the orbits still be chaotic near the null-sheet as they are in the case without an electric field (Chen and Palmadesso 1986)? If so, how does the electrostatic potential affect the extent of the region over which they are chaotic? How does the electrostatic potential affect the onset condition for chaos if chaotic orbits can still survive? These questions remain to be debated and answered in the future.
Embedded thin current sheet. a Electron drifts and the total fluid velocity across the layer normalized to the electron thermal velocity. b Ion drifts and total fluid velocity normalized to the ion thermal velocity. The parameters are the same as in Fig. 13
Current sheet thinning, which is the result of a global compression, is often observed in the magnetotail just prior to the onset of reconnection (Schindler and Birn 1993; Sitnov et al. 2006; Nakamura et al. 2002b; Artemyev et al. 2019). With a thin embedded current sheet there are narrow layers of electron flow with large flow shear, which can drive many kinds of instabilities that would not exist in a standard Harris equilibrium. These shear-flow driven instabilities (discussed in Sect. 3) can provide a source of anomalous resistivity for the onset of magnetic reconnection. Lower-hybrid drift instabilities (LHDI) have been extensively studied in Harris sheets (Huba et al. 1980; Huba and Ganguli 1983; Daughton 1999; Tummel et al. 2014) because of their potential to provide a source of anomalous resistivity; however, these studies were done in a Harris equilibrium where the LHDI is confined away from the magnetic null, because LHDI favors strong magnetic field and strong density gradients. With compression we expect current sheets to develop kinetic scale features as shown here, and also observed in the in situ data, such that the source of the instability can be closer to the magnetic field reversal region and thus can play a significant role in reconnection. This is a topic for further investigation.
Bifurcated current sheet. a Vector potential, \(a/\rho _{i}\), b density, c potential, and d electron current density across layer. In all panels the blue curve corresponds to the case with a density gradient achieved by setting \(R_{i,e}=0.1,0.1\) and the orange curve shows the Harris sheet achieved by setting \(R_{i,e}=1\). For both cases the solution curve for the vector potential was chosen by selecting \(A_{0}=0\) and the rest of the parameters are as follows \(a_{1i,e}=1.0,0.94\), \(a_{2i,e}=0.3,0.56\), \(S_{i,e}=1,1\), \(U_{i}/v_{ti}=0.2\), \(T_e/T_i=1.0\), and \(m_i/m_e=1836.0\)
Bifurcated current sheet. a Electron drifts and the total fluid velocity across the layer normalized to the electron thermal velocity. b Ion drifts and total fluid velocity normalized to the ion thermal velocity. The parameters are the same as in Fig. 15
In Fig. 15, we show the vector potential in panel (a), the density in panel (b), the electrostatic potential in panel (c) and the electron current density in panel (d) as a function of the distance across the layer where the magnetic field reversal is located at \(x=0\). The orange curves correspond to the Harris sheet solution with no ambipolar electric field and the blue curves correspond to the new generalized Harris solution. This solution corresponds to the class of red curves in Fig. 12 where we chose \(a=0\) at the field reversal. Figure 15 shows that near the guiding center gradient on either side of the field reversal there is a strong electron Hall current that is stronger than the current of scale size \(L_H\) supported by the uniformly drifting component of the distribution function (i.e., the current due to \(U_{\alpha }\)) but in the same direction. In Fig. 16 we show the electron drifts (in panel a) and ion drifts in panel (b) as well as the total fluid velocities. We see that the \(E\times B\) drift of the electrons (panel a) is in the same direction as the diamagnetic drift in the layer, which leads to a strong net sheared flow of electrons, whereas for the ions (panel b) the two drifts are in opposite directions. This figure shows that the electrons experience a significant \(E\times B\) drift but the ions do not, because the electric field is localized on scales that are a fraction of the ion gyroradius.
The current sheet solution shown in Figs. 15 and 16 resembles the bifurcated current sheets that are commonly observed by spacecraft in the magnetotail. Such bifurcated current sheets have also been observed in 1D particle-in-cell simulations (Schindler and Birn 1993). In these simulations the starting point was a Harris equilibrium and then the layer was compressed by applying time-dependent in-flows at the boundaries (in x in our coordinate system). A steady state was reached in the simulation after compression that resembled the bifurcated equilibrium shown here in Fig. 15d. Thus, there are simulation studies showing that by further compressing a Harris current sheet one can develop ambipolar electric fields that drive an electron current and form a bifurcated current sheet, consistent with the Vlasov equilibrium solutions discussed here.
As in Sects. 2.1 and 2.2, we find that even in the field reversed magnetic field geometry, as the plasma is compressed an electrostatic potential is self-consistently generated. This introduces plasma flows that are highly sheared. As we study in Sect. 3 below, such sheared flows have a natural tendency to relax through emissions, which ultimately leads to a new reconfigured steady state. Further details of the current sheet behavior during active periods and its importance to the magnetic reconnection process are discussed in Crabtree et al. (2020).
Plasma response to compression
From Sect. 2, we can conclude that in a collisionless environment plasma compression generates an ambipolar electric field across the magnetic field when the layer width becomes less than an ion gyrodiameter. Section 2 also described some natural examples of plasma compression, but compression can arise in laboratory devices as well. The amplitude and gradient of the ambipolar field are proportional to the intensity of the compression, which also creates the pressure gradient that forms in the layer. It is reasonable to identify the transverse ambipolar electric field as a surrogate for the compression for practical purposes. It is interesting that the electric field is a better surrogate for the compression than the pressure gradient because, as we discussed in Sect. 2.1, density and temperature gradients could combine to reduce the pressure gradient in the layer but still lead to intense electric fields as the scale size of the layer reduces with increasing compression. With this identification it becomes possible to quantitatively address the plasma response to compression by studying the variety of linear and nonlinear processes that are triggered by the transverse electric field.
At the kinetic level the collective behavior in a plasma is sensitive to the individual particle orbits. The particle orbits are affected by the electric field gradient, which develops self-consistently as a result of the compression. The orbit distortion could be quite substantial and can affect the character of the waves emitted and their nonlinear evolution as well as saturation properties. Hence, we review the particle orbit modifications due to an inhomogeneous transverse electric field.
Particle orbit modification due to localized transverse electric field
In a uniform magnetic field, the only modification to the gyro-motion introduced by a uniform transverse electric field is a uniform \(\mathbf {E}\times \mathbf {B}\) drift, and this electric field can be transformed away in the moving frame. Since the \(\mathbf {E}\times \mathbf {B}\) drift is mass and charge independent, both the electron and ion drifts are identical, which implies that there is no net transverse current. This is no longer true for an inhomogeneous electric field and has implications for plasma fluctuations. In realistic plasmas, both in nature and the laboratory, the transverse electric field encountered is inhomogeneous. For example, we found in Sect. 2 that the ambipolar electric field that arises self-consistently due to plasma compression is highly nonuniform. Therefore, we analyze the modifications to particle orbits that such electric field inhomogeneity introduces.
Consider a uniform magnetic field, \(\mathbf {B}_0\), in the z direction and an inhomogeneous electric field, \(\mathbf {E}_0(x)\), in the x-direction. The energy per mass for a charged particle in this field configuration is \(K(x)=v_x^2/2+v_y^2/2+e\varPhi _{0}(x)/m\), where \(\varPhi _{0}(x)\) is the external electrostatic potential, i.e., \(E_{0}=-\mathrm{d}\varPhi _{0}(x)/\mathrm{d}x\). The equations of motion for a charged particle in the x- and y-directions are
$$\begin{aligned} \dot{v}_x= & {} \varOmega v_y - \varOmega V_{E}(x), \end{aligned}$$
$$\begin{aligned} \dot{v}_y= & {} -\varOmega v_x, \end{aligned}$$
where \(V_{E}=-cE_0(x)/B\) is the \(\mathbf {E}_0(x)\times \mathbf {B}\) drift and dots imply time derivative. Integrating Eq. 26 we obtain a constant of motion \(X_g=x+v_y/\varOmega\), which is the guiding center position when the electric field is absent. Expressing \(v_y=\varOmega (X_g-x)\) and using it in a Hamiltonian formulation, we get
$$\begin{aligned} H(x)=\frac{v_x^2}{2} +\frac{\varOmega ^2}{2} (X_g-x)^2 +e\varPhi _{0}(x)/m = v_x^2/2 + G(x) \end{aligned}$$
Minimizing the pseudo potential G(x) at \(x=\xi\)
$$\begin{aligned} \left. \frac{\mathrm{d}G}{\mathrm{d}x}\right| _{x=\xi } = -\varOmega ^2(X_g-\xi ) + \frac{e}{m}\left. \frac{d\varPhi _{0}(x)}{dx}\right| _{x=\xi } = 0. \end{aligned}$$
we obtain the guiding center position, \(\xi =x+(v_y-V_E(\xi ))/\varOmega\), when an electric field is present. For an inhomogeneous electric field this expression is an implicit function for \(\xi\) and is valid for all particles with the accuracy determined by the number of terms in the expansion used below. These definitions help in understanding the modification to the \(\mathbf {E}\times \mathbf {B}\) drift due to the inhomogeneity in the electric field.
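Because \(\xi\) appears on both sides of its defining relation, it is convenient in practice to obtain it by fixed-point iteration, which converges in the weak-shear limit \(|V_\mathrm{E}'/\varOmega |<1\). A short sketch with an assumed drift profile:

```python
import numpy as np

def guiding_center(x, vy, VE, Omega, tol=1e-12, itmax=200):
    """Solve xi = x + (v_y - V_E(xi))/Omega by fixed-point iteration (weak shear assumed)."""
    xi = x + vy / Omega                    # starting guess: field-free guiding center X_g
    for _ in range(itmax):
        xi_new = x + (vy - VE(xi)) / Omega
        if abs(xi_new - xi) < tol:
            break
        xi = xi_new
    return xi

VE = lambda x: 0.3 * np.exp(-x ** 2)       # assumed localized drift profile
print(guiding_center(x=0.5, vy=1.0, VE=VE, Omega=1.0))
```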
At steady state the time-averaged y-drift can be obtained from Eq. 25, i.e., \(\langle \dot{v}_{x}\rangle =0=\varOmega \langle v_{y}\rangle -\varOmega \langle V_{E}(x)\rangle\). Expanding around the guiding center position using 1/L as the expansion parameter (implying weak shear) and retaining terms up to \(O(1/L^{2})\), where L is the scale size of the transverse electric field, the time averaged y-drift is
$$\begin{aligned} \langle v_y\rangle = \langle V_E(x)\rangle = V_E(\xi ) + \langle (x-\xi )^2\rangle V_E''(\xi )/2 + \ldots \end{aligned}$$
The first order term, \(\langle (x-\xi )\rangle\), is oscillatory and vanishes on time averaging and \(\langle v_y\rangle\) is time independent. Thus, in general \(v_y=u_y+\langle v_y\rangle\), where \(u_y\) is the oscillatory component of the velocity in the y-direction. Using the definition of the guiding center, \(x-\xi =-(v_y-V_\mathrm{E}(\xi ))/\varOmega\), in Eq. 29 we can express \(\langle v_y\rangle\) as
$$\begin{aligned} \langle v_y\rangle = V_E(\xi ) + \frac{V_\mathrm{E}''(\xi )\langle u_y^2\rangle }{2\varOmega ^2 \eta (\xi )} + O(V_\mathrm{E}''^2) \end{aligned}$$
where \(\eta (\xi )=1+(\mathrm{d}V_\mathrm{E}(\xi )/\mathrm{d}\xi )/\varOmega\). The parameter \(\eta\) is a comparison of the influences of the electric and magnetic fields on particle orbits. It is also a measure of the velocity shear strength, and hence of the plasma compression. \(\eta -1\) is the ratio of the shear frequency, \(\omega _s = \mathrm{d}V_\mathrm{E}/\mathrm{d}x\), and the gyrofrequency, \(\varOmega\). In the limit \(\omega _s \rightarrow -\varOmega\) the particle orbits become ballistic as in a field free environment. In the limit \(\omega _s \gg \varOmega\), the particles execute trapped orbits in the electrostatic potential and the electric field dominates. In between the particles respond to both electric and magnetic fields. Because of spatial variability there may be regions where each of these effects could be pronounced. This makes the typical particle orbits much different from the ideal gyro-orbits in a magnetic field, which can affect the collective plasma dynamics. Besides the usual \(\mathbf {E}\times \mathbf {B}\) drift represented by the first term in the right hand side of Eq. 30, there is also a mass dependent second order term. While there is no transverse current in the zeroth order, a second order current arises due to electric field curvature, which is proportional to the magnitude of the compression. This is an important modification to the mean or bulk plasma transverse flow, which is a fluid property. We shall see in Sect. 3.3 that this term is an important contributor to plasma collective effects and hence cannot be ignored with respect to the order unity term in Eq. 30.
There is another important kinetic effect due to the electric field inhomogeneity that affects the individual particle orbits. To understand this we cast the equation of motion in the guiding center frame (Ganguli et al. 1988):
$$\begin{aligned} \dot{v}_x= & {} \eta (\xi )\varOmega u_y + \frac{V_\mathrm{E}''(\xi )}{2\varOmega }\left( \langle u_y^2\rangle - u_y^2\right) , \end{aligned}$$
$$\begin{aligned} \dot{v}_y= & {} -\varOmega v_x \end{aligned}$$
Taking the time derivative of Eq. 32 and multiplying by \(\dot{u}_y\) gives \(\dot{u}_y\ddot{u}_y = -\varOmega \dot{u}_y \dot{v}_x\). Substituting \(\dot{v}_x\) from Eq. 31 and integrating yields another constant of motion:
$$\begin{aligned} w_{\perp }^2 \equiv v_x^2 + \eta (\xi )u_y^2 - \frac{V_\mathrm{E}''(\xi )}{\varOmega ^2}\left( \frac{u_y^3}{3}-\langle u_y^2\rangle u_y\right) , \end{aligned}$$
which reduces to the perpendicular velocity of the uniform electric field case when \(L\rightarrow \infty\). Using this and solving Eqs. 31 and 32 for the particle velocities and orbits we get
$$\begin{aligned} v_x= & {} w_{\perp } \sin (\sqrt{\eta (\xi )}\varOmega \tau + \varphi ) - \frac{V_\mathrm{E}''(\xi )w_{\perp }^2}{6\eta (\xi )^{3/2}\varOmega ^2}\sin (2\sqrt{\eta (\xi )}\varOmega \tau + 2\varphi ), \end{aligned}$$
$$\begin{aligned} u_y= & {} \frac{w_{\perp }}{\sqrt{\eta (\xi )}} \cos (\sqrt{\eta (\xi )}\varOmega \tau + \varphi ) - \frac{V_\mathrm{E}''(\xi )w_{\perp }^2}{12\eta (\xi )^{2}\varOmega ^2}\cos (2\sqrt{\eta (\xi )}\varOmega \tau + 2\varphi ), \end{aligned}$$
From Eq. 35, \(\langle u_y^2\rangle = w_{\perp }^2/(2\eta )+O(V_E''^2)\) can be calculated so that \(\langle v_y\rangle\) (Eq. 30) becomes
$$\begin{aligned} \langle v_y \rangle = V_\mathrm{E}(\xi ) + \frac{V_\mathrm{E}''(\xi ) w_{\perp }^2}{4\varOmega ^2\eta ^2(\xi )} + O(V_\mathrm{E}''^2). \end{aligned}$$
Integrating the velocities, the particle orbits are
$$\begin{aligned} x-x_0&= - \frac{w_{\perp }}{\sqrt{\eta (\xi )}\varOmega } \left[ \cos \left(\sqrt{\eta (\xi )}\varOmega \tau + \varphi \right) - \cos (\varphi )\right] \nonumber \\&\quad +\frac{V_\mathrm{E}''(\xi )w_{\perp }^2}{12\eta (\xi )^2\varOmega ^3} \left[ \cos \left(2\sqrt{\eta (\xi )}\varOmega \tau + 2\varphi \right) - \cos (2\varphi )\right] \end{aligned}$$
$$\begin{aligned} y-y_0&= \frac{w_{\perp }}{\eta (\xi )\varOmega } \left[ \sin \left(\sqrt{\eta (\xi )}\varOmega \tau + \varphi\right ) - \sin (\varphi )\right] \nonumber \\&\quad -\frac{V_\mathrm{E}''(\xi )w_{\perp }^2}{24\eta (\xi )^{5/2}\varOmega ^3} \left[ \sin \left(2\sqrt{\eta (\xi )}\varOmega \tau + 2\varphi \right) - \sin (2\varphi )\right] +\left\langle v_y\right\rangle \tau \end{aligned}$$
A major departure from the uniform electric field case is an effective renormalization of the gyrofrequency. To leading order in the field gradient \(\varOmega \rightarrow \bar{\varOmega }=\sqrt{\eta }\varOmega\). Hence, even the oscillatory part of the particle orbits is dependent on the electric field gradient and the effective gyrofrequency becomes spatially dependent even when the magnetic field is uniform.
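These leading-order expressions are easy to check against a direct orbit integration. The sketch below is a rough numerical illustration with assumed parameter values (not a result from the text): it integrates Eqs. 25–26 for a \(\mathrm{sech}^2\) drift profile and compares the time-averaged \(v_y\) with Eq. 36 and the dominant oscillation frequency of \(v_x\) with \(\sqrt{\eta }\varOmega\). The agreement is only approximate because the averages are taken over a finite, non-integer number of gyro-periods.

```python
# Rough numerical check (assumed parameters): integrate Eqs. 25-26 for a
# localized drift V_E(x) = V0 sech^2(x/L) and compare the time-averaged y-drift
# with Eq. 36 and the gyration frequency with the renormalized value sqrt(eta)*Omega.
import numpy as np
from scipy.integrate import solve_ivp

Omega, V0, L = 1.0, 1.0, 2.0
VE   = lambda x: V0 / np.cosh(x / L)**2
dVE  = lambda x, h=1e-5: (VE(x + h) - VE(x - h)) / (2 * h)
d2VE = lambda x, h=1e-3: (VE(x + h) - 2 * VE(x) + VE(x - h)) / h**2

def rhs(t, s):                                # s = (x, vx, vy); Eqs. 25-26 plus dx/dt = vx
    x, vx, vy = s
    return [vx, Omega * (vy - VE(x)), -Omega * vx]

x0, w0 = 1.0, 0.3                             # launch position and gyro-speed (assumed)
vy0 = VE(x0) + w0
t = np.linspace(0.0, 200 * 2 * np.pi / Omega, 200001)
sol = solve_ivp(rhs, (t[0], t[-1]), [x0, 0.0, vy0], t_eval=t, rtol=1e-9, atol=1e-11)
vx, vy = sol.y[1], sol.y[2]

xi = x0 + vy0 / Omega                         # implicit guiding-center relation,
for _ in range(100):                          # solved by fixed-point iteration
    xi = x0 + (vy0 - VE(xi)) / Omega
eta = 1.0 + dVE(xi) / Omega
w_perp2 = eta * w0**2                         # Eq. 33 with v_x(0) = 0, to leading order
vy_pred = VE(xi) + d2VE(xi) * w_perp2 / (4 * Omega**2 * eta**2)   # Eq. 36
f = np.fft.rfftfreq(t.size, t[1] - t[0])
f_peak = f[np.argmax(np.abs(np.fft.rfft(vx)))]
print(f"<v_y>: orbit {vy.mean():.4f}, Eq. 36 estimate {vy_pred:.4f}")
print(f"gyration: orbit {2*np.pi*f_peak:.4f}, sqrt(eta)*Omega {np.sqrt(eta)*Omega:.4f}")
```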
Depending on the magnitude and sign of the electric field gradient, \(\eta\) can be positive or negative. This has implications for particle orbits. Consider a weak electric field gradient, i.e., \(\rho /L<1\) where \(\rho\) is the particle gyroradius, and \(\eta >0\). To leading order in the gradient the equation of motion may be simplified to \(\ddot{v}_x=-\eta (x)\varOmega ^2v_x+O(V_\mathrm{E}'')\), which shows that the particle orbit is either oscillatory or divergent depending on the sign of \(\eta (x)\). Depending on the magnitude of the gradient, the effective gyroradius, \(\bar{\rho }=v_t/\bar{\varOmega }\), can be larger or smaller compared to the uniform electric field case for which \(\eta =1\). This will be reflected in the averaged equilibrium quantities as larger or smaller temperatures and affect plasma distribution functions, as we shall discuss in more detail in Sect. 3.2. While the \(\eta \rightarrow 0\) limit leads to weak magnetization with large gyroradius resulting in weak magnetic confinement of the particles, \(\eta \rightarrow \infty\) leads to strong magnetization, which effectively is electrostatic confinement of the particles. This property may be especially consequential for the chaotic orbits (Chen 1992) in the neighborhood of the null sheet in the magnetic field reversed geometry in the earth's magnetotail when there is a guiding magnetic field normal to the current sheet. As discussed in Sect. 2.3, an electrostatic potential self-consistently develops around the null sheet that has not been considered in the studies of the chaotic particle orbits in this region.
In the weak gradient limit, the higher-order derivatives of the electric field are not important but they become critical for stronger gradients when \(\eta <0\). For \(\eta < 0\) the equation of motion becomes \(\ddot{v}_x=|\eta (x)|\varOmega ^2 v_x+O(V''_\mathrm{E})\) indicating that the restoring nature of the force becomes divergent and the particle accelerates along the electric field. Gavrishchaka (1996) studied the strong gradient limit. He showed that for strong gradients, multiple guiding centers can arise and the particles do not accelerate indefinitely unless the electric field is linear, which is a pathological case. Higher order derivatives prevent indefinite linear acceleration, which results in modified orbits that are no longer the ideal gyromotion. Effectively, the particle acquires a larger gyroradius around a new guiding center. As shown in Sect. 2, this can have major implications to the equilibrium properties when \(\eta _i\) becomes small and negative in the narrow layers with \(\rho _i> L > \rho _e\).
When the scale size of localization reduces much below the gyroradius the gyro-averaged electric field experienced by the particle reduces until a limit is reached below which the electric field becomes negligible (Gavrishchaka 1996). Consequently, the particle \(\mathbf {E}\times \mathbf {B}\) motion is drastically reduced if not eliminated. In plasmas this can lead to an interesting regime when \(\rho _i \gg L > \rho _e\) in which the ions do not experience the \(\mathbf {E}\times \mathbf {B}\) drift but the electrons do. For short time scale processes, such that \(\varOmega _i\ll \omega < \varOmega _e\), the ions effectively behave as an unmagnetized fluid while the electrons remain magnetized. This gives rise to a Hall current even in a collisionless uniform plasma. In plasmas undergoing compression, or relaxing from it, the scale size of the electric field varies in time, which affects the particle orbits differently at different stages of compression or relaxation. These changes in particle orbits affect the collective dynamics resulting in the observed spectral characteristics that include broadband emission as we discuss in Sect. 3.3.
Analytical distribution function
To understand the ramifications of the orbit modification discussed in Sect. 3.1 on plasma collective effects it is necessary to develop a kinetic formalism to analyze the stability of plasmas including localized DC electric fields. For doing so we must obtain a representative zeroth order distribution function appropriate for the initial equilibrium state characterized by a homogeneous magnetic field and an inhomogeneous electric field in the transverse direction. In Sect. 2, we found such a distribution function for arbitrary magnitude of the compression, but it is a solution that uses special functions and does not lend itself transparently to perturbative analysis of the stability properties, which is ideal for a general understanding of the plasma response to localized electric fields. In this Section we construct an analytical distribution function for weak shear, i.e. for \(\rho /L<1\) and \(\eta >0\), using the constants of motion H(x) and the guiding center position \(\xi\), which will then be perturbed in Sect. 3.3 to understand the stability of the Vlasov equilibrium state of a compressed plasma. Consider the equilibrium distribution function introduced by Ganguli et al. (1988):
$$\begin{aligned} f_{0}(H(x),\xi )= & {} \frac{N}{\sqrt{\eta (\xi )}} g(\xi ) e^{-\beta _t H(x)} e^{-\beta _t H_{\Vert }(\xi )}, \end{aligned}$$
$$\begin{aligned} g(\xi )= & {} e^{\beta _t\left[ \frac{e}{m}\varPhi _{0}(\xi ) + \frac{V_{E}^2}{2}\right] }, \end{aligned}$$
where \(N=n_{0}(\beta _t/(2\pi ))^{3/2}\), \(\beta _t=1/v_t^2\), \(H_{\Vert }(\xi )=(v_z-V_{\Vert }(\xi ))^2/2\), \(v_{t}=\sqrt{T/m}\) is the thermal velocity, and \(V_{\Vert }(\xi )\) is an inhomogeneous drift along the magnetic field. In constructing the distribution function two requirements are imposed: (1) the velocity integrated distribution function should produce a constant density so that a static electric field generated in a quasi-neutral plasma without a significant density gradient can be studied. However, a density gradient, as prevalent in the compressed layers discussed in Sect. 2, can be included through \(n_{0}(\xi )\) when necessary, and (2) although any function constructed out of constants of the motion is a Vlasov solution, the particular choice must reduce to the fluid limit when the temperature \(T\rightarrow 0\). The importance of the latter will become apparent in Sect. 3.3.
In the weak compression limit when \(\epsilon =\rho /L<1\) and for \(V_{\Vert }(\xi )=0\) the distribution function can be simplified. Using \(v_y=u_y+\langle v_y\rangle\) in the argument of the distribution function Eq. 39 and expanding the argument around the guiding center position, it reduces to
$$\begin{aligned} f_{0}\simeq \frac{n_{0}}{\sqrt{\eta (x)}(2\pi v_{t}^{2})^{3/2}} e^{-(v_x^2 + (v_y-V_\mathrm{E}(x))^2/\eta (x) + v_z^2)/(2 v_{t}^2)} + O(\epsilon ), \end{aligned}$$
where terms up to \(O(V_\mathrm{E}')\) are retained. For a uniform electric field, i.e., \(V_\mathrm{E}'=0\), \(\eta =1\) and \(w_{\perp }^2=v_x^2+(v_y-V_\mathrm{E}^0)^2\). Equation 41 reduces to a Maxwellian distribution with \(v_{y}\) shifted by a constant \(V_\mathrm{E}^{0}\) velocity. Since the \(\mathbf {E}\times \mathbf {B}\) drift is identical for both electrons and ions in a collisionless plasma there is no relative drift between the species to feed energy to waves and hence the distribution is stable. This shows that global compression results in a deviation from a Maxwellian distribution through the velocity gradient, which is a source of free energy for waves. While pressure gradients could, in principle, be another source of free energy, temperature and density gradients can arise in opposite directions to maintain a low pressure gradient as often found in compressed layers (Ohtani et al. 2004; Runov et al. 2011; Schmid et al. 2015; Zhao et al. 2018; Chen et al. 2020). However, in Sect. 2 we saw that compression intensifies the velocity shear, which makes the velocity distribution increasingly non-Maxwellian. Thus, in a collisionless environment compression triggers a relaxation mechanism to reach a steady state through the emission of waves by dissipating the velocity gradient. The dependence of the distribution function on the spatial gradient of the velocity through the parameter \(\eta\) and its asymmetric appearance in the distribution function is noteworthy. It shows that the temperature in the y direction is preferentially affected by the localized electric field across the magnetic field in the x direction, which introduces an asymmetry and breaks the gyrotropy of the distribution function. Agyrotropic distributions are found in the compressed layers, e.g. Chen et al. (2020). This may result in a difference in the temperature in the x and y directions orthogonal to the magnetic field (Ganguli et al. 2018).
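As a quick illustration of this gyrotropy breaking, the second velocity moments of Eq. 41 can be evaluated directly. The short sketch below (with assumed illustrative parameters) shows that the effective temperature along y is scaled by \(\eta (x)\) relative to that along x.

```python
# Moment check for Eq. 41 (illustrative parameters assumed): the y-variance of
# the weak-shear distribution is eta(x)*v_t^2 while the x-variance is v_t^2,
# so T_y/T_x ~ eta(x) and the distribution is not gyrotropic where eta != 1.
import numpy as np

vt, V0, L, Omega = 1.0, 0.5, 2.0, 1.0
VE  = lambda x: V0 / np.cosh(x / L)**2
dVE = lambda x, h=1e-5: (VE(x + h) - VE(x - h)) / (2 * h)
v = np.linspace(-8.0, 8.0, 4001)             # velocity grid in units of v_t

for x in (0.0, 1.0, 2.0):
    eta = 1.0 + dVE(x) / Omega
    fx = np.exp(-v**2 / (2 * vt**2))                         # x-part of Eq. 41
    fy = np.exp(-(v - VE(x))**2 / (2 * vt**2 * eta))         # y-part of Eq. 41
    Tx = np.sum(v**2 * fx) / np.sum(fx)
    Ty = np.sum((v - VE(x))**2 * fy) / np.sum(fy)
    print(f"x = {x:.1f}:  eta = {eta:.3f},  T_y/T_x = {Ty/Tx:.3f}")
```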
In the following sections, we will analyze how the electric field gradient can excite broadband waves that can relax the gradients and hence the compression.
Transforming to the cylindrical coordinates (\(w_{\perp }\), \(\varphi\), \(v_z\)) by using the Jacobian:
$$\begin{aligned} |J|= \frac{\sqrt{\eta }w_{\perp }}{ 1+\frac{V''_\mathrm{E} w_{\perp } \cos (\varphi )}{2\eta ^{3/2}\varOmega ^2}} \end{aligned}$$
the velocity integrals can be performed to obtain \(n_{0}(x)=n_{0}(1+O(\epsilon ^2))\) (Ganguli et al. 1988). This shows that a large localized static electric field can be maintained in a quasi-neutral plasma across the magnetic field with negligible density gradient, as is observed in the earth's auroral region (Mozer et al. 1977).
Stability of the Vlasov equilibrium
Electric fields encountered in both laboratory and natural plasmas are nonuniform, albeit with varying degree of nonuniformity. For example, in Sect. 2, we showed that the ambipolar electric field that develops self-consistently in a compressed plasma is highly nonuniform. In Sect. 3.2, we established that such electric fields make the equilibrium distribution function non-Maxwellian and therefore introduces a source of free energy for waves. In collisionless plasmas these waves are a natural response to compression since they relax the shear in the electric field so that a steady state can be achieved. Due to the strong spatial variability across the magnetic field the plane wave or WKB approximations will break down. Also, some of the modes due to transverse flows discussed below are essentially nonlocal in nature with no local limit. Hence, the analysis of these waves must be treated as an eigenvalue problem. Their dispersion relation is usually a differential or an integral equation. In the following we highlight the key aspects of the derivation of the eigenvalue condition and refer the readers to Ganguli et al. (1988) for details.
Linearizing the analytical equilibrium distribution given in Eq. 39 with a nonuniform density, \(N(\xi )\), we get \(f(x,\mathbf {v},t)=f_{0}(x,\mathbf {v})+f_{1}(x,\mathbf {v},t)\). Since the inhomogeneity is in the x-direction the fluctuating quantities, e.g., the electrostatic potential, are periodic in y- and z- directions but localized in the x-direction, i.e., \(\phi (r',t)=\exp [-i(\omega t'-k_y y'-k_z z')]\phi (x')\) where \(\phi (x')=\int dk'_x\exp (ik_x x')\phi _{k}(k'_x)\). Then, the perturbed density fluctuation may be obtained as \(n_{1}(x)=\int \mathrm{d}^3\mathbf {v}\,f_{1}(x,\mathbf {v})\). Using the orbits given in Eqs. 37, 38 it can be shown that,
$$\begin{aligned} n_{1}(k_x)= & {} -\frac{e\beta _t}{2\pi m} \iiint \mathrm{d}x\,\mathrm{d}^{3}\mathbf {v} \mathrm{d}k'_{x}\, \phi (k'_{x}) f_{0}(\xi ,w_{\perp }) \left[ e^{i(k'_{x}-k_{x})x} - e^{i(k'_{x}-k_{x})\bar{\xi }}F \right] , \end{aligned}$$
$$\begin{aligned} F= & {} (\omega -k_y V_g)\sum _{l,l',m,m'} \frac{ J_{l'}(\sigma ') J_{m'}(\hat{\sigma }') J_{l}(\sigma ) J_{m}(\hat{\sigma }) }{ \omega -(l'-2m')\bar{\varOmega } - k_{y}\langle v_{y}\rangle } e^{i\left\{ 2(m-m')-(l-l') \right\} \varphi } e^{i\left\{ l\delta -l'\delta ' - m\hat{\delta } + m'\hat{\delta }'\right\} }, \end{aligned}$$
where \(J_{m}(\sigma )\) are Bessel functions, \(\sigma '=k_{\perp }' w_{\perp }/\varOmega\), \(k_{\perp }'^{2}=k_{x}'^2/\eta + k_{y}^2/\eta ^2\), \(\delta '=\tan ^{-1}(k'_{x}\sqrt{\eta }/k_{y})\), \(\hat{\sigma }'=\hat{k}_{\perp }'\hat{w}_{\perp }/(12\varOmega )\), \(\hat{w}_{\perp }= V''_\mathrm{E} w_{\perp }^2/\varOmega ^2\), \(\hat{\delta }'=\tan ^{-1}(2k'_{x}\sqrt{\eta }/k_{y})\), \(\hat{k}_{\perp }'^2=k_{x}'^2/\eta ^4 + k_{y}^2/4\eta ^5\), \(\bar{\varOmega }=\sqrt{\eta }\varOmega\), and \(\bar{\xi }=x+u_{y}/\varOmega\). \(V_g\) is the bulk fluid drift in the plasma and is given by
$$\begin{aligned} V_{g}(\xi ) = \frac{1}{\eta (\xi )\varOmega \beta _{t}} \frac{1}{f_{0}} \frac{\partial f_{0}}{\partial \xi } = V_{E}(\xi ) - \frac{V_{E}''(\xi )\rho ^2}{2\eta ^2(\xi )} - \frac{\epsilon _n \rho \varOmega }{\eta }, \end{aligned}$$
$$\begin{aligned} \omega -k_{y}V_{g}(\xi ) = \omega - k_{y}V_{E}(\xi ) + \frac{k_{y} V_\mathrm{E}''(\xi ) \rho ^2}{2\eta ^2(\xi )} - \frac{k_{y} \epsilon _{n} \rho \varOmega }{\eta } \equiv \omega _{1}+\omega _{2}-\omega ^{*}, \end{aligned}$$
where \(\omega _{1}=\omega -k_{y}V_\mathrm{E}(\xi )\) is the local Doppler shifted frequency, \(\omega _{2}=k_{y} V_\mathrm{E}''(\xi )\rho ^2/2\eta ^2\) is a frequency that is introduced due to the second derivative, i.e. the curvature, of the electric field, \(\omega ^{*}=k_{y}\rho \epsilon _{n}\varOmega /\eta\) is the diamagnetic drift frequency, and \(\epsilon _{n}=\rho /L_{n}\) where the density gradient scale size \(L_{n}=[(\mathrm{d}n/\mathrm{d}x)/n]^{-1}\).
A number of noteworthy features arise compared to the uniform electric field case. Unlike the trivial case when a global Doppler shift is appropriate, in the nonuniform case a local Doppler shift arises and no global transformation can eliminate this spatially dependent shift. Because of the spatial inhomogeneity the plane wave assumption in the direction of the inhomogeneity is no longer possible. Higher harmonics of quantized eigenstates are possible, which can broaden the frequency and wave vector bandwidth of the emissions. The transverse electric field becomes an irreducible feature defining the bulk plasma and affects its dielectric properties including the normal modes of the system. New time scales, represented by the frequencies \(\omega _{1}\) and \(\omega _{2}\), are introduced. A resonance with the bulk plasma flow arises that can affect the fluid (macro) stability. Landau and cyclotron resonances with individual particles are affected through orbit modifications altering the kinetic (micro) stability of the plasma. Consequently, the transverse electric field can affect both the real and imaginary parts of the dispersion relation and therefore affect both the real and imaginary parts of the frequency of oscillations. This can vastly alter the known waves that characterize a plasma with a uniform magnetic field and their nonlinear behavior. Under certain conditions the transverse electric field can suppress some waves while in others waves can be reinforced (Gavrishchaka et al. 1996). In addition, an entirely new class of oscillation becomes possible due to an inhomogeneity in the wave energy density introduced by the variable Doppler shift (Ganguli et al. 1985).
Quasi neutrality, i.e., \(\sum _{\alpha }\int \mathrm{d}k_{x}\exp (ik_{x}x)n_{1\alpha }(k_{x})=0\), gives the general dispersion condition for the waves, in the electrostatic approximation, which is an integral equation and cumbersome to solve. However, for weak gradients, i.e. \(\rho /L<1\), \(\eta \sim 1\), and \(k_{x}\simeq k_{x}'\), some simplifications are possible. For example, \(\hat{\sigma }'\propto (\rho /L)^2\ll 1\) so we may use \(J_{0}(\hat{\sigma })\sim J_{0}(\hat{\sigma }')\sim 1\) and ignore terms higher than \(m=m'=0\). Furthermore, \(k_{x}\simeq k_{x}'\) implies \(\sigma '\simeq \sigma\) and \(\delta '\simeq \delta\). In the \(O(\rho /L)^2\) term in the denominator of Eq. 44 we may replace \(w_{\perp }^2\), that appears in \(\langle v_{y}\rangle\), by \(2v_{t}^2\). This simplifies F considerably to
$$\begin{aligned} F=(\omega _1+\omega _2-\omega ^*) \sum _{l',l} \frac{ J_{l'}(\sigma ') J_{l}(\sigma ) }{\omega _1-\omega _2-l'\varOmega } e^{\left[ i(l'-l)\varphi + il\delta - il'\delta ' \right] }. \end{aligned}$$
It is interesting to note that the electric field curvature related frequency, \(\omega _{2}\), that appears in the numerator of Eq. 47 originates from the fluid plasma flow, while the one in the denominator originates from the individual particle orbit due to its kinetic behavior and will be absent in the fluid framework. With these simplifications and transforming coordinates from Cartesian, \((x,v_x,v_y,v_z)\), to cylindrical, \((\xi ,w_{\perp },\varphi ,v_z)\), the velocity integrals can be readily performed to obtain the density fluctuations:
$$\begin{aligned} n_{1}(x)= & {} \frac{e\beta _t}{2\pi m} \int {\rm d}k_{x} \exp (ik_{x}x) \iint \mathrm{d}\xi \mathrm{d}k_x' \, \phi (k'_x) \exp [i(k_x'-k_x)\xi ]n_{0}(\xi ) \nonumber \\&\times \left\{ 1 + \sum _{l}\left( \frac{\omega _{1}+\omega _{2}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{t}} \right) Z\left( \frac{\omega _{1}-\omega _{2}-l\varOmega }{\sqrt{2}|k_{\Vert }|v_{t}} \right) \varGamma _{l}(\bar{b}) \right\} \end{aligned}$$
where \(Z(\zeta )=(\pi )^{-1/2}\int _{-\infty }^{\infty } \mathrm{d}t\,\exp (-t^2)/(t-\zeta )\) is the plasma dispersion function, \(\varGamma _{n}(\bar{b})=\exp (-\bar{b})I_{n}(\bar{b})\), \(\bar{b}=(k_{\perp }\rho )^2\), and \(I_{n}(\bar{b})\) is the modified Bessel function. The weak gradient condition allows the expansion \(\varGamma _{l}(\bar{b})=\varGamma _{l}(b)-\varGamma '_{l}(b)\rho ^2k_{x}^2+O((\rho k_{x})^4)\), where \(b=(k_{y}\rho )^2\) so that the remaining integrals can be easily performed to obtain
$$\begin{aligned} n_{1}(x)= & {} - \frac{\omega _{p}^2}{4\pi v_{t}^2 q}\left[ -\sum _{n}\left( \frac{\omega _{1}+\omega _{2}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{t}} \right) Z\left( \frac{\omega _{1}-\omega _{2}-n\varOmega }{\sqrt{2}|k_{\Vert }|v_{t}} \right) \frac{\mathrm{d}\varGamma _{n}(b)}{\mathrm{d}b} \rho ^2 \frac{\mathrm{d}^2}{\mathrm{d}x^2} \right. \nonumber \\&\left. + 1 + \sum _n \left( \frac{\omega _{1}+\omega _{2}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{t}} \right) Z\left( \frac{\omega _{1}-\omega _{2}-n\varOmega }{\sqrt{2}|k_{\Vert }|v_{t}} \right) \varGamma _{n}(b) \right] \phi (x) \end{aligned}$$
which, in conjunction with the quasi-neutrality condition or the Poisson equation, provides the electrostatic dispersion eigenvalue condition in the form of a second order differential equation. We analyze the details of the electrostatic modes because the wave power in compressed layers is generally found to be concentrated in the electrostatic regime. This may be because inhomogeneity forces the eigenstates to the gradient scale size, which is comparable to the electron skin depth. This makes the wavelengths of the spontaneous emissions from the compressed layers comparable to the electron skin depth, which emphasizes the electrostatic character. However, since some power is also found in the electromagnetic regime, the derivation has also been generalized to the electromagnetic regime using a fluid model for the ions (Peñano and Ganguli 1999, 2000, 2002).
Low frequency limit: fully magnetized ions and electrons
We first consider the linear plasma response to a weak compression where the electric field scale size \(L>\rho _{i}\). As discussed in Sect. 3.1, in this case both ions and electrons experience identical electric field magnitude since on average they sample the electric field throughout their gyro-motion. Hence, to the zeroth order, their \(\mathbf {E}\times \mathbf {B}\) drift will be identical. Under this condition the fluctuating density for both the ions and the electrons is given by Eq. 49 with the respective mass and charge, which leads to the electrostatic dispersion relation under the quasi-neutrality condition, \(\sum _{\alpha } q_{\alpha } n_{1\alpha }=0\). Ignoring terms of the order of \((m_{e}/m_{i})^2\) and considering low frequency waves \(\omega _{1}<\omega _{LH}=\omega _{pi}/(1+\omega _{pe}^2/\varOmega _e^2)^{1/2}\), where \(\omega _{LH}\) is the lower-hybrid frequency, the \(n=0\) cyclotron harmonic term for the electrons is sufficient. Then, the eigenvalue condition is
$$\begin{aligned} \left[ \rho _{i}^2 A(x)\frac{\mathrm{d}^2}{\mathrm{d}x^2} + Q(x)\right] \phi (x) + O(\epsilon ^3) = 0, \end{aligned}$$
$$\begin{aligned} A(x)= & {} -\sum _{n} \left( \frac{\omega _{1}+\omega _{2}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{ti}} \right) Z\left( \frac{\omega _{1}-\omega _{2}-n\varOmega _i}{\sqrt{2}|k_{\Vert }|v_{ti}} \right) \frac{\mathrm{d}\varGamma _{n}(b)}{\mathrm{d}b}, \end{aligned}$$
$$\begin{aligned} Q(x)= & {} 1 + \sum _n \left( \frac{\omega _{1}+\omega _{2}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{ti}} \right) Z\left( \frac{\omega _{1}-\omega _{2}-n\varOmega _{i}}{\sqrt{2}|k_{\Vert }|v_{ti}} \right) \varGamma _{n}(b) \nonumber \\&+\tau \left[ 1 + \left( \frac{\omega _{1}+\omega _{2}/\tau \mu -\omega ^*/\tau }{\sqrt{2}|k_{\Vert }|v_{te}} \right) Z\left( \frac{\omega _{1}-\omega _{2}/\tau \mu }{\sqrt{2}|k_{\Vert }|v_{te}} \right) \right] , \end{aligned}$$
and \(\tau =T_{i}/T_{e}\), \(\mu =m_{i}/m_{e}\). There are two branches of oscillations driven by the electric field in this equilibrium configuration (Ganguli et al. 1988). These branches do not require a density gradient so in the following analysis we set \(\omega ^{*}=0\).
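For readers who wish to evaluate A(x) and Q(x) numerically, the two special functions entering Eqs. 48–52 are readily available in standard libraries. The snippet below is a small helper sketch (not code from the cited references): it builds \(Z(\zeta )\) from the Faddeeva function and \(\varGamma _n(b)\) from the exponentially scaled modified Bessel function.

```python
# Helper sketch for evaluating Eqs. 48-52: the plasma dispersion function via the
# identity Z(zeta) = i*sqrt(pi)*w(zeta), with w the Faddeeva function, and
# Gamma_n(b) = exp(-b) I_n(b) via the exponentially scaled Bessel function.
import numpy as np
from scipy.special import wofz, ive

def Z(zeta):
    """Fried-Conte plasma dispersion function."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def Zprime(zeta):
    """dZ/dzeta = -2 (1 + zeta Z(zeta))."""
    return -2.0 * (1.0 + zeta * Z(zeta))

def Gamma(n, b):
    """Gamma_n(b) = exp(-b) I_n(b)."""
    return ive(n, b)

# Sanity checks: Z(0) = i*sqrt(pi); Z(zeta) -> -1/zeta for large real zeta;
# the sum of Gamma_n(b) over all harmonics equals 1.
print(Z(0.0), Z(20.0 + 0.0j), sum(Gamma(n, 1.0) for n in range(-50, 51)))
```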
Fig. 17: Kinetic solutions (for three values of \(k_{\Vert }/k_\perp\)) showing that the KH modes are strongly Landau damped. The parameters used are \(\epsilon =\rho _{i}/L =0.1\), \(\tau = T_i/T_e=5\), \(\bar{V}_E =V_{0E}/v_{ti}=2\), \(\mu =m_i/m_e=1837\), and no density gradient
Kelvin–Helmholtz instability branch
For low frequencies, such that \(\omega _{1}\ll n\varOmega _{i}\), the \(n=0,\pm 1\) terms for the ions are sufficient in Eq. 50. This gives the kinetic generalization of the dispersion relation for the Kelvin–Helmholtz (KH) modes. Kinetic solutions of Eq. 50 in this limit with \(E(x)=E_0 \text {sech}^2(x/L)\), \(L = 10 \rho _i\), shown in Fig. 17, indicate that the KH mode is strongly Landau damped.
The KH instability is the quintessential shear flow driven instability invoked in innumerable applications in the fluid phenomenology both in space and laboratory plasmas. It is extensively invoked in large-scale fluid models in space plasmas. If long wavelengths, i.e., \(k_{\Vert }\rightarrow 0\), or cold plasma, i.e., \(T\rightarrow 0\), can be realized then this may be justified. But caution must be exercised since, as evident from Fig. 17, the KH mode is highly sensitive to Landau damping, especially for \(T_{i}\ge T_{e}\), which is usually the case in the magnetosphere, so the \(T\rightarrow 0\) assumption is not realistic. Also, because of the inhomogeneous magnetic field structure in the region, which may introduce geometrical constraints, very long wavelengths necessary to avoid Landau damping may not be possible. Even for long parallel wavelength, \(k_{\Vert }\rightarrow 0\), such that the parallel phase speed of the waves is larger than the ion and electron thermal speeds, the KH modes can be damped by finite Larmor radius (FLR) effects if the perpendicular wavelengths are sufficiently short, which is likely in the thin compressed layers. In this case, A(x) and Q(x) reduce to
$$\begin{aligned} A(x)= & {} \left( \frac{\omega _{1}+\omega _{2}}{\omega _{1}-\omega _{2}}\right) \varGamma '_{0}(b) + \left( \frac{\omega _{1}^2-\omega _{2}^2}{(\omega _{1}-\omega _{2})^2-\varOmega _{i}^2}\right) 2\varGamma '_{1}(b) \end{aligned}$$
$$\begin{aligned} Q(x)= & {} 1-\left( \frac{\omega _{1}+\omega _{2}}{\omega _{1}-\omega _{2}}\right) \varGamma _{0}(b) + \left( \frac{\omega _{1}^2-\omega _{2}^2}{(\omega _{1}-\omega _{2})^2-\varOmega _{i}^2}\right) 2\varGamma _{1}(b). \end{aligned}$$
The Bessel functions diminish the magnitude of the source term for the KH modes, which is proportional to \(\omega _{2}\) as will become clear in Eq. 55. If the perpendicular wavelength is also sufficiently long such that \(b=(k_{y}\rho _{i})^2\ll 1\), then \(\varGamma _{0}(b)\sim 1-b\), \(\varGamma _{0}'(b)\sim -1\), \(\varGamma _{1}(b)\sim b/2\), and \(\varGamma _{1}'(b)\sim 1/2\). With these values and in the low frequency limit, \(\varOmega _{i}>\omega _{1}>\omega _{2}\), the order unity terms in Q(x) cancel out making the second order terms proportional to \((\rho _{i}/L)^2\) as the leading order in the eigenvalue condition, which then yields the classical fluid KH mode equation (Rayleigh 1896; Drazin and Howard 1966),
$$\begin{aligned} \left[ \frac{\mathrm{d}^2}{\mathrm{d}x^2} - k_{y}^2 + \frac{k_y V_\mathrm{E}''(x)}{\omega -k_{y}V_\mathrm{E}(x)} \right] \phi (x) = 0. \end{aligned}$$
In producing the fluid limit the frequency \(\omega _{2}\) in the numerator of Q(x), which originates from the fluid plasma property, combines in equal part with the one in the denominator, which originates from the kinetic plasma property, to constitute the source term proportional to \(V_\mathrm{E}''\) that feeds the KH instability.
Another kinetic effect is gyro-averaging. As a result, the fluid flow due to the \(E\times B\) drift and its derivatives become smaller as the scale size of the velocity shear becomes comparable or less than an ion gyroradius. This reduces the curvature of the flow and hence lowers the KH source term (see Fig. 19). This shows that the kinetic effects are deeply entrenched in the KH mechanism, which can modify the source term substantially. The kinetic effects can be strong enough to stabilize the instability in a large portion of the parameter space allowed to it within the fluid framework thereby limiting its applicability. In addition Keskinen et al. (1988) and Satyanarayana et al. (1987) have shown that a density gradient has a stabilizing effect on the KH modes.
It is important to realize that in the fluid limit all the order unity terms exactly cancel each other in Eq. 50, making the otherwise negligible second-order terms responsible for KH instability as leading terms. This is critical to the recovery of the KH eigenvalue condition in the fluid limit, implying that the KH limit is sensitive to the choice of the initial distribution function. A number of different initial distribution functions are possible and were tried but only the particular one described by Eq. 39 yielded the classical KH eigenvalue condition in the fluid limit (Ganguli et al. 1988; Ganguli 1997). Since many distribution functions are possible but not all of them lead to the KH modes, the robustness of the KH instability in warm plasma becomes questionable in comparison to the Inhomogeneous Energy Density Driven Instability (IEDDI) discussed below, which does not depend on a particular choice and therefore may be more ubiquitous.
Inhomogeneous energy density driven instability branch
Fig. 18: Growth rate vs frequency for IEDDI instability as a function of \(b=(k_y\rho _i)^{1/2}\) in color. For these calculations \(k_{\Vert }/k_{\perp }=0.011\), \(\epsilon =\rho _i/L=0.3\), \(a=1.87\), \(\tau =5\), \(\mu =1837\), and no density gradient
The above discussion on the Kelvin–Helmholtz limit also implies that in the kinetic regime for shorter wavelengths such that the wave phase speed is larger than or of the order of the ion thermal velocity but smaller than the electron thermal velocity, i.e. \(v_{te}>(\omega _{1}-n\varOmega _{i})/k_{\Vert }\ge v_{ti}\), and \(\omega _{1}\sim n\varOmega _{i}\) the second order terms in A(x) and Q(x) may be neglected with respect to the order unity terms. This regime leads to a different branch of oscillations arising due to the inhomogeneity in the wave energy density introduced by the velocity shear (Ganguli et al. 1985). Unlike the KH instability the IEDDI can be enhanced by a density gradient (Ganguli et al. 1988; Liu et al. 2018; Ilyasov et al. 2015). Figure 18 shows the typical linear spectrum of the IEDDI. The background electric field profile used is \(E(x)=E_0 \text {sech}^2(x/L)\) with \(L=3.3\rho _i\), \(\tau =5\), \(V_\mathrm{E}/v_{ti}=0.1\), and \(k_{\Vert }=0.011 k_{\perp }\). The spectrum remains relatively unaffected for an electric field with a top hat profile (Fig. 19), although the growth rates reduce as the field profile becomes smoother. This is because the IEDDI does not depend on the local value of a specific derivative of the electric field like the KH mode.
Fig. 19: Left column shows profiles of the electric field for different values of a with \(\epsilon =0.3\). The middle column shows the profile of the second derivative normalized to the ion thermal gyroradius. The column on the right indicates the gyro-averaged second derivative
To understand the general characteristics of the two (KH and IEDDI) branches of oscillations we have considered a generic electric field profile:
$$\begin{aligned} E(x) = \frac{E_0}{A \sinh ^2(x/a)+1}, \end{aligned}$$
where \(A=1/\sinh ^2(x_0/a)\), \(x_0=L/2\), \(\epsilon =\rho _i/L\). At \(x=x_0\) the value of E(x) reduces to \(E_0/2\). For \(a=x_0/\sinh ^{-1}(1)\), \(A=1\) and \(E(x)=E_0\text {sech}^2(x/a)\). For \(a\rightarrow 0\) the profile becomes a top-hat profile. This profile is characterized by two scale lengths, L and a. In the natural environment, especially under compression, the static electric fields are likely to be generated with multiscale profiles. This also becomes apparent from our equilibrium studies in Sect. 2. In Eq. 56 while L determines the overall extent of the localization of the electric field, a determines its local gradient. For \(A\rightarrow 1\), the scale lengths a and L become comparable. The first column of Fig. 19 shows the transition of the electric field profile in Eq. 56 from a top hat to a smooth \(\text {sech}^2(x)\) as a function of increasing a. The second and the third columns of Fig. 19 show the second derivative and the gyro-averaged second derivative of the electric field. For \(a\rightarrow 0\) the gyro-averaged second derivative of the electric field becomes smaller compared to the un-averaged, indicating that the source of the KH modes becomes weaker due to the kinetic effect of gyro-averaging as a decreases. This has a stabilizing effect on the KH mode (see Eq. 59 below). On the other hand, electric field profiles with smaller a favor the IEDDI mechanism as it primarily depends on the localized nature of the electric field rather than the local value of any specific derivative (see Eq. 61 below). The gyro-averaging effect becomes more prominent as the external compression increases and the scale sizes shrink compared to the ion gyroradius. (For the KH instability in neutral fluids there is no gyro-averaging, since the particles are not charged, and this stabilizing effect does not exist in a neutral medium.)
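The gyro-averaging effect described above is simple to reproduce numerically. The sketch below (with an assumed gyroradius and illustrative values of a) averages the second derivative of the Eq. 56 profile over a circular gyro-orbit and compares its peak value with the unaveraged one, in the spirit of the middle and right columns of Fig. 19.

```python
# Illustrative sketch (assumed rho and a values): gyro-averaging the second
# derivative of the Eq. 56 profile over a circular orbit of radius rho strongly
# reduces its peak value when a << rho, weakening the KH source term.
import numpy as np

L, rho = 1.0, 0.3                             # overall width and gyroradius (assumed)
x0 = L / 2.0

def E(x, a):
    A = 1.0 / np.sinh(x0 / a)**2
    return 1.0 / (A * np.sinh(x / a)**2 + 1.0)

def d2E(x, a, h=1e-3):
    return (E(x + h, a) - 2.0 * E(x, a) + E(x - h, a)) / h**2

def d2E_gyro(x, a, n=256):                    # average over one circular gyro-orbit
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(d2E(x + rho * np.cos(theta), a))

xs = np.linspace(-3.0 * L, 3.0 * L, 601)
for a in (0.05 * L, 0.5 * L):
    raw = max(abs(d2E(x, a)) for x in xs)
    avg = max(abs(d2E_gyro(x, a)) for x in xs)
    print(f"a/L = {a/L:.2f}:  max|E''| = {raw:.2f},  max|<E''>_gyro| = {avg:.2f}")
```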
As discussed in Sect. 3.3, the general eigenvalue condition for the IEDDI is an integral equation. For weaker shear it may be approximated to a second order differential equation. The numerical solution for the truncated IEDDI eigenvalue condition is easier in the \(a\rightarrow 0\) limit when the electric field profile is top hat like. It becomes difficult as the profile becomes smoother with increasing a. The potential, Q(x)/A(x), of the second order differential equation, Eq. (50), becomes stiff and there are a number of roots in close vicinity of each other. This poses considerable difficulty in tracking the IEDDI roots by solving the differential equation. Potential barriers develop that obstruct the energy flux away from the negative energy density region created by the localized electric field that is necessary for the IEDDI (as elucidated in Eq. 61). This may partly be because of the truncation of the integral equation to second order. Ganguli et al. (1988) had to use a small density gradient in order to circumvent this difficulty to obtain the roots.
Thus, unlike the KH modes, the solution to the eigenvalue problem, Eq. 50, with the potential Q/A given by Eqs. 51 and 52 for the IEDDI is not trivial. As \(x\rightarrow \infty\), Eq. 50 has two asymptotic solutions: one that is exponentially growing and the other exponentially decaying. The decaying one is the physical solution, but the growing one can easily contaminate numerical solutions. Furthermore, Q/A has poles scattered around the complex plane that can also make finding precise eigenvalues difficult.
The effects of the exponentially growing solution can be minimized significantly by using the Riccati transform. This technique was recently applied to tearing instabilities and an explanation of how and why the method works was provided (Finn et al. 2020). In this method the second-order equation is recast in terms of the logarithmic derivative \(u =\phi '/\phi\), where the prime denotes an x derivative. This gives the transformed equation:
$$\begin{aligned} \frac{\mathrm{d}u}{\mathrm{d}x} = -\frac{Q}{A} - u^2 \end{aligned}$$
which has asymptotic solutions
$$\begin{aligned} u(x\rightarrow \infty ) = \pm i \sqrt{\frac{Q_\infty }{A_\infty }}, \end{aligned}$$
where the \(+/-\) refers to growing/decaying solutions. Therefore, the decaying solution may be chosen at \(x\rightarrow \infty\) and integrated backwards, using Eq. 57, towards \(x=0\). For modes with even parity in \(\phi (x)\), i.e., \(\phi '(0)=0\), u(x) should be zero at \(x=0\). A complex root finder (e.g. Muller's method or Newton's method) finds the appropriate eigenvalue, \(\omega\), that leads to \(u(0)=0\). A close guess for an appropriate \(\omega\) is still necessary for the root finder to converge reliably.
The spiky nature of Q/A can introduce further difficulty, but as long as the poles do not lie exactly on the real axis, a standard numerical integrator that controls accuracy will be sufficient. In the case that the poles are on the real axis (e.g. both the real and imaginary parts of \(\phi\) are zero simultaneously), a numerical integrator based on Padé approximations is useful (Fornberg and Weideman 2011). These numerical techniques allow robust solutions to be found without the need to add any density gradient (as was needed in Ganguli et al. 1988).
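To make the procedure concrete, the sketch below applies the same Riccati-transform shooting to the simpler fluid KH eigenvalue problem, Eq. 55, with \(V_\mathrm{E}(x)=V_0\,\mathrm{sech}^2(x/L)\), rather than to the full kinetic Q/A of Eqs. 51–52, and it uses a plain secant iteration in place of Muller's or Newton's method. The profile, wavenumber, and the initial guesses for \(\omega\) are assumptions for illustration and may need adjustment.

```python
# Sketch of the Riccati shooting method (Eqs. 57-58) applied to the fluid KH
# equation, Eq. 55, for V_E = V0 sech^2(x/L).  Integrate du/dx = -Q - u^2 from
# large x inward on the decaying branch and require u(0) = 0 for an even mode.
import numpy as np

V0, L, ky = 1.0, 1.0, 1.0                     # assumed profile and wavenumber
VE   = lambda x: V0 / np.cosh(x / L)**2
VEpp = lambda x: (2*V0/L**2) / np.cosh(x/L)**2 * (2*np.tanh(x/L)**2 - 1/np.cosh(x/L)**2)

def Q(x, w):                                  # potential of Eq. 55
    return -ky**2 + ky * VEpp(x) / (w - ky * VE(x))

def u_at_origin(w, X=12.0, N=6000):
    h = -X / N
    x, u = X, -ky + 0j                        # decaying branch: phi ~ exp(-ky*x), u = -ky
    f = lambda x, u: -Q(x, w) - u**2          # Riccati equation, Eq. 57
    for _ in range(N):                        # fixed-step RK4, integrating toward x = 0
        k1 = f(x, u); k2 = f(x + h/2, u + h*k1/2)
        k3 = f(x + h/2, u + h*k2/2); k4 = f(x + h, u + h*k3)
        u += h * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        x += h
    return u

w0, w1 = 0.35 + 0.15j, 0.40 + 0.20j           # assumed initial guesses for omega
f0, f1 = u_at_origin(w0), u_at_origin(w1)
for _ in range(40):                           # secant iteration for u(0) = 0
    w2 = w1 - f1 * (w1 - w0) / (f1 - f0)
    w0, f0 = w1, f1
    w1 = w2
    f1 = u_at_origin(w1)
    if abs(f1) < 1e-10:
        break
print("eigenvalue omega =", w1, " (Im(omega) > 0 indicates KH growth)")
```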
Both the KH and IEDDI branches and their applications have been extensively studied in the literature and are not repeated here. Instead, below we review the physical mechanisms that are responsible for the two branches of oscillations.
Physical origin of the Kelvin–Helmholtz instability
Although both the branches mentioned above are sustained by the velocity gradient, they rely on different mechanisms for drawing the free energy from it. This is best understood by analyzing the energy balance conditions. For the KH modes the energy quadrature can be derived following Ganguli (1997) as
$$\begin{aligned} \frac{\partial }{\partial t} \int \mathrm{d}x\left[ \frac{|E_{1}|^2}{8\pi } + \frac{n_{0}m_{i}}{2}\frac{|cE_{1}|^2}{B^2} + \frac{n_{0}m_{i}}{2}|x_1|^2 V_\mathrm{E}V_\mathrm{E}''(x) \right] =0, \end{aligned}$$
where \(E_{1}=-ik_{y}\phi\), \(x_{1}=v_{1x}/(\omega _r-k_{y}V_E(x))\), \(v_{1x}=-cE_{1y}/B\), and \(\mathbf {E}_{1}\) and \(\mathbf {v}_{1}\) are the fluctuating electric field and velocity. The first two terms of Eq. 59 are due to the fluctuating wave electric field. The first term represents the electrostatic wave energy density in vacuum, the second term is the wave-induced kinetic energy of the ions. The energy balance condition in Eq. 59 indicates that the reduction in the equilibrium flow energy, i.e., \((\langle V_\mathrm{E}(x+x_1)\rangle ^2 - V_\mathrm{E}^2(x))=|x_{1}|^2 V_\mathrm{E}(x)V_\mathrm{E}''(x)+O((1/L)^3)\), at a given position x, which occurs due to time averaging by the waves, is available as the free energy necessary for the growth of the KH instability. The time averaging removes the first derivative and therefore the free energy is proportional to the second derivative of the dc electric field. Consequently, to leading order the KH instability is explicitly dependent on the second derivative, i.e., the curvature, of the electric field. This condition may be a limiting factor to the viability of the KH instability compared to its sister instability, the IEDDI, which does not depend on any particular velocity derivative, as we discuss next.
Physical origin of the IEDDI
When both the electrons and the ions are cold fluids it leads to the classical KH description as shown above. The ions play the crucial role while the electrons simply provide a charge neutralizing background. But for \(k_{\Vert }\ne 0\), \(T_{e}\ne 0\) and for waves with \(\omega _{1}\sim n\varOmega _{i}\) the electron response can be adiabatic, i.e., \(v_{te}>(\omega _1-n\varOmega _{i})/k_{\Vert }\ge v_{ti}\). In this limit ignoring the \((\rho _{i}/L)^2\) terms in A(x) and Q(x) we obtain the eigenvalue condition for the IEDDI branch. To understand the physics of this branch of oscillations we may assume the ion response to be fluid so that \(b\ll 1\) and the eigenvalue condition for the IEDDI reduces to
$$\begin{aligned} \left[ \frac{\mathrm{d}^2}{\mathrm{d}\bar{x}^2} - \bar{k}_{y}^2 + \left( \frac{\omega _{1}}{\varOmega _{i}}\right) ^2 - 1 \right] \phi = 0 \end{aligned}$$
where \(\bar{x}=x/\rho _{s}\), \(\rho _{s}=c_{s}/\varOmega _{i}\), \(c_{s}=\sqrt{T_{e}/m_{i}}\), and \(\bar{k}_{y}=k_{y}\rho _{s}\). Following the procedure outlined in Ganguli (1997) we obtain the condition:
$$\begin{aligned} S+\frac{2}{\varOmega _{i}^2} \int _{-\infty }^{\infty } d\bar{x}\, \gamma (\omega _{r}-k_{y}V_{E}(x))|\phi |^2 = 0, \end{aligned}$$
where \(S=(\phi ^{*}\phi '-\phi \phi '^{*})/2i\) is the flux and is a positive real number, \(\gamma\) is the growth rate for the IEDDI, \(\phi ^{*}\) is the complex conjugate of \(\phi\), and the primes indicate spatial derivatives. In order for Eq. 61 to be valid the second term must be negative which implies that the product \(\gamma (\omega _{r}-k_{y}V_{E})<0\) in at least a finite interval of space, since other factors are positive definite. Therefore, the necessary condition for IEDDI growth, i.e., \(\gamma >0\), is that the Doppler shifted frequency \((\omega _r-k_{y}V_{E})\) be negative in some region of space.
To understand the physical consequences of \((\omega _{r}-k_{y}V_\mathrm{E})<0\) that can lead to wave growth, consider the ion-cyclotron waves. The homogeneous electrostatic dispersion relation for the ion cyclotron waves is (Drummond and Rosenbluth 1962)
$$\begin{aligned} D(\omega ) = 1 + \tau - \varGamma _{0}(b) - \sum _{n>0}\frac{2\omega ^2}{\omega ^2-n^2\varOmega _{i}^2}\varGamma _{n}(b)=0 \end{aligned}$$
The wave energy density is given by
$$\begin{aligned} U\propto \omega \frac{\partial D}{\partial \omega } =\omega \left( \sum _{n>0} \frac{4\omega n^2 \varOmega _{i}^2\varGamma _{n}(b)}{ (\omega ^2-n^2\varOmega _{i}^2)^2} \right) \equiv \omega ^2 \varXi (\omega ) \end{aligned}$$
Clearly, the ion cyclotron waves are positive energy density waves. However, introduction of a uniform electric field in the x direction initiates an \(\mathbf {E}\times \mathbf {B}\) drift in the y-direction and consequently there is a Doppler shift in the dynamical frequency, i.e., \(\omega \rightarrow \omega _{1}\). The energy density in the presence of the Doppler shift is \(U_{I}\propto \omega \omega _{1}\varXi (\omega )\), which can be negative provided \(\omega \omega _{1}<0\).
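A small numerical illustration of this sign change is given below: the homogeneous ion-cyclotron root of Eq. 62 lies between the first two harmonics, and the corresponding energy density of Eq. 63 changes sign once a sufficiently large local Doppler shift \(k_yV_\mathrm{E}\) is applied. The values of \(\tau\), b, and the drift used here are assumptions for illustration.

```python
# Sketch (assumed tau, b, and Doppler shift): find the ion-cyclotron root of
# Eq. 62 between Omega_i and 2*Omega_i and evaluate the sign of the wave energy
# density, U ~ omega * omega_1 * Xi(omega) (Eq. 63), with and without a local drift.
import numpy as np
from scipy.special import ive                  # Gamma_n(b) = exp(-b) I_n(b)
from scipy.optimize import brentq

tau, b, Om, nmax = 1.0, 1.0, 1.0, 10

def D(w):                                      # Eq. 62, harmonic sum truncated at nmax
    s = sum(2*w**2 * ive(n, b) / (w**2 - n**2 * Om**2) for n in range(1, nmax + 1))
    return 1 + tau - ive(0, b) - s

w = brentq(D, 1.01 * Om, 1.99 * Om)            # root between the first two harmonics
Xi = sum(4*n**2 * Om**2 * ive(n, b) / (w**2 - n**2 * Om**2)**2 for n in range(1, nmax + 1))
for kyVE in (0.0, 2.0 * w):                    # region-II (no drift) vs region-I (drift)
    print(f"k_y V_E = {kyVE:.2f}: sign of wave energy density = {np.sign(w * (w - kyVE) * Xi):+.0f}")
```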
Fig. 20: Geometry of Inhomogeneous Energy Density Driven Instability (IEDDI)
Now consider the simplest example of an inhomogeneous electric field geometry given by a piece-wise continuous configuration as shown in Fig. 20 in which a uniform electric field is localized in region-I of extent L. It is clear that because of the localized nature of the \(\mathbf {E}\times \mathbf {B}\) drift in region-I, the energy density in region-I can become negative provided the Doppler shifted frequency \(\omega _{1}<0\), while it remains positive in region-II. A nonlocal wave packet can couple these two regions so that a flow of energy from region-I into region-II will enable the wave to grow. In region-I it is a negative energy wave while it is a positive energy wave in region-II. The situation is complementary to the two-stream instability. In that case there are two waves, one of positive energy density and the other of negative energy density, at every location, and their coupling in velocity space leads to the instability. In the IEDDI case there is only one wave but two regions, one in which the wave energy density is negative and positive in the other. The coupling of these two regions in the configuration space by a wave packet leads to the instability (Ganguli et al. 1985).
This simple idea may be quantified further using the wave-kinetic framework. The growth of the wave in region-I implies a loss of energy from that region. By conservation of energy, this must be the result of convection of energy into region-II in the absence of local sources or sinks. The rate of growth of the total energy deficit in region-I is proportional to the growth rate of the wave, the wave energy density \(U_I\) in region-I, and the volume of region-I given by the extent in the x direction of region-I times a unit area \(A_{\perp }\) in the plane perpendicular to x. The rate of energy convection through \(A_{\perp }\) is \(V_{g}U_{II}\), where \(V_{g}\) is the group velocity in the x-direction and \(U_{II}\) is the wave energy density in region-II, which is positive since the electric field is absent in this region. We can then write the power balance condition as
$$\begin{aligned} \gamma U_{I} L A_{\perp } = - V_{g} U_{II} A_{\perp }, \end{aligned}$$
which implies that the growth rate of the IEDDI is \(\gamma \propto -U_{II}/U_{I}\). Consequently, if \(U_{I}\) is negative then the growth rate is positive showing that the growth of the wave can be sustained by convection of energy into region-II from region-I. On the other hand, if \(U_{I}\) is positive then the convection of energy out of region-I would lead to a negative growth rate and, therefore, to damping of the waves. This shows that if the wave energy density is sufficiently inhomogeneous to change its sign over a small distance then it can support wave growth. This is in contrast to the KH mechanism in which there is an exchange of energy between the medium and the wave via local plasma flow gradient (Eq. 59). In the IEDDI mechanism such an exchange is not necessary. Instead, as described in Eq. 61, the IEDDI is dependent on energy transport from one region to another such that the sign of energy density changes.
In addition to the driving mechanism described above, dissipative mechanisms are also present in a realistic system. If the energy gained from the dc electric field is larger than the energy dissipated the wave can exhibit a net growth. It is important to note that this phenomenon is not restricted to a resonant group of particles in velocity space. The only requirement is that \((\omega _r-k_y V_E)<0\) in a localized region. Thus, the bulk plasma in this region can participate, which results in a broadband frequency spectrum.
Although we used the ion cyclotron waves as a specific example, the IEDD mechanism described here can affect other waves in the system and therefore represents a genre of instabilities in plasmas that contains a localized electric field. This makes the transverse electric field a unique source of free energy.
Magnetron analogy of the IEDDI
A nonlinear description of the wave–particle interaction responsible for IEDDI was given by Palmadesso et al. (1986). It was shown that the fluctuating wave electric field \(\mathbf {E}_{1}\) leads to an average secular (ponderomotive) force \(F_{2y}\sim O(\gamma E_{1y}^2)\) in the y-direction (see Fig. 21). This leads to a \(\mathbf {F}_{2y}\times \mathbf {B}\) drift in the x-direction, which in the small gyroradius limit is \(u_{2x}\propto -\gamma (\omega -k_{y}V_\mathrm{E})^{-3}E_{1y}^2\), leading to a shift in the particle position in the x-direction given by \(\delta x=\int u_{2x}\mathrm{d}\tau \sim E_{1y}^2\). As there is a dc electric field \(E_{0}(x)\) in the x direction there is a potential energy gain given by \(E_{0}(x)\delta x\) if \((\omega _{r}-k_{y}V_\mathrm{E})<0\). Since the particle motion is perpendicular to \(F_{2y}\) there can be no net increase in the particle energy. Thus, the energy gained by the particles by falling in the potential of the dc electric field in the x-direction is lost to the waves in the y-direction. Consequently, \(E_{1y}\) grows and \(F_{2y}\) is further enhanced, which closes a positive feedback loop as shown in Fig. 22. This leads to the instability in a way similar to a magnetron.
The second order ion drift in the direction of the electric field constitutes a polarization current that reduces the magnitude of the external electric field. Such polarization current was observed in the Particle-in-Cell (PIC) simulation of the IEDDI by Nishikawa et al. (1988).
Fig. 21: Geometry of the ponderomotive force and nonlinear particle drift
Fig. 22: Positive feedback loop for IEDDI instability
Intermediate frequency limit: partially magnetized ions and fully magnetized electrons
As compression increases the self-consistent electric field becomes more intense and narrower in scale size. In the intermediate compression regime the scale size is narrower than an ion gyroradius but larger than an electron gyroradius, i.e., \(\rho _{i}>L>\rho _{e}\). As discussed in Sect. 3.1, the ions in this regime do not experience the electric field over their entire gyro-orbit. Consequently, the ions experience a lower gyro-averaged electric field than the electrons. For sufficiently localized electric field the ions experience vanishingly small electric field. In this regime for intermediate frequencies and short wavelengths, i.e., \(\varOmega _{i}<\omega <\varOmega _{e}\) and \(k_{y}\rho _{i}>1>k_{y}\rho _{e}\), the ions behave as an unmagnetized plasma species but the electrons are magnetized. The cyclotron harmonics for the ions can be integrated to rigorously show their unmagnetized character (Ganguli et al. 1988). Since the wave frequency is much smaller than the electron cyclotron frequency it will suffice to consider only the \(n=0\) cyclotron harmonic term for the electrons. Also, for simplicity, we assume that the velocity shear that the electrons experience is small so that we may use \(\eta =1\) for the electrons. The ions do not experience a Doppler shift so the phase speed of the waves can remain larger than the thermal velocity, which allows the assumption of fluid ions in which the density perturbation is given by Ganguli et al. (1988),
$$\begin{aligned} n_{1i}(x)=\frac{1}{4\pi q_{i}} \frac{\omega _{pi}^2}{\omega ^2}\left( k_y^2 + k_{\Vert }^2 - \frac{\mathrm{d}^2}{\mathrm{d}x^2}\right) \phi (x). \end{aligned}$$
However, the electrons experience a spatially varying Doppler shift. The phase speed of the waves can become comparable to the electron thermal velocity at some locations. So for generality we use the kinetic response for the electron, which leads to their density perturbation:
$$\begin{aligned} n_{1e}(x)= & {} -\frac{\omega _{pe}^2}{4\pi v_{te}^2 q_{e}} \left[ -\left( \frac{\omega _{1}+\omega _{2e}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{te}} \right) Z\left( \frac{\omega _{1}-\omega _{2e}}{\sqrt{2}|k_{\Vert }|v_{te}} \right) \frac{\mathrm{d}\varGamma _{n}(b_e)}{\mathrm{d}b}\rho _{e}^2 \frac{\mathrm{d}^2}{\mathrm{d}x^2} \right. \nonumber \\&\left. +1+\left( \frac{\omega _{1}+\omega _{2e}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{te}} \right) Z\left( \frac{\omega _{1}-\omega _{2e}}{\sqrt{2}|k_{\Vert }|v_{te}} \right) \varGamma _{0}(b_e) \right] \phi (x). \end{aligned}$$
Combining Eqs. 65 and 66 with the Poisson equation we get the general eigenvalue condition of the EIH instability in the kinetic limit that includes the electron diamagnetic drift.
$$\begin{aligned} &\frac{\mathrm{d}^2 \phi }{\mathrm{d}x^2} + Q(x)\phi = 0 \nonumber \\ Q(x)&= \frac{ \left( 1-\frac{\omega _{pi}^2}{\omega ^2}\right) (k_y^2+k_{\|}^2) -\frac{\omega _{pe}^2}{v_{te}^2}\left[ 1+\left( \frac{\omega _{1}+\omega _{2e}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{te}} \right) Z\left( \frac{\omega _{1}-\omega _{2e}}{\sqrt{2}|k_{\Vert }|v_{te}} \right) \varGamma _{0}(b_e)\right] }{ 1-\frac{\omega _{pi}^2}{\omega ^2} + \frac{\omega _{pe}^2}{\varOmega _{e}^2} \left( \frac{\omega _{1}+\omega _{2e}-\omega ^*}{\sqrt{2}|k_{\Vert }|v_{te}} \right) Z\left( \frac{\omega _{1}-\omega _{2e}}{\sqrt{2}|k_{\Vert }|v_{te}} \right) \frac{\mathrm{d}\varGamma _{n}(b_e)}{\mathrm{d}b} } \end{aligned}$$
Equation (67) shows that there are two competing terms proportional to the density gradient (\(\omega ^*\)) and the velocity gradient (\(\omega _1=\omega -k_yV_E\) and \(\omega _2\)), which can dominate the evolution of compressed layers through the waves they generate. The relative dominance of the two terms in (\(\omega -k_y V_E + \omega _2-\omega ^*\)) in the dispersion relation decides which way the system will evolve. If \(k_y V_E\sim (k_y L)\omega _s\gg \omega ^*\) then the velocity shear driven processes clearly dominate because on velocity gradient scale the density variation is minimal. The eigenmodes generated in the compressed layers generally have \(k_y L \sim 1\). Therefore, if the shear frequency is comparable to the diamagnetic drift frequency then a comparison of \(\omega _2\) with \(\omega ^*\) determines the dominant process as we elaborate in Figs. 25 and 26 in the following.
In the long wavelength (\(k_{\Vert }\rightarrow 0\), \(k_y\rightarrow 0\)) limit Eq. 67 reduces to
$$\begin{aligned} \frac{d^2\phi }{dx^2} - (k_y^2+k_{\Vert }^2)\phi + \left( \frac{\omega _{pe}^2}{\varOmega _{e}^2+\omega _{pe}^2}\right) \frac{\omega ^2}{(\omega ^2-\omega _{LH}^2)} \left[ \frac{k_y(V_{E}''-\varOmega /L_n)}{\omega _{1}} -\frac{k_{\Vert }^2\varOmega _{e}^2}{\omega _{1}^2} \right] \phi (x)=0 \end{aligned}$$
Equation 68 includes the modified two-stream instability (McBride et al. 1972), which was not in Fletcher et al. (2019) since \(k_{\Vert }=0\) was assumed. The modified two-stream instability dispersion relation can be recovered if the electric field curvature and the density gradient are neglected in Eq. 68. Including the density gradient, Eq. 68 represents the lower-hybrid drift instability (Krall and Liewer 1971). The lower-hybrid drift modes depend upon the density gradient and hence their growth relaxes the density gradient. If the density gradient is ignored but \(V_{E}''\not =0\) then Eq. 68 reduces to the eigenvalue condition for the electron-ion hybrid (EIH) instability (Ganguli et al. 1988), where the free energy is obtained from the sheared electron flow through fast time averaging by the perturbations, similar to the KH modes discussed earlier. The growth of the EIH waves relaxes the velocity shear.
From Eq. 68 it is clear that the intermediate frequency waves depend on a double resonance \(\omega \simeq \omega _\mathrm{LH}\simeq k_y V_\mathrm{E} (x)\). The spatial variation of \(k_y V_E (x)\) is particularly important because at some point in x the argument of the Z function in Eq. 67 can become of the order of unity so that Landau damping cannot be ignored unless \(k_{\Vert }\) is sufficiently small. Hence, the limit \(k_{\Vert }\rightarrow 0\) where Landau damping is eliminated and both the EIH and LHD instability growth are maximized is used to determine the most likely modes that will arise in the intermediate frequency range in compressed plasma. The modified two stream instability, whose modification due to shear flow has not been studied sufficiently in the literature, requires \(k_{\Vert }\ne 0\). It is included in the last term in Eq. 68 but its contribution is minimal because for \(k_{\Vert }\rightarrow 0\) the growth rate of the intermediate frequency waves is largest.
For dense plasmas of interest \(\omega _{pe}>\varOmega _{e}\), so \(\omega _{LH}\simeq \sqrt{(\varOmega _i \varOmega _e )}\) and the first factor in the third term of Eq. 68 is about one. In the \(k_{\Vert }\rightarrow 0\) limit the eigenmode equation, Eq. 68, in dimensionless form becomes
$$\begin{aligned} \left\{ \frac{\mathrm{d}^2}{\mathrm{d}\bar{x}^2} - \bar{k}^2 + \left( \frac{\bar{\omega }^2}{\bar{\omega }^2-1}\right) \frac{ \bar{k} (\alpha _s \bar{V}''_\mathrm{E}(\bar{x}) - \frac{L}{L_n}) }{ \bar{\omega } - \bar{k}\alpha _s \bar{V}_\mathrm{E}(\bar{x})} \right\} \phi (\bar{x})=0, \end{aligned}$$
where \(\bar{x}=x/L\), \(\bar{\omega }=\omega /\omega _{LH}\), \(\bar{k}=k_{y}L\), \(\bar{V}_\mathrm{E}=V_\mathrm{E}/V_{0}\), \(V_{0}=cE_{0}/B_{0}\), and \(\alpha _s=V_{0}/(L\varOmega _{e})\) is the shear parameter.
Eigenfunctions for \(L_n/L=1\) and \(L_n/L=\infty\) and \(\alpha =1\) with \(k_y L\) chosen to maximize the growth rate
Linear growth rate as a function of real frequency, colored by associated \(k_y L\) value. On the left is the fluid case, where the electric field balances the density gradient. On the right is the limit of the kinetic case. Reproduced from Fig. 14 of Fletcher et al. (2019)
Figure 23 shows two solutions to Eq. 69 (i.e., the real and imaginary parts of the eigenfunctions). Figure 24 is a plot of the linear growth rate and the real frequency obtained from solving the eigenvalue condition given in Eq. 69. The eigenfunctions and eigenvalues were found via a shooting method in which the large-\(\bar{x}\) solution goes to zero at infinity. The density profile \(n(x)=n_0 \tanh (x/L_n)\) and the electric field profile \(E(x)=E_0 \,\text {sech}^2 (x/L)\) are chosen to match the self-consistent low-\(\beta\) (Ganguli et al. 2018) dipolarization front discussed in Sect. 2.2, and the parameters are based on the MMS observations. As the shear parameter is increased, implying higher compression, the growth rate increases. The real frequency is around the lower-hybrid frequency, while Doppler shifting broadens the frequency spectrum. The bandwidth increases with the shear parameter.
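For readers who wish to reproduce this kind of calculation, a minimal numerical sketch of such a shooting solution is given below. It is not the code used for Figs. 23 and 24: the normalized flow profile \(\bar{V}_\mathrm{E}(\bar{x})=\text{sech}^2(\bar{x})\), the constant \(L/L_n\) drive, the integration half-width, the value of \(\bar{k}\), and the initial guess for the complex eigenfrequency are all illustrative assumptions.

```python
# Minimal sketch of a shooting solution to Eq. 69 (not the authors' code).
# Assumptions: normalized flow profile V_E(x) = sech^2(x), a constant
# L/L_n drive term, decaying boundary conditions phi ~ exp(-k|x|), and an
# illustrative initial guess for the complex eigenfrequency.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

k = 0.9           # k_y L (assumed near the fastest-growing wavenumber)
alpha_s = 1.0     # shear parameter V_0/(L Omega_e)
L_over_Ln = 1.0   # density-gradient (LHD) drive; set to 0 to isolate the EIH term
x_max = 8.0       # integration half-width in units of L

def VE(x):                       # normalized E x B flow profile
    return 1.0 / np.cosh(x) ** 2

def VE_pp(x):                    # second derivative of sech^2(x)
    s, t = 1.0 / np.cosh(x), np.tanh(x)
    return 4.0 * s**2 * t**2 - 2.0 * s**4

def rhs(x, y, w):                # Eq. 69 rewritten as two first-order ODEs
    phi, dphi = y
    drive = (w**2 / (w**2 - 1.0)) * k * (alpha_s * VE_pp(x) - L_over_Ln) \
            / (w - k * alpha_s * VE(x))
    return [dphi, (k**2 - drive) * phi]

def mismatch(wri):
    # Launch a solution that decays to the left and measure how badly the
    # decay condition phi' + k*phi = 0 fails at the right boundary.
    w = wri[0] + 1j * wri[1]
    sol = solve_ivp(rhs, [-x_max, x_max], [1.0 + 0j, k + 0j],
                    args=(w,), rtol=1e-8, atol=1e-10)
    r = sol.y[1, -1] / sol.y[0, -1] + k
    return [r.real, r.imag]

# Initial guess: real frequency just below omega_LH with a modest growth rate.
w_re, w_im = fsolve(mismatch, [0.9, 0.1])
print(f"omega/omega_LH ~ {w_re:.3f} + {w_im:.3f} i")
```

A scan of such solutions over \(\bar{k}=k_yL\) and the shear parameter, with a reasonably good starting guess at each step, is what produces growth-rate curves like those in Fig. 24.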
In the two cases shown, the growth peaks for \(k_y L\sim 1\). The wavelength is much longer than \(\rho _e\) since \(L\gg \rho _e\). As \(L_n/L\) is reduced, the wavelengths become shorter and in the limit of uniform electric field (\(L\rightarrow \infty\)) it is well known that \(k_{y}\rho _{e}\sim 1\) (Krall and Liewer 1971). Note that these discrete eigenmodes in x are still continuously dependent on \(k_y\). The parallel wave vector, \(k_{\Vert }\), is assumed to be zero. In Sect. 4.2 the nonlinear evolution of this equilibrium condition and its observable signatures are studied by PIC simulation, which shows that the spectral bandwidth becomes even broader nonlinearly as lower frequency waves are naturally triggered with increasing L.
Since Eq. 69 contains both density and electric field gradients, an interesting question arises: which of these is responsible for the waves?
Ratio of the two driving terms in Eq. 69 as a function of x (left) and as a function of layer width (right). Reproduced from Fig. 15 of Fletcher et al. (2019)
To answer this question, Fig. 25 compares the relative strength of the LHD and the EIH terms in Eq. 69. The left plot shows the ratio of these EIH to LHD instability source terms for the low beta MMS parameters (Ganguli et al. 2018; Fletcher et al. 2019), which can be reproduced by our electrostatic equilibrium model discussed in Sect. 2.1 with \(R_i=R_e=1\), \(S_i=S_e=0.793\), \(X_{g1e}=-0.438\rho _i\), \(X_{g2e}=-0.346\rho _i\), \(X_{g1i}=-0.0390\rho _i\), \(X_{g2i}=0.850\rho _i\), \(n_0=0.355\) cm\(^{-3}\), \(T_{e0}=654.62\) eV, \(T_i/T_e=6.714\), and \(B_0=12.55\) nT. It shows that even for weak compression, as in the case considered, the EIH term is three times as large as the LHD term. In the stronger compression high beta case (Fig. 26), the EIH term is more than an order of magnitude larger. The right plot shows the maximum of the ratio of EIH/LHD terms as the compression is increased. This plot was made by using the same parameters as the low \(\beta\) case and compressing and expanding the layer via choice of \(X_{g1\alpha }\) and \(X_{g2\alpha }\). Clearly, the EIH instability dominates over the LHD instability as long as the scale size of the density gradient is comparable to ion gyroradius or less.
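A rough order-of-magnitude check of this comparison can be made with the simple analytic profile used above rather than the self-consistent equilibrium behind the figure. If the normalized flow is taken as \(\bar{V}_\mathrm{E}(\bar{x})=\text{sech}^2(\bar{x})\), then \(|\bar{V}_\mathrm{E}''(\bar{x})|\) peaks at the value 2 at \(\bar{x}=0\), so the peak ratio of the two drive terms in Eq. 69 is approximately

$$\begin{aligned} \left| \frac{\alpha _s \bar{V}_\mathrm{E}''(\bar{x})}{L/L_n}\right| _{\max } \simeq 2\,\alpha _s\,\frac{L_n}{L}, \end{aligned}$$

which is of order a few when the shear parameter and \(L_n/L\) are of order unity, broadly consistent with the factor of about three seen in the left panel of Fig. 25. This estimate should be read only as a consistency check, since the figure itself is based on the kinetic equilibrium model.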
Magnetic field gradients result in a stronger EIH instability (Romero and Ganguli 1994), but a weaker LHD instability (Davidson et al. 1977). In Sect. 2.1, we showed that a gradient in the temperature can also develop, which can make the pressure gradient in the layer (and hence the diamagnetic drift) weaker, but not significantly affect the ambipolar electric field. This also favors the EIH instability over the LHD instability. Thus, the EIH mechanism will dominate wave generation and hence the nonlinear evolution in a compressed plasma system in the intermediate frequency range.
In general, the self-consistent generation of an ambipolar electric field is unavoidable in warm plasmas with a density gradient scale size comparable to or less than the ion gyroradius. This raises an interesting question: How ubiquitous in nature is the classical LHD instability? To examine this we generalize the Fig. 25 results to include electromagnetic effects in the equilibrium condition and compare the relative strengths of the two drivers of the electrostatic instability in Eq. 69: (1) \(\alpha _s \bar{V}_E''(\bar{x})\), which drives the EIH instability, and (2) \(-L/L_n\), which drives the density gradient-driven LHD instability. Using the electromagnetic equilibrium model of Sect. 2.2 we can investigate the magnitude of these two driving terms. In general for \(T_i/T_e>1\), which is typical in space plasmas (particularly in the magnetotail), we find that the EIH instability drive dominates. In the opposite limit \(T_i/T_e<1\), which is typical in laboratory plasmas, LHD tends to dominate. Figure 26 shows the same ratio of terms for different values of \(\beta _e\). As \(\beta _e\) increases, the EIH term becomes more dominant because the ambipolar electric field intensifies with \(\beta\), as shown in Fig. 10. For typical conditions in the magnetotail (high \(\beta _e\) and \(T_i/T_e\)), the EIH term is greater than the LHD term. The dominance of the EIH over the LHD waves becomes further evident in the nonlinear analysis in Sect. 4.
EIH to LHD instability growth term ratio vs temperature ratio and \(\beta _e\) for the equilibrium model where the width of the transition layer is equal to the ion thermal gyroradius. Red indicates that the EIH drive term dominates
Higher frequency transverse flow shear driven modes
As compression increases further, so that \(\rho _{i}\gg L\ge \rho _{e}\), even higher frequency modes with \(\omega _{1}\le \varOmega _{e}\) become possible. For these modes the ions do not play any important role other than providing a charge-neutralizing background, and they may be ignored. The dispersion relations become similar to those of the KH and IEDDI modes discussed in Sect. 3.3.1, but for the electron species. By symmetry, for \(\omega _{1}<\varOmega _{e}\) the electron KH modes are recovered and for \(\omega _{1}\sim n\varOmega _{e}\) the electron IEDDI is recovered.
Stability of the Vlasov equilibrium including \(V_{\Vert }(x)\)
In Sects. 3.3.1 through 3.3.3 we discussed the waves that are driven by the shear in transverse flows. The transverse gradient in parallel flows can also spontaneously generate short wavelength, high frequency waves within the layer that can relax the gradients and lead to a steady state. Large scale, parallel propagating MHD waves (such as the kink or Alfvén modes) may also be generated due to the flux tube perturbation or the velocity shear (current gradients); however, these low-frequency, long-wavelength waves will not be effective in the relaxation of the stress that builds up transverse to the magnetic field, as discussed in Sect. 2. Hence, they are not of immediate interest to us here.
As discussed in Sect. 2.2.3, large-scale magnetic field curvature can lead to a potential difference along the magnetic field. This originates because the global compression is strongest at a particular point and decreases away from it; hence the transverse electrostatic potential generated by the compression also decreases proportionately away from this point along the magnetic field. The potential difference along the field line results in a magnetic field aligned electric field, as sketched in Fig. 11. Non-thermal particles can be accelerated by the parallel electric field to form a beam along the magnetic field direction with a transverse spatial gradient, i.e., \(\mathrm{d}V_{\Vert }/\mathrm{d}x\). The gradient in the parallel flow is also a source of free energy. This has been established both theoretically (D'Angelo 1965; Lakhina 1987; Gavrishchaka et al. 1998, 2000; Ganguli et al. 2002) and through laboratory experiments (D'Angelo 1965; Agrimson et al. 2001, 2002; Teodorescu et al. 2002a, b). Like its transverse counterpart, the spatial gradient in the parallel flow can also support a hierarchy of oscillations. Below we summarize the physical origin of these waves.
Consider a uniform magnetic field in the z direction with a transverse gradient in the flow along the magnetic field (\(\mathrm{d}V_{\Vert }/\mathrm{d}x\)). The background plasma condition is sketched in Fig. 27. Unlike the transverse flow shear, the parallel flow shear does not affect the particle gyro-motion, which simplifies the analysis considerably. For simplicity consider a locally linear flow, i.e., \(V_{\Vert ,\alpha }(x)=V_{\Vert ,\alpha }+(\mathrm{d}V_{\Vert ,\alpha }/\mathrm{d}x)x\), where \(V_{\Vert ,\alpha }\) and \(\mathrm{d}V_{\Vert ,\alpha }/\mathrm{d}x\) are constants, \(\alpha\) represents the species, and let \(\mathrm{d}V_{\Vert ,e}/\mathrm{d}x=\mathrm{d}V_{\Vert ,i}/\mathrm{d}x\equiv \mathrm{d}V_{\Vert }/\mathrm{d}x\). Transforming to the ion frame (i.e., \(V_{\Vert ,i}=0\)), \(V_{\Vert ,e}\equiv V_{\Vert }\) represents the relative electron-ion parallel drift. Although a nonlocal eigenvalue condition is desirable, a local limit exists for the parallel flow shear driven modes.
Geometry for parallel shear flow
First consider the general dispersion relation for waves with \(\omega \ll \varOmega _{e}\), so that only the \(n = 0\) cyclotron harmonic of the electrons is sufficient. For this condition the dispersion relation is (Ganguli et al. 2002)
$$\begin{aligned}&1 + \sum _{n} \varGamma _{n}(b) F_{ni} + \tau (1+F_{0e})=0, \end{aligned}$$
$$\begin{aligned}&F_{ni} = \left( \frac{\omega }{\sqrt{2}|k_{\Vert }|v_{ti}}\right) Z\left( \frac{\omega -n\varOmega _{i}}{\sqrt{2}|k_{\Vert }|v_{ti}}\right) \nonumber \\&\quad -\frac{k_{y}}{k_{\Vert }}\frac{1}{\varOmega _{i}}\frac{\mathrm{d}V_{\Vert }}{\mathrm{d}x} \left[ 1+ \left( \frac{\omega -n\varOmega _{i}}{\sqrt{2}|k_{\Vert }|v_{ti}}\right) Z\left( \frac{\omega -n\varOmega _{i}}{\sqrt{2}|k_{\Vert }|v_{ti}}\right) \right] \end{aligned}$$
$$\begin{aligned}&F_{0e}= \left( \frac{\omega -k_{\Vert }V_{\Vert }}{\sqrt{2}|k_{\Vert }|v_{te}}\right) Z\left( \frac{\omega -k_{\Vert }V_{\Vert } }{\sqrt{2}|k_{\Vert }|v_{te}}\right) \nonumber \\&\quad +\frac{k_{y}}{k_{\Vert }}\frac{1}{\mu \varOmega _{i}}\frac{\mathrm{d}V_{\Vert }}{\mathrm{d}x} \left[ 1+ \left( \frac{\omega -k_{\Vert }V_{\Vert }}{\sqrt{2}|k_{\Vert }|v_{te}}\right) Z\left( \frac{\omega -k_{\Vert }V_{\Vert }}{\sqrt{2}|k_{\Vert }|v_{te}}\right) \right] . \end{aligned}$$
In the absence of shear (i.e., \(\mathrm{d}V_{\Vert }/\mathrm{d}x\equiv V_{\Vert }'=0\)), the dispersion relation reduces to the case of a homogeneous flow as discussed by Drummond and Rosenbluth (1962) and applied to space plasmas by Kindel and Kennel (1971).
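In practice, evaluating Eqs. 70-72 numerically requires the plasma dispersion function Z. A short sketch is given below; it relies only on the standard identity \(Z(\zeta )=i\sqrt{\pi }\,w(\zeta )\), where w is the Faddeeva function, and the sample argument is purely illustrative.

```python
# Sketch: numerical evaluation of the plasma dispersion function used in
# Eqs. 70-72, via the standard identity Z(zeta) = i*sqrt(pi)*w(zeta), where
# w is the Faddeeva function.  The sample argument is illustrative only.
import numpy as np
from scipy.special import wofz

def Z(zeta):
    """Plasma dispersion function (analytic continuation to complex zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def Zprime(zeta):
    """dZ/dzeta = -2*(1 + zeta*Z(zeta))."""
    return -2.0 * (1.0 + zeta * Z(zeta))

# Example: the shear-dependent factor 1 + xi*Z(xi) appearing in F_ni and F_0e.
xi = 0.5 + 0.05j
print("Z(xi)        =", Z(xi))
print("1 + xi*Z(xi) =", 1.0 + xi * Z(xi))
```

With Z and \(Z'=-2(1+\zeta Z)\) in hand, the bracketed factors in Eqs. 71 and 72 can be evaluated directly for any trial \(\omega\), and root-finding on Eq. 70 then yields the dispersion curves.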
Low frequency limit: sub-cyclotron frequency waves
We first discuss low (sub-cyclotron) frequency ion-acoustic waves for which only the \(n=0\) cyclotron harmonic term for the ions is sufficient. For long wavelengths, i.e., \(b\ll 1\), \(\varGamma _{0}(b)\sim 1\), and Eq. 70 simplifies to
$$\begin{aligned} \sigma ^2 + \tau \hat{\sigma }^2 +\sigma ^2 \xi _{0}Z(\xi _{0}) + \tau \hat{\sigma }^2\xi _{e}Z(\xi _e)=0 \end{aligned}$$
where \(\xi _{0}=\omega /(\sqrt{2}|k_{\Vert }|v_{ti})\), \(\xi _{e}=(\omega -k_{\Vert }V_{\Vert })/(\sqrt{2}|k_{\Vert }|v_{te})\), \(\sigma ^2=(1-k_yV_{\Vert }'/k_{\Vert }\varOmega _{i})\), \(\hat{\sigma }^2=1+k_yV_{\Vert }'/(k_{\Vert }\varOmega _{i}\mu )\). Assuming the ions to be fluid (\(\xi _{0}\gg 1\)) and electrons to be Boltzmann (\(\xi _{e}\ll 1\)) and equating the real part of Eq. 73 to zero we get
$$\begin{aligned} \omega =k_{z}c_{s}\sigma /\hat{\sigma }\sim k_{z}c_{s}\sigma \end{aligned}$$
where \(\hat{\sigma }\sim 1\) is used since the ion to electron mass ratio \(\mu \gg 1\). In the absence of shear (\(\mathrm{d}V_{\Vert }/\mathrm{d}x=0\), i.e., \(\sigma ^2=1\)) the classical ion acoustic limit is recovered. If \(\sigma ^2<0\), then Eq. 74 reduces to the dispersion relation for the D'Angelo (1965) instability, for which the real frequency \(\omega _{r}=0\) in the drifting ion frame. The D'Angelo instability has been the subject of numerous space and laboratory applications (Catto et al. 1973; Huba 1981; Gary and Schwartz 1981).
The \(\sigma ^2>1\) regime was addressed by Gavrishchaka et al. (1998). In this regime Eq. 73 indicates that it is possible to obtain a shear modified ion-acoustic (SMIA) wave with interesting properties. Equation 74 indicates that shear can increase the parallel phase speed (\(\omega _{r}/k_{\Vert }\)) of the ion acoustic mode by the factor \(\sigma\). For a large enough \(\sigma\) the phase speed can be sufficiently increased so that ion Landau damping is reduced or eliminated. Consequently, a much lower threshold for the ion acoustic mode can be realized even for \(T_{i}>T_{e}\). The growth rate expression for the SMIA instability is given by Gavrishchaka et al. (1998),
$$\begin{aligned} \frac{\gamma }{|k_{\Vert }|v_{ti}} = \sqrt{\frac{\pi }{8}} \frac{\sigma ^2}{\tau ^2}\left[ \frac{\tau ^{3/2}}{\mu ^{1/2}} \left( \frac{V_{\Vert }}{\sigma c_s}-1\right) -\sigma ^2\exp (-\sigma ^2/2\tau )\right] \end{aligned}$$
The classical ion-acoustic wave growth rate is recovered for \(\sigma ^2=1\). From Eq. 75 it is clear that \(\sigma\) can rapidly lower the ion Landau damping, as seen from the exponential dependence of the second term in the bracket. The critical drift is obtained from Eq. 75 by setting the growth rate to zero and minimizing over the propagation angle (\(k_{\Vert }/k_y\)), as plotted in Fig. 28 with \(\mathrm{d}V_{\Vert }/\mathrm{d}x=0.1\varOmega _{i}\). It is found that even a small shear can reduce the critical drift for the ion acoustic instability by orders of magnitude and put it below that of the classical ion cyclotron wave (Drummond and Rosenbluth 1962) for a wide range of \(\tau =T_{i}/T_{e}\), although the shear modified ion acoustic waves propagate more obliquely than their classical counterpart. This is a major departure from the conclusion of Kindel and Kennel (1971) that, among the waves driven by a field-aligned current in the Earth's ionosphere, the current driven ion cyclotron instability has the lowest threshold. Kindel and Kennel's conclusion had extensively guided the interpretation of in-situ data for a long time, until Gavrishchaka et al. (1999) reexamined the data with shear modified instabilities in mind.
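A minimal numerical sketch of this critical-drift estimate is given below. It evaluates only the approximate growth rate of Eq. 75, not the full kinetic dispersion relation used for Fig. 28; the mass ratio, the angle grid, the fixed shear magnitude \(|\mathrm{d}V_{\Vert }/\mathrm{d}x|=0.1\varOmega _i\), and the choice of shear sign that makes \(\sigma ^2>1\) are all assumptions, and the fluid-ion approximation behind Eq. 75 becomes rough when \(\sigma ^2/2\tau\) is not large.

```python
# Hedged sketch of the SMIA critical-drift estimate from Eq. 75: set the
# bracket to zero, solve for V_parallel, and minimize over the propagation
# angle k_par/k_y.  Not the calculation behind Fig. 28, which uses the full
# kinetic dispersion relation.
import numpy as np

mu = 1836.0                        # ion-to-electron mass ratio (H+)
shear = 0.1                        # |dV_par/dx| / Omega_i, as quoted for Fig. 28
angles = np.logspace(-4, 0, 4000)  # k_par/k_y values scanned (assumption)

def critical_drift(tau):
    # sigma^2 = 1 - (k_y/k_par)(dV_par/dx)/Omega_i; the sign of the shear is
    # taken such that sigma^2 > 1 (the SMIA-relevant branch).
    sigma2 = 1.0 + shear / angles
    sigma = np.sqrt(sigma2)
    vc = sigma * (1.0 + np.sqrt(mu) / tau**1.5
                  * sigma2 * np.exp(-sigma2 / (2.0 * tau)))
    return vc.min()                # critical drift in units of c_s

for tau in (1.0, 2.0, 5.0):
    v_classic = 1.0 + np.sqrt(mu) / tau**1.5 * np.exp(-1.0 / (2.0 * tau))
    print(f"T_i/T_e={tau:3.1f}:  V_c(SMIA) ~ {critical_drift(tau):6.1f} c_s,"
          f"  V_c(sigma=1) ~ {v_classic:6.1f} c_s")
```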
Critical Drift vs temperature ratio. Blue curve is for the classical current driven electrostatic ion acoustic mode (CDEIA). Orange curve is for the shear modified ion acoustic-instability
Low frequency limit: ion cyclotron frequency waves
To study the ion cyclotron frequency regime we return to Eq. 70 but relax the constraints of low frequency and long wavelength used to study the shear modified ion acoustic waves. We first examine how a gradient in the parallel plasma flow affects the threshold condition for ion cyclotron waves by analyzing the expression for critical relative drift for the ion cyclotron waves in small and large shear limits. For the marginal stability condition (\(\gamma =0\)) the imaginary part of the dispersion relation, Eq. 70, is set equal to zero, that is
$$\begin{aligned} \sum _{n}\varGamma _{n} \left[ \left( \xi _{0} - \frac{k_{y}V_{\Vert }'}{k_{\Vert }\varOmega _{i}}\xi _{n} \right) \text {Im}Z(\xi _{n}) \right] + \tau \left( 1+ \frac{k_{y}V_{\Vert }'}{k_{\Vert }\varOmega _{i}\mu } \right) \xi _{e} \text {Im}Z(\xi _{e})=0 \end{aligned}$$
where \(\xi _{n}=(\omega -n\varOmega _{i})/(\sqrt{2}|k_{\Vert }|v_{ti})\).
Dividing Eq. 76 throughout by \(\xi _{0}\) and considering the electrons to be adiabatic, i.e., \(\xi _{e}\ll 1\), we get,
$$\begin{aligned}&\sum _{n}\varGamma _{n}\left[ \left( 1-\frac{k_{y}V_{\Vert }'}{k_{\Vert }\varOmega _{i}} \left( 1-\frac{n\varOmega _i}{\omega _r}\right) \right) \exp \left\{ -\left( \frac{\omega _r-n\varOmega _{i}}{\sqrt{2}|k_{\Vert }|v_{ti}}\right) ^2 \right\} \right] \nonumber \\&\quad + \frac{\tau ^{3/2}}{\mu ^{1/2}}\left( 1+\frac{k_y V_{\Vert }'}{k_{\Vert }\varOmega _{i}\mu }\right) \left( \frac{\omega _{r}-k_{\Vert }V_{\Vert c}}{\omega _r}\right) =0 \end{aligned}$$
Under ordinary conditions \((k_y/k_{\Vert })(dV_{\Vert }/dx)/\varOmega _{i}\ll \mu\), which implies that the shear in the electron flow is not as critical as it is in the ion flow and can be ignored. Since only a specific resonant cyclotron harmonic term dominates, Equation 77 can be simplified by considering only that resonant term in the summation to obtain an expression for the critical relative drift:
$$\begin{aligned} V_{\Vert c}=\frac{\omega _{r}}{k_{\Vert }}\left[ 1 + \varGamma _{n}(b)\frac{\mu ^{1/2}}{\tau ^{3/2}} \left\{ 1 - \frac{k_{y}V_{\Vert }'}{k_{\Vert }\varOmega _{i}}\left( 1-\frac{n\varOmega _{i}}{\omega _{r}}\right) \right\} \exp \left( -\frac{(\omega _r-n\varOmega _{i})^2}{2k_{\Vert }^2 v_{ti}^2} \right) \right] \end{aligned}$$
For no shear, \(V_{\Vert }'=0\), the critical drift reduces to
$$\begin{aligned} V_{\Vert c}=\frac{\omega _{r}}{k_{\Vert }}\left[ 1 + \varGamma _{n}(b)\frac{\mu ^{1/2}}{\tau ^{3/2}} \exp \left( -\frac{(\omega _r-n\varOmega _{i})^2}{2k_{\Vert }^2 v_{ti}^2} \right) \right] \end{aligned}$$
This is the critical drift for the homogeneous current driven ion cyclotron instability (CDICI) (Drummond and Rosenbluth 1962). Since the relative sign between the two terms within the bracket is positive and each term is positive definite, the critical drift is always greater than the wave phase speed and increases for higher harmonics since \(\omega _{r}\sim n \varOmega _{i}\).
From Eq. 78, it may appear that for small but non-negligible and positive values of \((k_{y}V_{\Vert }'/k_{\Vert }\varOmega _{i})(1-n\varOmega _{i}/\omega _{r})\) there can be a substantial reduction in the critical drift for the current driven ion cyclotron instability because of reduction in the ion cyclotron damping. However, this is not possible and can be understood by rewriting Eq. 78 as
$$\begin{aligned} \frac{V_{\Vert c}}{V^{0}_{\Vert c}} = 1 - \left( 1 - \frac{(\omega _r/k_{\Vert })}{V_{\Vert c}^{0}} \right) \left( \frac{k_{y}V_{\Vert }'}{k_{\Vert }\varOmega _{i}}\right) \left( 1-\frac{n\varOmega _{i}}{\omega _{r}}\right) . \end{aligned}$$
Here the second term represents the correction to the critical drift for the current driven ion cyclotron instability due to shear. A necessary condition for the CDICI is that \(V_{\Vert }>\omega _{r}/k_{\Vert }\). For a given magnitude of \(|\mathrm{d}V_{\Vert }/\mathrm{d}x|/\varOmega _{i}\ll 1\), it is clear from Eq. 80 that the shear correction is small unless the ratio \(k_{y}/k_{\Vert }\) can be made large. However, as \(k_y\) increases, the real frequency of the wave approaches harmonics of the ion cyclotron frequency and consequently \((1-n\varOmega _{i}/\omega _{r})\) becomes small, which makes the shear correction small. Alternatively, when \(k_{\Vert }\) decreases the wave phase speed increases and the condition \(V_{\Vert }>\omega _{r}/k_{\Vert }\) is violated. Thus, for realistic (small to moderate) values of the shear magnitude, the reduction in the threshold current for the current driven ion cyclotron instability by a gradient in the ion parallel flow is minimal at best. This is unlike the current driven ion acoustic mode case discussed in the previous section.
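As a purely illustrative numerical example, take \(|\mathrm{d}V_{\Vert }/\mathrm{d}x|/\varOmega _i=0.1\), \(k_y/k_{\Vert }=10\), and a wave with \(\omega _r=1.1\,\varOmega _i\) near the \(n=1\) harmonic, so that \(1-n\varOmega _i/\omega _r\simeq 0.09\). The shear factor in Eq. 80 is then \((k_y/k_{\Vert })(V_{\Vert }'/\varOmega _i)\times 0.09 = 1\times 0.09\approx 0.09\), and since the prefactor \((1-(\omega _r/k_{\Vert })/V_{\Vert c}^{0})\) is less than unity, the critical drift changes by less than about ten percent for these assumed values.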
Although a realistic magnitude of shear is ineffective in reducing the threshold current for the ion cyclotron instability, it allows for a novel method to extract free energy from the spatial gradient of the ion flow, which does not involve a resonance of parallel phase speed with the relative drift speed. To illustrate this we return to Eq. 78 and consider the limit \((k_{y}V_{\Vert }'/k_{\Vert }\varOmega _{i})(1-n\varOmega _{i}/\omega _{r})\gg 1\), in which Eq. 78 reduces to,
$$\begin{aligned} V_{\Vert c} = \frac{\omega _{r}}{k_{z}}\left[ 1 - \varGamma _{n}(b)\frac{\mu ^{1/2}}{\tau ^{3/2}}\left\{ \frac{k_{y}V_{\Vert }'}{k_{\Vert }\varOmega _{i}}\left( 1 - \frac{n\varOmega _{i}}{\omega _{r}}\right) \right\} \exp \left( -\frac{(\omega _{r}-n\varOmega _{i})^2}{2k_{\Vert }^{2}v_{ti}^2}\right) \right] , \end{aligned}$$
For \(\omega _r > n\varOmega _i\) each term of Eq. 81 is still positive but the relative sign between them is now negative, which allows for \(V_{\Vert c}=0\). In this regime the ion flow gradient can support ion cyclotron waves. This can be understood by examining the relevant terms in the growth rate (Ganguli et al. 2002):
$$\begin{aligned} \frac{\gamma }{\varOmega _{i}} \propto \frac{\tau ^{3/2}}{\mu ^{1/2}}\left( \frac{V_{\Vert }}{(\omega _{r}/k_{z})}-1\right) -\sum _{n}\varGamma _{n}\left\{ 1 - \frac{k_{y}V_{\Vert }'}{k_{\Vert }\varOmega _{i}}\left( 1-\frac{n\varOmega _{i}}{\omega _{r}}\right) \right\} \exp \left( -\frac{(\omega _{r}-n\varOmega _{i})^2}{2k_{\Vert }^{2}v_{ti}^2}\right) , \end{aligned}$$
The first term in the bracket represents a balance between growth due to the relative field-aligned drift and electron Landau damping while the second term represents cyclotron damping. Provided the drift speed exceeds the wave phase speed and the magnitude of the first term is large enough to overcome the cyclotron damping a net growth for the ion cyclotron waves can be realized. This is the classical case where inverse electron Landau damping leads to wave growth (Drummond and Rosenbluth 1962). For the homogeneous case (i.e., \(\mathrm{d}V_{\Vert }/\mathrm{d}x=0\)), the second term is positive definite and always leads to damping. However, if \((k_{y}V_{\Vert }'/k_{z}\varOmega _{i})(1-n\varOmega _{i}/\omega _{r})>1\) then the sign of the cyclotron damping can be changed and the second term can provide a net growth even for \(V_{\Vert }=0\). This possibility for wave growth is facilitated by velocity shear via inverse cyclotron damping and favors short perpendicular and long parallel wavelengths, which makes the term proportional to shear large even when the magnitude of shear is small. A necessary condition for ion cyclotron instability due to inverse cyclotron damping is
$$\begin{aligned} \left( 1-\frac{n\varOmega _{i}}{\omega _{r}}\right) \left( \frac{k_{y}}{k_{\Vert }} \frac{\mathrm{d}V_{\Vert }/\mathrm{d}x}{\varOmega _{i}}\right) = \left( 1-\frac{n\varOmega _{i}}{\omega _{r}}\right) \left( \frac{V_{py}}{V_{pz}} \frac{\mathrm{d}V_{\Vert }/\mathrm{d}x}{\varOmega _{i}}\right) >1, \end{aligned}$$
where \(V_{py}\) and \(V_{pz}\) are ion cyclotron wave phase speeds in the y and z directions.
Another noteworthy property introduced by the ion flow gradient is in the generation of higher harmonics. From Eq. 79 we see that in the homogeneous case the nth harmonic requires a much larger drift than the first harmonic. However, for \(\omega _{r}\sim n\varOmega _{i}\) the critical shear necessary to excite the nth harmonic of the gradient driven ion cyclotron mode, can be expressed as
$$\begin{aligned} \frac{(\mathrm{d}V_{\Vert }/\mathrm{d}x)_{c}}{\varOmega _{i}} \sim \frac{\tau ^{3/2}}{\mu ^{1/2}} \left( \frac{k_{\Vert }}{k_{y}}\right) \left( \frac{1+\tau -\varGamma _{0}(b)}{\varGamma _{n}^2(b)}\right) , \end{aligned}$$
For short wavelengths, i.e., \(b\gg 1\), \(\varGamma _{n}\sim 1/\sqrt{2\pi b}\) and hence, to leading order, the critical shear is independent of the harmonic number. Consequently, a number of higher harmonics can be simultaneously generated by the shear magnitude necessary for exciting the fundamental harmonic. This is quantitatively shown in Fig. 29 [also in Gavrishchaka et al. (2000)], which indicates about 20 ion cyclotron harmonics can be generated for typical ionospheric plasma parameters. This figure also shows that when the Doppler broadening due to a transverse dc electric field is taken into account the discrete spectra around individual cyclotron harmonics overlap to form a continuous broadband spectrum such as those found in satellite observations. This remarkable ability of velocity shear to excite multiple ion cyclotron harmonics simultaneously via inverse cyclotron damping is similar to the ion cyclotron maser mechanism (Tsang and Hafizi 1987), which also results in a broadband spectral signature. However, important differences with the ion cyclotron maser instability exist. The ion cyclotron maser instability is an electromagnetic non-resonant instability, while here we discuss the electrostatic limit of a resonant instability. Also, in the present mechanism the background magnetic field is uniform, unlike in the ion cyclotron maser mechanism.
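The weak dependence on harmonic number can be checked directly from Eq. 84 with a few lines of code, as sketched below using \(\varGamma _n(b)=I_n(b)e^{-b}\). The values of \(\tau\), \(k_{\Vert }/k_y\), and \(b=(k_y\rho _i)^2\) are illustrative assumptions, and in the full calculation each harmonic maximizes its growth at a somewhat different \(k_y\).

```python
# Sketch of the critical shear from Eq. 84 for several cyclotron harmonics,
# using Gamma_n(b) = I_n(b) exp(-b).  Parameter values are illustrative.
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel I_n

mu = 1836.0        # H+ to electron mass ratio
tau = 1.0          # T_i / T_e
k_ratio = 0.01     # k_par / k_y (assumed)
b = 400.0          # b = (k_y rho_i)^2, chosen so that b >> 1 (assumption)

gamma0 = ive(0, b)
for n in (1, 2, 5, 10, 20):
    gamma_n = ive(n, b)
    crit_shear = tau**1.5 / np.sqrt(mu) * k_ratio * (1.0 + tau - gamma0) / gamma_n**2
    print(f"n = {n:2d}:  (dV/dx)_c / Omega_i ~ {crit_shear:.3f}")
```

For parameters of this kind the critical shear varies only by a factor of a few between \(n=1\) and \(n=20\), consistent with the statement above that the shear needed to excite the fundamental can destabilize many harmonics at once.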
First 20 ion cyclotron harmonics. a Growth rate vs frequency, b growth rate vs \(k_y\rho _{i}\), c growth rate vs Doppler-shifted frequency with \(V_{E}=0.3 v_{ti}\). Here \(V_{de} = 0\) (current-free case), \(V_{d}'=2\varOmega _{H+}\), \(\mu = 1837\), and \(\varOmega _e/\omega _{pe}=8.2\)
High frequency limit
As discussed in the previous section, multiple harmonics of the ion cyclotron frequency can be generated by the shear in parallel flows. In the presence of a parallel sheared flow and a transverse electric field, the waves generated at the cyclotron harmonics can overlap due to Doppler shift, which can result in a broadband spectrum. Romero et al. (1992b) discussed the intermediate and higher frequency modes due to parallel flow shear, in which the ions can be treated as an unmagnetized species while the electrons remain magnetized, for waves in the frequency range \(\varOmega _{i}<\omega < \varOmega _{e}\) and wavelengths in the range \(k_{y}\rho _{i}>1>k_{y}\rho _{e}\).
For even shorter time scales with frequencies \(\omega >\varOmega _{e}\) both ions and electrons behave as unmagnetized species. Mikhailovskii (1974) has shown that flow shear in this regime can drive modes around the plasma frequency.
Thus, the combination of low, intermediate, and high frequency emissions that are generated by parallel velocity shear can also lead to a broadband spectral signature similar to that due to transverse velocity shear.
Hierarchy of compression driven waves
Summarizing the survey of velocity shear driven waves in Sects. 3.3.1–3.3.4 it can be concluded that the linear response of a magnetized plasma to compression is to generate shear driven waves with frequencies and wave-vectors that scale as the compression. In Sect. 2, we showed that plasma compression self-consistently generates ambipolar electric fields that lead to sheared flows both along and across the magnetic field. This establishes the causal connection of the shear-driven waves with plasma compression. Cumulatively, the gradient in the parallel and perpendicular flows constitute a rich source for waves in a broad frequency and wave vector band. In a collisionless environment their emission is necessary to relax the stress that builds up in the layer due to compression. Figure 30 schematically shows the impressive breadth of the frequency range involved with these waves starting from much below the ion cyclotron frequency and stretching to above the electron cyclotron and plasma frequencies that can be generated by a magnetized plasma system undergoing compression.
Hierarchy of compression driven waves as a function of the magnitude of the compression shown in the first column as shear scale size and the associated wave frequencies. Green corresponds to those cases which have been theoretically predicted and experimentally validated in the laboratory. Yellow corresponds to the cases which have been theoretically predicted but yet to be validated in a laboratory experiment. White indicates the cases that are expected to be there by symmetry arguments but yet to be rigorously analyzed
In the dynamic phase, the relaxing gradient can successively excite the next lower frequency wave in the hierarchy when the gradient scale size is sufficiently relaxed to turn off the higher frequency wave, or vice-versa with a steepening gradient (Ganguli et al. 1994). Both relaxation and compression are longer time scale processes compared to the shear driven wave time scales. This can result in emissions in a very broad frequency band in a quasi-static background that is usually observed in the in-situ data. As a proof of principle a recent laboratory experiment has demonstrated this phenomenon in a limited frequency range that was possible within the constraints of a laboratory device (DuBois et al. 2014) as we elaborate in Sect. 5. Frequency overlap due to Doppler shift and nonlinear processes, such as scattering, vortex merging, etc., can smooth out the spectrum and contribute to seamless frequency broadening as typically observed by satellites. This naturally raises a question of how these waves affect the plasma-saturated state that a satellite observes. This is the topic of discussion in the following section.
Nonlinear evolution and feedback of the waves to global dynamics
We now examine how the linear fluctuations induced by the compression evolve, the dominant nonlinear processes that relax the gradients to establish a steady state, and the measurable signatures of the compression driven waves. For this we need numerical simulations. However, due to the huge disparity in space and time scales it is difficult to simulate the entire chain of physics in a single simulation. Hence, we focus on limited frequency and wavelength domains in order to understand the development of the spectral signature and the steady state features in the nonlinear stage along with other nonlinear characteristics.
Low frequency waves in transverse sheared flows
The ion cyclotron frequency range IEDDI was first invoked to understand observations of ion cyclotron waves associated with a transverse electric field (Mozer et al. 1977) in the auroral region in which the magnetic field aligned current was minimal and the background plasma density was nearly uniform. Soon after the IEDDI mechanism was proposed (Ganguli et al. 1985), Pritchett (1987) conducted a PIC simulation using the simple top hat piecewise continuous electric field model (Fig. 20), which was intended as a proof-of-principle calculation of the IEDDI in the initial article. Because the electric field in the top hat model changes its value discontinuously, the simulation showed immediate decay of the electric field due to gyro-averaging, which led Pritchett to conclude that the IEDDI does not exist and to identify the fluctuations in the simulation as due to the KH instability. This initiated the derivation of an appropriate equilibrium distribution function in warm plasma that includes a sheared transverse electric field and is suitable for the initial loading in a computer simulation (Ganguli et al. 1988) (also briefly described in Sect. 3.2). This distribution function was used to obtain the general kinetic dispersion relation, which showed the existence of both the IEDDI and the KH branches in the proper parameter regimes, as summarized in Sect. 3.3.1. Nishikawa et al. (1988, 1990) used this equilibrium distribution function to successfully simulate the IEDDI and demonstrated that it was another branch of oscillation in magnetized plasmas with transverse electric field, distinct from the KH instability. The simulation also showed the development of a polarization current along the electric field direction that reduced the magnitude of the external electric field as the waves grew, and a bursty spectrum of waves, which was consistent with the nonlinear IEDDI (ion magnetron) model of Palmadesso et al. (1986). More important to this article, as shown in Fig. 31 (reproduced from Nishikawa et al. (1988)), the growth of the instability relaxed the flow gradient by reducing its peak value and broadening its spatial extent. This establishes that the strong transverse electric field gradients that develop as a response to plasma compression (Sect. 2) can relax through the emission of the shear driven modes discussed in Sect. 3.
Average ion flow velocity \(v_y(x)\) at \(\varOmega _i t=0\), 160, and 240. Reproduced from Fig. 5 of Nishikawa et al. (1988)
The IEDDI was later validated in laboratory experiments in NRL (Amatucci et al. 1996) and elsewhere (Koepke et al. 1994) as discussed in Sect. 5. These laboratory experiments consistently showed that the IEDDI fluctuations have azimuthal mode number \(m = 1\). Interestingly, Hojo et al. (1995) showed that there can be no \(m=1\) KH mode in a cylindrical geometry. The KH wave growth peaks for higher m numbers in a cylindrical geometry (Kent et al. 1969; Jassby 1970, 1972) while the IEDDI growth maximizes for \(m=1\) (Peñano et al. 1998). This is an experimental confirmation that the IEDDI is distinct from the KH instability and that they form separate branches of oscillations in magnetized plasma with transverse sheared flow. Subsequently, Pritchett (1993) also tested the Ganguli et al. (1988) equilibrium model and concluded that it led to more reliable results although he could not resolve the IEDDI in his simulation accurately.
Intermediate frequency waves in transverse sheared flows
In the auroral region the observed velocity shear scale size is generally larger than the ion gyroradius, albeit in the saturated state. This is the weak shear regime. However, as we found in Sect. 2, the scale size of the velocity shear that develops in the boundary layers can be in the intermediate range, i.e., \(\rho _i>L>\rho _e\). Also in this region wave power around the lower hybrid frequency range has been observed. The generation of both electrostatic and electromagnetic waves around the lower hybrid frequency by velocity gradient has been extensively studied. Simulations (Romero and Ganguli 1993) indicate that these waves produce anomalous viscosity and relax the velocity gradients to reach a steady state. In the following sections we study the nonlinear evolution of these waves leading to formation of the steady state and the observable signatures by numerical simulation.
Plasma sheet-lobe interface
To understand the behavior of the compressed plasma layer formed at the plasma sheet-lobe interface (Sect. 2.1), Romero et al. (1992b) used the Ganguli et al. (1988) equilibrium (Eq. 41) for the electrons and an unmagnetized Maxwellian distribution for the ions in a 2D electrostatic PIC model to simulate the spontaneous generation of the intermediate frequency EIH waves discussed in Sect. 3.3.2. The localized electric field used in the simulation was in the intermediate scale length defined by \(\rho _{i}>L>\rho _{e}\) and was self-consistent with the density gradient. The simulation was motivated by the ISEE satellite observation in the plasma sheet-lobe interface as shown in Fig. 1. Spontaneous growth of the lower hybrid waves was seen in the boundary layer. The waves formed vortices as expected, since vorticity develops naturally in the linear perturbation if the equilibrium flow is inhomogeneous. The scale size of the vortices was comparable to the velocity gradient scale size. Figure 32 [reproduced from Romero and Ganguli (1993)] shows that the growth of the EIH waves relaxed the velocity gradient, similar to what was observed in the IEDDI simulation of Nishikawa et al. (1990). Interestingly, the density gradient was not relaxed by the EIH instability. The difference in the two simulations was that in the Nishikawa et al. (1990) simulation of the ion cyclotron IEDDI the electric field was localized over a distance larger than \(\rho _{i}\), while in the Romero and Ganguli (1993) simulation it was localized over a smaller distance. The inference that can be drawn from the two simulations is that if the initial compression is large such that \(L<\rho _{i}\), then the growth of the lower hybrid waves could relax the velocity gradient so that \(L>\rho _{i}\) at steady state. While this saturates the lower hybrid waves, the flow shear will be of the right magnitude to trigger the lower frequency IEDDI. When the IEDDI relaxes the gradient even further, so that \(L\gg \rho _{i}\), then the KH modes could be triggered, and so on. This nonlinear cascade to appropriate frequencies as the background gradient scale changes is how the shear driven modes can lead to a broadband signature of the emissions that are observed in the compressed plasmas (Grabbe and Eastman 1984). In addition, the Nishikawa et al. (1990) simulation showed the coalescence of smaller vortices into larger ones, implying that the wavelengths become larger with time due to nonlinear vortex merging. Thus, these lower hybrid waves have large wavelengths, roughly of the order of the shear scale length rather than \(\rho _e\) as expected from the LHDI, discussed in Sect. 3.3.2. The spatio-temporal scales associated with the cascading frequencies are so large that it is difficult to simulate the entire bandwidth in a single simulation.
Spatial profiles of the electron cross-field flow at different times indicating the relaxation of the velocity gradient. Reproduced from Fig. 16 of Romero and Ganguli (1993)
The initial Romero et al. simulation (1992b) was followed up with more detailed studies of the nonlinear signatures of these waves, effects of magnetic field inhomogeneity on these waves, as well as their contribution to viscosity and resistivity, which provide the steady state and feedback to the larger scale dynamics (Romero and Ganguli 1993, 1994).
Dipolarization front
More recently, the Romero and Ganguli (1993) simulation model was applied to the DF plasmas (Fletcher et al. 2019) and generalized to the electromagnetic regime (Lin et al. 2019). The plasma parameters used in the simulation (Fletcher et al. 2019) were \(\omega _{pe}/\varOmega _{e}=3.59\), \(\beta _{e}=0.035\), \(m_e/m_i=1/400\), and the peak of the ambipolar field consistent with the density gradient is given by \(cE_0/B_0=0.32v_{te}\). The simulation time is \(175/\omega _{LH}\), the spatial domain is \(21\rho _i\) by \(21\rho _i\) (1200 by 1200 cells), boundaries are periodic in all directions, and 537 million particles were used.
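A few derived quantities follow directly from these quoted inputs and help set the scale of the run; the short script below works them out, using the dense-plasma estimate \(\omega _{LH}\simeq \sqrt{\varOmega _i\varOmega _e}\) as an approximation rather than a quoted value.

```python
# Back-of-envelope numbers implied by the quoted simulation setup
# (Fletcher et al. 2019); omega_LH ~ sqrt(Omega_i * Omega_e) is an
# approximation valid in the dense-plasma limit, not a quoted value.
import math

domain_rho_i = 21.0       # box size in ion gyroradii (each direction)
cells = 1200              # cells per direction
n_particles = 537e6       # total macroparticles
mass_ratio = 400          # m_i / m_e used in the run
t_end_lh = 175.0          # run length in units of 1/omega_LH

dx = domain_rho_i / cells                    # grid spacing in units of rho_i
ppc = n_particles / cells**2                 # particles per cell
t_end_ci = t_end_lh / math.sqrt(mass_ratio)  # run length in units of 1/Omega_i

print(f"grid spacing   ~ {dx:.4f} rho_i")            # ~0.0175 rho_i
print(f"particles/cell ~ {ppc:.0f}")                 # ~373
print(f"run length     ~ {t_end_ci:.2f} / Omega_i")  # ~8.75, about 1.4 ion gyroperiods
```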
Figure 33 [from Fletcher et al. (2019)] shows a snapshot of the plasma density and electrostatic potential from the simulation at \(t\simeq 28/\omega _\mathrm{LH}\). These images show only a part of the simulation domain in order to make features more visible. Kinking is seen in the density. Vortices are formed on the lower density (right) side of the layer as well; these are visible in the potential (for example, one vortex is located at (\(x/\rho _i\),\(y/\rho _i\))\(\simeq\)(1,-1.5)). Wave activity in the y direction with \(k_y L\sim 1\) is apparent in both the density and the potential. The growth rate of the field energy in the simulation is consistent with the growth rate found by solving Eq. 69. The mass ratio of the simulation is low in order to facilitate quick simulation but a physical mass ratio would enhance the ambipolar electric field and further drive these waves.
Plasma density, n, (left) and electrostatic potential, \(\phi\), (right) at \(t\simeq 28/\omega _\mathrm{LH}\). Waves in the y direction and vortices are both visible. Reproduced from Figure 16 of Fletcher et al. (2019)
Figure 34 is a wavelet spectrum as a function of x position; the layer is centered near \(x/\rho _{i}=0\). It is similar to what a satellite would measure if it were flying through the simulated layer or if a DF propagated past the observing satellite. There are broadband waves spread around and above the lower hybrid frequency. The lower frequency power at \(\omega /\omega _{LH}\simeq 0.1\) is consistent with vortices being generated and propagating away from the layer.
Wavelet spectrum of the electric field as a function of position near \(t\simeq 28/\omega _\mathrm{LH}\). The density gradient is steepest near \(x/\rho _{e}=0\). Reproduced from Figure 17 of Fletcher et al. (2019)
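For readers unfamiliar with this diagnostic, a minimal numpy-only sketch of a Morlet wavelet (scalogram) analysis of the kind used for Fig. 34 is given below. The synthetic chirped signal, the sampling step, the analysis frequencies, and the Morlet parameter are all placeholders standing in for simulation or satellite field data.

```python
# Minimal numpy-only sketch of a Morlet wavelet (scalogram) diagnostic of the
# kind shown in Fig. 34, applied here to a synthetic field time series (a
# chirped tone plus noise stands in for simulation or satellite data).
import numpy as np

dt = 0.05                                   # sampling step, units of 1/omega_LH (assumed)
t = np.arange(0.0, 200.0, dt)
sig = np.sin(2*np.pi*0.15*t*(1.0 + 0.002*t)) + 0.3*np.random.randn(t.size)

freqs = np.linspace(0.05, 1.0, 100)         # analysis frequencies, units of omega_LH
omega0 = 6.0                                # Morlet carrier parameter

def morlet_power(sig, freqs, dt, omega0=6.0):
    power = np.empty((freqs.size, sig.size))
    for i, f in enumerate(freqs):
        s = omega0 / (2.0*np.pi*f)          # wavelet scale for this frequency
        tau = np.arange(-3.0*s, 3.0*s + dt, dt)
        psi = np.exp(1j*omega0*tau/s) * np.exp(-0.5*(tau/s)**2) / np.sqrt(s)
        power[i] = np.abs(np.convolve(sig, np.conj(psi), mode="same") * dt)**2
    return power

P = morlet_power(sig, freqs, dt, omega0)
print(P.shape)   # (frequency, time): plot with pcolormesh for a spectrogram
```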
As time passes in the simulation, the density gradient is more-or-less unaffected while the electron flow in the y direction and accompanying electric field in the x direction is significantly relaxed, indicating the dominance of shear-driven instability (EIH) over the density gradient-driven instability (LHD). Figure 35 shows the evolution of these two separate source terms responsible for the EIH and the LHD instabilities respectively (as in the numerator of the last term Eq. 69) and the field energy as a function of simulation time. Instability growth and wave emission occurs before \(t=20/\omega _\mathrm{LH}\). The dotted black line is the theoretical linear growth predicted by Eq. 69. During the growth phase, the EIH source term (and thus the velocity shear) is clearly falling, suggesting that the shear is the source of free energy for the waves. The simulation reaches a saturated state at \(t\simeq 20/\omega _\mathrm{LH}\).
Driving terms for the EIH instability and LHD instability (left) and the field energy fraction (right) in the simulation as a function of time. Reproduced from Fig. 18 of Fletcher et al. (2019)
Ion cyclotron waves in parallel sheared flows
In Sects. 4.1 and 4.2, we studied the nonlinear evolution of sheared transverse flows. We found that spontaneous generation of shear-driven waves relaxes the velocity gradient that leads to saturation. The frequency and wavelengths of these waves scale as the shear magnitude. Nonlinear vortex merging results in longer wavelengths. Relaxation of stronger shear leads to weaker shear which can then drive lower frequency modes. This cascade leads to the broadband spectrum of emissions that are often observed. Now we examine the nonlinear behavior of parallel flow shear driven waves.
The nonlinear evolution of the parallel flow shear driven modes discussed in Sect. 3.3.4 was investigated with PIC simulations by Gavrishchaka et al. (2000). The simulations included full ion dynamics but used a gyrocenter approximation for the electrons. To clearly resolve short wavelength modes 900 particles per cell were used with grid size \(\varDelta =\lambda _D=0.2\rho _i\) and mass ratio \(\mu =1837\). A drifting Maxwellian (H+, e–) plasma is initially loaded, with equal ion and electron temperatures. The magnetic field is slightly tilted such that \(k_{\Vert }/k_y=0.01\). A parallel drift velocity \(V_{\Vert }(x)\) is assigned to ions to obtain an inhomogeneous velocity profile. The magnitude of the flow is initially specified and not reinforced during the simulation. To characterize the role of spatial gradients in the flow, the relative drift between the ions and the electrons, i.e., field aligned current, is kept at a minimum. Its value does not exceed \(3v_{ti}\) locally while on average it is negligible. Periodic boundary conditions are used in both x and y directions. The magnitude of shear \(|\mathrm{d}V_{\Vert }/\mathrm{d}x|_{max}=2\varOmega _i\) is used for the simulation. For this case the simulation box size was specified by \(L_x=64\lambda _D\) and \(L_y=64\lambda _D\).
Nonlinear spectral signature from a PIC simulation. (left) Frequency spectrum without a transverse DC electric field. (right) Frequency spectrum including a transverse DC electric field. Figures reproduced from Fig. 3 of Ganguli et al. (2002)
The saturated spectral signatures in the simulations without and with a uniform transverse electric field are shown in Fig. 36. On the left is the wave spectrum without a transverse electric field; in this simulation several ion cyclotron harmonics are excited with a discrete harmonic structure. On the right a uniform transverse dc electric field is included with \(V_\mathrm{E} =0.8v_{i}\). The washing out of the harmonic structure and the broadening of the spectrum, due to the overlap of the discrete spectra around \(\omega =0\) and the multiple cyclotron harmonics, become evident. Larger Doppler broadening, either by a larger \(V_\mathrm{E}\) or a larger bandwidth \(\varDelta k_{y}\), or a combination of both, could lead to an even broader spectrum.
The meso-scale effect of the parallel flow shear driven instability is shown in Fig. 37 (reproduced from Gavrishchaka et al. (2000)), with quantities normalized by their initial values. To highlight the role of the shorter wavelength ion cyclotron waves, the longer wavelengths are removed by using a \((64\times 16)\lambda _{D}\) simulation box in this case. Figure 37 illustrates that the effect of the ion cyclotron wave generation is relaxation of the flow gradient due to wave-induced viscosity. This is similar to the effect of the transverse shear driven waves, but not as strong, possibly because the orbit modifications due to a localized transverse electric field are absent in this case. Thus the primary conclusion is that the compression generated velocity shear, in either the parallel or the transverse flow, leads to broadband emissions accompanied by relaxation of the velocity gradient; this produces a steady state and determines the observed features measured by satellites.
Electrostatic wave potential obtained from PIC simulations after \(\varOmega _i t= 40\) (a), 60 (b), 100 (c), and the corresponding ion velocity parallel to the magnetic field (d) shown by solid, dashed, and dot-dashed lines, respectively. Reproduced from Fig. 3 of Gavrishchaka et al. (2000)
In the above we discussed only the part of the simulation that showed the formation of the broadband spectral signature and relaxation of the velocity shear due to these waves, which is central to this article. However, the simulation also explained a number of interesting auroral observations that are not elaborated here. For a detailed account of these we refer to Gavrishchaka et al. (1998, 2000) and Ganguli et al. (2002).
While numerical simulations can provide accurate results for the saturated states under the chosen specific plasma conditions, they are not general enough to draw broad conclusions about an evolving plasma encompassing a variety of physics, because there are multiple nonlinear dynamical paths that may not be realized in a single simulation. This is especially true in the case discussed here, where there is coupling across a very wide range of scale sizes and the phenomenon cascades over many scales with different physics that are hard to reproduce in one simulation. In Sect. 6 we discuss some ideas using modern machine learning and artificial intelligence capabilities to address this extensive cross-scale coupling. However, in the future, physics based comprehensive analytical models will have to be developed using the intuition obtained from the limited simulations. This remains an open area of further research.
Laboratory experiments of compressed plasma behavior
In Sects. 2–4, we outlined the theoretical foundation for understanding compressed plasma behavior and showed evidence of its characteristics in uncontrolled natural plasmas from in situ data gathered by satellites. The challenge with in situ data is the characterization of a specific phenomenon in constantly evolving plasmas subject to uncertain external forces. As a result, typically there are many competing theories of space plasma phenomena that are difficult to distinguish unambiguously. Because of this difficulty, scaled laboratory experiments have become a valuable tool in understanding space plasma processes. Not every aspect of space plasmas can be faithfully scaled in the laboratory. Large MHD scale phenomena are especially challenging. But others, such as the cause and effect of waves and various coherent processes in the meso and micro scales, which are difficult to resolve by in situ measurements in space, are amenable to laboratory scaling. In the modern era, satellite clusters with multi-point measurements and global imaging using energetic neutral atoms (ENA) (Roelof 1987; Henderson et al. 1997; Burch 2000; McComas et al. 2009) have been used to overcome some of the difficulties with resolving the space-time ambiguity in measurements made from a single moving platform. While multi-point measurements from a cluster of satellites help, they are expensive and there are still limitations of measurements made from a moving platform at multiple scales. Global imaging using ENA can resolve the space-time ambiguity but cannot resolve the small or fast scales that are important for many geospace plasma processes. An area where laboratory experiments can contribute substantially is in the understanding of the effects of highly localized regions of strong spatial variability, such as the strong gradients over ion or electron gyroscales associated with the compressed plasmas discussed in this article. These phenomena can be scaled reasonably well in the laboratory. The Space Chamber at the US Naval Research Laboratory (NRL) is especially designed for understanding space plasma phenomena, such as the behavior of compressed plasmas.
The NRL Space Physics Simulation Chamber (SPSC), shown in Fig. 38, consists of two sections that can be operated separately or in conjunction. The main chamber section is 1.8 m in diameter and 5 m long, while the source chamber section provides an additional 0.55 m-diameter, 2-m long experimental volume. The steady-state magnetic field strength in the main and source chamber sections can be controlled up to 220 G and 750 G respectively, generated by 12 independently controlled water-cooled magnets capable of shaping the axial magnetic field. Each section has a separate plasma source. The main chamber has a 1-m by 1-m hot filament plasma source capable of generating plasmas with a range of density \(n \sim 10^4-10^{10}\) cm\(^{-3}\), electron temperature \(T_{e} \sim 0.1-2\) eV, and ion temperature \(T_i \sim 0.05\) eV. The source chamber has a helicon source capable of generating 30-cm diameter plasmas with the following parameters: \(n \sim 10^{8}-10^{12}\) cm\(^{-3}\), \(T_{e} \sim 1-6\) eV, and \(T_{i} \sim 0.1\) eV. When the helicon plasma transitions from the source chamber to the main chamber, the plasma column diameter can be increased up to the full 1.8-m diameter of the main chamber by controlling the ratio of magnetic field strength between the two chamber sections. The large plasma size yields up to 150 ion gyroradii across the column. Table 1 shows the ranges of normalized plasma parameters accessible in the NRL SPSC with comparisons to those found in the ionosphere and the radiation belts.
NRL Space Physics Simulation chamber. Main chamber section (1.8 m by 5 m) is on the right. Source chamber (0.55 m by 2 m) is on the left
We discuss a few experiments performed in the NRL Space Chamber and elsewhere that were designed to understand the effects of strong velocity and pressure gradients typical of compressed plasmas. As discussed in Sect. 2, the localized electric field can be considered a surrogate for the global compression. Thus, by studying the plasma response to localized electric fields we can glean the physical processes that characterize a compressed plasma layer.
Table 1 Comparison of plasma parameters in the ionosphere, the Radiation Belts (RB), and the NRL SPSC. The last three rows (bold) are important dimensionless plasma parameters that can be reproduced in the NRL SPSC
Low frequency limit: transverse velocity gradient
In the 1970s, the NASA S3-3 satellite observed emissions around the ion cyclotron frequency in uniform density plasma at auroral altitudes where spatially localized DC electric fields were large (Mozer et al. 1977). Kelley and Carlson (1977) reported intense shear in plasma flow velocity at the edge of an auroral arc associated with short wavelengths fluctuations, the origin of which was a mystery. They noted that, "A velocity shear mechanism operating at wavelengths short in comparison with the shear scale length, such as those observed here, would be of significant geophysical importance." Kintner (1992) described the difficulty for exciting the current-driven ion cyclotron waves (Kindel and Kennel 1971) in the lower ionosphere where the magnitude of the field-aligned current is usually below the threshold and yet bulk heating of ions suspected due to ion cyclotron waves is detected.
In addition to space observations, there were laboratory experiments, although unconnected with the space observations, reporting ion cyclotron waves correlated to localized transverse dc electric fields (Sato et al. 1986; Alport et al. 1986). The generation mechanism of these ion cyclotron waves was not clear.
These observations led to theoretical analysis at NRL, described in Sect. 3.3.1, which suggested that the Doppler shift by a localized transverse electric field could make the energy density of the ion cyclotron waves negative in the electric field region while it is positive outside. A flow of energy between the regions with opposite signs of wave energy density can lead to an instability (Ganguli et al. 1985, 1988). Because the necessary condition for instability is that the energy must flow from one region to another with opposite sign of energy density, the instability is essentially nonlocal. It was a promising mechanism for understanding a number of mysterious observations in the auroral region including low altitude ion heating (Ganguli et al. 1985), which was a front burner issue of the time. So, its validation and detailed characterization in the laboratory became an important topic.
Examples of end electrode methods for producing localized radial electric fields and sheared azimuthal \(\mathbf {E}\times \mathbf {B}\) flows in cylindrically symmetric laboratory plasmas
Threshold value of current density as a function of transverse, localized, dc electric (TLE) field strength. Current densities are normalized to the zero-TLE-strength value. Error bars represent one standard deviation. Reproduced from Fig. 2 of Amatucci et al. (1994)
Using a segmented disc electrode, shown in Fig. 39, in the West Virginia University Q-machine, Amatucci et al. (1994) showed that a sub-threshold field-aligned current could support the ion cyclotron instability if a radially localized static electric field produced by biasing the segments is introduced (see Fig. 40). This explained the observation of ion cyclotron waves for sub-threshold currents in the auroral region noted by Kintner (1992). Theoretically, however, it was shown that ion cyclotron waves could exist even when a magnetic field aligned current was absent, provided a strong enough transverse localized electric field was present (Ganguli et al. 1985), such as those observed by Mozer et al. (1977). However, it was not possible to eliminate the axial current totally in the experiment because the inner segment of the electrode was biased and drew electrons. Subsequently, Amatucci et al. (1998) demonstrated that by increasing the magnitude of the transverse electric field and virtually eliminating the axial current with biased ring electrodes (Fig. 39), the electrostatic ion cyclotron waves could be sustained by a sheared transverse flow alone. These experiments were later followed up by Tejero et al. (2011) to confirm the electromagnetic IEDDI (Peñano and Ganguli 1999). These waves, besides validating the theory, were shown to be efficient in ion heating (Amatucci et al. 1998), as was expected (Ganguli et al. 1985). The experiment also showed that the heating profile was distinct from typical Joule heating (Amatucci et al. 1999), as shown in Fig. 41. The scale size of the electric field, L, was greater than the ion gyroradius, \(\rho _{i}\), for these experiments.
a Perpendicular ion temperature \(T_i /T_{i0}\), b mode amplitude, c Doppler-shifted mode frequency, and d transverse electric field strength, plotted as a function of the normalized ion-neutral collision frequency. A transition from a wave-heating regime (\(\nu _{in}/\varOmega _{i}<0.4\)) to a Joule-heating regime (\(\nu _{in}/\varOmega _{i}>0.7\)) is observed as the ion-neutral collision frequency is increased. Reproduced from Fig. 3 of Amatucci et al. (1999)
Characterization of the IEDDI in the laboratory was a significant contribution because it clarified the role of localized electric fields in wave generation thereby validating the theory for the origin of these waves and led to numerous applications to understand satellite observations (Bonnell et al. 1996; Liu and Lu 2004; Golovchanskaya et al. 2014a, b). In addition, it successfully addressed a major issue in space plasmas, i.e., ion heating in the lower ionosphere necessary to initiate the out flow of the heavy gravitationally bound oxygen ions observed deep inside the magnetosphere (Pollock et al. 1990). These experiments became anchors for a comprehensive ionospheric heating model (Ganguli et al. 1994) and inspired sounding rocket experiments to look for corroborating signatures in the ionosphere (Earle et al. 1989; Bonnell et al. 1996; Bonnell 1997; Lundberg et al. 2012). Subsequently, a comprehensive statistical survey of satellite data confirmed the importance of static transverse electric fields to wave generation in the ionosphere (Hamrin et al. 2001). More importantly, these early laboratory experiments started a trend in simulating space plasma phenomena in the controlled environment of the laboratory for detailed characterization that helped in the interpretation of in situ data and develop a deeper understanding of the salient physics.
Low frequency limit: parallel velocity gradient
Another intriguing issue in the ionosphere was the observation of low frequency ion acoustic-like waves (Wahlund et al. 1994) in the nearly isothermal ionosphere, where ion acoustic waves are expected to be strongly ion Landau damped. The origin of these low frequency waves became a much-debated issue. As discussed in Sect. 3.3.4, Gavrishchaka et al. (1998) showed that a spatial gradient in the magnetic-field-aligned flow could drastically lower the threshold of the ion acoustic waves by moving the phase speed of the waves away from Landau resonance. Gavrishchaka et al. (2000) also showed that spatial gradients in the parallel flow can trigger higher frequency waves with multi-harmonic ion cyclotron emissions. The magnitude of the gradient required for generating either of these waves was very modest. These results could potentially explain a number of auroral observations (Gavrishchaka et al. 1999; Ganguli et al. 2002), including the NASA FAST satellite observation of multi-ion harmonic spectra and spiky parallel electric field structures (Ergun et al. 1998). Thus, laboratory validation of the Gavrishchaka et al. theory became an important issue.
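The underlying reason a modest parallel-flow gradient can unlock this branch is the exponential sensitivity of ion Landau damping to the ratio of phase speed to ion thermal speed: shifting the effective phase speed away from \(v_{ti}\) sharply depletes the resonant ion population. The short sketch below is only a generic numerical illustration of that sensitivity, with assumed normalized parameters; it is not the shear-modified dispersion relation of Gavrishchaka et al. (1998).

```python
import numpy as np

def resonant_ion_fraction(v_phase, v_ti):
    """Relative number of ions near Landau resonance, proportional to the
    1-D Maxwellian evaluated at the wave phase speed."""
    return np.exp(-0.5 * (v_phase / v_ti) ** 2)

v_ti = 1.0   # ion thermal speed (normalized)
c_s = 1.0    # in a nearly isothermal plasma the acoustic speed is comparable to v_ti

# Assumed shear-induced shifts of the effective phase speed, in units of v_ti
for dv in [0.0, 0.5, 1.0, 1.5, 2.0]:
    frac = resonant_ion_fraction(c_s + dv, v_ti)
    print(f"v_phi/v_ti = {c_s + dv:3.1f}  ->  resonant ion fraction ~ {frac:8.2e}")
```

Even a shift of one thermal speed reduces the resonant population several-fold, and larger shifts reduce it by orders of magnitude, which is the qualitative sense in which the flow gradient moves the wave out of Landau resonance.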
In a series of Q-machine experiments with inhomogeneous magnetic-field-aligned flow at the University of Iowa (Agrimson et al. 2001, 2002) and West Virginia University (Teodorescu et al. 2002a, b), the existence of both the shear-modified low frequency fluctuations and the ion cyclotron frequency range fluctuations was confirmed, and their signatures and properties were studied. The experiments highlighted the critical role of the spatial gradient in the flow parallel to the magnetic field. A similar situation can also arise in compressed plasmas in DFs as well as at the plasma sheet-lobe interface, as discussed in Sect. 2.2.3. The laboratory validation of the theory and the characterization of the instability increased the confidence in its application to other regions of space plasmas (Nykyri et al. 2006; Slapak et al. 2017).
Other low frequency waves due to parallel inhomogeneous flows with a density gradient were investigated in laboratory experiments by Kaneko et al. (2003, 2005). They also analyzed the case theoretically and showed that drift waves can be either destabilized or stabilized by velocity shear in the parallel ion flow, depending on the plasma conditions and the shear strength. Similar conclusions regarding the drift wave behavior in a plasma with perpendicular flow shear were reached by Gavrishchaka (1996).
Intermediate frequency limit: transverse velocity gradient
As described in Sect. 2.1, during geomagnetically active periods, global compression of the magnetosphere by the solar wind stretches the Earth's magnetotail and a pressure gradient builds up between the low-pressure lobe and the high-pressure plasma sheet. The boundary between these regions exhibits a complex structure, which includes thin layers of energetic electrons confined to the outermost region of the plasma sheet (Forbes 1981; Parks et al. 1984). Localized static electric fields in the north-south direction are observed during crossings into the plasma sheet from the lobes (Cattell et al. 1982; Orsini et al. 1984), but their cause and effects were not known. Also, enhanced electrostatic and electromagnetic wave activity is detected at the boundary layer (Grabbe and Eastman 1984; Parks et al. 1984; Cattell et al. 1986; Angelopoulos et al. 1989).
To understand the plasma sheet-lobe equilibrium properties, a kinetic description of the boundary layer was developed by Romero et al. (1990), as described in Sect. 2.1.1. It showed that with increasing activity level, as the boundary layer scale size approaches an ion gyrodiameter, an ambipolar electric field develops across the magnetic field, which intensifies with the global compression. As shown in Sect. 3, for small enough L, ions effectively behave as an unmagnetized species for intermediate scales (\(\varOmega _{i}<\omega <\varOmega _{e}\) and \(k_{\perp }\rho _{i}>1>k_{\perp }\rho _{e}\)) and an instability appears around the lower hybrid frequency. The wavelength of this instability scales as \(k_{\perp }L\sim 1\) where \(L\gg \rho _e\) (Ganguli et al. 1988), which distinguishes it from the lower-hybrid-drift instability with \(k_{\perp }\rho _{e}\sim 1\) scaling. Hence, laboratory validation and characterization of the EIH waves discussed in Sect. 3.3.2 became an important topic.
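For orientation, the frequency and length-scale ordering quoted above can be checked with the standard expressions \(\varOmega _{s}=q_{s}B/m_{s}\), \(\rho _{s}=v_{ts}/\varOmega _{s}\), and \(\omega _{LH}\simeq [(\varOmega _{i}\varOmega _{e})^{-1}+\omega _{pi}^{-2}]^{-1/2}\). The short sketch below evaluates these for assumed, representative plasma sheet boundary layer numbers; they are illustrative only and are not taken from a particular event or from Romero et al. (1990).

```python
import numpy as np

# Physical constants (SI)
e, m_e, m_p, eps0, kB = 1.602e-19, 9.109e-31, 1.673e-27, 8.854e-12, 1.381e-23

# Assumed, representative plasma sheet boundary layer values (illustration only)
B  = 20e-9            # magnetic field [T]
n  = 0.3e6            # plasma density [m^-3]
Ti = 5e3 * e / kB     # 5 keV ions, expressed in kelvin
Te = 1e3 * e / kB     # 1 keV electrons

Omega_i = e * B / m_p
Omega_e = e * B / m_e
w_pi    = np.sqrt(n * e**2 / (eps0 * m_p))
w_lh    = 1.0 / np.sqrt(1.0 / (Omega_i * Omega_e) + 1.0 / w_pi**2)

rho_i = np.sqrt(kB * Ti / m_p) / Omega_i
rho_e = np.sqrt(kB * Te / m_e) / Omega_e

print(f"Omega_i = {Omega_i:.2e} rad/s,  Omega_e = {Omega_e:.2e} rad/s")
print(f"lower hybrid frequency ~ {w_lh:.2e} rad/s (between Omega_i and Omega_e)")
print(f"rho_i = {rho_i/1e3:.0f} km,  rho_e = {rho_e/1e3:.1f} km")
# EIH-type modes satisfy k_perp * L ~ 1 with rho_i > L >> rho_e, i.e. wavelengths
# much longer than rho_e, unlike the lower-hybrid-drift scaling k_perp * rho_e ~ 1.
```

With such numbers the ordering \(\varOmega _{i}<\omega _{LH}<\varOmega _{e}\) and \(\rho _{e}\ll \rho _{i}\) is immediate, which is the window in which the ions behave as effectively unmagnetized.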
(top) Schematic of creating localized electric fields in laboratory experiments, adapted from Amatucci et al. (1994). On the right is a large plasma source. In front (to the left in the figure) of the large plasma source is a blocking disk that prevents plasma from the large source from streaming down the center of the chamber. On the left is a smaller source that can fill in plasma at the center. By biasing the end plates an electric field can be created between the two plasmas. (bottom) Measured density vs radial position in the NRL Space Physics Simulation Chamber for different filament current settings on the plasma source, illustrating the experimental control over the plasma density. The bottom panel is reproduced from Fig. 3 of Amatucci et al. (2003)
While the basic physics of the EIH instability was verified in Japan by Matsubara and Tanikawa (2000), using a segmented end plate to create the localized radial electric field, and then in India by Kumar et al. (2002), their experimental geometry did not correspond to the reality of the lobe-plasma sheet system. The challenge was to produce, in the laboratory, the conditions of a stretched magnetotail, where the dense plasma sheet is surrounded by tenuous lobe plasma as shown in Fig. 1a of Sect. 2.1. Amatucci et al. (2003) introduced an innovative way to achieve this by using interpenetrating plasmas produced by independent sources with controllable plasma potentials and densities, as sketched in Fig. 42. This setup was more representative of the realistic plasma sheet-lobe configuration, with a boundary layer of scale size on the order of, or less than, an ion gyroradius. The experiment demonstrated spontaneous generation of lower hybrid waves, as shown in Fig. 43.
Stack plot of FFT amplitude vs frequency as the electric bias is increased (upward in the figure), showing that the EIH wave power increases as the applied electric field is increased. Reproduced from Fig. 8a of Amatucci et al. (2003)
Subsequently, DuBois et al. (2013, 2014) used the Amatucci method in the Auburn Linear Experiment for Instability Studies (ALEXIS) device at Auburn University and varied the magnetic field to scale the ion gyroradius from larger to smaller than the electric field scale size, thereby effectively simulating the variation of stress that characterizes the relaxation phase of a stressed magnetotail. This showed the generation of broadband emission extending from the lower hybrid frequency down to below the ion cyclotron frequency, spanning five orders of magnitude in a single experiment, as shown in Fig. 44.
Log of \(\omega /\varOmega _{i}\) is plotted as a function of the ratio \(\rho _{i}/L\) which was varied experimentally by controlling the magnitude of the magnetic field in ALEXIS. Reproduced from Fig. 5 of DuBois et al. (2014)
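Because \(\rho _{i}=\sqrt{m_{i}k_{B}T_{i}}/(eB)\) at fixed ion temperature, sweeping the magnetic field at a fixed electric field scale length L is enough to move \(\rho _{i}/L\) across unity, which is the essence of the ALEXIS scan. The numbers below are assumed laboratory-like values chosen only to illustrate the scaling; they are not the actual ALEXIS parameters.

```python
import numpy as np

e, kB, amu = 1.602e-19, 1.381e-23, 1.661e-27

m_i = 40 * amu        # assumed argon ions
T_i = 0.1 * e / kB    # assumed 0.1 eV ion temperature, in kelvin
L   = 0.01            # assumed 1 cm electric-field scale length [m]

for B in [0.02, 0.05, 0.10, 0.20]:   # magnetic field scan [T]
    rho_i = np.sqrt(m_i * kB * T_i) / (e * B)
    print(f"B = {B*1e3:5.0f} mT  ->  rho_i/L = {rho_i / L:5.2f}")
```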
The DuBois et al. experiment was a proof of principle of the theory (Ganguli et al. 1994), which had posited that a compressed boundary layer can relax through the emission of a hierarchy of electric-field-driven waves, extending from above the electron gyrofrequency to much below the ion gyrofrequency, and that these could be the primary source of the observed broadband electrostatic noise. Tejero et al. (2011) and Enloe et al. (2017) have subsequently shown that the plasma compression can also produce electromagnetic emissions, but the wave power is primarily concentrated in the electrostatic regime (Ganguli et al. 2014), consistent with the in situ observations (Angelopoulos et al. 1989). These laboratory experiments have elucidated subtler aspects of the magnetotail dynamics, which would be difficult to discern from in situ measurements alone. They also inspired new experimental research in the laboratory to understand the physics of the dipolarization fronts.
Comprehensive modeling of space plasma environment
Besides academic interest, the practical goal of developing a deeper understanding of space plasma processes is to improve the accuracy of space weather forecasting. The challenge in a physics-based forecasting model is in accounting for the physics at multiple scales in a global model. As discussed in this article, spatiotemporal processes in the space plasma environment are multi-scale. It is not feasible to model the wide range of scales from first principles, because of computational limitations and the lack of detailed initial and/or boundary conditions. Hence, the success of simulations, forecasting, and interpretation of multi-scale spatiotemporal dynamics critically depends on a realistic formulation that includes the coupling of physical models describing processes on micro- and macro-scales. Small-scale kinetic processes could significantly influence larger-scale dynamics, at least in their immediate neighborhood, which may be of practical interest. However, introducing small-scale kinetic effects as anomalous coefficients into larger-scale fluid simulations, without running small-scale simulations, involves empirical adjustment of coupling parameters that takes into account simulation stability and other considerations. Some attempts at magnetosphere-ionosphere coupling have been made based on this concept (Ganguli and Palmadesso 1987; Ganguli et al. 1988). Similarly, one can use coarse-grain analogue models with just a few main elements (Surjalal Sharma 1995; Klimas et al. 1996) whose characteristics are also inferred from deeper multi-scale physical models. Still, such physics-based models may not be accurate enough for certain practical applications. Recent developments in artificial intelligence (AI) and machine learning (ML) offer a new vista for deeper understanding and forecasting in the space plasma environment.
Alternatively, applied modeling of a wide range of complex systems, including space weather forecasting, is based on data-driven statistical and ML approaches (Gleisner et al. 1996; Gavrishchaka and Ganguli 2001a, b; Camporeale 2019; Gopinath and Prince 2019). Such empirical approaches could offer practical solutions with good accuracy, provided enough training data covering the key regimes of the considered systems are available. However, the performance of standard ML approaches can quickly deteriorate with severe data limitations, high dimensionality, and non-stationarity (Gavrishchaka et al. 2018, 2019). Domain-expert knowledge, including physical models based on a deeper understanding of the considered complex system, such as the kinetic processes discussed in this article, could play a key role in applications with severely incomplete training data, because it provides natural dimensionality reduction and domain-specific constraints (Gavrishchaka et al. 2018, 2019; Banerjee and Gavrishchaka 2007). A typical practical example of incorporating domain knowledge into an ML solution is the selection of model inputs and drivers using physics-based considerations (Gleisner et al. 1996; Gavrishchaka and Ganguli 2001a, b; Gavrishchaka et al. 2019). This procedure of augmenting purely data-driven models with physics-based models is a step towards gaining physical insight into the system.
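As a minimal, concrete illustration of this physics-guided input selection, the sketch below trains a gradient-boosting regressor on a few physically motivated solar wind drivers (speed, southward IMF component, dynamic pressure) to predict a geomagnetic-activity proxy. Everything in it is assumed and synthetic, the coupling function, the data, and the feature set, so it only illustrates the workflow rather than any of the published models cited above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Physically motivated (but synthetic) solar wind drivers
v_sw  = rng.normal(450, 100, n)        # solar wind speed [km/s]
bz    = rng.normal(0, 5, n)            # IMF Bz [nT]
p_dyn = rng.lognormal(0.5, 0.4, n)     # dynamic pressure [nPa]

# Toy coupling function standing in for a geomagnetic index (assumed form)
y = -0.01 * v_sw * np.clip(-bz, 0, None) + 2.0 * p_dyn + rng.normal(0, 5, n)

X = np.column_stack([v_sw, bz, p_dyn])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
print("importances [v_sw, Bz, p_dyn]:", np.round(model.feature_importances_, 2))
```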
The most successful modern ML frameworks, such as deep learning (DL) based on deep neural networks (DNNs) and boosting-based ensemble learning, offer even more opportunities for efficient synergetic combination with domain-expert knowledge (LeCun et al. 2015; Deng and Yu 2014; Hinton and Salakhutdinov 2006; Schapire 1992; Friedman et al. 2000; Chen and Guestrin 2016; Gavrishchaka et al. 2018, 2019). First, as in the natural sciences, both techniques exploit hierarchical data and knowledge representations, which can drastically reduce the dependence on the training data size. This is achieved by layer-by-layer learning with automated hierarchical feature discovery and dimensionality reduction in DNNs, and by the intrinsically hierarchical nature of boosting algorithms, which build a global-scale model at the first iteration and focus on more detailed modeling of sub-populations, sub-scales, and sub-regimes in subsequent iterations (LeCun et al. 2015; Deng and Yu 2014; Hinton and Salakhutdinov 2006; Schapire 1992; Friedman et al. 2000; Gavrishchaka 2006; Gavrishchaka et al. 2018, 2019). For example, in Sect. 2 we showed that global compression leads to ambipolar effects on ion and electron gyroscales that generate spatially localized transverse electric fields. In Sect. 3 we showed the linear plasma response to such electric fields, which are much smaller scale features. In Sect. 4 we showed the nonlinear evolution of these electric fields and ultimately their saturation to generate macroscopic measurable features of the larger scale dynamics that satellites measure. These micro-macro coupling processes could be iteratively incorporated into global models to produce a more comprehensive model of the space plasma dynamics than is currently possible. Such hierarchical physics-based knowledge could significantly improve space weather forecasting accuracy. The hierarchical nature of these algorithms creates different channels for efficient integration of many pieces of domain-expert knowledge, including physics-based models, scalings, and constraints. For example, a collection of simplified physical models with a few adjustable empirical parameters, e.g., anomalous coefficients capturing small-scale effects, could be used as base models in boosting algorithms to create an ensemble of interpretable models with better accuracy and stability than a single model (Gavrishchaka et al. 2018, 2019; Banerjee and Gavrishchaka 2007). Alternatively, simplified physical models capturing multi-scale effects in an approximate manner can be used to generate large amounts of synthetic data for all possible regimes. Actual data can then be augmented by this synthetic data, allowing a DL framework to discover robust representations that can be used to train or fine-tune DNNs or other ML models (Gavrishchaka et al. 2019). A synergetic combination of ML algorithms and physics-based models, such as those discussed in this article and global MHD models, could be especially useful for the representation and detection of rare events and regimes (Senyukova and Gavrishchaka 2011; Miao et al. 2020). Further advancements in the discovery of stable and accurate hybrid solutions in complex systems modeling can be achieved by leveraging methods from computational topology, which have shown promising results in a wide range of applications (Carlsson 2009; Edelsbrunner 2014; Garland et al. 2016; Miao et al. 2020).
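The data-augmentation route mentioned above can be sketched in a few lines: a deliberately crude, assumed analytic "physics model" generates abundant synthetic samples spanning all regimes, which are then combined with a small set of (here also synthetic) observations to train a neural network. This is only a sketch of the mechanics under those assumptions, not the hybrid frameworks of Gavrishchaka et al. (2018, 2019).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def simplified_physics_model(v_sw, bz):
    """Assumed analytic stand-in for a cheap physics-based model."""
    return -0.01 * v_sw * np.clip(-bz, 0, None)

# Scarce "real" observations (synthetic here, with noise)
n_obs = 50
v_o, bz_o = rng.normal(450, 100, n_obs), rng.normal(0, 5, n_obs)
y_o = simplified_physics_model(v_o, bz_o) + rng.normal(0, 3, n_obs)

# Abundant synthetic data from the simplified model, spanning all regimes
n_syn = 5000
v_s, bz_s = rng.uniform(250, 850, n_syn), rng.uniform(-20, 20, n_syn)
y_s = simplified_physics_model(v_s, bz_s)

X = np.column_stack([np.concatenate([v_o, v_s]), np.concatenate([bz_o, bz_s])])
y = np.concatenate([y_o, y_s])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
print("prediction for v=600 km/s, Bz=-10 nT:", round(net.predict([[600.0, -10.0]])[0], 2))
```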
Until such time as global models can capture the detailed physics at all scale sizes, such hybrid modeling may be necessary for accurate space weather forecasting.
In this review article we have analyzed the behavior of compressed plasmas in a magnetic field, a configuration often encountered both in natural and laboratory plasmas. Compression creates stress, or gradients, in the background plasma parameters. When the scale size of the gradient across the magnetic field becomes comparable to an ion gyrodiameter, a self-consistent static electric field is generated due to ambipolar kinetic effects. This electric field is highly inhomogeneous. Hence, the localized Doppler shift due to the \(\mathbf {E}\times \mathbf {B}\) flow cannot be transformed away, which affects the dielectric properties of the plasma, including the normal modes. In addition, it affects the individual particle orbits and shears the mean flow velocities both transverse to and along the magnetic field. Velocity shear is a source of free energy for plasma fluctuations. Consequently, a compressed plasma system reaches a higher energy state compared to its relaxed counterpart. The electric field gradient, and by causality the velocity shear, that develops scales with the magnitude of compression. Thermodynamic properties compel the plasma to seek a lower energy state. In response, in a collisionless medium, spontaneous generation of emissions follows, which dissipates the velocity shear and returns the plasma to a relaxed, lower energy state. This makes compressed plasmas active regions with characteristic emissions. In the space environment, these regions are relatively easy to detect and measure due to the large plasma fluctuations. The spectral signature of the emissions is typically broadband in frequency, with power mostly concentrated in the electrostatic regime. This may be because the inhomogeneity forces the eigenstates to its scale size, which is comparable to or smaller than the electron skin depth. Consequently, the wavelength of the emissions is comparable to or smaller than the electron skin depth, which emphasizes the electrostatic character. Hence, they have often been referred to as broadband electrostatic noise (BEN) in the literature. But they are also accompanied by some electromagnetic component (Angelopoulos et al. 1989). As we discussed in Sects. 3 and 4, velocity shear has the unique ability to produce such broadband signatures in which the power is mostly in the electrostatic regime but with some electromagnetic power as well. The intensity and bandwidth of the emissions, which scale with the velocity shear, are a diagnostic of the level of compression imposed on the plasma. This is evident from in situ measurements in space plasmas, where broadband emissions are a hallmark of compressed plasmas found in boundary layers.
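The electron skin depth comparison invoked above is easy to quantify with the standard expression \(d_{e}=c/\omega _{pe}\). The densities in the short sketch below are assumed, representative boundary layer values used only for illustration.

```python
import numpy as np

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

for n in [0.1e6, 0.3e6, 1.0e6]:   # assumed densities [m^-3] (0.1-1 cm^-3)
    w_pe = np.sqrt(n * e**2 / (eps0 * m_e))
    print(f"n = {n/1e6:3.1f} cm^-3  ->  electron skin depth c/w_pe = {c/w_pe/1e3:5.1f} km")
```

Wavelengths at or below these kilometer-scale values favor the electrostatic branch, consistent with the broadband electrostatic character described above.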
Although we used the framework provided here to analyze natural plasma processes, it is general and applicable to laboratory experiments as well as to active experiments in space. For example, compressed plasma layers can be generated locally in the ionosphere by the ionization of exhausts or effluents discharged from rockets (Bernhardt et al. 1995) or by active chemical release experiments (Ganguli et al. 1992; Scales et al. 1992). In the NASA-sponsored Nickel Carbonyl Release Experiment (NICARE) (Argo et al. 1992) the introduction of electron-capturing agents, such as CF\(_{3}\)Br, SF\(_{6}\), Ni(CO)\(_{4}\), etc., created an electron-depleted region in the ionosphere surrounded by the natural oxygen-electron plasma. This generated a boundary layer of positive ions, negative ions, and electron plasma with strong spatial gradients in their densities. Experimental data indicated a large enhancement of the noise level concurrent with the formation of the negative ion plasma. This resulted in a situation similar to the natural boundary layers discussed in Sect. 2, in which the negative ion population, finite inside the electron-depleted region, diminished to zero outside, while the electron population did the opposite across a narrow boundary layer (Ganguli et al. 1992). Quasi-neutrality among the electrons, negative ions, and positive oxygen ions led to a strong self-consistent electrostatic potential in the boundary layer that separated the negative ion plasma from the ambient oxygen-electron plasma. Hybrid simulations showed the formation of the boundary layer with a localized radial electric field of intermediate (\(\rho _{i}>L>\rho _{e}\)) scale size and the spontaneous generation of shear-driven EIH waves that relaxed the boundary layer (Scales et al. 1994, 1995).
Laboratory experiments on plasma expansion due to laser ablation, in which the laser front acts as a piston to compress the plasma, show an interesting similarity with the physics of the dipolarization fronts we discussed in Sect. 2. Dipolarization fronts, characterized by a pressure gradient over a narrow plasma layer comparable to an ion gyroradius, are created in the aftermath of magnetic reconnection when a stretched magnetic field snaps back towards a dipolar configuration. In laser-ablated plasma expansion across an external magnetic field, similar density gradient structures, with scale size comparable to an ion gyroradius and accompanied by a cross-magnetic-field flow, are observed (Mostovych et al. 1989). Due to the piston-like action of the laser front, both ions and electrons move with nearly the same speed across the magnetic field and hence the cross-field current is negligible, but a gradient of intermediate scale size develops in the plasma flows that are generated. Furthermore, as in the dipolarization front, waves around the lower hybrid frequency are seen, which were thought to be lower hybrid drift waves (Krall and Liewer 1971) because of their association with the density gradient, just as in the dipolarization front case. However, the wavelength of the lower hybrid waves was found to be much longer than the electron gyroradius and comparable to the scale size of the cross-field flow. As we discussed in Sect. 3, the long wavelength signature is not consistent with the lower hybrid drift waves but is similar to that expected from the EIH waves, which depend on the gradient in the flow and not on a cross-field current. Long wavelengths are generated by nonlinear vortex merging (see Sect. 4). Peyser et al. (1992) analyzed a number of experimental cases and compared the data with theoretical models. They concluded that the waves were likely to be EIH waves, for reasons similar to those argued for the origin of the emissions in a dipolarization front in Sects. 3 and 4. However, due to the inability to measure the detailed parameters in the experiment, an unambiguous characterization of the origin of the waves in laser-ablated plasma jets was not possible. More recent laser ablation experiments have shown the generation of waves around the lower hybrid frequency (Niemann et al. 2013), and their origin is still an open issue.
As in the boundary layers discussed in this article, in magnetic confinement fusion experiments the interaction of multiple scales is thought to be involved in many processes. In particular, in tokamaks there are gradients of magnetic shear, plasma temperature, density, and flow that are all sources of free energy for both large-scale fluctuations (e.g., tearing modes or ballooning modes) and short-scale fluctuations on the gyroradius scale (e.g., electron and ion temperature gradient driven turbulence). The interactions are mediated by mesoscales (e.g., transport barriers and zonal flows). There is a rich tradition of studying the interaction between these disparate scales (e.g., Thyagaraja et al. 2005; Muraglia et al. 2009; McDevitt and Diamond 2006; Bonanomi et al. 2018), whereas in geospace plasmas these kinds of studies are few in number. In this article we have attempted to emphasize the need for such studies. Fortunately, with the advent of high performance computing and small swarms of high resolution satellites, this is beginning to change.
While the plasma response to velocity shear in perpendicular and parallel flows has been studied separately, their combined effect has not been analyzed. In nature it is likely that the velocity shear is in an arbitrary direction due to the magnetic field geometry. In Sect. 2.2.3, we showed in a simple case how this may be possible. But in that case the scale size of the magnetic field variation was orders of magnitude larger than that of the electric field variation, which allowed us to cleanly separate the two scale sizes and study them individually. Effectively, this reduced the problem to one dimension. This may not always be possible in other instances in nature or in the laboratory. In general, the linear response will involve two- or three-dimensional eigenvalue conditions, which are more difficult to solve. There have been some attempts to address the combined effect of parallel and transverse velocity shear [e.g., Kaneko et al. (2007)], but this topic remains an interesting area of research and deserves further attention. In addition, the manifestation of the velocity shear effect in a multi-species plasma, which is likely to prevail in some regions of space, is another interesting future research topic, since the shear effect is mass dependent and hence affects different species differently, introducing relative differences in properties between species (Gavrishchaka et al. 1997).
A common feature in the nonlinear evolution of a compressed plasma system is that the spontaneous generation of shear-driven waves relaxes the velocity gradient generated by the compression so that a balance can be achieved. This balance, or steady state, defines the electromagnetic plasma environment. In addition, the shear-driven waves contribute to viscosity and resistivity as feedback to the global physics and modify the mesoscale plasma features. Thus, the union of the small and large scale physics is the reality that a satellite measures, which underscores the importance of understanding both the small and large scale processes and the coupling between them, as we have attempted to show through natural examples in the Earth's neighborhood plasma environment.
E. Agrimson, N. D'Angelo, R.L. Merlino, Excitation of ion-acoustic-like waves by subcritical currents in a plasma having equal electron and ion temperatures. Phys. Rev. Lett. 86(23), 5282 (2001). https://doi.org/10.1103/physrevlett.86.5282
E.P. Agrimson, N. D'Angelo, R.L. Merlino, Effect of parallel velocity shear on the excitation of electrostatic ion cyclotron waves. Phys. Lett. A 293(5–6), 260 (2002). https://doi.org/10.1016/s0375-9601(02)00026-9
M.J. Alport, S.L. Cartier, R.L. Merlino, Laboratory observations of ion cyclotron waves associated with a double layer in an inhomogeneous magnetic field. J. Geophys. Res. 91(A2), 1599 (1986). https://doi.org/10.1029/ja091ia02p01599
W.E. Amatucci, M.E. Koepke, J.J. Carroll, T.E. Sheridan, Observation of ion-cyclotron turbulence at small values of magnetic-field-aligned current. Geophys. Res. Lett. 21(15), 1595 (1994). https://doi.org/10.1029/94gl00881
W.E. Amatucci, D.N. Walker, G. Ganguli, J.A. Antoniades, D. Duncan, J.H. Bowles, V. Gavrishchaka, M.E. Koepke, Plasma response to strongly sheared flow. Phys. Rev. Lett. 77(10), 1978 (1996). https://doi.org/10.1103/physrevlett.77.1978
W.E. Amatucci, D.N. Walker, G. Ganguli, D. Duncan, J.A. Antoniades, J.H. Bowles, V. Gavrishchaka, M.E. Koepke, Velocity-shear-driven ion-cyclotron waves and associated transverse ion heating. J. Geophys. Res. Sp. Phys. 103(A6), 11711 (1998). https://doi.org/10.1029/98ja00659
W.E. Amatucci, G. Ganguli, D.N. Walker, D. Duncan, Wave and Joule heating in a rotating plasma. Phys. Plasmas 6(2), 619 (1999). https://doi.org/10.1063/1.873207
W.E. Amatucci, G. Ganguli, D.N. Walker, G. Gatling, M. Balkey, T. McCulloch, Laboratory investigation of boundary layer processes due to strong spatial inhomogeneity. Phys. Plasmas 10(5), 1963 (2003). https://doi.org/10.1063/1.1562631
V. Angelopoulos, The THEMIS mission. Sp. Sci. Rev. 141(1), 5 (2008). https://doi.org/10.1007/s11214-008-9336-1
V. Angelopoulos, R.C. Elphic, S.P. Gary, C.Y. Huang, Electromagnetic instabilities in the plasma sheet boundary layer. J. Geophys. Res. 94(A11), 15373 (1989). https://doi.org/10.1103/physrevlett.86.52820
V. Angelopoulos, W. Baumjohann, C.F. Kennel, F.V. Coroniti, M.G. Kivelson, R. Pellat, R.J. Walker, H. Lühr, G. Paschmann, Bursty bulk flows in the inner central plasma sheet. J. Geophys. Res. Sp. Phys. 97(A4), 4027 (1992). https://doi.org/10.1103/physrevlett.86.52821
P. Argo, T.J. Fitzgerald, R. Carlos, NICARE I HF propagation experiment results and interpretation. Radio Sci. 27(2), 289 (1992). https://doi.org/10.1103/physrevlett.86.52822
A.V. Artemyev, V. Angelopoulos, A. Runov, A.A. Petrukovich, Global view of current sheet thinning: plasma pressure gradients and large-scale currents. J. Geophys. Res. Sp. Phys. 124(1), 264 (2019). https://doi.org/10.1103/physrevlett.86.52823
Y. Asano, T. Mukai, M. Hoshino, Y. Saito, H. Hayakawa, T. Nagai, Current sheet structure around the near-earth neutral line observed by geotail. J. Geophys. Res. Sp. Phys. (2004). https://doi.org/10.1103/physrevlett.86.52824
S. Banerjee, V.V. Gavrishchaka, Multimoment convecting flux tube model of the polar wind system with return current and microprocesses. J. Atmos. Solar-Terr. Phys. 69(16), 2071 (2007). https://doi.org/10.1016/j.jastp.2007.08.004
P.A. Bernhardt, G. Ganguli, M.C. Kelley, W.E. Swartz, Enhanced radar backscatter from space shuttle exhaust in the ionosphere. J. Geophys. Res. 100(A12), 23811 (1995). https://doi.org/10.1103/physrevlett.86.52826
I.B. Bernstein, J.M. Greene, M.D. Kruskal, Exact nonlinear plasma oscillations. Phys. Rev. 108(3), 546 (1957). https://doi.org/10.1103/physrevlett.86.52827
N. Bonanomi, P. Mantica, J. Citrin, T. Goerler, B. Teaca, Impact of electron-scale turbulence and multi-scale interactions in the JET tokamak. Nuclear Fusion 58(12), 124003 (2018). https://doi.org/10.1103/physrevlett.86.52828
J. Bonnell, Identification of broadband elf waves observed during transverse ion acceleration in the auroral ionosphere,. Ph.D. thesis, Cornell University (1997)
J. Bonnell, P. Kintner, J.E. Wahlund, K. Lynch, R. Arnoldy, Interferometric determination of broadband ELF wave phase velocity within a region of transverse auroral ion acceleration. Geophys. Res. Lett. 23(23), 3297 (1996). https://doi.org/10.1029/96gl032389
J.L. Burch, Image mission overview. Sp. Sci. Rev. 91(1), 1 (2000). https://doi.org/10.1016/s0375-9601(02)00026-90
J.L. Burch, T.E. Moore, R.B. Torbert, B.L. Giles, Magnetospheric multiscale overview and science objectives. Sp. Sci. Rev. 199(1–4), 5 (2016). https://doi.org/10.1016/s0375-9601(02)00026-91
E. Camporeale, The challenge of machine learning in space weather: nowcasting and forecasting. Sp. Weather 17(8), 1166 (2019). https://doi.org/10.1016/s0375-9601(02)00026-92
G. Carlsson, Topology and data. Bull. Am. Math. Soc. 46, 255 (2009). https://doi.org/10.1016/s0375-9601(02)00026-93
C.A. Cattell, M. Kim, R.P. Lin, F.S. Mozer, Observations of large electric fields near the plasmasheet boundary by ISEE-1. Geophys. Res. Lett. 9(5), 539 (1982). https://doi.org/10.1016/s0375-9601(02)00026-94
C.A. Cattell, F.S. Mozer, E.W. Hones, R.R. Anderson, R.D. Sharp, ISEE observations of the plasma sheet boundary, plasma sheet, and neutral sheet: 1. Electric field, magnetic field, plasma, and ion composition, J. Geophys. Res. 91(A5), 5663 (1986). https://doi.org/10.1016/s0375-9601(02)00026-95
P.J. Catto, M.N. Rosenbluth, C.S. Liu, Parallel velocity shear instabilities in an inhomogeneous plasma with a sheared magnetic field. Phys. Fluids 16(10), 1719 (1973). https://doi.org/10.1016/s0375-9601(02)00026-96
J. Chen, Nonlinear dynamics of charged particles in the magnetotail. J. Geophys. Res. 97(A10), 15011 (1992). https://doi.org/10.1016/s0375-9601(02)00026-97
T. Chen, C. Guestrin, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Association for Computing Machinery, New York, NY, USA, 2016), KDD 16, p. 785–794. https://doi.org/10.1145/2939672.2939785
J. Chen, P.J. Palmadesso, Chaos and nonlinear dynamics of single-particle orbits in a magnetotaillike magnetic field. J. Geophys. Res. Sp. Phys. 91(A2), 1499 (1986). https://doi.org/10.1016/s0375-9601(02)00026-98
C.X. Chen, R.A. Wolf, Interpretation of high-speed flows in the plasma sheet. J. Geophys. Res. Sp. Phys. 98(A12), 21409 (1993). https://doi.org/10.1029/93JA020809
L.J. Chen, S. Wang, O.L. Contel, A. Rager, M. Hesse, J. Drake, J. Dorelli, J. Ng, N. Bessho, D. Graham, L.B. Wilson, T. Moore, B. Giles, W. Paterson, B. Lavraud, K. Genestreti, R. Nakamura, Y.V. Khotyaintsev, R.E. Ergun, R.B. Torbert, J. Burch, C. Pollock, C.T. Russell, P.A. Lindqvist, L. Avanov, Lower-hybrid drift waves driving electron nongyrotropic heating and vortical flows in a magnetic reconnection layer. Phys. Rev. Lett. 125(2), 025103 (2020). https://doi.org/10.1029/ja091ia02p015990
B. Coppi, M.N. Rosenbluth, R.Z. Sagdeev, Instabilities due to temperature gradients in complex magnetic field configurations. Phys. Fluids 10(3), 582 (1967). https://doi.org/10.1029/ja091ia02p015991
C. Crabtree, G. Ganguli, A. Fletcher, A. Sen, Kinetic equilibria of compressed neutral sheets with inhomogeneous electric fields, Phys. Plasmas p. to appear (2020)
N. D'Angelo, KelvinHelmholtz Instability in a Fully Ionized Plasma in a Magnetic Field. Phys. Fluids 8(9), 1748 (1965). https://doi.org/10.1029/ja091ia02p015992
W. Daughton, The unstable eigenmodes of a neutral sheet. Phys. Plasmas 6(4), 1329 (1999). https://doi.org/10.1029/ja091ia02p015993
R.C. Davidson, N.T. Gladd, C.S. Wu, J.D. Huba, Effects of finite plasma beta on the lower-hybrid-drift instability. Phys. Fluids 20(2), 301 (1977). https://doi.org/10.1029/ja091ia02p015994
L. Deng, D. Yu, Deep learning: methods and applications. Found. Trends Signal Process. 7(3–4), 197–387 (2014). https://doi.org/10.1029/ja091ia02p015995
X. Deng, M. Ashour-Abdalla, M. Zhou, R. Walker, M. El-Alaoui, V. Angelopoulos, R.E. Ergun, D. Schriver, Wave and particle characteristics of earthward electron injections associated with dipolarization fronts. J. Geophys. Res. Sp. Phys. 115(A9), A09225 (2010). https://doi.org/10.1029/ja091ia02p015996
P. Drazin, L. Howard, Advances in Applied Mechanics, vol. 7 (Acadmemic, New York, 1966)
W.E. Drummond, M.N. Rosenbluth, Anomalous Diffusion Arising from Microinstabilities in a Plasma. Phys. Fluids 5(12), 1507 (1962). https://doi.org/10.1029/ja091ia02p015997
A.M. DuBois, E. Thomas, W.E. Amatucci, G. Ganguli, Plasma response to a varying degree of stress. Phys. Rev. Lett. 111(14), 145002 (2013). https://doi.org/10.1029/ja091ia02p015998
A.M. DuBois, E. Thomas, W.E. Amatucci, G. Ganguli, Experimental characterization of broadband electrostatic noise due to plasma compression. J. Geophys. Res. Sp. Phys. 119(7), 5624 (2014). https://doi.org/10.1002/2014ja0201989
G.D. Earle, M.C. Kelley, G. Ganguli, Large velocity shears and associated electrostatic waves and turbulence in the auroral F region. J. Geophys. Res. 94(A11), 15321 (1989). https://doi.org/10.1029/94gl008810
T.E. Eastman, L.A. Frank, W.K. Peterson, W. Lennartsson, The plasma sheet boundary layer. J. Geophys. Res. 89(A3), 1553 (1984). https://doi.org/10.1029/94gl008811
H. Edelsbrunner, A Short Course in Computational Geometry and Topology (Springer, Berlin, 2014)
C.L. Enloe, E.M. Tejero, C. Crabtree, G. Ganguli, W.E. Amatucci, Electromagnetic fluctuations in the intermediate frequency range originating from a plasma boundary layer. Phys. Plasmas 24(5), 052107 (2017). https://doi.org/10.1029/94gl008812
R.E. Ergun, C.W. Carlson, J.P. McFadden, F.S. Mozer, G.T. Delory, W. Peria, C.C. Chaston, M. Temerin, R. Elphic, R. Strangeway, R. Pfaff, C.A. Cattell, D. Klumpar, E. Shelley, W. Peterson, E. Moebius, L. Kistler, FAST satellite observations of electric field structures in the auroral zone. Geophys. Res. Lett. 25(12), 2025 (1998). https://doi.org/10.1029/94gl008813
C. Escoubet, R. Schmidt, M. Goldstein, Cluster Science and Mission Overview. Sp. Sci. Rev. 79(1–2), 11 (1997). https://doi.org/10.1029/94gl008814
J.M. Finn, A.J. Cole, C. Cihan, D. Brennan, Integration of tearing layer equations by means of matrix riccati methods (2020). Unpublished
A.C. Fletcher, C. Crabtree, G. Ganguli, D. Malaspina, E. Tejero, X. Chu, Kinetic equilibrium and stability analysis of dipolarization fronts. J. Geophys. Res. Sp. Phys. (2019). https://doi.org/10.1029/94gl008815
J.M. Forbes, The equatorial electrojet. Rev. Geophys. 19(3), 469 (1981). https://doi.org/10.1029/94gl008816
B. Fornberg, J.A.C. Weideman, A numerical methodology for the Painlev equations. J. Comput. Phys. 230(15), 5957 (2011). https://doi.org/10.1029/94gl008817
J. Friedman, T. Hastie, R. Tibshirani, Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). Ann. Stat. 28(2), 337 (2000). https://doi.org/10.1029/94gl008818
H.S. Fu, Y.V. Khotyaintsev, A. Vaivads, M. André, S.Y. Huang, Electric structure of dipolarization front at sub-proton scale. Geophys. Res. Lett. 39(6), L06105 (2012). https://doi.org/10.1029/2012gl0512749
G. Ganguli, P.A. Bernhardt, W. Scales, P. Rodriguez, C. Siefring, H.A. Romero, in Physics of Space Plasmas (1992), SPI Conference Proceedings and Reprint Series, ed. by T. Chang (Scientific Publishers, Inc., Cambridge, MA, 1993), p. 161
G. Ganguli, H. Romero, J. Fedder, Interaction Between Global MHD and Kinetic Processes in the Magnetotail (American Geophysical Union (AGU), 1994), pp. 135–148. https://doi.org/10.1029/GM084p0135
G. Ganguli, Stability of an inhomogeneous transverse plasma flow. Phys. Plasmas 4(5), 1544 (1997). https://doi.org/10.1103/physrevlett.77.19781
S.B. Ganguli, P.J. Palmadesso, Plasma transport in the auroral return current region. J. Geophys. Res. Sp. Phys. 92(A8), 8673 (1987). https://doi.org/10.1103/physrevlett.77.19782
G. Ganguli, P. Palmadesso, Y.C. Lee, A new mechanism for excitation of electrostatic ion cyclotron waves and associated perpendicular ion heating. Geophys. Res. Lett. 12(10), 643 (1985). https://doi.org/10.1103/physrevlett.77.19783
G. Ganguli, Y.C. Lee, P. Palmadesso, Electrostatic ion-cyclotron instability caused by a nonuniform electric field perpendicular to the external magnetic field. Phys. Fluids 28(3), 761 (1985). https://doi.org/10.1103/physrevlett.77.19784
S.B. Ganguli, P.J. Palmadesso, H.G. Mitchell, Effects of electron heating on the current driven electrostatic ion cyclotron instability and plasma transport processes along auroral field lines. Geophys. Res. Lett. 15(11), 1291 (1988). https://doi.org/10.1103/physrevlett.77.19785
G. Ganguli, Y.C. Lee, P.J. Palmadesso, Kinetic theory for electrostatic waves due to transverse velocity shears. Phys. Fluids 31(4), 823 (1988a). https://doi.org/10.1103/physrevlett.77.19786
G. Ganguli, Y.C. Lee, P.J. Palmadesso, Electronion hybrid mode due to transverse velocity shear. Phys. Fluids 31(10), 2753 (1988b). https://doi.org/10.1103/physrevlett.77.19787
G. Ganguli, M.J. Keskinen, H. Romero, R. Heelis, T. Moore, C. Pollock, Coupling of microprocesses and macroprocesses due to velocity shear: an application to the low-altitude ionosphere. J. Geophys. Res. 99(A5), 8873 (1994). https://doi.org/10.1103/physrevlett.77.19788
G. Ganguli, S. Slinker, V. Gavrishchaka, W. Scales, Low frequency oscillations in a plasma with spatially variable field-aligned flow. Phys. Plasmas 9(5), 2321 (2002). https://doi.org/10.1103/physrevlett.77.19789
G. Ganguli, E. Tejero, C. Crabtree, W. Amatucci, L. Rudakov, Generation of electromagnetic waves in the very low frequency band by velocity gradient. Phys. Plasmas 21(1), 012107 (2014). https://doi.org/10.1029/98ja006590
G. Ganguli, C. Crabtree, A.C. Fletcher, E. Tejero, D. Malaspina, I. Cohen, Kinetic equilibrium of dipolarization fronts. Sci. Rep. 8(1), 17186 (2018). https://doi.org/10.1029/98ja006591
J. Garland, E. Bradley, J.D. Meiss, Exploring the topology of dynamical reconstructions. Physica D 334, 49 (2016). https://doi.org/10.1016/j.physd.2016.03.006
S.P. Gary, S.J. Schwartz, Wave-particle transport by weak electrostatic flow shear fluctuations. J. Geophys. Res. 86(A13), 11139 (1981). https://doi.org/10.1029/98ja006593
V. Gavrishchaka, Collective phenomenon in a magnetized plasma with a field-aligned drift and inhomogeneous transverse flow. Ph.D. thesis, West Virginia University (1996)
G.V. Gavrishchaka, Boosting-Based Frameworks in Financial Modeling: Application to Symbolic Volatility Forecasting (Emerald Group Publishing Limited, 2006), vol. 20 Part 2, pp. 123–151. https://doi.org/10.1016/S0731-9053(05)20024-5
V.V. Gavrishchaka, S.B. Ganguli, Support vector machine as an efficient tool for high-dimensional data processing: Application to substorm forecasting. J. Geophys. Res. Sp. Phys. 106(A12), 29911 (2001a). https://doi.org/10.1029/98ja006594
V.V. Gavrishchaka, S.B. Ganguli, Optimization of the neural-network geomagnetic model for forecasting large-amplitude substorm events. J. Geophys. Res. Sp. Phys. 106(A4), 6247 (2001b). https://doi.org/10.1029/98ja006595
V. Gavrishchaka, M.E. Koepke, G. Ganguli, Dispersive properties of a magnetized plasma with a fieldaligned drift and inhomogeneous transverse flow. Phys. Plasmas 3(8), 3091 (1996). https://doi.org/10.1029/98ja006596
V.V. Gavrishchaka, M.E. Koepke, G.I. Ganguli, Ion cyclotron modes in a two-ion-component plasma with transverse-velocity shear. J. Geophys. Res. Sp. Phys. 102(A6), 11653 (1997). https://doi.org/10.1029/98ja006597
V.V. Gavrishchaka, S.B. Ganguli, G.I. Ganguli, Origin of low-frequency oscillations in the ionosphere. Phys. Rev. Lett. 80(4), 728 (1998). https://doi.org/10.1029/98ja006598
V.V. Gavrishchaka, S.B. Ganguli, G.I. Ganguli, Electrostatic oscillations due to filamentary structures in the magnetic-field-aligned flow: the ion-acoustic branch. J. Geophys. Res. Sp. Phys. 104(A6), 12683 (1999). https://doi.org/10.1029/98ja006599
V.V. Gavrishchaka, G.I. Ganguli, W.A. Scales, S.P. Slinker, C.C. Chaston, J.P. McFadden, R.E. Ergun, C.W. Carlson, Multiscale coherent structures and broadband waves due to parallel inhomogeneous flows. Phys. Rev. Lett. 85(20), 4285 (2000). https://doi.org/10.1063/1.8732070
V. Gavrishchaka, Z. Yang, R. Miao, O. Senyukova, Advantages of hybrid deep learning frameworks in applications with limited data. Int. J. Mach. Learn. Comput. 8(6), 549 (2018). https://doi.org/10.1063/1.8732071
V. Gavrishchaka, O. Senyukova, M. Koepke, Synergy of physics-based reasoning and machine learning in biomedical applications: towards unlimited deep learning with limited data. Adv. Phys. X 4(1), 1582361 (2019). https://doi.org/10.1063/1.8732072
H. Gleisner, H. Lundstedt, P. Wintoft, Predicting geomagnetic storms from solar-wind data using time-delay neural networks. Ann. Geophys. 14(7), 679 (1996). https://doi.org/10.1063/1.8732073
I.V. Golovchanskaya, B.V. Kozelov, A.A. Chernyshov, M.M. Mogilevsky, A.A. Ilyasov, Branches of electrostatic turbulence inside solitary plasma structures in the auroral ionosphere. Phys. Plasmas 21(8), 082903 (2014a). https://doi.org/10.1063/1.8732074
I.V. Golovchanskaya, B.V. Kozelov, I.V. Mingalev, M.N. Melnik, A.A. Lubchich, Evaluation of a space-observed electric field structure for the ability to destabilize inhomogeneous energy-density-driven waves. Ann. Geophys. 32(1), 1 (2014b). https://doi.org/10.1063/1.8732075
S. Gopinath, P.R. Prince, A comparison of machine-learning techniques for the prediction of the auroral electrojet index. J. Earth Syst. Sci. 128(7), 172 (2019). https://doi.org/10.1063/1.8732076
A. Gordeev, A. Kingsep, L. Rudakov, Electron magnetohydrodynamics. Phys. Rep. 243(5), 215 (1994). https://doi.org/10.1063/1.8732077
C.L. Grabbe, T.E. Eastman, Generation of broadband electrostatic noise by ion beam instabilities in the magnetotail. J. Geophys. Res. Sp. Phys. 89(A6), 3865 (1984). https://doi.org/10.1063/1.8732078
H. Grad, H. Rubin, Hydromagnetic equilibria and force-free fields. Tech. rep., Proceedings of the 2nd UN Conf. on the Peaceful Uses of Atomic Energy (1958)
T.S. Hahm, R.M. Kulsrud, Forced magnetic reconnection. Phys. Fluids 28(8), 2412 (1985). https://doi.org/10.1063/1.8732079
M. Hamrin, M. André, G. Ganguli, V.V. Gavrishchaka, M.E. Koepke, M.W. Zintl, N. Ivchenko, T. Karlsson, J.H. Clemmons, Inhomogeneous transverse electric fields and wave generation in the auroral region: A statistical study. J. Geophys. Res. Sp. Phys. 106(A6), 10803 (2001). https://doi.org/10.1063/1.15626310
E.G. Harris, On a self-consistent field method for a completely ionized gas. Tech. Rep. 4944, Naval Research Laboratory, Washington, D.C. (1957)
E.G. Harris, On a plasma sheath separating regions of oppositely directed magnetic field. Il Nuovo Cimento (1955-1965) 23(1), 115 (1962). https://doi.org/10.1063/1.15626311
M.G. Henderson, G.D. Reeves, H.E. Spence, R.B. Sheldon, A.M. Jorgensen, J.B. Blake, J.F. Fennell, First energetic neutral atom images from polar. Geophys. Res. Lett. 24(10), 1167 (1997). https://doi.org/10.1063/1.15626312
G.E. Hinton, R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. Science 313(5786), 504 (2006). https://doi.org/10.1063/1.15626313
H. Hojo, Y. Kishimoto, J. VanDam, Stability criterion for kelvin-helmholtz mode in a cylindrical plasma. J. Phys. Soc. Jpn. 64(11), 4073 (1995). https://doi.org/10.1063/1.15626314
M. Hoshino, A. Nishida, T. Mukai, Y. Saito, T. Yamamoto, S. Kokubun, Structure of plasma sheet in magnetotail: double-peaked electric current sheet. J. Geophys. Res. Sp. Phys. 101(A11), 24775 (1996). https://doi.org/10.1063/1.15626315
J.D. Huba, Stabilization of the electrostatic Kelvin-Helmholtz instability in high plasmas. J. Geophys. Res. 86(A11), 8991 (1981). https://doi.org/10.1063/1.15626316
J.D. Huba, G. Ganguli, Influence of magnetic shear on the lowerhybrid drift instability in finite plasmas. Phys. Fluids 26(1), 124 (1983). https://doi.org/10.1063/1.15626317
J.D. Huba, J.F. Drake, N.T. Gladd, Lowerhybriddrift instability in field reversed plasmas. Phys. Fluids 23(3), 552 (1980). https://doi.org/10.1063/1.15626318
S. Ohtani, M.A. Shay, T. Mukai, Temporal structure of the fast convective flow in the plasma sheet: comparison between observations and two-fluid simulations. J. Geophys. Res. 109(A3), A03210 (2004). https://doi.org/10.1063/1.15626319
A.A. Ilyasov, A.A. Chernyshov, M.M. Mogilevsky, I.V. Golovchanskaya, B.V. Kozelov, Inhomogeneities of plasma density and electric field as sources of electrostatic turbulence in the auroral region. Phys. Plasmas 22(3), 032906 (2015). https://doi.org/10.1007/s11214-008-9336-10
D.L. Jassby, Evolution and large-electric-field suppression of the tansverse Kelvin-Helmholtz instability. Phys. Rev. Lett. 25(22), 1567 (1970). https://doi.org/10.1007/s11214-008-9336-11
D.L. Jassby, Transverse velocity shear instabilities within a magnetically confined plasma. Phys. Fluids 15(9), 1590 (1972). https://doi.org/10.1007/s11214-008-9336-12
T. Kaneko, H. Tsunoyama, R. Hatakeyama, Drift-wave instability excited by field-aligned ion flow velocity shear in the absence of electron current. Phys. Rev. Lett. 90(12), 125001 (2003). https://doi.org/10.1007/s11214-008-9336-13
T. Kaneko, E.W. Reynolds, R. Hatakeyama, M.E. Koepke, Velocity-shear-driven drift waves with simultaneous azimuthal modes in a barium-ion Q-machine plasma. Phys. Plasmas 12(10), 102106 (2005). https://doi.org/10.1007/s11214-008-9336-14
T. Kaneko, K. Hayashi, R. Ichiki, R. Hatakeyama, Drift-wave instability modified by superimposed parallel and perpendicular plasma flow velocity shears. Fusion Sci. Technol. 51(2T), 103 (2007). https://doi.org/10.1007/s11214-008-9336-15
M.C. Kelley, C.W. Carlson, Observations of intense velocity shear and associated electrostatic waves near an auroral arc. J. Geophys. Res. 82(16), 2343 (1977). https://doi.org/10.1007/s11214-008-9336-16
G.I. Kent, N.C. Jen, F.F. Chen, Transverse Kelvin-Helmholtz instability in a rotating plasma. Phys. Fluids 12(10), 2140 (1969). https://doi.org/10.1007/s11214-008-9336-17
M.J. Keskinen, H.G. Mitchell, J.A. Fedder, P. Satyanarayana, S.T. Zalesak, J.D. Huba, Nonlinear evolution of the kelvin-helmholtz instability in the high-latitude ionosphere. J. Geophys. Res. Sp. Phys. 93(A1), 137 (1988). https://doi.org/10.1007/s11214-008-9336-18
J.M. Kindel, C.F. Kennel, Topside current instabilities. J. Geophys. Res. 76(13), 3055 (1971). https://doi.org/10.1007/s11214-008-9336-19
P.M. Kintner, Plasma waves and transversely accelerated ions in the terrestrial ionosphere. Phys. Fluids B Plasma Phys. 4(7), 2264 (1992). https://doi.org/10.1103/physrevlett.86.528200
A.J. Klimas, D. Vassiliadis, D.N. Baker, D.A. Roberts, The organized nonlinear dynamics of the magnetosphere. J. Geophys. Res. Sp. Phys. 101(A6), 13089 (1996). https://doi.org/10.1103/physrevlett.86.528201
M.E. Koepke, W.E. Amatucci, J.J. Carroll, T.E. Sheridan, Experimental verification of the inhomogeneous energy-density driven instability. Phys. Rev. Lett. 72(21), 3355 (1994). https://doi.org/10.1103/physrevlett.86.528202
N.A. Krall, P.C. Liewer, Low-frequency instabilities in magnetic pulses. Phys. Rev. A 4(5), 2094 (1971). https://doi.org/10.1103/physrevlett.86.528203
T.A.S. Kumar, S.K. Mattoo, R. Jha, Plasma diffusion across inhomogeneous magnetic fields. Phys. Plasmas 9(7), 2946 (2002). https://doi.org/10.1103/physrevlett.86.528204
G.S. Lakhina, Low-frequency electrostatic noise due to velocity shear instabilities in the regions of magnetospheric flow boundaries. J. Geophys. Res. 92(A11), 12161 (1987). https://doi.org/10.1103/physrevlett.86.528205
Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature 521(7553), 436 (2015). https://doi.org/10.1103/physrevlett.86.528206
D. Lin, W.A. Scales, G. Ganguli, X. Fu, C. Crabtree, E. Tejero, Y. Chen, A.C. Fletcher, A new perspective for dipolarization front dynamics: electromagnetic effects of velocity inhomogeneity. J. Geophys. Res. Sp. Phys. 77(10), 1978 (2019). https://doi.org/10.1103/physrevlett.86.528207
H. Liu, G. Lu, Velocity shear-related ion upflow in the low-altitude ionosphere. Ann. Geophys. 22(4), 1149 (2004). https://doi.org/10.1103/physrevlett.86.528208
Y. Liu, J. Lei, M. Li, Y. Ling, J. Yuan, Independent excitation of inhomogeneous energy density driven instability by electron density gradient. Phys. Plasmas 25(10), 102901 (2018). https://doi.org/10.1103/physrevlett.86.528209
A.T.Y. Lui, Extended Consideration of a Synthesis Model for Magnetospheric Substorms (American Geophysical Union (AGU), 2013), pp. 43–60. https://doi.org/10.1029/GM064p0043
E.T. Lundberg, P.M. Kintner, K.A. Lynch, M.R. Mella, Multi-payload measurement of transverse velocity shears in the topside ionosphere. Geophys. Res. Lett. 39(1), L01107 (2012). https://doi.org/10.1103/physrevlett.86.528211
A. Matsubara, T. Tanikawa, Anomalous cross-field transport of electrons driven by the electron-ion hybrid instability due to the velocity shear in a magnetized filamentary plasma. Jpn. J. Appl. Phys. 39(Part 1, No. 8), 4920 (2000). https://doi.org/10.1103/physrevlett.86.528212
J.B. McBride, E. Ott, J.P. Boris, J.H. Orens, Theory and simulation of turbulent heating by the modified two-stream instability. Phys. Fluids 15(12), 2367 (1972). https://doi.org/10.1103/physrevlett.86.528213
D.J. McComas, C.T. Russell, R.C. Elphic, S.J. Bame, The near-earth cross-tail current sheet: detailed isee 1 and 2 case studies. J. Geophys. Res. Sp. Phys. 91(A4), 4287 (1986). https://doi.org/10.1103/physrevlett.86.528214
D.J. McComas, F. Allegrini, J. Baldonado, B. Blake, P.C. Brandt, J. Burch, J. Clemmons, W. Crain, D. Delapp, R. DeMajistre, D. Everett, H. Fahr, L. Friesen, H. Funsten, J. Goldstein, M. Gruntman, R. Harbaugh, R. Harper, H. Henkel, C. Holmlund, G. Lay, D. Mabry, D. Mitchell, U. Nass, C. Pollock, S. Pope, M. Reno, S. Ritzau, E. Roelof, E. Scime, M. Sivjee, R. Skoug, T.S. Sotirelis, M. Thomsen, C. Urdiales, P. Valek, K. Viherkanto, S. Weidner, T. Ylikorpi, M. Young, J. Zoennchen, The two wide-angle imaging neutral-atom spectrometers (twins) nasa mission-of-opportunity. Sp. Sci. Rev. 142(1), 157 (2009). https://doi.org/10.1103/physrevlett.86.528215
C.J. McDevitt, P.H. Diamond, Multiscale interaction of a tearing mode with drift wave turbulence: a minimal self-consistent model. Phys. Plasmas 13(3), 032302 (2006). https://doi.org/10.1103/physrevlett.86.528216
R. Miao, Z. Yang, V. Gavrishchaka, in 2020 3rd International Conference on Information and Computer Technologies (ICICT) (2020), pp. 107–113. 10.1109/ICICT50521.2020.00025
A.B. Mikhailovskii, Theory of Plasma Instabilities, vol. 2 (Consultants Bureau, New York, 1974)
A.N. Mostovych, B.H. Ripin, J.A. Stamper, Laser-produced plasma jets: collimation and instability in strong transverse magnetic fields. Phys. Rev. Lett. 62(24), 2837 (1989). https://doi.org/10.1103/physrevlett.86.528217
F.S. Mozer, C.W. Carlson, M.K. Hudson, R.B. Torbert, B. Parady, J. Yatteau, M.C. Kelley, Observations of paired electrostatic shocks in the polar magnetosphere. Phys. Rev. Lett. 38, 292 (1977). https://doi.org/10.1103/physrevlett.86.528218
M. Muraglia, O. Agullo, S. Benkadda, X. Garbet, P. Beyer, A. Sen, Nonlinear dynamics of magnetic islands imbedded in small-scale turbulence. Phys. Rev. Lett. 103, 145001 (2009). https://doi.org/10.1103/physrevlett.86.528219
R. Nakamura, W. Baumjohann, B. Klecker, Y. Bogdanova, A. Balogh, H. Rème, J.M. Bosqued, I. Dandouras, J.A. Sauvaud, K.H. Glassmeier, L. Kistler, C. Mouikis, T.L. Zhang, H. Eichelberger, A. Runov, Motion of the dipolarization front during a flow burst event observed by Cluster. Geophys. Res. Lett. 29(20), 3 (2002a). https://doi.org/10.1103/physrevlett.86.528220
R. Nakamura, W. Baumjohann, A. Runov, M. Volwerk, T.L. Zhang, B. Klecker, Y. Bogdanova, A. Roux, A. Balogh, H. Rème, J.A. Sauvaud, H.U. Frey, Fast flow during current sheet thinning. Geophys. Res. Lett. 29(23), 55 (2002b). https://doi.org/10.1103/physrevlett.86.528221
R. Nakamura, A. Retinò, W. Baumjohann, M. Volwerk, N. Erkaev, B. Klecker, E.A. Lucek, I. Dandouras, M. André, Y. Khotyaintsev, Evolution of dipolarization in the near-earth current sheet induced by earthward rapid flux transport. Ann. Geophys. 27(4), 1743 (2009). https://doi.org/10.1103/physrevlett.86.528222
C. Niemann, W. Gekelman, C.G. Constantin, E.T. Everson, D.B. Schaeffer, S.E. Clark, D. Winske, A.B. Zylstra, P. Pribyl, S.K.P. Tripathi, D. Larson, S.H. Glenzer, A.S. Bondarenko, Dynamics of exploding plasmas in a large magnetized plasma. Phys. Plasmas 20(1), 012108 (2013). https://doi.org/10.1103/physrevlett.86.528223
K.I. Nishikawa, G. Ganguli, Y.C. Lee, P.J. Palmadesso, Simulation of ion-cyclotron-like modes in a magnetoplasma with transverse inhomogeneous electric field. Phys. Fluids 31(6), 1568 (1988). https://doi.org/10.1103/physrevlett.86.528224
K.I. Nishikawa, G. Ganguli, Y.C. Lee, P.J. Palmadesso, Simulation of electrostatic turbulence due to sheared flows parallel and transverse to the magnetic field. J. Geophys. Res. 95(A2), 1029 (1990). https://doi.org/10.1103/physrevlett.86.528225
K. Nykyri, B. Grison, P.J. Cargill, B. Lavraud, E. Lucek, I. Dandouras, A. Balogh, N. Cornilleau-Wehrlin, H. Rème, Origin of the turbulent spectra in the high-altitude cusp: Cluster spacecraft observations. Ann. Geophys. 24(3), 1057 (2006). https://doi.org/10.1103/physrevlett.86.528226
S. Orsini, M. Candidi, V. Formisano, H. Balsiger, A. Ghielmetti, K.W. Ogilvie, The structure of the plasma sheet-lobe boundary in the Earth's magnetotail. J. Geophys. Res. 89(A3), 1573 (1984). https://doi.org/10.1103/physrevlett.86.528227
P. Palmadesso, G. Ganguli, Y.C. Lee, A New Mechanism for Excitation of Waves in a Magnetoplasma II. Wave-Particle and Nonlinear Aspects (American Geophysical Union (AGU), 1986), pp. 301–306. https://doi.org/10.1029/GM038p0301
G.K. Parks, M. McCarthy, R.J. Fitzenreiter, J. Etcheto, K.A. Anderson, R.R. Anderson, T.E. Eastman, L.A. Frank, D.A. Gurnett, C. Huang, R.P. Lin, A.T.Y. Lui, K.W. Ogilvie, A. Pedersen, H. Reme, D.J. Williams, Particle and field characteristics of the high-latitude plasma sheet boundary layer. J. Geophys. Res. 89(A10), 8885 (1984). https://doi.org/10.1103/physrevlett.86.528229
J.R. Peñano, G. Ganguli, Ionospheric source for low-frequency broadband electromagnetic signatures. Phys. Rev. Lett. 83(7), 1343 (1999). https://doi.org/10.1103/physrevlett.86.528230
J.R. Peñano, G. Ganguli, Generation of ELF electromagnetic waves in the ionosphere by localized transverse dc electric fields: Subcyclotron frequency regime. J. Geophys. Res. Sp. Phys. 105(A4), 7441 (2000). https://doi.org/10.1103/physrevlett.86.528231
J.R. Peñano, G. Ganguli, W.E. Amatucci, D.N. Walker, V. Gavrishchaka, Velocity shear-driven instabilities in a rotating plasma layer. Phys. Plasmas 5(12), 4377 (1998). https://doi.org/10.1103/physrevlett.86.528232
J.R. Pennano, G. Ganguli, Generation of electromagnetic ion cyclotron waves in the ionosphere by localized transverse dc electric fields. J. Geophys. Res. Sp. Phys. 107(A8), SIA 14 (2002). https://doi.org/10.1103/physrevlett.86.528233
T.A. Peyser, C.K. Manka, B.H. Ripin, G. Ganguli, Electronion hybrid instability in laserproduced plasma expansions across magnetic fields. Phys. Fluids B Plasma Phys. 4(8), 2448 (1992). https://doi.org/10.1103/physrevlett.86.528234
O.P. Pogutse, Zh Eksp, Teor. Fiz. 52, 759 (1967)
C.J. Pollock, M.O. Chandler, T.E. Moore, J.H. Waite, C.R. Chappell, D.A. Gurnett, A survey of upwelling ion event characteristics. J. Geophys. Res. 95(A11), 18969 (1990). https://doi.org/10.1103/physrevlett.86.528235
P.L. Pritchett, Electrostatic KelvinHelmholtz instability produced by a localized electric field perpendicular to an external magnetic field. Phys. Fluids 30(1), 272 (1987). https://doi.org/10.1103/physrevlett.86.528236
P.L. Pritchett, Simulation of collisionless electrostatic velocitysheardriven instabilities. Phys. Fluids B Plasma Phys. 5(10), 3770 (1993). https://doi.org/10.1103/physrevlett.86.528237
L. Raleigh, Theory of Sound, vol. II (MacMillan, London, 1896)
E.C. Roelof, Energetic neutral atom image of a storm-time ring current. Geophys. Res. Lett. 14(6), 652 (1987). https://doi.org/10.1103/physrevlett.86.528238
H. Romero, G. Ganguli, Nonlinear evolution of a strongly sheared crossfield plasma flow. Phys. Fluids B Plasma Phys. 5(9), 3163 (1993). https://doi.org/10.1103/physrevlett.86.528239
H. Romero, G. Ganguli, Relaxation of the stressed plasma sheet boundary layer. Geophys. Res. Lett. 21(8), 645 (1994). https://doi.org/10.1103/physrevlett.86.528240
H. Romero, G. Ganguli, P. Palmadesso, P.B. Dusenbery, Equilibrium structure of the plasma sheet boundary layer-lobe interface. Geophys. Res. Lett. 17(13), 2313 (1990). https://doi.org/10.1103/physrevlett.86.528241
H. Romero, G. Ganguli, Y.C. Lee, Ion acceleration and coherent structures generated by lower hybrid shear-driven instabilities. Phys. Rev. Lett. 69(24), 3503 (1992a). https://doi.org/10.1103/physrevlett.86.528242
H. Romero, G. Ganguli, Y.C. Lee, P.J. Palmadesso, Electronion hybrid instabilities driven by velocity shear in a magnetized plasma. Phys. Fluids B Plasma Phys. 4(7), 1708 (1992b). https://doi.org/10.1103/physrevlett.86.528243
L.I. Rudakov, R.Z. Sagdeev, Dokl. Akad. Nauk. SSR 138, 581 (1961)
A. Runov, V. Angelopoulos, X.Z. Zhou, X.J. Zhang, S. Li, F. Plaschke, J. Bonnell, A themis multicase study of dipolarization fronts in the magnetotail plasma sheet, Journal of Geophysical Research: Space Physics 116(A5) (2011). https://doi.org/10.1029/2010JA016316. https://doi.org/10.1103/physrevlett.86.528244
A. Runov, V. Sergeev, R. Nakamura, W. Baumjohann, Z. Vörös, M. Volwerk, Y. Asano, B. Klecker, H. Rème, A. Balogh, Properties of a bifurcated current sheet observed on 29 august 2001. Ann. Geophys. 22(7), 2535 (2004). https://doi.org/10.1103/physrevlett.86.528245
A. Runov, V. Angelopoulos, M.I. Sitnov, V.A. Sergeev, J. Bonnell, J.P. McFadden, D. Larson, K.H. Glassmeier, U. Auster, THEMIS observations of an earthward-propagating dipolarization front. Geophys. Res. Lett. 36(14), 5 (2009). https://doi.org/10.1103/physrevlett.86.528246
J. Sanny, R.L. McPherron, C.T. Russell, D.N. Baker, T.I. Pulkkinen, A. Nishida, Growth-phase thinning of the near-earth current sheet during the cdaw 6 substorm. J. Geophys. Res. Sp. Phys. 99(A4), 5805 (1994). https://doi.org/10.1103/physrevlett.86.528247
N. Sato, M. Nakamura, R. Hatakeyama, Three-dimensional double layers inducing ion-cyclotron oscillations in a collisionless plasma. Phys. Rev. Lett. 57(10), 1227 (1986). https://doi.org/10.1103/physrevlett.86.528248
P. Satyanarayana, Y.C. Lee, J.D. Huba, The stability of a stratified shear layer. Phys. Fluids 30(1), 81 (1987). https://doi.org/10.1103/physrevlett.86.528249
W. Scales, P.A. Bernhardt, G. Ganguli, in Physics of Space Plasmas (1992), SPI Conference Proceedings and Reprint Series, ed. by T. Chang (Scientific Publishers, Inc.,, Cambridge, MA., 1993), p. 161
W.A. Scales, P.A. Bernhardt, G. Ganguli, Early time evolution of negative ion clouds and electron density depletions produced during electron attachment chemical release experiments. J. Geophys. Res. 99(A1), 373 (1994). https://doi.org/10.1103/physrevlett.86.528250
W.A. Scales, P.A. Bernhardt, G. Ganguli, Early time evolution of a chemically produced electron depletion. J. Geophys. Res. 100(A1), 269 (1995). https://doi.org/10.1103/physrevlett.86.528251
R.E. Schapire, The design and analysis of efficient learning algorithms. Ph.D. thesis, Massachusetts Institute of Technology (1992)
K. Schindler, J. Birn, On the cause of thin current sheets in the near-earth magnetotail and their possible significance for magnetospheric substorms. J. Geophys. Res. Sp. Phys. 98(A9), 15477 (1993). https://doi.org/10.1103/physrevlett.86.528252
K. Schindler, M. Hesse, Formation of thin bifurcated current sheets by quasisteady compression. Phys. Plasmas 15(4), 042902 (2008). https://doi.org/10.1103/physrevlett.86.528253
D. Schmid, R. Nakamura, F. Plaschke, M. Volwerk, W. Baumjohann, Two states of magnetotail dipolarization fronts: a statistical study. J. Geophys. Res. Sp. Phys. 120(2), 1096 (2015). https://doi.org/10.1002/2014ja02038054
O.V. Senyukova, V.V. Gavrishchaka, in Computational Intelligence and Bioinformatics / 755: Modelling, Simulation, and Identification (2011). https://doi.org/10.2316/P.2011.753-025
V.A. Sergeev, D.G. Mitchell, C.T. Russell, D.J. Williams, Structure of the tail plasma/current sheet at 11 re and its changes in the course of a substorm. J. Geophys. Res. Sp. Phys. 98(A10), 17345 (1993). https://doi.org/10.1103/physrevlett.86.528255
A. Sestero, Structure of plasma sheaths. Phys. Fluids 7(1), 44 (1964). https://doi.org/10.1103/physrevlett.86.528256
V.D. Shafranov, in Reviews of Plasma Physics, vol. 2 (Consultants Bureau, 1966), p. 103
M.I. Sitnov, M. Swisdak, P.N. Guzdar, A. Runov, Structure and dynamics of a new class of thin current sheets. J. Geophys. Res. Sp. Phys. 111(A8), A08204 (2006). https://doi.org/10.1103/physrevlett.86.528257
R. Slapak, H. Gunell, M. Hamrin, Observations of multiharmonic ion cyclotron waves due to inverse ion cyclotron damping in the northern magnetospheric cusp. Geophys. Res. Lett. 44(1), 22 (2017). https://doi.org/10.1103/physrevlett.86.528258
T.W. Speiser, Particle trajectories in model current sheets: 1. Analytical solutions. J. Geophys. Res. (1896-1977) 70(17), 4219 (1965). https://doi.org/10.1029/JZ070i017p0421959
D.P. Stern, The Beginning of Substorm Research (American Geophysical Union (AGU), 2013), pp. 11–14. https://doi.org/10.1029/GM064p0011. https://doi.org/10.1103/physrevlett.86.528260
A. Surjalal Sharma, Assessing the magnetosphere's nonlinear behavior: Its dimension is low, its predictability, high. Rev. Geophys. 33(S1), 645 (1995). https://doi.org/10.1103/physrevlett.86.528261
K. Takahashi, E.W. Hones Jr., Isee 1 and 2 observations of ion distributions at the plasma sheet-tail lobe boundary. J. Geophys. Res. Sp. Phys. 93(A8), 8558 (1988). https://doi.org/10.1103/physrevlett.86.528262
E.M. Tejero, W.E. Amatucci, G. Ganguli, C.D. Cothran, C. Crabtree, E. Thomas, Spontaneous electromagnetic emission from a strongly localized plasma flow. Phys. Rev. Lett. 106(18), 185001 (2011). https://doi.org/10.1103/physrevlett.86.528263
C. Teodorescu, E.W. Reynolds, M.E. Koepke, Experimental verification of the shear-modified ion-acoustic instability. Phys. Rev. Lett. 88(18), 185003 (2002a). https://doi.org/10.1103/physrevlett.86.528264
C. Teodorescu, E.W. Reynolds, M.E. Koepke, Observation of inverse ion-cyclotron damping induced by parallel-velocity shear. Phys. Rev. Lett. 89(10), 105001 (2002b). https://doi.org/10.1103/physrevlett.86.528265
A. Thyagaraja, P.J. Knight, M.R. de Baar, G.M.D. Hogeweij, E. Min, Profile-turbulence interactions, magnetohydrodynamic relaxations, and transport in tokamaks. Phys. Plasmas 12(9), 090907 (2005). https://doi.org/10.1103/physrevlett.86.528266
K.T. Tsang, B. Hafizi, Magnetic drift induced ion-cyclotron maser instability. Phys. Fluids 30(3), 804 (1987). https://doi.org/10.1103/physrevlett.86.528267
K. Tummel, L. Chen, Z. Wang, X.Y. Wang, Y. Lin, Gyrokinetic theory of electrostatic lower-hybrid drift instabilities in a current sheet with guide field. Phys. Plasmas 21(5), 052104 (2014). https://doi.org/10.1103/physrevlett.86.528268
G. Vekstein, K. Kusano, Taylor problem and onset of plasmoid instability in the hall-magnetohydrodynamics. Phys. Plasmas 24(10), 102116 (2017). https://doi.org/10.1103/physrevlett.86.528269
J.E. Wahlund, P. Louarn, T. Chust, H.D. Feraudy, A. Roux, B. Holback, P.O. Dovner, G. Holmgren, On ion acoustic turbulence and the nonlinear evolution of kinetic Alfvn waves in aurora. Geophys. Res. Lett. 21(17), 1831 (1994). https://doi.org/10.1103/physrevlett.86.528270
S.J. Zhao, S.Y. Fu, W.J. Sun, G.K. Parks, X.Z. Zhou, Z.Y. Pu, D. Zhao, T. Wu, F.B. Yu, Q.G. Zong, Oxygen ion reflection at earthward propagating dipolarization fronts in the magnetotail. J. Geophys. Res. Sp. Phys. 123(8), 6277 (2018). https://doi.org/10.1103/physrevlett.86.528271
Vladimir Putin (Путин Владимир Владимирович) is mentioned as president, prime minister, and leader of the party Edro; he is one of the top administrators (dictators) of Russia.
Opposite opinions
There are many opposing opinions about Putin. Putin is respected in criminal groups, including edros and nashists, and is declared the "leader of the nation".
Many authors indicate that Putin is a KGB agent who took supreme power in Russia at the beginning of the 21st century; many negative observations (degradation of science to perpetual motion machines like gravitsapa, terrorist acts, decrease of population) are attributed to the corrupted ministers and administrators promoted by Putin. Putin is believed to be a plagiarist; his Ph.D. thesis is reported to be fake [1][2].
It is suggested that Putin be included in the Magnitsky list, together with most of the Russian Duma, who approved such a law; in December 2012, within one day, thousands signed such a petition at http://wh.gov/QgKz ( https://petitions.whitehouse.gov/petition/identify-russian-law-makers-jeopardizing-lives-russian-orphans-responsible-under-magnitsky-act/q9LbTGRB ).
Analogy with Muammar Gaddafi
In connection with the revolt against Muammar Gaddafi, and especially after his death, suggestions are expressed that Putin may meet a similar end to his career [3][4]. Such a concern is based on the terrorist activity of the KGB and edro ("Edinaya Rossia"), total corruption at the highest level of the administration, suppression of critics, and falsifications at the presidential election. The intent of the tandem to keep supreme power forever is discussed on the Internet; sometimes, revolution is considered the only way to change the ruling party.
Election fraud
58.99 + 32.96 + 23.74 + 19.41 + 9.32 + 1.46 + 0.59 = 146.47, by [5]
At the parliamentary 2011.12.04.Election, many citizens expressed their disagreement with the way the functionaries of the pro-Putin party edro organized the process; they qualified it as a fraud even before the announcement of the official results, because of the non-registration of some parties, bribery of the electorate, pro-edro agitation and propaganda during the day of the election, etcetera. As a protest, in order to make their ballots invalid, some citizens wrote offensive opinions about Putin on the ballots.
At the announcement of the preliminary results of the 2011.12.04.Election, it happened that in some regions the total sum of votes greatly exceeded 100% of the total number of electors. Up to 46% of the votes seem to be simply fake. For these reasons, the results are qualified as falsification. Such a fraud caused the world-wide protest action on 2011.12.10 with many anti-Putin slogans. After such a wide protest, on December 15, Putin declared that he would consider leaving the government if he did not feel the support of the people [6]. In order to show that he has no more support of the people, protest actions were organized in Moscow on December 17 and December 24 [7][8][9].
Putin tried to participate as a candidate in the presidential election of 2012.03.04; it is indicated that such a participation is illegal [10], contradicting the Law about the Presidential Election. An application for the denouncement of Putin's registration was submitted to the Superior Court of Russia. See the special article Переуступка (In Russian) on this subject. It is indicated that 6 violations of the Law were committed at the registration of Putin as a candidate for the 2012.03.04.Election of the president of Russia. These violations are analysed at the site Antiputin [11].
The election fraud in favor of Putin and his party is confirmed by the statistical analysis of the officially reported results [12]. It is shown that the hypothesis of an honest election in Russia can be rejected at a high significance level. The assumption of huge election fraud is necessary for the description of the experimental data. A model of the election fraud was suggested by Peter Klimek, Yuri Yegorov, Rudolf Hanel, and Stefan Thurner in 2012 [13]. Three parameters are introduced: the incremental fraud $f_{\mathrm i}$, the extreme fraud $f_{\mathrm e}$, and the deliberate wrong-counting $\alpha$. With these parameters, good agreement of the model with the observed statistical properties is reported. Data for elections in Austria, Canada, Czech Republic, Finland, Spain, Switzerland, Poland, France, Romania, Russia, and Uganda have been analyzed. For the two last countries, the modeling of the election fraud is essential for the description of the announced results; the model shows good agreement with the observed data. Ruben Enikolopov, Vasily Korovkin, Maria Petrova, Konstantin Sonin, and Alexei Zakharov have evaluated a lower estimate of the amount of fake votes at the 2011.12.04.Election: the actual result is at least 11 percentage points lower than the official count (36% instead of 47%) [14]. Up to the year 2012, no alternative description (without the election fraud) of the observed peculiarities in the distribution of votes has been suggested. In this sense, the massive election fraud in Uganda and Russia can be considered a scientific fact.
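For readers who want to experiment with the idea, the following toy Python sketch imitates ballot-stuffing in the spirit of the Klimek et al. model; the parameter names follow the paper ($f_{\mathrm i}$, $f_{\mathrm e}$), but the simplified mechanics, the function name, and all numerical values are illustrative assumptions, not the authors' actual code or fitted values.

import numpy as np

def simulate_polling_stations(n_stations=10000, turnout_mean=0.55, share_mean=0.40,
                              f_i=0.2, f_e=0.02, seed=1):
    # Toy ballot-stuffing model in the spirit of Klimek et al. (2012).
    # f_i: fraction of stations with incremental fraud (part of the unused
    #      ballots is added to the ruling party's count).
    # f_e: fraction of stations with extreme fraud (turnout and the winner's
    #      share are both pushed towards ~100%).
    rng = np.random.default_rng(seed)
    turnout = np.clip(rng.normal(turnout_mean, 0.1, n_stations), 0.05, 1.0)
    share = np.clip(rng.normal(share_mean, 0.1, n_stations), 0.0, 1.0)
    u = rng.random(n_stations)
    incremental = u < f_i
    extreme = u > 1.0 - f_e
    # Incremental fraud: stuff a random fraction of the unused ballots.
    stuffed = rng.random(n_stations) * (1.0 - turnout)
    old_turnout = turnout.copy()
    turnout[incremental] += stuffed[incremental]
    share[incremental] = (share[incremental] * old_turnout[incremental]
                          + stuffed[incremental]) / turnout[incremental]
    # Extreme fraud: push both turnout and the winner's share towards 100%.
    turnout[extreme] = rng.uniform(0.90, 1.0, extreme.sum())
    share[extreme] = rng.uniform(0.90, 1.0, extreme.sum())
    return turnout, share

Plotting a two-dimensional histogram of the returned (turnout, share) pairs gives a picture qualitatively similar to the "fraud fingerprint" discussed in [13]: a main cluster smeared towards higher turnout, plus a spur running towards the 100%/100% corner.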
Terroristic activity
And now, slowly put your idiotic bill down to the ground [15]
"No! For him, we already have reserved the place at the cemetery!" 2012.12.28 by [16]
Much evidence of the terrorist activity of Putin and his commandos is available on the Internet [17][18][19][20][21]. Some episodes are collected in the articles Nevsky Express bombing (2009) and Katyn-2.
Due to his terrorist activity, Putin is considered a threat to global peace [22].
According to the statement by V.Churov [23], V.Putin is not in governmental service; he may be arrested for terrorism as soon as he leaves the Russian Federation and interrogated as soon as he appears in the USA. The statements by the Russian military and navy officers [24][25] indicate that nobody will fight for him or for his party edro: Putin has no support among the Russian officers, although he has support in the KGB, the Russian Elective Committee, and his party edro. Namely these three groups of people are represented by him. This aspect of Putin's position for the years 2011–2012 is mentioned in the article Putin versus Clinton.
Putin is considered the initiator and leader of the Herod Law; he signed that bill on 2012.12.28 [26]. Putin declared that bill to be a response to the Magnitsky bill, in order to affect the administration of the USA and the European countries. The bill prohibits Russian orphans from being adopted into American families, even if the children already know the families willing to adopt them. Such a law makes the Russian orphans hostages. All the senators who voted for that bill and Putin who signed that bill are qualified as terrorists, according to the definition of that term.
Pedophilia and homosexuality
Putin advertises pedophilia and homosexuality 2011.05.16 [27]
D.Medvedev and V.Putin with spouses at church, 2013.05.05.[28]
Putin promotes pedophilia and homosexuality. In particular, on 2013.05.05, at the commercial center XXC, Putin participated in the show, while Sergei Sobianin played the role of his spouse [28]. In his verbal statements, Putin denies his homosexual preferences, and other gays criticize him [29].
Russia against Putin
In 2011–2012, especially after the fraud at the 2011.12.04.Election, many Russians declared that Putin's politics of terror, corruption, and bribery is not acceptable for Russia and that Putin should leave [30].
Due to the total corruption in Russia [31], and in particular in the election committees, the dismissal of Putin by election is not easy: at each stage of the election procedure, some ten percent in favor of Putin or his party edro are added, so the "official" results of the elections do not represent the will of the electors.
The petition against Putin is being distributed [32]. As of the beginning of 2012, hundreds have signed that petition. The arrest of Putin should be simplified by the discovery that he is not in governmental service; so, in principle, he could be arrested in a way similar to that in which Victor But was arrested. The text of the petition, in its state as of 2011 March 21, is copied from http://putinapodsud.org/en and translated from the Russian below.
Petition
APPLICATION FOR AN INVESTIGATION AND THE INITIATION OF A CRIMINAL CASE UNDER ART. 6(a,b,c,d,e), ART. 7(a,b,e,f,g,h,i,k), AND ART. 8(a-k) OF THE ROME STATUTE. International Criminal Court, Office of the Prosecutor, Communications, Post Office Box 19519, 2500 CM The Hague, The Netherlands
To the Heads of State
To the Heads of Government of the States
that are members of the Security Council
of the United Nations
Application for an investigation and the initiation of a criminal case under Art. 6(a,b,c,d,e), Art. 7(a,b,e,f,g,h,i,k), and Art. 8(a-k) of the Rome Statute
We, the undersigned, citizens of the Russian Federation, public, political, international, and Russian organizations, and citizens of other States, address this court with the present application and the demand to investigate criminal offenses within the framework of the norms of international law and the principles of morality, ethics, and humanity.
We hereby request an investigation of the criminal offenses committed by a citizen of the Russian Federation, Vladimir Vladimirovich Putin, born on 7 October 1952, a native of the city of Leningrad, former President of the Russian Federation and currently Chairman of the Government of the Russian Federation, concerning the commission of the crimes of genocide, crimes against humanity, and war crimes listed below.
172 signatures 52 comments 2648 reads
About the petition
As of 2012, the site http://putinapodsud.org/en does not provide an English version of the petition; so, it may be considered a draft rather than an official application.
Boycott
After the election frauds (the 2011.12.04.Election and the 2012.03.04.Election), and especially after the "Bloody Sunday" of 2012.05.06 (when many protesters were beaten and arrested by the police), Russians consider Putin a betrayer, robber, criminal, killer, and terrorist. For his inauguration on 2012.05.07, the center of Moscow was emptied; Putin went to the Kremlin through the dead, empty city [33].
Inquisitor
The scenario of the Pussy Riot trial is believed to have been approved at the level of Kirill Gundiaev and Vladimir Putin. The tight collaboration of the state administration with that of the church in the punishment of protesters is qualified as inquisition [34].
Corruption
The Daily Mail reports that Putin has bought a villa in Spain for £15m [35].
Humor about Vladimir Putin
http://www.youtube.com/watch?v=Tll6yMNSNNE Medvedev und Putin. rudemocracy on May 13, 2008.
http://www.youtube.com/watch?v=qDt9QzgfGGs German reportage about 2011.12.04.Election
http://imgur.com/o65SR : "I hope, nobody will recognize me with such a wig" – 2011 December 9.
http://www.youtube.com/watch?v=mdBF56ahgLw ZhekaRabkorov on Dec 31, 2011. Предвыборная гонка выборов президента России 2012 года
Keywords
What 10 Years of Putin Have Brought, Katyn-2, Putin in archeology, Putin_must_go, Mohammar Gaddafi
↑ http://connection.ebscohost.com/c/articles/25170324/pitt-prof-plagiarized-by-putin Pitt prof plagiarized by Putin? ABSTRACT. The article reports that Russian President Vladimir Vladimirovich Putin allegedly plagiarized portions of his 1997 economics doctoral thesis from two University of Pittsburgh professors' management textbook. Among these professors are professor emeritus of industrial engineering David I. Cleland and professor of business administration William R. King. The textbook, from which the researchers believe some of the thesis appears to have been lifted, is entitled "Strategic Planning and Policy."
↑ http://www.washingtontimes.com/news/2006/mar/24/20060324-104106-9971r/?page=all The Washington Times. Researchers peg Putin as a plagiarist over thesis. Friday, March 24, 2006. Vladimir Putin — KGB spy, politician, Russian Federation president, 2006 host of the Group of Eight international summit — can add a new line to his resume: plagiarist.
↑ http://www.youtube.com/watch?v=ArPKRnSf7-M MrHyperGate on Apr 20, 2011. Каддафи обречен на гибель. Кто следующий? (In Russian)
↑ http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2011/10/21/bloomberg_articlesLTEXEQ0YHQ0X.DTL Henry Meyer. 'Dictator' Putin May Be Nervous After Qaddafi Death, McCain Says. Friday, October 21, 2011... "I think dictators all over the world, including Bashar al-Assad, maybe even Mr. Putin, maybe some Chinese, maybe all of them, may be a little bit more nervous," McCain said in an interview with the British Broadcasting Corp. late yesterday. "It's the spring, not just the Arab spring."
↑ http://www.kommersant.ru/doc/1831646 Победа единовбросов. Журнал "Коммерсантъ Власть", №49 (953), 12.12.2011. (in Russian)
↑ http://svpressa.ru/politic/news/51003/ Путин готов уйти из власти, если не будет чувствовать поддержку народа. 15 декабря 2011 года 17:18. (In Russian)
↑ http://www.partbilet.ru/publications/partiya_yabloko_soglasovala_miting_na_bolotnoy_ploschadi_17_dekabrya_10053.html Партия «Яблоко» согласовала митинг на Болотной площади 17 декабря. 14.12.2011, 21:29. Столичные власти разрешили партии «Яблоко» провести митинг против фальсификации на выборах на Болотной площади в субботу, 17 декабря, с 13:00 до 15:00 мск с количеством участников до 10 тысяч человек. Об этом сообщил РИА «Новости» представитель столичного департамента региональной безопасности.
↑ http://www.dp.ru/a/2011/12/13/JAbloko_soglasovalo_miti/ "Яблоко" согласовало митинг на Болотной площади. 13 декабря 2011, 20:39. Партия "Яблоко" согласовала с мэрией Москвы проведение митинга за честные выборы на Болотной площади в субботу, 17 декабря, сообщает пресс–служба партии.
↑ http://www.bbc.co.uk/russian/russia/2011/12/111214_elex_protest_rally_allowed.shtml Мэрия Москвы не дала провести массовый митинг 24 декабря против обмана на выборах у стен Кремля, но согласилась на проспект академика Сахарова... В группе в социальной сети Facebook к вечеру среды о намерении пойти на митинг 24 декабря, задуманный как продолжение многотысячного митинга 10 декабря на Болотной площади, заявили более 19 тысяч человек, в сети "Вконтакте" - 21 тысяча..
↑ http://www.novayagazeta.ru/politics/50362.html Владимир Пастухов. О переуступке прав на должность президента. Точку в вопросе о третьем сроке Владимира Путина должен поставить не митинг, а Верховный суд. Заявление об отмене решения ЦИК. 06.01.2012. (In Russian)
↑ http://antiputin.com/ – Сайт, созданный в январе 2012 года для обсуждения нарушений Закона, допущенных Владимиром Путиным при подаче заявки на участие в качестве кандидата в президенты РФ на президентских выборах 2012 года, и, таким образом, преступления Центральной Избирательной Комиссии (ЦИК) под руководством Чурова, в обход Закона зарегистрировавшей Владимира Путина в качестве кандидата в президенты. (In Russian)
↑ http://www.gazeta.ru/science/2011/12/10_a_3922390.shtml СЕРГЕЙ ШПИЛЬКИН. Статистика исследовала выборы. 10.12.11 14:57.
↑ http://www.pnas.org/content/109/41/16469.abstract Peter Klimek, Yuri Yegorov, Rudolf Hanel, and Stefan Thurner. Statistical detection of systematic election irregularities. Proceeding of the Natinal Academy of Sciences of the USA, vol. 109 no. 41, 16469–16473 (2012)
↑ http://www.pnas.org/content/early/2012/12/19/1206770110.full.pdf Ruben Enikolopov, Vasily Korovkin, Maria Petrova, Konstantin Sonin and Alexei Zakharov. Field experiment estimate of electoral fraud in Russian parliamentary elections. PNAS Early Edition. (2012)
↑ http://imgur.com/w8xP4 А теперь медленно опусти свой дурацкий закон на землю. (2012, December)
↑ http://rimona.livejournal.com/1770989.html Традиционные в этом блоге рождественские дети. 2013-01-07 08:29:00.
↑ http://www.youtube.com/watch?v=hLQSCnTT5-c "ПОКУШЕНИЕ НА РОССИЮ" (Полная Версия). Thesovietstory666 on Feb 11, 2011 (KGB organized bombing of homes in Russia, in Russian).
↑ http://smolensk-2010.pl/2010-05-04-english-in-the-murder-of-kaczynskis-handwriting-is-seen-putin.html George Gordin. In the murder of Kaczynski's handwriting is seen Putin. 2010 May 4.
↑ http://www.americanthinker.com/blog/2011/08/russia_obama_and_america_the_parasite.html Vladimir Socor. GRU Responsible For Bomb Incident At US Embassy In Tbilisi. Eurasia Daily Monitor, Volume: 8, Issue: 146. July 29, 2011 02:18 PM.
↑ http://www.telegraph.co.uk/news/worldnews/europe/russia/8802732/Leaked-document-reveals-plans-to-eliminate-Russias-enemies-overseas.html Duncan Gardham. Russia 'gave agents license to kill' enemies of the state. 10:23PM BST 02 Oct 2011.
↑ http://www.newsland.ru/News/Detail/id/406074/ Давид Авнуков. Судьба летчика-перебежчика. 1 сентября 2009. 21:45. ...Расследование завершилось в нынешнем году - в автомобиле было искусственное повреждение, приведшее к потере управления... Что-то подобное спустя годы произошло с ещё одним 'перелётчиком', который перегнал в Турцию 'Су-29'. Единственной причиной гибели обоих пилотов могла быть только месть кагэбистов.(In Russian)
↑ http://www.telegraph.co.uk/news/worldnews/us-election/8974912/Mitt-Romney-Vladimir-Putin-a-threat-to-global-peace.html Mitt Romney: Vladimir Putin 'a threat to global peace'. 11:00AM GMT 23 Dec 2011.Mr Romney said that Mr Putin, who is seemingly set for a return to the office of Russian President in 2012, had "returned to some of the more heated rhetoric of the past", adding: "I think he endangers the stability and peacefulness of the globe."
↑ http://www.bbc.co.uk/russian/russia/2012/01/120105_churov_interview_echo.shtml 6 января 2012 г., 22:28 GMT. "Владимир Владимирович Путин как председатель правительства РФ не находится на государственной или муниципальной службе" (In Russian)
↑ http://voenmor.ru/home/m-article/56--l-r- Голосовать против «Единой России»! (2011 December; in Russian)
↑ http://www.youtube.com/watch?v=Q1poYMR1f9A&feature=player_detailpage cheloveksvobodniy on Jan 6, 2012. 10 февраля 2011 года в городе Москве по решению Общероссийского офицерского собрания состоялся военный трибунал по рассмотрению разрушительной деятельности Путина В.В. С обвинительной речью на заседании трибунала выступил депутат Государственной Думы, Заслуженный юрист РФ Илюхин В.И. ..
↑ http://top.rbc.ru/politics/28/12/2012/838968.shtml В.Путин все-таки подписал скандальный антисиротский закон. 2012.12.28. Президент России Владимир Путин подписал закон, являющийся ответом на "акт Магнитского", принятый в США в начале декабря. Об этом сообщает пресс-служба Кремля..
↑ http://www.youtube.com/watch?v=NJBwwR901vc Владимир Путин целует мальчика в живот. May 16, 2011.
↑ 28.0 28.1 http://rimona.livejournal.com/1883590.html Пасхальная композиция. Дмитрий Медведев с супругой Светланой, президент РФ Владимир Путин и мэр Москвы Сергей Собянин в Храме Христа Спасителя на пасхальном богослужении 5 мая 2013 г.. 2013-05-06 06:05:00.
↑ http://www.vice.com/en_uk/read/londons-gays-protested-against-putin-this-weekend Simon Childs, Matthew Francey. LONDON'S GAYS PROTESTED AGAINST PUTIN THIS WEEKEND. 2013, August 12
↑ http://www.youtube.com/watch?v=CcY3ReP8R3g Radio5nizza on Dec 27, 2011. Быков, Троицкий, Ахеджакова, Навальный, Яшин, Акунин, Парфенов, Лазарева. Митинг на Чистых Прудах, пл. Болотной, пл. Сахарова. music samples: Yes "Owner of the Lonely Heart", "La Marseillaise". photos: fraticelli, RIA-новости, А. Матюхин. Downloads @ sashavalenti.livejournal.com
↑ http://kremlin.ru/transcripts/1566 Д.Медведев. Вступительное слово на заседании Совета по противодействию коррупции. 30 сентября 2008 года, 16:25 Москва, Кремль. Д.Медведев: Коррупция в нашей стране приобрела не просто масштабный характер, она стала привычным, обыденным явлением, которое характеризует саму жизнь в нашем обществе...(in Russian)
↑ http://putinapodsud.org/en/about_us Igor Puzanov, Igor Kovrigin, Natalia Pelevine, Roman Golovkin, Denis Volvach. "Sign the petition!". (2012) .. Russia had entered another era of dictatorship, that of Vladimir Putin. .. we decided to start collecting signatures in support of the petition "Prosecute Vladimir Putin in the International Criminal Court for genocide, crimes against humanity and war crimes". We ask you to sign this petition...
↑ http://ru-opposition.livejournal.com/7284299.html makhk. Радостные толпы москвичей приветствуют нового честновыбранного президента! Май 7, 2012, 01:03 pm. (In Russian)
↑ http://arthuride.wordpress.com/2012/08/17/pussy-riot-putin-dictator-in-russia-kirill-and-corruption-in-church/ Pussy Riot, Putin Dictator in Russia, Kirill and Corruption in Church. AUGUST 17, 2012 · 6:17 PM. Today's Stalin is Vladimir Putin. Using the Inquisition of the Russian Orthodox Church, he had Pussy Riot arrested.
↑ http://www.dailymail.co.uk/news/article-2219754/What-WILL-talk-Vladimir-Putin-buys-15m-Marbella-mansion-Rod-Stewart-neighbour.html What WILL they talk about? Vladimir Putin buys £15m Marbella mansion, with Rod Stewart as a neighbour. 19:22 GMT, 18 October 2012,
http://wyborcza.pl/1,86117,10215528,Mandat_dla_Putina.html Wacław Radziwinowicz, Moskwa. Mandat dla Putina. 2011-09-01. ..Putin regularnie daje publiczny pokaz lekceważenia przepisów drogowych...
http://www.washingtonpost.com/world/europe/dmitry-medvedev-asks-putin-to-run-for-president-of-russia/2011/09/24/gIQAXGwpsK_story.html?wpisrc=al_national Will Englund, Kathy Lally, Ivan Sekretarev. Dmitry Medvedev asks Vladimir Putin to run for president of Russia. 2011 September 24.
http://online.wsj.com/article/SB10001424052970204422404576592810581701474.html Vladimir the Eternal. SEPTEMBER 26, 2011. Russia faces the prospect of a quarter-century of Putinism... Leonid Brezhnev died in the job at 75...
http://nasha-canada.livejournal.com/1066382.html Борис Юдин, "На воцарение", 2011.09.26.
http://www.youtube.com/watch?v=Ggp4aSGprMs DEVILPUTIN on Sep 26, 2011. Cтрана в говне, но народ всё равно за Путина.
http://russiawatchers.ru/daily/putin-for-president-2012-why-he-may-risk-his-legacy/ Joera Mulders. "He Is Back! Why Putin may risk his legacy." September 24, 2011
http://www.guardian.co.uk/world/2011/sep/25/vladimir-putin-comeback-russia-west?CMP=twt_gu Luke Harding. Vladimir Putin's comeback spells gloom for Russia and the west. Sunday 25 September 2011 18.10 BST. .. With no political mechanism for removing Putin from power, Navalny said, another Russian revolution was inevitable. At some point, he said, frustrations would boil over. "Maybe in five months, maybe in two years, maybe in seven years," he said. Asked what would spark it, he suggested: "The Caucasus."// Many observers have plausibly argued that Putin is tired of being leader. So why did he come back? The Kremlin, of course, is more prestigious that then prime minister's office, and gives Putin an international platform. More than this, though, it allows Putin to protect his own alleged secret assets and those of his team, US diplomats believe. And it allows him to avoid potential law enforcement prosecution – inevitable, once he steps down from power.
http://wyborcza.pl/1,86117,10350579,Wladimir_Putin_jak_Iwan_Grozny.html Wacław Radziwinowicz. Putin jak Iwan Groźny. 2011-09-25.
http://wyborcza.pl/1,75477,10360461,Kapitan_Putin_na_mostku_Titanica.html Wacław Radziwinowicz. Kapitan Putin na mostku Titanica. 2011-09-26 20:40.
http://www.youtube.com/watch?v=6TJaKnB1hB0 KanalPIK on Oct 22, 2011 Муаммар Каддафи убит. Бесславный и трагический конец очередного диктатора. Что ожидает Ливию, не превратится ли она в оплот Алькаиды.
http://www.rf-agency.ru/eng/stat_en.htm Russia: Statistics, Facts, Comments & Predictions. (2011 and updates)
http://www.ft.com/cms/s/0/02c361b8-f430-11e0-bdea-00144feab49a.html#axzz1d6NQaF59 Philip Stephens. Putin's Russia: frozen in decline. October 13, 2011.
http://www.ft.com/intl/cms/s/0/69d1db86-1aa6-11e1-ae14-00144feabdc0.html#axzz1fMNDMLFL Catherine Belton. Analysis: A realm fit for a tsar. November 30, 2011 8:24 pm
http://www.foreignaffairs.com/articles/60263/marshall-i-goldman/putin-and-the-oligarchs Marshall I. Goldman. November/December 2004. Putin and the Oligarchs. In mid-September, Russian President Vladimir Putin announced plans for a radical overhaul of his country's political system, with the goal of centralizing power in the Kremlin.
http://www.onenewspage.com/n/Europe/74myxvug8/Tens-of-thousands-rally-in-Russia-vote-protest.htm Tens of Thousands Protest Alleged Vote Fraud in Russia. Saturday, 10 December 2011, Some estimate at least 60,000 gathered to protest election fraud by Vladimir Putin's party in the biggest demonstration since the fall of the USSR.
http://www.guardian.co.uk/world/2011/dec/10/russia-protests-election-vladimir-putin Miriam Elder. Russians come out in force to protest against alleged electoral fraud. Saturday 10 December 2011 17.22 GMT. Up to 50,000 people braved the cold and snow on Saturday to turn out for the largest ever protest against the rule of prime minister Vladimir Putin.
http://www.guardian.co.uk/commentisfree/2011/dec/26/vladimir-putin-world-falling-apart Masha Gessen. Vladimir Putin's world is falling apart. Monday 26 December 2011 16.15 GMT. The Russian media has lost its fear of Putin's authoritarian regime. History tells us the end must be nigh.
https://www.youtube.com/watch?v=lJILjbIoc98 heruvimruscom on Jan 19, 2012. Putin, Russia and the West .
http://www.videoua.net/movie/13008/ Putin shows his Monika Levinski (2012 ?)
http://irsolo.ru/23-fevralya-uchastniki-vojn-nagradili-putina-prezreniem/ 23 Февраля Участники Войн Наградили Путина Презрением. 2012/02/24 (8:37).
http://www.lefigaro.fr/international/2012/02/13/01003-20120213ARTFIG00629-les-folles-promesses-du-candidat-vladimir-poutine.php Pierre Avril. Les folles promesses du candidat Vladimir Poutine. 13/02/2012 à 19:48.
http://ireport.cnn.com/docs/DOC-772362?ref=feeds%2Flatest Katerin. Russian Lawyers, Journalists File Against Putin in The Hague. 2012.04.06.
Q-learning-enabled channel access in next-generation dense wireless networks for IoT-based eHealth systems
Rashid Ali, Yazdan Ahmad Qadri, Yousaf Bin Zikria, Tariq Umer, Byung-Seo Kim, and Sung Won Kim
EURASIP Journal on Wireless Communications and Networking 2019, 2019:178
Received: 3 April 2019
One of the key applications for the Internet of Things (IoT) is the eHealth service that targets sustaining patient health information in digital environments, such as the Internet cloud with the help of advanced communication technologies. In eHealth systems, wireless networks, such as wireless local area networks (WLAN), wireless body sensor networks (WBSN), and wireless medical sensor networks (WMSNs), are prominent technologies for early diagnosis and effective cures. The next generation of these wireless networks for IoT-based eHealth services is expected to confront densely deployed sensor environments and radically new applications. To satisfy the diverse requirements of such dense IoT-based eHealth systems, WLANs will have to face the challenge of assisting medium access control (MAC) layer channel access in intelligent adaptive learning and decision-making. Machine learning (ML) offers services as a promising machine intelligence tool for wireless-enabled IoT devices. It is anticipated that upcoming IoT-based eHealth systems will independently access the most desired channel resources with the assistance of sophisticated wireless channel condition inference. Therefore, in this study, we briefly review the fundamental models of ML and discuss their employment in the persuasive applications of IoT-based systems. Furthermore, we propose Q-learning (QL) that is one of the reinforcement learning (RL) paradigms as the future ML paradigm for MAC layer channel access in next-generation dense WLANs for IoT-based eHealth systems. Our goal is to contribute to refining the motivation, problem formulation, and methodology of powerful ML algorithms for MAC layer channel access in the framework of future dense WLANs. This paper also presents a case study of next-generation WLAN IEEE 802.11ax that utilizes the QL algorithm for intelligent MAC layer channel access. The proposed QL-based algorithm optimizes the performance of WLAN, especially for densely deployed devices environment.
eHealth systems
Next-generation dense WLANs
MAC layer channel access
1 Introduction
Internet of Things (IoT) technology connects physical objects with the help of sensors and actuators by utilizing the existing infrastructure of communication networks, specifically with the help of unlicensed wireless networks [1]. IoT technology therefore builds its strength on existing network infrastructure and communication technologies. Sensors and actuators play a vital role in connecting the physical world to the digital world [2, 3]. The applications of IoT technology, such as smart cities, smart industries, smart metering, smart grids, and smart healthcare systems (IoT-based eHealth), are continuously increasing. It is expected that by the end of 2020, the number of wireless-enabled devices will increase to 36.5 billion, and 70% of those will be sensor devices [1].
One of the key applications of the IoT is the eHealth service, which targets sustaining patient health information in digital environments, such as the Internet cloud, with the help of advanced communication technologies. The World Health Organization (WHO) conducted a survey in 2013 and highlighted that in the upcoming decades the global health workforce would face a shortage reaching 12.9 million [4]. The main reasons for the decline are decreased interest among young people in pursuing this profession, aging of the current workforce, and the growing risk of non-infectious diseases such as cancer and heart stroke [4]. However, nowadays, health-related information can be easily monitored and tracked with the help of smart sensors and devices. This IoT-based eHealth enables people to allow emergency services/hospitals, doctors, and relatives to access their health-related data through different applications for immediate and efficient treatment. Handheld devices, such as smartphones and fitness bands, can act as on-body coordinators for personalized health monitoring because they are equipped with a variety of sensors, such as heart rate sensors, blood glucose and pressure sensors, temperature sensors, humidity sensors, accelerometers, magnetometers, and gyroscopes (Fig. 1) [5]. Several built-in applications in such handheld smartphones, such as S-Health, keep track of daily body fitness. However, there are always concerns regarding data privacy and security, reliability, and trustworthiness in the extensive usage of wearable smart devices [5].
Role of communication technologies in IoT-based eHealth system
One of the key issues in IoT-based eHealth systems is the requirement of appropriate communication technologies for efficient information sharing [6, 7]. Particularly, reliable connectivity is essential for real-time health-related information sharing. Wireless communication technologies are flexible and cost-effective for IoT-based information sharing. As shown in Fig. 1, a combination of both short-range wireless communication technologies, such as Bluetooth Low Energy (BLE), ZigBee, and IEEE 802.11 wireless local area network (WLAN), and long-range wireless communication technologies, such as Sub-1 Giga, LoRaWAN, and 4G/5G/LTE cellular systems, are typically considered [8, 9]. Both academic and industrial communities have recognized the significant attention given to future WLANs (IEEE 802.11) for IoT-based eHealth systems. One of their motivating services is the promisingly high throughput to support extensively advanced technologies even in densely deployed devices environment [10, 11]. However, unlicensed WLAN would face huge challenges in the future to access the shared channel resources, especially for highly dense IoT device deployments. The use of small cells and information-centric sensor networks in forthcoming IoT-based system may help to reduce the performance degradation issues [3, 12]. The most popular wireless channel resource utilization technique utilized by the WLAN medium access control (MAC) protocol is known as carrier sense multiple access with collision avoidance (CSMA/CA). To achieve maximum channel resource utilization through fair channel access in the WLANs with the ever-increasing density of contending IoT devices, the CSMA/CA scheme is very important as a part of IoT-based systems. CSMA/CA uses a binary exponential backoff (BEB) as its typical and traditional channel contention mechanism [11]. In BEB, a backoff value for contention is generated randomly from a specified contention window (CW). The CW size is exponentially increased for each unsuccessful transmission and reset to its initial size once transmitted successfully. For a network with a heavy load, resetting CW to its minimum size after successful transmission will result in more collisions and poor network performance. Similarly, for fewer contending devices, the blind exponential increase of CW for collision avoidance causes an unnecessary long delay. Besides, this blind increase/decrease of the backoff CW is more inefficient in highly dense networks proposed for IoT-based systems. Thus, the current CSMA/CA mechanism does not allow wireless networks to achieve high efficiency in highly dense environments.
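To make the BEB behavior concrete, the following minimal Python sketch reproduces the contention-window doubling and resetting logic described above; the class and constant names are our own, and the window sizes are typical IEEE 802.11 values used purely for illustration.

import random

CW_MIN = 16    # initial contention window (backoff drawn from 0..CW-1)
CW_MAX = 1024  # maximum contention window

class BEBStation:
    # Conventional binary exponential backoff used by CSMA/CA.
    def __init__(self):
        self.cw = CW_MIN

    def draw_backoff(self):
        # The backoff counter is drawn uniformly at random from the current window.
        return random.randint(0, self.cw - 1)

    def on_collision(self):
        # Blind exponential increase after an unsuccessful transmission.
        self.cw = min(2 * self.cw, CW_MAX)

    def on_success(self):
        # Blind reset to the minimum window after a successful transmission;
        # in dense deployments this triggers renewed collisions, as noted above.
        self.cw = CW_MIN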
Future dense WLANs are anticipated to infer the diverse and interesting features of both the devices' environments and their behavior to spontaneously optimize the reliability and efficiency of communication. Machine learning (ML), which is one of the prevailing machine intelligence tools, establishes an auspicious paradigm for optimization of the performance of WLANs [13]. As illustrated in Fig. 2, we can imagine an intelligent IoT device that is capable of accessing channel resources with the aid of ML. Therefore, an intelligent device would observe and learn the performance of a specific action with the objective of preserving a specific performance metric. Further, based on this learning, the intelligent device aims to reliably improve its performance while executing future actions by exploiting previous experience. ML algorithms are typically categorized into supervised [14] or unsupervised [15] learning algorithms. The supervised and unsupervised algorithms specify whether there are categorized samples in the available data (usually known as training data). Recently, another class of ML, known as reinforcement learning (RL), has emerged. It is encouraged by behavioral psychology [16, 17]. RL is concerned with a certain form of reward for a learner (such as an intelligent IoT device) that is associated with its environment (such as IoT-based eHealth system) through its observations and actions.
Intelligent channel access mechanism for IoT-based systems
In this study, we briefly assess the fundamental perceptions of ML and propose services in persuasive applications for IoT-based systems based on the supervised, unsupervised, and RL categories. ML can be used extensively for revealing numerous practical problems in the future dense WLANs of the IoT-based application like eHealth systems. Examples include massive multiple-input multiple-output (MIMO), device-to-device (D2D) communications, femto/small cell-based heterogeneous networks, and high contention in dense WLAN environments. Following are the contributions of this paper:
We briefly present the fundamental insights of ML in persuasive applications for IoT-based systems.
Furthermore, we propose Q-learning (QL), one of the prevailing algorithms of RL, as the future ML paradigm for channel access in contention-based dense WLANs for IoT-based systems.
The goal of this paper is to aid readers in refining the motivation, problem formulation, and approach to powerful ML algorithms for channel access in the framework of future dense WLANs, so as to tap into previously unexplored applications of IoT-based systems. Table 1 lists the acronyms used in this paper.
List of acronyms used in this paper
BL: Bayesian learning
CRNs: Cognitive radio networks
CSMA/CA: Carrier sense multiple access with collision avoidance
D2D: Device-to-device
ICA: Independent component analysis
LoRaWAN: Long-range wide area network
LTE: Long-Term Evolution
MAC: Medium access control
MDP: Markov decision process
MIMO: Multiple-input multiple-output
MIP: Mixed integer programming
PCA: Principal component analysis
POMDP: Partially observable MDP
QL: Q-learning
SVM: Support vector machine
WBSN: Wireless body sensor network
WLAN: Wireless local area network
WMSN: Wireless medical sensor network
2 Machine learning in WLANs for IoT-based systems
As aforementioned, ML is usually categorized as supervised, unsupervised, and the most recently evolved RL algorithms. In this section, we elaborate the role of these categories in wireless communication networks for IoT-based systems. Figure 3 summarizes the family architecture of ML techniques, models, and their potential applications in dense IoT-based systems.
Machine learning family architecture models and their potential applications in dense IoT systems
2.1 Supervised learning
In supervised ML, the learning agent learns from a labeled training dataset supervised by a knowledgeable external supervisor. Each labeled training example is a description of a state comprising a specification, a label, a particular action, and the class to which that particular action belongs. The objective of supervised ML is to make the system generalize its responses so that it acts intelligently in states not present in the labeled training dataset [18]. Although supervised ML is a significant type of ML, it does not allow a learner to learn the environment without the help of a supervisor and an available training dataset. Therefore, for systems that need to interact with their environment, it is often impractical to obtain a training dataset of anticipated behavior that is both precise and descriptive regarding all the states in which the device has to perform actions in the future. In an unexplored environment, wherein ML is expected to be most valuable, a device must be able to learn from its own experience of interaction with the environment [18, 19].
Examples of supervised ML algorithms are regression models [20], k-nearest neighbor (KNN) [21], support vector machine (SVM) [22], and Bayesian learning (BL) [18]. Regression analysis (RA) is a statistical method for assessing the relations among input parameters. The objective of RA is to predict the value of one or more continuously valued estimation targets, given the value of a vector of input parameters. The estimation target is a function of the independent parameters. The KNN and SVM techniques are mostly employed to categorize different objects in the system. In the KNN technique, an agent/device is categorized according to the votes of its neighboring agents. The agent is associated with the category that is most common among its k nearest neighbors. In contrast, the SVM algorithm uses non-linear mapping for object classification. First, it maps the original training dataset into a higher dimension, where it becomes separable. Then, it searches for the optimal linearly separating hyperplane that distinguishes one category of agents from another [18]. The idea of BL, on the other hand, is to estimate a posterior distribution of the target variables, given some inputs and the available training datasets. The hidden Markov model (HMM) is a simple example of a generative model that can be learned with the help of BL [19]. HMM is a tool for expressing probability distributions over sequences of observations in the system. More specifically, it is a generalization method where the unseen (hidden) variables of the system are associated with each other through a Markov decision process (MDP) [23]. These hidden variables control the particular component to be selected for each observation, while being relatively independent of each other.
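As a simple illustration of the majority-vote rule used by KNN, the short Python sketch below classifies a hypothetical link-quality sample from (RSSI, SNR) features; the feature choice, labels, and values are assumptions made only for this example.

from collections import Counter
import math

def knn_classify(query, training_data, k=3):
    # training_data: list of (feature_vector, label) pairs.
    # Classify `query` by a majority vote of its k nearest neighbors
    # (Euclidean distance), as described in the text.
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(training_data, key=lambda item: dist(query, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical example: classify link quality from (RSSI dBm, SNR dB) samples.
samples = [((-40, 30), "good"), ((-45, 25), "good"),
           ((-80, 5), "poor"), ((-85, 3), "poor")]
print(knn_classify((-50, 20), samples, k=3))  # -> "good"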
These examples of supervised ML paradigms can be used for estimating wireless radio parameters that are related to the quality of service and quality of experience requirements of a particular user/device. Similar to a massive MIMO system of hundreds of radio antennas, the available channel estimation may lead to optimal dimensional search problems, which can be easily learned using any of the abovementioned supervised learning models. The SVM functions are cooperative for data classification problems. A hierarchical SVM (H-SVM), in which each hierarchical level is comprised of a fixed number of SVM classifiers, was proposed in [23]. H-SVM is used to intelligently estimate the Gaussian channel's noise level in a MIMO system by exploiting the training data. KNN and SVM can be pragmatic in finding the optimum handover solutions in wireless networks. Similarly, the BL model can be invoked for wireless channel characteristics learning and estimation in future generation ultra-dense wireless networks. For example, Wen et al. [24] estimated both the radio channel parameters in a specific radio cell and those of the intrusive links of the neighboring radio cells using BL techniques to deal with the pilot contamination problem faced by massive MIMO systems. Another application of BL was proposed in [25], where a Bayesian inference model was proposed for considering and statistically describing a variety of methods that are proficient at learning the predominant factors for cognitive radio networks (CRNs). Their proposed mechanism covers both the MAC and the network layers of a wireless network.
2.2 Unsupervised learning
Unsupervised ML is usually about finding structure hidden in a collection of unlabeled training data. The terms supervised ML and unsupervised ML would appear to exhaustively categorize most ML-based paradigms; however, they do not. The aim of supervised ML is to learn the mapping from an input dataset to an output result where the correct values are provided by a supervisor. In unsupervised learning, by contrast, there is no external supervisor but only the available input dataset. The objective is to find regularities in the dataset. If there is structure in the dataset space, e.g., certain patterns that occur more often than others, such patterns can help determine the action to be performed in the future for any unknown input. In the statistical context, this is also known as density estimation [18].
Examples of unsupervised ML algorithms are k-means clustering [21], principal component analysis (PCA) [26], and independent component analysis (ICA) [27]. The objective of k-means clustering is to divide the observations into k clusters, where each observation is associated with the nearest cluster. It uses the center of gravity (centroid) of the cluster, which is the mean value of the observation points within that particular cluster. Each iteration of the k-means clustering algorithm assigns an agent to the particular cluster whose centroid is closest to the agent according to a similarity metric, typically the Euclidean distance. Further, the in-cluster differences are minimized by iteratively updating the cluster centroids until convergence is achieved [18]. PCA is used to transform a set of possibly correlated parameters into a set of uncorrelated parameters that are known as the principal components (PCs). The number of PCs is always less than or equal to the number of original parameters/components. The first PC has the largest possible variance, and each subsequent PC has the largest variance possible under the constraint that it is uncorrelated with the prior PCs. Basically, the PCs are orthogonal (uncorrelated) because they are the eigenvectors of the covariance matrix, which is symmetric. Unlike PCA, ICA is a statistical method applied to reveal hidden factors that underlie sets of random parameters/components within the system [18].
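The assignment/update loop of k-means described above can be captured in a few lines of Python, as in the sketch below; the example of clustering device coordinates around two access points is a hypothetical use case, and NumPy is assumed to be available.

import numpy as np

def k_means(points, k=2, n_iter=20, seed=0):
    # Plain k-means: assign each point to the nearest centroid, then
    # move each centroid to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid by Euclidean distance.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute centroids; keep the old one if a cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical use: cluster (x, y) positions of IoT devices around two APs.
xy = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
labels, centroids = k_means(xy, k=2)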
Clustering is one of the common problems in densely deployed wireless networks of IoT-based systems, especially in heterogeneous network environments with diverse cell sizes. In such cases, small cells have to be wisely grouped to avoid interference using coordinated multi-point transmission, whereas the mobile devices are grouped to follow an optimum offloading strategy. The devices are grouped in device-to-device (D2D) wireless networks to attain high energy efficiency, and the WLAN users are grouped to uphold an optimum access point (AP) association. Xia et al. [28] proposed a hybrid scenario to reduce overall wireless traffic by encouraging the exploitation of a high-capacity optical infrastructure. They formulated a mixed-integer programming (MIP) problem to jointly optimize both network gateway splitting and virtual radio channel provisioning based on typical k-means clustering. Both PCA and ICA are formulated to recover statistically independent source signals from their linear combinations using powerful statistical signal processing techniques. One of their key applications is in the area of intrusion detection in wireless networks, which depends on traffic monitoring. Besides, similar issues may also be resolved in the dense wireless communication technologies of IoT-based systems. PCA and ICA can also be invoked to classify user behavior in CRNs. In [29], the authors applied PCA and ICA in a smart grid scenario of IoT systems to improve the concurrent wireless transmissions of smart devices set up in the smart home. The statistical properties of the received signals were exploited to blindly separate them using ICA. Their proposed mechanism enhances transmission capability by avoiding radio channel estimation, and data security by excluding any wideband intrusion.
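Similarly, a minimal PCA projection can be written directly with a singular value decomposition, as in the Python fragment below; the per-device traffic features used in the example are hypothetical and serve only to illustrate the dimensionality reduction step.

import numpy as np

def pca_project(X, n_components=1):
    # Project the rows of X onto the top principal components.
    Xc = X - X.mean(axis=0)            # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]     # orthogonal directions of maximum variance
    return Xc @ components.T, components

# Hypothetical per-device traffic features: [packets/s, mean size, retry rate]
X = np.array([[100.0, 512.0, 0.01], [110.0, 530.0, 0.02],
              [400.0, 64.0, 0.30], [390.0, 70.0, 0.28]])
scores, comps = pca_project(X, n_components=1)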
2.3 Reinforcement learning
Reinforcement learning (RL) is motivated by behaviorist psychology and control theory, where an agent can achieve its objective by interacting with and learning from its surroundings. In RL, the agent does not have explicit information about whether it is close to its target objective. However, the agent can observe the environment to increase the aggregate reward in an MDP [30]. RL is an ML technique in which the agent learns about the environment, what to do, and how to map situations to actions to maximize a numerical reward signal. Mostly, the agent is not informed about which actions to perform, and it has to learn which actions will produce the maximum reward. In some interesting and challenging situations, actions may affect not only the immediate reward but also the following state, and consequently, all succeeding rewards. MDPs offer a precise framework for modeling decision-making in circumstances where the outcomes are partly random and the decision-maker only partially governs the outcomes.
Partially observable MDPs (POMDPs) [31] and QL [17] are examples of RL. A POMDP may be seen as a generalization of an MDP in which the agent cannot observe the underlying state transitions directly and therefore only has access to constrained information. The agent has to maintain a probability distribution over the possible states based on a set of observations, the observation probabilities, and the underlying MDP [32]. QL can be invoked to discover an optimal policy for selecting actions in any finite MDP, particularly when the environment is unknown [18].
POMDP paradigms provide vital tools for supporting decision-making in IoT-based systems, where the IoT devices may be considered agents and the wireless network constitutes the environment. In a POMDP problem, the technique first postulates the environment's state space and the agent's action space, and assumes the Markov property among the states. Secondly, it constructs the state transition probabilities, formulated as the probability of moving from one state to another under a specific action. The third and final step is to quantify both the agent's immediate reward and its long-term reward via Bellman's equation [17]. A carefully constructed iterative algorithm may then be used to identify the optimal action in each state. The applications of POMDPs include the network selection problems of heterogeneous networks, channel sensing, and user access in CRNs. In [32], the authors proposed a mechanism for the transmission power control problem of energy-harvesting systems, which was analyzed with the help of the POMDP model. In their investigation, the battery, channel, data transmission, and data reception states are defined as the state space, and an action by the agent corresponds to transmitting a packet at a certain transmission power. QL, usually in combination with MDP models, has also been used in applications of heterogeneous networks. Alnwaimi et al. [33] presented a heterogeneous, fully distributed, multi-objective strategy for the optimization of femtocells based on a QL model. Their model solves both the channel resource allocation and interference coordination issues in the downlink of heterogeneous femtocell networks. It acquires channel distribution awareness and identifies the availability of vacant radio channel slots for the establishment of opportunistic access, and further helps choose sub-channels from the vacant spectrum pool.
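As a concrete illustration of how Bellman's equation quantifies long-term reward once the state space, action space, and transition probabilities have been specified, the following is a minimal value-iteration sketch for a small, fully observable MDP. The two-state transition and reward tables are invented purely for illustration and are not taken from [32] or [33].

import numpy as np

# Hypothetical 2-state, 2-action MDP: P[s, a, s'] are transition probabilities
# and R[s, a] are immediate rewards; all numbers are placeholders.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
beta = 0.9           # discount factor
V = np.zeros(2)      # long-term value of each state

# Value iteration: repeatedly apply the Bellman optimality update.
for _ in range(500):
    Q = R + beta * (P @ V)      # Q[s, a] = R[s, a] + beta * sum_s' P[s, a, s'] * V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)       # greedy action in each state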
3 Q-learning-enabled channel access for dense WLANs
As described in the previous section, QL has already been extensively applied in heterogeneous wireless networks [14]. In such a case, the QL paradigm also considers a set of states in which an agent can decide on an action from a set of available actions. By performing an action in a particular state, the agent collects a reward, with the objective of maximizing its cumulative reward. The cumulative reward is represented by a Q-function and is updated in an iterative fashion after the agent performs an action and attains the subsequent reward [18]. The trade-off between exploration and exploitation is one of the challenges arising in QL but not in other types of ML techniques. To achieve considerable rewards, a QL agent must prefer actions that it has tried before and found to be effective in producing reward (exploitation). However, to learn more about the environment, an agent has to try actions that it has not selected before (exploration). The agent has to exploit what it has already experienced in order to obtain reward, but it must also explore the environment in order to make better action selections in the future. The dilemma is that neither exploration nor exploitation can be pursued exclusively without failing in the other process. The agent must try a variety of actions and progressively favor those that appear to be the best. Because it is not possible to both explore and exploit with a single action selection, we frequently refer to the "tussle" between the two.
3.1 Q-learning prototype
As mentioned above, the QL algorithm uses a form of RL to solve MDPs without possessing complete information about the environment. In addition to the agent and the environment, a QL system has four main sub-elements: a policy, a reward, a Q-value function, and, optionally, a model of the environment [17], as shown in Fig. 4.
Fig. 4 Q-learning system environment with its sub-elements
3.1.1 Policy
The learning agent's way of behaving at a particular time is defined as a policy. A policy can be a simple function or a lookup table; however, it may also involve extensive computation, such as an exploration process. A policy is fundamental for a QL agent because it alone is sufficient to determine the behavior of the agent. In general, policies may be stochastic. A policy decides which action to perform in which state [17].
3.1.2 Reward
In each iteration, the QL agent receives a particular quantity from the environment known as the reward. The main objective of a QL algorithm is to collect as much reward as possible; the agent's exclusive goal is to maximize the accumulated reward collected over the long run. The reward describes the pleasant and unpleasant events for the agent. Reward signals are the immediate and defining features of the problem faced by the agent, and the agent decides whether to change its policy based on the reward. For example, if the action currently selected by the policy is followed by a low reward, then the agent may decide to select other actions in the future [17].
3.1.3 Q-value function
Although the reward specifies what is good at one instant, a Q-value function stipulates what is good in the long run. The Q-value of a state is the accumulated amount of reward that an agent can expect to gain in the future, starting from this state [17]. For example, although a state may continually produce a low immediate reward, it may have a high Q-value because it is regularly followed by other states that produce high rewards. In a WLAN environment, rewards are analogous to a high channel collision probability (unpleased) or a low channel collision probability (pleased), whereas Q-values resemble a more sophisticated and far-sighted judgment of how pleased or unpleased the agent is in a particular state (e.g., the backoff stage). If there were no rewards, there would be no Q-values, and the only purpose of estimating the Q-value is to attain additional rewards. An agent is most concerned with the Q-value when making and assessing decisions. An agent selects optimal actions based on Q-value estimates: it seeks actions that lead to states of maximum Q-value rather than maximum immediate reward, because these actions obtain the greatest amount of reward over the long run.
3.1.4 Environment model
The environment model is an optional element of QL, which mimics the behavior of the environment to some extent. Typically, it allows inferences to be made about how the environment will behave [17]. For example, given a state and an action, the model might predict the subsequent state and the next reward. Environment models are used for planning, that is, deciding on a sequence of actions by considering possible future situations. In a WLAN system, for example, a device would like to plan its future decisions based on the given state (e.g., the backoff stage) and action, along with its rewards (e.g., channel collision probability).
3.2 Q-learning algorithm
Let S represent a finite set of conceivable states of an environment and A represent a finite set of allowable actions to be performed. At time t, a learner (IoT device) observes the current state (s) of the environment and performs an action (a), i.e., at=a∈A, based on both the observed state and its previous experience. The action at changes the environmental state from st to st+1=s′∈S; consequently, the agent receives the reward (r) at time t, rt, for the specific action at. The QL algorithm finds an optimal policy for state s that maximizes the reward over a long period of time. In the QL algorithm, a Q-value function, Q(s,a), estimates the cumulative discounted reward. An optimal Q-value, i.e., Qopt(s,a), is determined using the Q-values, and the QL algorithm finds the optimal Q-value in a greedy manner. The Q-value is updated as:
$$ Q(s,a)=(1-\alpha)\times Q(s,a) + \alpha \times \Delta Q(s,a), $$
where α is the learning rate and takes values such as 0≤α≤1. When α is minimum, i.e., zero, the agent does not learn from the environment; therefore, the Q-value is not updated. When α is maximum, i.e., 1, the agent always learns; therefore, learning occurs quickly as seen in the following equation:
$$ \Delta Q(s,a) = \left\{r(s,a) + \beta\times \text{max}_{a^{\prime}}Q\left(s^{\prime},a^{\prime}\right)\right\}-Q(s,a), $$
where β (0≤β≤1) is known as the discount factor and weighs immediate rewards more heavily than future rewards. Over a considerable period of time, Q(s,a) converges to Qopt(s,a). The simplest policy for action selection is to choose one of the actions with the maximum measured Q-value (i.e., exploitation). If there is more than one greedy action, then a choice is made among them at random. This greedy action selection method can be written as:
$$ a^{\text{opt}} = \text{argmax}_{a}Q(s,a), $$
where argmaxa denotes the action a for which the expression that follows is maximized. An agent that continuously exploits current knowledge maximizes the immediate reward. A simple alternative is to behave greedily most of the time but sometimes (e.g., with a small probability ε) select randomly from all actions with equal probability, independent of the Q-value. The method using this greedy and non-greedy action selection rule is known as the ε-greedy method [17]. An advantage of such a technique is that, as the number of iterations increases, every action is tried repeatedly, which guarantees that Q(s,a) converges to Qopt(s,a). This leads to the inference that the probability of choosing the optimal action converges to a value larger than 1−ε, i.e., to near certainty. In WLANs for dense IoT-based systems, an agent would choose greedy actions from high-value actions (exploitation) to improve throughput performance, and would perform non-greedy actions (exploration) to track the dynamics of the network environment.
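A compact sketch of this update and the ε-greedy selection rule is given below. The sizes of the state and action sets and the numeric values of α, β, and ε are placeholders chosen for illustration; the update is written in the standard incremental form Q ← Q + αΔQ, which is how Eqs. (1) and (2) are usually combined.

import random

N_STATES, N_ACTIONS = 7, 2             # placeholder sizes
alpha, beta, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def select_action(s):
    # epsilon-greedy: explore with probability epsilon, otherwise exploit (Eq. (3)).
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[s][a])

def update(s, a, r, s_next):
    # One Q-learning step based on Eqs. (1) and (2).
    delta_q = (r + beta * max(Q[s_next])) - Q[s][a]   # learning estimate, Eq. (2)
    Q[s][a] += alpha * delta_q                        # standard incremental Q-learning update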
3.3 Case study: DCF-based backoff mechanism
The QL-based channel access scheme can be used to guide densely deployed IoT devices and allocate radio resources more efficiently. When an IoT device is deployed in a new environment, usually no historical data about the scenario are available. Therefore, QL algorithms are the best choice to observe and learn the environment for optimal policy selection. As an example, we consider the case study of the DCF-based backoff mechanism of dense WLANs in IoT-based systems. In a densely deployed WLAN, channel collision is the most critical issue causing performance degradation. To tackle collision issues at the MAC layer, we propose adopting the QL algorithm. Because QL finds solutions by interacting with and learning from an environment, we propose using the QL algorithm to model the optimal contention window (CW) in a channel observation-based scaled backoff (COSB) mechanism [34] for dense wireless networks of IoT-based systems. In other words, a WLAN-enabled IoT device, referred to as a station (STA), controls the CW selection intelligently with the aid of the QL-based algorithm.
In the COSB protocol [34], STAs select a random backoff value from the initial CW (CWmin) to contend for the wireless medium after observing the channel in an idle state for a distributed inter-frame space (DIFS) period. The period after DIFS is divided into Bobs discrete observation time slots. The duration of each discrete time slot is either a constant idle slot time (σ) or a variable busy slot time (owing to a successful or collided transmission). In COSB, each STA efficiently measures the channel observation-based collision probability (pobs) as:
$$ p_{\text{obs}} = \frac{1}{B_{\text{obs}}}\sum_{k=0}^{B_{\text{obs}}-1}S_{k}, $$
where Sk=0 if the kth observation slot is observed to be idle or the transmission in it is successful, whereas Sk=1 if the kth slot is observed to be busy or the transmission has collided [34].
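A minimal sketch of this measurement is shown below; the list of observed slot states is a made-up example.

def observation_based_collision_probability(slot_states):
    # slot_states holds one S_k value per observation slot:
    # 0 = slot was idle or the transmission succeeded,
    # 1 = slot was busy or the transmission collided.
    return sum(slot_states) / len(slot_states)

# Example: 8 observation slots, 2 of which were busy/collided -> p_obs = 0.25.
p_obs = observation_based_collision_probability([0, 0, 1, 0, 0, 1, 0, 0])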
We model the backoff stages of COSB as a set of states, i.e., S={0,1,2,...,m}, in which an intelligent IoT device performs an action a from a finite set of permissible actions A={0,1}, where 0 indicates decrement and 1 indicates increment. This is because in COSB there are two possible actions: increase or decrease the CW size [34]. At time t, the STA collects reward rt in response to an action at following policy π in a particular state st, i.e., rt(st,at), with the objective of maximizing the cumulative reward Q(st,at), which is the Q-value function defined in Eqs. (1) and (2). Figure 5 depicts the proposed QL model environment with its elements in a DCF-based backoff mechanism for channel access in WLANs.
Fig. 5 QL model environment with its elements in the case study of DCF-based backoff mechanism
The selection of the optimal action following πopt is known as the greedy action \(\left(a^{\pi^{\text{opt}}}\right)\) selection policy defined in Eq. (3). The simplest policy would be to exploit in most cases; however, sometimes the STA explores according to the default policy π, independent of \(a^{\pi^{\text{opt}}}\). Exploring with probability ε and exploiting with probability 1−ε is called the ε-greedy method [17]. The ε-greedy technique guarantees the convergence of the learning estimate ΔQ(s,a) as the number of episodes (instances) increases. In a dense WLAN environment, exploitation can be used by an IoT device to improve throughput performance, and exploration can be used to track the dynamics of the WLAN environment.
In COSB [34], an STA measures pobs at every transmission attempt. We therefore use pobs to express the reward of an action taken in a specific state: the reward rt produced by action at taken in state st at time t can be described as:
$$ r_{t}(s_{t},a_{t}) = 1 - p_{\text{obs}}. $$
The above equation indicates how pleased the STA was with its action at in state st.
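Putting the pieces together, a hedged sketch of one learning step of such an agent is shown below, reusing select_action and update from the sketch in Section 3.2. The contention-window bounds and the doubling/halving rule are illustrative assumptions; the actual scaling follows COSB [34] and the iQRA design rather than this sketch.

CW_MIN, CW_MAX = 15, 1023   # typical 802.11 contention-window bounds (assumption)

def agent_step(state, cw, p_obs):
    # One per-transmission learning step: state is the current backoff stage,
    # cw the current contention window, p_obs the measured collision probability.
    reward = 1.0 - p_obs                       # high collision -> low reward
    action = select_action(state)              # 0 = decrease CW, 1 = increase CW
    if action == 0:
        new_cw = max(CW_MIN, cw // 2)
        new_state = max(0, state - 1)
    else:
        new_cw = min(CW_MAX, cw * 2)
        new_state = min(N_STATES - 1, state + 1)
    update(state, action, reward, new_state)   # Q-learning update from Section 3.2
    return new_state, new_cw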
4 Experimental results and discussion
We used the ns-3.28 simulator [35] to perform experiments on the proposed iQRA mechanism. The most important PHY layer and MAC layer simulation parameters are shown in Table 2. The results in Fig. 6a and b indicate that a small value of α and a large value of β make ΔQ (the learning estimate) converge faster. The convergence of ΔQ clearly indicates that there exist optimal values that can be learned and exploited in the future.

The throughput performance optimization of COSB using the proposed iQRA is depicted in Fig. 7a. The performance of iQRA may degrade in small networks (i.e., for fewer than 10 contending STAs, as shown in Fig. 7a) owing to low and irregular rewards. Additionally, the channel access delay of iQRA is increased compared to COSB, which is expected owing to its environment-inference characteristics; however, it remains lower than that of the conventional binary exponential backoff (BEB) [11] mechanism, as shown in Fig. 7b. Figure 7c shows that the proposed iQRA also improves the fairness of COSB. The optimized performance of COSB using iQRA clearly indicates that the proposed QL-based mechanism is effective in learning the network environment.

Additionally, iQRA is intended to intelligently adjust its learning parameters according to the dynamics of the WLAN. Therefore, we simulated a dynamic network environment by increasing the number of contenders by 5 after every 50 s until the number of STAs reached 50. Figure 8 depicts the effects of network dynamics on ΔQ. The figure shows a simulation of a 500-s period with 1500 learning instances of a tagged STA. As shown in the figure, with the network dynamics, a tagged STA observes fluctuations in its learning estimate ΔQ, indicating that it infers the changes in the network. The throughput performance of iQRA eventually reaches a steady state in a dynamic network environment, as shown in Fig. 9a.

To evaluate the performance of the proposed iQRA for moving devices in the network, we simulated a distance-based rate adaptation model. This model changes the transmission rate of the sender device according to the distance between the sender and receiver to achieve the best possible performance. An IEEE 802.11a (11 Mbps) WLAN with 10 contending STAs is simulated for the distance-based rate adaptation performance evaluation, as shown in Fig. 9b. Contending STAs are placed randomly around the access point (AP) within a distance of 25 m. A tagged STA, initially placed at a distance of 1 m, starts moving away from the AP. As the distance from the AP increases, the performance of the tagged STA degrades for all three compared algorithms (BEB, COSB, and iQRA), as shown in Fig. 9b. The throughput of the BEB algorithm approaches zero after the STA reaches a distance of 60 m and finally becomes zero after a distance of 80 m. Owing to the observation-based nature of COSB, it achieves higher throughput than BEB even beyond a 60-m distance. The proposed iQRA, however, performs best even when the distance reaches 80 m, owing to its network-inference capability.
Fig. 6 Learning estimate (ΔQ) convergence with varying a learning rate α (ε=0.5) and b discount factor β (ε=0.5)
Fig. 7 Comparison of BEB, COSB, and iQRA for a throughput (Mbps), b channel access delay (ms), and c successfully transmitted packets (fairness)
Fig. 8 Convergence of learning estimate (ΔQ) in a dynamic network environment (increasing the number of contenders after every 50 s)
Fig. 9 Throughput comparison of BEB, COSB, and iQRA in a dynamic network environment with increasing number of contenders after every 50 s and b distance-based rate adaptation network environment
Table 2 MAC layer and PHY layer simulation parameters
Frequency: 5 GHz
Channel bandwidth: 160/20 MHz
PHY data rate: 1201/11 Mbps
Payload size: 1472 bytes
Distance: 10 m
Simulation time: 100/500 s
Propagation loss model: Log distance
Mobility model: Constant position
Rate adaptation models: Constant rate/Minstrel
5 Conclusion

In this study, we investigated the benefits of ML-based intelligent dense wireless networks for IoT-enabled eHealth systems. We presented the key families of ML algorithms and discussed their application in the context of dense IoT systems, including next-generation wireless networks with massive MIMO; heterogeneous IoT networks based on small cells; smart applications, such as the smart grid and smart city; and intelligent cognitive radio. The three well-known categories of ML (supervised learning, unsupervised learning, and RL) are examined together with a consistent modeling methodology and possible future applications in dense IoT systems. Furthermore, we proposed Q-learning as a promising ML paradigm for MAC layer channel access in dense IoT systems. The proposed paradigm is applied to a case study of the DCF-based backoff mechanism in dense WLANs: we proposed an intelligent Q-learning-based resource allocation (iQRA) mechanism to optimize the performance of the existing COSB mechanism. The proposed iQRA mechanism infers unknown wireless network conditions and rapidly exploits unexpected changes to learn the dynamics of dense WLANs. The experimental results show that iQRA significantly enhances the performance of COSB in terms of throughput and fairness. The results reveal the ability of the Q-learning scheme to infer dense wireless network environments in IoT-based systems. In conclusion, ML is a promising area for self-scrutinized, intelligence-aided dense wireless network research for IoT-enabled eHealth systems.
In the future, we aim to further investigate the applications of our proposed mechanism in various IoT-based systems such as smart city, smart home, smart grid, and smart industry.
Abbreviations
BLE: Bluetooth low energy
CSMA/CA: Carrier sense multiple access with collision avoidance
D2D: Device-to-device
IoT: Internet of things
KNN: k-nearest neighbor
MIMO: Massive multiple-input multiple-output
ML: Machine learning
QL: Q-learning
RL: Reinforcement learning
SVM: Support vector machine
WBSN: Wireless body sensor network
WMSN: Wireless medical sensor networks
This work was supported by the 2019 Yeungnam University Research Grant.
The funding source is the same as described in the acknowledgements.
RA and YAQ conceived the main idea, designed the algorithm, and proposed the intelligent framework for IoT-based eHealth systems. RA, YBZ, and TU performed the implementation of the proposition in the NS3 simulator. BK and SWK contributed to the structuring, reviewing, and finalizing of the manuscript. All authors read and approved the final manuscript.
Department of Information and Communication Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea
Department of Computer Science, COMSATS University Islamabad, Wah, Pakistan
Department of Computer and Information Communication Engineering, Hongik University, Seoul, 04066, Republic of Korea
M. Pasha, W. SMShah, Framework for e-health systems in IoT-based environments. Wirel. Commun. Mob. Comput.2018(6183732), 1–11 (2018). https://doi.org/10.1155/2018/6183732.
G. T. Singh, F. Al-Turjman, Learning data delivery paths in QoI-aware information-centric sensor networks. IEEE Internet Things J.3(4), 572–580 (2016). https://doi.org/10.1109/JIOT.2015.2504487.
M. Z. Hasan, F. Al-Turjman, Evaluation of a duty-cycled asynchronous X-MAC protocol for vehicular sensor networks. EURASIP J. Wirel. Commun. Netw.2017(1), 95 (2017). https://doi.org/10.1186/s13638-017-0882-7.
World Health Organization, Global health workforce shortage to reach 12.9 million in coming decades. http://www.who.int/mediacentre/news/releases/2013/health--workforce-shortage/en/. Accessed 10 Dec 2018.
H. M Alam, M. I. Malik, T. Khan, A. Pardy, Y. L. Kuusik, A. Moullec, Survey on the roles of communication technologies in IoT-based personalized healthcare applications. IEEE Access. 6:, 36611–36631 (2018). https://doi.org/10.1109/ACCESS.2018.2853148.
M. Faheem, M. Zahid Abbas, G. Tuna, V. C. Gungor, EDHRP: Energy efficient event driven hybrid routing protocol for densely deployed wireless sensor networks. J. Netw. Comput. Appl.58:, 309–326 (2015). https://doi.org/10.1016/j.jnca.2015.08.002.
M. Faheem, V. C. Gungor, Energy efficient and QoS-aware routing protocol for wireless sensor network-based smart grid applications in the context of industry 4.0. Appl. Soft Comput.68:, 910–922 (2018). https://doi.org/10.1016/j.asoc.2017.07.045.
M. Faheem, R. A. Butt, B. Raza, M. W. Ashraf, S. Begum, Md. A. Ngadi, V. C. Gungor, in Transactions on Emerging Telecommunications Technologies. Bio-inspired routing protocol for WSN-based smart grid applications in the context of Industry 4.0, (2018). https://doi.org/10.1002/ett.3503.
M. Faheem, V. C. Gungor, Capacity and spectrum-aware communication framework for wireless sensor network-based smart grid applications. Comput. Stand. Interfaces. 53:, 48–58 (2017). https://doi.org/10.1016/j.csi.2017.03.003.
S. Demir, F. Al-Turjman, Energy scavenging methods for WBAN applications: a review. IEEE Sensors J.18(16), 6477–6488 (2018). https://doi.org/10.1109/JSEN.2018.2851187.
R. Ali, S. W. Kim, B. Kim, Y. Park, Design of MAC layer resource allocation schemes for IEEE 802.11ax: future directions. IETE Tech. Rev.35(1), 28–52 (2018). https://doi.org/10.1080/02564602.2016.1242387.
F. Al-Turjman, E. Ever, H. Zahmatkesh, Small cells in the forthcoming 5G/IoT: traffic modelling and deployment overview. IEEE Commun. Surv. Tutor.21(1), 28–65 (2019). https://doi.org/10.1109/COMST.2018.2864779.
R. Ali, N. Shahin, Y. B. Zikria, B. Kim, S. W. Kim, Deep reinforcement learning paradigm for performance optimization of channel observation-based MAC protocols in dense WLANs. IEEE Access. 7:, 3500–3511 (2019). https://doi.org/10.1109/ACCESS.2018.2886216.
C. Zhang, P. Patras, H. Haddadi, Deep learning in mobile and wireless networking: a survey. IEEE Commun. Surv. Tutor. Early Access (2019). https://doi.org/10.1109/COMST.2019.2904897.
Y. Sun, M. Peng, Y. Zhou, Y. Huang, S. Mao, Application of machine learning in wireless networks: key techniques and open issues. ArXiv e-prints (2018). https://arxiv.org/abs/1809.08707.
E. M. Joo, Theory and novel applications of machine learning. 12–16 (IntechOpen, London, 2009). https://doi.org/10.5772/56681.
R. S. Sutton, A. G. Barto, Reinforcement learning: an introduction, Second ed. (MIT Press, Cambridge, 1998). isbn:0262193981.
E. Alpaydin, Introduction to machine learning, Third ed. (MIT Press, Cambridge, 2014). isbn:978-0-262-028189.
R. Ali, N. Shahin, R. Bajracharya, B. S. Kim, S. W. Kim, A self-scrutinized backoff mechanism for IEEE 802.11ax in 5G unlicensed networks. Sustainability. 10:, 1201 (2018). https://doi.org/10.3390/su10041201.
Q. H. Abbasi, S. Liaqat, L. Ali, A. Alomainy, in 2013 First International Symposium on Future Information and Communication Technologies for Ubiquitous HealthCare (Ubi-HealthTech). An improved radio channel characterisation for ultra wideband on-body communications using regression method (Jinhua, 2013), pp. 1–4. https://doi.org/10.1109/Ubi-HealthTech.2013.6708063.
Y. Xu, T. Y. Fu, W. C. Lee, J. Winter, Processing k nearest neighbor queries in location-aware sensor networks. Signal Process.87(12), 2861–2881 (2007). https://doi.org/10.1016/j.sigpro.2007.05.013.
Z. Dong, Y. Zhao, Z. Chen, in IEEE MTT-S International Wireless Symposium (IWS), 2018. Support vector machine for channel prediction in high-speed railway communication systems (Chengdu, 2018), pp. 1–3. https://doi.org/10.1109/IEEE-IWS.2018.8400912.
V. S. Feng, S. Y. Chang, Determination of wireless networks parameters through parallel hierarchical support vector machines. IEEE Trans. Parallel Distrib. Syst.23(3), 505–12 (2012). https://doi.org/10.1109/TPDS.2011.156.
C. -K. Wen, S. Jin, K. -K. Wong, J. -C. Chen, P. Ting, Channel estimation for massive MIMO using Gaussian-mixture Bayesian learning. IEEE Trans. Wirel. Commun.14(3), 1356–68 (2015). https://doi.org/10.1109/TWC.2014.2365813.
C. -K. Yu, K. -C. Chen, S. -M. Cheng, Cognitive radio network tomography. IEEE Trans. Veh. Technol.59(4), 1980–97 (2010). https://doi.org/10.1109/TVT.2010.2044906.
M. C. Raja, M. M. A. Rabbani, in 2016 International Conference on Communication and Electronics Systems (ICCES). Combined analysis of support vector machine and principle component analysis for IDS (Coimbatore, 2016), pp. 1–5. https://doi.org/10.1109/CESYS.2016.7889868.
Z. Luo, C. Li, L. Zhu, Full-duplex cognitive radio using guided independent component analysis and cumulant criterion. IEEE Access. 7:, 27065–27074 (2019). https://doi.org/10.1109/ACCESS.2019.2901815.
M. Xia, Y. Owada, M. Inoue, H. Harai, Optical and wireless hybrid access networks: design and optimization. IEEE/OSA J. Opt. Commun. Netw.4(10), 749–59 (2012). https://doi.org/10.1364/JOCN.4.000749.
R. C. Qiu, Z. Hu, Z. Chen, N. Guo, R. Ranganathan, S. Hou, G. Zheng, Cognitive radio network for the smart grid: experimental system architecture, control algorithms, security, and micro grid testbed. IEEE Trans. Smart Grid. 2(4), 724–40 (2011). https://doi.org/10.1109/TSG.2011.2160101.
R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, H. Zhang, Intelligent 5G: when cellular networks meet artificial intelligence. IEEE Wirel. Commun.24(5), 175–183 (2017). https://doi.org/10.1109/MWC.2017.1600304WC.
Y. Li, B. Yin, H. Xi, Partially observable Markov decision processes and performance sensitivity analysis. IEEE Trans. Syst Man Cybern. Part B Cybern.38(6), 1645–1651 (2008). https://doi.org/10.1109/TSMCB.2008.927711.
A. Aprem, C. R. Murthy, N. B. Mehta, Transmit power control policies for energy harvesting sensors with retransmissions. IEEE J. Sel. Top. Signal Process.7(5), 895–906 (2013). https://doi.org/10.1109/JSTSP.2013.2258656.
G. Alnwaimi, S. Vahid, K. Moessner, Dynamic heterogeneous learning games for opportunistic access in LTE-based macro/femtocell deployments. IEEE Trans. Wirel. Commun.14(4), 2294–2308 (2015). https://doi.org/10.1109/TWC.2014.2384510.
R. Ali, N. Shahin, Y. T. Kim, B. S. Kim, S. W. Kim, Channel observation-based scaled backoff mechanism for high-efficiency WLANs. Electron. Lett.54(10), 663–665 (2018). https://doi.org/10.1049/el.2018.0617.
The network simulator-ns-3. https://www.nsnam.org/. Accessed 01 Sept 2018. | CommonCrawl |
February 2015, 9(1): 1-7. doi: 10.3934/amc.2015.9.1
Existence conditions for self-orthogonal negacyclic codes over finite fields
Liren Lin 1, , Hongwei Liu 2, and Bocong Chen 3,
Department of Physical Science and Technology, Central China Normal University, Wuhan, Hubei 430079, China
School of Mathematics and Statistics, Central China Normal University, Wuhan, Hubei 430079, China
School of Physical & Mathematical Sciences, Nanyang Technological University, Singapore 637616, Singapore
Received May 2013 Revised April 2014 Published February 2015
In this paper, we obtain necessary and sufficient conditions for the nonexistence of nonzero self-orthogonal negacyclic codes over a finite field, of length relatively prime to the characteristic of the underlying field.
Keywords: cyclotomic coset, self-orthogonal code, negacyclic code, cyclic code.
Mathematics Subject Classification: Primary: 11T71; Secondary: 94B1.
Citation: Liren Lin, Hongwei Liu, Bocong Chen. Existence conditions for self-orthogonal negacyclic codes over finite fields. Advances in Mathematics of Communications, 2015, 9 (1) : 1-7. doi: 10.3934/amc.2015.9.1
Liren Lin Hongwei Liu Bocong Chen | CommonCrawl |
Problems in Mathematics
Category: Linear Algebra
by Yu · Published 08/08/2016 · Last modified 11/17/2017
Projection to the subspace spanned by a vector
Let $T: \R^3 \to \R^3$ be the linear transformation given by orthogonal projection to the line spanned by $\begin{bmatrix}
1 \\
\end{bmatrix}$.
(a) Find a formula for $T(\mathbf{x})$ for $\mathbf{x}\in \R^3$.
(b) Find a basis for the image subspace of $T$.
(c) Find a basis for the kernel subspace of $T$.
(d) Find the $3 \times 3$ matrix for $T$ with respect to the standard basis for $\R^3$.
(e) Find a basis for the orthogonal complement of the kernel of $T$. (The orthogonal complement is the subspace of all vectors perpendicular to a given subspace, in this case, the kernel.)
(f) Find a basis for the orthogonal complement of the image of $T$.
(g) What is the rank of $T$?
(Johns Hopkins University Exam)
A Square Root Matrix of a Symmetric Matrix
Answer the following two questions with justification.
(a) Does there exist a $2 \times 2$ matrix $A$ with $A^3=O$ but $A^2 \neq O$? Here $O$ denotes the $2 \times 2$ zero matrix.
(b) Does there exist a $3 \times 3$ real matrix $B$ such that $B^2=A$ where
\[A=\begin{bmatrix}
1 & -1 & 0 \\
-1 &2 &-1 \\
0 & -1 & 1
\end{bmatrix}\,\,\,\,?\]
(Princeton University Linear Algebra Exam)
Inequality Regarding Ranks of Matrices
Let $A$ be an $n \times n$ matrix over a field $K$. Prove that
\[\rk(A^2)-\rk(A^3)\leq \rk(A)-\rk(A^2),\] where $\rk(B)$ denotes the rank of a matrix $B$.
(University of California, Berkeley, Qualifying Exam)
Characteristic Polynomials of $AB$ and $BA$ are the Same
Let $A$ and $B$ be $n \times n$ matrices.
Prove that the characteristic polynomials for the matrices $AB$ and $BA$ are the same.
Perturbation of a Singular Matrix is Nonsingular
Suppose that $A$ is an $n\times n$ singular matrix.
Prove that for sufficiently small $\epsilon>0$, the matrix $A-\epsilon I$ is nonsingular, where $I$ is the $n \times n$ identity matrix.
Simple Commutative Relation on Matrices
Let $A$ and $B$ are $n \times n$ matrices with real entries.
Assume that $A+B$ is invertible. Then show that
\[A(A+B)^{-1}B=B(A+B)^{-1}A.\]
(University of California, Berkeley Qualifying Exam)
All the Eigenvectors of a Matrix Are Eigenvectors of Another Matrix
Let $A$ and $B$ be an $n \times n$ matrices.
Suppose that all the eigenvalues of $A$ are distinct and the matrices $A$ and $B$ commute, that is $AB=BA$.
Then prove that each eigenvector of $A$ is an eigenvector of $B$.
(It could be that each eigenvector is an eigenvector for distinct eigenvalues.)
Find the Limit of a Matrix
Let \[A=\begin{bmatrix}
\frac{1}{7} & \frac{3}{7} & \frac{3}{7} \\
\frac{3}{7} &\frac{1}{7} &\frac{3}{7} \\
\frac{3}{7} & \frac{3}{7} & \frac{1}{7}
\end{bmatrix}\] be a $3 \times 3$ matrix. Find
\[\lim_{n \to \infty} A^n.\]
(Nagoya University Linear Algebra Exam)
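A quick numerical check (not a proof, and not part of the original problem) can suggest what the limit should look like; this uses NumPy.

import numpy as np

A = np.array([[1, 3, 3],
              [3, 1, 3],
              [3, 3, 1]]) / 7.0

print(np.linalg.matrix_power(A, 50))   # already very close to the limiting matrix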
Linearly Independent/Dependent Vectors Question
Let $V$ be an $n$-dimensional vector space over a field $K$.
Suppose that $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k$ are linearly independent vectors in $V$.
Are the following vectors linearly independent?
\[\mathbf{v}_1+\mathbf{v}_2, \quad \mathbf{v}_2+\mathbf{v}_3, \quad \dots, \quad \mathbf{v}_{k-1}+\mathbf{v}_k, \quad \mathbf{v}_k+\mathbf{v}_1.\]
If it is linearly dependent, give a non-trivial linear combination of these vectors summing up to the zero vector.
How to Calculate and Simplify a Matrix Polynomial
Let $T=\begin{bmatrix}
1 & 0 & 2 \\
0 &1 &1 \\
0 & 0 & 2
\end{bmatrix}$.
Calculate and simplify the expression
\[-T^3+4T^2+5T-2I,\] where $I$ is the $3\times 3$ identity matrix.
(The Ohio State University Linear Algebra Exam)
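A numerical evaluation, which is not a substitute for simplifying the expression by hand (for example via the Cayley-Hamilton theorem), might look like this:

import numpy as np

T = np.array([[1, 0, 2],
              [0, 1, 1],
              [0, 0, 2]])
I = np.eye(3)

result = -np.linalg.matrix_power(T, 3) + 4 * np.linalg.matrix_power(T, 2) + 5 * T - 2 * I
print(result)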
Trace of the Inverse Matrix of a Finite Order Matrix
Let $A$ be an $n\times n$ matrix such that $A^k=I_n$, where $k\in \N$ and $I_n$ is the $n \times n$ identity matrix.
Show that the trace of $(A^{-1})^{\trans}$ is the conjugate of the trace of $A$. That is, show that $\tr((A^{-1})^{\trans})=\overline{\tr(A)}$.
Calculate Determinants of Matrices
Calculate the determinants of the following $n\times n$ matrices.
\[A=\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 &1 \\
1 & 1 & 0 & \dots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \dots & \dots & \ddots & \vdots \\
0 & 0 & 0 &\dots & 1 & 1 & 0\\
0 & 0 & 0 &\dots & 0 & 1 & 1
\end{bmatrix}\]
The entries of $A$ are $1$ at the diagonal entries, at the entries just below the diagonal, and at the $(1, n)$-entry.
The other entries are zero.
\[B=\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 & -1 \\
-1 & 1 & 0 & \dots & 0 & 0 & 0 \\
0 & -1 & 1 & \dots & 0 & 0 & 0 \\
0 & 0 & 0 &\dots & -1 & 1 & 0\\
0 & 0 & 0 &\dots & 0 & -1 & 1
\end{bmatrix}.\]
The entries of $B$ are $1$ at the diagonal entries.
The entries just below the diagonal and the $(1,n)$-entry are $-1$.
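Before attempting a general argument (for example, cofactor expansion along the first row), one can probe the pattern numerically for small n. The construction below follows the verbal description of A and B and is only a sanity check, not a solution.

import numpy as np

def build_A(n):
    A = np.eye(n)
    A += np.diag(np.ones(n - 1), k=-1)   # ones just below the diagonal
    A[0, n - 1] = 1                      # the (1, n)-entry
    return A

def build_B(n):
    B = np.eye(n)
    B -= np.diag(np.ones(n - 1), k=-1)   # -1 just below the diagonal
    B[0, n - 1] = -1                     # the (1, n)-entry
    return B

for n in range(2, 8):
    print(n, round(np.linalg.det(build_A(n))), round(np.linalg.det(build_B(n))))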
Find a Matrix that Maps Given Vectors to Given Vectors
Suppose that a real matrix $A$ maps each of the following vectors
\[\mathbf{x}_1=\begin{bmatrix}
\end{bmatrix}, \mathbf{x}_2=\begin{bmatrix}
\end{bmatrix} \] into the vectors
\[\mathbf{y}_1=\begin{bmatrix}
\end{bmatrix}, \mathbf{y}_2=\begin{bmatrix}
-1 \\
\end{bmatrix},\] respectively.
That is, $A\mathbf{x}_i=\mathbf{y}_i$ for $i=1,2,3$.
Find the matrix $A$.
(Kyoto University Exam)
Find All Matrices Satisfying a Given Relation
Let $a$ and $b$ be two distinct positive real numbers. Define matrices
\[A:=\begin{bmatrix}
0 & a\\
a & 0
\end{bmatrix}, \,\,
B:=\begin{bmatrix}
0 & b\\
b& 0
Find all the pairs $(\lambda, X)$, where $\lambda$ is a real number and $X$ is a non-zero real matrix satisfying the relation
\[AX+XB=\lambda X. \tag{*} \]
(The University of Tokyo Linear Algebra Exam)
Symmetric Matrix and Its Eigenvalues, Eigenspaces, and Eigenspaces
Let $A$ be a $4\times 4$ real symmetric matrix. Suppose that $\mathbf{v}_1=\begin{bmatrix}
\end{bmatrix}$ is an eigenvector corresponding to the eigenvalue $1$ of $A$.
Suppose that the eigenspace for the eigenvalue $2$ is $3$-dimensional.
(a) Find an orthonormal basis for the eigenspace of the eigenvalue $2$ of $A$.
(b) Find $A\mathbf{v}$, where
\[ \mathbf{v}=\begin{bmatrix}
Calculate $A^{10}$ for a Given Matrix $A$
Find $A^{10}$, where $A=\begin{bmatrix}
3 &-4 & 0 & 0 \\
0 & 0 & 1 & 1
(Harvard University Exam)
Find a Basis of the Subspace of All Vectors that are Perpendicular to the Columns of the Matrix
Find a basis for the subspace $W$ of all vectors in $\R^4$ which are perpendicular to the columns of the matrix
11 & 12 & 13 & 14 \\
21 &22 & 23 & 24 \\
41 & 42 & 43 & 44
Given the Characteristic Polynomial of a Diagonalizable Matrix, Find the Size of the Matrix, Dimension of Eigenspace
Suppose that $A$ is a diagonalizable matrix with characteristic polynomial
\[f_A(\lambda)=\lambda^2(\lambda-3)(\lambda+2)^3(\lambda-4)^3.\]
(a) Find the size of the matrix $A$.
(b) Find the dimension of $E_4$, the eigenspace corresponding to the eigenvalue $\lambda=4$.
(c) Find the dimension of the kernel(nullspace) of $A$.
(Stanford University Linear Algebra Exam)
If the Kernel of a Matrix $A$ is Trivial, then $A^T A$ is Invertible
Let $A$ be an $m \times n$ real matrix.
Then the kernel of $A$ is defined as $\ker(A)=\{ x\in \R^n \mid Ax=0 \}$.
The kernel is also called the null space of $A$.
Suppose that $A$ is an $m \times n$ real matrix such that $\ker(A)=0$. Prove that $A^{\trans}A$ is invertible.
Diagonalizable Matrix with Eigenvalue 1, -1
Suppose that $A$ is a diagonalizable $n\times n$ matrix and has only $1$ and $-1$ as eigenvalues.
Show that $A^2=I_n$, where $I_n$ is the $n\times n$ identity matrix.
See below for a generalized problem.
Problems in Mathematics © 2021. All Rights Reserved. | CommonCrawl |
Table 4 lists the results of 27 tasks from 23 articles on the effects of d-AMP or MPH on working memory. The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes.
When I spoke with Jesse Lawler, who hosts the podcast Smart Drugs Smarts, about breakthroughs in brain health and neuroscience, he was unsurprised to hear of my disappointing experience. Many nootropics are supposed to take time to build up in the body before users begin to feel their impact. But even then, says Barry Gordon, a neurology professor at the Johns Hopkins Medical Center, positive results wouldn't necessarily constitute evidence of a pharmacological benefit.
Furthermore, there is no certain way to know whether you'll have an adverse reaction to a particular substance, even if it's natural. This risk is heightened when stacking multiple substances because substances can have synergistic effects, meaning one substance can heighten the effects of another. However, using nootropic stacks that are known to have been frequently used can reduce the chances of any negative side effects.
Oxiracetam is one of the 3 most popular -racetams; less popular than piracetam but seems to be more popular than aniracetam. Prices have come down substantially since the early 2000s, and stand at around 1.2g/$ or roughly 50 cents a dose, which was low enough to experiment with; key question, does it stack with piracetam or is it redundant for me? (Oxiracetam can't compete on price with my piracetam pile stockpile: the latter is now a sunk cost and hence free.)
Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA Nootropic is to reduce the additional activity of the nerve cells and helps calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropic for anxiety that you can find in the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors which reduce anxiety and craving levels in the absence of addictive substances.
Those who have taken them swear they do work – though not in the way you might think. Back in 2015, a review of the evidence found that their impact on intelligence is "modest". But most people don't take them to improve their mental abilities. Instead, they take them to improve their mental energy and motivation to work. (Both drugs also come with serious risks and side effects – more on those later).
Took pill #6 at 12:35 PM. Hard to be sure. I ultimately decided that it was Adderall because I didn't have as much trouble as I normally would in focusing on reading and then finishing my novel (Surface Detail) despite my family watching a movie, though I didn't notice any lack of appetite. Call this one 60-70% Adderall. I check the next evening and it was Adderall.
I posted a link to the survey on my Google+ account, and inserted the link at the top of all gwern.net pages; 51 people completed all 11 binary choices (most of them coming from North America & Europe), which seems adequate since the 11 questions are all asking the same question, and 561 responses to one question is quite a few. A few different statistical tests seem applicable: a chi-squared test whether there's a difference between all the answers, a two-sample test on the averages, and most meaningfully, summing up the responses as a single pair of numbers and doing a binomial test:
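The analysis code that originally followed is not reproduced in this excerpt; a rough Python sketch of the last option, with placeholder counts standing in for the real tallies and the conventional null of p = 0.5, would be (SciPy 1.7+):

from scipy import stats

# Placeholder tally: suppose 300 of the 561 recorded binary responses favored
# one option; the actual counts are in the original write-up.
successes, total = 300, 561
result = stats.binomtest(successes, total, p=0.5)
print(result.pvalue)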
Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years' worth or ~$10 a year or a NPV cost of $205 (\frac{10}{\ln 1.05}) versus a 20% chance of $2000 or $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine.
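Making the arithmetic explicit with the figures quoted above (a sketch; the 5% discount rate is the one implied by the ln 1.05 factor):

import math

annual_cost = 10.0                          # ~$10/year of iodine
npv_cost = annual_cost / math.log(1.05)     # perpetuity NPV, ~$205
expected_benefit = 0.20 * 2000              # 20% chance of a $2000 gain = $400
print(npv_cost, expected_benefit, expected_benefit > npv_cost)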
The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal.
On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people.
Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer as LEDs seem to be employed more these days, due to the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn't seem to have anything to do with circadian rhythms or zeitgebers. Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It's applied to injured parts; for the brain, it's typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe without any side effects or risk of injury.
These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics.
Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect.
In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow, making the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg), making the effect size larger than in the pilot experiment.
How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology.
1 PM; overall this was a pretty productive day, but I can't say it was unusually productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.
Frustrated by the lack of results, pharmaceutical companies have been shutting down their psychiatric drug research programmes. Traditional methods, such as synthesising new molecules and seeing what effect they have on symptoms, seem to have run their course. A shift of strategy is looming, towards research that focuses on genes and brain circuitry rather than chemicals. The shift will prolong the wait for new blockbuster drugs further, as the new systems are developed, and offers no guarantees of results.
My intent here is not to promote illegal drugs or promote the abuse of prescription drugs. In fact, I have identified which drugs require a prescription. If you are a servicemember and you take a drug (such as Modafinil and Adderall) without a prescription, then you will fail a urinalysis test. Thus, you will most likely be discharged from the military.
Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite. Serotonin production in the body is promoted by exposure to sunlight, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners.
The advantage of adrafinil is that it is legal & over-the-counter in the USA, so one removes the small legal risk of ordering & possessing modafinil without a prescription, and the retailers may be more reliable because they are not operating in a niche of dubious legality. Based on comments from others, the liver problem may have been overblown, and modafinil vendors post-2012 seem to have become more unstable, so I may give adrafinil (from another source than Antiaging Central) a shot when my modafinil/armodafinil run out.
CDP-Choline is also known as Citicoline or Cytidine Diphosphocholine. It has been enhanced to allow improved crossing of the blood-brain barrier. Your body converts it to Choline and Cytidine. The second then gets converted to Uridine (which crosses the blood-brain barrier). CDP-Choline is found in meats (liver), eggs (yolk), fish, and vegetables (broccoli, Brussels sprout).
With just 16 predictions, I can't simply bin the predictions and say yep, that looks good. Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule, which is pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number:
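A minimal Python sketch of that scoring rule (the original computation used different tooling; the confidences and outcomes in the example are hypothetical):

import math

def log_score(predictions):
    """predictions: iterable of (probability, was_correct) pairs.
    Returns the total log score; closer to zero is better."""
    return sum(math.log(p) if correct else math.log(1 - p)
               for p, correct in predictions)

# Hypothetical example: predictions made at 60%, 70%, and 55% confidence,
# of which the first two turned out to be correct.
print(log_score([(0.60, True), (0.70, True), (0.55, False)]))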
A "smart pill" is a drug that increases the cognitive ability of anyone taking it, whether the user is cognitively impaired or normal. The Romanian neuroscientist Corneliu Giurgea is often credited with first proposing, in the 1960s, that smart pills should be developed to increase the intelligence of the general population (see Giurgea, 1984). He is quoted as saying, "Man is not going to wait passively for millions of years before evolution offers him a better brain" (Gazzaniga, 2005, p. 71). In their best-selling book, Smart Drugs and Nutrients, Dean and Morgenthaler (1990) reviewed a large number of substances that have been used by healthy individuals with the goal of increasing cognitive ability. These include synthetic and natural products that affect neurotransmitter levels, neurogenesis, and blood flow to the brain. Although many of these substances have their adherents, none have become widely used. Caffeine and nicotine may be exceptions to this generalization, as one motivation among many for their use is cognitive enhancement (Julien, 2001).
It can easily pass through the blood-brain barrier and is known to protect the nerve tissues present in the brain. There is evidence that the acid plays an instrumental role in preventing strokes in adults by decreasing the number of free radicals in the body. It increases the production of acetylcholine, a neurotransmitter that most Alzheimer's patients are deficient in.
And yet aside from anecdotal evidence, we know very little about the use of these drugs in professional settings. The Financial Times has claimed that they are "becoming popular among city lawyers, bankers, and other professionals keen to gain a competitive advantage over colleagues." Back in 2008 the narcolepsy medication Modafinil was labeled the "entrepreneur's drug of choice" by TechCrunch. That same year, the magazine Nature asked its readers whether they use cognitive-enhancing drugs; of the 1,400 respondents, one in five responded in the affirmative.
In addition, large national surveys, including the NSDUH, have generally classified prescription stimulants with other stimulants including street drugs such as methamphetamine. For example, since 1975, the National Institute on Drug Abuse–sponsored Monitoring the Future (MTF) survey has gathered data on drug use by young people in the United States (Johnston, O'Malley, Bachman, & Schulenberg, 2009a, 2009b). Originally, MTF grouped prescription stimulants under a broader class of stimulants so that respondents were asked specifically about MPH only after they had indicated use of some drug in the category of AMPs. As rates of MPH prescriptions increased and anecdotal reports of nonmedical use grew, the 2001 version of the survey was changed to include a separate standalone question about MPH use. This resulted in more than a doubling of estimated annual use among 12th graders, from 2.4% to 5.1%. More recent data from the MTF suggests Ritalin use has declined (3.4% in 2008). However, this may still underestimate use of MPH, as the question refers specifically to Ritalin and does not include other brand names such as Concerta (an extended release formulation of MPH).
Today piracetam is a favourite with students and young professionals looking for a way to boost their performance, though decades after Giurgea's discovery, there still isn't much evidence that it can improve the mental abilities of healthy people. It's a prescription drug in the UK, though it's not approved for medical use by the US Food and Drug Administration and can't be sold as a dietary supplement either.
"Where can you draw the line between Red Bull, six cups of coffee and a prescription drug that keeps you more alert," says Michael Schrage of the MIT Center for Digital Business, who has studied the phenomenon. "You can't draw the line meaningfully - some organizations have cultures where it is expected that employees go the extra mile to finish an all-nighter. "
"As a neuro-optometrist who cares for many brain-injured patients experiencing visual challenges that negatively impact the progress of many of their other therapies, Cavin's book is a god-send! The very basic concept of good nutrition among all the conflicting advertisements and various "new" food plans and diets can be enough to put anyone into a brain fog much less a brain injured survivor! Cavin's book is straightforward and written from not only personal experience but the validation of so many well-respected contemporary health care researchers and practitioners! I will certainly be recommending this book as a "Survival/Recovery 101" resource for all my patients including those without brain injuries because we all need optimum health and well-being and it starts with proper nourishment! Kudos to Cavin Balaster!"
(If I am not deficient, then supplementation ought to have no effect.) The previous material on modern trends suggests a prior >25%, and higher than that if I were female. However, I was raised on a low-salt diet because my father has high blood pressure, and while I like seafood, I doubt I eat it more often than weekly. I suspect I am somewhat iodine-deficient, although I don't believe as confidently as I did that I had a vitamin D deficiency. Let's call this one 75%.
This continued up to 1 AM, at which point I decided not to take a second armodafinil (why spend a second pill to gain what would likely be an unproductive set of 8 hours?) and finish up the experiment with some n-backing. My 5 rounds: 60/38/62/44/50. This was surprising. Compare those scores with scores from several previous days: 39/42/44/40/20/28/36. I had estimated before the n-backing that my scores would be in the low-end of my usual performance (20-30%) since I had not slept for the past 41 hours, and instead, the lowest score was 38%. If one did not know the context, one might think I had discovered a good nootropic! Interesting evidence that armodafinil preserves at least one kind of mental performance.
Our 2nd choice for a Brain and Memory supplement is Clari-T by Life Seasons. We were pleased to see that their formula included 3 of the 5 necessary ingredients Huperzine A, Phosphatidylserine and Bacopin. In addition, we liked that their product came in a vegetable capsule. The product contains silica and rice bran, though, which we are not sure is necessary.
DNB-wise, eyeballing my stats file seems to indicate a small increase: when I compare peak D4B scores, I see mostly 50s and a few 60s before piracetam, and after starting piracetam, a few 70s mixed into the 50s and 60s. Natural increase from training? Dunno - I've been stuck on D4B since June, so 5 or 10% in a week or 3 seems a little suspicious. A graph of the score series:
"How to Feed a Brain is an important book. It's the book I've been looking for since sustaining multiple concussions in the fall of 2013. I've dabbled in and out of gluten, dairy, and (processed) sugar free diets the past few years, but I have never eaten enough nutritious foods. This book has a simple-to-follow guide on daily consumption of produce, meat, and water.
Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops.
A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes."
I can only talk from experience here, but I can remember being a teenager and just being a straight-up dick to any recruiters that came to my school. And I came from a military family. I'd ask douche-bag questions, I'd crack jokes like so... don't ask, don't tell only applies to everyone BUT the Navy, right? I never once considered enlisting because some 18 or 19 year old dickhead on hometown recruiting was hanging out in the cafeteria or hallways of my high school.

Weirdly enough, however, what kinda put me over the line and made me enlist was the location of the recruiters' office. In the city I was living in at the time, the Armed Forces Recruitment Center was next door to an all-ages punk venue that I went to nearly every weekend. I spent many Saturday nights standing in a parking lot after a show, all bruised and bloody from a pit, smoking a joint, and staring at the windows of the closed recruiters' office. Propaganda posters of guys in full-battle-rattle obscured by a freshly scrawled Anarchy symbol or a collage of band stickers over the glass.

I think trying to recruit kids from school has a child-molester-vibe to it. At least it did for me. But the recruiters defiantly being right next to a bunch of drunk and high punks, that somehow made it seem more like a truly bad-ass option. Like, sure, I'll totally join. After all, these guys don't run from the horde of skins and pins that descend every weekend like everyone else, they must be bad-ass.
So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory, can't see a difference, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines of when I look at the photos, I can see a difference!. I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.)
One symptom of Alzheimer's disease is a reduced brain level of the neurotransmitter called acetylcholine. It is thought that an effective treatment for Alzheimer's disease might be to increase brain levels of acetylcholine. Another possible treatment would be to slow the death of neurons that contain acetylcholine. Two drugs, Tacrine and Donepezil, are both inhibitors of the enzyme (acetylcholinesterase) that breaks down acetylcholine. These drugs are approved in the US for treatment of Alzheimer's disease.
Kratom (Erowid, Reddit) is a tree leaf from Southeast Asia; it's addictive to some degree (like caffeine and nicotine), and so it is regulated/banned in Thailand, Malaysia, Myanmar, and Bhutan among others - but not the USA. (One might think that kratom's common use there indicates how very addictive it must be, except it literally grows on trees so it can't be too hard to get.) Kratom is not particularly well-studied (and what has been studied is not necessarily relevant - I'm not addicted to any opiates!), and it suffers the usual herbal problem of being an endlessly variable food product and not a specific chemical with the fun risks of perhaps being poisonous, but in my reading it doesn't seem to be particularly dangerous or have serious side-effects.
Studies show that B vitamin supplements can protect the brain from cognitive decline. These natural nootropics can also reduce the likelihood of developing neurodegenerative diseases. The prevention of Alzheimer's and even dementia are among the many benefits. Due to their effects on mental health, B vitamins make an excellent addition to any smart drug stack.
QUALITY : They use pure and high quality Ingredients and are the ONLY ones we found that had a comprehensive formula including the top 5 most proven ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine, Bacopin and N-Acetyl L-Tyrosine. Thrive Natural's Super Brain Renew is fortified with just the right ingredients to help your body fully digest the active ingredients. No other brand came close to their comprehensive formula of 39 proven ingredients. The "essential 5" are the most important elements to help improve your memory, concentration, focus, energy, and mental clarity. But, what also makes them stand out above all the rest was that they have several supporting vitamins and nutrients to help optimize brain and memory function. A critical factor for us is that this company does not use fillers, binders or synthetics in their product. We love the fact that their capsules are vegetarian, which is a nice bonus for health conscious consumers.
But while some studies have found short-term benefits, Doraiswamy says there is no evidence that what are commonly known as smart drugs — of any type — improve thinking or productivity over the long run. "There's a sizable demand, but the hype around efficacy far exceeds available evidence," notes Doraiswamy, adding that, for healthy young people such as Silicon Valley go-getters, "it's a zero-sum game. That's because when you up one circuit in the brain, you're probably impairing another system."
Popular among computer programmers, oxiracetam, another racetam, has been shown to be effective in recovery from neurological trauma and improvement to long-term memory. It is believed to be effective in improving attention span, memory, learning capacity, focus, sensory perception, and logical thinking. It also acts as a stimulant, increasing mental energy, alertness, and motivation.
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1210/100μg, $470/500μg, $750/1000μg, $1000/1000μg, $1030/1000μg, or $235/20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000.
Two variants of the Towers of London task were used by Elliott et al. (1997) to study the effects of MPH on planning. The object of this task is for subjects to move game pieces from one position to another while adhering to rules that constrain the ways in which they can move the pieces, thus requiring subjects to plan their moves several steps ahead. Neither version of the task revealed overall effects of the drug, but one version showed impairment for the group that received the drug first, and the other version showed enhancement for the group that received the placebo first.
But where will it all stop? Ambitious parents may start giving mind-enhancing pills to their children. People go to all sorts of lengths to gain an educational advantage, and eventually success might be dependent on access to these mind-improving drugs. No major studies have been conducted on the long-term effects. Some neuroscientists fear that, over time, these memory-enhancing pills may cause people to store too much detail, cluttering the brain. Read more about smart drugs here.
The effect? 3 or 4 weeks later, I'm not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly.
With something like creatine, you'd know if it helps you pump out another rep at the gym on a sustainable basis. With nootropics, you can easily trick yourself into believing they help your mindset. The ideal is to do a trial on yourself. Take identical looking nootropic pills and placebo pills for a couple weeks each, then see what the difference is. With only a third party knowing the difference, of course.
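A sketch of what the self-blinded trial described above could look like once the logs are in — the arm labels, trial length, and effect size below are all invented for illustration, and the actual blinding is done with physical look-alike pills rather than software:

import random
from scipy import stats

random.seed(0)
days = 28
assignment = ["active"] * (days // 2) + ["placebo"] * (days // 2)
random.shuffle(assignment)   # ideally a third party holds this key until unblinding

# Simulated daily self-ratings (1-10) standing in for real logs.
scores = {"active": [], "placebo": []}
for arm in assignment:
    base = 6.5 + (0.3 if arm == "active" else 0.0)   # assumed small true effect
    scores[arm].append(base + random.gauss(0, 1))

t, p = stats.ttest_ind(scores["active"], scores["placebo"])
print(f"t = {t:.2f}, p = {p:.3f}")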
Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector.
Many of the most popular "smart drugs" (Piracetam, Sulbutiamine, Ginkgo Biloba, etc.) have been around for decades or even millennia but are still known only in medical circles or among esoteric practitioners of herbal medicine. Why is this? If these compounds have proven cognitive benefits, why are they not ubiquitous? How come every grade-school child gets fluoride for the development of their teeth (despite fluoride's being a known neurotoxin) but not, say, Piracetam for the development of their brains? Why does the nightly news slant stories to appeal more to a fear-of-change than the promise of a richer cognitive future?
The word "nootropic" was coined in 1972 by a Romanian scientist, Corneliu Giurgea, who combined the Greek words for "mind" and "bending." Caffeine and nicotine can be considered mild nootropics, while prescription Ritalin, Adderall and Provigil (modafinil, a drug for treating narcolepsy) lie at the far end of the spectrum when prescribed off-label as cognitive enhancers. Even microdosing of LSD is increasingly viewed as a means to greater productivity.
And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy.
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.
Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I'm not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At $X a year, that's a net present value of sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10] = $540.5.
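The same sum written out in Python, for anyone who doesn't read the inline Haskell:

npv = sum(70 / (1 + 0.05) ** n for n in range(1, 11))
print(round(npv, 1))   # 540.5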
There are hundreds of cognitive enhancing pills (so called smart pills) on the market that simply do NOT work! With each of them claiming they are the best, how can you find the brain enhancing supplements that are both safe and effective? Our top brain enhancing pills have been picked by sorting and ranking the top brain enhancing products ourselves. Our ratings are based on the following criteria.
Table 5 lists the results of 16 tasks from 13 articles on the effects of d-AMP or MPH on cognitive control. One of the simplest tasks used to study cognitive control is the go/no-go task. Subjects are instructed to press a button as quickly as possible for one stimulus or class of stimuli (go) and to refrain from pressing for another stimulus or class of stimuli (no go). De Wit et al. (2002) used a version of this task to measure the effects of d-AMP on subjects' ability to inhibit a response and found enhancement in the form of decreased false alarms (responses to no-go stimuli) and increased speed of correct go responses. They also found that subjects who made the most errors on placebo experienced the greatest enhancement from the drug.
Another empirical question concerns the effects of stimulants on motivation, which can affect academic and occupational performance independent of cognitive ability. Volkow and colleagues (2004) showed that MPH increased participants' self-rated interest in a relatively dull mathematical task. This is consistent with student reports that prescription stimulants make schoolwork seem more interesting (e.g., DeSantis et al., 2008). To what extent are the motivational effects of prescription stimulants distinct from their cognitive effects, and to what extent might they be more robust to differences in individual traits, dosage, and task? Are the motivational effects of stimulants responsible for their usefulness when taken by normal healthy individuals for cognitive enhancement?
One of the other suggested benefits is for boosting serotonin levels; low levels of serotonin are implicated in a number of issues like depression. I'm not yet sure whether tryptophan has helped with motivation or happiness. Trial and error has taught me that it's a bad idea to take tryptophan in the morning or afternoon, however, even smaller quantities like 0.25g. Like melatonin, the dose-response curve is a U: ~1g is great and induces multiple vivid dreams for me, but ~1.5g leads to an awful night and a headache the next day that was worse, if anything, than melatonin. (One morning I woke up with traces of at least 7 dreams, although I managed to write down only 2. No lucid dreams, though.)
Dosage is apparently 5-10mg a day. (Prices can be better elsewhere; selegiline is popular for treating dogs with senile dementia, where those 60x5mg will cost $2 rather than $35. One needs a veterinarian's prescription to purchase from pet-oriented online pharmacies, though.) I ordered it & modafinil from Nubrain.com at $35 for 60x5mg; Nubrain delayed and eventually canceled my order - and my enthusiasm. Between that and realizing how much of a premium I was paying for Nubrain's deprenyl, I'm tabling deprenyl along with nicotine & modafinil for now. Which is too bad, because I had even ordered 20g of PEA from Smart Powders to try out with the deprenyl. (My later attempt to order some off the Silk Road also failed when the seller canceled the order.)
When I worked on the Bulletproof Diet book, I wanted to verify that the effects I was getting from Bulletproof Coffee were not coming from modafinil, so I stopped using it and measured my cognitive performance while I was off of it. What I found was that on Bulletproof Coffee and the Bulletproof Diet, my mental performance was almost identical to my performance on modafinil. I still travel with modafinil, and I'll take it on occasion, but while living a Bulletproof lifestyle I rarely feel the need.
ICSE CLASS 10 PHYSICS PREVIOUS YEAR PAPER 2015
Maximum marks : 80
You will not be allowed to write during the first 15 minutes. This time is to be spent in reading the question paper.
The time given at the head of paper is the time allotted for writing the answers.
5. The intended marks of questions or parts of questions are given in brackets [ ].
SECTION – I (40 Marks)
When a body is placed on a tabletop, it exerts a force equal to its weight downwards on the tabletop but does not move or fall. [2]
(i) Name the force exerted by the tabletop.
(ii) What is the direction of the force?
Name one factor that affects the lateral displacement of light as it passes through a rectangular glass slab.
On reversing the direction of the current in a wire, the magnetic field produced by it gets ____ .
(i)On what factor does the position of the centre of gravity of a body depend?
(ii) What is the SI unit of the moment of force?
Name the factors affecting the turning effect of a body. [2]
Define equilibrium.
In a beam balance when the beam is balanced in a horizontal position, it is in __ equilibrium.
How is work done by a force measured when the force: [2]
is in the direction of displacement
is at an angle to the direction of displacement.
State the energy changes in the following while in use: [2]
(i) Burning of a candle.
(ii) A steam engine.
(i) A scissor is a ___ multiplier.
(ii) l kWh =___ J.
Explain the motion of a planet around the sun in a circular path [2]
Rajan exerts a force of 150 N in pulling a cart at a constant speed of 10 m/s. Calculate the power exerted. [2]
Give the expression for mechanical advantage of an inclined plane in terms of the length of an inclined plane.
Name a common device where a gear train is used.
(b)The speed of light in glass is $ 2 \times 10^{5} \mathrm{~km} / \mathrm{s} $. What is the refractive index of glass? [2]
Draw a graph between displacement and the time for a body executing free vibrations.
Where can a body execute free vibrations?
What happens to the resistivity of semi-conductor with the increase of temperature?
For a fuse, higher the current rating _ _ is the fuse wire.
Name the high energetic invisible electromagnetic waves which help in the study of the structure of crystals.
State an additional use of the waves mentioned in part (e)(i).
(a)Rishi is surprised when he sees water boiling at $ 115^{\circ} \mathrm{C} $ in a container. Give reasons as to why water can boil at the above temperature. [2]
(b) [2]
Why does a current carry, freely suspended solenoid rest along a particular direction?
State the direction in which it rests.
Find the equivalent resistance between points A and B. [2]
Give two similarities between an AC generator and a DC motor. [2]
Why is a cathode ray tube evacuated to a low pressure?
What happens if the negative potential is changed on a grid?
SECTION – II (40 Marks)
Draw a simplified diagram of a lemon crusher, indicating the positions of load and effort. [2]
(i) Name the physical quantity measured in terms of horsepower.
(ii) A nut is opened by a wrench of length 20 cm. If the least force required is 2N, find the moment of force needed to loosen the nut.
(iii) Explain briefly why the work done by a fielder when he takes a catch in a cricket match is negative.
(c)A block and tackle system has V.R. = 5. [4]
Draw a neat, labelled diagram of a system indicating the direction of its load and effort.
Rohan exerts a pull of 150 kg. What is the maximum load he can raise with this pulley system if its efficiency = 75%?
Where should an object be placed so that a real and inverted image of the same size as the object is obtained using a convex lens?
Draw a ray diagram to show the formation of the image as specified in the part (i).
Why does the Sun appear red at sunrise?
Name the subjective property of light related to its wavelength.
Jatin puts a pencil into a glass container having water and is surprised to see the pencil in a different state. [4]
What change is observed in the appearance of the pencil?
Name the phenomenon responsible for the change.
(iii)Draw a ray diagram showing how the eye sees the pencil.
State the safe limit of sound level in terms of decibel for human hearing.
Name the characteristic of sound in relation to its waveform.
A person standing between two vertical cliffs and 480 m from the nearest cliff shouts. He hears the first echo after 3s and the second echo 2s later. Calculate: [3]
The speed of sound.
The distance of the other cliff from the person.
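A possible worked solution (not part of the original question paper): the first echo travels to the nearest cliff and back, and the second echo arrives 2 s later, i.e. 5 s after the shout.
$ v = \frac{2 \times 480\ \mathrm{m}}{3\ \mathrm{s}} = 320\ \mathrm{m/s}, \quad 2d = v \times (3+2)\ \mathrm{s} = 1600\ \mathrm{m} \Rightarrow d = 800\ \mathrm{m} $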
In the diagram below, A, B, C, D are four pendulums suspended from the same elastic string PQ. The length of A and C are equal to each other while the length of pendulum B is smaller than that of D. Pendulum A is set into a mode of vibrations. [5]
(i) Name the type of vibrations taking place in pendulums B and D?
(ii) What is the state of pendulum C?
(iii) State the reason for the type of vibrations in pendulum B and C.
(i) Name the device used to increase the voltage at a generating station.
(ii) At what frequency is AC supplied to residential houses?
(iii) Name the wire in a household electrical circuit to which the switch is connected.
The relationship between the potential difference and the current in a conductor is stated in the form of a law. [3]
(i) Name the law.
(ii) What does the slope of V-I graph for a conductor represent?
(iii) Name the material used for making the connecting wire.
A cell of Emf 2 V and internal resistance 1.2 Ω is connected with an ammeter of resistance 0.8 Ω and two resistors of 4.5 Ω and 9 Ω as shown in the diagram below: [4]
What would be the reading on the Ammeter?
What is the potential difference across the terminals of the cell?
(i) Name a gas caused by the Greenhouse effect.
(ii) Which property of water makes it an effective coolant?
Water in lakes and ponds do not freeze at once in cold countries. Give a reason in support of your answer.
What is the principle of Calorimetry?
(iii) Name the law on which this principle is based.
(iv) State the effect of an increase of impurities on the melting point of ice.
A refrigerator converts 100 g of water at $20^{\circ}\mathrm{C}$ to ice at $-10^{\circ}\mathrm{C}$ in 35 minutes. Calculate
the average rate of heat extraction in terms of watts.
Given: Specific heat capacity of ice $=2.1\ \mathrm{J\,g^{-1}\,{}^{\circ}C^{-1}}$
Specific heat capacity of water $=4.2\ \mathrm{J\,g^{-1}\,{}^{\circ}C^{-1}}$
Specific latent heat of fusion of ice $=336\ \mathrm{J\,g^{-1}}$ [4]
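A possible worked solution (not part of the original question paper): the heat removed is the cooling of water, the freezing, and the cooling of ice, divided by the time in seconds.
$ Q = 100 \times 4.2 \times 20 + 100 \times 336 + 100 \times 2.1 \times 10 = 44100\ \mathrm{J}, \quad P = \frac{44100\ \mathrm{J}}{35 \times 60\ \mathrm{s}} = 21\ \mathrm{W} $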
What is thermionic emission?
Name the unit in which the work function of a metal is expressed.
Complete the diagram as given above by drawing the deflection of radioactive radiations in an electric field.
State any two precautions to be taken while handling radioactive substances.
An atomic nucleus A is composed of 84 protons and 128 neutrons. [3]
The nucleus A emits an alpha particle and is transformed into nucleus B. What is the composition of nucleus B?
The nucleus B emits a beta particle and is transformed into nucleus C. What is the composition of nucleus C?
Does the composition of nucleus C change if it emits gamma radiations? | CommonCrawl |
Trends and determinants of breastfeeding within one hour in Ethiopia, further analysis of Ethiopian Demographic and Health Survey: multivariate decomposition analysis
Tilahun Yemanu Birhan1,
Wullo Sisay Seretew1 &
Muluneh Alene2
Despite substantial efforts to improve timely/early initiation of breastfeeding, avoidance of colostrum and delayed initiation of breastfeeding remain a big challenge in developing countries. Therefore, this study aimed to analyze the trend in the early breastfeeding initiation rate over time based on the Ethiopian Demographic and Health Survey (EDHS).
Secondary data analysis was conducted based on the Ethiopian Demographic and Health Surveys (EDHSs) conducted in 2005, 2011, and 2016. A total weighted sample of 9,111, 10,106, and 8,564 children in 2005, 2011, and 2016, respectively, was included for analysis. Trend analysis and a logit-based decomposition analysis technique were used to analyze the trend in early breastfeeding initiation over time and the factors contributing to the change in the early breastfeeding initiation rate. STATA 15 was employed for data management and analyses. All analyses presented in this paper were weighted for the sampling probabilities and non-response.
Among children aged less than 5 years, the early breastfeeding initiation rate increased over time from 70.5% in 2005 to 72.7% in 2016. The highest rate of improvement was seen in the second phase of the study (2011–2016), while the rate declined from 70.5 to 55.1% in the first phase (2005–2011). The decomposition analysis indicated that about half of the overall change in the early breastfeeding initiation rate was due to the difference in women's composition. In particular, an increase in health facility delivery and vaginal delivery was a significant predictor of the increasing rate of early breastfeeding initiation over the surveys.
Early initiation of breastfeeding has increased slightly over the last 10 years in Ethiopia. Half of the overall increase in the early initiation of breastfeeding was due to the change in compositional characteristics of women over the 10 years. Changes in the composition of women according to health facility delivery and vaginal delivery were the major sources of the increase in early breastfeeding initiation over time. Public interventions, including promotion of health facility delivery, are needed for further improvement of early breastfeeding initiation.
Early initiation of breastfeeding after birth is an essential intervention to reduce neonatal mortality and morbidity [1,2,3]. Despite the substantial efforts to improve timely/early initiation of breastfeeding, avoidance of colostrum and delayed initiation of breastfeeding remain a big challenge in developing countries [1, 4, 5]. The World Health Organization (WHO) recommends early initiation of breastfeeding within 1 hour, as it confers many benefits to the child that extend into adulthood and prevents neonatal mortality [6, 7]. Early initiation of breastfeeding stimulates the production of breast milk and ensures greater consumption of the highly nutritious colostrum produced during the first few days after birth [1, 8, 9]. Early initiation of breastfeeding is also an important bridge for the mother-to-child relationship and increases the length of breastfeeding [10,11,12]. In addition to these advantages, early initiation of breastfeeding induces uterine contraction after pregnancy, decreasing the risk of postpartum haemorrhage and extending the duration of postpartum infertility, allowing women to return to their pre-gestational weight, as well as reducing the risk of breast and ovarian cancer [9, 11, 13]. Globally, the prevalence of early initiation of breastfeeding ranged from 17.7 to 98.4% with an average of 57.6%, with a slightly lower prevalence among mothers with complications during pregnancy and caesarean delivery [5].
Globally, an estimated 4 million newborns die each year from preventable infectious disease, which accounts for 41% of mortality among under-five children [14]. In addition, most of these deaths occur within the first week of life, which is considered the early neonatal period [15]. The risk of newborn death during the first day of life was approximately 1% globally, and two-thirds of this burden is concentrated in ten countries, including Ethiopia [2, 16]. In recent decades, progress towards improving early initiation of breastfeeding has been sluggish, with the average global initiation rate increasing by only 14 percentage points. However, less than half of all newborns are placed to the breast within an hour of birth globally. In the least developed regions, the percentage of early initiation of breastfeeding was 53% [17], while the total pooled prevalence of early initiation of breastfeeding in Ethiopia was 61.4% [2]. Different studies on determinants of early initiation of breastfeeding showed that rural residence, educational status, mode of delivery, use of antenatal care (ANC), marital status, and place of delivery were important predictors of early initiation of breastfeeding [2, 3, 18,19,20,21]. Only single-survey data were analyzed in previous studies [11, 20,21,22,23,24,25,26]; it was therefore difficult to see the patterns and to identify potential factors that have consistently influenced early breastfeeding initiation over time in Ethiopia.
Studying the change in early initiation of breastfeeding using multivariate decomposition analysis, to identify predictors associated with the change over time, is relevant for targeting interventions to improve the early initiation of breastfeeding rate and could critically inform policies and strategies aimed at raising the early initiation of breastfeeding rate in Ethiopia. Therefore, this study aimed to evaluate trends and determinants of the early initiation of breastfeeding rate in Ethiopia.
Methods and materials
This study was based on secondary data analysis of the 2005, 2011, and 2016 Ethiopian Demographic and Health Surveys (EDHS). A stratified two-stage cluster sampling technique was used by the EDHS, with samples drawn from the 1994 Population and Housing Census (PHC) frame for EDHS 2005 and the 2007 Population Census (PHC) frame for EDHS 2011 and 2016. Stratification was achieved by separating each region into urban and rural areas. Twenty-one sampling strata were established because the area of Addis Ababa was entirely urban. In the first stage, 540 Enumeration Areas (EAs) (145 in urban areas) were selected for EDHS 2005, 624 EAs (187 in urban areas) were selected for EDHS 2011, and 645 EAs (202 in urban areas) were selected for EDHS 2016, with probability proportional to the size of the enumeration area and independent selection within each sampling stratum. In the second stage, a complete household listing operation was carried out in all selected EAs prior to the start of the fieldwork, and an average of 28 households were selected systematically. The detailed sampling strategies are fully presented in the EDHS reports [27,28,29]. The source population was all under-five children in Ethiopia, whereas the study population was all under-five children in the selected enumeration areas.
Outcome variable
The outcome variable was early initiation of breastfeeding (EIBF) practice among mothers with children younger than 24 months of age. Early initiation of breastfeeding was defined as initiation of breastfeeding within 1 h of birth. For the logistic regression analysis, initiation of breastfeeding within 1 h of birth was coded as '1', while initiation after 1 h of birth was coded as '0'.
Independent variables
Socio-demographic and economic variables (residence, region, maternal age, marital status, religion, maternal education, paternal education, wealth index, maternal occupation, maternal working status), pregnancy and pregnancy-related factors (ANC visit, parity, preceding birth interval, contraceptive use, place of delivery, birth order, mode of delivery, wanted pregnancy), and behavioural factors (smoking, media exposure) were included in this study.
Data collection procedure
The study was based on EDHS data accessed from the DHS program official database (www.measuredhs.com) after permission was granted through an online request explaining the objective of our study. The raw data were collected from women of childbearing age in all parts of the country using a structured and pre-tested questionnaire. We used the Kids Record (KR) data set and extracted the outcome and independent variables.
Data management and analysis
Before any statistical analysis, the data were weighted using the sampling weight, primary sampling unit, and strata to restore the representativeness of the survey and to inform STATA to take account of the sampling design when calculating standard errors, in order to obtain reliable parameter estimates. Since the response variable has a binary outcome, a binary logistic regression model was employed. The complex survey design was accounted for using the primary sampling unit, strata, and women's individual weight (V005). Data from 2005, 2011, and 2016 were appended together after extracting the relevant variables for trend and decomposition analysis. Cross tabulation and summary statistics were performed to describe the study population using STATA 14 software.
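The weighting step described above was carried out in STATA; the snippet below is only a rough Python/statsmodels illustration of a weighted logistic fit (the file and column names are hypothetical), and it does not reproduce the full design-based standard errors that account for clustering by primary sampling unit and strata:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical appended KR recode with columns: eibf (0/1), place_of_delivery,
# mode_of_delivery, and v005 (women's individual sampling weight).
df = pd.read_csv("edhs_kr_appended.csv")
df["wt"] = df["v005"] / 1_000_000   # DHS individual weights are scaled by 1,000,000

model = smf.glm("eibf ~ C(place_of_delivery) + C(mode_of_delivery)",
                data=df,
                family=sm.families.Binomial(),
                freq_weights=df["wt"]).fit()
print(model.summary())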
Trend and decomposition analysis
The trend was assessed using descriptive statistics stratified by selected background characteristics of respondents, separately for the periods 2005–2011, 2011–2016, and 2005–2016. A multivariate decomposition analysis of the shift in the early initiation of breastfeeding rate was employed to identify factors contributing to the difference in the percentage of early initiation of breastfeeding over the study period. This approach is used for many purposes in economics, demography, medicine, and other specialties. This research focused on how the early initiation of breastfeeding rate responds to variation in women's characteristics and how these variables shape the difference across surveys conducted at different times. The study was a regression analysis of the difference in the early initiation of breastfeeding rate between EDHS 2005 and 2016, and the decomposition analysis aimed to determine the sources of the discrepancy in the percentage of EIBF over the last 10 years. Both the difference in composition (endowment) of the population and the difference in the effects of characteristics (coefficients) between the surveys are needed to identify the factors contributing to the increase/decrease in the EIBF rate over time. The multivariate decomposition analysis for the nonlinear response model uses the output from the logistic regression model, since the outcome is binary, to parcel out the observed difference in the EIBF rate between the surveys into components. The difference in the rate of EIBF between the surveys can be attributed to the compositional difference in the population (difference in characteristics or endowments) and the difference in the effects of explanatory variables (difference in coefficients) between the surveys.
A logit-based decomposition analysis technique was used to identify factors contributing to the change in the EIBF rate over the last 10 years. The variation in EIBF over time can be due to the compositional disparity between surveys and to differences in the effects of the chosen explanatory variables. Hence, the observed variation in EIBF between surveys is additively decomposed into a characteristics (or endowments) component and a coefficients (or effects of characteristics) component. For logistic regression, the logit or log-odds of EIBF is taken as:
$$ \mathrm{Logit}(A)-\mathrm{Logit}(B)=F(X_A\beta_A)-F(X_B\beta_B)=\underbrace{\left[F(X_A\beta_A)-F(X_B\beta_A)\right]}_{E}+\underbrace{\left[F(X_B\beta_A)-F(X_B\beta_B)\right]}_{C} $$
The E component refers to the part of the differential owing to differences in endowments or characteristics. The C component refers to that part of the differential attributable to differences in coefficients or effects [30].
The equation can be presented as:
$$ \mathrm{Logit}(A)-\mathrm{Logit}(B)=\left[\beta_{0A}-\beta_{0B}\right]+\sum X_{ijB}\left[\beta_{ijA}-\beta_{ijB}\right]+\sum \beta_{ijB}\left[X_{ijA}-X_{ijB}\right] $$
XijB is the proportion of the jth category of the ith determinant in the DHS 2005,
XijA is the proportion of the jth category of the ith determinant in DHS 2016,
βijB is the coefficient of the jth category of the ith determinant in DHS 2005,
βijA is the coefficient of the jth category of the ith determinant in DHS 2016,
β0B is the intercept in the regression equation fitted to DHS 2005, and
β0A is the intercept in the regression equation fitted to DHS 2016
The recently developed multivariate decomposition for the non-linear model was used for the decomposition analysis of early initiation of breastfeeding using the mvdcmp STATA command [30]. In this study, variables with p-value < 0.2 in the bivariable multivariate decomposition analysis were considered for the multivariable multivariate decomposition analysis. In the multivariable multivariate decomposition analysis variables with p-value < 0.05 in the endowments and coefficients were considered as a significant contributing factor for the change in early initiation of breastfeeding over time.
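For readers who want to see the mechanics, a simplified sketch of the two-component logit decomposition described above is given below. The authors used the mvdcmp command in STATA; this Python version is only illustrative — it omits the per-covariate detailed decomposition, the survey weights, and the standard errors, and assumes the design matrices already include a constant term.

import numpy as np
import statsmodels.api as sm
from scipy.special import expit

def mean_predicted(X, beta):
    """Average predicted probability F(X·beta) under the logistic link."""
    return expit(np.asarray(X) @ np.asarray(beta)).mean()

def decompose(X_2016, y_2016, X_2005, y_2005):
    """Split the change in EIBF into endowment (E) and coefficient (C) parts."""
    beta_a = sm.Logit(y_2016, X_2016).fit(disp=0).params   # 2016 coefficients
    beta_b = sm.Logit(y_2005, X_2005).fit(disp=0).params   # 2005 coefficients
    overall     = mean_predicted(X_2016, beta_a) - mean_predicted(X_2005, beta_b)
    endowment   = mean_predicted(X_2016, beta_a) - mean_predicted(X_2005, beta_a)  # E
    coefficient = mean_predicted(X_2005, beta_a) - mean_predicted(X_2005, beta_b)  # C
    return overall, endowment, coefficient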
Characteristics of the study population
Nearly three-fourths of respondents were in the age group of less than 20 years in all three surveys. The proportion of women with an ANC visit in the first trimester increased from 29% in 2005 to 37% in 2016. With regard to maternal education, 76% of women had no education in 2005, compared to 65% in 2016. In addition, the percentage of husbands/partners with primary education grew from 26% in 2005 to 36% in 2011, but decreased marginally to 33% in 2016 (Table 1).
Table 1 Percentage distribution of socio-demographic characteristics among respondents, 2005, 2011 and 2016 Ethiopian Demographic and Health Survey
Respondents' media exposure increased from 37% in 2005 to 68% in 2016, and the proportion of women without an ANC visit fell dramatically from 72% in 2005 to 24% in 2016. The proportion of respondents with an institutional delivery increased from 5.8% in 2005 to 26% in 2016. All variables listed in the table show a change in composition when we compare the sample populations in the years 2005, 2011, and 2016.
Trends of early initiation of breastfeeding (EIBF)
The trend period was divided into three phases, 2005–2011, 2011–2016, and 2005–2016, in order to see the difference in the early initiation of breastfeeding rate over time and the potential sources of the change in the EIBF rate. The rate of EIBF over the study period (2005–2016) increased, although it declined from 2005 to 2011. The highest improvement was seen in the second phase (2011–2016), i.e. a 17.6 percentage point change in the EIBF rate, but the rate of early initiation of breastfeeding decreased from 70.55 to 55.1% in the first phase, 2005–2011 (Fig. 1). The initiation of breastfeeding within the recommended time varies across background characteristics of the study population over the study period. The trends in the breastfeeding rate within 1 hour over the study period differ across regions of Ethiopia. Initiation of breastfeeding within 1 hour improved in the Tigray region in each phase, i.e. a 5.4 percentage point change from 2005 to 2011, 8.3 percentage points from 2011 to 2016, and an overall change of 10 percentage points in 2005–2016. However, initiation of breastfeeding within 1 hour declined in the Afar region over the study period, by 48 percentage points from 2005 to 2016. Besides, there was an increase in the initiation of breastfeeding rate within 1 hour among women with secondary and higher education, an increase of 11 percentage points in the third phase of the study period (2005–2016). Also, there was an increase in initiation of breastfeeding within 1 hour among women who delivered at a health institution: in the second phase (2011–2016) it increased by 25 percentage points, and in the third phase (2005–2016) it increased by 16 percentage points (Table 2).
Fig. 1 Trends of breastfeeding within 1 hour in Ethiopia
Table 2 Trends of early initiation of Breastfeeding in Ethiopia by selected characteristics 2005, 2011, and 2016 Ethiopian Demographic and Health Survey
Decomposition analysis
Decomposition analysis of early breastfeeding initiation, 2005–2016
Overall, there was an increase in the breastfeeding rate within 1 hour in Ethiopia from 2005 to 2016. The overall decomposition result suggested that the increase in the breastfeeding rate within 1 hour is explained by both the difference in women's characteristics and the difference in effects between the surveys. About half of the increase in the early initiation of breastfeeding rate was attributed to the differences in the composition of the respondents, while the other half of the change was due to the difference in the effects of the selected explanatory variables (Table 3).
Table 3 an overall decomposition analysis of change in breastfeeding rate over time in Ethiopia
In the detailed decomposition analysis, the overall increase in the early initiation of breastfeeding rate from 2005 to 2016 was attributed to the difference in characteristics (endowments) of women between the surveys. Covariates that provided a significant contribution to the improvement in EIBF were the mode of delivery, place of delivery, birth order, and size of the child at birth. The increase in the composition of women with health facility delivery between 2005 and 2016 significantly contributed to the increase in early initiation of breastfeeding. Also, the change in the composition of women with vaginal delivery significantly contributed to the increase in the early initiation of breastfeeding rate over time. Similarly, an increased composition of women with first and 2nd–3rd birth order significantly contributed to the increase in the early initiation of breastfeeding rate over the study period (2005–2016). In addition, the increase in women with small birth size between 2005 and 2016 significantly contributed to delayed initiation of breastfeeding (Table 4).
Table 4 Decomposition analysis in early initiation of breastfeeding among under-five children in the last ten years, Ethiopian Demographic and Health Survey 2005–2016
Difference due to effects of coefficient (C)
Half of the rise in early initiation of breastfeeding was attributed to behavioural changes towards breastfeeding within 1 hour, controlling for the effect of change in compositional characteristics. Covariates with a significant coefficient contribution to the observed change in breastfeeding within 1 hour are husband/partner education, mode of delivery, and trimester of the antenatal care visit. Controlling for all compositional factors, 14% of the increase in breastfeeding within 1 hour was attributed to the behavioural effect of husband/partner primary education over the last decade. Keeping the compositional factors constant, late attendance of ANC was negatively associated with early initiation of breastfeeding over the last 10 years.
Delayed initiation of breastfeeding is a major public health issue in resource-limited settings such as Ethiopia and a potential risk factor for neonatal mortality and morbidity [17, 23, 31, 32]. The rate of breastfeeding within 1 hour in a community reflects the timing of antenatal care, residence, and delivery service utilization [8, 11, 16, 18]. This study investigated the trends and determinants of early initiation of breastfeeding among under-five children in Ethiopia, aiming to identify the factors that contributed, positively or negatively, to the change in early initiation of breastfeeding over the past 10 years. The results revealed that the early initiation of breastfeeding rate increased significantly between 2011 and 2016, even though it decreased between 2005 and 2011. From 2005 to 2011 the rate fell by 15.4 percentage points, while it rose by 17.6 percentage points between 2011 and 2016, an overall increase of 2.2 percentage points from 2005 to 2016. We note that timely initiation of breastfeeding over 2005–2016 remained lower than the WHO and UNICEF recommendations [33], even though the Ethiopian government has emphasized key messages on the timely initiation of breastfeeding since 2004 [34] and has established guidelines for feeding infants and young children.
The decomposition analysis suggests an improvement in the rate of breastfeeding within an hour over the survey period in Ethiopia. Understanding the source of this change therefore has critical public health relevance: it uncovers which factors contributed to the change in breastfeeding within an hour and which are driving progress, allowing current implementation strategies to be assessed. Differences in the composition of women over time were responsible for 50.02% of the rise in the breastfeeding rate within an hour over the entire survey period (2005–2016). The increased proportion of health facility deliveries over the study period had a major impact on the increase in breastfeeding within 1 hour, in accordance with other research performed in Tanzania [35], Bangladesh [36], South Asia [37], and Ethiopia [2, 19, 38]. This agreement confirms that priority should be given to encouraging and facilitating the use of health facility delivery services in order to make progress towards early initiation of breastfeeding. Delivering at a health facility counteracts traditional beliefs about breastfeeding practice, such as prelacteal feeding and misconceptions about colostrum, because health professionals empower the mother to resist external pressure and interference that promote delayed initiation of breastfeeding; indeed, mothers who delivered at home have been reported to be highly exposed to prelacteal feeding in Ethiopia [39,40,41]. Similarly, vaginal delivery made a significant positive contribution to the rise in breastfeeding within 1 hour, consistent with other studies conducted in Brazil [42] and Ethiopia [19, 21, 26, 43]. Mothers who deliver vaginally have immediate skin-to-skin contact with and care of the infant, and the onset of lactation is rapid, which reduces the likelihood of prelacteal feeding; by contrast, caesarean section is typically associated with reduced timely initiation of breastfeeding because the effect of anaesthesia delays the onset of lactation.
Also, the increased proportion of women with a small-sized child at birth contributed significantly to reducing the timely initiation of breastfeeding within 1 hour of delivery. Small babies may be ill and kept away from their mothers, leading to early supplementation with prelacteal feeds, and their poor sucking and breastfeeding reflexes further limit the ability to initiate breastfeeding within 1 hour of delivery; similar findings were reported in West Africa [20], Zimbabwe [24], and a prospective study in low- and middle-income countries [1]. In contrast, a large-sized child may be regarded by mothers and health care workers as healthy and fully mature, with a good reflex for timely initiation of breastfeeding. This result indicates that mothers of small-sized infants need additional care and encouragement to initiate breastfeeding.
Approximately 50% of the progress in early initiation of breastfeeding was attributed to changes in behaviour towards breastfeeding initiation within 1 hour in Ethiopia, after controlling for the effects of endowment characteristics. The factors contributing substantially to the positive and negative changes in early breastfeeding initiation were husband/partner primary education and ANC attendance in the third trimester, respectively.
The behavioural change associated with husband/partner primary education contributed significantly to the increase in early initiation of breastfeeding in Ethiopia, i.e. a 14 percentage point change. Husbands with some education may have a better understanding of child care practices, receiving information from health extension staff and daily mass media messages, and may encourage their wives to start breastfeeding early, although there is no direct supporting evidence for this explanation.
Late initiation of ANC visits, i.e. in the third trimester, contributed significantly to a reduction in early initiation of breastfeeding, by 16 percentage points. Mothers who first attend ANC in the third trimester, often after feeling unwell, may not be well aware of the importance of timely initiation of breastfeeding and of colostrum for their child, since their primary aim in attending ANC may be to receive treatment for their own health; similar findings were reported in a prospective study in low- and middle-income countries [1]. The current focused antenatal care guideline in Ethiopia recommends intensive health education about breastfeeding during antenatal care visits.
One of the strengths of this analysis is the use of data from national surveys, the 2005–2016 EDHS. The results of the study therefore have meaningful implications at the individual, community, and policy levels.
However, sample sizes were limited in some of the areas represented in the survey, and the findings should therefore be viewed with caution. Because this study is a secondary analysis of a national sample, other relevant variables, such as traditional attitudes, psycho-social influences, spousal preference for breastfeeding, and in-depth qualitative views of mothers, were not included. The research is based on cross-sectional data, so cause-and-effect relationships between the covariates and the timely initiation of breastfeeding cannot be established, and the survey responses may be subject to recall bias.
In this study, we found that the early initiation of breastfeeding rate increased marginally over the last 10 years in Ethiopia. In the multivariate decomposition analysis, half of the overall rise in early initiation of breastfeeding was attributed to changes in the compositional characteristics of women over the 10 years. Changes in the composition of women of reproductive age with respect to place of delivery and mode of delivery were the major sources of the increase in the early breastfeeding initiation rate over time, while the size of the child at birth contributed to delayed initiation of breastfeeding over the study period.
The other half of the increase in the early breastfeeding initiation rate was due to changes in women's behaviour towards early breastfeeding initiation over the last 10 years in Ethiopia. The behavioural change associated with husband/partner education had a positive influence on early initiation of breastfeeding, while late initiation of antenatal care visits was negatively associated with early initiation of breastfeeding over time. Public action is required, including encouraging women's use of health facility delivery services, to achieve further improvements in the early initiation of breastfeeding. The Ministry of Health and other stakeholders should continue to expand the coverage of early ANC visits and promote community education on the importance of early initiation of breastfeeding for newborns. Further interventions are needed among women delivering by caesarean section, one of the main barriers to timely initiation of breastfeeding over the last 10 years in Ethiopia.
The survey datasets used in this study are publicly available, contain no participant identifiers, and can be accessed online from
http://www.dhsprogram.com/data/available-datasets.cfm. Approval was sought from MEASURE DHS/ICF International and permission was granted for this use.
ANC:
Antenatal care
DHS:
Demographic and Health Survey
EDHS:
Ethiopian demographic and health survey
EIBF:
Early initiation of breastfeeding
EAs:
Enumeration areas
SNNP:
Southern nations nationalities and peoples
UNICEF:
United Nations Children's Fund
Patel A, Bucher S, Pusdekar Y, Esamai F, Krebs NF, Goudar SS, et al. Rates and determinants of early initiation of breastfeeding and exclusive breastfeeding at 42 days postnatal in six low and middle-income countries: a prospective cohort study. Reprod Health. 2015;12(S2):S10. https://doi.org/10.1186/1742-4755-12-S2-S10.
Alebel A, Dejenu G, Mullu G, Abebe N, Gualu T, Eshetie S. Timely initiation of breastfeeding and its association with birthplace in Ethiopia: a systematic review and meta-analysis. Int Breastfeed J. 2017;12(1):44. https://doi.org/10.1186/s13006-017-0133-x.
Debes AK, Kohli A, Walker N, Edmond K, Mullany LC. Time to initiation of breastfeeding and neonatal mortality and morbidity: a systematic review. BMC Public Health. 2013;13(S3):S19. https://doi.org/10.1186/1471-2458-13-S3-S19.
Nair N, Tripathy P, Prost A, Costello A, Osrin D. Improving newborn survival in low-income countries: community-based approaches and lessons from South Asia. PLoS Med. 2010;7(4):e1000246. https://doi.org/10.1371/journal.pmed.1000246.
Takahashi K, Ganchimeg T, Ota E, Vogel JP, Souza JP, Laopaiboon M, et al. Prevalence of early initiation of breastfeeding and determinants of delayed initiation of breastfeeding: secondary analysis of the WHO global Survey. Sci Rep. 2017;7(1):44868. https://doi.org/10.1038/srep44868.
Arts M, Taqi I, Bégin F. Improving the early initiation of breastfeeding: the WHO-UNICEF breastfeeding advocacy initiative. Breastfeed Med. 2017;12(6):326–7. https://doi.org/10.1089/bfm.2017.0047.
World Health Organization. Guideline: protecting, promoting and supporting breastfeeding in facilities providing maternity and newborn services. 2017.
Ndirangu M, et al. Trends and factors associated with early initiation of breastfeeding in Namibia: analysis of the demographic and health surveys 2000–2013. BMC Pregnancy Childbirth. 2018;18(1):171. https://doi.org/10.1186/s12884-018-1811-4.
Abie BM, Goshu YA. Early initiation of breastfeeding and colostrum feeding among mothers of children aged less than 24 months in Debre Tabor, Northwest Ethiopia: a cross-sectional study. BMC Res Notes. 2019;12(1):65. https://doi.org/10.1186/s13104-019-4094-6.
Kiwango F, et al. Prevalence and factors associated with timely initiation of breastfeeding in the Kilimanjaro region, northern Tanzania: a cross-sectional study. BMC Pregnancy Childbirth. 2020;20(1):1–7.
Lyellu HY, et al. Prevalence and factors associated with early initiation of breastfeeding among women in Moshi municipal, northern Tanzania. BMC Pregnancy Childbirth. 2020;20:1–10.
Khan GN, Ariff S, Khan U, Habib A, Umer M, Suhag Z, et al. Determinants of infant and young child feeding practices by mothers in two rural districts of Sindh, Pakistan: a cross-sectional survey. Int Breastfeed J. 2017;12(1):40. https://doi.org/10.1186/s13006-017-0131-z.
Nkoka O, Ntenda PAM, Kanje V, Milanzi EB, Arora A. Determinants of timely initiation of breast milk and exclusive breastfeeding in Malawi: a population-based cross-sectional study. Int Breastfeed J. 2019;14(1):37. https://doi.org/10.1186/s13006-019-0232-y.
Lawn J, Kerber K, Enweronu-Laryea C, Massee Bateman O. Newborn survival in low resource settings—are we delivering? BJOG Int J Obstet Gynaecol. 2009;116:49–59. https://doi.org/10.1111/j.1471-0528.2009.02328.x.
Dewey KG, Vitta BS. Strategies for ensuring adequate nutrient intake for infants and young children during the period of complementary feeding. Washington: Alive & Thrive; 2013. 7.
Derso T, Biks GA, Tariku A, Tebeje NB, Gizaw Z, Muchie KF, et al. Correlates of early neonatal feeding practice in Dabat HDSS site, Northwest Ethiopia. Int Breastfeed J. 2017;12(1):25. https://doi.org/10.1186/s13006-017-0116-y.
Victora CG, Bahl R, Barros AJD, França GVA, Horton S, Krasevec J, et al. Breastfeeding in the 21st century: epidemiology, mechanisms, and lifelong effect. Lancet. 2016;387(10017):475–90. https://doi.org/10.1016/S0140-6736(15)01024-7.
Adhikari M, Khanal V, Karkee R, Gavidia T. Factors associated with early initiation of breastfeeding among Nepalese mothers: further analysis of Nepal Demographic and health Survey, 2011. Int Breastfeed J. 2014;9(1):21. https://doi.org/10.1186/s13006-014-0021-6.
Belachew A. Timely initiation of breastfeeding and associated factors among mothers of infants-age 0–6 months old in Bahir Dar City, northwest, Ethiopia, 2017: a community-based cross-sectional study. Int Breastfeed J. 2019;14(1):5. https://doi.org/10.1186/s13006-018-0196-3.
Ezeh OK, et al. Factors associated with the early initiation of breastfeeding in economic Community of West African States (ECOWAS). Nutrients. 2019;11(11):2765. https://doi.org/10.3390/nu11112765.
Gebremeskel SG, Gebru TT, Gebrehiwot BG, Meles HN, Tafere BB, Gebreslassie GW, et al. Early initiation of breastfeeding and associated factors among mothers of aged less than 12 months children in the rural eastern zone, Tigray, Ethiopia: a cross-sectional study. BMC Res Notes. 2019;12(1):671. https://doi.org/10.1186/s13104-019-4718-x.
Ekubay M, Berhe A, Yisma E. Initiation of breastfeeding within one hour of birth among mothers with infants younger than or equal to 6 months of age attending public health institutions in Addis Ababa, Ethiopia. Int Breastfeed J. 2018;13(1):4. https://doi.org/10.1186/s13006-018-0146-0.
Smith ER, et al. Delayed breastfeeding initiation and infant survival: A systematic review and meta-analysis. PloS one. 2017;12(7):e0180722.
Mukora-Mutseyekwa F, et al. Predictors of early initiation of breastfeeding among Zimbabwean women: secondary analysis of ZDHS 2015. Matern Health Neonatol Perinatol. 2019;5(1):2.
Setegn T, Gerbaba M, Belachew T. Determinants of timely initiation of breastfeeding among mothers in Goba Woreda, south East Ethiopia: a cross-sectional study. BMC Public Health. 2011;11(1):217. https://doi.org/10.1186/1471-2458-11-217.
Tewabe T. Timely initiation of breastfeeding and associated factors among mothers in Motta town, east Gojjam zone, Amhara regional state, Ethiopia, 2015: a cross-sectional study. BMC Pregnancy Childbirth. 2016;16(1):314. https://doi.org/10.1186/s12884-016-1108-4.
Central Statistical Agency [Ethiopia] and ICF International. Ethiopia Demographic and Health Survey 2011. Addis Ababa, Ethiopia, and Calverton, Maryland: Central Statistical Agency and ICF International; 2011.
Central Statistical Agency [Ethiopia] and ORC Macro. Ethiopia Demographic and Health Survey 2005. Addis Ababa, Ethiopia, and Calverton, Maryland: Central Statistical Agency and ORC Macro; 2011.
Central Statistical Agency (CSA) [Ethiopia] and ICF. Ethiopia Demographic and Health Survey 2016. Addis Ababa, Ethiopia, and Rockville, Maryland: CSA and ICF; 2016.
Powers DA, Yoshioka H, Yun M-S. Mvdcmp: multivariate decomposition for nonlinear response models. Stata J. 2011;11(4):556–76. https://doi.org/10.1177/1536867X1201100404.
Karimi FZ, Sadeghi R, Maleki-Saghooni N, Khadivzadeh T. The effect of mother-infant skin to skin contact on success and duration of first breastfeeding: a systematic review and meta-analysis. Taiwan J Obstet Gynecol. 2019;58(1):1–9. https://doi.org/10.1016/j.tjog.2018.11.002.
Bhandari S, Thorne-Lyman AL, Shrestha B, Neupane S, Nonyane BAS, Manohar S, et al. Determinants of infant breastfeeding practices in Nepal: a national study. Int Breastfeed J. 2019;14(1):14. https://doi.org/10.1186/s13006-019-0208-y.
World Health Organization. Evidence for the ten steps to successful breastfeeding. Geneva: World Health Organization; 1998.
World Health Organization. Implementing the Global Strategy for Infant and Young Child Feeding: Geneva, 3-5 February 2003: meeting report. 2003.
Victor R, et al. Determinants of breastfeeding indicators among children less than 24 months of age in Tanzania: a secondary analysis of the 2010 Tanzania Demographic and health Survey. BMJ Open. 2013;3(1).
Karim F, Billah SM, Chowdhury MAK, Zaka N, Manu A, Arifeen SE, et al. Initiation of breastfeeding within one hour of birth and its determinants among normal vaginal deliveries at primary and secondary health facilities in Bangladesh: a case-observation study. PLoS One. 2018;13(8):e0202508. https://doi.org/10.1371/journal.pone.0202508.
Sharma IK, Byrne A. Early initiation of breastfeeding: a systematic literature review of factors and barriers in South Asia. Int Breastfeed J. 2016;11(1):17. https://doi.org/10.1186/s13006-016-0076-7.
Temesgen H, Negesse A, Woyraw W, Getaneh T, Yigizaw M. Prelacteal feeding and associated factors in Ethiopia: systematic review and meta-analysis. Int Breastfeed J. 2018;13(1):49. https://doi.org/10.1186/s13006-018-0193-6.
Tariku A, Biks GA, Wassie MM, Gebeyehu A, Getie AA. Factors associated with prelacteal feeding in the rural population of Northwest Ethiopia: a community cross-sectional study. Int Breastfeed J. 2016;11(1):14. https://doi.org/10.1186/s13006-016-0074-9.
Legesse M, Demena M, Mesfin F, Haile D. Prelacteal feeding practices and associated factors among mothers of children aged less than 24 months in Raya kobo district, north eastern Ethiopia: a cross-sectional study. Int Breastfeed J. 2014;9(1):189. https://doi.org/10.1186/s13006-014-0025-2.
Amele EA, et al. Prelacteal feeding practice and its associated factors among mothers of children age less than 24 months old in southern Ethiopia. Ital J Pediatr. 2019;45(1):1–8.
Vieira TO, Vieira GO, Giugliani ERJ, Mendes CMC, Martins CC, Silva LR. Determinants of breastfeeding initiation within the first hour of life in a Brazilian population: a cross-sectional study. BMC Public Health. 2010;10(1):760. https://doi.org/10.1186/1471-2458-10-760.
Liben ML, Yesuf EM. Determinants of early initiation of breastfeeding in Amibara district, Northeastern Ethiopia: a community based cross-sectional study. Int Breastfeed J. 2016;11(1):1–7.
We, the authors, acknowledge the Demographic and Health Surveys (DHS) Program, funded by the U.S. Agency for International Development (USAID), for access to this dataset.
This study did not receive any funding from any organization.
Department of Epidemiology and Biostatistics, Institute of public health, College of Medicine and Health Science, University of Gondar, Gondar, Ethiopia
Tilahun Yemanu Birhan & Wullo Sisay Seretew
Department of Public Health, College of Health Science, DebreMarkos University, Debre Markos, Ethiopia
Muluneh Alene
Tilahun Yemanu Birhan
Wullo Sisay Seretew
TYB was involved in this study from the inception to design, acquisition of data, data cleaning, data analysis and interpretation, and drafting and revising of the manuscript. WSS was involved in project administration, principal supervision, and revising the final manuscript. MA was involved in data cleaning and analysis as well as revising the whole work of the manuscript. All authors read and approved the final manuscript.
Correspondence to Tilahun Yemanu Birhan.
This study is a secondary data analysis of the EDHS, which is publicly available, approval was sought from MEASURE DHS/ICF International, and permission was granted for this use. The original DHS data were collected in conformity with international and national ethical guidelines. Ethical clearance was provided by the Ethiopian Public Health Institute (EPHI) (formerly the Ethiopian Health and Nutrition Research Institute (EHNRI) Review Board, the National Research Ethics Review Committee (NRERC) at the Ministry of Science and Technology, the Institutional Review Board of ICF International, and the United States Centers for Disease Control and Prevention (CDC). Written consent was obtained from mothers/caregivers and data were recorded anonymously at the time of data collection during the EDHS 2005–2016.
Birhan, T.Y., Seretew, W.S. & Alene, M. Trends and determinants of breastfeeding within one hour in Ethiopia, further analysis of Ethiopian Demographic and Health Survey: multivariate decomposition analysis. Ital J Pediatr 47, 77 (2021). https://doi.org/10.1186/s13052-021-01032-5
Multivariate decomposition analysis and trend